supervision (schaertl_andreas) · Commit a548fb7a
authored 4 years ago by Andreas Schärtl
parent 7e38b25d

report: review applications

Showing 3 changed files with 164 additions and 117 deletions:

- doc/report/applications-preds.tex (+2 −2)
- doc/report/applications.tex (+123 −114)
- doc/report/references.bib (+39 −1)

doc/report/applications-preds.tex (+2 −2)

@@ -12,8 +12,8 @@
      ORDER BY DESC(?count)
    \end{lstlisting}
    \caption{SPARQL query that returns a list of all \texttt{predicate}s
      used in the backing store. We include the \texttt{ulo} prefix such
      that the results are printed in a concise human readable
      format.}\label{fig:preds-query}
  \end{subfigure}
  \vspace{0.5cm}

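A query of this general shape needs only a few lines of SPARQL. The
following is a rough sketch with a placeholder prefix URI; the
authoritative version is the one shown in Figure~\ref{fig:preds-query}.

\begin{lstlisting}
# Sketch only: the ulo: prefix URI below is a placeholder, not
# necessarily the IRI used by the actual exports.
PREFIX ulo: <https://example.org/ulo#>

SELECT ?predicate (COUNT(?predicate) AS ?count)
WHERE {
  ?subject ?predicate ?object .
}
GROUP BY ?predicate
ORDER BY DESC(?count)
\end{lstlisting}
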
doc/report/applications.tex (+123 −114)

@@ -3,7 +3,7 @@
With programming endpoints in place, we can now query the data set
containing both Isabelle and Coq exports stored in {GraphDB}. We
experimented with the following applications that talk to a GraphDB
Endpoint, our second contribution.

\begin{itemize}
    \item Exploring which ULO predicates are actually used in the

@@ -12,8 +12,8 @@ endpoint.
          (Section~\ref{sec:expl}).
    \item Providing an interactive interface for browsing the data
          set. Our implementation is limited to listing basic information
          about contributors and their work (Section~\ref{sec:exp}).
    \item We investigated queries that could be used to extend the
          system into a larger tetrapodal search system. While some

@@ -38,56 +38,54 @@ is given in verbatim in Figure~\ref{fig:preds-query}. Our query
yields a list of all used predicates together with the number of
occurrences (Figure~\ref{fig:preds-result}). Looking at the results,
we find that both the Isabelle and the Coq data sets only use subsets
of the predicates provided by~{ULO}. The full results are listed in
Appendix~\ref{sec:used}. In both cases, what stands out is that both
exports use less than a third of all available ULO~predicates.

We also see that the Isabelle and Coq exports use different
predicates. For example, the Isabelle export contains organizational
meta information such as information about paragraphs and sections in
the source document while the Coq export only tells us the filename of
the original Coq source. Both exports have their particularities and
with more and more third party libraries exported to ULO one has to
assume that this heterogeneity will only grow. In particular we want
to point to the large number of predicates which remain unused in both
Isabelle and Coq exports. A user formulating queries for ULO might be
oblivious to the fact that only subsets of exports support given
predicates.

\input{applications-preds.tex}

We expect the difference between Coq and Isabelle exports to be caused
by the difference in source material. It is only natural that
different third party libraries expressed in different languages with
different capabilities will result in different ULO~predicates.
Regardless, we want to hint at the possibility that this could also be
an omission in the exporter code that originally generated the
RDF~triplets. This shows the importance of writing good exporters.
Exporters translating existing libraries to ULO~triplets must lose as
little information as possible to ensure good results in a larger
tetrapodal search system.

Our first application gave us an initial impression of the structure
of currently available organizational knowledge formulated in ULO
triplets. Whether caused by the difference in formal language or
because of omissions in code that produces ULO~triplets, we must not
expect predicates to be evenly distributed in the data set. This is
something to keep in mind, especially as the number of ULO exports
increases.

\subsection{Interactive Exploration}\label{sec:exp}

Our second application wants to illustrate how to browse an ULO data
set interactively. For this purpose, we developed a web front end that
allows users to browse contributions by author. For this application,
we used the RDF4J Endpoint; the application itself is implemented in
Java. Figure~\ref{fig:appss} shows four screenshots of the current
version. Once the user has selected a given contribution, we list
some basic information about the selected object, such as type
(e.g.\ lemma or corollary) and name as well as references to other
objects in the knowledge base.

\input{applications-screenshots.tex}

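The queries behind such a front end can stay simple. The following is
a rough sketch rather than the exact query our implementation issues:
the prefix URI and the selected object IRI are placeholders, and the
predicate holding the human readable name is omitted because its exact
name depends on the export at hand.

\begin{lstlisting}
# Sketch only: prefix and object URIs are placeholders; ulo:uses is
# the reference predicate discussed later in this section.
PREFIX ulo: <https://example.org/ulo#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

SELECT ?type ?referenced
WHERE {
  <https://example.org/object/42> rdf:type ?type .
  OPTIONAL { <https://example.org/object/42> ulo:uses ?referenced . }
}
\end{lstlisting}
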
@@ -95,24 +93,29 @@ This application is interesting because similar approaches could be
applied when designing IDE~integration for working with organizational
knowledge. Given some object in a document identified by an URI, we
can look at how that object is connected to other objects in the
knowledge graph.

We can also connect the dots back to the original source. In
particular, we took a look at the \texttt{ulo:sourceref} property
defined for many objects in the knowledge base. \texttt{ulo:sourceref}
is ``the URI of the physical location (e.g., file/URI, line, column)
of the source code that introduced the subject''~\cite{uloonto} and is
in use in the Isabelle export~\cite{uloisabelle}. Making the
connection from ULO~object to original Isabelle document is convenient
for the user, but required some extra work from us as application
implementors. The reason for this is that the format of the
\texttt{ulo:sourceref} property is not well-defined; rather, it is up
to implementors of library exporters how to format source references.
The consequence is that for each export, developers of front ends will
need to write custom code for finding the original source. Maybe it
would be a reasonable requirement for \texttt{ulo:sourceref} to have a
well-defined format, ideally an actually reachable URI on the open
web, if that is applicable for a given library.

While translating from \texttt{ulo:sourceref} to the original URI
introduced some extra work, implementing this application was easy and
straightforward. Implementing similar features for other environments
should not be very difficult.

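The lookup itself is a single triple pattern; it is only the
interpretation of the returned value that needs export specific code.
A sketch with placeholder URIs:

\begin{lstlisting}
# Sketch only: prefix and object URIs are placeholders. The returned
# ?sourceref literal has no standardized format, so front ends need
# custom code to map it back to a file and position.
PREFIX ulo: <https://example.org/ulo#>

SELECT ?sourceref
WHERE {
  <https://example.org/object/42> ulo:sourceref ?sourceref .
}
\end{lstlisting}
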
\subsection{Querying for Tetrapodal Search}\label{sec:tetraq}

@@ -125,38 +128,43 @@ proof of concept implementations and evaluate their applicability.

\subsubsection{Elementary Proofs and Computed Scores}

We start with query~$\mathcal{Q}_1$ which illustrates how one can
compute arithmetic scores for objects in our knowledge graph.
Query~$\mathcal{Q}_1$ asks us to ``\emph{[f]ind theorems with
non-elementary proofs}.'' Elementary proofs can be understood as those
proofs that are considered easy and obvious~\cite{elempro,elemtest}.
In consequence,~$\mathcal{Q}_1$ has to search for all proofs which are
not trivial. Of course, just like any distinction between ``theorem''
and ``corollary'' is going to be somewhat arbitrary, so is any
judgment about whether a proof is easy or not. While we do understand
elementary as easy here, we must not omit that there also exist
concrete definitions of ``elementary proofs'' in certain subsets of
mathematics~\cite{elempro, el0, el1}. As we will see, our naive
understanding of elementary proofs results in interesting insights
regardless.

Existing research on proof difficulty is either very broad or specific
to one problem. For example, some experiments showed that students and
prospective school teachers had problems with notation and term
rewriting and were missing required prerequisites~\cite{proofund,
proofteach}, none of which seems applicable for grading individual
formal proofs for difficulty. On the other hand, there is very
specific research on rating proofs for some given subsets of problems,
e.g.\ on the satisfiability of a given CNF formula. A particular
example is focused on heuristics for how long a SAT solver will take
to find a solution for a given~{CNF}. Here, solutions that take long
are considered harder~\cite{proofsat}.

\noindent\emph{Organizational Aspect.} A first working hypothesis
might be to assume that elementary proofs are short. In that case, the
size, that is the number of bytes to store the proof, is our first
indicator of proof complexity. This is by no means perfect, as even
identical proofs can be represented in different ways that have vastly
different size in bytes. It might be tempting to imagine a unique
normal form for each proof, but finding such a normal form might very
well be impossible. As it is very difficult to find a generalized
definition of proof difficulty, we will accept proof size as a first
working hypothesis.

{ULO} offers the \texttt{ulo:external-size} predicate which will allow
us to sort by file size. Maybe proof complexity also leads to quick

@@ -172,8 +180,8 @@ of measure of formula complexity, different proofs could be
rated. Similarly, with narrative knowledge available to us, we could
count the number of words, citations and so on to rate the narrative
complexity of a proof. Combining symbolic knowledge, narrative
knowledge and organizational knowledge should allow us to find proofs
which are probably easier than others.

\input{applications-q1.tex}

@@ -181,7 +189,7 @@ are probably straight forward.
organizational aspect can be as simple as querying for all theorems
justified by proofs, ordered by size (or check time).
Figure~\ref{fig:q1short} illustrates how this can be achieved with a
SPARQL query. But maybe we wish to go one step further and calculate a
rating that assigns each proof some numeric score of complexity based
on a number of properties. We can achieve this in SPARQL as recent
versions support arithmetic as part of the SPARQL specification;

@@ -195,7 +203,7 @@ and associated proofs. Naturally, this list is bound to be very
long. A suggested way to solve this problem is to introduce some kind
of cutoff value for our complexity score. Another potential solution
is to only list the first~$n$ results, something a user interface
would have to do either way (i.e.\ pagination~\cite{pagination}). Either
way, this is not so much an issue for the organizational storage
engine and more one that a tetrapodal search aggregator has to account
for.

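As a rough sketch of such a query, the following orders proofs by a
simple size based score and cuts the list off after the first ten
results. \texttt{ulo:external-size} is the predicate discussed above,
while \texttt{ulo:theorem} and \texttt{ulo:justifies} merely stand in
for whatever class and predicate names the ontology and a given export
actually provide.

\begin{lstlisting}
# Sketch only: ulo:theorem and ulo:justifies are placeholder names;
# the prefix URI is a placeholder as well.
PREFIX ulo: <https://example.org/ulo#>

SELECT ?theorem ?proof ((?size / 1024) AS ?score)
WHERE {
  ?theorem a ulo:theorem .
  ?proof ulo:justifies ?theorem .
  ?proof ulo:external-size ?size .
}
ORDER BY DESC(?score)
LIMIT 10
\end{lstlisting}
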
@@ -204,10 +212,10 @@ Another problem is that computing these scores can be quite time
intensive. Even if calculating a score for one given object is fast,
doing so for the whole data set might quickly turn into a problem. In
particular, if we wish to show the $n$~objects with best scores, we do
need to compute scores for all relevant objects first. In
\emph{ulo-storage}, all scores we experimented with were easy enough
and the data sets small enough such that this did not become a
concrete problem. But in a larger tetrapodal search system, caching
results computed lazily or ahead of time will probably be a
necessity. Which component takes care of keeping this cache is not
clear right now, but we advocate for keeping caches of previously

@@ -230,15 +238,15 @@ complexity). We need to consider where each of these three components
might be stored.

\noindent\emph{Symbolic and Concrete Aspects.} First, let us consider
algorithms. Algorithms can be formulated as computer code which can be
understood as symbolic knowledge (code represented as a syntax tree)
or as concrete knowledge (code as text
files)~\cites[pp. 8--9]{tetra}{virt}. Either way, we will not be able
to query these indices for what problem a given algorithm is solving,
nor is it possible to infer properties as complex as
$NP$-completeness automatically in the general case~\cite{np}.
Metadata of this kind needs to be stored in a separate index for
organizational knowledge, it being the only fit.

\noindent\emph{Organizational Aspect.} If we wish to look up properties
about algorithms from organizational knowledge, we first have to

@@ -260,16 +268,17 @@ salesman problem or the sorting problem) in terms of theorems
(propositions, types) that get implemented by a proof (algorithm,
program).

As algorithms make up an important part of certain areas of research,
it might be reasonable to introduce native level support for
algorithms in ULO or separately in another ontology. An argument for
adding support directly to ULO is that ULO aims to be universal and as
such should not be without algorithms. An argument for a separate
ontology is that what we understand as ULO data sets (Isabelle, Coq
exports) already contain predicates from other ontologies
(e.g.\ Dublin Core metadata~\cite{dcreport, dcowl}) and keeping
concepts separate is not entirely unattractive in itself as it can be
less overwhelming for a user working with these ontologies.

In summary, we see that it is difficult to service~$\mathcal{Q}_2$
even though the nature of this query is very much one of

@@ -278,13 +287,13 @@ future ULO~exports to find a good way of encoding algorithmic problems
and solutions. Perhaps a starting point on this endeavor would be to
find a formal way of structuring information gathered on sites like
Rosetta Code~\cite{rc}, a site that provides concrete programs that
solve algorithmic problems.

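To hint at what native support could enable, consider the following
sketch. None of these predicates exist in ULO or in the exports we
worked with; they only illustrate how Rosetta Code style facts
(``program~$P$ implements task~$T$ in language~$L$'') might be queried
once such an encoding exists.

\begin{lstlisting}
# Hypothetical sketch: the algo: prefix and all predicates below are
# invented for illustration and exist in no ontology we used.
PREFIX algo: <https://example.org/algo#>

SELECT ?program ?language
WHERE {
  ?program algo:implements <https://example.org/task/sorting> .
  ?program algo:writtenIn  ?language .
}
\end{lstlisting}
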
\subsubsection{Contributors and Number of References}

Finally, query~$\mathcal{Q}_3$ from literature wants to know
``\emph{[a]ll areas of math that {Nicolas G.\ de Bruijn} has worked in
and his main contributions}''~\cite{tetra}. $\mathcal{Q}_3$~is asking
for works of a given author~$A$. It also asks for their main
contributions, for example which particularly interesting paragraphs
or code~$A$ has authored. We picked this particular query as it

@@ -292,19 +301,18 @@ is asking for metadata, something that should be easily serviced by
organizational knowledge.

\noindent\emph{Organizational Aspect.} ULO has no concept of authors,
contributors, dates and so on. Rather, the idea is to take advantage
of the Dublin Core project which provides an ontology for such
metadata~\cite{dcreport, dcowl}. For example, Dublin Core provides us
with the \texttt{dcterms:creator} and \texttt{dcterms:contributor}
predicates. Servicing~$\mathcal{Q}_3$ requires us to look for
creator~$A$ and then list all associated objects that they have worked
on. Of course this requires above authorship predicates to actually be
in use. With the Isabelle and Coq exports this was hardly the case;
running some experiments we found less than 15 unique contributors and
creators, raising suspicion that metadata is missing in the original
library files. Regardless, in theory ULO allows us to query for
objects ordered by authors.

\input{applications-q3.tex}

@@ -315,11 +323,12 @@ of~$A$, that is those works that~$A$ authored that are the most
important. Sorting the result by number of references might be a good
start. To get the main contributions, we rate each individual work by
its number of \texttt{ulo:uses} references. Extending the previous
{SPARQL} query, we can ask the database for an ordered list of works,
starting with the one that has the most
references~(Figure~\ref{fig:q2b}). We see that one can
formulate~$\mathcal{Q}_3$ with just one SPARQL query. Because
everything is handled by the database, access should be about as quick
as we can hope it to be.

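The authoritative query is the one shown in Figure~\ref{fig:q2b}; the
following sketch only indicates its general shape, with placeholder
URIs and with \texttt{dcterms:creator} and \texttt{ulo:uses} as
discussed above.

\begin{lstlisting}
# Sketch only: prefix URIs and the author IRI are placeholders; the
# direction of ulo:uses (incoming here) depends on what counts as a
# reference to the work.
PREFIX ulo:     <https://example.org/ulo#>
PREFIX dcterms: <http://purl.org/dc/terms/>

SELECT ?work (COUNT(?other) AS ?references)
WHERE {
  ?work dcterms:creator <https://example.org/author/debruijn> .
  OPTIONAL { ?other ulo:uses ?work . }
}
GROUP BY ?work
ORDER BY DESC(?references)
\end{lstlisting}
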
While the sparse data set available to us only returned a handful of
results, we see that queries like~$\mathcal{Q}_3$ are easily serviced

doc/report/references.bib (+39 −1)

@@ -462,3 +462,41 @@
  urldate   = {2020-10-05},
  url       = {https://www.w3.org/Consortium/},
}

@article{elemtest,
  title     = {On mathematicians' different standards when evaluating elementary proofs},
  author    = {Inglis, Matthew and Mejia-Ramos, Juan Pablo and Weber, Keith and Alcock, Lara},
  journal   = {Topics in Cognitive Science},
  volume    = {5},
  number    = {2},
  pages     = {270--282},
  year      = {2013},
  publisher = {Wiley Online Library}
}

@article{el0,
  title     = {Number theory and elementary arithmetic},
  author    = {Avigad, Jeremy},
  journal   = {Philosophia Mathematica},
  volume    = {11},
  number    = {3},
  pages     = {257--284},
  year      = {2003},
  publisher = {Oxford University Press}
}

@incollection{el1,
  title     = {The elementary proof of the prime number theorem: An historical perspective},
  author    = {Goldfeld, Dorian},
  booktitle = {Number Theory},
  pages     = {179--192},
  year      = {2004},
  publisher = {Springer}
}

@mastersthesis{np,
  title  = {An attempt to automate {NP}-hardness reductions via SO$\exists$ logic},
  author = {Nijjar, Paul},
  year   = {2004},
  school = {University of Waterloo}
}