Commit 6a1c9512 authored 4 years ago by Andreas Schärtl

report: explain components
parent 0831211e
Showing 2 changed files with 50 additions and 6 deletions:

doc/report/components.tex  +42 −6
doc/report/references.bib  +8 −0
doc/report/components.tex  +42 −6
...
@@ -3,15 +3,51 @@
 With various ULO/RDF files in place we have the aim of making the
 underlying data available for use with applications. For this, we
 should first identify the various components that might be involved in
-such a system. As a guide, figure~\ref{fig:components} illustrates
-the various components and their interplay.
+such a system. As a guide, figure~\ref{fig:components} illustrates the
+various components and their interplay. We will now give an overview
+of all involved components. Each component will later be discussed
+in more detail; this section serves only to give the reader a
+general understanding of the developed infrastructure and its
+topology.
 \begin{figure}[]
   \begin{center}
     \includegraphics{figs/components}
     \caption{Components involved in the \emph{ulo-storage} system.}
     \label{fig:components}
 \end{center}\end{figure}
-We will now give an overview over all involved components. Each
-component will later be discussed in more detail, this section serves
-only for the reader to get a general understanding of the developed
-infrastructure and its topology.
+\begin{itemize}
+\item ULO/RDF data is present in various locations, be it in Git
+  repositories, on web servers available via HTTP, or on the local
+  disk of a user. Regardless of where this ULO/RDF data is stored, a
+  \emph{Collecter} collects these {ULO/RDF} files. In the easiest
+  case, this involves cloning a Git repository or crawling a file
+  system for matching files.
+
+\item With streams of ULO/RDF files at the Collecter, this information
+  then gets passed to the \emph{Importer}. The Importer imports
+  triplets from files into some kind of permanent storage. For use in
+  this project, the GraphDB~\cite{graphdb} triplet store was a natural
+  fit (a brief import sketch follows below). In practice, both
+  Collecter and Importer end up being one piece of software, but this
+  does not have to be the case.
+
+\item Finally, with all triplets stored in a database, an
+  \emph{Endpoint} is where applications access the underlying
+  knowledge base (see the query sketch below). This does not
+  necessarily need to be any specific software; rather, the
+  programming API of the database could be understood as an endpoint
+  of its own. However, some thought should be put into designing an
+  Endpoint that is convenient to use.
+\end{itemize}
+Additionally, one could think of a \emph{Harvester} component. So far
+we assumed that the ULO/RDF triplets are already available as such.
+Indeed, for this project this is the case, as we worked with already
+exported triplets from the Isabelle and Coq libraries. However, this
+does not need to be the case. It might be desirable to automate the
+export from third-party formats to ULO/RDF, and indeed this is what a
+Harvester would do. It fetches mathematical knowledge from some remote
+source and then provides a volatile stream of ULO/RDF data to the
+Collecter, which then passes it on to the Importer and so on. The big
+advantage of such an approach is that exports from third-party
+libraries can always be kept up to date and do not have to be
+initiated manually.
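The Importer described in this hunk boils down to loading RDF files into GraphDB. A minimal sketch of such an import step, using the RDF4J API that the GraphDB references added below point to, might look as follows in Java; the repository URL, repository name "ulo" and the file path are illustrative assumptions, not taken from the actual ulo-storage code.

import java.io.File;

import org.eclipse.rdf4j.repository.Repository;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;
import org.eclipse.rdf4j.rio.RDFFormat;

public class ImporterSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical GraphDB repository; 7200 is GraphDB's default port,
        // "ulo" is an assumed repository name.
        Repository repo = new HTTPRepository("http://localhost:7200/repositories/ulo");
        repo.init();
        try (RepositoryConnection conn = repo.getConnection()) {
            // A single ULO/RDF file handed over by the Collecter (path is made up).
            conn.add(new File("exports/isabelle.rdf"), "", RDFFormat.RDFXML);
        } finally {
            repo.shutDown();
        }
    }
}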
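In the same vein, the Endpoint can initially be nothing more than the SPARQL interface that GraphDB already exposes. The following sketch shows how an application might query the same hypothetical repository through RDF4J; the repository name and the query are again assumptions for illustration only.

import org.eclipse.rdf4j.query.BindingSet;
import org.eclipse.rdf4j.query.TupleQueryResult;
import org.eclipse.rdf4j.repository.Repository;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;

public class EndpointQuerySketch {
    public static void main(String[] args) {
        Repository repo = new HTTPRepository("http://localhost:7200/repositories/ulo");
        repo.init();
        try (RepositoryConnection conn = repo.getConnection()) {
            // List a handful of triples just to confirm the import worked.
            String sparql = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10";
            try (TupleQueryResult result = conn.prepareTupleQuery(sparql).evaluate()) {
                while (result.hasNext()) {
                    BindingSet row = result.next();
                    System.out.println(row.getValue("s") + " "
                            + row.getValue("p") + " " + row.getValue("o"));
                }
            }
        } finally {
            repo.shutDown();
        }
    }
}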
doc/report/references.bib  +8 −0
...
@@ -46,6 +46,14 @@
   url =          {https://rdf4j.org/},
 }
+
+@online{graphdb,
+  title =        {GraphDB 9.3 documentation},
+  organization = {Ontotext},
+  date =         {2020},
+  urldate =      {2020-06-16},
+  url =          {http://graphdb.ontotext.com/documentation/free/}
+}
 
 @online{graphdbapi,
   title =        {Using GraphDB with the RDF4J API},
   organization = {Ontotext},
...