\section{Applications}\label{sec:applications}
With endpoints in place, we can now query the ULO/RDF
data set. Depending on the kind of application, different interfaces
and approaches to querying the database might make sense.
\subsection{Kinds of Applications}
Storing information as RDF triplets supports arbitrary queries, but is
in turn not optimized for any particular kind of application. For this
project, we tried out three categories of applications.
\begin{itemize}
\item The initial starting point for this project was the idea of
tetrapodal search. Our first application \emph{ulosearch} tries to
offer an easy way of searching the ULO/RDF data set.
\item With lots of data in a database, it is attractive to visualize
the data set in some graphical way; this is the purpose of the
\emph{ulovisualize} application.
\item Finally, we want to experiment a bit. The available ULO/RDF
data sets are about proofs and theorems and include links between
them. It might be interesting to find out which proofs and
definitions are more important than others, allowing us to create a
kind of ranking of them. This is explored in the \emph{ulorate}
application.
\end{itemize}
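A simple ranking along the lines of \emph{ulorate} could count how often
each object is referenced by others. The following SPARQL sketch
illustrates the idea; the prefix URI and the \texttt{ulo:uses} predicate
are placeholders, not necessarily the names used in the actual ontology.
\begin{verbatim}
PREFIX ulo: <http://example.org/ulo#>  # placeholder namespace

SELECT ?object (COUNT(?user) AS ?uses)
WHERE { ?user ulo:uses ?object . }
GROUP BY ?object
ORDER BY DESC(?uses)
\end{verbatim}
Objects appearing near the top of the result are referenced most often
and could be considered more central to the data set.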
\subsection{Database Interface}
For integrating the ULO/RDF data set into an existing application, it
is probably reasonable to query the data set directly using RDF4J.
That is, of course, assuming the existing codebase runs on the
{JVM}. If that is not the case, generating SPARQL queries is the
obvious choice.
The advantage of this approach is that connecting to and interacting
with the database is straightforward. The disadvantage is that it
requires a deep understanding of the structure of the underlying
ULO triplets.
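As an illustration of direct querying, the following SPARQL sketch
retrieves objects declared as theorems together with their names. The
prefix URI and the predicate names (\texttt{ulo:theorem},
\texttt{ulo:name}) are illustrative assumptions, not necessarily the
identifiers used in the actual ULO vocabulary.
\begin{verbatim}
PREFIX ulo: <http://example.org/ulo#>  # placeholder namespace

SELECT ?thm ?name
WHERE {
  ?thm a ulo:theorem .
  ?thm ulo:name ?name .
}
\end{verbatim}
Even this small example shows the problem: the author of the query must
already know which classes and predicates the data set uses.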
\subsection{A Language for Organizational Data}
ULO/RDF is a subset of RDF. While it can be queried as standard RDF
data, it may be helpful to design a query language specifically for
ULO/RDF triplets. Expressions in this particular query language could
then be converted to SPARQL or RDF4J expressions. Ideally this means
that (1)~the query language is intuitive and easy to use for this
specific use case and (2)~execution remains fast, as the underlying
SPARQL database is already highly optimized.
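To make the idea concrete, a hypothetical query in such a dedicated
language might read
\begin{verbatim}
find theorem where name contains "group"
\end{verbatim}
and be translated into SPARQL along the following lines (again with
placeholder prefix and predicate names):
\begin{verbatim}
SELECT ?t
WHERE {
  ?t a ulo:theorem ;
     ulo:name ?n .
  FILTER CONTAINS(?n, "group")
}
\end{verbatim}
The user-facing syntax hides the triple structure entirely, while the
generated SPARQL still benefits from the optimizations of the
underlying database.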