diff --git a/doc/report/applications.tex b/doc/report/applications.tex
index 257bb61fc5c64f6c442fe590bcfbef4ba19e4037..6607ec79c9772f92c6c28d6b6a23f050b4b77048 100644
--- a/doc/report/applications.tex
+++ b/doc/report/applications.tex
@@ -23,10 +23,10 @@ proof of concept implementations.
     distinction between ``theorem'' and ``corollary'' is arbitrary, so
     is any judgment about whether a proof is elementary or not.
 
-    A first working hypothesis might be to assume that easy proofs are
-    short. In that case, the size, that is the number of bytes to
-    store the proof, is our first indicator of proof
-    complexity. ULO/RDF offers the \texttt{ulo:external-size}
+    \textbf{Organizational Aspect} A first working hypothesis might be
+    to assume that easy proofs are short. In that case, the size, that
+    is, the number of bytes needed to store the proof, is our first
+    indicator of proof complexity. ULO/RDF offers the \texttt{ulo:external-size}
     predicate which will allow us to sort by file size. Maybe small
     file size also leads to quick check times in proof assistants and
     automatic theorem provers. With this assumption in mind we could
@@ -34,14 +34,48 @@ proof of concept implementations.
     complexity with file size allows us to answer this query with
     organizational data based on {ULO/RDF}.
 
-    A tetrapodal search system should probably also take symbolic
-    knowledge into account. Based on some kind of measure of formula
-    complexity, different proofs could be rated. Similarly, with
-    narrative knowledge available to us, we could count the number of
-    words, references and so on to rate the narrative complexity of a
-    proof. This shows that combining symbolic knowledge, narrative
-    knowledge and organizational knowledge could allow us to
-    service~$\mathcal{Q}_{1}$ in a heuristic fashion.
+    \textbf{Other Aspects} A tetrapodal search system should probably
+    also take symbolic knowledge into account. Based on some kind of
+    measure of formula complexity, different proofs could be
+    rated. Similarly, with narrative knowledge available to us, we
+    could count the number of words, references and so on to rate the
+    narrative complexity of a proof. This suggests that combining
+    symbolic, narrative and organizational knowledge could allow us to
+    service~$\mathcal{Q}_{1}$ in a heuristic fashion.
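+    As a purely hypothetical sketch, if a predicate recording the
+    narrative word count of a proof document were available (here
+    called \texttt{ulo:word-count}, introduced only for illustration),
+    the narrative aspect could be rated with a query analogous to the
+    organizational ones sketched under \textbf{Implementation} below.
+    \begin{lstlisting}
+    PREFIX ulo: <https://mathhub.info/ulo#>
+
+    # ulo:word-count is hypothetical, introduced only for illustration.
+    SELECT ?proof ?words
+    WHERE {
+        ?proof ulo:proof ?o .
+        ?proof ulo:word-count ?words .
+    }
+    ORDER BY DESC(?words)
+    \end{lstlisting}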
+
+    \textbf{Implementation} Implementing a naive version of the
+    organizational aspect can be as simple as querying for all proofs,
+    ordered by size (or check time), largest first.
+    \begin{lstlisting}
+    PREFIX ulo: <https://mathhub.info/ulo#>
+
+    SELECT ?proof ?size
+    WHERE {
+        ?proof ulo:proof ?o .
+        ?proof ulo:external-size ?size .
+    }
+    ORDER BY DESC(?size)
+    \end{lstlisting}
+    Maybe we want to go one step further and calculate a rating that
+    assigns each proof some numeric value of complexity. We can
+    achieve this in SPARQL as well, since SPARQL 1.1 supports
+    arithmetic expressions in the SELECT clause.
+    \begin{lstlisting}
+    PREFIX ulo: <https://mathhub.info/ulo#>
+    PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
+
+    SELECT ?proof ?size (xsd:float(?size) + xsd:float(?checktime) AS ?rating)
+    WHERE {
+        ?proof ulo:proof ?o .
+        ?proof ulo:external-size ?size .
+        ?proof ulo:check-time ?checktime .
+    }
+    ORDER BY DESC(?rating)
+    \end{lstlisting}
+    Finding a reasonable rating is a research topic in its own right,
+    but as long as the rating is based on basic arithmetic, it can be
+    formulated as a SPARQL query.
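+    As one further sketch, a weighted rating could be expressed with
+    the \texttt{BIND} construct; the weights used here are arbitrary
+    placeholders rather than values prescribed by ULO/RDF.
+    \begin{lstlisting}
+    PREFIX ulo: <https://mathhub.info/ulo#>
+    PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
+
+    SELECT ?proof ?rating
+    WHERE {
+        ?proof ulo:proof ?o .
+        ?proof ulo:external-size ?size .
+        ?proof ulo:check-time ?checktime .
+        # The weights 0.8 and 0.2 are arbitrary placeholders.
+        BIND(0.8 * xsd:float(?size) + 0.2 * xsd:float(?checktime)
+             AS ?rating)
+    }
+    ORDER BY DESC(?rating)
+    \end{lstlisting}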
 
     \item \textbf{$\mathcal{Q}_{2}$ ``Find algorithms that solve
     $NP$-complete graph problems.''} Here we want the tetrapodal search