Commit 3fb4dec8 authored by Michael Banken

added remaining revision suggestions

parent d93763e4
......@@ -2,7 +2,7 @@
\label{chap:alg}
\section{Idea}
Since our approach to identifying candidates is large based on set theory, the algorithms for finding are also making heavy use of set operations, using essentially the same construction method for creating the set of optimization candidates as the definitions presented in \autoref{chap:candidates}.
Since our approach to identifying candidates is largely based on set theory, the algorithms for finding them also make heavy use of set operations, using essentially the same construction method for the set of optimization candidates as the definitions presented in \autoref{chap:candidates}.
\subsection{Graph Optimization}
Optimization of the theory graph happens one theory at a time, beginning at the outermost theories of the graph, i.e. those theories that are not included by any other theory.
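To illustrate this traversal order, the following is a minimal sketch. It is not the actual implementation; it uses a simplified, hypothetical representation of the theory graph as a map from each theory to its direct inclusions.
\begin{lstlisting}[language=scala]
object TraversalSketch {
  // Simplified, hypothetical representation: the theory graph as a map
  // from each theory to the set of theories it directly includes.
  type TheoryGraph = Map[String, Set[String]]

  // Order the theories so that processing starts at the "outermost" ones,
  // i.e. theories that are not included by any other theory, and then
  // moves inwards along the inclusion relation.
  def processingOrder(graph: TheoryGraph): List[String] = {
    @annotation.tailrec
    def step(remaining: TheoryGraph, acc: List[String]): List[String] =
      if (remaining.isEmpty) acc
      else {
        val included  = remaining.values.flatten.toSet
        val outermost = remaining.keySet -- included
        // a cyclic remainder has no outermost theory; stop to avoid looping
        if (outermost.isEmpty) acc ++ remaining.keys.toList.sorted
        else step(remaining -- outermost, acc ++ outermost.toList.sorted)
      }
    step(graph, Nil)
  }
}
\end{lstlisting}
In this sketch a cycle in the inclusion relation would leave no outermost theory, which is why the helper simply stops in that case.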
......@@ -18,7 +18,7 @@ Optimization of a theory itself happens in two separate passes. The first pass d
The reason for separating these passes is to avoid removing simply redundant inclusions that would later have to be reinserted when eliminating a partially superfluous inclusion. An example of such a hazard is the overlap between the two types of candidates shown in \autoref{fig:redundantoverlap}.
\subsubsection{First pass - Eliminating superfluous inclusions}
\subsubsection{First Pass - Eliminating Superfluous Inclusions}
In the first pass we select the optimization candidates, namely those theories that are directly included, but not used in the current theory. These are the theories whose inclusions need to be replaced; a simplified sketch of this pass follows below.
......@@ -35,7 +35,7 @@ Proof sketch:
\item Any indirectly included theory is either included by a theory that is retained or by one that is being replaced. Since the replacement of a theory inclusion is the necessary subset of its included theories, all necessary dependencies will remain included.
\end{itemize}
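To make the first pass concrete, the following is a simplified sketch on a hypothetical model of a single theory. It is not the pseudo code of \autoref{sec:alg_si}, and it ignores the future light cone constraint discussed in \autoref{sec:futurelite}.
\begin{lstlisting}[language=scala]
object FirstPassSketch {
  // Hypothetical simplified model of a single theory: the theories it
  // includes directly and the theories whose declarations it actually uses.
  case class TheoryInfo(directIncludes: Set[String], used: Set[String])

  // First pass: every directly included but unused theory is a candidate;
  // it is replaced by the subset of the theories it (transitively) includes
  // that the current theory still needs.
  def superfluousIncludes(
      info: TheoryInfo,
      flatIncludes: String => Set[String] // transitive inclusions per theory
  ): Map[String, Set[String]] =
    (info.directIncludes -- info.used).map { candidate =>
      candidate -> (flatIncludes(candidate) intersect info.used)
    }.toMap
}
\end{lstlisting}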
\subsubsection{Second pass - Eliminating simply redundant inclusions}
\subsubsection{Second Pass - Eliminating Simply Redundant Inclusions}
The idea behind the second pass is to collect all theories that are included indirectly by the current theory and to discard every direct inclusion that is part of this set. This leaves us with exactly those inclusions that are not simply redundant, without changing the flattened theory graph.
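A minimal sketch of this second pass, again on a simplified, hypothetical representation of the graph and assuming, as discussed below, that the inclusion relation contains no cycles:
\begin{lstlisting}[language=scala]
object SecondPassSketch {
  // Hypothetical simplified model: the graph as a map from each theory to
  // its direct inclusions; the inclusion relation is assumed to be acyclic.
  def simplyRedundant(
      theory: String,
      directIncludes: Map[String, Set[String]]
  ): Set[String] = {
    // all theories reachable from start along inclusion edges
    def reachable(start: String): Set[String] = {
      @annotation.tailrec
      def go(frontier: Set[String], seen: Set[String]): Set[String] =
        if (frontier.isEmpty) seen
        else {
          val next =
            frontier.flatMap(t => directIncludes.getOrElse(t, Set.empty[String])) -- seen
          go(next, seen ++ next)
        }
      go(Set(start), Set.empty)
    }
    val direct = directIncludes.getOrElse(theory, Set.empty[String])
    // theories included indirectly, i.e. via one of the direct inclusions
    val indirect = direct.flatMap(reachable)
    // direct inclusions already covered indirectly are simply redundant
    direct intersect indirect
  }
}
\end{lstlisting}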
......@@ -48,9 +48,9 @@ If we do so, all indirect inclusions must ultimately be the result of a direct i
If these assumptions are wrong, however, not only could we run into problems, but we will inevitably do so if the current theory is part of the cycle. It is therefore best to perform this operation only on a graph that satisfies our assumption.
\end{itemize}
\subsection{Future Lite Code}
\subsection{Future Light Cone}
\label{sec:futurelite}
In \autoref{sec:viability} we already mentioned the problem of future lite code. The obvious method for aquiring the future lite code is traversing the entire theory graph and whenever we find a theory inclusion, we make a note for the included theory that the theory we are currently traversing is part of its future code. Unfortunately this requires traversing the graph in its entirety.
In \autoref{sec:viability} we already mentioned the problem of the future light cone. The obvious method for acquiring the future light cone is to traverse the entire theory graph and, whenever we find a theory inclusion, make a note for the included theory that the theory we are currently traversing is part of its future light cone. Unfortunately this requires traversing the graph in its entirety.
For the actual algorithm we skip this part and instead create a map of used theories in the future code, since this is the part we actually require. Since this means potentially going over every theory for every theory (or at least reasonably close to it), our worst-case runtime for this part of the algorithm is quadratic in the number of theories in the graph.
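The following is a simplified sketch of how such a map of future uses could be assembled, assuming a hypothetical model in which every theory's flattened inclusions and used theories are already known; the iteration over all theories and all of their inclusions is what makes the worst case quadratic.
\begin{lstlisting}[language=scala]
object FutureUseSketch {
  // Hypothetical simplified model: for every theory we know its flattened
  // (transitive) inclusions and the theories it actually uses.
  case class TheoryInfo(flatIncludes: Set[String], uses: Set[String])

  // For every theory, collect which theories are used somewhere in its
  // future light cone, i.e. by any theory that (transitively) includes it.
  def futureUse(graph: Map[String, TheoryInfo]): Map[String, Set[String]] = {
    val initial: Map[String, Set[String]] =
      graph.keySet.map(t => t -> Set.empty[String]).toMap
    graph.foldLeft(initial) { case (acc, (_, info)) =>
      info.flatIncludes.foldLeft(acc) { (acc2, included) =>
        acc2.updated(included, acc2.getOrElse(included, Set.empty) ++ info.uses)
      }
    }
  }
}
\end{lstlisting}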
......@@ -114,7 +114,7 @@ To somewhat mitigate the effect this has on the performance, the future is creat
\section{Pseudo Code}
\subsection{Optimizing graph}
\subsection{Optimizing Graph}
\label{sec:optgraph}
The following code applies the optimizations to the entire theory graph.
......@@ -139,7 +139,7 @@ The following code applies the optimizations to the entire theory graph.
\caption{optimizeGraph(theorygraph)}
\end{algorithm}
\subsection{Optimizing theory}
\subsection{Optimizing Theory}
\label{sec:opttheo}
The following pseudo code applies optimizations to a given theory.
......@@ -158,7 +158,7 @@ The following pseudo code applies optimizations to a given theory.
\caption{optimize(theory, futureUse)}
\end{algorithm}
\subsection{Finding superfluous inclusions}
\subsection{Finding Superfluous Inclusions}
\label{sec:alg_si}
The following pseudo code is for finding superfluous inclusions (see: \autoref{sec:superinc}).
......@@ -184,7 +184,7 @@ The following pseudo code is for finding superfluous inclusions (see: \autoref{s
Note that in \autoref{sec:superinc} we made a distinction between purely and partially superfluous inclusions. However, we do not need to make this distinction while searching for them, as we can search using the criterion for generally superfluous inclusions. Since the two cases only differ in the set of inclusions that needs to be retained, and we write that set into our result anyway, both can be handled by the same routine.
\subsection{Finding simply redundant inclusions}
\subsection{Finding Simply Redundant Inclusions}
\label{sec:alg_sri}
The following pseudo code is for finding simply redundant inclusions (see: \autoref{sec:redinc}).
......@@ -208,7 +208,7 @@ The following pseudo code is for finding simply redundant inclusions (see: \auto
\caption{simplyRedundantIncludes(theory)}
\end{algorithm}
\section{Performance analysis}
\section{Performance Analysis}
Since the performance will be heavily dependent on the size of the graph to be optimized, we will measure runtime and memory requirements as functions of the variable $t$, which denotes the number of theories in the graph.
\subsection{Runtime}
......@@ -216,6 +216,7 @@ Since the performance will be heavily dependent on the size of the graph to be o
Exact runtime is specific to the library implementations of the functions used, but after looking at the relevant Scala documentation we can make a few helpful assumptions \cite{scaladoccolperf}.
With effectively constant time for lookup, insertion and removal in a hash set, we can deduce that the likely runtimes for the set operations $\cup$, $\cap$ and $\setminus$ are $O(n)$, where $n$ is the number of elements in the sets involved.
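For illustration only, these are the set operations in question on Scala hash sets; the $O(n)$ estimate corresponds to each operation touching every element of one operand once.
\begin{lstlisting}[language=scala]
import scala.collection.immutable.HashSet

object SetOperationCost {
  val a: HashSet[Int] = HashSet(1, 2, 3)
  val b: HashSet[Int] = HashSet(3, 4)

  val union        = a union b     // add every element of b to a
  val intersection = a intersect b // keep the elements of a contained in b
  val difference   = a diff b      // drop the elements of a contained in b
}
\end{lstlisting}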
\vfill
\subsubsection{First pass}
\begin{algorithm}[H]
......
\chapter{Conclusion}
\section{State of the project}
\section{State of the Project}
\label{sec:state}
In its current state the tool is capable of automatically detecting the optimization candidates we introduced in this paper.
......@@ -20,4 +20,4 @@ The most important future goal is to further automate the optimization process.
Another important step is providing a proper interface for calling the optimizer, preferably in a way that integrates seamlessly with other MMT components.
Finally, the runtime of the algorithm is a concern, as we discussed in \autoref{sec:runtime}. While it seems to run fine on small graphs, it could easily blow up for larger graphs, so any optimization of the algorithm itself would be a welcome improvement. For this purpose further analysis of the actual runtime seems prudent.\\
One potential approach would be using prior knowledge about archive dependencies to limit the scope on which the algorithm needs to operate, when only optimizing parts of the graph.
\ No newline at end of file
One potential approach would be to use prior knowledge about archive dependencies to limit the scope on which the algorithm needs to operate when only parts of the graph are optimized. With enough information we could even limit ourselves to incremental searches for optimizations that can be run after each change.
\ No newline at end of file
......@@ -2,7 +2,7 @@
\label{chap:implementation}
\section{Integration}
The intended interface for programmers accessing the graph optimizations corresponds more or less directly to the routines presented in \autoref{chap:alg}. They are implemented as methods of the class GraphOptimizationTool, which itself is a subclass of info.kwarc.mmt.api.frontend.Extension\cite{MMTapi}.
The intended interface for programmers accessing the graph optimizations corresponds more or less directly to the routines presented in \autoref{chap:alg}. They are implemented as methods of the class GraphOptimizationTool, which itself is a subclass of info.kwarc.mmt.api.frontend.Extension \cite{MMTapi}.
At the moment calling these methods is the only way to access the optimization functionality, but this should change at some point in the future (see \autoref{sec:futurework}).
......@@ -20,7 +20,7 @@ At the moment calling these functions is the only way to access the optimization
def findRedundantIncludes(theoryPath : MPath, replacementmap : HashMap[MPath, HashMap[Path, HashSet[MPath]]] = HashMap[MPath, HashMap[Path, HashSet[MPath]]]()) : List[Path]
\end{lstlisting}
The method findRedundantIncludes is implementing simplyRedundantIncludes i.e the second pass from \autoref{sec:alg_sri}.
The method \lstinline{findRedundantIncludes} implements simplyRedundantIncludes, i.e. the second pass from \autoref{sec:alg_sri}.
\begin{lstlisting}[language=scala]
/** Finds superfluous inclusions
......@@ -34,7 +34,7 @@ The method findRedundantIncludes is implementing simplyRedundantIncludes i.e the
def findUnusedIncludeReplacements(theoryPath : MPath, replacementmap : HashMap[MPath, HashMap[Path, HashSet[MPath]]] = HashMap[MPath, HashMap[Path, HashSet[MPath]]](), futureUse : HashMap[MPath, HashSet[MPath]] = HashMap[MPath, HashSet[MPath]]()) : HashMap[Path, HashSet[MPath]]
\end{lstlisting}
The method findUnusedIncludeReplacements is implementing superfluousIncludes(theory, futureUse) i.e the first pass from \autoref{sec:alg_si}.
The method \lstinline{findUnusedIncludeReplacements} implements superfluousIncludes(theory, futureUse), i.e. the first pass from \autoref{sec:alg_si}.
\begin{lstlisting}[language=scala]
/** Optimizes multiple theories
......@@ -47,7 +47,7 @@ The method findUnusedIncludeReplacements is implementing superfluousIncludes(the
def findReplacements(theories: Iterable[MPath], interactive : Boolean) : HashMap[MPath, HashMap[Path, HashSet[MPath]]]
\end{lstlisting}
This method findReplacements is similar to optimize(theory, futureUse)(\autoref{sec:opttheo}) and optimizeGraph(theorygraph)(\autoref{sec:optgraph}). The difference is that the method takes a an Iterable of theories, for which to apply the graph optimization algorithm. In other words it is applying the optimizations not to the entire graph, but a given subset.
The method \lstinline{findReplacements} is similar to optimize(theory, futureUse) (\autoref{sec:opttheo}) and optimizeGraph(theorygraph) (\autoref{sec:optgraph}). The difference is that this method takes an Iterable of theories to which the graph optimization algorithm is applied. In other words, it applies the optimizations not to the entire graph, but to a given subset.
The boolean parameter interactive determines whether the interactive mode is launched.
......@@ -64,7 +64,7 @@ If false is selected all suggested changes will be applied.
def findReplacements(interactive : Boolean = false) : HashMap[MPath, HashMap[Path, HashSet[MPath]]]
\end{lstlisting}
This method findReplacements(theories, interactive) is implementing optimizeGraph(theorygraph)from \autoref{sec:optgraph}. De facto it is a wrapper that calls the earlier findReplacements method on an Iterable over every theory it can find.
This method \lstinline{findReplacements(interactive)} implements optimizeGraph(theorygraph) from \autoref{sec:optgraph}. De facto it is a wrapper that calls the earlier \lstinline{findReplacements(theories, interactive)} method on an Iterable over every theory it can find.
This method also comes with an optional boolean parameter to choose between interactive and non-interactive mode (defaults to false).
......@@ -78,11 +78,11 @@ This method also comes with an optional boolean parameter to choose between inte
def toXML(map : HashMap[MPath, HashMap[Path, HashSet[MPath]]]) : String
\end{lstlisting}
The method toXML is used to convert the output from the findReplacements methods into human readable xml. The specification of the xml output can be found in \autoref{sec:xml}.
The method \lstinline{toXML} is used to convert the output of the \lstinline{findReplacements} methods into human-readable XML. The specification of the XML output can be found in \autoref{sec:xml}.
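As a usage illustration, the following minimal sketch assumes a \lstinline{GraphOptimizationTool} instance that has already been registered with a running MMT controller; it runs the optimizer non-interactively and prints the suggestions as XML. The overload taking an Iterable of theories can be called analogously to restrict optimization to a subset of the graph.
\begin{lstlisting}[language=scala]
// Hypothetical usage sketch: got is assumed to be a GraphOptimizationTool
// that has already been added to a running MMT controller as an Extension.
val got: GraphOptimizationTool = ???

// Optimize every theory the tool can find, without the interactive
// dialogue, and print the suggested replacements as XML.
val suggestions = got.findReplacements(interactive = false)
println(got.toXML(suggestions))
\end{lstlisting}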
\subsection{User Interface}
In its current form the user interface is limited to the dialogue window created by the findReplacement procedure. The dialogue will provide the user with a suggested change to the theory graph.
In its current form the user interface is limited to the dialogue window created by the \lstinline{findReplacements} procedure. The dialogue presents the user with a suggested change to the theory graph.
\label{sec:ui}
\begin{figure}[!htb]
......
\chapter{Introduction}
\chapter{Motivation}
\section{Motivation}
Knowledge is complicated and so is its representation, especially when trying to store it in a computer. One approach to solve this issue is by organizing bodies of knowledge in neat graph structures. A project that seeks to model knowledge in this way is the MMT language\cite{rabe2013scalable}, which has theory graphs as its semantics. Unfortunately the graph structures which are generated represent the knowledge so well that they inherent its complexity.
Knowledge is complicated and so is its representation, especially when trying to store it in a computer. One approach to solving this issue is to organize bodies of knowledge in modular graph structures that enable flexible reuse of knowledge fragments. A project that seeks to model knowledge in this way is the MMT language \cite{rabe2013scalable}, which has theory graphs as its semantics. Unfortunately the graph structures which are generated represent the knowledge so well that they inherit its complexity.
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{tgview}
\caption{Picture of the LATIN archive\cite{latin} graph using TGView\cite{rupprecht2017flexible}}
\caption{Picture of the LATIN archive \cite{latin} graph using TGView \cite{rupprecht2017flexible}}
\label{fig:tgview}
\end{figure}
For this reason it should be no surprise that these theory graphs can be plagued by redundancies and entirely unnecessary dependencies.
As the sources of these graphs are written by humans who only have a local view while authoring theories, it should be no surprise that these theory graphs can be plagued by redundancies and globally unnecessary dependencies.
To help reduce unnecessary complexity of these large theory graphs, I introduce Styler, an automated tool for optimizing theory inclusions.
To help reduce unnecessary complexity of these large theory graphs, I introduce Styler, an automated tool for optimizing theory inclusions via static analysis.
The goal of this tool is to automatically detect possible optimizations that reduce the number of edges without breaking the graph and recommend them to the user.
\ No newline at end of file