diff --git a/graphoptimization.pdf b/graphoptimization.pdf index 4fa384004bcc98ab2995ee9ad03e2316dede1a71..f6505d4f857d127dc26e53cf8d76ed7b9b270ca7 100644 Binary files a/graphoptimization.pdf and b/graphoptimization.pdf differ diff --git a/src/algorithm.tex b/src/algorithm.tex index 92aaabf0a04b010c248c9eeaaf6ebeb206f49e6a..e50dca7a8a6d9a10c4b5192b4769741265930097 100644 --- a/src/algorithm.tex +++ b/src/algorithm.tex @@ -2,7 +2,7 @@ \label{chap:alg} \section{Idea} -Since our approach to identifying candidates is large based on set theory, the algorithms for finding are also making heavy use of set operations, using essentially the same construction method for creating the set of optimization candidates as the definitions presented in \autoref{chap:candidates}. +Since our approach to identifying candidates is largely based on set theory, the algorithms for finding them also make heavy use of set operations, using essentially the same construction method for creating the set of optimization candidates as the definitions presented in \autoref{chap:candidates}. \subsection{Graph Optimization} Optimization of the theory graph happens one theory at a time, beginning at the outermost edges of the theory graph, i.e. those theories that are not included by any other theory. @@ -18,7 +18,7 @@ Optimization of a theory itself happens in two separate passes. The first pass d The reason for separating these passes is to avoid the removal of simply redundant inclusions that will later have to be reinserted when eliminating a partially superfluous inclusion. An example of such a hazard is the overlap between the two types of candidates in \autoref{fig:redundantoverlap}. -\subsubsection{First pass - Eliminating superfluous inclusions} +\subsubsection{First Pass - Eliminating Superfluous Inclusions} In the first pass we take a selection of optimization candidates, namely those theories that are directly included, but not used in the current theory.
These are the theories that need to be replaced. @@ -35,7 +35,7 @@ Proof sketch: \item Any indirectly included theory is either included by a theory that is retained or by one that is being replaced. Since the replacement of a theory inclusion is the necessary subset of its included theories, all necessary dependencies will remain included. \end{itemize} -\subsubsection{Second pass - Eliminating simply redundant inclusions} +\subsubsection{Second Pass - Eliminating Simply Redundant Inclusions} The idea behind the second pass is to collect all those theories that are included indirectly by the current theory and to throw away all of those direct inclusions that are part of this set. This will leave us exactly with those inclusions that are not simply redundant, without changing the flattened theory graph. @@ -48,9 +48,9 @@ If we do so, all indirect inclusions must ultimately be the result of a direct i If these assumptions are wrong however, not only could we run into problems, but we will inevitably do so if the current theory is part of the cycle. It is therefore best to only perform this operation on a graph that satisfies our assumption. \end{itemize} -\subsection{Future Lite Code} +\subsection{Future Light Cone} \label{sec:futurelite} -In \autoref{sec:viability} we already mentioned the problem of future lite code. The obvious method for aquiring the future lite code is traversing the entire theory graph and whenever we find a theory inclusion, we make a note for the included theory that the theory we are currently traversing is part of its future code. Unfortunately this requires traversing the graph in its entirety. +In \autoref{sec:viability} we already mentioned the problem of the future light cone. The obvious method for acquiring the future light cone is traversing the entire theory graph and, whenever we find a theory inclusion, making a note for the included theory that the theory we are currently traversing is part of its future light cone.
Unfortunately this requires traversing the graph in its entirety. For the actual algorithm we skip this part and instead create a map of used theories in the future light cone, since this is the part we actually require. Since this means potentially going over every theory for every theory (or at least reasonably close to it), our worst case runtime for this part of the algorithm is quadratic in the number of theories in the graph. @@ -114,7 +114,7 @@ To somewhat mitigate the effect this has on the performance, the future is creat \section{Pseudo Code} -\subsection{Optimizing graph} +\subsection{Optimizing Graph} \label{sec:optgraph} The following code applies the optimizations to the entire theory graph. @@ -139,7 +139,7 @@ The following code applies the optimizations to the entire theory graph. \caption{optimizeGraph(theorygraph)} \end{algorithm} -\subsection{Optimizing theory} +\subsection{Optimizing Theory} \label{sec:opttheo} The following pseudo code applies optimizations to a given theory. @@ -158,7 +158,7 @@ The following pseudo code applies optimizations to a given theory. \caption{optimize(theory, futureUse)} \end{algorithm} -\subsection{Finding superfluous inclusions} +\subsection{Finding Superfluous Inclusions} \label{sec:alg_si} The following pseudo code is for finding superfluous inclusions (see: \autoref{sec:superinc}). @@ -184,7 +184,7 @@ The following pseudo code is for finding superfluous inclusions (see: \autoref{s Note that in \autoref{sec:superinc} we made a distinction between purely and partially superfluous inclusions. However we do not need to make this distinction while searching for them, as we can search for them by using the criteria for generally superfluous inclusion. Since they only differ in the set of inclusions that needs to be retained and we write that set in our result anyway, both can be accomplished by the same routine.
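The future-use map described above can be made concrete with a small sketch. This is an illustration over a hypothetical toy graph (plain `String` names and Scala `Map`s standing in for MMT's data structures, not the actual implementation); for every theory we walk its transitive inclusions and record its uses there, which is what makes the worst case quadratic:

```scala
// Hypothetical toy graph: direct inclusions and directly used theories.
// These maps stand in for MMT's actual data structures.
val directIncludes: Map[String, Set[String]] = Map(
  "A" -> Set(),
  "B" -> Set("A"),
  "C" -> Set("B")
)
val usedTheories: Map[String, Set[String]] = Map(
  "A" -> Set(),
  "B" -> Set(),
  "C" -> Set("A") // C uses a symbol declared in A
)

// futureUse(T): theories used by some theory in T's future light cone.
// For every theory S we walk S's transitive inclusions and add S's uses
// to each of them, so the worst case is quadratic in the number of theories.
val futureUse: Map[String, Set[String]] = {
  val acc = scala.collection.mutable.Map[String, Set[String]]().withDefaultValue(Set())
  for (s <- directIncludes.keys) {
    var frontier = directIncludes(s)
    var seen = Set.empty[String]
    while (frontier.nonEmpty) {
      val t = frontier.head
      frontier = frontier.tail ++ (directIncludes(t) -- seen)
      seen += t
      acc(t) = acc(t) ++ usedTheories(s)
    }
  }
  acc.toMap.withDefaultValue(Set())
}
```

In this toy graph `futureUse("A")` contains `"A"` itself, because `C` reaches `A` through its inclusions and uses a symbol from `A`; the first pass must therefore keep `A` reachable when optimizing `B`.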
-\subsection{Finding simply redundant inclusions} +\subsection{Finding Simply Redundant Inclusions} \label{sec:alg_sri} The following pseudo code is for finding simply redundant inclusions (see: \autoref{sec:redinc}). @@ -208,7 +208,7 @@ The following pseudo code is for finding simply redundant inclusions (see: \auto \caption{simplyRedundantIncludes(theory)} \end{algorithm} -\section{Performance analysis} +\section{Performance Analysis} Since the performance will be heavily dependent on the size of the graph to be optimized, we will measure runtime and memory requirements depending on the variable t, which denotes the number of theories in the graph. \subsection{Runtime} @@ -216,6 +216,7 @@ Since the performance will be heavily dependent on the size of the graph to be o Exact runtime is specific to the library implementations of the functions used, but after looking at the relevant Scala documentation we can make a few helpful assumptions \cite{scaladoccolperf}. With effectively constant time to look up, add and remove in a hash set, we can deduce that likely runtimes for the set operations $\cup$, $\cap$ and $\setminus$ are $O(n)$, where n is the number of elements in the sets involved. +\vfill \subsubsection{First pass} \begin{algorithm}[H] diff --git a/src/candidates.tex b/src/candidates.tex index d57f9424800be989a77c0fc871e31fef144d0436..80910cd3555adb239ec0d3ff47f12ca7c4617bac 100644 --- a/src/candidates.tex +++ b/src/candidates.tex @@ -8,7 +8,15 @@ For something to be declared a candidates we will only require that the theory i Notably optimization candidates will not necessarily preserve the flattened graph. This is only the case for simply redundant inclusions (see below). -\section{Simply redundant inclusion} +For defining the candidates we will require the following functions mapping theories $T$ to their different kinds of theory inclusions and the theories they depend on.
+ +\begin{align*} +dI(T) &:= \{S\in theories \mid \text{body of } T \text{ contains an inclusion of } S\}\\ +tI(T) &:= \{S\in theories \mid S \text{ is in the transitive closure of } dI(T)\}\\ +uT(T) &:= \{S\in theories \mid \text{body of } T \text{ contains a symbol declared in } S\} +\end{align*} + +\section{Simply Redundant Inclusion} \label{sec:redinc} An inclusion of a theory $A\hookrightarrow{}C$ is \underline{simply redundant}, if $C$ includes a theory $B$, such that $B\hookrightarrow{}C$ (see \autoref{fig:redundantbasic}). @@ -67,19 +75,21 @@ Simply redundant inclusions can be safely optimized by simply removing the redun \label{fig:redundantoptimized} \end{figure} -The simply redundant inclusions in a theory T are the set $I_R(T) =$\\ $(\bigcup_{I \in DirectIncludes(T)}Includes(I)) \cap DirectIncludes(T)$,\\ -where $Includes(I)$ are the transitively included theories in I and $DirectIncludes(T)$ are the theories included directly in the theory T. +The simply redundant inclusions in a theory T are the set\\ +$I_R(T) := (\bigcup_{I \in dI(T)}tI(I)) \cap dI(T)$,\\ +where $tI(I)$ are the transitively included theories in I and $dI(T)$ are the theories included directly in the theory T. -It is immediately obvious that removing a simply redundant inclusion I from a theory T also removes it from the above set via the intersection with $DirectIncludes(T)$.\\ -Slightly less obvious is that doing so neither adds nor removes any other inclusions. This is because by design the set $\bigcup_{I \in DirectIncludes(T)}Includes(I))$ still includes I and any of its children and thus remains unchanged. +It is immediately obvious that removing a simply redundant inclusion I from a theory T also removes it from the above set via the intersection with $dI(T)$.\\ +Slightly less obvious is that doing so neither adds nor removes any other inclusions. This is because by design the set $\bigcup_{I \in dI(T)}tI(I)$ still includes I and any of its children and thus remains unchanged.
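To make these set constructions concrete, here is a small self-contained Scala sketch over a hypothetical three-theory graph (the names and the `Map`-based representation are illustrative only, not the MMT implementation); `tI` is computed as a fixpoint of `dI`, and the simply redundant set follows the formula for $I_R(T)$:

```scala
// Toy graph: Top directly includes Middle and Bottom, Middle includes Bottom,
// so the direct inclusion of Bottom in Top is simply redundant.
val dI: Map[String, Set[String]] = Map(
  "Bottom" -> Set(),
  "Middle" -> Set("Bottom"),
  "Top"    -> Set("Middle", "Bottom")
)

// tI(T): transitive closure of dI(T), computed as a simple fixpoint.
def tI(t: String): Set[String] = {
  var result = dI(t)
  var last = Set.empty[String]
  while (result != last) {
    last = result
    result = result ++ result.flatMap(dI)
  }
  result
}

// I_R(T) = (union of tI(I) over I in dI(T)) intersected with dI(T)
def simplyRedundant(t: String): Set[String] =
  dI(t).flatMap(tI).intersect(dI(t))
```

Here `simplyRedundant("Top")` yields `Set("Bottom")`, matching the figure: that inclusion can be dropped without changing the flattened graph.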
\section{Superfluous Inclusion} \label{sec:superinc} An inclusion of a theory $A\hookrightarrow{}B$ is \underline{superfluous}, if $B$ uses none of the constants declared in $A$. Such an inclusion can be \underline{purely superfluous} (\autoref{sec:puresi}) if it can be entirely removed, or \underline{partially superfluous} if it can be reduced to a subset of the theory inclusions in $A$ (\autoref{sec:partiallysi}).\\ In both of these cases the inclusion can be replaced or removed entirely, while still yielding a well-formed theory. However this changes the resulting flattened theory graph and can invalidate theories that include the optimized theory.\\ -The superfluous inclusions in a theory T are the set $I_{S}(T) =$\\ $\{ I \in DirectInludes(T) | I \notin UsedTheories(T) \}$,\\ -where $DirectIncludes(T)$ are the theories included directly in the theory T and UsedTheories(T) are those theories whose terms are used in T. +The superfluous inclusions in a theory T are the set\\ +$I_{S}(T) := \{ I \in dI(T) | I \notin uT(T) \}$,\\ +where $dI(T)$ are the theories included directly in the theory T and $uT(T)$ are those theories whose terms are used in T. \subsection{Purely Superfluous Inclusion} \label{sec:puresi} @@ -122,8 +132,9 @@ Purely superfluous inclusions can be removed while still retaining a valid theor \label{fig:purelysuperfluousoptimized} \end{figure} -The purely superfluous inclusions in a theory T are the set $I_{PuS}(T) =$\\ $\{ I \in DirectInludes(T) | I \notin UsedTheories(T) \land Includes(I) \cap UsedTheories(T) \setminus Includes(UsedTheories(T)\setminus \{ T \}) = \emptyset \}$,\\ -where $Includes(I)$ are the transitively included theories in I, $DirectIncludes(T)$ are the theories included directly in the theory T and $UsedTheories(T)$ are those theories whose terms are used in T.
+The purely superfluous inclusions in a theory T are the set\\ +$I_{PuS}(T) := \{ I \in dI(T) | I \notin uT(T) \land tI(I) \cap uT(T) \setminus tI(uT(T)\setminus \{ T \}) = \emptyset \}$,\\ +where $tI(I)$ are the transitively included theories in I, $dI(T)$ are the theories included directly in the theory T and $uT(T)$ are those theories whose terms are used in T. \subsection{Partially Superfluous Inclusion} \label{sec:partiallysi} @@ -199,15 +210,16 @@ Partially superfluous inclusions can be optimized by identifying the used subset \label{fig:partiallysuperfluousoptimized} \end{figure} -The partially superfluous inclusions in a theory T are the set $I_{PaS}(T) =$\\ $\{ I \in DirectInludes(T) | I \notin UsedTheories(T) \land Includes(I) \cap UsedTheories(T) \setminus Includes(UsedTheories(T)\setminus \{ T \}) \neq \emptyset \}$,\\ -where $Includes(I)$ are the transitively included theories in I, $DirectIncludes(T)$ are the theories included directly in the theory T and $UsedTheories(T)$ are those theories whose terms are used in T. +The partially superfluous inclusions in a theory T are the set\\ +$I_{PaS}(T) := \{ I \in dI(T) | I \notin uT(T) \land tI(I) \cap uT(T) \setminus tI(uT(T)\setminus \{ T \}) \neq \emptyset \}$,\\ +where $tI(I)$ are the transitively included theories in I, $dI(T)$ are the theories included directly in the theory T and $uT(T)$ are those theories whose terms are used in T. -\section{Viability and dangers of candidate optimizations} +\section{Viability and Dangers of Candidate Optimizations} \label{sec:viability} The candidates discussed earlier are cases in which inclusions can be changed, not necessarily where they should be changed, as the only criterion that makes a theory an optimization candidate is that an optimization can be performed while still retaining a valid theory. 
There is nothing that guarantees that this theory is also preferable to the original or even desirable at all to have.\\ The danger of performing the proposed optimizations varies with the type of candidate and the context, but overall application without at least some user guidance is not recommended. -\subsection{Simply redundant inclusion} +\subsection{Simply Redundant Inclusion} As noted in \autoref{sec:puresi} there is very little that speaks against removing simply redundant inclusions, since the flattened graph is preserved. However there are some cases where they might do more harm than good, particularly when they overlap with superfluous inclusions, as seen in \autoref{fig:redundantoverlap}.\\ Removing the redundant edge will not change the flattened graph, but it will complicate the changes needed to remove the superfluous edge between middle and top. A simple solution to avoiding this problem is to defer the removal of simple redundancies until after other optimizations have been performed on the theory.\\ \begin{figure}[!htb] \centering @@ -234,7 +246,7 @@ As noted in \autoref{sec:puresi} there is very little that speaks against removi \caption{Example of overlap between simply redundant inclusion and superfluous inclusion} \label{fig:redundantoverlap} \end{figure} -\subsection{Purely superfluous inclusion} +\subsection{Purely Superfluous Inclusion} \label{sec:puresivia} Purely superfluous includes are a little trickier, as removing a purely superfluous inclusion can fundamentally change the underlying theory graph. @@ -268,7 +280,7 @@ Removing purely superfluous includes from a theory union will leave behind an en However theory unions are not the only cases where optimization may be misplaced. When culling seemingly superfluous inclusions of non-empty theories we fundamentally change the theory graph by cutting the content of the included theory from our candidate.
This means that unlike the simply redundant case, this optimization should be carefully considered. -If this content is required by a theory that originally included it via an inclusion of the candidate, this may even break the entire theory graph. To avoid this problem it makes sense to not only watch what the candidate theory itself uses, but to look further ahead and also consider the symbols used by theories which include the candidate. We call these theories the \underline{future lite code}. +If this content is required by a theory that originally included it via an inclusion of the candidate, this may even break the entire theory graph. To avoid this problem it makes sense to not only watch what the candidate theory itself uses, but to look further ahead and also consider the symbols used by theories which include the candidate. We call these theories the \underline{future light cone}. Another problem arises where inclusion paths branch off before eventually meeting again. Ideally optimizing superfluous inclusions should retain exactly those theory inclusions that are necessary for keeping a valid theory graph with no undeclared objects. Unfortunately this is complicated by the requirement to keep a theory's future valid. Specifically the problem arises if a future theory indirectly includes a theory over more than one path (see \autoref{fig:doubleinclusion}). @@ -306,11 +318,11 @@ As we can see the branched inclusion leaves us with two paths for top to include Either via inclusion of left or right. Therefore we could remove one of the inclusions of bottom in left or right, but not both of them. However recognizing such a case is difficult and even if one of the inclusions should be removed it is not easily deducible which one is truly superfluous.
Thus it seems prudent to err on the side of caution, ignore such a case entirely and treat it like a regular case of a use of bottom in left and right's respective futures. -\subsection{Partially superfluous inclusion} +\subsection{Partially Superfluous Inclusion} Due to the inherent similarities many of the prior considerations also apply to partially superfluous inclusions. -Let us consider the following example from elementary Algebra (\autoref{fig:elalg}). It is easy to define a monoid as a semigroup with the addition of a neutral element. The problem this poses for our optimizations is that our definition doesn't make use of associativity in any way. If all the symbols needed for defining the neutral element are delivered through a theory included by semi groups called Magma, then replacing the inclusion of semigroups with magma is a perfectly valid optimization. This is however a vastly different theory then we would expect of something that calls itself a monoid and can lead to complications if a theory including this theory expects it to contain the associativity rule, as it should. Again this problem can be somewhat alleviated by looking at the future lite code. +Let us consider the following example from elementary algebra (\autoref{fig:elalg}). It is easy to define a monoid as a semigroup with the addition of a neutral element. The problem this poses for our optimizations is that our definition doesn't make use of associativity in any way. If all the symbols needed for defining the neutral element are delivered through a theory included by SemiGrp called Magma, then replacing the inclusion of SemiGrp with Magma is a perfectly valid optimization. This is however a vastly different theory than we would expect of something that calls itself Monoid and can lead to complications if a theory including this theory expects it to contain the associativity rule, as it should. Again this problem can be somewhat alleviated by looking at the future light cone.
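The algebra example can be replayed in the same hypothetical toy model used for the other candidate sets (plain Scala `Map`s, not the MMT code). The formulas for $I_{PuS}$ and $I_{PaS}$ differ only in whether the retained set is empty, so one helper classifies both; here the inclusion of SemiGrp in Monoid comes out as partially superfluous with retained set `{Magma}`:

```scala
// Toy model of the algebra example: Monoid includes SemiGrp, which includes
// Magma; Monoid only uses symbols declared in Magma.
val dI: Map[String, Set[String]] = Map(
  "Magma"   -> Set(),
  "SemiGrp" -> Set("Magma"),
  "Monoid"  -> Set("SemiGrp")
)
val uT: Map[String, Set[String]] = Map(
  "Magma"   -> Set(),
  "SemiGrp" -> Set("Magma"),
  "Monoid"  -> Set("Magma")
)

// tI(T): transitive closure of dI(T).
def tI(t: String): Set[String] = {
  var r = dI(t)
  var last = Set.empty[String]
  while (r != last) { last = r; r = r ++ r.flatMap(dI) }
  r
}

// Retained part of inclusion I in T, per the formula:
// tI(I) ∩ uT(T) \ tI(uT(T) \ {T})
def retained(t: String, i: String): Set[String] =
  (tI(i) & uT(t)) -- (uT(t) - t).flatMap(tI)

def superfluous(t: String): Set[String] = dI(t).filterNot(uT(t))
def purelySuperfluous(t: String): Set[String] =
  superfluous(t).filter(i => retained(t, i).isEmpty)
def partiallySuperfluous(t: String): Set[String] =
  superfluous(t).filter(i => retained(t, i).nonEmpty)
```

`partiallySuperfluous("Monoid")` is `Set("SemiGrp")` and `retained("Monoid", "SemiGrp")` is `Set("Magma")`, i.e. the optimization would replace the inclusion of SemiGrp by an inclusion of Magma, which is exactly the hazard discussed above.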
\begin{figure}[!htb] \centering diff --git a/src/conclusion.tex b/src/conclusion.tex index f000cdcc072b55d3b0f34879654c2625b5f15173..67be268ba9a436a00b2ec3d3cff618123abf2757 100644 --- a/src/conclusion.tex +++ b/src/conclusion.tex @@ -1,5 +1,5 @@ \chapter{Conclusion} -\section{State of the project} +\section{State of the Project} \label{sec:state} In its current state the tool is capable of automatically detecting the optimization candidates we introduced in this paper. @@ -20,4 +20,4 @@ The most important future goal is to further automate the optimization process. Another important step is providing a proper interface for calling the optimizer, preferably in a way that would seamlessly integrate with other MMT components. Finally the runtime of the algorithm is a concern, as we discussed in \autoref{sec:runtime}. While it seems to run fine on small graphs, this could easily blow up for larger graphs. So any optimizations to the algorithm itself could be a welcome improvement. For this purpose further analysis of the actual runtime seems prudent.\\ -One potential approach would be using prior knowledge about archive dependencies to limit the scope on which the algorithm needs to operate, when only optimizing parts of the graph. \ No newline at end of file +One potential approach would be using prior knowledge about archive dependencies to limit the scope on which the algorithm needs to operate when only optimizing parts of the graph. With enough information we could even restrict the search to incremental passes for optimizations that can be run after each change.
\ No newline at end of file diff --git a/src/integration.tex b/src/integration.tex index 9b84e0dd242d80783a330896abdee7dabb709c46..0de085cc346de956301154f91bf5373b47d3c340 100644 --- a/src/integration.tex +++ b/src/integration.tex @@ -2,7 +2,7 @@ \label{chap:implementation} \section{Integration} -The intended interface for programmers accessing the graph optimizations corresponds more or less directly to the routines presented in \autoref{chap:alg}. They are implemented as methods of the class GraphOptimizationTool, which itself is a subclass of info.kwarc.mmt.api.frontend.Extension\cite{MMTapi}. +The intended interface for programmers accessing the graph optimizations corresponds more or less directly to the routines presented in \autoref{chap:alg}. They are implemented as methods of the class GraphOptimizationTool, which itself is a subclass of info.kwarc.mmt.api.frontend.Extension \cite{MMTapi}. At the moment calling these functions is the only way to access the optimization functions, but this should change at some point in the future (see \autoref{sec:futurework}). @@ -20,7 +20,7 @@ At the moment calling these functions is the only way to access the optimization def findRedundantIncludes(theoryPath : MPath, replacementmap : HashMap[MPath, HashMap[Path, HashSet[MPath]]] = HashMap[MPath, HashMap[Path, HashSet[MPath]]]()) : List[Path] \end{lstlisting} -The method findRedundantIncludes is implementing simplyRedundantIncludes i.e the second pass from \autoref{sec:alg_sri}. +The method \lstinline{findRedundantIncludes} implements simplyRedundantIncludes, i.e. the second pass from \autoref{sec:alg_sri}.
\begin{lstlisting}[language=scala] /** Finds superfluous inclusions @@ -34,7 +34,7 @@ The method findRedundantIncludes is implementing simplyRedundantIncludes i.e the def findUnusedIncludeReplacements(theoryPath : MPath, replacementmap : HashMap[MPath, HashMap[Path, HashSet[MPath]]] = HashMap[MPath, HashMap[Path, HashSet[MPath]]](), futureUse : HashMap[MPath, HashSet[MPath]] = HashMap[MPath, HashSet[MPath]]()) : HashMap[Path, HashSet[MPath]] \end{lstlisting} -The method findUnusedIncludeReplacements is implementing superfluousIncludes(theory, futureUse) i.e the first pass from \autoref{sec:alg_si}. +The method \lstinline{findUnusedIncludeReplacements} implements superfluousIncludes(theory, futureUse), i.e. the first pass from \autoref{sec:alg_si}. \begin{lstlisting}[language=scala] /** Optimizes multiple theories @@ -47,7 +47,7 @@ The method findUnusedIncludeReplacements is implementing superfluousIncludes(the def findReplacements(theories: Iterable[MPath], interactive : Boolean) : HashMap[MPath, HashMap[Path, HashSet[MPath]]] \end{lstlisting} -This method findReplacements is similar to optimize(theory, futureUse)(\autoref{sec:opttheo}) and optimizeGraph(theorygraph)(\autoref{sec:optgraph}). The difference is that the method takes a an Iterable of theories, for which to apply the graph optimization algorithm. In other words it is applying the optimizations not to the entire graph, but a given subset. +This method \lstinline{findReplacements} is similar to optimize(theory, futureUse) (\autoref{sec:opttheo}) and optimizeGraph(theorygraph) (\autoref{sec:optgraph}). The difference is that the method takes an Iterable of theories to which to apply the graph optimization algorithm. In other words it applies the optimizations not to the entire graph, but to a given subset. The boolean parameter interactive chooses whether to launch the interactive mode or not. @@ -64,7 +64,7 @@ If false is selected all suggested changes will be applied.
def findReplacements(interactive : Boolean = false) : HashMap[MPath, HashMap[Path, HashSet[MPath]]] \end{lstlisting} -This method findReplacements(theories, interactive) is implementing optimizeGraph(theorygraph)from \autoref{sec:optgraph}. De facto it is a wrapper that calls the earlier findReplacements method on an Iterable over every theory it can find. +This method \lstinline{findReplacements(interactive)} implements optimizeGraph(theorygraph) from \autoref{sec:optgraph}. De facto it is a wrapper that calls the earlier findReplacements method on an Iterable over every theory it can find. This method also comes with an optional boolean parameter to choose between interactive and non-interactive mode (defaults to false). @@ -78,11 +78,11 @@ This method also comes with an optional boolean parameter to choose between inte def toXML(map : HashMap[MPath, HashMap[Path, HashSet[MPath]]]) : String \end{lstlisting} -The method toXML is used to convert the output from the findReplacements methods into human readable xml. The specification of the xml output can be found in \autoref{sec:xml}. +The method \lstinline{toXML} is used to convert the output from the findReplacements methods into human-readable XML. The specification of the XML output can be found in \autoref{sec:xml}. \subsection{User Interface} -In its current form the user interface is limited to the dialogue window created by the findReplacement procedure. The dialogue will provide the user with a suggested change to the theory graph. +In its current form the user interface is limited to the dialogue window created by the \lstinline{findReplacements} procedure. The dialogue will provide the user with a suggested change to the theory graph.
\label{sec:ui} \begin{figure}[!htb] \centering diff --git a/src/introduction.tex b/src/introduction.tex index 8a77fbdcd4fde1b51a22e09236dae8b025824798..d8357150bc60b6699b4573d5b1533dbd049abeb9 100644 --- a/src/introduction.tex +++ b/src/introduction.tex @@ -1,17 +1,16 @@ -\chapter{Introduction} +\chapter{Motivation} -\section{Motivation} -Knowledge is complicated and so is its representation, especially when trying to store it in a computer. One approach to solve this issue is by organizing bodies of knowledge in neat graph structures. A project that seeks to model knowledge in this way is the MMT language\cite{rabe2013scalable}, which has theory graphs as its semantics. Unfortunately the graph structures which are generated represent the knowledge so well that they inherent its complexity. +Knowledge is complicated and so is its representation, especially when trying to store it in a computer. One approach to solving this issue is organizing bodies of knowledge in modular graph structures that enable flexible reuse of knowledge fragments. A project that seeks to model knowledge in this way is the MMT language \cite{rabe2013scalable}, which has theory graphs as its semantics. Unfortunately the graph structures which are generated represent the knowledge so well that they inherit its complexity. \begin{figure}[!htb] \centering \includegraphics[width=\textwidth]{tgview} -\caption{Picture of the LATIN archive\cite{latin} graph using TGView\cite{rupprecht2017flexible}} +\caption{Picture of the LATIN archive \cite{latin} graph using TGView \cite{rupprecht2017flexible}} \label{fig:tgview} \end{figure} -For this reason it should be no surprise that these theory graphs can be plagued by redundancies and entirely unnecessary dependencies. +As the sources of these graphs are written by humans who only have a local view while authoring theories, it should be no surprise that these theory graphs can be plagued by redundancies and globally unnecessary dependencies.
-To help reduce unnecessary complexity of these large theory graphs, I introduce Styler, an automated tool for optimizing theory inclusions. +To help reduce unnecessary complexity of these large theory graphs, I introduce Styler, an automated tool for optimizing theory inclusions via static analysis. The goal of this tool is to automatically detect possible optimizations that reduce the number of edges without breaking the graph and recommend them to the user. \ No newline at end of file