Since the performance will be heavily dependent on the size of the graph to be optimized, we will measure runtime and memory requirements depending on the variable t, which denotes the number of theories in the graph.
\subsection{Runtime}
\label{sec:runtime}
Exact runtime depends on the library implementations of the functions used, but the relevant Scala documentation allows us to make a few helpful assumptions \cite{scaladoccolperf}.
Since lookup, insertion and removal in a hash set take effectively constant time, the set operations $\cup$, $\cap$ and $\setminus$ can be expected to run in $O(n)$, where $n$ is the number of elements in the sets involved.
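These set operations can be illustrated with a small Scala sketch (the set contents and names are our own, purely for illustration); each operation traverses one operand and performs an effectively constant-time lookup or insertion per element, giving the linear bound:

```scala
import scala.collection.mutable

object SetOpsDemo extends App {
  // Two hash sets of theory names. union, intersect and diff each
  // traverse one operand once, with effectively constant-time
  // membership tests and insertions, hence O(n) overall.
  val a = mutable.HashSet("T1", "T2", "T3")
  val b = mutable.HashSet("T2", "T3", "T4")

  println(a union b)     // elements of either set
  println(a intersect b) // elements of both sets
  println(a diff b)      // elements of a not in b
}
```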
add include $\rightarrow$ futureUses(include)$\cup$usedTheories(include)$\cup$futureUses(theory) to futureUses\tcp*{$t$}
}
\KwRet replacements \tcp*{1}
}
}
\caption{optimizeGraph(theorygraph) - runtime}
\end{algorithm}
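The union update in the loop body above can be sketched in Scala roughly as follows (a simplified illustration: the map names mirror the pseudocode, while the concrete types and contents are our assumptions, not the actual implementation):

```scala
import scala.collection.mutable

object FutureUsesDemo extends App {
  // futureUses maps a theory to the set of theories that will still use it;
  // usedTheories maps a theory to the theories it includes.
  val futureUses   = mutable.Map("A" -> mutable.Set("X"), "B" -> mutable.Set("Y"))
  val usedTheories = mutable.Map("A" -> mutable.Set("Z"))

  val theory  = "B"
  val include = "A"

  // Uniting three sets of size up to t costs O(t), which is the
  // per-iteration cost annotated in the pseudocode above.
  futureUses(include) =
    futureUses(include) ++ usedTheories(include) ++ futureUses(theory)

  println(futureUses(include)) // now contains "X", "Z" and "Y"
}
```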
The inner loop runs up to $t$ times, with a cost of up to $t$ per iteration.\\
The outer loop runs up to $t$ times, with a cost of up to $t+O(t^2)+t^2$ per iteration.\\
This results in an overall worst-case performance of $t^2+t+t\cdot(t+O(t^2)+t^2)+3 = O(t^3)$.
Since the total number of theories can be quite large, a cubic runtime is hardly ideal. However, it should be noted that the worst case requires the average theory to include most of the other theories.
In graphs where no theory includes more than the square root of the number of theories, the runtime should remain within quadratic bounds.
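As a rough sanity check (our own back-of-the-envelope sketch, under the assumption that both the inner iteration count and the sizes of the sets being united are bounded by $\sqrt{t}$), the bound from above becomes
\[
t \cdot \Big(\sqrt{t} + O\big(\sqrt{t}\cdot\sqrt{t}\,\big)\Big) = t \cdot O(t) = O(t^2).
\]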
\subsection{Memory}
Making precise statements about memory usage is even harder, as the documentation is insufficiently specific. We will therefore assume that the memory required by a HashSet or HashMap is of the same order of magnitude as its elements.