- theory discovery
- natural extensions (partial views)
- models
- counter examples
- theorem transport (current theory is codomain)
- prioritize many definitions / theorems
- refactoring advice
- assoc / comm normalization
- term indexing techniques
\RequirePackage{tikz}
\usetikzlibrary{arrows}
\tikzset{
  arrow/.style={-angle 45},
  mono/.style={right hook-angle 45},
  epi/.style={-triangle 45},
  equiv/.style={angle 45-angle 45}}
\usepackage{wrapfig}
\usepackage{paralist}
\usepackage{xspace}
\usepackage{mytikz}
%\usepackage{appendix}
\usepackage[style=alphabetic,hyperref=auto,defernumbers=true,backend=bibtex,firstinits=true,maxbibnames=9,maxcitenames=3,isbn=false]{biblatex}
time, which limits the scalability of the manual approach.
To remedy this problem, we have developed a view-finder algorithm that automates theory
morphism discovery. In this paper we present an implementation in the MMT system and
show specific use cases. We focus on an application of \emph{theory discovery}, where a user can
check whether a (part of a) formal theory already exists in some library, potentially
avoiding duplication of work or suggesting an opportunity for refactoring.
\section{Finding Theory Morphisms}\label{sec:viewfinder}
\input{viewfinder}
\section{Extended Use Case}\label{sec:usecase}
\input{usecase}
\section{Unrealized Applications}
\input{applications}
\section{Conclusion}\label{sec:concl}
\paragraph{Future Work} \ednote{term indexing}
\paragraph{Acknowledgements}
The authors gratefully acknowledge financial support from the OpenDreamKit Horizon 2020
\subsection{Proof Assistant Libraries in MMT}\label{sec:oaf}
As part of the OAF project \ednote{cite}, we have imported several proof assistant libraries into the MMT system. To motivate some of the design choices made in this paper, we will outline the general procedure behind these imports.
First, we formalize the core logical foundation of the system, using the logical framework LF\ednote{cite} (at its core a dependently typed lambda calculus) and various extensions thereof, which are implemented in and supported by the MMT system. In LF, we can formalize the foundational primitives using the usual judgments-as-types and higher-order abstract syntax encodings: theorems and axioms are represented as constants with a type $\vdash P$ for some proposition $P$, and primitive binding constructs such as lambdas are formalized as LF-functions taking LF-lambda-expressions -- which serve as a general encoding of arbitrary variable binders -- as arguments.
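To illustrate, a fragment of such an LF encoding of a higher-order logic might look as follows (the names are purely illustrative and do not reflect the actual OAF signatures):
\[
\begin{array}{lll}
\cn{prop} &: \cn{type} & \\
\cn{ded} &: \cn{prop} \to \cn{type} & \text{judgments-as-types: an axiom }\vdash P\text{ is a constant of type }\cn{ded}\;P\\
\cn{imp} &: \cn{prop} \to \cn{prop} \to \cn{prop} & \\
\cn{forall} &: (\cn{term} \to \cn{prop}) \to \cn{prop} & \text{HOAS: the binder takes an LF-function as argument}
\end{array}
\]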
The resulting formalizations are then used as meta-theory for imports of the libraries of the system under consideration. This results in a theory graph as in Figure \ref{fig:oaf}.
\begin{figure}
\begin{tikzpicture}
\node (MMT) at (2,2.5) {MMT};
\draw[fill=yellow!60] (2,1) ellipse (1.5cm and .6cm);
\node[color=yellow] at (-3.3,1) {Logical Frameworks};
\node (L) at (1,1) {LF};
\node (Lx) at (3,1) {LF+X};
\draw[arrow](MMT) -- (L);
\draw[arrow](MMT) -- (Lx);
\draw[arrow](L) -- (Lx);
\draw[fill=red!60] (2,-.5) ellipse (3.2cm and .6cm);
\node[color=red] at (-3.3,-.5) {Foundations};
\node at (2,-.7) {\ldots};
\draw[fill=blue!60] (0,-2.25) ellipse (1.9cm and .8cm);
\node (H) at (0,-.5) {HOL Light};
\node[color=blue!80] at (-3.5,-2) {HOL Light library};
\node (B) at (-1,-2) {Bool};
\node (A) at (1,-2) {Arith};
\node (E) at (0,-2.5) {\ldots};
\draw[arrow](L) -- (H);
\draw[arrow](H) -- (B);
\draw[arrow](H) -- (A);
\draw[arrow](B) -- (A);
\draw[fill=olive] (4,-2.25) ellipse (1.9cm and .8cm);
\node (M) at (4,-.5) {PVS};
\node[color=olive] at (-3.3,-2.5) {PVS library};
\node (B') at (3,-2) {Booleans};
\node (A') at (5,-2) {Reals};
\node (E') at (4,-2.5) {\ldots};
\draw[arrow](Lx) -- (M);
\draw[arrow](M) -- (B');
\draw[arrow](M) -- (A');
\draw[arrow](B') -- (A');
\end{tikzpicture}
\caption{A (Simplified) Theory Graph for the OAF Project}\label{fig:oaf}
\end{figure}
Similarly, certain idiosyncrasies -- such as PVS's pervasive use of theory parameters -- call for matching not just symbol references, but also variables or possibly even complex expressions. To handle these situations, we additionally allow for ``holes'' in the lists $\cn{syms}(C)$, which may be unified with any other symbol or hole, but are not recursed into. The subterms that are to be considered holes can be marked as such during preprocessing.
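A hole-aware comparison of two symbol lists can be sketched as follows. This is a deliberately simplified model (symbol lists as plain Python lists, a string marker for holes); it is not the actual MMT implementation:

```python
HOLE = "?"  # marker for subterms that were replaced by holes during preprocessing

def match_syms(syms1, syms2):
    """Check whether two symbol lists are compatible:
    a hole unifies with any symbol (or another hole); all other
    positions must agree symbol by symbol."""
    if len(syms1) != len(syms2):
        return False
    return all(a == HOLE or b == HOLE or a == b
               for a, b in zip(syms1, syms2))

# Example: a PVS theory parameter became a hole during preprocessing,
# so it matches any concrete instantiation.
print(match_syms(["plus", "?", "zero"], ["plus", "real", "zero"]))  # True
print(match_syms(["plus", "nat", "zero"], ["plus", "real", "zero"]))  # False
```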
\paragraph{Storing Encodings} Finally, for computing the encodings of a theory we only need to know the symbols to be fixed in the component $\cn{h}(C)$ of an encoding, for which only the meta-theories of the theories are relevant. These also determine the specific preprocessings and translations we are likely to use. Since only a small number of meta-theories is relevant in practice, we can store and retrieve the encodings for the most important situations. As computing the encodings (as well as sorting the theories in a library by their dependencies) is the most expensive part of the algorithm, this -- once computed and stored -- makes the viewfinding process itself rather efficient.
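The caching strategy amounts to memoizing the expensive encoding computation per meta-theory. A minimal sketch, with hypothetical names (`encodings_for`, `compute`) standing in for the actual MMT interfaces:

```python
# Hypothetical cache: encodings are stored per meta-theory, since the fixed
# symbols h(C) (and the applicable preprocessings) depend only on it.
_encoding_cache = {}

def encodings_for(meta_theory, theories, compute):
    """Return the stored encodings for a meta-theory, computing them
    only on the first request. `compute` is the (expensive) encoding
    function; `theories` is the library to encode."""
    if meta_theory not in _encoding_cache:
        _encoding_cache[meta_theory] = compute(theories)
    return _encoding_cache[meta_theory]
```

Subsequent viewfinder runs over the same library then skip the encoding phase entirely.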
\subsection{Preprocessing and Normalization}
The common logical framework used for all the libraries at our disposal makes it easy to systematically normalize theories built on various logical foundations. We currently use the following approaches to preprocessing theories:
\begin{itemize}
\item Free variables in a term -- often occurrences of theory parameters, which are e.g. used extensively in the PVS system -- are replaced by holes.
\item For foundations that use product types, we curry function types $(A_1 \times\ldots\times A_n)\to B$ to $A_1 \to \ldots \to A_n\to B$. We treat lambda-expressions and applications accordingly.
\item Higher-order abstract syntax encodings are eliminated by raising atomic types, function types, applications and lambdas to the level of the logical framework. This eliminates (redundant) implicit arguments that only occur due to their formalization in the logical framework.
This has the advantage that possible differences between the types of the relevant subterms and implicit type arguments (e.g. in the presence of subtyping) do not negatively affect viewfinding.
\item We use the Curry-Howard correspondence to transform axioms and theorems of the form $\vdash (P\Rightarrow Q)$ into function types $\vdash P \to \vdash Q$. Analogously, we transform judgments of the form $\vdash \forall x : A.\;P$ to $\prod_{x:A}\vdash P$.
\item For classical logics, we afterwards rewrite all logical connectives in terms of negation and conjunction only, using their usual definitions. Double negations are eliminated.
\item Typed equalities are transformed to untyped ones, again getting rid of the redundant type argument of the equality.
\item The arguments of conjunctions and equalities are reordered (currently only by their number of subterms).
\end{itemize}
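The propositional steps above (rewriting connectives into negation and conjunction, eliminating double negations, and reordering conjunction arguments by subterm count) can be sketched on a toy term representation. Formulas are modeled as nested tuples; this is an illustrative simplification, not the MMT normalizer:

```python
def size(t):
    """Number of subterms of a formula given as nested tuples."""
    return 1 + sum(size(a) for a in t[1:]) if isinstance(t, tuple) else 1

def normalize(t):
    """Rewrite `or`/`imp` into `not`/`and`, eliminate double negations,
    and order the arguments of `and` by their number of subterms."""
    if not isinstance(t, tuple):
        return t
    head, args = t[0], [normalize(a) for a in t[1:]]
    if head == "or":   # a or b   ~>  not(not a and not b)
        a, b = args
        return normalize(("not", ("and", ("not", a), ("not", b))))
    if head == "imp":  # a -> b   ~>  not(a and not b)
        a, b = args
        return normalize(("not", ("and", a, ("not", b))))
    if head == "not" and isinstance(args[0], tuple) and args[0][0] == "not":
        return args[0][1]  # double negation elimination
    if head == "and":
        args = sorted(args, key=size)  # smaller subterms first
    return (head, *args)

# Smaller conjuncts are moved to the front:
print(normalize(("and", ("and", "p", "q"), "r")))  # ('and', 'r', ('and', 'p', 'q'))
```

The point of the reordering is that syntactically equal-up-to-commutativity terms receive identical encodings, so they hash to the same value during viewfinding.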