Commit 9f8d6a66 authored by Michael Kohlhase

overview and spellcheck

parent c32eaffa
......@@ -43,4 +43,5 @@ Note that even though the algorithm is in principle symmetric, some aspects ofte
%%% End:
% LocalWords: emph specialized generalization transfomations normalization prioritize
% LocalWords: sec:usecase
% LocalWords: sec:usecase newpart sec:pvs mathhub RupKohMue:fitgv17 textbf Countermodel
% LocalWords: sec:algparams sec:normalizeintra sec:normalizeinter
......@@ -89,7 +89,10 @@ occurrences in the term, into which the algorithm recurses.
Secondly, we apply this view finder in two concrete case studies: in the first, we start with an abstract theory and check whether it already exists in the same library. In the second, we write down a simple theory of commutative operators in one language in order to find all commutative operators in another library based on a different language.
\paragraph{Overview}
In Section~\ref{sec:prelim}, we review the basics of MMT and, as examples, the representations of the PVS and HOL Light libraries.
Section~\ref{sec:viewfinder} presents the view finding algorithm restricted to the intra-library case and showcases it for the theory classification use case.
In Section~\ref{sec:across}, we extend the algorithm to inter-library view finding and discuss the results of applying it to the HOL Light and PVS libraries.
Section~\ref{sec:appl} discusses additional applications and Section~\ref{sec:concl} concludes the paper.
%%% Local Variables:
%%% mode: latex
......@@ -97,7 +100,7 @@ In Section~\ref{sec:prelim}, we revise the basics of MMT and the representations
%%% TeX-master: "paper"
%%% End:
% LocalWords: defemph compactenum textsf formalization vivews generalization centering
% LocalWords: defemph compactenum textsf formalization vivews generalization centering hol_isahol_matching sec:appl sec:concl
% LocalWords: compactitem lstset aboveskip 0pt,belowskip hline lstlisting mathescape
% LocalWords: rightarrow vdash emptyset baseset NorKoh:efnrsmk07 formalized omdoc emph
% LocalWords: standardized emph thibault-paper ednote
......@@ -102,7 +102,7 @@
\input{usecase}
\section{Future Work}
\section{Future Work}\label{sec:appl}
\input{applications}
\section{Conclusion}\label{sec:concl}
......@@ -125,4 +125,4 @@
% LocalWords: tableofcontents defemph compactenum emptyset newpart compactitem lstset
% LocalWords: centering aboveskip 0pt,belowskip hline lstlisting mathescape rightarrow
% LocalWords: vdash baseset sec:concl subsubsection Formalizations emph sec:usecase
% LocalWords: usecase Unrealized
% LocalWords: usecase Unrealized sec:appl
......@@ -200,6 +200,7 @@ The resulting formalizations are then used as meta-theory for imports of the lib
%%% End:
% LocalWords: fig:mmt textbf textbf mapsto vdash_ emph hline ctabular ldots,x_m:t_m vec
% LocalWords: ldots,a_n compactitem cdot vec ednote ex:lamsyn urtheory mathcal f,t
% LocalWords: ldots,a_n compactitem cdot vec ednote ex:lamsyn urtheory mathcal f,t y,x
% LocalWords: x:A;t oldpart fig:mmtgrammar centering vspace mdframed multirow ldots,x_n
% LocalWords: OAFproject:on vdash formalized formalizations tikzpicture ldots Arith
% LocalWords: OAFproject:on vdash formalized formalizations tikzpicture ldots Arith s,t
% LocalWords: newpart forall subseteq Rightarrow forall Rightarrow subseteq t,s lf
......@@ -3,13 +3,13 @@
In addition to the above optimizations in the algorithm, we can normalize the theories themselves in various ways to increase the number and improve the quality of the morphisms found. To accommodate these normalizations, we add two additional features to the algorithm:
\begin{enumerate}
\item Some normalizations -- such as elaborating definitions -- may also reduce the number of results: for example, if both theories use similar abbreviations for complex terms, or if the same concept is declared axiomatically in one theory but definitionally in the other, the unexpanded terms match while the expanded ones may not. For that reason, we can allow \textbf{several abstract syntax trees for the same constant}, such as one with definitions expanded and one ``as is'' (see the sketch below).
\item Similarly, certain idiosyncracies (such as PVS's common usage of theory parameters, see Section \ref{sec:normalizeinter}) call for not just matching symbol references, but also variables or possibly even complex expressions. To handle these situations, we additionally allow for \textbf{holes} in the constant lists of an abstract syntax tree, which may be unified with any other symbol or hole by the core algorithm, but are not recursed into and do not count as a requirement. The subterms that are to be considered holes can be marked as such during preprocessing or normalizing.
\item Similarly, certain idiosyncrasies (such as PVS's common usage of theory parameters, see Section \ref{sec:normalizeinter}) call for not just matching symbol references, but also variables or possibly even complex expressions. To handle these situations, we additionally allow for \textbf{holes} in the constant lists of an abstract syntax tree, which may be unified with any other symbol or hole by the core algorithm, but are not recursed into and do not count as a requirement. The subterms that are to be considered holes can be marked as such during preprocessing or normalizing.
\end{enumerate}
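To illustrate these two features, the following Scala sketch shows one possible representation of such encodings; all names and datatypes are hypothetical, chosen for illustration rather than taken from the actual implementation:
\begin{lstlisting}
// Sketch of an encoding with holes (all names hypothetical).
sealed trait SymRef
case class Sym(name: String) extends SymRef // a fixed symbol occurrence
case object Hole extends SymRef             // unifies with any symbol or hole;
                                            // not recursed into, no requirement

// an abstract syntax tree: a shape plus its list of symbol occurrences
case class AbsSynTree(shape: String, symbols: List[SymRef])

// several trees per constant, e.g. one "as is", one with definitions expanded
case class ConstEncoding(name: String, trees: List[AbsSynTree])
\end{lstlisting}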
The common logical framework used for all the libraries at our disposal -- namely LF and extensions thereof -- makes it easy to systematically normalize theories built on various logical foundations. On the basis of the above features, we currently use the following approaches to normalizing theories:
\begin{itemize}
\item Free variables in a term are replaced by holes.
\item Since judgments -- in our imports -- correspond to constants of type $\vdash P$ for some proposition $P$, we can use the curry-howard correspondence to equate the type $\vdash (P \Rightarrow Q)$ with the function type $(\vdash P) \to (\vdash Q)$, as well as the judgment $\vdash \forall x : A.\;P$ with the function type $(x:A) \to \vdash P$.
\item Since judgments -- in our imports -- correspond to constants of type $\vdash P$ for some proposition $P$, we can use the Curry/Howard correspondence to equate the type $\vdash (P \Rightarrow Q)$ with the function type $(\vdash P) \to (\vdash Q)$, as well as the judgment $\vdash \forall x : A.\;P$ with the function type $(x:A) \to \vdash P$.
Since different systems prefer different styles of formalization, we replace each occurrence of the former by the latter (see the sketch after this list).
\item For classical logics, we rewrite all remaining logical connectives using their usual definitions on the basis of negation and conjunction only. Double negations are eliminated.
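The Curry/Howard normalization mentioned above can be realized as a bottom-up rewrite. The following Scala sketch uses a simplified term datatype of our own; it is an illustration under these assumptions, not the actual normalization code:
\begin{lstlisting}
// Simplified term language (hypothetical, for illustration).
sealed trait Term
case class Ref(name: String) extends Term                       // symbol reference
case class Judgment(p: Term) extends Term                       // |- P
case class Implies(p: Term, q: Term) extends Term               // P => Q
case class Forall(x: String, tp: Term, body: Term) extends Term // forall x:A. P
case class Arrow(from: Term, to: Term) extends Term             // simple function type
case class Pi(x: String, tp: Term, body: Term) extends Term     // dependent function type

// Rewrite |-(P => Q) to (|-P) -> (|-Q) and |-(forall x:A. P) to (x:A) -> |-P,
// recursing under the rewritten judgments so nested connectives are handled too.
def curryHoward(t: Term): Term = t match {
  case Judgment(Implies(p, q))   => Arrow(curryHoward(Judgment(p)), curryHoward(Judgment(q)))
  case Judgment(Forall(x, a, p)) => Pi(x, curryHoward(a), curryHoward(Judgment(p)))
  case Judgment(p)               => Judgment(curryHoward(p))
  case Implies(p, q)             => Implies(curryHoward(p), curryHoward(q))
  case Forall(x, a, p)           => Forall(x, curryHoward(a), curryHoward(p))
  case Arrow(a, b)               => Arrow(curryHoward(a), curryHoward(b))
  case Pi(x, a, b)               => Pi(x, curryHoward(a), curryHoward(b))
  case r: Ref                    => r
}
\end{lstlisting}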
......@@ -34,7 +34,7 @@ is based on the (basic higher-order logic) foundation of the Math-in-the-Middle
Right-clicking anywhere within the theory allows Jane to select \cn{MMT} $\to$ \cn{Find\ Views\ to...} $\to$ \cn{MitM/smglom} (the main Math-in-the-Middle library), telling her (within less than one second) that two morphisms have been found, the most promising of which points to the theory
\cn{matroid\_theory} (see Figure \ref{fig:use:target}) in the library.
Note that the latter makes use of predefined propositions in its axioms and uses a type \cn{coll} for the collection of sets, while the former has the statements of the axioms directly in the theory and uses a predicate \cn{beautiful}. Additionally, the implication that beautiful sets (or sets in a matroid) are finite is stated as a logical formula in the former, while the latter uses the curry-howard correspondence.
Note that the latter makes use of predefined propositions in its axioms and uses a type \cn{coll} for the collection of sets, while the former has the statements of the axioms directly in the theory and uses a predicate \cn{beautiful}. Additionally, the implication that beautiful sets (or sets in a matroid) are finite is stated as a logical formula in the former, while the latter uses the Curry/Howard correspondence.
\begin{figure}[ht]\centering
\fbox{\includegraphics[width=0.6\textwidth]{matroids}}
......@@ -45,7 +45,7 @@ Note that the latter makes use of predefined propositions in its axioms and uses
We have so far assumed one fixed meta-theory for all theories involved; we will now discuss the situation when looking for morphisms between theories in different libraries (and built on different foundations).
Obviously, various differences in available foundational primitives and library-specific best practices and idiosyncracies can prevent the algorithm from finding desired matches. There are two approaches to increasing the number of results in these cases:
Obviously, various differences in available foundational primitives and library-specific best practices and idiosyncrasies can prevent the algorithm from finding desired matches. There are two approaches to increasing the number of results in these cases:
\begin{itemize}
\item In many instances, the translation between two foundations is too complex to be discovered purely syntactically. In these cases we can provide arbitrary meta-morphisms between theories -- especially (but not exclusively) on the meta-theories -- which are applied before computing the abstract syntax trees (e.g. alignment-based translations as in \cite{MRLR:alignments:17})
\item We can apply additional foundation-specific normalizations before preprocessing theories, such as eliminating higher-order abstract syntax encodings or implicit arguments, or elaborating specific abbreviations/definitions.
......@@ -53,14 +53,14 @@ Obviously, various differences in available foundational primitives and library-
\begin{newpart}{DM}
The normalizations mentioned in Section \ref{sec:normalizeintra} already suggest equating the involved logical primitives (such as logical connectives) via a meta-morphism.
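In the simplest case, such a meta-morphism degenerates to an alignment of symbol names that is applied to all symbol occurrences before the encodings are computed. A minimal Scala sketch, with invented example alignments:
\begin{lstlisting}
object Alignment {
  // a meta-morphism reduced to a symbol-to-symbol alignment (sketch only)
  type MetaMorphism = Map[String, String]

  // apply the alignment to the flat symbol list of an encoding
  def align(symbols: List[String], mm: MetaMorphism): List[String] =
    symbols.map(s => mm.getOrElse(s, s))

  // invented example: equating logical primitives of two meta-theories
  val logicAlignment: MetaMorphism = Map(
    "hollight/AND" -> "pvs/AND",
    "hollight/IMP" -> "pvs/IMPLIES")
}
\end{lstlisting}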
Foundation-specific normalizations specifically for finding morphisms \emph{across} libraries is to our knowledge an as-of-yet unexplored field of investigation. Every formal system has certain unique idiosyncracies, best practices or widely used features; finding an ideal normalization method is a correspondingly difficult domain-specific problem.
Foundation-specific normalizations specifically for finding morphisms \emph{across} libraries are, to our knowledge, an as yet unexplored field of investigation. Every formal system has certain unique idiosyncrasies, best practices or widely used features; finding an ideal normalization method is a correspondingly difficult domain-specific problem.
We will discuss some of our findings specifically regarding the PVS library as a case study.
\subsection{Normalization in PVS}\label{sec:normalizeinter}
PVS~\cite{pvs} is a proof assistant under active development, based on a higher-order logic with predicate subtyping and various convenience features such as record types, update expressions and inductive datatypes. In addition to the \emph{Prelude} library, which contains the most common domains of mathematical discourse and is shipped with PVS itself, there is a large library of formal mathematics developed and maintained by NASA~\cite{PVSlibraries:url}.
\paragraph{} While features like subtyping and records are interesting challenges, we will concentrate on one specific idiosyncracy in PVS -- its prevalent use of \emph{theory parameters}.
\paragraph{} While features like subtyping and records are interesting challenges, we will concentrate on one specific idiosyncrasy in PVS -- its prevalent use of \emph{theory parameters}.
In practice, theory parameters are often used in PVS for the signature of an abstract theory. For example, the theory of groups \cn{group\_def} in the NASA library has three theory parameters $(\cn T,\ast,\cn{one})$ for the signature, and includes the theory \cn{monoid\_def} with the same parameters; the axioms for a group are then formalized as a predicate on the theory parameters.
......@@ -68,7 +68,7 @@ Given that the same practice is used in few other systems (if any), searching fo
\begin{enumerate}
\item \emph{Simple treatment:} We can interpret references to theory parameters as free variables and turn them into holes. Includes of parametric theories with arguments are turned into simple includes.
\item \emph{Covariant treatment:} We introduce new constants for the theory parameters and replace occurences of the parameters by constant references. Includes with parameters are again replaced by normal includes.
\item \emph{Covariant treatment:} We introduce new constants for the theory parameters and replace occurrences of the parameters by constant references. Includes with parameters are again replaced by normal includes.
In the above mentioned theory \cn{group\_def}, we would hence add three new constants \cn T, $\ast$ and \cn{one} with their corresponding types.
......@@ -76,7 +76,7 @@ Given that the same practice is used in few other systems (if any), searching fo
In the above mentioned theory \cn{group\_def}, we would change e.g. the unary predicate \cn{inverse\_exists?} with type $T\to\cn{bool}$ to a function with type $(T : \cn{pvstype})\to(\ast : T \to T \to T)\to (\cn{one}:T)\to(T\to\cn{bool})$.
An include of $\cn{group\_def}(S,\circ,e)$ in some other theory, e.g. \cn{group}, would be replaced by a simple include, but occurences of \cn{inverse\_exists?} in \cn{group} would be replaced by $\oma{\cn{inverse\_exists?}}{S,\circ,e}$.
An include of $\cn{group\_def}(S,\circ,e)$ in some other theory, e.g. \cn{group}, would be replaced by a simple include, but occurrences of \cn{inverse\_exists?} in \cn{group} would be replaced by $\oma{\cn{inverse\_exists?}}{S,\circ,e}$.
\end{enumerate}
The first approach is the most straightforward, but will lead to many false positives and false negatives.
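The covariant treatment can be sketched concretely; the datatypes below are hypothetical simplifications (not PVS's or MMT's actual data structures):
\begin{lstlisting}
// Hypothetical, simplified datatypes for parametric theories.
case class Param(name: String, tp: String)
case class Const(name: String, tp: String)
case class Theory(name: String,
                  params: List[Param],
                  includes: List[(String, List[String])], // (included theory, arguments)
                  consts: List[Const])

// Covariant treatment: theory parameters become ordinary constants,
// parametric includes become plain includes.
def covariant(t: Theory): Theory = Theory(
  t.name,
  Nil,                                        // parameters are dropped ...
  t.includes.map { case (n, _) => (n, Nil) }, // ... includes lose their arguments ...
  t.params.map(p => Const(p.name, p.tp)) ++ t.consts) // ... parameters become constants

// group_def(T,*,one): the three parameters turn into constants T, * and one
val groupDef = Theory("group_def",
  List(Param("T", "pvstype"), Param("*", "T -> T -> T"), Param("one", "T")),
  List(("monoid_def", List("T", "*", "one"))),
  Nil)
val normalized = covariant(groupDef)
\end{lstlisting}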
......@@ -115,7 +115,7 @@ We have tried the first two approaches regarding theory parameters -- i.e. the s
\caption{Results of Inter- and Intra-Library View Finding in the PVS NASA Library}\label{fig:pvsresults}
\end{figure}
Most of the results in the simple MitM$\to$NASA case are artifacts of the theory parameter treatments -- in fact only two of the 17 results are meaningful (to operations on sets and the theory of numberfields). In the covariant case, the more requirements of each simple morphism lead to fuller (one total) and less spurious morphisms.
Most of the results in the simple MitM$\to$NASA case are artefacts of the theory parameter treatments -- in fact only two of the 17 results are meaningful (relating to operations on sets and the theory of number fields). In the covariant case, the larger number of requirements per simple morphism leads to fuller (one total) and fewer spurious morphisms.
With a theory from the NASA library as domain, the results are already too many to be properly evaluated by hand.
......@@ -135,3 +135,9 @@ This example also hints at a way to iteratively improve the results of the view
%%% End:
% LocalWords: ednote centering fbox includegraphics textwidth beautysource smglom pvs
% LocalWords: newpart Normalization sec:normalizeintra optimizations normalizations a,b
% LocalWords: textbf sec:normalizeinter subterms normalizing vdash vdash Rightarrow
% LocalWords: vdash vdash vdash forall vdash formalization mathbb sec:usecase coll emph
% LocalWords: subtyping PVSlibraries:url monoid formalized pvstype circ,e circ,e ldots
% LocalWords: formalizations ldots langle psvtype pvsterm pvsterm pvsterm mmt-term
% LocalWords: pvsapply Monoids Monoids summarized fig:pvsresults hline hline
......@@ -180,7 +180,7 @@ The symbols will hence be encoded in the component $t$ instead of the list $s$.
\paragraph{Picking starting theories:} If we try to find morphisms between whole collections of theories, we can obviously disregard all theories that are already included in some other theory in our collections, since we work with a normalized (and dependency-closed) version of a theory. Consequently, by only using maximal theories we do not find any fewer morphisms but speed up the search significantly.
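A minimal sketch of this filter, assuming the include relation is already dependency-closed (names hypothetical):
\begin{lstlisting}
// Keep only maximal theories: those not included in any other theory.
// `includes` maps each theory to the dependency-closed set of theories it includes.
def maximalTheories(includes: Map[String, Set[String]]): Set[String] =
  includes.keySet -- includes.values.flatten.toSet

// invented example: only "group" is maximal here
val lib = Map(
  "group"  -> Set("monoid", "magma"),
  "monoid" -> Set("magma"),
  "magma"  -> Set.empty[String])
// maximalTheories(lib) == Set("group")
\end{lstlisting}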
\paragraph{Storing Encodings} Finally, for computing the encodings of a theory we only need to know the symbols to be fixed in the component $t$ of an abstract syntax tree, for which only the meta-theories of the theories are relevant. They also determine the specific preprocessings and translations we want to likely use. Since there is only a small number of meta-theories incolved that are relevant in practice, we can store and retrieve the encodings for the most important situations. Since computing the encodings (as well as sorting the theories in a library by their dependencies) is the most expensive part of the algorithm, this -- once computed and stored -- makes the view finding process itself rather efficent.
\paragraph{Storing Encodings} Finally, for computing the encodings of a theory we only need to know the symbols to be fixed in the component $t$ of an abstract syntax tree, for which only the meta-theories of the theories are relevant. They also determine the specific preprocessings and translations we are likely to use. Since only a small number of meta-theories is relevant in practice, we can store and retrieve the encodings for the most important situations. Since computing the encodings (as well as sorting the theories in a library by their dependencies) is the most expensive part of the algorithm, this -- once computed and stored -- makes the view finding process itself rather efficient.
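One way to realize this is a simple cache keyed by the pair of meta-theory and theory; the sketch below assumes encodings are represented as lists of fixed symbol names (hypothetical, for illustration):
\begin{lstlisting}
import scala.collection.mutable

// Cache encodings per (meta-theory, theory) pair: the symbols to be fixed --
// and hence the encoding -- depend only on the meta-theory (sketch only).
object EncodingCache {
  private val cache = mutable.Map.empty[(String, String), List[String]]

  // `compute` is only evaluated on a cache miss
  def encodingFor(meta: String, theory: String)(compute: => List[String]): List[String] =
    cache.getOrElseUpdate((meta, theory), compute)
}
\end{lstlisting}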
%%% Local Variables:
%%% mode: latex
......@@ -189,6 +189,8 @@ The symbols will hence be encoded in the component $t$ instead of the list $s$.
%%% End:
% LocalWords: defemph defemph emph vdash_ normalization optimized compactenum textbf th
% LocalWords: t,s ednote renormalize oldpart syms mapsto mapsto ldots,l compactitem
% LocalWords: t,s ednote renormalize oldpart syms mapsto mapsto ldots,l compactitem y,x
% LocalWords: normalizing subtyping subterms preprocessings ldots ldots formalization
% LocalWords: vdash Rightarrow vdash vdash vdash forall vdash
% LocalWords: vdash Rightarrow vdash vdash vdash forall vdash sec:preproc subseteq c,c
% LocalWords: subseteq ex:beautifinite powerset powerset emptyset emptyset normalized
% LocalWords: Optimizations sec:algparams axiomatizations