\begin{newpart}{DM}
We have seen how a viewfinder can be used for theory \emph{discovery} and for finding constants with specific desired properties, but many other potential use cases are imaginable. The main problems to solve for these lie less in the algorithm or software design and more in the user interfaces.
The theory discovery use case described in Sec. \ref{sec:usecase} is mostly desirable in a setting where a user is actively writing or editing a theory, so the integration in jEdit is sensible. However, the across-library use case in Sec. \ref{sec:pvs} would already be a lot more useful in a theory exploration setting, such as when browsing available archives on MathHub \ednote{cite} or in the graph viewer integrated in \mmt \ednote{cite}. Additional specialized user interfaces would enable or improve the following use cases:
\end{newpart}
\begin{itemize}
\item \textbf{Model-/Countermodel Finding:} If the codomain of a view is a theory representing a specific model, it would tell her that those
are \emph{examples} of her abstract theory.
Furthermore, partial views -- especially those that are total on some included theory -- could
be insightful \emph{counterexamples}.
\item \textbf{Library Refactoring:} Given that the Viewfinder looks for \emph{partial} views, we can use it to find natural
extensions of a starting theory. Imagine Jane removing the last of her axioms for ``beautiful sets'' --
the other axioms (disregarding finitude of her sets) would allow her to find e.g. both Matroids and
\emph{Ideals}, suggesting that she refactor her library accordingly.
Additionally, a partial view could indicate that her theory can
be refactored as an extension of the codomain, which would allow her to use all theorems and definitions
therein.
\item \textbf{Theory Generalization:} If we additionally consider views into and out of the theories found, this can make theory discovery even
more attractive. For example, a view from a theory of vector spaces into matroids could additionally inform Jane
that her beautiful sets, being matroids, form a generalization of the notion of linear independence in linear algebra.
\item \textbf{Folklore-based Conjecturing:} If we were to keep track of our transformations during preprocessing and normalization, we could use the found
views for translating both into the codomain as well as back from there into our starting theory.
This would allow for e.g. discovering and importing theorems and useful definitions from some other library.
A useful interface might specifically prioritize views into theories on top of which there are many
theorems and definitions that have been discovered.
\end{itemize}
For some of these use cases it would be advantageous to look for views \emph{into} our working theory instead.
Note that even though the algorithm is in principle symmetric, some aspects often depend on the direction -- e.g. how we preprocess the theories, which constants we use as starting points, and how we treat and evaluate the resulting (partial) views (see Sections \ref{sec:algparams} and \ref{sec:preproc}).
\newcommand{\tb}{\hspace*{.5cm}}
\newcommand{\tbiff}{\tb\miff\tb}
\newcommand{\tbimpl}{\tb\impl\tb}
\newcommand{\mpag}[2]{\begin{minipage}{#1}#2\end{minipage}}
\newcommand{\mpage}[1]{\begin{minipage}{\textwidth}#1\end{minipage}}
% shortcut for centered tabular
\newenvironment{ctabular}[1]{\begin{center}\begin{tabular}{#1}}{\end{tabular}\end{center}}
\newenvironment{tabularfigure}[3]
{\def\@mycaption{#2}\def\@mylabel{#3}\begin{figure}[htb]\centering\begin{tabular}{#1}}
{\end{tabular}\caption{\@mycaption}\label{\@mylabel}\end{figure}}
% wrap this around multiple calls to \footnotetext if there were multiple \footnotemark's to get the right numbering
\newenvironment{multfootnotetext}[1]{\addtocounter{footnote}{-#1}\let\basics@footnotetext=\footnotetext\renewcommand{\footnotetext}[1]{\stepcounter{footnote}\basics@footnotetext{##1}}}{}
%change figure/table placement
% General parameters, for ALL pages:
%\renewcommand{\topfraction}{0.9} % max fraction of floats at top
%\renewcommand{\bottomfraction}{0.8} % max fraction of floats at bottom
%Parameters for TEXT pages (not float pages):
\setcounter{topnumber}{2}
%\setcounter{bottomnumber}{2}
%\setcounter{totalnumber}{4} % 2 may work better
%\setcounter{dbltopnumber}{2} % for 2-column pages
%\renewcommand{\dbltopfraction}{0.9} % fit big float above 2-col. text
%\renewcommand{\textfraction}{0.07} % allow minimal text w. figs
%Parameters for FLOAT pages (not text pages):
%\renewcommand{\floatpagefraction}{0.7} % N.B.: floatpagefraction MUST be less than topfraction !!
%\renewcommand{\dblfloatpagefraction}{0.7}
% remember to use [htp] or [htpb] for placement
% ifnonempty{a}{b} = if (a == empty) empty else b
\newcommand{\ifnonempty}[3][]{\def\@empty{}\def\@test{#2}\ifx\@test\@empty#1\else#3\fi}
% fold{a}{b1,...,bn} = b1 a ... a bn
\newcommand{\fold}[2]{\def\@tmpop{\relax}\@for\@I:=#2\do{\@tmpop\@I\def\@tmpop{#1}}}
% rep{n}{a} = a ... a (n times)
\newcounter{loopcount}
\newcommand{\ntimes}[2]{\setcounter{loopcount}{#1}\loop\ifnum\theloopcount>0#2\addtocounter{loopcount}{-1}\repeat}
\renewcommand{\epsilon}{\varepsilon}
\renewcommand{\phi}{\varphi}
\renewcommand{\theta}{\vartheta}
\newcommand{\N}{\mathbb{N}}
\newcommand{\n}{\mathbb{N}^*}
\newcommand{\R}{\mathbb{R}}
\newcommand{\Q}{\mathbb{Q}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\C}{\mathbb{C}}
\newcommand{\B}{\mathbb{B}}
\newcommand{\app}{\approx}
\newcommand{\sq}{\subseteq}
\newcommand{\impl}{\Rightarrow}
\newcommand{\Arr}{\Rightarrow}
\newcommand{\Darr}{\Leftrightarrow}
\newcommand{\arr}{\rightarrow}
\newcommand{\larr}{\leftarrow}
\newcommand{\darr}{\leftrightarrow}
\newcommand{\harr}{\hookrightarrow}
\newcommand{\marr}{\mapsto}
\newcommand{\caret}{\hat{\;}}
%\renewcommand{\nin}{\not\in}
\newcommand{\sm}{\setminus}
\newcommand{\es}{\varnothing}
\newcommand{\pwr}{\mathcal{P}}
\newcommand{\rewrites}{\rightsquigarrow}
%semantics of genfrac: #1, #2 delimiters; #3 line thickness; #4 scriptsize (0-3); #5, #6 items
\newcommand{\myatop}[3][]{\genfrac{}{}{0pt}{#1}{#2}{#3}} %two items on top of each other, optional argument: 0-3 for normal-small script
\newcommand{\myatopp}[3]{\myatop{\myatop[0]{#1}{#2}}{#3}} %three items on top of each other
\newcommand{\myatoppp}[4]{\myatop{\myatop[0]{\myatop[0]{#1}{#2}}{#3}}{#4}} %four items on top of each other
\newcommand{\ov}[1]{\overline{#1}}
\newcommand{\und}[1]{\underline{#1}}
\newcommand{\cas}[1]{\begin{cases}#1\end{cases}}
\newcommand{\bcas}[1]{\left.\cas{#1}\right\}}
\newcommand{\mifc}{&\mathrm{if}\;}
\newcommand{\mothw}{&\mathrm{otherwise}}
\newcommand{\mathll}[2][l]{\providecommand{\nl}[1][.2cm]{\\[##1]}\begin{array}{#1}#2\end{array}}
\newcommand{\eqns}[2][=]{\providecommand{\nl}[1][.2cm]{\\[##1]}\begin{array}{l@{\;#1\;}l@{\tb}l}#2\end{array}}
\newcommand{\Ceq}[1]{\;\stackrel{#1}{=}\;}
\newcommand{\op}[1]{\mathit{#1}}
\newenvironment{myeqnarray}[1][=] %\nl for new line right of the = symbol
{\newcommand{\nl}{\\\multicolumn{1}{c}{}&}\begin{equation*}\begin{array}{l@{\;#1\;}l@{\tb}l}}
{\end{array}\end{equation*}}
\newcommand{\rul}[3][]{\cfrac{#2}{#3}\,#1}
% column vector \vect{a\\b\\c}
\newcommand{\vect}[1]{\left(\begin{array}{c}#1\end{array}\right)}
% code for defining LFS-style flexary commands: \curriedvec{3}{a}{b}{c}
%\newcount\foldindex
%\newcommand{\curriedvec}[1]{\left(\array{c}\global\foldindex#1\curriedvec@next}
%\newcommand{\curriedvec@next}[1]{#1\global\advance\foldindex-1\ifnum\foldindex>1\\\expandafter\curriedvec@next\else\endarray\right)\fi}
%Mathematical Text
\newcommand{\mof}{\;\mathrm{of}\;}
\newcommand{\mif}{\;\mathrm{if}\;}
\newcommand{\mthen}{\;\mathrm{then}\;}
\newcommand{\mfor}{\;\mathrm{for}\;}
\newcommand{\mand}{\;\mathrm{and}\;}
\newcommand{\msome}{\;\mathrm{some}\;}
\newcommand{\mall}{\;\mathrm{all}\;}
\newcommand{\mforall}{\;\mathrm{for}\;\mathrm{all}\;}
\newcommand{\mforsome}{\;\mathrm{for}\;\mathrm{some}\;}
\newcommand{\mnot}{\;\mathrm{not}\;}
\newcommand{\mno}{\;\mathrm{no}\;}
\newcommand{\mor}{\;\mathrm{or}\;}
\newcommand{\minn}{\;\mathrm{in}\;}
\newcommand{\mwith}{\;\mathrm{with}\;}
\newcommand{\mwhere}{\;\mathrm{where}\;}
\newcommand{\mexists}{\;\mathrm{exists}\;}
\newcommand{\miff}{\;\mathrm{iff}\;}
\newcommand{\mimplies}{\;\mathrm{implies}\;}
\newcommand{\msuchthat}{\;\mathrm{such}\;\mathrm{that}\;}
\newcommand{\motherwise}{\;\mathrm{otherwise}\;}
\newcommand{\mtext}[1]{\;\mathrm{#1}\;}
%categories, general
\newcommand{\id}[1]{\op{id}_{#1}}
%\newcommand{\�}[2]{{#2}\circ {#1}}
\newcommand{\oo}[3]{{#3}\circ {#2}\circ {#1}}
\newcommand{\ooo}[4]{{#4}\circ {#3}\circ {#2}\circ {#1}}
\newcommand{\obj}[1]{|#1|}
\newcommand{\sli}[2]{#1\backslash #2}
\newcommand{\slii}[2]{#1/#2}
\newcommand{\catop}[1]{{{#1}^{op}}}
\newcommand{\catfont}[1]{\mathcal{#1}}
\newcommand{\Set}{\catfont{SET}}
\newcommand{\Cat}{\catfont{CAT}}
\newcommand{\Poset}{\catfont{POSET}}
\newcommand{\Rel}{\catfont{REL}}
\newcommand{\Class}{\catfont{CLASS}}
\newcommand{\Ins}{\catfont{INS}}
\newcommand{\Logics}{\catfont{LOG}}
%for tikz
\newcommand{\arrowtip}{angle 45}
\newcommand{\arrowtipepi}{triangle 45}
\newcommand{\arrowtipmono}{right hook}
\newcommand{\refledge}[2]{(#1) .. controls +(-.5,.75) and +(.5,.75) .. node[above]{#2} (#1)} % reflexive edge
%institutions
\newcommand{\I}{\mathbb{I}}
\newcommand{\insfont}[1]{\mathbf{#1}}
\newcommand{\Sig}[1][]{{\insfont{Sig}^{#1}}}
\newcommand{\Con}[1][]{{\insfont{Con}^{#1}}}
\newcommand{\Sen}[1][]{{\insfont{Sen}^{#1}}}
\newcommand{\Mod}[1][]{{\insfont{Mod}^{#1}}}
\newcommand{\Pf}[1][]{{\insfont{Pf}\,^{#1}}}
\newcommand{\Th}[1][]{{\insfont{Th}^{#1}}}
\newcommand{\Syn}[1][]{{\insfont{Syn}^{#1}}}
\newcommand{\Jud}[1][]{{\insfont{Jud}^{#1}}}
\newcommand{\der}{\vdash}
\newcommand{\dera}[4][]{#2\der^{#1}_{#3}#4}
\newcommand{\moda}[4][]{#2\models^{#1}_{#3}#4}
\newcommand{\nmoda}[4][]{#2\not\models^{#1}_{#3}#4}
% CFGs
\newenvironment{grammar}{\[\begin{array}{l@{\tb\bbc\tb}l@{\tb}l}}{\end{array}\]}
\newcommand{\bnf}[1]{#1}
\newcommand{\bbc}{\bnf{::=}}
\newcommand{\bnfalt}{\ensuremath{\;\bnf{|}\;}}
\providecommand{\alt}{\ensuremath{\;\bnf{|}\;}} % already used by beamer
\newcommand{\opt}[1]{\bnf{[}#1\bnf{]}}
\newcommand{\bnfbracket}[1]{\bnf{(}#1\bnf{)}}
\newcommand{\rep}[1]{#1^{\bnf{\ast}}}
\newcommand{\bnfchoice}[1]{\bnf{[}#1\bnf{]}}
\newcommand{\bnfnegchoice}[1]{\bnf{[\caret}#1\bnf{]}}
\newenvironment{commgrammar}{\[\begin{array}{l@{\;}c@{\;}l@{\tb}l}}{\end{array}\]}
\newcommand{\gcomment}[1]{\multicolumn{4}{l}{\rule{0pt}{4ex}\fbox{#1}\vspace{.3em}}}
\newcommand{\gprod}[3]{#1 & \bbc & #2 & \text{#3}}
\newcommand{\galtprod}[2]{ & \bnf{|} & #1 & \text{#2}}
\RequirePackage{xspace}
\newcommand{\mmt}{\texorpdfstring{{\normalfont\scshape{Mmt}}\xspace}{MMT\ }}
\newcommand{\omdoc}{{\scshape{OMDoc}}\xspace}
\newcommand{\mathml}{{\scshape{MathML}}\xspace}
\newcommand{\openmath}{{\scshape{OpenMath}}\xspace}
To see what is known about beautiful sets, she types these three conditions into a theory
classifier, which interprets them as a \MMT theory $Q$, computes all (total) views from
$Q$ into a library $\cL$, and returns presentations of target theories and the assignments
made by the views.
% In our example we have the situation in Figure~\ref{fig:theory-classification-ex}.
In the base use case, Jane learns that her
``beautiful sets'' correspond to the well-known structure of
matroids~\cite{wikipedia:matroid}, so she can directly apply matroid theory to her
problems.
\end{compactitem}
\end{newpart}
\begin{oldpart}{I would remove this completely -- the theories are given via screenshots later anyway}
\begin{figure}[ht]\centering\lstset{aboveskip=0pt,belowskip=0pt}
\begin{tabular}{|p{5cm}|p{5cm}|}\hline
$Q$ & Result \\\hline
\end{tabular}
\caption{Theory Classification for beautiful sets}\label{fig:theory-classification-ex}
\end{figure}
\end{oldpart}
\paragraph{Related Work}
Existing systems have so far only worked with explicitly given theory morphisms, e.g., in IMPS \cite{imps} or Isabelle \cite{isabelle}.
The algorithm tries to match two symbols by unifying their types.
This is made efficient by separating the term into a hashed representation of its abstract syntax tree (which serves as a fast plausibility check for pre-selecting matching candidates) and the list of symbol occurrences in the term, into which the algorithm recurses.
Secondly, we apply this view finder in two concrete case studies: in the first, we start with an abstract theory and check whether it already exists in the same library. In the second, we write down a simple theory of commutative operators in one language in order to find all commutative operators in another library based on a different logical foundation.
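To illustrate the encoding (a minimal sketch in Scala; the data structures and names below are ours for exposition and not MMT's actual API), a term can be split into a shape hash and a symbol list roughly as follows:
\begin{lstlisting}
object TermEncoding {
  sealed trait Term
  final case class Sym(name: String) extends Term                        // constant reference
  final case class App(op: Term, args: List[Term]) extends Term          // application
  final case class Bind(op: Term, numVars: Int, body: Term) extends Term // binder

  // hash of the tree shape (symbols abstracted away) + list of symbol occurrences
  final case class Encoding(shapeHash: Int, symbols: List[String])

  def encode(t: Term): Encoding = {
    def syms(t: Term): List[String] = t match {
      case Sym(n)        => List(n)
      case App(o, as)    => syms(o) ++ as.flatMap(syms)
      case Bind(o, _, b) => syms(o) ++ syms(b)
    }
    def shape(t: Term): String = t match {
      case Sym(_)        => "C" // every symbol occurrence looks the same in the shape
      case App(o, as)    => s"@(${shape(o)};${as.map(shape).mkString(",")})"
      case Bind(o, n, b) => s"B(${shape(o)};$n;${shape(b)})"
    }
    Encoding(shape(t).hashCode, syms(t))
  }

  // Only constants with equal shape hashes are considered as matching candidates;
  // the view finder then only has to align the two symbol lists.
  def plausibleMatch(a: Encoding, b: Encoding): Boolean = a.shapeHash == b.shapeHash
}
\end{lstlisting}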
\paragraph{Overview}
In Section~\ref{sec:prelim}, we review the basics of MMT and, as examples, the representations of the PVS and HOL Light libraries.
\maketitle
\begin{abstract}
We present a method for finding morphisms between formal theories, both within as well
as across libraries based on different logical foundations.
% These morphisms can yield both (more or less formal) \emph{alignments} between individual symbols as well as truth-preserving morphisms between whole theories.
As they induce new theorems in the target theory for any theorem of the source theory, theory morphisms are high-value elements of
a modular formal library. Usually, theory morphisms are manually encoded, but this
practice requires authors who are familiar with source and target theories at the same
time, which limits the scalability of the manual approach.
\section{Finding Theory Morphisms}\label{sec:viewfinder}
\input{viewfinder}
\section{Implementation \& Use Case}\label{sec:usecase}
\input{usecase}
\section{Low-hanging Fruit: Currently Unrealized Applications}
\input{applications}
\section{Conclusion}\label{sec:concl}
\paragraph{Future Work} \ednote{term indexing}\ednote{Better normalizing (Norman's thesis)}
\paragraph{Acknowledgments}
The authors gratefully acknowledge financial support from the OpenDreamKit Horizon 2020
This property makes theory morphisms so valuable for structuring, refactoring, and integrating large corpora.
MMT achieves language-independence through the use of \textbf{meta-theories}: every MMT-theory may designate a previously defined theory as its meta-theory.
For example, when we represent the HOL Light library in MMT, we first write a theory $L$ for the logical primitives of HOL Light.
Then each theory in the HOL Light library is represented as a theory with $L$ as its meta-theory.
In fact, we usually go one step further: $L$ itself is a theory, whose meta-theory is a logical framework such as LF.
That allows $L$ to concisely define the syntax and inference system of HOL Light.
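Schematically, this yields a chain of meta-theories, which we can depict as follows (the library theory names on the left are merely illustrative):
\[ \underbrace{\cn{bool},\;\cn{arith},\;\ldots}_{\text{HOL Light library theories}}
   \;\xrightarrow{\;\text{meta}\;}\;
   \underbrace{L}_{\text{HOL Light logic}}
   \;\xrightarrow{\;\text{meta}\;}\;
   \underbrace{\mathrm{LF}}_{\text{logical framework}} \]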
\begin{center}\begin{tabular}{|l|l|l|}\hline
 & Theory $\Sigma$ & Morphism $\sigma:\Sigma\to\Sigma'$ \\
\hline
set of & typed constant declarations $c:E$ & assignments $c\mapsto E'$ \\
$\Sigma$-expressions $E$ & formed from $M$- and $\Sigma$-constants & mapped to $\Sigma'$ expressions \\
\hline
\end{tabular}
\end{center}
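The mapping of expressions is the homomorphic extension of the constant assignments: each $\Sigma$-constant is replaced by the expression assigned to it, $M$-constants are kept fixed, and the map recurses into subexpressions -- schematically (in the notation for complex expressions introduced below)
\[ \sigma(\oma{o}{a_1,\ldots,a_n}) \;=\; \oma{\sigma(o)}{\sigma(a_1),\ldots,\sigma(a_n)}. \]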
Complex expressions are of the form $\ombind{o}{x_1:t_1,\ldots,x_m:t_m}{a_1,\ldots,a_n}$, where $o$ is the operator, each $x_i:t_i$ declares a variable bound by $o$, and each $a_i$ is an argument of $o$.
The bound variable context may be empty, and we write $\oma{o}{\vec{a}}$ instead of $\ombind{o}{\cdot}{\vec{a}}$.
\begin{newpart}{DM}
For example, the expression $\lambda x,y:\mathbb N.\; (x + \cn{one})\cdot(y+\cn{one})$ would be written as $\ombind{\lambda}{x:\mathbb N,y : \mathbb N}{\oma{\cdot}{\oma{+}{x,\cn{one}},\oma{+}{y,\cn{one}}}}$.
%For example, the second axiom ``Every subset of a beautiful set is beautiful'' (i.e. the term $\forall s,t : \cn{set\ }X.\;\cn{beautiful}(s)\wedge t \subseteq s \Rightarrow \cn{beautiful}(t)$) would be written as
%\[ \ombind{\forall}{s : \oma{\cn{set}}{X},t : \oma{\cn{set}}{X}}{\oma{\Rightarrow}{\oma{\wedge}{\oma{\cn{beautiful}}{s},\oma{\subseteq}{t,s}},\oma{\cn{beautiful}}{t}}} \]
\end{newpart}
Finally, we remark on a few additional features of the MMT language that are important for large-scale case studies but not critical for understanding the basic intuitions behind our results.
MMT provides a module system that allows theories to instantiate and import each other. The module system is conservative: every theory can be elaborated into one that only declares constants.
MMT constants may carry an optional definiens, in which case we write $c:E=e$.
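For instance (an illustrative declaration, not one taken from a specific library), $\cn{two} : \mathbb{N} = \oma{+}{\cn{one},\cn{one}}$ declares a constant of type $\mathbb{N}$ whose definiens is $\oma{+}{\cn{one},\cn{one}}$; expanding the definition replaces every occurrence of $\cn{two}$ by this definiens.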
%\end{center}
%\end{example
%\begin{oldpart}{FR: replaced with the above}
%For the purposes of this paper, we will work with the (only slightly simplified) grammar given in Figure \ref{fig:mmtgrammar}.
%
%\begin{figure}[ht]\centering\vspace*{-1em}
% \begin{mdframed}
% \begin{tabular}{rl@{\quad}l|l}
% \cn{Thy} $::=$ & $T[:T]=\{ (\cn{Inc})^\ast\ (\cn{Const})^\ast \}$ & Theory & \multirow{2}{*}{Modules} \\
% \cn{View} $::=$ & $T : T \to T = \{ (\cn{Ass})^\ast \}$ & View & \\ \hline
% \cn{Const} $::=$ & $C [:t] [=t]$ & Constant Declarations& \multirow{3}{*}{Declarations} \\
% \cn{Inc} $::=$ & $\incl T$ & Includes & \\
% \cn{Ass} $::=$ & $C = t$ & Assignments & \\ \hline
% $\Gamma$ $::=$ & $(x [:t][=t])^\ast$ & Variable Contexts & \multirow{2}{*}{Objects} \\
% $t$ $::=$ & $x \mid T?C \mid \oma{t}{(t)^+} \mid \ombind{t}{\Gamma}{t} $ & Terms & \\\hline\hline
% $T$ $::=$ & \cn{String} & Module Names & \multirow{3}{*}{Strings} \\
% $C$ $::=$ & \cn{String} & Constant Names & \\
% $x$ $::=$ & \cn{String} & Variable Names & \\
% \end{tabular}
% \end{mdframed}\vspace*{-.5em}
% \caption{The MMT Grammar}\label{fig:mmtgrammar}\label{fig:mmtgrammar}\vspace*{-1em}
%\end{figure}
%
%In more detail:
%\begin{itemize}
% \item Theories have a module name, an optional \emph{meta-theory} and a body consisting of \emph{includes} of other theories
% and a list of \emph{constant declarations}.
% \item Constant declarations have a name and two optional term components; a \emph{type} ($[:t]$), and a \emph{definition} ($[=t]$).
% \item Views $V : T_1 \to T_2$ have a module name, a domain theory $T_1$, a codomain theory $T_2$ and a body consisting of assignments
% $C = t$.
% \item Terms are either
% \begin{itemize}
% \item variables $x$,
% \item symbol references $T?C$ (referencing the constant $C$ in theory $T$),
% \item applications $\oma{f}{a_1,\ldots,a_n}$ of a term $f$ to a list of arguments $a_1,\ldots,a_n$ or
% \item binding application $\ombind{f}{x_1[:t_1][=d_1],\ldots,x_n[:t_n][=d_n]}{b}$, where $f$ \emph{binds} the variables
% $x_1,\ldots,x_n$ in the body $b$ (representing binders such as quantifiers, lambdas, dependent type constructors etc.).
% \end{itemize}
%\end{itemize}
%The term components of a constant in a theory $T$ may only contain symbol references to constants declared previously in $T$, or that are declared in some theory $T'$ (recursively) included in $T$ (or its meta-theory, which we consider an \emph{include} as well).
%We can eliminate all includes in a theory $T$ by simply copying over the constant declarations in the included theories; we call this process \emph{flattening}. We will often and without loss of generality assume a theory to be \emph{flat} for convenience.
%
%An assignment in a view $V:T_1\to T_2$ is syntactically well-formed if for any assignment $C=t$ contained, $C$ is a constant declared in the flattened domain $T_1$ and $t$ is a syntactically well-formed term in the codomain $T_2$. We call a view \emph{total} if all \emph{undefined} constants in the domain have a corresponding assignment and \emph{partial} otherwise.
%
%\end{oldpart}
\subsection{Proof Assistant Libraries in MMT}\label{sec:oaf}
\caption{The Theory of Matroids in the MitM Library}\label{fig:use:target}
\end{figure}
\begin{newpart}{DM: Moved}
\section{Across-Library Viewfinding}\label{sec:across}
We have so far assumed one fixed meta-theory for all theories involved; we will now discuss the situation when looking for views between theories in different libraries (and built on different foundations).
Obviously, various differences in available foundational primitives and library-specific best practices and idiosyncrasies can prevent the algorithm from finding desired matches. There are two approaches to increasing the number of results in these cases:
\begin{itemize}
\item In many instances, the translation between two foundations is too complex to be discovered purely syntactically. In these cases we can provide arbitrary translations between theories, which are applied before computing the encoding.\ednote{Mention/cite alignment-translation paper}
\item We can do additional transformations before preprocessing theories, such as normalizing expressions, eliminating higher-order abstract syntax encodings or encoding-related redundant information (such as the type of a typed equality, which in the presence of subtyping can be different from the types of both sides of an equation), or elaborating abbreviations/definitions.
\end{itemize}
When elaborating definitions, it is important to consider that this may also reduce the number of results, if both theories use similar abbreviations for complex terms, or the same concept is declared axiomatically in one theory, but definitionally in the other. For that reason, we can allow \textbf{several abstract syntax trees for the same constant}, such as one with definitions expanded and one ``as is''.
Similarly, certain idiosyncrasies -- such as PVS's common usage of theory parameters -- call for not just matching symbol references, but also variables or possibly even complex expressions. To handle these situations, we additionally allow for \textbf{holes} in the constant lists of an abstract syntax tree, which may be unified with any other symbol or hole, but are not recursed into. The subterms that are to be considered holes can be marked as such during preprocessing.
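As a sketch (continuing the illustrative Scala encoding from the introduction; names are again ours, not MMT's API), a hole is simply a special entry in the symbol list that unifies with anything and is never recursed into:
\begin{lstlisting}
object Holes {
  sealed trait SymbolRef
  final case class Named(name: String) extends SymbolRef // ordinary constant occurrence
  case object Hole extends SymbolRef                      // wildcard introduced during preprocessing

  // Can two occurrences be aligned, given the symbol assignments made so far?
  def alignable(a: SymbolRef, b: SymbolRef, assignment: Map[String, String]): Boolean =
    (a, b) match {
      case (Hole, _) | (_, Hole) => true                             // holes unify with anything
      case (Named(x), Named(y))  => assignment.get(x).forall(_ == y) // must stay consistent
    }
}
\end{lstlisting}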
\subsection{Normalization}\label{sec:preproc}
The common logical framework used for all the libraries at our disposal -- namely LF and extensions thereof -- makes it easy to systematically normalize theories built on various logical foundations. We currently use the following approaches to preprocessing theories:
\begin{itemize}
\item Free variables in a term -- often occurrences of theory parameters, as e.g. used extensively in the PVS system -- are replaced by holes.
\item For foundations that use product types, we curry function types $(A_1 \times\ldots\times A_n)\to B$ to $A_1 \to \ldots \to A_n\to B$. We treat lambda-expressions and applications accordingly.
\item Higher-order abstract syntax encodings are eliminated by raising atomic types, function types, applications and lambdas to the level of the logical framework. This eliminates (redundant) implicit arguments that only occur due to the formalization in the logical framework.
This has the advantage that possible differences between the types of the relevant subterms and implicit type arguments (e.g. in the presence of subtyping) do not negatively affect viewfinding.
\item We use the Curry-Howard correspondence to transform axioms and theorems of the form $\vdash (P\Rightarrow Q)$ to function types $\vdash P \to \vdash Q$. Analogously, we transform judgments of the form $\vdash \forall x : A.\;P$ to $\prod_{x:A}\vdash P$ (see the example below).
\item For classical logics, we afterwards rewrite all logical connectives in terms of negation and conjunction only. Double negations are eliminated.
\item Typed equalities are transformed into untyped ones, again getting rid of the redundant type argument of the equality.
\item The arguments of conjunctions and equalities are reordered (currently only by their number of subterms).
\end{itemize}
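As a worked example of the last few steps (the type $G$ and the operator $\circ$ are placeholders), an axiom stating commutativity would be normalized as
\[ \vdash \forall x,y:G.\;\; x\circ y \doteq_G y\circ x
   \qquad\rightsquigarrow\qquad
   \prod_{x:G}\,\prod_{y:G}\; \vdash\; x\circ y \doteq y\circ x, \]
where the judgment has been turned into a (dependent) function type and the redundant type argument $G$ of the equality has been dropped.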
\subsection{Implementation}\label{sec:pvs}
\paragraph{} Using the above normalization methods, we can, for example, write down a theory for a commutative binary operator using the Math-in-the-Middle foundation while targeting e.g. the PVS Prelude library -- allowing us to find all commutative operators, as in Figure \ref{fig:use:pvs}.
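Schematically (not in actual MMT surface syntax), such a query theory consists of little more than
\[ G:\cn{type},\qquad \circ\,:\, G\to G\to G,\qquad \cn{comm}\,:\;\vdash \forall x,y:G.\;x\circ y \doteq y \circ x, \]
which after the above normalizations is matched against the encodings of the PVS Prelude.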
\begin{figure}[ht]\centering
\fbox{\includegraphics[width=\textwidth]{pvs}}
\caption{Searching for Commutative Operators in PVS}\label{fig:use:pvs}
\end{figure}
\ednote{8 results for NASA, but NASA doesn't work in jEdit because of limited memory}
This example also hints at a way to iteratively improve the results of the viewfinder: since we can find properties like commutativity and associativity, the results can in turn inform a better normalization of the theory that exploits these properties, which would potentially allow for finding even more views.
\end{newpart}