Commit b0a6f80e authored by Michael Kohlhase

merge

parents 8c2d086a 7f86f2ab
\begin{newpart}{DM}
We have seen how a viewfinder can be used for theory \emph{discovery} and for finding constants with specific desired properties, but many other potential use cases are imaginable. The main challenges they pose lie less in the algorithm or software design than in the user interfaces.
The theory discovery use case described in Sec.~\ref{sec:usecase} is mostly desirable in a setting where a user is actively writing or editing a theory, so the integration in jEdit is sensible. However, the across-library use case in Sec.~\ref{sec:pvs} would be much more useful in a theory exploration setting, such as when browsing available archives on MathHub~\cite{mathhub} or in the graph viewer integrated in \mmt~\cite{RupKohMue:fitgv17}. Additional specialized user interfaces would enable or improve the following use cases:
\end{newpart}
\begin{itemize}
\item If the codomain of a view is a theory representing a specific model, it would tell her that those
\item \textbf{Model-/Countermodel Finding:} If the codomain of a view is a theory representing a specific model, it would tell her that those
are \emph{examples} of her abstract theory.
Furthermore, partial views -- especially those that are total on some included theory -- could
be insightful \emph{counterexamples}.
\item Given that the Viewfinder looks for \emph{partial} views, we can use it to find natural
\item \textbf{Library Refactoring:} Given that the Viewfinder looks for \emph{partial} views, we can use it to find natural
extensions of a starting theory. Imagine Jane removing the last of her axioms for ``beautiful sets'' --
the other axioms (disregarding finitude of her sets) would allow her to find e.g. both Matroids and
\emph{Ideals}, which would suggest to her to refactor her library accordingly.
be refactored as an extension of the codomain, which would allow her to use all theorems and definitions
therein.
\item If we additionally consider views into and out of the theories found, this can make theory discovery even
\item \textbf{Theory Generalization:} If we additionally consider views into and out of the theories found, this can make theory discovery even
more attractive. For example, a view from a theory of vector spaces into matroids could additionally inform Jane
that her beautiful sets, being matroids, form a generalization of the notion of linear independence in linear algebra.
\item If we were to keep book on our transfomations during preprocessing and normalization, we could use the found
\item \textbf{Folklore-based Conjecture:} If we were to keep track of our transformations during preprocessing and normalization, we could use the found
views for translating both into the codomain as well as back from there into our starting theory.
This would allow for e.g. discovering and importing theorems and useful definitions from some other library --
A useful interface might specifically prioritize views into theories on top of which there are many
theorems and definitions that have been discovered.
\item The last example in Section \ref{sec:usecase} shows how we can find properties like commutativity and
associativity, which can in turn inform a better normalization of the theory, which in turn would potentially
allow for finding more views. This could iteratively improve the results of the viewfinder.
\end{itemize}
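The last item above describes a fixed-point iteration: found views enable better normalization, which enables more views. A toy sketch of that loop (all names are hypothetical, not part of the actual viewfinder):

```python
# Toy sketch of the iterative improvement described in the last item above:
# each viewfinder pass may reveal properties (e.g. commutativity) that enable
# a finer normalization, which in turn may yield further views.
# All names are hypothetical; this only illustrates the fixed-point iteration.

def iterate_viewfinder(theories, find_views, renormalize):
    """Run the viewfinder until no new views are found."""
    views = set()
    while True:
        new = find_views(theories) - views
        if not new:
            return views
        views |= new
        theories = renormalize(theories, views)

# Toy instantiation: view "v2" only becomes findable after renormalization.
def find_views(ths):
    found = {"v1"}
    if "normalized" in ths:
        found.add("v2")
    return found

def renormalize(ths, views):
    return ths | {"normalized"}
```

Here the first pass finds only \texttt{v1}; renormalization then makes \texttt{v2} findable, and the loop terminates once a pass adds nothing new.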
For some of these use cases it would be advantageous to look for views \emph{into} our working theory instead.
Note that even though the algorithm is in principle symmetric, some aspects often depend on the direction -- e.g. how we preprocess the theories, which constants we use as starting points or how we treat and evaluate the resulting (partial) views (see Sections \ref{sec:algparams}, \ref{sec:normalizeintra} and \ref{sec:normalizeinter}).
%%% Local Variables:
%%% mode: latex
%%% eval: (visual-line-mode) (set-fill-column 5000)
\newcommand{\tb}{\hspace*{.5cm}}
\newcommand{\tbiff}{\tb\miff\tb}
\newcommand{\tbimpl}{\tb\impl\tb}
\newcommand{\mpag}[2]{\begin{minipage}{#1}#2\end{minipage}}
\newcommand{\mpage}[1]{\begin{minipage}{\textwidth}#1\end{minipage}}
% shortcut for centered tabular
\newenvironment{ctabular}[1]{\begin{center}\begin{tabular}{#1}}{\end{tabular}\end{center}}
\newenvironment{tabularfigure}[3]
{\def\@mycaption{#2}\def\@mylabel{#3}\begin{figure}[htb]\centering\begin{tabular}{#1}}
{\end{tabular}\caption{\@mycaption}\label{\@mylabel}\end{figure}}
% wrap this around multiple calls to \footnotetext if there were multiple \footnotemark's to get the right numbering
\newenvironment{multfootnotetext}[1]{\addtocounter{footnote}{-#1}\let\basics@footnotetext=\footnotetext\renewcommand{\footnotetext}[1]{\stepcounter{footnote}\basics@footnotetext{##1}}}{}
%change figure/table placement
% General parameters, for ALL pages:
%\renewcommand{\topfraction}{0.9} % max fraction of floats at top
%\renewcommand{\bottomfraction}{0.8} % max fraction of floats at bottom
%Parameters for TEXT pages (not float pages):
\setcounter{topnumber}{2}
%\setcounter{bottomnumber}{2}
%\setcounter{totalnumber}{4} % 2 may work better
%\setcounter{dbltopnumber}{2} % for 2-column pages
%\renewcommand{\dbltopfraction}{0.9} % fit big float above 2-col. text
%\renewcommand{\textfraction}{0.07} % allow minimal text w. figs
%Parameters for FLOAT pages (not text pages):
%\renewcommand{\floatpagefraction}{0.7} % N.B.: floatpagefraction MUST be less than topfraction !!
%\renewcommand{\dblfloatpagefraction}{0.7}
% remember to use [htp] or [htpb] for placement
% ifnonempty{a}{b} = if (a == empty) empty else b
\newcommand{\ifnonempty}[3][]{\def\@empty{}\def\@test{#2}\ifx\@test\@empty#1\else#3\fi}
% fold{a}{b1,...,bn} = b1 a ... a bn
\newcommand{\fold}[2]{\def\@tmpop{\relax}\@for\@I:=#2\do{\@tmpop\@I\def\@tmpop{#1}}}
% rep{n}{a} = a ... a (n times)
\newcounter{loopcount}
\newcommand{\ntimes}[2]{\setcounter{loopcount}{#1}\loop\ifnum\theloopcount>0#2\addtocounter{loopcount}{-1}\repeat}
\renewcommand{\epsilon}{\varepsilon}
\renewcommand{\phi}{\varphi}
\renewcommand{\theta}{\vartheta}
\newcommand{\N}{\mathbb{N}}
\newcommand{\n}{\mathbb{N}^*}
\newcommand{\R}{\mathbb{R}}
\newcommand{\Q}{\mathbb{Q}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\C}{\mathbb{C}}
\newcommand{\B}{\mathbb{B}}
\newcommand{\app}{\approx}
\newcommand{\sq}{\subseteq}
\newcommand{\impl}{\Rightarrow}
\newcommand{\Arr}{\Rightarrow}
\newcommand{\Darr}{\Leftrightarrow}
\newcommand{\arr}{\rightarrow}
\newcommand{\larr}{\leftarrow}
\newcommand{\darr}{\leftrightarrow}
\newcommand{\harr}{\hookrightarrow}
\newcommand{\marr}{\mapsto}
\newcommand{\caret}{\hat{\;}}
%\renewcommand{\nin}{\not\in}
\newcommand{\sm}{\setminus}
\newcommand{\es}{\varnothing}
\newcommand{\pwr}{\mathcal{P}}
\newcommand{\rewrites}{\rightsquigarrow}
%semantics of genfrac: #1, #2 delimiters; #3 line tickness; #4 scriptsize (0-3); #5, #6 items
\newcommand{\myatop}[3][]{\genfrac{}{}{0pt}{#1}{#2}{#3}} %two items on top of each other, optional argument: 0-3 for normal-small script
\newcommand{\myatopp}[3]{\myatop{\myatop[0]{#1}{#2}}{#3}} %three items on top of each other
\newcommand{\myatoppp}[4]{\myatop{\myatop[0]{\myatop[0]{#1}{#2}}{#3}}{#4}} %four items on top of each other
\newcommand{\ov}[1]{\overline{#1}}
\newcommand{\und}[1]{\underline{#1}}
\newcommand{\cas}[1]{\begin{cases}#1\end{cases}}
\newcommand{\bcas}[1]{\left.\cas{#1}\right\}}
\newcommand{\mifc}{&\mathrm{if}\;}
\newcommand{\mothw}{&\mathrm{otherwise}}
\newcommand{\mathll}[2][l]{\providecommand{\nl}[1][.2cm]{\\[##1]}\begin{array}{#1}#2\end{array}}
\newcommand{\eqns}[2][=]{\providecommand{\nl}[1][.2cm]{\\[##1]}\begin{array}{l@{\;#1\;}l@{\tb}l}#2\end{array}}
\newcommand{\Ceq}[1]{\;\stackrel{#1}{=}\;}
\newcommand{\op}[1]{\mathit{#1}}
\newenvironment{myeqnarray}[1][=] %\nl for new line right of the = symbol
{\newcommand{\nl}{\\\multicolumn{1}{c}{}&}\begin{equation*}\begin{array}{l@{\;#1\;}l@{\tb}l}}
{\end{array}\end{equation*}}
\newcommand{\rul}[3][]{\cfrac{#2}{#3}\,#1}
% column vector \vect{a\\b\\c}
\newcommand{\vect}[1]{\left(\begin{array}{c}#1\end{array}\right)}
% code for defining LFS-style flexary commands: \curriedvec{3}{a}{b}{c}
%\newcount\foldindex
%\newcommand{\curriedvec}[1]{\left(\array{c}\global\foldindex#1\curriedvec@next}
%\newcommand{\curriedvec@next}[1]{#1\global\advance\foldindex-1\ifnum\foldindex>1\\\expandafter\curriedvec@next\else\endarray\right)\fi}
%Mathematical Text
\newcommand{\mof}{\;\mathrm{of}\;}
\newcommand{\mif}{\;\mathrm{if}\;}
\newcommand{\mthen}{\;\mathrm{then}\;}
\newcommand{\mfor}{\;\mathrm{for}\;}
\newcommand{\mand}{\;\mathrm{and}\;}
\newcommand{\msome}{\;\mathrm{some}\;}
\newcommand{\mall}{\;\mathrm{all}\;}
\newcommand{\mforall}{\;\mathrm{for}\;\mathrm{all}\;}
\newcommand{\mforsome}{\;\mathrm{for}\;\mathrm{some}\;}
\newcommand{\mnot}{\;\mathrm{not}\;}
\newcommand{\mno}{\;\mathrm{no}\;}
\newcommand{\mor}{\;\mathrm{or}\;}
\newcommand{\minn}{\;\mathrm{in}\;}
\newcommand{\mwith}{\;\mathrm{with}\;}
\newcommand{\mwhere}{\;\mathrm{where}\;}
\newcommand{\mexists}{\;\mathrm{exists}\;}
\newcommand{\miff}{\;\mathrm{iff}\;}
\newcommand{\mimplies}{\;\mathrm{implies}\;}
\newcommand{\msuchthat}{\;\mathrm{such}\;\mathrm{that}\;}
\newcommand{\motherwise}{\;\mathrm{otherwise}\;}
\newcommand{\mtext}[1]{\;\mathrm{#1}\;}
%categories, general
\newcommand{\id}[1]{\op{id}_{#1}}
%\newcommand{\�}[2]{{#2}\circ {#1}}
\newcommand{\oo}[3]{{#3}\circ {#2}\circ {#1}}
\newcommand{\ooo}[4]{{#4}\circ {#3}\circ {#2}\circ {#1}}
\newcommand{\obj}[1]{|#1|}
\newcommand{\sli}[2]{#1\backslash #2}
\newcommand{\slii}[2]{#1/#2}
\newcommand{\catop}[1]{{{#1}^{op}}}
\newcommand{\catfont}[1]{\mathcal{#1}}
\newcommand{\Set}{\catfont{SET}}
\newcommand{\Cat}{\catfont{CAT}}
\newcommand{\Poset}{\catfont{POSET}}
\newcommand{\Rel}{\catfont{REL}}
\newcommand{\Class}{\catfont{CLASS}}
\newcommand{\Ins}{\catfont{INS}}
\newcommand{\Logics}{\catfont{LOG}}
%for tikz
\newcommand{\arrowtip}{angle 45}
\newcommand{\arrowtipepi}{triangle 45}
\newcommand{\arrowtipmono}{right hook}
\newcommand{\refledge}[2]{(#1) .. controls +(-.5,.75) and +(.5,.75) .. node[above]{#2} (#1)} % reflexive edge
%institutions
\newcommand{\I}{\mathbb{I}}
\newcommand{\insfont}[1]{\mathbf{#1}}
\newcommand{\Sig}[1][]{{\insfont{Sig}^{#1}}}
\newcommand{\Con}[1][]{{\insfont{Con}^{#1}}}
\newcommand{\Sen}[1][]{{\insfont{Sen}^{#1}}}
\newcommand{\Mod}[1][]{{\insfont{Mod}^{#1}}}
\newcommand{\Pf}[1][]{{\insfont{Pf}\,^{#1}}}
\newcommand{\Th}[1][]{{\insfont{Th}^{#1}}}
\newcommand{\Syn}[1][]{{\insfont{Syn}^{#1}}}
\newcommand{\Jud}[1][]{{\insfont{Jud}^{#1}}}
\newcommand{\der}{\vdash}
\newcommand{\dera}[4][]{#2\der^{#1}_{#3}#4}
\newcommand{\moda}[4][]{#2\models^{#1}_{#3}#4}
\newcommand{\nmoda}[4][]{#2\not\models^{#1}_{#3}#4}
% CFGs
\newenvironment{grammar}{\[\begin{array}{l@{\tb\bbc\tb}l@{\tb}l}}{\end{array}\]}
\newcommand{\bnf}[1]{#1}
\newcommand{\bbc}{\bnf{::=}}
\newcommand{\bnfalt}{\ensuremath{\;\bnf{|}\;}}
\providecommand{\alt}{\ensuremath{\;\bnf{|}\;}} % already used by beamer
\newcommand{\opt}[1]{\bnf{[}#1\bnf{]}}
\newcommand{\bnfbracket}[1]{\bnf{(}#1\bnf{)}}
\newcommand{\rep}[1]{#1^{\bnf{\ast}}}
\newcommand{\bnfchoice}[1]{\bnf{[}#1\bnf{]}}
\newcommand{\bnfnegchoice}[1]{\bnf{[\caret}#1\bnf{]}}
\newenvironment{commgrammar}{\[\begin{array}{l@{\;}c@{\;}l@{\tb}l}}{\end{array}\]}
\newcommand{\gcomment}[1]{\multicolumn{4}{l}{\rule{0pt}{4ex}\fbox{#1}\vspace{.3em}}}
\newcommand{\gprod}[3]{#1 & \bbc & #2 & \text{#3}}
\newcommand{\galtprod}[2]{ & \bnf{|} & #1 & \text{#2}}
\RequirePackage{xspace}
\newcommand{\mmt}{\texorpdfstring{{\normalfont\scshape{Mmt}}\xspace}{MMT\ }}
\newcommand{\omdoc}{{\scshape{OMDoc}}\xspace}
\newcommand{\mathml}{{\scshape{MathML}}\xspace}
\newcommand{\openmath}{{\scshape{OpenMath}}\xspace}
tex/beautysource.png

15.3 KB | W: | H:

tex/beautysource.png

70 KB | W: | H:

tex/beautysource.png
tex/beautysource.png
tex/beautysource.png
tex/beautysource.png
  • 2-up
  • Swipe
  • Onion skin
@online{wikipedia:matroid,
label = {MWP},
title = {Matroid --- Wikipedia{,} The Free Encyclopedia},
urldate = {2018-04-04},
url = {https://en.wikipedia.org/w/index.php?title=Matroid}}
@Article{imps,
author = "W. Farmer and J. Guttman and F. Thayer",
title = "{IMPS: An Interactive Mathematical Proof System}",
journal = "{Journal of Automated Reasoning}",
pages = "213--248",
volume = "11",
number = "2",
year = "1993",
}
@Book{isabelle,
author = "L. Paulson",
title = "{Isabelle: A Generic Theorem Prover}",
publisher = "Springer",
series = "Lecture Notes in Computer Science",
volume = "828",
year = "1994",
}
@INPROCEEDINGS{NorKoh:efnrsmk07,
author = {Immanuel Normann and Michael Kohlhase},
title = {Extended Formula Normalization for $\epsilon$-Retrieval and
Sharing of Mathematical Knowledge},
pages = {266--279},
crossref = {MKM07},
keywords = {conference},
pubs = {mkohlhase,mws,omdoc}}
@inproceedings{KMOR:pvs:17,
author = "M. Kohlhase and D. M{\"u}ller and S. Owre and F. Rabe",
title = "{Making PVS Accessible to Generic Services by Interpretation in a Universal Format}",
year = "2017",
pages = "319--335",
booktitle = "Interactive Theorem Proving",
editor = "M. Ayala-Rincon and C. Munoz",
publisher = "Springer"
}
@inproceedings{hol_isahol_matching,
author = {T. Gauthier and C. Kaliszyk},
title = "{Matching concepts across HOL libraries}",
booktitle = {Intelligent Computer Mathematics},
editor = {S. Watt and J. Davenport and A. Sexton and P. Sojka and J. Urban},
pages = {267--281},
year = 2014,
publisher = {Springer}
}
@online{OAFproject:on,
label = {OAF},
url = {https://kwarc.info/projects/oaf/},
title = {{OAF}: An Open Archive for Formalizations},
urldate = {2018-04-26}}
@Article{lf,
author = "R. Harper and F. Honsell and G. Plotkin",
title = "{A framework for defining logics}",
journal = "{Journal of the Association for Computing Machinery}",
year = 1993,
volume = 40,
number = 1,
pages = "143--184",
}
@inproceedings{KKMR:alignments:16,
author = "C. Kaliszyk and M. Kohlhase and D. M{\"u}ller and F. Rabe",
title = "{A Standard for Aligning Mathematical Concepts}",
year = "2016",
pages = "229--244",
booktitle = "Work in Progress at CICM 2016",
editor = "A. Kohlhase and M. Kohlhase and P. Libbrecht and B. Miller and F. Tompa and A. Naummowicz and W. Neuper and P. Quaresma and M. Suda",
publisher = "CEUR-WS.org"
}
@inproceedings{ODK:mitm:16,
author = "P. Dehaye and M. Iancu and M. Kohlhase and A. Konovalov and S. Leli{\`e}vre and D. M{\"u}ller and M. Pfeiffer and F. Rabe and N. Thi{\'e}ry and T. Wiesing",
title = "{Interoperability in the ODK Project: The Math-in-the-Middle Approach}",
year = "2016",
pages = "117--131",
booktitle = "Intelligent Computer Mathematics",
editor = "M. Kohlhase and L. {de Moura} and M. Johansson and B. Miller and F. Tompa",
publisher = "Springer"
}
@inproceedings{MRLR:alignments:17,
author = "D. M{\"u}ller and C. Rothgang and Y. Liu and F. Rabe",
title = "{Alignment-based Translations Across Formal Systems Using Interface Theories}",
year = "2017",
pages = "77--93",
booktitle = "Proof eXchange for Theorem Proving",
editor = "C. Dubois and B. {Woltzenlogel Paleo}",
publisher = "Open Publishing Association"
}
@INPROCEEDINGS{mathhub,
author = "M. Iancu and C. Jucovschi and M. Kohlhase and T. Wiesing",
title = "{System Description: MathHub.info}",
booktitle = "Intelligent Computer Mathematics",
editor = "S. Watt and J. Davenport and A. Sexton and P. Sojka and J. Urban",
pages = "431--434",
publisher = "Springer",
year = {2014},
}
@inproceedings{RupKohMue:fitgv17,
author = {Marcel Rupprecht and Michael Kohlhase and Dennis M{\"u}ller},
title = {A Flexible, Interactive Theory-Graph Viewer},
crossref = {MathUI17},
url = {http://kwarc.info/kohlhase/papers/mathui17-tgview.pdf},
pubs = {dmueller,mkohlhase,mrupprecht,odk,mathhub,odkWP6}}
@book{omdoc,
author = "M. Kohlhase",
title = "{OMDoc: An Open Markup Format for Mathematical Documents (Version 1.2)}",
series = "{Lecture Notes in Artificial Intelligence}",
number = "4180",
publisher = "Springer",
year = "2006",
}
@PROCEEDINGS{MKM07,
title = {{MKM/Calculemus}},
booktitle = {Towards Mechanized Mathematical Assistants. {MKM/Calculemus}},
year = {2007},
isbn = {978-3-540-73083-5},
editor = {Manuel Kauers and Manfred Kerber and Robert Miner and Wolfgang Windsteiger},
number = {4573},
series = {LNAI},
keywords = {conference},
publisher = {Springer Verlag}}
@Proceedings{MathUI17,
editor = {Andrea Kohlhase and Marco Pollanen},
title = {MathUI 2017: The 12th Workshop on Mathematical User Interfaces},
booktitle = {MathUI 2017: The 12th Workshop on Mathematical User Interfaces},
SOONurl = {http://ceur-ws.org/Vol-1785/},
pubs = {akohlhase},
year = {2017}}
@article{RK:mmt:10,
author = "F. Rabe and M. Kohlhase",
title = "{A Scalable Module System}",
year = "2013",
pages = "1--54",
journal = "Information and Computation",
volume = "230",
number = "1"
}
@inproceedings{KR:hollight:14,
author = "C. Kaliszyk and F. Rabe",
title = "{Towards Knowledge Management for HOL Light}",
year = "2014",
pages = "357--372",
booktitle = "Intelligent Computer Mathematics",
editor = "S. Watt and J. Davenport and A. Sexton and P. Sojka and J. Urban",
publisher = "Springer"
}
Automatically and systematically searching for new theory morphisms was first undertaken in \cite{NorKoh:efnrsmk07}.
However, at that time no large corpora of formalized mathematics were available in standardized formats that would have allowed easily testing the ideas in large corpora.
This situation has changed since then as multiple such exports have become available.
In particular, we have developed the MMT language \cite{RK:mmt:10} and the concrete syntax of the OMDoc XML format \cite{omdoc} as a uniform representation language for such corpora.
And we have translated multiple proof assistant libraries into this format, including the ones of PVS in \cite{KMOR:pvs:17} and HOL Light in \cite{KR:hollight:14}.
Building on these developments, we are now able, for the first time, to apply generic methods --- i.e., methods that work at the MMT level --- to search for theory morphisms in these libraries.
While inspired by the ideas of \cite{NorKoh:efnrsmk07}, our design and implementation are completely novel.
In particular, the theory makes use of the rigorous language-independent definitions of \emph{theory} and \emph{theory morphism} provided by MMT, and the practical implementation makes use of the MMT system, which provides high-level APIs for these concepts.
\cite{hol_isahol_matching} applies techniques related to ours to a related problem.
Instead of theory morphisms inside a single corpus, they use machine learning to find similar constants in two different corpora.
Their results can roughly be seen as a single partial morphism from one corpus to the other.
separating the term into a hashed representation of its abstract syntax tree (which serves
as a fast plausibility check for pre-selecting matching candidates) and the list of symbol
occurrences in the term, into which the algorithm recurses.
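This two-stage representation can be sketched as follows; the term encoding and names are illustrative, not the actual MMT implementation:

```python
# Sketch of the preprocessing described above: a term is separated into a hash
# of its abstract syntax tree (symbol occurrences replaced by occurrence
# indices) and the ordered list of those symbol occurrences.
# Term encoding and names are illustrative, not the actual implementation.

def skeleton_and_symbols(term):
    """term: ('sym', name) | ('var', name) | ('app', head, (arg, ...))"""
    symbols = []

    def strip(t):
        kind = t[0]
        if kind == "sym":
            symbols.append(t[1])
            return ("sym", len(symbols))  # placeholder: occurrence index
        if kind == "var":
            return t
        return ("app", strip(t[1]), tuple(strip(a) for a in t[2]))

    shape = strip(term)
    return hash(shape), symbols

# Two terms can only be matched if their skeleton hashes agree; only then does
# the algorithm recurse into the symbol occurrence lists.
plus = ("app", ("sym", "plus"), (("var", "x"), ("var", "y")))
times = ("app", ("sym", "times"), (("var", "x"), ("var", "y")))
```

For the two toy terms above the skeleton hashes agree (same tree shape), so they are plausible matching candidates, while their symbol lists \texttt{["plus"]} and \texttt{["times"]} record what an actual view would have to map.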
Secondly, we apply this view finder in two concrete case studies: in the first, we start with an abstract theory and check whether it already exists in the same library; in the second, we write down a simple theory of commutative operators in one language to find all commutative operators in another library based on a different language.
\paragraph{Overview}
In Section~\ref{sec:prelim}, we review the basics of MMT and, as examples, the representations of the PVS and HOL Light libraries.
\usepackage[style=alphabetic,hyperref=auto,defernumbers=true,backend=bibtex,firstinits=true,maxbibnames=9,maxcitenames=3,isbn=false]{biblatex}
%\addbibresource{kwarcpubs.bib}
%\addbibresource{extpubs.bib}
%\addbibresource{kwarccrossrefs.bib}
%\addbibresource{extcrossrefs.bib}
%\addbibresource{systems.bib}
\addbibresource{biblio}
\renewbibmacro*{event+venue+date}{}
\renewbibmacro*{doi+eprint+url}{%
\iftoggle{bbx:doi}
\maketitle
\begin{abstract}
We present a method for finding morphisms between formal theories, both within as well
as across libraries based on different logical foundations.
% These morphisms can yield both (more or less formal) \emph{alignments} between individual symbols as well as truth-preserving morphisms between whole theories.
As they induce new theorems in the target theory for any theorem of the source theory, theory morphisms are high-value elements of
a modular formal library. Usually, theory morphisms are manually encoded, but this
practice requires authors who are familiar with source and target theories at the same
time, which limits the scalability of the manual approach.
\section{Preliminaries}\label{sec:prelim}
\input{prelim}
\section{Finding Theory Morphisms Intra-Library}\label{sec:viewfinder}
\input{viewfinder}
\section{Extended Use Case}\label{sec:usecase}
\input{usecase}
\section{Future Work}
\input{applications}
\section{Conclusion}\label{sec:concl}
In particular, if we represent proofs as typed terms, theory morphisms preserve theorems.
This property makes theory morphisms so valuable for structuring, refactoring, and integrating large corpora.
MMT achieves language-independence through the use of \textbf{meta-theories}: every MMT-theory may designate a previously defined theory as its meta-theory.
For example, when we represent the HOL Light library in MMT, we first write a theory $L$ for the logical primitives of HOL Light.
Then each theory in the HOL Light library is represented as a theory with $L$ as its meta-theory.
In fact, we usually go one step further: $L$ itself is a theory, whose meta-theory is a logical framework such as LF.
That allows $L$ to concisely define the syntax and inference system of HOL Light.
Thus, we assume that $\Sigma$ and $\Sigma'$ have the same meta-theory $M$.
& Theory $\Sigma$ & Morphism $\sigma:\Sigma\to\Sigma'$ \\
\hline
set of & typed constant declarations $c:E$ & assignments $c\mapsto E'$ \\
$\Sigma$-expressions $E$ & formed from $M$- and $\Sigma$-constants & mapped to $\Sigma'$ expressions \\
\hline
\end{tabular}
\end{center}
Complex expressions are of the form $\ombind{o}{x_1:t_1,\ldots,x_m:t_m}{a_1,\ldots,a_n}$, where
\item $a_i$ is an argument of $o$
\end{compactitem}
The bound variable context may be empty, and we write $\oma{o}{\vec{a}}$ instead of $\ombind{o}{\cdot}{\vec{a}}$.
\begin{newpart}{DM}
For example, the axiom $\forall x:\cn{set},y:\cn{set}.\; \cn{beautiful}(x) \wedge y \subseteq x \Rightarrow \cn{beautiful}(y)$ would be written as $\ombind{\forall}{x:\cn{set},y:\cn{set}}{\oma{\Rightarrow}{\oma{\wedge}{\oma{\cn{beautiful}}{x},\oma{\subseteq}{y,x}},\oma{\cn{beautiful}}{y}}}$ instead.
%For example, the second axiom ``Every subset of a beautiful set is beautiful'' (i.e. the term $\forall s,t : \cn{set\ }X.\;\cn{beautiful}(s)\wedge t \subseteq s \Rightarrow \cn{beautiful}(t)$) would be written as
%\[ \ombind{\forall}{s : \oma{\cn{set}}{X},t : \oma{\cn{set}}{X}}{\oma{\Rightarrow}{\oma{\wedge}{\oma{\cn{beautiful}}{s},\oma{\subseteq}{t,s}},\oma{\cn{beautiful}}{t}}} \]
\end{newpart}
Finally, we remark on a few additional features of the MMT language that are important for large-scale case studies but not critical for understanding the basic intuitions behind the results.
MMT provides a module system that allows theories to instantiate and import each other. The module system is conservative: every theory can be elaborated into one that only declares constants.
MMT constants may carry an optional definiens, in which case we write $c:E=e$.
Defined constants can be eliminated by definition expansion.
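The elaboration into flat theories mentioned above (eliminating includes by copying over declarations) can be sketched like this; the data model and the example theories are illustrative, not MMT's actual API:

```python
# Sketch of theory flattening: includes are eliminated by recursively copying
# over the constant declarations of (transitively) included theories.
# The data model and example theories are illustrative, not MMT's actual API.

theories = {
    "Magma":     {"includes": [],            "constants": ["op"]},
    "Semigroup": {"includes": ["Magma"],     "constants": ["assoc"]},
    "Monoid":    {"includes": ["Semigroup"], "constants": ["unit", "unit_ax"]},
}

def flatten(name, seen=None):
    """Return the constants of `name` with all includes elaborated away."""
    seen = set() if seen is None else seen
    if name in seen:  # copy each included theory only once
        return []
    seen.add(name)
    consts = []
    for inc in theories[name]["includes"]:
        consts += flatten(inc, seen)
    return consts + theories[name]["constants"]

# flatten("Monoid") lists all declarations in dependency order.
```

The \texttt{seen} set ensures diamond-shaped include graphs do not duplicate declarations, mirroring the conservativity of elaboration.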
%\end{center}
%\end{example
%\begin{oldpart}{FR: replaced with the above}
%For the purposes of this paper, we will work with the (only slightly simplified) grammar given in Figure \ref{fig:mmtgrammar}.
%
%\begin{figure}[ht]\centering\vspace*{-1em}
% \begin{mdframed}
% \begin{tabular}{rl@{\quad}l|l}
% \cn{Thy} $::=$ & $T[:T]=\{ (\cn{Inc})^\ast\ (\cn{Const})^\ast \}$ & Theory & \multirow{2}{*}{Modules} \\
% \cn{View} $::=$ & $T : T \to T = \{ (\cn{Ass})^\ast \}$ & View & \\ \hline
% \cn{Const} $::=$ & $C [:t] [=t]$ & Constant Declarations& \multirow{3}{*}{Declarations} \\
% \cn{Inc} $::=$ & $\incl T$ & Includes & \\
% \cn{Ass} $::=$ & $C = t$ & Assignments & \\ \hline
% $\Gamma$ $::=$ & $(x [:t][=t])^\ast$ & Variable Contexts & \multirow{2}{*}{Objects} \\
% $t$ $::=$ & $x \mid T?C \mid \oma{t}{(t)^+} \mid \ombind{t}{\Gamma}{t} $ & Terms & \\\hline\hline
% $T$ $::=$ & \cn{String} & Module Names & \multirow{3}{*}{Strings} \\
% $C$ $::=$ & \cn{String} & Constant Names & \\
% $x$ $::=$ & \cn{String} & Variable Names & \\
% \end{tabular}
% \end{mdframed}\vspace*{-.5em}
% \caption{The MMT Grammar}\label{fig:mmtgrammar}\label{fig:mmtgrammar}\vspace*{-1em}
%\end{figure}
%
%In more detail:
%\begin{itemize}
% \item Theories have a module name, an optional \emph{meta-theory} and a body consisting of \emph{includes} of other theories
% and a list of \emph{constant declarations}.
% \item Constant declarations have a name and two optional term components; a \emph{type} ($[:t]$), and a \emph{definition} ($[=t]$).
% \item Views $V : T_1 \to T_2$ have a module name, a domain theory $T_1$, a codomain theory $T_2$ and a body consisting of assignments
% $C = t$.
% \item Terms are either
% \begin{itemize}
% \item variables $x$,
% \item symbol references $T?C$ (referencing the constant $C$ in theory $T$),
% \item applications $\oma{f}{a_1,\ldots,a_n}$ of a term $f$ to a list of arguments $a_1,\ldots,a_n$ or
% \item binding application $\ombind{f}{x_1[:t_1][=d_1],\ldots,x_n[:t_n][=d_n]}{b}$, where $f$ \emph{binds} the variables
% $x_1,\ldots,x_n$ in the body $b$ (representing binders such as quantifiers, lambdas, dependent type constructors etc.).
% \end{itemize}
%\end{itemize}
%The term components of a constant in a theory $T$ may only contain symbol references to constants declared previously in $T$, or that are declared in some theory $T'$ (recursively) included in $T$ (or its meta-theory, which we consider an \emph{include} as well).
%We can eliminate all includes in a theory $T$ by simply copying over the constant declarations in the included theories; we call this process \emph{flattening}. We will often and without loss of generality assume a theory to be \emph{flat} for convenience.
%
%An assignment in a view $V:T_1\to T_2$ is syntactically well-formed if for any assignment $C=t$ contained, $C$ is a constant declared in the flattened domain $T_1$ and $t$ is a syntactically well-formed term in the codomain $T_2$. We call a view \emph{total} if all \emph{undefined} constants in the domain have a corresponding assignment and \emph{partial} otherwise.
%
%\end{oldpart}
\subsection{Proof Assistant Libraries in MMT}\label{sec:oaf}
As part of the OAF project~\cite{OAFproject:on}, we have imported several proof assistant libraries into the MMT system. To motivate some of the design choices made in this paper, we will outline the general procedure behind these imports.
\paragraph{} First, we formalize the core logical foundation of the system. We do so by using the logical framework LF~\cite{lf} (at its core a dependently-typed lambda calculus) and various extensions thereof, which are implemented in and supported by the MMT system. In LF, we can formalize the foundational primitives using the usual judgments-as-types and higher-order abstract syntax encodings -- hence theorems and axioms are represented as constants with a type $\vdash P$ for some proposition $P$, and primitive constructs like lambdas are formalized as LF-functions taking LF-lambda-expressions -- which serve as a general encoding of any variable binders -- as arguments.
The resulting formalizations are then used as meta-theory for imports of the libraries of the system under consideration. This results in a theory graph as in Figure \ref{fig:oaf}.
\begin{newpart}{moved}
\subsection{Normalization}\label{sec:normalizeintra}
When elaborating definitions, it is important to consider that this may also reduce the number of results, if both theories use similar abbreviations for complex terms, or the same concept is declared axiomatically in one theory, but definitionally in the other. For that reason, we can allow \textbf{several abstract syntax trees for the same constant}, such as one with definitions expanded and one ``as is''.
Similarly, certain idiosyncrasies -- such as PVS's common usage of theory parameters -- call for not just matching symbol references, but also variables or possibly even complex expressions. To handle these situations, we additionally allow for \textbf{holes} in the constant lists of an abstract syntax tree, which may be unified with any other symbol or hole, but are not recursed into. The subterms that are to be considered holes can be marked as such during preprocessing.
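The several-trees-per-constant idea can be sketched as follows (toy term encoding with symbols as strings; the definitions table is hypothetical):

```python
# Sketch of the normalization described above: for each constant we keep
# several term variants -- the term "as is" and one with defined constants
# expanded -- so that axiomatic and definitional formulations can still match.
# Term encoding and the definitions table are illustrative only.

definitions = {"sq": ("app", "times", ("var", "x"), ("var", "x"))}  # sq := x*x

def expand(term):
    """Recursively replace defined symbols by their definiens (no beta step)."""
    if isinstance(term, str):
        return expand(definitions[term]) if term in definitions else term
    return tuple(expand(t) for t in term)

def variants(term):
    """Abstract syntax trees kept for matching: 'as is' and fully expanded."""
    expanded = expand(term)
    return [term] if expanded == term else [term, expanded]
```

A constant whose term mentions \texttt{sq} thus gets two abstract syntax trees, so it can be matched both against a library that also abbreviates squaring and against one that spells it out.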