Commit e1c77e0b authored by Michael Kohlhase's avatar Michael Kohlhase

Merge branch 'master' of gl.kwarc.info:SIGMathLing/website

parents 61ea0dc6 7bd8350f
Pipeline #1734 passed in 2 minutes and 36 seconds
---
layout: post
title: arXiv 2019 Data Set and Embeddings Released
---
The 2019 release of the arXMLiv data set has been published.
Details can be found on the corresponding [data set resource page](/resources/arxmliv-dataset-082019/) and [embeddings resource page](/resources/arxmliv-embeddings-082019).
The content of this data set is licensed to [SIGMathLing members](/member/) for research
and tool development purposes subject to the [SIGMathLing Non-Disclosure-Agreement](/nda/).
......@@ -7,8 +7,9 @@ Part of the [arXMLiv](https://kwarc.info/projects/arXMLiv/) project at the [KWAR
### Author
- Deyan Ginev
### Current release
- 08.2017
### Release
- This page documents: 08.2017
- Latest: [08.2019](/resources/arxmliv-dataset-082019/)
### Accessibility and License
The content of this Dataset is licensed to [SIGMathLing members](/member/) for research
......
......@@ -7,8 +7,9 @@ Part of the [arXMLiv](https://kwarc.info/projects/arXMLiv/) project at the [KWAR
### Author
- Deyan Ginev
### Current release
- 08.2018
### Release
- This page documents: 08.2018
- Latest: [08.2019](/resources/arxmliv-dataset-082019/)
### Accessibility and License
The content of this Dataset is licensed to [SIGMathLing members](/member/) for research
......
---
layout: page
title: arXMLiv 08.2019 - An HTML5 dataset for arXiv.org
---
Part of the [arXMLiv](https://kwarc.info/projects/arXMLiv/) project at the [KWARC](https://kwarc.info/) research group
### Author
- Deyan Ginev
### Release
- This page documents: 08.2019
- Latest: [08.2019](/resources/arxmliv-dataset-082019/)
### Accessibility and License
The content of this Dataset is licensed to [SIGMathLing members](/member/) for research
and tool development purposes.
Access is restricted to [SIGMathLing members](/member/) under the
[SIGMathLing Non-Disclosure-Agreement](/nda/), since for most [arXiv](http://arxiv.org)
articles the right of distribution was granted (or assumed) only for arXiv itself.
### Contents
- 1,374,539 HTML5 documents
- Four archive bundles, split by LaTeXML conversion severity
- derivative **word embeddings** and a **token model** are available separately [here](/resources/arxmliv-embeddings-082019/)
| subset ID | number of documents | size archived | size unpacked |
| :--- | ---: | ---: | ---: |
| no_problem | 150,701 | 7.4 GB | 57 GB |
| warning_1 | 500,000 | 75 GB | 641 GB |
| warning_2 | 328,127 | 50 GB | 429 GB |
| error | 395,711 | 60 GB | 521 GB |
| subset file name | MD5 |
| :--- | :--- |
| `arXMLiv_08_2019_no_problem.zip` | `b70535d607ec916d9f6456b2b1fef421` |
| `arXMLiv_08_2019_warning_1.zip` | `fd4496504020a256f4e4f4200cb731fc` |
| `arXMLiv_08_2019_warning_2.zip` | `5d3ce062a768ce439bd7447f8f011e2b` |
| `arXMLiv_08_2019_error.zip` | `74c91c3b187d151f8bce7bb9936c050f` |
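To guard against incomplete Git LFS downloads, the bundles can be checked against the MD5 table above. A minimal sketch (the assumption that the zips sit in the current working directory is ours; adjust paths as needed):

```python
import hashlib
import os

# MD5 sums copied from the table above.
EXPECTED = {
    "arXMLiv_08_2019_no_problem.zip": "b70535d607ec916d9f6456b2b1fef421",
    "arXMLiv_08_2019_warning_1.zip": "fd4496504020a256f4e4f4200cb731fc",
    "arXMLiv_08_2019_warning_2.zip": "5d3ce062a768ce439bd7447f8f011e2b",
    "arXMLiv_08_2019_error.zip": "74c91c3b187d151f8bce7bb9936c050f",
}

def md5_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MB chunks; the archives are tens of GB."""
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

for name, expected in EXPECTED.items():
    if os.path.exists(name):  # only verify bundles present locally
        print(name, "OK" if md5_of(name) == expected else "MISMATCH")
```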
### Description
This is the third public release of the arXMLiv dataset generated by the [KWARC](https://kwarc.info/) research group. It contains 1,374,539 HTML5 scientific documents from the arXiv.org preprint archive, converted from their respective TeX sources, an 11% increase in available articles over the 08.2018 release.
The dataset is segmented into four subsets corresponding to the LaTeXML conversion severity levels: `no_problem`, two `warning` bundles, and `error`.
- The `no_problem` set had no obvious challenges in conversion and is the safest, most reliable subset
- The `warning_1` and `warning_2` sets cover a variety of minor issues, from mathematical expressions unparseable by the LaTeXML grammar to missing LaTeX packages with no apparent use in the document. The vast majority of these documents should have both a good-looking rendering and data consistency for e.g. NLP tasks.
- The `error` set covers all conversions which successfully generated an HTML5 document but had major issues during the conversion. Examples range from unknown macros (due to limited LaTeX coverage) and unexpected LaTeX syntax to math/text mode mismatches, as well as real LaTeX errors in the original sources. This subset should be used with extra caution, though it should still preserve overall data consistency and could be safely used for e.g. generating word embeddings.
This version of the dataset has had minimal manual quality control, and we offer no additional warranty beyond the reported LaTeXML severity.
We welcome community feedback on all of: data quality, representation issues, need for auxiliary resources (e.g. figures, token models), as well as organization and archival best practices. The conversion, build system, and data redistribution efforts are all ongoing projects at the [KWARC research group](http://kwarc.info).
### Citing this Resource
The dataset should be referenced in all academic publications that present results
obtained with its help. The reference should contain the identifier `arXMLiv:08.2019` in
the title, the author, year, a reference to SIGMathLing, and the URL of the resource
description page. For convenience, we supply some records for bibTeX and EndNote below. To
cite a particular part of the dataset, use the subset identifiers in the citation,
e.g. `\cite[no_problem subset]{arXMLiv:08.2019}`, or just explain it in the text using the
concrete identifier.
#### pure bibTeX
```
@MISC{SML:arXMLiv:08.2019,
  author = {Deyan Ginev},
  title = {arXMLiv:08.2019 dataset, an HTML5 conversion of arXiv.org},
  howpublished = {hosted at \url{https://sigmathling.kwarc.info/resources/arxmliv-dataset-082019/}},
  note = {SIGMathLing -- Special Interest Group on Math Linguistics},
  year = 2019
}
```
#### bibTeX for the bibLaTeX package (preferred)
```
@online{SML:arXMLiv:08.2019,
  author = {Deyan Ginev},
  title = {arXMLiv:08.2019 dataset, an HTML5 conversion of arXiv.org},
  url = {https://sigmathling.kwarc.info/resources/arxmliv-dataset-082019/},
  note = {SIGMathLing -- Special Interest Group on Math Linguistics},
  year = 2019
}
```
#### EndNote
```
%0 Generic
%T arXMLiv:08.2019 dataset, an HTML5 conversion of arXiv.org
%A Ginev, Deyan
%D 2019
%I hosted at https://sigmathling.kwarc.info/resources/arxmliv-dataset-082019/
%F SML:arXMLiv:08.2019b
%O SIGMathLing – Special Interest Group on Math Linguistics
```
### Download
[Download link](https://gl.kwarc.info/SIGMathLing/dataset-arXMLiv-08-2019)
([SIGMathLing members](/member/) only)
### Generated via
- [LaTeXML 0.8.4](https://github.com/brucemiller/LaTeXML/releases/tag/v0.8.4),
- [CorTeX 0.4.2](https://github.com/dginev/CorTeX/releases/tag/0.4.2)
......@@ -6,8 +6,10 @@ Part of the [arXMLiv](https://kwarc.info/projects/arXMLiv/) project at the [KWAR
### Author
- Deyan Ginev
### Current release
- 08.2017
### Release
- This page documents: 08.2017
- Latest: [08.2019](/resources/arxmliv-embeddings-082019/)
### Accessibility and License
The content of this Dataset is licensed to [SIGMathLing members](/member/) for research
......
......@@ -7,8 +7,9 @@ Part of the [arXMLiv](https://kwarc.info/projects/arXMLiv/) project at the [KWAR
### Author
- Deyan Ginev
### Current release
- 08.2018
### Release
- This page documents: 08.2018
- Latest: [08.2019](/resources/arxmliv-embeddings-082019/)
### Accessibility and License
The content of this Dataset is licensed to [SIGMathLing members](/member/) for research
......@@ -148,13 +149,12 @@ python2 eval/python/distance.py --vocab_file vocab.arxmliv.txt --vectors_file gl
1. **lattice**
```
Word: lattice Position in vocabulary: 488
Word Cosine distance
---------------------------------------------------------
lattices 0.853103
......@@ -167,16 +167,15 @@ Word: lattice Position in vocabulary: 488
finite 0.614720
spacing 0.603067
```
2. **entanglement**
```
Word: entanglement Position in vocabulary: 1568
Word Cosine distance
---------------------------------------------------------
entangled 0.780425
......@@ -203,32 +202,30 @@ Word: entanglement Position in vocabulary: 1568
coherence 0.606859
nonlocality 0.601337
```
3. **forgetful**
```
Word: forgetful Position in vocabulary: 11740
Word Cosine distance
---------------------------------------------------------
functor 0.723472
functors 0.656184
morphism 0.598965
```
4. **eigenvalue**
```
Word: eigenvalue Position in vocabulary: 1448
Word Cosine distance
---------------------------------------------------------
eigenvalues 0.893073
......@@ -258,16 +255,15 @@ Word: eigenvalue Position in vocabulary: 1448
eigenmodes 0.604839
```
5. **riemannian**
```
Word: riemannian Position in vocabulary: 2285
Word Cosine distance
---------------------------------------------------------
manifolds 0.765827
......@@ -302,4 +298,4 @@ Word: riemannian Position in vocabulary: 2285
submanifolds 0.612716
geodesic 0.604488
```
---
layout: page
title: arXMLiv 08.2019 - Word Embeddings; Token Model
---
Part of the [arXMLiv](https://kwarc.info/projects/arXMLiv/) project at the [KWARC](https://kwarc.info/) research group
### Author
- Deyan Ginev
### Release
- This page documents: 08.2019
- Latest: [08.2019](/resources/arxmliv-embeddings-082019/)
### Accessibility and License
The content of this Dataset is licensed to [SIGMathLing members](/member/) for research
and tool development purposes.
Access is restricted to [SIGMathLing members](/member/) under the
[SIGMathLing Non-Disclosure-Agreement](/nda/), since for most [arXiv](http://arxiv.org)
articles the right of distribution was granted (or assumed) only for arXiv itself.
### Contents
- A 15.2 billion token model for the arXMLiv 08.2019 dataset, including subformula lexemes
- `token_model.zip`
- 300 dimensional GloVe word embeddings for the arXMLiv 08.2019 dataset
- `glove.arxmliv.15B.300d.zip` and `vocab.arxmliv.zip`
- the main arXMLiv dataset is available separately [here](/resources/arxmliv-dataset-082019/)
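The embeddings ship in the standard GloVe text format (one word followed by its 300 float components per line), so they can be loaded without special tooling. A minimal loader sketch (the unpacked file name is taken from the zip contents listed above; the defensive skip of malformed lines is our assumption):

```python
def load_glove(path, dim=300):
    """Parse GloVe's text output: `word v1 v2 ... v_dim` per line."""
    vectors = {}
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            parts = line.rstrip().split(" ")
            word, values = parts[0], parts[1:]
            if len(values) != dim:  # skip malformed lines defensively
                continue
            vectors[word] = [float(v) for v in values]
    return vectors

# Hypothetical usage once the zip is unpacked:
# vectors = load_glove("glove.arxmliv.15B.300d.txt")
# len(vectors["lattice"]) == 300
```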
#### Token Model Statistics
| subset | documents | paragraphs |
| ---------- | --------: | ---------: |
| no_problem | 150,701 | 6,071,920 |
| warning_1 | 500,000 | 36,130,694 |
| warning_2 | 328,127 | 24,285,351 |
| error | 395,711 | 31,155,136 |
| complete | 1,374,539 | 97,643,101 |
| subset | words | formulas | inline cite |
| ---------- | ------------: | ----------: | ----------: |
| no_problem | 619,051,536 | 25,210,637 | 4,248,840 |
| warning_1 | 2,917,283,935 | 212,113,899 | 18,553,611 |
| warning_2 | 1,937,516,458 | 140,094,708 | 12,590,335 |
| error | 2,307,007,544 | 163,290,748 | 14,200,445 |
| complete | 7,780,859,473 | 540,709,992 | 49,593,231 |
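As a quick consistency check, the per-subset counts in the two tables above do sum to their respective `complete` rows:

```python
# Figures copied verbatim from the token model statistics tables above.
documents = [150_701, 500_000, 328_127, 395_711]
paragraphs = [6_071_920, 36_130_694, 24_285_351, 31_155_136]
words = [619_051_536, 2_917_283_935, 1_937_516_458, 2_307_007_544]
formulas = [25_210_637, 212_113_899, 140_094_708, 163_290_748]
cites = [4_248_840, 18_553_611, 12_590_335, 14_200_445]

assert sum(documents) == 1_374_539
assert sum(paragraphs) == 97_643_101
assert sum(words) == 7_780_859_473
assert sum(formulas) == 540_709_992
assert sum(cites) == 49_593_231
print("all subset totals match the complete row")
```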
#### GloVe Model Statistics
| subset | tokens | unique words | unique words (freq 5+) |
| ---------- | -------------: | -----------: | ----------------------: |
| complete | 15,214,964,673 | 2,868,070 | 1,013,106 |
### Citing this Resource
Please cite the main dataset when using the word embeddings, as they are generated and distributed jointly. [Instructions here](/resources/arxmliv-dataset-082019/#citing-this-resource)
### Download
[Download link](https://gl.kwarc.info/SIGMathLing/embeddings-arXMLiv-08-2019)
([SIGMathLing members](/member/) only)
### Generated via
- [llamapun 0.3.3](https://github.com/KWARC/llamapun/releases/tag/0.3.3),
- [GloVe 1.2, 2019](https://github.com/stanfordnlp/GloVe/tree/07d59d5e6584e27ec758080bba8b51fce30f69d8)
### Generation Parameters
* the token model is distributed as four subsets (no_problem, warning_1, warning_2 and error); the complete model is derived via:
```
cat token_model_no_problem.txt \
token_model_warning_1.txt token_model_warning_2.txt \
token_model_error.txt > token_model_complete.txt
```
* [llamapun v0.3.3](https://github.com/KWARC/llamapun/releases/tag/0.3.3), `corpus_token_model` example used for token model extraction
* processed logical paragraphs, abstracts, captions and keywords, ignoring all other content (e.g. tables, bibliography)
* excluded paragraphs containing LaTeXML errors (marked via an `ltx_ERROR` HTML class), as well as paragraphs containing words over 25 characters
* used llamapun math-aware word tokenization, with sub-formula math lexemes (improved robustness since 2018)
* marked up inline citations replaced with `citationelement` token
* numeric literals replaced with `NUM` token (both in text and formulas)
* internal references replaced with `ref` token (e.g. `Figure ref`)
* textual punctuation is included as-is, while mathematical punctuation is annotated via the LaTeXML-generated lexemes.
* words are downcased, while math content is kept cased, to mitigate lexical ambiguity.
* [GloVe repository at sha 07d59d](https://github.com/stanfordnlp/GloVe/tree/07d59d5e6584e27ec758080bba8b51fce30f69d8)
* `build/vocab_count -min-count 5`
* `build/cooccur -memory 48.0 -window-size 15`
* `build/shuffle -memory 48.0`
* `build/glove -threads 30 -x-max 100 -iter 50 -vector-size 300 -binary 2`
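The actual extraction was done with llamapun's `corpus_token_model` example; as a rough illustration of the normalization rules listed above (not the real implementation, and ignoring the math-aware tokenization), a paragraph-level filter might look like:

```python
import re

MAX_WORD_LEN = 25  # paragraphs with longer words were excluded

def preprocess_paragraph(text):
    """Illustrative approximation of the token model normalization:
    numeric literals -> NUM, words downcased; returns None for
    paragraphs that would have been excluded entirely."""
    tokens = text.split()
    if any(len(t) > MAX_WORD_LEN for t in tokens):
        return None
    out = []
    for token in tokens:
        if re.fullmatch(r"[0-9]+(\.[0-9]+)?", token):
            out.append("NUM")  # numeric literal replacement
        else:
            out.append(token.lower())
    return " ".join(out)

print(preprocess_paragraph("We sampled 150 lattices"))  # we sampled NUM lattices
```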
### Examples and baselines
#### GloVe in-built evaluation (non-expert tasks e.g. language, relationships, geography)
1. New 08.2019 model
* Total accuracy: 38.30% (7017/18322)
* Highest score: "gram3-comparative.txt", 78.60% (1047/1332)
2. 2018 [GloVe embeddings](/resources/arxmliv-embeddings-082018/)
* Total accuracy: 35.48% (6298/17750)
* Highest score: "gram3-comparative.txt", 76.65% (1021/1332)
3. Demo baseline: text8 (first 100M characters of Wikipedia)
* Total accuracy: 23.62% (4211/17827)
* Highest score: "gram6-nationality-adjective.txt", 58.65% (892/1521)
#### Measuring word analogy
In a cloned GloVe repository, start via:
```
python2 eval/python/word_analogy.py --vocab_file vocab.arxmliv.txt --vectors_file glove.arxmliv.15B.300d.txt
```
1. `abelian` is to `group` as `disjoint` is to `?`
* Top hit: `union`, cosine distance `0.618853`
2. `convex` is to `concave` as `positive` is to `?`
* Top hit: `negative`, cosine distance `0.806679`
3. `finite` is to `infinite` as `abelian` is to `?`
* Top hit: `nonabelian`, cosine distance `0.698089`
4. `quantum` is to `classical` as `bottom` is to `?`
* Top hit: `middle`, cosine distance `0.769180`
* Close second: `top`, cosine distance `0.765937`
5. `eq` is to `proves` as `figure` is to `?`
* Top hit: `showing`, cosine distance `0.689938`
6. `italic_x` is to `italic_y` as `italic_a` is to `?`
* Top hit: `italic_b`, cosine distance `0.915467`
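Under the hood, `word_analogy.py` essentially searches for the word whose vector is most similar to v(b) − v(a) + v(c). A self-contained sketch of that search (the toy 2-d vectors are ours, chosen so the analogy works out; the real model is 300-dimensional):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    norm = sqrt(sum(x * x for x in u)) * sqrt(sum(y * y for y in v))
    return dot / norm if norm else 0.0

def analogy(vectors, a, b, c):
    """Return the word d maximizing cosine(v[d], v[b] - v[a] + v[c]),
    excluding the three query words themselves."""
    target = [vb - va + vc
              for va, vb, vc in zip(vectors[a], vectors[b], vectors[c])]
    candidates = (w for w in vectors if w not in (a, b, c))
    return max(candidates, key=lambda w: cosine(vectors[w], target))

toy = {
    "convex": [1.0, 0.0], "concave": [-1.0, 0.0],
    "positive": [1.0, 1.0], "negative": [-1.0, 1.0],
    "lattice": [0.0, 1.0],
}
print(analogy(toy, "convex", "concave", "positive"))  # negative
```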
#### Nearest word vectors
In a cloned GloVe repository, start via:
```
python2 eval/python/distance.py --vocab_file vocab.arxmliv.txt --vectors_file glove.arxmliv.15B.300d.txt
```
1. **lattice**
```
Word: lattice Position in vocabulary: 515
Word Cosine distance
---------------------------------------------------------
lattices 0.865888
honeycomb 0.677004
finite 0.650216
triangular 0.632165
crystal 0.627800
sublattice 0.619792
cubic 0.609822
```
2. **entanglement**
```
Word: entanglement Position in vocabulary: 1603
Word Cosine distance
---------------------------------------------------------
entangled 0.803443
multipartite 0.744602
negativity 0.698730
concurrence 0.693703
tripartite 0.669840
discord 0.660572
fidelity 0.657391
quantum 0.655452
teleportation 0.628923
qubits 0.627504
bipartite 0.622791
entangling 0.621139
nonlocality 0.619905
qubit 0.615623
entropy 0.601869
```
3. **forgetful**
```
Word: forgetful Position in vocabulary: 12259
Word Cosine distance
---------------------------------------------------------
functor 0.749501
functors 0.686806
morphism 0.632394
morphisms 0.610589
```
4. **eigenvalue**
```
Word: eigenvalue Position in vocabulary: 1527
Word Cosine distance
---------------------------------------------------------
eigenvalues 0.903885
eigenvector 0.781512
eigenvectors 0.774260
eigenfunction 0.751316
eigenfunctions 0.707166
eigenspace 0.683321
eigen 0.657366
laplacian 0.649859
matrix 0.645466
eigenmode 0.628024
operator 0.620245
eigenmodes 0.610912
largest 0.607076
eigenstates 0.603603
```
5. **riemannian**
```
Word: riemannian Position in vocabulary: 2285
Word Cosine distance
---------------------------------------------------------
manifold 0.771125
manifolds 0.770408
metric 0.709820
finsler 0.699053
curvature 0.672640
ricci 0.667813
riemmanian 0.661929
euclidean 0.645167
metrics 0.641648
submanifold 0.638131
kahler 0.635828
riemanian 0.626252
noncompact 0.623363
geodesic 0.620316
submanifolds 0.613058
endowed 0.608804
foliation 0.601818
```
......@@ -3,6 +3,10 @@ layout: page
title: SIGMathLing - arXMLiv Project Datasets and Resources
---
## 2019
1. [arXMLiv corpus, 08.2019 release](/resources/arxmliv-dataset-082019/)
1. [arXMLiv word embeddings, 08.2019 release](/resources/arxmliv-embeddings-082019)
## 2018
1. [arXMLiv corpus, 08.2018 release](/resources/arxmliv-dataset-082018/)
1. [arXMLiv word embeddings, 08.2018 release](/resources/arxmliv-embeddings-082018)
......
......@@ -3,6 +3,8 @@ layout: page
title: SIGMathLing - Datasets and Resources
---
## Resources hosted on the SIGMathLing Repository
1. [arXMLiv corpus, 08.2019 release](/resources/arxmliv-dataset-082019/)
1. [arXMLiv word embeddings, 08.2019 release](/resources/arxmliv-embeddings-082019)
1. [arXMLiv statements dataset, 08.2018 release](/resources/arxmliv-statements-082018)
1. [arXMLiv word embeddings, 08.2018 release](/resources/arxmliv-embeddings-082018)
1. [arXMLiv corpus, 08.2018 release](/resources/arxmliv-dataset-082018/)
......
......@@ -7,13 +7,13 @@ Recall that {{site.title}} maintains [a bouquet of services](services/); here we
### Resource Repositories
We have a [{{site.title}} group](http://gl.kwarc.info/SIGMathLing) on the [GitLab](https://en.wikipedia.org/wiki/GitLab) server [gl.kwarc.info](http://gl.kwarc.info), where we have hosted a range of data repositories.
This allows us to use Git permissions for access control and the GitLab permission UI for management.
We estimate that for the first two years (2017-2019) {{site.title}} will have below 25 members (reducing the traffic) and below 5 TB data sets.
gl.kwarc.info should be able to serve that given that most data sets will be served via [Git LFS](https://git-lfs.github.com/).
Should space or traffic become a problem for the KWARC servers to handle, we will try to raise money for a more scalable solution.
[Zenodo](http://zenodo.org) has officially turned down hosting the SIGMathLing resources due to the large volume of data, but we are open to exploring alternative providers - feel free to reach out!
### Standardizing Datasets and Resources
......@@ -26,7 +26,8 @@ We will need to develop standards for representing, classifying, describing, and
* an evaluation data set (gold standard)?
* what is the quality? f-measure,
* what is the license.
3. *Identification*: we are looking into obtaining a DOI data identifier for each resource
4. *Citation*: The idea is to have a "landing page" per resource that addresses all
the points in 1. and 2., as well as the authors that can be cited. The landing page
should also have pre-made bibTeX (and possibly EndNote) entries to make citations
easier.
......@@ -42,4 +43,3 @@ Currently, this is just a manually curated [page on the {{site.title}} web site]
### Math Analysis Blackboard
MK would like to develop and publish an annotation schema (using the KAT schema as a starting point) and establish a math result triple store that manages all of these. The technical details of how best to do this are still open, but Deyan is quite skeptical.