diff --git a/README.md b/README.md
index 482e64001a8611dd678f157a2f1dee2eb4ec7870..d474d8621dbf4fb139abdfe63deecdc8542ee0dd 100644
--- a/README.md
+++ b/README.md
@@ -1,12 +1,14 @@
 Directories
 ===========
 
-* `/timeline`: Goals and results for each week.
+* `/doc`: Project documentation.
 
-* `/ulo`: Playing around w/ the results of the ULO paper [1, 2]
+* `/timeline`: Goals and results for each week.
 
 * `/graphdb`: Playing with the RDF4J [3] API of GraphDB [4].
 
+* `/ulo`: Playing around w/ the results of the ULO paper [1, 2]
+
 References
 ==========
 
diff --git a/doc/components.md b/doc/components.md
new file mode 100644
index 0000000000000000000000000000000000000000..b92bc64ee65a884b92d75cac396277fd7784f312
--- /dev/null
+++ b/doc/components.md
@@ -0,0 +1,57 @@
+Components
+==========
+
+This is a rough sketch of the components involved in this project.
+See `components.png` for an illustration of how these components
+fit together.
+
+`Collector`
+-----------
+
+* Given some source of ULO/RDF files (Git repository, HTTP server,
+  local file system), the `Collector` processes/cleans up these files
+  and forwards them to the `Importer`.
+
+* Implement the core functionality as a library with command line and
+  web front ends.
+
+* Can be implemented in any language. I'll probably pick Go as I'm
+  pretty productive in it.
+
+
+`Harvester`
+-----------
+
+* Low priority. Just an idea I had, and it might go against the idea
+  of using MathHub as a centralized place for data.
+
+* Converts arbitrary source data (e.g. Coq) to ULO/RDF.
+
+* The generated RDF is *volatile*: it does not need to be stored in
+  any repository. Rather, it is forwarded directly to a `Collector`.
+
+* I'm not sure whether this makes sense, as it might be difficult to
+  track changes and so on.
+
+* Can be implemented in any language that makes sense for the given
+  source format.
+
+`Importer`
+----------
+
+* Essentially a wrapper around a database, written in the language
+  that best fits the database. In particular, GraphDB only has good
+  Java/JVM programming support.
+
+* Accessed via a simple file upload API: you upload a file and get
+  back a path that shows the current state of the import.
+
+`Endpoint`
+----------
+
+* Again, like the `Importer`, this is a wrapper around the database.
+
+* Might be optional if applications access the database directly.
+  Certainly when it comes to querying I will not introduce a custom
+  API, as querying is a far more complicated problem than a simple
+  import.
diff --git a/doc/components.png b/doc/components.png
new file mode 100644
index 0000000000000000000000000000000000000000..85aeb968b3ee332dfdd5f550c26df848199795b5
Binary files /dev/null and b/doc/components.png differ