after the engineer chooses $p=q=2$ (Cauchy-Schwarz inequality).
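The specialization the engineer needed can be stated compactly. The following is a sketch for illustration: the integration interval $[0,T]$ and the reading of $V$ and $I$ as time-dependent signals are assumptions, since the original setup of the example is not shown here. H\"older's inequality for conjugate exponents $p,q$ with $1/p+1/q=1$ reads
```latex
\[
  \int_0^T \lvert V(t)\, I(t)\rvert \,dt
  \;\le\;
  \Bigl(\int_0^T \lvert V(t)\rvert^p \,dt\Bigr)^{1/p}
  \Bigl(\int_0^T \lvert I(t)\rvert^q \,dt\Bigr)^{1/q},
\]
% choosing p = q = 2 specializes this to the Cauchy-Schwarz inequality:
\[
  \int_0^T \lvert V(t)\, I(t)\rvert \,dt
  \;\le\;
  \Bigl(\int_0^T V(t)^2 \,dt\Bigr)^{1/2}
  \Bigl(\int_0^T I(t)^2 \,dt\Bigr)^{1/2}.
\]
```
so the product $VI$ is bounded by two one-variable quantities that can be estimated separately.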
Estimating the individual values of $V$ and $I$ is a much simpler problem. Admittedly, Google would have found the information by querying for ``\texttt{Cauchy-Schwarz H\"older}'', but that is the crucial information the engineer was missing in the first place.
This example already shows that we need to handle mathematical data (deep FAIR; here for the H\"older and Cauchy-Schwarz inequalities) in all their complexity to improve the state of the art.
...
\item Fostering the innovation potential by opening up the EOSC ecosystem of e-infrastructure service providers to new innovative actors.
\end{compactenum}}
The EOSC Hub has so far mostly focused on \emph{generic} Open Science services, i.e., services that can be applied uniformly to all datasets from all disciplines.
While this has led to a very powerful service offering, it has gaps when it comes to the needs of specific scientific communities.
This applies in particular to mathematical data, including both data from mathematics itself and mathematically structured data from other disciplines.
\textbf{Virtually the entire research data cycle for mathematical data requires semantics-aware services}, i.e., services that are aware of and can leverage the internal structure of the datasets instead of treating the entire dataset as a whole.
...
The \TheProject services respond directly to the needs of those communities, in particular the coherent integration, systematic deployment, and general improvement and scaling-up of these technologies.
While initially driven by these needs, \TheProject eventually builds more than the sum of its parts.
By integrating mathematical datasets via a uniform standard (see \WPref{foundations}) and mathematical services through a uniform platform (see \WPref{services}), \textbf{we make them available to much larger interdisciplinary communities}.
In particular, \TheProject includes the development of multiple client applications (see \WPref{services}) for our services.
Crucially, these are integrated with existing widely-used systems, thus making it possible for users from other disciplines and industry to discover our services and integrate them into their existing workflows.
\TheProject is \textbf{strongly committed to providing a prototype service that can be readily integrated with the EOSC} (see \WPref{services}).
To maximize our impact, we ensure that many representative and well-known mathematical datasets, like the ones surveyed in \cite{bercic:cmo:table,Bercic:cmo:wiki}, are already deployed on this prototype service (see Figure~\ref{fig:datasets} and \WPref{cases}).
Besides increasing the popularity of the EOSC, this will provide a well-greased pathway for other users to share their data via the EOSC.
In particular, this can salvage the many large, practically used datasets that are currently generated and then lost soon thereafter.