Call for Resources Track Papers
Resources are of paramount importance as they foster scientific advancement. For example, the DBpedia resource had a major influence on the Semantic Web community by enabling the Linked (Open) Data movement. Validating a research hypothesis or answering a research question often goes hand in hand with developing new resources that support these achievements. These resources include, among others, datasets, benchmarks, workflows, and software. Sharing them is key to allowing other researchers to compare new results, reproduce experimental settings, and explore new lines of research, in accordance with the FAIR principles for scientific data management. Yet, resources themselves rarely get the same recognition as the scientific advances they facilitate.
The ISWC 2019 Resources Track aims to promote the sharing of resources including, but not restricted to: datasets, ontologies/vocabularies, ontology design patterns, evaluation benchmarks or methods, software tools/services, APIs and software frameworks, workflows, crowdsourcing task designs, protocols, methodologies and metrics, that have contributed to the generation of novel scientific work. In particular, we encourage the sharing of such resources following well-established best practices within the Semantic Web community. This track calls for contributions that provide a concise and clear description of a resource and its usage.
A typical Resources Track paper focuses on reporting on one of the following categories of resources:
- Datasets produced
- to support specific evaluation tasks;
- to support novel research methods;
- by novel algorithms;
- Ontologies, vocabularies and ontology design patterns, with a focus on describing the modelling process underlying their creation;
- Benchmarking activities focusing on datasets and algorithms for comprehensible and systematic evaluation of existing and future systems;
- Reusable research prototypes / services supporting a given research hypothesis;
- Community-shared software frameworks that can be extended or adapted to support scientific study and experimentation;
- Scientific and experimental workflows used and reused in practical studies;
- Crowdsourcing task designs that have been used and can be (re)used for building resources such as gold standards and the like;
- Protocols for conducting experiments and studies;
- Novel evaluation methodologies and metrics, and their demonstration in an experimental study.
Differentiation from the other tracks
We strongly recommend that prospective authors carefully check the calls of the other main tracks of the conference in order to identify the optimal track for their submission. Papers that propose new algorithms and architectures should continue to be submitted to the regular research track, whilst papers that describe the use of Semantic Web technologies in practical settings should be submitted to the in-use track. When new reusable resources are produced in the process of achieving these results, e.g. datasets, ontologies, workflows, etc., they are a suitable subject for submission to the Resources Track.
The program committee will consider the quality of both the resource and the paper in its review process. Therefore, authors must ensure unfettered access to the resource during the review process by citing the resource at a permanent location. For example, data should be available in a repository such as FigShare, Zenodo, or a domain-specific repository, and software code should be available in a public code repository such as GitHub or Bitbucket. In exceptional cases, when it is not possible to make the resource public, authors must provide anonymous access to the resource for the reviewers.
We welcome the submission both of established resources, which have a user community beyond the authors, and of new resources, which may not yet demonstrate established reuse but have sufficient evidence and motivation for claiming potential adoption. In the first case, authors are required to provide evidence and statistics about the resource's adoption. In the second case, authors should support the claim of potential adoption by providing evidence of discussion in fora, mailing lists, and the like.
All resources will be evaluated along the following generic review criteria.
- Does the resource break new ground?
- Does the resource plug an important gap?
- How does the resource advance the state of the art?
- Has the resource been compared to other existing resources (if any) of similar scope?
- Is the resource of interest to the Semantic Web community?
- Is the resource of interest to society in general?
- Has the resource had, or will it have, an impact, especially in supporting the adoption of Semantic Web technologies?
- Is there evidence of usage by a wider community beyond the resource creators or their project? Alternatively (for new resources), what is the resource's potential for being (re)used; for example, based on the activity volume on discussion fora, mailing lists, issue trackers, support portals, etc.?
- Is the resource easy to (re)use? For example, does it have good-quality documentation? Are there tutorials available? Etc.
- Is the resource general enough to be applied in a wider set of scenarios, not just for the originally designed use?
- Is there potential for extensibility to meet future requirements?
- Does the resource include a clear explanation of how others use the data and software? Or (for new resources) how others are expected to use the data and software?
- Does the resource description clearly state what the resource can and cannot do, and the rationale for the exclusion of some functionality?
Design & Technical quality:
- Does the design of the resource follow resource-specific best practices?
- Did the authors perform an appropriate reuse or extension of suitable high-quality resources? For example, in the case of ontologies, authors might extend upper ontologies and/or reuse ontology design patterns.
- Is the resource suitable for solving the task at hand?
- Does the resource provide an appropriate description (both human- and machine-readable), thus encouraging the adoption of FAIR principles? Is there a schema diagram? For datasets, is the description available in terms of VoID/DCAT/DublinCore?
- Mandatory: Is the resource (and related results) published at a persistent URI (PURL, DOI, w3id)?
- Mandatory: Is there a canonical citation associated with the resource?
- Mandatory: Does the resource provide a licence specification? (See creativecommons.org, opensource.org for more information)
- Is the resource publicly available? For example as API, Linked Open Data, Download, Open Code Repository.
- Is the resource publicly findable? Is it registered in (community) registries (e.g. Linked Open Vocabularies, BioPortal, or DataHub)? Is it registered in generic repositories such as FigShare, Zenodo or GitHub?
- Is there a sustainability plan specified for the resource? Is there a plan for the maintenance of the resource?
- Does the resource adopt open standards, when applicable? Alternatively, does it have a good reason not to adopt standards?
As regards specific resource types, checklists of their quality attributes are available in a presentation. Both authors and reviewers can make use of them when assessing the quality of a particular resource.
- Pre-submission of abstracts is a strict requirement. All papers and abstracts have to be submitted electronically via EasyChair.
- Papers describing a resource must be between 8 and 16 pages long (including references). Papers must describe the resource and focus on its sustainability and the community surrounding it. Benchmark papers are expected to include evaluations and provide a detailed description of the experimental setting. Papers that exceed the page limit will be rejected without review.
- All research submissions must be in English.
- Submissions must be in PDF or in HTML, formatted in the style of the Springer Publications format for Lecture Notes in Computer Science (LNCS). For details on the LNCS style, see Springer's Author Instructions. Springer requires the source files of accepted papers in LaTeX or Word format. Hence, if a paper submitted in HTML is accepted, the authors have to prepare a LaTeX or Word version of their paper. The authors can choose to do this step manually or using tool support as outlined below.
- ISWC 2019 submissions are not anonymous. We encourage embedding metadata in the PDF or HTML to provide a machine-readable link from the paper to the resource.
- Authors will have the opportunity to submit a rebuttal to the reviews to clarify the issues raised by program committee members.
- Authors of accepted papers will be required to provide semantic annotations for the abstract of their submission, which will be made available on the conference web site. Details will be provided at the time of acceptance.
- Accepted papers will be distributed to conference attendees and also published by Springer in the printed conference proceedings, as part of the Lecture Notes in Computer Science series.
- At least one author of each accepted paper must register for the conference and present the paper there. As in previous years, students will be able to apply for travel support to attend the conference. Preference will be given to students who are first authors on papers accepted to the main conference or the doctoral consortium, followed by those who are first authors on papers accepted to ISWC workshops and the Poster & Demo session.
Prior Publication and Multiple Submissions
ISWC 2019 will not accept resource papers that, at the time of submission, are under review for or have already been published in or accepted for publication in a journal, another conference, or another ISWC track. The conference organisers may share information on submissions with other venues to ensure that this rule is not violated.
Abstracts due: April 3, 2019
Full papers due: April 10, 2019
Author rebuttals: May 22-28, 2019
Notifications: June 18, 2019
Camera-ready papers due: July 2, 2019
All deadlines are midnight Hawaii time (GMT-10).