Closes #113 - Add Chebi (Chapti) #525

Open · wants to merge 7 commits into main

Conversation

@napsternxg (Contributor) commented Apr 28, 2022

Please name your PR after the issue it closes. You can use the following line: "Closes #ISSUE-NUMBER" where you replace the ISSUE-NUMBER with the one corresponding to your dataset.

Fixes #113

If the following information is NOT present in the issue, please populate:

Checklist

  • Confirm that this PR is linked to the dataset issue.
  • Create the dataloader script biodatasets/my_dataset/my_dataset.py (please use only lowercase and underscores for dataset naming).
  • Provide values for the _CITATION, _DATASETNAME, _DESCRIPTION, _HOMEPAGE, _LICENSE, _URLs, _SUPPORTED_TASKS, _SOURCE_VERSION, and _BIGBIO_VERSION variables.
  • Implement _info(), _split_generators() and _generate_examples() in the dataloader script.
  • Make sure that the BUILDER_CONFIGS class attribute is a list with at least one BigBioConfig for the source schema and one for a bigbio schema.
  • Confirm that the dataloader script works with the datasets.load_dataset function (see the usage sketch after this list).
  • Confirm that your dataloader script passes the test suite run with python -m tests.test_bigbio biodatasets/my_dataset/my_dataset.py.
  • If the dataset is local, provide the output of the unit tests in the PR (please copy-paste). This is OPTIONAL for public datasets, as we can test these without access to the data files.
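
For context, here is how the loader would typically be exercised once merged; a minimal sketch, assuming the repo's usual "<name>_source" / "<name>_bigbio_kb" config naming, with "chebi_chapati" as a hypothetical placeholder for the actual dataset directory (not visible in this thread):

from datasets import load_dataset

# "chebi_chapati" is a placeholder path; bigbio loaders conventionally
# expose one "<name>_source" and one "<name>_bigbio_kb" config.
source = load_dataset(
    "biodatasets/chebi_chapati/chebi_chapati.py",
    name="chebi_chapati_source",
)
kb = load_dataset(
    "biodatasets/chebi_chapati/chebi_chapati.py",
    name="chebi_chapati_bigbio_kb",
)
print(kb["train"][0]["entities"][:3])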

@sg-wbi changed the title from "Fixes #113 - Add Chebi (Chapti)" to "Closes #113 - Add Chebi (Chapti)" on May 9, 2022
@mariosaenger self-assigned this on Oct 24, 2024
@mariosaenger requested a review from @phlobo on Oct 26, 2024

@mariosaenger (Collaborator) commented:

@phlobo Please have a look at the implementation.

@phlobo (Collaborator) left a comment:

@mariosaenger I have some questions regarding this dataset, could you please have a look?

issn = {0305-1048},
pages = {D344--50},
url = {https://europepmc.org/articles/PMC2238832},
biburl = {https://aclanthology.org/W19-5008.bib},

Looks like biburl and bibsource belong to a different dataset

DATA_URL = "https://github.com/bigscience-workshop/biomedical/files/8568960/PatentAnnotations_GoldStandard.tar.gz"
_URLS = {
# The original dataset is hosted in CVS on SourceForge. Hence I have downloaded and re-uploaded it in tar.gz format.

The provenance of this dataset seems to be tricky. It is not mentioned in the original ChEBI publication in NAR, which we have as the citation. Is there any other source of information about the annotation project? Otherwise, it is a bit hard to tell if we got the number of annotations, IDs, etc. right.
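
Absent a reference publication for the annotation counts, one fallback is to sanity-check the loaded data directly; a rough sketch, reusing the hypothetical placeholder path and config name from the usage example above:

from collections import Counter
from datasets import load_dataset

ds = load_dataset("biodatasets/chebi_chapati/chebi_chapati.py", name="chebi_chapati_bigbio_kb")

type_counts = Counter()
db_names = Counter()
for doc in ds["train"]:
    for entity in doc["entities"]:
        type_counts[entity["type"]] += 1
        for norm in entity["normalized"]:
            db_names[norm["db_name"]] += 1

print("entities per type:", type_counts)
print("normalized entries per database:", db_names)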

# Converted via the following command:
# cvs -z3 -d:pserver:anonymous@chebi.cvs.sourceforge.net:/cvsroot/chebi co \
# chapati/patentsGoldStandard/PatentAnnotations_GoldStandard.tgz
# mkdir -p ./MoNERo

The MoNERo part belongs to a different dataset?

"offsets": [[e["start"], e["end"]]],
"type": e["attrs"]["type"],
"normalized": [
{"db_name": "chebi", "db_id": chebi_id.strip()} for chebi_id in e["attrs"]["chebi-id"].split(",")

The IDs extracted this way look inconsistent, e.g.:

 {'id': 'WO2007000651-E8',
   'type': 'CHEMICAL',
   'text': ['Zinc oxide'],
   'offsets': [[613, 623]],
   'normalized': [{'db_name': 'chebi', 'db_id': 'CHEBI:36560'}]},
  {'id': 'WO2007000651-E9',
   'type': 'ONT',
   'text': ['astringent'],
   'offsets': [[690, 700]],
   'normalized': [{'db_name': 'chebi', 'db_id': 'WO2007000651:157583'}]},

Maybe the db_id should just contain the last numerical bit?
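
For illustration, one way the suggested cleanup could look; a sketch only, since it assumes every trailing numeric segment really is a ChEBI accession, which is exactly what is in doubt for entries like WO2007000651:157583 (the helper name is hypothetical):

def clean_chebi_id(raw_id: str) -> str:
    # Keep only the segment after the last colon and re-prefix it, so
    # "CHEBI:36560" stays as-is and "WO2007000651:157583" becomes
    # "CHEBI:157583".
    numeric_part = raw_id.strip().rsplit(":", 1)[-1]
    return f"CHEBI:{numeric_part}"

assert clean_chebi_id("CHEBI:36560") == "CHEBI:36560"
assert clean_chebi_id(" WO2007000651:157583 ") == "CHEBI:157583"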

"offsets": [[e["start"], e["end"]]],
"type": e["attrs"]["type"],
"normalized": [
{"db_name": "chebi", "db_id": chebi_id.strip()} for chebi_id in e["attrs"]["chebi-id"].split(",")

Moreover, there seem to be more identifiers attached to each entity; e.g., in the source version there are entries like 'epochem-id': 'EPOCHEM:NEW:CLASS:4'. Shall we include them as additional normalized entries with db_name: epochem? This might be a relevant NED task for some users of the dataset.
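
A sketch of how both attributes could feed the normalized list, assuming the source attrs expose "chebi-id" and "epochem-id" as comma-separated strings (the attribute names follow the comment above and are otherwise unverified; "e" is the parsed entity from the surrounding loop):

normalized = []
for attr_key, db_name in (("chebi-id", "chebi"), ("epochem-id", "epochem")):
    for raw_id in e["attrs"].get(attr_key, "").split(","):
        raw_id = raw_id.strip()
        if raw_id:
            # One normalized entry per identifier, tagged with its database.
            normalized.append({"db_name": db_name, "db_id": raw_id})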

Development

Successfully merging this pull request may close these issues:

Create dataset loader for CHEBI (Chapati) (#113)