spaCyOpenTapioca

A spaCy wrapper of OpenTapioca for named entity linking on Wikidata.

Table of contents

Installation
How to use
Batching
Local OpenTapioca
Visualization

Installation

pip install spacyopentapioca

or

git clone https://github.com/UB-Mannheim/spacyopentapioca
cd spacyopentapioca/
pip install .

How to use

After installation, the OpenTapioca pipeline can be used without any other pipelines:

import spacy
nlp = spacy.blank("en")
nlp.add_pipe('opentapioca')
doc = nlp("Christian Drosten works in Germany.")
for span in doc.ents:
    print((span.text, span.kb_id_, span.label_, span._.description, span._.score))
('Christian Drosten', 'Q1079331', 'PERSON', 'German virologist and university teacher', 3.6533377082098895)
('Germany', 'Q183', 'LOC', 'sovereign state in Central Europe', 2.1099332471902863)

The types and aliases are also available:

for span in doc.ents:
    print((span._.types, span._.aliases[0:5]))
({'Q43229': False, 'Q618123': False, 'Q5': True, 'P2427': False, 'P1566': False, 'P496': True}, ['كريستيان دروستين', 'Крістіан Дростен', 'Christian Heinrich Maria Drosten', 'کریستین دروستن', '크리스티안 드로스텐'])
({'Q43229': True, 'Q618123': True, 'Q5': False, 'P2427': False, 'P1566': True, 'P496': False}, ['IJalimani', 'R. F. A.', 'Alemania', '도이칠란트', 'Germaniya'])

The Wikidata QIDs are attached to tokens:

for token in doc:
    print((token.text, token.ent_kb_id_))
('Christian', 'Q1079331')
('Drosten', 'Q1079331')
('works', '')
('in', '')
('Germany', 'Q183')
('.', '')

The raw response of the OpenTapioca API can be accessed in the doc and span objects:

raw_annotations1 = doc._.annotations
raw_annotations2 = [span._.annotations for span in doc.ents]

The partial metadata for the response returned by the OpenTapioca API is

doc._.metadata

All span extensions are:

span._.annotations
span._.description
span._.aliases
span._.rank
span._.score
span._.types
span._.label
span._.extra_aliases
span._.nb_sitelinks
span._.nb_statements

Note that spaCyOpenTapioca applies only light post-processing to the entities that appear in doc.ents. All entities returned by OpenTapioca can be found in doc.spans['all_entities_opentapioca'].
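
For example (a minimal sketch, assuming the spans in doc.spans['all_entities_opentapioca'] carry the same extensions as the spans in doc.ents; the values depend on the live API response):

for span in doc.spans['all_entities_opentapioca']:
    # every entity returned by OpenTapioca, including those not kept in doc.ents
    print((span.text, span.kb_id_, span._.rank, span._.nb_sitelinks, span._.nb_statements))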

Batching

Batched asynchronous requests to the OpenTapioca API via nlp.pipe(List[str]):

import spacy
nlp = spacy.blank("en")
nlp.add_pipe('opentapioca')
docs = nlp.pipe(
    [
        "Christian Drosten works in Germany.",
        "Momofuku Ando was born in Japan.".
    ]
)
for doc in docs:
    for span in doc.ents:
        print((span.text, span.kb_id_, span.label_, span._.description, span._.score))
('Christian Drosten', 'Q1079331', 'PERSON', 'German virologist and university teacher', 3.6533377082098895)
('Germany', 'Q183', 'LOC', 'sovereign state in Central Europe', 2.1099332471902863)
('Momofuku Ando', 'Q317858', 'PERSON', 'Taiwanese-Japanese businessman', 3.6012208212234302)
('Japan', 'Q17', 'LOC', 'sovereign state in East Asia, situated on an archipelago of five main and over 6,800 smaller islands', 2.349944834167907)

Local OpenTapioca

If OpenTapioca is deployed locally, specify the URL of the new OpenTapioca API in the config:

import spacy
nlp = spacy.blank("en")
# placeholder: replace with the URL of your local OpenTapioca API (host, port and path depend on your deployment)
OpenTapiocaAPI = "http://localhost:8457/api/annotate"
nlp.add_pipe('opentapioca', config={"url": OpenTapiocaAPI})
doc = nlp("Christian Drosten works in Germany.")

Visualization

NEL visualization was added to spaCy via pull request 9199 for issue 9129. It is supported by spaCy >= 3.1.4.

Use the manual option in displaCy:

import spacy
nlp = spacy.blank("en")
nlp.add_pipe('opentapioca')
doc = nlp("Christian Drosten works\n in Charité, Germany.")
params = {"text": doc.text,
          "ents": [{"start": ent.start_char,
                    "end": ent.end_char,
                    "label": ent.label_,
                    "kb_id": ent.kb_id_,
                    "kb_url": "https://www.wikidata.org/entity/" + ent.kb_id_}
                   for ent in doc.ents],
          "title": None}
spacy.displacy.serve(params, style="ent", manual=True)

The visualizer is served on http://0.0.0.0:5000.

In a Jupyter Notebook, replace spacy.displacy.serve with spacy.displacy.render.
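
For example (a minimal sketch, reusing the params dict built above):

# inside a Jupyter cell, render the entities inline instead of starting a web server
spacy.displacy.render(params, style="ent", manual=True)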

spacyopentapioca's People

Contributors

davidberenstein1957, hmkhalla, jordanparker6, lgtm-migrator, shigapov

spacyopentapioca's Issues

AttributeError: 'NoneType' object has no attribute 'text' when using nlp.pipe()

Hi, when I process multiple text documents as a batch, it fails with the error message: AttributeError: 'NoneType' object has no attribute 'text'. However, processing each text document by itself produces no such error. Here is an easy-to-reproduce example:

docs = ["""String of 126 characters. String of 126 characters. String of 126 characters. String of 126 characters. String of 126 characte""","""Any string which is 93 characters. Any string which is 93 characters. Any string which is 93 """]
nlp = spacy.blank("en")
nlp.add_pipe("opentapioca")
for doc in nlp.pipe(docs):
    print(doc)

Full stack trace below:

AttributeError                            Traceback (most recent call last)
<command-370658210397732> in <module>
      4 nlp = spacy.blank("en")
      5 nlp.add_pipe("opentapioca")
----> 6 for doc in nlp.pipe(docs):
      7     print(doc)

/databricks/python/lib/python3.8/site-packages/spacy/language.py in pipe(self, texts, as_tuples, batch_size, disable, component_cfg, n_process)
   1570         else:
   1571             # if n_process == 1, no processes are forked.
-> 1572             docs = (self._ensure_doc(text) for text in texts)
   1573             for pipe in pipes:
   1574                 docs = pipe(docs)

/databricks/python/lib/python3.8/site-packages/spacy/util.py in _pipe(docs, proc, name, default_error_handler, kwargs)
   1597     if hasattr(proc, "pipe"):
   1598         yield from proc.pipe(docs, **kwargs)
-> 1599     else:
   1600         # We added some args for pipe that __call__ doesn't expect.
   1601         kwargs = dict(kwargs)

/databricks/python/lib/python3.8/site-packages/spacyopentapioca/entity_linker.py in pipe(self, stream, batch_size)
    117                     self.make_request, doc): doc for doc in docs}
    118                 for doc, future in zip(docs, concurrent.futures.as_completed(future_to_url)):
--> 119                     yield self.process_single_doc_after_call(doc, future.result())

/databricks/python/lib/python3.8/site-packages/spacyopentapioca/entity_linker.py in process_single_doc_after_call(self, doc, r)
     66                                      alignment_mode='expand')
     67                 log.warning('The OpenTapioca-entity "%s" %s does not fit the span "%s" %s in spaCy. EXPANDED!',
---> 68                             ent['tags'][0]['label'][0], (start, end), span.text, (span.start_char, span.end_char))
     69             span._.annotations = ent
     70             span._.description = ent['tags'][0]['desc']

AttributeError: 'NoneType' object has no attribute 'text'

I don't know what it is about the lengths of the strings that causes the issue, but they do seem to matter in some way. Adding or removing a couple of characters from either string can resolve the issue.

Client data does not mirror online demo data

I sent this sentence through the client with the corrected URL:
"Greenhouse gasses have been thought to cause climate change"
and it returned nothing.
The same sentence at https://opentapioca.wordlift.io/# returns something like this:

[[Greenhouse]] gasses [[have]] [[been]] [[thought]] to [[cause]] [[climate]] change

I would have hoped it would spot "greenhouse gas" and "climate change", but still, the system returned nothing.
Note: when sent the same sentence as in the installation README, it performs as advertised.

Add methods to highlights

In the same way that clicking an NER highlight leads to a web page, it would perhaps be possible to extend this functionality and pass a method to be run when the highlighted NER is clicked.

500 Server Error: Internal Server Error for url: https://opentapioca.org/api/annotate

Every time I use spacyopentapioca I get the error: "500 Server Error: Internal Server Error for url: https://opentapioca.org/api/annotate".
I found that the OpenTapioca web portal I knew, https://opentapioca.org/#, receives the same error, so I opened an issue in the OpenTapioca repo. The developer responded with a new web portal, which I found now queries a new URL, https://opentapioca.wordlift.io/api/annotate; this is likely the reason for the error.
I hope this information helps to get spacyopentapioca working again. Thanks in advance.
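
Until the package's default endpoint changes, the new URL can be passed explicitly via the config option described in the "Local OpenTapioca" section above (a workaround sketch, not an official fix):

import spacy
nlp = spacy.blank("en")
# point the pipe at the new endpoint reported above instead of the old default
nlp.add_pipe('opentapioca', config={"url": "https://opentapioca.wordlift.io/api/annotate"})
doc = nlp("Christian Drosten works in Germany.")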

HTTP Error

I am using the first sample example for testing. Sometimes it provides the output, but I frequently get this error while using the service.

requests.exceptions.HTTPError: 503 Server Error: Service Unavailable for url: https://opentapioca.org/api/annotate

Does it have any limitations?

'ent_kb_id' referenced before assignment

Hello,
while trying this example:
nlp("M. Knajdek")
an error occurs in the entity_linker.py file:
UnboundLocalError: local variable 'ent_kb_id' referenced before assignment, on line 67 of the file.
This is due to the "." separator.
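
A minimal reproduction sketch, assuming the default pipeline setup from the README:

import spacy
nlp = spacy.blank("en")
nlp.add_pipe('opentapioca')
doc = nlp("M. Knajdek")  # raises UnboundLocalError in entity_linker.py, per this report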
