sptr's Introduction

SPEEAK-PC Terminology Recognition (SPTR)

This project implements an Automatic Terminology Recognition (ATR) and automatic indexing tool that integrates with Apache Solr using Natural Language Processing technology. The tool performs batch processing over an entire corpus held in Solr/Lucene indexes and enriches the indexes/documents with the specified metadata (i.e., industry terms). The automatic indexing process transforms unstructured textual data into semi-structured data, which enables more advanced knowledge mining, e.g., semantic search, text summarisation, and cause analysis for business intelligence.

The core of the ATR is based on the C-Value algorithm and contrastive corpus analysis. The following figure presents the general architecture, which consists of five main phases: 1) content extraction and normalisation; 2) Solr indexing and pre-processing; 3) term extraction, scoring, ranking and filtering; 4) automatic term indexing; 5) search and export.

[Figure: general architecture of the SPTR pipeline]
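
Phase 3 scores candidates with the C-Value measure. Below is a minimal sketch of the standard formulation (Frantzi et al.); it is illustrative only and may differ in detail from this tool's implementation:

```python
import math

def c_value(candidate, freq, longer_term_freqs=None):
    """Standard C-Value score for a candidate term.

    candidate         -- the candidate as a list of tokens
    freq              -- its frequency in the corpus
    longer_term_freqs -- frequencies of the longer candidates that
                         contain it (None/empty if not nested)
    """
    weight = math.log2(len(candidate))  # favours longer terms
    if not longer_term_freqs:           # candidate is not nested
        return weight * freq
    # Nested candidates are penalised by the average frequency of
    # the longer terms they appear in.
    penalty = sum(longer_term_freqs) / len(longer_term_freqs)
    return weight * (freq - penalty)

# "blast furnace" seen 42 times, nested in "blast furnace gas" (10 times):
print(c_value(["blast", "furnace"], 42, [10]))  # 1.0 * (42 - 10) = 32.0
```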

Configuration

The tool supports various configurations, including Part-of-Speech (POS) tag sequence patterns for term candidate surface forms, pre-filtering, and cut-off-threshold-based post-filtering. The tool also supports dictionary tagging, with exact matching and fuzzy matching both configurable. To be processed by the ATR tool, the corpus must first be processed by Solr with a TR-aware analyser chain as a prerequisite for subsequent term extraction. The TR-aware analyser chain can be configured in various ways to allow domain-specific customisation.

To pre-process and index content for candidate extraction, the Solr schema.xml needs two things:

  • A unique key field

  • A content field (from where terms are extracted) indexed with Term Recognition (TR) aware Analyser Chain

    For term ranking, the content field's index analyser needs to end in shingling (solr.ShingleFilterFactory). Term vectors must be enabled so that term statistics can be queried and used by the ranking algorithms. Term offsets can also be enabled to allow term highlighting.

Here is a sample TR-aware content field type config:

```
<fieldType name="text_tr_general" class="solr.TextField" positionIncrementGap="100">
	<analyzer type="index">
		<tokenizer class="solr.StandardTokenizerFactory" />
		
		<filter class="solr.LowerCaseFilterFactory" />
		<filter class="solr.ASCIIFoldingFilterFactory"/>
		<filter class="solr.EnglishMinimalStemFilterFactory"/>
		<filter class="solr.ShingleFilterFactory" minShingleSize="2" maxShingleSize="6"
				outputUnigrams="true" outputUnigramsIfNoShingles="false" tokenSeparator=" "/>
	</analyzer>
	<analyzer type="query">				
		<tokenizer class="solr.StandardTokenizerFactory" />
		<!-- <filter class="solr.StopFilterFactory" ignoreCase="false" words="stopwords.txt" enablePositionIncrements="true" /> -->
		<filter class="solr.LowerCaseFilterFactory" />
		<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true" />				
		<filter class="solr.ASCIIFoldingFilterFactory"/>
		<filter class="solr.EnglishMinimalStemFilterFactory"/>
	</analyzer>
</fieldType>
```

And here is a sample content field configured with the analyser:

```
<!--Main body of document extracted by SolrCell.-->
<field name="content" type="text_tr_general" indexed="true" stored="true" multiValued="false" termVectors="true" termPositions="true" termOffsets="true"/>
```

For the term extraction phase, the Solr schema.xml needs three things:

  • A multiValued string field for storing term candidates
  • A Solr analyser chain to normalise term candidates for ranking accuracy; this needs to be consistent with the content index analyser so that indexed n-grams can be matched with term candidates
  • A field for storing final terms

A sample config of the term candidate field:

```
<!-- A dynamicField can be configured for terms that need to be indexed and stored with term vectors and offsets.-->
<dynamicField name="*_tvss" type="string" indexed="true"  stored="true" multiValued="true" termVectors="true" termPositions="true" termOffsets="true"/>
```
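
To illustrate how such a dynamic field might be populated during the automatic term indexing phase, here is a minimal sketch using Solr's JSON atomic-update API; the core URL and the field name term_candidates_tvss (chosen to match the *_tvss pattern above) are assumptions for illustration:

```python
import requests

SOLR_CORE_URL = "http://localhost:8983/solr/mycore"  # assumed core URL

def index_term_candidates(doc_id, candidates):
    """Atomically set extracted term candidates on an indexed document.

    'term_candidates_tvss' is a hypothetical field name that matches
    the '*_tvss' dynamicField pattern configured above.
    """
    payload = [{"id": doc_id, "term_candidates_tvss": {"set": candidates}}]
    resp = requests.post(f"{SOLR_CORE_URL}/update?commit=true",
                         json=payload, timeout=30)
    resp.raise_for_status()

index_term_candidates("doc-001", ["blast furnace", "hot rolling mill"])
```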

A sample config of the Solr term normaliser:

```
<fieldType name="industry_term_normaliser" class="solr.TextField" positionIncrementGap="100">
	<analyzer>
		<tokenizer class="solr.StandardTokenizerFactory" />
		<!--<charFilter class="solr.PatternReplaceCharFilterFactory" pattern="(\-)" replacement=" " />-->
		
		<!-- The WordDelimiterFilterFactory setting is useful for compound words. It can be enabled to make sure tokens like "bloom485" or "TermRecognition" are split in order to improve accuracy. This can also improve subsequent POS tagging and allow stop words like "year" to be matched.
		-->
		<!-- see details via https://lucene.apache.org/core/4_6_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/WordDelimiterFilter.html -->
		<!-- <filter class="solr.WordDelimiterFilterFactory" protected="protectedword.txt" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/> -->
		<filter class="solr.LowerCaseFilterFactory"/>
		<filter class="solr.ASCIIFoldingFilterFactory"/>
		<filter class="solr.EnglishMinimalStemFilterFactory"/>
	 </analyzer>
</fieldType>
```
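
This normaliser is exercised through Solr's Field Analysis Request Handler (the same endpoint visible in the stack trace under Issues below). Here is a minimal sketch of that call, assuming a local core URL; the JSON traversal may vary slightly across Solr versions:

```python
import requests

SOLR_CORE_URL = "http://localhost:8983/solr/mycore"  # assumed core URL

def normalise_term(term):
    """Run a term through the 'industry_term_normaliser' field type via
    Solr's /analysis/field handler and return the final token stream."""
    params = {
        "analysis.fieldvalue": term,
        "analysis.fieldtype": "industry_term_normaliser",
        "wt": "json",
    }
    resp = requests.get(f"{SOLR_CORE_URL}/analysis/field",
                        params=params, timeout=30)
    resp.raise_for_status()
    stages = resp.json()["analysis"]["field_types"]["industry_term_normaliser"]["index"]
    # 'stages' alternates analyser class names and token lists; the last
    # entry holds the fully normalised tokens.
    return [token["text"] for token in stages[-1]]

print(normalise_term("auto detection"))  # e.g. ['auto', 'detection']
```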

A sample config of the final (filtered) terms field:

```
<field name="industryTerm" type="industry_term_type" indexed="true" stored="true" multiValued="true" omitNorms="true" termVectors="true"/>
<!-- Experimental field used for normalised term via term variations analysis -->
<fieldType name="industry_term_type" class="solr.TextField" positionIncrementGap="100">
	<analyzer>
		<tokenizer class="solr.KeywordTokenizerFactory"/>		
		<charFilter class="solr.PatternReplaceCharFilterFactory" pattern="(\-)" replacement=" " />		
		<filter class="solr.LowerCaseFilterFactory"/>
		<filter class="solr.ASCIIFoldingFilterFactory"/>
		<filter class="solr.EnglishMinimalStemFilterFactory"/>
	 </analyzer>
</fieldType>
```

The Solr solrconfig.xml must be configured with the Field Analysis Request Handler; it can optionally also be configured with the Solr Cell Update Request Handler (recommended) and Language Identification.
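
For example, with the Solr Cell handler enabled, raw documents can be pushed through /update/extract so their text lands in the content field shown earlier. A minimal sketch follows, with an assumed core URL and a hypothetical file path; it assumes Solr Cell is configured to map extracted text to the content field, per the schema comment above:

```python
import requests

SOLR_CORE_URL = "http://localhost:8983/solr/mycore"  # assumed core URL

def extract_and_index(doc_id, path):
    """Send a raw document (PDF, Word, ...) to the Solr Cell
    /update/extract handler for text extraction and indexing."""
    params = {"literal.id": doc_id, "commit": "true"}
    with open(path, "rb") as f:
        resp = requests.post(f"{SOLR_CORE_URL}/update/extract",
                             params=params, files={"file": f}, timeout=120)
    resp.raise_for_status()

extract_and_index("doc-001", "reports/plant_maintenance.pdf")  # hypothetical file
```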

Usage

The Term Recognition tool is run as a batch processing job and can be triggered by a simple shell script in Linux.

```
./industry_term_enrichment.sh
```

The run-time parameters are:

  • pos_sequence_filter: a text file providing part-of-speech (POS) sequence patterns for filtering term candidate lexical units

  • stopwords: a stop word list for minimal filtering of term candidates

  • max_tokens: Maximum number of words allowed in a multi-word term; must be compatible with the n-gram size range configured for solr.ShingleFilterFactory in the Solr schema.xml

  • min_tokens: Minimum number of words allowed in a multi-word term; must likewise be compatible with the n-gram size range configured for solr.ShingleFilterFactory in the Solr schema.xml

  • max_char_length: Maximum number of characters allowed in any term candidate unit

  • min_char_length: Minimum number of characters allowed in any term candidate unit; increase for better precision

  • min_term_freq: Minimum frequency allowed for term candidates; increase for better precision

  • PARALLEL_WORKERS: Maximum number of processes (for annotation and dictionary tagging) that can run at the same time

  • cut_off_threshold: cut-off threshold (exclusive) for term recognition

  • solr_core_url: URL of the Solr index core

  • solr_field_content: the Solr content field from which terminology and frequency information will be queried and analysed. The TR-aware NLP pipeline must be configured for this field.

  • solr_field_doc_id: the Solr document unique identifier field; defaults to 'id'

  • solr_term_normaliser: The solr terminology normalisation analyser

  • solr_field_term_candidates: solr field where term candidates will be stored and indexed

  • solr_field_industry_term: solr field where final filtered terms will be stored and indexed

  • tagging: a boolean config to turn term candidate extraction on and off. Disabling this setting will only execute ranking of existing candidates and indexing of the filtered candidates

  • export_term_candidates: a boolean config to turn term candidate export on and off. Exporting (all) term candidates can help in evaluating and choosing a suitable cut-off threshold.

  • export_term_variants: a boolean config to turn term variant export on and off.

  • term_variants_export_file_name: A file for exporting variants of the final (filtered) terms (CSV format by default)

  • dict_tagging: a boolean config to turn dictionary tagging on and off

  • dictionary_file: The term dictionary file used to tag the indexed documents. The dictionary file must be in CSV format with two columns (term surface form and description) and must not include a header row.

  • dict_tagger_fuzzy_matching: A boolean config to turn fuzzy matching, based on normalised Levenshtein distance, on and off (see the sketch after this list)

  • dict_tagger_sim_threshold: similarity threshold (range: [0, 1]) for fuzzy matching

  • solr_field_dictionary_term: The Solr field in which the dictionary-matched terms will be indexed and stored.

  • index_dict_term_with_industry_term: A boolean config to determine whether dictionary-matched terms are indexed separately (in a different Solr field) or together with solr_field_industry_term
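
To make the fuzzy-matching parameters concrete, here is a minimal sketch of one common way to compute similarity from a normalised Levenshtein distance; the tool's exact normalisation may differ. A dictionary surface form is considered a match when the similarity reaches dict_tagger_sim_threshold:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def normalised_similarity(a, b):
    """Similarity in [0, 1]: 1.0 means identical strings."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

# A dictionary term matches when similarity >= dict_tagger_sim_threshold:
print(normalised_similarity("aluminium coil", "aluminum coil"))  # ~0.93
```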

sptr's Issues

MaxRetryError


```
2016-01-11 11:44:29,085 [MainThread  ] - IndustryTermRecogniser - INFO - Term variation detection and aggregation...
Traceback (most recent call last):
  File "/mnt/Python34/lib/python3.4/site-packages/requests/packages/urllib3/connection.py", line 135, in _new_conn
    (self.host, self.port), self.timeout, **extra_kw)
  File "/mnt/Python34/lib/python3.4/site-packages/requests/packages/urllib3/util/connection.py", line 90, in create_connection
    raise err
  File "/mnt/Python34/lib/python3.4/site-packages/requests/packages/urllib3/util/connection.py", line 80, in create_connection
    sock.connect(sa)
OSError: [Errno 99] Cannot assign requested address

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mnt/Python34/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py", line 559, in urlopen
    body=body, headers=headers)
  File "/mnt/Python34/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py", line 353, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/mnt/Python34/lib/python3.4/http/client.py", line 1088, in request
    self._send_request(method, url, body, headers)
  File "/mnt/Python34/lib/python3.4/http/client.py", line 1126, in _send_request
    self.endheaders(body)
  File "/mnt/Python34/lib/python3.4/http/client.py", line 1084, in endheaders
    self._send_output(message_body)
  File "/mnt/Python34/lib/python3.4/http/client.py", line 922, in _send_output
    self.send(msg)
  File "/mnt/Python34/lib/python3.4/http/client.py", line 857, in send
    self.connect()
  File "/mnt/Python34/lib/python3.4/site-packages/requests/packages/urllib3/connection.py", line 160, in connect
    conn = self._new_conn()
  File "/mnt/Python34/lib/python3.4/site-packages/requests/packages/urllib3/connection.py", line 144, in _new_conn
    self, "Failed to establish a new connection: %s" % e)
requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.HTTPConnection object at 0x7fb830c8f8d0>: Failed to establish a new connection: [Errno 99] Cannot assign requested address

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mnt/Python34/lib/python3.4/site-packages/requests/adapters.py", line 370, in send
    timeout=timeout
  File "/mnt/Python34/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py", line 609, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/mnt/Python34/lib/python3.4/site-packages/requests/packages/urllib3/util/retry.py", line 271, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='oakanalysis.shef.ac.uk', port=8983): Max retries exceeded with url: /solr/tatasteel/analysis/field?analysis.fieldvalue=auto+detection&wt=json&analysis.fieldtype=industry_term_normaliser (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fb830c8f8d0>: Failed to establish a new connection: [Errno 99] Cannot assign requested address',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mnt/oakanalysis/shared/SPEEAK-PC-TermRecognition/src/integration.py", line 145, in <module>
    trTagger.terminology_tagging()
  File "/mnt/oakanalysis/shared/SPEEAK-PC-TermRecognition/src/IndustryTermRecogniser.py", line 165, in terminology_tagging
    self.synonym_aggregation(final_term_set)
  File "/mnt/oakanalysis/shared/SPEEAK-PC-TermRecognition/src/IndustryTermRecogniser.py", line 281, in synonym_aggregation
    norm_term_dict = dict((term, self.solrClient.get_industry_term_field_analysis(term)) for term in terms)
  File "/mnt/oakanalysis/shared/SPEEAK-PC-TermRecognition/src/IndustryTermRecogniser.py", line 281, in <genexpr>
    norm_term_dict = dict((term, self.solrClient.get_industry_term_field_analysis(term)) for term in terms)
  File "/mnt/oakanalysis/shared/SPEEAK-PC-TermRecognition/src/SolrClient.py", line 381, in get_industry_term_field_analysis
    analysis_result = self.field_analysis(term, field_type=pfield_type)
  File "/mnt/oakanalysis/shared/SPEEAK-PC-TermRecognition/src/SolrClient.py", line 371, in field_analysis
    response=self._send_request('GET', path)
  File "/mnt/oakanalysis/shared/SPEEAK-PC-TermRecognition/src/SolrClient.py", line 401, in _send_request
    response = requests.request(method=method, url=urljoin(url, path),headers=headers,data=data)
  File "/mnt/Python34/lib/python3.4/site-packages/requests/api.py", line 50, in request
    response = session.request(method=method, url=url, **kwargs)
  File "/mnt/Python34/lib/python3.4/site-packages/requests/sessions.py", line 468, in request
    resp = self.send(prep, **send_kwargs)
  File "/mnt/Python34/lib/python3.4/site-packages/requests/sessions.py", line 576, in send
    r = adapter.send(request, **kwargs)
  File "/mnt/Python34/lib/python3.4/site-packages/requests/adapters.py", line 423, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='xxx.shef.ac.uk', port=8983): Max retries exceeded with url: /solr/tatasteel/analysis/field?analysis.fieldvalue=auto+detection&wt=json&analysis.fieldtype=industry_term_normaliser (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fb830c8f8d0>: Failed to establish a new connection: [Errno 99] Cannot assign requested address',))
```

improve frequency feature by Solr proximity search for multi-word term (MWT)

One significant challenge in MWT recognition is coordinated variants. Most statistics-based term recognition algorithms rely on a term's frequency in the corpus as an important feature, and coordinated variants can hurt performance by lowering recall. Solr proximity searches can potentially improve recall and thus may improve the accuracy of the related statistical metrics. This proximity-based frequency is referred to as 'sloppyFreq'; a minimal query sketch follows the references below.

"To perform a proximity search, add the tilde character ~ and a numeric value to the end of a search phrase. For example, to search for a "apache" and "jakarta" within 10 words of each other in a document, use the search: "jakarta apache"~10 The distance referred to here is the number of term movements needed to match the specified phrase. In the example above, if "apache" and "jakarta" were 10 spaces apart in a field, but "apache" appeared before "jakarta", more than 10 term movements would be required to move the terms together and position "apache" to the right of "jakarta" with a space in between."
https://lucene.apache.org/solr/guide/6_6/the-standard-query-parser.html

see also performance concerns in https://www.searchtechnologies.com/blog/relevancy-ranking-course-part-3

see also the referred "slop factor" in https://lucidworks.com/2009/09/02/optimizing-findability-in-lucene-and-solr/
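
Here is a minimal sketch of how such a sloppy frequency could be collected; the core URL, field name, and the use of numFound as a document-level frequency proxy are illustrative assumptions:

```python
import requests

SOLR_CORE_URL = "http://localhost:8983/solr/mycore"  # assumed core URL

def sloppy_freq(term, slop=3, field="content"):
    """Count documents matching a multi-word term as a proximity phrase
    query ("..."~slop), so coordinated variants such as
    'hot and cold rolling' can still match 'hot rolling'."""
    query = f'{field}:"{term}"~{slop}'
    resp = requests.get(f"{SOLR_CORE_URL}/select",
                        params={"q": query, "rows": 0, "wt": "json"},
                        timeout=30)
    resp.raise_for_status()
    return resp.json()["response"]["numFound"]

print(sloppy_freq("hot rolling", slop=3))
```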
