
patentsview-evaluation's Introduction

📊 PatentsView-Evaluation: Benchmark Disambiguation Algorithms

pv_evaluation is a Python package built to help advance research on author/inventor name disambiguation systems such as PatentsView. It provides:

  1. A large set of benchmark datasets for U.S. patents inventor name disambiguation.
  2. Disambiguation summary statistics, evaluation methodology, and performance estimators through the ER-Evaluation Python package.

See the project website for full documentation. The Examples page provides real-world examples of the use of pv_evaluation submodules.

Submodules

pv_evaluation has the following submodules:

  • benchmark.data: Access to evaluation datasets and standardized comparison benchmarks. The following benchmark datasets are available:
    • Academic Life Sciences (ALS) inventors benchmark.
    • Israeli inventors benchmark.
    • Engineering and Sciences (ENS) inventors benchmark.
    • Lai's 2011 inventors benchmark.
    • PatentsView's 2021 inventors benchmark.
    • Binette et al.'s 2022 inventors benchmark.
  • benchmark.report: Visualization of key monitoring and performance metrics.
  • templates: Templated performance summary reports.

Installation

Install the released version of pv_evaluation using

pip install pv-evaluation

Rendering reports requires installing Quarto from quarto.org.

Examples

Note: Working with the full patent data requires large amounts of memory (we suggest having 64GB RAM available).

See the examples page for complete reproducible examples. The examples below only provide a quick overview of pv_evaluation's functionality.

Metrics and Summary Statistics

Generate an HTML report summarizing properties of the current disambiguation algorithm (see this example):

from pv_evaluation.templates import render_inventor_disambiguation_report

render_inventor_disambiguation_report(
    ".", 
    disambiguation_files=["disambiguation_20211230.tsv", "disambiguation_20220630.tsv"],
    inventor_not_disambiguated_file="g_inventor_not_disambiguated.tsv"
)

Benchmark Datasets

Access PatentsView-Evaluation's large collection of benchmark datasets:

from pv_evaluation.benchmark import *

load_lai_2011_inventors_benchmark()
load_israeli_inventors_benchmark()
load_patentsview_inventors_benchmark()
load_als_inventors_benchmark()
load_ens_inventors_benchmark()
load_binette_2022_inventors_benchmark()
load_air_umass_assignees_benchmark()
load_nber_subset_assignees_benchmark()
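
For a quick look at one of these datasets, the sketch below loads Binette et al.'s 2022 benchmark and prints a few summary figures. It assumes the loader returns a pandas Series mapping inventor mention IDs to true cluster (inventor) IDs; see the data documentation for the exact format.

from pv_evaluation.benchmark import load_binette_2022_inventors_benchmark

# Assumption: the loader returns a pandas Series indexed by inventor mention ID,
# with the true cluster (inventor) ID as its values.
benchmark = load_binette_2022_inventors_benchmark()

print(benchmark.head())                    # first few labeled mentions
print("mentions:", len(benchmark))         # number of labeled inventor mentions
print("inventors:", benchmark.nunique())   # number of distinct true inventors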

Representative Performance Evaluation

See this example of how representative performance estimates are obtained from Binette et al.'s 2022 benchmark dataset.
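
The representative estimators themselves are provided through the ER-Evaluation package and weight clusters according to the sampling design described in the paper. As a conceptual illustration only (not the package's estimator API), the naive pairwise comparison below shows how a predicted disambiguation can be scored against a benchmark; here, prediction and benchmark are hypothetical pandas Series mapping mention IDs to cluster IDs.

from itertools import combinations

def pairwise_links(membership):
    # All unordered within-cluster pairs of mention IDs implied by a
    # membership Series (index: mention ID, values: cluster ID).
    links = set()
    for ids in membership.groupby(membership).groups.values():
        links.update(combinations(sorted(ids), 2))
    return links

def naive_pairwise_metrics(prediction, benchmark):
    # Restrict the prediction to mentions covered by the benchmark, then
    # compare the implied links. This is unweighted and therefore not a
    # representative estimate; it only illustrates the comparison.
    prediction = prediction[prediction.index.isin(benchmark.index)]
    predicted = pairwise_links(prediction)
    true = pairwise_links(benchmark)
    common = predicted & true
    precision = len(common) / len(predicted) if predicted else float("nan")
    recall = len(common) / len(true) if true else float("nan")
    return precision, recall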

Citation

Contributing

Contribute code and documentation

Look through the GitHub issues for bugs and feature requests. To contribute to this package:

  1. Fork this repository
  2. Make your changes and update CHANGELOG.md
  3. Submit a pull request
  4. For maintainers: if needed, update the "release" branch and create a release.

A conda environment is provided for development convenience. To create or update this environment, make sure you have conda installed and then run make env. You can then activate the development environment using conda activate pv-evaluation.

The makefile provides other development utilities such as make black to format Python files, make data to re-generate benchmark datasets from raw data located on AWS S3, and make docs to generate the documentation website.

Raw data

Raw public data is located on PatentsView's AWS S3 server at https://s3.amazonaws.com/data.patentsview.org/PatentsView-Evaluation/data-raw.zip. This zip file should be updated as needed to reflect datasets provided by this package and to ensure that original data sources are preserved without modification.
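
If you need to inspect the original sources locally, the following standard-library sketch downloads and extracts the archive referenced above:

import urllib.request
import zipfile

# Download the raw data archive and extract it into a local "data-raw" folder.
url = "https://s3.amazonaws.com/data.patentsview.org/PatentsView-Evaluation/data-raw.zip"
urllib.request.urlretrieve(url, "data-raw.zip")

with zipfile.ZipFile("data-raw.zip") as archive:
    archive.extractall("data-raw")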

Testing

The minimal testing requirement for this package is a check that all code executes without error. We recommend placing execution checks in a runnable notebook and using the testbook package for execution within unit tests. User examples should also be provided to exemplify usage on real data.
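
As an illustration of that pattern, a minimal execution check with testbook might look like the sketch below; the notebook path is hypothetical.

from testbook import testbook

# Execute the whole notebook; reaching the test body without an exception
# means every cell ran successfully.
@testbook("examples/execution_checks.ipynb", execute=True)
def test_notebook_executes(tb):
    pass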

Report bugs and submit feedback

Report bugs and submit feedback at https://github.com/PatentsView/PatentsView-Evaluation/issues.

patentsview-evaluation's Issues

Disaggregate summary statistics by region

Provide disaggregated statistics and visualizations by an inventor's modal region. This would potentially help highlight regional biases and other issues related to name spelling conventions.

Docs/Example Request - Show Full RLData experiment + results in Python

Hello,

As an understanding check while reading the paper I attempted to recreate the experiment from Appendix 1. Appendix 1 says in part:

For this example, we considered the RLdata10000 dataset from Sariyar and Borg (2022). This is a synthetic dataset containing 10,000 records with first name, last name, and date of birth attributes. There is noise in these attributes and a 10% duplication rate. Ground truth identity is known for all records. The disambiguation algorithm we consider matches records if any of the following conditions are met:

  • records agree on first name, last name, and birth year,
  • records agree on first name, birth day, and birth year, or
  • records agree on last name, birth day, and birth year.

Note that this is not at all a good disambiguation algorithm. It has 52% precision and 83% recall.

I've attempted to reproduce those precision-recall metrics using that disambiguation algorithm in Python on the RLdata10000 dataset but haven't been able to. Any chance you'd be willing to share it as an example for future readers?

My (likely erroneous) implementation below in case it's helpful, which on my machine returns a precision of 0.5943 and a recall of 0.832:

import pandas as pd

pd.set_option('display.max_columns', None)

df = pd.read_csv('RLdata10000.csv')
comparisons = pd.merge(df, df, how="cross", suffixes=["_left", "_right"])

comparisons['true_match'] = (comparisons["ent_id_left"] == comparisons["ent_id_right"])

first_names_c1_match = comparisons['fname_c1_left'] == comparisons['fname_c1_right']
first_names_c2_match = (
    (comparisons['fname_c2_left'] == comparisons['fname_c2_right'])
    | (comparisons['fname_c2_left'].isna() & comparisons['fname_c2_right'].isna())
)
first_names_match = first_names_c1_match & first_names_c2_match


last_names_c1_match = comparisons['lname_c1_left'] == comparisons["lname_c1_right"]
last_names_c2_match = (
    (comparisons['lname_c2_left'] == comparisons['lname_c2_right'])
    | (comparisons['lname_c2_left'].isna() & comparisons['lname_c2_right'].isna())
)
last_names_match = last_names_c1_match & last_names_c2_match

birth_days_match = comparisons['bd_left'] == comparisons["bd_right"]
birth_years_match = comparisons['by_left'] == comparisons['by_right']

condition_1 = first_names_match & last_names_match & birth_years_match
condition_2 = first_names_match & birth_days_match & birth_years_match
condition_3 = last_names_match & birth_days_match & birth_years_match

comparisons['predicted_match'] = condition_1 | condition_2 | condition_3

comparisons = comparisons[comparisons['predicted_match'] | comparisons['true_match']]
comparisons = comparisons[comparisons['rec_id_left'] != comparisons['rec_id_right']]

comparisons['true_and_predicted_match'] = comparisons['true_match'] & comparisons['predicted_match']

num_true_matches = comparisons['true_match'].sum()
num_predicted_matches = comparisons['predicted_match'].sum()
num_true_and_predicted_matches = comparisons['true_and_predicted_match'].sum()

true_precision = num_true_and_predicted_matches/num_predicted_matches
true_recall = num_true_and_predicted_matches/num_true_matches
true_f1 = (2 * true_precision * true_recall) / (true_precision + true_recall)

print(f"There is a true precision of {round(true_precision, 4)}")
print(f"There is a true recall of {round(true_recall, 4)}")
print(f"There is a true f1 of {round(true_f1, 4)}")
