
NOTE: This is not an officially supported Google product.

Icon Matching with Shape Context Descriptors

This project uses shape context descriptors to find an arbitrary template icon in a given image. The overall algorithm has three steps: 1) edge detection, 2) clustering contours, and 3) using shape context descriptors to find the contour closest to the template icon. Shape context descriptors were introduced in a 2001 research paper by S. Belongie et al. (https://papers.nips.cc/paper/1913-shape-context-a-new-descriptor-for-shape-matching-and-object-recognition.pdf), and an implementation is available in OpenCV (https://docs.opencv.org/master/d8/de3/classcv_1_1ShapeContextDistanceExtractor.html). Here's an overview of the process:

Algorithm Overview

This project built and optimized the algorithm pipeline described above to produce an icon matching algorithm that:

  • is scale-invariant
  • is color-invariant
  • achieves ~95% recall/precision
  • takes about 1.5-2.5 s per image on average
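
For reference, here is a minimal sketch of the underlying OpenCV primitive (cv2.createShapeContextDistanceExtractor) that the pipeline builds on. The preprocessing and parameter values below are illustrative only, not the exact code used in this repository, and depending on your OpenCV build the shape module may require the contrib package.

# Minimal sketch: compute a shape context distance between two point sets
# sampled from image edges. Helper names and parameters are illustrative.
import cv2
import numpy as np

def sample_edge_points(image_path, num_points=100):
  gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
  edges = cv2.Canny(gray, 100, 200)
  points = np.column_stack(np.nonzero(edges))[:, ::-1]  # (x, y) pairs
  idx = np.random.choice(len(points), num_points, replace=len(points) < num_points)
  return points[idx].reshape(-1, 1, 2).astype(np.float32)

extractor = cv2.createShapeContextDistanceExtractor()
distance = extractor.computeDistance(sample_edge_points("icon.png"),
                                     sample_edge_points("image_patch.png"))
print(distance)  # lower distance = more similar shapes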

Repository Overview

Here's an overview of the files in this repository. There are three main types of files of interest, which are explained in further detail in the sections below.

Under modules/:

  • Benchmark Pipeline, which runs any icon matching algorithm on any dataset and outputs accuracy, latency, and memory information;
  • Icon Matching Algorithms, which includes our optimized implementation of the shape context descriptor algorithm that achieves ~95% recall/precision in 1-2s on average;
  • Analysis Utilities, which are tools used to run experiments to figure out how to optimize our shape context descriptor algorithm.

Under tests/:

  • Integration and Unit Tests, which test the functionalities above.

Under datasets/:

  • Small datasets used for integration tests. Actual datasets to validate results are much larger and not included in this repository.

Benchmark Pipeline

Running from the Command-Line

The end-to-end pipeline can be run from the command line as follows:

python -m modules.benchmark_pipeline --tfrecord_path=datasets/small_single_instance_v2.tfrecord --output_path=small_single_instance.txt --multi_instance_icon=False --visualize=True --iou_threshold=0.6

The results (accuracy, precision, recall, latency average/median, memory average/median) will then be printed to the output txt file as well as to logging.info like so:

Average seconds per image: 1.439400
Median seconds of images: 1.544500

Average MiBs per image: 6.865234
Median MiBs per image: 5.380859

Accuracy: 0.935484

Precision: 0.966667

Recall: 0.966667

The output txt file will additionally contain latency profiling information for the icon matching algorithm. The memory calculated is the auxiliary memory needed by the icon matching algorithm.

Here are more details on the flags:

usage: benchmark_pipeline.py [-h] [--tfrecord_path TFRECORD_PATH] [--iou_threshold THRESHOLD] [--output_path OUTPUT_PATH] [--multi_instance_icon MULTI_INSTANCE_ICON] [--visualize VISUALIZE]

Run a benchmark test on find_icon algorithm.

optional arguments:
  -h, --help            show this help message and exit
  --tfrecord_path TFRECORD_PATH
                        path to tfrecord (default: datasets/small_single_instance_v2.tfrecord)
  --iou_threshold THRESHOLD
                        iou above this threshold is considered accurate (default: 0.600000)
  --output_path OUTPUT_PATH
                        path to where output is written (default: )
  --multi_instance_icon MULTI_INSTANCE_ICON
                        whether to evaluate with multiple instances of an icon in an image (default: False)
  --visualize VISUALIZE
                        whether to visualize bounding boxes on image (default: False)

Running Programmatically

When run programmatically, the benchmark pipeline can also support some additional parameters, such as a custom icon detection algorithm. Here's an example:

from modules import clustering_algorithms, icon_finder_shape_context
from modules.benchmark_pipeline import BenchmarkPipeline

benchmark = BenchmarkPipeline(tfrecord_path="datasets/small_multi_instance_v2.tfrecord")
correctness, latency_avg_secs, memory_avg_mibs = benchmark.evaluate(
    icon_finder_object=icon_finder_shape_context.IconFinderShapeContext(
        clusterer=clustering_algorithms.DBSCANClusterer()))

(Note that correctness is a dataclass from which accuracy, precision, and recall can be read via correctness.accuracy, correctness.precision, and correctness.recall.) Example usage of the benchmark pipeline for multi-instance cases can also be found in tests/integration_tests.py.

Modifying the Pipeline

The benchmark pipeline can be modified with these files:

  • modules/benchmark_pipeline.py which has the end-to-end pipeline, including a visualization option
  • modules/util.py which has tools to read in a dataset from a TfRecord file and custom Latency and Memory-tracking classes
  • modules/defaults.py, which holds the default icon finder algorithm, IOU threshold, output path, and dataset path used by the benchmark pipeline (a hypothetical sketch of these defaults follows)
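
As a rough illustration (the actual constant names in modules/defaults.py may differ), the defaults could look something like this:

# Hypothetical sketch of modules/defaults.py; names are assumptions, not the
# repository's actual constants.
from modules import clustering_algorithms, icon_finder_shape_context

TFRECORD_PATH = "datasets/small_single_instance_v2.tfrecord"
OUTPUT_PATH = ""
IOU_THRESHOLD = 0.6
ICON_FINDER_OBJECT = icon_finder_shape_context.IconFinderShapeContext(
    clusterer=clustering_algorithms.DBSCANClusterer())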

Icon Matching Algorithms

Usage Example

A custom icon matching algorithm can be passed into the benchmark pipeline when run programmatically (see above), or run standalone. Here's an example of the latter, using the shape context descriptor-based icon matching algorithm:

bounding_boxes, __, __ = IconFinderShapeContext(
    clusterer=DBSCANClusterer(),
    desired_confidence=0.9,
    sc_min_num_points=90,
    sc_max_num_points=90,
    sc_distance_threshold=0.3,
    nms_iou_threshold=0.9).find_icons(image, icon)

  • clusterer is a clustering object from modules/clustering_algorithms.py
  • sc_min_num_points is the minimum desired number of points in a point set passed into the shape context descriptor algorithm (the more points, the slower the algorithm)
  • sc_max_num_points is the maximum desired number of points in a point set passed into the shape context descriptor algorithm
  • sc_distance_threshold is the maximum shape context distance between an icon and an image cluster for the image cluster to remain under consideration (changing this is useful when there are many clusters and most of them should be eliminated quickly)
  • nms_iou_threshold is the maximum IOU between two preliminary bounding boxes of image clusters before the lower-confidence one is discarded by the non-max-suppression algorithm (changing this is useful with an ensemble clustering approach)

Other Relevant Files

These are the other relevant files:

  • modules/algorithms.py includes a suite of algorithms for edge detection, shape context descriptor distance calculation, and precision & recall calculation.
  • modules/icon_finder.py is the abstract base class that the custom icon matching algorithm should inherit from.
  • modules/icon_finder_shape_context.py is the optimized version of the shape context algorithm pipeline that we used to achieve our current metrics.
  • modules/clustering_algorithms.py contains wrappers for Sklearn's clustering algorithms with custom defaults exposed for our use cases. These can be passed into the IconFinderShapeContext object.

Analysis Utilities

Analysis tools are provided in the following files:

  • modules/analysis_util.py contains tools to label cluster sizes, generate histograms, save icon/image pairs, generate scatterplots, and scale images/bounding boxes
  • modules/optimizer.py contains an optimizer to find the best hyperparameters for clustering algorithms

acuiti's Issues

Try Another Clustering Algorithm

See if another clustering algorithm can do a better job of clustering (and hence achieve higher recall) than DBSCAN. The recall to beat is 95% on the medium and large datasets!
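
As a minimal sketch of what trying an alternative clusterer might look like (using scikit-learn directly rather than this repository's wrappers, and random points as a stand-in for real contour points):

# Compare DBSCAN against an alternative scikit-learn clusterer on 2-D points.
# The point set here is a placeholder for detected contour points.
import numpy as np
from sklearn.cluster import DBSCAN, OPTICS

points = np.random.rand(500, 2) * 100

dbscan_labels = DBSCAN(eps=5, min_samples=5).fit_predict(points)
optics_labels = OPTICS(min_samples=5).fit_predict(points)

# -1 marks noise points, so it is excluded from the cluster count.
print("DBSCAN clusters:", len(set(dbscan_labels)) - (1 if -1 in dbscan_labels else 0))
print("OPTICS clusters:", len(set(optics_labels)) - (1 if -1 in optics_labels else 0))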

Implement Parallelization

One iteration of processing an image takes 1-2 seconds. It could be interesting to think about parallelization in these ways:

  • Between processing images (see the sketch after this list)
  • Within processing one image, each cluster can be processed in parallel because there is no dependency between them.
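
For the first case (parallelizing across images), here is a minimal sketch using Python's standard library; process_image is a hypothetical per-image worker, not an existing function in this repository:

# Run a hypothetical per-image worker over many (image, icon) pairs in parallel.
from multiprocessing import Pool

def process_image(args):
  image, icon = args
  # Run edge detection, clustering, and shape context matching here, e.g.
  # return IconFinderShapeContext(...).find_icons(image, icon)
  return None  # placeholder result

def run_parallel(image_icon_pairs, workers=4):
  # Images are independent, so a simple process pool is enough.
  with Pool(processes=workers) as pool:
    return pool.map(process_image, image_icon_pairs)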

Upsampling Points

It might be useful to upsample points whenever possible so that there isn't so much of a discrepancy between the number of points in the icon and the image patch (otherwise, the shape context algorithm will just add random points).
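
One simple way this could look (an assumption, not the repository's method) is to duplicate existing points with a small jitter until the target size is reached:

import numpy as np

def upsample_points(points, target_size):
  # Duplicate randomly chosen points with small jitter until the point set
  # reaches target_size. Illustrative only; the jitter scale is arbitrary.
  points = np.asarray(points, dtype=np.float32)
  if len(points) >= target_size:
    return points
  extra_idx = np.random.choice(len(points), target_size - len(points))
  jitter = np.random.normal(scale=0.5, size=(len(extra_idx), points.shape[1]))
  return np.vstack([points, points[extra_idx] + jitter])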

Basic Find Icon Algorithm Implementation

Canny edge detection, find contours, DBSCAN clustering, and shape context descriptor implementation. This is the initial pipeline, to be iteratively improved for better accuracy and other metrics.
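
A rough sketch of the first stages of that pipeline (edges, contours, clusters) might look like the following; parameter values are illustrative, and the cv2.findContours return signature below assumes OpenCV 4:

# Canny edges -> contours -> DBSCAN clusters; each cluster is later compared
# to the template icon with shape context descriptors.
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

gray = cv2.imread("screenshot.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 100, 200)
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

points = np.vstack([c.reshape(-1, 2) for c in contours])
labels = DBSCAN(eps=10, min_samples=5).fit_predict(points)
clusters = [points[labels == k] for k in set(labels) if k != -1]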

Upgrade to dataset v2

Go through the codebase, find the places where we are still using the old set of datasets, and upgrade them to v2.

Thresholding

We need to figure out the optimal threshold for the distance cutoff at which an image patch is considered a match to the template icon, and whether that threshold should be an absolute number or something more like a ratio.

Accuracy Metrics

Consider using precision and recall instead, to support having multiple bounding boxes (icon instances) in an image.
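
A minimal sketch of multi-instance precision/recall (not the exact code in modules/algorithms.py): a predicted box counts as a true positive when its IOU with an unmatched ground-truth box exceeds a threshold, and iou_fn below is an assumed helper that computes the IOU between two boxes:

def evaluate_boxes(pred_boxes, gt_boxes, iou_fn, iou_threshold=0.6):
  # Greedily match each predicted box to at most one unmatched ground-truth box.
  matched = set()
  true_positives = 0
  for pred in pred_boxes:
    for i, gt in enumerate(gt_boxes):
      if i not in matched and iou_fn(pred, gt) >= iou_threshold:
        matched.add(i)
        true_positives += 1
        break
  precision = true_positives / len(pred_boxes) if pred_boxes else 0.0
  recall = true_positives / len(gt_boxes) if gt_boxes else 0.0
  return precision, recall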

Improve Clustering with DBSCAN

Try out different values of eps and min_samples in the DBSCAN algorithm to make sure that the clustering going into shape context is optimal.

Set a distance-based threshold for bounding boxes

This will be a distance returned by the shape context descriptor. It might be difficult to pick a threshold, because the distances will be more fine-grained with more points and coarser with fewer points.

Functionality to test zoomed-in images

Instead of just reading from a tfrecord, include functionality to also read in other images. This will be helpful for testing things like zoomed-in images.
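
A small sketch of loading an arbitrary image directly with OpenCV and simulating a zoomed-in view (the file name and crop factor are placeholders):

import cv2

image = cv2.imread("screenshot.png")
h, w = image.shape[:2]
# Take the central half of the image and scale it back up to the original size.
crop = image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
zoomed = cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)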

Figure out path forward for Pointset size dependency on Scale

The size of the point set input to the shape context descriptor currently reflects scale. We might have to change that from an absolute value to a relative one, or consider resizing the images that clients pass to us (including the template icon).

Integrate Scaling Logic into Find Icon Algorithm Itself

This is fine for now; I just wanted to note that it would be good to eventually make these scaling factors part of the main icon finding pipeline (rather than just the benchmarking), so that we can adjust the size of the inputs more easily in case they need to be scaled to the correct size range.

Originally posted by @ewadkins in #30

Using Keypoints

From the two preliminary experiments so far, we know that the number of points kept matters for both speed and accuracy. We're going to try to find an optimal tradeoff now. Which points we keep also matters somewhat when there are few points. So we'll start by running contouring twice: once to identify the keypoints, and again to get all the points to help the DBSCAN clustering. After clustering, though, we'll use the keypoint mask to start with only the keypoints.

Update README and/or docs

Include information on how to run the benchmark_pipeline from the command line, how to use the optimizer to fine-tune more parts via code, and how to change the defaults.
