
Vision-ml


See also Vision-ui, a series of algorithms for mobile UI testing.

An R-CNN (Region-based Convolutional Neural Network) machine learning model for handling Pop-up windows in mobile apps.

Mobile UI Recognition

Vision-ml is a machine learning model that identifies the UI element that closes a Pop-up window and returns its UI coordinates (x, y) on the screen.

A typical usage scenario would be:

  • In mobile testing, when using Appium or a similar framework for UI automation, it is usually very tricky to locate components on a Pop-up window that is rendered on top of the current screen.

  • Input a mobile App screenshot with the Pop-up, and you will get the predicted result (as shown in the blue box).


Requirements

Python 3.6.x

# create a venv before installing the requirements
pip install -r requirements.txt

Usage

You can use Vision with the pre-trained model in "model/trained_model_1.h5". The number in the file name is used for version control; you can update it in "all_config.py".

There are two ways of using Vision.

Predict an image with Python script

  1. Update your image path in "rcnn_predict.py":
model_predict("path/to/image.png", view=True)
  2. Run the script and you will get the result (see also the import-based sketch after these steps):
python rcnn_predict.py
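
If you prefer to call the predictor from your own code instead of editing "rcnn_predict.py", a minimal sketch could look like this. Importing model_predict this way, and the shape of its return value, are assumptions based on this README; the actual signature may differ.

# Minimal sketch (assumption: model_predict is importable from rcnn_predict.py and
# returns the predicted position, mirroring the web response documented below).
from rcnn_predict import model_predict

result = model_predict("path/to/image.png", view=False)  # view=True also renders the blue box
print(result)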

Predict an image with a web server

  1. Start the web server. You can also create the server with the Dockerfile.

python vision_server.py
  2. Post an image to the web server:
curl http://localhost:9092/client/vision -F "file=@${IMAGE_PATH}.png"
  3. The response from the web server will contain the coordinates of the UI element, along with a score of 0 or 1.0 (0 means not found, 1.0 means found), as in the example below; a minimal client sketch follows it.
{
  "code": 0,
  "data": {
    "position": [
      618,
      1763
    ],
    "score": 1.0
  }
}
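
For convenience, a minimal Python client for the endpoint above might look like the following sketch. The endpoint and JSON fields come from the example response; the use of the requests package is an assumption.

# Minimal client sketch for the /client/vision endpoint described above.
# Assumes the `requests` package is installed; endpoint and fields follow the README example.
import requests

def predict(image_path, host="http://localhost:9092"):
    # Post the screenshot as multipart form data, matching the curl command above.
    with open(image_path, "rb") as f:
        resp = requests.post(host + "/client/vision", files={"file": f})
    resp.raise_for_status()
    data = resp.json()["data"]
    return data["position"], data["score"]  # e.g. ([618, 1763], 1.0)

if __name__ == "__main__":
    position, score = predict("path/to/screenshot.png")
    print(position, score)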

Train your own model

  • You can use your own training images if your close button has different features from the given training set. Just take a screenshot and put it in the "image/" folder.

  • Rename the training images with the prefix "1" for a close button and "0" for background.

  • You can refer to the given training images in the repo for examples.

Button image named 1_1.png:

Background image named 0_3.png:

  1. There are some images in this repo for training:
0_0.png 0_1.png 0_2.png 0_3.png 0_4.png 0_5.png 0_6.png 1_0.png 1_1.png 1_2.png 1_3.png 1_4.png 1_5.png 1_6.png
  2. Augment your images by running this method in "rcnn_train.py":
Image().get_augmentation()
  3. Train on your images by running this method in "rcnn_train.py" (a driver sketch follows these steps):
train_model()
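
A small driver script tying these steps together might look like the sketch below. It assumes Image and train_model can be imported from "rcnn_train.py" as described above; the label_from_filename helper is hypothetical and only illustrates the filename convention.

# Sketch of a training driver, assuming Image and train_model are importable from rcnn_train.py.
from rcnn_train import Image, train_model

# Filename convention from this README: "1_*.png" is a close button, "0_*.png" is background.
def label_from_filename(name):  # hypothetical helper, for illustration only
    return int(name.split("_")[0])

assert label_from_filename("1_1.png") == 1
assert label_from_filename("0_3.png") == 0

Image().get_augmentation()  # expand the labeled screenshots in image/
train_model()               # train and save the model referenced in all_config.py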

Model layers and params

Model layers

  • The model input image is reduced from 3 channels to a single channel and each pixel value is binarized to 255 or 0, which gives the model strong classification performance.
  • The network is lightweight, with 196,450 parameters in total, which makes training fast and keeps the model robust across different Pop-up windows (a Keras sketch reproducing the summary follows the table below).
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 48, 48, 32)        320       
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 46, 46, 32)        9248      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 23, 23, 32)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 21, 21, 64)        18496     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 10, 10, 64)        0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 8, 8, 64)          36928     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 4, 4, 64)          0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 4, 4, 64)          0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 1024)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 128)               131200    
_________________________________________________________________
dense_2 (Dense)              (None, 2)                 258       
=================================================================

Total params: 196,450
Trainable params: 196,450
Non-trainable params: 0
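
For reference, the summary above can be reproduced with a Keras sketch like the one below. The 50x50x1 input size is inferred from the parameter counts and output shapes; the kernel sizes, activations, and dropout rate are assumptions and may differ from the repository's actual code.

# Sketch that reproduces the layer summary above (input size and activations are assumptions).
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(50, 50, 1)),  # -> 48x48x32
    layers.Conv2D(32, (3, 3), activation="relu"),                           # -> 46x46x32
    layers.MaxPooling2D((2, 2)),                                            # -> 23x23x32
    layers.Conv2D(64, (3, 3), activation="relu"),                           # -> 21x21x64
    layers.MaxPooling2D((2, 2)),                                            # -> 10x10x64
    layers.Conv2D(64, (3, 3), activation="relu"),                           # -> 8x8x64
    layers.MaxPooling2D((2, 2)),                                            # -> 4x4x64
    layers.Dropout(0.25),                                                   # rate is an assumption
    layers.Flatten(),                                                       # -> 1024
    layers.Dense(128, activation="relu"),
    layers.Dense(2, activation="softmax"),                                  # close button vs. background
])
model.summary()  # Total params: 196,450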

Training params

In "all_config.py" there are two training parameters, batch_size and epochs (see the sketch below):

  • batch_size is the number of training images used for each model update.
  • epochs is the number of passes over the entire training set.
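
The relevant entries in "all_config.py" might look like the following sketch; the exact values here are illustrative assumptions (10 epochs matches the Performance figures below).

# Illustrative sketch of the training parameters in all_config.py (values are assumptions).
batch_size = 32   # images per gradient update
epochs = 10       # full passes over the training set; 10 epochs matches the timing below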

Performance

With a CPU of [email protected]:

  • Training a model takes about 30 s with 10 epochs.
  • Predicting on a 1080p mobile Pop-up window screenshot takes about 10 s.

Reference

The R-CNN model is based on this paper.

