
PUBG_AIMBOT

Real-time PUBG player detection using TensorFlow 2 (TF2)


Steps for training the model on Google Colab

Follow the link below and execute the notebook step by step:
"https://colab.research.google.com/drive/1tO3nGFzFX5nvARWqdr7m5bMUVM9W5NaW#scrollTo=vMNPnML5y9b7"
(mail [email protected] to get access to the Colab link)


Steps for training the model on a local machine

  1. Create a separate virtual environment for the TF2 packages: "conda create -n (name_of_your_choice) pip python=3.8".
    Make sure you have Anaconda installed, or visit "https://www.anaconda.com" to download it.

  2. Now activate the created virtual environment: "conda activate (name_of_your_choice)".

  3. Now install TensorFlow 2.x by running "pip install --ignore-installed --upgrade tensorflow==2.2.0".
    Check that the install is working by running "python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"".
    It should run without producing any error.

Note : This step is optional. Go for step 4 only if you have an NVIDIA GPU (GTX 650 or newer); otherwise train the model on the CPU or on Colab.

  4. If you have a GPU in your system, make sure you have "CUDA Toolkit v10.1" and "cuDNN 7.6.5" installed and integrated with TF2.x. I have attached the links for downloading these two packages.
    CUDA Toolkit v10.1 : https://developer.nvidia.com/cuda-10.1-download-archive-update2?target_os=Windows&target_arch=x86_64&target_version=10&target_type=exenetwork
    cuDNN 7.6.5 : https://developer.nvidia.com/rdp/cudnn-download
    If the downloads and integration are successful, this command should run without any error:
    "python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))""
    If it gives an error, kindly fix it and try again.
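    To confirm that TF2 can actually see the GPU, you can also run the following check (the exact device name in the output will differ per machine):
    "python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))""
    It should print a non-empty list such as [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')].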

  5. Now we download the TensorFlow Model Garden "https://github.com/tensorflow/models". You can either git clone https://github.com/tensorflow/models into the working folder,
    or just download the zip from the link and extract it in the working folder.

  6. Now we install Protobuf. From "https://github.com/google/protobuf/releases" download "protoc-3.12.3-win64.zip" and extract it to a suitable location.
    Make sure you set up the environment variables as well (the extracted "bin" folder should be on your PATH).
    Run the command "protoc object_detection/protos/*.proto --python_out=." from inside "models/research/" to check the installation.
    It should run without any error; it might not print any output, but that is fine as long as it does not return an error.

  7. Now we install the COCO API. Run the following commands.
    a) -> pip install cython
    Visual C++ 2015 build tools must be installed and on your PATH in order to build package (b);
    if you don't have them, get them from "https://go.microsoft.com/fwlink/?LinkId=691126".
    b) -> pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI
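    Assuming the build succeeds, you can sanity-check the install with "python -c "from pycocotools.coco import COCO"" ; it should exit without printing any error.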

  8. Now we install the Object Detection API. cd into "models/research/" and run the following commands.
    -> cp object_detection/packages/tf2/setup.py .
    -> python -m pip install .
    Now, to test the installation, run "python object_detection/builders/model_builder_tf2_test.py" from inside "models/research/".
    It should end with output along the lines of "Ran 20 tests in 68.510s OK (skipped=1)".

(Delete the files "images/train.csv", "images/test.csv", "test.record" and "train.record" if you are building the model with your own custom dataset.)

  9. Make sure you have a dataset with proper images and XML annotation files (Pascal VOC format).
    Split the data into train and test sets; the folder structure should look like this:
    "images/train/"
    "images/test/"

  1. Run "xml_to_csv.py" to get the .csv files for both test and train (custom) dataset .

  11. Convert the test and train .csv files into .record files by running the following commands.
    a) !python generate_tfrecord.py --csv_input=images/train_labels.csv --image_dir=images/train --output_path=train.record
    b) !python generate_tfrecord.py --csv_input=images/test_labels.csv --image_dir=images/test --output_path=test.record
    This will create the two .record files inside the working directory (an optional sanity check is given below).
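
    As an optional sanity check, the generated files can be inspected with TF2 itself; the count printed for each file should match the number of images in the corresponding split:

    import tensorflow as tf

    # Count the serialized tf.Example records in each .record file.
    for path in ("train.record", "test.record"):
        count = sum(1 for _ in tf.data.TFRecordDataset(path))
        print(path, count)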

  12. Download the model you want to train from "https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md".
    Extract the downloaded archive (it is a .tar.gz, so extract it twice) and put it inside the working directory.
    I have used the "EfficientDet D0 512x512" model here and have also included the downloaded files (feel free to use whatever model you want).

  13. Make sure the parameters in the "labelmap.pbtxt" and "efficientdet_d0_coco17_tpu-32/pipeline.config" files match your dataset (a minimal label map sketch is given below).
    I have commented both files throughout, and they are pretty intuitive.
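
    For orientation, here is a minimal sketch of a single-class "labelmap.pbtxt", written out from Python; the class name "player" is an assumption and must match the class labels used in your annotations (and "num_classes" in pipeline.config):

    # The label map assigns an integer id to every class name used in the
    # annotations. "player" below is an assumed class name for this sketch.
    LABELMAP = """item {
      id: 1
      name: 'player'
    }
    """

    with open("labelmap.pbtxt", "w") as f:
        f.write(LABELMAP)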

  14. Train the model via the following command; it will create the "training/" folder and store the checkpoints there.
    "!python models/research/object_detection/model_main_tf2.py
    --pipeline_config_path={efficientdet_d0_coco17_tpu-32/pipeline.config}
    --model_dir={training/}
    --alsologtostderr
    --num_train_steps={number of steps you want the model to train for}
    --sample_1_of_n_eval_examples=1
    --num_eval_steps={num_eval_steps}"

Make sure that you have passed the paths correctly.
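
While training runs, you can follow progress with TensorBoard, which reads the event files that model_main_tf2.py writes under the model directory:
"tensorboard --logdir=training/"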

  15. Once the training is completed, generate the inference graph from the final checkpoint created during training; an "inference_graph/" folder will be created.
    "!python models/research/object_detection/exporter_main_v2.py
    --trained_checkpoint_dir {training/}
    --output_directory {inference_graph/}
    --pipeline_config_path {efficientdet_d0_coco17_tpu-32/pipeline.config}"

    You can find your model at "inference_graph/saved_model/saved_model.pb".
    I have uploaded the folder here so that you can skip training the model yourself (but it is better to retrain if you have a better dataset than the one provided).

  16. Now run the "prediciton.py" file, which will look for all the images inside the "images/test" directory and output the results with bounding boxes drawn on them (a sketch of the underlying inference call is given below).
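
    For reference only, here is a minimal sketch of loading the exported SavedModel and running it on a single image; the image path is hypothetical, the 0.5 score threshold is an arbitrary choice, and "prediciton.py" remains the script that actually draws and saves the boxes:

    import numpy as np
    import tensorflow as tf
    from PIL import Image

    # Load the SavedModel produced by exporter_main_v2.py (step 15).
    detect_fn = tf.saved_model.load("inference_graph/saved_model")

    # Read one test image (hypothetical file name) and add a batch dimension.
    image = np.array(Image.open("images/test/sample.jpg"))
    input_tensor = tf.convert_to_tensor(image)[tf.newaxis, ...]

    detections = detect_fn(input_tensor)

    # The model returns scores, class ids and normalized [ymin, xmin, ymax, xmax] boxes.
    scores = detections["detection_scores"][0].numpy()
    classes = detections["detection_classes"][0].numpy().astype(int)
    boxes = detections["detection_boxes"][0].numpy()

    for box, cls, score in zip(boxes, classes, scores):
        if score > 0.5:
            print(cls, score, box)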

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! HAPPY CODING !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Author: Ayush Mishra


Results returned by the model





Results produced during analysis


