
Alfonso Blanco's Projects

abalone_decisiontree_c4-5

ABALONE_DECISIONTREE_C4-5: A procedure is attached that uses the Abalone file (https://archive.ics.uci.edu/ml/datasets/abalone) for training and test. After evaluating the entropy of each field, a tree is built with nodes corresponding to fields 0, 7 and 4, and branch values in each node: 1 for the root node corresponding to field 0, 29 for the next node in the hierarchy corresponding to field 7, and 33 for the last node corresponding to field 4. The values of each field have been associated with indices, each of which can encompass several real values; these index values are the ones considered for the calculation of entropies and for branching values at each node. A hit rate of around 58% is obtained, that is, in the low range of the existing procedures for treating this multiclass file, which are detailed in the documentation downloadable from https://archive.ics.uci.edu/ml/datasets/abalone. Increasing the depth of the tree did not yield significant improvements, nor did applying AdaBoost.

Resources: Spyder 4. The abalone-1.data file, downloaded from https://archive.ics.uci.edu/ml/datasets/abalone, should be on the C: drive.

Functioning: From Spyder run AbaloneDecisionTree_C4-5-ThreeLevels.py. The screen indicates the number of hits and failures, and the file C:\AbaloneCorrected.txt contains the records of the test file (records 3133 to 4177 of abalone-1.data) with an indication of whether their predicted class values coincide with the real ones, the predicted class value and the order number of the record in abalone-1.data.

The following programs are also attached: AbaloneDecisionTree_ID3.py and AbaloneDecisionTree_C4-5_parameters.py, which have served to calculate the parameters needed to build the tree.

Cite this software as: ** Alfonso Blanco García ** ABALONE_DECISIONTREE_C4-5

References: https://archive.ics.uci.edu/ml/datasets/abalone
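
A minimal, hypothetical Python sketch of the entropy measure used to choose split fields; the helper names and the toy records are invented for illustration and are not the repository code.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def split_entropy(records, field_index, class_index=-1):
    """Weighted entropy of the class after branching on one binned field."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[field_index], []).append(rec[class_index])
    n = len(records)
    return sum(len(g) / n * entropy(g) for g in groups.values())

# Toy records: (binned field 0, binned field 1, class); the field with the
# lowest weighted entropy would be chosen as the next tree node.
sample = [(0, 3, 9), (1, 3, 7), (0, 2, 9), (2, 2, 7)]
print(split_entropy(sample, 0), split_entropy(sample, 1))   # 0.0 1.0 -> pick field 0
```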

abalone_naivebayes_weighted_adaboost

ABALONE_NAIVEBAYES_WEIGHTED_ADABOOST: Two procedures are attached that use the Abalone file (https://archive.ics.uci.edu/ml/datasets/abalone) for training and test. Both start by processing the training part, calculating the frequencies corresponding to each value of each field and applying a Naive Bayes probability calculation. In a second step, one of the procedures takes advantage of the previous result to apply weights, based on each field, to the wrong or correct records. The other procedure uses AdaBoost, using the adaboost routine published at https://github.com/jaimeps/adaboost-implementation (Jaime Pastor). A hit rate of around 58% is obtained, that is, in the low range of the existing procedures for treating this multiclass file, which are detailed in the documentation downloadable from https://archive.ics.uci.edu/ml/datasets/abalone.
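
A minimal sketch of the frequency-based Naive Bayes step described above, assuming categorical (binned) field values; the function names and the Laplace smoothing are illustrative choices, not the repository code, and the weighting/AdaBoost steps are omitted.

```python
from collections import defaultdict

def train_nb(records):
    """records: tuples of binned field values followed by the class label."""
    class_counts = defaultdict(int)
    value_counts = defaultdict(lambda: defaultdict(int))   # (field, value) -> class -> count
    for *fields, label in records:
        class_counts[label] += 1
        for i, v in enumerate(fields):
            value_counts[(i, v)][label] += 1
    return class_counts, value_counts

def predict_nb(fields, class_counts, value_counts):
    """Pick the class with the highest Naive Bayes score (Laplace-smoothed)."""
    total = sum(class_counts.values())
    best, best_score = None, -1.0
    for label, cc in class_counts.items():
        score = cc / total
        for i, v in enumerate(fields):
            score *= (value_counts[(i, v)][label] + 1) / (cc + 1)
        if score > best_score:
            best, best_score = label, score
    return best

counts = train_nb([(0, 3, "A"), (1, 3, "A"), (0, 2, "B"), (2, 2, "B")])
print(predict_nb((0, 3), *counts))   # expected: "A"
```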

bfs-no-conventional

BFS-no-conventional: Search according to the BFS algorithm using an "unconventional" method ("conventional" meaning the one that can be downloaded from http://www.paulgraham.com/acl.html, link to code, file acl2.lisp). In this "unconventional" version, all the paths that lead to the objective are obtained following the BFS strategy, not only the first path, which will be the shortest. This is useful because there may be several paths of the same shortest length, and it also covers the case in which the branches are weighted and the shortest path alone is not decisive. A program BFS-no-conventional-only-first-path.cl is also provided that obtains only the first path that reaches the target (actually it is the same program with the value of the option parameter modified). The result is an increase in time but a significant reduction in the memory occupied, which is useful in the case of large graphs. The code has some lack of "orthodoxy", such as the use of global variables.

Requirements: Allegro CL 10.1 Free Express Edition. Load the programs, select the code and choose Tools > Incremental Evaluation. Then select the test cases and choose Tools > Incremental Evaluation again.

References: ANSI Common Lisp by Paul Graham, http://www.paulgraham.com/acl.html, link to code, file acl2.lisp. Diverse practice material from the Artificial Intelligence course at the Higher Polytechnic School, Computer Engineering, of the Autonomous University of Madrid.
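
The repository itself is Common Lisp; the following is only an illustrative Python sketch of the idea of collecting every path that reaches the goal in BFS order, so the shortest paths come out first.

```python
from collections import deque

def bfs_all_paths(graph, start, goal):
    """graph: dict node -> list of neighbours. Returns every acyclic path to the
    goal, enumerated in breadth-first order (shortest paths first)."""
    queue = deque([[start]])
    paths = []
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:          # avoid revisiting a node within a path
                queue.append(path + [nxt])
    return paths

g = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": [], "e": ["d"]}
print(bfs_all_paths(g, "a", "d"))
# [['a', 'b', 'd'], ['a', 'c', 'd'], ['a', 'c', 'e', 'd']]
```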

carsbrands_inceptionv3

Project that detects the brand of a car appearing in a photograph, among 49 brands, with a success rate of more than 70% (using a test file that has not been involved in training as a validation or training file, i.e. "unseen data"). It can be run on a personal computer.

carsbrands_resnet_pytorch

Project that detects the brand of a car appearing in a photograph, among 49 brands (the 49 brands of the Stanford Cars dataset), with a success rate of more than 80% (using a test file that has not been involved in training as a validation or training file, i.e. "unseen data"). It can be run on a personal computer.

carscolor

Project that, from photos of cars, estimates their detailed colors (not just basic colors) based on the maximum values of the R, G and B histograms of each photo.
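
A minimal sketch of the histogram-peak idea with OpenCV; the file name is a placeholder, and the mapping from the dominant (R, G, B) bins to a named detailed colour is not shown.

```python
import cv2
import numpy as np

img = cv2.imread("car.jpg")                      # OpenCV loads channels as B, G, R
peaks = []
for channel in range(3):
    hist = cv2.calcHist([img], [channel], None, [256], [0, 256])
    peaks.append(int(np.argmax(hist)))           # intensity bin with the maximum count
b, g, r = peaks
print(f"Dominant colour estimate (R, G, B): ({r}, {g}, {b})")
```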

carsmodels_resnet_pytorch

Project that detects the model of a car appearing in a photograph, among 196 models (the 196 models of the Stanford Cars dataset), with a success rate of more than 70% (using a test file that has not been involved in training as a validation or training file, i.e. "unseen data"). It can be run on a personal computer.

detectcardistanceandroadlane

Project that estimates the distance of a car on a road based on the relationship between the real size of the car and the size at which it appears in the video. It also estimates the lane the car is traveling in at any given time, based on the angle between the position of the car and the camera, and can even guess lane-change intentions.
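
A minimal sketch of the pinhole-camera relation that underlies this kind of size-to-distance estimate; the focal length and car width below are assumed placeholder calibration values, not the project's parameters.

```python
FOCAL_LENGTH_PX = 700.0   # camera focal length in pixels (assumed calibration value)
CAR_WIDTH_M = 1.8         # assumed real width of an average car in metres

def estimate_distance(bbox_width_px: float) -> float:
    """Distance grows as the detected bounding box gets narrower."""
    return CAR_WIDTH_M * FOCAL_LENGTH_PX / bbox_width_px

print(estimate_distance(90))   # ~14 m for a 90-pixel-wide detection
```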

detecttrafficsign

Creation of a model based on YOLOv8 that uses the file downloaded from https://www.kaggle.com/datasets/valentynsichkar/traffic-signs-dataset-in-yolo-format/data as a custom dataset to detect traffic signs. The detected signs can then be recognized using the project https://github.com/ablanco1950/RecognizeTrafficSign.
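
A minimal sketch of training YOLOv8 on a custom dataset with the ultralytics package; the data.yaml path, epoch count and image size are placeholders, not the project's actual configuration.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                     # start from a pretrained checkpoint
model.train(data="traffic_signs/data.yaml",    # custom dataset definition (placeholder path)
            epochs=100, imgsz=640)
results = model("test_image.jpg")              # detect traffic signs on a new image
```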

dpll_propositional_logical_inference

DPLL_propositional_logical_inference: Starting from an FNC (Conjunctive Normal Form), that is, a series of clauses (literals joined by the or operator) joined by the and operator, the DPLL algorithm is applied to determine the values of the literals that satisfy the FNC. A clear explanation of the DPLL algorithm can be found at http://www.cs.us.es/~fsancho/?e=120. The tests have been implemented based on the examples that appear in a link to NetLogo on that page. If you have an expression in FBF form (with connectors => and <=>) you can convert it to an FNC, which is the input to this project, by downloading the project https://github.com/bertuccio/inferencia-logica-proposicional. That project can be completed with the DPLL algorithm by adding the instructions from the definition of the DPLL function to the end, and by activating the instructions that appear in the function pasa-lista-FBF-to-lista-FNC, which serves as an interface between both projects. In fact, DPLL_propositional_logical_inference is intended to complete Propositional Logical Inference with the DPLL algorithm and share functions.

Requirements: Allegro CL 10.1 Free Express Edition.

References: https://github.com/bertuccio/inferencia-logica-proposicional by Adrián Lorenzo Mateo (Bertuccio), who uses material from the Artificial Intelligence practices at the Higher Polytechnic School of the Autonomous University of Madrid, Informatics Engineering. http://www.cs.us.es/~fsancho/?e=120 by Fernando Sancho Caparrini, Higher Technical School of Computer Engineering of the University of Seville.
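
The repository itself is Allegro Common Lisp; the following is only an illustrative Python sketch of the DPLL algorithm (unit propagation plus branching), with clauses written as sets of signed integers.

```python
def simplify(clauses, lit):
    """Remove clauses satisfied by lit and delete its negation from the rest."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        out.append(c - {-lit})
    return out

def dpll(clauses, assignment=None):
    """clauses: list of sets of signed integers (e.g. -2 means 'not x2')."""
    assignment = dict(assignment or {})
    changed = True
    while changed:                               # unit propagation
        changed = False
        for clause in clauses:
            if len(clause) == 1:
                lit = next(iter(clause))
                assignment[abs(lit)] = lit > 0
                clauses = simplify(clauses, lit)
                changed = True
                break
    if not clauses:
        return assignment                        # every clause satisfied
    if any(len(c) == 0 for c in clauses):
        return None                              # empty clause: contradiction
    lit = next(iter(clauses[0]))                 # branch on an unassigned literal
    for choice in (lit, -lit):
        result = dpll(simplify(clauses, choice),
                      {**assignment, abs(choice): choice > 0})
        if result is not None:
            return result
    return None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(dpll([{1, -2}, {2, 3}, {-1, -3}]))
```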

guessimageslfw_vgg16

Simple application of VGG16 for the recognition of images, obtained from LFW, of a limited number of famous people (15), with good performance (greater than 80%).
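
A minimal sketch of a typical VGG16 transfer-learning setup for a 15-class problem; the input size, frozen base and dense head are generic assumptions, not necessarily the project's exact architecture.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # keep the pretrained features frozen

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(15, activation="softmax"),  # one output per selected LFW person
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10,
#           validation_data=(val_images, val_labels))
```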

hastie_corrected_decisiontree

Using the decision tree technique based on entropy calculation, this application classifies the Hastie file, obtaining a hit rate higher than 99%.

hastie_corrected_hitrate_vs_sensitivity

Taking into account that the accuracy of statistical results depends on the accuracy of the input data, not only on the algorithm, a Hastie file has been created in which all the records have the correct class assigned, and tests of hit rate and sensitivity have been carried out.

hastie_naivebayes

HASTIE_NAIVEBAYES: from the Hastie_10_2.csv file, obtained by the procedure described in https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_hastie_10_2.html, a success rate of 88% is obtained in training and 84% in test. The main difference is that, in the statistical process, each field is sampled differently according to its contribution to the hit rate.
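
A minimal sketch of generating the Hastie_10_2.csv input with scikit-learn, following the linked documentation; the sample count, random seed and output path are illustrative.

```python
import numpy as np
from sklearn.datasets import make_hastie_10_2

X, y = make_hastie_10_2(n_samples=12000, random_state=1)   # 10 features, binary class
data = np.column_stack([X, y])                              # class label as the last column
np.savetxt("Hastie_10_2.csv", data, delimiter=",")
```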

knn_min

kNN-MIN: A Spark-based design of the k-Nearest Neighbors classifier for big data, using minimal resources and minimal code.
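
A minimal PySpark sketch of a brute-force k-nearest-neighbours classification (cross join, distance, rank, majority vote); the column names, toy data and k value are illustrative and this is not the repository code.

```python
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("knn_min_sketch").getOrCreate()

train = spark.createDataFrame([(1.0, 1.0, "a"), (1.2, 0.8, "a"), (5.0, 5.0, "b")],
                              ["x1", "x2", "label"])
test = spark.createDataFrame([(0, 1.1, 1.0), (1, 4.8, 5.2)], ["id", "tx1", "tx2"])

k = 2
joined = (test.crossJoin(train)
              .withColumn("dist", F.sqrt((F.col("tx1") - F.col("x1")) ** 2 +
                                         (F.col("tx2") - F.col("x2")) ** 2)))
nearest = (joined.withColumn("rank", F.row_number().over(
                     Window.partitionBy("id").orderBy("dist")))
                 .filter(F.col("rank") <= k))
predicted = (nearest.groupBy("id", "label").count()        # votes among the k neighbours
                    .withColumn("votes", F.row_number().over(
                        Window.partitionBy("id").orderBy(F.desc("count"))))
                    .filter(F.col("votes") == 1)
                    .select("id", "label"))
predicted.show()
```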

lanedetection_template

Lane detection using the cv.matchTemplate function, a simpler system than the one usually employed, which processes the image and detects contours. Furthermore, it does not require establishing a region of interest.
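
A minimal sketch of locating a lane-mark template in a road frame with cv.matchTemplate; the image and template file names are placeholders.

```python
import cv2 as cv

frame = cv.imread("road_frame.jpg", cv.IMREAD_GRAYSCALE)
template = cv.imread("lane_mark_template.jpg", cv.IMREAD_GRAYSCALE)

result = cv.matchTemplate(frame, template, cv.TM_CCOEFF_NORMED)
min_val, max_val, min_loc, max_loc = cv.minMaxLoc(result)

h, w = template.shape
top_left = max_loc                                   # best-matching location (x, y)
bottom_right = (top_left[0] + w, top_left[1] + h)
cv.rectangle(frame, top_left, bottom_right, 255, 2)  # mark the detected lane segment
cv.imwrite("lane_detected.jpg", frame)
```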

lfw_svm_facecascade

A recognition process for images contained in the LFW database (http://vis-www.cs.umass.edu/lfw/#download) is carried out with extreme simplicity, taking advantage of the ease with which sklearn implements the SVM model. A face cascade detector is also used to refine the images, obtaining accuracy greater than 70% on a test with images that do not appear in the training.
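
A minimal sketch of the pipeline named above, assuming grayscale LFW-style images: a Haar cascade crops the face and sklearn's SVC classifies the flattened crop. The cascade file, crop size and kernel are generic choices, not necessarily the project's.

```python
import cv2
from sklearn.svm import SVC

# Haar cascade shipped with OpenCV; path built from cv2.data.haarcascades.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_vector(gray_image, size=(64, 64)):
    """Crop the first detected face, resize it and flatten it into a feature vector."""
    faces = cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return cv2.resize(gray_image[y:y + h, x:x + w], size).flatten()

# X_train, y_train would be built by applying face_vector to each training image:
# clf = SVC(kernel="linear").fit(X_train, y_train)
# prediction = clf.predict([face_vector(test_gray_image)])
```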

lfw_twomodels

A recognition process for images contained in the LFW database (http://vis-www.cs.umass.edu/lfw/#download) is carried out using two models: one based on the minimum distance between training and test image records, and another that is an adaptation of the Keras CNN model https://keras.io/examples/vision/mnist_convnet/. Both models are complementary. A module is also incorporated that takes advantage of the ease with which sklearn implements the SVM model.

licenseplate_clahe

Through the use of Contrast Limited Adaptive Histogram Equalization (CLAHE) filters, complemented with Otsu filters, a direct reading of car license plates is achieved with success rates above 70% and acceptable times.
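
A minimal sketch of the filter chain named above (CLAHE contrast enhancement followed by Otsu binarisation) with OpenCV; the clip limit, tile size and file names are placeholders.

```python
import cv2

plate = cv2.imread("license_plate.jpg", cv2.IMREAD_GRAYSCALE)

clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
equalized = clahe.apply(plate)                               # local contrast enhancement

_, binary = cv2.threshold(equalized, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu binarisation
cv2.imwrite("license_plate_filtered.jpg", binary)
```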

licenseplate_labeled_maxfilters

From images of cars in which the license plates have been labeled, and after applying filters, recognition is attempted with pytesseract. As no single filter works for all plates, several filters are tried and the license plate number detected the most times is assigned.
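
A minimal sketch of the voting idea: run pytesseract over several filtered versions of the plate crop and keep the most frequent reading. The filters shown are generic examples, not the repository's filter set.

```python
import cv2
import pytesseract
from collections import Counter

def read_with_filters(plate_gray):
    """Try several filters on a grayscale plate crop and vote on the OCR result."""
    filters = [
        lambda img: img,
        lambda img: cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1],
        lambda img: cv2.GaussianBlur(img, (3, 3), 0),
        lambda img: cv2.equalizeHist(img),
    ]
    candidates = []
    for f in filters:
        text = pytesseract.image_to_string(f(plate_gray), config="--psm 7").strip()
        if text:
            candidates.append(text)
    return Counter(candidates).most_common(1)[0][0] if candidates else ""
```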

licenseplate_roboflowapi_filters_paddleocr

This project detects the car license plate through a free Roboflow API, submits the detected license plate image to a battery of filters and obtains the license plate number using PaddleOCR.

licenseplate_wpod-net_maxfilters

A demo of WPOD-NET, downloaded from https://github.com/quangnhat185/Plate_detect_and_recognize, for the recognition of car license plates; the use of labeled images is avoided, at the cost of lower accuracy.

licenseplate_yolov8_filtercnn_paddleocr

Project that uses YOLOv8 as the license plate detector, followed by a filter that is chosen from a collection of filters, each with an assigned code, with a CNN predicting which filter to apply.

licenseplateimage_thresholdfiltered

From files of images and labels obtained by applying the project presented at https://github.com/ashok426/Vehicle-number-plate-recognition-YOLOv5, the license plate images are filtered through a threshold that allows better recognition of the license plate numbers by pytesseract. On 05/23/2022 a new version was introduced. On 07/04/2022 an ML version was added.
