BloomBERT: A Task Complexity Classifier

BloomBERT is a transformer-based NLP task classifier based on the revised edition of Bloom's Taxonomy.

Bloom's Taxonomy is a set of hierarchical models used in classifying learning outcomes into levels of complexity and specificity. Although mostly employed by educators for curriculum and assessment structuring, BloomBERT takes a novel approach in differentiating the difficulty of a task through classifying productivity related tasks into the cognitive domain of the taxonomy.

BloomBERT can be accessed via an API endpoint or a web application.

Example Outputs:

| Task Description | BloomBERT Classification |
| --- | --- |
| Programming an automated solution | Create |
| Preparing for the presentation of findings from market research | Apply |
| Reviewing performance metrics for this quarter | Evaluate |

Bloom's Taxonomy:

Figure: Description of Bloom's Taxonomy levels 1

Model Overview

BloomBERT was built by fine-tuning DistilBERT, a lighter version of the original BERT transformer language model developed by Google. It was developed using TensorFlow and the Hugging Face Transformers library, adding a sequence classification head (a linear layer) on top of DistilBERT's pooled output. Starting from the pre-trained DistilBERT weights, BloomBERT was trained on Google Colab with a labelled dataset curated specifically for this classification task.
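As a shape-level illustration only (random stand-in weights, not the trained model), the classification head described above amounts to a single linear layer mapping DistilBERT's 768-dimensional pooled output to the six Bloom's levels, followed by a softmax:

```python
import numpy as np

LEVELS = ["Analyse", "Apply", "Create", "Evaluate", "Remember", "Understand"]
HIDDEN = 768  # DistilBERT's hidden size

rng = np.random.default_rng(0)
W = rng.normal(scale=0.02, size=(HIDDEN, len(LEVELS)))  # stand-in weights
b = np.zeros(len(LEVELS))

def classify(pooled: np.ndarray) -> dict:
    """Linear layer + softmax over the six Bloom's levels."""
    logits = pooled @ W + b
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return dict(zip(LEVELS, probs))

pooled = rng.normal(size=HIDDEN)  # stand-in for a real pooled output
probs = classify(pooled)
print(max(probs, key=probs.get))
```

In the real model these weights are learned jointly with the fine-tuned DistilBERT layers rather than drawn at random.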

Training Data Distribution:

| Bloom's Level | Count |
| --- | --- |
| Create | 430 |
| Evaluate | 634 |
| Analyse | 560 |
| Apply | 671 |
| Understand | 2348 |
| Remember | 1532 |
| Total | 6175 |
Training Results:
    epochs: 40
    training accuracy: 0.9820
    validation accuracy: 0.9109

Deployment Architecture

Overview:

BloomBERT Deployment Architecture

Frontend:



Developed using Streamlit with Python and hosted on Heroku servers through GitHub.
Frontend repository is available here.

Backend:



Developed using Python to implement FastAPI endpoints. The model was trained using Jupyter Notebook and TensorFlow libraries. Docker was used to containerise the application for deployment onto Google Cloud Run.

FastAPI Endpoints

The API endpoints are currently deployed on Google Cloud.
Note: some time may be required for the instance to start up.

Request:

GET https://bloom-bert-api-dmkyqqzsta-as.a.run.app

Response:

{
  "health_check": "OK", 
  "model_version": "1.0"
}

Request:

POST https://bloom-bert-api-dmkyqqzsta-as.a.run.app/predict

{
  "text": "Annotating key points in meeting minutes"
}

Response:

{
  "blooms_level": "Understand",
  "probabilities": {
    "Analyse": 0.00078,
    "Apply": 0.00075,
    "Create": 0.00054,
    "Evaluate": 0.00051,
    "Remember": 0.00261,
    "Understand": 0.99481
    }
}
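Using only Python's standard library, a `/predict` request can be assembled as follows; the helper names are illustrative, not part of the API:

```python
import json
import urllib.request

API_URL = "https://bloom-bert-api-dmkyqqzsta-as.a.run.app"

def build_predict_request(text: str) -> urllib.request.Request:
    """Assemble the POST /predict request with a JSON body."""
    body = json.dumps({"text": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL + "/predict",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def top_level(response: dict) -> str:
    """Pick the most probable Bloom's level from a /predict response."""
    return max(response["probabilities"], key=response["probabilities"].get)

req = build_predict_request("Annotating key points in meeting minutes")
# To call the deployed endpoint (requires network access):
#   with urllib.request.urlopen(req) as resp:
#       result = json.load(resp)
print(req.get_method(), req.full_url)
```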

Development Journey

  • Naive-Bayes (TF-IDF Vectorizer)
  • Naive-Bayes (TF-IDF Vectorizer + SMOTE)
  • SVC (TF-IDF Vectorizer)
  • SVC (TF-IDF Vectorizer + SMOTE)
  • SVC (word2vec, spaCy)
  • DistilBERT Transformer model

Model Performance Comparison:

| Model | NB (TF) | NB (TF+SM) | SVC (TF) | SVC (TF+SM) | SVC (w2v+sp) | DistilBERT |
| --- | --- | --- | --- | --- | --- | --- |
| Validation Accuracy | 0.77328 | 0.81538 | 0.86721 | 0.88421 | 0.81296 | 0.91090 |

1. Naive-Bayes (TF-IDF Vectorizer)

  • Starting with the Naive-Bayes algorithm, which is often employed for multiclass classification problems, this model served as a performance benchmark for the other models.
Validation Accuracy: 0.77328
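A minimal scikit-learn sketch of this baseline; the tiny dataset below is purely illustrative, not the project's 6,175-task corpus:

```python
# Toy sketch of the baseline: TF-IDF features feeding a multinomial Naive-Bayes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tasks = [
    "List the project deadlines",        # Remember
    "Summarise the meeting notes",       # Understand
    "Design a new onboarding workflow",  # Create
    "Recall the quarterly targets",      # Remember
    "Explain the survey results",        # Understand
    "Build a prototype dashboard",       # Create
]
labels = ["Remember", "Understand", "Create",
          "Remember", "Understand", "Create"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(tasks, labels)
print(model.predict(["Summarise the findings"]))
```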

2. Naive-Bayes (TF-IDF Vectorizer + SMOTE)

  • After observing the data imbalance, the Synthetic Minority Oversampling Technique (SMOTE) was applied in an attempt to oversample minority classes, creating a balanced dataset for better classification results.
  • Using SMOTE successfully improved the classification accuracy of the Naive-Bayes model.
Validation Accuracy: 0.81538
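The project used a library implementation of SMOTE; the numpy sketch below only illustrates its core mechanism, synthesising a new minority sample by interpolating between a real sample and one of its nearest neighbours:

```python
import numpy as np

def smote_one(minority: np.ndarray, k: int = 2, rng=None) -> np.ndarray:
    """Return one synthetic sample interpolated towards a nearby neighbour."""
    if rng is None:
        rng = np.random.default_rng(0)
    i = rng.integers(len(minority))           # pick a random minority sample
    x = minority[i]
    d = np.linalg.norm(minority - x, axis=1)  # distances to the other samples
    d[i] = np.inf                             # exclude the sample itself
    neighbour = minority[np.argsort(d)[rng.integers(k)]]  # one of k nearest
    lam = rng.random()                        # interpolation factor in [0, 1)
    return x + lam * (neighbour - x)

minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
synthetic = smote_one(minority)
print(synthetic)
```

Each synthetic point lies on the segment between two real minority samples, so the oversampled class stays inside its original feature-space region.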

3. SVC (TF-IDF Vectorizer)

  • To improve model accuracy, I looked into using a Linear Support Vector Classifier (SVC) for this multiclass classification problem.
  • The SVC outperformed Naive-Bayes on this specific classification problem, with a much higher validation accuracy.
  • However, the model still failed to generalise well to inputs with similar semantics.
Validation Accuracy: 0.86721

4. SVC (TF-IDF Vectorizer + SMOTE)

  • Applying SMOTE to this model showed slight improvements in classification accuracy.
  • However, it still suffered from the same generalisation problems as the models above.
Validation Accuracy: 0.88421

5. SVC (word2vec, spaCy)

  • To address the problem, I looked into using word2vec (word2vec-google-news-300) in place of TF-IDF vectorisers to extract semantic relations between words within the sentences.
  • This model uses spaCy to tokenise the inputs before feeding the tokens into the word2vec model.
  • The word vectors generated from the tokens are then averaged to form a sentence vector, which is the input to the SVC model.
  • Unexpectedly, there was a significant drop in accuracy compared to the previous model using TF-IDF.
Validation Accuracy: 0.81296
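The averaging step can be sketched with numpy; the 4-dimensional toy embeddings below stand in for the 300-dimensional word2vec-google-news-300 vectors:

```python
import numpy as np

# Toy word vectors standing in for pre-trained word2vec embeddings.
toy_vectors = {
    "review": np.array([0.9, 0.1, 0.0, 0.2]),
    "the":    np.array([0.1, 0.1, 0.1, 0.1]),
    "report": np.array([0.7, 0.3, 0.1, 0.0]),
}

def sentence_vector(tokens, vectors, dim=4):
    """Average the known token vectors into one fixed-size sentence vector."""
    known = [vectors[t] for t in tokens if t in vectors]
    if not known:               # out-of-vocabulary fallback: zero vector
        return np.zeros(dim)
    return np.mean(known, axis=0)

vec = sentence_vector(["review", "the", "report"], toy_vectors)
print(vec)
```

Averaging discards word order, which may partly explain the accuracy drop observed with this approach.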

6. DistilBERT Transformer model

  • After doing more research, I approached the problem from another angle using deep learning.
  • Transformer models had demonstrated significant improvements over traditional NLP systems, excelling in processing sequential data and understanding language semantics.
  • DistilBERT was chosen as the transformer model of choice due to its smaller size and greater speed compared to the original BERT model.
  • The pre-trained model was then fine-tuned using the data set that I had developed using the taxonomy.
  • It achieved the best accuracy compared to previous models and generalised well to unseen data with similar semantics, providing satisfactory predictions that fit within the taxonomy.
  • This was the model chosen for BloomBERT.
Validation Accuracy: 0.91090

License

License: MIT

Source code for model development is available under the MIT License. Developed by Ryan Lau Q. F.

Footnotes

  1. Bloom's Taxonomy Graphic
