Email is one of the quickest and most widely used means of communication for both companies and individuals. Despite its convenience, email has drawbacks, and one of the biggest is spam. Spam emails are unsolicited messages sent to large numbers of users for purposes such as advertising, phishing, spreading malware, and other malicious activities. Spam significantly degrades the user experience and leads to dissatisfaction. To mitigate its negative effects, email providers have implemented filters that identify and segregate spam, so that users do not interact with emails that could compromise their computers or expose them to scams. This proactive approach helps safeguard users from harm and maintains the integrity of the communication platform.
The project's primary objective is to build a model that accurately predicts whether an email is spam. The dataset used for this project was obtained from the Kaggle platform. The model has also been deployed on Streamlit Cloud.
Spam_detection_notebook.ipynb: The Jupyter Notebook at the core of the project, containing the email text analysis and the development of the spam-detection model.
app.py: The Streamlit app that is deployed on Streamlit Cloud.
functions.py: Python script containing functions for removing stopwords and punctuation and for lemmatizing text.
label_encoder.joblib: The label encoder exported from the notebook, loaded by the app.
lr_Model.joblib: The trained model exported from the notebook, loaded by the app.
requirements.txt: Text file listing all the dependencies needed to run the app.
spam.csv: The dataset used for the project.
tf_vector.joblib: The vectorizer exported from the notebook, loaded by the app.
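As a rough illustration of the kind of cleaning functions.py performs, the sketch below lowercases text, strips punctuation, and drops stopwords. It is a simplified stand-in, not the project's actual code: the tiny stopword list is illustrative (the real script likely uses a fuller list such as NLTK's), and proper lemmatization (e.g. NLTK's WordNetLemmatizer) is omitted here.

```python
import string

# Illustrative stopword list only; the project's functions.py
# may use a much fuller list (e.g. NLTK's English stopwords).
STOPWORDS = {"a", "an", "the", "is", "to", "and", "you", "for", "have"}

def clean_text(text: str) -> str:
    """Lowercase, strip punctuation, and drop stopwords (simplified)."""
    no_punct = text.translate(str.maketrans("", "", string.punctuation))
    tokens = [t for t in no_punct.lower().split() if t not in STOPWORDS]
    return " ".join(tokens)

print(clean_text("Congratulations!!! You have WON a FREE prize."))
# -> congratulations won free prize
```

Cleaning like this reduces vocabulary noise before the text is vectorized, which generally helps a linear classifier separate spam from legitimate mail.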
Objectives
Importing Libraries
Data Collection
Preprocessing
Exploratory Data Analysis
Feature Engineering
Modeling
Model Review
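The feature-engineering and modeling steps above can be sketched as follows. This is a minimal illustration assuming a TF-IDF vectorizer and a logistic regression classifier (as suggested by the file names tf_vector.joblib and lr_Model.joblib); the toy data is made up for the example and is not from spam.csv.

```python
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import LabelEncoder

# Toy examples standing in for the Kaggle dataset.
texts = [
    "win a free prize now", "claim your cash reward today",
    "meeting at noon tomorrow", "see you at lunch",
]
labels = ["spam", "spam", "ham", "ham"]

# Encode string labels as integers.
encoder = LabelEncoder()
y = encoder.fit_transform(labels)

# Turn raw text into TF-IDF feature vectors.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

# Fit a logistic regression classifier.
model = LogisticRegression()
model.fit(X, y)

# Persist artifacts under the same names the app loads.
joblib.dump(encoder, "label_encoder.joblib")
joblib.dump(vectorizer, "tf_vector.joblib")
joblib.dump(model, "lr_Model.joblib")

# Classify a new message.
pred = model.predict(vectorizer.transform(["free cash prize"]))
print(encoder.inverse_transform(pred)[0])
```

Exporting the fitted vectorizer and encoder alongside the model matters because the app must transform incoming text with exactly the same vocabulary and label mapping that were learned during training.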
To make the most of this notebook and our analysis:
Clone this repository to your local machine.
Ensure the required Python libraries and dependencies are installed (see requirements.txt).
Open the notebook in Jupyter Notebook or any compatible environment.
Execute each cell in the notebook sequentially to reproduce the analysis and model development.
The analysis is ongoing, and the notebook is continuously updated with new findings and model improvements.