San Francisco, with its rich history and evolving urban landscape, confronts a myriad of crimes that stitch patterns across the city's tapestry. This analysis plunges into approximately 12 years of crime data, striving to predict the category of each criminal act, offering insights anchored in time and place.
The dataset, curated from the San Francisco Police Department's Crime Incident Reporting system, covers incidents spanning 1/1/2003 to 5/13/2015. Key facets encompass (a loading sketch follows the list):
- Dates: Marking each crime's moment.
- Category: The crime's classification, our prediction's target.
- Descript: A granular account of the incident.
- DayOfWeek: The day bearing witness to the crime.
- PdDistrict: The police department district in which the incident occurred.
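To make these facets concrete, here is a minimal loading sketch, assuming the Kaggle competition's `train.csv` export; the file name and the `crimes` variable name are illustrative assumptions, not fixed by the analysis.

```python
import pandas as pd

# Load the incident reports; "train.csv" is the assumed Kaggle export name.
crimes = pd.read_csv("train.csv", parse_dates=["Dates"])

# Quick structural check before any modeling.
print(crimes.shape)                                    # (rows, columns)
print(crimes.dtypes)                                   # column types
print(crimes["Category"].nunique(), "crime categories")
print(crimes.head())
```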
The dataset's vast expanse, 878,049 entries across nine columns, unfolds upon inspection. The 'Category' attribute, in particular, sketches the contours of the city's criminal pulse. An exploratory foray amplifies patterns, revealing crime distributions across days of the week and the prevalence of specific crime genres. Visual aids, from histograms to scatter plots, illuminate these narratives; a plotting sketch follows.
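A plotting sketch of the two distributions just described, reusing the `crimes` frame from the loading sketch above; figure sizes and ordering are assumed choices, not the analysis's exact visuals.

```python
import matplotlib.pyplot as plt
import seaborn as sns

# Frequency of each crime category, most common first.
plt.figure(figsize=(10, 8))
sns.countplot(y="Category", data=crimes,
              order=crimes["Category"].value_counts().index)
plt.title("Crime counts by category")
plt.tight_layout()
plt.show()

# How incidents distribute across the days of the week.
day_order = ["Monday", "Tuesday", "Wednesday", "Thursday",
             "Friday", "Saturday", "Sunday"]
plt.figure(figsize=(8, 4))
sns.countplot(x="DayOfWeek", data=crimes, order=day_order)
plt.title("Crime counts by day of week")
plt.xticks(rotation=45)
plt.tight_layout()
plt.show()
```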
The challenge? Predicting crime categories, a multiclass classification puzzle. Enter XGBoost, the chosen analytical linchpin.
Why XGBoost?
- Gradient Boosting Framework: XGBoost, short for eXtreme Gradient Boosting, thrives as a gradient boosting algorithm, optimizing both predictive accuracy and computational swiftness.
- Tackling Multiclass Classification: XGBoost's DNA inherently supports multiclass classification, offering a probability distribution across potential outcomes. This makes it tailor-made for our task of predicting multifaceted crime categories.
- Scalability and Velocity: Parallelizable by design, XGBoost utilizes all CPU cores during its learning phase, accelerating model genesis.
- Regularization: Built-in L1 (Lasso-style) and L2 (Ridge-style) regularization help keep the model resilient to overfitting.
Given its arsenal of strengths, XGBoost emerges as the go-to choice for tasks demanding computational prowess and pinpoint accuracy; a configuration sketch follows.
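A minimal configuration sketch tying these strengths to concrete `xgboost` parameters; every hyperparameter value below is an illustrative assumption rather than a tuned result from this analysis.

```python
from xgboost import XGBClassifier

model = XGBClassifier(
    objective="multi:softprob",  # a probability per crime category
    n_estimators=200,            # number of boosted trees (assumed value)
    max_depth=6,
    learning_rate=0.1,
    reg_alpha=0.1,               # L1 (lasso-style) penalty
    reg_lambda=1.0,              # L2 (ridge-style) penalty
    n_jobs=-1,                   # parallelize across all CPU cores
)
```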
The Workflow
- Feature Sculpting: The dataset undergoes a metamorphosis, molding raw attributes into model-ready features that supercharge the model's predictive might.
- Model Training: XGBoost, having imbibed the data, unravels the intricate choreography between various features and their corresponding crime categories.
- Predictions: Post its learning odyssey, XGBoost ventures into predicting crime categories, classifying incidents based on its learned repertoire (see the end-to-end sketch after this list).
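An end-to-end sketch of the three steps above, using only the fields listed earlier; the derived time features, the 80/20 split, and the hyperparameters are assumptions for illustration, not the analysis's confirmed recipe.

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import log_loss
from xgboost import XGBClassifier

# --- Feature sculpting: derive model-ready columns from the raw fields ---
df = crimes.copy()
df["Hour"] = df["Dates"].dt.hour       # hour of day, 0-23
df["Month"] = df["Dates"].dt.month
df["Year"] = df["Dates"].dt.year
df["DayOfWeek"] = LabelEncoder().fit_transform(df["DayOfWeek"])
df["PdDistrict"] = LabelEncoder().fit_transform(df["PdDistrict"])

features = ["Hour", "Month", "Year", "DayOfWeek", "PdDistrict"]
X = df[features]
y = LabelEncoder().fit_transform(df["Category"])

X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# --- Model training: same configuration as the sketch above ---
model = XGBClassifier(objective="multi:softprob", n_estimators=200,
                      max_depth=6, learning_rate=0.1, n_jobs=-1)
model.fit(X_train, y_train)

# --- Predictions: a probability per category, plus hard labels ---
proba = model.predict_proba(X_valid)
labels = model.predict(X_valid)
print("Validation log loss:", log_loss(y_valid, proba))
```

Log loss suits this multiclass setting because it scores the full probability distribution over categories rather than a single hard label.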
- Libraries: Python's rich palette, from `pandas` for data manipulation, `matplotlib` and `seaborn` for visualization, to `xgboost` and `sklearn` for modeling, fuels this analysis.
- Environment: Python 3.x.
- Data Repository: Dive deeper into San Francisco's crime mosaic on Kaggle.