Abstract

This study empirically compares multiple eXplainable Artificial Intelligence (XAI) techniques for interpreting short-term (weekly) machine learning-based burglary predictions at the micro-place level in Ghent, Belgium. While previous research predominantly relies on SHAP to interpret spatiotemporal crime predictions, this is the first study to systematically evaluate SHAP alongside other XAI techniques, offering both global and local model interpretability within the context of crime prediction. Using data from 2014 to 2018 on residential burglary, repeat and near-repeat victimization, environmental features, socio-demographic indicators, and seasonal effects, we trained an XGBoost model with 76 features to predict weekly burglary hot spots. This model serves as the basis for comparing the interpretative power of different XAI techniques.
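To make the setup concrete, the sketch below shows one way such a pipeline could look: an XGBoost classifier trained on a weekly grid-cell feature matrix, with tree-based SHAP values aggregated into a global importance ranking. This is a minimal illustration on synthetic placeholder data, not the study's actual code; all variable names and hyperparameters are assumptions.

```python
import numpy as np
import xgboost as xgb
import shap

# Synthetic stand-in data: 1,000 grid-cell weeks x 76 features with a binary
# hot-spot label. Purely illustrative; the study's real features cover land
# use, (near-)repeat victimization, socio-demographics, and seasonal effects.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 76))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)

# Gradient-boosted tree classifier, mirroring the abstract's XGBoost setup;
# the hyperparameters here are placeholders.
model = xgb.XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.05)
model.fit(X, y)

# Global interpretability: TreeExplainer computes exact SHAP values for tree
# ensembles; the mean absolute SHAP value per feature gives a global ranking.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
global_importance = np.abs(shap_values).mean(axis=0)
```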
Our results show that built environment and land use characteristics are the most consistent global predictors of burglary risk. However, their influence varies substantially at the local level, revealing the importance of spatial context. While global feature importance rankings are broadly aligned across XAI techniques, local explanations, especially between SHAP and LIME, often diverge. These discrepancies highlight the need for careful method selection when translating predictions into crime prevention strategies.
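The kind of local divergence described here can be inspected directly. Continuing the hypothetical example above (same model, explainer, and X), the snippet below contrasts SHAP and LIME attributions for a single grid-cell week; the LIME configuration is an assumption, not the study's.

```python
from lime.lime_tabular import LimeTabularExplainer

feature_names = [f"feature_{i}" for i in range(X.shape[1])]

# Local SHAP attribution for one observation: exact additive contributions
# of each feature to this single prediction.
shap_local = explainer.shap_values(X[:1])[0]

# Local LIME attribution: weights of a sparse linear surrogate fitted on
# perturbations sampled around the same observation.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                      mode="classification")
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba,
                                           num_features=10)
print(lime_exp.as_list())  # compare signs and rankings against shap_local
```

Because LIME's surrogate depends on random perturbations and a locality kernel while SHAP's tree values are exact, the two attributions need not agree for the same observation, which is the discrepancy the comparison surfaces.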
In addition, this study demonstrates that short-term burglary risks are influenced by complex interactions and threshold effects between environmental and social disorganization features. We interpret these findings through the lens of criminological theory and argue for more integrated approaches that go beyond examining the isolated effects of specific crime predictors. Finally, we call for greater attention to the methodological implications that arise from applying different interpretability techniques, particularly when machine learning model outputs are used to inform crime prevention and policy decisions.
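Interaction and threshold effects of this kind can be probed with tree-based SHAP tooling. The sketch below, again continuing the hypothetical example, uses SHAP interaction values and a dependence plot; this is one plausible way to surface such patterns, not necessarily the authors' procedure.

```python
# Pairwise interactions: shap_interaction_values decomposes each prediction
# into main effects (diagonal) and pairwise interactions (off-diagonal);
# averaging absolute values over a sample ranks feature pairs by strength.
interactions = explainer.shap_interaction_values(X[:200])
mean_interaction = np.abs(interactions).mean(axis=0)

# Threshold effects appear as sharp bends when a feature's SHAP attribution
# is plotted against its value.
shap.dependence_plot(0, shap_values, X, feature_names=feature_names)
```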