
Model Recon: AI Safety Research

Transformer model reconnaissance and debugging tools to keep AI systems secure and reliable.

About Model Recon

I’m building ModelRecon to bring clarity, transparency and safety to the rapidly evolving world of AI. My goal is to build generalised, open-source tools for interpretability and explainability (XAI), so that developers, researchers and deployers can understand why an AI model makes certain decisions, not just what it outputs. I believe that every AI system, whether for vision, language or tabular data, deserves a “glass box,” not a “black box.”

At ModelRecon, I work on designing simple, easy-to-use Python-based libraries and pipelines that plug into existing ML workflows and produce human-readable explanations. I aim to lower the barrier for safe, accountable and auditable AI — especially for developers and teams without specialist ML-safety backgrounds.


Get updates on AI safety research


Contact Me For Discussions

I love to talk about AI