Available June 2026 · Denton TX · Open to relocation

Medical Imaging AI · Clinical Decision Support

AI that works
inside the clinic,
not just the lab.

I build deep learning systems for clinical imaging — segmentation pipelines that measure fetal biometry within clinical error tolerances, diagnostic AI that explains its reasoning to the clinician reading it, and models compressed to run on the hardware a hospital actually has.

MS AI (Biomedical Concentration), University of North Texas. Available June 2026.

4.00/4.00
GPA · UNT
3
Deployed Systems
ASD Detection — HuggingFace Spaces
Fetal Head HC — HuggingFace Spaces
Histopathology — Django + React
5+
Imaging Modalities
Obstetric ultrasound (fetal biometry)
Structural MRI (neuroimaging)
Histopathology (digital pathology)
ECG + IMU (wearable biosignal)
YOLOv8 food/object detection
±1.75mm
Best HC Error · ISUOG ≤3mm

Why it matters

Clinical AI has an execution problem — not a research problem.

Most medical imaging AI stops at the benchmark. Getting it into clinical use requires a different kind of engineering.

01
Models that meet clinical standards
Not "statistically significant" results — systems validated against the actual thresholds clinicians use. Fetal HC within ISUOG's ±3mm. ASD detection sensitivity and specificity measured per acquisition site, not pooled. If it doesn't pass the clinical bar, it doesn't deploy.
02
Explainability built in, not bolted on
Clinicians need to understand what the model is attending to before they act on it. Every clinical system I build includes spatial heatmaps (GradCAM++), uncertainty estimates, and a structured report that tells the clinician exactly what the AI found — and where it is uncertain.
03
Deployment-aware from day one
A model too large to run on clinical hardware is not a clinical model. CNN pruning that delivers 2× compression with no accuracy penalty. Edge inference at 0.033ms. HuggingFace Spaces deployment. Model Cards with regulatory framing (FDA SaMD Class II, EU IVDR Class B).

Flagship Work

Two systems. Two hard clinical problems.

Each deployed, each validated against real clinical standards, each documenting not just what works but where the system fails.

★ Capstone System · Obstetric AI · Deployed
Automated Fetal Head Circumference Measurement

Every routine antenatal scan includes a manual HC measurement — a 2–4 minute calliper-placement procedure subject to inter-observer variation of up to 7mm. At 20 scans/day, that is 153 sonographer hours per year per unit.

This system automates that measurement to 1.75mm mean error — within ISUOG's ±3mm clinical threshold, and 3.4× more accurate than the previous published state-of-the-art. Static frame and cine-loop modes. Gestational age estimation via Hadlock formula. PDF clinical report with GradCAM++ and uncertainty maps. Late-trimester limitation documented explicitly in the Model Card.
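The final measurement step of a pipeline like this can be sketched briefly. Assuming a U-Net has already produced a head segmentation mask and an ellipse has been fitted to its boundary, the circumference follows from the fitted semi-axes via Ramanujan's well-known approximation (the function name and example axis values here are illustrative, not taken from the system itself):

```python
import math

def ellipse_circumference(a_mm: float, b_mm: float) -> float:
    """Head circumference (mm) from fitted ellipse semi-axes,
    using Ramanujan's first approximation to the ellipse perimeter."""
    return math.pi * (3 * (a_mm + b_mm)
                      - math.sqrt((3 * a_mm + b_mm) * (a_mm + 3 * b_mm)))

# Illustrative semi-axes of 60 mm and 45 mm from an ellipse fit
hc = ellipse_circumference(60.0, 45.0)
```

For a circle (a = b = r) the approximation is exact, which makes it easy to sanity-check; for the mild eccentricities typical of fetal head outlines its error is far below the ±3mm clinical tolerance.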

✓ ISUOG ±3mm compliant · ✓ 3.4× better than published SOTA · ✓ Static + cine-loop modes · Hadlock GA estimation · GradCAM++ + uncertainty maps · FDA SaMD Class II framing
01
1.75mm
HC Mean Error
97.36%
Dice (Segmentation)
0.9985
R² (HC accuracy)
153 hrs
Saved / unit / year
Live on HuggingFace Spaces
02
0.994
AUC-ROC
95.6%
Sensitivity
97.2%
Specificity
0.027
Brier Score (calibrated)
Live on HuggingFace Spaces
★ Origin Project · Neuroimaging AI · Deployed
Autism Spectrum Disorder Detection from Brain MRI

ASD diagnosis currently takes 18–24 months from referral in many health systems. Structural MRI is already routinely acquired in paediatric neurodevelopment referrals. This system analyses that existing data algorithmically as a pre-assessment triage layer.

Operating on raw NIfTI volumes, it estimates each subject's probability of ASD, spatially highlights the neuroanatomical regions driving the inference (GradCAM + LIME), quantifies uncertainty via MC-Dropout, and produces a structured PDF with site-specific reliability indicators. 95.6% sensitivity across 1,067 subjects at 17 acquisition sites. Confident false positives (σ ≈ 0.005) documented as the dangerous failure mode.
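MC-Dropout uncertainty, as used above, keeps dropout active at inference and treats the spread of repeated stochastic forward passes as an uncertainty estimate. A minimal NumPy sketch on a toy one-hidden-layer classifier (standing in for the real MRI model; all weights and shapes here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-hidden-layer network standing in for the MRI classifier
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mc_dropout_predict(x, T=100, p=0.5):
    """Run T stochastic forward passes with dropout kept ON at
    inference. The mean approximates the predictive probability;
    the standard deviation serves as the uncertainty estimate."""
    probs = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
        mask = rng.random(h.shape) > p       # Bernoulli dropout mask
        h = h * mask / (1.0 - p)             # inverted-dropout scaling
        probs.append(sigmoid(h @ W2).item())
    probs = np.asarray(probs)
    return probs.mean(), probs.std()

x = rng.normal(size=(16,))
mean_p, sigma = mc_dropout_predict(x)
```

The failure mode flagged in the Model Card is visible in exactly these terms: a wrong prediction with near-zero σ gives the clinician no signal to distrust it, which is why confident false positives are the dangerous case.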

✓ 95.6% sensitivity · 97.2% specificity · ✓ Well-calibrated (Brier 0.027) · GradCAM + LIME dual explanation · MC-Dropout uncertainty · 17-site validated · FDA SaMD Class II framing
14 projects total — CNN pruning (novel Hybrid Crossover method), WECARE edge AI, histopathology, BirdCLEF, and more.
Edge AI · Model Compression · Digital Pathology · Wearable Biosignal · Big Data · Multimodal Fusion
All 14 Projects ↗

Career Arc

Every project built something the next one needed.

This is not a list of coursework. It is a deliberate progression toward production clinical AI.

My B.Tech capstone in 2023 was an ASD detection project that ended with a question rather than an answer: the CNN reached 91% accuracy but the system couldn't explain to a clinician what it was looking at or why. That gap — between a model that works and a system that can be trusted — became the focus of everything that followed.

The MS at UNT was structured to close that gap systematically. Fetal head circumference taught real clinical measurement standards and temporal reasoning. CNN pruning addressed deployment constraints directly. Histopathology built the full-stack deployment capability. WECARE pushed inference to the edge. Every course had a purpose beyond the grade.

The result is a body of work across five imaging modalities, three deployed systems, and performance documented against clinical standards rather than academic benchmarks. The goal throughout was always the same: build AI that a clinician can use, trust, and act on.

View full project history →
B.T
2023 · B.Tech · MIT Manipal
ASD Detection — origin project
First encounter with clinical AI. CNNs beat ViTs at this data scale. The explainability gap became the thesis of everything that followed.
MS1
2024 · UNT · SDAI + ML + FE
Histopathology + deployment stack
First full-stack clinical deployment. Django REST + React. Data scaling study — non-monotonic performance showed more data ≠ better model.
MS2
Fall 2025 · UNT · CSCE 6260
Fetal HC + CNN Pruning + WECARE
Clinical measurement standards. Temporal reasoning. Novel compression method. Edge deployment. The complete clinical AI pipeline.
Now
2026 · Post-course rebuilds
Clinical-grade systems + governance
Both flagship systems rebuilt with XAI, Model Cards, bias audits, regulatory framing, and live deployment. Available June 2026.

Expertise

Where I'm ready to contribute on day one.

Not research directions — concrete capabilities already exercised in clinical contexts.

01
Clinical Image Segmentation & Measurement
U-Net segmentation on real clinical data (DICOM/NIfTI). Boundary-weighted loss, deep supervision, ellipse fitting to anatomical structures. Validated against clinical standards (ISUOG HC threshold). Multi-frame temporal analysis from cine-loop ultrasound.
U-Net · HC18 · ISUOG · OpenCV
02
Explainable AI for Clinical Outputs
GradCAM++ and LIME dual-method explanation with cross-validation. MC-Dropout uncertainty quantification. LLM-generated clinical narratives. PDF report generation including regulatory disclaimers and site-specific reliability indicators.
GradCAM++ · LIME · MC-Dropout · ReportLab
03
Model Compression & Edge Deployment
Structured CNN pruning (novel Hybrid Crossover method): 2× compression, +0.37% accuracy. TorchScript on-device inference at 0.033ms. HuggingFace Spaces deployment. Docker + Flask + AWS Streamlit. DICOM pipeline integration constraints.
Pruning · TorchScript · Docker · AWS
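The Hybrid Crossover method itself is the project's novel contribution, so it is not reproduced here; the standard baseline that structured pruning methods like it build on is L1-norm channel ranking, sketched below on a random weight tensor (shapes and the keep ratio are illustrative):

```python
import numpy as np

def prune_conv_channels(weight: np.ndarray, keep_ratio: float = 0.5):
    """Structured-pruning baseline: rank the output channels of a conv
    weight tensor (out_ch, in_ch, kH, kW) by L1 norm and keep the top
    fraction. Returns the pruned tensor and the kept channel indices."""
    out_ch = weight.shape[0]
    l1 = np.abs(weight).reshape(out_ch, -1).sum(axis=1)  # per-channel L1 norm
    n_keep = max(1, int(round(out_ch * keep_ratio)))
    keep = np.sort(np.argsort(l1)[::-1][:n_keep])        # strongest channels
    return weight[keep], keep

w = np.random.default_rng(1).normal(size=(64, 32, 3, 3))
pruned, kept = prune_conv_channels(w, keep_ratio=0.5)    # 2x channel compression
```

Because whole channels are removed rather than individual weights, the pruned layer stays dense and runs faster on ordinary clinical hardware without sparse-kernel support.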
04
Multi-Site & Multi-Modal Validation
Trained and validated on multi-site datasets (ABIDE-I: 17 sites, different scanners). Site-stratified performance analysis — scanner heterogeneity as a deployment constraint, not just a confound. Subgroup analysis documented in Model Cards.
ABIDE-I · Site Analysis · NIfTI · DICOM
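Site-stratified validation reduces to computing sensitivity and specificity per acquisition site instead of pooling. A minimal sketch in plain Python (the record format and demo values are invented for illustration):

```python
from collections import defaultdict

def site_stratified_metrics(records):
    """records: iterable of (site_id, y_true, y_pred) with binary labels.
    Returns {site: (sensitivity, specificity)} so scanner and site
    effects stay visible instead of being averaged away when pooled."""
    counts = defaultdict(lambda: [0, 0, 0, 0])  # TP, FN, TN, FP per site
    for site, y, p in records:
        if y == 1:
            counts[site][0 if p == 1 else 1] += 1
        else:
            counts[site][2 if p == 0 else 3] += 1
    out = {}
    for site, (tp, fn, tn, fp) in counts.items():
        sens = tp / (tp + fn) if (tp + fn) else float("nan")
        spec = tn / (tn + fp) if (tn + fp) else float("nan")
        out[site] = (sens, spec)
    return out

demo = [("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("B", 1, 1), ("B", 0, 1)]
metrics = site_stratified_metrics(demo)
```

A pooled metric over this demo data would hide that site B catches every case but flags every control; per-site reporting surfaces exactly that kind of scanner-driven disparity.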
05
Regulatory & Clinical Governance
Model Cards with intended use, out-of-scope uses, known failure modes, and regulatory pathway (FDA SaMD Class II, EU IVDR Class B). Bias audits stratified by clinically relevant subgroups. Performance documented in clinical language alongside technical metrics.
FDA SaMD · EU IVDR · Model Card · IEC 62304
06
End-to-End Pipeline Engineering
Full-stack clinical AI: data ingestion → quality filtering → inference → aggregation → XAI → report generation → deployment. PyTorch, TensorFlow, scikit-learn. PySpark + AWS EMR for large-scale data. Streamlit, Django REST, React for clinical interfaces.
PyTorch · TensorFlow · PySpark · Django
Education
MS Artificial Intelligence
Biomedical Concentration · University of North Texas
GPA 4.00 · Graduating May 2026
4.00 / 4.00 GPA
B.Tech Computer Science & Engineering
Minor: Data Analytics · Manipal Institute of Technology
Dec 2023
Target Roles
Medical Imaging AI Engineer · Radiology AI, segmentation pipelines, DICOM workflows
Machine Learning Engineer — Healthcare · Clinical decision support, model deployment, XAI
AI Research Engineer · Applied research in medical imaging, ultrasound, pathology AI
Clinical AI Scientist · Model validation, regulatory evidence, clinical integration
Contact
Available · June 2026
STEM OPT authorized (3-year) · H-1B sponsorship eligible