Computer Vision Pipeline
DEPLOYED
Real-time vision inference at the edge, with the monitoring layer that ensures it keeps working after the demo ends.
Achieves 94.3% mAP on custom detection task, up from 71% baseline
Inference latency of 28ms on NVIDIA Jetson at production resolution
Drift detection caught 3 distribution shifts in 4 months of operation
Model size reduced 4× through quantisation with <1% accuracy delta
A manufacturing client needed automated defect detection on a production line running at 60 items per minute. Existing cloud-based solutions introduced unacceptable latency. The model needed to run on edge hardware with limited compute, handle variable lighting conditions, and flag its own uncertainty rather than silently misclassify.
Started with YOLOv8 fine-tuned on a custom annotated dataset of 12,000 images across 8 defect classes. Applied aggressive data augmentation to simulate lighting variation, blur, and partial occlusion. Post-training quantisation reduced model size from 86MB to 22MB while preserving accuracy within tolerance. A calibrated confidence layer surfaces low-certainty predictions for human review rather than forcing a binary output.
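The confidence layer described above can be sketched as temperature scaling plus a review threshold. This is a minimal illustration, not the production implementation: the grid-search fit, the 0.85 threshold, and the function names are all assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(val_logits, val_labels, temps=np.linspace(0.5, 5.0, 46)):
    """Grid-search the temperature that minimises NLL on a held-out set."""
    best_T, best_nll = 1.0, np.inf
    for T in temps:
        p = softmax(val_logits, T)
        nll = -np.log(p[np.arange(len(val_labels)), val_labels] + 1e-12).mean()
        if nll < best_nll:
            best_T, best_nll = T, nll
    return best_T

def route_prediction(logits, T, threshold=0.85):
    """Return (class, confidence), or flag for human review below threshold.

    The 0.85 cut-off is illustrative; in practice it would be tuned
    against the cost of a missed defect vs. reviewer workload.
    """
    p = softmax(logits[None, :], T)[0]
    cls, conf = int(p.argmax()), float(p.max())
    return (cls, conf) if conf >= threshold else ("REVIEW", conf)
```

Routing uncertain frames to a reviewer instead of forcing a binary output is what lets the line keep moving while ambiguous items get a second look.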
Deployed a lightweight drift detection module alongside the model — it monitors the rolling distribution of confidence scores and triggers an alert when the distribution shifts beyond a threshold. This caught a camera calibration issue and a lighting change in the first month, both of which would have silently degraded accuracy for days without detection.
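A drift monitor of this kind can be sketched as a rolling window of confidence scores compared against a reference distribution. The sketch below uses a two-sample Kolmogorov-Smirnov statistic; the window size, the 0.15 threshold, and the class name are illustrative assumptions, not the deployed values.

```python
import numpy as np
from collections import deque

class ConfidenceDriftMonitor:
    """Alert when the rolling confidence distribution drifts from a
    reference sample collected at deployment time (hypothetical sketch)."""

    def __init__(self, reference, window=500, threshold=0.15):
        self.reference = np.sort(np.asarray(reference, dtype=np.float64))
        self.window = deque(maxlen=window)
        self.threshold = threshold  # illustrative KS-statistic cut-off

    def _ks_stat(self, sample):
        # Max gap between the two empirical CDFs, evaluated on all points.
        sample = np.sort(np.asarray(sample, dtype=np.float64))
        grid = np.concatenate([self.reference, sample])
        cdf_ref = np.searchsorted(self.reference, grid, side="right") / len(self.reference)
        cdf_new = np.searchsorted(sample, grid, side="right") / len(sample)
        return float(np.abs(cdf_ref - cdf_new).max())

    def update(self, confidence):
        """Feed one confidence score; return True when drift is detected."""
        self.window.append(confidence)
        if len(self.window) < self.window.maxlen:
            return False  # wait until the window fills before comparing
        return self._ks_stat(self.window) > self.threshold
```

Monitoring confidence scores rather than raw pixels keeps the check cheap enough to run on the same edge device, which is why it can catch camera or lighting changes without a separate telemetry pipeline.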
Edge deployment exposes assumptions you didn't know you had. The model that performed well in staging failed on production hardware due to a different colour profile on the industrial cameras. Sensor-specific preprocessing turned out to be as important as model architecture. Ship to the actual hardware early.
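Sensor-specific preprocessing of the kind described above might look like the sketch below: normalising channel order and applying per-camera gains before inference. The channel order, gain values, and function name are hypothetical, standing in for whatever calibration the actual industrial cameras required.

```python
import numpy as np

def normalise_frame(frame, channel_order="BGR", gains=(1.0, 1.0, 1.0)):
    """Map a raw camera frame into the RGB, gain-corrected space the
    model was trained on. channel_order and gains are per-camera config
    values (illustrative, not the client's actual calibration)."""
    frame = frame.astype(np.float32) / 255.0
    if channel_order == "BGR":
        frame = frame[..., ::-1]  # reorder channels to RGB
    # Per-channel gain correction, clipped back into valid range.
    frame = np.clip(frame * np.asarray(gains, dtype=np.float32), 0.0, 1.0)
    return frame
```

Keeping this step in a per-camera config file, rather than baked into the model, means a sensor swap needs a config change instead of retraining.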