The fundamental challenge in image annotation is the speed-quality tradeoff. Manual annotation of complex segmentation masks is slow and expensive. Automated annotation is fast but introduces systematic errors — especially at object boundaries, occluded regions, and in domain-specific images (medical scans, satellite imagery, unusual lighting conditions) that differ from the model's training distribution.
Our approach combines SAM2 (Meta's Segment Anything Model 2) for AI-powered pre-annotation with expert human validation. SAM2 generates draft segmentation masks or bounding-box suggestions in roughly 40–60% of the time manual annotation takes. Human annotators then validate, correct, and refine these suggestions — with particular attention to boundary accuracy, small objects, occluded targets, and edge cases that AI models consistently miss.
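One way to operationalize this validate-and-refine split is to triage AI suggestions by risk before they reach a human. The sketch below is illustrative only — the class names, thresholds, and the `predicted_iou` field (modeled on the per-mask confidence score that SAM-family models emit) are assumptions, not part of any real SAM2 API or of our production pipeline:

```python
from dataclasses import dataclass

@dataclass
class MaskSuggestion:
    """A single AI pre-annotation (hypothetical schema)."""
    image_id: str
    predicted_iou: float    # model's own confidence estimate for this mask
    touches_boundary: bool  # heuristic flag: mask hugs an image edge
    area_px: int            # mask area in pixels

def triage(suggestions, iou_threshold=0.90, min_area_px=400):
    """Split pre-annotations into auto-accept and human-review queues.

    Low-confidence masks, very small objects, and boundary-hugging masks
    are routed to an expert annotator, since those are the failure modes
    AI models most often get wrong.
    """
    auto_accept, needs_review = [], []
    for s in suggestions:
        risky = (
            s.predicted_iou < iou_threshold
            or s.area_px < min_area_px
            or s.touches_boundary
        )
        (needs_review if risky else auto_accept).append(s)
    return auto_accept, needs_review
```

In practice the thresholds would be tuned per project: a medical segmentation task might send every mask to review, while a well-covered consumer-photo domain could auto-accept the bulk of high-confidence suggestions.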
For medical imaging specifically (DICOM radiology, pathology slides, ultrasound), we maintain a separate annotator pool of MBBS-qualified clinicians and radiologists. Medical image annotation requires clinical knowledge — the difference between a tumour margin and a normal tissue boundary is not something that can be specified in annotation guidelines alone; it requires annotators who have studied anatomy and pathology.