Service — Computer Vision

Image Annotation

SAM2-powered pre-annotation reduces manual work by 40–60%. Bounding boxes, polygon segmentation, semantic and instance segmentation, keypoint detection, and medical DICOM labeling. Human experts validate every AI-suggested label — quality verified, not just completed.

40–60%
Annotation time reduction via SAM2 pre-annotation + human validation
≥88%
Gold standard accuracy on every batch — withheld and reannotated if below
6+
Annotation task types: bbox, polygon, semantic, instance, keypoint, DICOM
Medical
DICOM / radiology annotation by MBBS-qualified annotators only
Bounding Boxes · Polygon Segmentation · Semantic Segmentation · Instance Segmentation · Keypoint Detection · DICOM Labeling · SAM2 Pre-annotation · 3D Point Cloud
What It Is

Computer vision training data at speed, without sacrificing accuracy

Image annotation is the process of labeling images with structured information — object locations, boundaries, categories, or attributes — so that computer vision models can learn to perceive and understand visual content. Every object detection, segmentation, and recognition system depends on millions of carefully annotated images.

The fundamental challenge in image annotation is the speed-quality tradeoff. Manual annotation of complex segmentation masks is slow and expensive. Automated annotation is fast but introduces systematic errors — especially at object boundaries, occluded regions, and in domain-specific images (medical scans, satellite imagery, unusual lighting conditions) that differ from the model's training distribution.

Our approach combines SAM2 (Meta's Segment Anything Model 2) for AI-powered pre-annotation with expert human validation. SAM2 pre-draws segmentation masks or bounding box suggestions, cutting annotation time by 40–60% compared with fully manual labeling. Human annotators then validate, correct, and refine these suggestions — with particular attention to boundary accuracy, small objects, occluded targets, and edge cases that AI models consistently miss.

For medical imaging specifically (DICOM radiology, pathology slides, ultrasound), we maintain a separate annotator pool of MBBS-qualified clinicians and radiologists. Medical image annotation requires clinical knowledge — the difference between a tumour margin and a normal tissue boundary is not something that can be specified in annotation guidelines alone; it requires annotators who have studied anatomy and pathology.

What is SAM2 and why does it matter?
SAM2 (Segment Anything Model 2) from Meta is the current state-of-the-art model for zero-shot image and video segmentation. Given a point or bounding box prompt, it generates a high-quality segmentation mask covering the target object. For annotation workflows, this means complex polygon segmentation that would take an annotator 5–10 minutes manually can be pre-generated in seconds and validated in 30–60 seconds. The speed gain is real — the risk is boundary inaccuracy, which human review addresses.
What quality metrics apply to image annotation?
Standard metrics: Intersection over Union (IoU) ≥ 0.85 for bounding boxes, mean IoU ≥ 0.80 for segmentation masks, per-class precision/recall for classification tasks. Gold standard images — with pre-verified correct annotations — are injected at a 6% rate throughout production. Any batch where gold standard IoU falls below threshold is withheld and reannotated. We publish these metrics in every QA report.
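The IoU gate above is a simple geometric check. A minimal sketch of bounding-box IoU (illustrative only, not our production QA code; boxes are assumed to be [x_min, y_min, x_max, y_max] in pixels):

```python
def bbox_iou(a, b):
    """Intersection over Union for axis-aligned boxes [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A 100x100 box shifted by 2 px still clears the 0.85 gate (IoU ~0.92);
# one displaced by 50 px does not (IoU ~0.14).
good = bbox_iou([10, 10, 110, 110], [12, 12, 112, 112])
bad = bbox_iou([10, 10, 110, 110], [60, 60, 160, 160])
```

In production the same comparison runs between each annotator's box and the pre-verified gold box, and batch-level averages drive the withhold decision.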
Image Object Detection Annotation
[Example frame: detections CAR 0.98, PEDESTRIAN 0.96, MOTORCYCLE 0.89, TRUCK 0.94; class summary: CAR ×2, PEDESTRIAN ×2, MOTORCYCLE ×1, TRUCK ×1, TRAFFIC LIGHT ×1; 7 objects · mAP 0.94 · QA PASS ✓]
Computer Vision

SAM2-powered annotation, human-expert verified

AI pre-annotation reduces manual work 40–60%. Every bounding box, polygon, and keypoint is validated by expert annotators. Medical DICOM labeling included.

Get a Free Audit →
Live Annotation Interface

Object Detection Bounding Box Annotation Tool

Expert annotators draw precise bounding boxes, polygons, and semantic masks across millions of images — building training datasets for computer vision models.

ConcaveLabel Studio — Image Annotation · Dataset: Urban Traffic CV · Frame #14,882
[Interface preview, traffic scene: 7 objects (PEDESTRIAN ×2, CAR ×2, MOTORCYCLE ×1, TRUCK ×1, TRAFFIC LIGHT ×1), all confidences ≥ 0.89; frame status: QA PASS]
Annotation Types

Six annotation formats, all with SAM2 pre-annotation + human QA

📦
Bounding Boxes
Axis-aligned and rotated bounding box annotation for object detection. Supports single-class and multi-class labeling with attribute tagging (occluded, truncated, crowd). Compatible with YOLO, COCO, and Pascal VOC formats.
YOLOCOCOPascal VOCRotated bbox
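The three export formats store the same box differently: COCO uses absolute [x, y, width, height] from the top-left corner, YOLO uses center coordinates and sizes normalized to [0, 1], and Pascal VOC uses absolute corner coordinates. A minimal sketch of the COCO-to-YOLO conversion (illustrative, not our export pipeline):

```python
def coco_to_yolo(bbox, img_w, img_h):
    """COCO [x, y, w, h] (absolute pixels, top-left origin)
    -> YOLO [cx, cy, w, h], all normalized to [0, 1]."""
    x, y, w, h = bbox
    return [(x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h]

# A 100x50 box at (50, 100) in a 640x480 image:
coco_to_yolo([50, 100, 100, 50], 640, 480)
```

Because YOLO coordinates are normalized, a box annotation survives image resizing unchanged, which is one reason detection pipelines favor it.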
🔷
Polygon Segmentation
Pixel-precise polygon masks for complex object boundaries. SAM2 generates mask candidates; annotators refine boundary accuracy. Ideal for irregular shapes: vehicles, animals, organic objects. More precise than bounding boxes for training segmentation models.
SAM2 pre-maskBoundary precisionCOCO polygon
🎨
Semantic Segmentation
Pixel-level class labeling in which every pixel in the image is assigned a category — sky, road, building, pedestrian, vegetation. Essential for scene understanding, autonomous driving, and satellite imagery analysis.
Every pixel labeledCustom class ontologyADE20K compatible
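Because every pixel must carry a label, a cheap delivery check is that no pixel falls outside the agreed class ontology and per-class coverage looks sane. A pure-Python sketch over an integer label mask (the class IDs and names here are hypothetical examples):

```python
from collections import Counter

def class_coverage(mask, ontology):
    """Fraction of pixels per class in a 2-D label mask of class IDs.
    Raises if any pixel carries an ID outside the agreed ontology."""
    counts = Counter(pid for row in mask for pid in row)
    unknown = set(counts) - set(ontology)
    if unknown:
        raise ValueError(f"unlabeled or unknown class IDs: {unknown}")
    total = sum(counts.values())
    return {ontology[cid]: counts[cid] / total for cid in counts}

# Toy 2x4 mask with ontology 0 = sky, 1 = road, 2 = vegetation:
mask = [[0, 0, 1, 1],
        [2, 1, 1, 1]]
class_coverage(mask, {0: "sky", 1: "road", 2: "vegetation"})
```

A sudden shift in class proportions between batches (say, "road" dropping from 60% to 20% coverage) is also a useful drift signal during QA.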
🔢
Instance Segmentation
Semantic segmentation that also distinguishes between different instances of the same class — person #1 vs person #2, car #1 vs car #2. Each instance gets a unique ID. Essential for crowd analysis, multi-object tracking setup, and surgical scene understanding.
Unique instance IDsCOCO formatCrowd handling
📍
Keypoint Detection
Labeling anatomical or structural keypoints — human pose estimation (17 COCO keypoints), face landmarks (68 points), hand skeleton, vehicle structural points, animal anatomy. Supports custom keypoint schemas and visibility flagging for occluded points.
Human poseFace landmarksCustom schemasVisibility flags
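In the COCO convention, each keypoint is stored as an [x, y, v] triple where the visibility flag v is 0 (not labeled), 1 (labeled but occluded), or 2 (labeled and visible), which is the visibility flagging described above. A minimal sketch for tallying flags in a flat COCO keypoint list (illustrative only):

```python
def keypoint_visibility(kps):
    """kps: flat COCO keypoint list [x1, y1, v1, x2, y2, v2, ...].
    Returns (visible, occluded, unlabeled) counts."""
    flags = kps[2::3]
    return flags.count(2), flags.count(1), flags.count(0)

# Nose visible, left eye occluded, right eye not labeled:
keypoint_visibility([120, 80, 2, 110, 75, 1, 0, 0, 0])  # (1, 1, 1)
```

The occluded/visible distinction matters at training time: most pose losses downweight or ignore v = 0 points, so annotators must flag rather than guess.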
🏥
Medical DICOM Labeling
Radiology and medical image annotation by MBBS-qualified clinicians. CT, MRI, X-ray, ultrasound, and pathology slides. Tumour segmentation, organ delineation, lesion detection, fracture annotation. HIPAA-aligned workflow. All findings reviewed by a second clinical annotator.
MBBS annotatorsCT / MRI / X-rayHIPAA-alignedDual clinical review
The Process

SAM2 speed + human precision, QA verified

01
Data Audit & Ontology Design
We review a 100–200 image sample from your dataset to assess annotation complexity, object density, occlusion frequency, lighting variation, and edge case distribution. From this, we design your labeling ontology — class definitions with boundary rules, attribute schema, and worked examples for ambiguous cases. For medical projects, we additionally verify the clinical annotation standards required (WHO coding, radiological grading systems) with your clinical team.
Sample auditOntology designBoundary rulesClinical standards check
02
SAM2 Pre-Annotation
SAM2 runs on every image to generate preliminary segmentation masks or bounding boxes depending on your task type. For object detection tasks, we run SAM2 in automatic mask generation mode and then classify masks using a fine-tuned classifier. For specific object types, we run SAM2 with category-specific prompts. All pre-annotations are confidence-scored — low-confidence regions are flagged for priority human attention. Gold standard images (with manually verified correct annotations) are injected at a 6% rate to monitor annotator accuracy continuously.
SAM2 auto-maskConfidence scoring6% gold injection40–60% time saving
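The 6% gold injection can be sketched as a sampler over the production queue (the rate comes from the process above; the seeding and naming here are illustrative, and the real pipeline is more involved):

```python
import random

def inject_gold(images, gold_pool, rate=0.06, seed=0):
    """Mix gold-standard images into a production batch at the given
    rate, shuffled so annotators cannot identify them by position."""
    rng = random.Random(seed)
    n_gold = max(1, round(len(images) * rate))
    batch = list(images) + rng.sample(gold_pool, n_gold)
    rng.shuffle(batch)
    return batch

batch = inject_gold([f"img_{i}" for i in range(100)],
                    [f"gold_{i}" for i in range(20)])
len(batch)  # 106: 100 production images + 6 gold
```

Because annotators cannot distinguish gold images from production work, their accuracy on the gold subset is an unbiased estimate of batch accuracy.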
03
Expert Human Validation & Refinement
Domain-specialist annotators review every AI-pre-annotated image. For standard images: trained annotators refine masks, correct boundary errors, add missed objects, remove false positives, and apply attribute labels. For medical DICOM images: MBBS-qualified clinicians review and annotate, with all findings reviewed by a second clinical annotator. Daily IoU tracking against gold standard images monitors annotator accuracy. Annotators below threshold receive re-calibration or are removed from production.
Domain specialist reviewMedical: dual clinical reviewDaily IoU monitoringBoundary refinement
04
Three-Tier QA & Delivery
Tier 1: Automated schema validation (missing required labels, out-of-bounds coordinates, unusual size distributions). Tier 2: 15% random sample reviewed by a second annotator — IoU calculated against original. Tier 3: Expert spot check on 5% sample plus all flagged images. Batch withheld if gold standard accuracy falls below 88%. Delivery in COCO JSON, Pascal VOC XML, YOLO TXT, or custom format. Full QA report with per-class IoU, annotator agreement rates, and data card.
Auto schema validation15% peer re-annotationIoU ≥ 0.85 gateCOCO / VOC / YOLO format
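Tier 1 checks are mechanical and cheap to express. A sketch of the missing-label and out-of-bounds checks on a COCO-style annotation record (the field names follow the COCO schema; the function itself is illustrative, not our validator):

```python
def tier1_errors(ann, img_w, img_h, valid_categories):
    """Return a list of schema errors for one COCO-style annotation dict."""
    errors = []
    if ann.get("category_id") not in valid_categories:
        errors.append("missing or unknown category_id")
    x, y, w, h = ann.get("bbox", [0, 0, 0, 0])
    if w <= 0 or h <= 0:
        errors.append("degenerate box size")
    if x < 0 or y < 0 or x + w > img_w or y + h > img_h:
        errors.append("bbox outside image bounds")
    return errors

# A 100x100 box at (600, 400) spills past a 640x480 frame:
tier1_errors({"category_id": 1, "bbox": [600, 400, 100, 100]}, 640, 480, {1, 2})
```

Annotations failing Tier 1 never reach the 15% peer-review sample; they are routed straight back to the original annotator.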
Use Cases

Industries we specialise in for image annotation

🏥
Medical Imaging
Tumour segmentation on CT/MRI, fracture detection on X-rays, lesion delineation on pathology slides, organ segmentation for surgical planning AI, ultrasound structure annotation. All annotated by MBBS/MD-qualified clinicians with specialty matching — radiologist for radiology, pathologist for pathology.
🚗
Autonomous Vehicles
Object detection and segmentation for vehicles, pedestrians, cyclists, road markings, traffic signs, and Indian-specific objects (auto-rickshaws, cattle, two-wheelers, unstructured intersections). We specialise in Indian road conditions that generalist Western annotation providers routinely mislabel.
🌾
Agriculture & Satellite
Satellite and drone imagery annotation for crop type classification, disease detection, yield estimation, and PMFBY insurance claim assessment. Annotators understand Indian crop varieties, disease patterns, and regional soil types — domain knowledge that non-specialist annotators lack and accurate labeling requires.
🏭
Manufacturing & Quality Control
Defect detection and quality control image annotation for industrial AI — surface defects, dimensional measurement, assembly verification, and product classification. Supports training models for visual inspection systems on production lines with domain-specific defect taxonomies.
Pricing

Per-image pricing, complexity-based tiers

Priced per image based on annotation type, object density, and domain. SAM2 pre-annotation is included in all tiers — pricing reflects human QA time required.

Get a Dataset Quote →
Bounding boxes (simple, <10 objects/image) · ₹5–12 / image
Polygon / segmentation (standard) · ₹15–35 / image
Dense segmentation (>20 objects/image) · ₹35–80 / image
Keypoint annotation · ₹8–25 / image
Medical DICOM (clinical annotators) · ₹80–300 / image
Satellite / aerial imagery · ₹25–80 / image

Get 100 images annotated free

Send us a sample of 100 images from your dataset. We will annotate them using our SAM2+human pipeline and return the labeled dataset with IoU metrics — no cost, no commitment.