India has the world's second-largest road network — 6.37 million kilometres — and some of the world's most complex driving conditions. Mixed traffic with two-wheelers, three-wheelers, pedestrians, cattle, and freight vehicles sharing undemarcated road space. Unstructured intersections without traffic signals. Road surfaces ranging from expressways to rural kutcha roads. Near-zero visibility in monsoon rain. Night driving without street lighting.
None of these scenarios exist in the nuScenes, KITTI, Waymo Open, or Argoverse datasets that most AV models are benchmarked on. A model trained on Western AV datasets and tested on Indian roads will miss auto-rickshaws at roundabouts, fail to classify cattle as obstacles, and misread hand signals from traffic policemen. These are annotation coverage gaps — the training data simply never included these scenarios.
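The gap can be made concrete by diffing label taxonomies. A minimal sketch, with illustrative class lists (not the official nuScenes taxonomy), of how missing Indian-road classes surface as coverage gaps:

```python
# Illustrative taxonomies -- NOT the official nuScenes class list.
WESTERN_DATASET_CLASSES = {
    "car", "truck", "bus", "pedestrian", "bicycle",
    "motorcycle", "traffic_cone", "barrier",
}

# Classes a model must handle on Indian roads (hypothetical label names).
INDIAN_ROAD_CLASSES = {
    "car", "truck", "bus", "pedestrian", "bicycle", "motorcycle",
    "auto_rickshaw", "cycle_rickshaw", "cattle", "handcart",
    "traffic_police_hand_signal",
}

def coverage_gaps(source_taxonomy: set, target_taxonomy: set) -> list:
    """Return target-domain classes that the source taxonomy never labels."""
    return sorted(target_taxonomy - source_taxonomy)

print(coverage_gaps(WESTERN_DATASET_CLASSES, INDIAN_ROAD_CLASSES))
# → ['auto_rickshaw', 'cattle', 'cycle_rickshaw', 'handcart',
#    'traffic_police_hand_signal']
```

Every class in that output is one a Western-trained model has, by construction, never seen a single labeled example of.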
The AV annotation market is also the most technically demanding in the industry. LiDAR point cloud annotation, multi-camera sensor fusion, temporal consistency across video frames, 3D cuboid labeling with centimetre-level accuracy — these require not just domain knowledge but purpose-built annotation tooling and workflows. We have built both.
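To make one of these primitives concrete: a 3D cuboid label is typically parameterized by a center, dimensions, and a heading, and the basic quality check is whether a given LiDAR point falls inside the box. A minimal sketch, with an illustrative parameterization (real AV formats carry full 3D orientation and attribute fields):

```python
import math
from dataclasses import dataclass

@dataclass
class Cuboid3D:
    """Illustrative 3D cuboid label: center, size, and yaw about the z-axis."""
    cx: float      # center x, metres
    cy: float      # center y, metres
    cz: float      # center z, metres
    length: float  # extent along the heading axis
    width: float
    height: float
    yaw: float     # heading, radians

    def contains(self, x: float, y: float, z: float) -> bool:
        """Test point membership by rotating the point into the box frame."""
        dx, dy, dz = x - self.cx, y - self.cy, z - self.cz
        c, s = math.cos(-self.yaw), math.sin(-self.yaw)
        bx = c * dx - s * dy   # point expressed in box-aligned coordinates
        by = s * dx + c * dy
        return (abs(bx) <= self.length / 2
                and abs(by) <= self.width / 2
                and abs(dz) <= self.height / 2)

# A car-sized box 10 m ahead, rotated 30 degrees off the sensor axis.
box = Cuboid3D(cx=10.0, cy=2.0, cz=0.9,
               length=3.2, width=1.4, height=1.8, yaw=math.pi / 6)
print(box.contains(10.5, 2.2, 1.0))   # → True (point near the center)
print(box.contains(20.0, 2.0, 1.0))   # → False (point well outside)
```

Point-in-cuboid tests like this underpin both labeling (assigning LiDAR returns to objects) and QA (flagging cuboids that capture too few points); centimetre-level accuracy requirements mean the yaw and extents must be tight, not merely bounding.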