We plan to build tailored data packages for different types of customers. For example, we can build targeted foreground datasets to capture object classes that are under-represented in existing datasets, such as scooter and wheelchair riders, or new vehicle models that will hit the road in the next couple of years and need to be correctly detected and recognised by other autonomous road users.
The second type of data focuses on backgrounds that represent the environments where autonomous vehicles need to operate. These could be specific settings such as airports and campuses, or broader country-specific backgrounds such as local countryside scenes.
Next, we can build reinforcement datasets for autonomous buses and shuttles to increase recognition accuracy along specific routes. A bus travels the same route thousands of times, and risk accumulates with every pass: the AI brain of the bus must avoid mistakes such as missed detections and false positives along the entire route.
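The way risk accumulates over repeated trips can be made concrete with a simple probability calculation. The sketch below uses illustrative numbers, not measured figures: assuming independent trips, even a small per-trip failure probability compounds into a large chance of at least one failure over thousands of runs.

```python
def cumulative_failure_probability(p_per_trip: float, n_trips: int) -> float:
    """Probability of at least one perception failure over n_trips,
    assuming trips are independent with failure probability p_per_trip each.
    P(at least one failure) = 1 - P(no failure on any trip)."""
    return 1.0 - (1.0 - p_per_trip) ** n_trips

# Illustrative: a 0.1% chance of a mistake on any single run
# compounds to roughly a 63% chance over 1,000 runs.
print(cumulative_failure_probability(0.001, 1000))
```

This is why route-specific reinforcement data matters: the per-trip error rate has to be driven extremely low before repeated operation along the same route becomes acceptably safe.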
Finally, we will generate a variety of edge cases and scenarios to test the perception function of autonomous agents. With our data we can count missed detections and false positives to verify, and ultimately certify, the safety of driverless operation.
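A minimal sketch of how such counts might be produced, assuming axis-aligned bounding boxes and a simple greedy IoU match; the function names, box format, and 0.5 threshold are illustrative assumptions, not a description of our actual evaluation pipeline:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def count_errors(ground_truth, predictions, iou_thresh=0.5):
    """Greedy one-to-one matching of predictions to ground-truth boxes.
    An unmatched ground-truth box counts as a missed detection;
    an unmatched prediction counts as a false positive."""
    matched = set()
    missed = 0
    for gt in ground_truth:
        best_j, best_iou = None, iou_thresh
        for j, pred in enumerate(predictions):
            if j in matched:
                continue
            score = iou(gt, pred)
            if score >= best_iou:
                best_j, best_iou = j, score
        if best_j is None:
            missed += 1
        else:
            matched.add(best_j)
    false_positives = len(predictions) - len(matched)
    return missed, false_positives
```

For example, with two ground-truth boxes and one prediction that overlaps only the first, the function reports one missed detection and zero false positives. Production evaluation would typically also account for confidence scores and per-class matching.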