Our technology should also enable a decentralized sensor fusion architecture. An OEM building an autonomous car can deploy a dedicated neural network for each type of imaging sensor on the vehicle, and we will provide our datasets to train each of those networks on data matched to its sensor. The central CPU then only needs to combine the top-level outputs those networks produce.
Imagine there is a pedestrian ahead of your car. The cameras are better at object recognition, so they will tell you the obstacle looks like a person; the LiDAR cannot classify it as well, but it can give you a more accurate distance measurement to the same obstacle. With this architecture, you don't need to process gigabits per second of raw data from every sensor, as is common today; your CPU or ECU only performs top-level situational analysis on the networks' outputs.
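As a rough illustration of this kind of late fusion, here is a minimal sketch: the data classes, field names, and bearing-based association threshold are all hypothetical, not part of any actual product API. The camera network contributes the object label, the LiDAR network contributes the range, and the central logic merely associates and combines them.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CameraDetection:
    label: str           # e.g. "pedestrian" (cameras excel at classification)
    confidence: float    # classifier score in [0, 1]
    bearing_deg: float   # angle to the object from the vehicle centerline

@dataclass
class LidarDetection:
    distance_m: float    # LiDAR provides the accurate range
    bearing_deg: float

@dataclass
class FusedObject:
    label: str
    confidence: float
    distance_m: float

def fuse(cam: CameraDetection, lidar: LidarDetection,
         max_bearing_gap_deg: float = 2.0) -> Optional[FusedObject]:
    """Associate detections by bearing; take the label from the camera
    and the distance from the LiDAR."""
    if abs(cam.bearing_deg - lidar.bearing_deg) > max_bearing_gap_deg:
        return None  # the two detections do not refer to the same obstacle
    return FusedObject(cam.label, cam.confidence, lidar.distance_m)

obj = fuse(CameraDetection("pedestrian", 0.93, bearing_deg=1.1),
           LidarDetection(distance_m=23.4, bearing_deg=0.8))
print(obj)  # fused: pedestrian label from camera, 23.4 m range from LiDAR
```

The point of the sketch is the data volume: the central processor sees only a handful of numbers per object, not the raw pixel and point-cloud streams.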