Building Responsibility in AI: Transparent AI for Highly Automated Vehicle Systems
Replacing a human driver is an extraordinarily complex task. While machine learning (ML) and its subset, deep learning (DL), are fueling breakthroughs in everything from consumer mobile applications to image and gesture recognition, significant challenges remain. Most artificial intelligence (AI) learning applications, particularly those in Highly Automated Vehicles (HAVs) and their ecosystem, have remained opaque - genuine "black boxes." Data goes into one side of the ML system and results come out the other, with little to no insight into how a decision was reached. Achieving accuracy also demands enormous amounts of training data, and the sheer computational complexity of building these DL-based AI models slows both progress in accuracy and the practicality of deploying DL at scale.