The viability of automated driving depends heavily on the ability of perception systems to provide accurate and reliable real-time information for robust decision-making and maneuvering. These systems must perform reliably not only under ideal conditions but also when challenged by natural and adversarial driving factors, including inclement weather and occluded objects, both of which can lead to perception errors and delays in detection and classification. Hence, it is essential to measure the robustness of the perception systems of autonomous vehicles (AVs) and to explore strategies for making perception more reliable. This paper approaches the problem by evaluating perception performance through ensemble-based uncertainty quantification (UQ) under adverse driving scenarios in both simulation environments and real-world driving conditions. A notional architecture for assessing perception performance is proposed, comprising multiple input sources, a ROS-based interface to the AI models, and an extensible AI architecture whose detection and classification outputs feed uncertainty quantification and post-processing. Inputs can include videos of adversarial examples from real vehicle data and from the CARLA simulation platform. A perception assessment criterion is developed based on an AV's stopping distance at a stop sign as a function of vehicle speed and road surface, such as dry or wet asphalt. Five state-of-the-art computer vision models are used in the experiments: YOLO (v8 and v9), DEtection TRansformer (DETR50 and DETR101), and Real-Time DEtection TRansformer (RT-DETR). Diminished lighting conditions, e.g., due to fog or low sun altitude, are found to have the greatest impact on the performance of the perception models. Adversarial road conditions such as occlusions of roadway objects increase perception uncertainty, and performance drops further when adversarial road conditions are combined with inclement weather. It is also demonstrated that uncertainty grows with distance to a roadway object, diminishing robustness.
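To make the ensemble-based UQ idea concrete, one common approach (a minimal sketch under assumptions; the paper's exact aggregation rule is not stated in the abstract) is to run several detectors on the same frame and use the spread of their confidence scores for a matched object as an uncertainty measure:

```python
import statistics

def ensemble_uncertainty(confidences: list[float]) -> tuple[float, float]:
    """Aggregate per-model confidence scores for one matched object.

    Returns (mean confidence, standard deviation). A high standard
    deviation indicates the detectors disagree, i.e., high uncertainty.
    This is a hypothetical aggregation; the paper's UQ formulation
    may differ.
    """
    mean = statistics.fmean(confidences)
    std = statistics.pstdev(confidences)
    return mean, std

# Example: confidence scores for one stop sign as reported by the five
# detectors (YOLOv8, YOLOv9, DETR50, DETR101, RT-DETR) on the same frame.
scores = [0.91, 0.88, 0.62, 0.70, 0.85]
mean, std = ensemble_uncertainty(scores)
print(f"mean confidence = {mean:.2f}, uncertainty (std) = {std:.2f}")
```

Under occlusion or fog, some models in the ensemble typically degrade before others, so the score spread rises even when the mean confidence is still acceptable.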
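The stopping-distance criterion can likewise be illustrated with a standard constant-deceleration braking model. The friction coefficients and latency term below are assumptions (typical textbook values), not values taken from the paper:

```python
# Sketch of a stopping-distance-based perception criterion.
# Assumed model: d = v * t_latency + v^2 / (2 * mu * g), with nominal
# tire-road friction coefficients for dry and wet asphalt.
G = 9.81  # gravitational acceleration, m/s^2

MU = {"dry_asphalt": 0.7, "wet_asphalt": 0.4}  # assumed nominal values

def stopping_distance(speed_mps: float, surface: str, latency_s: float = 1.0) -> float:
    """Reaction distance plus braking distance, in meters."""
    mu = MU[surface]
    return speed_mps * latency_s + speed_mps**2 / (2.0 * mu * G)

def detection_is_timely(detection_range_m: float, speed_mps: float, surface: str) -> bool:
    """Perception passes the criterion if the stop sign is detected far
    enough away for the vehicle to stop before reaching it."""
    return detection_range_m >= stopping_distance(speed_mps, surface)

if __name__ == "__main__":
    v = 50 / 3.6  # 50 km/h in m/s
    for surface in MU:
        print(f"{surface}: need >= {stopping_distance(v, surface):.1f} m "
              f"detection range at {v:.1f} m/s")
```

At 50 km/h this yields roughly 28 m on dry asphalt and 38 m on wet asphalt, which is why required detection range, and hence the impact of distance on uncertainty, grows on degraded road surfaces.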