Robustness of Deep Learning-based VRU Detection Models
This research project highlights the critical role of accurate pedestrian detection in assisted and automated driving systems for enhancing road safety. Real-world deployment faces challenges such as image corruption and occlusion, which are addressed here through robust, stylized, and occluded training techniques. Robust Training uses intentionally corrupted examples to simulate real-world degradations, significantly improving model resilience to a range of corruptions; Categorized Training further refines performance by tailoring augmentations to specific corruption types. Stylized Training employs Adaptive Instance Normalization (AdaIN) to introduce texture and style variations, enriching the training data and yielding notable gains in mean Average Precision (mAP50), particularly under noise, blur, and weather-related distortions. Occluded Training generates datasets simulating different occlusion levels, improving detection on occluded samples by ∼2% mAP50. Combined, these techniques improve model adaptability and robustness across diverse conditions, achieving an overall ∼2-4% performance boost. This work establishes a solid foundation for deploying reliable pedestrian detection models in complex environments, advancing the safety and reliability of automated driving systems.
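The corruption- and occlusion-based augmentations described above can be approximated with standard image transforms. The snippet below is a minimal sketch assuming a PyTorch/torchvision pipeline; the corruption types, severities, and the `AddGaussianNoise` helper are illustrative choices, not the project's actual implementation.

```python
import torch
from torchvision import transforms

class AddGaussianNoise:
    """Simulate sensor noise by adding zero-mean Gaussian noise to an image tensor."""
    def __init__(self, std=0.05):
        self.std = std

    def __call__(self, img):
        return torch.clamp(img + torch.randn_like(img) * self.std, 0.0, 1.0)

# Corruption-style robust-training augmentations: noise, blur, and
# random erased patches as a simple stand-in for occlusion.
robust_train_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.RandomApply([AddGaussianNoise(std=0.05)], p=0.5),                    # noise corruption
    transforms.RandomApply([transforms.GaussianBlur(5, sigma=(0.5, 2.0))], p=0.5),  # blur corruption
    transforms.RandomErasing(p=0.5, scale=(0.02, 0.2)),                             # synthetic occlusion
])
```

In a categorized-training setup, separate pipelines of this kind would be built per corruption family (noise, blur, weather) and applied to the corresponding training subsets; AdaIN-based stylization would replace or complement these photometric transforms with texture/style transfer from a style image set.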
Funding agency: Nile University
Duration: 2022-2024 | Topics: assisted and automated driving, VRU detection, model robustness