A Multimodal Approach to Understand Driver’s Distraction for DMS
Generosi, Andrea
2024-01-01
Abstract
This study introduces a multimodal approach for enhancing the accuracy of Driver Monitoring Systems (DMS) in detecting driver distraction. By integrating data from vehicle control units with vision-based information, the research aims to address the limitations of current DMS. The experimental setup involves a driving simulator together with advanced computer vision and deep learning technologies for facial expression recognition and head rotation analysis. The findings suggest that combining behavioral, physiological, and emotional data can significantly improve the predictive capability of DMS. This research contributes to the development of more sophisticated, adaptive, and real-time systems for improving driver safety and advancing autonomous driving technologies.
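The abstract describes fusing vehicle-control-unit signals with vision-based cues (facial expression, head rotation) to predict distraction. The following is a minimal sketch of what such feature-level fusion could look like, assuming hypothetical features, synthetic data, and a simple classifier; it is illustrative only and not the study's actual pipeline.

    # Hypothetical sketch of feature-level fusion for distraction detection.
    # Feature names, thresholds, and the classifier choice are illustrative
    # assumptions, not the method used in the study.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n = 500

    # Vehicle-control features (simulator CAN-like signals): steering-wheel
    # angle variability and lane deviation, generated synthetically here.
    steering_var = rng.normal(0.2, 0.05, n)
    lane_dev = rng.normal(0.1, 0.03, n)

    # Vision-based features: head yaw in degrees and a facial-expression
    # "engagement" score in [0, 1], also synthetic.
    head_yaw = rng.normal(0.0, 10.0, n)
    engagement = rng.uniform(0.0, 1.0, n)

    # Synthetic label: drivers looking far off-axis with erratic steering
    # are marked as distracted (purely for demonstration).
    y = ((np.abs(head_yaw) > 12) & (steering_var > 0.22)).astype(int)

    # Feature-level (early) fusion: concatenate both modalities per sample.
    X = np.column_stack([steering_var, lane_dev, head_yaw, engagement])

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0
    )
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

Early (feature-level) fusion as above is only one option; a DMS could also combine per-modality predictions at decision level, which the abstract does not specify.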