The Influence of Driver’s Age on Glance Allocations during Single-Task Driving and Voice vs. Visual-Manual Radio Tuning
Abstract Driving behaviors change over the lifespan, and some of these changes influence how a driver allocates visual attention. The present study examined the allocation of glances during single-task (just driving) and dual-task highway driving (concurrently tuning the radio using either visual-manual or auditory-vocal controls). Results indicate that older drivers maintained significantly longer single glance durations across tasks compared to younger drivers. Compared to just driving, visual-manual radio tuning was associated with longer single glance durations for both age groups. Off-road glances were subcategorized as glances to the instrument cluster and mirrors (“situationally-relevant”), “center stack”, and “other”. During baseline driving, older drivers spent more time glancing to situationally-relevant targets. During both radio tuning task periods, in both age groups, the majority of glances were made to the center stack (the radio display).
Abstract The challenge of developing a robust, real-time driver gaze classification system is that it has to handle difficult edge cases that arise in real-world driving conditions: extreme lighting variations, eyeglass reflections, sunglasses, and other occlusions. We propose a single-camera end-to-end framework for classifying driver gaze into a discrete set of regions. This framework includes data collection, semi-automated annotation, offline classifier training, and an online real-time image processing pipeline that classifies the gaze region of the driver. We evaluate an implementation of each component on various subsets of a large on-road dataset. The key insight of our work is that robust driver gaze classification in real-world conditions is best approached by leveraging the power of supervised learning to generalize over the edge cases present in large annotated on-road datasets.
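The core idea of the abstract above, mapping per-frame driver appearance features to a discrete set of gaze regions with a supervised classifier trained on annotated data, can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: it stands in for the learned classifier with a nearest-centroid rule over hypothetical head-pose features (yaw, pitch in degrees), and the region names and feature values are invented for the example.

```python
# Toy sketch of discrete gaze-region classification (illustrative only; the
# actual system uses a camera pipeline and a trained classifier over image
# features). A nearest-centroid rule maps per-frame head-pose features
# (yaw, pitch) to a gaze region label learned from annotated frames.
import math


def train_centroids(samples):
    """samples: list of ((yaw, pitch), region) pairs from annotated frames.
    Returns one mean feature vector (centroid) per gaze region."""
    sums, counts = {}, {}
    for (yaw, pitch), region in samples:
        sy, sp = sums.get(region, (0.0, 0.0))
        sums[region] = (sy + yaw, sp + pitch)
        counts[region] = counts.get(region, 0) + 1
    return {r: (sy / counts[r], sp / counts[r]) for r, (sy, sp) in sums.items()}


def classify(centroids, yaw, pitch):
    """Assign a frame to the region whose centroid is nearest in feature space."""
    return min(centroids,
               key=lambda r: math.hypot(yaw - centroids[r][0],
                                        pitch - centroids[r][1]))


# Tiny synthetic "annotated dataset" (hypothetical yaw/pitch values).
train = [((0, 0), "road"), ((2, -1), "road"),
         ((25, -20), "center_stack"), ((30, -15), "center_stack"),
         ((-45, 0), "left_mirror"), ((-40, 5), "left_mirror")]
model = train_centroids(train)
print(classify(model, 28, -18))  # prints "center_stack"
```

The real framework replaces the hand-picked pose features with features extracted from the camera image and the centroid rule with a classifier trained offline, but the online step is the same shape: one discrete region label per frame.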
Observed Differences in Lane Departure Warning Responses during Single-Task and Dual-Task Driving: A Secondary Analysis of Field Driving Data
Abstract Advanced driver assistance systems (ADAS) are an increasingly common feature of modern vehicles. The influence of such systems on driver behavior, particularly with regard to the effects of intermittent warning systems, is sparsely studied to date. This paper examines dynamic changes in physiological and operational behavior during lane departure warnings (LDW) in two commercial automotive systems utilizing on-road data. Alerts from the systems, one using auditory and the other haptic LDWs, were monitored during highway driving conditions. LDW events were recorded during periods of single-task driving and dual-task driving. Dual-task periods consisted of the driver interacting with the vehicle’s factory infotainment system or a smartphone to perform secondary visual-manual (e.g., radio tuning, contact dialing, etc.) or auditory-vocal (e.g., destination address entry, contact dialing, etc.) tasks.
Abstract This paper presents the results of a study of how people interacted with a production voice-command-based interface while driving on public roadways. Tasks included phone contact calling, full address destination entry, and point-of-interest (POI) selection. Baseline driving and driving while engaging in multiple levels of an auditory-vocal cognitive reference task and manual radio tuning were used as comparison points. Measures included self-reported workload, task performance, physiological arousal, glance behavior, and vehicle control for an analysis sample of 48 participants (gender balanced across ages 21-68). Task analysis and glance measures confirm earlier findings that voice-command interfaces do not always allow the driver to keep their hands on the wheel and eyes on the road, as some assume.