2019
Riener, Andreas; Geisler, Stefan; Pfleging, Bastian; von Sawitzky, Tamara; Detjen, Henrik
8th Workshop Automotive HMIs: User Interface Research in the Age of New Digital Realities (Conference Paper)
In: Mensch und Computer 2019, Gesellschaft für Informatik e.V. (GI), 2019.
Keywords: Automation, Display devices, Experiments, Human engineering, Parking, Railroad cars, Testing, User interfaces, Visualization
@inproceedings{riener20198th,
title = {8th Workshop Automotive HMIs: User Interface Research in the Age of New Digital Realities},
author = {Andreas Riener and Stefan Geisler and Bastian Pfleging and Tamara von Sawitzky and Henrik Detjen},
doi = {10.18420/muc2019-ws-282},
year = {2019},
date = {2019-09-09},
booktitle = {Mensch und Computer 2019},
organization = {Gesellschaft für Informatik eV (GI)},
abstract = {Even though many aspects of automated driving have not yet become reality, many human factors issues have already been investigated. However, recent discussions revealed common misconceptions in both research and society about vehicle automation and the levels of automation. This might be due to the fact that automated driving functions are misnamed (cf. Autopilot) and that vehicles integrate functions at different automation levels (L1 lane keeping assistant, L2/L3 traffic jam assist, L4 valet parking). The user interface is one of the most critical issues in the interaction between humans and vehicles -- and diverging mental models might be a major challenge here. Today's (manual) vehicles are ill-suited for appropriate HMI testing for automated vehicles. Instead, virtual or mixed reality might be a much better playground to test new interaction concepts in an automated driving setting.
In this workshop, motivated by the conference theme, we will look into the potential of new digital realities for concepts, visualizations, and experiments in the car, e.g., by replacing all the windows with displays or transferring the entire environment into a VR world. We are further interested in discussing novel forms of interaction (speech, gestures, gaze-based interaction) and information displays to support the driver/passenger.},
keywords = {Automation, Display devices, Experiments, Human engineering, Parking, Railroad cars, Testing, User interfaces, Visualization},
pubstate = {published},
tppubtype = {inproceedings}
}
Even though many aspects of automated driving have not yet become reality, many human factors issues have already been investigated. However, recent discussions revealed common misconceptions in both research and society about vehicle automation and the levels of automation. This might be due to the fact that automated driving functions are misnamed (cf. Autopilot) and that vehicles integrate functions at different automation levels (L1 lane keeping assistant, L2/L3 traffic jam assist, L4 valet parking). The user interface is one of the most critical issues in the interaction between humans and vehicles -- and diverging mental models might be a major challenge here. Today's (manual) vehicles are ill-suited for appropriate HMI testing for automated vehicles. Instead, virtual or mixed reality might be a much better playground to test new interaction concepts in an automated driving setting.
In this workshop, motivated by the conference theme, we will look into the potential of new digital realities for concepts, visualizations, and experiments in the car, e.g., by replacing all the windows with displays or transferring the entire environment into a VR world. We are further interested in discussing novel forms of interaction (speech, gestures, gaze-based interaction) and information displays to support the driver/passenger.