The automobile industry promises that autonomous vehicles will make roads much safer by eliminating human error. Yet despite rapid technical advances, the way self-driving cars interact with human psychology still hinders seamless use. According to recently published research, the gap lies not in a glitch in the software or the engineering, but between understanding the technology and adapting it to human behavior behind the wheel.
Autonomy, at the cost of vigilance
Ronald McLeod, Honorary Professor of Engineering Psychology at Heriot-Watt University, writes in his book, “Transitioning to Autonomy”, that there is a significant communication gap between self-driving systems and the humans who supervise them. Many drivers do not feel at ease behind the wheel of a car that makes decisions on their behalf.
Autonomous interfaces often fail to convey clearly what the system perceives ahead, which may differ from what the driver sees. The sudden appearance of an object or a person in front of the car, for instance, can cause a crash or a system fault while the driver remains a front-seat spectator. Unsure how the car will react, the driver is left questioning the system’s dependability.
Even a segment leader like Tesla hasn’t solved the problem: numerous investigations have followed serious crashes involving cars with Autopilot and Full Self-Driving (FSD) engaged. The company itself advises users to stay ready to take control if the system misbehaves.
This adds to anxiety about how independent self-driving cars truly are. Simply put, drivers can’t relax: they must constantly balance relying on the system too much against being overly skeptical of it, all while watching for subtle hints of danger.
Anxious passenger in the driver’s seat
The constant strain smart cars place on the human psyche not only fatigues the driver but also leaves them acutely sensitive to unexpected behavior by the autonomous technology. Driving becomes a “watch and wait” trap, since the system ultimately depends on human supervision. Professor McLeod describes this as a vigilance task: the convenience of self-driving lowers the driver’s attention, which slows reaction time in emergencies.

Broadly, drivers face cognitive stress behind the wheel of an autonomous car. As brands like Tesla push ahead with smart cars at full tilt, they will also need to focus on bridging the gap between human psychology and self-driving systems. Human-in-the-loop (HITL) approaches, in which expert assessment and driver feedback flow directly back into the autonomous driving technology, offer one solid path. The book highlights the need for such solutions and for human-centric interfaces to make self-driving cars safer in the future.
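To make the HITL idea concrete, here is a minimal sketch (not taken from the book; the class and field names are hypothetical) of a supervision loop in which the driving system proposes an action, a human supervisor may override it, and each override is logged as feedback that could later tune the driving model:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HitlSupervisor:
    """Hypothetical human-in-the-loop sketch: the system proposes an
    action, a human may override it, and every disagreement is logged
    as feedback for improving the driving model later."""
    feedback_log: list = field(default_factory=list)

    def decide(self, scene: str, proposed_action: str,
               human_override: Optional[str] = None) -> str:
        if human_override is not None and human_override != proposed_action:
            # The disagreement between machine and human is itself
            # the training signal a HITL pipeline would collect.
            self.feedback_log.append({
                "scene": scene,
                "proposed": proposed_action,
                "corrected": human_override,
            })
            return human_override
        return proposed_action

supervisor = HitlSupervisor()
# The system proposes to hold speed; the human brakes for a pedestrian.
action = supervisor.decide("pedestrian near crosswalk", "maintain_speed", "brake")
print(action)                        # brake
print(len(supervisor.feedback_log))  # 1
```

The point of the sketch is that the human is not merely a fallback: their corrections are captured as structured data, which is what distinguishes a HITL pipeline from plain driver supervision.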
