Yellow Box Problem


This ethical dilemma in self-driving cars refers to scenarios where an accident is unavoidable, posing a challenge for the vehicle's decision-making.

The "Yellow Box Problem" in the AI world isn't as widely used as a formal term, but it refers to a specific ethical dilemma faced by self-driving cars in situations where an accident is unavoidable. It gets its name from the yellow box markings used at some intersections to prevent vehicles from blocking them, highlighting the potential for dangerous conflicts in these areas.

Here's a breakdown of the problem:

The Scenario:

Imagine a self-driving car approaching a yellow box intersection. Suddenly, it detects an obstacle (e.g., a pedestrian or another car) that it cannot avoid colliding with, regardless of what it does.

The Dilemma:

The AI system controlling the car has two main options:

  1. Stay in its lane and collide with the obstacle: This could result in injury or death, violating one of the core ethical principles of self-driving cars (to minimize harm).
  2. Swerve out of the lane and potentially enter the yellow box: This could violate traffic rules and potentially cause harm to other vehicles or pedestrians in the yellow box, also violating ethical principles.

No Easy Answer:

There's no universally accepted solution to the Yellow Box Problem. Each option has its drawbacks, and the "best" choice depends on various factors, including:

  • Severity of potential harm in each scenario: Weighing the potential consequences of each option, both for the self-driving car's occupants and for others involved (see the sketch after this list).
  • Traffic laws and regulations: Following them is crucial, but in an unavoidable collision, ethical considerations might supersede strict adherence.
  • Transparency and explainability: The AI system's decision-making process should be transparent and explainable, even in such complex situations.
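To make the trade-off concrete, here is a minimal Python sketch of a cost-weighted choice between the two options. Everything in it is a hypothetical illustration: the `Option` class, the `choose_action` function, the weights, and the cost estimates are assumptions made for this example, not part of any real autonomous-driving system or API.

```python
# Hypothetical sketch of a cost-weighted decision between the two options.
# All names, weights, and cost values are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Option:
    name: str
    expected_harm: float    # estimated severity of harm, 0 (none) to 1 (severe)
    rule_violation: float   # degree of traffic-rule violation, 0 to 1


def choose_action(options: list[Option],
                  harm_weight: float = 0.8,
                  rule_weight: float = 0.2) -> Option:
    """Pick the option with the lowest weighted cost.

    The weights encode a policy choice (here, prioritizing harm
    minimization over strict rule adherence); setting them is itself
    an ethical and regulatory decision, not an engineering one.
    """
    def cost(opt: Option) -> float:
        return harm_weight * opt.expected_harm + rule_weight * opt.rule_violation

    return min(options, key=cost)


# The two options from the scenario above, with made-up estimates.
options = [
    Option("stay in lane and brake", expected_harm=0.9, rule_violation=0.0),
    Option("swerve into the yellow box", expected_harm=0.4, rule_violation=1.0),
]
print(choose_action(options).name)  # -> "swerve into the yellow box"
```

Note that the arithmetic here is trivial; the genuinely hard part, and the essence of the Yellow Box Problem, is choosing the weights and estimating the harms in the first place.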