Search Results

Viewing 1 to 5 of 5
Technical Paper

STEAM & MoSAFE: SOTIF Error-and-Failure Model & Analysis for AI-Enabled Driving Automation

2024-04-09
2024-01-2643
Driving Automation Systems (DAS) are subject to complex road environments and vehicle behaviors and increasingly rely on sophisticated sensors and Artificial Intelligence (AI). These properties give rise to unique safety faults stemming from specification insufficiencies and technological performance limitations, where sensors and AI introduce errors that vary in magnitude and temporal patterns, posing potential safety risks. The Safety of the Intended Functionality (SOTIF) standard emerges as a promising framework for addressing these concerns, focusing on scenario-based analysis to identify hazardous behaviors and their causes. Although the current standard provides a basic cause-and-effect model and high-level process guidance, it lacks concepts required to identify and evaluate hazardous errors, especially within the context of AI. This paper introduces two key contributions to bridge this gap.
Journal Article

The Missing Link: Developing a Safety Case for Perception Components in Automated Driving

2022-03-29
2022-01-0818
Safety assurance is a central concern for the development and societal acceptance of automated driving (AD) systems. Perception is a key aspect of AD that relies heavily on Machine Learning (ML). Despite the known challenges with the safety assurance of ML-based components, proposals have recently emerged for unit-level safety cases addressing these components. Unfortunately, AD safety cases express safety requirements at the system level and these efforts are missing the critical linking argument needed to integrate safety requirements at the system level with component performance requirements at the unit level. In this paper, we propose the Integration Safety Case for Perception (ISCaP), a generic template for such a linking safety argument specifically tailored for perception components. The template takes a deductive and formal approach to define strong traceability between levels.
Journal Article

Modes of Automated Driving System Scenario Testing: Experience Report and Recommendations

2020-04-14
2020-01-1204
With the widespread development of automated driving systems (ADS), it is imperative that standardized testing methodologies be developed to assure safety and functionality. Scenario testing evaluates the behavior of an ADS-equipped subject vehicle (SV) in predefined driving scenarios. This paper compares four modes of performing such tests: closed-course testing with real actors, closed-course testing with surrogate actors, simulation testing, and closed-course testing with mixed reality. In a collaboration between the Waterloo Intelligent Systems Engineering (WISE) Lab and AAA, six automated driving scenario tests were executed on a closed course, in simulation, and in mixed reality. These tests involved the University of Waterloo’s automated vehicle, dubbed the “UW Moose”, as the SV, as well as pedestrians, other vehicles, and road debris.
Technical Paper

An Analysis of ISO 26262: Machine Learning and Safety in Automotive Software

2018-04-03
2018-01-1075
Machine learning (ML) plays an ever-increasing role in advanced automotive functionality for driver assistance and autonomous operation; however, its adequacy from the perspective of safety certification remains controversial. In this paper, we analyze the impacts that the use of ML within software has on the ISO 26262 safety lifecycle and ask what could be done to address them. We then provide a set of recommendations on how to adapt the standard to better accommodate ML.
Journal Article

Automated Decomposition and Allocation of Automotive Safety Integrity Levels Using Exact Solvers

2015-04-14
2015-01-0156
The number of software-intensive and complex electronic automotive systems is continuously increasing. Many of these systems are safety-critical and pose growing safety-related concerns. ISO 26262 is the automotive functional safety standard developed for the passenger car industry. It provides guidelines to reduce and control the risk associated with safety-critical systems that include electric and (programmable) electronic parts. The standard uses the concept of Automotive Safety Integrity Levels (ASILs) to decompose and allocate safety requirements of different stringencies to the elements of a system architecture in a top-down manner: ASILs are assigned to system-level hazards, and then they are iteratively decomposed and allocated to relevant subsystems and components. ASIL decomposition rules may give rise to multiple alternative allocations, leading to an optimization problem of finding the cost-optimal allocations.
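The decomposition step this abstract describes can be illustrated with a minimal sketch. The pair table below follows the ASIL decomposition schemes of ISO 26262 (Part 9), but the per-level cost figures are purely hypothetical, and a single-level exhaustive check is a simplification of the exact-solver approach the paper itself takes:

```python
# Sketch of one-level, cost-optimal ASIL decomposition.
# DECOMP lists the admissible decomposition pairs per ISO 26262-9;
# COST holds illustrative (hypothetical) development costs per level.
DECOMP = {
    "A": [("A", "QM")],
    "B": [("B", "QM"), ("A", "A")],
    "C": [("C", "QM"), ("B", "A")],
    "D": [("D", "QM"), ("C", "A"), ("B", "B")],
}
COST = {"QM": 0, "A": 10, "B": 20, "C": 40, "D": 100}

def cheapest_decomposition(asil: str) -> tuple[str, str]:
    """Return the admissible decomposition pair with the lowest total cost."""
    return min(DECOMP[asil], key=lambda pair: COST[pair[0]] + COST[pair[1]])

print(cheapest_decomposition("D"))  # ('B', 'B') under the costs above
```

With these (assumed) costs, an ASIL D hazard is cheapest to decompose into two redundant ASIL B requirements; real allocations must apply such rules iteratively across a whole architecture, which is what motivates the exact-solver formulation in the paper.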