Closing the Gap between IEC 61511 and the use of Artificial Intelligence in Plant Safety
There is an urgent need to reconcile the established best practices of functional safety, and the requirements of applicable safety standards such as IEC 61511, with the increasing use of Machine Learning (ML) and other forms of Artificial Intelligence (AI). Possible uses of AI/ML in plant automation are motivated by a variety of objectives, such as improving safety monitoring, optimizing production, and reducing human supervision. Most applications of AI in this context unavoidably involve some measure of uncertainty. Traditional methods and measures for managing safety risk, such as those embodied in IEC 61511, are not necessarily helpful for addressing this inherent uncertainty. Other industries, such as the advanced automotive sector, have taken steps to close the gap between functional safety and AI/ML safety – for example, through the recent publication of ISO/PAS 8800. A key part of this strategy is recognizing (in the words of ISO/PAS 8800) that with the use of AI/ML "it is not possible to provide detailed requirements on the process or product characteristics required to achieve an acceptably low level of residual risk associated with the use of AI systems". Instead, closing the gap between functional safety and AI/ML safety requires assurance argumentation – ideally, in the form of a structured argument that captures the critical thinking linking safety claims to supporting evidence. This presentation will share insights gained from extensive experience with AI/ML safety across a variety of technical domains, including a method for adapting conventional hazard and risk assessment methods to take AI/ML into account in SIL determination. This presentation will also explain how safety assurance argumentation can be combined with conventional functional safety to close the gap between IEC 61511 and the use of AI/ML in a plant.