Human-Centered Artificial Intelligence: Trusted, Reliable and Safe

Mar 02, 2020

Well-designed technologies that offer high levels of human control and high levels of computer automation can increase human performance, leading to wider adoption. The Human-Centered Artificial Intelligence (HCAI) model clarifies how to (1) design for high levels of human control and high levels of computer automation so as to increase human performance, (2) understand the situations in which full human control or full computer control is necessary, and (3) avoid the dangers of excessive human control or excessive computer control. The new goal of HCAI is more likely to produce designs that are Trusted, Reliable & Safe (TRS). Achieving these goals will dramatically increase human performance, while supporting human self-efficacy, mastery, creativity, and responsibility.

Design guidelines and independent oversight mechanisms for prospective design reviews and retrospective analyses of failures will clarify the role of human responsibility, even as automation increases. Examples of failures, such as the Boeing 737 MAX, will be complemented by positive examples such as elevators, digital cameras, medical devices, and TRS cars.

Ben Shneiderman is an Emeritus Distinguished University Professor in the Department of Computer Science, Founding Director (1983-2000) of the Human-Computer Interaction Laboratory (http://hcil.umd.edu), and a Member of the UM Institute for Advanced Computer Studies (UMIACS) at the University of Maryland. He is also a 2018 PWIAS International Visiting Research Scholar.

This event is co-sponsored by CAIDA: UBC ICICS Centre for Artificial Intelligence Decision-making and Action. 

This event is free to attend, and no registration is required.