Intelligent Assistant Systems: Support for Integrated Human-Machine Systems

Reference: Boy, G. & Gruber, T. R. Intelligent Assistant Systems: Support for Integrated Human-Machine Systems. 1990.

Abstract: The increasing automation of engineered systems has been accompanied by increased complexity of the interactions between human users and the machines. The well-known dangers of "human error" in complex systems such as power plants, aircraft, and space systems present a challenge to the design of these machines and the procedures for operating them. With the increased automation of "intelligent systems," one might expect that humans play less important roles in the function of the system. In reality, humans need to stay in the loop as automated systems take on more intelligent tasks. Machines can be built to perform many of the routine tasks allocated to humans today. However, since it is difficult to design machines that can handle unexpected situations, humans will remain in the control loop to take care of such situations. The current practice of design engineering is often machine-centered: optimized for the tasks that can be performed by the machine itself, neglecting to support the complementary roles of the human in the loop. This paper takes an alternate view of design: that the designed artifact is an integrated human-machine system (IHMS). In this view, the human operator is a functional component of an intelligent system, contributing to the overall performance of the system. Performance often includes intelligent activity, where the human and machine share responsibilities and perform complementary tasks. A class of programs that support the human in the loop, called intelligent assistant systems (IAS), mediate interactions with the machine and perform some of the intelligent functions required for the system. In this paper we present an analysis of the properties of IHMSs and how they differ from machine-centered systems. These properties motivate three important design problems for the design of IASs: how to adapt to changing operator knowledge and skills, how to distribute intelligent functions, and how to share autonomy between human and machine agents. We describe the roles of IASs in the context of these problems, and lay out some of the design objectives to consider when building them. We illustrate the ideas with the example of an IAS for cooperative fault diagnosis in the space shuttle. Experience with this application suggests areas for future research and development.

Full paper available as hqx, ps.
