Sun December 08 2002, 10:04 PM
Cockpit Automation May Bias Decision-making
This from Air Safety Week:
A highly automated cockpit can lull aircrews into a false sense of security, leaving them more prone to making errors in certain situations.
In a new study that is the first direct comparison of cockpit tasks carried out with and without the aid of a computer, the error rate among student volunteers was some 65% when they were presented with intentionally false prompts from the computer, even though other instrument readings contradicted the prompts.
The study was carried out by Dr. Linda Skitka, an associate psychology professor at the University of Illinois at Chicago. Colleagues Kathleen Mosier at San Francisco State University and Mark Burdick at NASA's Ames Research Center assisted in the study and co-authored the report.
Using a basic flight simulator, 80 student volunteers were divided into two groups. Half were to "fly" with the aid of a computer system; the other half were to rely on instruments. Both groups were told that their instruments were 100% reliable. The volunteers flying with the automated aid were told that it was highly reliable but not infallible. The idea was to test for errors of commission and of omission. An error of commission involved complying with an erroneous computer prompt, even though the instruments provided contradictory information. An error of omission involved failing to respond to a correct computer prompt (i.e., one consistent with information displayed on the instruments).
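The commission/omission taxonomy can be sketched as a simple classifier. This is purely illustrative and not from the study's materials; the function and argument names are assumptions made for the example.

```python
# Illustrative sketch (not from the study): classifying one simulator
# trial under the commission/omission taxonomy described above.
# Function and parameter names are assumptions for illustration only.

def classify_trial(prompt_correct: bool, followed_prompt: bool) -> str:
    """Classify a single trial.

    prompt_correct: the computer prompt agreed with the instruments.
    followed_prompt: the volunteer acted on the prompt.
    """
    if not prompt_correct and followed_prompt:
        return "commission error"   # complied with an erroneous prompt
    if prompt_correct and not followed_prompt:
        return "omission error"     # failed to respond to a correct prompt
    return "correct response"

# A volunteer who follows a false prompt commits an error of commission:
print(classify_trial(prompt_correct=False, followed_prompt=True))
```

The point of the sketch is that the two error types are mirror images: one is misplaced trust in the computer, the other is misplaced inattention to it.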
The six omission-error opportunities were the same in both the automated and non-automated conditions. As such, they provided a means of directly comparing the relative levels of vigilance between the two groups. It was a deliberate effort to assess whether automated decision aids lead to a decrease in vigilance. "Put simply, the answer is yes," Skitka and her colleagues wrote.
In other words, computers may be playing to basic human weaknesses, described as follows:
Cognitive laziness. In a telephone interview, Skitka said, "People like to take short cuts." The tendency to go with "first impressions," for example.
Social loafing. People expend less effort in a group situation; individuals tend to "slack off" when responsibilities are shared. When the computer is part of the group, the same tendency applies.
Diffusion of responsibility. People tend to conform to the demands of authority figures. Computers, too, may be seen as authority figures, smarter than their users, and people are more likely to blindly follow the computer's message, even in the face of contradictory information.
As a consequence, the introduction of computerized decision aids to reduce errors may be creating new types of errors. "Changing the context changes the opportunity for error," Skitka observed.
Are there solutions? Skitka conceded that there are no quick and easy answers. Pointing out errors in training is the most immediate action that can be taken. There may also be personality differences, in which some people are innately more vigilant than others; this possibility is an avenue for further research. Engineering the pilot out of the airplane is not an option, she declared. "You cannot program for every possible contingency."
"Flying" With and Without an Automated Monitoring Aid
Results in brief:
Despite the presence of a 100%-reliable backup (the gauges), error rates were much higher when the computer failed to prompt needed tasks: computer-aided volunteers responded only 59% of the time, versus a 97% response rate among volunteers relying solely on instruments.
When the computer gave correct direction, the correct response rate was higher for computer-aided volunteers (83%) than for those relying solely on instruments (72%).
The automated decision aid provided no subjective reduction in perceived workload.
Source: Skitka et al., http://www.idealibrary.com