Air Safety Week Article - High Price of Lessons Not Learned from Disasters
posted
Study Highlights High Price of Lessons Not Learned from Disasters

"The surest sign of a system in decay is one that cannot self-
correct." - Anonymous
A new report on the airplane certification process reveals that many
of the same problems identified two decades ago remain uncorrected
today. However, by characterizing the issue as a lack of information,
the report may downplay the tardiness of corrective actions even in
the face of well-documented deficiencies.

The problem extends to approval of post-production modifications and
equipment installations, such as in-flight entertainment (IFE)
systems.

The new report, titled the "Commercial Airplane Certification Process
Study" (CPS), is significant, not only for its discouraging findings,
but because its revelations deal with a subject of primary
importance. Certification is the process by which aircraft, engines,
equipment, systems and components are approved for use. Certification
ideally should address how these elements interact to function
safely in the airplane as a total entity. As such, certification
standards define safety standards. If standards are deficient, or
silent on key issues, safety suffers.

Moreover, U.S. standards tend to set the height of the safety bar
globally. Other nations' regulatory bodies take their cue from the
standards required by the U.S. Federal Aviation Administration (FAA).
Its influence is far-reaching and internationally recognized. If the
FAA fails to act or follow through, other national authorities'
implicit trust in the FAA to act in a timely manner may be misplaced.

The study was conducted under the auspices of the FAA. Of the 34
members of the study team, nearly half were from the FAA, with the
remaining half composed of manufacturer, airline and pilot union
representatives. A few members were from research institutions
heavily dependent on FAA funding. Although a large reason for the
study was increasingly vocal discontent from the National
Transportation Safety Board (NTSB) about obsolescent certification
standards, an NTSB representative was not on the study team. An NTSB
member was on the seven-person oversight board. So also was the
former associate FAA administrator for regulation and certification.
Although the report dwelt on the maintenance aspects of safety and
certification, representation from mechanics' unions is notably
absent from the list of participants, in contrast to the listing of
three pilots representing their unions.

The rigor of certification standards takes on added importance in
light of the November 2001 crash of an American Airlines [AMR] A300.
The Flight 587 accident involves the first loss of a tailfin
manufactured of composite material, and the possible interaction of
the rudder control system and pilot rudder inputs with the tail
structure. As such, it almost certainly carries implications for
certification requirements (see ASW, Jan. 14).

The 2002 study is an outgrowth of fatal accidents in recent years in
which the causes led back to the cracked bedrock of certification
standards. The NTSB cited deficiencies in a number of areas: (1) the
redundancy and reliability of the B737 rudder power control unit, (2)
the long-known hazard posed by flammable vapors in fuel tanks, (3)
the vulnerability of horizontal stabilizers to improper or inadequate
maintenance, (4) inadequate requirements for aircraft to operate in
icing conditions, (5) out-of-date standards for flight data recorders
(FDRs) that have vastly complicated and frustrated accident
investigations, and a host of other issues. Among the others:
overhead bins, which tend to collapse during crashes.

Other organizations have weighed in with their concern. For example,
the Transportation Safety Board (TSB) of Canada proclaimed last year
that airliners are unacceptably vulnerable to the dangers of in-
flight fire the day they leave the factory because of inadequate
standards for determining the fire resistance of many of the
materials used in their construction (see ASW, Sept. 10, 2001). The
TSB's call for tougher standards was an outgrowth of its
investigation into the fatal 1998 crash of a Swissair MD-11, most
probably from a runaway fire caused by electrical arcing. TSB
investigator-in-charge Vic Gerden declared, "A single spark should
not bring down an airplane with 229 people in it."

"If there were no combustible materials in an airplane, there would
be no fire," Gerden said simply. His remarks coincided with issuance
of the TSB's bill of particulars regarding materials in general, and
the inadequacy of a simple 60° Bunsen burner flame test to certify
aircraft wiring.

Similarly to the TSB, officials with the UK's Air Accidents
Investigation Branch (AAIB) have commented about the danger of fire
in inaccessible areas and the need for better fire detection (see
ASW, Dec. 11, 2000, Jan. 1, 2001). This vulnerability, too, is a
design certification issue.

Gerden's observation that fire feeds on combustible materials is a
variation on a similar concern expressed by NTSB officials. They have
said repeatedly that if there were no flammable vapors in fuel tanks,
there would be no explosions.

Past prefigures present
The certification study is a reflection of these accumulating
concerns that standards have not been upgraded with the times, nor
have they been tightened in response to the accumulated evidence from
accidents and incidents. It is perhaps the most comprehensive
assessment since an expert team headed by the late George M. Low
conducted a 1980 review of certification standards. Performed under
the auspices of the National Research Council, the operating arm of
the National Academy of Sciences, this examination, too, was prompted
by fatal accidents that pointed to shortcomings in certification
standards.

The 1980 and the 2002 reports cover much the same ground with respect
to aircraft design, the potential for human error, and the often huge
disparity between assumed operating and maintenance conditions and
actual experience in service. The Low report was an outgrowth of the
fatal 1979 crash of a DC-10 during takeoff at Chicago's O'Hare
International Airport. The left engine and pylon separated, causing
loss of hydraulic fluid and retraction of the slats on the left wing, leading
to stall and loss of control. In the grim postmortem, investigators
were dismayed to discover that engines and pylons were being removed
as a unit by a forklift to save time and effort. The practice was
completely at odds with the maintenance envisioned by the airplane's
designers, which called for separate removal of engine and pylons.

The 2002 report contains a new listing of accidents since Low's
effort two decades ago. However, many of the same problems documented
by Low's team persist, chief among them that standards are tightened
in reaction to high-profile disasters, not upgraded as part of a
proactive program of oversight, continual assessment, revalidation of
key assumptions and correction.

Notwithstanding the common aspects of these two reports, there are
significant differences. The 1980 report contained specific
recommendations. The charter for the 2002 certification review did
not call for recommended actions; rather, the report offers a number
of findings and observations that can serve as the basis for
corrective action.

An FAA official said a response team has been formed, and it is
slated to identify necessary actions by the end of this month. The
implementation strategy will involve "methodical, significant changes
that will really make a difference," the official assured.

A want of action
Bernard Loeb, former head of aircraft accident investigations for the
NTSB, is skeptical. "The FAA is forever undertaking new programs to
get out information," he said. "They start these programs, they peter
out, then there's an accident and they get started again."

"The problem isn't a lack of disseminated knowledge, it's the failure
to act on known problems," Loeb declared.

He pointed to a safety recommendation issued after the fatal 1963 fuel
tank explosion on a B707. Loeb recalled that the Bureau of Safety,
the predecessor to the NTSB, issued a recommendation saying flammable
vapors should be eliminated in fuel tanks. That declared deficiency
continued through the 1996 explosion of the center wing tank of a
Trans World Airlines B747, which stimulated a renewed call from the
NTSB for corrective action.

The fatal 1996 crash of a ValuJet DC-9 from in-flight fire in the
forward belly hold is another "perfect example," Loeb asserted. The
vulnerability of the cargo hold to fire was known. "Recommendations
were made. They didn't do anything," he recalled. After the ValuJet
crash, belly holds in the entire fleet were retrofitted with fire
detection and suppression equipment, and the certification standards
were upgraded.

The fatal January 2000 crash of an Alaska Airlines [ALK] jet is
another example. As one of America's ten largest airlines, Alaska was
one of the spear-carriers for the FAA's new Air Transport Oversight
System (ATOS). ATOS was to be implemented among the big ten first,
and then expanded to cover the rest of the industry. However, from
the NTSB's four days of fact-finding hearings into the crash, it was
evident that the vaunted ATOS program had virtually no bearing on the
circumstances involved in the crash (see ASW, Jan. 1, 2001). ATOS
would not have caught the maintenance problems that led to mechanical
failure of the horizontal stabilizer. When the stabilizer finally
broke free of the tailfin, the doomed airplane tumbled end-over-end
into the Pacific Ocean off the coast of Los Angeles.

Moreover, ATOS, as the brave new world of oversight, began with a
bang of promising rhetoric but has since come under wilting
criticism. In recent testimony to Congress, Kenneth Mead, inspector
general for the Department of Transportation, said that three years
after ATOS was launched, progress has been incremental at best, and
ATOS isn't fully established at any of the 10 major air carriers.

One of the NTSB's most vocal concerns of recent years is not even
mentioned in the report. Safety board officials have frequently and
strongly expressed their frustration with the paucity of data
available from flight data recorders. For example, NTSB officials
decried the fact that in three investigations of B737 rudder
malfunctioning, two of them fatal accidents, the FDRs recorded nine
parameters of aircraft performance, at the most, and in none of these
cases was the position of the rudder pedals recorded (see ASW, March
29, 1999). Safety board officials have highlighted the crying need to
bring FDR requirements into the 21st century (see ASW, March 15,
1999). As an indication of the low priority accorded this issue, FDR
does not even appear in the report's list of acronyms. Yet the issue
of FDR standards has been one of the recurring themes of inadequate
response.

As far as Loeb is concerned, more progress might be made faster by
focusing on the known certification shortcomings already documented
by the NTSB, the TSB of Canada and other investigative bodies. Their
extant recommendations provide a ready list for high-priority action.

Not always assured redundancy
In certain respects, the 2002 certification review does not probe as
deeply into basic issues as the 1980 report. The 2002 review does
address embarrassing failures in the supplemental type certification
process, notably regarding in-flight entertainment systems. However,
it gives scant mention to the problem of certifying software for
today's increasingly computerized jets. It mentions electrical
wiring, and the need for better separation from structure to prevent
chafing and subsequent electrical arcing, but the report does not
mention (1) the potential hazard posed by routing wires inside fuel
tanks, or (2) the potential hazard of routing low-power signal wires
and power-supplying wires in the same bundle. It does not mention one
of the most controversial certification issues since the fatal 1994
crash of a USAir B737 - the design of the airplane's rudder power
control unit (RPCU). During the airplane's original certification,
questions were raised about the design of the dual-concentric servo
valve that formed the very heart of the RPCU.

The issue of flight control system certification may be even more
timely in the wake of the Flight 587 accident, and the issue of fixed
versus variable-ratio rudder limiting doubtless will be an avenue of
inquiry in the investigation.

The certification report also does not address a striking FAA
interpretation of a catastrophic failure condition that emerged from
the controversy over the B737 rudder control system in the wake of
the USAir crash. The FAA decreed that if the pilots had been able to
recover from what was believed to have been an uncommanded rudder
reversal caused by a dual slide jam in the RPCU, "It is not a
catastrophic event as defined by FAA regulations [as] this condition
will not always result in an accident."

In other words, an "extremely improbable" yet potentially
catastrophic situation is allowable if the pilots have the time and
presence of mind to recover the airplane. Hence, the corollary to the
FAA interpretation of a catastrophic failure condition is that it's
only catastrophic if it kills every time (see ASW, Oct. 18, 1999).

Analytical illusions
The term "extremely improbable," not discussed substantively in the
2002 certification study, was examined in some detail in the 1980
report. The term is one of the foundation stones of safety analysis
in the certification process. As codified in a 1982 advisory circular
(AC 25.1309-1), an extremely improbable event is one that occurs on
the order of just once every billion flight hours (1 x 10^-9). To put
the frequency of such an event in perspective, only once in some
116,000 years of continuous flying would a single point failure occur
severe enough that the aircraft and its occupants could be lost. A
fleet of 150 aircraft, each operated 2,000 hours per year, would
accumulate some nine million hours of total flying in roughly 30
years of operation.
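
In round numbers (an illustrative back-of-the-envelope restatement of
the figures above, not part of the study itself; small differences
are rounding):

\[
\frac{10^{9}\ \text{flight hours}}{24 \times 365\ \text{hours per year}}
\approx 1.1 \times 10^{5}\ \text{years of continuous flying}
\]
\[
150\ \text{aircraft} \times 2{,}000\ \text{hours per year} \times 30\ \text{years}
= 9 \times 10^{6}\ \text{fleet hours}
\]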

However, if the analyses support such a high level of safety, why do
airplanes crash at a rate of between one in 10 million and one in
100 million flying hours - orders of magnitude short of the extremely
improbable standard? The answer lies partly in the assumption that
airplanes leave the factory in pristine condition, with no
manufacturing defects, and that they are maintained strictly per
specifications, procedures and schedules.

This is not always the case. Moreover, presumed loss from a single
point failure just once every billion hours is based on the
presumption that every system on the aircraft meets the one-in-a-
billion requirement. But consider an aircraft with 50 systems, each
of which can generate a single point failure that can down an
aircraft at a frequency of one time in a billion hours. In this case,
a particular fleet could lose five aircraft every 10 million hours.
By this calculation, the standards could be interpreted to accept the
loss, on a statistical basis, of an aircraft roughly every three to
four years.
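
Stated symbolically (a sketch of the aggregation argument only; the
system count n and per-system rate are the illustrative figures used
above, not certified values):

\[
\lambda_{\text{aircraft}} = \sum_{i=1}^{n} \lambda_{i} \approx n\,\lambda,
\qquad
E[\text{hull losses}] = \lambda_{\text{aircraft}} \times H_{\text{fleet}}
\]

where each lambda_i is the hourly rate of a catastrophic single-point
failure in system i and H_fleet is the total fleet flying hours. With
n = 50 systems, each at the one-in-a-billion level, the airplane-level
rate is 50 times the per-system figure.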

As the 1980 certification report observed, the 1 x 10^-9 standard
features an inherent weakness: "The failure of safe-life and fail-
safe structure that surrounds such systems is currently not required
to be considered within the system's design requirement." Rather, the
Low report suggested that the worst conceivable combination of
failures should be considered when a design is reviewed for
certification. Shrapnel damage from an exploding engine should not be
dismissed as "extremely improbable," it cautioned. Rather, consider
what might happen if such shrapnel could simultaneously pierce
through two closely spaced hydraulic lines of two theoretically fully
independent and redundant systems. This worst-case approach, the 1980
report intoned, "has not been generally applied."

Improving overall safety
However, the 2002 report does acknowledge the potential pretensions
of assumed redundancy, and the need to consider combinations of
circumstances, however unlikely their frequency:

"Every assumption should be examined to understand the sensitivity of
the assumption on the results ... [and] the design should be changed
to reduce the sensitivity ... One unanticipated failure mode may
occur and have a major effect on the airplane's safety ... [It]
should be addressed by looking at key protective features to
determine if additional safeguards are needed."

As a pertinent example, the report mentioned ignition sources in fuel
tanks as one possible single-point failure, made single-point by the
presence of explosive vapors. Reducing the explosiveness of vapors
would provide for greater protection against a single point hazard.

Redundancy in subsequent service also must be protected, the report
warned: "This redundancy is not required and is not always found when
design changes, maintenance, repairs or alterations involving
critical airworthiness areas are accomplished." In other words, the
one-in-a-billion standard doesn't apply. "Extremely improbable" can
degrade to "more probable." And for this reason, the report
declares, "Establishing such redundant verification requirements ...
in critical airworthiness areas would improve the overall safety of
commercial air carrier operations."

Above all, if action remains sparse in the face of prolific data, the
certification conundrum will remain unchanged.

The State of Certification
Information flow. Critical information may not be available to those
who could act upon it.
Human factors. Failure to account for the human element is a common
thread in accidents.
Lessons learned. Significant safety issues learned through accidents
are sometimes lost with time and must be re-learned at a very high
price.
Safety awareness. Many of the accidents reviewed followed one or more
previous incidents that were not acted upon because those involved
were unaware of the significance of what they had observed. Often
the reason for this lack of awareness was failure to view the
significance of the event at the airplane level, rather than at the
system or subsystem level.
Source: FAA, Commercial Airplane Certification Process Study, March
2002, p. 88

The Canadians' Certification Concerns
On aircraft materials in general:

"Existing material flammability standards allow the use of flammable
materials as well as materials that propagate flame within
predetermined limits. In addition to the associated fire risk, the
majority of these materials pose additional hazards, as there is no
regulation requiring that other flammability characteristics - such
as heat release, smoke generation and toxicity - be measured.
Currently, the most stringent fire tests are reserved for materials
located in inaccessible cabin areas. As a consequence, some of the
most flammable materials within the pressurized portions of the
aircraft are located in hidden, remote or inaccessible areas. These
areas pose a high risk of being involved in potentially
uncontrollable in-flight fires."

On aircraft wiring:

"The failure of aircraft wiring has the capacity to play an active
role in fire initiation ... despite the potential for wire to
initiate a fire, the only material flammability test mandated for the
certification of aircraft wire, including its associated insulation
material, is the '60° Bunsen burner test' ... In effect, the sole
material flammability performance criterion mandated for aircraft
wire insulation material is the determination of how a single
unpowered wire will behave when involved in a fire in progress."

Source: TSB, Material Flammability Standards, Aug. 28, 2001

Then and Now
Two examples comparing findings of a 1980 certification study to the
situation today:

Maintenance oversight

1980: "The committee finds that the detailed quality control audit
teams formerly employed to augment the [FAA] inspectors' ability to
monitor the airlines' maintenance programs have been reduced to more
infrequent visits."

From: Improving Aircraft Safety - FAA Certification of Commercial
Passenger Aircraft, National Academy of Sciences, 1980, p. 11

2002: "Preliminary findings from investigations of the January 2000
crash of Alaska Airlines Flight 261 indicated that the crash may have
been caused by an aircraft maintenance problem. FAA had not performed
an inspection of Alaska Airlines' internal maintenance review program
in two years, and was not routinely conducting comprehensive reviews
of these systems at other carriers. In response to our audit ...
[the] FAA has agreed to perform more comprehensive annual
inspections ... The key now is to follow through."

From: FAA's Fiscal Year 2003 Budget Request, March 13 statement to
Congress of Kenneth Mead, Inspector General, Department of
Transportation

'False confidence' in design

1980: "As it studied the record of aircraft accidents, as well as
present design philosophies, the committee came to recognize a
serious shortcoming in the current regulations and in how they are
applied. The problem has to do with interpretation of the regulations
that permits a manufacturer to demonstrate in the design of an
aircraft that certain failures simply cannot occur and that, once
demonstrated, the consequences to other structure and systems of such
an 'impossible' failure need not be taken into account."

From: Improving Aircraft Safety - FAA Certification of Commercial
Passenger Aircraft, National Academy of Sciences, 1980, p. 8

2002: "Catastrophic events such as thrust reverser deployment in
flight, and fuel tank explosions, have, as one root cause, an
incorrect assumption ... In the case of the thrust reverser ... the
assumption was that the airplane was controllable in the event of
such a deployment. During the development of the Boeing 767, this was
demonstrated in flight, but only at low speed and with thrust at
idle. This was assumed to be the worst condition, erroneously, as
found later in the case of Lauda Air in Thailand (ASW note: the
engines were at climb power when the reverser deployed, and the crew
had but four seconds to assess, decide and react correctly).

"In the case of fuel tank explosions, the assumption was that the
design, operation and maintenance practices would prevent ignition
sources ... A second assumption was that the tank could be flammable
at any time and there was no need to examine the probability of the
tank being flammable. The combination of these assumptions created a
false confidence in the success of the designs ... and the failure to
keep ignition sources out of the tank may have led to three center
tank explosions in the last 11 years.

"In both of these examples ... the design was shown to comply with
the certification requirements."

From: Commercial Airplane Certification Process Study, March 2002, p.
24

A Gap in Standards
Case study: In-Flight Entertainment (IFE) Systems

"There is not a regulation that directly prohibits the powering of
miscellaneous, non-required systems (in this case IFE) from busses
that also power essential or critical level systems. However, the
desire is to power IFE systems from busses that power other
miscellaneous, non-required systems. As an example, the most reliable
busses supply power to the most critical systems, whereas those
busses that are the first to be shed (either manually or
automatically) supply power to systems such as galleys, telephones,
in-seat power supply, and IFE systems. The higher level busses are
the last to be shed, if at all. Therefore, connecting an IFE system
to an essential or critical-level bus without a dedicated means to
remove power inhibits the ability of the crew to remove power from
the IFE system ... in a smoke/fumes emergency."

Source: FAA, Interim Policy Guidance for Certification of In-Flight
Entertainment Systems, Sept. 18, 2000
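
The load-shedding logic described above can be sketched in a few
lines of code. The following is a toy illustration only; the bus
names, shed priorities and loads are hypothetical and do not reflect
any actual aircraft's electrical design. It simply shows why an IFE
system wired to an essential bus, with no dedicated cutoff, stays
powered after the sheddable buses are dropped in a smoke/fumes
emergency.

from dataclasses import dataclass, field

@dataclass
class Bus:
    name: str
    shed_priority: int               # lower number = shed earlier
    loads: list = field(default_factory=list)

def shed(buses, up_to_priority):
    """Drop every load on buses at or below the given shed priority."""
    for bus in buses:
        if bus.shed_priority <= up_to_priority:
            bus.loads.clear()        # everything on this bus loses power

# Hypothetical layout: a sheddable utility bus vs. an essential bus.
utility = Bus("utility bus", shed_priority=1,
              loads=["galley", "in-seat power", "IFE (proper placement)"])
essential = Bus("essential bus", shed_priority=9,
                loads=["flight instruments", "IFE (improper placement)"])

shed([utility, essential], up_to_priority=1)

for bus in (utility, essential):
    print(bus.name, "->", bus.loads or "de-powered")
# utility bus -> de-powered
# essential bus -> ['flight instruments', 'IFE (improper placement)']
# The misplaced IFE stays powered unless it has its own dedicated switch.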

--------------------------------------------------------------------------------

Inaction, Not Lack of Information, Is the Central Certification
Problem

ASW Interviews Bernard Loeb, former head of accident investigations
Dr. Bernard Loeb, D.Sc., spent 24 years with the National
Transportation Safety Board (NTSB), the last six years before his
retirement in 2000 as director of the Office of Aviation Safety. In
this capacity, he had direct knowledge of virtually all the major
safety-related certification issues in recent times; as a veteran of
numerous battles over certification standards, Dr. Loeb is uniquely
qualified to comment on the Federal Aviation Administration's (FAA)
Certification Process Study (CPS).

Prior to joining the NTSB, he served for 15 years in various
aeronautical engineering positions in government and industry. Dr.
Loeb twice received the Presidential Distinguished Rank award, the
highest available to civil servants, and he also received the
Chairman's Award, the highest recognition at the NTSB. He earned his
doctorate in engineering science at The George Washington University.

ASW: Did Safety Board concerns about certification shortcomings have
anything to do with the launch of this review?

Loeb: I believe the FAA initiated the CPS in response to then-NTSB
Chairman Jim Hall's request to me and to Dr. Vernon Ellingstad at the
final Board meeting on August 23, 2000, concerning the fatal 1996
crash of Trans World Airlines [TWA] Flight 800 (see ASW, Aug. 28,
2000). At that meeting, Chairman Hall reminded me and Dr. Ellingstad
that he had asked us to undertake a study of airplane certification
during the USAir Flight 427 accident hearing in March 1999 (see ASW,
March 29, 1999). He stressed that he wanted us to get going on the
study. I received calls from Tom McSweeny's staff after the TWA 800
final Board meeting asking about the nature of our study. At the
time, Mr. McSweeny was associate FAA administrator for regulation and
certification. A few months later, we learned the FAA was going to
initiate its own study.

ASW: As director of aircraft accident investigations, what were some
of your concerns about the adequacy of certification standards?

Loeb: Issues about adequacy of the design and certification of
airplanes had arisen in numerous NTSB accident and incident
investigations in the past, including:

Certification for flight into icing conditions such as those raised
in the fatal 1994 crash of an American Eagle [AMR] ATR 72 at
Roselawn, Indiana, and of a Comair [COMR] EMB 120 at Monroe,
Michigan, in 1997,
Smoke detection/fire suppression raised in the 1996 ValuJet DC-9
crash in the Everglades,
Redundancy of critical systems (especially flight control systems)
raised in the fatal crashes of a United Airlines [UAL] B737 at
Colorado Springs in 1991 and of a USAir B737 near Pittsburgh in 1994,
and perhaps the fatal 2000 crash of an Alaska Airlines [ALK] MD-83
(see ASW, Feb. 7, 2000),
And the fuel tank explosion aboard a Trans World Airlines [TWA] B747
off the coast of Long Island in 1996.

ASW: Are you satisfied with the composition of the task force?

Loeb: My concerns are less about the composition and more about
whether any group put together by the FAA and stuffed with industry
representatives can really do an independent assessment of their
design and certification process.

ASW: What are some of the strengths of this report that should be
taken to heart?

Loeb: The report is on target in noting that the design and
certification process does not adequately consider human performance
issues, especially the concept that aircrew and maintenance personnel
do not always perform as we expect.

The report also is on target on the need for redundancy. It is also
close to being on target in saying that fuel tank vapor flammability
should be reduced. I believe reduction is not adequate. Fuel tank
explosions need to be prevented and elimination of explosive vapors
is one way of doing that. However, in the more than five years since
TWA Flight 800 blew up, even reduction of flammability has not been
accomplished.

ASW: The report calls for greater sharing of information. Is that the
central problem?

Loeb: I believe there is a need for improved information sharing. The
NTSB had addressed the need to improve communications between the
flight standards and certification offices in the FAA as a result of
the American Eagle crash, and in even stronger terms after the Comair
crash. At the time, the FAA said it was working on inter-office
communications while characterizing them as good.

However, I do not believe information sharing is the central problem.
More than adequate information was at hand and known in most of the
major design and certification issues arising out of our accident and
incident investigations. Rather, the designers and certification
officials simply did not act on the known information. The problem
was not lack of data but the lack of action.

ASW: Have the "lessons learned" been sufficiently documented to
warrant action?

Loeb: In many cases the lessons learned have been documented. The
NTSB has often made recommendations to the FAA to address lessons
learned, but the FAA failed to act. One prime example is the FAA's
failure to act after we determined and told the FAA that the fire
protection in Class D cargo holds was inadequate. We were concerned
that the fire liners would not contain, as intended, a fire in the
hold until the airplane could be landed. The FAA failed to require
detectors and extinguishers in Class D compartments as we had
recommended, and ValuJet Flight 592 crashed into the Everglades,
partly as a result of this failure to act. The inaction did not stem
from a lack of information.

ASW: How do you explain the perceived slowness in assimilating the
harsh lessons into improved certification standards and practices?

Loeb: I'm not certain why. I apprehend that the FAA relies too much
on industry input, and acting on lessons learned can cost money.
I believe the FAA needs to make the hard decisions the industry may
be reluctant to make.

ASW: The study conceded, "Certification standards might not reflect
the actual operating environment." Isn't that a revealing statement?
After all, the Safety Board has documented failings in the operating
environment that point to gaps in certification standards.

Loeb: It is quite an indictment. Even more so given that there have
been numerous fatal accidents over the past 30-40 years in which it
was abundantly clear that operating and maintenance personnel do not
always act or react in ways that we expect, and that airplanes and
components too can fail in unexpected ways because they weren't
designed and tested for the proper conditions. The Continental
Airlines [CAL] DC-10 brake failure after a rejected takeoff at Los
Angeles in 1978 is a good example of this failure to design and
certify for a real operating environment. The brakes were tested in
like-new condition to determine their stopping performance. However,
most brakes in line service are worn. And worn brakes cannot absorb
as much energy as new brakes. We did two studies following the
Continental accident, as it wasn't the first case like this. One
study looked at brake certification and the other examined rejected
takeoffs. We finally got action from the FAA, but what a fight it
took.

ASW: You articulated the concept of "reliable redundancy" for
critical flight control systems. Is that concept adequately addressed
in this report?

Loeb: The report addresses the need for more redundancy, and that's
good. However, it fails even to mention the B737 rudder actuation
system, which was the genesis of the NTSB's recommendation on
reliable redundancy (see ASW, April 26, 1999). That is unfortunate.
The absence of any discussion on this matter suggests to me that the
CPS ducked a central issue. The report also does not mention the
possible lack of redundancy in the DC-8/MD-80 horizontal stabilizer
system which, in the crash of Alaska Flight 261, failed in a way that
had not been predicted in the design and certification process.

ASW: There have been numerous documented incidents involving software
failures, quirks, glitches and unanticipated behavior. With ever more
lines of software code going into modern jets, does this report
adequately address the certification of software?

Loeb: The report does not adequately address this significant issue.

ASW: The report cites the nagging persistence of human error, and it
suggests that this conundrum might be mitigated with deployment of
additional technology. Do you agree?

Loeb: Additional technology can help reduce some human errors.
Enhanced terrain warning systems [TAWS] are a good example (see ASW,
March 25 and Dec. 10, 2001).

Historically, the FAA has been slow to promote development of such
systems and then to require their use. There are other areas where
the NTSB recommended improved technology to help prevent human error,
with no real follow-up action from the FAA. As an example, following
the Comair crash, the NTSB recommended a means to detect the presence
of icing and to modify the stall warning to the pilot to reflect the
lower angles of attack at which the airplane will stall with ice-
contaminated wings (see ASW, Aug. 31, 1998 and Sept. 7, 1998). Ice
detectors were installed on the EMB-120s and stall warnings were
modified as a consequence of NTSB recommendations out of the Comair
crash investigation, but the FAA did not change the certification
standards.

ASW: Where does this report specifically address the certification
concerns raised by the NTSB in recent years? Are the top concerns
dealt with? What's been overlooked?

Loeb: The report does address redundancy in the design of critical
systems and the flammability of fuel vapors. However, there is no
mention in the report of the absurd design practice of putting heat-
producing equipment - air conditioning packs - immediately below the
tanks carrying fuel, with no means to prevent the heat from migrating
into the tank. And, although the report addresses the need to do a
better job identifying potential system malfunctions, the report does
not specifically recognize that it may not be possible to identify
all conceivable system malfunctions. We don't know what we don't
know. We need to design and certify with that uncertainty
specifically recognized.

ASW: Are adequate checks and balances in place for the Supplemental
Type Certificate (STC) process?

Loeb: I do not believe so. The Swissair MD-11 crash in 1998
demonstrates that. Its in-flight entertainment network (IFEN) was
installed in a way that was incompatible with the electrical design
philosophy of that aircraft. But the installation received an FAA-
approved STC (see ASW, Sept. 13, 1999).

ASW: Is there a point where a derivative airplane design (in terms of
size, weight, components, etc.) is sufficiently different that it
should be considered a new-design aircraft for certification purposes?

Loeb: I have great concerns about the certification of derivative
aircraft, and I do not believe the process works well enough now. A
close look is needed to better understand when it is unsafe to allow
a derivative aircraft to be approved under the original type
certificate. I also believe that all critical systems and components
probably should meet the latest regulations when a derivative model
is being certified (ASW note: Precisely this point was made by Tony
Broderick, former FAA associate administrator for regulation and
certification. In his keynote remarks at a safety conference three
years ago, Broderick criticized the practice of "grandfathering
essentially forever new production and derivative designs." See ASW,
Nov. 23, 1998).

ASW: In light of the findings in this report, what actions do you
recommend?

Loeb: An independent study of the design/certification process based
on what has been uncovered in prior accidents and incidents.
Knowledgeable experts not tied to the current system should conduct
such a study.

Loeb, e-mail loebber1@aol.com

Reliable Redundancy Needed
"Because the complexity of the 737 rudder system ... its lack of
redundancy in the event of a single-point failure ... the FAA should
require that all existing and future 737s have a reliably redundant
rudder actuation system. This redundancy could be achieved by
developing a multiple-panel rudder surface or providing multiple
actuators for a single-panel rudder surface ...

"Another possible way to achieve redundancy in the rudder control
system would be to modify it so that the standby rudder PCU [power
control unit] would be automatically activated and the main rudder
PCU would automatically be deactivated if the main rudder PCU
actuator system moves the rudder without a pilot command ... The
Safety Board concludes that ... the 737 rudder system designed and
certificated by the FAA is not reliably redundant."

Source: NTSB letter to FAA, April 16, 1999

--------------------------------------------------------------------------------

Considerable Refinement Needed

The Air Transport Oversight System (ATOS) launched with such
breathless fanfare by the Federal Aviation Administration (FAA) three
years ago has miles to go before full implementation and full
realization of its promise. "The system needs considerable
refinement," was the sobering judgment rendered recently by Kenneth
Mead, inspector general for the Department of Transportation
(DOT/IG). At the time of his March 13 remarks, Mead was commenting to
the U.S. House of Representatives' Appropriations Committee on the
FAA's fiscal 2003 budget request. Mead is slated to offer an
expansion of his remarks about the state of ATOS at a House aviation
subcommittee hearing this week. His earlier remarks offer a foretaste
of what the legislators are likely to hear:

"ATOS will rely on analysis of data ... to focus inspection
activities on areas within the carriers' operations that pose the
greatest safety risks. However, three years after FAA initiated ATOS,
the new system is not completed at any of the 10 major air carriers
and much work remains to implement the system.

"When interviewed, 71 percent of [FAA] inspectors said they had not
had adequate training on the new system.

"With the last year, FAA has taken steps to address problems in ATOS
and has made incremental progress, such as hiring staff to analyze
ATOS data. However, to get the system operating as intended, FAA must
complete implementation of the new system, provide critical inspector
training ... and must fully integrate ATOS into its oversight of the
remaining air carriers."
 
posted
I think this is a good time to resurrect this excellent article that appeared in Air Safety Week.
 