Chapter 9: The Path Forward – The 737 MAX AI Tragedy and AI Opportunity

The urgent need for teachers to guide and teach the ethical, appropriate, and tested use of AI

In the afterglow of our explorations, as we saunter through this labyrinth of technology and pedagogy, let us remember Ernest Hemingway's two plane crashes in two days, forever etched in the annals of daring, interlocked with the caprices of destiny and machinery alike.

However, another aviation story presents us with a rather disturbing question: Has AI ever killed a human being, accidentally or otherwise? Unfortunately, yes, though most people are unaware of it. The story concerns the catastrophic failure of an early automated safety system.

Front view of a 737 MAX, a now-successful aircraft whose service began with a major safety failure.

Our academic sojourn has a disquieting underbelly, exemplified by MCAS, the Maneuvering Characteristics Augmentation System: an automated flight-control system, a forerunner of today's AI decision-making, that read the aircraft's angle-of-attack sensor and commanded control-surface adjustments in response.

This system, designed to make the new aircraft handle like its predecessors, was heralded as a marvel of automation. And yet its deployment was nothing short of catastrophic: MCAS relied on a single angle-of-attack sensor, and when that sensor fed it faulty data, it repeatedly forced the aircraft's nose down against the pilots' efforts. Hastened by corporate urgency, MCAS took to the sky tragically unready. The cost? The irreversible forfeiture of 346 lives aboard Lion Air Flight 610 and Ethiopian Airlines Flight 302. These souls became unwilling martyrs to a technology rushed to fruition, a grim testament that our race toward the future must never outpace our diligence and foresight.
https://www.fierceelectronics.com/electronics/killer-software-4-lessons-from-deadly-737-max-crashes
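The failure mode here is instructive for any automated system: a machine acting on a single, unvalidated input. A minimal sketch of the defensive pattern that was missing, cross-checking redundant sensors before permitting an automated command, might look like the following. This is an illustrative simplification, not Boeing's actual control logic; the threshold value and function names are assumptions.

```python
# Illustrative sketch only: NOT Boeing's MCAS logic. It shows the general
# defensive pattern of cross-checking redundant sensors before letting an
# automated system act. Threshold and names are hypothetical.

DISAGREE_THRESHOLD_DEG = 5.5  # hypothetical maximum allowed sensor disagreement


def aoa_cross_check(left_aoa_deg: float, right_aoa_deg: float) -> float | None:
    """Return a trusted angle-of-attack reading, or None if the two
    redundant sensors disagree too much to be trusted."""
    if abs(left_aoa_deg - right_aoa_deg) > DISAGREE_THRESHOLD_DEG:
        return None  # disagreement: refuse to act on unvalidated data
    return (left_aoa_deg + right_aoa_deg) / 2.0


def automated_trim_command(left_aoa_deg: float, right_aoa_deg: float,
                           stall_aoa_deg: float = 14.0) -> str:
    """Decide whether the automation may command nose-down trim."""
    aoa = aoa_cross_check(left_aoa_deg, right_aoa_deg)
    if aoa is None:
        # Sensors disagree: disengage the automation and alert the crew
        # instead of silently acting on one possibly faulty sensor.
        return "DISENGAGE_AND_ALERT"
    if aoa > stall_aoa_deg:
        return "NOSE_DOWN_TRIM"
    return "NO_ACTION"


# With this pattern, one faulty sensor (as in the accidents) no longer
# drives the command on its own:
print(automated_trim_command(left_aoa_deg=22.0, right_aoa_deg=5.0))
# -> DISENGAGE_AND_ALERT
```

The design point is that a safety-critical automation should fail visibly, handing control back to the humans, rather than fail silently on bad data.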

In the realm of AI safety, one cannot overlook the crucial contributions of Emily M. Bender, a linguist at the University of Washington, in highlighting the immediate risks associated with large language models. These models, despite their advancements, carry significant downsides: substantial energy consumption that exacerbates the climate crisis, the propagation of bias and misinformation, and a notable deficiency in accurately parsing non-English languages. This linguistic shortcoming endangers non-English speakers, especially when such models are integrated into critical infrastructure like emergency-response systems. Bender staunchly advocates for transparency in the development of these models, urging creators to state explicitly which languages were used to build them, a guideline now widely known as the "Bender rule." Naming the languages helps avert the dangerous presumption that these systems perform equally well across all linguistic landscapes, forging a pathway toward a more secure and inclusive AI ecosystem.
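In practice, honoring the Bender rule can be as simple as attaching explicit, machine-readable language metadata to a model and refusing to assume coverage for languages never named. The sketch below illustrates the idea; the field names and model name are hypothetical, not a formal standard.

```python
# Illustrative sketch of the "Bender rule" in practice: declare, in
# machine-readable metadata, exactly which languages a model was trained
# and evaluated on. Field names are hypothetical, not a formal standard.

model_card = {
    "model_name": "example-classroom-assistant",  # hypothetical model
    "training_languages": ["en"],                 # English only
    "evaluated_languages": ["en"],
    "caution": (
        "Performance on languages outside 'evaluated_languages' is unknown; "
        "do not deploy for unlisted languages in critical settings such as "
        "emergency-response systems."
    ),
}


def check_language_support(card: dict, language_code: str) -> bool:
    """Refuse to assume coverage for a language the card never names."""
    return language_code in card["evaluated_languages"]


print(check_language_support(model_card, "en"))  # True
print(check_language_support(model_card, "sw"))  # False: Swahili was never evaluated
```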

Such cautionary tales present us, educators and pioneers, with an ethical dilemma as imposing as any African jungle. It is an unyielding truth that the audacity of technological promise must be met with the scrutiny of ethical consideration. As we adopt tools like Khanmigo, Claude, or even autonomous vehicles that promise safety records surpassing human capability, let us be vigilant. For we are not merely curators of information but guardians of the lives that will be shaped, for better or worse, by these innovations.

License


AI Supercharged Learning Copyright © 2023 by A7technology inc. is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
