
The Promises and Perils of Artificial Intelligence

August 1, 2023


This past May, a group of more than 350 scientists and artificial intelligence (AI) experts signed the following one-sentence statement issued by the non-profit Center for AI Safety: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Here we have perhaps the starkest and most ominous warning yet about AI. Because the statement was signed by high-level AI experts, and because it compares AI to pandemics and nuclear warfare, it needs to be taken seriously. But is AI really as threatening to human civilization as pandemics and nuclear bombs? Are predictions that AI could annihilate humanity believable?

Fears Involving New Technologies Are Inevitable

There are always extreme fears attending the introduction of new technologies. When the steam locomotive was introduced, people worried that the human body could not withstand speeds greater than 30 miles per hour (50 km/hr). Television was supposed to rot the minds of children and lead to the end of the written word. Many experts found the Center for AI Safety’s warning statement to be extreme. “AI is simply nowhere near gaining the ability to do this kind of damage,” wrote Nir Eisikovits, Professor of Philosophy and Director of the Applied Ethics Center at the University of Massachusetts, Boston. “…AI won’t blow up the world.”

Nevertheless, Eisikovits does worry about less dramatic downsides of AI. “…AI’s ability to create convincing deep-fake video and audio is frightening, and it can be abused by people with bad intent…the increasingly uncritical embrace of it, in a variety of narrow contexts, means the gradual erosion of some of humans’ most important skills. Algorithms are already undermining people’s capacity to make judgments, enjoy serendipitous encounters and hone critical thinking.”

While the claim that AI will annihilate human civilization seems hyperbolic, it is the somewhat less dramatic risks of unregulated AI that experts caution us about. There is significant potential for AI to be used in the service of disinformation campaigns, for example. Surveillance may become easier and thus privacy more elusive. And there is little question that algorithms generated by AI will affect some people’s jobs, even as they help others. As Bhaskar Chakravorti noted in Fortune magazine, “While AI might help workers get more done in less time, and this increased productivity could increase wages of those employed, it could lead to a severe loss of wages for those whose jobs are displaced.”

Take the case of reading X-rays. In a study by Stanford University investigators, a newly created AI tool performed as well as expert radiologists in reading chest X-rays containing evidence of 14 pathologies. Ultimately, this could mean less work for radiologists. On the other hand, it is plausible that with further improvements AI could do a significantly better job than humans at reading X-rays; AI does not get fatigued or distracted, and it performs at the same level regardless of external circumstances. Hence, while one part of the workforce, in this case diagnostic radiologists, may be disadvantaged by AI, a larger segment of the public could benefit from having their X-rays read more quickly and accurately.

AI Offers Significant Potential for Benefit

In general, concerns over the potential risks inherent in widespread application of AI must be balanced against the very real potential, already being realized in some spheres, for AI to provide important advances that benefit our lives. New drug discovery is one such area. Today, scientists at universities and pharmaceutical companies must use laborious computational and laboratory procedures to identify new molecules that may be useful therapeutics for poorly treated diseases. Now, however, scientists can train AI models to quickly identify such putative therapeutic molecules. We are almost certainly going to see a bevy of new medications developed using AI, medications that hold promise to be more effective and to carry fewer risks than existing drugs. Already, for example, scientists report AI methods that can predict the efficacy and the adverse side effect risks of tools based on the exciting CRISPR gene-editing system. This represents a revolution (the application of AI) on top of a revolution (the CRISPR system) in new drug development.

AI is also already being used to identify rare inherited diseases. AI can be trained to recognize patterns in the sequence of millions of base pairs that make up our genes and to locate anomalies that are obscure to current methods. A review in the June 29, 2023, issue of the New England Journal of Medicine tells the story of a six-month-old infant with a seizure disorder of previously unknown origin. With a single blood test, an AI algorithm identified the genetic abnormality causing the baby’s seizures within eight hours of the blood being drawn. This scenario will become increasingly common, opening the possibility of remarkable new interventions that treat diseases on a molecular basis.

On a more mundane but nevertheless extremely important level, AI promises to help healthcare systems address one of their most pressing problems: the demands that new electronic health record systems place on healthcare workers. Right now, healthcare providers complain bitterly about the time it takes to enter all the information generated by a patient visit into the electronic health record, a burden responsible for considerable burnout among doctors, nurses, and other healthcare professionals. An AI solution allows the doctor and patient to record the visit; AI algorithms then take the information and format it into the electronic health record. The doctor, or other healthcare professional, only needs to proofread the resulting entries. No longer will your doctor be fixated on her computer terminal, furiously typing notes while you answer and ask questions. Instead, she will be able to interact with you, knowing that her AI “sidekick” is taking all the necessary notes and transcribing them directly into the electronic health record.

Regulation Is Essential

We won’t be able to stop the development of AI, nor do we want to. Its promises are too great. But we clearly need to begin now to regulate it so that its risks are controlled and mitigated. Already, the European Union has advanced proposed legislation, the AI Act (AIA), that would place important guardrails around AI. According to one report, “High-risk systems as defined by the AIA necessitate compliance with rigorous premarket obligations, ranging from risk management to various control measures and transparency requirements.”

It is predictable that a new technology—particularly one as technically obscure to so many of us as AI—would induce substantial and sometimes exaggerated fears. “The fear that AI might eventually replace humans in most areas of life challenges our sense of purpose and identity,” writes a group interested in the neuroscience of AI-related fears. The group goes on to offer this reassurance: “While these fears are valid, it is crucial to remember that AI is a tool created by humans and for humans. AI does not possess consciousness or emotions; it only mimics cognitive processes based on its programming and available data.” Misinformation about AI seems to be a growing problem. A report of an AI drone that killed its operator was widely circulated a few months ago; it was pure fiction. Writing about this episode in New Scientist, Matthew Sparkes noted: “This story is just the latest in a string of dramatic tales told about AI that has at points neared hysteria.”

Artificial intelligence will probably not cause mass extinction or destroy the world, and it has many potential upsides. At the same time, its risks are real. It is important that we respond to this new technology not with hysteria but with a balanced stance. That must include carefully crafted legislation that controls what AI can do and makes its workings transparent. The European Union has staked out an impressive first attempt at this; we hope there will be international cooperation among governments to regulate AI, making it safe while allowing its potential to be realized.



Categories: AI