The Ethics of Artificial Intelligence: Striking a Balance


Although AI threatens to upend jobs, its bigger promise is delivering a more efficient and sustainable world.

By Kevin Delaney, Senior Writer, Connected Futures

From banks to battlefields, artificial intelligence (AI) is on the rise. But as AI influences more and more key decisions, it also raises complex questions.

Will smart machines eliminate workers or help them? Should we worry about AI developing a mind of its own?

“We need to look at the technology pieces that are around right now, and how they might interact in ways that we haven’t anticipated,” said Dr. Colin Allen, a professor of the history and philosophy of science and medicine at Indiana University Bloomington, and co-author of Moral Machines: Teaching Robots Right from Wrong.

“We need not be so focused on the Terminator/singularity scenarios,” he added, referring to the moment when machines might one day become fully self-aware and superintelligent (in the Terminator’s case, with disturbing consequences).

AI doesn’t have to be all doom and gloom. Business leaders, tech innovators, and politicians are gaining a better understanding of human-machine interactions, while starting to see AI’s promise of a more efficient and sustainable world.

“We are just at the beginning,” said Dr. Peter Asaro, a philosopher of science, technology, and media at the New School and co-founder of the International Committee for Robot Arms Control. “These ethical issues in society are going to have to be worked out. Where do we want machines? How are we going to manage the consequences of automation in different sectors?”

AI is not here to threaten jobs and ways of life. Work will change, but many believe that displaced workers will move into better jobs, and that AI will simply take over some of the more tedious tasks.

Did the Luddites Have a Point?

From truck drivers to white-collar knowledge workers, AI promises big changes. And change breeds fear.

“The Luddites are often cartoonishly portrayed as people who hated machines,” Allen said of 19th-century workers who fought technology. “But they were worried about machines taking their jobs.”

The challenge is to balance similar worries with the great benefits that AI promises.

“My view is that we shouldn’t be blind proponents of these systems, but we shouldn’t be blind critics of them either,” said Dr. David Danks, a professor of philosophy and psychology at Carnegie Mellon University, which recently received a $10 million grant to explore the ethics of AI.  

Danks argues that the real strength of AI systems is to augment workers, not replace them. By taking over time-consuming, menial tasks, AI frees humans to focus on what they do best: creating and innovating.

For example, at Duke University, scientists and statisticians are spending less time waiting for resources to free up and more time working on projects.

Duke’s IT organization is using pre-configured virtual machines fully equipped with analytic tools.

“The core mission of the university is improving academics and leading in research,” said Richard Biever, Duke University’s chief information security officer. “This is about making resources available to researchers immediately.”

Other organizations also understand AI’s strength in taking over tedious, labor-intensive jobs.  

Nick Rockwell, the chief technology officer of The New York Times, is exploring new ways to cope with the deluge of data coming the paper’s way, while freeing reporters and editors to do what they do best.

“What kinds of stories are buried in these incredible data sets that are being generated?” he asked. “How do we find them? How do we action them? How do we work with these giant data sets?”

When asked if AI plays a role, Rockwell was emphatic: “It has to, because it’s well beyond human capability to sift through this volume of data.”

IT is another area where AI shows great potential. “AI can free resources for IT professionals to be more creative in the design of their systems,” noted Danks.

Cybersecurity, for example, is a constant burden for IT teams. AI can help.

“No question that some of the most sophisticated AI systems right now are in cyber-defense,” Danks said, “in terms of being autonomous to develop novel strategies and take actions.”

Still, Danks warns that AI will cause serious upheavals in many walks of life. “I don’t think that we, as a society, are prepared to deal with the fact that suddenly there are going to be large numbers of people who will have to find new opportunities,” he said.

Truck drivers, for example.

“If you talk about eliminating truck drivers,” said Asaro, “you are eliminating the No. 1 employment opportunity for men in many states. What happens to those workers? What do we expect them to do in society, and what opportunities will be made for retraining? If we fail, then I think we are ripe for a lot of social unrest.”

Thinking through such consequences will be critical, rather than tossing new technology out into the world, Frankenstein-like, without considering the effects.

“We are sort of like the apocryphal frog in the water that is slowly raised to boiling,” said Allen. “These changes are coming, and we keep making small adjustments in our behavior to deal with them. But how do we go from collectively just reacting to technology changes, to anticipating them?”

Managing the Team (Human and Non)

Change demands leadership. One quality of good leaders is knowing the strengths and weaknesses of everyone on their teams. More and more, AI will be another team member.

“AI will always have certain weaknesses in understanding human cultural contexts and social implications,” Asaro said. “Humans will always be weaker at analyzing large data sets. As managers create teams that have both humans and AI, it will be crucially important to understand this.”

Asaro also warns of “automation bias”: the tendency to ascribe too much weight to machine advice.

Decision-makers must balance all viewpoints — human and machine — and ensure that information fed to AI systems is accurate. After all, an algorithm is only as good as its data.

“The challenge for AI engineers,” Asaro added, “is going to be to design AI systems that let people challenge and investigate and uncover why the system came to that conclusion.”

If doctors receive a machine-generated diagnosis, for example, they will want to know exactly why the system reached that conclusion. The same goes for an autonomous weapons system.
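To make Asaro’s point concrete, here is a minimal sketch, in Python, of what such a challengeable system might look like: a toy linear risk model that reports not only its conclusion but how much each input pushed it there. Every feature name, weight, baseline, and threshold below is hypothetical, invented purely for illustration; real diagnostic systems are far more complex, but the principle of surfacing per-feature contributions is the same.

```python
# A toy "explainable" risk score: the model reports its conclusion
# together with the contribution of every input, so a human reviewer
# can see, and challenge, why it reached that conclusion.
# All names, weights, baselines, and the threshold are hypothetical.

FEATURES = {                       # weight applied per unit above baseline
    "resting_heart_rate": 0.04,
    "systolic_bp": 0.02,
    "age": 0.03,
}
BASELINES = {"resting_heart_rate": 70, "systolic_bp": 120, "age": 50}
THRESHOLD = 1.0                    # hypothetical cutoff for flagging a case

def explain_and_score(patient: dict) -> None:
    """Print the conclusion plus each feature's contribution to it."""
    contributions = {
        name: weight * (patient[name] - BASELINES[name])
        for name, weight in FEATURES.items()
    }
    score = sum(contributions.values())
    flagged = "FLAG" if score >= THRESHOLD else "no flag"
    print(f"Risk score: {score:.2f} ({flagged}, threshold {THRESHOLD})")
    # List the most influential factors first, so a reviewer sees at a
    # glance what drove the conclusion.
    for name, contrib in sorted(
        contributions.items(), key=lambda kv: abs(kv[1]), reverse=True
    ):
        print(f"  {name}: {contrib:+.2f}")

explain_and_score({"resting_heart_rate": 95, "systolic_bp": 150, "age": 62})
```

Run on the sample patient, the sketch flags a risk score of 1.96 and shows that the elevated heart rate contributed the most (+1.00). A doctor who doubts the conclusion can go straight to that reading and ask whether the data behind it was accurate, exactly the kind of scrutiny these systems should invite.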

Knowing the strengths and weaknesses of AI systems is critical from the start.

“AI is not a magic bullet,” said Danks, “or a cure-all that can fix every problem.”

He stresses that AI’s impact must be assessed in the first stages of a project’s development.

“People think this is a horribly intractable problem,” said Danks. “But having ethical, socially responsible AI is not a difficult goal, as long as we think about it from the beginning. You can’t just bolt on ethics at the end.”