Calls to Share Negative Sides of New Technology

by Cade Metz



San Francisco

In July, two of the world’s top artificial intelligence labs unveiled a system that could read lips.

Designed by researchers from Google Brain and DeepMind, the two labs owned by Google’s parent company, Alphabet, the system could at times outperform professional lip readers. When reading lips in videos, it misidentified about 40 percent of words, while the professionals missed about 86 percent.

In a paper that explained the technology, the researchers described it as a way of helping people with speech impairments. In theory, they said, it could allow people to communicate just by moving their lips.

But the researchers did not discuss the other possibility: better surveillance. A lip-reading system is what policymakers call a “dual-use technology,” and it is typical of many new technologies emerging from top A.I. labs. Systems that automatically generate video could improve moviemaking, or they could feed the creation of fake news. A self-flying drone could capture video at a football game, or it could kill on the battlefield.

Now a group of 46 researchers, called the Future of Computing Academy, is urging the research community to rethink the way it shares its work. When publishing new research, they say, scientists should explain how it could affect society in negative ways as well as positive ones.

“The computer industry can become like the oil and tobacco industries, where we are just building the next thing, doing what our bosses tell us to do, not thinking about the implications,” said Brent Hecht, a professor at Northwestern University in Illinois who leads the group. “Or we can be the generation that starts to think more broadly.”

When publishing new work, researchers rarely discuss the negative effects. This is partly because they want to put their work in a positive light — and partly because they are more concerned with building the technology than with using it. Public companies rarely discuss the potential downsides of their work. Mr. Hecht and his colleagues are calling on journals to reject papers that do not explore those downsides.

Can Google’s lip-reading system help with surveillance? Maybe not today. While “training” their system, the researchers used videos that captured faces head-on and close-up. Images from overhead street cameras “are in no way sufficient for lip-reading,” said Joon Son Chung, a researcher at the University of Oxford.

But cameras are getting better and researchers are constantly refining the A.I. techniques that drive these systems. Chinese researchers just unveiled a project that aims to use similar techniques to read lips “in the wild,” accommodating varying lighting conditions and image quality.

Stavros Petridis, a research fellow at Imperial College London, said this kind of technology could eventually be used for surveillance. “Today, no matter what you build, there are good applications and bad applications,” he said.

© 2018 New York Times News Service
