[ExI] hawking on the singularity

spike spike66 at att.net
Wed Dec 3 21:07:52 UTC 2014

http://www.foxnews.com/tech/2014/12/03/stephen-hawking-artificial-intelligence-could-end-human-race/

Stephen Hawking: Artificial intelligence could end human race


LiveScience <http://www.livescience.com/>

By Tanya Lewis

Published December 03, 2014

Stephen Hawking recently began using a speech synthesizer system that uses
artificial intelligence to predict words he might use. (Flickr/NASA HQ
PHOTO.)

The eminent British physicist Stephen Hawking warns that the development of
intelligent machines could pose a major threat to humanity.

"The development of full artificial intelligence (AI) could spell the end of
the human race," Hawking  <http://www.bbc.com/news/technology-30290540> told
the BBC.

The famed scientist's warnings about AI came in response to a question about
his new voice system. Hawking has a form of the progressive neurological
disease called amyotrophic lateral sclerosis (ALS or Lou Gehrig's disease),
and uses a voice synthesizer to communicate. Recently, he has been using a
new system that employs artificial intelligence. Developed in part by the
British company SwiftKey, the new system learns how Hawking thinks and
suggests words he might want to use next, according to the BBC.
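
The article doesn't describe how SwiftKey's predictor actually works, but the
general idea of "learn a user's writing, then suggest the next word" can be
sketched with a simple bigram model. The code below is a hypothetical
illustration only (the NextWordPredictor class and training text are made up
for this sketch), not SwiftKey's method:

    from collections import Counter, defaultdict

    class NextWordPredictor:
        """Bigram model: count which words tend to follow each word."""

        def __init__(self):
            # word -> Counter of the words seen immediately after it
            self.following = defaultdict(Counter)

        def train(self, text):
            words = text.lower().split()
            for prev, nxt in zip(words, words[1:]):
                self.following[prev][nxt] += 1

        def suggest(self, prev_word, k=3):
            # The k words most often observed after prev_word.
            return [w for w, _ in
                    self.following[prev_word.lower()].most_common(k)]

    # Train on a user's past writing, then offer completions as they type.
    model = NextWordPredictor()
    model.train("the development of full artificial intelligence could "
                "spell the end of the human race")
    print(model.suggest("the"))         # e.g. ['development', 'end', 'human']
    print(model.suggest("artificial"))  # ['intelligence']

A real system would use longer contexts and smoothing, but the core loop is
the same: tally what the user has typed before, rank candidates by frequency.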

Humanity's biggest threat?

Fears about developing intelligent machines go back centuries. More recent
pop culture is rife with depictions of machines taking over, from the
computer HAL in Stanley Kubrick's "2001: A Space Odyssey" to Arnold
Schwarzenegger's character in "The Terminator" films.

Inventor and futurist Ray Kurzweil, director of engineering at Google,
refers to the point in time when machine intelligence surpasses human
intelligence as "the singularity," which he predicts could come as early as
2045. Other experts say such a day is a long way off.

It's not the first time Hawking has warned about the potential dangers of
artificial intelligence. In April, Hawking penned an op-ed for The Huffington Post
<http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html?ncid=txtlnkusaolp00000058>
with
well-known physicists Max Tegmark and Frank Wilczek of MIT, and computer
scientist Stuart Russell of the University of California, Berkeley,
forecasting that the creation of AI will be "the biggest event in human
history." Unfortunately, it may also be the last, the scientists wrote.

And they're not alone: billionaire entrepreneur Elon Musk called artificial
intelligence "our biggest existential threat." The CEO of the spaceflight
company SpaceX and the electric car company Tesla Motors told an audience at
MIT that humanity needs to be "very careful" with AI, and he called for
national and international oversight of the field.

It wasn't the first time Musk warned about AI's dangers. In August, he
tweeted, "We need to be super careful with AI. Potentially more dangerous
than nukes." In March, Musk, Facebook founder Mark Zuckerberg and actor
Ashton Kutcher jointly invested $40 million in an AI company that is working
to create an artificial brain.

Overblown fears

But other experts disagree that AI will spell doom for humanity. Charlie
Ortiz, head of AI at the Burlington, Massachusetts-based software company
Nuance Communications, said the concerns are "way overblown."

"I don't see any reason to think that as machines become more intelligent
which is not going to happen tomorrow they would want to destroy us or do
harm," Ortiz told Live Science.

Fears about AI are based on the premise that as species become more
intelligent, they have a tendency to be more controlling and more violent,
Ortiz said. "I'd like to think the opposite. As we become more intelligent,
as a race we become kinder and more peaceful and treat people better," he
said.

Ortiz said the development of super-intelligent machines is still an
important issue, but he doesn't think it's going to happen in the near
future. "Lots of work needs to be done before computers are anywhere near
that level," he said.

 
