[ExI] Fwd: [tt] reddit on Shortwhile

Bryan Bishop kanzure at gmail.com
Mon Mar 21 14:50:43 UTC 2011


---------- Forwarded message ----------
From: Eugen Leitl <eugen at leitl.org>
Date: Mon, Mar 21, 2011 at 6:37 AM
Subject: [tt] reddit on Shortwhile
To: tt at postbiota.org



http://www.reddit.com/r/askscience/comments/g7wnt/are_kurzweils_postulations_on_ai_and/

Are Kurzweil's postulations on A.I. and technological development
(singularity, law of accelerating returns, trans-humanism) pseudo-science or
have they any kind of grounding in real science? (self.askscience)

submitted 8 hours ago by IBoris

I'm asking because every time I see or post a comment on reddit relating to
Kurzweil or any of his concepts, I'm down-voted and met with derogatory
remarks aimed either at me or at Kurzweil (most of the time calling him a
hack and a pseudo-science peddler).

I find his ideas interesting but have no scientific training myself (social
sciences yay!) that can help me gauge the merits of his claims.

I'm just interested in knowing if it's because his work has been discredited
and redditors don't bother mentioning this to me when they comment on what I
say or if it's simply because redditors are more agreeable with Bill Joy's
interpretation of technological progress.

I know this is not a classical "ask science" post, but I feel that this is
the crowd to which I should address my query (people with training in hard
sciences that are willing and able to vulgarize and explain science).

all 33 comments

roboticc 27 points 6 hours ago

I'm firmly in the camp of those scientists who feel Kurzweil is a bit of a
hack, and something of a pseudoscience-seller -- even though I'm a fan of the
broader singularity concept. (Disclaimer: I am a scientist, I've done some
AI, and I'm a future-enthusiast.)

There's nothing particularly controversial or surprising about the notion
that the rate of technological change is accelerating. The problem is that
Kurzweil claims to have turned predicting exactly when particular changes
will happen into an exact science, and he uses this to make outlandish
claims about the years in which certain innovations will take place.

It's easy enough for anyone to guess based on some familiarity with ongoing
research what things might appear in the market in a few years (though he's
often been wrong about this, as well). He uses this as a basis to justify
extrapolations about when particular innovations will happen in the future.
However, he's never demonstrated any scientifically verified model that
enables him to extrapolate precisely what will happen in future decades;
these ideas are only expressed in his popular (and non-peer-reviewed) books,
and are not demonstrably better than mere guesses.

Unfortunately, he really touts his ability to predict accurately when
changes will happen as a centerpiece of his credibility, and tries very hard
to convince laypeople of the idea that it's a science. (It's not.) Hence,
it's pseudoscience.

The Cult of Kurzweil he seems to maintain around his predictive ability, the
religious fervor with which he and his proponents advocate some of his
ideas, the fact that he tends to engage with the business community (?!) and
the public rather than the scientific community, and the fact that he really
gets defensive around critics in the public sphere don't help his case.

neurosnap 5 points 6 hours ago

Meh, I'm more impressed with the prediction of singularity, not the
timeline; I think there's a lot of bias involved in that prediction.

roboticc 8 points 5 hours ago

Yup. But the singularity isn't a Kurzweil-specific idea (props to
mathematician Vernor Vinge), even if he's ended up the public face of the
concept.

There are a host of questions about whether that concept's even
philosophically or ecologically plausible, which are worth their own
discussion. As far as I know, Kurzweil tries to make a case for it based on
his predictive ability, which is certainly not a great way to go about it.

AdonisBucklar 3 points 5 hours ago

It always appeared to me he was taking for granted that the 'new machine'
would immediately be put to the task of building better machines, and this
was the part of the singularity I took issue with. It stands to reason that
we might be aware of what we just built, and would perhaps pause before
turning on Skynet.

IBoris [S] 5 points 5 hours ago

You see, as a non-sciency guy, the substantive arguments from one side or
the other sadly go over my head. That said, what I can gauge is:

A. the academic background and curriculum of each side.

B. who trusts who.

Ergo, although I perceive gross generalizations coming out of Kurzweil (I'm
4 exams away from a law degree so my bullshit detector is pretty sharp) and
suspect that his arguments rely on best-case scenarios built on best-case
scenarios, I can't help but:

A. look at his resume and accomplishments (which mean nothing, I'm fully
aware, when most of his projections venture beyond his field of
specialization but do indicate quite clearly that he's beyond being simply
smart and is some kind of prodigy in his field);

B. Look at the resumes of the people that work with him vs. the mostly
anonymous critics he has;

C. and, more importantly, look at the people who back him intellectually and
financially (notably Bill Gates, Sergey and Larry of Google (Google sponsors
his Singularity University), MIT, NASA (they host his University), and some
of the top scientific advisors to the POTUS (whom he has briefed in
person)).

I mean, I can accept that his intellectual construction is more a castle of
cards than a castle of stone, but with so many people taking him seriously I
have trouble not hearing him out. Could he really fool so many well informed
people?

BTW I'm fully aware I'm falling for a fallacious perception; it's just that
without a background in science all I can do is look at who has the capacity
to understand what he's saying and see how they treat what he says.

Oh, and second BTW: I'm not trying to refute what you are saying, I'm just
trying to explain my point of view so that you (or someone else) can explain
to me, in a manner I can understand, why my perception is wrong.

roboticc 12 points 4 hours ago

Sure. I can address both, briefly.

I'd hope that, as a future lawyer, the fact that he trades so heavily on a
purported resume and the points you cite (essentially, playing up his resume
and affiliations to make an ad hominem argument) to shore up the ideas he's
advocating, sets off your bullshit detector. Your response seems to indicate
that you're falling for it, a little.

You should understand that this is the scientific equivalent of opening your
arguments in court by discussing your success in past cases, or perhaps
who's working at your law firm. It doesn't work anywhere, and it's part of
the reason he's considered somewhat hacky.

A) His resume and accomplishments indicate that he's been a successful
inventor in niche areas, but not a scientist. There is an important
distinction. Moreover, "prodigy" is perhaps part of his schtick. Almost
certainly, you have never used anything this man has invented.

B) The Singularity University is a training center for technology
entrepreneurs, and essentially all of its backers are successful
businesspeople, not scientists. (It rents space from an open NASA facility
in the Bay Area -- this is not an endorsement, but a lease).

As far as I can tell, the Singularity University is a business training
center, nothing more. It does not do science, it doesn't do research per se,
it simply shows businesspeople interesting technologies that are being built
by startups out here. This is a business relationship, not an endorsement of
Kurzweil as a scientist or as who he presents himself to be.

I'll even go so far as to say this -- you won't find any scientists backing
Kurzweil as a scientist or his work as scientific.

As far as critics: they're hardly anonymous, and many are world-famous
scientists, but you simply haven't been exposed to them because, well,
people who think Kurzweil's wrong just don't care enough to write that much
about it. He's a bit kooky, but only kooks buy into his lifestyle; so why
waste time fighting him?

If you believe arguments from technology entrepreneurs, though, just look at
Bill Joy and Mitch Kapor.

Or, if you believe widely respected engineering associations, here's a
deconstruction of Kurzweil's nonsense in IEEE Spectrum:

http://spectrum.ieee.org/computing/software/ray-kurzweils-slippery-futurism/0

and one more in Newsweek:

http://www.newsweek.com/2009/05/16/i-robot.html

and there's PZ Myers, who's taken time out to criticize Kurzweil, along with
Douglas Hofstadter, Rodney Brooks, Daniel Dennett, and Jaron Lanier; and
there are the scientists around here, some of whom have perhaps more
impressive and solid scientific backgrounds and resumes than Kurzweil does,
but who are a bit less self-aggrandizing and who can make cogent arguments
without needing to staple a CV to them.

I hope it's apparent why what he's doing is non-scientific based on the wide
variety of scientists who reject his work from multiple fields, even if he
has celebrity financial backers for one project. Celebrity tech
entrepreneurs aren't necessarily scientists, and the ones you mention aren't
necessarily endorsing him as a non-hack.

PS: Bonus fun. Take a look at his Wikipedia page. It's a cacophony of
honorary doctoral degrees. Now look at the page of anybody who's not a hack,
and count the amount of space spent on trying to justify the person's
qualifications :)

PPS: He uses his philosophy and laws to hawk pills and a health system. What
does this say to you? http://www.rayandterry.com/index.asp

IBoris [S] 3 points 4 hours ago

thanks, this is exactly the kind of info/perspective I was looking for.

roboticc 4 points 3 hours ago

Sure thing!

southernbrew08 5 points 3 hours ago

C. I have no idea as to the extent of his relationship with the people and
organizations you listed, but I doubt any of them are putting serious money
behind him (not that I understand wtf a Singularity University does
exactly), nor do I think he knows much of anything that the top scientists
in the world would need to be briefed on.

   While being interviewed for a February 2009 issue of Rolling Stone
magazine, Kurzweil expressed a desire to construct a genetic copy of his
late father, Fredric Kurzweil, from DNA within his grave site. This feat
would be achieved by deploying various nanorobots to send samples of DNA
back from the grave, constructing a clone of Fredric, and retrieving
memories and recollections—from Ray's mind—of his father.

This is all kinds of wtf

Kurzweil strikes me as a really smart guy who makes a ton of money spouting
predictions that you can read in any science fiction book.

Elephinoceros 14 points 6 hours ago

PZ Myers has called him "just another Deepak Chopra for the computer science
cognoscenti".

I encourage you to look at his "successful" predictions, and
compare/contrast them with his more long-term predictions. Also, his excuses
for his unsuccessful predictions are worth looking into.

IBoris [S] 4 points 5 hours ago

Interesting read; the author makes numerous interesting points that really
make me question Kurzweil's projections, and the comments are also
interesting. That said, I'm pretty sure the author could have made his point
in a less ad-hominem fashion; that kind of turned me off and made me doubt
the objectivity of the author's claims.

zalmoxes 6 points 3 hours ago

Kurzweil responded to the criticism, and there's also a discussion on
slashdot about this:
http://science.slashdot.org/story/10/08/20/1429203/Ray-Kurzweil-Responds-To-PZ-Myers

Also,

   could have made his point in a less ad-hominem fashion;

no, he couldn't. That's how PZ Myers writes about everything he disagrees
with. His blog is entertaining to read from time to time though.

nhnifong 11 points 7 hours ago

The classic positive feedback loop has its roots in cybernetics. Systems
that use feedback to grow arbitrarily complex have been studied in the field
of cellular automata, and of course in nature. Evolution displays this
tendency, but it's hard to study experimentally. Kurzweil extrapolates from
the natural and recorded history of life on earth and human society growing
bigger and more complex. But he also postulates a strange tipping point he
calls the singularity. I, and many others, take issue with this. I see no
reason why there would be some arbitrary point where the rules change.

Monosynaptic 20 points 5 hours ago

You seem to understand the idea pretty well, so I'm confused why you think a
singularity point would be arbitrary. From wikipedia:

   However, with the increasing power of computers and other technologies,
it might eventually be possible to build a machine that is more intelligent
than humanity. If superhuman intelligences were invented, either through the
amplification of human intelligence or through artificial intelligence, they
would bring to bear greater problem-solving and inventive skills than humans
can, and could design a yet more capable machine, or re-write their own
source code to become more intelligent. This more capable machine could then
design a machine of even greater capability. These iterations could
accelerate, leading to recursive self-improvement, potentially allowing
enormous qualitative change before any upper limits imposed by the laws of
physics or theoretical computation set in.

So, it's the point where the thinking/problem-solving capabilities of
technologies become "superhuman" - the point that technological progress
switches over from the work of humans to the work of the (now faster)
technology itself.
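The loop described above is easy to caricature in code. Purely as an illustration (treating "capability" as a single scalar, with an invented 10% per-generation design gain and an invented physical cutoff, none of which comes from the thread), the argument reduces to compound growth up to a hard limit:

```python
# Toy model of the recursive self-improvement loop: each generation of
# machine designs a successor whose capability is proportional to its own.
# All the numbers here are made up for illustration only.

def self_improvement(capability, design_gain=1.1, physical_limit=1e6):
    """Return the successive machine capabilities until a hard limit is hit."""
    history = []
    while capability < physical_limit:
        history.append(capability)
        # A more capable designer builds a proportionally better successor;
        # that multiplicative step is exactly geometric (exponential) growth.
        capability *= design_gain
    return history

generations = self_improvement(1.0)
print(len(generations))   # geometric growth reaches the cutoff quickly
```

The whole debate in this thread is about whether the multiplicative step is a reasonable model of designing intelligence at all, not about the arithmetic, which is uncontroversial.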

nhnifong 1 point 3 hours ago

This is only simple if intelligence is a scalar quantity that's easy to
measure. Computer programs are getting better and more diverse, and there
are already plenty of algorithms that exhibit "recursive self improvement"
when improvement is defined clearly enough. Yet they still suck at other
things.

I see the trend like this: life is growing

   * more diverse
   * more interdependent
   * less laggy.

It is also doing this at an accelerating rate because of a bunch of
feedback loops.

Edit: And by "life" I mean anything alive on earth: humans, our machines,
and any machine-like things that other organisms make, like mold gardens in
anthills and whirly seeds. It is all growing together as one big system (too
big to simulate). I think Kurzweil's ideas are best interpreted as
extrapolations of the macroscopic properties of this entire system.

keenman 1 point 20 minutes ago

Yes, but when artificial constructs no longer "suck at other things,"
that's when things get wild and unpredictable, and in my informed opinion*
we are getting closer and closer to that point, enough so that this point -
call it what you will - is likely going to happen sometime in my lifetime,
barring any major world catastrophes.

Correct me if I'm wrong, but what you seem to fail to take into account is
the possibility of an exponential feedback loop in which a man-made robot
that interacts with the real world in human-like ways can use its inputs of
the real world to modify itself or better yet, create another robot of
similar form to its own, thus creating a working simulation of artificial
life in the real world. Once this child robot is created, it can then start
making its own child robots as well, ad infinitum (only limited by the
resources available to the robot army in the real world).

Sorry for the complicated verbiage. I don't have time to thoroughly edit my
post, but I should be able to respond to comments over the course of the
day.

*I actually studied mathematics and computer science with the cognitive
science interdisciplinary option in university, though my official degree is
only a bachelor's degree in BMath: Comp. Sci. from Waterloo. I also have 10
years of real-world experience programming for a large software company on a
product used by quite a few people. My name is Keenan Whittaker: you can
look me up online.

nhnifong 2 points 3 hours ago

To address another matter: even if intelligence were a simple scalar
exhibiting exponential growth, there's still no clear spot where it would
really start to take off. It's a smooth curve all the way up. No kink.
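The "no kink" point can be checked numerically: a pure exponential grows by the same factor over every window of fixed width, so no point on the curve is mathematically special. (The rate and sample points below are arbitrary choices for the demonstration, not anything from the thread.)

```python
import math

# A pure exponential f(t) = exp(r*t) is self-similar: the growth factor
# over any fixed-width window is the same wherever the window sits.
# The rate and sample points are arbitrary.

r = 0.05                        # arbitrary growth rate
f = lambda t: math.exp(r * t)

window = 10.0
ratios = [f(t + window) / f(t) for t in (0.0, 50.0, 500.0, 5000.0)]

# Every ratio equals exp(r * window): the curve has no kink anywhere.
assert all(math.isclose(x, math.exp(r * window)) for x in ratios)
```

Any "take-off point" on such a curve is therefore a choice of the observer (e.g. where the curve crosses some threshold we care about), not a feature of the mathematics.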

SidewaysFish 10 points 4 hours ago

Short version: Kurzweil is a bit of a loon, but the singularity is real and
worth worrying about.

Longer version: If you build a computer smarter than any human, it will be
better at designing computers than any human. Since it was built by humans,
it will then be able to design a computer better than itself. And the
computer it creates will design an even better computer, and so on until
some sort of physical limit is hit.

There's no particular reason to think that computers can't become as
intelligent as or more intelligent than we are, and it would disprove the
Church-Turing thesis if they couldn't, which would be a really big deal.

This is something people have been talking about since I. J. Good (who
worked with Turing) first proposed the idea in the sixties. Vernor Vinge
named it the singularity, and then Kurzweil just sort of ran with it and
made all sorts of very specific predictions that there's no particular
reason to respect.

The Singularity Institute for Artificial Intelligence has a bunch of good
stuff on their website on the topic; they're trying to raise the odds of the
singularity going well for humanity.

IBoris [S] 3 points 4 hours ago

thanks for the link.

Ulvund 6 points 3 hours ago

From a computer science standpoint it is complete bunk. He doesn't know what
he is talking about, and he is pandering to an audience that doesn't know
what they are talking about either.

Bongpig 2 points 2 hours ago

Well, maybe you can explain how it's not possible to EVER reach such a
point.

You only have to look at Watson to realise we are a bloody long way off
human-level AI; however, compared to the AI of last century, Watson is an
absolute genius.

Ulvund 2 points 1 hour ago

As far as I can see, his hypothesis is so loosely stated that it cannot be
tested. That should be enough to know that this is not a serious attempt to
add to any knowledge base. Sure, it is still fun to think about these
things: "what if ..", "what if ..", "what if .." ... but it is no different
from saying "what if dolphins suddenly grew legs and started playing banjo
music on the beaches of France".

Here are a couple of things to consider:

   * Moore's law stopped being true in 2003, when transistors couldn't be
packed any tighter.

   * We have no knowledge of what the bottom-most components of
consciousness are. How can we test against something we have very limited
knowledge of?

   * There is no real test of what "smarter than a human" or "as smart as a
human" means. Is it being good at table tennis? Is it writing an op-ed in
the New York Times on a Sunday?

   * Any computer program can be written with a few basic operations ("move
left", "move right", "store", "load", "+1", "-1", or so). Sure, a computer
can execute them fast, but a human could execute them as well. Is speed of
computation what makes intelligence? If so (and I don't think it is), then
computer intelligence basically stopped evolving in 2003, when transistors
reached maximum density.

   Watson is an absolute genius

Sure, algorithms keep getting better and data keeps getting bigger, but
algorithms are still written and tested by humans. Humans define the goals
of what is sought after and write the programs to optimize in those
directions. Is fetching an answer quickly genius? Is writing a parser from a
question to a search query genius? Is writing a data structure that can
store all these answers in an effective and searchable way genius?
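The point about a few basic operations can be sketched as a toy machine. (The instruction names follow the quoted list, but the tape layout and sample program are invented for illustration; a real universal machine would also need conditional branching, which is omitted here for brevity.)

```python
# Toy machine built from roughly the handful of operations quoted above:
# move left/right on a tape, load/store a single register, +1/-1.
# The instruction set and sample program are invented for illustration;
# note that without conditional jumps this is NOT Turing-complete.

def run(program, tape_size=16):
    """Execute a straight-line program; return the final tape."""
    tape = [0] * tape_size
    head = 0   # current tape cell
    reg = 0    # single accumulator register
    for op in program:
        if op == "left":
            head -= 1
        elif op == "right":
            head += 1
        elif op == "load":
            reg = tape[head]
        elif op == "store":
            tape[head] = reg
        elif op == "+1":
            reg += 1
        elif op == "-1":
            reg -= 1
        else:
            raise ValueError("unknown op: " + op)
    return tape

# Increment the register three times and write the result to cell 0.
tape = run(["+1", "+1", "+1", "store"])
print(tape[0])  # -> 3
```

A patient human with pencil and paper could execute the same program step by step; speed is the only thing the hardware adds, which is the commenter's point.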

The thing that comes to mind is the video of the elephants painting
beautiful images in the Thai zoo. The elephants don't know what they are
doing, but it looks like it. The elephant keeper tugs the elephant's ear and
the elephant reacts by moving its head, eventually painting an image (the
same image every day). The elephant looks human to anyone who has not
participated in the hours and hours of training, but the elephant keeper
knows that the elephant just follows the same procedure every time, reacting
to the cues of the trainer without knowing what it is doing.

To the outsider the elephant looks like a master painter with the same sense
of beauty as a human.

A computer is just a big dumb calculator with a set of rules, no matter how
impressive a layout it gets. Its trainer, tugging at its ears, making it
look smart, is the programmer.

RobotRollCall 1 point 1 hour ago

   …Watson is an absolute genius…

Watson is an absolute computer program.

I'm not sure why this distinction is so easily lost on what I
without-intentional-disrespect call "computery people."

Watson is nothing more than a cashpoint or a rice cooker, only scaled up a
bit. It doesn't have anything vaguely resembling a mind.

neurosnap 5 points 6 hours ago

Predicting the future: Arthur C. Clarke in 1964

khaddy 1 point 4 hours ago

I was just thinking of the same clip. Very interesting and thought
provoking.

khaddy 1 point 4 hours ago

I just wanted to add ... as I just finished my first full day of work from
home (electrical engineer with an easy-going boss) which was more productive
than any day at the office ... I'm slowly making the last prediction in the
video true for myself ... 3 years earlier than he predicted.

theshizzler 2 points 7 hours ago

I think his predictions will be fairly accurate, but his timeframe is a
little ambitious. Even as a big fan of his work, I think he's biased due to
his extreme desire to live through to his singularity.

Gemutlichkeit 2 points 7 hours ago

He is no hack (http://en.wikipedia.org/wiki/Ray_Kurzweil).

I think he describes himself as a 'futurist.'

He has documented the basis for his ideas very well. My opinion is that one
should read his suppositions and make up one's own mind regarding the
conclusions he draws from them.

He wouldn't be much of a futurist if his ideas were already mainstream
instead of futuristic. Yes, he would have greater acceptance if he came out
today with: ...people will walk around in public making long-distance phone
calls on little hand-held devices, and these same devices will be able to
take pictures and even video! TV sets will be less than 2 inches thick and
hang on a wall.

Actually, I think the whole AI thing is a sensitive issue for all
scientists, and perhaps especially for people who have religious qualms.
Personally, I think it's certain we'll get there (human consciousness in a
computer), although perhaps not on his timeline, OR we'll push the global
nuclear reset button in the meantime. I'd prefer the former, Terminator
movies notwithstanding.

wherein 2 points 7 hours ago

Not directly related to your question, but the Blue Brain Project seems very
promising. I am not saying that this makes Kurzweil right, but it appears
they feel they can simulate the human brain to the molecular level by 2019.

Platypuskeeper 7 points 4 hours ago

Their own FAQ says "It is very unlikely that we will be able to simulate the
human brain at the molecular level detail with even the most advanced form
of the current technology. "

And speaking as a computational chemist: There's no way in hell that's going
to happen in my lifetime.

hive_mind 1 point 7 hours ago

I can't seem to find out about funding for the Blue Brain Project; can
anybody point me in the right direction?

wherein 1 point 7 hours ago

I tried to have a look but couldn't find all that much very easily.

This press release gives a bit of info: IBM is collaborating, and the
project seems to be running on IBM's Blue Gene.

Blue Brain Project site

Henry Markram talks about the Blue Brain Project at a TED conference.

This is another, apparently more detailed, video by Henry Markram.

edit... funding from the wiki

The project is funded primarily by the Swiss government and secondarily by
grants and some donations from private individuals. The EPFL bought the Blue
Gene computer at a reduced cost because at that stage it was still a
prototype and IBM was interested in exploring how different applications
would perform on the machine. BBP was a kind of beta tester.[6]

ElectricRebel 1 point 42 minutes ago

I personally think that most of his ideas are possible, but that his
timeline is super-optimistic and is set up so that he is just young enough
to live to see it happen.

His most extreme ideas (e.g. self-improving AI) may take decades or
centuries to happen, and they might not happen at all if we have a nuclear
war or something. As Yogi Berra (or whoever, since this quote is attributed
to a bunch of people) said: "It's hard to make predictions - especially
about the future."


--
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
_______________________________________________
tt mailing list
tt at postbiota.org
http://postbiota.org/mailman/listinfo/tt



-- 
- Bryan
http://heybryan.org/
1 512 203 0507