[ExI] Vint Cerf on AI, at ORNL

William Flynn Wallace foozler83 at gmail.com
Wed Sep 4 22:18:58 UTC 2019


Seemingly missing from all the intelligence definitions in your post is the
ability to adapt to novel situations, which to me is really important.
This must be really hard for an AI, since all it can do is generalize from
past problems (that's my assumption, which might be wrong).
 bill w

On Wed, Sep 4, 2019 at 4:30 PM Dave Sill via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Wed, Sep 4, 2019 at 4:22 PM Stuart LaForge via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>> Quoting Dave Sill:
>>
>> > "Risk factors in the Overhyping of AI and Machine Learning."
>> >
>> > Per Cerf's abstract: "Artificial Intelligence, especially Machine
>> > Learning have become part of an overhyped mantra (that goes for
>> > blockchain, too). It is vital that the expectations for these
>> > technologies be tempered by deeper appreciation for their strengths
>> > and weaknesses. ML is narrow and brittle, producing dramatic results
>> > but also failing dramatically (see Generative Adversarial Networks).
>>
>> I agree that current ML is narrow and brittle. However small
>> biological brains are brittle as well. Think about how much difficulty
>> glass windows give houseflies.
>>
>> But increased size is no barrier to failure. Humans fail dramatically
>> also. Google "epic fail" for proof.
>>
>> I don't disagree with you, I just don't think anything you (and Cerf)
>> have said detracts from the observation that artificial neural
>> networks are approaching the functionality of biological ones at an
>> encouraging rate.
>>
>> What dramatic failure of GANs are you referring to?
>>
>
> That's Cerf, not me. I don't know what he's referring to. I'll report back
> after the talk.
>
> > Finally, ML is NOT Artificial General Intelligence."
>> [snip]
>>
>> Is a housefly Natural General Intelligence?
>
>
> I suppose...but not very much of it. Nowhere near human level, which is
> what AGI generally refers to.
>
>
>> How about one of your amygdalae in isolation?
>
>
> Not really.
>
>
>> What exactly is general intelligence again?
>>
>
> Let's start with the Wikipedia entry:
>
>  https://en.wikipedia.org/wiki/Artificial_general_intelligence
>
> *Artificial general intelligence (AGI) is the intelligence of a machine
> that has the capacity to understand or learn any intellectual task that a
> human being can. It is a primary goal of some artificial intelligence
> research and a common topic in science fiction and future studies. Some
> researchers refer to artificial general intelligence as "strong AI",[1]
> "full AI",[2] "true AI", or as the ability of a machine to perform "general
> intelligent action";[3] others reserve "strong AI" for machines capable of
> experiencing consciousness.*
>
> *Some references emphasize a distinction between strong AI and "applied
> AI"[4] (also called "narrow AI"[1] or "weak AI"[5]): the use of software
> to study or accomplish specific problem-solving or reasoning tasks. Weak
> AI, in contrast to strong AI, does not attempt to perform the full range
> of human cognitive abilities.*
>
> *As of 2017, over forty organizations worldwide are doing active research
> on AGI.[6]*
>
> *Requirements*
>
> *Various criteria for intelligence have been proposed (most famously the
> Turing test) but to date, there is no definition that satisfies
> everyone.[7] However, there is wide agreement among artificial
> intelligence researchers that intelligence is required to do the
> following:[8] reason, use strategy, solve puzzles, and make judgments
> under uncertainty; represent knowledge, including commonsense knowledge;
> plan; learn; communicate in natural language; and integrate all these
> skills towards common goals.*
>
> *Other important capabilities include the ability to sense (e.g. see) and
> the ability to act (e.g. move and manipulate objects) in the world where
> intelligent behaviour is to be observed.[9] This would include an ability
> to detect and respond to hazard.[10] Many interdisciplinary approaches to
> intelligence (e.g. cognitive science, computational intelligence and
> decision making) tend to emphasise the need to consider additional traits
> such as imagination (taken as the ability to form mental images and
> concepts that were not programmed in)[11] and autonomy.[12] Computer based
> systems that exhibit many of these capabilities do exist (e.g. see
> computational creativity, automated reasoning, decision support system,
> robot, evolutionary computation, intelligent agent), but not yet at human
> levels.*
>
> > I've been saying the same thing, but I'm not a Turing Award winner.
>>
>> I certainly agree that ML is not "the Singularity" but what about it
>> do you think is over-hyped? As far as I know, nobody is writing
>> Alpha-zero fan-mail just yet.
>>
>
> Oh, you missed John's fan-email, "AlphaZero – The ‘Lucy’ of the Emerging
> AI Epoch"?
>
> AI has been over-hyped for 50 years despite chronically under-delivering.
> Now that deep learning has had some success, the hype train has again left
> the station.
>
> Don't get me wrong: this is good stuff, with plenty of practical
> applications. I just don't think we're on the verge of AGI yet.
>
> -Dave
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>