[ExI] 'Friendly' AI won't make any difference

Tomaz Kristan protokol2020 at gmail.com
Sat Feb 27 07:00:29 UTC 2016


Colin,

You are wrong, as wrong as one can be.

On Sat, Feb 27, 2016 at 1:48 AM, Colin Hales <col.hales at gmail.com> wrote:

> Hi Anders,
>
> Yes, you got my basic approach right. But I do not ask any of you to have
> an opinion, and I am not interested in opinions. Especially not in any
> assessment of what might or might not be dangerous. Not because I want to
> be a contrarian prick, but because I want some science. I want something
> you can argue for, with evidence. That makes it science. None of the
> commentariat involved in this even knows what actual science is. They
> think they can 'define' things! Want me to show you magic? MEASURE
> SOMETHING. Do some actual science.
>
> "......do not allow neat safety proofs. We need to develop a better way of
> thinking about complex adaptive technological systems and how to handle
> them".
>
> 'SAFETY PROOF'!!? Don't you see that the very idea of a proof, or even of
> saying anything even remotely relevant, was dumped as an option 60 years
> ago? Anniversary year, 2016. Sixty years of blindness. All of it, before
> the AGI-risk handwringing commentator even opens their mouth or lifts a
> hand over a keyboard, is meretricious maundering at best.
>
> OK, I'll have another go at getting this 60-year-old slumbering twat to
> wake up. I'll do it empirically. I am going to use some upper case.
> Apologies in advance.
> ==========================================
> MEASURABLE EMPIRICAL FACT.
> The science of what is currently called 'artificial intelligence' has not
> started yet. Scientific behaviour, for centuries, _until the Dartmouth
> conference_, consisted 100% of two mutually coupled, resonating activities
> that resulted in predictive abstract statements that sometimes get called
> 'laws of nature':
>
> 1) Encounter and/or replicate nature's essential physics.
> 2) Construct abstractions that could be manually computed and interpreted
> as descriptions of the essential physics being explored.
>
> Compare and contrast the models of nature, and the replicated nature, with
> the natural original physics. We just saw two spectacular examples: the
> Higgs boson and gravitational waves. Actual empirical work.
>
> But in AI? ZERO EMPIRICAL WORK.
>
> Essential physics: e.g. a heart has pump physics. A kidney has filtration
> physics. A plane has air-flight-surface physics. Combustion has oxidation
> physics. And on and on and on and on ... thousands of examples. Centuries
> of practice.
>
> And that non-stop run of success ENDED at the Dartmouth conference in
> 1956. How?
>
> (1) was abandoned.
>
> Brains may have essential physics. But if you never, ever look for it, and
> instead replace it with endless machinations of potential (2)'s
> (neuromorphic hardware, model computation in software, quantum hardware,
> whatever), then you are not doing science. That is what stopped in 1956.
> It is as if an entire community stopped doing actual science by assuming,
> for no principled or empirical reason, that there is no physics essential
> and unique to a brain. An intuition. A guess. Nothing more.
>
> FACT. Measurable birth defect in the science of AI. Obvious, complete,
> pervasive, ongoing.
>
> The hypothesis X = "there is 'no essential physics' of the brain" may be
> true! But neither you, nor I, nor anyone else on the entire planet knows
> that. This is because the empirical science that determines the essential
> physics involves two kinds of tests:
>
> A) Assume X is true, emulate everything, compute models... then
> compare/contrast with the natural brain.
> B) Assume X is false, replicate the hypothesised essential physics... then
> compare nature, replication and emulation. DO ACTUAL SCIENCE.
>
> This year is the 60th anniversary of spending 100% of all AI budgets
> worldwide on TEST A.
>
> There are two cases of test B (apart from mine): Ashby and Walter in the
> early 1950s. They did not use computers. Their work could have become (1)
> science and (B) testing. But that was lost in the great cybernetics rout
> of 1956-65.
>
> I can see potential essential physics in the brain. I am doing (1)/(2) AND
> (B). I don't know if it is essential. NOBODY DOES. That is the point.
>
> If you don't have the essential physics of the brain, then NO ACTUAL
> BRAIN. All the endless bullshit about AGI risks is based on completely
> malformed, non-scientific, uninformed mumbo-jumbo. The only way to handle
> any risk is to actually build the thing and then experiment. The entire
> field of risk assessment is totally screwed up because it is literally
> missing half the science. And it is the only half that matters.
> =========================================
>
> Using a flight analogy, this is what AI actually is at the moment:
>
> 1) 100% flight simulators (studies of models of intelligence, sometimes
> in robot clothing).
> 2) Nil actual flight. ZERO intelligence. Not just small or low or
> variable. ZERO. Just as there is zero flight in a flight simulator.
>
> FFS. Use a Searle analogy:
>
> (i) WEAK FLIGHT = computed models of flight are a flight simulator (a
> study of flight, not actual flight).
> (ii) STRONG FLIGHT = computed models of flight carried out with the
> expectation that the computation will FLY!!!!!
>
> (i) WEAK FIRE = computed models of combustion are a combustion simulator
> (a study of fire, not actual fire).
> (ii) STRONG FIRE = computed models of fire carried out with the
> expectation that the computation will BURN.
>
> Sixty years of expecting (ii) for the brain and only for the brain,
> endlessly expecting something different by doing the same thing over and
> over and over? Complete insanity! Expecting the computer to 'fly' or
> 'burn' (be a human/natural intellect)?? Without one test that does actual
> science?
>
> Putting a model in robot clothes, automating the behaviour of the model,
> makes an elaborate puppet. It may be useful. It may have a social impact
> (jobs). SO WHAT! The whole existential 'robots are gonna kill us all'
> argument from non-scientific ignorance is complete nonsense.
>
> AI (Flight) has not even started yet. It's all 'automated intelligence'.
> Deep automation. Deep learning. Whatever.
> None of it is actual 'artificial intelligence' because all of it throws
> the essential brain physics out, replacing it with the physics of a
> computational substrate. We threw it all out on day 1 and it's still thrown
> out. Am I making myself clear?
>
> Dammit, I am sick of pointing out the obvious.
>
> I have to admit I now know how Lavoisier felt about phlogiston. Phlogiston
> lasted 100 years. So-called 'AI' is turning 60. I hate that I was born and
> have lived nearly one year longer than the whole of the so-called AI era.
> I have watched this all my life. All I am asking for is a return to an
> untried normalcy: actual science ... reversing a departure that was never
> chosen by anyone, never justified, has no physical principle supporting
> it, no evidence, and no literature trail justifying any of it.
>
> Forget it. Carry on. I'll just go hide again. At least I have proved to
> myself again why I hate this idiotic situation.
>
> Colin
>
>
>
>
>
> On Fri, Feb 26, 2016 at 8:50 PM, Anders Sandberg <anders at aleph.se> wrote:
>
>> On 2016-02-25 22:43, Colin Hales wrote:
>>
>> Evaluations of the AI risk landscape are, so far, completely and utterly
>> vacuous and misguided. They completely miss an entire technological
>> outcome that totally changes everything.
>>
>>
>> OK, let me see if I understand what you say. (1) Most people doing AI and
>> AI risk are wrong about content and strategy. (2) Real AGI is model-less,
>> something that just behaves. (3) The current risk conversation is about
>> model-based AI, and (4) you think that approach is totally flawed. (5)  You
>> are building a self-adapting hierarchical control system which you think
>> will be the real thing.
>>
>> Assuming this reading is not too flawed:
>>
>> I agree with (1). I think there are a fair number of people who have
>> correct ideas... but we may not know who. There are good theoretical
>> reasons to think most AI-future talk is bad (the Armstrong and Sotala
>> paper). There are also good theoretical reasons to think that there is
>> great value in getting better at this (essentially the argument in
>> Bostrom's Superintelligence), although we do not know how much this can
>> be improved.
>>
>> I disagree with (2), in the sense that we know model-less systems like
>> animals do implement AGI of a kind, but that does not imply that a
>> model-based approximation to them does not implement it. Since design of
>> model-less systems, especially with desired properties, is very hard, it
>> is often more feasible to make a model system. Kidneys are actually just
>> physical structures, but when trying to make an artificial kidney it
>> makes sense to regard it as a filtering system with certain properties.
>>
>> I agree with (3) strongly, and think this is a problem! Overall, the
>> architectures that you can say sensible things about risk in are somewhat
>> limited: neuromorphic or emergent systems are opaque and do not allow neat
>> safety proofs. We need to develop a better way of thinking about complex
>> adaptive technological systems and how to handle them.
>>
>> However, as per above, I do not think model-based systems are necessarily
>> flawed, so I disagree with (4). It might very well be that less
>> model-based systems like brain emulations are the ticket, but it remains
>> to be seen.
>>
>> (5): I am not entirely certain that counts as being model-less. Sure, you
>> are not basing it on some GOFAI logic system or elaborate theory (I
>> assume), just the right kind of adaptation. But even the concept of control
>> is a model.
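>>
>> A minimal sketch of that last point, assuming a purely hypothetical
>> one-layer self-adapting loop in Python (none of this is Colin's actual
>> system; the plant, names and parameters are made up for illustration):
>> even the barest 'model-free' adaptive controller carries a reference it
>> tracks, an error it computes, and an adaptation rule that assumes pushing
>> the gain up while the error persists will help.
>>
>>     # Hypothetical toy plant: a leaky first-order system driven by the
>>     # control signal. Not any real architecture, just an illustration.
>>     state = {"x": 0.0}
>>
>>     def read_sensor():
>>         return state["x"]
>>
>>     def actuate(u):
>>         state["x"] += 0.1 * (u - state["x"])  # first-order response
>>
>>     def adaptive_controller(sense, act, reference,
>>                             steps=1000, gain=0.0, adapt_rate=0.01):
>>         """One-layer sketch of a self-adapting proportional loop."""
>>         for _ in range(steps):
>>             error = reference - sense()    # the error presupposes a goal
>>             act(gain * error)              # proportional action
>>             gain += adapt_rate * error**2  # crude gain adaptation while
>>                                            # the error persists
>>         return gain
>>
>>     final_gain = adaptive_controller(read_sensor, actuate, reference=1.0)
>>
>> The choice of error signal and of adaptation rule already encodes
>> assumptions about the plant; the model has not gone away, it has only
>> become implicit.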
>>
>> If you think your system will be the real deal, ask yourself: why would
>> it be a good thing? Why would it be safe (or possible to make safe)?
>>
>> [ Most AGI people I have talked with tend to answer the first question
>> either with scientific/engineering curiosity or with the claim that
>> getting more intelligence into the world is useful. I buy the second
>> answer; the first one is pretty bad if there is no good answer to the
>> second question. The
>> answers I tend to get to the second question are typically (A) it will not
>> be so powerful it is dangerous, (B) it will be smart enough to be safe, (C)
>> during development I will nip misbehaviors in the bud, or (D) it does not
>> matter. (A) is sometimes expressed like "there are people and animals with
>> general intelligence and they are safe, so by analogy AGI will be safe".
>> This is obviously flawed (people are not safe, and do pose an xrisk). (A)
>> is often based on underestimating the power of intelligence, kind of
>> underselling the importance of AGI. (B) is wrong, since we have
>> counter-examples (e.g. AIXI): one actually needs to show that one's
>> particular architecture will somehow converge on niceness, it does not
>> happen by default (a surprising number of AGI people I have chatted with
>> have very naive ideas of metaethics). (C) assumes that early lack of
>> misbehavior is evidence against late misbehavior, something that looks
>> doubtful. (D) is fatalistic and downright dangerous. We would never accept
>> that from a vehicle engineer or somebody working with nuclear power. ]
>> {Now you know the answers I hope you will not give :-) }
>>
>> --
>> Dr Anders Sandberg
>> Future of Humanity Institute
>> Oxford Martin School
>> Oxford University
>>
>>
>


-- 
https://protokol2020.wordpress.com/

