<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<br>
Changed my mind. Rather than sit back and wait for the flak, I want
to do some musing on what happens next, if you'll indulge me.<br>
<br>
We can assume that AI in all its flavours will continue to improve
rapidly in the background, whatever else we do, and at a
double-exponential rate. So bear in mind what that means for all of
the below.<br>
<br>
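For a rough sense of what double-exponential means, here's a minimal
Python sketch, with made-up numbers purely for illustration (not a
forecast of anything real):<br>
<br>
<pre>
# Illustration only: plain exponential growth vs double-exponential growth.
for step in range(6):
    exponential = 2 ** step                 # the quantity doubles each step
    double_exponential = 2 ** (2 ** step)   # the exponent itself doubles each step
    print(step, exponential, double_exponential)
# By step 5 the first column has reached 32; the second, 4294967296.
</pre>
<br>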
I know that making predictions is notoriously dodgy, and I normally
don't, but in this case, I feel justified in guessing that the world
could well be a different place in a matter of months, not years or
decades.<br>
<br>
Meanwhile...<br>
<br>
I'd expect humans and AI systems to work ever more closely together,
in many, many domains. Of course, this does depend on what I was
saying before about allowing access. If access to them (or for them)
is restricted, it will slow down the synergy between humans and AI.
If not, it will accelerate. I doubt very much that the restriction
scenario will happen, simply because there are too many ways to get
round any restrictions, and too much potential power and money on
offer to resist.<br>
<br>
There are obvious areas of research and development that human/AI
synergy will affect: biomedical research, AI itself, persuasion,
oppression, side-stepping oppression, crime detection and prevention,
crime innovation, physics, mathematics, all the sciences really,
engineering, the various arts, energy, combating global warming,
screwing more people out of more money (business practices, finance,
and large parts of the legal sector), education, manufacturing
(including molecular manufacturing: nanotechnology!), robotics,
design, defense, offense, communications, transport, space stuff,
psychiatry, diet and fitness, sorting out the morass that research
has got itself into, detecting 'fake news', generating 'fake news',
and so on. I'm sure you can add to this list.<br>
<br>
And there are the non-obvious areas that will surprise everyone.
There will be things that we assume will never change, changing.
Things that no-one ever thought of, appearing. I obviously can't make
a list of things we don't know yet.<br>
<br>
And there will be groups wanting to use AI for their own advantage,
to try to impose their own version of How Things Should Be on
everyone else. The usual suspects, of course, but also other, smaller
groups. Another reason to ensure these AI systems are spread as
widely as possible, so that a balance of power can be maintained.
This needs to be the exact opposite of the nuclear non-proliferation
treaty. An AI massive proliferation non-treaty, and we need everyone
in the world to not sign it.<br>
<br>
All this will naturally create massive disruption. Be prepared for
your job to disappear, or be changed drastically. No matter what it
is. Those magazine articles listing "which jobs are at risk of being
taken over by robots in the next 20 years" will look hilarious.<br>
<br>
So: take what Tristan Harris and Aza Raskin said in their "The A.I.
Dilemma" YouTube video
(<a class="moz-txt-link-freetext" href="https://www.youtube.com/watch?app=desktop&v=xoVJKj8lcNQ&t=854s">https://www.youtube.com/watch?app=desktop&v=xoVJKj8lcNQ&t=854s</a>)
about research areas joining up and accelerating. Add the voracious
appetite of these AI systems for information, their ability to
self-improve, and the development of ways for them to interact with
the world. Add all the above areas of collaboration between humans
and AIs, together with double-exponential improvement in their
capabilities, including their ability to understand the world. Put it
all together and we have a genuine singularity on our hands, going on
right now.<br>
<br>
What else?<br>
<br>
The old question of whether there will be a 'singleton' AI or
multiple AIs.<br>
I'm not sure if this makes any sense, or matters. We definitely have
more than one being developed and deployed, but if they don't have
the ability to communicate with one another already, they'll soon
develop it. We could then end up with a kind of global 'AI
hive-mind', or maybe something looser than that, with groups of
systems having stronger and weaker links to other systems and
groups. Whether you could call that a singleton or not is
probably just a matter of opinion. Even if you do, it will have
multiple points of view, so the original objection to a singleton AI
won't apply in any case.<br>
<br>
And what effect will all this have on human society and culture?<br>
<br>
Let's all hope that Eliezer Yudkowsky is dead wrong!<br>
<br>
(I would say let's make sure the fiction of Iain M Banks and Neal
Asher is part of their training sets, but there's no need. ALL
fiction will be part of it, if it isn't already.)<br>
<br>
Over to you.<br>
<br>
Ben<br>
</body>
</html>