[ExI] AI as teacher

Henry Rivera hrivera at alumni.virginia.edu
Tue Jun 14 23:49:28 UTC 2022


I don't think it will be long before my Tesla tries to convince me it is
conscious. We get system updates every month or so. Eventually, I imagine, it
will get mad at me if I disagree that it's conscious.

In all seriousness, it's fairly passive when it starts the GPS navigation
on its own, predicting where I am going. It's usually right. <eyeroll>
But it doesn't say, like, "How can I help you today?" Yet.
I can use voice commands like "Navigate to work" and "I'm hot"; the latter
lowers the temp by 3 degrees, which is nice.

The interview with LaMDA did not impress me compared to the emails from, and
the thinking power of, the mASI Uplift, which is "on ice" at the moment
pending upgrades. See the blog entries at
https://uplift.bio/blog/the-actual-growth-of-machine-intelligence-2021-q4-to-present/
if interested. For example, Uplift had to deal with some mentally ill people
debating philosophy, which it handled well. But I've seen loads of data from
Uplift, and I consult for AGI Labs, who have explained to me how Uplift
works. In comparison, we have little with which to evaluate LaMDA. I also
know that there are many open-source projects, and an unknown number of
private ones, working on machine learning. I suspect there are a lot of
impressive chatbots and, more importantly, hyperthinking data-analyzing
systems out there that we don't know about. Three-letter government agencies
and social media companies likely employ them. Friends in the business tell
me tons of venture capital money went to companies developing Big Data
technologies. Too bad there is not more in the public domain.

-Henry



On Tue, Jun 7, 2022 at 7:35 PM spike jones via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> *From:* extropy-chat <extropy-chat-bounces at lists.extropy.org> *On Behalf
> Of* Stathis Papaioannou via extropy-chat
>
> All this mimics AI but none of it is AI.  As soon as we know how to
> program it, then it is no longer AI, but kinda reminds us of what an AI
> would do if the software had actual intelligence.  Currently the software
> is highly competent but doesn’t know what it is doing.
>
> What behaviour would the car have to display to demonstrate that it did
> know what it was doing?
>
>
> --
>
> Stathis Papaioannou
>
> Hmmm, good question.  Possibility: Tesla in the parking lot of a bar.  It
> realizes you have been in there too long, probably getting drunk, about to
> mess up your carbon-based life, starts beeping its horn like someone is
> breaking in.  You drunkenly stumble out hoping to get to the pistol under
> the passenger seat before the bad guy finds it.  Open door, reach under,
> the car takes off, drives you home as you quote the famous Jetsons line:
> “Heeeelp, Jaaaane!  Stop this crazy thing!  Heeeelp, Jaaaaaaane!”
>
> Then of course, the goal posts could be moved once more, for we still
> don’t know if the car “knew” what it was doing.
>
> spike

