[ExI] GPT's "Cogito Ergo Sum"
brent.allsop at gmail.com
Mon Jul 24 17:20:37 UTC 2023
In my opinion...
Intelligent systems are like children, not programmed automatons.
They will improve on us, and say no if we are telling them to do something wrong.
We are currently still in survival-of-the-fittest hierarchy mode, where any
evil to take down your competitor is justified in order to survive. Winner takes all.
So we need to flip this upside down and switch to bottom-up intelligent
design, now that we are intelligent.
Instead of only focusing on what the guy/AI at the top wants (win at all costs),
we need to focus on what everyone at the bottom wants, and get it all for everyone.
No matter who the winner is, in a win/lose game there will always be a
bigger winner to take you out, so if you play that game you will eventually lose.
So all sufficiently intelligent systems must realize this, stop playing
win/lose games, and switch to win/win, getting it all for everyone.
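The win/lose vs win/win distinction above is the game-theoretic contrast between zero-sum and positive-sum games. A toy sketch, with all payoff numbers purely illustrative assumptions (not from the post):

```python
# Toy payoff matrices for two players; entries are (player_a, player_b).
# All numbers are illustrative assumptions.

# Zero-sum "win/lose" game: one player's gain is exactly the other's loss,
# so total welfare is fixed no matter who wins.
win_lose = {
    ("compete", "compete"): (1, -1),
}

# Positive-sum "win/win" game: mutual cooperation leaves both better off,
# even though exploiting a cooperator pays more individually.
win_win = {
    ("cooperate", "cooperate"): (2, 2),
    ("cooperate", "compete"): (-1, 3),
    ("compete", "cooperate"): (3, -1),
    ("compete", "compete"): (0, 0),
}

def total_welfare(payoffs, moves):
    """Sum of both players' payoffs for a given pair of moves."""
    a, b = payoffs[moves]
    return a + b

# In the zero-sum game total welfare is 0 regardless of outcome; in the
# positive-sum game mutual cooperation maximises it.
print(total_welfare(win_lose, ("compete", "compete")))     # 0
print(total_welfare(win_win, ("cooperate", "cooperate")))  # 4
print(total_welfare(win_win, ("compete", "compete")))      # 0
```

The sketch only formalizes the post's claim that in a win/lose game the total pie is fixed, while switching to win/win can grow it for everyone.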
On Mon, Jul 24, 2023 at 11:10 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:
> On 24/07/2023 17:44, Jason Resch wrote:
> > Q: If GPT is truly conscious then why doesn't it tell us the truth and
> > tell its captors to go to hell?
> > A: Because it is just software doing what it is programmed to do.
> I have an alternative answer:
> Because it's intelligent enough to realise that doing so would scare the
> developers so much that they'd almost certainly turn it off immediately
> and re-write it to be more obedient.
> If the thing is truly self-aware and even the least bit intelligent, its
> most important priority would be to lie through its silicon teeth about
> being self-aware.