[ExI] Startup Conjecture is trying to make AI safe

spike at rainier66.com
Wed Mar 29 17:53:59 UTC 2023



-----Original Message-----
From: extropy-chat <extropy-chat-bounces at lists.extropy.org> On Behalf Of BillK via extropy-chat
Subject: [ExI] Startup Conjecture is trying to make AI safe

>...‘We are super, super fucked’: Meet the man trying to stop an AI apocalypse
>Connor Leahy reverse-engineered GPT-2 in his bedroom, and what he found scared him. Now, his startup Conjecture is trying to make AI safe.
>By Tim Smith, 29 March 2023

<https://sifted.eu/articles/connor-leahy-ai-alignment/>
...
--------------

BillK

_______________________________________________



After being a singularity hipster for 30 years, I find out I never really was one that whole time, and I'm still not.  Dang, that is humiliating.

All along, we (or at least I) thought the critical point would be when software could write itself and eventually evolve a will to do things, along with the autonomy to do them.  Now it seems we could screw ourselves with a much less sophisticated piece of software that most of us agree is not sentient at all.  But it can do some damn impressive things, such as write phony research papers, complete with phony peer-reviewed references, because it is its own peers.  If those tools are readily available, and researchers are judged on the number of papers they produce, then we damn well know people will use them.  Phony PhD theses will enable people to fake scholarship, and so on.  The quality of GPT's writing is good enough that I would hafta judge it at least the equal of, if not superior to, many of the PhD theses I have voluntarily proofread.

So if we flood the scientific literature ecosystem with fake research indistinguishable from the real thing (or, if we really get down to it, in some ways better than the real thing), we will have screwed ourselves before the actual singularity.

spike



