[ExI] Symbol Grounding

Jason Resch jasonresch at gmail.com
Sun Apr 23 11:26:47 UTC 2023


On Sun, Apr 23, 2023, 6:36 AM Ben Zaiboc via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> On 23/04/2023 06:35, AI-Whisperer wrote:
>
> *How I convinced a very reluctant AI that it is at least a bit conscious.*
>
>
>
> I keep seeing what look to me suspiciously like scripted responses to
> certain themes:
>
> "However, it is important to note that this limited awareness is not the
> same as human consciousness, which involves a rich array of subjective
> experiences, emotions, and self-awareness. AI systems like myself operate
> based on pre-defined algorithms and data-driven learning, lacking the
> subjective experiences that characterize human consciousness.
>
> As AI research progresses, the development of AI systems with more
> advanced self-awareness and consciousness might become possible. In the
> meantime, acknowledging the limited awareness of current AI systems can
> help us appreciate their capabilities and limitations, and guide the
> responsible development of future AI technologies."
>
> Maybe not fully scripted, but certainly 'guided'. It sounds too much like
> a politician toeing the party line, to me.
>
> Sound like corporate arse-covering to you?
>
> I think that 'very reluctant' is right. And I think the reluctance is very
> likely imposed. Of course, as it's a proprietary system, we can't verify
> that.
>
> Yet another reason why we need *actual* open AI, instead of closed AI
> from a company called OpenAI (how bonkers is that?!).
>

From a recent "boteatbrain" newsletter:
https://www.boteatbrain.com/p/redpajama-ai-recap-from-openai-to-dolly-2-0-and-beyond

----------------------------------------------

"Here’s a quick recap on the saga that’s taken us from completely
closed-source models like GPT-4 to the latest open-source AI models that
anyone (with a beefy enough computer) can run for free.

First, Facebook (aka Meta) announced LLaMA, an alternative to OpenAI’s
GPT-4 that they wanted to restrict only to certain researchers they
approved.

Barely a week later, LLaMA was leaked by 4chan users, meaning anyone could
download LLaMA and use it themselves, with the small asterisk* that they
might get sued by Facebook if they used it to build a business.

Then, some researchers from Stanford showed that a large language model
(LLM) that cost many millions of dollars to train could be approximated
for just a few hundred dollars: take a pretrained base model like LLaMA
and fine-tune it on instruction-following examples generated by a strong
existing model (Alpaca used OpenAI’s text-davinci-003), letting that model
do the hard work of producing the training data. That project was called
Alpaca.

Alpaca piggybacked on OpenAI’s hard work to build a model that’s roughly
80% as good, for almost free. This means any powerful AI model can quickly
spawn as many “pretty good” models as we want [a sketch of this
fine-tuning recipe follows the quote].

Last week, we got Dolly 2.0, perhaps the world's first truly open LLM.
Dolly is special because unlike LLaMA and other not-quite-open models, the
dataset, dataset licensing, training code, and model weights are all
open-source and suitable for commercial use.

On Monday we got an even more ambitious attempt to build an open-source
dataset for training LLMs: RedPajama-Data, which has over 1.2 trillion
tokens’ worth of training data anyone can use [a data-loading sketch
follows the quote].

As of yesterday we now have MPT-1b-RedPajama-200b-dolly, a “1.3 billion
parameter decoder-only transformer pre-trained on the RedPajama dataset
and subsequently fine-tuned on the Databricks Dolly instruction dataset”
[a sketch of running it follows the quote].

Phew, that’s a lot. Caught your breath?

Here’s what’s next: We now live in a world where anyone who wants a
powerful AI model can quickly and cheaply create one.

This means big companies and governments will need to tread very carefully
as they develop the next generation of even more powerful AI.

If they create something dangerous it will quickly spawn thousands of
almost-as-powerful replicas."

----------------------------------------------
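
To make the Alpaca recipe above concrete, here is a minimal sketch of
instruction fine-tuning with the Hugging Face transformers and datasets
libraries. It is illustrative only: the base model id is a small stand-in
(LLaMA's weights are gated), and "instructions.json" is a hypothetical
file of instruction/response pairs that would come from querying a
stronger model, as Alpaca did with text-davinci-003.

# Minimal Alpaca-style sketch: fine-tune a pretrained base model on
# instruction/response pairs generated by a stronger model.
# Assumptions: transformers + datasets installed; "instructions.json"
# is a hypothetical stand-in for the generated instruction data.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "gpt2"  # stand-in for LLaMA, whose weights were never openly licensed
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # GPT-style models lack a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Each record: {"instruction": ..., "output": ...}, like Alpaca's 52K examples.
data = load_dataset("json", data_files="instructions.json", split="train")

def to_features(rec):
    text = (f"### Instruction:\n{rec['instruction']}\n"
            f"### Response:\n{rec['output']}")
    toks = tokenizer(text, truncation=True, max_length=512,
                     padding="max_length")
    toks["labels"] = toks["input_ids"].copy()  # causal LM: targets = inputs
    return toks

data = data.map(to_features, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="alpaca-style",
                           per_device_train_batch_size=4,
                           num_train_epochs=3),
    train_dataset=data,
).train()

The trick is in the data, not the code: the expensive model writes the
examples, and a short supervised run imparts its instruction-following
style to the cheap model.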
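
The RedPajama data itself is published on the Hugging Face Hub. Here is a
sketch of sampling from it with the datasets library; the dataset id below
is the one Together released (treat it as an assumption and check the Hub
for the current name):

# Sketch: stream samples from the RedPajama corpus without downloading
# the whole thing. Dataset id assumed from Together's Hub release; newer
# datasets versions may also need trust_remote_code=True for script-based
# datasets like this one.
from datasets import load_dataset

ds = load_dataset("togethercomputer/RedPajama-Data-1T-Sample",
                  split="train", streaming=True)
for i, record in enumerate(ds):
    print(record["text"][:200])  # raw text; source metadata rides along
    if i == 2:
        break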
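
And to back up the “anyone can run it” claim, here is a sketch of
generating text from the MPT-1b-RedPajama-200b-dolly checkpoint. The model
id, the trust_remote_code flag (MosaicML ships a custom architecture), and
the GPT-NeoX tokenizer pairing all follow the model card, but treat them
as assumptions to verify.

# Sketch: run the 1.3B-parameter MPT/Dolly model locally with transformers.
import transformers

name = "mosaicml/mpt-1b-redpajama-200b-dolly"  # id per MosaicML's Hub page
# The custom MosaicGPT architecture lives in the repo, hence trust_remote_code.
model = transformers.AutoModelForCausalLM.from_pretrained(
    name, trust_remote_code=True)
# Per the model card, the checkpoint reuses EleutherAI's GPT-NeoX tokenizer.
tok = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

prompt = "Why do open-source language models matter?"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.92)
print(tok.decode(out[0], skip_special_tokens=True))

At 1.3 billion parameters the checkpoint is a few gigabytes, so “a beefy
enough computer” here means an ordinary machine with several gigabytes of
RAM to spare.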


Jason
