<div dir="ltr"><div dir="ltr">On Tue, Oct 13, 2020 at 9:13 AM Stuart LaForge via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" target="_blank">extropy-chat@lists.extropy.org</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
> From what I have been able to see of its output, it actually is
> pretty smart when it comes to writing stuff. It just seems to lack
> common sense, which is understandable since GPT-3 has no sensory
> inputs except for text. This could cause it to underperform on tasks
> that would require it to associate text with sensory and motor
> experiences, just
> as Bill Hibbard observed earlier.

It's able to string together words in an order that typically makes grammatical sense and that usually bears some contextual relationship to the seed prompt it was given (the first sketch at the end of this message shows that seed-conditioned loop). It's been trained on a truly massive amount of data with an amped-up architecture compared to GPT-2. It's not surprising that it has uncovered enough relationships in text to fool us for a paragraph.

For me, while the tech under the hood is different, this is just the latest iteration (granted, a highly advanced one) in a progression that began with RNNs, moved on to CNNs, followed by LSTMs, and on to the GPT series.

Do you consider an image recognition system conscious? IMO, this isn't much different. It finds patterns in text and, given a seed, attempts to guess at what would make sense as output, with some level of novelty introduced (see the second sketch below).
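
To make "seed" concrete, here is a minimal sketch of seed-conditioned generation using the Hugging Face transformers library. GPT-3's weights aren't publicly downloadable, so GPT-2 stands in for it, and the prompt and output length are illustrative choices, not anything from Stuart's post:

    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # The "seed": everything the model emits is conditioned on this text.
    inputs = tokenizer("GPT-3 is able to string together words",
                       return_tensors="pt")

    # Greedy decoding: at each step, append the single most probable next
    # token according to the statistical relationships learned in training.
    output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=False)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))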
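
And the "novelty" comes from sampling the next token rather than always taking the most likely one. Here is that loop written out by hand, again with GPT-2 standing in and an arbitrary temperature value (not anything GPT-3 actually uses):

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    ids = tokenizer("GPT-3 is able to string together words",
                    return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(40):
            logits = model(ids).logits[0, -1]            # scores for the next token
            probs = torch.softmax(logits / 0.9, dim=-1)  # temperature: >1 flattens, <1 sharpens
            next_id = torch.multinomial(probs, 1)        # sampling is where the novelty enters
            ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)

    print(tokenizer.decode(ids[0]))

Raise the temperature and the output gets more surprising; drive it toward zero and you are back to the deterministic pattern-matching of the first sketch.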