<html><head><meta http-equiv="Content-Type" content="text/html; charset=us-ascii"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><blockquote type="cite" class=""><div dir="auto" class="">I am told ChatGPT has already memorized all the world's religious texts. OK, cool, but what if... it is trained on only that material and nothing else?</div></blockquote><div class=""><br class=""></div>If enough training data could be collected, such a model would have a rather fantastical view of the world. It would be an interesting experiment.<br class=""><div><br class=""><blockquote type="cite" class=""><div class="">On Jun 3, 2023, at 2:09 PM, Gregory Jones via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" class="">extropy-chat@lists.extropy.org</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div dir="auto" class="">I am told ChatGPT has already memorized all the world's religious texts. OK, cool, but what if... it is trained on only that material and nothing else?<div dir="auto" class=""><br class=""></div><div dir="auto" class="">spike</div></div><br class=""><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Jun 2, 2023, 5:23 PM BillK via extropy-chat <<a href="mailto:extropy-chat@lists.extropy.org" class="">extropy-chat@lists.extropy.org</a>> wrote:<br class=""></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">"26 May 2023 Discussion on AI, GPT-4 and the Open Letter". 1 hr 20 min.<br class="">
<<a href="https://www.youtube.com/watch?app=desktop&v=EaijyqS1MOw" rel="noreferrer noreferrer" target="_blank" class="">https://www.youtube.com/watch?app=desktop&v=EaijyqS1MOw</a>><br class="">
Panelists include -<br class="">
Ben Goertzel, Robin Hanson, Max More, Anders Sandberg,<br class="">
Natasha Vita-More, Peter Voss<br class="">
<br class="">
We discuss and debate 4 areas of concern:<br class="">
(1) Spectrum of Cultural Concepts of AI and GPT-4;<br class="">
(2) Logic of the Assertion that Large Language Models Will Result in AGI;<br class="">
(3) Plausibility of AI-Foom vs. Soft take-off; and<br class="">
(4) Practicality of the "AI-pause".<br class="">
----------------------<br class="">
<br class="">
BillK<br class="">
_______________________________________________<br class="">
extropy-chat mailing list<br class="">
<a href="mailto:extropy-chat@lists.extropy.org" target="_blank" rel="noreferrer" class="">extropy-chat@lists.extropy.org</a><br class="">
<a href="http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat" rel="noreferrer noreferrer" target="_blank" class="">http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat</a><br class="">
</blockquote></div>
</div></blockquote></div><br class=""></body></html>