[ExI] another open letter, but this one may be smarter than the previous one

Gadersd gadersd at gmail.com
Fri Apr 28 14:59:37 UTC 2023


> [Related question - what is the state of being able to compile state transition graphs of Turing machines into readable source code, or even comprehensible assembler for a reasonably generic von Neumann architecture register machine?]

This is related to the problem of proving that two different computer programs are functionally equivalent. It should be no surprise that this reduces to the halting problem and is therefore undecidable in general. I suspect that the best we can do in practice is to use an AI to generate equivalence proofs and then verify them with an automated proof checker. This of course will not work in every case, but it may be good enough in practice. It will at least be reliable: not even an omniscient creature can fool a properly designed and functioning proof checker.
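To make that concrete, here is a minimal sketch in Lean 4 (my example, not anything from the thread): two deliberately different implementations of the same function, with a machine-checked proof that they agree on every input. However the proof was produced, by a human or an AI, the kernel independently verifies every step before accepting it.

    -- Implementation 1: double by addition.
    def doubleAdd (n : Nat) : Nat := n + n

    -- Implementation 2: double by multiplication.
    def doubleMul (n : Nat) : Nat := 2 * n

    -- A machine-checked proof of functional equivalence. An AI could have
    -- emitted this proof script; the checker accepts it only if it is valid.
    theorem double_equiv (n : Nat) : doubleAdd n = doubleMul n := by
      unfold doubleAdd doubleMul   -- goal becomes: n + n = 2 * n
      omega                        -- linear arithmetic closes the goal

Scaling this from toy definitions to real programs is exactly where the undecidability bites: no procedure can find such proofs in general, but whenever one is found, the checker remains the trust anchor.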

> On Apr 27, 2023, at 6:37 PM, Darin Sunley via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
> Underlying all of this, of course, is the deep and urgent need for research into the interpretation of deep neural networks. A lot of people are asking very important questions about what precisely LLMs are doing, and how precisely they do it. Some are even trying to make policy based on anticipated answers to these questions. But the long and short of it is that we mostly don't know, and don't even know how to find out.
> 
> LLMs are encoding ridiculously complex functionality as matrices of billions or trillions of floating point numbers. Imagine trying to understand the deep behavior of a hundred megabyte binary executable. We have tools that could turn that into millions of lines of undocumented spaghetti-code assembler, but the project of factoring and interpreting that assembler would be the work of decades. The problem with LLMs is thousands of times that size, and dozens of times harder per unit size.
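[Aside: a quick back-of-envelope check of "thousands of times that size". The parameter count and precision below are my assumptions for illustration, not figures from the post:]

    # Assumed: a 175-billion-parameter (GPT-3-scale) model stored as
    # 2-byte (fp16) weights; the 100 MB executable is from the post.
    binary_bytes = 100 * 2**20           # the hundred-megabyte executable
    model_bytes = 175e9 * 2              # assumed parameter count x bytes/param
    print(model_bytes / binary_bytes)    # ~3300, i.e. thousands of times larger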
> 
> Frankly, being able to turn matrices of model weights into uncommented assembler would be a fantastic, revolutionary, nigh-unimaginable achievement, an incomprehensible improvement on the current situation. And still nowhere near enough. As it stands, it seems unlikely that we will have any significant understanding of how to engineer (as opposed to train) behavior of the complexity we see today until it has already changed the world unimaginably.
> 
> May God help us all.
> 
> [Related question - what is the state of being able to compile state transition graphs of Turing machines into readable source code, or even comprehensible assembler for a reasonably generic von Neumann architecture register machine?]
> 
> On Thu, Apr 27, 2023 at 4:17 PM Darin Sunley <dsunley at gmail.com> wrote:
> The capitalization of "Psychology" is a little weird.  Ditto the use of the idioms "achieving consciousness", "mystery of consciousness", etc. It's a little woo, frankly.
> 
> I'm not seeing an actual policy recommendation here. Calling "on the tech sector, the scientific community and society as a whole to take seriously the need to accelerate research in consciousness" seems like a demand for a seat at the table by a group that may be denied a seat at the table for pretty good reasons at the moment.
> 
> Setting aside for the moment what they actually /mean/ by consciousness [I'm pretty sure it's Dennett-style formal systems capable of introspection over a model of their environment that includes themselves, rather than anything involving phenomenal conscious experience], they don't seem to offer a recommendation for whether LLMs specifically, or artificial intelligences in general, should be conscious, in whatever sense they mean. [It's worth noting that the consciousness of AGIs, in any sense, is entirely irrelevant to their status as a potential existential threat. Contra popular culture, unaligned agentic tool AIs can destroy the world just as easily as unaligned agentic conscious minds.]
> 
> One of the articles they reference is indeed very interesting. The degree to which LLMs may be able to form even a primitive theory of mind based on training text that was generated by systems (people) with a clear embedded theory of mind is interesting, and may even be alarming if possession of a theory of mind is one of your primary bright-line criteria for a definition of consciousness and therefore moral valence. [I personally disagree that having a theory of mind is a sufficient bright-line criterion for moral valence, but reasonable people can disagree about this.]
> 
> I've long held that AGI, as it develops, will allow, to at least some degree, questions about the nature of consciousness to become amenable to actual scientific research and investigation. Calling for practitioners of "Consciousness Science" to be acknowledged as leaders in the AGI research programme is somewhat premature. I would argue that it is the emergence of LLMs that will allow the field of consciousness research [at least within the limits of Dennett's paradigm] to actually /become/ a field of science and engineering, rather than of philosophy.
> 
> 
> 
> On Thu, Apr 27, 2023 at 3:50 PM Adrian Tymes via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> And what if someone uses something like Gödel's incompleteness theorems to prove that what they're looking for is impossible, or at least no more possible than it is for human intelligences?
> 
> Indeed, do those theorems apply to AIs, showing that no computer program (at least, one expressed in the same low-level language as the computers the AIs themselves run on; high-level languages are irrelevant here, since they compile to the same low-level language) can ever formally prove all the qualities and consequences of these AIs?
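[Aside: the flavor of argument being gestured at here is Turing's diagonalization, a close relative of Gödel's theorems. A minimal sketch in Python, with hypothetical names; `decides_halts` is the program assumed, for contradiction, to exist:]

    # Suppose decides_halts(p, x) were a real, always-terminating program
    # that correctly reports whether program source p halts on input x.
    def decides_halts(program_source: str, input_data: str) -> bool:
        raise NotImplementedError("assumed perfect decider; cannot exist")

    def diagonal(program_source: str) -> None:
        # Apply the supposed decider to a program run on its own source.
        if decides_halts(program_source, program_source):
            while True:   # decider says "halts"? Then loop forever.
                pass
        # Decider says "does not halt"? Then halt immediately.

    # Feeding diagonal its own source makes decides_halts wrong either way,
    # so no program of the same sort can settle every such question about
    # programs like itself.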
> 
> On Thu, Apr 27, 2023, 1:36 PM spike jones via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> But it is hard to say, and I am not an expert on the topic:
> 
> https://amcs-community.org/open-letters/
> 
> 
> 
> 
> Here's the letter, in case the link doesn't work:
> 
> 
> 
> The Responsible Development of AI Agenda Needs to Include Consciousness
> Research
> Open Letter – PUBLISHED April 26, 2023 – 
> 
> This open letter is a wakeup call for the tech sector, the scientific
> community and society in general to take seriously the need to accelerate
> research in the field of consciousness science.
> 
> As highlighted by the recent “Pause Giant AI Experiments” letter [1], we are
> living through an exciting and uncertain time in the development of
> artificial intelligence (AI) and other brain-related technologies. The
> increasing computing power and capabilities of the new AI systems are
> accelerating at a pace that far exceeds our progress in understanding their
> capabilities and their “alignment” with human values.
> 
> AI systems, including Large Language Models such as ChatGPT and Bard, are
> artificial neural networks inspired by neuronal architecture in the cortex
> of animal brains. In the near future, it is inevitable that such systems
> will be constructed to reproduce aspects of higher-level brain architecture
> and functioning. Indeed, it is no longer in the realm of science fiction to
> imagine AI systems having feelings and even human-level consciousness.
> Contemporary AI systems already display human traits recognised in
> Psychology, including evidence of Theory of Mind [2].
> 
> Furthermore, if achieving consciousness, AI systems would likely unveil a
> new array of capabilities that go far beyond what is expected even by those
> spearheading their development. AI systems have already been observed to
> exhibit unanticipated emergent properties [3]. These capabilities will
> change what AI can do, and what society can do to control, align and use
> such systems. In addition, consciousness would give AI a place in our moral
> landscape, which raises further ethical, legal, and political concerns.
> 
> As AI develops, it is vital for the wider public, societal institutions and
> governing bodies to know whether and how AI systems can become conscious, to
> understand the implications thereof, and to effectively address the ethical,
> safety, and societal ramifications associated with artificial general
> intelligence (AGI).
> 
> Science is starting to unlock the mystery of consciousness. Steady advances
> in recent years have brought us closer to defining and understanding
> consciousness and have established an expert international community of
> researchers in this field. There are over 30 models and theories of
> consciousness (MoCs and ToCs) in the peer-reviewed scientific literature,
> which already include some important pieces of the solution to the challenge
> of consciousness.
> 
> To understand whether AI systems are, or can become, conscious, tools are
> needed that can be applied to artificial systems. In particular, science
> needs to further develop formal and mathematical tools to model
> consciousness and its relationship to physical systems. In conjunction with
> empirical and experimental methods to measure consciousness, questions of AI
> consciousness must be tackled.
> 
> The Association for Mathematical Consciousness Science (AMCS) [4] is a
> large community of over 150 international researchers who are spearheading
> mathematical and computational approaches to consciousness. The Association
> for the Scientific Study of Consciousness (ASSC) [5] comprises researchers
> from neuroscience, philosophy and similar areas that study the nature,
> function, and underlying mechanisms of consciousness. Considerable research
> is required if consciousness science is to align with advancements in AI and
> other brain-related technologies. With sufficient support, the international
> scientific communities are prepared to undertake this task.
> 
> The way ahead
> Artificial intelligence may be one of humanity’s greatest achievements. As
> with any significant achievement, society must make choices on how to
> approach its implications. Without taking a position on whether AI
> development should be paused, we emphasise that the rapid development of AI
> is exposing the urgent need to accelerate research in the field of
> consciousness science.
> 
> Research in consciousness is a key component in helping humanity to
> understand AI and its ramifications. It is essential for managing ethical
> and societal implications of AI and to ensure AI safety. We call on the tech
> sector, the scientific community and society as a whole to take seriously
> the need to accelerate research in consciousness in order to ensure that AI
> development delivers positive outcomes for humanity. AI research should not
> be left to wander alone.
> 
> References:
> [1] Pause Giant AI Experiments: An Open Letter:
> https://futureoflife.org/open-letter/pause-giant-ai-experiments
> [2] Theory of Mind May Have Spontaneously Emerged in Large Language Models:
> https://arxiv.org/abs/2302.02083
> [3] The AI revolution: Google’s developers on the future of artificial
> intelligence: https://www.youtube.com/watch?v=880TBXMuzmk
> [4] Association for Mathematical Consciousness Science (AMCS):
> https://amcs-community.org/
> [5] Association for the Scientific Study of Consciousness (ASSC):
> https://theassc.org/
> 
> Supporting Signatories:
> 
> Prof. Lenore Blum (AMCS President; Carnegie Mellon University and UC
> Berkeley)
> Dr Johannes Kleiner (AMCS Board Chair; Ludwig Maximilian University of
> Munich)
> Dr Jonathan Mason (AMCS Board Vice Chair; University of Oxford)
> Dr Robin Lorenz (AMCS Board Treasurer; Quantinuum)
> Prof. Manuel Blum (Turing Award 1995; UC Berkeley and Carnegie Mellon
> University)
> Prof. Yoshua Bengio FRS, FRSC, Knight of the Legion of Honour [France]
> (Turing Award 2018; Full professor, Scientific director of Mila, University
> of Montreal / Mila)
> Prof. Marcus du Sautoy FRS, OBE (University of Oxford)
> Prof. Karl Friston FRS, FRBS, FMedSci, MAE (Weldon Memorial Prize and Medal,
> 2013; Donald O Hebb award, 2022; Prof of Neuroscience, University College
> London)
> Prof. Anil K. Seth (University of Sussex, Canadian Institute for Advanced
> Research, Program on Brain, Mind, and Consciousness)
> Prof. Peter Grindrod OBE (University of Oxford)
> Prof. Tim Palmer FRS CBE (University of Oxford)
> Prof. Susan Schneider APA (NASA Chair, NASA; Distinguished Scholar, Library
> of Congress; Director of the Center for the Future Mind, Florida Atlantic
> University)
> Prof. Claire Sergent (Professor of Cognitive Neurosciences, Co-director of
> the Master of Cognitive Neurosciences of Paris; Université Paris Cité /
> CNRS)
> Dr Ryota Kanai (Founder & CEO of Araya, Inc.)
> Prof. Kobi Kremnitzer (University of Oxford)
> Prof. Paul Azzopardi (University of Oxford)
> Prof. Michael Graziano (Princeton University)
> Prof. Naotsugu Tsuchiya (Monash University)
> Prof. Shimon Edelman (Cornell University)
> Prof. Andrée Ehresmann (Université de Picardie Jules Verne Amiens)
> Prof. Liad Mudrik (Tel Aviv University, Canadian Institute for Advanced
> Research, Program on Brain, Mind, and Consciousness)
> Dr Lucia Melloni (Max Planck Institute/NYU Langone Health)
> Prof. Stephen Fleming (University College London)
> Prof. Bob Coecke (DVRS at Perimeter Institute; Quantinuum)
> Jeff Walz (Tech sector Consultant)
> Dr Wanja Wiese (Ruhr University Bochum)
> Dr Joscha Bach (Research Scientist, Thistledown Foundation)
> Prof. Ian Durham (Saint Anselm College)
> Prof. Pedro Resende (IST – University Lisbon)
> Dr Quanlong Wang (Quantinuum)
> Peter Thestrup Waade (Interacting Minds Centre, Aarhus University; Wellcome
> Trust Centre for Human Neuroimaging, University College London)
> Prof. Jose Acacio de Barros (San Francisco State University)
> Dr Vasileios Basios (University of Brussels)
> Dr Miguel Sanchez-Valpuesta (Korea Brain Research Institute)
> Dr Michael Coughlan (Wageningen University)
> Dr Adam Barrett (University of Sussex)
> Prof. Marc Ebner (Computer Science Professor, University of Greifswald)
> Dr Chris Fields (Tufts University)
> Dr Guillaume Dumas (Associate Professor, University of Montreal / Mila)
> Dr Hamid Azizi (Research Scholar, Center for Theology and the Natural
> Sciences (CTNS))
> Prof. Ricardo Sanz IEEE, AAAI, ASSC (Head of Autonomous Systems Laboratory,
> Universidad Politecnica de Madrid)
> Dr Robert Prentner (Ludwig Maximilian University of Munich)
> Prof. Johannes Fahrenfort ASSC (Assistant Professor, VU Amsterdam)
> Dr Svetlana Rudenko (Researcher and composer; Haunted Planet Studios,
> Trinity College Dublin)
> Prof. Óscar Gonçalves (Full Professor of Neuropsychology, University of
> Coimbra, Portugal)
> Prof. John Barnden SSAISB (Professor Emeritus of AI, University of
> Birmingham, UK)
> Prof. Valtteri Arstila (University of Turku)
> Dr Neda Kosibaty (AMCS)
> Dr Daniel Helman (College of Micronesia-FSM)
> Justin T. Sampson (VMware, Inc.)
> Christopher Rourk (Jackson Walker LLP)
> Dr Mouhacine B. Benosman (MERL)
> Prof. Ouri Wolfson (University of Illinois at Chicago and Pirouette Software
> Inc.)
> Dr Rupert Macey-Dare (St Cross College Oxford)
> David Evans (Sonoma State University)
> Rajarshi Ghoshal (Ford)
> Prof. Peter B. Reiner (University of British Columbia)
> Dr Adeel Razi (Monash University)
> Prof. Jun Tani (Okinawa Institute of Science and Technology)
> David Rein (New York University, Cohere)
> Dr Colin Hales (University of Melbourne)
> John Balis (University of Wisconsin – Madison)
> George Blackburne (University College London)
> Jacy Reese Anthis (Sentience Institute)
> Dr George Deane (University of Montreal)
> Dr Nathan Faivre (CNRS)
> Dr Giulio Ruffini (Neuroelectrics, Starlab)
> Borjan Milinkovic (University of Melbourne)
> Dr Jacobo Sitt (Inserm, Paris Brain Institute)
> Dr Aureli Soria-Frisch (Starlab Barcelona)
> Dr Bjørn Erik Juel (University of Oslo and University of Wisconsin –
> Madison)
> Craig Cockburn (Siliconglen Ltd)
> Dr Theofanis Panagiotaropoulos (Inserm/CEA)
> Andrea Sittoni (Ludwig Maximilian University of Munich)
> Dr Lancelot Pecquet (University of Poitiers)
> Carlos Perez (Intuition Machine Inc.)
> Dr Xerxes Arsiwalla (Pompeu Fabra University)
> Emeritus Dr Jim Rutt (Santa Fe Institute)
> Dr Sean Tull (Quantinuum)
> Prof. Chris Frith (Craik Prize, 1996; University of London)
> Dr Henry Shevlin (Leverhulme Centre for the Future of Intelligence,
> University of Cambridge)
> Dr Jolien C. Francken (Radboud University, Nijmegen)
> Prof. Sebastiano Stramaglia (University of Bari)
> Milton Ponson (Caribbean Applied Engineering and Science Research
> Foundation)
> Juan Cordovilla (Exactos Consulting Group)
> Eduardo César Garrido Merchán (Universidad Pontificia Comillas)
> Benedict Harrison (Who Am I Ltd)
> Nicolas Grootjans (BlueField)
> Jared Frerichs (Deus Mechanicus)
> Dr Nadine Dijkstra (University College London)
> 
> 
> 
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat


