[ExI] very rough model of simulator and one version of consciousness
sam micheal
micheals at msu.edu
Tue Jan 20 15:36:03 UTC 2009
Now, with the model below, I have to make a couple of statements. It is
extremely rough; no claim is made of correctness or completeness. So I
would prefer readers send suggestions about component corrections or
extensions, not simple dismissal like Regan seems to prefer. Regan would
seem to prefer to reject point by point, just to dismiss my ideas,
without reading or understanding. Perhaps it is their job to confuse and
obfuscate, so I would take any of their statements with a large grain of
salt; don't just take Regan's word for it -- *read for yourself and
decide for yourself*. The other point about the model is that no
assumption is made that it is anything similar to human consciousness. I
am NOT saying this is how we think or are aware. It is only a guess, and
only a model of that guess. So please don't assume I am saying "this is
a complete and accurate model of human consciousness" -- it is NOT.
A COnscious MAchine Simulator -- ACOMAS
Salvatore Gerard Micheal, *modified* 15/JAN/09
Objects
    2 cross-verifying senses (simulated)
        hearing (stereo)
        seeing (stereo)
    *motivation subsystem (specs to be determined)*
    *information filters (4, controlled hierarchically)*
    *supervisory symbol register (same as below)*
    short-term*/primary* symbol register (8^3 symbols arranged in an
        8x8x8 array; see the sketch after this list)
    rule-base (self-verifying and self-extending)
    *rule-base symbol register (same as above)*
    3D visualization register *(10^12 bits arranged in a
        10000x10000x10000 array)*
    models of reality (at least 2)
    morality (unmodifiable and uncircumventable)
        don't steal, kill, lie, or harm
    goal list (modifiable, prioritized)
    output devices
        robotic arms (simulated -- 2)
        voice synthesizer and speaker
        video display unit
    local environment (simulated)
    operator (simulated, controlled by the operator)
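To make the register specs concrete, here is a minimal Python sketch of
the symbol registers and the visualization register as plain arrays. The
integer-symbol encoding and register variable names are my own
assumptions for illustration; note that the full 10000x10000x10000 bit
array is about 125 GB, so this sketch scales it down to fit in memory (a
real implementation would need a sparse or tiled representation).

    import numpy as np

    # Each symbol register holds 8x8x8 = 512 symbols; a "symbol" here is
    # just an integer ID into a symbol table (an assumption -- the specs
    # don't say how symbols are encoded).
    SYMBOL_REGISTER_SHAPE = (8, 8, 8)

    primary_register     = np.zeros(SYMBOL_REGISTER_SHAPE, dtype=np.int32)
    supervisory_register = np.zeros(SYMBOL_REGISTER_SHAPE, dtype=np.int32)
    rulebase_register    = np.zeros(SYMBOL_REGISTER_SHAPE, dtype=np.int32)

    # The spec calls for 10^12 bits (10000^3), roughly 125 GB of RAM;
    # scaled down here to 100^3 bits (~1 MB) so the sketch actually runs.
    VIS_SIDE = 100  # would be 10000 in the full spec
    visualization_register = np.zeros((VIS_SIDE, VIS_SIDE, VIS_SIDE),
                                      dtype=bool)

    # Example: light a voxel and place a symbol at a register location.
    visualization_register[10, 20, 30] = True
    primary_register[0, 0, 0] = 42  # symbol ID 42, e.g. "toy block"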
Purpose
    test feasibility of the construct for an actual conscious machine
    discover timing requirements for a working prototype
    discover specifications of objects
    discover implications/consequences of enhanced intelligence
        (humans have roughly 7 short-term symbol registers)
    discover implications/consequences of emotionless consciousness
Specifications
The registers are the core of the device -- the (qualified)
'controllers' of the system (acting on the goal list) and the reasoners
of the system (identifying rules), all constrained by morality. The goal
list should be instantaneously modifiable. For instance, an operator can
request "show me your goal list" .. "delete that item" or "move that
item to the top" .. "learn the rules of chess", and the device should
comply immediately. Otherwise, the device plays with its environment,
learning new rules and proposing new experiments.
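As one way to make "instantaneously modifiable" concrete, here is a
minimal Python sketch of a prioritized goal list supporting the operator
commands mentioned above. The method names and list representation are
my assumptions, not part of the spec.

    class GoalList:
        """Prioritized, operator-modifiable goal list (index 0 = top)."""
        def __init__(self):
            self.goals = []

        def show(self):
            for i, goal in enumerate(self.goals):
                print(f"{i}: {goal}")

        def add(self, goal):
            self.goals.append(goal)

        def delete(self, index):
            del self.goals[index]

        def move_to_top(self, index):
            self.goals.insert(0, self.goals.pop(index))

    goals = GoalList()
    goals.add("play with toy blocks")
    goals.add("learn the rules of chess")
    goals.move_to_top(1)  # chess becomes the top goal, immediately
    goals.show()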
The purpose of the cross-verifying senses is to reinforce the 'sense of
identity' established by the senses, the registers, and the models of
reality. The reason for 'at least 2' models is to provide a 'means-ends'
basis for problem solving: one model represents the local environment
'as is', and the other represents the desired outcome of the top goal.
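On the simplest reading given below -- a model of reality as a list of
objects and locations -- a means-ends step is just a diff between the
two models. A rough Python sketch, where the object/location
representation and the action format are my own assumptions:

    # Models of reality as simple object -> location mappings (an
    # assumption; the spec allows anything from this up to a full VR).
    current_model = {"red block": "floor", "blue block": "floor"}
    goal_model    = {"red block": "table", "blue block": "floor"}

    def means_ends_actions(current, goal):
        """Compare the 'as is' model with the goal model and propose
        actions that reduce the difference (classic means-ends analysis)."""
        actions = []
        for obj, target in goal.items():
            if current.get(obj) != target:
                actions.append(f"move {obj} from {current.get(obj)} "
                               f"to {target}")
        return actions

    print(means_ends_actions(current_model, goal_model))
    # ['move red block from floor to table']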
The purpose of arranging the short-term register as a 3D array is to
give the capacity for 'novel thought' processes (humans tend to think in
linear, sequential terms). The reason for designing a self-verifying and
self-extending rule-base is that rule-base maintenance tends to be a
data- and processing-intensive activity; if we made the primary task of
the device 'rule-base analysis', the device would undoubtedly spend the
bulk of its time on related tasks (thereby becoming a rule-base analyzer
device and not a conscious machine).
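Self-verification is left unspecified here; one minimal reading is a
background pass that scans the rule-base for direct contradictions. A
toy Python sketch, where the (condition, conclusion) rule encoding and
the "not " negation convention are my assumptions:

    # Rules encoded as (condition, conclusion) pairs; "not X" marks
    # negation. This encoding is illustrative, not part of the spec.
    rule_base = [
        ("is pushed", "block falls"),
        ("is pushed", "not block falls"),  # contradicts the rule above
        ("is grasped", "block lifts"),
    ]

    def find_contradictions(rules):
        """Flag rule pairs with the same condition but negated
        conclusions."""
        bad = []
        for i, (cond_a, concl_a) in enumerate(rules):
            for cond_b, concl_b in rules[i + 1:]:
                if cond_a == cond_b and (concl_a == "not " + concl_b
                                         or concl_b == "not " + concl_a):
                    bad.append((cond_a, concl_a, concl_b))
        return bad

    print(find_contradictions(rule_base))
    # [('is pushed', 'block falls', 'not block falls')]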
The 'models of reality' could be as simple as a list of objects and
locations, or they could be a virtual reality implemented on a dedicated
machine. The same applies to the 'local environment'. For operator
convenience, the simulated local environment should take the form of a
virtual reality, so that the operator interacts with the device in a
virtual world (in the simulated version). In this version the senses,
robot arms, and operator presence would all be virtual. This should be
made clear to the device, so that any transition to 'real reality' would
not have destructive consequences.
*Modifications: the specs for the visualization register were previously
"either or" -- either a restricted VR (as described in the specs-01
document) or a bit array. Since I'm not a VR programmer, it was simpler
for me to specify my best guess at the requirements for a bit array.

A dedicated rule-base symbol register was added because that subsystem
will likely be "register hogging" and would not otherwise let the device
freely "pay attention" to anything but rule-base development. The
supervisory symbol register was likewise added to "free up" the primary
symbol register for "attentive tasks". Its purpose is exactly what it
says: to take directives directly from the motivation subsystem and
"tell" the rest of the system what to do.

For instance, the rule-base register may currently be scanning the
rule-base for consistency (since no immediate tasks were assigned). The
primary symbol register is telling the robotic arms to push a set of toy
blocks over (since it is in play/experiment mode, to see what happens).
The supervisory symbol register has just received a directive from
motivation: try to make Sam laugh with a joke. A possible scenario is
described in specs-01. (The directive would entail "researching", in the
rule-base, what a joke is, what qualifies as "laughing", and any other
definitions required to satisfy the directive. If those researches were
not satisfied, the directive would have to be discarded, or questions
asked of the operator: "Sam, what's a joke?")
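One way to read that directive-handling loop: the supervisory register
looks up each term the directive needs in the rule-base, falls back to
asking the operator, and discards the directive if a definition is still
missing. A small Python sketch; the function names and the definitions
table are my assumptions:

    # Known definitions in the rule-base (an assumed representation).
    rule_base_defs = {"laughing": "vocal expression of amusement"}

    def ask_operator(term):
        """Fall back to the operator; in the simulator this would go
        through the voice synthesizer (None means 'no answer')."""
        print(f"Sam, what's a {term}?")
        return None

    def handle_directive(directive, required_terms):
        """Research each required term; discard the directive if any
        term cannot be defined from the rule-base or the operator."""
        for term in required_terms:
            definition = rule_base_defs.get(term) or ask_operator(term)
            if definition is None:
                print(f"directive discarded: no definition for '{term}'")
                return False
        print(f"executing: {directive}")
        return True

    handle_directive("make Sam laugh with a joke", ["joke", "laughing"])
    # prints "Sam, what's a joke?" then "directive discarded: ..."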
After outlining the 'conscious part' in a schematic block diagram, I
realized information filters would be required, implemented in a
hierarchical fashion (this basic design was approached in '95). The
senses feed motivation, the supervisor, the primary register, and the
visualization register -- but through filters: motivation controls its
own filter and the supervisor's filter; the supervisor controls the
filters feeding the registers. Directives/info flow directly from
motivation to supervisor and from supervisor to registers. Signals
before and after the filters would be analogous to sensations and
perceptions: the hierarchical 'filter control structure' decides which
sensations are perceived by the lower registers, thereby controlling
sensation impact and actual register content. Whether or not humans
actually think like this, I believe the structure is rich enough to at
least mimic human consciousness.

The crux, the 'Achilles heel', becomes motivation. The motivation
subsystem must be flexible and focusable: it cannot be overly flexible
(completely random) or overly rigid (focusing only on the goal list).
Its control of the filters (including its own) must be adaptable and
expedient. Its purpose is to guide the system away from inane
repetition, 'blind alleys' (unnavigable logical sequences and unprovable
physics), and catastrophic failure, and simultaneously toward robust and
reliable solutions to problems, expedient play/experimentation, and
engaging conversation with the operator. If this sounds impossible, try
to raise a baby without any experience! You learn fast! ;)*
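To show the wiring rather than the control policy (which is the hard,
unspecified part), here is a Python sketch of the hierarchical filter
structure described above: senses feed four filtered channels,
motivation sets its own and the supervisor's filter, and the supervisor
sets the register filters. The gain-style filters and channel names are
my assumptions.

    # Filters as simple attenuation gates keyed by channel name. The
    # who-controls-what map follows the text: motivation controls the
    # motivation and supervisor filters; the supervisor controls the
    # filters feeding the lower registers.
    filters = {"motivation": 1.0, "supervisor": 1.0,
               "primary": 1.0, "visualization": 1.0}

    CONTROL = {"motivation": ["motivation", "supervisor"],
               "supervisor": ["primary", "visualization"]}

    def set_filter(controller, channel, gain):
        """A controller may only adjust the filters it owns."""
        if channel not in CONTROL[controller]:
            raise PermissionError(f"{controller} does not control "
                                  f"{channel}")
        filters[channel] = gain

    def perceive(sensation, channel):
        """Sensation in, perception out: the filter decides what gets
        through to the register on that channel."""
        return sensation * filters[channel]

    set_filter("motivation", "supervisor", 0.5)  # damp the supervisor
    set_filter("supervisor", "primary", 0.0)     # block the primary
    print(perceive(10.0, "supervisor"))  # 5.0 -- attenuated perception
    print(perceive(10.0, "primary"))     # 0.0 -- filtered out entirely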
My ultimate purpose in creating a conscious machine is not ego or
self-aggrandizement. I simply want to see if it can be done -- and, if
it can be done, to create something creative, with potential. My mother
argues a robot can never 'procreate' because it is not 'flesh and
blood', and can never have insight or other elusive human qualities. I
argue that robots can 'procreate' in their own way and are limited only
by their creators. If we can 'distill' the essence of consciousness into
a construct (like the above), if we can implement it on a set of
computer hardware and software, if we give that construct the capacity
for growth, if the construct has even a minimal creative ability (such
as with GA/GP), and if we critically limit its behavior by morality (as
above), we have created a sentient being (not just an
artificial/synthetic consciousness). I focus on morality because, if
such a device became widespread, it would undoubtedly be abused to
perform 'unsavory' tasks, which would have fatal legal consequences for
inventor and producer alike.
In this context, I propose we establish 'robot rights' before such
machines are developed, in order to provide a framework for dealing with
abuses and violations of law. Now, all this may seem like science
fiction to most. But I contend we have focused far too long on what we
call 'AI' and expert systems. For too long we have blocked real progress
in machine intelligence by one of two things: mystifying 'the human
animal' (by basically saying it can't be done) -- or -- staring at an
inappropriate paradigm. It's good to understand linguistics and vision;
without that understanding, perhaps we could not implement certain
portions of the construct above. But unless we focus on the mechanisms
of consciousness, we will never model it, simulate it, or create it
artificially.