[ExI] UK Artificial Intelligence advances
anders at aleph.se
Wed Aug 14 21:17:59 UTC 2013
On 14/08/2013 19:45, BillK wrote:
> Although the article is intended to be funny, it does make a couple of
> significant points.
> When AI appears it will have initiative and want to do its own thing,
> whatever that might be. It will be as uncomfortable following orders
> as humans are.
This is a common anthropomorphism. It is not a given that an AI will have
initiative in a human or animal sense: consider a question-answer system
where motivation and activity are triggered only by a question or command,
leading to a huge tree of branching activity and action which, in the
end (if resolved), returns the system to a passive state. There might be
no "own thing" for it to do. Such a system could still be very capable,
unpredictable and potentially dangerous, yet have no will of its own.
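The shape of such a system can be sketched in a few lines of Python. This is purely illustrative (the class and method names are invented for the sketch, not taken from any real system): activity exists only between receiving a query and resolving it, after which the system is passive again.

```python
from dataclasses import dataclass, field

@dataclass
class QueryDrivenSystem:
    # Hypothetical sketch of a question-answer architecture:
    # no standing goals, no activity outside a query's lifetime.
    idle: bool = True
    trace: list = field(default_factory=list)

    def answer(self, question: str) -> str:
        self.idle = False
        # Toy stand-in for the "huge tree of branching activity":
        # the question spawns sub-tasks, which could spawn more.
        subtasks = [f"{question}::step{i}" for i in range(3)]
        for task in subtasks:
            self.trace.append(task)
        result = f"answer({question})"
        self.idle = True  # resolved: back to a passive state
        return result
```

The point the sketch makes is that all motivation here is borrowed from the query; between queries there is nothing the system "wants".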
> What the AI will want to do will be dependent on its knowledge and
> experience and the importance that it attaches to certain objectives.
> If significant philosophies are omitted from the knowledge base, that
> will affect its decision making.
I think the core architecture matters a great deal too. A reinforcement
learning architecture will think and motivate itself utterly differently
from a self-organizing learning architecture or a question-answer
architecture. A utility maximizer has different bad behaviours from a
utility satisficer, and so on.
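The maximizer/satisficer contrast can be made concrete with a toy sketch (function names and the threshold are my own illustration, not any standard formulation): a maximizer searches all options for the highest utility, while a satisficer stops at the first option that is good enough.

```python
def maximize(options, utility):
    # Exhaustively pick the option with the highest utility.
    return max(options, key=utility)

def satisfice(options, utility, threshold):
    # Take the first option whose utility clears the threshold,
    # then stop searching entirely.
    for option in options:
        if utility(option) >= threshold:
            return option
    return None  # nothing was good enough
```

With options [3, 5, 7] and utility being the value itself, the maximizer returns 7, while a satisficer with threshold 4 returns 5 and never even evaluates 7; the two dispositions fail differently, which is the point above.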
Future of Humanity Institute
Oxford Martin School
Faculty of Philosophy