[ExI] alt man out

Brent Allsop brent.allsop at gmail.com
Mon Nov 20 20:53:07 UTC 2023


Hi Adrian,
Thanks for the comments and questions.

On Mon, Nov 20, 2023 at 1:27 PM Adrian Tymes via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> I do not believe that name change makes sense, for two or three reasons.
>
> 1) AI is being and will continue to be commercialized.  There does not
> seem to be any plausible way to stop this, so there appears to be
> negative utility in debating it (as it takes attention and energy away from
> the more useful question of how best to guide, encourage, and shape the
> commercialization of AI).  Where methods have been proposed, such as
> Eliezer's, the details are highly relevant to whether or not people would
> want to support it - again to take Eliezer's proposal, quite a few who
> might generally support non-commercialization of AI would back down from
> that if, as Eliezer (possibly correctly) states, such drastic action is the
> only way to achieve that goal.  It is like debating whether the Moon should
> orbit the Earth.
>

I believe that if there were powerful arguments as to why AI
commercialization could be dangerous, it could at least be severely
restricted.  And eventually, nuances like this could be teased out in the
conversations, and the topic could keep being pivoted to focus on important
issues like this as they come up.  For example, one could start a new topic
with the goal of building and tracking consensus around the idea that it is
impossible to stop the commercialization of AI, and once you demonstrated
real consensus with convincing arguments, use that to influence this topic
and push that doctrine higher in the consensus tree structure.


> 2) Commercialization of AI and AI being an existential threat are not
> necessarily opposing beliefs.  It is entirely possible that the best, or
> only realistic, way to deal with what existential threats AI brings is via
> commercializing it: letting it get out into the hands of the masses so the
> general public figures out how to deal with it before it could end
> humanity, thereby preventing it from ending humanity.  As I understand it,
> your system relies on the inherent assumption that camps are mutually
> opposed.
>

The guideline is no similar doctrines in competing sibling camps.  If there
is anything two sibling camps agree on, instead of it being duplicated, it
should be pushed up to a super camp, leaving only the disputed doctrine in
the supporting sub camps.  And again, if issues like this come up, people
can morph the structure to bring them up and push them up to higher-level,
more widely supported camps.
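
To make that guideline concrete, here is a minimal sketch in Python (purely
my own illustration; the Camp class and lift_shared_doctrines function are
invented for this example and are not Canonizer's actual data model or
code): anything every competing sibling camp agrees on gets lifted into
their super camp, so only the points of disagreement remain below.

    # Illustrative sketch only, not Canonizer's real data model or API.
    from dataclasses import dataclass, field

    @dataclass
    class Camp:
        name: str
        doctrines: set[str] = field(default_factory=set)
        children: list["Camp"] = field(default_factory=list)

    def lift_shared_doctrines(camp: Camp) -> None:
        """Recursively move any doctrine shared by all sibling camps up one level."""
        for child in camp.children:
            lift_shared_doctrines(child)
        if camp.children:
            shared = set.intersection(*(child.doctrines for child in camp.children))
            for child in camp.children:
                child.doctrines -= shared   # remove the duplication from the sub camps
            camp.doctrines |= shared        # the agreement now lives in the super camp

    # Example using the camps from this thread: both siblings agree that AI
    # will surpass current humans, so that doctrine ends up in the root camp.
    root = Camp("Agreement", children=[
        Camp("Such Concern Is Mistaken",
             {"AI will surpass current humans", "AI should be commercialized"}),
        Camp("Friendly AI is Sensible",
             {"AI will surpass current humans", "AI poses an existential threat"}),
    ])
    lift_shared_doctrines(root)
    print(root.doctrines)  # {'AI will surpass current humans'}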


> 3) I don't know if this is the case, but if you do the name change, will
> all the statements and supporters automatically be switched to the new
> names even if said statements and supporters might be
> irrelevant/indifferent, or even in opposition, to their newly assigned
> camp, with that supporter or the person who made that statement not
> bothering to come back onto the platform to update and correct it?  If this
> is the case, this would be a third reason - though presumably easily dealt
> with just by closing the old debate and opening a new one.
>

One of the benefits of Canonizer is the ability to significantly morph and
change existing large topic communities toward new issues that come up, or
new ways to deal with similar issues, without having to start all over from
scratch with zero support.  If no supporter of a camp objects to a proposed
camp change within the 24-hour review period, unanimous support is assumed.
If something slips by a supporter, then once they realize they missed it,
they can go back and request that the doctrine (or name change) they
disagree with be pushed down to a lower-level camp, so they don't need to
support it.  Similarly, if someone discovers that the agreement statement
is expressed in a way that is biased towards one camp or another, they can
request that it be fixed, since there is no way for anyone to show they
disagree with anything in the single root camp.  The goal is to find out
what everyone believes, so everyone should do everything possible to make
that happen, and nobody is frustrated or censored.
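
Here is a rough sketch of that review rule (again just an illustration with
invented names, not Canonizer's actual implementation): a proposed change
counts as unanimously supported once the 24-hour window closes with no
supporter objecting.

    # Illustrative sketch only; function name and parameters are invented here.
    from datetime import datetime, timedelta, timezone

    REVIEW_WINDOW = timedelta(hours=24)

    def change_is_approved(submitted_at: datetime, objections: list[str],
                           now: datetime | None = None) -> bool:
        """A proposed camp change counts as unanimously supported once the
        24-hour review window has closed with zero supporter objections."""
        now = now or datetime.now(timezone.utc)
        window_closed = now - submitted_at >= REVIEW_WINDOW
        return window_closed and not objections

    # A change submitted 25 hours ago with no objections is assumed unanimous:
    submitted = datetime.now(timezone.utc) - timedelta(hours=25)
    print(change_is_approved(submitted, objections=[]))         # True
    print(change_is_approved(submitted, objections=["Brent"]))  # False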



> On Mon, Nov 20, 2023 at 10:49 AM Brent Allsop via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>>
>> I believe it is critically important that we find a way for our morals to
>> keep up with our technology, especially on existential issues like General
>> AI.  If we continue down this kind of polarizing, hierarchical, win/lose,
>> survival-of-the-fittest, fight-to-the-death path, it could have grave
>> consequences for us all.  All the wars and the destructive polarization of
>> society are ever clearer proof of this importance.
>>
>> We simply need to convert from a win/lose, survival-of-the-fittest war to
>> the death between hierarchies, where moral truth is determined via edict
>> (if you aren't with our hierarchy, you are against us), to a bottom-up,
>> win/win building and tracking of moral consensus, with a focus on what
>> everyone agrees on.  We need to base our morals on building and tracking
>> scientific moral consensus: a moral truth derived from bottom-up,
>> grassroots, experimental demonstration and rational argument rather than
>> hierarchical edict.
>>
>> There is already an existing topic on "The Importance of Friendly AI
>> <https://canonizer.com/topic/16-Friendly-AI-Importance/1-Agreement>".
>> There is a unanimous super camp where everyone agrees that AI will "Surpass
>> Current Humans
>> <https://canonizer.com/topic/16-Friendly-AI-Importance/8-Will-Surpass-current-humans>"
>> and the "Such Concern is Mistaken
>> <https://canonizer.com/topic/16-Friendly-AI-Importance/3-Such-Concern-Is-Mistaken>"
>> with 12 supporters continues to extend its lead over the "Friendly AI is
>> Sensible
>> <https://canonizer.com/topic/16-Friendly-AI-Importance/9-FriendlyAIisSensible>"
>> camp currently with half as many supporters.
>>
>> To me, this topic is too vague, not centering on any specific actions.  So
>> I propose the following name changes to pivot the topic toward being more
>> specific about the actions that need to be taken.
>>
>> Old:                          New:
>>
>> Friendly AI Importance        Should AI be Commercialized?      <- Topic Name
>>
>> Such Concern Is Mistaken      AI should be commercialized.
>>
>> Friendly AI is Sensible       AI Poses an Existential Threat.
>>
>>
>> I would love to hear anyone's thoughts, especially if you are a supporter
>> of any of the camps with proposed name changes and you object to this
>> proposal.  And of course, the more people who communicate their current
>> moral beliefs (whether experienced and educated or not), the better.  That
>> which you measure, improves.
>>
>> On Mon, Nov 20, 2023 at 11:05 AM Keith Henson via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>> On Mon, Nov 20, 2023 at 2:13 AM efc--- via extropy-chat
>>> <extropy-chat at lists.extropy.org> wrote:
>>> >
>>> > Based on the gossip I've seen and read, I think it is due to Sam
>>> > wanting to accelerate and earn money, and the board wanting to
>>> > decelerate to choose a more cautious approach.
>>> >
>>> > But who knows? ;)
>>>
>>> By historical and Supreme Court standards, this would be malfeasance
>>> by the board, opening them to stockholder lawsuits.
>>>
>>> I don't think it makes much difference. The advances in AI are way out
>>> of control.
>>>
>>> Keith
>>>
>>> > Best regards,
>>> > Daniel
>>> >
>>> >
>>> > On Sat, 18 Nov 2023, spike jones via extropy-chat wrote:
>>> >
>>> > >
>>> > > https://www.theverge.com/2023/11/17/23965982/openai-ceo-sam-altman-fired
>>> > >
>>> > > WOWsers.
>>> > >
>>> > > I am told Altman is a talented guy, as is Brockman.  We don’t know
>>> > > what went on there, but watch for both to team up with Musk and
>>> > > Thiel, start a competitor company that will blow OpenAI’s artificial
>>> > > socks off.
>>> > >
>>> > > As I wrote that sentence, it occurred to me that what happened today
>>> > > is Eliezer Yudkowsky’s nightmare scenario.
>>> > >
>>> > > spike