[ExI] reuter's take on altman

William Flynn Wallace foozler83 at gmail.com
Fri Nov 24 22:26:12 UTC 2023


When they start installing motivation in the AIs, my suggestion is
that the AIs be motivated to seek the good opinion of humans and other
AIs alike.  keith

Which humans will be a gigantic problem.  bill w
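
As a purely hypothetical sketch (none of it from this thread), here is one way Keith's "good opinion of humans and other AIs" suggestion could be framed as a reward signal. The names, weights, and rater pool below are invented for illustration, and bill w's "which humans" objection shows up as the unresolved question of who belongs in the rater list:

from dataclasses import dataclass

@dataclass
class Rater:
    kind: str        # hypothetical label: "human" or "ai"
    approval: float  # approval score in [-1.0, 1.0]

def reputation_reward(raters, human_weight=0.5):
    """Average human and AI approval separately, then blend into one reward."""
    humans = [r.approval for r in raters if r.kind == "human"]
    ais = [r.approval for r in raters if r.kind == "ai"]
    human_term = sum(humans) / len(humans) if humans else 0.0
    ai_term = sum(ais) / len(ais) if ais else 0.0
    return human_weight * human_term + (1.0 - human_weight) * ai_term

# Example: two approving humans, one neutral peer AI -> 0.35
print(reputation_reward([Rater("human", 0.8), Rater("human", 0.6), Rater("ai", 0.0)]))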

On Fri, Nov 24, 2023 at 1:15 PM Keith Henson via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

> On Fri, Nov 24, 2023 at 9:21 AM <spike at rainier66.com> wrote:
> >
> > -----Original Message-----
> > From: spike at rainier66.com <spike at rainier66.com>
> > Subject: RE: [ExI] reuter's take on altman
> >
> >
> >
> > -----Original Message-----
> > From: Keith Henson <hkeithhenson at gmail.com>
> > Subject: Re: [ExI] reuter's take on altman
> >
> > >...We are on the run-up to the singularity.  What did we expect?
> >
> > Keith
> >
> > Keith, this was temporarily posted on Twitter, an open letter from former
> employees of OpenAI:
> >
> > To the Board of Directors of OpenAI:
> >
> > We are writing to you today to express our deep concern about the recent
> events at OpenAI, particularly the allegations of misconduct against Sam
> Altman.
>
> Whatever the endpoint is, and given the lack of control over what
> happens, I don't think it makes any substantial difference whether
> things go fast or slow.
>
> This attitude can be called fatalistic or lazy, but at this stage I
> don't think much else is possible.  Some years ago I might have
> nudged AI development in a safer direction by pouring cold water on
> the idea of human brain emulation.  (Humans have poorly understood
> evolved psychological traits you don't want in an AI.)
>
> When they start installing motivation in the AIs, my suggestion is
> that the AIs be motivated to seek the good opinion of humans and other
> AIs alike.
>
> I could, of course, be wrong, and AIs may turn out to be a disaster
> for the human race.  But again, I don't think sooner or later will
> make a lot of difference.
>
> Keith
>
> > We are former OpenAI employees who left the company during a period of
> significant turmoil and upheaval. As you have now witnessed what happens
> when you dare stand up to Sam Altman, perhaps you can understand why so
> many of us have remained silent for fear of repercussions. We can no longer
> stand by silently.
> >
> > We believe that the Board of Directors has a duty to investigate these
> allegations thoroughly and take appropriate action. We urge you to:
> >
> > •       Expand the scope of Emmett’s investigation to include an
> examination of Sam Altman’s actions since August 2018, when OpenAI began
> transitioning from a non-profit to a for-profit entity.
> > •       Issue an open call for private statements from former OpenAI
> employees who resigned, were placed on medical leave, or were terminated
> during this period.
> > •       Protect the identities of those who come forward to ensure that
> they are not subjected to retaliation or other forms of harm.
> >
> > We believe that a significant number of OpenAI employees were pushed out
> of the company to facilitate its transition to a for-profit model. This is
> evidenced by the fact that OpenAI’s employee attrition rate between January
> 2018 and July 2020 was on the order of 50%.
> >
> > Throughout our time at OpenAI, we witnessed a disturbing pattern of
> deceit and manipulation by Sam Altman and Greg Brockman, driven by their
> insatiable pursuit of artificial general intelligence (AGI).
> Their methods, however, have raised serious doubts about their true
> intentions and the extent to which they genuinely prioritize the benefit of
> all humanity.
> >
> > Many of us, initially hopeful about OpenAI’s mission, chose to give Sam
> and Greg the benefit of the doubt. However, as their actions became
> increasingly concerning, those who dared to voice their concerns were
> silenced or pushed out. This systematic silencing of dissent created an
> environment of fear and intimidation, effectively stifling any meaningful
> discussion about the ethical implications of OpenAI’s work.
> >
> > We provide concrete examples of Sam and Greg’s dishonesty and manipulation,
> including:
> >
> > •       Sam’s demand for researchers to delay reporting progress on
> specific “secret” research initiatives, which were later dismantled for
> failing to deliver sufficient results quickly enough. Those who questioned
> this practice were dismissed as “bad culture fits” and even terminated,
> some just before Thanksgiving 2019.
> >
> > •       Greg’s use of discriminatory language against a
> gender-transitioning team member. Despite many promises to address this
> issue, no meaningful action was taken, except for Greg simply avoiding all
> communication with the affected individual, effectively creating a hostile
> work environment. This team member was eventually terminated for alleged
> under-performance.
> >
> > •       Sam directing IT and Operations staff to conduct investigations
> into employees, including Ilya, without the knowledge or consent of
> management.
> >
> > •       Sam’s discreet yet routine exploitation of OpenAI’s non-profit
> resources to advance his personal goals, particularly motivated by his
> grudge against Elon following their falling out.
> >
> > •       The Operations team’s tacit acceptance of the special rules that
> applied to Greg, with staff navigating intricate requirements to avoid
> being blacklisted.
> >
> > •       Brad Lightcap’s unfulfilled promise to make public the documents
> detailing OpenAI’s capped-profit structure and the profit cap for each
> investor.
> >
> > •       Sam’s inconsistent promises of compute quotas to research
> projects, causing internal distrust and infighting.
> >
> > Despite the mounting evidence of Sam and Greg’s transgressions, those
> who remain at OpenAI continue to blindly follow their leadership, even at
> significant personal cost. This unwavering loyalty stems from a combination
> of fear of retribution and the allure of potential financial gains through
> OpenAI’s profit participation units.
> >
> > The governance structure of OpenAI, specifically designed by Sam and
> Greg, deliberately isolates employees from overseeing the for-profit
> operations, precisely due to their inherent conflicts of interest. This
> opaque structure enables Sam and Greg to operate with impunity, shielded
> from accountability.
> >
> > We urge the Board of Directors of OpenAI to take a firm stand against
> these unethical practices and launch an independent investigation into Sam
> and Greg’s conduct. We believe that OpenAI’s mission is too important to be
> compromised by the personal agendas of a few individuals.
> >
> > We implore you, the Board of Directors, to remain steadfast in your
> commitment to OpenAI’s original mission and not succumb to the pressures of
> profit-driven interests. The future of artificial intelligence and the
> well-being of humanity depend on your unwavering commitment to ethical
> leadership and transparency.
> >
> > Sincerely,
> > Concerned Former OpenAI Employees
> >
>
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>