[ExI] AI enhancing / replacing human abilities

Gadersd gadersd at gmail.com
Wed Apr 5 14:58:59 UTC 2023


> Perhaps successful algorithms just don't last long.

They can if kept secret. A good rule of thumb: if the fund or algorithm is public, it is no better than passive investing.

> I think my question was based on the assumption that the successful AI was available to everyone. What would happen then?

Then the market would self-correct and the AI would stop trading well: once everyone trades on the same predictions, prices move to absorb them and the edge disappears.
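
A toy sketch of that self-correction (purely illustrative, not a market model; the `average_edge` function and its parameters are my own invention): a signal predicts part of the next return, but the fraction of the market already trading on it moves the price first, leaving less of the predicted move to capture.

```python
import random

def average_edge(adoption, periods=100_000, seed=0):
    """Average per-period profit of a trader following a public signal.

    Toy model: `signal` is the predictable part of the next return.
    A fraction `adoption` of the market trades on the signal first,
    moving the price and leaving only (1 - adoption) of the predicted
    move for our trader to capture; `noise` is the unpredictable part.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(periods):
        signal = rng.gauss(0, 1)
        noise = rng.gauss(0, 1)
        position = 1 if signal > 0 else -1       # trade in the signal's direction
        realized = (1 - adoption) * signal + noise
        total += position * realized
    return total / periods

if __name__ == "__main__":
    for adoption in (0.0, 0.5, 0.9, 1.0):
        print(f"adoption={adoption:.1f}  avg edge per period={average_edge(adoption):+.3f}")
```

With the signal fully public (adoption near 1), the measured edge is statistically indistinguishable from zero, which is the "stops trading well" outcome in miniature.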

> On Apr 5, 2023, at 10:25 AM, William Flynn Wallace via extropy-chat <extropy-chat at lists.extropy.org> wrote:
> 
> I read of stock market people who have a great year, advertise that fact, get lots of new buyers, and then experience regression to the mean.
> 
> Perhaps successful algorithms just don't last long.
> 
> I think my question was based on the assumption that the successful AI was available to everyone. What would happen then?  bill w
> 
> On Tue, Apr 4, 2023 at 5:25 PM Gadersd via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>> What if an AI were developed that could accurately predict the 
>> stock market?
> 
> Already been done. Jim Simons is a math PhD turned quant who started a hedge fund specializing in algorithmic trading. He made a pile of money for himself and his clients and eventually closed the fund to outside investors, since the strategy can only scale to handle so much money at a time. In this case the fund became “secret” to preserve its profitability for the owner and his buddies.
> 
> Beating the stock market and being open are fundamentally mutually exclusive. More of one implies less of the other.
> 
> https://en.wikipedia.org/wiki/Renaissance_Technologies
> 
>> On Apr 4, 2023, at 6:07 PM, William Flynn Wallace via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>> 
>> What if an AI were developed that could accurately predict the 
>> stock market?  I suspect that buyers and sellers would intentionally make the predictions wrong if they were known.  If a person could make one but keep it a secret he would become very rich.  Or not?  bill w
>> 
>> On Tue, Apr 4, 2023 at 4:59 PM BillK via extropy-chat <extropy-chat at lists.extropy.org> wrote:
>> On Tue, 4 Apr 2023 at 21:56, Gadersd via extropy-chat
>> <extropy-chat at lists.extropy.org> wrote:
>> >
>> > I concur. In an adversarial environment it is almost never optimal from the perspective of one group to halt progress if the others cannot be prevented from continuing.
>> >
>> > The AI safety obsession is largely moot, as any malicious organization with significant capital can develop and deploy its own AI. AI safety can only prevent low-capital individuals from using AI maliciously, and only for a time, until the technology becomes cheap enough for anyone to develop powerful AI.
>> >
>> > I am not sure how much good delaying the point at which any individual can use AI for harm will do. We will have to face this reality eventually. Perhaps a case can be made for delaying individual AI-powered capability until we have the public safety mechanisms in place to deal with it.
>> >
>> > In any case, this only applies to individuals acting alone. China and others will have their way with AI.
>> 
>> 
>> Interesting thought: even a 'friendly' AI can still be used maliciously by its human owners.
>> 
>> In the past, the main worry was AI running amok and destroying
>> humanity. So the 'friendly' AI design was developed to try to ensure
>> that humanity would be safe from AI.
>> But how can we protect humanity from humanity?
>> 
>> Nations and corporations will be running the powerful AGI machines,
>> controlling economies and war machines.
>> Personal AI will probably have to be much less capable
>> in order to run on smartphones and laptops.
>> But there will be plenty to keep the population amused.  :)
>> 
>> BillK
>> 
> 
> _______________________________________________
> extropy-chat mailing list
> extropy-chat at lists.extropy.org
> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat


