[ExI] Future of Humanity Institute at Oxford University £1 million grant for AI

J. Andrew Rogers andrew at jarbox.org
Fri Jul 3 19:23:38 UTC 2015


> On Jul 3, 2015, at 5:15 AM, BillK <pharos at gmail.com> wrote:
> 
> Elon Musk funds Oxford research into machine intelligence


The list of what was funded is interesting:

http://futureoflife.org/misc/2015awardees


Reading over the abstracts, it appears that selection bias in who submitted proposals has created a disconnect between the assumed state of the art and the actual state of the art. In at least a few cases, the abstracts suggest the researchers are insufficiently familiar with the domain they are nominally researching.

The following are two examples of what I mean. I do not mean to single anyone out in particular, nor do I care per se; these just happen to be areas I know (too) well.


Katja Grace (related to this work: http://aiimpacts.org/tepsbrainestimate/)

This project relates to estimating AI transition timelines based on improvements in computer science and hardware, to “aid in evaluating the probability of discontinuities in AI progress”. It mentions using traversed edges per second (TEPS), a supercomputing benchmark that actually is a good proxy for AI-like computational capability.
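For readers unfamiliar with the benchmark, here is a minimal sketch of how a TEPS figure is produced, assuming a Graph500-style breadth-first search. Everything below is illustrative, not the actual benchmark code; Graph500 proper counts the edges of the traversed component rather than edges examined, but the idea is the same:

import time
from collections import deque

def measure_teps(adjacency, root):
    # adjacency: dict mapping each vertex to a list of its neighbors
    visited = {root}
    frontier = deque([root])
    edges_traversed = 0
    start = time.perf_counter()
    while frontier:
        v = frontier.popleft()
        for w in adjacency[v]:
            edges_traversed += 1          # count every edge examined
            if w not in visited:
                visited.add(w)
                frontier.append(w)
    elapsed = time.perf_counter() - start
    return edges_traversed / elapsed      # traversed edges per second

# Toy example; real runs use graphs with billions of edges.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(measure_teps(graph, 0))

The reason TEPS is a better proxy than FLOPS is that it stresses irregular, memory-bound graph traversal rather than dense floating-point throughput.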

Ironically, the TEPS benchmark is itself *evidence* of a severe discontinuity, yet that goes unrecognized. Nothing in the literature will allow you to replicate the published TEPS performance. The algorithm family used, developed in 2008, is several orders of magnitude beyond the published state of the art, and only a few people know how it works. An independently developed algorithm family from 2009 is two orders of magnitude more efficient still than the mystery algorithm used in TEPS benchmarks, a discontinuity beyond *that* too, but there is virtually no evidence it exists unless you know what you are looking for, because its developers are not marketing supercomputers.

Core computer science is advancing rapidly, but little of that advance is occurring in academia or being published. There have been many large discontinuities in computer science over the last several years, evidenced by their effects, that have largely gone unnoticed because the work was never formally published. A model of AI transition timelines that is oblivious to the current rate of change in unpublished computer science, as evidenced by those effects, is going to be misleading.


Owain Evans

“Our project seeks to develop algorithms that learn human preferences from data despite humans not being homo-economicus and despite the influence of non-rational impulses. We will test our algorithms on real-world data and compare their inferences to people’s own judgments about their preferences.”

This has been done at spectacular scale for a couple of years now. No assumptions about individual human decision processes are made at all; each person is modeled as the sum of their observed behaviors, learned over long periods of time in varied contexts. Contextual values and preferences are derived from that, both individually and in aggregate. Doing these types of studies in a way that produces robust and valid results is beyond non-trivial, and highly unlikely to be achieved by someone who is not already an expert at real-world behavioral induction, which unfortunately appears to be the case here.
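As a rough illustration of what it means for a person to be "the sum of their observed behaviors", here is a minimal sketch, with all names hypothetical, that induces contextual preferences purely from logged (person, context, action) events and makes no rational-agent assumptions:

from collections import Counter, defaultdict

# log[person][context] is a Counter of that person's observed actions
log = defaultdict(lambda: defaultdict(Counter))

def observe(person, context, action):
    log[person][context][action] += 1

def preference(person, context, action):
    # Estimated preference: the share of a person's observed behavior
    # in a given context. Aggregate preferences just sum the Counters
    # across people.
    counts = log[person][context]
    total = sum(counts.values())
    return counts[action] / total if total else 0.0

observe("alice", "morning", "coffee")
observe("alice", "morning", "coffee")
observe("alice", "morning", "tea")
print(preference("alice", "morning", "coffee"))  # ~0.67

The hard part is not this arithmetic; it is doing the observation and induction at real-world scale in a way that yields robust and valid results.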



Some of the funded projects seem quite reasonable, but the list reflects either an overly limited pool of proposals to choose from (fishing in the wrong pond) or naivete on the part of the selectors about the state of some of these areas. The absence from the list of people doing relevant advanced computer science R&D is going to produce some giant blind spots in the aggregate output.


-jar






