The abstracts posted on that page are not the full technical abstracts, but seem to be short versions of them. As far as I know, there is *a lot* more detail in the actual project proposals.

From: J. Andrew Rogers <andrew@jarbox.org>

> Core computer science is advancing rapidly but little of it is occurring in academia or is being published. There have been many large discontinuities in computer science over the last several years evidenced by their effects that have largely gone unnoticed because it was not formally published.

Would you be so kind as to mention some of these?

> Owain Evans
>
> “Our project seeks to develop algorithms that learn human preferences from data despite humans not being homo-economicus and despite the influence of non-rational impulses. We will test our algorithms on real-world data and compare their inferences to people’s own judgments about their preferences."
>
> This has been done at spectacular scales for a couple years now. No assumptions about individual human decision processes are made at all, each person is the sum of their observed behaviors learned over long periods of time in various contexts. Contextual values and preferences are derived from that, both individually and in aggregate.

Any particular references for this?

Full disclosure: Owain is actually my flatmate *and* colleague, and we have discussed this project at some length. What he is actually planning to do with the Stanford team seems to be rather different from current recommender and preference inference systems (yes, there has been a fair bit of literature and tech review involved in writing the grant). While there are certainly behavioural economics models out there, I have not seen any generative modelling.
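Since "generative modelling" is doing a lot of work in that sentence, here is a toy sketch of the idea, entirely my own illustration rather than Owain's actual design (the snack domain, feature values, and all numbers are made up): posit an explicit model of a boundedly rational chooser, softmax over utilities with a rationality parameter beta standing in for non-rational impulses, and invert it with Bayes' rule to recover preference weights from observed choices, rather than treating the behaviour statistics themselves as the preferences.

```python
# Toy sketch of generative preference inference (my illustration only).
# A "noisily rational" agent picks option i with probability proportional
# to exp(beta * utility_i); we invert that model to infer utility weights.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical snack domain: each option described by two features.
options = np.array([[1.0, 0.0],   # candy bar: tasty, not healthy
                    [0.5, 0.5],   # granola bar: a bit of both
                    [0.0, 1.0]])  # apple: healthy, not tasty

def choice_probs(weights, beta):
    """Softmax ("noisily rational") choice probabilities."""
    u = options @ weights             # utility of each option
    e = np.exp(beta * (u - u.max()))  # subtract max for numerical stability
    return e / e.sum()

# Simulate an agent who values health but often acts on impulse: beta is
# low, so choices only weakly track utility (the non-homo-economicus part).
true_w, true_beta = np.array([0.2, 1.0]), 2.0
data = rng.choice(len(options), size=200, p=choice_probs(true_w, true_beta))

# Invert the generative model: posterior over preference weights on a coarse
# grid, marginalising over beta, with flat priors throughout. (Utilities are
# only identified up to scale against beta; a toy grid glosses over that.)
w_grid = [(wt, wh) for wt in np.linspace(0, 2, 21)
                   for wh in np.linspace(0, 2, 21)]
betas = np.linspace(0.5, 5.0, 10)
post = np.array([
    sum(np.prod(choice_probs(np.array(w), b)[data]) for b in betas)
    for w in w_grid
])
post /= post.sum()

w_tasty, w_healthy = max(zip(post, w_grid))[1]
print(f"MAP preference weights: tasty={w_tasty:.2f}, healthy={w_healthy:.2f}")
```

The separation of beta from the weights is the crux: the same choice data is explained as noisy execution of a stable underlying preference rather than as the preference itself, which is roughly the contrast with the sum-of-observed-behaviours approach described above.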
> Doing these types of studies in a way that produces robust and valid results is beyond non-trivial and highly unlikely to be achieved by someone who is not already an expert at real-world behavioral induction, which unfortunately is the case here.

Hmm, just checking: are you an expert on judging the expertise of the different teams? How well do you know their expertise areas? The sentence

> The absence of people doing relevant advanced computer science R&D in the list is going to produce some giant blind spots in the aggregate output.

seems to suggest that you do not know the CVs of the teams very well.

It wouldn't surprise me if the majority of funded projects are duds. Most science is. But the aim is a bit subtle: to actually kickstart the field of beneficial AI, and that involves meshing several disciplines and luring in more standard research too; there is a fair bit of related stuff in other research programs that is not visible from the list. In the end, the real success will be if it triggers long-term research collaborations that can actually solve the bigger problems.

Anders Sandberg
Future of Humanity Institute
Philosophy Faculty of Oxford University