On 3 May 2011 21:00, Kelly Anderson <kellycoinguy@gmail.com> wrote:
<div class="im">> The two common values of all TX nodes are:<br>
> - Art and culture can contribute to transhumanism too (not just science &<br>
> engineering)<br>
> - Considered action is preferred over ungrounded philosophical conversations<br>
<br>
> Just out of curiosity, is there room for someone who knows computer
> science and cares deeply about mental health, particularly the mental
> health of AGIs?

Hey Kelly -

Short version: Yes.

Longer version: My own background is in cognitive psychology, so I'd be the first to want to define terms, warn against anthropomorphization, get to the heart of what you're interested in, and so on. That said, we will be (and to some extent already are) looking at various aspects of AI. The initial phase is about recognizing and assessing issues, and putting together written reports that give a sense of where our resources will be best applied, and how. So, at this stage, there is benefit in exploring a wide range of related issues, just to see where our investigations take us.
I can imagine this issue being relevant to the work of a team with a wider remit, or perhaps even as a theoretical research project in its own right, if you felt like leading such a thing...? I say "theoretical" only because we have no AIs with potential mental health issues (yet!), but the general intention is that projects like this may yield valuable insights and guidelines for practical projects to incorporate into their work.
I hope you don't mind, but I'm going to copy this post (no names) over to the private ZS Facebook group, so people can see the kinds of issues others may be interested in pursuing.

Best,
A