[ExI] Can philosophers produce scientific knowledge?

Adrian Tymes atymes at gmail.com
Thu May 6 18:25:54 UTC 2021


> I'm so close to all this stuff, I don't realize all these issues you are
pointing out.

Yeah, that's typically how it goes.  (I say, having run into this more than
once in my own head.)

> You simply achieve an ability to do something like implant

I suggest reducing that to, "You implant".  The modifiers between those
words don't help convey your meaning.

> then you stimulate their neuralink

I suggest "then you stimulate their neural link" or just "then you
stimulate their link", since Neuralink is a potential party to what you're
proposing, and Neuralink the corporation is not itself the neural link.

> then say something like that different color you are now experiencing on
that white screen is what most people would call 'redness'.  Some people
might say: "Oh wow, no, that is my greenness." or maybe: "Oh wow, I've
never experienced that color before in my life....

Okay!  That is certainly an experiment that Neuralink could in theory
perform: stimulate a neuron in a person and change a pixel on the white
screen in front of the person, and ask them if they are perceiving redness
or if they are perceiving something else.

Note that "stimulate the neuron" and "change the pixel" would be
unconnected unless there is some control mechanism from the neuron back to
the screen.  You may want to be clear about that.  If there is such a
control mechanism, it would need to be set up and calibrated before the
stimulation.
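
To make the control-mechanism point concrete, here is a minimal sketch in
Python of one trial.  Every name here (LinkDevice, Screen, run_trial) is
invented for illustration - this is not a real Neuralink API - but it shows
the structural requirement: the pixel changes only because an explicit
control path connects the stimulation event to the screen, and that path
has to exist and be calibrated before the trial runs.

```python
class LinkDevice:
    """Stand-in for an implanted neural interface (hypothetical)."""
    def __init__(self):
        self.log = []                    # record of stimulation events

    def stimulate(self, neuron_id):
        self.log.append(neuron_id)       # step 1: stimulate the target neuron


class Screen:
    """Stand-in for the white screen in front of the subject."""
    def __init__(self, width, height):
        # baseline: every pixel starts white
        self.pixels = {(x, y): "white"
                       for x in range(width) for y in range(height)}

    def set_pixel(self, xy, color):
        self.pixels[xy] = color          # the control path acts here


def run_trial(device, screen, neuron_id, pixel, color, ask):
    """One trial: stimulate a neuron, flip the linked pixel, ask for a report."""
    device.stimulate(neuron_id)          # stimulation event
    screen.set_pixel(pixel, color)       # linked pixel change (the control path)
    return ask("What color are you experiencing at that pixel?")


device, screen = LinkDevice(), Screen(4, 4)
report = run_trial(device, screen, neuron_id=17, pixel=(2, 2),
                   color="red", ask=lambda question: "redness")
```

The `ask` callback stands in for the subject's verbal report ("that is my
redness" / "that is my greenness" / "I've never experienced that color"),
which is the actual datum the experiment collects.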

> Of course, eventually,  neuralink wants to be able to completely recreate
visual knowledge in people's brains, right?  Both at higher resolutions,
and using large groups of new colors nobody has ever experienced before
(i.e. making dichromatic color-blind people tetrachromats, or better), and
giving people visual knowledge of what is behind them (or on mars...)...
Exactly the above is going to be the critical first step before anything
like that is possible.

You could point this out to Neuralink when proposing the experiment.

So, once you have a description of the experiment that you would like
Neuralink to perform, and the money for them to do so, you can call them
and ask if they would be willing to conduct and publish the results of the
experiment in exchange for funding.

If they say no, then you might look around for universities with good
neuroscience programs - or you might try universities first, since this
sort of funded research is exactly what they do (more often for governments
or large institutions, since those more often have money for it, but your
money spends as well as theirs).  For instance, Stanford University has a
Neurology department that I believe has done similar research before; see
https://med.stanford.edu/artificial-retina/team.html for some people you
might contact.  Even if they are not set up to actually run the experiment,
professors often love to talk about their field of research.  If you
emphasize that you want help designing an experiment that you could fund,
they will probably be quite willing to give advice on how such an
experiment would work and what to look out for, such as the specific
legal barriers designed to make sure such research is conducted in an
ethical manner.

Note the phrase "publish the results": think through your desired form of
output, and make sure it is agreed to before you provide funding.  Again I
speak from experience here, though in my case on a nanotech investigation.
At a minimum, you want access to the results yourself.  You may also want
the data made publicly accessible (so that you, or anyone, could access it
as a member of the public) - to the extent you can: medical investigations
are subject to all sorts of privacy laws restricting what can be published.

On Thu, May 6, 2021 at 10:17 AM Brent Allsop via extropy-chat <
extropy-chat at lists.extropy.org> wrote:

>
> Adrian, thanks for the help!  I'm so close to all this stuff, I don't
> realize all these issues you are pointing out.  Does the following help at
> all?
>
> You simply achieve an ability to do something like implant a neural ink
> into someone, wired in such a way, that when you put someone in front of a
> white screen, then you stimulate their neuralink, it causes at least one
> pixel of the white screen to change color, then say something like that
> different color you are now experiencing on that white screen is what most
> people would call 'redness'.  Some people might say: "Oh wow, no, that is
> my greenness." or maybe: "Oh wow, I've never experienced that color before
> in my life....
>
> Of course, it would be more complex, and have many subtle
> shade differences, and so on, but that is the general idea.  In other
> words, if we know which of all our descriptions of stuff in the brain is a
> description of redness, then when you reproduce that same thing in other
> brains, they can then directly apprehend what it is like for glutamate to
> react in the correct set of synapses (or whatever it is that is redness).
>
> Also, you could measure whether you had achieved success (at which time
> the prize would be awarded) by having one of the sub camps of RQT achieve
> greater than 90% "mind expert"
> <https://canonizer.com/topic/81-Mind-Experts/1> consensus, and greater
> than 1000 experts weighing in.
>
> Of course, eventually,  neuralink wants to be able to completely recreate
> visual knowledge in people's brains, right?  Both at higher resolutions,
> and using large groups of new colors nobody has ever experienced before
> (i.e. making dichromatic color-blind people tetrachromats, or better), and
> giving people visual knowledge of what is behind them (or on mars...)...
> Exactly the above is going to be the critical first step before anything
> like that is possible.
>
>
> On Thu, May 6, 2021 at 10:48 AM Adrian Tymes via extropy-chat <
> extropy-chat at lists.extropy.org> wrote:
>
>> The first step is to define your terms in ways that Neuralink et al can
>> work with.  In other words: stop using jargon.
>>
>> What exactly is "qualia"?  What exactly is "redness"?  These are not
>> terms that Neuralink or other such researchers can define experiments
>> around, since the definitions are - at best - loose.
>>
>> Define what you are looking for, using only words that you can find in
>> commonly accepted dictionaries.  For instance, instead of "qualia" you
>> might use "perceived sensation", if that 100% captures what you are looking
>> to measure here.  Neuralink might be able to measure the neurological
>> underpinnings of sensation.
>>
>> Is "redness" "the sensation of seeing light of roughly 700 nanometer
>> wavelength"?  If not, what is it?  Remember that "red" is "light of roughly
>> 700 nanometer wavelength" (red is a color of light, and that is where red
>> falls on the spectrum), so "the sensation of seeing red", which seems to be
>> what you mean, is by definition "the sensation of seeing light of roughly
>> 700 nanometer wavelength".
>>
>> The problem of jargon isn't specific to you.  Jargon is a problem in many
>> scientific fields.  People inside a field get used to using such shorthand;
>> then, when they try to relate their concepts to related fields that might
>> offer insight, the shorthand becomes a barrier to communication - those in
>> the related fields don't know it, and are often too polite or too
>> uninterested to point out that this is why they do not understand what is
>> being asked of them.  That happens even when the shorthand is well-defined,
>> and in this case I'm not entirely certain it is.  I have found that the
>> best solution, when talking in contexts where the shorthand might not
>> already be understood, is to swap in equivalent terms that the audience
>> does understand (which also helps me make sure that my jargon is
>> well-defined).
>>
>> On Thu, May 6, 2021 at 9:06 AM Brent Allsop via extropy-chat <
>> extropy-chat at lists.extropy.org> wrote:
>>
>>>
>>> For example, can anyone give me any examples of ANY peer-reviewed
>>> "philosophy of mind" claims which are falsifiable?
>>> That is, other than what we are describing in our "Consciousness: Not a
>>> hard problem, just a color problem
>>> <https://canonizer.com/videos/consciousness/>".
>>> Basically, all the supporters of "Representational Qualia theory", and
>>> all sub camps, are predicting that if experimentalists can discover and
>>> demonstrate which of all our descriptions of stuff in the brain is a
>>> description of redness, only one camp can remain standing, only the one
>>> making the correct prediction about the nature of qualia, all others being
>>> falsified by such a demonstration.  Stathis, even functionalists must agree
>>> with this, right?  In other words, if someone could demonstrate that nobody
>>> could ever experience redness if, and only if that redness was glutamate
>>> reacting in the correct set of computationally bound synapses, and that if
>>> no neuro substitution of any kind, or anything else, could produce even a
>>> pixel of conscious redness experience...
>>>
>>> In other words, what we have is theoretical physical science, each
>>> competing camp describing the experiments required to falsify the camps
>>> they support.  Doing the actual experiments is now up to the
>>> experimentalists, right?
>>>
>>> With my Ether earnings, I could now afford to fund some significant
>>> experimental research to discover this.  Does anyone have any idea of how I
>>> might go about funding such experimental work?  Maybe we could help fund
>>> some of the work going on at Neuralink or something, along this direction?
>>> Elon once was involved in this list, right?  Any idea how I could propose
>>> putting a few $ million towards something like this to Neuralink, or any
>>> other neuroscience experimental institutions?
>>>
>>> On Thu, May 6, 2021 at 9:32 AM William Flynn Wallace via extropy-chat <
>>> extropy-chat at lists.extropy.org> wrote:
>>>
>>>> What I don't get out of that quote by Gillis is whether the
>>>> philosophers proceed to do the actual research their proposal suggests.
>>>>  bill w
>>>>
>>>> On Thu, May 6, 2021 at 10:26 AM Brent Allsop via extropy-chat <
>>>> extropy-chat at lists.extropy.org> wrote:
>>>>
>>>>>
>>>>> I've always considered the difference between scientific and
>>>>> philosophical claims to be experimental falsifiability.
>>>>> Is that not right?
>>>>>
>>>>> On Wed, May 5, 2021 at 10:30 AM Dan TheBookMan via extropy-chat <
>>>>> extropy-chat at lists.extropy.org> wrote:
>>>>>
>>>>>>
>>>>>> http://philsci-archive.pitt.edu/18972/1/Pradeu-Lemoine-Khelfaoui-Gingras_Philosophy%20in%20Science_Online%20version.pdf
>>>>>>
>>>>>> Abstract:
>>>>>>
>>>>>> Most philosophers of science do philosophy ‘on’ science. By contrast,
>>>>>> others do philosophy ‘in’ science (‘PinS’), i.e., they use philosophical
>>>>>> tools to address scientific problems and to provide scientifically useful
>>>>>> proposals. Here, we consider the evidence in favour of a trend of this
>>>>>> nature. We proceed in two stages. First, we identify relevant authors and
>>>>>> articles empirically with bibliometric tools, given that PinS would be
>>>>>> likely to infiltrate science and thus to be published in scientific
>>>>>> journals (‘intervention’), cited in scientific journals (‘visibility’) and
>>>>>> sometimes recognized as a scientific result by scientists (‘contribution’).
>>>>>> We show that many central figures in philosophy of science have been
>>>>>> involved in PinS, and that some philosophers have even ‘specialized’ in
>>>>>> this practice. Second, we propose a conceptual definition of PinS as a
>>>>>> process involving three conditions (raising a scientific problem, using
>>>>>> philosophical tools to address it, and making a scientific proposal), and
>>>>>> we ask whether the articles identified at the first stage fulfil all these
>>>>>> conditions. We show that PinS is a distinctive, quantitatively substantial
>>>>>> trend within philosophy of science, demonstrating the existence of a
>>>>>> methodological continuity from science to philosophy of science.
>>>>>> ——————
>>>>>> CHT William Gillis
>>>>>>
>>>>>> Haven’t finished the paper yet, but not really surprised.
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Dan
>>>>>> _______________________________________________
>>>>>> extropy-chat mailing list
>>>>>> extropy-chat at lists.extropy.org
>>>>>> http://lists.extropy.org/mailman/listinfo.cgi/extropy-chat
>>>>>>

