[ExI] Why so much published 'science' is wrong.

William Flynn Wallace foozler83 at gmail.com
Sat Jul 11 23:51:34 UTC 2015


Spike:  But they don’t really understand what the numbers are saying.  They
know how to get the numbers, but one quarter or one semester just isn’t
enough time to really help students understand what they calculated.

When I taught statistics (to psychology students) I found this to be right
on the mark.  I had them make guesses after looking at small amounts of raw
data before the calculation, then look back at the estimate afterwards.
What this does is make them realize what does and what doesn't make sense
(especially if they calculated incorrectly).  Numbers are docile.  They
cannot say that you are putting them into the wrong equations, that the
assumptions of the statistical tests you are using are not met, or that
(to reiterate someone else's point) while a one-point difference on a
ten-point attitude scale between two groups might yield statistically
significant results, it just doesn't amount to a hill of beans in reality.
My colleagues across the way in math taught students how to calculate and
how to derive the formulas, but they had no more clue than their students
about what kinds of data to use - or not - or how to gather the numbers.
They had no background in research design.  Their students may have
learned some math, but they did not learn how to use and abuse statistics.
In the statistics courses I took there was virtually no consideration of
beta (Type II) errors - only alpha.  And of course, at a fixed sample
size, the two trade off inversely: lowering one raises the other.
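
As a concrete illustration, here is a quick sketch of my own in Python
with numpy/scipy (the sample sizes, means, and standard deviations are
invented for the example, not taken from any real study).  It shows both
points: with enough subjects, a one-point difference on a ten-point scale
comes out overwhelmingly "significant" even though the effect is modest,
and at a fixed sample size a stricter alpha buys you a larger beta.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Part 1: statistical vs. practical significance.  Two groups one point
# apart on a 10-point attitude scale (the sd of 2.5 is an assumed value).
n = 2000
group_a = rng.normal(5.0, 2.5, n)
group_b = rng.normal(6.0, 2.5, n)

t, p = stats.ttest_ind(group_a, group_b)
d = (group_b.mean() - group_a.mean()) / np.sqrt(
    (group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
print(f"p = {p:.1e}, Cohen's d = {d:.2f}")   # p is minuscule; d is ~0.4
# Whether 0.4 standard deviations matters on this scale is a substantive
# question that no p-value can answer.

# Part 2: the alpha/beta trade-off.  At a fixed n, demanding a stricter
# alpha means missing more real effects (higher beta); estimated here by
# simulation, with an assumed true effect of 0.5 sd.
def power(alpha, n=30, effect=0.5, reps=2000):
    hits = sum(stats.ttest_ind(rng.normal(0, 1, n),
                               rng.normal(effect, 1, n)).pvalue < alpha
               for _ in range(reps))
    return hits / reps

for a in (0.05, 0.01):
    pw = power(a)
    print(f"alpha = {a}: power ~ {pw:.2f}, beta ~ {1 - pw:.2f}")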

I estimate that even in our very best peer-reviewed psychology journals,
about 75% of the published studies are worthless: unrepeatable, or marred
by improper statistics, design, subject assignment, error control, and so
on.  It's a very old story - no one is interested in doing a simple repeat
of an experiment, even after getting astonishing results.  I got some of
those while doing my Master's work, and my professor made me do the study
ten times before he was convinced that it was reliable (not necessarily
valid, just reliable).
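
To put a number on why that skepticism is warranted, here is a toy
simulation of my own (again Python with numpy/scipy; none of it comes
from the actual studies): when there is truly no effect at all, about one
study in twenty still comes out "significant" at the 0.05 level, and a
single repeat exposes nearly all of those false alarms.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, alpha, studies = 30, 0.05, 10_000

def significant():
    # Both groups are drawn from the SAME population, so any "effect"
    # that shows up is pure sampling noise.
    a = rng.normal(0, 1, n)
    b = rng.normal(0, 1, n)
    return stats.ttest_ind(a, b).pvalue < alpha

first_try = np.array([significant() for _ in range(studies)])
print(f"significant on the first try: {first_try.mean():.3f}")   # ~0.05

# Now repeat only the "astonishing" hits, as a skeptical professor
# would insist.
repeats = np.array([significant() for _ in range(int(first_try.sum()))])
print(f"of those, significant again: {repeats.mean():.3f}")      # ~0.05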

No, people want to start throwing new variables into the mix before they
understand the ones they started with.  I'd better quit.  Bill W

On Sat, Jul 11, 2015 at 5:51 PM, spike <spike66 at att.net> wrote:

>
> From: extropy-chat [mailto:extropy-chat-bounces at lists.extropy.org]
> On Behalf Of Anders Sandberg
> Sent: Saturday, July 11, 2015 11:13 AM
> To: ExI chat list
> Subject: Re: [ExI] Why so much published 'science' is wrong.
>
> From: BillK <pharos at gmail.com>
>
> Science is heroic, with a tragic (statistical) flaw
> Mindless use of statistical testing erodes confidence in research
>
> >…I am reading Alex Reinhart's "Statistics Done Wrong: The Woefully
> Complete Guide" ( http://www.statisticsdonewrong.com/ ) and enjoying it.
> …But if science has it rough, at least it is rather numerically literate
> and knows it has a problem.  After three years of talking to insurance I
> realize that a lot of important business is made on far dodgier
> assumptions.  Anders Sandberg
>
> Ja.  I see this primarily as a failure in the way statistics courses are
> taught.  The textbooks may contain good explanations for the various
> calculations, but if the instructor has ten weeks to cover the topic,
> that little bit of time is spent teaching the students how to calculate
> the parameters.  OK then, exam time: the students grind away, calculate
> means, standard deviations, variances, do a chi-square test, a
> Kruskal-Wallis, a factorial ANOVA, identify a few distributions, they run
> all the tests and get the numbers, hooray they pass.  But they don’t
> really understand what the numbers are saying.  They know how to get the
> numbers, but one quarter or one semester just isn’t enough time to really
> help students understand what they calculated.
>
> Before the course, the students look at the data and take their best
> guess.  After the course, they look at the data, calculate some numbers,
> draw the wrong conclusion, and walk away 95% confident they are correct.
> Then they go get jobs.
>
> I can think of a better way.  Instead of the usual approach, have the
> engineering students take two quarters of calculus (where you use
> Wolfram’s magic act rather than learning twenty different ways to
> integrate the kinds of functions you never seem to get in real life) and
> three quarters of statistics, where the students use Wolfram again
> (rather than spending the time on how to calculate all those parameters)
> and spend their time figuring out how to interpret what the computer gave
> them.
>
> I did some searching and found that this whole discussion on reducing
> calculus education has veered off in the wrong direction by about a
> radian.  A faction took off with the idea of calculus reduction but
> presented a weird justification: women and minority students are less
> likely to pass the calculus series (sheesh), and of course you can’t go
> on in the sciences without it, so that means fewer women and minorities
> in the sciences, all because of calculus.  My notion is that the reason
> for calculus reduction would be to make room for more statistics study.
> Perhaps the minority argument derailed the notion of statistical
> education, or sent it down the wrong road, damn.
>
> spike