Some subscribers to
MathEdCC might be interested in a recent discussion-list post
"Re: How Reliable Are the Social Sciences?" The abstract
Froman of the TIPS discussion list has pointed to a New York Times
Opinion Piece "How Reliable Are the Social Sciences?" by
Gary Gutting at <http://nyti.ms/K0xVQL>. Gutting wrote
that Obama, in his State of the Union address
<http://wapo.st/JnuBCO>, cited "The Long-Term Impacts of
Teachers: Teacher Value-Added and Student Outcomes in Adulthood"
(Chetty et al.,
<http://bit.ly/KkanoU>) to support his emphasis on evaluating teachers by
their students' test scores. That study purportedly shows that
students with teachers who raise their standardized test scores are
"more likely to attend college, earn higher salaries, live in
better neighborhoods, and save more for retirement."
After comparing the reliability of social-science research unfavorably
with that of physical-science research, Gutting wrote [my CAPS]:
"IS THERE ANY WORK ON THE EFFECTIVENESS OF TEACHING THAT IS
SOLIDLY ENOUGH ESTABLISHED TO SUPPORT MAJOR POLICY DECISIONS?
THE CASE FOR A NEGATIVE ANSWER lies in the [superior] predictive power
of the core natural sciences compared with even the most highly
developed social sciences."
Most education experts would probably agree with Gutting's negative
answer. Even economist Eric Hanushek
<http://en.wikipedia.org/wiki/Eric_Hanushek>, as reported by
Lowrey <http://nyti.ms/KnRvDh>, states: "Very few people
suggest that you should use value-added scores alone to make personnel
decisions."
But then Gutting goes on to write (slightly edited): "While the
physical sciences produce many detailed and precise predictions, the
social sciences do not. The reason is that such predictions almost
always require randomized controlled trials (RCTs), which are seldom
possible when people are involved. . . . Jim Manzi. . .
[according to Wikipedia <http://bit.ly/KqMf1M>, a senior fellow at the
conservative Manhattan Institute <http://bit.ly/JvwKG1>]. . . in his recent
book "Uncontrolled" <http://amzn.to/JFalMD> offers a careful and informed
survey of the problems of research in the social sciences and
concludes that non-RCT social science is not capable of making useful,
reliable, and nonobvious predictions for the effects of most proposed
policy interventions." BUT:
(1) Randomized controlled trials may be the "gold standard"
for medical research, but they are not the gold standard for
educational research - see e.g., "Seventeen Statements by Gold-Standard Skeptics
#2" (Hake, 2010) at <http://bit.ly/oRGnBp>.
(2) Unknown to
most of academia, and probably to Gutting and Manzi, ever since the
pioneering work of Halloun & Hestenes (1985a) at
<http://bit.ly/fDdJHm>, physicists have been engaged in the social science of
Physics Education Research, which IS "capable of making useful,
reliable, and nonobvious predictions," e.g., that
"interactive engagement" courses can achieve average
normalized pre-to-posttest gains about two standard
deviations above those of *comparison* courses taught with "traditional"
passive-student lectures. This work employs pre/post testing
with Concept Inventories <http://en.wikipedia.org/wiki/Concept_inventory>
- see e.g., (a)
"Impact of Concept Inventories on Physics Education and Its Relevance
For Engineering Education" (Hake, 2011) at
<http://bit.ly/nmPY8F>, and (b) "Why Not Try a Scientific
Approach to Science Education?" (Wieman, 2007) at
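As background on the metric cited in point (2): the average normalized gain used in Physics Education Research is the actual pre-to-posttest gain divided by the maximum possible gain (the definition introduced in Hake, 1998). A minimal sketch, using hypothetical illustration scores rather than data from any of the studies above:

```python
def normalized_gain(pre_percent, post_percent):
    """Average normalized gain <g> = (post% - pre%) / (100 - pre%):
    the fraction of the maximum possible gain actually achieved."""
    return (post_percent - pre_percent) / (100.0 - pre_percent)

# Hypothetical class-average scores for illustration only:
# a "traditional" lecture course vs. an "interactive engagement" course
# starting from the same pretest average.
g_traditional = normalized_gain(45.0, 57.0)   # ~0.22
g_interactive = normalized_gain(45.0, 70.0)   # ~0.45
```

Because the denominator is the room left for improvement, the metric allows courses with different pretest averages to be compared on a common scale.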
To access the complete 26 kB post please click on
Richard Hake, Emeritus Professor of Physics, Indiana University
Links to Articles: <http://bit.ly/a6M5y0>
Links to SDI Labs:
"In some quarters,
particularly medical ones, the randomized experiment is considered the
causal 'gold standard.' IT IS CLEARLY NOT THAT IN EDUCATIONAL
CONTEXTS, given the difficulties with implementing and maintaining
randomly created groups, with the sometimes incomplete implementation
of treatment particulars, with the borrowing of some treatment
particulars by control group units, and with the limitations to
external validity that often follow from how the random assignment is. . ."
- Tom Cook & Monique Payne (2002)
". . . the
important distinction. . . [between, e.g., education and
physics]. . . is really not between the hard and the soft
sciences. Rather, it is between the hard and the easy
sciences."
- David Berliner (2002)
"Physics educators have led the way in developing and using objective tests to
compare student learning gains in different types of courses, and
chemists, biologists, and others are now developing similar
instruments. These tests provide convincing evidence that students
assimilate new knowledge more effectively in courses including active,
inquiry-based, and collaborative learning, assisted by information
technology, than in traditional courses."
-Wood & Gentile (2003)
REFERENCES [All URLs shortened by <http://bit.ly/> and accessed
on 21 May 2012.]
Berliner, D. 2002. "Educational research: The hardest science of
all," Educational Researcher 31(8): 18-20; online as a 49 kB pdf
Cook, T.D. &
M.R. Payne. 2002. "Objecting to the Objections to Using Random
Assignment in Educational Research" in Mosteller & Boruch (2002).
Hake, R.R. 2012.
"Re: How Reliable Are the Social Sciences?" online on the
OPEN! AERA-L archives at <http://bit.ly/K432fC>. Post of 20 May
2012 20:08:07-0700 to AERA-L and Net-Gold. The abstract and link to
the complete post are also being transmitted to several discussion
lists and are on my blog "Hake'sEdStuff" at
<http://bit.ly/JyNP7B> with a provision for comments.
Mosteller, F. & R. Boruch, eds. 2002. "Evidence Matters:
Randomized Trials in Education Research." Brookings Institution.
Amazon.com information at <http://amzn.to/n6T0Uo> . A searchable
expurgated Google Book Preview is online at
Wood, W.B. & J.M. Gentile. 2003. "Teaching in a research
context," Science 302: 1510; 28 November; online to subscribers
at <http://bit.ly/9izfFz>. A summary is online to all at