Topic: High Stakes Testing Study
Replies: 2
Last Post: May 16, 2002 9:35 PM
High Stakes Testing Study
Posted: May 6, 2002 12:10 AM
High-Stakes Testing, Uncertainty, and Student Learning
Audrey L. Amrein
Arizona State University
David C. Berliner
Arizona State University
Citation: Amrein, A.L., & Berliner, D.C. (2002, March 28). High-stakes testing, uncertainty, and student learning. Education Policy Analysis Archives, 10(18).
Full study available at http://epaa.asu.edu/epaa/v10n18/.
Abstract
A brief history of high-stakes testing is followed by an analysis of eighteen states with severe consequences attached to their testing programs. These 18 states were examined to see whether their high-stakes testing programs were affecting student learning, the intended outcome of the high-stakes testing policies promoted throughout the nation. Scores on the individual tests that states use were not analyzed for evidence of learning, because such scores are easily manipulated through test-preparation programs, a narrowed curricular focus, exclusion of certain students, and so forth. Instead, student learning was measured by means of additional tests covering some of the same domain as each state's own high-stakes test. The question asked was whether transfer to these domains occurs as a function of a state's high-stakes testing program.
Four separate, standardized, and commonly used tests that cover some of the same domain as the state tests were examined: the ACT, SAT, NAEP, and AP tests. Archival time series were used to examine the effects of each state's high-stakes testing program on each of these measures of transfer. If scores on the transfer measures went up as a function of a state's imposition of a high-stakes test, we considered that evidence of student learning in the domain and support for the belief that the state's high-stakes testing policy was promoting transfer, as intended.
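To make the logic of that design concrete, here is only an illustrative sketch in Python, not the authors' actual analysis: it compares a hypothetical state's mean score on a transfer measure before and after the year a high-stakes policy took effect. The state scores, years, and policy year are invented for illustration; the study itself used archival time series on the ACT, SAT, NAEP, and AP measures.

    # Illustrative sketch of a before/after comparison on a transfer
    # measure (e.g., NAEP-style scale scores). All values here are
    # hypothetical; they are not data from the Amrein & Berliner study.

    # Year -> mean score on the transfer measure for one hypothetical state
    scores = {
        1992: 215.0, 1994: 216.5, 1996: 217.0,   # before the policy
        1998: 216.0, 2000: 215.5,                # after the policy
    }
    policy_year = 1997  # hypothetical year the high-stakes policy took effect

    pre = [s for year, s in scores.items() if year < policy_year]
    post = [s for year, s in scores.items() if year >= policy_year]

    pre_mean = sum(pre) / len(pre)
    post_mean = sum(post) / len(post)
    change = post_mean - pre_mean

    # In this simplified sketch, a positive change would be read as
    # evidence of transfer; a flat or negative change would not.
    print(f"Pre-policy mean:  {pre_mean:.1f}")
    print(f"Post-policy mean: {post_mean:.1f}")
    print(f"Change after policy: {change:+.1f}")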
The uncertainty principle is used to interpret these data. That principle states, "The more important that any quantitative social indicator becomes in social decision-making, the more likely it will be to distort and corrupt the social process it is intended to monitor." Analyses of these data reveal that if the intended goal of high-stakes testing policy is to increase student learning, then that policy is not working. While a state's high-stakes test may show increased scores, there is little support in these data for the view that such increases are anything other than the result of test preparation and/or the exclusion of students from the testing process. These distortions, we argue, are predicted by the uncertainty principle.
The measure of success for a high-stakes testing policy is whether it affects student learning, not whether it can increase student scores on a particular test. If student learning is not affected, the validity of a state's test is in question.
The evidence from this study of 18 states with high-stakes tests is that in all but one analysis, student learning is indeterminate, remains at the same level it was before the policy was implemented, or actually goes down when high-stakes testing policies are instituted. Because clear evidence of increased student learning is not found, and because there are numerous reports of unintended consequences associated with high-stakes testing policies (increased drop-out rates, cheating on exams by teachers and schools, defection of teachers from the profession, all predicted by the uncertainty principle), it is concluded that there is a need for debate and transformation of current high-stakes testing policies.
The authors wish to thank the Rockefeller Foundation for support of the
research reported here. The views expressed are those of the authors and
do
not necessarily represent the opinions or policies of the Rockefeller
Foundation.