Monday, February 12, 2007

Observer Effect

The observer effect shouldn’t be confused with the Heisenberg Uncertainty Principle. It is the idea that the act of observing an event fundamentally changes that event: the observer becomes, in a way, part of the event, making it impossible to observe the event as it would have happened unobserved. I thought about this recently while preparing a review lesson for the PSSA (the state assessment). The test serves as a benchmark, used by the federal government under No Child Left Behind (NCLB) to determine whether a school is making adequate yearly progress (AYP). Students essentially need to keep improving their scores on state assessment exams in order for their school to continue operating without federal intervention or regulation.

The key word there is benchmark. I would define a benchmark as a test that measures performance against some predetermined standard, or that lets you compare one result against others who took the same test. There are benchmark programs for computers: you run one and get a score that is arbitrary on its own but becomes meaningful when compared against other users running the same benchmark. You can see how different hardware configurations produce higher or lower scores, which in turn affects which hardware consumers decide to buy. In education, the benchmark is a state test whose results are compared to the same school’s previous results to determine whether that school achieved a predetermined level of progress. The idea is that each school needs to keep improving its test scores.
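To make the comparison concrete, here is a minimal sketch (in Python, purely illustrative; the workload and scoring formula are invented for this example) of how such a benchmark works: a fixed task is timed, the timing is turned into a score, and the score only means something relative to other machines running the same code.

```python
import time

def workload(n=2_000_000):
    """A fixed, repeatable task: sum the squares of the first n integers."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_benchmark(runs=5):
    """Time the workload several times and report a score.

    The score itself is arbitrary (here, iterations per second);
    it only becomes meaningful when compared against the same
    benchmark run on other machines or configurations.
    """
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        times.append(time.perf_counter() - start)
    best = min(times)
    score = 2_000_000 / best  # higher is better
    print(f"best run: {best:.3f}s  score: {score:,.0f}")
    return score

if __name__ == "__main__":
    run_benchmark()
```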

The problem with benchmarks is that people tend to optimize for the benchmark itself, producing higher scores without actually improving the thing the benchmark is supposed to measure. Nvidia, a popular graphics processor manufacturer, came under fire a few years ago because its drivers detected a popular benchmarking program and took shortcuts that let certain Nvidia graphics cards score higher on it. This is called benchmark inflation. Unfortunately, it now occurs in education as well. Nvidia inflated its scores to sell more graphics cards and make more money; schools try to inflate scores because they don’t want federal regulation, which in many cases costs them money. If a school’s test scores do not meet AYP, that school goes under review by the federal government, and that review process may result in any number of interventions.
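Benchmark inflation is easy to illustrate. The sketch below (again illustrative only; this is not Nvidia’s actual mechanism, and the benchmark-detection check is invented for this example) shows a routine that notices it is being run by the benchmark and returns a cached answer instead of doing the real work. The score goes up, but real-world performance is unchanged.

```python
import time

EXPECTED_RESULT = sum(i * i for i in range(2_000_000))  # precomputed answer

def workload(n=2_000_000, caller="application"):
    # "Optimization" that only activates when the benchmark is running:
    # skip the real work and return the cached result. The score improves,
    # but ordinary applications see no benefit at all.
    if caller == "benchmark":
        return EXPECTED_RESULT
    total = 0
    for i in range(n):
        total += i * i
    return total

def run_benchmark():
    start = time.perf_counter()
    workload(caller="benchmark")
    elapsed = time.perf_counter() - start
    print(f"benchmark time: {elapsed:.6f}s (suspiciously fast)")

if __name__ == "__main__":
    run_benchmark()
```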

Schools try to inflate scores both by aligning their curriculum to the test (e.g. changing the year in which Algebra II is taken: Alg I, Geometry, Alg II, PreCalc) and by setting aside class time specifically to prepare students for the test. Every Monday a class period is used to review problems that may appear on the PSSA. Every Tuesday students take a 15-minute practice PSSA quiz. One full period out of five each week is already 1/5 of class time, and the Tuesday quiz pushes it past that: over 1/5 of the curriculum spent reviewing for a benchmark. How is this not a disservice to the students? And how is this supposed to accurately measure students’ ability?

The federal government’s goal with NCLB is to assess how effective schools are and then hold educators accountable for any failures. Its mandated state assessment exams are an attempt to observe the effectiveness of the public education system, and that observation is fundamentally changing the system. High-stakes accountability is forcing schools to change strategies and build new curricula around these assessments. We are no longer observing or assessing the original education system; hence, the Observer Effect. Could this be the goal? Maybe, but then how are we to determine how effective the assessment exams themselves are? Who determines the quality of the assessments? How are we to assess the assessments which assess the assessments…?
