Last time we began dissecting standardized tests (See “Standardized Tests”), and to the surprise of no one who has been a regular reader of “Inside Public Education,” we found them to be pretty vile. From distracting teachers from their more important work (EDUCATING CHILDREN) to providing questionable measures of a school’s success (as evidenced by the Report Cards the state now publishes for each district), the extremely limited usefulness of standardized tests has been exaggerated to roughly 1,066 times their actual value, by my estimation. They also screw up good teaching methodology when they tackle more subjective issues like writing.
The original Illinois state-mandated assessment—the IGAP—did include an essay as part of the test, and the problems emerged immediately. First, they had to eliminate any literature as the basis for the prompt since they couldn’t be sure that every high school student would have read Romeo and Juliet, Of Mice and Men, or some other title making use of coordinating conjunctions. The essays we have our kids write in English class generally have to do with the literature we’re studying, but the variety of texts used precludes any single work’s being used on a standardized test.
So literary analysis was tossed out immediately, even though a “cold” read (reading something for the first time) of a short excerpt would have worked. Instead, they started coming up with these lightweight prompts—“Describe the qualities of a friend,” “Explain a time when you overcame an obstacle,” or “What was your most embarrassing moment?” Granted, all they were after was a writing sample, and certainly any kid could write an essay about her favorite season of the year; but right from the prompt, the test writers were moving away from what the kids had been doing in school—what these tests allegedly measure, right?
So they had these thousands of essays on which holiday is the most fun. Now they had to establish a means by which the essays would be evaluated, to find people who would read them, to train all those people so they would grade the same way, to get those people together to read the essays, and to assign each essay a score.
Since grading essays can be extremely subjective—which is certainly not a positive in standardization—those in charge had to find a way to make sure that all the graders would score essays identically, not to mention making sure that the process of grading didn’t take too long. So they came up with rubrics that rated the essays on a scale from 1 (piece of garbage) to 6 (good). What made grading these essays so ridiculous was that the scale of the project forced them to boil each essay down to a few basic qualities: if the grader found those couple of things, the essay got a good score.
Transitions would be a good example of how this worked: Skilled writers make use of transitional devices that subtly and smoothly move the reader from idea to idea without his even noticing. When I switch topics, points, or paragraphs, I have to make sure that readers are following me, and transitions make sure they don’t get lost. However (notice that this transition—however—is neither subtle nor smooth; it just hits the reader over the head at the beginning of the sentence), I should do so in a way that doesn’t call attention to the transition itself, that seamlessly moves my ideas along. IGAP, though, didn’t have time for any of that artistic nonsense, so they just made a list of the ten most commonly used transitions for the graders to find. As long as those transitions put in an appearance a few times in the essay, the writer would get full credit for correct use of transitions.
Now, when I teach transitions to my students, I use a baseball umpire analogy. If an ump has done his job well, will we be talking about him after the game? Of course we won’t; we’ll be talking about the big hit, the horrific error, the pitcher’s ten strikeouts—our attention will have been drawn to the important aspects of the game, not the arbiters. But what happens when the umpire blows a call? Then most of our attention is focused on him instead of what happened during the game. We’ll watch the controversial play over and over, we’ll debate how evil this clown is, and we’ll never, ever forgive that idiot who destroyed our hopes and dreams with one stupid ruling! Transitions that call attention to themselves detract from the power of the writer’s ideas and hurt the overall impact of the essay.
That was way too subtle a distinction for Illinois graders, though. All they had time to do was to check to see if the essay had transitions in it. If it had a couple from the list of “approved” transitions, then it received the highest marks for that component of the essay’s organization, end of discussion. No matter how ham-fisted or trite the transitional usage was, as long as it was there, that was good enough.
Naturally, after the first IGAP tests were given, all the teachers in the state became aware of the importance of those ten transitions’ being placed in obvious locations (the first words of a sentence) so the essays would receive higher marks. Every kid in Illinois was soon being taught that the only way to write a good essay was to use words like “first,” “then,” and “in conclusion.” For the next ten years, I had to add to my transition talk the caveat that some transitions were so overused they had become virtually useless for creative, artistic essays; and that I never wanted to see “in conclusion” as the first two words of a concluding paragraph. The IGAP writing test, in other words, actually made my kids worse writers as they started approaching writing with the same mindset they were taught for solving quadratic equations or replicating scientific experiments: a specialized set of rules that were to be used the same way for each “problem,” with any variations leading to wrong answers or blown-up labs. Creativity and art had NO place in the IGAP test.
Eventually the logistics of grading these essays year after year wore the bureaucracy down, and the writing component of the IGAP was abandoned. It has taken only fifteen years for its negative impacts to wear off, and I’m seeing fewer kids afflicted with the “IGAP Limited-Transitional Range” disease, but we should never forget how difficult it is to test writing. Paradoxically, we can never forget just how important writing is as a measure of a student’s abilities. (See Grading Writing Part I and Part II.) Yet trying to standardize the evaluation of writing for a standardized test can and probably will hurt the overall level of student writing.
Other disciplines suffer from the same subjective judgments which get turned into objective “reality” on standardized tests. While there can be no question of the right or wrong of the math problems’ answers, how do they determine how many of each kind of problem to include? What’s the “correct” ratio of algebraic questions to geometric ones? Exactly how “difficult” should the problems be? Should there be two relatively easy problems to every one hard one, or should that proportion be reversed? How many variables should each algebra question average? How many steps should the geometry proofs require? How can I pose a hypothetical question on trigonometry when I have no idea what I’m talking about?
Those kinds of issues arise with every subject. Should social studies focus more on World War I or the Great Depression? Should biology receive more, less, or the same focus as chemistry and physics? I’ve never seen any information on how they determine the kinds of questions they use or how they can condense all the important knowledge from vast subject areas into 65 multiple-guess questions.
Then we’ve got the problem of figuring out where the student starts out. Until we can accurately assess a baseline of abilities and achievement for all kids before they start school, standardized tests are as much a measure of their innate abilities and family histories as they are of the impact schools had on students’ knowledge. Progression from a starting point should be the basis for judging a school, not a static measurement done in a single sitting.
Say a student started in my English class making twenty mechanical errors per two-page essay, but after a year in my class, he’s down to eight. Those eight errors on the grammar section of a standardized test might translate into a pretty poor overall score, but the truth is that this kid has cut his mistakes by 60% in a single year, not too shabby by my “standards.” But I can’t get too cocky about this statistic either since it still doesn’t take into account maturation or school savvy. Maybe the kid just finally grew up: Seeing college looming and parental pressure increasing, he worked harder and his brain got big enough to handle some of the more advanced grammatical concepts that had baffled him previously. Maybe he’d had a big fight with his girlfriend or got cut from the basketball team the day before he took the test. Maybe he’d had the resources to hire a private tutor or take a test-preparation course to improve his scores. Maybe he just has good parents.
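For anyone who wants to check my math, the 60% figure is just the drop in errors divided by the starting count; here’s a minimal sketch (the error counts are the hypothetical ones from my example, not real student data):

```python
# Hypothetical numbers from the example above: a student who went from
# twenty mechanical errors per two-page essay to eight in one year.
errors_before = 20
errors_after = 8

# Percent improvement = (drop in errors) / (starting count).
reduction = (errors_before - errors_after) / errors_before
print(f"{reduction:.0%}")  # → 60%
```

The same calculation is what a growth-based measure would care about; a single static test score only sees the eight remaining errors.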
Yep, parents, you are the single most important factor in your kid’s success in school. The statistics have consistently shown that the variables most closely tied to how students do in school are parental education and parental income. If you went to college, your kid is more likely to do well in school than if you didn’t. Not surprisingly, if you went to college, you make more money on average than if you didn’t, so I’m not so sure whether parental income is really an independent variable or just a by-product of education.
Regardless, parents could save themselves a bunch of money, colleges could save themselves a bunch of time wading through applications, and schools could save themselves a bunch of stress sweating out these stupid tests if we just had the kids fill out a two-question form: 1. How much education do your parents have? and 2. How much money do your parents make? I’m willing to bet a dollar that you could do just about as well at predicting success in college with the statistics on those two questions as all those standardized test companies do, for significantly less money.
Speaking of money, next time, we’ll take a look at what have become the most prestigious and powerful standardized tests around, Advanced Placement tests.