Some thoughts in response to “PARCC results: R.I. sees improvement, but achievement gap grows”
“Wagner denied that the test is too hard, a common criticism. Instead, he said Rhode Island has much work to do to put a rigorous curriculum in every school, ramp up teacher training and redesign the way schools, especially high schools, are structured.”
I urge everyone who thinks the PARCC is a valuable assessment, worth the time, money, and curricular resources it gobbles up, to Google PARCC practice tests and avail themselves of the many sample tests for ELA and math at all grade levels. Parents and grandparents with advanced degrees and even teaching experience report that the questions are confusing and designed to trick students. Many cannot figure out the answers that the test developers expected. And this doesn’t even get into the fact that the tests are designed to be administered on computers, ignoring the drawbacks this presents for younger students or any other students who are not computer savvy. Yet Commissioner Wagner insists that all students will take the tests on computers next year, even though students who took the tests with paper and pencil in 2015 did somewhat better. Why the urgency to transition completely to computerized testing? (This is a rhetorical question.)
“Rhode Island students who took the 2014-15 PARCC exams by computer tended to score lower than those who took the exams by paper, raising further questions about the validity and usefulness of results from the tests taken last school year by more than 5 million students in the multi-state Partnership for the Assessment of Readiness for College and Careers.
“The differences were sharpest on the English/language arts exam, where 42.5 percent of Rhode Island students who took the test on paper scored proficient, compared to 34 percent of those who took the test by computer.” [This dwarfs the 2% uptick in ELA scores in 2016 compared to 2015.]
Standardized tests in R.I. reveal ‘readiness issue’ with online testing
Posted Feb. 10, 2016
“PROVIDENCE — Rhode Island students who took the latest standardized tests on computers performed lower than those who took them on paper, raising questions about the fairness and the validity of the assessments, according to some school superintendents. …
“State Education Commissioner Ken Wagner acknowledges that Rhode Island has “a readiness issue” with online testing. He also said, however, that the differences in results are not significant enough to question the validity of the tests. [!]
“‘Increasing technology is the right way to go,’ he said. ‘People have assumed that if there is a difference in scores, that the online ones are too low. But one could argue that the paper scores are too high. It’s not clear which is the right score.’ [perplexing logic here]
“Wagner agrees that these results have ‘shined a light on the need to address the digital divide.
“‘But that doesn’t mean that the test isn’t measuring what it’s supposed to,’ he said. ‘I haven’t heard anyone say we should go back to index cards.’” [!]
Yet here is more from other states on the discrepancies between PARCC scores earned with paper and pencil versus on computer:
“In December, the Illinois state board of education found that 43 percent of students there who took the PARCC English/language arts exam on paper scored proficient or above, compared with 36 percent of students who took the exam online. The state board has not sought to determine the cause of those score differences.
“Meanwhile, in Maryland’s 111,000-student Baltimore County schools, district officials found similar differences, then used statistical techniques to isolate the impact of the test format.
“They found a strong “mode effect” in numerous grade-subject combinations: Baltimore County middle-grades students who took the paper-based version of the PARCC English/language arts exam, for example, scored almost 14 points higher than students who had equivalent demographic and academic backgrounds but took the computer-based test.
“’The differences are significant enough that it makes it hard to make meaningful comparisons between students and [schools] at some grade levels,’ said Russell Brown, the district’s chief accountability and performance-management officer. ‘I think it draws into question the validity of the first year’s results for PARCC.’ … Assessment experts consulted by Education Week said the remedy for a “mode effect” is typically to adjust the scores of all students who took the exam in a particular format, to ensure that no student is disadvantaged by the mode of administration.
“PARCC officials, however, said they are not considering such a solution. It will be up to district and state officials to determine the scope of any problem in their schools’ test results, as well as what to do about it, Nellhaus said. ‘In the short term, on policy grounds, you need to come up with an adjustment, so that if a [student] is taking a computer version of the test, it will never be held against [him or her],’ said Briggs, who serves on the technical-advisory committees for both PARCC and Smarter Balanced.
“Such a remedy is not on the table within PARCC, however.
“’At this point, PARCC is not considering that,’ Nellhaus said. ‘This needs to be handled very locally. There is no one-size-fits-all remedy.’
[Apparently this is not being handled by Wagner/RIDE.]
“But putting that burden on states and school districts will likely have significant implications on the ground, said Casserly of the Council of the Great City Schools.
“’I think it will heighten uncertainty, and maybe even encourage districts to hold back on how vigorously they apply the results to their decisionmaking,’ he said.
“’One reason many people wanted to delay the use [of PARCC scores for accountability purposes] was to give everybody a chance to shake out the bugs in the system,’ Casserly added. ‘This is a big one.’”
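The adjustment the assessment experts describe, equating scores so that no student is disadvantaged by the mode of administration, can be sketched in a few lines. This is a deliberately simplified illustration with invented scores; real equating would control for student demographics and use proper psychometric methods (such as IRT-based linking), not a raw mean shift:

```python
# Toy illustration of a "mode effect" adjustment: shift computer-based
# scores onto the paper-based scale by the observed mean gap.
# All score values here are invented for illustration only.

def mean(xs):
    return sum(xs) / len(xs)

def adjust_for_mode(paper_scores, computer_scores):
    """Shift computer scores up by the paper-vs-computer mean gap."""
    gap = mean(paper_scores) - mean(computer_scores)
    return [s + gap for s in computer_scores]

paper = [750, 760, 740, 755]      # hypothetical scale scores, paper mode
computer = [738, 744, 730, 748]   # hypothetical scale scores, computer mode

adjusted = adjust_for_mode(paper, computer)
print(adjusted)  # [749.25, 755.25, 741.25, 759.25] — group means now match
```

After the shift, the computer-mode group mean equals the paper-mode group mean, which is the minimal sense in which "no student is disadvantaged" by format; PARCC itself, as the article notes, declined to apply any such correction.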
“Comment from homeschoolingmom:
“The PARCC website allowed anyone to take a sample test. When I, as an adult, took the computer test, it was FULL OF BUGS! On one screen, nothing showed up when I typed, but when I was shown a summary at the end, every keystroke had been recorded. Some questions had you pick from several options whether your answer would be a whole number, mixed number, etc. There had to be the right number of boxes in the right order for you to input one digit of your answer into each box! That was confusing. Some questions had multiple parts, but the second and third parts were not visible unless you scrolled down, and I missed answering some problems altogether. When I got to the end, it told me to go back and review my answers, which I tried to do. I was able to fill in a couple of missed problems, but when I tried to correct the answer that had recorded miscellaneous keystrokes, I was locked out; my test was gone. I think BUGS made online scores LOWER!!”
Now let’s consider the appropriateness and reasonableness of the PARCC tests themselves. We have only the word of the test maker, Pearson, and those at RIDE who accept Pearson’s word, that these assessments are valid. Valid means that they assess what they claim to assess. Is this true?
Russ Walsh, an adjunct professor and respected reading specialist, recently “decided to take a close look at the PARCC sample test reading comprehension passages and try to assess their readability, and therefore, their appropriateness for a testing environment.”
He concluded that for most grade levels, “the passages chosen are about two grade levels above the readability of the grade and age of the children … .”
Obviously, children take these tests independently, yet a child’s independent reading level is by definition below his or her instructional level; tasks at the instructional level are accomplished with teacher and class support. Passages pitched two grade levels too high, then, land at many students’ frustration level, and tasks at the frustration level cannot provide meaningful information about what students know and can do and where they struggle. This sets children up to fail, lowers their self-esteem, and makes it more difficult for them to engage in learning. It is counterproductive as well as terribly costly in time, money, and school resources. While it adversely affects all students, the negative impact is exponentially worse for struggling students.
“Less than 22 percent of black and Latino students scored proficient in English compared to a statewide average of almost 38 percent on the Partnership for Assessment of Readiness for College and Careers, a challenging test rolled out last year amid dismal results.
“Less than 9 percent of English language learners reached the state standard, and that number fell to less than 6 percent for special-needs students.” (emphasis added)
(quoted again from “PARCC results: R.I. sees improvement, but achievement gap grows”)
According to RIDE, the PARCC test is aligned to so-called rigorous curricula based on the Common Core State [sic] Standards, which prepare students for critical thinking and problem-solving and thus for the tests. What is the nature of the curricula aligned to the Common Core and the PARCC? Tragically, the answer has been poor-quality workbooks from publishers like Pear$on (coincidentally also the producer of the PARCC and of remedial materials) that focus narrowly on text-dependent questions about short passages or excerpts from informational or fiction texts.
The next level of curricular harm rushing pell-mell toward a school near you is 1:1 digital learning with incessant testing, monitoring, and tracking. This is being referred to as “personalized” learning, student-centered learning, or competency/proficiency-based education, accomplished through the use of digital devices. Although actual academic benefits have never been confirmed through peer-reviewed research, the edtech edupreneurs are champing at the bit to sell ever more hardware, software, and data-analysis systems to cash-strapped school systems in the name of “innovation” and global competitiveness. See here for the actual harm that this frenzy for bogus teaching and learning with hand-held devices all day, every day is perpetrating on developing children.
When the general public hears the word “personalized,” they have a right to believe that this means a human teacher working with a reasonable number of students so that human connections can be made. This is the antithesis of what “personalized” learning means to the edtech mob, however. Here is a great explanation of the difference from Alfie Kohn:
“When -ized is added to personal, again, the original idea has been not merely changed but corrupted — and even worse is something we might call Personalized Learning, Inc. (PLI), in which companies sell us digital products to monitor students while purporting to respond to the differences among them.
“Personal learning entails working with each child to create projects of intellectual discovery that reflect his or her unique needs and interests. It requires the presence of a caring teacher who knows each child well.
“Personalized learning entails adjusting the difficulty level of prefabricated skills-based exercises based on students’ test scores. It requires the purchase of software from one of those companies that can afford full-page ads in Education Week.”
Here’s more on this aspect:
“So the problem with personalization via adaptive software isn’t simply that “it doesn’t work.” It’s that it might work — work to obliterate meaningful and powerful opportunities for civics, for connection, for community. Work to obliterate agency for students. And work not so much to accelerate learning, but to accelerate educational inequalities.”
Rather than doubling down on a fatally flawed set of ELA and math standards and the inflexible curricula and assessments aligned to them, RIDE policy makers need to break out of the PR bubble of “rigor,” “innovation,” individualized digital learning, and misguided data collection. Students at all levels, and particularly the most vulnerable students—those with cognitive, perceptual, neurological, sensory, emotional, or behavioral challenges, those whose families live in high-poverty neighborhoods, those from non-English-speaking homes, those who are homeless—need personal attention, not attention “personalized” à la computer algorithm, from well-prepared, dedicated teachers who are sensitive to their interests and needs, and who have the autonomy to create curricula that engage students where they are rather than disengaging them and reinforcing the worst stereotypes about their ability to succeed.