Here are some of the highlights from a useful big-picture review, in Edweek, of how the Common Core assessments are shaping up:
…In terms of overall execution, how do the exams crafted by the two main state testing coalitions—the Smarter Balanced Assessment Consortium and the Partnership for the Assessment of Readiness for College and Careers, or PARCC—stack up to what they promised in their $360 million bids for federal funding?
In sum, say testing experts, what the consortia have accomplished thus far is more like a first draft of their original goals.
“Both consortia will have tests in 2014-2015 that will be better than almost all existing state tests, if not all. Neither will be as good as promised in their response to the department’s [request for proposals],” said Scott Marion, an associate director at the Dover, N.H.-based National Center for the Improvement of Educational Assessment, which advises both consortia. “But if they can survive until 2018, ’19, ’20, they actually might have something pretty good that comes close to living up to their promises.”
One notable technological issue affected design and price point. Both consortia had expressed interest in using “artificial intelligence” scoring to ease the burden of hand-scoring answers. But as it became clear that AI scoring would not be ready to measure the evidence-based reading and writing skills demanded by the common core, both consortia decided to rely on trained educators to score students’ responses to the performance-based tasks. (Each group plans to carry out additional studies of AI scoring, in the hope that it might become feasible in the future.)
“The point the consortia are emphasizing is that it’s very good testing in a sense, and will tell you things we haven’t been able to tell you before,” said Derek Briggs, a professor of research and evaluation methodology at the education school at the University of Colorado at Boulder who serves on the technical-advisory panels for both consortia. “But it’s still a hard sell to a lot of parents and children and people who are already skeptical about testing.”…
Smarter Balanced, meanwhile, reduced the number of performance tasks in each subject from three in the initial application to one, comprising several steps.
“The price point people felt they could manage politically has meant we’re doing less than we could have done, and it will not signal as firmly that we want kids to demonstrate their learning,” said Linda Darling-Hammond, a Stanford University education professor who advises the Smarter Balanced consortium.
Smarter Balanced has kept, however, a classroom-based introduction and activity for each performance-based segment meant to help level the playing field for students who come to the exam with different levels of background knowledge.
Those constraints, though, shouldn’t detract from some real breakthroughs, according to testing experts. Performance testing in K-12 has never been done at the scale it will occur once the two groups’ tests go live, they say. And the consortia’s advances in that area directly respond to the instructional shifts in the common core.
The performance-based math items created by Smarter Balanced aim to measure whether students exhibit the set of mathematical practices identified in the standards, Ms. Cole noted, such as making sense of problems and persevering in solving them, and reasoning abstractly and quantitatively.
Smarter Balanced didn’t contract with vendors to begin building its Digital Library until early 2013. That resource, which the group hopes to unveil this summer, will include online training modules, exemplar units, and teacher-submitted resources….
Although many instructional experts support those efforts, they worry that the resources are coming too late, since teachers are facing instructional challenges now.
Teachers are aware of the end goals espoused in the common-core standards, but need more support in learning how to break them into manageable units, said Margaret Heritage, an assistant director for professional development at the National Center for Research on Evaluation, Standards, and Student Testing at the University of California, Los Angeles.
“My concern for teachers is getting a handle on these standards and understanding the depth of them, and what it’s going to take to reach these deeper-level learnings the standards require,” said Ms. Heritage, who sat on Smarter Balanced’s formative-assessment advisory panel. PARCC does have an optional diagnostic exam, which teachers can use to better pinpoint students’ weaknesses, said Mr. Nellhaus. And Smarter Balanced is now deep in the work of creating the teacher supports.
More than 1,400 K-12 teachers are now helping to generate—and vet using common criteria—the resources for the Smarter Balanced digital library, according to Chrys Mursky, the group’s director of professional learning.
Smarter Balanced’s adaptive-test model has raised a tricky policy dilemma: whether students who are demonstrably performing significantly above or below proficiency should be given test questions outside their grade level.
To date, the federal Education Department has forbidden that practice, citing the requirements of the NCLB law. Smarter Balanced plans to make its case to the agency, with the input of a variety of advocacy groups and assurances that it will institute plenty of safeguards, said Joe Willhoft, the executive director of Smarter Balanced.
“If we have a 4th grade student who is very good in math, we want to open up the pool for them to see harder items,” he said. “But we don’t want to give them something about the Pythagorean theorem. We want to be sure that if they get it wrong, it’s because they don’t know the math, not that they’ve just never seen it before.”
Only one state, Indiana, had reversed its adoption of the standards as of mid-April. But criticism of the testing has led several states, including Florida, Georgia, and Pennsylvania, to decide against using the consortia tests. And there are external pressures, too, as a variety of nonprofit and for-profit vendors begin to build suites of tests to compete for market share with the consortia products.
With such pressures looming, many in the assessment community hope the consortia’s efforts will continue to grow stronger over time. The tests mark an important shift away from the basic skills that the NCLB-era exams tended to measure, they argue.
“It’s important for people to give the consortia a little bit of charity, given the size of the task,” said the University of Colorado’s Mr. Briggs. “I worry that if they don’t have it perfect from the start, then people will want to pull the plug. And then we’d be back to having assessments that look an awful lot like what we had before.”
Coverage of “deeper learning” that will prepare students with the skills and knowledge needed to succeed in a rapidly changing world is supported in part by a grant from the William and Flora Hewlett Foundation, at www.hewlett.org. Education Week retains sole editorial control over the content of this coverage.
See the whole thing here: Vision, Reality Collide in Common-Core Tests – Education Week.