As I mentioned previously, I now work for Method Test Prep, a company that serves around 1,000 schools, community organizations, and independent college counselors, helping to democratize access to high-quality test prep for high schoolers. Earlier articles on this blog have critiqued the helpfulness of the SAT’s “college readiness” assessment and the ACT’s and SAT’s obsession with security; this one is going to be a little different. It occurs to me that one source of the uncertainty felt by college applicants is that, while they know the tests are important, it is increasingly difficult to determine just what a score on these tests really means, which makes it hard for students to feel confident that their $50+ and 3+ hours are well spent.
It is important to state at the outset that, despite the popularity of colleges’ stated goal to evaluate applicants “holistically”, standardized test scores play a major role in admission decisions. In the 2014 “State of College Admission”, the National Association for College Admission Counseling (NACAC) observed that 58% of colleges attributed “considerable importance” to test scores, ranking them third out of 16 variables and only slightly behind the strength of students’ curriculum. Among public colleges, 69.7% gave considerable importance to test scores, ranking them second only to the applicant’s high school grades. Test scores also grow in importance with selectivity: only 55% of colleges that accept 85% or more of applicants consider test scores considerably important, but 62% of colleges that accept 50%-70% of applicants rate test scores highly.
So these tests are important, and it is desirable to get a higher score. But depending on the institution(s) to which students are applying, it can be hard to tell what score to aim for. Websites like CollegeSimply can point you in the right direction, and the ACT and the College Board have published multiple documents to help interpret test scores. Unfortunately, both tests have undergone revisions in the last year that have only increased the anxiety and uncertainty among students and educators.
In the fall of 2015, the ACT changed how it scores its optional essay. (Note: even though the writing section is “optional” on the ACT and SAT, it is still a good idea to do it.) According to the ACT, the new writing section gives students 40 minutes (instead of 30) to “analyze multiple perspectives on a given issue in relation to their own perspectives”. The new writing prompt draws on a “broader range of subject matter” related to “contemporary issues beyond school experience”. This certainly sounds straightforward. Unfortunately, it turned out to be quite the opposite, and the ACT had to issue a 13-page document explaining how to interpret the writing scores and justifying the essay’s rigor and value as an assessment.
The confusion stemmed from the new scoring scale. Since 2005, the writing section had been scored on a 2-12 scale. The new test grades the essay on four “domain scores”, each on a 2-12 scale, and the sum of the four domain scores is then “converted to a scaled score on a 1-36 scale”. Since the rest of the ACT’s sections (English, Math, Science, Reading) are also scaled 1-36, it is no surprise that students and educators assumed the writing score would be comparable to the other sections; a student who scored a 32 on English and a 32 on Reading would expect a 32 or thereabouts on the essay. Unfortunately, that has not been the case in most situations, and people are quite concerned about the ramifications of what looks like a “bad” essay score.
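To make the arithmetic concrete, here is a minimal sketch of that conversion in Python. The four domain names follow the ACT’s writing rubric (Ideas and Analysis, Development and Support, Organization, Language Use and Conventions), but the linear rescaling is my assumption, for illustration only; the ACT’s actual conversion table is not reproduced here.

```python
def act_writing_scaled_score(ideas, development, organization, language):
    """Convert four 2-12 domain scores to an approximate 1-36 scaled score."""
    domains = (ideas, development, organization, language)
    if any(not 2 <= d <= 12 for d in domains):
        raise ValueError("each domain score must be between 2 and 12")
    raw_sum = sum(domains)  # the sum ranges from 8 to 48
    # Assumed linear map for illustration: a raw 8 becomes 1, a raw 48 becomes 36.
    return round(1 + (raw_sum - 8) * 35 / 40)

# Strong-but-not-perfect 10s in every domain sum to 40, which this assumed
# mapping converts to a 29. Set beside a 32 in English, that is exactly the
# kind of apparent gap that alarmed students.
print(act_writing_scaled_score(10, 10, 10, 10))  # -> 29
```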
The ACT admits that the writing scores are “generally lower than other scores–on average…3-4 points lower than the ACT Composite or English scores.” This is mainly due to the inherent variability of grading a single task: a section with up to 75 questions averages out random error in a way that one essay, read by a handful of graders, cannot. The ACT urges that “scores across subjects are not strictly interchangeable and should not be compared”. But that advice seems naive, since rescaling the essay to 1-36 made a faulty comparison all too easy. This looks like a blunder that needlessly muddied the waters.
The SAT has also changed, and in a much more comprehensive way. The test has been overhauled from the ground up: beginning in March, millions of students will take an exam that covers different topics, asks different types of questions, and is scored on a different scale than its predecessor. Method Test Prep has written extensively about the new test, and our prep materials for the new SAT have been used by tens of thousands of students since last summer. Many students used our program to prepare for the PSAT/NMSQT, which was administered in October and which mirrored the structure and scoring scale of the new SAT. While the College Board proclaims that “students will not be guinea pigs” for the new test, there certainly seem to have been glitches. For instance, the release of PSAT scores was seriously delayed: in previous years they were sent to schools in early December, but this year they were not released until mid-January, and there have been numerous reports of errors and a clunky online interface in the weeks since.
Once the scores came out, schools were (pleasantly) surprised to see that, in many cases, they were much higher than expected. Considering that the new test includes more difficult math and lengthier, more complicated reading passages, this was counterintuitive. I have spoken with numerous guidance counselors all over the country who have shared similar anecdotes: most students rank at a much higher percentile than they had on previous College Board assessments. A Google search for “PSAT percentiles higher than expected” yields over 24,000 results. While this might be good for students’ self-esteem, it makes it even harder to decide whether the PSAT should be treated as practice for the SAT or as a scholarship test.
The score-reporting problem is troubling for several reasons, most notably because, for the next few years, anyone trying to make sense of the new SAT (and PSAT) will have to compare new scores to the old test’s scale. This is one reason the College Board is delaying the release of the March SAT scores for so long; only after the March and May scores are released will it publish a concordance between the new and old tests. In other words, students, teachers, and admissions offices will not be able to interpret the new scores in a meaningful way without the concordance, and comparing the new SAT to the ACT will require TWO concordances. Current high school juniors and their counselors, who should be busy identifying appropriate colleges, are operating in the dark, since they have no way of knowing what scores colleges will be looking for. The only thing they can do is look at their percentile to see how they compare to other students. But if that number is not accurate, then what?
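To see why the double conversion is such a headache, consider this sketch. Every number in the tables below is a made-up placeholder (no real concordance exists as I write this); the point is only the structure: a new SAT score has to pass through the old SAT’s scale before it can be set against an ACT score, so any error in either table compounds.

```python
# Hypothetical concordance tables; the values are placeholders for
# illustration only, not published concordance data.
NEW_SAT_TO_OLD_SAT = {1400: 1950, 1300: 1800, 1200: 1650}  # 1600 scale -> 2400 scale
OLD_SAT_TO_ACT = {1950: 29, 1800: 27, 1650: 24}            # 2400 scale -> ACT composite

def new_sat_to_act(new_sat):
    """Chain two concordances: new SAT -> old SAT -> ACT."""
    old_sat = NEW_SAT_TO_OLD_SAT[new_sat]  # concordance #1
    return OLD_SAT_TO_ACT[old_sat]         # concordance #2

print(new_sat_to_act(1300))  # -> 27, but only as trustworthy as both tables
```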
Major media sources such as the New York Times have been devoting more attention to worries about the new SAT. In recent articles, Tamar Lewin wrote “A New SAT Aims to Realign With Schoolwork”, and Anemona Hartocollis wrote “A New, Reading-Heavy SAT Has Students Worried”. Lewin’s article gave a nice synopsis of the changes and extolled College Board president David Coleman’s goal of making the SAT more closely related to what kids should be learning in school (Coleman’s background creating the controversial Common Core standards may be an influence here). The piece clearly struck a nerve with readers: in just a few days, over 550 comments were posted on the online version of the article. The Times summarized the comments in a later article, and readers seemed generally unhappy with the announced changes.
One of the biggest points the College Board cites in favor of its new test is that free test preparation will be available through Khan Academy. One Times reader took issue with that:
A commenter under the handle R-son, from Glen Allen, VA, said his stepson, who is better at math than reading, would soon be taking the test. “The new SAT will be hard for him, but he has an advantage over other students–an $800 Kaplan prep course. So it boils down to this–he’ll score better on the SAT than a lower income student with the same abilities whose family can’t afford to fork out close to 1K to prep for and take this test. So how is this test, in any form, fair?”
R-son is onto something here. While students can take a prep course for much less than he paid, I have no doubt that parents and educators will be seeking out more test prep in the near future than they have previously. In my daily interactions with high schools all over the country, I hear over and over that school boards and administrations are placing greater emphasis on standardized test scores and are eager to find a tool that can help prepare kids for both tests.
Ultimately, the ACT and SAT matter to colleges as a way of comparing students from vastly different educational backgrounds. That said, the tests’ built-in cultural and economic biases are troubling, as is the relationship between test scores and college rankings. My alma mater, Hampshire College, recently became the first “test blind” institution in America, with several significant results:
1) It was excluded from the US News and World Report college rankings.
2) The number of applications decreased, but their quality increased, thanks to the greater emphasis on essays.
3) Diversity rose: students of color made up 31% of the class that entered in 2015, and first-generation college students made up 18%.
It is unlikely that colleges will relax their emphasis on tests in the near future, and the complicated nature of the tests, and of interpreting their scores, will only make things harder for applicants, their families, and their high school teachers and counselors. More likely, test prep options will continue to proliferate as people grasp at anything that might improve the admissions chances of the students they work with.