I say it's probably good news because the most common reason for this review is that your scores have jumped a lot from an earlier PSAT or SAT administration. For example (and this is a scenario I've seen repeatedly), suppose you got a 210 on your PSAT sophomore year, and then your scores are delayed when you take the SAT in the fall of your junior year. The most likely reason your scores were flagged is that you received a score of 2350 or above, which amounts to a 250+ point improvement.
Large score jumps are unusual (that's why they trigger the review), but they do occur legitimately, especially when the student has been working hard to build the fundamental skills measured on the test. At that point, ETS will double-check to make sure the same person really took both tests. They'll do things like check the handwriting on your test and match the pictures you provided for each administration. They'll also check the pattern of your answer choices to see if there's evidence of copying between you and anyone else in the same test room.
The thing you should understand is that such jumps by themselves are not enough for ETS to cancel your score. The statistical tests are just a way of flagging possible outliers for closer examination. They need specific, credible evidence of impropriety, for example a proctor's report that he or she observed something suspicious, or your high school guidance counselor reporting that the person pictured on your admission ticket isn't you. You should also take comfort in the fact that the additional security measures imposed after the Long Island cheating scandal not only make it much more difficult to hire a ringer to take the test for you, but also make it easier for ETS to verify that you are who you say you are.
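To make the "flag, then investigate" idea concrete, here is a minimal sketch in Python. This is purely illustrative: ETS's actual statistical criteria are not public, and the function name and the 250-point cutoff are my own hypothetical choices, not the real rules.

```python
# Toy illustration of outlier flagging (NOT ETS's actual method).
# The threshold value is hypothetical; the point is that a flag only
# queues a score for human review, it does not cancel anything.

JUMP_THRESHOLD = 250  # hypothetical cutoff, in old-SAT-scale points


def flag_for_review(prior_score: int, new_score: int,
                    threshold: int = JUMP_THRESHOLD) -> bool:
    """Return True if the improvement is large enough to warrant review."""
    return (new_score - prior_score) >= threshold


# A 210 PSAT corresponds roughly to a 2100 SAT on the old scale,
# so a 2350 represents a 250-point jump and would be flagged:
print(flag_for_review(2100, 2350))  # True
print(flag_for_review(2100, 2200))  # False: a modest jump passes quietly
```

The key design point mirrored here is that the check is cheap and automatic, while everything that follows the flag (handwriting, photos, answer patterns) is manual.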
If there are no other factors that cause them to worry, they will release your scores after this investigation, and you'll get your good news.
Now there are occasional instances when you, the innocent party, might be caught up in a more in-depth investigation:
- if there was a serious irregularity at your testing site,
- if another student wound up copying off you, unbeknownst to you (the statistical tests do a fairly good job of flagging pairs of tests where copying probably occurred, but by themselves they're not good at telling who copied from whom; investigators will have to look at a lot more evidence to decide which one is the cheater),
- if you take the international version of the test and are from a country that has a track record of large-scale, organized cheating rings.
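The who-copied-from-whom problem above is easy to see with a toy example: any agreement measure between two answer sheets is symmetric. This sketch is illustrative only; real answer-copying indices are far more sophisticated (they weigh shared *wrong* answers especially heavily), and the function and data here are hypothetical.

```python
# Toy sketch of why answer-pattern tests flag pairs, not individuals:
# agreement between two answer sheets is symmetric by construction.

def agreement(answers_a: list[str], answers_b: list[str]) -> float:
    """Fraction of questions on which two test-takers gave identical answers."""
    matches = sum(a == b for a, b in zip(answers_a, answers_b))
    return matches / len(answers_a)


student_1 = ["A", "C", "B", "D", "E", "A", "B", "C"]
student_2 = ["A", "C", "B", "D", "E", "A", "B", "D"]

# The measure is identical in both directions, so a high value flags
# the *pair* for investigation without identifying the copier:
print(agreement(student_1, student_2))  # 0.875
print(agreement(student_2, student_1))  # 0.875
```

That symmetry is exactly why an innocent student whose neighbor copied from them can get swept into the same review as the cheater.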
The final possibility is, unfortunately, too common, and this October's test seems to have brought another major cheating incident in Asia. October SAT scores have been delayed for test-takers who live in China and Korea and who took the international version of the SAT.
This incident appears to be a variation of a scenario that has played out in Korea before, first on the January 2007 exam. In each case, some test-takers appear to have taken advantage of a known pattern in the way SATs are administered, namely, that international test-takers do not see the same test forms as domestic U.S. test-takers do. They instead receive a test that was given some time before, on a date when College Board does not release the test.
The international version of the SAT given in January 2007 was the same test used domestically in December of 2005. And, judging by the comments here, it appears that the October 2014 international test was a repeat of the domestic test first given in December of 2013.
Now it is true that the questions on December tests are not released to test-takers, as they are for some other months. But there are organized cheating rings that will pay large sums for copies of these unreleased tests, or that will send in test takers equipped with electronic equipment to take pictures of the pages, etc. In the 2007 incident, it appears that one or more test-prep schools in Korea had acquired copies of this unreleased test and were using it as practice for their students, who then happened to see the same problems on the actual test.
Something similar happened on the May 2013 exam, but in this case ETS got wind of the fact that the test had been compromised before the test date and cancelled the administration altogether. The international test used that time was first used domestically in June of 2007, and reused at least one other time without incident (for the international administration in June 2008). From the reporting, it sounds as if in this case the actual test booklets may have been compromised, so that people knew what exact test would be used, although it could also be that ETS merely discovered that the test they planned to use was being shared from copies of the old administrations.
Based on who is having their scores withheld, it looks as if ETS suspects that test-prep schools in China and Korea were prepping their students on this particular test. That's why students who traveled from China to take the test in other places are having their scores withheld.
Why does the SAT let this happen? Reuse of tests is clearly a major contributing factor to these scandals, so why not stop reusing tests altogether?
If you're going to stop reusing tests, however, you are faced with two choices, each of which has its downside.
First, you could develop an entirely new test for each administration. Even if you continued to reuse old tests for Sunday and make-up administrations, that still nearly doubles the number of tests that must be developed each year. High-stakes tests like the SAT aren't just whipped out in an evening the way a high school teacher might create a final exam. They're the product of a long and expensive development process. Creating that many new tests would greatly raise the cost of the test for the test taker.
The other possibility is that you don't give separate tests for the international administrations. Interestingly, that's exactly what the College Board did for the question-and-answer-service dates (January, May, and October) up through May 2010. The other test dates were already reusing old domestic tests.
The reason for the shift was never announced (indeed, the change itself was never announced), but it seems obvious that security concerns prompted it, and ETS and College Board chose to go in the direction of more, rather than less, test reuse, even though the 2007 scandal had already provided vivid evidence of the potential problems of reusing tests.
If we assume (as I do) that they are rational decision makers, they must have perceived a greater threat in using the same test around the world. Test-takers in places like East Asia sit for the test many hours before those in the U.S., so ETS and College Board most likely decided that the threat of test-takers in other countries feeding answers to U.S. test-takers outweighed the other possible scenarios. Given that the vast majority of test-takers take the SAT in the U.S., it makes sense that domestic test security would take precedence.
In the end, ETS and College Board may have decided that periodic incidents like these with international test takers are a consequence they can live with, given the alternatives.
Update: I've been asked what happens, after the review, if they do think there is enough concrete evidence to cancel your score. You will always be given the chance to take a free retest to prove that your scores were legitimate (the Stand and Deliver option). Your retest scores don't have to be exactly the same. As long as they're within the margin of error, they'll accept that as evidence that your original scores were legitimate.