Shall we thus abandon self-assessments? We need to be judicious, to be sure. However, this book does lay out multiple strategies and prescriptions for continued, restricted use of such tools.
1. Make the questionnaire more complicated. The scholarly term for this is “evaluative neutralization”: “Desirable-sounding and undesirable-sounding items are rephrased to sound more neutral and test takers will not be tempted to alter their responses.” (p. 320)
2. Ask for verification. So-called “biodata” and “elaboration” tactics require survey takers not just to rate themselves but to supply evidence supporting their claims. Instead of simply stating “yes, I am a hard worker,” students must provide an example. (p. 320)
3. Force choices among multiple positive options. In this technique, a school or testing organization determines a small set of specific sought-after attributes, say four or five, and then generates another ten to fifteen equally attractive characteristics. Applicants must then select from the combined list of positive attributes without knowing which ones the organization actually prefers. (p. 321)
4. Warn applicants not to cheat. This strategy has been widely studied, but the results are inconclusive at best. Only warnings that persuasively convey both that the organization can detect faking and that it will punish fakers show statistically significant effects; even these can backfire by depressing the responses of honest respondents, who grow anxious about being wrongly accused. (p. 323)
5. Make it more like a test. This strategy is considerably more complex than the previous four: applicants self-assess by answering test-like multiple-choice questions about situations and scenarios. Situational Judgment Tests (SJTs) are increasingly common, advocated and employed, for instance, by Robert Sternberg in his college admissions work at Tufts and elsewhere. They are appealing, but unless they are explicitly designed to combat faking, they can fall prey to the same problems.
6. Use third-party reporting. This is hardly innovative; we’ve all been using teacher recommendations for years. ETS explored a wide variety of options before settling, for graduate school applicants, on a Personal Potential Index built exclusively on “ratings by others.” (Kyllonen, 2008)
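The forced-choice technique in strategy 3 lends itself to a quick sketch. The attribute names, list sizes, and scoring rule below are all invented for illustration; the book describes the general technique, not any particular implementation:

```python
import random

# Hypothetical attributes the school actually values (kept secret from applicants).
TARGET_ATTRIBUTES = {"perseverance", "curiosity", "collaboration", "integrity"}

# Equally attractive distractors, so no option signals the "right" answer.
DISTRACTORS = [
    "ambition", "creativity", "leadership", "humor", "optimism",
    "independence", "adaptability", "decisiveness", "empathy", "confidence",
]

def build_item(n_choices=5, seed=None):
    """Build one forced-choice item: one target attribute mixed with distractors."""
    rng = random.Random(seed)
    target = rng.choice(sorted(TARGET_ATTRIBUTES))
    options = rng.sample(DISTRACTORS, n_choices - 1) + [target]
    rng.shuffle(options)
    return options

def score(selected):
    """Score a response set: count selections that match the hidden targets."""
    return len(set(selected) & TARGET_ATTRIBUTES)

print(score(["ambition", "curiosity", "humor", "perseverance"]))  # 2 matches
```

A real instrument would also balance the social desirability of targets and distractors, since the technique only works if applicants cannot infer which options the organization prefers.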
Gloomy as this book is about faking, it does not conclude on balance that we should eliminate self-assessments. Kyllonen et al. offer excellent case studies for effective non-cognitive self-assessment in higher education (pp. 290–92), and the book’s concluding chapter rallies users not to surrender. Paul Sackett, Ph.D., University of Minnesota, argues: "there is at least the potential of checks and balances, rather than simply taking the personality score at face value…In sum, although improved methods of preventing, detecting, and correcting for faking will be welcome developments, it is not the case that there is a need to suspend operations while waiting for these developments." (p. 342)
Report from the Field:
“The whole child, not just the cognitive ability, is what we are committed to assessing at Galloway,” report Polly Williams and Elizabeth King, former and current admissions directors, respectively. They are quick to say this is not to dismiss the importance of intellectual ability, only that they know “traditional measures don’t work in isolation to capture the breadth of what kids can do.”
Both Williams and King have backgrounds in special needs and dyslexic educational programs, and they draw upon that background in their assessment work. They know from experience that often “brilliant kids are not able to demonstrate their brilliance in typical ways.” Instead, the mission of their admissions operation is to identify the strengths and weaknesses of every individual child, honoring each one’s unique qualities, and to build a balanced class well suited to their particular educational program rather than simply selecting the top academic performers.
Because their school has long been committed to what is now called 21st Century learning, they’ve identified three major assessment domains and have developed assessment tools for each. First on their list are the so-called executive functioning abilities, such as time management, organization, and prioritization. For this, they use a self-assessment survey they’ve built from various published resources.
Second is perseverance and grit, for which they’ve recently begun using the Duckworth grit assessment available on her website. It is still early, but in an initial analysis they have found that strong grit scores correlate well with predicted student success. As Williams and King explain, “We are excited about the use of this tool, because perseverance in the face of challenge is so critical to learning in a project-based environment like Galloway’s.”
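The kind of initial correlation check Williams and King describe can be illustrated in a few lines. The numbers below are invented, and “predicted success” stands in for whatever outcome measure a school actually tracks:

```python
import math

# Invented example data: grit scores (1-5 scale, as on Duckworth's Grit Scale)
# and a hypothetical "predicted success" rating for the same students.
grit =    [4.2, 3.1, 4.8, 2.5, 3.9, 4.5, 2.9, 3.6]
success = [4.0, 2.8, 4.6, 2.2, 3.5, 4.4, 3.0, 3.4]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(grit, success)
print(f"r = {r:.2f}")  # a value near 1.0 indicates a strong positive correlation
```

With real data, a school would also want the sample size and a significance test before drawing conclusions; this sketch shows only the basic computation.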
Third is the interpersonal: the ability to interact and collaborate effectively. To evaluate this in admission to middle and high school, each applicant participates in a group activity – usually building a tower out of various parts – while Galloway educators observe carefully and evaluate with their collaboration rubrics.
The Galloway team is confident and optimistic about their progress on this critically important work. “We are well on our way to defining which ‘soft skills’ are most important to us and to developing assessments for them. This is doable and worth doing.”