Undergrads May Fail Critical-Thinking Test, but Academia Is Failing Them

The two professors sat in front of me, making conversation before the talk. The speaker’s title slide already projected on the wall ahead: “What (if anything) are undergraduates learning during college?” The professors laughed at just how apt they thought the title was: “Isn’t that right?” “Yes, anything, please!” And then the more senior faculty member, a woman, returned with a comment that made her junior colleague bristle: “Especially the boys. Some of those boys just try to get by with the minimum possible.” The junior colleague sat silent, and then spoke with a sharpness that spiked into the buoyant mood of moments before: “Well, that was me in high school. But the thing is, I was just bored to tears.” His senior colleague stopped chuckling to nod knowingly.

The slides belonged to Josipa Roksa, co-author with Richard Arum of the 2011 sociology/media sensation “Academically Adrift: Limited Learning on College Campuses” and of its 2012 follow-up report, “Documenting Uncertain Times: Post-graduate Transitions of the Academically Adrift Cohort.” The premise of the talk, like that of the book and its sequel, was threefold: that undergraduate students are not improving their critical-thinking skills in college; that this claim is sustained by the failure of a putatively representative sample of 2,362 students at 24 four-year institutions to raise their average scores on standardized tests of critical thinking; and that this failure is affecting them negatively in the labor market and in civil society (as indicated by rates of full-time employment and of graduate or professional school enrollment, and by self-reported newspaper-reading habits).

The primary instrument with which Arum and Roksa document students’ drift away from critical thinking is the Collegiate Learning Assessment (CLA), a self-described “authentic assessment” or “test worth teaching to” developed by the Council for Aid to Education, a nonprofit organization created by a group of “enlightened business leaders” for the purpose of “promot[ing] a better understanding of the substantial contribution which higher education makes to the effectiveness, skill, growth, and success of American business, and to the development of the country.”

The CLA presents test-takers with “real-life” scenarios, to which students respond in writing rather than by choosing among multiple-choice options. Arum and Roksa (2011: 21-22) share an example: “You” are an assistant to the president of “Dynatech,” a company that wants to purchase a plane. But the model of plane Dynatech President Pat Williams wants to purchase was recently involved in a crash. “You” are charged with writing a memo using news articles, consumer report data and reports about the sorts of accidents the plane has encountered. You are to advise Williams on whether Dynatech should purchase the plane.

This “real-life” scenario raises a question: For whom is this real-life? This is not a rhetorical question. The ways in which I make meaning of the scenario and my response to it are likely to be shaped by how I relate to “Dynatech”: am I really an employee, as the scenario dictates? Can I imagine a future as an employee? Can I imagine a future somewhere similar? Am I imagining a future somewhere else, concerned with quite different decisions? Or am I, like that junior faculty member sitting in front of me at the research institution where Roksa gave her talk, “bored to tears” by this test? And if so, is this instrument an adequate measure of the way I think and an adequate provider of feedback to the people – the faculty – who are supposed to be supporting me as I develop my thinking?

The performance-task approach of the CLA can engage students. Other tasks on the test ask students to lobby for space for a campus organization to meet, using university policies, an excerpt from the student newspaper and documents from a web page, or to reconcile two contradictory dialect maps using linguists’ descriptions of their scholarly methodologies. Still another CLA item asks students to imagine that they work for the local affiliate of a “Christ-centered” nonprofit parenting organization, in whose community an influential pastor has made particularly vocal claims about corporal punishment. Students are to write a newsletter analysis of the pastor’s claims using quantitative and qualitative data on discipline and transcripts of the pastor’s sermons. As in classrooms and workplaces, the CLA uses performance tasks that may engage many students at certain points and in certain ways. But the question of why students would be willing to engage in these tasks – and, similarly, why students would want to engage in our classrooms and the ways of knowing presented there – remains.

Engaging thinking means engaging people: their understandings of self and others, their experiences, their imagined futures. Indeed, Arum and Roksa are correct that an ability to perform the sort of tasks the CLA asks students to complete is a necessity and an asset in what education scholar Lisa Delpit calls the culture of power. Equity in education, as Delpit argues, calls for equipping students for success in the culture of power, while changing the relationship between the culture of power and the contexts in which students live their everyday lives; equity in education calls for engaging and building on the ways of knowing students bring with them to school, especially when those ways of knowing are ignored, or even derogated, by the culture of power. What happens to existing space for this kind of dialogue when schools and teachers are asked – demanded, even – to focus increasingly, or even solely, on the culture of power without coming to know, and to draw in, engage and develop, the ways of knowing students carry with them?

In documenting the “uncertain times” of the college graduates that Arum and Roksa identify as “academically adrift,” the authors observe that the students with stagnant CLA scores during college are more likely to be the young adults who, after college, are living at home, not reading newspapers, or not finding full-time jobs. What the authors do not observe is the extent to which their results speak to patterns of social reproduction, in which those whose home cultures are deeply tied to the culture of power are also those who perform best on the “real-life” abstractions of the CLA, and are those for whom reading a newspaper constitutes a more legitimate form of civic engagement than reading a blog or a Twitter stream. Yet coverage of the murder of high-school student Trayvon Martin only trickled into the print of major newspapers after weeks of deeply, civically engaged blogging, tweeting, talking and petitioning.

What Arum and Roksa and the CLA count as “critical thinking” matters, no doubt. But it seems that the inequities in college learning about which they express concern persist because they are so often relegated to spaces like those that Arum and Roksa do not count: spaces of lived experience and meaning-making through which one man can understand his being “bored to tears” as a sign of academic worthiness, and through which another might feel he’d be better off drifting out of academia. Again, engaging thinking means engaging people. So, when it comes to assessing thinking, I say we need relationships before we need measurements, and when we measure, we need to do so in dialogue with those real, human relationships. But I hope you’ll share with me: what do you think?