Tuesday, 16th January 2018
My inclination in response to this article is to say: yeah, we’ve been making all these points for 10+ years, to deaf ears. Lisa read parts of it to me this morning, interspersed with “hey, you’ve been saying this.” Not just me. Lots of others. I remember being in rooms where assessment people were telling us about the latest and greatest tools, rubrics, etc. My problems were always philosophical, specifically epistemological: we are purporting to create knowledge about a program. What kind of knowledge is it? How is it formed? Is it reliable? Are there other and better methods? I never got a good answer from anyone involved about any of these questions. The article mentions Upton Sinclair’s famous and apt line: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”
One of the most troubling responses I would get was, “Well, we know it’s not perfect, but do the best you can with it. You can derive some benefit from the exercise, can’t you?” This was always an abdication of the challenge to answer the epistemological questions. The pitfalls of academic assessment are far too many, and of too many sorts, for it to be useful. They range from the obvious, like the problem of operationalizing a program’s success by measuring student outcomes, to the less obvious, like incentivizing the use of a fundamentally broken instrument by faculty who, intuitively or explicitly, can see through its flaws, and the downstream effects of reshaping programs into a factory model of education on the basis of data gathered by untrained statisticians and surveyors (i.e., faculty and staff). And beyond the pitfalls, there is the unwillingness to answer a direct question with a clear answer, which undermines any sense that assessment is anything other than a program to undermine faculty expertise and force them to teach in a manner they know to be less effective, so that some office or bureaucrat somewhere can be convinced that they are doing their jobs.
Let’s face it, assessment communicates one thing to faculty and everyone else: that faculty are not trusted, that they are not regarded as professionals, and that what they do must be micromanaged or the whole enterprise will descend into chaos. Many are convinced that that’s where the whole enterprise currently is. This is part and parcel of the meme that extends across the political spectrum in the US, but thrives most on the right: education in general, and higher education in particular, is a complete mess, and it needs to be fixed ASAP.
Are there problems in higher ed? Sure. Some of them exist because it is a conservative (i.e., slow to change) enterprise. But some of the problems have been caused by the putative solutions themselves: the framing of education as populated by “those who can’t, teach” people; the turning of education into a business, with CEOs and more and more offices, each purporting to solve something or serve someone, and each requiring something from those who are actually there to teach.
So, assessment is just one of those tools that does not, and cannot, produce the desired result of a higher-quality education, because it is not set up to do that. It is set up to produce apples-to-apples comparisons between programs, so that administrators have an easy rubric for judging efficiency. And it is an old notion of efficiency, one that supposes efficiency comes from a manager who has the data at his fingertips and can tweak systems to maximize their outputs. Even when it looks like control over improvement is distributed to the programs and faculty themselves (“This is for your self-improvement, it’s not surveillance! Really! We’re the benevolent ones here!”), it is still a version of efficiency that doesn’t reflect how complex systems actually change.
I once asked a high-level administrator why he wasn’t a billionaire yet. His attitude was that he could oversee a large, complex system and, through his superior oversight, knowledge, experience, and all that, maximize its outputs. But a university is not much different from the stock market, or the weather, or evolution: we are always much better at looking back than at looking forward. No one is much good at looking forward, including important high-level administrators. So, if someone claims the insight to manage the chaos of a university, that person should also be able to reliably pick winners in the market, and be ridiculously rich.
He laughed, but I also think I went on his mental list of “troublemakers – minimize contact”. But it’s true. The reason we have programs like assessment is that legislatures and accreditation bodies ask administrators to do the impossible: guarantee growth and a rise in quality, as defined along very specific measures. And so administrators put in place tools that they hope will work. They don’t. Assessment doesn’t work, nor do most of the tools used for raising grant production, research production, or anything else. But administrative salaries depend on not understanding, or not admitting, any of this, and on not being willing to think seriously about how complex systems really work and what an administrator’s role in them might look like. Because that role may well be vastly diminished.
So, we continue to go through the motions of assessment. None of the epistemological questions are answered, and until they are, it is all just Kabuki theatre. After years of asking those questions and being rebuffed at every turn, I’ve given up.