Here are three contributions to a recent discussion on the POD listserv (link to their archives under “Email discussion groups” on the left side of this page). In the first post, a science faculty member from a Big Ten university poses a question:
Some folks on my campus are concerned that accrediting agencies and other entities may expect us to do "Assessment." (Horror of horrors!) There seems to be some concern that we might be forced to do some sort of No-College-Student-Left-Behind assessment that may lead to evaluation of (and comparisons among) instructors and that may infringe on principles of academic freedom and tenure. Our faculty council may soon consider a proposed set of "principles of assessment" that apparently is meant, in general, to defend against assessment of student learning. Are these concerns legitimate? Are there examples of accrediting agencies or other parties calling for assessments that end up harming faculty?

Next is a response from the person who is responsible for assessment at a well-known private university in Texas.
As others have already suggested, there are fears about assessment - and many of these are quite legitimate! [A listserv subscriber] mentions that part of the faculty fear may come from fear of evaluation rather than assessment, and I think that this is true.

But a lot of the fear/concern/annoyance/etc. comes because assessment hasn't been presented very well to faculty. I mean, really, faculty evaluate student learning all the time and they do it well. But...then assessment people (and I'm one of those "people") come along and essentially say, "You aren't doing it well enough and you can't use grades. You have to develop something new and better because what you're doing isn't going to work." Well...that is just silly, and the faculty know it. So...assessment becomes a top-down, mandated, accountability-focused, bureaucratic exercise that doesn't do anyone any good except keep the institution out of trouble (maybe -- many institutions still get dinged for assessment-related issues).

Instead, assessment is something that faculty do all the time - even at the program/department level. What often isn't done is the documentation of that assessment. So "closing the loop" may occur (curricula are changed, course requirements are modified, and new assignments are added), but no one calls that "assessment." When faculty can see that they are doing assessment - and often doing it well - they may begin to see that assessment isn't as difficult or as onerous as they thought. They still may not love it (who loves the grading part of a class, for example??), but they can do it, and they can use the data they get to make modifications at the course, program, and even institutional level.

The other group that gets a bad name is the accreditation bodies. And, as one who has been an evaluator for two of them (NCA and SACS), I can tell you that most people working with these associations (volunteers and staff) are dedicated to doing what is right. However, because accreditation mandates come "down" through the institutional administrative structure, it often seems as if the accreditation body is the "bad guy." Instead, they are really a buffer between institutions (and the freedom to have individualized missions and to measure outcomes in ways that we individually define) and the federal government. Besides, "them is us": each institution is a member of its regional accreditor. We should all take more time to work with the accreditation process and to ensure that it works for the ongoing improvement of higher education - not because it gets us a "check mark" but because we should regularly look at what we are doing and continually try to improve.

Now...don't even get me started on the "voluntary" system of accountability....yikes!

And the next is from an associate dean and director of the center for teaching excellence at another Big Ten university.

I have definitely run into this attitude about assessment in pockets throughout our university. However, I will say that faculty responses vary considerably across disciplines and domains. Faculty in domains without a history of outcomes assessment often bring up the comparison to NCLB and the spectre of standardized testing of learning outcomes for college students.
The concern is quite understandable given the push by the Spellings Commission of past years, the marketing of certain instruments, and the lack of familiarity/history with the process. When you add the sometimes indiscriminate assumption that models developed for one field apply just as well in other fields, the hullabaloo is also predictable, but again, only in some disciplines.
For faculty in many of the fields that undergo specialized (disciplinary) accreditation, outcomes assessment is old news. And some of the faculty in those fields will even admit that the process has been useful for updating and improving their curricula. However, having been involved in Student Learning Outcomes Assessment for a while, I can say there is indeed cause for frustration among faculty new to SLOA. Most of the examples/models come from a relatively restricted number of fields, and there is a dearth of models that humanities and social science faculty find useful. What works in business, engineering, education, or the health sciences doesn't necessarily work well, or even make sense, in English, cultural anthropology, or political science! We have found that the model from Communications has a lot of cross-over appeal.
Faculty experiences with specialized accrediting bodies might be another reason that some faculty assume the regional accreditation org. will be very prescriptive. The specialized (disciplinary) accreditation bodies _do_ spell out pretty specific student learning outcomes. If you were a faculty member new to SLOA and heard about a model from engineering and all about ABET's a-k outcomes, you might reasonably extrapolate that the regional accreditors would be just as prescriptive. In actuality, the regional accreditors operate at a completely different scale--just as a map of the US is much less detailed than a city map.
While some faculty were hoping that learning outcomes assessment would disappear with a change of administration and the disbanding of the Spellings Commission, it hasn't.
In addition, some faculty do not realize that messages about standardized assessment processes are generally not coming from the regional accreditors. In fact, CHEA (the Council for Higher Education Accreditation) worked diligently at the federal level to preserve institutional and faculty autonomy in specifying what outcomes are appropriate for the institution and its programs.
All of the regional accreditation organizations are quite committed to faculty engagement and faculty-driven student learning outcomes assessment. They are not about to tell the [philosophy] faculty what students in the philosophy program should know or be able to do by the time they graduate. Nor are regional accrediting bodies going to tell the [philosophy] faculty what courses they should be teaching. [replace with discipline of your choice]
I agree with Louis that assessment evidence can indeed be misused, but when that happens, it is generally not by the regional accreditation organizations. Some institutions get into trouble with their accreditation organization by not taking the process seriously. Others suffer because decisions are made internally within the institution to try to shoehorn all programs/disciplines into a single model that does not respect disciplinary differences in what constitutes evidence or how students learn in the field. The most commonly transferred models are uniform, homogeneous, linear, and sequential - an approach that just doesn't work in some fields.
Frankly, it might be very interesting if [your] faculty did develop some principles of assessment. At least they'd be engaged in a discussion of what assessment means, and that could be a good step forward. You could provide a great service by becoming extremely familiar with just what the North Central accreditation organization is actually requesting. Your knowledge could help deflect some of the fear and redirect it toward a more productive conversation about student learning. I cannot tell you how many times I've heard comments about what Middle States (our regional org) wants, says, etc.
Hopefully, interested PODers will be able to talk about this extensively at the POD conference! I think faculty developers have a unique contribution to make to discussions of institutional and program outcomes assessment.