Kenneth Green in today's Inside Higher Ed...
Several quotes to whet your appetite to read the entire commentary:
The question of value added...looms large in the continuing public conversation about quality and reform across all levels of education. And now, for the rest of the story...
Stated (too) simply, Astin’s research, based on multivariate analyses of large, multiple, and longitudinal cohorts of undergraduates across a wide array of colleges and universities, confirmed that the impact of the college experience at some institutions surpassed predicted outcomes (grades and other measures of academic performance; retention in specific majors; degree completion; student satisfaction with the college experience; etc.) while the collegiate experience at other institutions impeded some student outcomes.
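The core of a value-added analysis, stripped to its simplest form, is a comparison of actual outcomes against outcomes predicted from what students bring with them. The sketch below is only an illustration of that idea, not Astin's actual multivariate method: the institutions, test scores, and graduation rates are invented, and a real analysis would use many predictors and longitudinal cohorts.

```python
# Toy value-added sketch: "value added" = observed outcome minus the
# outcome predicted from entering characteristics alone.
# All data below are hypothetical; this is NOT Astin's actual model.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x with a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

# (institution, entering test score, cohort graduation rate) -- made up
cohorts = [
    ("A", 50, 0.55), ("A", 60, 0.62),
    ("B", 50, 0.45), ("B", 60, 0.52),
    ("C", 70, 0.80), ("C", 80, 0.92),
]

a, b = fit_line([c[1] for c in cohorts], [c[2] for c in cohorts])

# Residuals: positive = institution outperforms its predicted outcome.
value_added = {}
for inst, score, grad_rate in cohorts:
    value_added.setdefault(inst, []).append(grad_rate - (a + b * score))

for inst, resids in sorted(value_added.items()):
    print(inst, round(sum(resids) / len(resids), 3))
```

The point of the residual, rather than the raw graduation rate, is exactly Green's point about surpassing or impeding *predicted* outcomes: institution C has the highest raw rates, but once entering scores are controlled for, its "value added" need not be the largest.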
...the 1984 Study Group argued that “higher education should ensure that the mounds of data already collected on students are converted into useful information and fed back [to campus officials and faculty] in ways that enhance student learning and lead to improvement in programs, teaching practices, and the environment in which teaching and learning take place. We argue that institutions should be accountable not only for stating their expectations and standards, but [also] for assessing the degree those ends have been met. In practical terms, our colleges must value information far more than current practices imply.”
...most postsecondary institutions collect a rich array of data about their students that remain untouched for the purposes of analyzing impacts and outcomes.
Consider one example: student placement tests. Many colleges (especially large state institutions) require their students to take placement tests and/or “rising junior” examinations: the students who do not “pass” these exams must enroll in remedial courses. When they pass the remedial course, they move on, either to college level courses (placement tests) or to upper-class standing (rising junior exams).
What happens to the data about the student experience in these courses? Are the data – tests on mid-terms and finals, as well as other metrics – used to help assess the impact of the course or the effectiveness of the instructor? What about state systems that have a common (freshman placement or rising junior) exam but multiple remedial courses offered across multiple campuses? Are some courses, instructors, or institutions more effective than others? Are the data analyzed in a way that they can be used as a resource (“how do we do better?”) rather than a weapon (“your students failed!”)?
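The cross-campus comparison Green asks about can be posed very simply: in a state system with a common placement exam, what share of each campus's remedial-course completers go on to pass the follow-on college-level course? The records and campus names below are invented, purely to show the shape of the question; a serious analysis would adjust for student characteristics before comparing campuses.

```python
# Hypothetical sketch: compare remedial-course outcomes across campuses
# that share a common placement exam. All records are invented.
from collections import defaultdict

# (campus, completed remedial course and then passed college-level course?)
records = [
    ("North", True), ("North", True), ("North", False),
    ("South", True), ("South", False), ("South", False),
]

totals = defaultdict(lambda: [0, 0])  # campus -> [passes, students]
for campus, passed in records:
    totals[campus][0] += int(passed)
    totals[campus][1] += 1

rates = {campus: p / n for campus, (p, n) in totals.items()}
for campus, rate in sorted(rates.items()):
    print(campus, round(rate, 2))
```

Whether output like this is a resource or a weapon is not a property of the computation: the same pass-rate table can prompt “how do we do better?” at one system office and “your students failed!” at another.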
For thirty years, colleges and universities have operated under an outcomes mandate but without a consensual methodology for assessing outcomes.
Yet a quarter of a century after the 1984 Study Group’s Report, most colleges and universities have yet to “ensure that the mounds of data already collected on students are converted into useful information and fed back [to campus officials and faculty] in ways that enhance student learning and lead to improvement in programs, teaching practices, and the environment in which teaching and learning take place.”
But let’s acknowledge that the effective use of institutional data requires campus officials and policy makers to agree that the data will be used as a resource, not as a weapon. The challenge, as articulated by the 1984 Study Group, remains: how do we do better – how can we use and exploit data to aid and enhance program improvement efforts and professional development? In this context, value added analysis, done well and used appropriately, can be a powerful and useful resource.