State government has recently put higher education in its crosshairs. As Baton Rouge has done for years with K-12 education, our leaders want more productivity from higher education-- while supplying grossly inferior resources, of course. One of the metrics the Legislature and the various college boards want to see improve is the graduation rate. University funding will be tied, in part, to success in that area.

At the same time, while lobotomy-level cuts are being planned for our state's intellectual centers, some schools are insisting that their funding should be protected, in part because of their high graduation rates.

Both of these are hazardous approaches. The UL System has already pointed out that the current metrics do not accurately reflect effectiveness, and that discrepancy must be addressed. But the problems are deeper than that.

First, consider that the schools with the highest graduation rates are also those with the most selective admissions, and because of both of those factors, they also receive the largest state funding per student. They damned well better have the highest graduation rates. So when we compare them to schools with less stringent admissions criteria and fewer resources, we are comparing apples and oranges... or maybe regular apples and crabapples.

For instance, LSU has the highest graduation rate among Louisiana colleges-- 61%-- but it is also the only public institution in Louisiana with admission standards at its level, and LSU enjoys per-student (FTE) funding that is much, much higher than that of other state schools. So to fairly judge LSU's graduation success, we must look to other states.

LSU is an SREB Doctoral I institution, ranked as a Tier I National University under USN&WR criteria. When we look at all 18 schools that are both Tier I and Doc I, the average graduation rate is 73%, well above LSU's. In fact, LSU has the third-lowest rate in the group of 18, despite annual FTE funding of $17,609, higher than the Doc I average of $17,034. Analyses like these illustrate why raw graduation numbers alone are insufficient for accountability purposes.

Of course, a simple explanation would be that LSU does not currently draw the same caliber of freshmen as its peer institutions do, and it would be interesting to see whether that is true. That defense, however, would apply equally to every school in Louisiana, and it is precisely the point here: we must consider the strength of the incoming freshmen to appropriately measure graduation outcomes.

To address the problem of varying strengths of freshman classes at different institutions, we can compare the ACT/SAT scores of incoming freshmen with subsequent graduation rates. By looking at graduation rates for any particular ACT score (and, for finer analysis of effectiveness, at the subsections in English, mathematics, reading, and science, or even the more focused subscores beneath those), we can make better in-state and out-of-state comparisons.
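To make the idea concrete, here is a minimal sketch of that calculation-- purely illustrative, with an invented file name and invented column names-- using pandas over student-level records of entering ACT scores and eventual graduation:

```python
# A minimal sketch: graduation rates conditioned on entering ACT score.
# The file name and column names are hypothetical, for illustration only.
import pandas as pd

# One row per entering freshman: institution, composite ACT score, and
# whether that student eventually graduated (1) or not (0).
students = pd.read_csv("freshman_cohorts.csv")

# Graduation rate for each (institution, ACT score) pair, so that schools
# are compared on comparable students rather than on raw overall rates.
rates = (students
         .groupby(["institution", "act_composite"])["graduated"]
         .mean()
         .rename("grad_rate"))

# Example: how do schools fare with students who entered with an ACT of 22?
print(rates.xs(22, level="act_composite").sort_values(ascending=False))
```

The same grouping could be repeated on the English, math, reading, or science subsections to compare schools on still more closely matched students.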

There is a second problem, however. Graduation rates are a murky conflation of educational effectiveness and academic standards. As an illustration, some years back a comparison of athlete graduation rates across the state showed that the university with the very highest success rate with student-athletes was also one of the very weakest schools in the state. It is possible, of course, that the college in question was actually doing a superior job of educating athletes. But given its relative weakness in educating and graduating its other students, that is highly unlikely; the more likely explanation is that the academic standards for athletes were set too low.

Conversely, some years ago here at UL an examination of our low graduation rates in many disciplines revealed that students weren't graduating because they couldn't complete their math requirements. There were several reasons for this, but the principal one was that the math department refused to lower its standards. The mathematicians actually expected students to understand and master the subject matter, and it was suspected that some of the other schools in the state had watered down their math courses.

So do graduation rates reflect educational effectiveness, or academic standards? It's a classic case of the fox guarding the henhouse. The people producing the product (education) are the same folks telling us how good a job they are doing (standards, i.e. grades and pass rates). I suspect that currently most faculty are trying to honestly align the two, but when funding starts to hinge overmuch on grades, we can be confident that administrators will pressure faculty to lower their standards, and that they will generally succeed.

So by strongly linking funding and graduation rates without validating them against external standards, we might actually undermine education in Louisiana, not improve it. To address this second problem of effectiveness vs. standards, we need standardized exit metrics.

This, however, raises a third problem. I have not studied the following thoroughly, so I cannot be sure, but I suspect there has been a quiet national campaign to make exit metrics less available.

The currently available exit metrics-- pre-professional school exams, professional licensing exams, the GRE*, and graduate/professional school acceptance rates-- have become harder to find over the years. Some years ago UL officials discovered that our graduates had the highest pass rates in Louisiana on just about every licensing exam they took, even when Louisiana's private schools were included. But those comparative numbers have slowly become unavailable. Likewise, professional graduate schools have begun discouraging colleges from boasting about their acceptance rates.

Why is that? The various testing and admissions groups still collect the data, and they still inform each institution how its students and graduates fared. Publishing the comparative data is essentially effortless, but it is generally no longer done. I wonder if perhaps 'inferior' schools across the country also began discovering that their graduates were outperforming those of the 'best' schools on these exams.

It is easy to invoke conspiracy theories on almost any topic, but consider: if the 'best' (i.e. the largest and most powerful) universities were dominating these tests-- as well they should, given their funding and admissions standards-- they should be eager to share the data that proves it. On the other hand, if the results of various exit exams begin to call into question which schools are 'the best', then those traditionally perceived as the best would be threatened. And those schools are really the only ones with the clout and the connections to ensure that comparative data is no longer published. So we have to wonder.

I do not make these comments with the intent to launch a witch hunt, but rather to serve as a warning. If state officials wish to implement accountability measures in line with my suggestions here, they will want to approach the problem tactically, and warily. If schools are resisting these sorts of comparisons, particularly those schools perceived as 'superior', then that resistance may actually emphasize the importance of these metrics, and deeper questions may be warranted.

So by looking at standardized incoming and outgoing student metrics, we can find out which school is truly the best.

Actually, we will do no such thing. In fact, one of the casualties of such an approach will be the very concept of a single 'best' university.

Let me explain. One of my high school math teachers was one of the best teachers I have ever had. She was an awful teacher.

That seems contradictory. She wheedled, and harassed, and even publicly embarrassed us. For the students in the class who were not intimidated by this-- and who, like me (I blush to admit), were perhaps not the hardest working-- she was a great motivator. "Mr. Abraham, you missed #17 on the exam?!? Well, Mr. Pitzer got it." She would say such things in front of the whole class.

It got me off my duff. But for the bulk of the class, that same treatment was toxic, and many students hated her. So for me, she was an excellent teacher. But considered on the whole, for most students she was not a strong teacher.

The reverse is also true. I can remember a popular teacher in medical school who was excellent at simplifying complex topics. I thought he was great.  A classmate of mine who wanted more rigorous treatments of the subject matter, however, was disappointed in him.

That's an important insight. As we begin using standardized entrance and exit exams to look at added value, we will find that for a student with an ACT of 26, school X may be the best place to enroll, while for a student with an ACT of 19, school Y would be better. In such a case, how can we say which school is 'best'?

As the very concept of 'the best' school becomes a casualty of this approach, the overall system will nevertheless improve, because by disenthralling ourselves from the idea of a 'best' school, we will be able to educate our citizens much more effectively. We will be able to develop a large 3D matrix of ACT scores, majors, and institutions. School A will be the best for a prospective accounting major with an ACT of 27, school B will be the best for an entering nursing major with an ACT of 21, while for an English major with any ACT, another school might be the best.

If we track these data for all incoming freshmen, and not simply for graduating seniors, we will be able to follow students even as they change majors through their college careers, and to provide what I call 'selective pre-admissions', as sketched below. A student with a given ACT who is set on attending a particular institution can get an idea of which majors give her the best chance of graduating. Conversely, a student with a given ACT and a specific career goal can select the institutions most likely to help him succeed.
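As a purely illustrative sketch-- the file, column, school, and major names are all hypothetical-- the 3D matrix and both 'selective pre-admissions' lookups might look like this:

```python
# A minimal sketch of the ACT x major x institution matrix and the two
# 'selective pre-admissions' lookups. All names here are hypothetical.
import pandas as pd

# One row per entering freshman: composite ACT score, (final) major,
# institution, and eventual graduation status (1 or 0).
students = pd.read_csv("freshman_cohorts.csv")

# The 3D matrix: graduation rate in every (ACT, major, institution) cell.
matrix = (students
          .groupby(["act_composite", "major", "institution"])["graduated"]
          .mean())

def best_institution(act, major):
    """For a student with a given ACT and a specific career goal:
    the school where similar students graduate at the highest rate."""
    return matrix.loc[(act, major)].idxmax()

def best_majors(act, institution, n=5):
    """For a student with a given ACT who is set on one institution:
    the majors giving her the best chance of graduating there."""
    cell = (matrix
            .xs(act, level="act_composite")
            .xs(institution, level="institution"))
    return cell.sort_values(ascending=False).head(n)

print(best_institution(21, "Nursing"))   # e.g. 'School B'
print(best_majors(27, "School A"))
```

Note that nothing in such a table singles out one 'best' school; the answer depends entirely on which cell you ask about.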

The beauty of this approach is that, as the concept of the 'best' school disappears, different institutions, and different departments within each institution, can begin strategically focusing their attention on particular cohorts of entering freshmen. As that happens, education will improve for everyone, and we can begin evaluating the effectiveness of each school and each department by assessing the value added to each particular cohort of students, rather than by looking at the unreliable metric of raw graduation rates.

To make this work, we will have to require that all graduating seniors take some exit exam. For those not taking a professional or pre-professional exam, the GRE should suffice. We do not need to mandate any particular score in order to graduate, only that the data be collected and reported.

A final caveat: these metrics will not solve the problem, not in any final way. There is a good deal of data that shows that exam scores are strong predictors of one, and only one, thing:  the subject's ability to take yet more exams. I will address this in a future essay.

But it is a start. If we wish to look at the effectiveness of undergraduate education, this will give us beginning data, from which we can proceed.

For further reading, see The Pelican Institute's critique of Louisiana's colleges, and UL System President Moffett's response to that critique.

*The Graduate Record Examination, the standard admissions exam for most graduate schools.

