Dual credit programs deserve an honest look from Texas educators


Last month, a study made headlines with negative news about a popular program known as dual credit.

Dual credit is a program that allows qualified high school students to enroll in college courses and simultaneously earn college and high school credit. It’s popular with students because it allows them to get a head start on college, reduce the cost of college tuition and complete their degrees sooner.

While this study suggested that dual credit may not help students, and first-time college-goers in particular, almost every other study on this topic has reached the opposite conclusion.

All studies have limitations. In an ideal world, education leaders, news organizations and policymakers would weigh the preponderance of evidence produced by research scientists when making policy recommendations. But that nuance was lost in the feverish response to the well-publicized study of Texas dual-credit programs produced by the American Institutes for Research (AIR) and overseen by the Texas Higher Education Coordinating Board (THECB).

In fact, the preponderance of scientific evidence supports dual credit.

An emerging consensus of nearly a dozen published research studies finds that dual credit improves outcomes for students pursuing bachelor’s degrees and those pursuing associate degrees. In fact, two of the most thorough studies (here and here) analyzed the same Texas student population as the AIR study and both found dual credit had strong positive impacts on improving student outcomes. Furthermore, four studies (here, here, here and here) found that minority students or low-income students benefited the most from dual credit, a finding in direct conflict with the AIR report.

When a new study conflicts with a large body of evidence, it should cause people to pay attention and ask questions, but one of those questions should be about the quality of the outlier study. Here is what I discovered about the AIR report’s research design.

AIR researchers made a series of decisions, small and large, that all had the consistent effect of minimizing the estimated effect of dual-credit programs. They excluded the dual-credit providers in Texas that produce the greatest impacts on student outcomes: university-provided dual-credit courses and early college high schools, which are typically sponsored by community colleges. They also followed students only beginning with their junior year, even though dual credit is a high school intervention that can start as early as freshman year.

AIR researchers took two different approaches to estimating the effectiveness of dual credit. In the first, they measured the effect of participating in at least one dual-credit course. In the second, they measured the effect of earning increasing amounts of dual credit. Like testing the dosage of a medication, they wanted to see whether the benefit of dual credit grew with higher doses.
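
To make the distinction concrete, here is a minimal sketch of the two approaches using invented data and the statsmodels library; the variable names, effect sizes and model form are hypothetical, and the AIR report’s actual models are far more elaborate.

```python
# Hypothetical illustration of the two estimation approaches described above:
# (1) a binary "took any dual credit" indicator and (2) a dosage measure
# (credits earned). All data and effect sizes here are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
credits = rng.poisson(2.0, n).astype(float)    # dual credits earned (dosage)
took_any = (credits > 0).astype(float)         # binary participation indicator
# Simulated outcome: chance of finishing a degree rises with each credit earned.
finished = (rng.random(n) < 0.40 + 0.02 * credits).astype(float)

# Approach 1: effect of taking at least one dual-credit course.
any_model = sm.OLS(finished, sm.add_constant(took_any)).fit()

# Approach 2: dose-response effect of each additional credit earned.
dose_model = sm.OLS(finished, sm.add_constant(credits)).fit()

print(any_model.params)   # intercept and "any dual credit" effect
print(dose_model.params)  # intercept and per-credit effect
```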

Their first approach found that dual credit had virtually no effect on bachelor’s degree completion within eight years of graduating from high school. This finding was reported in the executive summary and the main body of the report. In contrast, the dosage approach found that students who earned the average amount of dual credit completed degrees at a rate 5 percent higher than students in the no-dual-credit comparison group. For those who don’t closely follow education research, a 5 percent increase is huge. This finding was buried in the appendix.

The major point here is that a reasonable change in how the researchers defined the treatment flipped one of the report’s most important findings from negative to positive. But that was not the worst fault of this study.

The real problem with the AIR study is that its authors used a statistical methodology while violating that methodology’s requirements. The end result was that most students who participated in dual credit were not counted as having done so, and a significant share of the students counted as dual-credit students never in fact took a dual-credit course.

You might ask: How could they do this? The answer is that they assigned each student a score that was supposed to reflect his or her probability of participating in dual credit, and they used that score to assign students to the study’s quasi-treatment and quasi-control groups. They were more accurate in predicting who would not participate in dual credit, but they were way off in predicting who actually did. Based on my estimates, for every 100 students who actually earned dual credit, 83 were placed in the quasi-control group. Moreover, one of every four quasi-treatment group members never set foot in a dual-credit class. The researchers chose this method even though actual records of who took and earned dual credit exist.
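
To see how that kind of misclassification can happen, here is a minimal, purely hypothetical sketch: when group membership is assigned from a predicted score rather than from observed participation, actual participants can land in the quasi-control group and non-participants in the quasi-treatment group. Every number below is invented for illustration; none of it is the AIR data or the AIR model.

```python
# Hypothetical illustration: assigning quasi-treatment and quasi-control groups
# from a predicted score instead of observed participation. All numbers invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
participated = rng.random(n) < 0.20     # students who actually earned dual credit

# A weakly informative score: correlated with participation, but imperfectly so.
score = 0.15 + 0.25 * participated + rng.normal(0.0, 0.20, n)

# Study-style assignment: call a student "treated" if the score clears a cutoff.
assigned_treated = score > 0.45

participants_in_control = participated & ~assigned_treated
nonparticipants_in_treatment = ~participated & assigned_treated

print("actual participants placed in the quasi-control group: "
      f"{participants_in_control.sum() / participated.sum():.0%}")
print("quasi-treatment members who never participated: "
      f"{nonparticipants_in_treatment.sum() / assigned_treated.sum():.0%}")
```

With these made-up parameters, a majority of the true participants land in the quasi-control group and a sizable share of the quasi-treatment group never participated, the same pattern described above.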

I replicated this part of their study using the same data sources and methodology described in their report. My estimates are likely to differ slightly from theirs because they did not fully report the details of this part of their research. But their violation of the statistical procedure they used is indisputable.

Their inaccuracy in predicting who participates in dual credit is most severe for the subgroups disproportionately underrepresented in dual credit: African American, Hispanic and economically disadvantaged students. For example, 99 percent of African American students who earned dual credit were classified as unlikely to do so, while half of those classified as likely dual-credit participants never took a dual-credit course.

In conclusion, AIR researchers made a series of research design decisions that consistently diminished the estimated effect of dual credit on student outcomes. Moreover, they miscategorized students who did not earn dual credit as members of their quasi-treatment group and students who earned dual credit as members of their quasi-control group. This error discredits the study’s conclusions. Officials of the THECB should withdraw their previous statements about dual credit and ask the AIR researchers to reanalyze the data using a defensible research design.

The University of Texas has been a financial supporter of The Texas Tribune, a nonprofit, nonpartisan news organization that is funded in part by donations from members, foundations and corporate sponsors. Financial supporters play no role in the Tribune's journalism. Find a complete list of them here.

Mike Villarreal

Founding director, Institute on Urban Education, UT-San Antonio

@MikeVillarreal