On the "Secret Sauce" in College Access & Success, Pt. 1

March 22, 2017

By Bill DeBaun, Director of Data and Evaluation

In recent years, NCAN has spoken with more and more members about the value of putting data to work to improve program performance and scale program capacity. The shift toward measuring outcomes and a quantitative approach to improving program and, by extension, student performance has been a heartening trend. One frequent discussion point has centered on the concept of a “secret sauce” in the college access and success field.

The “secret sauce” in this case is a combination of services and supports that systematically and consistently leads to more successful student outcomes across programs. There is good news and bad news about the “secret sauce.” On the one hand, this quasi-Holy Grail or skeleton key for the entire field is likely to be exceedingly difficult to find in the real world. On the other hand, there are tools and approaches that can help individual organizations find their own version of the secret sauce, and these may be more within reach than most programs realize. This two-part blog post will consider both of the aforementioned hands in an effort to give members a better grasp (pun very much intended) of how to interpret cross- and within-program research findings.

Let’s start with the reasons why a member-wide secret sauce is likely to remain elusive. One of the key features of the NCAN membership is its variety. We have members of all shapes, sizes, and models. The number of dimensions across which programs can vary is quite large. Take, just for example:

  • Program size: Is the program serving 50 students a year? 500? 5,000? What are advisor-to-student ratios?
  • Program selectivity: Are there requirements students and families must meet to be served by the program?
  • Service delivery: Are services administered 1-on-1 in-person? In a group setting? Online? How often and for how long are services delivered?
  • Services provided: This is far broader than just the access services and success services buckets; each bucket includes a number of distinct services that programs may or may not choose to deliver. And even once a program establishes which services it offers, how those services are provided varies.
  • Service recipients: Who is being served by the program? Only students? Students and families? Are the students starting in middle school and served through college completion? Are they adult learners?
  • Geographic context: Does the program operate in a rural, suburban, or urban area? What is the college-going culture in the community being served? Are there several nearby educational institutions to which students can matriculate, or just one or two?

There are a number of other characteristics we could discuss, but hopefully you’re getting the idea. The tricky part is that even within these variables, which seem like they could be easily categorized, there are incredibly important qualitative differences. Consider a program that says it provides financial aid assistance to students. What exactly does that mean? Students and families handed a pamphlet on the FAFSA and students and families receiving hours of 1-on-1 assistance in completing the FAFSA are both receiving financial aid assistance, but there’s an ocean of difference between the two. That last sentence is value-neutral, by the way, and also depends on program-level context. A program serving 5,000 students annually by giving them FAFSA pamphlets is getting a little bit of intervention to a lot of students; a program serving 50 students 1-on-1 is giving a lot of intervention to a smaller number of students.

Back to the secret sauce concept. If we say that we believe there’s a field-wide secret sauce, what we’re really saying is that there is some combination of services that threads the needle among all of these different programs that vary in all of these very important different contexts and also improves students’ outcomes regardless of those students’ own very important contexts (race/ethnicity, first-generation status, academic preparation and capacity, etc.). That seems … unlikely.

Take it one step further. Imagine that, in an upcoming Success Digest, NCAN shared with members a report saying that students who received a package of interventions that included three sessions of academic counseling, an ACT/SAT prep curriculum, information about the FAFSA, and some assistance with college application essays were 25 percentage points more likely to enroll in a postsecondary institution in the first year following high school graduation. What questions would come to mind? “Who are the students?” “What does a session of academic counseling include?” “What did the ACT/SAT prep curriculum look like? How was it delivered and through what medium?” “What kind of information about FAFSA was provided?” These are worthy questions, and they matter a lot!

NCAN has spoken with members about the secret sauce concept, especially about the possibility that data from NCAN’s Benchmarking Project could unveil this combination of services and supports. We’re working on this now in an effort to provide some insights that all members can use. But members should not hold out hope that we will find some miraculous panacea for the challenging and rewarding work that we are collectively engaged in. This is because no data set can adequately capture all of the important shades of gray that exist between programs and their students. In the same way we know that an advisor’s knowledge about a student must supplement the 1’s and 0’s related to that student in a data set, we have to know something more about the programs in our data set to supplement their own 1’s and 0’s. We can control for some program-level characteristics and get something far more useful than comparing apples and oranges, but at the end of the day we need to understand that the closest we may be able to get is to compare apple varieties.
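To make “controlling for program-level characteristics” concrete, here is a minimal sketch of one common approach: a logistic regression of enrollment on a service indicator plus program-level covariates. Everything below is hypothetical and simulated for illustration; the variable names are invented, and this is not NCAN’s Benchmarking Project analysis.

```python
# A minimal sketch of "controlling for program-level characteristics."
# All data are simulated and all variable names are hypothetical; this
# illustrates the general technique, not NCAN's actual analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000

# Hypothetical student-level records pooled across many programs.
df = pd.DataFrame({
    "fafsa_assist": rng.integers(0, 2, n),           # received 1-on-1 FAFSA help?
    "program_size": rng.choice([50, 500, 5000], n),  # students served per year
    "selective": rng.integers(0, 2, n),              # program has entry requirements?
    "urban": rng.integers(0, 2, n),                  # urban vs. non-urban setting
})

# Simulate enrollment so that both the service and the program context matter.
log_odds = -0.5 + 0.8 * df["fafsa_assist"] + 0.4 * df["selective"] + 0.2 * df["urban"]
df["enrolled"] = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

# The coefficient on fafsa_assist estimates the service's association with
# enrollment after adjusting for the listed program-level covariates
# (comparing apple varieties rather than apples and oranges).
model = smf.logit(
    "enrolled ~ fafsa_assist + np.log(program_size) + selective + urban",
    data=df,
).fit(disp=False)
print(model.summary())
```

Even a toy sketch like this makes the limitation plain: the only program characteristics the model can adjust for are the coarse categories we manage to encode, and none of the qualitative shades of gray described above survive the trip into the data set.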

This might sound depressing or disparaging of the value of data, but it isn’t at all intended to be. Instead it is intended to talk honestly and realistically about the challenges of comparing data across programs. Comparing across programs is one tool in the toolkit, but it is far from the only tool. In the second part of this blog post, we’ll consider an approach that could better serve members by helping them to look within their own programs for answers.
