Tag Archives: HigherEd

High School FAFSA Filing in the Midwest

After I published a recent blog post on HS FAFSA filing rates in Washington State, Justin Chase Brown (@jstnchsbrwn), Associate Director of Financial Aid at the University of Missouri, asked if I could put together something similar for a collection of states in the Midwest. Like the Washington data, this is an imperfect snapshot because we have only been given access to one spring snapshot on 4/11 (a time point of varying value depending on the filing deadline of the state/institution), but it gives a good sense of how much variation there is in the timing of filing between states (compare Minnesota to Indiana) and then between high schools within states.

As I suggested in the original post (which also provides some context on why this issue is so important), there would seem to be real untapped potential here for post-secondary institutions to look more closely at the high schools in the regions they serve in order to target support services- it’s just that the data in its original form (one-per-state Excel spreadsheets) isn’t very user friendly for those purposes. Remember, these figures are the percentage of students who completed their FAFSA filing for the 2013-14 school year by April 11, 2013, out of those who at least submitted by December 2013. These are students who are either attending or taking a major proactive step toward attending higher education, and many of these students will qualify for need-based aid from their states and institutions if they meet a filing deadline. Colleges committed to access should be asking what they can do to ensure these students file early enough.


High School FAFSA Filing in Washington State? See for Yourself…

The federal government recently produced a set of tables, one for each state, listing the number of FAFSAs filed by high school as of the end of February, June, and December.

The numbers in these tables are particularly interesting given that, although the federal government has a lax deadline for filing (only requiring that a FAFSA be submitted by the close of the following academic year), many states and individual colleges only guarantee aid to those students submitting FAFSAs before “priority deadlines” that are typically much earlier. In some states, a FAFSA submitted just a few days late can mean a loss of nearly half of a student’s potential need-based grant aid.

In some ways Washington has been incredibly proactive on the financial aid front. First, the Evergreen State offers some of the most generous need-based awards in the country, providing more per student than any other state, with grants of up to $10,868 at Washington colleges and universities.

Second, the College Goal Washington program holds events throughout the state, supported partly by volunteers, to increase FAFSA filing by early February.  A solid, proactive step that more states should take.

Yet part of the reason that current levels of outreach are necessary, and possibly still inadequate, is that the state’s newest financial aid program, College Bound (which provides a commitment of college funding to 7th and 8th grade students in foster care or eligible for free/reduced price lunch), has a particularly early deadline. The February 1st cutoff during the spring before the fall of attendance is among the earliest in the country, and is nearly a year and a half before the federal FAFSA filing deadline.

Confusingly, many Washington institutions offer their own need-based aid, have their own priority filing deadlines, and typically only guarantee those funds to students filing before the priority deadline. Some share the February 1 target date with College Bound, but many are pegged to dates throughout the spring from February 1 to April 15, making a concerted statewide communication push all the more challenging.

Given that, I thought it might be interesting to try to make the new high school FAFSA filing data a bit more digestible (through some visualization) and contextualized against other state data sources (all from the Washington State Office of the Superintendent of Public Instruction, although painfully spread throughout the site). After merging the data sets, specifying zip codes for all of the schools, generating some variables for school demographics and grad rates, and pulling in some additional district staffing-level information to get the FTE counselors per 100 students, I was able to produce the interactive map below and attempt some regression analysis with the new data.
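
To make the wrangling concrete, here is a minimal sketch of that merge in Python/pandas. The file names and column names are hypothetical stand-ins; the actual federal tables and OSPI files are laid out differently.

```python
# A minimal sketch of the merge described above (pandas assumed).
# File and column names here are hypothetical stand-ins.
import pandas as pd

fafsa = pd.read_excel("wa_fafsa_by_hs.xls")         # federal FAFSA table
demo = pd.read_csv("ospi_school_demographics.csv")  # OSPI demographics
staff = pd.read_csv("ospi_district_staffing.csv")   # district FTE staffing

# Join school-level files on a shared school code, then attach
# district-level staffing through each school's district code.
schools = demo.merge(fafsa, on="school_code", how="inner")
schools = schools.merge(staff, on="district_code", how="left")

# One of the generated variables: FTE counselors per 100 students.
schools["counselors_per_100"] = 100 * schools["counselor_fte"] / schools["enrollment"]
```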

The map focuses on a particular high school-level ratio: the number of FAFSAs completed by February 28 (the earliest date the government provides; this is before most institutional priority deadlines in Washington but after the College Bound deadline) divided by total FAFSAs submitted by June (a loose proxy for planning to attend college in the fall). The closer to 1, the more FAFSAs are likely getting submitted early enough to receive state and institutional aid. Although I considered using 12th grade enrollment or December submission numbers as the denominator, the former might overemphasize the influence of graduation rates and the latter might capture students intending to begin in the spring semester. What I’m really interested in is: given that early filing means more access to aid, how much variation is there between schools in rates of filing by the “priority deadline period” among students intending to attend college the following fall?
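
Continuing the hypothetical sketch above, the focal ratio is a one-liner once both counts sit in the merged table:

```python
# Early-filing rate: FAFSAs completed by Feb 28 over FAFSAs submitted
# by June (column names are, again, hypothetical stand-ins).
schools["early_filing_rate"] = (
    schools["completed_by_feb28"] / schools["submitted_by_june"]
)
```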

The circle size corresponds with the total number of FAFSAs submitted by June, and the color scale from red to green corresponds with the rate of Feb 28 filing.  The sliders allow you to adjust the schools shown, and you can hover over individual institutions or select subsets to dig deeper.  The default view is zoomed in a bit to focus on the greater Seattle area, but the rest of the state is there if you pull back.

The short answer? There’s a lot of variation, and it isn’t explained away by the school characteristics that you might guess. Neither “counselors per 100 students” nor “% of underrepresented minorities” was significantly predictive in the models I tried. A school’s graduation rate was significant, but without much power- a 1% increase in graduation is correlated with about a .03% increase in the filing rate (there are some basic scatter plots on the tabbed page of the visual below to provide a sense of the data). All told, even after adjusting a bit and removing outliers, these variables explain only about 20% of the variation in school filing rates.
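
For readers who want the flavor of that analysis, a plain OLS version might look like the sketch below. The post doesn’t specify the exact model, so treat the formula and column names as illustrative:

```python
# Illustrative only: the post doesn't give the exact specification.
import statsmodels.formula.api as smf

model = smf.ols(
    "early_filing_rate ~ counselors_per_100 + pct_urm + grad_rate",
    data=schools,
).fit()
print(model.summary())
# Per the results described above: grad_rate significant but weak
# (~0.03 point per 1% graduation), overall R-squared around 0.20.
```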

That has real implications for programs like College Goal Washington, but also for individual high schools and colleges in Washington state.  Too often the focus is on supporting applicants who seek out help rather than proactively targeting students in need of guidance.  Particularly in states like Washington, where extensive need-based aid is paired with early deadlines, students and institutions alike have a great deal to lose.  Washington’s post-secondary institutions might do well to spend some time noting the “red” schools above within a few hours’ drive… and then perhaps putting in a request for the college van.

Have insight on the variation, other states worth exploring, and ways to make the red dots green?  Share them below or on Twitter @aroundlearning

Seeing Education Across Countries and Time

You don’t have to be an expert in comparative education to know that international aid agencies and developing countries pour a huge percentage of their spending into education because they see educational attainment (sometimes broadly described as “human capital”) as linked to health and, ultimately, economic outcomes. Economists and sociologists in particular spend a good deal of time working with large datasets trying to parse out the effects of completion on everything from teen pregnancy to GDP.

On an international scale this can be notoriously challenging; record-keeping quality and frequency vary enormously between countries. In recent years international organizations including the UN, the World Bank, and the OECD have made significant strides in capturing and standardizing metrics for school attainment and completions. This has been accomplished largely through growing inter-agency collaboration, particularly around the so-called “World Development Indicators.” Reflecting the strategy of the agencies since the late 80s, these indicators focus chiefly on primary school completion and, in particular, on male/female disparities (tangentially, these are more complex than they sound- see here for one of several analyses showing that the male/female gap is decreasing both because of a rise in female completion and a concurrent decline in male completion, along with some of the potential confounding effects of culture). Even with primary school indicators, though, there are big gaps in data availability, particularly as you move back in time, and these are often magnified when looking at less precisely tracked post-secondary indicators. How, then, do economists think about something even larger in scope and more granular in detail- average total education (or total post-secondary education) for the entire population of scores of countries?

How Economists See It

Economists Robert J. Barro and Jong-Wha Lee, of Harvard and Korea University, respectively, have some experience tackling this problem. They released the first Barro-Lee dataset in 1993 after compiling a wide variety of census data collected by national and international agencies and using what they refer to as a “perpetual inventory method” to estimate the number of graduates at each level; essentially, they used a combination of enrollment and completion rates paired with entering numbers for the last year in which they had data, then projected them forward into the known “pool” of the population above age 15 and above age 25. While a quantum leap forward in ’93 (and subsequently updated in 2001), the estimates came under some criticism in the late 2000s for having some strange jumps between time periods for some countries where more reliable data was hard to find, and less-than-accurate estimates for some countries where reliable data became more readily available.
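
To make the “perpetual inventory” idea concrete, here is a toy sketch (my own illustration, not Barro and Lee’s actual procedure): take the graduate stock from the last benchmark census and roll it forward each period using enrollment and completion flows.

```python
# Toy perpetual-inventory projection (illustrative, not Barro-Lee code).
def project_forward(stock, new_entrants, completion_rate, survival=1.0):
    """Graduate stock next period = surviving stock + new completers."""
    return stock * survival + new_entrants * completion_rate

# Roll a country's stock of completers ahead two periods from a
# benchmark census count, with made-up enrollment/completion numbers.
stock = 1_200_000
for entrants, comp_rate in [(300_000, 0.65), (320_000, 0.68)]:
    stock = project_forward(stock, entrants, comp_rate)
print(stock)  # estimated stock two periods past the last census
```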

Enter Barro and Lee, circa 2013, with a new methodology. Without getting too deep into the weeds, some of the biggest changes include estimates based on new five-year age groupings (to more exactly account for enrollment patterns), use of previous and subsequent enrollment data to weight estimates for the periods in between, and accounting for mortality rates. For this last piece, they incorporate the fact that, on average, more education is associated with a longer life expectancy, meaning that in the 65+ group there is likely to be a growing skew towards those with more education.
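
The mortality adjustment, in particular, is easy to illustrate with a toy calculation (mine, not the authors’): if educated members of a cohort survive at a higher rate, the educated share of the surviving 65+ group drifts upward even with no new schooling.

```python
# Toy mortality adjustment (illustrative, not the authors' code).
def mortality_adjust(share_educated, survival_edu, survival_noned):
    """Educated share of a cohort after differential survival."""
    edu = share_educated * survival_edu
    noned = (1 - share_educated) * survival_noned
    return edu / (edu + noned)

# A cohort that was 10% educated, with higher survival for the educated:
print(mortality_adjust(0.10, 0.85, 0.75))  # ~0.112, up from 0.10
```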

All of this, to remind you, is being done with just a handful of actual full-scale census samples- the vast majority of countries included here have fewer than five between 1950 and 2010 (the range the dataset examines), and about 15% of the countries that Barro and Lee actually include in their data set have only one census. They “assume that tertiary completion is relatively stable” for the 15-19 and 20-24 age groups, which seems like a big assumption (more on that in a bit), but, remember, in the absence of better data this is just about the only game in town.

Actually Seeing What Economists See

Here are some interactive visualizations built on the Barro-Lee dataset (publicly available here); they are the only ones I know of built on this data set (if you know of others, please post them below). These visualizations all use the updated 25+ population estimates; it would be great to add some that focus on the 24-29 age group to get a better sense of the changes in tertiary completion in that traditional demographic range, but I found some odd discrepancies in the 2010 completion percentages (almost all of the advanced countries had sharp declines, some by more than half) and have followed up with the authors for comment (if anyone else can provide some guidance there or is not able to replicate the problem, please let me know).

First, let’s look at a metric that you won’t often find at the population level across countries- the overall percentage of post-secondary completers, from 1950 to the present.

Here’s a similar time-lapse map, this time focusing on the relative differences between the overall and female rate of post-secondary completion in the national population (apologies if this enters me into the blue-for-boys, pink-for-girls nature/nurture debate, but I use that shorthand here).  The color is calculated by subtracting the overall rate from the female rate, with negative numbers showing up as red.  Note that because I’m using the overall rate as the comparison (as opposed to the male rate) differences are muted a bit.

To watch both overall post-secondary education and the associated female ratio change at once, controlled by one tracker, go to the dashboard here.

If we assume that the distribution of human capital across a country has implications for productivity, and that its effect is at least somewhat additive, then one way to compare nations is to multiply the average years of tertiary education by the population of the country to get a sense of the total years of tertiary education represented by the population. Because Barro and Lee use “4 years” as a catch-all for completing tertiary education and “2 years” as a catch-all for “some college,” it’s possible that these numbers actually underestimate a bit in the modern era, as an increasing number of college graduates pursue graduate school, particularly in a subset of the advanced economies. This bubble chart color-codes circles based on their geographic region (for developing countries) or advanced economy status (blue). While the movement can be headache-inducing if you watch too many loops, it provides both a sense of the overall growth of international tertiary education and a sense of relative changes between countries while accounting for their populations. Perhaps most powerful is the contrast between the overwhelming dominance of the US until the 1980s or so, when another block of countries starts to catch up, along with a reminder that while the United States increasingly lags other countries in completion rates, we still maintain a dominance in overall years of tertiary education in the broader population (though that’s something we can expect to last for perhaps only another couple of decades). Essentially, this captures the effects of national higher education policy for each country, with a lag for it to become distributed throughout the population.
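
The metric behind the bubble chart is simple to reproduce; assuming a hypothetical dataframe `bl` of Barro-Lee country-year rows, it is just:

```python
# Total years of tertiary education embodied in the population
# (bl is a hypothetical dataframe of Barro-Lee country-year rows).
bl["total_tertiary_years"] = bl["avg_yrs_tertiary"] * bl["population"]
```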

This circle chart shows the changes in each of the measured levels of attainment- it provides a sense of the areas where progress has been greatest over time, but also of areas where emphasis appears to have been focused on a particular level of attainment (primary or secondary), with less focus (or at least success) in retention to the next level of education. One clear example is Sub-Saharan Africa, which surpassed South Asia in primary school attainment but has only fallen further behind the region in secondary and tertiary schooling during that same period. The comparison is similar between Latin America and the Middle East/North Africa.

On the largest scale, though, we often talk about achievement gaps within countries, which are often characterized by whether they are narrowing or widening, regardless of whether raw achievement (however measured) is going up for both groups. This can be a helpful and thought-provoking way to think about disparities in education outcomes between countries or even, as with the line chart below, between groups of countries. The chart below uses a simple advanced economies vs. developing economies breakdown (the filter at right can be used to adjust the regions included in developing countries). There is a clearly visible trend across the board of a gap that is narrowing for primary education but widening for secondary and tertiary schooling (except possibly for Europe and Central Asia, where the gap is more stable). This, of course, is happening in a context where developing countries are increasing along all levels of education, but the advanced economies are increasing at an even faster pace. Part of the reason this is not the case for primary schooling is that the advanced economies have largely topped out; six years is considered the maximum for primary schooling, so advanced economies have largely already hit a “ceiling” on their attainment. By contrast, tertiary schooling is on the rise across all countries. There are obvious implications here for those interested in questions of inter-country inequality, and for those who study whether a particular level of education has stand-alone value or whether its primary worth lies in its relative rarity in the marketplace. That’s a question of increasing importance as international education development moves beyond the relatively clear-cut outcomes of literacy, health, and basic math in primary school.
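
The gap series behind that line chart boils down to a grouped mean and a difference; continuing with the hypothetical `bl` dataframe:

```python
# Advanced-vs-developing gap per year ('advanced' is a hypothetical
# boolean flag on each country-year row).
means = (
    bl.groupby(["year", "advanced"])["avg_yrs_secondary"].mean()
      .unstack("advanced")
)
means["secondary_gap"] = means[True] - means[False]  # widening over time
```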

For those who like to glance through some of the underlying data, below is a chronological crosstab that looks at these same indicators- average years of primary, secondary, and post-secondary education- by country, using the Barro-Lee estimates.  As with many of the other vizzes in this post, just use the slider to change the reference year.

As always, feedback, questions, and new angles to explore are always welcome.

Everything You Know About the CLA Is Wrong

The Curious Evolution and Disruptive Future of the Collegiate Learning Assessment

This week the Wall Street Journal carried a front-page article entitled “Colleges Set to Offer Exit Tests” (the online title is different, and misleading in a different way). The prominent placement was not surprising given President Obama’s recent tour of colleges promising a more explicit look at the outcomes of higher education than rankings currently provide (more on outcomes and rankings in two upcoming articles…). Still, the title caught me a little off-guard given that I work in higher ed and had not yet received the memo that we were all administering exit tests this year.

As it turns out, the article is a glancing look at administration of the Collegiate Learning Assessment Plus (formerly the CLA, now abbreviated CLA+ rather than the less fortunate acronym “the CLAP”) at about 200 colleges this fall. It’s important to note up-front that the CLA+ is neither an exit test nor a “post-college SAT” as suggested by the online headline- and I’ll explain why, shortly. Yet, while the article is problematic in lots of ways (starting with its title), it is the first CLA+ mention of any length that I have seen in the popular press, and it seemed worthy of a follow-up. That’s because while this limited administration of the CLA+ is far from revolutionary, it’s very possibly the first step towards something much bigger.

[Note: since writing, I’ve noticed that the Chicago Tribune has posted a similar article that borrows the “exit exam” language for its title, as have Fox Business, Business Insider, and that beacon of higher education investigative reporting: Cosmopolitan Magazine. The CLA marketing team must be on the move!]

Most folks who work in or around higher education know that the CLA didn’t spring fully formed from the head of Zeus this past week. Developed in the late 90s and released in the year 2000 with funding from the Council for Aid to Education (a major tax-exempt charity started by a collective of businesses), it has become a sort of darling of accreditors and assessment groups like the Wabash Study while managing to gain the trust of a wide cross-section of traditional public and private institutions.

Unlike similar tools developed by the ACT (the CAAP) and ETS (the Proficiency Profile), which have been more widely adopted by public two-year institutions and the subset of public four-year institutions participating in the VSA, the CLA has seen penetration in many “elite” privates and publics- last year it was administered by Amherst and UT-Austin, and dozens of highly selective schools that wouldn’t touch something like the ACT’s CAAP with a ten-foot pole have used the CLA at least once.

Part of the very reason these schools have found the CLA appealing is that it is not an “exit test”- a term typically reserved for an exam that one must take, and often must pass, to graduate. In fact, using it as an exit test was impossible- the CLA has noted from the outset that, unlike tools like the SAT, the CLA was intended only for measurement at the institutional level, primarily for internal assessment. Further, the CLA has always been (and thus far still is) entirely voluntary- institutions can’t require it of their students. Instead, it is administered to a sample of both first-year students and seniors who often receive some sort of incentive in exchange for the roughly two hours spent completing the computer-based assessment, which consists of a “performance task” requiring students to sift through a digital document library to answer a series of questions in a process intended to replicate “real-world” decision-making. The written responses are scored by a computer program, which the CAE argues has reliability levels similar to two trained human graders (but which some critics have suggested can be gamed with nonsensical answers).

The CLA uses a regression model for its institutional reports (which has become at least less questionable since they switched to HLM in 2010) to control for student and campus characteristics and show change in students at the group level over time. After the administration, schools get a student-level summary file and an institutional report showing whether their students improved more or less than predicted. Following pushback when institution names were released along with their scores after an early administration, later administrations simply provided reference to a “comparison group” and explicitly discouraged institutions from publicizing their own results.
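
The value-added logic is worth seeing in miniature. This is not CAE’s actual model (they moved to HLM in 2010); the sketch below is a plain-OLS stand-in over a hypothetical institution-level dataframe `inst`, just to show the “actual minus predicted” comparison schools receive.

```python
# Sketch of the value-added comparison (not CAE's actual HLM model).
# 'inst' is a hypothetical dataframe of institution-level averages.
import statsmodels.formula.api as smf

fit = smf.ols("senior_cla ~ entering_sat + pct_pell", data=inst).fit()
inst["predicted"] = fit.predict(inst)
inst["value_added"] = inst["senior_cla"] - inst["predicted"]
# Positive residual: students improved more than predicted; negative:
# less than predicted -- the gist of the institutional report.
```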

The important take-home here is that the form and function of the CLA up until 2012 were targeted at a core user base that bought into the concept of the CLA because it provided a form of direct assessment that could be used internally and reported to accreditors without risking a challenge to their reputation. I would suggest that the vast majority of institutions using the CLA this fall will be using it in the same way that they have for the past dozen years- as voluntary administrations to gather internal institutional metrics and satisfy accreditors. But this year they, and even those institutions that took it years ago, will be complicit in something larger that the Journal article correctly alludes to- setting up the CLA as a potential individual-level credential.

Assessment professionals received the first glimpse of this change as part of an email from the CAE in the fall of 2012.  Here’s an excerpt:

“…it is with tremendous excitement that we share our next enhancement, CLA+, a version of the CLA that is designed to provide reliable information at the student (in addition to the institutional) level.

 Launching in beta this spring and more formally next fall, CLA+ will, among other things, allow faculty to share formative feedback directly with students and open use of the assessment to the unique needs of each campus. The development of this enhanced version of the CLA will also allow the reporting of even more subscores (like scientific and quantitative reasoning, and critical reading and evaluation)”

Did you catch it? The opening phrase is marketing language targeted at the CLA’s traditional core audience- faculty committees and assessment contacts at regular ol’ universities and liberal arts colleges- and that’s certainly how the second paragraph reads, quite intentionally. But hear-you-me, the claim buried in that first sentence, that this new version can now provide “reliable information at the student…level,” marks the opening gambit to become a whole ’nother kind of heavy-hitter in higher education and beyond.

To do this, the CLA is taking advantage of an unspoken but widely understood bargain made by these traditional institutions- they were willing to administer the CLA and to suggest that it was at least the “most accurate metric available” for measuring what college is “really supposed to do”, as long as this satisfied accreditors’ demands for direct assessment without posing any risk to their reputation.  In the old model, institutions remain the keeper of the keys for certifying whether students have completed a college education, and the CLA is a tool they use privately to improve.

Now, for the first time, the CLA is able to say to for-profits and third-party providers (who may in turn target the test to adults entirely outside of the traditional system), “Here is a metric that some of the country’s most elite colleges have said is the best tool for assessing their students’ progress. If your students take this and do well, they must be as well prepared as students in those colleges.” I predict it won’t be long at all before we see those exact claims coming from these spaces- for-profit StraighterLine is already advertising the CLA+ at a heavily marked-up price for use as an additional credential to offer employers.

Read in this light, the other changes from the CLA to the CLA+ take on new implications:

  • The addition of a new explicit quantitative metric (mimicking the verbal/math component of the SAT/ACT and establishing itself as a “complete” assessment of workforce skills)
  • A shift in the scoring to the “more recognizable” 1600 point scale (the metric used by the SAT from almost its inception until a few years back when the new and still inconsistently-used writing section bumped the top score up to 2400- nearly every employer would still be more familiar with the 1600 scale)
  • No longer date-dependent (the old CLA required you to administer the exam to students within a limited testing “window” in the fall and spring- Now, if you want to take the CLA on Christmas Eve or the 4th of July, go for it!)
  • Now openly and explicitly about assessing individual students

It is certainly true that if you believe the CLA is a reliable and valid tool, then it continues to have some real new value for traditional colleges- as a potential placement test, an assessment of subgroups of students (like a remedial pre-college program that meets in the summer), or possibly even a service for interested students seeking formative feedback or a supplemental piece of evidence. Yet all of this pales in comparison to the new potential for the CLA to be used by non-traditional institutions and the for-profit third-party education space- and you can see that in the shift in its marketing.

There have been efforts to use the CLA in a more public and comparative way before- Academically Adrift, 2011’s higher-education-is-failing beach-read, purported to show that the old version of the CLA was reliable and valid at the individual level (a claim that CAE took up after the publication of the book despite continued questions about its methodology) and that most students improved very little over the course of their college careers.  In recent years, community colleges and for-profits, having little to lose and potentially much to gain in the way of reputation, have pushed for the ability to publicize CLA scores- all to little avail.

This Time Could Be Different

The CLA+, though, represents the first concerted push from CAE itself to become a major player in the individual-level assessment business (a multi-billion dollar industry, unlike the small-change niche of institutional assessment), and the timing has never been better. Let’s consider the higher education ecosystem the CLA+ is stepping into:

  • There is increasing public distrust of the value-added by higher education compared to its cost, fueled in part by rising tuition costs and student debt; institutions with more to lose than they have to gain (largely elite institutions where enrollment streams are built on “reputation”) have led pushback against public and standardized metrics, but they represent a declining percentage of higher education space in terms of both enrollment and lobbying dollars.  Meanwhile…
  • The percentage and number of students attending “traditional” for-profit colleges (where the emphasis is on a student receiving a degree from that single institution) has exploded since the early 90s. They have been pressured more than any other sector to provide evidence of outcomes, and, with little to lose in the way of reputation, increasingly see value-added metrics as a way to set themselves apart from public and non-profit counterparts. The same could be said of community and technical colleges, which research indeed shows can be the best bang for your education buck but which provide little in the way of reputational capital.
  • We may be seeing the first stages of an impending disruption eruption from unbundled, largely for-profit educational spaces that provide ways for students to abandon the traditional model by picking up credits, certifications, and experiences from multiple spaces.  These include not only MOOCs, but a growing number of other stand-alone, largely for-profit spaces.  Expert after expert has said that the key missing element is a viable credentialing option- and there’s money to be made for whoever figures it out.
  • Colleges serve many purposes, but most businesses will acknowledge that at least one of those purposes is filtering- a way to narrow the resume pile in an employer’s market.  Yet as the higher education landscape has become more expansive and more students have entered the pipeline, that filter has grown more porous.  We want access and completion to be a conversation about skills, but for employers with limited spots, it can be a conversation about numbers- assessments like the CLA, with their now “familiar” 1600 scale, could provide a new, more standardized filtering metric (in the same way that colleges use tests like the SAT)

For these reasons alone, the likelihood of something like this happening was high, and just as with college and graduate school admissions tests, there will be a great deal of money to be made in the tests themselves, services around their administration (like verification and testing centers), and preparation for them.  These are the comparisons worth drawing to tests like the SAT- except there’s one HUGE difference that all of these articles have missed and the CLA hasn’t acknowledged.

You’ll note that earlier I have referred to the claim of individual-level reliability (how consistently a test measures what it measures) and validity (how well-aligned a test is with whatever real-world skill it is trying to measure).  Both of these aspects of measurement can be incredibly sensitive to the motivation of those students completing the test (essentially, how seriously they take it) and here is the big thing we don’t know yet:

What would happen if students started studying for the CLA+ or something like it? Currently the CLA model is very explicit: students aren’t supposed to study. That makes sense when the test is being used to capture institution-level changes in students over four years- students have no incentive to study for their own sake, and institutions probably get a rough sense of how students stand “as-is.” But just ask any high school junior and you’ll learn that the era of students taking the SAT without studying has gone the way of the dodo. Similarly, if the CLA starts to be used to judge even a subset of institutions in any real way, institutions will start operating under a very different set of incentives. When some institutions and students are using the test in a high-stakes way and others are using it in a very low-stakes way, the test becomes a less reliable comparative measure of actual institution-level growth and increasingly becomes a test of short-term preparation for that particular test. That criticism is regularly lobbed at the SAT and ACT, but those tests at least have decades of experience behind their norming as high-stakes instruments, and they are almost exclusively used in that way. What we have also seen is that when tests actually matter at the individual level, when students will study because it matters to them (and their parents), a for-profit industry will arise to game them.

How could this play out?

  1. I expect that we may already start to see a bit of this in for-profit spaces where the CLA+ is being hawked as an alternative credential. The problem right now (and, really, for any CLA claim of validity) is that there isn’t any firm evidence that the CLA is in any way predictive of on-the-job success, nor that a CLA score, good or bad, has any correlation with success in the job market. Still, for-profits will latch onto the CLA’s language that it assesses the “sorts of skills that employers identify as most valuable in X survey,” and I won’t be surprised to see a gold seal reading “As featured in the Wall Street Journal” spring up on the websites of for-profits offering it directly.
  2. Watch for whether any college or university starts to use the CLA as a required, binding exit test. As noted earlier, the CLA is currently voluntary, and even if it were made mandatory, its potential for reducing graduation rates even slightly would serve as a huge disincentive for an institution of any type to require some level of “passing” score for graduation. Still, it’s not impossible that a for-profit, community college, or non-selective 4-year institution might make a gambit- higher education is becoming a competitive space, and the potential reputation-boosting upside might be worth it as an experiment.
  3. It’s unlikely, but not impossible, that a state government or the federal Department of Ed will offer either incentives or exceptions to institutions requiring something like the CLA.
  4. More likely, we’ll see for-profit college guides and ranking lists start to request institutional CLA scores- first voluntarily, then, possibly, as a requirement.
  5. Perhaps the most potentially disruptive possibility is that colleges will not require something like the CLA, but Amazon, McKinsey, Microsoft, or another high-prestige employer starts accepting an alternative credential and makes the argument that it has worked for them as well as or better than college as a predictor of workplace success.  This, like the use of the tool as a binding exit test, could flip the motivational switch for students and change the way the CLA works both on the ground and in the marketplace.

Some in higher education will see the CLA’s move as a bait-and-switch, although a dozen years of gaining credibility with the traditional higher ed sector before diving head-first into the big-money world of the non-traditional sector seems like a particularly ambitious long-con. But whether institutions took part in the CLA ten years ago or are thinking about it next year, they need to understand these changes and the motivations they represent. We’re almost certainly going to see something happen in the next few years around credible third-party credentialing, whether the CLA+ or something else, and it will change the way that potential students and employers consider college- if they consider it at all.

Stirred, not Shaken- a Response to “Let’s Shake Up the Social Sciences” by Nicholas Christakis in today’s New York Times

An opinion article entitled “Let’s Shake Up the Social Sciences,” by Yale sociologist-slash-physician Nicholas Christakis (most sociologists try to diagnose society’s problems, but he can actually write them a prescription), appeared in today’s New York Times. In it, Christakis reflects on what he characterizes as a Darwinian evolution of the natural sciences since his days in graduate school, with departments like anatomy, physiology, and biochemistry disappearing or gaining relic status while departments of neurobiology, systems biology, and stem-cell biology have risen to take their place. Meanwhile, he suggests, the social sciences are still stuck with the same majors your grandfather might have encountered (sociology, economics, anthropology, psychology, and political science). His read of this stability is that it is “not only boring but also counterproductive”- this because the seeming inability of social scientists to “declare victory” on particular areas of research limits research “at the frontiers of discovery” and undermines their credibility with the public.

Christakis’s solution includes a mass redeployment of practitioners to new fields (he gives social neuroscience, behavioral economics, evolutionary psychology, and social epigenetics as possible avenues), manifested in the creation of “social science departments that reflect the breadth and complexity of the problems we face as well as the novelty of 21st-century science.” He makes a quick pivot from research to pedagogy near the close, hypothesizing that these new departments could better train students by challenging them to investigate in “labs” using “newly invented tools” that make it “possible to use the internet to enlist thousands of people to participate in randomized experiments.” In the end, though, his key premise and conclusion are more about changing “institutional structures”- he offers up departments of biosocial science, network science, neuroeconomics, behavioral genetics, and computational social science as possibilities (probably no surprise, but Christakis is also the very recently named director of the Yale Institute for Network Science).

Let me be the first to grant Dr. Christakis’s point that there is a depressing dearth of engagement in the social sciences with recent developments in computer science, biology, genetics, and technology (with the possible exception of statistics, a hot even if often misunderstood commodity in most social fields, where the lag between the social sciences and the health sciences is more like 5 years than 15 or 50). But to cite the work of Janet Weiss (the current dean of the graduate school at U Michigan, with an interdisciplinary background and set of research interests after Christakis’s own heart), I would argue that while we might agree on this as a problem, Christakis’s “theory of the problem” (his “this-caused-that” story of how the problem comes to be) seems underdeveloped, and it leads him to a “theory of desired outcome” (what you want to happen, and how it addresses the problem) and a “theory of intervention” (what can/should be done to cause the desired outcome, and how it fixes the problem) so heavy on departmental reorganization that they are likely to have unintended consequences.

First of all, the “Go-west-young-man!” frontier mentality doesn’t exactly work for the social sciences. Quotes like “everybody knows…that people are racially biased and that illness is unequally distributed by social class” may not be saying “race and stratification don’t matter anymore” (although I’ll bet it won’t take long to find examples of people reading it that way), but even his clarification that “There are diminishing returns from the continuing study of many such topics. And repeatedly observing these phenomenon does not help us fix them” seems at least a bit dismissive. I’ll give you that, yes, in the natural sciences once you have determined that the heart pumps blood and you have observed that in n=1 billion patients, you can pretty much close the book on the “does the heart pump blood?” question. But here’s the difference- my heart pumps blood pretty much exactly the same way as it did for my ancestors in 1843, or 1955, or 1021, or what have you; our understanding of that process has changed across those time points, but what we are observing has not. But saying that the way racism plays out today and the way health care is stratified today are the same as they were even ten years ago is a fallacy, and if you build public policy around the way things were in 1955, I’m going to predict that it will probably not go well for anyone. This is why studying things like race, even using some of the same ol’ methodologies as our predecessors, has real and continued value over time.

Again, I don’t think that’s really the central argument Christakis is trying to make, but it has real implications for his solution that social scientists “devote a small palace guard to settled subjects and redeploy most of their forces to new fields.” That’s because inasmuch as a subject becomes “settled” in sociology, anthropology, or political science (and, to a degree, psychology and economics), it is also time-stamped (for delivery to the historians, many of whom, by the way, also consider themselves social scientists); racism in 1995 might be settled(ish), but racism in 2013 is still pretty fresh, because the social sciences are largely about context, and contexts change. That means there is not a wellspring of untapped research bodies that can simply be redeployed at no cost to public policy by recognizing that certain strains of research no longer have value. This isn’t like a natural scientist saying, “OK, gravity is a thing, let’s check that off the list.” This is more like a biologist saying, “OK, we figured out what samples of this incredibly fast-evolving strain of bacteria looked like two years ago, so I don’t think we ever need to check in on it again.” Tackling the frontier of research requires either recruiting new researchers into the field or redistributing existing resources among research projects that, for the most part, all have real value. So, with that in mind, let’s turn to Christakis’s solution.

First and foremost among the limitations of the article is that Christakis fails to make a compelling argument for why “departments” are the correct unit of analysis. He offers no reason, for example, why departments can’t keep their names while evolving in their research and pedagogy. The research conducted in chemistry departments, biology departments, and electrical engineering departments today little resembles the research they were doing 20 years ago, even if their names have remained stable. This is in no small part because research funding, which drives research agendas just about everywhere except at the Institute for Advanced Study, has favored a growing emphasis on these new areas of research. It is also because scholars in these fields have an interest (in addition to the aforementioned financial one) in doing work that is both innovative and valuable. Similarly, it is unclear why the new pedagogical “tools” Christakis refers to can’t be used in existing courses, but this seems added as a quick aside to his central interest in research. I would suggest that maintenance of the current research status quo may have more to do with the availability of funding that targets this type of research, silos between existing departments, and, indeed, the concept of the “department” itself. More on that at the close…

There are other reasons why the sort of fusion between natural and social sciences that Christakis sees as necessary and inevitable is more complex a proposition than he suggests.

  • The founding fathers of sociology and anthropology, including Franz Boas, saw their work as very closely linked to the traditional sciences, borrowing heavily from observational methodology and attempting to use contemporary biological tests (such as those for blood type) to link cultural characteristics to biological ones. This early research helped lead to an explosion of new social science departments across the country. It also served as the intellectual basis for much of the eugenics movement in the early part of the 20th century. Although we might argue that both the methodology and goals of today’s research are wildly different, the barrier that has been put up between the natural and social sciences since that time is a historically and politically fraught one.
  • Interdisciplinary departments, committees, and majors (the latter two often serving as a transitional step before an interdisciplinary department) are not new to the social sciences. Chicago’s “committee” structure is perhaps the best known and one of the longest standing. However, with few exceptions, the faculty who staff these programs have their degrees from traditional departments- job placement in higher education is a cyclical process, and development of departmental reputation takes time. A newly-minted PhD with an interdisciplinary degree is likely to have more trouble than an equally qualified student with a PhD from a traditional discipline, and right now it’s a buyer’s market for tenure-track positions. The dual doctorates of the author are impressive, but also reflective of a system that is heavily driven by traditional departmental divisions.
  • Full-on departments need new chairs (both leaders and seats for students), new offices, new labs, new equipment- they don’t just give those things away for free. Colleges and universities (except the incredibly small number of schools with hedge fund-like endowments) are increasingly unlikely to approve departments in this financial climate without a clearer case for them made in terms of net tuition or research funding.

So while I agree that there is a need for change, and while I think Christakis’s actual research is incredible and represents exactly the sorts of frontiers we should be exploring, I am neither convinced that reorganization of departments is a particularly feasible intervention, nor (more importantly) that the intended solution would bring about anything near the level of change that Christakis suggests.

What seems more likely is that the real future of this type of interdisciplinary work lies not in the creation of new departments, but rather in thinking about whether there are ways to organize graduate training so that research is not driven by “departments” at all: Why is re-creating or repackaging what is essentially a political and hiring structure every time there is a new methodological breakthrough or opportunity for collaboration any more efficient than maintaining structures created for those same reasons 80 years ago? Is there a better way to organize the training of students, the hiring of experts, and cross-methodological (can I just step in here to suggest that this term has more real meaning than cross-departmental or cross-disciplinary?) research in a world where technological breakthroughs are about as regular and surprising as political soundbites? Can we imagine networks, exactly like those Christakis studies, of researchers that extend far beyond the bounds of departmental hallways and even of campuses or countries, while allowing opportunities for casual encounters and deep collaboration that are just as real?

Private research labs like Google[x] have met with success by allowing experts to eschew organizational and titular constraints to focus on problem-based work. I’m not claiming that a similar model is the silver bullet for higher education, and research demands span far beyond the realm of driverless cars and Google Glass. But with the baggage that comes with any sort of change in higher education, a departmental re-organization seems more like a 1917 solution than a 2013 one, and is more like stirring around the system we already have than a true “shaking up” of what we do.

So I certainly, and sincerely, wish new cross-disciplinary departments well and hope that they are just the first step on the pathway to something transformative, but if traditional colleges and universities are not yet ready to think more innovatively about how to bring about innovation, keep your eye out for who will.