
High School FAFSA Filing in the Midwest

After I published a recent post on high school FAFSA filing rates in Washington State, Justin Chase Brown (@jstnchsbrwn), Associate Director of Financial Aid at the University of Missouri, asked if I could put together something similar for a collection of states in the Midwest.  Like the Washington data, this is an imperfect snapshot because we have only been given access to one spring snapshot on 4/11 (a time point of varying value depending on the filing deadline of the state/institution), but it gives a good sense of how much variation there is in the timing of filing between states (compare Minnesota to Indiana) and then between high schools within states.

As I suggested in the original post (which also provides some context on why this issue is so important), there would seem to be real untapped potential here for post-secondary institutions to look more closely at the high schools in the regions they serve in order to target support services- it’s just that the data in its original form (one-per-state Excel spreadsheets) isn’t very user friendly for those purposes.  Remember, these figures show the percentage of students who completed their FAFSA filing for the 2013-14 school year by April 11, 2013, among those who submitted by December 2013.  These are students who either are attending or are taking a major proactive step toward attending higher education, and many of them will qualify for need-based aid from their states and institutions if they meet a filing deadline.  Colleges committed to access should be asking what they can do to ensure these students file early enough.


High School FAFSA Filing in Washington State? See for Yourself…

The federal government recently produced a set of tables, one for each state, listing the number of FAFSAs filed by high school as of the end of February, June, and December.

The numbers in these tables are particularly interesting given that, although the federal government has a lax deadline for filing (only requiring that a FAFSA be submitted by the close of the following academic year), many states and individual colleges only guarantee aid to those students submitting FAFSAs before “priority deadlines” that are typically much earlier.  In some states, a FAFSA submitted just a few days late can mean a loss of nearly half of a student’s potential need-based grant aid.

In some ways Washington has been incredibly proactive on the financial aid front. First, the Evergreen State offers some of the most generous need-based awards in the country, providing more per student than any other state, with grants of up to $10,868 at Washington colleges and universities.

Second, the College Goal Washington program holds events throughout the state, supported partly by volunteers, to increase FAFSA filing by early February.  A solid, proactive step that more states should take.

Yet part of the reason that current levels of outreach are necessary, and possibly still inadequate, is that the state’s newest financial aid program, College Bound (which provides a commitment of college funding to 7th and 8th grade students in foster care or eligible for free/reduced price lunch), has a particularly early deadline.  The February 1st cutoff, during the spring before the fall of attendance, is among the earliest in the country and is nearly a year and a half before the federal FAFSA filing deadline.

Compounding the confusion, many Washington institutions offer their own need-based aid, have their own priority filing deadlines, and typically only guarantee those funds to students filing before the priority deadline. Some share the February 1 target date with College Bound, but many are pegged to dates throughout the spring, from February 1 to April 15, making a concerted statewide communication push all the more challenging.

Given that, I thought it might be interesting to try to make the new high school FAFSA filing data a bit more digestible (through some visualization) and contextualized against other state data sources (all from the Washington State Office of the Superintendent of Public Instruction, although painfully spread throughout the site).  After merging the data sets, specifying zip codes for all of the schools, generating some variables for school demographics and grad rates, and pulling in some additional district-staffing-level information to get the FTE counselors per 100 students, I was able to produce the interactive map below and attempt some regression analysis with the new data.
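
For those curious about the mechanics, here is a minimal sketch of that merge in Python with pandas.  The file names, column names, and join keys are hypothetical stand-ins; the actual federal and OSPI files use different layouts and need a fair amount of cleaning first.

    import pandas as pd

    # Hypothetical extracts of the federal FAFSA tables and the OSPI files
    fafsa = pd.read_excel("wa_fafsa_by_high_school.xls")   # filing counts by school
    demo = pd.read_csv("ospi_school_demographics.csv")     # enrollment, demographics
    grad = pd.read_csv("ospi_grad_rates.csv")              # graduation rates
    staff = pd.read_csv("ospi_district_staffing.csv")      # district counselor FTEs

    schools = (
        fafsa.merge(demo, on="school_code", how="left")
             .merge(grad, on="school_code", how="left")
             .merge(staff, on="district_code", how="left")
    )

    # District-level FTE counselors per 100 students
    schools["counselors_per_100"] = (
        100 * schools["fte_counselors"] / schools["district_enrollment"]
    )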

The map focuses on a particular high school-level ratio: the number of FAFSAs completed by February 28 (the earliest date the government provides; this is before most institutional priority deadlines in Washington but after the College Bound deadline) divided by total FAFSAs submitted by June (a loose proxy for planning to attend college in the fall).  The closer to 1, the more FAFSAs are likely getting submitted early enough to receive state and institutional aid.  Although I considered using 12th grade enrollment or December submission numbers as the denominator, the former might overemphasize the influence of graduation rates and the latter might capture students intending to begin in the spring semester.  What I’m really interested in is: given that early filing means more access to aid, how much variation is there between schools in rates of filing by the “priority deadline period” among students intending to attend college the following fall?
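
In code, continuing the sketch above (column names again hypothetical), the metric is just:

    # Early-filing rate: Feb 28 completions as a share of June submissions
    schools["early_filing_rate"] = (
        schools["fafsa_completed_feb28"] / schools["fafsa_submitted_june"]
    )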

The circle size corresponds with the total number of FAFSAs submitted by June, and the color scale from red to green corresponds with the rate of Feb 28 filing.  The sliders allow you to adjust the schools shown, and you can hover over individual institutions or select subsets to dig deeper.  The default view is zoomed in a bit to focus on the greater Seattle area, but the rest of the state is there if you pull back.

The short answer?  There’s a lot of variation, and it isn’t explained away by the school characteristics you might guess.  Neither “counselors per 100 students” nor “% of underrepresented minorities” was significantly predictive in the models I tried.  A school’s graduation rate was significant, but without much power- a 1% increase in graduation rate is correlated with about a .03% increase in the filing rate (there are some basic scatter plots on the tabbed page of the visual below to provide a sense of the data).  All told, even after adjusting a bit and removing outliers, these variables explain only about 20% of the variation in school filing rates.
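
A sketch of that regression with statsmodels (variable names are hypothetical, and the actual specification differed a bit after the outlier trimming described above):

    import statsmodels.formula.api as smf

    model = smf.ols(
        "early_filing_rate ~ counselors_per_100 + pct_underrep_minority + grad_rate",
        data=schools,
    ).fit()
    print(model.summary())  # grad_rate significant but weak; R-squared around 0.2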

That has real implications for programs like College Goal Washington, but also for individual high schools and colleges in Washington state.  Too often the focus is on supporting applicants who seek out help rather than proactively targeting students in need of guidance.  Particularly in states like Washington, where extensive need-based aid is paired with early deadlines, students and institutions alike have a great deal to lose.  Washington’s post-secondary institutions might do well to spend some time noting the “red” schools above within a few hours’ drive… and then perhaps putting in a request for the college van.

Have insight on the variation, other states worth exploring, and ways to make the red dots green?  Share them below or on Twitter @aroundlearning

Seeing Education Across Countries and Time

You don’t have to be an expert in comparative education to know that international aid agencies and developing countries pour a huge percentage of spending into education because they see education attainment (sometimes broadly described as “human capital”) as linked to health and, ultimately, economic outcomes.  Economists and sociologists in particular spend a good deal of time working with large datasets trying to parse out the effects of completion on everything from teen pregnancy to GDP.

On an international scale this can be notoriously challenging; record-keeping quality and frequency vary enormously between countries.  In recent years international organizations including the UN, the World Bank, and the OECD have made significant strides in capturing and standardizing metrics for school attainment and completions.  This has been accomplished largely through growing inter-agency collaboration, particularly around the so-called “World Development Indicators.”  Reflecting the strategy of the agencies since the late 80’s, these indicators focus chiefly on primary school completion, and, in particular, on male/female disparities (tangentially, these are more complex than they sound; see here for one of several analyses showing that the male/female gap is decreasing both because of a rise in female completion and a concurrent decline in male completion, along with some of the potential confounding effects of culture).  Even with primary school indicators, though, there are big gaps in data availability, particularly as you move back in time, and these are often magnified when looking at less exactly tracked post-secondary indicators.  How, then, do economists think about something even larger in scope and more granular in detail- average total education (or total post-secondary education) for the entire population of scores of countries?

How Economists See It

Economists Robert J. Barro and Jong-Wha Lee, of Harvard and Korea University, respectively, have some experience tackling this problem.  They released the first Barro-Lee dataset in 1993 after compiling a wide variety of census data collected by national and international agencies and using what they refer to as a “perpetual inventory method” to estimate the number of graduates at each level; essentially, they used a combination of enrollment and completion rates paired with entering numbers for the last year in which they had data, then projected them forward into the known “pool” of the population above age 15 and above age 25.  While a quantum leap forward in ’93 (and subsequently updated in 2001), the estimates came under some criticism in the late 2000s for having some strange jumps between time periods for some countries where more reliable data was hard to find, and less-than-accurate estimates for some countries where reliable data became more readily available.
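
To make the mechanics a little more concrete, here is a toy version of a single perpetual-inventory step- my own illustrative simplification, not Barro and Lee’s actual code, which works over multiple attainment levels and age groups and handles far messier inputs:

    def project_attainment(stock_t, pop_t, pop_t1, cohort_completion_rate):
        """Toy perpetual-inventory step: carry the existing stock of graduates
        forward one period and add new completers from the entering cohort.

        stock_t: share of the 15+ population with the credential at time t
        pop_t, pop_t1: population aged 15+ at times t and t+1
        cohort_completion_rate: completion rate among the entering cohort
        """
        graduates = stock_t * pop_t                   # existing graduates
        entrants = max(pop_t1 - pop_t, 0)             # net additions to the pool
        graduates_next = graduates + cohort_completion_rate * entrants
        return graduates_next / pop_t1                # attainment share at t+1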

Enter Barro and Lee, circa 2013, with a new methodology.  Without getting too deep into the weeds, some of the biggest changes include estimates based on new 5-year age groupings (to more exactly account for enrollment patterns), use of previous and subsequent enrollment data to weight estimates for the periods in between, and accounting for mortality rates.  For this last piece, they incorporate the fact that, on average, more education is associated with a longer life expectancy, meaning that in the 65+ group there is likely to be a growing skew towards those with more education.
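
Two toy functions give the flavor of those changes (again, illustrative simplifications under my own assumptions, not the published estimation procedure):

    def fill_between_censuses(t, attain0, attain1, enroll_implied):
        """Estimate attainment for a year between two censuses by blending
        straight-line interpolation with an estimate implied by enrollment
        data for the gap years. t in [0, 1] is the position in the gap."""
        linear = attain0 + t * (attain1 - attain0)
        return 0.5 * (linear + enroll_implied)  # equal weights, purely illustrative

    def survival_adjusted_share(share_educated, surv_educated, surv_other):
        """Differential mortality: if the educated survive at a higher rate,
        their share of a surviving cohort rises over time."""
        survivors_edu = share_educated * surv_educated
        survivors_other = (1 - share_educated) * surv_other
        return survivors_edu / (survivors_edu + survivors_other)

For example, a cohort that is 20% educated at 65, with survival rates of 0.9 for the educated versus 0.8 for everyone else, would be about 22% educated among its survivors- which is why ignoring mortality understates attainment in the oldest age groups.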

All of this, to remind you, is being done with just a handful of actual full-scale census samples- the vast majority of countries included here have fewer than five between 1950 and 2010 (the range the dataset examines), and about 15% of the countries that Barro and Lee include in their data set have only one census.  They “assume that tertiary completion is relatively stable” for the 15-19 and 20-24 age groups, which seems like a big assumption (more on that in a bit), but, remember, in the absence of better data this is just about the only game in town.

Actually Seeing What Economists See

Here are some interactive visualizations built on the Barro-Lee dataset (publicly available here); these are the only ones I know of built on this data set (if you know of others, please post them below).  These visualizations are all built on the updated 25+ population estimates; it would be great to add some that focus on the 24-29 age group to get a better sense of the changes in tertiary completion in that traditional demographic range, but I found some odd discrepancies in the 2010 completion percentages (almost all of the advanced countries had sharp declines, some by more than half) and have followed up with the authors for comment (if anyone else can provide some guidance there, or is unable to replicate the problem, please let me know).

First, let’s look at a metric that you won’t often find at the population level across countries- the overall percentage of post-secondary completers, from 1950 to the present.

Here’s a similar time-lapse map, this time focusing on the relative differences between the overall and female rate of post-secondary completion in the national population (apologies if this enters me into the blue-for-boys, pink-for-girls nature/nurture debate, but I use that shorthand here).  The color is calculated by subtracting the overall rate from the female rate, with negative numbers showing up as red.  Note that because I’m using the overall rate as the comparison (as opposed to the male rate) differences are muted a bit.

To watch both overall post-secondary education and the associated female ratio change at once, controlled by one tracker, go to the dashboard here.

If we assume that the distribution of human capital across a country has implications for productivity, and that its effect is at least somewhat additive, then one way to compare nations is to multiply the average years of tertiary education by the population of the country to get a sense of the total years of tertiary education represented by the population.  Because Barro and Lee use “4 years” as a catch-all for completing tertiary education and “2 years” as a catch-all for “some college,” it’s possible that these numbers may actually underestimate a bit in the modern era as an increasing number of college graduates pursue graduate school, particularly in a subset of the advanced economies. This bubble chart color-codes circles based on their geographic region (for developing countries) or advanced economy status (blue).  While the movement can be headache-inducing if you watch too many loops, it provides both a sense of the overall growth of international tertiary education and the relative changes between countries while accounting for their populations.  Perhaps most powerful is the contrast between the overwhelming dominance of the US until the 1980s or so, when another block of countries starts to catch up, along with a reminder that while the United States increasingly lags other countries in completion rates, we still maintain a dominance in overall years of tertiary education in the broader population (though that’s something we can perhaps expect to last for only another couple of decades).  Essentially, this captures the effects of national higher education policy for each country, with a lag for it to become distributed throughout the population.
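
The underlying arithmetic is simple; assuming a pandas DataFrame bl of the Barro-Lee country-year estimates (both column names are hypothetical):

    # Total years of tertiary education embodied in the 25+ population
    bl["total_tertiary_years"] = bl["avg_years_tertiary"] * bl["population_25plus"]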

This circle chart shows the changes in each of the measured levels of attainment- it provides a sense both of the areas where progress has been greatest over time and of the areas where emphasis appears to have been focused on a particular level of attainment (primary or secondary), with less focus (or at least success) in retention to the next level of education.  One clear example is Sub-Saharan Africa, which surpassed South Asia in primary school attainment, but has only fallen further behind the region in secondary and tertiary schooling during that same period.  The comparison is similar between Latin America and the Middle East/North Africa.

On the largest scale, though, we often talk about achievement gaps within countries, which are often characterized by whether they are narrowing or widening, regardless of whether raw achievement (however measured) is going up for both groups.  This can be a helpful and thought-provoking way to think about disparities in education outcomes between countries or even, as with the line chart below, between groups of countries. The chart below uses a simple advanced economies vs. developing economies breakdown (the filter at right can be used to adjust the regions included in developing countries).  There is a clearly visible trend across the board of a gap that is narrowing for primary education but widening for secondary and tertiary schooling (except possibly for Europe and Central Asia, where the gap is more stable).  This, of course, is happening in a context where developing countries are increasing along all levels of education, but the advanced economies are increasing at an even faster pace.  Part of the reason this is not the case for primary schooling is that the advanced economies have largely topped out; 6 years is considered the maximum for primary schooling, so advanced economies have already encountered a “ceiling” to their attainment. By contrast, tertiary schooling is on the rise across all countries.  There are obvious implications here for those interested in questions of inter-country inequality, and for those who study whether a particular level of education has stand-alone value or whether its primary worth lies in its relative rarity in the marketplace.  That’s a question of increasing importance as international education development moves beyond the relatively clear-cut outcomes of literacy, health, and basic math in primary school.

For those who like to glance through some of the underlying data, below is a chronological crosstab that looks at these same indicators- average years of primary, secondary, and post-secondary education- by country, using the Barro-Lee estimates.  As with many of the other vizzes in this post, just use the slider to change the reference year.

As always, feedback, questions, and new angles to explore are always welcome.

Continuing the Conversation: A Response to Payscale.com’s Comments on the Post “9 Problems with Payscale.com’s College Rankings (and One Solution)”

(The following post is a set of responses to the comments posted by Payscale.com on the post “9 Problems with Payscale.com’s College Rankings (and One Solution)”.  The original comments are reproduced verbatim in italics, and the Around Learning responses follow each.  This post first appeared as an addendum to the original article.)

Many thanks to the folks at Payscale.com for their reply, included in full in the comments section and italicized below- I prefer a conversation to a soapbox any day!  As noted in the article, these responses should not be viewed as a critique of the work and style of Payscale.com writ large; they focus narrowly and specifically on the Payscale.com college rankings, and suggest that, particularly in light of efforts to gather accurate data on outcomes for all colleges, Payscale.com is not the right solution as currently constructed.  That does not mean the rankings (or, perhaps more appropriately, a non-ranked comparison system) can’t be improved, and, as some colleagues have noted, they may be better than nothing.

Here are my thoughts on your thoughts on my thoughts:

1.  Accuracy is of utmost importance to our business. Every salary survey submitted to PayScale is validated through a number of accuracy checks both automated and manual. They are further judged to see if they are the result of attempted data fraud. Lastly, our team of data scientists does regular validity tests comparing PayScale data to other sources of compensation data (both publically and privately available). We have more than 2,500 business customers who rely on our data to set compensation for their employees.

Absolutely, and sorry if that was unclear- glad to tweak the language there.  The article is intended to assume that Payscale.com, Glassdoor.com, and other salary websites have some spot-check mechanisms in place triggered by responses like “clown school” for undergrad, “underwater basket weaver” as profession, and $1,000,000,000 as salary.  It is, of course, impossible to know if the institution a respondent says they “graduated” from is the same as the one where they started, which means it has the same problem as graduate degrees- how much credit do we give to an individual’s first institution vs the one where they finished (see my #4 and your #3 below)?  There are also a myriad of well-researched challenges with individuals misreporting their own salary and graduation data with no attempt to mislead- it’s just something that, because of taxes, bonuses, and another dozen things, is incredibly hard to standardize when relying on self-reported data- a limitation that holds for even the best and most thoroughly vetted surveys.  There is also some great research out there on how inaccurately folks self-report even simpler things like birthdays and marital status.  Re: your 2,500 business customers, please note that #7 acknowledges that Payscale is probably a great source of industry-level data (at least for many industries), and that’s because of an assumption that your overall sample is much larger than your sample of data linked to specific undergraduate institutions (especially the small ones).  You likely also draw upon other industry data (which may be the confusion over the 40 million profiles vs 1.4 million, noted in my response to your next question) that is likely incredibly valuable in examining sector-level data, but of course would not be linked to undergraduate institution.

2.  PayScale actually is the top purveyor of not only user-reported salary information but of salary information in general. Our data is more accurate, more up-to-date and broader than any other salary information available. PayScale has a database of more than 40 million individual compensation profiles. Glassdoor claims to have “nearly 3 million salaries and reviews.”

It sounds like we may be mixing metrics here- the NYTimes article cited in the post notes, “PayScale says its rankings are based on data from 1.4 million college graduates.”  It would be helpful if you could clarify the difference between salary profiles and user-reported salaries- do both include undergraduate information?  If not, then the Glassdoor citation may actually make my case (with a reminder that the focus of the article is not Payscale’s broader work as an aggregator of salary data across industries, only those records associated with colleges). The citations in that section are based on the overall number of user-reported salaries linked to individual colleges and the average daily visit rate- if the latter is significantly different than 35,000, I would just need another source to correct it. I think, though, that even if Payscale does have the largest dataset of salaries linked to colleges, the particular argument the post makes there is less about market dominance than about the existence of a market at all, the small number of samples per college, the potential for regional variation, and the inability of outsiders to ask questions about such variation and other potential biases.

3. We do exclude master’s degrees and above but for good reason. The intention of our report is to meet the needs of the majority of prospective students researching college choice. Because only 11% of those aged 25 and older hold a master’s degree, according to US Census data, the majority of prospective students will only complete a bachelor’s degree. We sought to offer the best comparison of post-grad outcomes for that population. If someone wants to use the data for another purpose, we’d need to create a unique, specific dataset for that purpose, which we’re more than happy to do.

As the post notes, this is absolutely the standard, logical (read ‘with good reason’) practice with this sort of thing, but that does not make it unproblematic.  The 11% figure you cite for advanced degrees is roughly correct for the entire U.S. population, but remember, by the nature of these rankings we are interested only in a subset of the population: college graduates.  It was big news a year back that BA-or-higher attainment in the 25+ group topped 30% for the first time in 2012; that means that while only 11% of the population hold an advanced degree, about 37% of the intended sample population (11 ÷ 30) hold one. That number can also be derived independently using data here. That % is growing quickly (and, as such, is higher if you look at the younger half of the workforce, from which most of the Payscale data is derived), and that 37% is not evenly distributed across all institutions.  In fact, many of the colleges most highly valued by consumers (at least in part) for their graduates’ eventual potential earnings send a disproportionate number of their alums into graduate programs.  The original post has been updated to clarify this point. Unfortunately, as I note, while institutions can track this information using a tedious National Student Clearinghouse workaround, there is no good public way to track graduate degrees at the national level across institutions except for PhD’s, so let’s use those as an example:

Looking at Grinnell again, data derived from the NSF WebCASPAR system (which tracks about 92% of all PhDs) indicates that 15% of Grinnell alums get PhDs- and that’s excluding M.A.s, M.D.s, J.D.s, and every other non-PhD graduate or professional degree.  Nationally, only 2% of all individuals and ~5% of college grads get doctorates of any kind (so including M.D., J.D., Ed.D., etc.- most estimate 2% is closer to the average for PhDs specifically among college grads), so Grinnell alums earn them at between 3 and 7 times the average, even among only B.A. grads.  Overall, we know that nationally another ~31% will get a non-doctorate advanced degree (law school, M.A.’s, med school, and a dozen other potential programs).  Even if Grinnell’s comparative rate of attainment for these other degrees is much lower than its PhD rate- let’s say alums earn them at only 2 times the national rate for B.A. grads- you’re now talking about three quarters of the alumni base, and that might even be a conservative estimate.  But we don’t and can’t know for sure because… we don’t have a good federal tracking system, which is of course the real point of the article.  Ideally, one of the contextual pieces you would want for this sort of data would be the % of alums earning advanced degrees (perhaps broken down in some way) by institution- again, you can’t access that information, but you could include the % getting PhDs (using the source I cite) and the % with advanced degrees in your database.  You could then disaggregate salaries along those lines for institutions with more than a certain number of cases; alternately, you could exclude schools (largely small schools and highly selective schools) above a certain threshold; you could even compare the PhD rate to your own internal distribution to check for bias. Your own data will still be highly subject to self-report bias along other lines, and it would be difficult if not impossible to provide rankings in this way, although I could imagine a more useful comparison system that used this data.  Alternately, you could do some sort of rankings focused on schools with an explicit mission (which some have) of sending graduates directly into the workforce without any additional advanced training (and which have data to suggest that’s what happens in practice as well), but of course that loses some of the cultural wow-factor.
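
Spelling out that back-of-the-envelope estimate (the inputs are the figures cited above; the 2x multiplier is the stated assumption):

    phd_rate = 0.15             # Grinnell alums earning PhDs (NSF WebCASPAR)
    other_adv_national = 0.31   # college grads earning a non-doctorate advanced degree
    multiplier = 2              # assumed Grinnell rate relative to the national rate

    excluded = phd_rate + multiplier * other_adv_national
    print(f"{excluded:.0%} of alums excluded from the rankings")  # 77%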

4. State data not tracking alumni working out of state is a big problem. When comparing our data to the state data, you make the assumption that the state data is more accurate even though they can’t track salary data for alumni working out of state. We don’t have this constraint, so I’d argue that it’s the state data that is likely inaccurate. We’ve researched the question of where alumni end up after graduation — near their alma mater or not. Around 50% of grads work in a different state than the state in which they attended school.

I entirely agree that the state data is imperfect as well, and I note that in the article- that’s exactly the reason (or at least that plus the inefficiency of replicating collection systems in 50 states, plus the even more valuable questions of graduation, retention, and post-grad degree attainment…) that we need a good national system.  However, it would be helpful to know more about the 50% number you cite- I would caution that if it comes only from the folks in the Payscale system, then the same selection challenges of the data-set, broadly (young, white collar, actively searching for jobs using a national database of salaries), may be skewing results.  Even if based in something external to the site, part of the reason the post uses Texas is that the state’s large (and growing) population centers, diverse economy (with a growing tech sector), and lack of comparable job-market competition in its immediate border states mean that its out-of-state mobility is almost certainly lower than nearly every other state’s- and remember that the comparison point here was two public universities.  Certainly, some meaningful percentage of Texas public college alums will take jobs out of state, but Texas A&M has graduated about 49,000 students since Payscale.com was founded in 2006, and Payscale has records for about 2,000 students who would have graduated during that range of years, so even if the Texas data includes only 50% of graduates (and I expect it includes many more), it’s hard to believe it isn’t much more accurate for the (frustratingly small) number of schools in its system.


5. We’re very open to research requests. No one has ever asked, but we’ve actually considered releasing some of our data publically so that it can be scrutinized by researchers. We just recently put together a board of advisors that includes economists and others in the academic world, including Mark Schneider of AIR, to talk about how to do things like this in the best way. We are a for-profit company, and our data is our largest asset, so we do have to be careful about what we make available to protect our business interests, but we’re very interested in helping to further the discussion around college outcomes. We are the largest and best source of this data.

That’s great to hear, and thank you in particular for replying to this portion- #8 has now been reworded a bit to focus more on the core issue it was intended to address.  There is of course no way for me to know about your responses to all requests definitively, and I’m sure you may get many, so I’ll also remove the anecdotal note about the experience of some colleagues, and if you have interest in working with some university-based research groups that explore these sorts of data issues all the time, I’m glad to suggest some.  However, limited engagement with outside researchers and work with individuals in advisory or consultant roles still falls short of the larger point that you allude to in your comment: Payscale.com is a for-profit company and the data is (entirely reasonably!) proprietary.  What that means, as you suggest, is that releasing more than a certain policed amount of data is contrary to Payscale’s business interests; what it also means is that evidence pointing to bias in the data is problematic for the company in ways not only methodological but financial.  That’s an entirely understandable, necessary limitation of any for-profit company doing this sort of work.  Thus the post concludes by noting that the real question isn’t about the quality of or access to Payscale data, but about why there isn’t a federal, publicly accountable option, especially given that the government is already collecting this data through the FAFSA system and the IRS- the agencies just are not legally able to merge it.  This option would allow us to address not only earnings outcomes, but more importantly issues around graduation, retention, and (as our conversation around advanced degrees suggests) post-graduate degree attainment.  A growing chorus of national voices including the President, non-profits, tax-payers, and students argues that even as we laud the value of education we can acknowledge that it is also an expensive and stratified enterprise, and that, as such, we have the right to ask tough questions of our colleges and universities- and to make that conversation public and accountable.  I suggest here simply that we owe it to ourselves and the many schools “doing it right” to be able to hold the data we use in that process to the same publicly accountable standard.

Many thanks again for the reply.  I’m glad to continue the conversation and would love to learn more about your efforts to make even a restricted dataset open to analysis, and would be glad to add any follow-ups you have to the post.


9 Problems with Payscale.com’s College Rankings (and One Solution)

Note: A reply to each of the points in Payscale.com’s comments on this article has been relocated to a separate post here.  Problems #4 and #8 were also updated to clarify the issues in question.

But how much can you MAKE???

Friday’s New York Times article on Payscale.com and today’s in the Huffington Post show the website’s growing influence in the national conversation about higher education outcomes, and, in particular, the role of “average earnings” in that debate. The Seattle-based company, which built its business model around providing a glimpse at wages to potential applicants in exchange for their own current salary and providing industry-level aggregate salary data to employers, has gotten increasing press recently for disaggregating salaries based on the college attended and undergraduate major of the respondent.  In addition to Payscale’s own rankings, available on its website, its numbers are now a central part of the Forbes Rankings (where they outweigh “graduation rate”) and reputable sites like the Gates Foundation-funded / Chronicle-sponsored CollegeRealityCheck.com.

Here’s a spoiler alert- this post is NOT about why we shouldn’t include salary data as part of the national outcomes conversation (there are clear arguments for why salary data alone shouldn’t be the metric by which colleges are judged, but neither Payscale nor those citing its data have suggested otherwise); to the contrary, I’m assuming here that salary information, and the type of data that necessarily underlies it, is a vital part of the national conversation on the outcomes of higher education.

Nor is this about Payscale.com having bad intentions or being bad at what it does.  Their motivation in creating the rankings is clear and reasonable: Parents, students, researchers, and non-profits really want to know how much a graduate can expect to earn, and with no real national-level competition for this data and a big pile of salary records in the vault, Payscale is filling a niche where there is money to be made.  And while I think Payscale.com is a great resource in its original wheelhouse of industry-level salary information, there are some serious problems with using its college and major data for anything beyond cocktail conversation and some very limited comparisons between a small subset of institutions. Here are 9-

The Problems with Payscale’s College Rankings

  1. The form and setting where the data is collected aren’t conducive to accuracy. Payscale collects all of its wage information from a quick survey required of visitors to access salary information about a particular company or job.  It’s entirely fine for what it is, but as with any similar survey there is no way to check for absolute accuracy (of either college attended or salary earned), and it is typically filled out quickly in an attempt to get to the information of interest.  On top of that, and perhaps even more importantly, self-reported data on salary is just notoriously tricky.

  2. Sites like Payscale oversample young workers new to the job market.  Users of any online salary comparison tool are more likely to be young, white-collar workers; that’s fine, if that’s all you want to know about, but the ratings purport to indicate “mid-career” earnings and include colleges that serve much wider demographics.  Looking at the response pool for any college, typically only about 10-15% have 10+ years of experience.

  3. Payscale in particular may over- and under-sample lots of things (like region), but we can’t know what. Payscale.com isn’t the internet’s top purveyor of user-reported salary information.  Competitor Glassdoor.com receives about 315,000 visits per day compared to Payscale’s 35,000. Because the data is proprietary (see #8) we can’t know how or if users are biased by region or anything else, but it seems plausible that Payscale.com, with its lower name recognition, might, for example, be more popular near its west coast base than in other regions.

  4. Payscale logically but problematically excludes anyone with an advanced degree.  This is more of a challenge with looking at outcomes broadly.  When students have graduate degrees, how do we parse out how much of their current earnings to attribute to their undergraduate institution vs how much we attribute to their graduate institution?  The imperfect solution almost always used is to exclude anyone with more than a B.A. entirely.  Payscale.com noted in their response to this article that nationally only 11% of all individuals in the U.S. 25 or older have an M.A. or higher, but remember that this sample includes only college graduates.  How does that change the relative percentage?  Well, it was big news a year back that BA-or-higher attainment in the 25+ group topped 30% for the first time.  That means that about 37% of the intended sample population (11 ÷ 30) hold an advanced degree. You can also derive that number using data here. That % is growing quickly (and, as such, is higher if you look at the younger half of the workforce, from which most Payscale data is derived), at least in part because M.A.’s, PhD’s, and professional degrees end up earning more than those with only a BA.  Add that this 37% is obviously not evenly distributed across all undergrad institutions (see the response to Payscale’s 3rd comment for why the number may be close to 75% for many of the colleges on the Payscale list) and you’ve got yourself a real bias problem- and this unequal distribution also applies to majors.  An entirely separate issue is that undergraduate institutions would argue, not unreasonably, that these students are admitted to and succeed in graduate school because of what they learn in undergrad, and if that’s even partly true then institutions where these students represent a high percentage of all graduates are likely misrepresented for that reason as well.

  5. Payscale rankings don’t (and can’t) weight by majors.  It won’t be news to anyone that there are differences in mean earnings based on undergraduate major- Payscale even promotes this fact on a separate part of its site.  So you would think they might try to take account of this in some way in their rankings.  Just a quick glance is all it takes to notice that the vast majority of the top schools are those skewed towards a certain subset of majors.  You could perhaps make the case that a student on the margin about what they wanted to do with their life might go to one of these colleges and would be more likely to choose a technical profession and go immediately into the workforce rather than graduate school… but as you might imagine, most students who go to these types of schools go because they already know that’s exactly the sort of thing they want to do- representing not a value added, but an input.  More problematically, this means that schools with a more even distribution of majors, or a skew towards other types of majors, rank low (take a few minutes and try to find the first “school of the arts” on the lists), even though a student going into one of the very few fields that Payscale reports correctly here (again, largely types of engineering and technical work) who attends the University of Virginia or a small liberal arts college might actually earn more than a similar student at top-ranked Harvey Mudd (or one of the tiny number of humanities majors at MIT or other technical schools might vastly under-earn their counterparts at other schools).  This isn’t what these rankings purport to measure, but it’s a large part of what they do.
  6. There are very few responses for many colleges, and Payscale uses this limited data to make questionable inferences.  Payscale already admits in its sparse methodology that the confidence interval for liberal arts colleges is 10%, but with thousands of graduates represented by fewer than 100 salaries even this is likely too conservative and could lead to false conclusions.  In one troubling example, the NYTimes article above questions a “gender gap” in the rankings by pointing to low ranks for Wellesley and Bryn Mawr.  In response, an unnamed Payscale.com representative explains that “women’s colleges still don’t produce enough graduates in engineering, science and technology, the fields that draw the highest salaries.”  Yet National Science Foundation data from a nearly complete dataset of U.S. PhDs shows that Wellesley, which ranks 304 on the Payscale list, is 33rd in the country in production of science and engineering PhDs per capita.  Bryn Mawr, #562 on Payscale, is 12th- 9.7% of its alums go on to get PhDs in science or engineering- a healthy margin above Johns Hopkins, Yale, Rensselaer Polytechnic Institute, UC-Berkeley, and obviously just about every other institution.  Grinnell, a small co-ed liberal arts college also called out in the article for being #366 on Payscale, comes in at #8.  What’s more likely is that these institutions rank so low because a) they have a very small set of responses in Payscale’s database, b) they produce a disproportionate number of graduate students, and c) they may produce fewer of a certain type of graduate that goes immediately into the workforce for a certain type of firm.  One easy test (but impossible with the data they make available) would be to look at the relative variability of rankings plotted against institutional size and responses.

  7. More accurate (but also imperfect) state-level data sets suggest that even the Payscale data on large universities may be way off. There are a handful of state-level systems that collect alumni salary data based on actual state tax records (Arkansas, Tennessee, Virginia, Colorado, and Texas).  Just a few quick comparisons to actual wage data from Texas show that Payscale overestimates Texas A&M’s starting salary by almost $10,000 ($51,900 vs. $42,662), or 22%. Meanwhile, it underestimates initial year earnings at Texas Woman’s University by $3,694, or 8%.  These state-level systems are limited too- like Payscale, they only look at earnings for students with only a B.A. and can’t track those alumni working out of state, but both the number of records and their accuracy are vastly higher for the state-level institutions they report. (See why Texas is used as the example here)
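
Checking the A&M gap (the TWU base figures aren’t given here, so only this comparison is reproduced):

    payscale, state_records = 51_900, 42_662
    gap = payscale - state_records
    print(f"overestimate: ${gap:,} ({gap / state_records:.0%})")  # $9,238 (22%)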

  8. We can’t know what else is wrong (or right) with the data because the Payscale data is proprietary.  In many ways, Payscale is an advertisement for the value of the private sector.  If you want a great sense of what people make in certain industries, or a ballpark sense of what they make at a particular large company, sites like Payscale, Glassdoor, and Salary.com are appealing and helpful because the companies they report on may be tight-lipped about salary.  On top of that, their presentation of the data is well-organized, visually appealing, and user-friendly in a way that outdoes anything you’ll find on the Department of Education website.  But the data in question here has big implications, and comes with huge risk of bias.  While Payscale.com has indicated in a response (found in the comments below) that they are “very open to research requests,” they also note, entirely reasonably, that they must consider data releases in light of their business interests.  Because not only their data quality but their reputation and market value are threatened by challenges to that quality, it’s simply impossible to fully explore the bias in the data, ensure the quality of the methodology, and ask more important questions of the information in ways that are publicly available- and publicly accountable.

  9. There could be a MUCH better option.  To be clear, I don’t begrudge Payscale.com one bit- they’re a for-profit company that is filling an unoccupied market for which there is clearly a demand.  On top of that, one can imagine that, even given the limitations above, Payscale data might actually be very accurate as a metric for the early career salaries of white collar/tech workers from large institutions who do not go to graduate school.  Nor can we reasonably criticize groups that have cited the rankings in a search for something, anything, that can tell us about institutional post-graduate outcomes at the national level. That’s why our question shouldn’t be “why is Payscale.com data so bad,” but, rather, “why don’t we have something better?”

So why don’t we have something better?

You may be thinking to yourself- “But wait- the federal government has attendance and graduation records for every student who has ever gotten any federal financial aid (just about all of them, in some form or another), and the IRS has got the nation’s best records on what people make.  Both have an ID number that could be used to link them.  Why can’t we just use that?”

One answer, offered by former director of the National Center for Education Statistics (NCES) Mark Schneider in a chapter in Accountability in Higher Education, is that we tried. Already concerned about growing levels of student loan debt and the questionable amount of student aid going to new for-profit institutions, the NCES proposed a student-level record system and issued a report demonstrating its feasibility while protecting student-level privacy, yet progress was cut short by political pressure from the higher education lobby. Leading the charge was the National Association of Independent Colleges and Universities, or NAICU, which ironically represents many of the small and medium-sized private schools noted as likely misrepresented in #6.

In 2008, NAICU successfully argued that because a federal student-level record database would compromise student privacy (although similarly confidential information is collected en masse by both the FAFSA and the IRS), and because institutions were already participating in voluntary accountability processes (largely re-reporting already-available or incomplete data to a limited audience), a student-level database wasn’t just a bad idea- it should be impossible.  As a result, the 2008 reauthorization of the Higher Education Act specifically prohibits federal collection of student-level data on graduation and subsequent earnings.  Instead, each state has been asked to create separate databases (like those already functional in the states noted in #7), and in some states, including Texas and Wisconsin, private colleges have successfully lobbied to create their own databases, separate from public colleges and replete with additional restrictions on access.

The potential of a national student-level dataset relates not only to post-graduate earnings, but to essential, universally valued metrics in higher education- graduation and retention rates.  Our current method asks hundreds of individual colleges to report on who finishes college in ways that entirely miss 1) transfer students, 2) those who enter as part-time students, and 3) those who take more than 6 years to graduate (or 3 years if the student is at a 2 year institution); over a third of all students fit the first category alone.  Instead, nearly every summary statistic you have ever read relies upon the antiquated federal IPEDS reporting system, which collects data for a subset of students that made sense in 1950, but not in 2013.  That isn’t because the government doesn’t care about the large (and growing) group the metrics miss- indeed, understanding their success will be central to the future success of financial aid and higher education in the U.S.  We ignore these students because we have passed a law that makes it impossible for us to track them.

If the Obama Administration, non-profits, and education policy advocates are really serious about wanting to better understand the outcomes of college, legalizing and facilitating the creation of a highly secure but highly comprehensive student-level data system would give them the yardstick against which to measure their own success.

“If you want a fair opinion of dogs, don’t just ask the fire hydrants.”

There’s some meaning in this old survey research mantra for the Payscale.com data- we need to seriously consider the limitations we know about, and the many we don’t- most of which come back to the simple question of who is really getting asked.

But, broadly, this is about voices being heard- Parents, students, and taxpayers deserve more intelligent answers to questions about the broader return on their investment in higher education.  It’s the very sort of idea that might be debated and applauded within the hallowed halls of the higher education institutions whose representatives have fought the hardest against it.  While it can be a little depressing to the ol’ idealism, the politics of higher education are real, getting a counter-argument heard will require time and funding, and good ideas without strong advocates often remain only that.

This one is worth getting right.

Everything You Know About the CLA Is Wrong

The Curious Evolution and Disruptive Future of the Collegiate Learning Assessment

This week the Wall Street Journal carried a front-page article entitled “Colleges Set to Offer Exit Tests” (the online title is different, and misleading in a different way).  The prominent placement was not surprising given President Obama’s recent tour of colleges promising a more explicit look at the outcomes of higher education than rankings currently provide (more on outcomes and rankings in two upcoming articles…).  Still, the title caught me a little off-guard given that I work in higher ed and had not yet received the memo that we were all administering exit tests this year.

As it turns out, the article is a glancing look at the administration of the Collegiate Learning Assessment Plus (formerly the CLA, now abbreviated CLA+ rather than the less fortunate acronym “the CLAP”) at about 200 colleges this fall.  It’s important to note up-front that the CLA+ is neither an exit test nor a “post-college SAT” as suggested by the online headline- and I’ll explain why, shortly.  Yet, while the article is problematic in lots of ways (starting with its title), it is the first CLA+ mention of any length that I have seen in the popular press, and seemed worthy of a follow-up.  That’s because while this limited administration of the CLA+ is far from revolutionary, it’s very possibly the first step towards something much bigger.

[Note: since writing, I’ve noticed that the Chicago Tribune has posted a similar article and borrows the “exit exam” language for its title, as have Fox Business, Business Insider, and that beacon of higher education investigative reporting: Cosmopolitan Magazine.  The CLA marketing team must be on the move!]

Most folks who work in or around higher education know that the CLA didn’t spring fully-formed from the head of Zeus this past week.  Developed in the late 90s and released in the year 2000 with funding from the Council for Aid to Education (a major tax-exempt charity started by a collective of businesses), it has become a sort of darling of accreditors and assessment groups like the Wabash Study while managing to gain the trust of a wide cross-section of traditional public and private institutions.

Unlike similar tools developed by the ACT (the CAAP) and ETS (the Proficiency Profile), which have been more widely adopted by public 2-year and a subset of public 4-year institutions participating in the VSA, the CLA has seen penetration in many “elite” privates and publics- last year it was administered by Amherst and UT-Austin, and dozens of highly selective schools that wouldn’t touch something like the ACT’s CAAP with a ten foot pole have used the CLA at least once.

Part of the very reason these schools have found the CLA appealing is that it is not an “exit test”- a term typically reserved for an exam that one must take, and often must pass, to graduate.  In fact, using it as an exit test was impossible- the CLA has noted from the outset that, unlike tools such as the SAT, the CLA was intended only for measurement at the institutional level, primarily for internal assessment.  Further, the CLA has always been (and thus far still is) entirely voluntary- institutions can’t require it of their students.  Instead, it is administered to a sample of both first year students and seniors who often receive some sort of incentive in exchange for the roughly two hours spent completing the computer-based assessment, which consists of a “performance task” requiring students to sift through a digital document library to answer a series of questions in a process intended to replicate “real-world” decision-making.  The written responses are scored by a computer program, which the CAE argues has reliability levels similar to two trained human graders (but which some critics have suggested can be gamed with nonsensical answers).

The CLA uses a regression model for its institutional reports (which has become at least less questionable since they switched to HLM in 2010) to control for student and campus characteristics and show change in students at the group level over time.  After the administration, schools get a student-level summary file and an institutional report showing whether their students improved more or less than predicted. Following pushback when institution names were released along with their scores after an early administration, later administrations simply provided reference to a “comparison group” and explicitly discouraged institutions from publicizing their own results.
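
A rough sketch of that value-added logic (a simplification of CAE’s actual approach, which uses hierarchical linear models and more covariates; the frame institutions and its columns are hypothetical):

    import statsmodels.formula.api as smf

    # Regress mean senior CLA scores on entering ability at the school level,
    # then compare each school's actual senior mean to the prediction.
    fit = smf.ols("senior_cla_mean ~ entering_sat_mean", data=institutions).fit()
    institutions["value_added"] = (
        institutions["senior_cla_mean"] - fit.predict(institutions)
    )  # positive: students improved more than the model predicted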

The important take-home here is that the form and function of the CLA up until 2012 were targeted at a core user base that bought into the concept of the CLA because it provided a form of direct assessment that could be used internally and reported to accreditors without risking a challenge to their reputation.  I would suggest that the vast majority of institutions using the CLA this fall will be using it in the same way that they have for the past dozen years- as voluntary administrations to gather internal institutional metrics and satisfy accreditors.  But this year they, and even those institutions that took it years ago, will be complicit in something larger that the Journal article correctly alludes to- setting up the CLA as a potential individual-level credential.

Assessment professionals received the first glimpse of this change as part of an email from the CAE in the fall of 2012.  Here’s an excerpt:

“…it is with tremendous excitement that we share our next enhancement, CLA+, a version of the CLA that is designed to provide reliable information at the student (in addition to the institutional) level.

 Launching in beta this spring and more formally next fall, CLA+ will, among other things, allow faculty to share formative feedback directly with students and open use of the assessment to the unique needs of each campus. The development of this enhanced version of the CLA will also allow the reporting of even more subscores (like scientific and quantitative reasoning, and critical reading and evaluation)”

Did you catch it?  The opening phrase is marketing language targeted at the CLA’s traditional core audience- faculty committees and assessment contacts at regular ol’ universities and liberal arts colleges- and that’s certainly how the second paragraph reads, quite intentionally.  But hear-you-me, a claim buried in that first sentence- that this new version can now provide “reliable information at the student…level”- marks the opening gambit to become a whole ’nother kind of heavy-hitter in higher education and beyond.

To do this, the CLA is taking advantage of an unspoken but widely understood bargain made by these traditional institutions- they were willing to administer the CLA and to suggest that it was at least the “most accurate metric available” for measuring what college is “really supposed to do”, as long as this satisfied accreditors’ demands for direct assessment without posing any risk to their reputation.  In the old model, institutions remain the keeper of the keys for certifying whether students have completed a college education, and the CLA is a tool they use privately to improve.

Now, for the first time, the CLA is able to say to for-profits and third-party providers (who may in turn target the test to adults entirely outside of the traditional system) “here is a metric that some of the country’s most elite colleges have said is the best tool for assessing their students’ progress.  If your students take this and do well, they must be as well prepared as students in those colleges.”  I predict it won’t be long at all before we see those exact claims coming from these spaces- for-profit StraighterLine is already advertising the CLA+ at a heavily marked-up price for use as an additional credential to offer employers.

Read in this light, the other changes from the CLA to the CLA+ take on new implications:

  • The addition of a new explicit quantitative metric (mimicking the verbal/math component of the SAT/ACT and establishing itself as a “complete” assessment of workforce skills)
  • A shift in the scoring to the “more recognizable” 1600-point scale (the metric used by the SAT from almost its inception until a few years back, when the new and still inconsistently-used writing section bumped the top score up to 2400- nearly every employer would still be more familiar with the 1600 scale)
  • No longer date-dependent (the old CLA required you to administer the exam to students within a limited testing “window” in the fall and spring- now, if you want to take the CLA on Christmas Eve or the 4th of July, go for it!)
  • Now openly and explicitly about assessing individual students

It is certainly true that if you believe the CLA is a reliable and valid tool, then it continues to have some real new value for traditional colleges- as a potential placement test, an assessment of subgroups of students (like a remedial pre-college program that meets in the summer), or possibly even a service for interested students seeking formative feedback or a supplemental piece of evidence.  Yet all of this pales in comparison to the new potential for the CLA to be used by non-traditional institutions and the for-profit third-party education space- and you can see that in the shift in its marketing.

There have been efforts to use the CLA in a more public and comparative way before- Academically Adrift, 2011’s higher-education-is-failing beach-read, purported to show that the old version of the CLA was reliable and valid at the individual level (a claim that CAE took up after the publication of the book despite continued questions about its methodology) and that most students improved very little over the course of their college careers.  In recent years, community colleges and for-profits, having little to lose and potentially much to gain in the way of reputation, have pushed for the ability to publicize CLA scores- all to little avail.

This Time Could Be Different

The CLA+, though, represents the first concerted push from CAE itself to become a major player in the individual-level assessment business (a multi-billion dollar industry, unlike the small-change niche of institutional assessment), and the timing has never been riper.  Let’s consider the higher education ecosystem the CLA+ is stepping into:

  • There is increasing public distrust of the value-added by higher education compared to its cost, fueled in part by rising tuition costs and student debt; institutions with more to lose than they have to gain (largely elite institutions where enrollment streams are built on “reputation”) have led pushback against public and standardized metrics, but they represent a declining percentage of higher education space in terms of both enrollment and lobbying dollars.  Meanwhile…
  • The percentage and number of students attending “traditional” for-profit colleges (where the emphasis is on a student receiving a degree from that single institution) has exploded since the early ’90s.  They have been pressured more than any other sector to provide evidence of outcomes, and, with little to lose in the way of reputation, increasingly see value-added metrics as a way to set themselves apart from public and non-profit counterparts.  The same could be said of community and technical colleges, which research indeed shows can be the best bang for your education buck but which provide little in the way of reputational capital.
  • We may be seeing the first stages of an impending disruption eruption from unbundled, largely for-profit educational spaces that provide ways for students to abandon the traditional model by picking up credits, certifications, and experiences from multiple spaces.  These include not only MOOCs, but a growing number of other stand-alone, largely for-profit spaces.  Expert after expert has said that the key missing element is a viable credentialing option- and there’s money to be made for whoever figures it out.
  • Colleges serve many purposes, but most businesses will acknowledge that at least one of those purposes is filtering- a way to narrow the resume pile in an employer’s market.  Yet as the higher education landscape has become more expansive and more students have entered the pipeline, that filter has grown more porous.  We want access and completion to be a conversation about skills, but for employers with limited spots, it can be a conversation about numbers- assessments like the CLA, with their now “familiar” 1600 scale, could provide a new, more standardized filtering metric (in the same way that colleges use tests like the SAT)

For these reasons alone, the likelihood of something like this happening was high, and just as with college and graduate school admissions tests, there will be a great deal of money to be made in the tests themselves, services around their administration (like verification and testing centers), and preparation for them.  These are the comparisons worth drawing to tests like the SAT- except there’s one HUGE difference that all of these articles have missed and the CLA hasn’t acknowledged.

You’ll note that earlier I referred to the claims of individual-level reliability (how consistently a test measures what it measures) and validity (how well-aligned a test is with whatever real-world skill it is trying to measure).  Both of these aspects of measurement can be incredibly sensitive to the motivation of the students completing the test (essentially, how seriously they take it), and here is the big thing we don’t know yet:

What would happen if students started studying for the CLA+ or something like it?  Currently the CLA model is very explicit: students aren’t supposed to study.  That makes sense when it is being used to capture institution-level changes in students over four years- students have no incentive to study for their own sake, and institutions probably get a rough sense of how students stand “as-is.”  But just ask any high school junior and you’ll learn that the era of students taking the SAT without studying has gone the way of the dodo.  Similarly, if the CLA starts to be used to judge even a subset of institutions in any real way, institutions will start operating under a very different set of incentives.  When some institutions and some students are using the test in a high-stakes way and others are using it in a very low-stakes way, the test becomes a less reliable comparative measure of actual institution-level growth and increasingly becomes a test of short-term preparation for that particular instrument.  That criticism is regularly lobbed at the SAT and ACT, but they at least have decades of experience norming their tests as high-stakes instruments, and they are almost exclusively used that way.  What we have also seen is that when tests actually matter at the individual level- when students will study because it matters to them (and their parents)- a for-profit industry will arise to game them.

How could this play out?

  1. I expect that we may already start to see a bit of this in for-profit spaces where it is being hawked as an alternative credential.  The problem right now (and, really, for any CLA claim of validity) is that there isn’t any firm evidence that the CLA is predictive of on-the-job success, or that a CLA score, good or bad, has any correlation to success in the job market.  Still, for-profits will latch onto the CLA’s language that it assesses the “sorts of skills that employers identify as most valuable in X survey,” and I won’t be surprised to see a gold seal reading “As featured in the Wall Street Journal” spring up on the websites of for-profits offering it directly.
  2. Watch for whether any college or university starts to use the CLA as a required, binding exit test.  As noted earlier, the CLA is currently voluntary, and even if it were made mandatory, its potential for reducing graduation rates even slightly would serve as a huge disincentive for an institution of any type to require some level of “passing” score for graduation.  Still, it’s not impossible that a for-profit, community college, or non-selective 4-year institution might make a gambit- higher education is becoming a competitive space, and the potential reputation-boosting upside might be worth it as an experiment.
  3. It’s unlikely, but not impossible, that a state government or the federal Department of Ed will offer either incentives or exceptions to institutions requiring something like the CLA.
  4. More likely, we’ll see for-profit college guides and ranking lists start to request institutional CLA scores- first voluntarily, then, possibly, as a requirement.
  5. Perhaps the most potentially disruptive possibility is that colleges will not require something like the CLA, but Amazon, McKinsey, Microsoft, or another high-prestige employer starts accepting an alternative credential and makes the argument that it has worked for them as well as or better than college as a predictor of workplace success.  This, like the use of the tool as a binding exit test, could flip the motivational switch for students and change the way the CLA works both on the ground and in the marketplace.

Some in higher education will see the CLA’s move as a bait-and-switch, although a dozen years of gaining credibility with the traditional higher ed sector before diving head-first into the big-money world of the non-traditional sector seems like a particularly ambitious long-con.  But whether institutions took part in the CLA ten years ago or are thinking about it next year, they need to understand that these changes, and the motivations they represent, matter in the present.  We’re almost certainly going to see something happen in the next few years around credible 3rd-party credentialing, whether the CLA+ or something else, and it will change the way that potential students and employers consider college- if they consider it at all.

Data Visualizations of the New 3-Year Default Rates

I’m spending some time this weekend looking through the 2009 3-year default rates in preparation for the release of the 2010 3-year rates (likely this September).  For years, the DOE has “officially” reported only the 2-year rate, even though study after study has shown that the 3-year mark better captures eventual long-term default rates (and many researchers have recommended up to a 5-year rate, noting that default rates don’t actually level off until about that point).  Starting with the 2005 cohort (released in 2008), though, the federal government began to collect and distribute 3-year rates on a trial basis.  In 2012 it for the first time released “official” 3-year rates, for the 2009 cohort.  For whatever reason, those 3-year rates haven’t yet made it to the site intended for “consumers,” only to the more data-heavy, Windows-95-ish “default management and prevention” data page.

The difference between these two rates is significant, and my guess is that it also varies further by sector and certainly by institution.  What we know for sure is that for the 2009 cohort, the 2-year default rate overall was 9.1% while the 3-year rate was 13.4%.  For those keeping score at home, that means default rates increase by almost 50% between the 2nd and 3rd year ((13.4 − 9.1) / 9.1 ≈ 47%).  Yet, for some reason that escapes me, nearly all of the public coverage on this issue (and certainly all of the better data visualizations I have seen) continues to use the 2-year default rate as the frame of reference.  Things are going to get serious around these 3-year rates soon- starting with the 2011 cohort rates (which will likely be released in early fall of next year), institutions with greater than 30% of students in default will face sanctions (that’s a shift from the current 25%-at-2-years threshold, and given what we know about how rates change from year 2 to year 3, it should be a stricter standard).

So, with that in mind…

Here is a first pass at some interactive data visualizations of the 2009 3-year student loan default rates by institution, mapped by zip code, colored (and filterable) by institutional sector (public/private/proprietary), and additionally filterable by institutional default rate and institutional type (Associate’s, Bachelor’s, MA/PhD, etc.).  The size of each circle represents the actual number of students in default at each institution, not the default rate (but remember, that’s a filter); while that tracks institution size to some degree, you may be surprised at how much variation there is across similarly-sized institutions.  You can mouse over (or tap on iPad) to get detail on the institution represented by a particular circle.  To see which schools are at risk of sanction under the new policy, just limit the default rate to >30%.  To get the full-screen experience, just click here.
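
(The interactive versions here are built in Tableau, but if you’d rather poke at the raw file in code, here’s a minimal sketch of the same encodings in Python.  The file name and column names are hypothetical stand-ins for whatever headers the DOE file actually uses, and you’d need to geocode zip codes to coordinates first.)

```python
# Minimal sketch of the same encodings in Python/matplotlib.  File and
# column names (rate, defaults, sector, lat, lon) are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("default_rates_2009_3yr.csv")  # hypothetical filename

# The interactive filter for schools at risk of sanction: rate > 30%.
at_risk = df[df["rate"] > 0.30]

colors = {"Public": "tab:blue", "Private": "tab:orange", "Proprietary": "tab:red"}
plt.scatter(at_risk["lon"], at_risk["lat"],
            s=at_risk["defaults"],            # circle size = # of students in default
            c=at_risk["sector"].map(colors),  # color = institutional sector
            alpha=0.5)
plt.title("2009 3-year cohort: institutions above the 30% sanction threshold")
plt.show()
```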

Let me know your thoughts- what variables are missing here?  What are other visualizations that might be helpful?  I’m thinking about a look at institutions/sectors where the jump from 2-year to 3-year default rates is particularly large, and some views that take into account changes in default rates since the 3-year data was first collected with the 2005 cohort.

Since the initial posting, I also threw this next viz together- a scatter of # of students in default (logarithmic) by default rate, with color corresponding to sector and shape corresponding to institution type.  As with the last viz, you can filter by institutional sector and type.  Again, here’s a link that will take you to a full-screen version.

This gives a better sense of the spread within sector than the previous version.  Note that while all sectors show a spread across # of students in default (reflecting the diversity of each sector), the actual spread in default rate is particularly wide for proprietaries (for-profits).  Private not-for-profits tend to cluster furthest to the left, then publics, then for-profits, but remember that this is largely a reflection of the student populations they serve- if you control for type of institution (2-year vs. 4-year vs. MA/PhD), publics and privates start to look more similar along the rate axis (publics are almost always larger as a group), reflecting the concentration of 2-year/Associate’s institutions in the public sector.

To go back to the geographic theme, here’s a state-level view where, as with the national map above, circle size corresponds to the actual # of defaulters at a given institution, but here the circles are colored by a scheme based on the default rate.  Relatively lower rates are greener and relatively higher rates are redder (so a large school might, not surprisingly, have a larger number of defaulters, but that school might still have a proportionately small default rate and thus would show up as green).  This may be unintentionally confusing in Wisconsin where, between UW and the Packers, red and green arguably both have positive connotations, but I think you’ll get the idea.  As with the previous vizzes (vises? vizes? vizs?), a full-screen version can be accessed here:

Update:

One suggestion (thanks to dfcochrane for both the suggestion and the prediction of the color scale problem!) was to simply scale up the Wisconsin version (which colors default rate on a green-to-red scale) to the national level.  Straightforward… except for those pesky little institutions with a single borrower in repayment who also happened to be in default, making the overall default rate a whopping 100% (A+?).  That skewed the scale, making nearly everything that you could see without zooming somewhere between “green” and “forest green.”  Solution?  I shifted the center of the color scale to 13.4%, the average institutional default rate, so now every green circle is a school with a rate somewhere below 13.4% (below average) and every red circle is a school with a rate above 13.4% (above average).  Remember that you can use the default rate slider to limit further, such as only looking at schools with rates at or above 30% (and thus at risk for federal sanctions).  Full-screen view here:
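
(For anyone reproducing this in code rather than Tableau, here’s a minimal sketch of re-centering a diverging color scale; matplotlib’s TwoSlopeNorm puts the colormap midpoint at the average rate.  As before, the file and column names are hypothetical.)

```python
# Re-centering a diverging color scale at the 13.4% average rate, so
# green = below average and red = above average (RdYlGn reversed).
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import TwoSlopeNorm

df = pd.read_csv("default_rates_2009_3yr.csv")  # hypothetical filename

norm = TwoSlopeNorm(vmin=0.0, vcenter=0.134, vmax=1.0)  # midpoint = average
plt.scatter(df["lon"], df["lat"], s=df["defaults"],
            c=df["rate"], cmap="RdYlGn_r", norm=norm)
plt.colorbar(label="3-year default rate")
plt.show()
```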

Another interesting request (thanks, n_hillman, for the idea and the link to the source data) was to color/filter by accrediting agency.  There are, of course, SO MANY accrediting agencies if you consider all of their sub-regional and specialty variations that you would definitely need a Crayola Big Box range of colors, and there would be a lot of “Is this ‘Screamin’ Green’ or ‘Granny Smith Apple?’”-type questions.  Given that, I’ve decided to limit to institutional accreditors, and then I’ve further limited those to institutions covered by the “Big 8” regional accreditors.  That said, I think it might actually be interesting to look for patterns within some of the lesser-known accreditors.  Full-screen view here.

Side note- using a “color-blind” palette below after hearing from a colleague that a non-negligible number of readers (disproportionately men) have trouble making distinctions on most color-coded charts of this style.  If you normally have trouble with red/green, would love feedback.

 

Other ideas (for these or other datasets that might benefit from a little visualization)?  Edits? Recipes?  Let me know!

Rhetoric vs. Reality: The New Student Loan Rate Proposal

Updated 7/24/13: To the chagrin of some Democrats, the proposed deal described below passed the Senate today by a vote of 81-18, and the House is expected to act on the bill by the close of next week.  Loan rates will be tied to the 10-year T-note.

A Senate committee announced this week that it had reached a compromise on student loan interest rates, and here’s (part of) what it says: Effective on passage, undergraduate interest rates will be “lowered from 6.8% to 3.85%” [note: newest language says 3.86%], graduate school interest rates will be “lowered from 6.8% to 5.4%”, and PLUS loans (which are available in addition to Stafford loans) will be “lowered from 7.9% to 6.4%”.

Why the doubt-inducing quotes?

Mostly in response to the overstated certainty, oft repeated in headlines like this or this, that student loan interest rates are, well, being lowered.

In the days ahead we’re going to hear that claim often (as well as equally extreme denunciations of it- more on that at the end), echoed by politicians on both sides of the aisle and from the White House.  And, to be sure, it is true. But only technically.  And only for the moment.

Let’s get right to it:

First the “technically” part-

This statement is probably most misleading in reference to undergraduate student loans.  Although the “official” rate for undergraduates has been 6.8% since 2006, the College Cost Reduction Act of 2007 steadily cut undergraduate interest rates to 6% (2008-9), 5.4% (2009-10), 4.5% (2010-11), and most recently 3.4% (2011-12).  So, to clarify, until July 1 of 2013, the official fixed interest rate was 3.4%, where it had been for two years.  That decline was not just the government cutting students a sweetheart deal; it matched a similarly steep decline in the interest rate of three-month treasury bills (you can do your own exploring here), which student interest rates were pegged to until 2006 and will be again any minute now.  Those 3-month T-Bill interest rates haven’t skyrocketed, so walking the rate back down from its automatic increase to 6.8% a few days ago has been a foregone conclusion for weeks, and one of the few truly bipartisan issues in Congress (every district has college kids, or parents of college kids, or people aspiring to be parents of college kids).

So why didn’t they change the rate sooner?  Some have called the delay a political tactic that allowed members of both parties to claim that the interest rate has “fallen,” even though it is technically higher now for undergraduates than it was three weeks ago.  Making the decision before the deadline would have had congressmen telling their constituents that rates had “only increased 0.45%,” which admittedly doesn’t have the same ring as “slashed by a whopping 2.95%!”  The latter really lends itself better to the exclamation mark.

For graduate students, who are a less potent political constituency, the story is a little different.  First of all, the College Cost Reduction Act didn’t affect the graduate student interest rate, which has sat at 6.8% since 2006, so 5.4% is by all accounts a decrease from the rate on loans issued over that period.

Two other big differences for graduate students: 1) Starting in 2012, subsidized deferment was no longer an option for grad students- that means that, unlike undergraduates, who can defer interest until they graduate, graduate students start accruing interest almost immediately, while they are still in school. 2) Graduate students can borrow more (the limit for grad students is $138,500 [$65,500 of which can be Stafford], compared to $35,500 for undergraduates [$23,000 of which can be Stafford]).  No surprise there; grad school tends to cost more, and grad students often study for more than 4 years, but this has some big implications when you’re talking about economies of scale.

And now that “just for the moment” part…

…and that’s the switch from a fixed rate to a variable rate.  In other words, student loans will switch to 3.85% and 5.4% if the legislation passes, but could rise shortly thereafter.  While caps have been proposed (8.25%, 9.5%, and 10.5% for undergraduate, graduate student, and PLUS loans, respectively), a quick glance shows that all of these rates are significantly higher than the current rates- the undergraduate rate, for example, could more than double.

Some, including Elizabeth Warren, have compared the proposal to the tactics of her arch-nemeses, the credit card companies, likening the low “introductory rates” of student loans to a bait-and-switch.  Yet the argument at this extreme is a bit unconvincing as well.  First, the loans are still “locked in” for the borrowers- if you start a loan at 3.85%, that remains your rate for the life of that loan even if the rate for newly issued loans rises.  You can argue, as Warren has, that that’s like charging “current high school sophomores to pay for current college sophomores” (because when the high school kids are in college, their higher rates will subsidize the debt burden of the old college sophomores, in an attempt to ease the debt burden of the… you get it).  But let’s remember that although the rate has been “fixed” for the past 8 years, it has changed *five times* during that stretch.  That’s not exactly stable, and not fair if we’re defining fair as equal interest rates regardless of context.

Additionally, some economists argue that the old system (soon to be called the new system) of pegging student borrowing to treasury bills is more efficient because it better reflects the market and the potential of savings to contribute to repayment, and it more rapidly responds with lower rates in tough economic times (this historical chart shows how variable rates played out through the ’90s)- it’s worth noting that loans taken out before 2006 are now hovering around a 2% interest rate, and it’s hard to beat that.  Republicans have suggested going a step further, slightly increasing interest rates in order to help repay the national debt- they claim they can get $715 million in the next decade (which seems like a lot, although, to contextualize it, that’s about 34% of the price of one B-2 bomber).  Others have expressed discomfort at the idea of the government “profiting,” as they characterize it, from student loans, arguing that the government has a public obligation to make higher education accessible.

It is important to note, as many have before me, that if we’re talking about an undergraduate who has borrowed $25,000 for college (remember, that’s the maximum Stafford, plus $2,000) and we’re thinking about a standard 120-month repayment, then the real differences we’re talking about here are small.  The shift from the 3.4% of a month ago to the 3.85% of today comes out to a $5 difference per month; at the absolute high end, the cap of 8.25%, that moves up to $55 (about $30 for the average undergraduate borrower), which is admittedly non-trivial if I’m remembering my first paycheck correctly, but most folks think it is unlikely that we’ll see rates go anywhere near those levels without a major economic recovery.
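
(If you want to check that arithmetic yourself, here’s a minimal sketch using the standard amortization formula- a back-of-the-envelope helper, not a full repayment calculator.)

```python
# Standard amortization formula: the fixed monthly payment on a loan of
# `principal` dollars at `annual_rate`, repaid over `months` payments.
def monthly_payment(principal, annual_rate, months=120):
    r = annual_rate / 12  # monthly interest rate
    return principal * r * (1 + r) ** months / ((1 + r) ** months - 1)

old = monthly_payment(25_000, 0.034)   # ~$246/month at the old 3.4%
new = monthly_payment(25_000, 0.0385)  # ~$251/month at the proposed 3.85%
cap = monthly_payment(25_000, 0.0825)  # ~$307/month at the 8.25% cap
print(f"3.4% -> 3.85%: about ${new - old:.0f} more per month")
print(f"3.85% -> 8.25% cap: about ${cap - new:.0f} more per month")
```

The same helper reproduces the graduate-debt figures a couple of paragraphs down.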

For this reason, most commentators, like Jason Delisle of the New America Foundation, have been dismissive, saying that single-digit percentage point shifts in interest rates are worth noting when considering a huge-ticket item like a mortgage, “but a home mortgage is $200,000, $300,000, $400,000. So moving the interest rate a little bit lower makes a big difference in your monthly payment. That’s not so on a $20,000 student loan.”  Noted.  Sort of.  But…

This tale of woe can be a bit more dismal for graduate students, who, because of their different federal borrowing limits, are more often referenced in the particularly dire stories carried by the press recently- these are the students with $120,000 of debt (which, to Delisle’s point, can buy a decent starter home in most of rural America).  Some of these borrowers are professionals with M.D.s, J.D.s, or M.B.A.s that should make repayment a breeze (although that proposition is increasingly questionable for the latter two), but many are PhDs or M.A.s in non-lucrative fields (who the public increasingly seems less likely to feel sorry for) and M.A.- or certificate-seekers on non-traditional pathways scraping together classes at night, online, and at for-profits (who the public mostly just doesn’t think about.  Think “graduate student” and I’ll bet you’ll picture a twenty-something at a research university library, but borrowing trends tell us otherwise).  For these students, let’s imagine a less extreme and not uncommon amount of graduate debt- $70,000.  That seems high, because average grad student debt is around $30,000 for M.A.s and $50,000 for PhDs, but remember that those numbers vary a great deal between fields, depending on how frequently aid and TA/RA positions are available.  With the shift from 6.8% to 5.4%, those students will save about $49 a month (although a $756 monthly payment at the lower rate is nothing to sneeze at).  Should the rate increase to the max of 9.5%, a student with that debt would pay $905 per month- up $100 from the current 6.8% rate.

I’ve used the standard repayment option here, and there are others: extended repayment, income-contingent repayment, income-sensitive repayment, graduated repayment…. but if you’re getting overwhelmed already, so are most borrowers, whose exit counseling typically consists of skimming a few webpages and answering some multiple-choice questions.

So let’s not overstate the magnitude of the plan- whether you think it’s good or bad, it’s not a revolution- but let’s also avoid being so dismissive that we miss the individuals at the extremes.  For now, keep your eyes open for last-minute tweaks and compromises (some Democrats are pushing a last-ditch effort to lower the caps) and all of the tap dancing around language that will follow from both sides.

And, hey, not to make Senator Warren nervous, but if you happen to find yourself in the market for higher education at this particular moment, then you might want to lock in now.  These deals won’t be around for long.

 

Stirred, not Shaken- a Response to “Let’s Shake Up the Social Sciences” by Nicholas Christakis in today’s New York Times

An opinion article entitled “Let’s Shake Up the Social Sciences,” by Yale sociologist-slash-physician Nicholas Christakis (most sociologists try to diagnose society’s problems, but he can actually write them a prescription), appeared in today’s New York Times.  In it, Christakis reflects on what he characterizes as a Darwinian evolution of the natural sciences since his days in graduate school, with departments like anatomy, physiology, and biochemistry disappearing or gaining relic status while departments of neurobiology, systems biology, and stem-cell biology have risen to take their place.  Meanwhile, he suggests, the social sciences are still stuck with the same majors your grandfather might have encountered (sociology, economics, anthropology, psychology, and political science).  His read of this stability is that it is “not only boring but also counterproductive”- this because the seeming inability of social scientists to “declare victory” on particular areas of research limits work at the frontiers of discovery and undermines their credibility with the public.

Christakis’s solution includes a mass redeployment of practitioners to new fields (he gives social neuroscience, behavioral economics, evolutionary psychology, and social epigenetics as possible avenues), manifested in the creation of “social science departments that reflect the breadth and complexity of the problems we face as well as the novelty of 21st-century science”.  He makes a quick pivot from research to pedagogy near the close, hypothesizing that these new departments could better train students by challenging them to investigate in “labs” using “newly invented tools” that make it “possible to use the internet to enlist thousands of people to participate in randomized experiments.”  In the end, though, his key premise and conclusion are more about changing “institutional structures”- he offers up departments of biosocial science, network science, neuroeconomics, behavioral genetics, and computational social science as possibilities (probably no surprise, but Christakis is also the very recently-named director of the Yale Institute for Network Science).

Let me be the first to grant Dr. Christakis’s point that there is a depressing dearth of engagement in the social sciences with recent developments in computer science, biology, genetics, and technology (with the possible exception of statistics, a hot, even if often misunderstood, commodity in most social fields, where the lag between the social sciences and health sciences is more like 5 years than 15 or 50).  But to cite the work of Janet Weiss (the current dean of the graduate school at U Michigan, with an interdisciplinary background and set of research interests after Christakis’s own heart), I would argue that while we might agree on this as a problem, Christakis’s “theory of the problem” (his “this-caused-that” story of how the problem comes to be) seems underdeveloped, and it leads him to a “theory of desired outcome” (what you want to happen, and how it addresses the problem) and a “theory of intervention” (what can/should be done to cause the desired outcome, and how it fixes the problem) so heavy on departmental reorganization that they are likely to have unintended consequences.

First of all, the “Go-west-young-man!” frontier mentality doesn’t exactly work for the social sciences.  Quotes like “everybody knows…that people are racially biased and that illness is unequally distributed by social class” may not be saying “race and stratification don’t matter anymore” (although I’ll bet it won’t take long to find examples of those reading it that way), but even his clarification that “There are diminishing returns from the continuing study of many such topics.  And repeatedly observing these phenomenon does not help us fix them” seems at least a bit dismissive.  I’ll give you that, yes, in the natural sciences, once you have determined that the heart pumps blood and you have observed that in n = 1 billion patients, you can pretty much close the book on the “does the heart pump blood?” question.  But here’s the difference- my heart pumps blood pretty much exactly the same way as it did for my ancestors in 1843, or 1955, or 1021, or what have you; our understanding of that process has changed across those time points, but what we are observing has not.  Saying that the way racism plays out today and the way health care is stratified today are the same as they were even ten years ago is a fallacy, and if you build public policy around the way things were in 1955, I’m going to predict that it will probably not go well for anyone.  This is why studying things like race, even using some of the same ol’ methodologies as our predecessors, has real and continued value over time.

Again, I don’t think that’s really the central argument Christakis is trying to make, but it has real implications for his solution that social scientists “devote a small palace guard to settled subjects and redeploy most of their forces to new fields.”  That’s because inasmuch as a subject becomes “settled” in sociology, anthropology, or political science (and, to a degree, psychology and economics), it is also time-stamped (for delivery to the historians, many of whom, by the way, also consider themselves social scientists); racism in 1995 might be settled(ish), but racism in 2013 is still pretty fresh, because the social sciences are largely about context, and contexts change.  That means that there is not a wellspring of untapped research bodies that can simply be redeployed, at no cost to public policy, by recognizing that certain strains of research no longer have value.  This isn’t like a natural scientist saying “ok, gravity is a thing, let’s check that off the list.”  This is more like a biologist saying “Ok, we figured out what samples of this incredibly fast-evolving strain of bacteria looked like 2 years ago, so I don’t think we ever need to check in on it again.”  Tackling the frontier of research requires either recruiting new researchers into the field or redistributing existing resources among research projects that, for the most part, all have real value.  So, with that in mind, let’s turn to Christakis’s solution.

First and foremost among the limitations of the article is that Christakis fails to make a compelling argument for why “departments” are the correct unit of analysis.  He offers no reason, for example, why departments can’t keep their names while evolving in their research and pedagogy.  The research conducted in chemistry departments, biology departments, and electrical engineering departments today little resembles the research they were doing 20 years ago, even though their names have remained stable.  This is in no small part because research funding, which drives research agendas just about everywhere except at the Institute for Advanced Study, has favored a growing emphasis on these new areas of research.  It is also because scholars in these fields have an interest (in addition to the aforementioned financial one) in doing work that is both innovative and valuable.  Similarly, it is unclear why the new pedagogical “tools” Christakis refers to can’t be used in existing courses, but this seems added as a quick aside to his central interest in research.  I would suggest that maintenance of the current research status quo may have more to do with the availability of funding that targets this type of research, silos between existing departments, and, indeed, the concept of the “department” itself.  More on that at the close…

There are other reasons why the sort of fusion between natural and social sciences that Christakis sees as necessary and inevitable is more complex a proposition than he suggests.

  • The founding fathers of sociology and anthropology, including Franz Boas, saw their work as very closely linked to the traditional sciences, borrowing heavily from observational methodology and attempting to use contemporary biological tests (such as those for blood type) to link cultural characteristics to biological ones.  This early research helped lead to an explosion of new social science departments across the country.  It also served as the intellectual basis for much of the eugenics movement in the early part of the 20th century.  Although we might argue that both the methodology and goals of today’s research are wildly different, the barrier that has been put up between the natural and social sciences since that time is a historically and politically fraught one.
  • Interdisciplinary departments, committees, and majors (the latter two often serving as a transitional step before an interdisciplinary department) are not new to the Social Sciences.  Chicago’s “committee” structure is perhaps the best known and one of the longest standing.  However, with few exceptions, the faculty who staff these programs have their degrees from traditional departments- job placement in higher education is a cyclical process and development of departmental reputation takes time.  A newly-minted PhD with an interdisciplinary degree is likely to have more trouble than an equally qualified student with a PhD from a traditional discipline, and right now it’s a buyer’s market for tenure-track positions.  The dual doctorates of the author are impressive, but also reflective of a system that is heavily driven by traditional departmental divisions.
  • Full-on departments need new chairs (both leaders and seats for students), new offices, new labs, new equipment- and nobody gives those things away for free.  Colleges and universities (except the incredibly small number of schools with hedge fund-like endowments) are increasingly unlikely to approve departments in this financial climate without a clear case for them made in terms of net tuition or research funding.

So while I agree that there is a need for change, and while I think Christakis’s actual research is incredible and represents exactly the sorts of frontiers we should be exploring, I am neither convinced that reorganization of departments is a particularly feasible intervention, nor (more importantly) that the intended solution would bring about anything near the level of change that Christakis suggests.

What seems more likely is that the real future of this type of interdisciplinary work is not in the creation of new departments, but rather in thinking about whether there are ways to organize graduate training so that research is not driven by “departments” at all: Why is re-creating or repackaging what is essentially a political and hiring structure every time there is a new methodological breakthrough or opportunity for collaboration any more efficient than maintaining structures created for those same reasons 80 years ago?  Is there a better way to organize the training of students, the hiring of experts, and cross-methodological (can I just step in here to suggest that this term has more real meaning than cross-departmental or cross-disciplinary?) research in a world where technological breakthroughs are about as regular and surprising as political soundbites?  Can we imagine networks, exactly like those Christakis studies, of researchers that extend far beyond the bounds of departmental hallways and even of campuses or countries, while allowing opportunities for casual encounters and deep collaboration that are just as real?

Private research labs like Google[x] have met with success by allowing experts to eschew organizational and titular constraints to focus on problem-based work.  I’m not claiming that a similar model is the silver bullet for higher education, and research demands span far beyond the realm of driverless cars and Google Glass.  But with the baggage that comes with any sort of change in higher education, a departmental re-organization seems more like a 1917 solution than a 2013 one, and is more like stirring around the system we already have than a true “shaking up” of what we do.

So I certainly, and sincerely, wish new cross-disciplinary departments well and hope that they are just the first step on the pathway to something transformative, but if traditional colleges and universities are not yet ready to think more innovatively about how to bring about innovation, keep your eye out for who will.

How Google Hangouts Could Shape the Online Seminar

Google Hangouts

Although Google Hangouts, the flashy group-video chat component of Google+, hasn’t turned into the social-fabric-ripping (sewing?) force that Google hoped it would be, there is serious promise here- promise that may make it the future of collaborative learning, the seminar, and (I hope) the dreaded online workshop.

For those of you unfamiliar with their work, the principle behind Google Hangouts is pretty straightforward- most free video chat services (Skype, ye olde Google Video Chat, FaceTime) provide a one-to-one connection for two people, and… here ends the reading.  Skype provides what it calls “group video calling” for about 8 bucks a month, but Google has been the first major provider to offer it at the much more affordable price of free.

The Implications for Online Education

The implications here are important- one of the most alternately tiresome and effective critiques of education technology by the education establishment is that removing instruction from the physical classroom robs the experience of, and I’m paraphrasing here, the spidey senses of faculty, who, elder statesmen argue, are able to sense misunderstanding in the facial expressions of their audience and respond to it immediately.

Setting aside for a moment the debatable claim that the typical faculty member changes instruction based on the facial expressions of the audience, there is, embedded in this claim, a very limited understanding of what the online classroom can entail.  It is true that “lectures,” as such, have typically been discussed by Clayton Christensen here and elsewhere as being more efficiently delivered to large audiences by experts in either live-streaming or video form.  What that means, though, is that thousands of faculty hours previously dedicated to face-to-face yet one-way lectures can now be spent in interactive, synchronous, seminar-type discussions, while the one-way lectures are delivered in asynchronous form at the student’s own pace.  2U (the artist formerly known as 2tor) has been leading the charge with this model in higher ed (check out this Inside Higher Ed article on their MBA program at UNC), although it carries a MAJOR up-front cost to the institution and is still very limited in scope (a big benefit of 2U is their upfront investment in technology infrastructure and “production” of videos that incorporate the content of faculty lectures- somewhat akin to third-party dining services in education, which spend tens of thousands of dollars renovating dining halls and menus in exchange for a multi-year contract).

These online seminars replicate some of the most vital elements of the classroom (elements actually often lost in the typical lecture): the ability of all participants (not just the “instructor”) to respond to one another directly, ask questions at the appropriate pause, and gather facial cues to aid in interpretation.  The reviewer in the Inside Higher Ed article argues that the synchronous version was actually a more engaged experience than the typical seminar or lecture because faculty are able to “call” on individual students at any time, expanding their video screen for the whole class to see, and even when a student isn’t speaking, a live video of each individual is clustered in the upper right side of the screen.  It’s unclear how widespread this form of instruction is, but the author also mentions a social dynamic akin to the best parts of the traditional classroom cropping up in this setting as well- a few students stick around after class to ask questions, and a couple of students continue their conversation after the professor has left- no need to worry about a parking ticket or clearing out for the next class.

It’s still not perfect, and for most folks Google Hangouts will still be a bit of an adjustment.  One of the strangest things is the prevalence of near, but not actual, eye contact.  This can be mitigated somewhat with careful positioning of the webcam, but it does sort of change the “everyone in class is looking at me” feeling into an “everyone in class is looking at my chin” feeling.  Unless you’re willing to drop some serious bank, there isn’t a real solution to this yet, but look out for more integrated fixes as online video communication becomes increasingly important in the new economy.  Also, just as we have all had to learn cellphone etiquette (ok, we all should have learned cellphone etiquette.  You know who you are), there are some basics of videochat etiquette we’ll need to get used to as well, like positioning ourselves in a space with minimal background movement and distraction.

Still, this type of technology opens the door to the type of interactive learning that may be one of the best uses of the time savings we gain from asynchronous lectures, and it provides a more meaningful role for the truly great active teachers- those who are not only good at saying things clearly, but who are actually skilled at helping students learn.

Stay tuned for updates on this technology from Google.  Currently the free version supports up to 10 simultaneous users, similar to the average 2U class size, but that could increase in the years ahead.  Most recently, Google has announced a feature called Hangouts on Air- basically the ability to stream hangouts live to an unlimited internet audience.  If you’re tired of flying to conferences just to listen to panels in crowded hotel convention halls, this could be the next big thing.