The international transformation in student loan systems
By Bruce Chapman and Lorraine Dearden
The University of Bologna, considered to be the first official university, was established in the late 11th century, and some scholars in need of finances were offered loans. This provision was not formalised into a student loan system until 1240, when the Bishop of Lincoln established one using money held at the University of Oxford.
Many other universities followed suit, but it took until 1951 for the government of Colombia to initiate the world’s first national student loan scheme, known as ICETEX, which is still in (faltering) operation.
Over the 1960s and beyond, these arrangements became commonplace and today the higher education financing systems of the vast majority of countries are underpinned by student loan schemes.
There is a consensus in economics that loans are an essential part of government higher education policy, helping relatively poor prospective students pay for tuition and/or providing income support during periods of full-time study.
The reason is that, unlike in many other areas of funding (such as mortgages to finance the purchase of a house), commercial (bank) borrowing by students for human capital investments is unavailable, simply because in the event of default the lending agency bears all the risk: there is no collateral that can be sold to offset the cost of uncollectible debts.
Until 1989 these systems were all characterised by the collection of debt over a fixed time period, like a mortgage; they are known as time-based repayment loans, or TBRLs.
However, nearly 30 years ago a quiet revolution began internationally in higher education financing policy. It started in Australia with the introduction of a student loan system in which repayment obligations are not based on time but instead depend on the future income of the debtor: an income-contingent loan.
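The contrast between the two repayment designs can be sketched numerically. Below is a minimal illustration; all figures (debt size, interest rate, repayment threshold and repayment rate) are invented for the example and do not reflect the parameters of any actual scheme.

```python
# Compare a time-based (mortgage-style) repayment with an
# income-contingent one for a borrower whose income dips.
# All numbers are illustrative, not any real scheme's parameters.

def time_based_payment(debt, annual_rate, years):
    """Fixed annual installment that amortises the debt (annuity formula)."""
    r = annual_rate
    return debt * r / (1 - (1 + r) ** -years)

def income_contingent_payment(income, threshold=25_000, rate=0.09):
    """Pay a fixed share of income above a threshold; nothing below it."""
    return max(0.0, (income - threshold) * rate)

incomes = [30_000, 18_000, 40_000]  # year 2: a low-income spell
fixed = time_based_payment(20_000, 0.05, 10)

for year, income in enumerate(incomes, start=1):
    icl = income_contingent_payment(income)
    print(f"year {year}: TBRL pays {fixed:,.0f} regardless; "
          f"ICL pays {icl:,.0f} on income {income:,}")
```

The key difference the sketch makes visible: the TBRL installment is owed in full even in the low-income year, whereas the income-contingent payment falls to zero below the threshold, shifting repayment risk away from the debtor.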
The relationship between A-level subject choice and league table score of university attended: the ‘facilitating’, the ‘less suitable’, and the counter-intuitive
By Catherine Dilnot, UCL Institute of Education
As the school exam season gets under way, English 18-year-olds hoping to go to a selective university will typically be taking papers in only three A-level subjects, chosen two years earlier from scores of possible subjects approved nationally, although in practice from the somewhat smaller number offered by their school or 16-18 college. This early specialism in so few subjects can have long-term consequences.
For many UK degree courses particular A-levels will be required – for example biology and chemistry for medicine. But many others don't have subject pre-requisites, including popular degrees like business and law. So whether a sixteen-year-old isn't yet sure what they want to do at university, or has an idea but wants to take a course without pre-requisites, it's difficult for them to know which subjects to choose. The question then is whether some of the large number of A-level subjects available are more helpful than others in getting them to the university of their choice. Recent reforms have reduced the number of A-level courses approved for teaching in English schools from over 90 to 60, but it is still a bewildering array, both for students choosing subjects and for schools and colleges deciding which subset to provide.
One important reason subject choice matters is that we know the sorts of A-levels chosen by 16-year-olds vary by socio-economic background. And while the number of young people from low SES backgrounds going to highly selective universities has increased in recent years, UCAS figures for 2017 show that an 18-year-old in the top SES quintile is ten times as likely to attend as someone in the bottom quintile. It's clear that most of this gap results from differential prior attainment, but evidence on whether some subjects are more helpful than others for entry to highly selective universities could help chip away at the SES gap.
Lessons from the End of Free College in England
By Judith Scott-Clayton, Richard Murphy and Gill Wyness
This blog is based on a full-length article published at https://www.brookings.edu/research/lessons-from-the-end-of-free-college-in-england/
Earlier this month, New York became the first US state to offer all but its wealthiest residents free tuition at public four-year institutions in the state. This new ‘Excelsior Scholarship’ doesn’t make college completely free, nor is it without significant restrictions. Still, it demonstrates the growing strength of the free college movement in the United States.
The free college movement in the US is typically associated with liberal and progressive politics, and motivated by concerns about rising inequality and declining investments in public goods like education. Americans are thus sometimes surprised to hear that the end of free college in England was built upon very similar motivations.
Until 1998, full-time students in England could attend public universities completely free of charge. Two decades later, most public universities in England now charge £9,250 – equivalent to about $11,380, or 18% more than the average sticker price of a U.S. public four-year institution.
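The dollar comparison above follows from two pieces of arithmetic; the exchange rate of roughly $1.23/£ is inferred by back-solving from the figures quoted in the text, not taken from a quoted source.

```python
# Back out the implied figures from the quoted fee comparison.
# The ~$1.23/£ exchange rate is inferred from the numbers, not quoted.
uk_fee_gbp = 9_250
uk_fee_usd = 11_380                   # dollar figure as quoted in the text
implied_rate = uk_fee_usd / uk_fee_gbp
implied_us_avg = uk_fee_usd / 1.18    # "18% more than" the US average

print(f"implied exchange rate: ${implied_rate:.2f}/£")
print(f"implied US public four-year sticker price: ${implied_us_avg:,.0f}")
```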
Has this major restructuring of higher education finance over the last twenty years led the English system backwards or forwards in terms of improving quality, quantity, and equity in higher education? We find that at a minimum, ending free college in England has not stood in the way of rising enrollments, and institutional resources per student (one measure of quality) have increased substantially since 1998. Moreover, after many years of widening inequality, socioeconomic gaps in college attainment appear to have stabilized or slightly declined.
Grade prediction system means the brightest, poorest students can miss out on top university places
By Gill Wyness
With UK tuition fees now among the highest in the world, but benefits from having a degree remaining substantial, choosing the right university has never been more important for young people. The government has tried to make this easier by offering more and more information not just on the university experience but on the quality of the institution and even the potential wage return students could reap.
Despite all these efforts to make the decision about where to apply as informed as possible, one issue remains: students still apply to university based on their predicted rather than actual qualifications. And these predictions are not always accurate.
Using information on university applicants' actual and predicted grades and the university they attended, obtained from the Universities and Colleges Admissions Service (UCAS), I find that only 16% of applicants achieved the A-level grades they were predicted to achieve, based on their best three A-levels.
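The accuracy measure described above can be computed mechanically: score each applicant's predicted and achieved grades over their best three A-levels and compare. Below is a minimal sketch on made-up records; the grade-point scale, the `predicted`/`achieved` fields, and the data itself are all hypothetical, not UCAS's actual variables or methodology.

```python
# Share of applicants whose achieved grades matched their prediction,
# plus the share over-predicted. Data and point scale are invented.
POINTS = {"A*": 6, "A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

def score(grades):
    """Total points over an applicant's best three A-level grades."""
    return sum(sorted((POINTS[g] for g in grades), reverse=True)[:3])

applicants = [
    {"predicted": ["A", "A", "B"],  "achieved": ["A", "B", "B"]},
    {"predicted": ["B", "B", "C"],  "achieved": ["B", "B", "C"]},
    {"predicted": ["A*", "A", "A"], "achieved": ["A", "A", "B"]},
    {"predicted": ["B", "C", "C"],  "achieved": ["A", "B", "C"]},
]

exact = sum(score(a["predicted"]) == score(a["achieved"]) for a in applicants)
over = sum(score(a["predicted"]) > score(a["achieved"]) for a in applicants)
print(f"exact: {exact}/{len(applicants)}, over-predicted: {over}")
```

On this toy data, one applicant in four is predicted exactly and two are over-predicted, echoing (though not reproducing) the pattern the text describes.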
Higher education, career opportunities, and intergenerational inequality
by Lindsey Macmillan and Gill Wyness
For the most part, when we think about social mobility, our concerns are with those on the lower rungs of society's ladder; people "for whom life is a struggle and who work all hours to keep their heads above water", as Prime Minister Theresa May put it in her most recent speech on the matter. One issue often considered is how likely those from disadvantaged backgrounds are to enter higher education, which is often viewed as the direct route to the top jobs in the UK, where a degree is now almost always a pre-requisite.

The hope is that, if society is meritocratic, rewarding effort and achievement rather than family background, then getting more disadvantaged kids into higher education will equalise their chances of reaching the top jobs. Unfortunately, in the UK, this does not seem to be the case. Recent research by ourselves and colleagues from the universities of Cambridge, Bath and Warwick has revealed that higher education is not the leveller we might hope it to be, and that socio-economic differences persist throughout higher education and into the graduate labour market, even comparing those with similar educational attainment.