International education assessments have become the lifeblood of education governance in Europe and globally. However, what do we really know about how education systems are measured against one another, and about the effects this measuring produces? Operating as a new form of global education governance, international assessments create a powerful comparative spectacle focused on the performance and apparent ‘effectiveness’ of education systems around the world; this spectacle now encompasses not only the global rich but also those countries often pejoratively described as ‘developing’. Yet despite international assessments’ dominance and their ever-growing pervasiveness in the logic and planning of education, there remain many areas of critique and complexity: the ways these studies are organised and delivered; the impacts they have through decontextualising education and quantifying some aspects of it (but not others); the effects they have on what is considered worthy of teaching and knowing; and, most importantly, the interlinkages that are silently yet powerfully made by rendering education commensurable – through the application of similar policy instruments – with measures of the economy, the labour market, even health, migration and international development; the list goes on.
Much attention has so far been given to the OECD’s Programme for International Student Assessment (PISA). But why and how has PISA become such a powerful force in education policy-making? To use a metaphor from the medical sciences, PISA took an apparently rapidly deteriorating patient – education in Europe, according to the OECD’s diagnosis – and supplied it with a life-saving, and life-changing, transplant. All the essential parts were already there: an education industry; numerous national experts and statisticians; the believers in linking education with the labour market, as well as their critics; and the indicators that the OECD had been preparing since the 1970s, along with other international studies that had prepared the field: the IEA’s Progress in International Reading Literacy Study (PIRLS) and Trends in International Mathematics and Science Study (TIMSS), and the OECD’s earlier International Adult Literacy Survey (IALS) and Adult Literacy and Life Skills Survey (ALL). In addition, from a more European point of view, a soft governing tool (with a hard agenda!), the Open Method of Coordination, was ready to be launched and to change the European education policy landscape for good. PISA became the heart that pumped life into this previously disparate body, beating to the rhythm of comparison and competition and connecting the parts into a single entity, itself represented by the OECD rating and ranking tables. The PISA charts became the totemic representations of the new governing regime, excluding caveats and any awkward knowledge in order to offer policy makers what they are often after – fast-selling policy solutions.
This is the beginning of a story that has been eloquently described and analysed by a number of academics in the field. The Laboratory of International Assessments was set up to investigate ‘chapter 2’ of this story and to ask: now that international assessments are with us (and seem to be here to stay), what are their long-term effects on education governance in Europe and globally? What do they mean for the relationship between knowledge and policy, and what do they suggest about the changing politics of education policy in the 21st century? How do policy makers use them (if they do)? Can participation in their organisation and management be made more open and democratic, or does their statistical complexity render them legible only to the very few? These and many other questions are what we intend to discuss over the next couple of years in the Economic and Social Research Council (ESRC) seminar series on ‘The Potentials, Politics and Practices of International Education Assessments’. The first seminar, on ‘Education Governance and International Assessments’, will take place at the University of Edinburgh this 11 and 12 December – it is already oversubscribed, a fact which shows the growing interest in and attention being paid to the phenomenon by the scholarly, policy and testing agency communities. For more commentaries and focused analysis, watch this space – we are only just starting!