
Saturday, 20 January 2018

Microfoundations and the values of policymakers

For economists

This post started an interesting discussion, directed largely by Beatrice Cherrier (@Undercoverhist), about how economists increasingly tend to hide the value judgements they make. By value judgement I do not mean the trivial, like why you got interested in one area rather than another, but more serious issues, like what values are assumed as part of the analysis. (The distinction between the two was the point of my original post.)

It occurred to me that microfounded macro has an issue that is related to that discussion. It is in fact discussed in my OXREP paper, but used there as an example of where microfoundations had gone one step backwards, with only the prospect of going forwards in the future. The example is the derivation of a benevolent policy maker’s preferences from the utility function of the representative consumer assumed as part of the model, a line of research initiated by Michael Woodford.

Before getting on to the values point, let me note that this is a good example of the primacy of internal consistency in microfoundations, rather than the Lucas critique. Before Woodford’s work, microfoundations macroeconomists were embarrassed that they typically assumed an ad hoc objective function for the policymaker, trading off two bads: deviations of inflation from target and deviations of output from its natural rate. Typically, results were presented with alternative values for the policymaker’s preferences between the two. But if the policymaker is benevolent and the model is internally consistent, shouldn’t this objective function reflect the utility function of the representative consumer in the model? What Woodford showed was how this could be done, and better still that it implied the form of objective function, quadratic, that had previously been used on an ad hoc basis. The preference between output and inflation deviations was now an implication of the model.
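In stylised form (this is a textbook sketch of the result, with constants and cross-terms suppressed, not Woodford’s full derivation), the second-order approximation to the representative consumer’s utility in the basic New Keynesian model looks like:

```latex
% Stylised welfare approximation in the basic New Keynesian model:
% \pi_t is inflation, x_t the output gap, \beta the discount factor,
% \kappa the slope of the Phillips curve, and \theta the elasticity
% of substitution between goods.
W \approx -\tfrac{1}{2}\,\mathbb{E}_0 \sum_{t=0}^{\infty} \beta^{t}
    \left( \pi_t^{2} + \lambda\, x_t^{2} \right),
\qquad \lambda = \frac{\kappa}{\theta}
```

The relative weight λ on output deviations is no longer a free parameter for the researcher or policymaker to choose: it is pinned down by the model’s own structural parameters.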

It was, it is important to admit, an exciting breakthrough. We could now tell policymakers that, if this is the utility function of the representative consumer, and the model was a good representation of reality (yes, I know), this is how you should be trading off output and inflation losses. It was a literature I participated in with colleagues. The derivations were hard and tedious to do, and could take pages of algebra, but within a year every macro paper of this kind had switched from ad hoc objective functions to derived objective functions. If you were doing macro and wanted the paper published in a good journal, this is what you had to do. 

There was only one problem. The simple version of a New Keynesian model that most researchers used implied that inflation deviations were much more important than output deviations. This was very different from the ad hoc objective functions that had been used before, where equal weights were common. It also appeared unrealistic: not only did policymakers not act as if inflation was all important, but consumers in happiness studies tended to rate unemployment as more important than inflation. That was the step backwards I mentioned earlier.

But what it also did, I think, was make less transparent the value judgements that the researcher was implicitly making. Everyone, including policymakers, knows that macro models are huge simplifications, but to get interpretable results that is what you have to do. Policymakers also have some idea of their own preferences between output and inflation deviations. But once the policymaker’s preferences were endogenised, they would generally be presented with welfare results that left no choice involving their own preferences.

Researchers were not hiding anything. The utility function of the representative agent was there to see, and most papers would show the derived objective function with a low relative weight on output deviations. But what was often not shown was how the results would differ under alternative objective functions: why would you, as a modeller committed to microfoundations, when to use any weights other than those implied by the model would be internally inconsistent? Thus internal consistency took a value judgement away from policymakers.

Thursday, 18 January 2018

What Carillion tells us about public sector outsourcing

Jeremy Warner, an editor at the Telegraph, once said that people are either big state people or small state people. I felt the same way following the reaction to the collapse of Carillion: there are either ‘private good, public bad’ people or ‘public good, private bad’ people. Of course, reality is somewhere in between.

Carillion went bust because of cost overruns or delays in three large construction projects. The nature of such projects involves that kind of risk, but clearly the company, despite its size, was not resilient enough to withstand those failures. It did not go bust because of the privatisation of public services, unless you think the government should build its own hospitals or roads. If anything, it shows that those contracting out public contracts were getting a good deal.

There will always be public projects contracted out to the private sector. Much of the increase in public investment planned by Labour if it wins the next election will be undertaken by private firms. Getting the contracting relationship right is difficult and fraught with dangers.

The government clearly has questions to answer about why it continued to award contracts after the profit warning, and we need some informed analysis to determine whether the government, as it claims, had fully protected all but one (!) of these projects and will not lose any money as a result of the collapse. As David Allen Green suggests, this smacks of ministerial failure. (The link also shows public procurement can have its funny side.) Also, why was the position of the “crown representative” who was meant to be overseeing scrutiny of, among others, Carillion left vacant? The government should also ask whether companies should be allowed to pay large dividends when their own pension fund is underfunded. And why did Carillion’s auditors, KPMG, give it a clean bill of health when its balance sheet was already showing signs of stress?

To see what lessons the collapse of Carillion does have for the debate over whether the public sector should privatise certain of its activities or do them in house, we need to go through some of the pros and cons.

There is one main benefit of contracting out public services, which is that it can save money. To mention ‘the market’ here is not very helpful, because with one buyer and only a few sellers for something (the contract) agreed once every few years, this is hardly a normal market. [1] It is instead about the incentives faced by managers and workers, both in achieving efficiency and fostering innovation. Managers have a clearer incentive system in a private sector firm to maximise profits, and that incentive is provided by the need to bid low to win the contract and nevertheless make a profit. As Carillion shows, margins on most public sector outsourcing are not large. In that sense Carillion confirms that part of this mechanism is working. A single public sector entity cannot replicate this advantage, unless it too is in competition with private sector firms. In short, competition improves incentives.

One important qualification to this argument involves information. The temptation of a bidding system based on the lowest price is to cut quality. So the public sector has to have a clear means of not just specifying quality in the contract, but of ensuring the contract is being fulfilled once it is awarded. Sometimes politics can get in the way of that happening. For activities where quality is difficult to observe, contracting out is not a good idea.

Another qualification involves the attitude of public sector workers before privatisation. If they, for whatever reason, internalise the need for efficiency and innovation, because for example they can see how both improve the outcome for customers, then contracting out to the private sector will achieve little. The NHS could be a case in point.

A further problem with privatisation is finance. When people argue that public money should not be wasted paying the shareholders or creditors of private firms, they are both right and wrong. They are wrong in the sense that, without contracting out, the same amount of money has to be raised by the public sector, which also “wastes” money by having to pay interest on government debt. But they are right in that the rate of interest on government debt is much less than the rate a private firm has to pay on any debt, or in the form of dividends to shareholders. The reason for this is that investors do not like risk: people who lend to the UK government know they will always get their money back, while, as the shareholders and creditors of Carillion have just found, this is not true for private sector firms.
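A purely illustrative calculation (the rates here are assumptions for the sake of the example, not Carillion’s or the government’s actual borrowing costs) makes the size of this premium concrete:

```latex
% Illustrative only: financing a £100m project at assumed rates.
\text{Public borrowing at } 2\%:\; \pounds100\text{m} \times 0.02 = \pounds2\text{m per year}
\text{Private finance at } 7\%:\; \pounds100\text{m} \times 0.07 = \pounds7\text{m per year}
\text{Annual premium for private financing: } \pounds7\text{m} - \pounds2\text{m} = \pounds5\text{m}
```

On these assumed numbers, the state pays £5m a year simply to have the borrowing sit on a private balance sheet.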

This is why PFI projects undertaken just so that the borrowing is done by the private rather than the public sector are costly from an economic point of view. It is why it makes sense to exclude public investment from any fiscal rule: fiscal rules that restrict public investment are an open invitation to politicians to undertake PFI type financing. In my view the best constraint on public investment is the expected social return, assessed with the help of an independent body. It is often said that PFI type projects ‘avoid risk to the taxpayer’. Again this is the wrong way round. It is far easier and cheaper for the public sector to take risks than the private sector, so PFI projects pay far too high a price to avoid risk to the public sector.

Another problem related to risk is the interrelationship between what the private company contracts to do and what actually happens when government forecasts go wrong, as they always will. This may have happened with the East Coast line “bailout” (and if it did, we should be told), and it did happen with privatising the probation service. Public sector contracting out forces each side to commit to guesses about the future, whereas if everything remains in-house there can be much more flexibility. There is also the cost of having to train more civil servants in the art of writing good contracts.

One further problem that Carillion reminds us of is that privatisation runs the risk of some interruption of service if the company goes bankrupt. But disruption is nothing new. If privatisation is to have any benefits, the contract from the public sector has to come up for renewal every few years, and if the private sector provider changes, that will involve some dislocation of service.

One final point, which is contingent on what I hope will be a temporary state of affairs. Nowadays the management overheads of private sector firms are likely to be far higher than in the public sector, for reasons that have little to do with management quality. Ben Chu sets out how much management was being paid at Carillion compared to equivalent public sector managers. And what on earth were shareholders doing allowing the directors to relax clawback conditions on management’s pay if things went wrong, a move even the Institute of Directors described as “highly inappropriate” and “lacking effective governance”? In truth the public sector is much better at stopping managers using their monopoly power to be paid over the odds than the private sector appears to be.

So the economist’s answer on public sector outsourcing is, it depends: on all the factors outlined above and probably more I have momentarily forgotten. (Like economies of scale and expertise: no one would ever suggest the public sector makes its own paperclips.) Where the balance will be is bound to be case dependent. But it would be incredibly surprising if at least some of the outsourcing undertaken by this government was not ideological rather than evidence based. This suggests that Labour, if it wins the next election, should undertake a thorough independent review when it has all the facts at its disposal. That at least might ease fears that we will lurch from one ideological position to its opposite.


[1] This is an interesting example of a longstanding debate with myself. If you want to claim that much of this kind of outsourcing represents neoliberal ideology at work (which it probably does), and also that neoliberalism is all about the market, then your definition of a market has to be pretty wide. But of course a large firm like Carillion, as Ronald Coase said, involves the large scale supersession of the price mechanism. By this he meant that firms are an alternative to markets: a large firm suppresses what would be market activity if it were replaced by lots of smaller firms buying and selling to each other. This is a contradiction at the heart of neoliberalism as market worship: firms are alternatives to markets.



Tuesday, 16 January 2018

The problem with a second referendum

My last post upset a few people who are campaigning to reverse Brexit, because I was so pessimistic about the chances of a second referendum before we leave in early 2019. They mistook my pessimism for defeatism. I would never suggest that those fighting for a second referendum, or an end to Brexit by any other means, should give up, just because the outlook looks bleak. You can never be certain about how things will turn out.

The path to a second referendum is clearly laid out by Andrew Adonis in this Remainiacs Podcast. Two things have to happen. First Corbyn needs to start arguing for a second referendum, which Adonis thinks he will do in the summer or autumn. I think this is conceivable, although far from certain. I would merely note that Remainers who declare that Corbyn will never do so because he is a Brexiter at heart are not only wrong, but are therefore by implication far more pessimistic about Brexit than I am, because this first stage is a necessary condition if a second referendum is going to happen.

The second thing that has to happen is that a majority of MPs write the need to hold a second referendum into an amendment to the Brexit bill, a bill which, thanks to rebel Conservative MPs, is now a requirement. Yet there is a world of difference between demanding a proper bill before leaving and demanding a second referendum. The Brexiters will ensure the government throws everything into preventing a second referendum, perhaps including its own survival. As I said in that earlier post, I cannot see it happening in the current environment, and this is the source of my short term pessimism.

One of the reasons I am so pessimistic is related to an earlier post, where I talked about how anti-democratic the concept of the transition period is. I could imagine at least some Conservative MPs arguing for a second referendum once the exact nature of the final deal is known. The first referendum was a decision to put an offer in for a new house: now that the surveys and council searches are in, we can take a final decision.

But, because of the transition period, what the final deal will be remains unclear, at least to most of the media and the public. The transition allows the Brexiters to continue to live in a fantasy land, where the final deal keeps all the advantages of being in the EU without any of the costs. I have argued, as have others, that the first stage agreement restricts the scope for what the final deal could look like, but this is denied by the government who are still busy eating cake. There is no reason for this to change in the next year, because the focus will be on the government’s futile attempts to avoid transition on EU terms. In this sense a second referendum will be just like the first: the realists will argue as hard as they can for reality, but reality will either not get a look in with the right wing press, or be balanced against fantasy by the broadcast media.

To threaten to bring down their government by voting for a second referendum, rebel Conservative MPs need a cast iron moral case. Alas because of transition they cannot argue that the second referendum will be a vote on the final deal, because the Brexiters can still claim the final deal will be all things to all men and women.

Thursday, 11 January 2018

Does Brexit end not with a bang but a whimper?

Most media commentary on Brexit makes a huge mistake. It focuses on what the UK government may wish to do or should do. The first stage agreement told us one thing that we should have known the moment Article 50 was triggered: the EU is calling the shots in these negotiations. [1] But the fact that the UK agreed to the text, and particularly the parts on the Irish border, has told the EU something important: the current UK government is not going to walk away with no deal, and even if it did the current parliament would almost certainly stop it.

That in turn tells the EU that it can get, to a first approximation, the agreement it wants. So what we should be asking is not what the UK’s next move will be, but what the preferred outcome for the EU is. My guess would be that their preferred outcome is a formalisation of the transition arrangements. This satisfies their three criteria: it avoids a hard Irish border, it imposes no additional trade restrictions, and the UK is clearly worse off as a result of leaving (because it has no control over the rules it must obey).

As Martin Sandbu points out, the first criterion could be satisfied by a deal that kept the UK in the Customs Union and Single Market for goods, but not for services. As the UK exports more services to the EU than it imports from it, the EU’s second criterion might still be roughly satisfied by such a deal. If the UK avoids accepting free movement as part of the deal, whether the third criterion is satisfied becomes debatable. Still, it would be a possibility. Anything beyond this would mean a hard border in Ireland. It is difficult to imagine why the rest of the EU would want to seriously harm relations with Ireland by agreeing to such a thing.

Suppose something between these two alternatives, of staying in the Single Market and staying in it just for goods, does become the final deal. I think the Labour leadership could live with it if they are in government when the deal is done. Perhaps a majority of Conservative MPs could too. But it means that dreams of doing trade deals with other countries would no longer be possible, and for that and other reasons a large part of the Conservative party would not be happy. The Conservatives’ Europe problem would not be solved.

The fact that the Brexiters will still be agitating for a more pronounced break from Europe will be one reason why the UK will still suffer in economic terms (albeit much less than with No Deal), and this will be increased if we are no longer in the Single Market for services. Firms will always be reluctant to locate in the UK because trade might be disrupted if the Brexiters win again. Less immigration from the EU will also hurt the economy. And of course the Brexiters will remind everyone that the UK is having to accept rules on trade that it plays no part in creating.

All that suggests any deal will not be sustainable in the longer term. Norway and Switzerland may be able to tolerate being out of the club but obeying its rules, because they would probably reason that their impact within the club would be small anyway, although what Ireland will achieve with the Brexit deal is a counterexample. An economy with the size, and more importantly the history, of the UK will find it more difficult to accept this position.

Does this mean that any deal will just be the first stage of breaking away from Europe? The Brexiters will agitate for this, but I doubt it will happen. Brexit is essentially a project of the old. It seems far more likely to me that as time passes a majority for rejoining will emerge, and Brexit will come to an end. This mad period of UK politics, and all the political and economic harm it has done, will be a complete dead end, a colossal and damaging waste of time.

This is my best guess at how Brexit will end, although I take no pleasure in that. [2] Not with the bang of a second referendum or a parliamentary vote, but slowly over time. The vote that rules them all today will gradually be seen not as the liberation and empowerment that so many now believe, but instead as just the machinations of a small number of hollow men. Hollow men who dream of empire renewed, and as a result are casting their country from the world stage. Hollow men who dream of personal power, and who instead turn out to be powerless. Their day will soon pass, as wind in dry grass.

[1] Here I think informed analysis, from commentators like David Allen Green for example, got it right. As I wrote: "Anyone who actually wants a good deal from the EU when we leave should realise that the UK’s negotiating position becomes instantly weaker once Article 50 is triggered."

[2] The Brexiters will not let the government propose a second referendum. A majority of MPs will not vote for one unless public opinion becomes much more anti-Brexit. Without something like a major recession, which looks unlikely, I fear a shift in public opinion will not happen in time for 2019. The post-referendum Remain campaign has not ‘broken through’ because the tabloid press, and broadcasters following the wishes of politicians, will see Brexit through to completion: they made Brexit possible.


Would things be different if Labour campaigned for a second referendum? In terms of public opinion, that would make a difference to how broadcasters treated the issue. But Corbyn will only consider that if he can be sure that enough Conservatives would back him, and by making the issue party political he cannot be sure of that. It is the fact that too few Conservative MPs are prepared to stand up against their leadership, and the ‘will of the people’, that makes leaving inevitable.

Tuesday, 9 January 2018

Why does economics get so much stick?

Because the advice of economists is so hopeless, you may say. Well, think about the following thought experiment: after the financial crisis, suppose people had done the opposite of what the majority of economists said they should do. We do not need to imagine this over Brexit, because most of the 52% who voted for Brexit chose to ignore, or more likely did not hear, the advice of 90+% of economists that Brexit would make them worse off. For those in work, that belief was quickly shattered as their real wages fell as a direct result of Brexit.

Immediately after the financial crisis, interest rates would not have been cut and austerity would have started in 2009, not 2010. Banks would have gone bust, because economists said we needed to bail them out and in this experiment the opposite was done. In which case the Great Recession would have become a second Great Depression. And because the majority of economists did not support austerity, you would have had continuing cuts in spending during this new depression.

So comparing this thought experiment with reality, we can see that economists have prevented a rerun of the 1930s depression, and if their majority advice had been taken we would have had a stronger recovery and the UK would not have left the EU. Sounds pretty good to me. But, as I’m sure you are now saying, what about the financial crisis the economists failed to warn of?

That was a mistake, but what were the consequences? Do you really think that if most economists had warned about how fragile the sector was, anything would have happened? Banks would have continued to lend, because they were making money and they had a guaranteed bailout from the state. Their campaign contributions would have weighed far more heavily in politicians’ minds than warnings from economists. So yes, not warning about the financial crisis was a mistake, but nothing would have changed had the mistake not been made. Economists are often told to stop being naive about politics, but the same needs to be said to their critics.

Despite such a strong record in macroeconomics since the crisis, why does economics get so much stick? I think there are three reasons. The first is simple: when the economy goes wrong, economists are easy to blame, particularly because of those forecasts that never predict downturns. In reality virtually no academic macroeconomists are involved in forecasting, because they know that kind of unconditional forecasting is a mug’s game [1], and furthermore most economists are not macroeconomists, but for some critics that kind of detail is irrelevant. (There are also plenty of highly successful pieces of microeconomics, but most critics act as if economics were just macroeconomics.)

The second reason is politics. Carlyle in 1849 called economics the dismal science because economists did not support his idea of reintroducing slavery. Ever since then economics has annoyed politicians and their supporters of various colours by pointing out the problems with various political programmes or schemes.

Politics is also at the heart of the third reason for criticism: politicians and ideologues of the right use the aspects of economics that suit their cause. Want to promote markets? Just take the idea from economics that an ideal market is an optimal way of exchanging goods, and ignore all the ways that real markets deviate from this ideal (ways which, incidentally, a great many economists spend a lot of their time studying). Some heterodox economists of the left, rather than use mainstream economics to point out how the right plays fast and loose with economic ideas, prefer to suggest that mainstream economics is much closer to the right wing caricature than it is in reality. That is why, as Noah Smith observes, so much of this criticism can be found in the pages of the Guardian.

This misrepresentation of mainstream economics is either deliberate or reflects ignorance. Ignorance of the fact that a lot of economics has become more empirical, and therefore more eclectic in its use of theory, over the last few decades, perhaps in part because of the influence of behavioural economics. Ignorance that even in macroeconomics, where ideological influences can be strong, there is more consensus around New Keynesian economics than some mainstream Keynesian economists imagine. (See my survey with André Moreira of postgraduate teaching at the top schools here.) Nowadays you will find that in most areas of economics (alas, not yet macro so much) there is nothing limiting the analysis to selfish individualistic behaviour. The idea that economics is like a religion is absurd.

But sometimes it is hard not to believe that popular criticisms of economics choose to ignore how far economics deviates from the neoliberal caricature. There is no excuse for ignoring that, for example, the best arguments against health care being left to the market can be found in a paper by Nobel prize winning economist Kenneth Arrow written decades ago. As the recent book by Colin Crouch suggests, the best critiques of neoliberalism come from within economics.

Another ridiculous charge is that economics has a natural bias against state intervention. Indeed, it is possible to argue the opposite. In my own field it is typical to assume the existence of a benevolent policymaker who maximises social welfare. This is essentially just a useful analytical device, but you could argue, if you wished to, that it biases those who use it towards favouring state intervention.

Judging by recent conversations I have had, many heterodox economists attack the mainstream because it uses the distinction between positive (value free) and normative economics. An example of positive economics would be me saying a temporary cut in government spending when interest rates are stuck at their lower bound reduces output. A normative statement would be that austerity is unfair. Heterodox economists like Sheila Dow seem to suggest that everything is value laden, and the positive/normative distinction allows economists to avoid being “morally implicated in the advice they give.”

I think this criticism is either trivial (yes, of course there may be normative reasons for choosing particular research topics) or dangerous. It is dangerous if it suggests that economists should be encouraged to base their analysis on assumptions that reflect their values. Economics, even though it is a social science, should conform to the scientific method: it should be as much like a science as medicine. Indeed I think it would greatly improve the public debate if both economists and their critics realised that economics, even though it is a unique and inexact science, is more like medicine than any of the hard sciences.

Dow writes “Getting policy-makers or the general public onside over a particular argument is therefore, critically, a matter of persuasion rather than demonstrable proof (since that proof is impossible).” But surely the best way of trying to persuade a policymaker not to impose austerity is to say that most models, including the consensus theoretical model, and nearly all the evidence suggests austerity will reduce output. In contrast it is far too easy to persuade a politician of things they want to hear. We do not want politicians to pick advice only if it is given by ‘one of us’ (by those who share their values), or as a result of the rhetorical skills of the academic.

The danger in encouraging plurality is that you make it much easier for politicians to select the advice they like, because there is almost certain to be a school of thought that gives the ‘right’ answers from the politician’s point of view. The point is obvious once you make the comparison to medicine. Don’t like the idea of vaccination? Pick an expert from the anti-vaccination medical school. The lesson of the last seven years, in the UK in particular, is that we want mainstream economists to have more influence on politicians and the public, not to dilute this influence through a plurality of schools of thought.

All this does not mean that economists are beyond criticism. As my last post pointed out, I have fundamental criticisms of current macroeconomic methodology. An important point to note about the microfoundations methodology is that it excludes economists who are not prepared to sign up to what is currently considered (by macroeconomists) acceptable microeconomics, or who do not think microfoundations is where you have to start in doing macro. But this critique has nothing to do with values. The mistake macroeconomists made in the 1980s was not their desire to look for microfoundations, but their decision that models with internally consistent microfoundations were the only admissible models.

The big problem with most criticisms of economics you see in the media is not that economics is beyond criticism: as the paragraph above suggests, in many cases it should be criticised, and there are plenty more interesting criticisms of economics available. The problem is that these more important criticisms are not the ones you find in the pages of the Guardian. The typical criticisms you see in the press are just not very good, and I fear reflect either ignorance or ideological antipathy.

[1] A lot of the criticisms of forecasters are themselves spurious. Someone who writes “economists should not need to pretend that we can predict things that do not really matter to several decimal places” is themselves pretending that there are any serious forecasters who do pretend this.

Saturday, 6 January 2018

Why the microfoundations hegemony holds back macroeconomic progress

When David Vines asked me to contribute to an OXREP (Oxford Review of Economic Policy) issue on “Rebuilding Macroeconomic Theory”, I think what he hoped I would write about was how the core macro model needed to change to reflect macro developments since the crisis, with a particular eye to modelling the impact of fiscal policy. That would have been an interesting paper to write, but I decided fairly quickly that I wanted to say something I thought was much more important.

In my view the biggest obstacle to the advance of macroeconomics is the hegemony of microfoundations. I wanted at least one of the papers in the collection to question this hegemony. It turned out that I was not alone, and a few papers did the same. I was particularly encouraged when Olivier Blanchard, in blog posts reflecting his thoughts before writing his contribution, was thinking along the same lines.

I will talk about the other papers when more people have had a chance to read them. Here I will focus on my own contribution. I have been pushing a similar line in blog posts for some time, and that experience suggests to me that most macroeconomists working within the hegemony have a simple mental block when they think about alternative modelling approaches. Let me see if I can break that block here.

Imagine a DSGE model, ‘estimated’ by Bayesian techniques. To be specific, suppose it contains a standard intertemporal consumption function. Now suppose someone adds a term to the model, say unemployment in the consumption function, and thereby significantly improves the fit of the model. It is not hard to think why the fit significantly improves: unemployment could be a proxy for the uncertainty of labour income, for example. The key question becomes which is the better model with which to examine macroeconomic policy: the DSGE model or the augmented model?
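To make the comparison concrete, here is a toy sketch of the model-comparison logic. Everything in it is invented for illustration: the data are simulated, the coefficients are arbitrary, and a single-equation regression stands in for what would really be full-system Bayesian estimation of the DSGE model.

```python
# A toy illustration (not a real DSGE estimation): compare the fit of a
# baseline Euler-equation-style consumption regression with one augmented
# by unemployment. All data are simulated; the point is only the
# model-comparison logic, not the invented coefficients.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 200

# Simulated series: income growth, the real interest rate, unemployment.
income_growth = rng.normal(0.02, 0.01, T)
real_rate = rng.normal(0.01, 0.005, T)
unemployment = np.abs(rng.normal(0.06, 0.02, T))

# Assume (for the example) consumption growth really does respond to
# unemployment, as it would if unemployment proxied labour income risk.
cons_growth = (0.01 + 0.5 * income_growth + 0.3 * real_rate
               - 0.2 * unemployment + rng.normal(0, 0.005, T))

# Baseline: only the terms a standard intertemporal consumption function implies.
X_base = sm.add_constant(np.column_stack([income_growth, real_rate]))
base = sm.OLS(cons_growth, X_base).fit()

# Augmented: add unemployment, with no microfounded derivation behind it.
X_aug = sm.add_constant(np.column_stack([income_growth, real_rate, unemployment]))
aug = sm.OLS(cons_growth, X_aug).fit()

# A clearly lower BIC for the augmented model means the data prefer it,
# internal consistency notwithstanding. That is the choice posed above.
print(f"baseline BIC:  {base.bic:.1f}")
print(f"augmented BIC: {aug.bic:.1f}")
```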

A microfoundations macroeconomist will tend to say without doubt the original DSGE model, because only that model is known to be theoretically consistent. (They might instead say that only that model satisfies the Lucas critique, but internal consistency is the more general concept.) But an equally valid response is to say that the original DSGE model will give incorrect policy responses because it misses an important link between unemployment and consumption, and so the augmented model is preferred.

There is absolutely nothing that says that internal consistency is more important than (relative) misspecification. In my experience, when confronted with this fact, some DSGE modellers resort to two diversionary tactics. The first, which is to say that all models are misspecified, is not worthy of discussion. The second is that neither model is satisfactory, and research is needed to incorporate the unemployment effect in a consistent way.

I have no problem with that response in itself, and for that reason I have no problem with the microfoundations project as one way to do macroeconomic modelling. But in this particular context it is a dodge. There will never be, at least in my lifetime, a DSGE model that cannot be improved by adding plausible but potentially inconsistent effects like unemployment influencing consumption. Which means that, if you think models that are significantly better at fitting the data are to be preferred to the DSGE models from whence they came, then these augmented models will always beat the DSGE model as a way of modelling policy.

What this question tells you is that there is an alternative methodology for building macroeconomic models that is not inferior to the microfoundations approach. It starts with some theoretical specification, which could be a DSGE model as in the example, and then extends it in ways that are theoretically plausible and which significantly improve the model’s fit, but which are not formally derived from microfoundations. I call the result an example of the Structural Econometric Model (SEM) class, and Blanchard calls it a Policy Model.

An important point I make in my paper is that these are not competing methodologies; they are complementary. SEMs as I describe them here start from microfounded theory. (Of course SEMs can also start from non-microfounded theory, but the pros and cons of that are a different debate I want to avoid here.) As a finished product they provide many research agendas for microfoundations modelling. So DSGE modelling can provide the starting point for builders of SEMs or Policy Models, and these models, when completed, provide a research agenda for DSGE modellers.

Once you see this complementarity, you can see why I think macroeconomics would develop much more rapidly if academics were involved in building SEMs as well as building DSGE models. The mistake the New Classical Counter Revolution made was to dismiss previous ways of modelling the economy, instead of augmenting these ways with additional approaches. Each methodology on its own will develop much more slowly than the two combined. Another way of putting it is that research based on SEMs is more efficient than the puzzle resolution approach used today. 

In the paper, I try to imagine what would have happened if the microfoundations project had simply augmented the macroeconomics of the time (which was SEM modelling), rather than dismissing it out of hand. I think we have good evidence that an active complementarity between SEM and microfoundations modelling would have led to links between the financial and real sectors being investigated in depth before the financial crisis. The microfoundations hegemony chose the wrong puzzles to look at, deflecting macroeconomics from the more important empirical issues. The same thing may happen again if the microfoundations hegemony continues.



Thursday, 4 January 2018

Minimum Wages, Monopsony and Towns

Alan Manning has a very good article in Foreign Affairs about minimum wages. The impact of minimum wages on employment is a politically charged issue in economics, and in that sense is similar to most of macro. With minimum wages, the battlefield is empirical. I often think of this battle when people accuse mainstream economics of being hopelessly neoliberal: it was mainstream economists (David Card and Alan Krueger) who first showed that the data did not conform to what Manning describes as Econ 101 economics, and other mainstream economists who have continued to find this result.

I think two conclusions can be drawn from the many studies that followed Card and Krueger. First, empirical work clearly shows plenty of examples where imposing or increasing minimum wages did not reduce employment. However, few would argue that this result will hold in all situations for all levels of the minimum wage. That is why, before George Osborne raised it, the UK minimum wage level was set by the Low Pay Commission, who tried to assess these issues. Perhaps the Commission became too cautious, but no doubt we will see more studies on the Osborne increase in due course. In my view another key issue future studies should address is whether increasing nominal wages has any impact on productivity.

Manning also makes a point about the limitations of minimum wages as a tool to deal with poverty. He points out that as “an hourly rate, the minimum wage on its own reveals little about the household income of those who earn it.” He suggests that minimum wages work well alongside earned income tax credits. Minimum wages can help prevent employers capturing part of tax credits by cutting wages in the knowledge that the state would make up the difference.

There are two main reasons why Econ 101 (first year undergraduate) economics gives the wrong answer on minimum wages: search and monopsony. Take search first. In the Econ 101 world no one celebrates getting a new job or worries about losing an existing one. One reason most people do both is search: it takes time and effort to find a new job. Equally it costs the firm money to recruit new people. That creates a zone around the Econ 101 wage within which variations in wages would not lead to job losses or people leaving. Where the actual wage is within that zone will depend on bargaining power between the worker and firm.

Monopsony is the situation where alternative employment opportunities for workers are scarce, which gives the firm the power to set wages below the perfectly competitive level of the standard Econ 101 model. (There is an element of search here too: the costs of moving location. These are much larger than the costs of looking for a job in your own area, particularly for families.) The classic example of monopsony is the town where there is just one major employer.
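A stylised version of the textbook monopsony logic (a sketch for illustration, not anything taken from Manning’s article) shows why a minimum wage need not cost jobs. Suppose a single firm faces an upward-sloping labour supply curve L(w) with elasticity ε:

```latex
% With labour supply L(w) of elasticity \varepsilon, total labour cost
% is wL(w), so the marginal cost of an extra worker exceeds the wage:
MCL = \frac{d\,[wL]}{dL} = w\left(1 + \frac{1}{\varepsilon}\right)
% The firm hires until marginal revenue product equals MCL, giving
MRP = w^{m}\left(1 + \frac{1}{\varepsilon}\right)
\;\Rightarrow\;
w^{m} = \frac{\varepsilon}{1+\varepsilon}\,MRP \;<\; MRP
```

A minimum wage set anywhere between the monopsony wage w^m and MRP makes the labour supply the firm faces flat at that wage, so both the wage and employment rise rather than fall.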

I suspect many labour economists regard monopsony in the labour market as something of a special case. That perception may need updating, argues Marshall Steinbaum here, drawing on recent work by him and coauthors for the US. They find “that most labor markets (as defined by occupation and geography) are very concentrated [few firms], and that this concentration has a robust negative impact on posted wages for job openings.” That is exactly what you would expect from monopsony: the fewer firms there are in a location, the less often vacancies occur, and so the less firms that suppress wages have to worry that workers will quit.

The article considers a number of policy implications stemming from widespread monopsony that are worth reading. This could include, in the UK, improving rail communications into cities besides London. The one directly relevant to this post is that these results may help explain why minimum wages do not reduce employment. In the absence of minimum wages, relatively poorly performing firms may be able to shift the impact of poor performance from profits to wages. The minimum wage stops that happening. 

If monopsony is prevalent in large towns but not big cities, I cannot help wondering whether this might have something to do with the difference between towns and cities in the Brexit vote that I mentioned in my last post. Support for Trump is also strong in the rural parts of the US, which is where Steinbaum et al find monopsony is prevalent. What this monopsony study suggests is that working conditions within firms are likely to be worse in towns than in cities. What impact might that have on voters? One response to worker exploitation in towns is for people to leave, as they do. For those who stay, an overriding concern might be the survival of firms within the town. This in turn could have an important impact on voter attitudes.