Modeling: Don’t let the perfect be the enemy of the good
I opened this series by saying that public interest models of extractive industry projects serve not only the purposes financial models have been used for to date but also, beyond that analytical function, three other main functions: pedagogy, advocacy and strengthening government support.
In this post I will focus on how public interest models – published on the Internet and relying on public domain data – measure up purely in the traditional modeling role of financial analysis, and at the project level. I will make two main arguments:
- Public interest models can put useful system-level analysis of the upstream oil and gas industry into public understanding now.
- Using public domain data alone is viable in many cases, even from a purely analytical point of view.
In a follow-up post I will deal with the analytical power that open project models can deliver cumulatively.
Who is Your Model For?
But first it would be good to get a bit of context: what do financial models do, anyway? What don’t they do? What are the strengths and limitations of models? How would any of that change under a public interest modelling approach?
What a model does depends on what you want it to do – which depends on who you are. This might seem like a truism but actually isn’t. There can be a tendency for outsiders to be so in awe of the very process of modeling – all those numbers and charts! – that modeling can take on a kind of aura of transcendent knowledge. Wisdom from on high which defines its own terms. It is that which it is!
But there are differences between economic sectors. Extractives modeling, for example, is more focused on the assets on the balance sheet, like banking, than modeling in other industries such as consumer goods. Cyclicality and volatility in commodities are more crucial to financial modeling than in other sectors. And even within the oil and gas industry there are at least two schools of thought on how to account for exploration costs, “full cost” versus “successful efforts”, and assets tend to deplete rather than acquire value as they do in many other industries.
A large oil company might need modeling to compare the relative attractiveness of one potential investment project with another one on the other side of the world. Or the juggling act of how to manage operations simultaneously in many producing fields to maximise profit, while maintaining both reserves and cash flow, against constantly shifting price and cost bases. A company focused on exploration and production can stop its modeling where the crude has been shipped to market, which is just where a model for a refining and processing project starts.
In the market at large, the oil company itself is often the unit of analysis. Analysts look at the market valuation of listed companies and try to work out whether to buy or sell – and you can bet there is a lot of that happening at the moment with falling prices. These models can look very different to models of projects that might predominate in the upstream – check out this one of Occidental which ranges from exploration right through to petrochemicals, or watch this video about how to model the significant hedging used to trade gas in the continental US.
As we are focused on the governance aspect of the oil industry, it might seem safe to say we can focus on the government side of things. But of course that will also vary from country to country. Some countries have active national oil companies and some don’t. Some countries are invested in every part of the value chain, others are not. Gas is different to oil in many aspects – term contracts, regional markets – and mining is different again.
The point here is not the mind-blowing complexity of so many considerations, precisely because you don’t have to consider them all at once. The point is specificity.
The definition of a successful model is one which reliably serves a particular audience for a particular purpose. Every successful model was created by some specific people for themselves, or for other specific people. It doesn’t get simpler than that. Any model maker should be able to answer with an individual name the question: who is this model for?
The Governance World – welcome to FARI
If we zoom in on the upstream model for governance purposes – which is probably where public interest models will start, though probably not where they will end – we can make a few observations about what these kinds of models do.
Probably the largest practitioner of this kind of model is the International Monetary Fund, which regularly advises governments on this. In the 2000s, as the pace of technical missions stepped up, the IMF developed a tool to evaluate upstream economics known as the Fiscal Analysis of Resource Industries, or FARI for short. Originally used in-house to inform macroeconomic advice to governments, FARI quickly caught the interest of governments, and there is talk now of FARI being open sourced, as a contribution to public interest modeling.
FARI has developed a number of uses, such as evaluation of negotiations, bid reviews, revenue forecasting, and tax gap analysis. But it is perhaps best known for its evaluation of fiscal regimes. Having built a library of terms across a range of projects and a number of different “fiscal regimes” – all the interlocking terms of a contract and its surrounding legal environment – FARI can run numerical comparisons of different contracts against the same oil field or mine. This comes with a health warning, of course. Quantitative comparability tells us nothing about investor perceptions of risk, which dominate what returns a company is seeking and therefore influence project economics, or about the market strength of a particular government. Be careful, therefore, to compare apples to apples. But FARI is characterised by this ability to create such evaluations, built on top of a bottom-up, project-specific approach to the modeling itself.
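To make the mechanics concrete, here is a minimal sketch in the spirit of that kind of regime comparison: the same project economics run through two invented fiscal regimes. Every number and term below is hypothetical – FARI itself models many more interlocking terms, phased year by year, not lifetime totals.

```python
# Illustrative sketch only: two hypothetical fiscal regimes applied to the
# same project cash flow. A toy in the spirit of FARI-style comparisons,
# not FARI itself; all figures are invented.

def government_take(gross_revenue, costs, royalty_rate, tax_rate):
    """Government share of pre-take project profit under a simple
    royalty-plus-income-tax regime (lifetime totals, undiscounted)."""
    royalty = gross_revenue * royalty_rate
    taxable_profit = gross_revenue - royalty - costs
    tax = max(taxable_profit, 0.0) * tax_rate
    government = royalty + tax
    pre_take_profit = gross_revenue - costs
    return government / pre_take_profit

# Same field, two different regimes (figures in $m, lifetime, illustrative)
revenue, costs = 7_800.0, 3_500.0
regime_a = government_take(revenue, costs, royalty_rate=0.10, tax_rate=0.30)
regime_b = government_take(revenue, costs, royalty_rate=0.05, tax_rate=0.40)
print(f"Regime A take: {regime_a:.1%}, Regime B take: {regime_b:.1%}")
```

A real comparison would also discount flows over time and capture the interactions – cost recovery limits, ring-fencing, depreciation – that make the numerical answer non-obvious.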
What Models Don’t Do
Given the lack of familiarity with modeling at the moment, we should also lay down a few things models, including FARI, don’t do.
- They don’t, by themselves, allow you to conclude that a deal was good or bad, since this question cannot be settled by financial analysis alone, as noted above.
- Although they compare different fiscal regimes, they don’t pay much attention to the different legal modes of contracts per se, that is to say, differences between a production sharing contract, a concession agreement, or a service agreement, which are often hot political topics in producing countries. That makes sense, in fact, because, contrary to common belief, the economic differences between such legal approaches are smaller than they appear. In economic terms it is possible to produce the same financial results in a project under any contract mode.
- Nor do these models concentrate on giving direct analysis of headline terms, such as two different royalty rates, or the impact of a rise in corporate income tax. Such analytical capacity is certainly implicit – you can enter one of these models and change any of these inputs and see what happens. But the driving emphasis of the model as a whole is the project as a whole. It is there to represent all the interactions between multiple terms, not single terms standing alone.
- Last but not least, such models rarely provide a direct line to “actuals”, the real world payments made by companies to governments. They certainly provide a useful start. But even with good data (the inputs) and accurate characterisation of the fiscal regime (the engine in the middle) there are so many artefacts in the process that such a match to actuals, when it happens, requires a great deal of reconciliation, somewhat similar to some of the more challenging EITI reports.
In fact one of the characteristics of modeling large upstream oil projects, compared to other economic sectors, is the large variation of terms (the number and configuration of possible rules inside the model) over a relatively small data set (the number of projects modeled). The literature describes dozens of individual fiscal tools, each capable of being implemented in at least several ways, and most of them combinable with most of the others. This tends to confirm the idea that direct comparison will always be an art rather than a science, and that project-specific (bottom-up) modeling, even if harder, is what is needed to build the foundations of a solid understanding of the money flows in extractives, rather than a top-down approach that, for example, tries to run numbers across a whole sector without its constituent projects.
“All Models Are Wrong But Some Are Useful”
A quote, yes, but from whom? George Box, professor of statistics at the University of Wisconsin–Madison and one-time president of the American Statistical Association.
We are faced with the paradox that we are proposing to introduce models into the public domain because they can create greater certainty around the massive volatility of the oil and mining industries. And yet at the same time we must expose their limitations.
The imperfection of modeling has been openly acknowledged by leading economists since it was first deployed.
Alfred Marshall spearheaded the quantification of economics at Cambridge in the late nineteenth century which led to economics as we know it today, a social science (none of Smith, Ricardo or Mill were economists by today’s criteria). His view: “The laws of economics are to be compared with the laws of the tides, rather than with the simple and exact law of gravitation. For the actions of men are so various and uncertain, that the best statement of tendencies, which we can make in a science of human conduct, must needs be inexact and faulty.”
John Maynard Keynes wrote that “Economics is a science of thinking in terms of models joined to the art of choosing models which are relevant”. But this art of choosing is scarce, he added, “because the gift for using ‘vigilant observation’ to choose good models, although it does not require a highly specialised intellectual technique, appears to be a very rare one”.
Both Marshall and Keynes attributed uncertainty to the fact that economics seeks to describe the actions of humans. This is undoubtedly true. One way in which all extractives models must necessarily fail is in predicting the agency of human management, for example. Will a consortium continue production through a period of operating losses to keep a project running? Will they revise the risk premium they attach to a project or country over time? Will they invest in secondary enhancement techniques to boost or extend production?
But there is another generic class of errors in models which we are more familiar with in modern times: the understanding, as George Soros says, that human economic activity involves feedback loops across networks which have the potential to amplify. System complexity. And that therefore the classical assumption of equilibrium as a natural underlying state, against which all disturbances are local and to which they will always ultimately return, is an illusion.
The most obvious impact of this is on future pricing. Everyone now knows that anyone who predicts the price of oil beyond a year out is either a knave or a fool. And in an age of financialisation this volatility cannot be taken only as the extreme but measurable sensitivity of the market to fluctuations in the supply of a highly inelastic commodity. Not when trading volumes of derivatives and options are perhaps 30 times larger than physical oil.
This is not just a philosophical nod at human imperfection. These two general principles together – any model serves only a defined purpose or set of purposes, and all models are flawed – have specific consequences when we consider public interest models.
“In modelling there is God, Exxon, and everybody else”
Not all margins of error are equal. Once we are comfortable with the idea that all models are flawed, the questions that arise about any given margin of error are: first, how big is it? And second, how material is it to the specific purpose in hand?
That is the wisdom behind the old saw I heard from a seasoned analyst in the Middle East. “God” (or choose other culturally appropriate expression of omniscience) alone knows the future. “Exxon”, or the incumbent large integrated company, alone knows the geological prospectivity and cost basis of the project. Everyone else is left guessing from the outside. And the potential scale of margin of error at each level, in absolute terms, approaches an order of magnitude.
This might seem like bad news for public interest models since they are, by definition, looking in from the outside.
But there is still cause for hope. The first reason is that there are plenty of models out there already in a not dissimilar situation. Governments in theory have access to full information about geological prospectivity and costs, so they should – again in theory – be up there with Exxon. Globally, some are, in some aspects of both these key data inputs. But there are dozens of governments around the world, including those the governance community is trying hardest to work with, which are effectively nowhere near either, whatever their contractual rights of access. And where governments don’t have good access to project-level data, neither does anyone downstream of them, such as international financial institutions.
So in these cases, modeling relying on public data will not necessarily be worse off, and the distinction between public interest and other kinds of models should not be placed into a paradigm of “more accurate/less accurate” so much as “more openly imperfect/less openly imperfect”.
The perfect as the enemy of the good?
To demonstrate the relative materiality of margins of error, let’s take an example oil project and some data input estimates going into it and try to see what margin of error may be generated by what estimates – and how these margins of error relate to a range of different purposes the model might have.
A project in Africa is run under a Production Sharing Contract. There are some documents the company issued to investors which have entered the public domain and the full text of the contract has been published. In order to make a model, we have to extract the terms from the contract and combine them with other legislation to produce a fiscal regime analysis. Then we have to estimate various inputs to feed the model in order to get outputs.
Our default scenario assumes a $68 per barrel price, total lifetime production of 115 million barrels, exploration costs of $200 million, capital costs of just under $1.1 billion and operating costs at $34 million a year fixed plus $2.50 per barrel variable.
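Stated as model inputs, that default scenario might be sketched as follows. The 20-year field life is my assumption (needed to turn the fixed annual operating cost into a lifetime figure, and not taken from any published source), and a real model would phase every flow year by year and discount it rather than work in undiscounted lifetime totals.

```python
# The default scenario above, expressed as model inputs. Lifetime totals,
# undiscounted; the field life is an illustrative assumption.

inputs = {
    "price_usd_per_bbl": 68.0,
    "production_mmbbl": 115.0,
    "exploration_usd_m": 200.0,
    "capex_usd_m": 1_100.0,
    "opex_fixed_usd_m_per_yr": 34.0,
    "opex_var_usd_per_bbl": 2.50,
    "field_life_yrs": 20,  # assumption, not a published figure
}

def lifetime_cash_flow(p):
    """Undiscounted lifetime pre-take cash flow, in $m."""
    revenue = p["price_usd_per_bbl"] * p["production_mmbbl"]
    opex = (p["opex_fixed_usd_m_per_yr"] * p["field_life_yrs"]
            + p["opex_var_usd_per_bbl"] * p["production_mmbbl"])
    costs = p["exploration_usd_m"] + p["capex_usd_m"] + opex
    return revenue - costs

print(f"Undiscounted pre-take cash flow: ${lifetime_cash_flow(inputs):,.0f}m")
```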
If we have a model, we can test the impact that margins of error in various inputs might have on the things we are using the model to examine. The table below shows the sensitivity of the model to variations in each of the major classes of inputs.
Illustration 1: Variation in results relative to variation in inputs: Glencore PSC of Mangara-Badila fields in Chad (illustrative)
From this it quickly becomes clear that the accuracy of some classes of input matters more than others for possible variations in results. Exploration, operating costs and capital expenditures produce variations of 3% or less in an assessment of the government take. Price is more significant at 5%. But it is production which makes a huge difference in determining government take. At the lower production figure (1P) there is hardly any profit left to be distributed, whereas in the highest of the three production scenarios (3P) six times more oil is produced, the company reaches positive cash flow sooner, and the government graduates to a higher share of profits more quickly.
In estimating government revenues there is a similar differentiation in the impact of variation in data inputs. Exploration costs again have the lowest impact. Operating costs and capex both have considerable impact. But using a lower or higher price estimate can create a difference of a factor of two, and the difference between the lowest and highest production profiles available represents almost an order of magnitude.
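The kind of sensitivity exercise described above can be sketched as a loop that perturbs one input class at a time and records how far the result moves. The toy model, the ±20% band and the pre-take net cash flow metric are all illustrative; the actual analysis would apply the PSC terms and use the 1P/2P/3P production profiles rather than a flat percentage swing.

```python
# Sketch of a one-at-a-time sensitivity test on the default scenario.
# Undiscounted pre-take net cash flow, fixed opex omitted for brevity;
# all figures illustrative.

def net_cash_flow(price, production, exploration, capex, opex_per_bbl):
    revenue = price * production
    costs = exploration + capex + opex_per_bbl * production
    return revenue - costs

base = dict(price=68.0, production=115.0, exploration=200.0,
            capex=1_100.0, opex_per_bbl=2.50)
base_ncf = net_cash_flow(**base)

for name in base:
    for factor in (0.8, 1.2):  # illustrative +/-20% band per input class
        scenario = {**base, name: base[name] * factor}
        swing = net_cash_flow(**scenario) / base_ncf - 1.0
        print(f"{name} x{factor}: {swing:+.1%} change in net cash flow")
```

Even this toy reproduces the shape of the result above: revenue-side inputs (price, production) move the outcome far more than exploration cost does.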
This is a purely illustrative example. Some of the differentials relate to the particular terms of this contract and would vary in projects which were structured differently.
But it demonstrates the basic principle: the perfect risks being the enemy of the good in public interest modeling. There is reason to believe that public domain data can be used to create models, and we are at the beginning of a debate about which data can reliably be used for what purpose, not at the end of it.