What happens if you marry Open Source to Oil?

Why don’t we use open source software development techniques in the energy industry? There’s an easy answer to that, of course. The current structure of competition, between companies and between the corporate sector and host governments, is so heavily biased towards zero-sum thinking that it would be hard to make happen. Ghana, or Yemen, or the UK come to that, wants to sell a data package to oil companies bidding for exploration licenses. It wants to keep the results of its processing and analysis of that data as a competitive edge. Doh!

This has been bugging me for some time, being both an oil and open source geek. But it was given fresh edge yesterday while reading Daniel Yergin’s new book The Quest (of which one can only really say “masterful”), where he described the breakthrough of Petrobras in deep offshore Brazil.

Yergin interviewed Jose Sergio Gabrielli, the president of Petrobras, who was discussing Brazil’s stratospheric rise to major oil producer in the last decade by tapping into the pre-salt reservoirs of Tupi, where drilling went through 6,000 feet of water and 15,000 feet of rock beneath it. Pre-salt had always been tricky because the structure (waves hands wildly and makes it up based on what other people have written) of salt layers, sometimes more than a mile thick, distorts the returning sound waves from seismic exploration, disabling the normal 3D seismic processing and modelling techniques now standard in oil exploration.
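To make that distortion slightly more concrete, here is a toy sketch in Python. All the numbers are made up for illustration, and it only shows the simplest part of the problem: a fast salt layer shifts the travel times that depth images are built from. The real headache, the scattered and multi-pathed energy that Petrobras’s algorithms had to see through, is far worse.

# Toy sketch (not real seismic processing): seismic depth is inferred from
# two-way travel time, so a high-velocity salt layer makes everything
# beneath it return sooner than a no-salt velocity model expects.
# All thicknesses and velocities below are illustrative assumptions.

def two_way_time(layers):
    """layers: list of (thickness_m, velocity_m_per_s); returns seconds."""
    return sum(2.0 * thickness / velocity for thickness, velocity in layers)

# Same total depth (6,500 m) in both columns.
no_salt = [(2000, 1500),    # water, v ~ 1500 m/s
           (4500, 3000)]    # sediments, v ~ 3000 m/s

with_salt = [(2000, 1500),
             (1500, 3000),
             (1500, 4500),  # salt: much faster than the surrounding rock
             (1500, 3000)]

print(f"two-way time, no salt:   {two_way_time(no_salt):.2f} s")
print(f"two-way time, with salt: {two_way_time(with_salt):.2f} s")
# Interpret the with-salt time using a velocity model that doesn't know
# about the salt and everything beneath it lands at the wrong depth.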

“The breakthrough was pure mathematics,” Yergin quoted Gabrielli as saying. “We developed the algorithms which enabled us to take out the disturbances and look right through the salt layer”.

Impressive! But what if open source techniques had been available, so that the breakthrough didn’t depend on Petrobras having the gumption to figure out a route to the problem and then being able to source directly the skills that could solve it? That would be even better!

There are algorithms, of course, and there is data processing. Both required, in spades, for early and wide-area exploration. Think of all those early-stage producers, many of them marginal, hoping to find commercial-scale deposits.

But not only exploration. What about Carbon Capture and Storage modelling? An oil major executive recently told us that modelling techniques for how carbon dioxide behaves underground aren’t yet mature enough to standardise assessments of where CCS would be possible and at what cost. His company was spending tens of millions of dollars trying to build such models in just one project. Or further work on enhanced recovery techniques, which are surely (just guessing) extraordinarily intensive, algorithmically and computationally?

Open source could work at both those levels. Linux, after all, is generally considered more secure than Microsoft Windows partly because so many different minds worked on it, bugs could be caught, and contributors could be found in a much more fluid way than in closed software projects. Open source software systems have entered the mainstream of computing at all levels because they can offer a more efficient building process (declaration of interest: this article is being written in OpenOffice on an Ubuntu machine).

So that’s the classic open source software development paradigm, Eric Raymond’s Bazaar to Bill Gates’ Cathedral. But there’s also the data processing stage. If the complexity of seismic data, for example, grows non-linearly, and the number of data points is also increasing along some non-linear curve as data gathering techniques improve (think of offshore seismic exploration, with 20 miles of cables trailing in the water gathering input every few milliseconds), then it should follow that data processing needs are increasing almost exponentially. Oil is becoming a more and more information-technology-driven business.
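To put rough numbers on that, here is a back-of-envelope sketch. Every figure in it (streamer count, channel count, sample rate) is an illustrative assumption rather than a quoted survey spec, but the order of magnitude is the point.

# Back-of-envelope sketch of why marine seismic is a data-processing problem.
# All figures below are illustrative assumptions, not real survey specs.

streamers = 10                 # towed cables behind the survey vessel
channels_per_streamer = 1000   # hydrophone groups per cable
sample_rate_hz = 500           # one sample every 2 ms per channel (assumed)
bytes_per_sample = 4           # 32-bit float

channels = streamers * channels_per_streamer
bytes_per_second = channels * sample_rate_hz * bytes_per_sample

seconds_per_day = 24 * 3600
terabytes_per_day = bytes_per_second * seconds_per_day / 1e12

print(f"raw acquisition rate: {bytes_per_second / 1e6:.1f} MB/s")
print(f"per day of shooting:  {terabytes_per_day:.2f} TB, before any processing")
# Processing (velocity model building, migration) touches this data many
# times over, so the compute bill grows far faster than the raw volume.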

Now marry that to a SETI@home-style application: two million desktops, say, processing data packets in parallel in their owners’ spare CPU cycles, the packets batch-processed back and forth across the Internet. Why would people want to do that, and help Big Oil? Well, for one, or both, of two strong reasons. They could effectively obtain CPU-sweat equity in a venture. And the venture itself could be structured in a way which brought the commercial value of the data processing to transparency or good governance initiatives. In many cases this value would be marginal relative to the entire size of the transaction. But not all. It’s possible to imagine early-stage producers who don’t yet have any takers, for example, where you could strike a deal to bring some processing power to the table in exchange for guarantees on the way any industry would be developed going forward.
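In the crudest possible sketch, the coordination layer of such a scheme might look something like this. Threads stand in for volunteer machines, a dummy function stands in for the real processing kernel, and every name here is hypothetical.

# Minimal sketch of the SETI@home-style pattern described above: a
# coordinator splits the survey into independent work units, volunteers'
# machines process them in spare cycles, and the results come back to be
# reassembled. Names and numbers are hypothetical.

from concurrent.futures import ThreadPoolExecutor

def make_work_units(n_traces, unit_size):
    """Split trace indices into independent packets a volunteer can handle."""
    return [range(start, min(start + unit_size, n_traces))
            for start in range(0, n_traces, unit_size)]

def process_unit(unit):
    """Placeholder for whatever a volunteer client would actually run
    (e.g. a filtering or migration kernel on one packet of traces)."""
    return sum(i * i for i in unit)   # dummy computation

def run_campaign(n_traces=10_000, unit_size=500, volunteers=8):
    units = make_work_units(n_traces, unit_size)
    # In a real system each unit would travel over the Internet, results
    # would be cross-checked (e.g. the same unit sent to two volunteers),
    # and the contributor credited with their CPU-sweat equity.
    with ThreadPoolExecutor(max_workers=volunteers) as pool:
        return list(pool.map(process_unit, units))

if __name__ == "__main__":
    partials = run_campaign()
    print(f"{len(partials)} work units processed and reassembled")

The plumbing for this kind of thing already exists: SETI@home itself runs on BOINC, the general-purpose volunteer computing platform, so the hard part would be the incentive and governance design rather than the software.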

A stretch? Perhaps. At OpenOil we like to ignore constraints, consider things from first principles… and only then come back to the real world and see if any of it could stick. And I guess that’s the stage we’re at with this right now.

But if Petrobras can build a six million barrel a day industry based on algorithmic breakthroughs… what if? What if there are other computational problems out there that open source techniques could address?

