Wednesday, May 23, 2012

Important Factors in Migration Modeling and Charge Analysis

There is a tendency for researchers to focus on mechanisms and algorithms when it comes to migration modeling. You may have heard debates over Darcy vs. IP, finite element vs. finite difference, local grid refinement, etc. at modeling conferences, such as the one later this month in Houston. My opinion is that none of these matter much compared with other factors we often ignore. The theory of fluid flow in porous media was figured out more than 60 years ago (Darcy, Hubbert). There is not really any debate that oil and gas will migrate updip as long as there is a carrier bed.

The biggest problem is that many carrier beds are below seismic resolution. A 5-meter sand is typically not recognizable on seismic, yet it will easily divert migration laterally. The presence or absence of carrier beds, and their extent and connectivity, are essentially unobservable in most cases, and migration models built on different assumptions about them will yield very different answers.

The other large uncertainty is paleo-geometry. Typically, basin modeling tools backstrip layers of sediments to "determine" paleo-structure. More often than not this produces the wrong paleo-geometry, because (1) basins form before sediments are deposited, and the sediments fill the lows in the basin (not the way basin models are constructed); (2) depositing 1 km of sediment does not cause the basin to subside 1 km; it subsides much less because of isostasy (mantle material is denser than sediment); and (3) the shape of the basin does not follow exactly the shape of the newly deposited sediment layer, because the lithosphere has finite rigidity. For example, building a city like Houston does cause subsidence, but perhaps only a few centimeters. Not taking these first-order effects into account will certainly give wrong answers for paleo-migration.
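As a rough check on point (2), here is a quick Airy-isostasy calculation, a minimal sketch assuming local compensation, a water-filled basin, and typical (assumed) densities:

rho_sed, rho_water, rho_mantle = 2300.0, 1000.0, 3300.0   # kg/m3, assumed typical values
new_sediment = 1000.0                                      # m of sediment deposited
# local (Airy) balance: sediment replacing water is compensated by mantle displaced at depth
subsidence = new_sediment * (rho_sed - rho_water) / (rho_mantle - rho_water)
print(f"{subsidence:.0f} m of isostatic subsidence for {new_sediment:.0f} m of sediment")   # ~565 m

So only a bit over half of the accommodation for the new layer comes from subsidence; the rest fills existing bathymetry, and flexural rigidity (point 3) spreads even that subsidence over an area wider than the load itself.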

In reality, even present-day geometries are often incorrect. Seismic interpretation is often very uncertain. Two interpreters may, and often do, make different structure maps from the same data. Structure maps should be treated as "models" themselves, not data. Have you ever run a migration model using geometries based on 2D seismic and then on 3D seismic data? Are they not very different? I would even argue that 3D seismic is still not enough to resolve migration paths exactly; a fault with 5 meters of throw may not be visible on seismic, but it will change migration direction.

Seals are as important as carriers; they determine lateral vs. vertical migration, yet the parameters that determine seal capacity are uncertain by anywhere from a factor of 2 to several orders of magnitude. I will perhaps talk about seals in my next post.

Given these large uncertainties, our time is better spent improving the geological model: making and requiring better maps, testing scenarios of carrier presence and extent, and taking into account the first-order, large-scale geological processes (isostasy, paleo-bathymetry, flexure, etc.), rather than worrying about third- or fourth-order details such as flow mechanisms and mathematical algorithms. We should also treat our modeling software as a tool to help us think about the problem, rather than something that gives us answers.

Friday, March 4, 2011

Is Timing of Hydrocarbon Generation Really Important?

Haven't posted for a while now. Here I would like to challenge another good old petroleum system concept. We have been told that if a trap forms after oil generation is finished, it will not be able to receive charge, or it will receive only gas charge. It is one of the responsibilities of the basin modeler to demonstrate the timing of oil and gas generation relative to the timing of trap formation, perhaps using what is called a petroleum system event chart like this one.

Many of us now realize this is not necessarily true. In fact, if we had believed this concept, we might have missed a lot of big, important petroleum discoveries. Below are some examples that contradict the theory:

1) Perhaps a good example is Bohai Bay, where Phillips Petroleum made a big discovery in 1999 at only about 5,000 ft in the Minghuazhen/Guantao formation (check out the story here). What is interesting to me is that the reservoir was deposited only about 5 million years ago. This would mean the oil had to have been generated in the last couple of million years, if we allow the reservoir to be buried deep enough to have a seal. Right? Well, no: geo-history modeling shows that the Shahejie 3 source rock in the kitchen went through the oil window about 23 million years ago and is currently at about 8 km depth and 300 °C, yet the reservoir contains low-maturity oil!

2) In the deepwater Gulf of Mexico, the Jurassic source rock is currently 35,000 to 45,000 ft deep near many big fields, and models show oil generation occurred about 15-10 million years ago; the source is currently in the "gas window" under these fields. The Miocene reservoirs were deposited about 9 million years ago, yet they contain very low-maturity oil.

3) The Foinaven and Schiehallion fields in the West of Shetland basin contain undersaturated oil in Paleocene reservoirs. Again, the basin model shows that oil generation happened in the Late Cretaceous, prior to reservoir deposition. A "motel model" (oil had to migrate to a parking lot and wait for trap formation) was used to explain the apparent timing mismatch (Lamers and Carmichael, 1999). Interesting, isn't it?

I can list more examples, but suffice it to say, this seems to be the norm rather than the exception. Perhaps we should rethink how important the timing of generation really is? I would actually argue that these fields are probably still receiving charge today.

Perhaps the companies that discovered the fields in these examples did not listen to their basin modelers? How could they have predicted oil in these reservoirs based on the so-called petroleum system event chart? I think at least in the deepwater Gulf of Mexico, a certain large oil company may have listened to its modelers and missed most of the action that led to the discovery of many multi-billion-barrel fields there.


Saturday, March 6, 2010

Transient Effects Revisited

Today I had a chance to check out the book "Fundamentals of Basin and Petroleum Systems Modeling" by Thomas Hantschel and Armin Kauerauf (Springer-Verlag, 2009). It seems that transient effects may still be fundamentally misunderstood (and underestimated). Their fig. 3.4a on page 109 (shown below) shows a 1D model going through deposition, hiatus, and erosion stages. With the assumption that the heat flow at the base of the sediment stays constant at 60 mW/m2, the model predicts small (±5 mW/m2) changes in heat flow in the sediment column. The authors conclude that the transient effect is smaller than that caused by radioactivity within the sediments. You may click on the image to see a version with better resolution.
When evaluating transient effects, it may not be appropriate to assume constant heat flow at the base of the sediments. You can see from the figure that the forced basal boundary limits the extent of the transient effects; with a deeper boundary, the heat flow change would be more significant. More importantly, by setting the boundary at the base of the sediments, the model considers only the process of heating the sediments and misses the fact that depositing the new layer also puts the entire lithosphere out of equilibrium by moving the surface boundary.

The figure below shows this concept. After the new sediments are added, temperature, and therefore heat flow, must change throughout the entire lithosphere, not just within the new sediments, before steady-state thermal equilibrium is re-established (green curve). Secondly, since the entire lithosphere needs to be heated (not just the sediments) to reach the new equilibrium, it may take much, much longer than heating the sediments alone (the lithosphere is typically 10-20 times thicker than the sediments; see my previous post on this below).
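A back-of-envelope scaling makes the point. Conductive re-equilibration time goes roughly as thickness squared divided by thermal diffusivity; the sketch below assumes a uniform diffusivity of 1e-6 m2/s and ignores order-one prefactors:

kappa = 1e-6                     # thermal diffusivity, m2/s (assumed)
myr = 3.15e13                    # seconds per million years
for label, thickness in [("5 km sediment column", 5e3), ("120 km lithosphere", 120e3)]:
    tau = thickness**2 / kappa   # characteristic conductive time scale
    print(f"{label}: ~{tau / myr:.0f} Myr to re-equilibrate")

Because the time scale grows with the square of the thickness, a lithosphere 10-20 times thicker than the sediment column needs on the order of 100-400 times longer to settle back toward equilibrium.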

Below is a model with the same conditions as Hantschel and Kauerauf's model, except that it does not assume a constant heat flow at the base of the sediments. Instead, the temperature at the base of the lithosphere at 120 km is fixed at 1330 °C. The transient effects are much stronger compared to the figure at the top.

The following figure shows the predicted heat flow at the base of the sediment column through time. You can see that it is far from constant. From an initial 60 mW/m2, basal heat flow decreases to 48 mW/m2 at the end of the deposition period, increases gradually during the hiatus, and then rises to 72 mW/m2 at the end of the erosion period.
This amounts to a ±12 mW/m2 change over 10 million years, with deposition and erosion rates of 250 m/My, only a bit higher than an average deposition rate. The deepwater Gulf of Mexico, however, has deposition rates several times as high, and the heat flow at the base of the sediment today is around 35 mW/m2, whereas the steady-state heat flow would have been about 50 mW/m2.

In recently uplifted parts of North Africa, we see higher heat flows today. Following this analysis, it may be concluded that the heat flow prior to the uplift could have been 10 mW/m2 lower, depending on erosion rates. See this post for details.

The basin modeling literature is littered with papers that assume a heat flow at the base of the sediments independent of deposition/erosion rates. Where sedimentation rates are high, or vary significantly over time, such thermal models can cause significant errors in estimating the maturity and timing of petroleum generation. To be fair to the authors, this was how I used to do it in the 90s. But I have learned my lessons from those who learned before me.

Wednesday, January 13, 2010

How Long Does a Sedimentation Induced Thermal Disequilibrium Last?

This figure shows how sedimentation rate affects heat flow. It is based on a simple 1D basin model with a steady-state initial thermal condition. A shale layer (with typical shale properties assumed by basin modelers) is deposited between 100 and 99 million years ago, followed by a hiatus to the present day.



A fixed temperature of 1300 °C at 120 km below the basement and a fixed 10 °C at the sediment surface provide the boundary conditions. In a typical basin with continual and varying deposition rates over tens of millions of years, the temperature in the sediment column may always be in disequilibrium.
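For readers who want to experiment, here is a minimal sketch of the kind of 1D transient conduction calculation involved. It is not the model behind the figure: it assumes uniform thermal properties, deposits the layer instantaneously rather than over a million years, and uses the fixed-temperature boundaries described above, so the absolute heat flow values will not match, but it shows the transient depression and slow recovery:

import numpy as np

kappa = 1e-6                 # thermal diffusivity, m2/s (assumed uniform)
k_cond = 2.5                 # conductivity, W/m/K (assumed uniform)
dz = 500.0                   # grid spacing, m
depth_base = 120e3           # base of lithosphere, m
T_top, T_bot = 10.0, 1300.0  # boundary temperatures, degC

# steady-state (linear) profile before deposition
z = np.arange(0.0, depth_base + dz, dz)
T = T_top + (T_bot - T_top) * z / depth_base

# deposit 2 km of cold sediment instantaneously at surface temperature
n_sed = int(2000 / dz)
T = np.concatenate([np.full(n_sed, T_top), T])

dt = 0.4 * dz**2 / kappa                       # explicit stability limit, s
for step in range(int(10e6 * 3.15e7 / dt)):    # run for ~10 Myr
    T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[0], T[-1] = T_top, T_bot                 # hold the fixed-temperature boundaries
    if step % 315 == 0:                        # report roughly every 1 Myr
        q = k_cond * (T[n_sed] - T[n_sed - 1]) / dz   # heat flow just above base of sediment
        print(f"t = {step * dt / 3.15e13:4.1f} Myr   basal-sediment heat flow = {q * 1e3:5.1f} mW/m2")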

Wednesday, January 6, 2010

Two Types of Shale Gas?

Happy New Year!

It seems there are two types of shale gas:

Type 1: Shallow (a few hundred meters to 2,000 m), sorption dominated, TOC critical (roughly 7 scf/ton for each 1% of TOC). Maturity is important only insofar as it improves sorption capacity. May be of biogenic or mixed origin. The mechanism, and therefore the estimation methods, are similar to coalbed methane (CBM).

Type 2: Deep (>2,000 m), compression (free) gas dominated, porosity critical (roughly 20 scf/ton for each porosity percent), TOC less important. High maturity is very important, not only to improve sorption capacity and to generate the gas, but also to reduce the liquid volume, which would otherwise suppress sorption and lower relative permeability. Higher pressure improves the scf/ton value for the same porosity.

Shale gas evaluation requires a comprehensive model that takes into account: (a) a burial and thermal history model to predict maturity and porosity; (b) a Langmuir sorption model to calculate the amount of sorbed gas in the organic matter; and (c) a PVT model to calculate in-situ free (compression) gas and gas dissolved in the residual oil. In general, the behavior of such a model looks like the following:


These curves show shale gas capacity for 5% TOC and 1.8% VRo. The curves will also vary with the pressure gradient and thermal gradient.
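For what it is worth, here is a minimal sketch of such a capacity calculation, combining a Langmuir sorption term with a free gas term via a real-gas formation volume factor. The Langmuir pressure, gas saturation, Z factor, and bulk density are illustrative assumptions, and the ~7 scf/ton per 1% TOC figure from above is used here as the Langmuir volume:

def shale_gas_scf_per_ton(p_mpa, t_kelvin, toc_pct, phi,
                          sg=0.7, z=0.95, rho_bulk=2500.0,
                          p_langmuir_mpa=5.0, scf_ton_per_toc=7.0):
    # sorbed gas: Langmuir isotherm with the Langmuir volume scaled by TOC (assumption)
    sorbed = scf_ton_per_toc * toc_pct * p_mpa / (p_mpa + p_langmuir_mpa)
    # free (compression) gas: pore volume per short ton of rock, taken to standard conditions
    bg = z * t_kelvin * 0.101325 / (p_mpa * 288.7)   # reservoir m3 per standard m3
    bulk_m3_per_ton = 907.2 / rho_bulk               # bulk rock volume per short ton
    free = bulk_m3_per_ton * phi * sg / bg * 35.31   # standard m3 to scf
    return sorbed, free

print(shale_gas_scf_per_ton(5.0, 300.0, 7.0, 0.05))    # shallow, TOC-rich Type 1 case
print(shale_gas_scf_per_ton(35.0, 360.0, 5.0, 0.06))   # deep, mature Type 2 case

With these assumed numbers the deep case is dominated by free gas (roughly 20-25 scf/ton per porosity percent), while in the shallow case the sorbed term is of the same order as the free term.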

Tuesday, December 1, 2009

Why you should not use heat flow as input in your basin model

It is still common practice to use heat flow as input to basin models. It is really a bad practice, especially when the heat flow is supplied at the base of the sediment column. Modelers usually fit a heat flow to match temperature data in a well and then use the same heat flow in the kitchen, or even over geological time. The problem is that heat flow is a function of deposition rate (the so-called transient effect), which changes laterally (the kitchen area usually has higher sedimentation rates) as well as in time. Yes, I am talking about basement heat flow. We recommend using 1330 °C at the base of the lithosphere as the boundary condition. This will automatically determine the heat flow in the kitchen and its variation in time. Here are a couple of examples:


This figure shows heat flow (at the base of the sediment column) vs. time for the deepwater Gulf of Mexico. The rapid drop in heat flow in the Miocene is caused by rapid deposition. By assuming 1330 °C at the base of the lithosphere, basin models will automatically determine heat flow based on sedimentation rate as well as the conductivity of the rocks being deposited. Faster deposition of lower-conductivity rock depresses heat flow more. Heat flow slowly returns to steady state if there is no deposition for about 40 million years.


The second example is from North Africa, which underwent significant uplift and erosion during the Tertiary. Heat flow calculated with 1330 °C at the base of the lithosphere shows how heat flow increases during periods of erosion.
You may check out this earlier post to see how this approach can help determine heat flow in a kitchen area without wells. In most situations, vitrinite reflectance data (and other thermal indicators) are not sensitive enough to determine paleo-heat flow, because deeper burial at the present day overprints any record of cooler temperatures in the past.

Wednesday, November 18, 2009

A caveat when looking at Rock-Eval data from cuttings

I would like to share a recent experience I had working with a Jurassic source rock.

We had hundreds of Rock-Eval analyses of the source rock, all telling us it is a fairly gas-prone source (HI ranging from 50 to 250 mg/g TOC). However, in the sub-basin, all we have found so far are low-GOR oil accumulations. The problem turns out to be the way the source rock is typically sampled: from cuttings. Cuttings are usually a mixed bag of samples from 30 ft or more. In this case (or is it more prevalent?), they do not capture the actual source rock, which consists of coal seams each much less than 2 meters thick. These coals actually have 50% TOC and HI up to 500 mg/g TOC! Over the 200-meter source interval there may be only 10 meters of net coal, but that is equivalent to a 100-meter-thick, good, type II oil-prone source rock!
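A back-of-envelope check of that equivalence, ignoring density differences and treating generative potential per unit area as simply thickness x TOC x HI (the 5% TOC for the hypothetical type II interval is my assumption):

def generative_potential(thickness_m, toc_pct, hi_mg_per_g_toc):
    # relative generative potential per unit area ~ thickness x TOC x HI
    return thickness_m * (toc_pct / 100.0) * hi_mg_per_g_toc

coal = generative_potential(10.0, 50.0, 500.0)    # 10 m net coal, 50% TOC, HI 500
shale = generative_potential(100.0, 5.0, 500.0)   # 100 m "good" type II, 5% TOC (assumed)
print(coal, shale)                                # both 2500: the same potential

The thin coals carry the same potential as a much thicker conventional source, and a cuttings composite simply does not see them.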

I can imagine that as long as the source rock is not uniform (and how many of them are?), Rock-Eval data from cuttings will tend to downgrade the source rock to some degree, making it look less oil prone.

I hope this story may help remind other petroleum system modelers to use some caution when working with "real data". Happy Holidays!

Sunday, September 27, 2009

A Look at Shell's Genex model

The recent paper by John Stainforth (Marine and Petroleum Geology 26, 2009, pp. 552-572) gives us a hint of how Shell models hydrocarbon generation and expulsion. I can't help but comment a bit here. The paper begins by explaining the problems of other models, quote:

"Models for petroleum generation used by the industry are often limited by (a) sub-optimal laboratory pyrolysis methods for studying hydrocarbon generation, (b) over-simple models of petroleum generation, (c) inappropriate mathematical methods to derive kinetic parameters by fitting laboratory data, (d) primitive models of primary migration/expulsion and its coupling with petroleum generation, and (e) insufficient use of subsurface data to constrain the models. Problems (a), (b) and (c) lead to forced compensation effects between the activation energies and frequency factors of reaction kinetics that are wholly artificial, and which yield poor extrapolations to geological conditions. Simple switch or adsorption models of expulsion are insufficient to describe the residence time of species in source rocks. Yet, the residence time controls the thermal stresses to which the species are subjected for cracking to lighter species."

Kinetics: the paper shows the calibration of kinetic models to some "natural data" (his fig. 9) from an unspecified location (calculating a transformation ratio from Rock-Eval data is a tricky business, and we understand big oil companies need to keep secrets). Below are comparisons of the Shell models with some previously published models. Keep in mind that there is always a range of kinetic behavior within each type, and natural data tend to show a lot of scatter.




For a type I source, oil conversion only, there does not seem to be a big difference between the Shell model, the BP model, and the IFP model. The Bohai model is derived from subsurface data fitted with the Pepper and Corvi (1995) scheme. The Green River model is from Lawrence Livermore National Laboratory.


Here is the comparison for type II sources. For oil-only conversion (** denotes oil only), the Shell model requires a higher maturity, but it is almost the same as the BP class D/E facies (a more gas-prone type II). When I threw in some subsurface data points I have available, all of the models are reasonable within the variability of the data. Note that the oil-only and bulk (oil + gas) curves for the BP facies bracket the data set.

Now, let's look at type III source rocks. This is interesting! The IFP kinetics published more than 20 years ago do a better job of fitting the Shell data than Shell's own model. Again, if I throw in some of my own real data for a type III source, you can imagine what they look like. Gee, why are my data always more scattered?
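For readers who want to see where such transformation-ratio curves come from, here is a minimal parallel-reaction (distributed activation energy) sketch for a constant geological heating rate. The activation-energy distribution and frequency factor are illustrative assumptions, not any of the published models discussed above:

import numpy as np

R = 8.314e-3                              # gas constant, kJ/mol/K
A = 1e14                                  # frequency factor, 1/s (assumed)
E = np.arange(48.0, 61.0, 2.0) * 4.184    # activation energies, 48-60 kcal/mol in kJ/mol (assumed)
f = np.array([0.05, 0.1, 0.2, 0.3, 0.2, 0.1, 0.05])   # fraction of the potential in each reaction
x = np.ones_like(f)                       # unreacted fraction of each component

heating_rate = 3.0 / 3.15e13              # 3 degC per Myr, in K/s (assumed)
dt = 3.15e12                              # 0.1 Myr time step, s
T = 273.15 + 20.0                         # start at 20 degC

for step in range(600):                   # about 60 Myr of steady burial
    T += heating_rate * dt
    k = A * np.exp(-E / (R * T))          # Arrhenius rate of each parallel reaction
    x *= np.exp(-k * dt)
    if step % 60 == 0:
        TR = 1.0 - np.sum(f * x)          # transformation ratio
        print(f"T = {T - 273.15:5.1f} degC   TR = {TR:.2f}")

Even this toy version shows why extrapolation is delicate: shifting the activation energies by a couple of kcal/mol, or changing the frequency factor, moves the whole curve by tens of degrees, which is the kind of difference being debated between the published models.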

Expulsion model: Shell's expulsion model assumes hydrocarbon expulsion is a diffusion process. I like the behavior of the model in terms of its implications for the composition of the expelled fluids and the time lag it predicts. I am not sure we need to compare that with the simple expulsion models some commercial software uses. For expulsion volumes, the simple threshold model in commercial software has the advantage of providing quick answers (volumes and GOR) well within the uncertainty of the data, and it allows scenario testing and even probabilistic modeling of charge volumes. The Shell model may predict a different position in the source where the residual oil peaks, but if you plot some real data, the scatter is a lot bigger than the theoretical differences.




This figure shows retained S1/TOC over an interval in the oil window (VRo = 1.0-1.2, type II source). We cannot really see evidence for any particular expulsion mechanism, Darcy flow or diffusion. The retained oil is probably mostly adsorbed in the organic matter, since S1 plotted by itself is more scattered. The average of 100-120 mg/g TOC is what the simple expulsion model defaults to, which is a good practical approach that avoids dwelling on the exact mechanism. There must also be some free hydrocarbons in the pores that could allow Darcy flow.
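As an illustration of that default, here is a minimal sketch of the simple threshold (retention) expulsion approach, nothing more than generated minus retained; the retention capacity and source properties are assumed values, and the decline of TOC with conversion is ignored:

def expelled_oil_mg_per_g_rock(toc_pct, hi0_mg_per_g_toc, tr, retention_mg_per_g_toc=110.0):
    generated = (toc_pct / 100.0) * hi0_mg_per_g_toc * tr    # oil generated so far, per g rock
    retained = (toc_pct / 100.0) * retention_mg_per_g_toc    # retention capacity of the organic matter
    return max(0.0, generated - retained)

for tr in (0.1, 0.3, 0.5, 0.8):
    print(tr, round(expelled_oil_mg_per_g_rock(5.0, 600.0, tr), 1))   # mg expelled per g rock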

A recent data set casts serious doubt on all of the expulsion models, including diffusion. In the gas-producing Barnett Shale (an oil source), the total currently retained gas is in excess of 100 mg/g TOC. This is several times more than any of the models predict, and the shale has been uplifted, so no active generation is occurring.

This paper is good research and may give us some insights into the processes, but I am not sure I see anything that will change the way we rank prospects, which I assume is our job as petroleum system analysts. The paper lists several theoretical advantages of the Shell model, for example expulsion during uplift, the predicted compositions, GOR profiles, etc. But it seems to me none of these will make any difference when we apply the models in exploration. His figure 13b predicts a type I source rock expelling 1,000+ scf/bbl GOR oils at very low maturity (VRo < 0.7%). Even if that is true, are we really going to try to find some of these oil fields (if it is not clear to you, the volume expelled before VRo reaches 0.7% is almost nil)? The typical situation is that we have some Rock-Eval data from wells drilled on highs, which we assume sample the equivalent of the source rock in the kitchen. But the uncertainty due to this assumption can be huge. In the Dampier sub-basin of the NW Shelf of Australia, plenty of oil has been found, while all available source rock data show a type III, gas-prone source. The actual source rock is rarely penetrated, and even if it were, it would be too mature to derive kinetics, or even the original HI, from it. Seismic data have roughly 100 m resolution at the depth of the source, so we do not even have a good estimate of its thickness. What is the point of worrying about minor differences in kinetics?

As for expulsion during uplift, I am not sure we can prove it with geological data. Since there is definitely expulsion before uplift, the additional volumes expelled may be trivial compared to the volumes expelled before cooling, or to the uncertainty in calculating volumes in an uplifted basin. In addition, the other models do still expel some volume during uplift, because typical kinetic models do not shut off right away.

The paper's criticism of Pepper and Corvi (1995b), that they did not show gas expulsion during the oil window, may not be accurate. As far as I am aware, the Pepper and Corvi source facies are all tuned to give appropriate GOR ranges during the oil window, even if this is not obvious from the mass-fraction graphs in the original paper.

Saturday, September 26, 2009

The Myth About Kinetics


The general practice among modelers is to use a published kinetic model for a given source rock type or organofacies to predict the degree of maturation. Within each organofacies (for example, type II or clay-rich marine shales), some variation in kinetic behavior is expected. This is an uncertainty researchers try to reduce. There are several controversies surrounding the use of kinetic models.
  • Custom kinetics: This is when we take a sample of an immature source rock and attempt to measure the kinetic parameters in the lab. Some people regard this as an improvement over generic published models. There are a couple of issues with custom kinetics: a) the wells we take samples from are usually on the highs or edges of basins, so the samples may not represent the real source rock in the kitchen; b) when lab-derived kinetic models are extrapolated to geological time scales, they tend not to match well data very well. Some published models have been corrected for this effect and can account for the differences, while custom kinetics may not.
  • Compositional kinetics: This is where I think researchers may have gone a bit overboard. The idea is that we can predict the composition of the fluids beyond a simple oil and gas split. The problems are that 1) the source rock facies in the kitchen may not be the same as where we have samples; our well samples are typically biased because wells are drilled on highs rather than in the actual kitchen; and 2) we usually do not even have a good estimate of the basic source rock parameters (thickness, hydrogen index, and TOC), let alone their lateral and vertical variations. The figure below shows a typical source interval with organofacies changing over time. We also know this changes laterally into the kitchen area, where we have no measurements. Our prediction of fluid types depends more on how we account for this vertical and spatial variability than on the sophistication of a compositional kinetics model.

  • Different ways to determine kinetics: There are also debates about the best way to measure kinetics, e.g. hydrous versus anhydrous pyrolysis, or isothermal versus programmed heating rates. My experience is that the results may be slightly different, but the differences are smaller than the uncertainty in extrapolating from the samples to the source rock kitchen, and in estimating temperatures in the kitchen.
In summary, we are better off thinking about the variability of the source rock in terms of depositional environment, and accounting for such variations in the source rock model, rather than worrying about details of the kinetics. Let's face it: it is not possible to accurately predict the detailed composition of the fluids in the trap. The practical approach is to run multiple scenarios and rank our prospects based on the range of predictions.

You see that building a more accurate speedometer will not improve our prediction of the time it would take us to drive from our home to downtown.

Darcy, Percolation or Ray Tracing?

I personally believe that the subsurface migration of hydrocarbons follows Darcy's law. However, the numerical solution is so time consuming that it is practically impossible to build models at 3D seismic resolution, and anything less is not really geologically realistic: structural details missed by 2D seismic will significantly affect migration direction. At geological time scales, most people believe that percolation or ray tracing is a good approximation of the migration process, and these methods can handle 3D seismic resolution.
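To make the contrast concrete, here is a minimal invasion-percolation sketch, not any commercial implementation: hydrocarbon enters along the base of a grid of capillary-entry thresholds and repeatedly invades the easiest accessible cell until it breaks through at the top. The random thresholds and the simple buoyancy trend are assumptions purely for illustration:

import heapq
import numpy as np

rng = np.random.default_rng(0)
nz, nx = 40, 80
# capillary-entry thresholds: heterogeneous, and easier to invade upward (a crude buoyancy effect)
threshold = rng.random((nz, nx)) - np.linspace(1.0, 0.0, nz)[:, None]

invaded = np.zeros((nz, nx), dtype=bool)
frontier = []                                  # min-heap of (threshold, row, col) on the invasion front
for j in range(nx):                            # charge enters along the base (row nz - 1)
    heapq.heappush(frontier, (threshold[nz - 1, j], nz - 1, j))

while frontier and not invaded[0].any():       # stop once charge reaches the top row
    t, i, j = heapq.heappop(frontier)
    if invaded[i, j]:
        continue
    invaded[i, j] = True                       # invade the cheapest accessible cell
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < nz and 0 <= nj < nx and not invaded[ni, nj]:
            heapq.heappush(frontier, (threshold[ni, nj], ni, nj))

print(f"{invaded.sum()} of {nz * nx} cells invaded before breakthrough")

Each step is a heap operation rather than a pressure solve, which is why percolation-style methods can afford grids at or near seismic resolution.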

However, even 3D seismic resolution is not nearly enough. Vertically migrating oil and gas is easily diverted in a different direction by a carrier bed just a few inches thick; that is beyond even well-log resolution. A petroleum system analyst should be practical about this, think about where these carriers may exist, and test different scenarios. Prospects that can be charged under more scenarios are assigned lower risk, and vice versa.

Heat Flow Prediction

How should I go about estimating heat flow when the nearest well is 50 miles away? The common wisdom in basin modeling is to calibrate a heat flow from temperatures in a well and use the same heat flow in the nearby kitchen. The problem is that the kitchen usually has a different heat flow than where we drill wells. Heat flow is mainly a function of crustal thickness and sedimentation rate, among a few other variables. So the correct thing to do is to estimate the crustal thickness in the kitchen relative to the well location. The effect of sedimentation rate should be taken care of by a 1D basin model, provided the model includes the entire lithosphere. This gives the best prediction of heat flow away from wells, even 50 miles away. Here is an example from NW Australia.
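A first-order sketch of why crustal thickness matters so much, treating surface heat flow as a mantle contribution plus crustal radiogenic heat production (both values are assumptions; a full model also carries the transient effects of stretching and sedimentation):

q_mantle = 30.0      # mW/m2, assumed background mantle heat flow
a_crust = 1.0        # uW/m3, assumed average crustal heat production
for crust_km in (40.0, 30.0, 20.0):
    q_surface = q_mantle + a_crust * crust_km   # uW/m3 times km gives mW/m2
    print(f"{crust_km:.0f} km of crust -> roughly {q_surface:.0f} mW/m2 at the surface")

So if the kitchen sits on crust 10 km thinner than at the calibration well, a difference of order 10 mW/m2 is already expected from radiogenic heat alone, before the sedimentation-rate effect is added.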

There are a couple of sources of surface heat flow data. One is surface probes, which measure heat flow in the first few meters below the sea floor using a transient measurement; the technique has improved over the past few years. The other is the seismic bottom-simulating reflector (BSR). These techniques can provide a sense of relative regional heat flow variations, but they should not be used directly in basin models to estimate temperatures at reservoir or source depths. Aside from the large uncertainties, in most offshore regions the heat flow varies significantly with depth, and the surface heat flow fluctuates over time due to short-term (1-100 thousand year) changes in sedimentation rates.