It is still common practice to use heat flow as the input to basin models, and it is a bad practice, especially when the heat flow is supplied at the base of the sediment column. Modelers usually fit a heat flow to match temperature data at a well and then apply the same heat flow in the kitchen, or even hold it constant over geological time. The problem is that heat flow depends on deposition rate (the so-called transient effect), which varies laterally (the kitchen usually has higher sedimentation rates) as well as through time. Yes, I am talking about basement heat flow. We recommend using 1330 °C at the base of the lithosphere as the boundary condition instead. This automatically determines the heat flow in the kitchen and its variation through time. Here are a couple of examples:
This figure shows heat flow (at the base of the sediment column) versus time for the deep-water Gulf of Mexico. The rapid drop in heat flow during the Miocene is caused by rapid deposition. With 1330 °C fixed at the base of the lithosphere, the basin model determines heat flow automatically from the sedimentation rate and the conductivity of the rocks being deposited: faster deposition of lower-conductivity rock depresses heat flow more. Heat flow slowly re-equilibrates toward steady state once there has been no deposition for about 40 million years.
The second example is from North Africa, which underwent significant uplift and erosion during the Tertiary. Heat flow calculated with 1330 °C at the base of the lithosphere shows how heat flow increases during periods of erosion.
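For anyone who wants to see the mechanics, here is a minimal 1D sketch of the first effect. All parameter values are illustrative assumptions; compaction, radiogenic heat and the extra blanketing from the lower conductivity of the sediments are ignored, so only the transient cooling from stacking cold material on the column is captured.

```python
import numpy as np

DZ = 1000.0                     # grid cell thickness, m
T_SURF, T_BASE = 5.0, 1330.0    # deg C at the sea floor and at the base of the lithosphere
K_BSMT = 3.0                    # W/m/K, conductivity used to report heat flow
KAPPA = 1e-6                    # m2/s, thermal diffusivity (kept uniform for simplicity)
MYR = 3.15e13                   # seconds per million years

def basement_heat_flow(dep_rate_m_per_myr=500.0, dep_myr=20.0, total_myr=60.0):
    """Explicit finite-difference conduction; cold cells are stacked on top while depositing."""
    T = np.linspace(T_SURF, T_BASE, 100)      # 100 km column, initial steady gradient
    dt = 0.2 * DZ**2 / KAPPA                  # explicit stability limit
    n_sed, next_cell_time, t, history = 0, 0.0, 0.0, []
    while t < total_myr * MYR:
        T[1:-1] += KAPPA * dt / DZ**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        t += dt
        if t < dep_myr * MYR and t >= next_cell_time:
            T = np.insert(T, 0, T_SURF)       # deposit one cold cell at the sea floor
            n_sed += 1
            next_cell_time += DZ / dep_rate_m_per_myr * MYR
        q = K_BSMT * (T[n_sed + 1] - T[n_sed]) / DZ * 1000.0   # mW/m2 just below the sediments
        history.append((t / MYR, q))
    return history

history = basement_heat_flow()
for t_myr, q in history[::len(history) // 8]:
    print(f"{t_myr:5.1f} Myr: ~{q:5.1f} mW/m2")
```

Running it shows heat flow dipping while deposition is active and then slowly creeping back toward steady state, which is the behavior in the Gulf of Mexico example above.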
You may check out this earlier post to see how this approach can help determine heat flow in a kitchen area without wells. In most situations, vitrinite reflectance data (and other thermal indicators) are not sensitive enough to constrain the paleo-heat flow, because deeper burial at present day overprints any record of cooler temperatures in the past.
Wednesday, November 18, 2009
A caveat of looking at Rock-Eval data from cuttings
I would like to share a recent experience I had working with a Jurassic source rock.
We had hundreds of Rock-Eval analyses of the source rock, all telling us it is a fairly gas-prone source (HI ranging from 50 to 250 mg/g TOC). However, all we have found in the sub-basin so far are low-GOR oil accumulations. The problem turns out to be the way the source rock is typically sampled: from cuttings. Cuttings are usually a mixed bag of material from an interval of 30 ft or more. In this case (or is it more prevalent?), they do not capture the actual source rock, which consists of coal seams each much less than 2 meters thick. These coals have 50% TOC and HI up to 500 mg/g TOC! Over the 200-meter source interval there may be only 10 meters of net coal, but that is equivalent to a 100-meter-thick, good-quality, type II oil-prone source rock! A rough check of that equivalence is sketched below.
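Here is the back-of-the-envelope arithmetic. The coal numbers are the ones quoted above; the type II TOC, HI and both densities are assumed for illustration only.

```python
# Generative potential per unit area ~ net thickness * rock density * TOC * HI.
def potential_kg_per_m2(thickness_m, toc_frac, hi_mg_per_g_toc, density_kg_m3):
    """kg of generatable hydrocarbon per m2 of source rock."""
    toc_kg_per_m2 = thickness_m * density_kg_m3 * toc_frac
    return toc_kg_per_m2 * hi_mg_per_g_toc / 1000.0   # mg HC/g TOC == g HC/kg TOC

coal  = potential_kg_per_m2(10.0, 0.50, 500.0, density_kg_m3=1500.0)   # density assumed
shale = potential_kg_per_m2(100.0, 0.04, 450.0, density_kg_m3=2400.0)  # TOC, HI, density assumed
print(f"10 m net coal      : {coal:,.0f} kg HC per m2")
print(f"100 m type II shale: {shale:,.0f} kg HC per m2")
```

The two columns come out at the same order of magnitude, which is the point: a thin net coal package can carry the charge of a much thicker conventional oil-prone source.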
I can imagine that whenever the source rock is not uniform (and how many of them are?), Rock-Eval data from cuttings will tend to downgrade the source rock to some degree, making it look less oil prone.
I hope this story may help remind other petroleum system modelers to use some caution when working with "real data". Happy Holidays!
Sunday, September 27, 2009
A Look at Shell's Genex model
The recent paper by John Stainforth (Marine and Petroleum Geology 26, 2009, pp. 552–572) gives us a hint of how Shell models hydrocarbon generation and expulsion. I can't help but comment a bit here. The paper begins by explaining the problems of other models, quote:
"Models for petroleum generation used by the industry are often limited by (a) sub-optimal laboratory pyrolysis methods for studying hydrocarbon generation, (b) over-simple models of petroleum generation, (c) inappropriate mathematical methods to derive kinetic parameters by fitting laboratory data, (d) primitive models of primary migration/expulsion and its coupling with petroleum generation, and (e) insufficient use of subsurface data to constrain the models. Problems (a), (b) and (c) lead to forced compensation effects between the activation energies and frequency factors of reaction kinetics that are wholly artificial, and which yield poor extrapolations to geological conditions. Simple switch or adsorption models of expulsion are insufficient to describe the residence time of species in source rocks. Yet, the residence time controls the thermal stresses to which the species are subjected for cracking to lighter species."
Kinetics: the paper shows the calibration of kinetic models to some "natural data" (his fig. 9) from an unspecified location (calculating a transformation ratio from Rock-Eval data is a tricky business, and we understand big oil companies need to keep secrets). Below are comparisons of the Shell models with some previously published models. Keep in mind that there is always a range of kinetics for each type, and natural data tend to have a lot of scatter.
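As an aside on why that calculation is tricky: one commonly used approximation backs the transformation ratio out of the measured hydrogen index and an assumed original hydrogen index, and the answer is quite sensitive to that assumption. A minimal sketch with made-up numbers:

```python
# Approximate transformation ratio from Rock-Eval HI, given an assumed original HI0.
def tr_from_hi(hi_now, hi_original, hi_max=1200.0):
    """Fractional conversion of the original generative potential (a common approximation)."""
    return 1.0 - (hi_now * (hi_max - hi_original)) / (hi_original * (hi_max - hi_now))

# The same measured HI gives very different answers depending on the assumed HI0:
for hi0 in (400.0, 500.0, 600.0):
    print(f"assumed HI0 = {hi0:.0f} mg/gTOC -> TR = {tr_from_hi(150.0, hi0):.2f}")
```

For a measured HI of 150 mg/g TOC, the inferred transformation ratio swings from roughly 0.7 to 0.85 depending on the assumed original HI, before we even worry about S1 contamination or matrix effects in the pyrolysis data.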
For type I sources, oil conversion only, there does not seem to be a big difference between the Shell model, the BP model and the IFP model. The Bohai model is derived from subsurface data fitted with the Pepper and Corvi (1995) scheme. The Green River model is from Lawrence Livermore National Laboratory.
Here is a comparison of the type II sources. For oil-only conversion (** denotes oil only), the Shell model requires a higher maturity, but it is almost the same as the BP class D/E facies (a more gas-prone type II). When I threw in some subsurface data points I have available, all of the models were reasonable within the variability of the data. Note that the oil-only and the bulk (oil + gas) curves for the BP facies bracket the data set.
Now, let's look at type III source rocks. This is interesting! The IFP kinetics published more than 20 years ago do a better job of fitting the Shell data than Shell's own model. Again, if I throw in some of my own real data for a type III source, you can imagine what they look like. Gee, why are my data always more scattered?
Expulsion model: Shell's expulsion model treats hydrocarbon expulsion as a diffusion process. I like the behavior of the model in terms of its implications for the composition of the expelled fluids and the time lag it predicts. I am not sure we need to compare that with the simple expulsion models some commercial software uses. For expulsion volumes, the simple threshold model in commercial software has the advantage that it provides quick answers (volumes and GOR) well within the uncertainty of the data, and it allows scenario testing and even probabilistic modeling of charge volumes. The Shell model may predict a different position at which residual oil peaks within the source, but if you plot some real data, the scatter is a lot bigger than the theoretical differences.
This figure shows retained S1/TOC over an interval in the oil window (VRo = 1.0-1.2, type II source). We cannot really see evidence of either expulsion flow mechanism, Darcy flow or diffusion. The retained oil is probably mostly adsorbed in the organic matter, since S1 plotted by itself is more scattered. The average of 100-120 mg/g TOC is what the simple expulsion model defaults to, which is a good practical approach that avoids dwelling on the exact mechanism. There must be some free hydrocarbons in the pores as well, which may allow Darcy flow.
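For reference, this is all the simple threshold model amounts to. The ~110 mg/g TOC retention value below is just the ballpark default discussed above, not a universal constant.

```python
# Minimal retention-threshold expulsion sketch: oil generated beyond what the
# organic matter can retain is expelled; everything below the threshold stays put.
def expelled(generated_mg_per_g_toc, threshold_mg_per_g_toc=110.0):
    """Cumulative expelled oil (mg/g TOC) under a simple retention threshold."""
    return max(0.0, generated_mg_per_g_toc - threshold_mg_per_g_toc)

for generated in (50.0, 150.0, 400.0):
    print(f"generated {generated:5.0f} mg/gTOC -> expelled {expelled(generated):5.0f} mg/gTOC")
```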
A recent data set casts serious doubt on all of the expulsion models, including diffusion. In the gas-producing Barnett shale (an oil-prone source), the total retained gas at present day is in excess of 100 mg/g TOC. This is several times more than any of the models predict, and the shale has been uplifted, so no active generation is occurring.
This paper is good research and may give us some insight into the processes, but I am not sure I see anything that will change the way we rank prospects, which I assume is our job as petroleum system analysts. The paper lists several theoretical advantages of the Shell model, for example expulsion during uplift, the predicted compositions, GOR profiles, etc. But it seems to me none of these will make any difference when we apply the models in exploration. His figure 13b predicts a type I source rock expelling 1000+ scf/bbl GOR oils at very low maturity (VR < 0.7%). Even if it is true, are we really going to try to find some of these oil fields (if it is not clear to you, the volume expelled before VR reaches 0.7% is almost nil)? The typical situation is that we have some Rock-Eval data from wells drilled on highs, and we assume they represent the equivalent source rock in the kitchen. The uncertainty due to this assumption can be huge. In the Dampier sub-basin of the NW Shelf of Australia, plenty of oil has been found, while all available source rock data show a type III gas-prone source. The actual source rock is rarely penetrated, and even if it were, it would be too mature to derive kinetics or even the original HI from it. Seismic data have roughly 100 m resolution at the depth of the source, so we do not even have a good estimate of its thickness. What is the point of worrying about minor differences in kinetics?
As for expulsion during uplift, I am not sure we can prove it with geological data. Since there is certainly expulsion before uplift, the additional volumes expelled during uplift may be trivial compared with the volumes expelled before cooling, or with the uncertainty in calculating volumes in an uplifted basin. In addition, the other models actually do still expel some volume during uplift, because the typical kinetic models do not shut off right away.
The paper's criticism of Pepper and Corvi (1995b) for not showing gas expulsion during the oil window may not be accurate. As far as I am aware, the Pepper and Corvi source facies are all tuned to give appropriate GOR ranges during the oil window, even if that is not obvious from the mass-fraction graphs in the original paper.
Saturday, September 26, 2009
The Myth About Kinetics
- Custom kinetics: This is when we take a sample of an immature source rock and measure the kinetic parameters in the lab. Some people regard this as an improvement over generic published models. There are a couple of issues with custom kinetics: a) the wells we take samples from are usually on the highs or edges of basins, so the samples may not represent the real source rock in the kitchen; b) when lab-derived kinetic models are extrapolated to geological time scales, they often do not match well data very well. Some of the published models have been corrected for this effect and hence can account for the differences, while custom kinetics may not.
- Compositional kinetics: This is where I think the researchers may have gone a bit overboard. The idea is that we can predict the composition of the fluids beyond a simple oil and gas split. The problems are that 1) the source rock facies in the kitchen may not be the same as where we have samples; our well samples are typically biased because they are drilled on higher structures rather than in the actual kitchen; and 2) we usually do not even have good estimates of the basic source rock parameters (thickness, hydrogen index and TOC), let alone their lateral and vertical variations. The figure below shows a typical source interval with organofacies changing over time. We also know this would change laterally into the kitchen area, where we have no measurements. Our prediction of fluid types depends on how we account for this vertical and spatial variability far more than on having a sophisticated compositional kinetics model.
- Different ways to determine kinetics: There are also debates about the best way to measure kinetics, e.g. hydrous versus anhydrous pyrolysis, isothermal versus programmed heating rates. My experience is that the results may differ slightly, but the differences are smaller than those caused by the uncertainty in extrapolating from the samples to the source rock kitchen, and by the uncertainty in estimating temperatures in the kitchen.
You see, building a more accurate speedometer will not improve our prediction of how long it takes to drive from home to downtown. A rough sensitivity check along these lines is sketched below.
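The sketch integrates a single first-order Arrhenius reaction at a geological heating rate for two illustrative (not published) parameter sets. The shift in the predicted oil window is on the order of ten degrees, which is comparable to the temperature uncertainty we typically carry in an undrilled kitchen.

```python
import numpy as np

R = 8.314   # J/mol/K

def transformation_ratio(ea_kj, a_per_s, heat_rate_c_per_myr, t_end_c=250.0):
    """TR vs temperature for one first-order reaction at a constant heating rate."""
    myr = 3.15e13
    beta = heat_rate_c_per_myr / myr                 # K/s
    T = np.linspace(20.0, t_end_c, 2000) + 273.15
    k = a_per_s * np.exp(-ea_kj * 1000.0 / (R * T))
    integral = np.cumsum(k) * (T[1] - T[0]) / beta   # integral of k dt = integral of k dT / beta
    return T - 273.15, 1.0 - np.exp(-integral)

# Two compensating-looking parameter sets and a 3 C/Myr geological heating rate:
for ea, a in ((215.0, 1.0e14), (225.0, 6.0e14)):
    T_c, tr = transformation_ratio(ea, a, heat_rate_c_per_myr=3.0)
    print(f"Ea = {ea:.0f} kJ/mol, A = {a:.0e}/s: TR = 0.5 near {T_c[np.searchsorted(tr, 0.5)]:.0f} C")
```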
Darcy, Percolation or Ray Tracing?
I personally believe that the subsurface migration of hydrocarbons follows Darcy's law. However, the numerical solution is so time consuming that it is practically impossible to build Darcy-flow models at 3D seismic resolution, and anything coarser is not really geologically realistic: structural details missed by 2D seismic will significantly affect migration directions. At geological time scales, most people consider percolation or ray tracing a good approximation of the migration process, and both can handle 3D seismic resolution.
However, even 3D seismic resolution is not nearly enough. Vertically migrating oil and gas is easily diverted in a different direction by a carrier bed just a few inches thick, which is below well-log resolution. A petroleum system analyst should be practical about this: think about where these carriers may exist and test different scenarios. Prospects that can be charged under more scenarios are assigned lower risk, and vice versa. A toy flow-path tracer is sketched below.
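As an illustration of the ray-tracing idea, here is a toy flow-path tracer on a synthetic depth grid. Nothing here comes from any particular software; a real implementation would run on a depth-converted 3D horizon and deal with faults, spill points and fill-spill chains.

```python
import numpy as np

def trace_path(depth, start):
    """Follow the shallowest 8-connected neighbour until no neighbour is shallower."""
    nz, nx = depth.shape
    path, (i, j) = [start], start
    while True:
        neighbours = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                      if (di, dj) != (0, 0)
                      and 0 <= i + di < nz and 0 <= j + dj < nx]
        best = min(neighbours, key=lambda ij: depth[ij])
        if depth[best] >= depth[i, j]:       # local structural high: a potential trap
            return path
        (i, j) = best
        path.append(best)

# Synthetic dome: depth increases away from a crest at grid cell (10, 30)
y, x = np.mgrid[0:50, 0:80]
depth = 2000.0 + 2.0 * np.hypot(y - 10, x - 30)
path = trace_path(depth, start=(45, 70))
print(f"charge released at {path[0]} accumulates at {path[-1]} after {len(path)} cells")
```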
Heat Flow Prediction
How should I estimate heat flow when the nearest well is 50 miles away? The common wisdom in basin modeling is to calibrate a heat flow to temperatures in a well and use the same heat flow in the nearby kitchen. The problem is that the kitchen usually has a different heat flow than the locations where we drill wells. Heat flow is mainly a function of crustal thickness and sedimentation rate, among a few other variables. So the correct approach is to estimate the crustal thickness in the kitchen relative to the well location. The effect of sedimentation rate should be taken care of by a 1D basin model, provided the model includes the entire lithosphere. This gives the best prediction of heat flow away from wells, even 50 miles away. Here is an example from NW Australia.
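A back-of-the-envelope illustration of the crustal-thickness control follows. It is steady state only and ignores radiogenic heat production, so the absolute numbers are understated; all layer thicknesses and conductivities are assumed.

```python
# Basement heat flow when the base of the lithosphere is pinned at 1330 degC:
# total temperature drop divided by the summed thermal resistance of the layers.
def basement_heat_flow_mw_m2(sed_km, crust_km, mantle_lith_km,
                             k_sed=2.0, k_crust=2.7, k_mantle=3.3,
                             t_surf=5.0, t_base=1330.0):
    resistance = (sed_km / k_sed + crust_km / k_crust + mantle_lith_km / k_mantle) * 1000.0
    return (t_base - t_surf) / resistance * 1000.0      # mW/m2

# Well location vs. a more stretched kitchen (thinner crust and mantle lithosphere):
print("well    :", round(basement_heat_flow_mw_m2(5.0, 35.0, 85.0), 1), "mW/m2")
print("kitchen :", round(basement_heat_flow_mw_m2(8.0, 20.0, 50.0), 1), "mW/m2")
```

The contrast between the two columns, not the absolute values, is the point: a more stretched kitchen with thinner crust and mantle lithosphere runs hotter than the well location.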
There are a couple of sources of surface heat flow data. One is surface probes, which measure heat flow in the first few meters below the sea floor using a transient measurement; the technique has improved over the past few years. The other is the seismic bottom-simulating reflector (BSR). These techniques can provide a sense of relative regional heat flow variations, but they should not be used directly in basin models to estimate temperatures at reservoir or source depths. Aside from the large uncertainties, in most offshore regions the heat flow varies significantly with depth, and surface heat flow fluctuates over time with short-term (1-100 thousand year scale) changes in sedimentation rate.