Wednesday, January 13, 2010

How Long Does a Sedimentation-Induced Thermal Disequilibrium Last?

This figure shows how sedimentation rate affects heat flow. It is based on a simple 1D basin model with a steady-state initial thermal condition. A shale layer (with typical shale properties assumed by basin modelers) is deposited between 100 and 99 million years ago, followed by a hiatus until the present day.

[Figure: modeled heat flow through time following the 1 Myr deposition event]

A fixed temperature of 1300 °C at 120 km below the basement and a fixed 10 °C at the sediment surface provide the boundary conditions. In a typical basin with continued and varying deposition rates over tens of millions of years, the temperature in the sediment column may always be in disequilibrium.
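
For readers who want to experiment, a minimal sketch of this kind of 1D model is given below. It is a hypothetical reconstruction, not the code behind the figure: all property values are round-number assumptions, and the shale is deposited instantly rather than over 1 Myr.

```python
import numpy as np

# Minimal 1D transient-conduction sketch (explicit finite differences).
# Round-number assumptions throughout; deposition is instantaneous.
kappa = 1e-6                   # thermal diffusivity, m^2/s (typical rock)
k_cond = 2.5                   # thermal conductivity, W/m/K
dz = 1000.0                    # grid spacing, m
L = 120e3                      # column thickness below the new layer, m
T_top, T_bot = 10.0, 1300.0    # fixed boundary temperatures, deg C

z = np.arange(0.0, L + dz, dz)
T = T_top + (T_bot - T_top) * z / L        # steady-state initial profile

# "Deposit" 2 km of cold shale instantly, at surface temperature:
T = np.concatenate([np.full(2, T_top), T])

myr = 3.15e13                              # seconds per Myr
dt = 0.2 * dz**2 / kappa                   # inside explicit stability limit
t, t_end, t_report = 0.0, 50 * myr, 0.0
while t < t_end:
    # interior update; T[0] and T[-1] stay fixed (boundary conditions)
    T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    t += dt
    if t >= t_report:                      # report roughly every 10 Myr
        q = k_cond * (T[1] - T[0]) / dz * 1e3
        print(f"t = {t/myr:6.2f} Myr  surface heat flow = {q:5.1f} mW/m2")
        t_report += 10 * myr
```

The surface heat flow collapses when the cold layer is emplaced and then relaxes back toward the steady-state value over millions of years, which is the disequilibrium discussed here.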

9 comments:

  1. Now we are getting somewhere. I guess we can now agree that there are no theoretical reasons why a sedimentary column cannot be at, or very close to, thermal steady state during sedimentation, and no theory that predicts an infinite relaxation time.

    The question is simply how extensively we need to impose a change on the system (in whatever way) before it becomes significantly transient, and stays significantly transient for a significant amount of time. Hence we need to come up with actual numbers for "extensively" and "significant".

    Without thinking in terms of specific formulations and the required number of spatial dimensions for the evaluation, we should first simplify the problem to a calorie count. We need, e.g., 1 calorie per gram to heat water 1 °C at room temperature, and we know that the Earth's surface heat flux has a narrow range; again for simplicity, say 1-2 microcalories per cm2 per second. Hence it is not rocket science to get a ballpark estimate of the likelihood of going transient (being calorie-starved) with a calculator. We have the masses of rock we are going to heat or cool per unit time, we have realistic heat capacities, and we have the number of calories flowing through the system (if sources and sinks balance out). We can also get a rough idea of how long it will take to catch up with an energy deficit.
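
    To make the arithmetic concrete, here is that calorie count in a few lines of Python. Every number is an assumed round figure in the spirit of the comment (1-2 microcal/cm2/s is roughly 40-80 mW/m2):

    ```python
    # Calculator-level "calorie count"; all values are round assumptions.
    flux = 1.5e-6            # surface heat flux, cal/cm^2/s (text: 1-2)
    c_rock = 0.2             # specific heat of rock, cal/g/K (water is 1.0)
    rho = 2.5                # rock density, g/cm^3
    thickness_cm = 1e5       # a 1 km layer of newly deposited rock, in cm
    dT = 30.0                # average warming needed to re-equilibrate, K

    calories_needed = rho * thickness_cm * c_rock * dT   # cal per cm^2
    years = calories_needed / flux / 3.15e7              # seconds -> years
    print(f"{calories_needed:.1e} cal/cm2 needed; ~{years:.1e} yr if the "
          "entire surface flux went into warming the new layer")
    ```

    Only the imbalance between supply and demand, not the whole flux, is actually available, and the rest of the column must warm too, so the real catch-up time is much longer; but this frames the energy budget.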

    Continues...

  2. This is actually an old story, around 25 years old to be precise. In the childhood of basin fiddling it was common practice to use very small computational domains: typically only the sediments themselves were modeled, with boundary conditions at the sediment-basement interface (and sometimes even within the sediment column itself). This was the Lopatin-Waples-Yukler-Welte period of basin fiddling. It led to the general view that you needed very high sedimentation rates (like in parts of the GOM) to have a transient state, simply because the only rock that needed to be heated (with such a setup) was the sediments. (Notice that all your examples are on the high end of the spectrum: 5 times to 2 orders of magnitude higher sedimentation rates than the reference point I suggested you should evaluate.)

    Then IFP came out and stated "we have probably under-estimated the thermal resistance of the lithosphere" (I think that was how Ungerer formulated it), meaning there is probably much more mass of rock involved in the process than was then part of the computational domain (the thermal resistance of something you did not include in the calculations in the first place was probably not an accurate way of describing it :-) ). Quite sophisticated 2-D thermal models (with the kitchen sink of crust and mantle processes included, e.g., BMT as version numbers grew) were developed, and a consensus was reached that to evaluate thermal history theoretically, you need to model at least the lithosphere. A side-show was that many realized that the common wisdom that thermal modeling was essentially a 1-D problem (which I myself had promoted) could in too many cases be severely flawed (we should come back to that later). The bottom line, though, was this: previous work had underestimated the tendency of sediment columns to go transient, while the newer 1-D approaches, placing the lower thermal boundary condition(s) at the base of the lithosphere (see e.g., Vik & Hermanrud (1993) in Basin Modeling: Advances and Applications (NPF: Doré..), who used the same approach as you), would potentially overestimate the tendency (and in a few specific scenarios underestimate it).

    To see why your 1-D approach is problematic both in terms of magnitude and in terms of causality (what causes what), we need you to describe your 1-D model in terms of the mass distribution (and its change with time), the distribution of heat capacities (and its change with time), and the response of the rest of the lithosphere to sediments dumped onto it (is there any coupled response, or is your tectonic model independent of what you do to the top of the pile?). Then it would be illustrative if you zoomed into the first 10 Ma and showed the heat flow variation at the bottom of the NEW sediment added, at a reference point at the bottom of the "sedimentary basin", at a reference point in the middle of the crustal lithosphere, at a reference point in the middle of the mantle lithosphere, and at a point at the base of the lithosphere (very important). We also need the variation in the vertical positions of those reference points through time. Finally, how do you deal with radiative heat transport at T > 600 .. 1300 °C, particularly in the mantle lithosphere with a lot of weakly radiation-absorbing olivine (this is a significant part of your computational domain)? I know it is a bit of a pain in the butt to do this, but it makes it much simpler to have a constructive review.

  3. 1) The new sediment added will require the entire lithosphere to warm up - roughly 10 to 20 times more heat than is required to heat just the sediments (a rough check is sketched after this list).

    2) Continued sedimentation at the same rate (not just a single event as in this test example) will increase the transient effects at a deeper reference point over time.

    3) Bohai, the South China Sea, the GoM, Cook Inlet, and Trinidad are a few examples where sedimentation rates can exceed 1 km/my. Most Tertiary deltas have short periods with sedimentation rates over 500 m/my.

    4) My experience is that we are typically 10-20% below steady-state temperatures in most Tertiary depocenters (our kitchens). The GoM can be 30-40% below. In a typical GoM deep-water model, heat flow is around 25 mW/m2 at the surface and 35 mW/m2 at the basement, with the steady-state solution being around 50 mW/m2.
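
    A back-of-envelope check on the "10 to 20 times" figure in point 1; the property values are assumptions, and the ratio obviously depends on sediment thickness:

    ```python
    # Heat needed to warm the whole lithosphere vs. just the sediments
    # by the same amount, per unit area. Round-number assumptions.
    rho_sed, rho_lith = 2300.0, 3100.0    # densities, kg/m^3
    c = 1000.0                            # specific heat, J/kg/K (both)
    h_sed, h_lith = 8e3, 120e3            # thicknesses, m

    heat_sed = rho_sed * h_sed * c        # J per m^2 per K of warming
    heat_lith = rho_lith * h_lith * c
    print(f"lithosphere/sediment heat ratio ~ {heat_lith / heat_sed:.0f}x")
    ```

    With these numbers the ratio comes out around 20x; thicker sediment piles push it toward the lower end of the quoted range.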

  4. I really do not want to get into a debate about the theory and details. It is pretty much impossible to prove that a numerical model (all the research or commercial codes, including my own) is implemented correctly. Ultimately the test of whether a model is useful is whether it can be predictive (just being able to match calibration temperature data is not that useful).

    We take our 1D model, using ALL default rock parameters, a 120 km lithosphere thickness, and a 10 km upper crust, and only match the observed sedimentary stratigraphy. It predicts 50 mW/m2 in deep-water Angola, 35 mW/m2 in the deep-water GoM, and 55 mW/m2 in deep-water Brazil, to mention a few (all +/- a few mW/m2 and without any salt effects). All seem to be consistent with regional observations. We can then add the variation of crust thickness based on regional geology to predict ca. 60 mW/m2 on the shelf and 70 mW/m2 onshore in these basins.

    Once we can do that, we think the model is capturing the first-order effects and may be used to extrapolate long distances away from temperature calibration points. This was the single point I was trying to make in my first post - with the Browse Basin example.

  5. I thought the purpose of the Petroleum System blog was to blog about the issues, including an understanding of the processes that operate in sedimentary basins.

    You are making statements about how petroleum systems work and also about theory, so it should be appropriate to question any such statements if one thinks it can be demonstrated that they are inaccurate and may represent, or lead to, a wrong way of thinking about how petroleum systems work.

    I do not doubt any "implementation". When you say "It is pretty much impossible to prove that a numerical model (all the research or commercial codes, including my own) is implemented correctly", you are missing what I am trying to say: which problem does it actually solve? A 1-D simulator can be correctly implemented, but if the posed problem is not 1-D, using a 1-D simulator is using the wrong tool. I do not say that it is always problematic to use 1-D tools; what I am saying is that it is very problematic to use them to learn about the MAGNITUDE (and time scale) of transients (it is obvious that piling kilometers of cold sediments quickly on top of the lithosphere is a heat sink); and that was the core of your presentations.

    Continues...

  6. What I am trying to point out is that for the high sedimentation rates you are presenting, using 1-D models at different locations in the basin will get you into a lot of trouble if you try to dig deeper. Take for example your 100 km apart well example. You think that you can model these independently because they are so far apart, but your computational domain is 120 km vertically; a 100 km lateral spacing is less than your vertical dimension. You can see from your own calculations the rapid heat-flow response your perturbations give at that scale (and if we had gone through my suggestions we would be able to cast light on many more issues at the core of the problem, which are virtually impossible to deal with in 1-D). The conductivity is typically 20-30% higher in the horizontal dimension. If you ran your example in a 2-D model that is identical apart from dimensionality (and with identical boundary conditions), you would get different results at both your well locations, and they could be significantly different. You will not be able to use the same input parameters and get the same results (heat flow history) between 1-D and 2- or 3-D models. If you get significantly different results between 1-D and 2-D..3-D with the same parameters, obviously your problem is not a 1-D problem. (You also mention the Gulf of Mexico, which is one of the most extreme cases in the world; still, at any time in the Tertiary you will probably not find a position with an extreme sedimentation rate where the rate is not reduced to, say, 30% within a distance of 100 km.)

    Also notice that all the simplifications you have described drive the extent of the transients in the same direction. I think your results are over-estimates. We can use rough analytical estimates for AVERAGE lithosphere thicknesses (assuming no internal deformation; see the sketch below), and we will see that transients take millions of years to sizzle out; we can easily have a Tertiary basin in a transient state over its entire history (see e.g., Lachenbruch & Sass (1977; A.G.U.)), but within the uncertainties we can be off by a factor of 2-3 on the time scale, and the MAGNITUDE related to different disturbances is NOT something we should generalize as a function of sedimentation rate.
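
    For scale, the standard conductive relaxation estimate for a slab with fixed-temperature boundaries (the diffusivity is an assumed typical value) lands squarely in this "millions of years" range:

    ```python
    import math

    # Decay time of the slowest thermal mode of a slab held at fixed
    # temperatures top and bottom: tau = L^2 / (pi^2 * kappa).
    kappa = 1e-6                  # thermal diffusivity, m^2/s (assumed)
    L = 120e3                     # lithosphere thickness, m
    tau = L**2 / (math.pi**2 * kappa)
    print(f"tau ~ {tau / 3.15e13:.0f} Myr")    # ~46 Myr
    ```

    Tens of Myr for the fundamental mode, so the factor of 2-3 uncertainty quoted above comfortably spans the entire history of a Tertiary basin.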


    There is a fundamental difference between the statement "heat flow is essentially a function of sedimentation rate" and the statement "in the model, heat flow is essentially a function of sedimentation rate".

    Continues...

  7. There are two goals of petroleum system models.

    1) To have predictive tools.
    2) To learn something about the processes themselves.

    When you say: "Ultimately the test of whether a model is useful is whether it can be predictive (just being able to match calibration temperature data is not that useful)", I completely agree; that is the test of usefulness. The USPS slogan "If it fits, it ships" hits the nail on the head in the pragmatic world of petroleum exploration.

    However, that does not necessarily mean you can learn anything about the geological processes from it; purely statistical models can also be good predictors.

    You can fulfill the goal of a good predictive tool without having to dig deeply into the processes themselves: since these are multi-parameter models, bringing them into consistency with observations in different basins is an error-minimization process not too different from training a specialty neural net. Also, and this is extremely important to realize, we are matching against data that have very low resolution; that is why, e.g., vitrinite reflectance is so popular (:-) ); there is a wide family of parameters that will place the model somewhere within the data region (you will manage before lunch :-) ).

    Let us not try to learn about geological processes from models that are significantly deficient in solving the posed problem; when we do that, we will inevitably create physically and geologically impossible situations. In fast-subsiding basins, 1-D thermal models will virtually always be inconsistent between different positions in the basin, compared to a 2-D (or 3-D) model, when it comes to our topic: the magnitude, time dependency, and spatial variability of the heat flux.

    If we are trying to learn something from a theoretical numerical model, we have to scrutinize it to ensure it is formulated in a manner that actually does solve the problem. Simplifications are always required, but we must ensure we do not simplify to the point that we are no longer solving the problem we set out to solve in the first place. This is extremely difficult (small terms can become governing terms because everything else sizzles out), but we have to be very sensitive to it, and all of us have a lot of dirty laundry hidden in the basement regarding this. The petroleum system literature is full of models that do not address the posed problems sufficiently; e.g., the most cited work on the effect of volume expansion of organic matter in source rocks did not consider the biggest volumetric change in source rocks: the conversion of kerogen to petroleum. General compaction models are another good example; relevant but much more refined models exist in other fields, and they behave differently; they develop features that every petroleum geologist would recognize... but WE give a completely different interpretation of what causes those features compared to "the other" models, because we use OUR compaction models as the reference (and they are unsubstantiated relics from a dark past :-) ).

    If we cannot discuss whether a family of models does or does not allow us to make geological statements, generally or specifically, I have a problem seeing what the purpose of a Petroleum System Blog should be.

  8. I agree with most of your philosophical comments. It seems that we do not agree on whether 1D thermal models can be used to estimate the transient effects.

    Although the vertical 120 km is more than the 100 km horizontal spacing, most of the conductivity heterogeneity (horizontal vs vertical) is near the surface, AND the slope there is 5/km over 100 km. A higher lateral conductivity does not cause lateral heat flow if things are flat; the slope matters. Hence I do not believe 2/3D effects will be significant.
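
    Reading "5/km" as isotherms dipping roughly 5 m per km (my assumption), the numbers behind this argument look like:

    ```python
    # Lateral vs. vertical conductive flux for gently dipping isotherms:
    # q_x = k_h * dT/dx ~ k_h * s * dT/dz, so q_x/q_z ~ (k_h/k_v) * s.
    k_ratio = 1.25        # horizontal conductivity 25% higher (from the thread)
    slope = 5.0 / 1000.0  # isotherm dip: 5 m per km (assumed reading)
    print(f"lateral/vertical flux ratio ~ {k_ratio * slope:.4f}")   # ~0.6%
    ```

    A lateral component well under 1% of the vertical flux is the quantitative version of "the slope matters, and here it is small".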

    Secondly, the existence of 2/3D heat flow does not negate the conclusion from a 1D model as to how much transient effect that model predicts. My point was that a simple 1D model gives transient effects that seem to predict the variation in heat flow across basins, with sedimentation rate being the only variable in the model.

    No doubt there are 2/3D effects (lateral variations in crust composition and thickness) that affect heat flow. Including these variables might make the model better in theory, but we may not know some of the parameters well, and doing so may not improve the prediction by more than the uncertainty those parameters add.

    We see an empirical correlation between heat flow and basement depth in most basins. I can use such a correlation to make heat flow maps, which "predict" heat flow away from the calibration points. This is a simple and useful model that does not require knowing the cause of the correlation (I suspect all the effects mentioned above contribute).
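
    As a sketch of that workflow, with purely hypothetical calibration points:

    ```python
    import numpy as np

    # Hypothetical calibration wells: basement depth (km) vs heat flow (mW/m2).
    depth = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
    q_obs = np.array([62.0, 55.0, 47.0, 41.0, 34.0])

    slope, intercept = np.polyfit(depth, q_obs, 1)   # simple linear correlation
    print(f"q ~ {intercept:.1f} {slope:+.1f} * depth_km  (mW/m2)")
    print(f"predicted at 7 km basement depth: {intercept + slope * 7:.1f} mW/m2")
    ```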

    If I use a 1D model and try to correlate its predictions with heat flow data, varying only the sediment stratigraphy, I seem to get a better correlation (hence prediction) than I get from basement depth alone - an improvement I am happy to take. There can be better models, of course, but this is what I have time for, and it seems to give me the ballpark prediction I think the uncertainty allows.

    Cheers,

  9. Do you own the copyright for the image shown in this blog posting? If so, I would like to secure permission to use it in an internal company technical training course. Please reply at your earliest convenience. Thank you in advance; I look forward to your timely response.
