Microcap & Penny Stocks : NAMX -- North American Expl.-- Que Sera Sera!


To: alchemy who wrote (2097)1/12/1998 11:50:00 AM
From: Michael T Currie  Read Replies (1) | Respond to of 4736
 
I have transcribed this from the original post, so please excuse any errors that may have been corrected in the newer version.

> Electro Magnetic Sounding Technology measures the distribution of Hydrocarbons in an alluvial Reservoir

Why just alluvial reservoirs? For those of you who do not know, these are sands deposited by river systems. If you have ever looked down on a meandering river from a plane, you will have seen the sandy point bars being deposited on the inner bends. These are the targets that NAMX is referring to. However, there are many other modes of deposition, including deltaic (e.g. mouth of the Mississippi as it enters the Gulf of Mexico) and deepwater (sorry, I can't give an analog that you would have seen for these). After reading the rest of this information, I can't quite understand why they would restrict themselves to a single type of target (unless they were just trying to throw in a technical term to be impressive <g>).

> Then a geologist examines and interprets the data to determine which of the data corresponds to the hydrocarbons of no interest (example: Coal)

Coal is not a hydrocarbon. It is a high carbon content rock with impurities. The amount of impurity depends on the type of coal. Lignite will have the lowest carbon content, anthracite the highest. Sometimes, but not always, coal will have associated methane (CH4, the simplest hydrocarbon and under most circumstances the main component of natural gas). I do not like the company's statement at all. First, this is a purely technical mistake and should not have been written into the literature. Second, it presupposes that a geologist has significant prior knowledge of the subsurface in order to discriminate between `coal' and `hydrocarbons'. Past readers of my rantings will know that a very large question in my mind has been the amount of subsurface data available to the company prior to running the EMSounder and drilling. To summarise those concerns, the rate of drilling success is far higher for appraisal and development wells (i.e. those with nearby well data, where average successes can easily be as high as 4 in 5) than for exploration projects. This, in my view, makes the claims for improved accuracy somewhat less impressive.

> All hydrocarbons have the same signature, therefore, the data must be combined with the knowledge and experience of a geologist to obtain a reliable distribution for cost effective exploration.

At least they recognise that there is some interpretation involved. Interpretation is inherently accompanied by risk.

> EM Sounder, the Electro Magnetic analog to 3-D Seismic, is an airborne echo sounding technique that can identify the presence of hydrocarbons in situ.

Interesting. Not too long ago, NAMX stated very clearly that 3D seismic was NOT a direct hydrocarbon indicator. It can be under certain circumstances. I would be very interested to know the final sample spacing of a typical EMSounder grid. For comparison's sake, a 3D seismic grid might be processed into 12.5m X 15m bins. In other words, the lateral coverage is essentially complete, as each seismic trace (which at the very least has a structural component for depth and may have a reservoir/fluid quality component as well, depending on the rocks) is only 15 meters or so from the next sample in any direction. If we imagine a 4 square kilometer grid, that grid will contain 21,500 lateral samples, more or less. Sorry if this is not clear - it's a whole lot easier to explain with visuals.

There is a point in all of this rambling. It has to do with the concept of spatial aliasing, which is the ability (or inability) to resolve a feature (e.g. one of the point bar targets) for a given sample rate. I'll try to use another analogy. Imagine that there are ten treasure chests, each measuring 0.5m X 1m, buried in your back yard (very appropriate, don't you think?). Five of those chests are empty. Three have a couple of coins in them, which actually do not make the digging worthwhile. One pays off your mortgage. The last allows you to retire to the Bahamas. If you simply go out and dig holes 5m apart, you stand a very good chance of missing all of the treasure chests and giving up. However, if you happen upon a couple of treasure chests and can convince yourself from other information (maybe your neighbour found that HIS treasure chests occurred along a single line) that there is a nonrandom pattern, you might be able to limit the number of holes that you dig to find all of the chests. Question: will you give up after finding, say, two of the chests with the small number of coins?

If (and I still think it's a very big if - see below) EMSounder actually works, it may help you in locating treasure chests with something in them. Unfortunately, drilling wells is a pretty expensive proposition and there is no guarantee that the Bahamas retirement chest is there at all. The analogy breaks down when you decide to bring in an excavator <vbg>. This use of observations of modern day sedimentary processes (the neighbour's experience) to make predictions is a pretty good summary of what most geologists do, by the way. P.S. I have just looked back on this paragraph and realised that I have mixed two analogies - spatial sampling and the concept of discovery vs. commercial discovery. Sorry about that.

So, after all of that, the questions that I would ask are: 1) what is the density of sampling along each EMSounder line and 2) what is a typical line spacing? Answers to these will at least give an idea of how likely they are to sample a potential target.
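For anyone who wants the arithmetic behind the comparison and the treasure chest analogy, here is a quick sketch using only the numbers already quoted above (the 12.5m X 15m bins, the 4 square kilometer grid, the 0.5m X 1m chests, and the 5m holes):

```python
# Lateral sample count for a 3D seismic grid, using the bin size above.
seismic_bin_area = 12.5 * 15.0             # m^2 per bin
survey_area = 4 * 1000 * 1000              # 4 square km in m^2
n_samples = survey_area / seismic_bin_area
print(f"Lateral samples in a 4 sq km 3D grid: ~{n_samples:,.0f}")

# Spatial aliasing, treasure-chest style: chance that a single hole on
# a 5m X 5m grid lands inside one 0.5m X 1m chest.
chest_area = 0.5 * 1.0                     # m^2
cell_area = 5.0 * 5.0                      # m^2 of yard per hole
p_hit = chest_area / cell_area
print(f"Chance a given chest is hit by the 5m grid: {p_hit:.0%}")
```

So each individual chest has only about a 2% chance of being hit, which is the whole point: a sparse grid can easily miss every target even though the targets are genuinely there.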

> Developed at a cost of USD 90 million, EM Sounder can be classified as the first breakthrough in the effort to find a direct method for locating economic hydrocarbon deposits. However, it does not qualify as a general direct method. In any event, when EM Sounder is used in areas of applicability it has increased the success rate to one in three.

Conflicting claims here, in my opinion. Read the bit about `does not qualify as a general direct method' and then (from the start of this release), "is an airborne echo sounding technique that can identify the presence of hydrocarbons in situ". I realise that this part is the qualification on the method; I just don't like the first general statement.

Assuming that they are not drilling in rank wildcat areas, 1 in 3 is not impressive (see above). The statement also conflicts with Marty's assertions of success rate. Marty, I would not even consider suggesting that your figures are wrong, but I wonder if this is a recognition that all apparent `successes' are not necessarily commercial ventures? Or have I missed out on other drilling?

> Technically, the EM-Sounder is a frequency agile, mono-cycle ground probing radar, that is three single cycle pulses (at 125 MHz, 250 MHz, and 500 MHz) are transmitted and the echoes received to make a sounding into the ground at each measurement location.

I would be interested to know what they mean by `frequency agile'. The same sentence implies that frequencies are fixed. This may not be relevant, but is another example of lack of clarity.

> It accomplishes this at a rate of 100 soundings per second. Being an airborne device it takes data at an average rate of 100 kilometers per hour, an extremely fast method of data acquisition.

Is this an indication of the sample rate along a line? If so, it works out to about 1 sample per 0.28m. Unless they are employing some mode of stacking (i.e. combining samples to improve coherent signal and reduce incoherent noise), this is ridiculously OVERsampled. I'm really confused on this one. It seems pretty inefficient, but let me stress that I do not know the technical details of their processing. There still remains the question of line spacing.
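The 0.28m figure falls straight out of the two numbers quoted in the release:

```python
# Along-line spacing implied by the release: 100 soundings per second
# from a platform moving at 100 kilometers per hour.
speed_kmh = 100.0
soundings_per_s = 100.0

speed_ms = speed_kmh * 1000.0 / 3600.0     # ~27.8 m/s ground speed
spacing = speed_ms / soundings_per_s       # meters between soundings
print(f"Spacing between soundings: {spacing:.2f} m")
```

That is a sample every 28 centimeters along the line, for targets claimed to be resolved at 10m laterally, which is why it looks so heavily oversampled unless the extra traces are being stacked.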

> The transmitted pulses reflect a portion of their energy back to the antenna at each interval or discontinuity in the ground. For each frequency, this locates each layer in time, not in absolute distance.

Make a note of the second sentence here (see below).

> The resolution of the process can scan 10 X 10 X 2 m to depths of approximately 4,000 m (13,000 ft).

They have stated a measure of the lateral resolution here (10m X 10m). From a purely technical view, I would be interested to know the acquisition parameters. I simply do not believe the 2m vertical resolution part. Everything in my experience tells me that megahertz frequencies will be filtered out within the first few tens of meters of overburden, regardless of whether the original signals were generated by GPR, seismic sources, or anything else. For you technogeeks out there, this is a factor called `Q', which has been the subject of masses of theoretical and experimental studies. Simply put, the absorption (i.e. attenuation) of higher frequency energy occurs at a greater rate with depth than that of the low frequency component (note: NAMX are stating that they do not generate a low frequency component). Anything outside of the boundaries imposed by the natural earth filter will be observed as noise regardless of the original amplitude of the pulses.
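To put rough numbers on the attenuation argument: the sketch below assumes a one-way attenuation of 0.1 dB per meter, which is my own deliberately generous assumption - published losses for 125-500 MHz radar in wet sediments are usually far worse. Even so, the cumulative two-way loss at the claimed 4,000m depth is staggering:

```python
# Illustrative only. The 0.1 dB/m coefficient is an assumed, optimistic
# one-way attenuation; real values for megahertz radar in overburden
# are typically much higher.
alpha_db_per_m = 0.1

for depth_m in (10, 100, 1000, 4000):
    # Signal travels down and back, so loss accrues over twice the depth.
    two_way_loss_db = alpha_db_per_m * 2 * depth_m
    print(f"{depth_m:>5} m: {two_way_loss_db:>6.0f} dB two-way loss")
```

A radar receiver's usable dynamic range is on the order of 100-150 dB, so even under these charitable assumptions the returns from anything much below a few hundred meters would be hopelessly below the noise floor.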

> Unlike seismic, as each frequency travels through the ground at different velocities it is a simple matter to locate each layer in absolute distance.

I'm not sure if my confusion over this sentence is the result of OCR problems or not. Any sort of time based variable, in this case the amount of time that it takes for a signal to return from a reflector, is dependent on a knowledge of the velocities of the rocks to convert to depth. I can think of no reason why the interpretation of EM radiation is not just as subject to time-depth conversion error as seismic data. It is not simple.
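To illustrate why depth conversion is not simple, here is a minimal sketch with made-up but representative numbers. The same logic applies whether the travelling signal is acoustic or electromagnetic: a reflection arrives at a measured two-way time, and depth only follows once you assume a velocity, so any velocity error maps directly into a depth error.

```python
def depth_from_twt(twt_s, velocity_ms):
    """Depth from two-way travel time and an assumed average velocity."""
    return velocity_ms * twt_s / 2.0

twt = 2.0                        # s, measured two-way time to a reflector
v_true, v_est = 3000.0, 3150.0   # m/s, a 5% error in the velocity model

print(depth_from_twt(twt, v_true))   # true depth
print(depth_from_twt(twt, v_est))    # estimated depth, 150 m too deep
```

For EM the propagation velocity depends on dielectric properties that are just as unknown ahead of drilling as seismic velocities are, so I see no reason the depth uncertainty goes away.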

> Certain substances have unique, characteristic signatures when looked at in this manner. Water, for example, has a dielectric constant of 80, and an absorbivity ranging from 0.001 dB per meter for fresh water to 654a4 per meter for sea water. Metals are another substance with a unique signature and have infinite dielectric constants which reflect the entire pulse back. Hydrocarbons have dielectric constants in the range of 2.2 to 2.6, and essentially no loss per meter, and this is the signature of importance.

Now we're getting into the good stuff. Hydrocarbon filled sands have a very different dielectric constant from nearly all other rocks due to the very high resistivity of the oil and/or gas. The trick is getting the radiation to such targets in the first place (see above). I won't go into all that I have written before, but a few things should be borne in mind: 1) all sedimentary rocks have fluid in the pore spaces and 2) there is a very strong correlation of salinity with increasing depth. Note the very high absorption constant of sea water. At typical reservoir depths, you can frequently observe chloride concentrations 2-4 times that of sea water, which should have constants even higher.

Having written this, I may have answered my own question about alluvial reservoirs. Since these are `land based' to begin with, chloride contents should be much lower. That doesn't say much about the overburden rocks though. Frequently, the only reason that alluvial sediments are preserved is because the overlying rocks are deltaic to deepwater types, i.e. the land was flooded. High salinity in the overburden would kill off any chance of getting the odd megahertz through <g>.

This suggests to me that the population of reservoirs suitable for EMSounder is extremely limited. They would have to meet the following conditions: 1) alluvial deposition, which is not the most numerous target type to begin with, and 2) purely continental sediments in the overburden, also unlikely because the likelihood of preserving alluvial reservoirs is also a function of the likelihood of a subsequent influx of marine sediment. All this assumes that the thing works at all at high frequencies. I have already stated my reservations about this.
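As a side note, the dielectric contrast the release is leaning on can be quantified with the standard normal-incidence reflection coefficient for low-loss, non-magnetic media, R = (sqrt(eps1) - sqrt(eps2)) / (sqrt(eps1) + sqrt(eps2)). This formula is not from the release; it is the textbook plane-wave result, applied here to the constants they quote:

```python
import math

def reflection_coeff(eps1, eps2):
    """Normal-incidence EM reflection coefficient between two low-loss,
    non-magnetic media, given their relative dielectric constants."""
    n1, n2 = math.sqrt(eps1), math.sqrt(eps2)
    return (n1 - n2) / (n1 + n2)

eps_water, eps_hc = 80.0, 2.4   # water vs. hydrocarbon, per the release
r = reflection_coeff(eps_water, eps_hc)
print(f"Water/hydrocarbon boundary: R = {r:.2f}")
```

So the contrast is real and strong (roughly 70% of the incident amplitude reflects). The physics of the target signature is not the problem; delivering any signal through a conductive, saline overburden to that boundary is.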

> The Electronics Subsystem consists of a Ultra High Precision Global Positioning System. This unit records the position within 10 centimeters and time position twice per second. An independent Radar Altimeter measures the above ground elevation in such a manner that the Mean Sea Level Elevation of the data is preserved.

I won't comment on the instrumentation except for one thing. I have never heard of any sort of GPS that is accurate to 10cm. This is hard to believe, but I'll leave it to someone else to check as it's well outside of my area of expertise.

I will put in my usual disclaimer here. I do not own nor have I ever owned NAMX stock. I am sorry for my continued pessimism, but I am hoping that you will take this as it is offered - as an educated opinion and nothing else.

Mike






To: alchemy who wrote (2097)1/12/1998 12:16:00 PM
From: bob  Read Replies (1) | Respond to of 4736
 
Marty,

Who did you receive the technical info on the EMS from?

Bob