Gold/Mining/Energy : Naxos Resources (NAXOF)


To: Richard Mazzarella who wrote (14845)7/27/1998 10:06:00 PM
From: jbIII
 
Richard,

"Do we have any information on the statistical distribution characteristics for the ore?"


I would think that the purpose of the expanded drill program would be to establish those distribution characteristics. Obviously that's a moot point now, since all the previous data is from LeDoux and we were all hoping to see a continuous stream of consistent data.

I see your point about a large sample size being more likely to contain a random "nugget" that pads the average. However, with multiple samples, no matter their size, some of them would likely show a "nugget" effect.
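
To put a rough number on that, here is a quick Python sketch. The figures are made up (it assumes nuggets are scattered randomly at some average rate per tonne, a simple Poisson model, nothing taken from the Naxos data); it just shows how the chance of a sample picking up a stray nugget grows with the mass sampled:

    import math

    def p_at_least_one_nugget(sample_kg, nuggets_per_tonne):
        # Chance a sample of the given mass picks up at least one stray nugget,
        # assuming nuggets are scattered randomly (Poisson) at the given rate.
        expected = nuggets_per_tonne * sample_kg / 1000.0
        return 1.0 - math.exp(-expected)

    # Illustrative only: one nugget per tonne of ore on average
    for kg in (1, 10, 100, 500):
        print(f"{kg:>4} kg sample: P(contains a nugget) = {p_at_least_one_nugget(kg, 1.0):.3f}")

So a 500 kg bulk sample is far more likely to include a lucky nugget than a 1 kg assay split, which is exactly how a big sample can pad the average.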

This is similar to a concept I deal with a lot: "AQL", or Acceptable Quality Level, developed by a man named Juran. The object is to use random sampling to catch any product failures that escaped the original testing. Depending on the desired or required AQL, the size of the sample and the number of acceptable failures are determined. For a typical AQL of .04% we would sample 315 pieces with 0 defects allowed. If you got one failure in the sample, then you would retest the lot 100%. The problem is that with a 10,000-piece lot, you could very well pick a random sample of 315 good pieces and pass a lot that has some defects in it. So the more 315-piece samples you do, the more likely you are to catch failures. We don't run more samples, but we tell our customers that we guarantee an AQL of .04%, so they know what to expect.
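
For anyone who wants to see the arithmetic, here is a small Python sketch of that n=315, zero-defects-allowed plan. The lot size and defect counts are only illustrative; it computes the hypergeometric chance that the sample misses every defect and the lot passes:

    from math import comb

    def p_accept(lot_size, defects_in_lot, sample_size=315, accept_number=0):
        # Probability the random sample contains no more than accept_number
        # defects (hypergeometric), i.e. the lot passes the n=315 / c=0 plan.
        total = 0.0
        for k in range(accept_number + 1):
            total += (comb(defects_in_lot, k)
                      * comb(lot_size - defects_in_lot, sample_size - k)
                      / comb(lot_size, sample_size))
        return total

    # A 10,000-piece lot can slip through even though it contains defects
    for d in (1, 2, 5, 10, 20):
        print(f"{d:>2} defects in 10,000: P(lot passes) = {p_accept(10_000, d):.3f}")

With only a handful of defects in the lot, a single 315-piece sample passes it most of the time, which is why repeating the sample (or simply guaranteeing the AQL) matters.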

My point is that if we equate a failure with a "nugget", then even with a small sample size, multiple samples are likely to detect any nuggets. Since RMG and Knight are reporting nil on all samples, I think we can rule out the "nugget effect" at LeDoux. I really find it hard to believe, with all the samples and data we have seen from LeDoux, that there is a problem at their end. I expect, and certainly hope, that they determine there was a procedural or processing problem at RMG and Knight.
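
One more back-of-the-envelope Python sketch on that point. The per-sample detection chance is assumed, not a real figure from RMG, Knight, or LeDoux; it just shows why a long run of nil results across independent samples is hard to square with the gold actually being there:

    def p_all_samples_nil(p_detect_per_sample, num_samples):
        # If each independent sample had the stated chance of showing gold,
        # this is the probability that every one of them still comes back nil.
        return (1.0 - p_detect_per_sample) ** num_samples

    # Illustrative only: even a modest 30% detection chance per sample
    # makes an unbroken run of nil results very unlikely.
    for n in (5, 10, 20):
        print(f"{n:>2} samples, all nil: probability = {p_all_samples_nil(0.30, n):.4f}")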

Hope this makes sense,
regards,
jb3