Microseismic Event ‘Quality’. A Personal Approach to Data QC

I am not a Luddite, although you may have started to think so having read last week’s blog. If that’s the case, then this week’s edition will surely reinforce that impression. But I am not, really!

Geoscience applications of machine learning (ML) will be increasingly impactful and, when thoughtfully applied, will result in advantages for those vendors and operators at the forefront of the technology.

This week I want to chat about a couple of things. Firstly, the process of assessing the ‘quality’ of a microseismic data set, what ‘quality’ might mean in this context, and the need for a systematic and consistent method to do this. And secondly, normalizing a data set to remove the acquisition footprint so we can compare one data set with another.

If anyone was holding out for a story combining less accepted forms of geophysics and genealogy, as I had mentioned before, then I apologize. I wanted to get these thoughts out as in some ways they follow on from last week.

Articles abound in the microseismic community on the use of new ML techniques for event detection (Chen et al., 2018) and hypocenter location, in particular for use with DAS datasets, which, given their typically huge data volumes, makes complete sense (Stork et al., 2020). This is an ideal application for machine learning algorithms and can offer a significant advantage in reducing processing time and improving processing consistency.

However, when it comes to post-processing, prior to interpretation, there is a need for rigorous, human-based intervention to assess the overall dataset. This will include applying several attribute filters to the data and perhaps a feedback loop to additional modeling and processing to validate (or not) outliers and anomalies. Ideally, this process, including the set of filters, should be consistent across all the microseismic datasets in a basin. Ultimately the interpretation of the microseismic data will begin at the well, or pad, but will eventually be integrated with other microseismic data sets. We therefore strive to make them as comparable as possible.

This is just one of the many data checks and validation steps undertaken in a microseismic project. This discussion is based on the assumption that those other steps have been completed and that the following conditions are satisfied. I will talk about the implications of this not being the case in future blogs. So, for the time being, I am assuming:

  • All primary microseismic data was recorded with the acquisition array in the desired position throughout. It did not move, slip or slide and it was placed (based on modeling) in an optimal position given the geometry available (this is a BIG assumption and not the case in many instances).

  • All acquisition geometry (well locations, trajectories, etc.) has been applied correctly and to consistent datums and projections, all data (microseismic, pumping) is in a consistent and synchronized time zone.

  • The velocity model has been derived and applied in a consistent and appropriate manner (again, one for more discussion in the future).

  • All processing parameters are optimized and consistent throughout processing. All data has been processed with a consistent minimum signal-to-noise ratio.

Assuming the above, we have, hopefully, many thousands of events.

[Figure: Microseismic event waveforms of varying signal-to-noise ratio (SNR)]

 

The fundamental measure of the ‘quality’ of a microseismic ‘event’ is its signal-to-noise ratio (SNR), which underlies most of the other quality attributes.

The ‘noise’ is measured in a time window some time before the first P-wave arrival, and the ‘signal’ is measured in a window around the P-wave arrival.

There are different ways to compute it, and it can be done for both P- and S-waves. Normally the SNR of a microseismic ‘event’ is the average SNR across all seismic traces for that event. Examples of typical event SNRs are shown above.
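To make that a little more concrete, here is a minimal sketch of a per-event SNR calculation in Python, using RMS amplitudes in a noise window before the P-wave pick and a signal window starting at the pick, averaged across traces. The window lengths, the RMS definition and the function name are my own illustrative choices rather than anything standard; every vendor will have their own variant.

```python
import numpy as np

def event_snr(traces, p_picks, noise_win=200, signal_win=200, gap=50):
    """Rough per-event SNR following the windowing described above.

    traces  : 2-D array (n_traces, n_samples) of one event's waveforms
    p_picks : P-wave arrival sample index for each trace
    Window lengths and the pre-pick gap are illustrative, not standard values.
    """
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    snrs = []
    for trace, pick in zip(traces, p_picks):
        noise = trace[pick - gap - noise_win : pick - gap]   # 'noise' window before the P arrival
        signal = trace[pick : pick + signal_win]             # 'signal' window around the P arrival
        snrs.append(rms(signal) / rms(noise))
    return float(np.mean(snrs))                              # event SNR = average over all traces
```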

Obviously, microseismic events with lower SNR will have greater location errors associated with them than events with higher SNR.

The SNR of an event depends mainly on two factors: the distance of the event from the receiver array and the moment magnitude of that event, and on the interplay between the two. For example, a large-magnitude event generated a long way from the receiver array could be recorded with low SNR, whereas a much smaller-magnitude event generated much closer to the array could be recorded with higher SNR. It is important to remember, therefore, that recorded SNR is not, on its own, an indicator of moment magnitude.

Microseismic event location requires knowing the time difference between the P-wave and S-wave arrivals for an event, which, when ray traced through a velocity model, gives the distance from the receiver array at which that event was generated. Clearly the precision of the P- and S-wave time picks will be higher for high-SNR events, and vice versa.
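For a feel of the numbers, the back-of-the-envelope version of this (ignoring the ray tracing through a layered velocity model that a real workflow uses) assumes straight rays and constant velocities, in which case the S-P time difference maps directly to distance. The velocities below are purely illustrative:

```python
def sp_distance(sp_time, vp=4500.0, vs=2600.0):
    """Straight-ray, constant-velocity estimate of event distance (m).

    sp_time : S-P arrival time difference in seconds
    vp, vs  : assumed P- and S-wave velocities in m/s (illustrative values only)
    From d/vs - d/vp = sp_time it follows that d = sp_time * vp * vs / (vp - vs).
    """
    return sp_time * vp * vs / (vp - vs)

# e.g. a 0.05 s S-P delay with these velocities puts the event roughly 310 m from the array
print(sp_distance(0.05))
```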

The depth at which the microseismic event occurred is derived from the moveout of the P-wave arrival across the array. Again, this is based on the P-wave travel times, with a precision determined by the SNR.

The direction (or azimuth) of the event is found by assessing how the event energy is partitioned across the horizontal components of each of the geophones. This is achieved by cross-plotting the horizontal-component amplitudes in a window around both the P- and S-wave arrivals. A best-fit line through these ‘hodograms’ gives the back azimuth of the event. Lower SNR gives a more scattered hodogram (lower rectilinearity) and increased error in the azimuth of the event (see below). This error is compounded for distant events, as a small directional error has a larger impact with increasing distance.

[Figure: Hodograms for high SNR (left) showing well-constrained back azimuth projection and low SNR (right) showing poorly constrained back azimuth projection]

 

This azimuthal error is often provided as a P-S rectilinearity term; knowing that the P-wave and S-wave particle motions of an event should be 90 degrees apart, a comparison of the derived P and S back azimuths provides another measure of the ‘quality’ of an event.
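As a sketch of how the hodogram analysis and the P-S consistency check might look in practice, the snippet below estimates the polarization direction from the dominant eigenvector of the covariance of the two horizontal components (one common way of fitting the ‘best-fit line’), returns a rectilinearity measure, and compares the P and S back azimuths against the expected 90-degree separation. The function names and details are illustrative assumptions, not anyone’s production code:

```python
import numpy as np

def back_azimuth(east, north):
    """Best-fit polarization direction of a horizontal-component hodogram.

    east, north : windows of the two horizontal components around a pick.
    Returns azimuth in degrees (0-180, since a line has a 180-degree ambiguity)
    and a rectilinearity measure (1 = perfectly linear, 0 = circular).
    """
    cov = np.cov(np.vstack([east, north]))
    evals, evecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    major = evecs[:, -1]                         # dominant particle-motion direction
    azimuth = np.degrees(np.arctan2(major[0], major[1])) % 180.0
    rectilinearity = 1.0 - evals[0] / evals[1]   # how 'tight' the hodogram is
    return azimuth, rectilinearity

def p_s_consistency(az_p, az_s):
    """Deviation (degrees) of the P/S back-azimuth separation from the expected 90."""
    diff = abs(az_p - az_s) % 180.0
    return abs(min(diff, 180.0 - diff) - 90.0)
```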

So, the precision of the three main location parameters of a microseismic event (distance, azimuth and depth from moveout) is directly related to the SNR of the event.

Given the significance of SNR in assessing the quality of microseismic data, it is fortunate that it is relatively straightforward to determine, and we would therefore expect (hope) all vendors to arrive at approximately the same value for a given record.

SNR does not, however, address the inherent geometry bias that exists in all microseismic data sets and that must be removed before interpretation and integration; an additional normalization step is required. This bias is evident if we cross-plot event magnitude against the distance of the event from the receiver array. The picture below is from the MSEEL project and, as expected, it shows that we need to be quite close to the receiver array to detect the smaller events, whereas larger events can be detected from large distances. For this project, the vendor removed the distance bias by filtering out all events smaller than about magnitude -2.5. Personally, I would have used -2.3.

 
[Figure: Event magnitude against event distance from receiver array (MSEEL data)]
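Reproducing this kind of QC plot for your own catalogue is trivial; something like the sketch below will do, where the file name, column names and the -2.5 line are just the example values discussed here, not any kind of standard:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical event catalogue with per-event distance, moment magnitude and SNR
events = pd.read_csv("event_catalogue.csv")   # assumed columns: distance_m, magnitude, snr

plt.scatter(events["distance_m"], events["magnitude"], s=4, alpha=0.4)
plt.axhline(-2.5, color="r", linestyle="--", label="candidate magnitude cut-off")
plt.xlabel("Distance from receiver array (m)")
plt.ylabel("Moment magnitude")
plt.legend()
plt.show()
```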

 

Having removed events with low SNR and removed the distance bias using a magnitude filter, it is important to take a step back and assess any obvious anomalies. These could take many forms: suspiciously aligned events (particularly horizontal), absence of events, events detached from the zone of interest, etc. Depending on the concern, it may be necessary to do some additional modeling to validate (or not) the effect. There was an excellent poster at the recent GeoConvention discussing an apparent alignment of events which, after a Monte Carlo simulation of synthetic data, turned out to reflect the fact that, guess what, horizontal arrays have a tough time resolving depth.

To wrap up, my recommendation would be to apply two filters, an SNR filter and a magnitude filter, and then to assess any obvious anomalies appropriately. At that point you have a robust dataset (given all the assumptions above!), free from unreasonable location errors and geometrical bias, and one which, given the simplicity of the proposed filtering, is comparable between wells, pads and (hopefully) vendors.
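In practice, the whole recommendation boils down to something as simple as the sketch below, applied with the same thresholds to every dataset in the basin. The thresholds and column names are placeholders; the point is their consistency, not their particular values:

```python
import pandas as pd

events = pd.read_csv("event_catalogue.csv")   # same hypothetical catalogue as above

MIN_SNR = 3.0         # example SNR threshold; choose one and keep it constant basin-wide
MIN_MAGNITUDE = -2.5  # example cut-off that removes the distance bias

filtered = events[(events["snr"] >= MIN_SNR) &
                  (events["magnitude"] >= MIN_MAGNITUDE)].copy()

print(f"{len(filtered)} of {len(events)} events retained for interpretation")
```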

References

Chen, Yangkang, Fast waveform detection for microseismic imaging using unsupervised machine learning, Geophysical Journal International, Volume 215, Issue 2, November 2018, Pages 1185–1199.

Stork, Anna L., et al., Application of machine learning to microseismic event detection in distributed acoustic sensing data, Geophysics, Volume 85, Issue 5, 2020, Pages KS149.
