temporal statistics

Upcoming paper: statistical descriptors in the use of vibration criteria

If you are a member of IEST with an interest in low-vibration environments in research settings, I encourage you to sit in on the "Nanotechnology Case Studies" session at ESTECH. I'll be presenting some work on the statistical methodologies we have been developing over time. I know of at least three other presentations, and they all look intriguing, so I hope the session will be well-attended. See the abstract for my paper below.

"Nanotechnology Case Studies", ESTECH 2017, May 10 from 8AM to 10AM

Consideration of Statistical Descriptors in the Application of Vibration Criteria

Byron Davis, Vibrasure


Vibration is a significant “energy contaminant” in many manufacturing and research settings, and considerable effort has been put into developing generic and tool-specific criteria. It is important that data be developed and interpreted in a way that is consistent with these criteria. However, the criteria do not usually include much discussion of timescale or of an appropriate statistical metric for determining compliance. As a result, this dimension of interpretation is often neglected.

This leads to confusion, since two objectively different environments might superficially appear to meet the same criterion. Worse, this can lead to mis-design or mis-estimation of risk. For example, a tool vendor might publish a tighter-than-necessary criterion simply because an “averaging-oriented” dataset masks the influence of transients that are the actual source of interference. On the other hand, unnecessary risk may be encountered due to lack of information regarding low- (but not zero-) probability conditions or failure to appreciate the timescales of sensitivities. 

In this presentation, we explore some of the ways that important temporal components to vibration environments might be captured and evaluated. We propose a framework for data collection and interpretation with respect to vibration criteria. The goal is to provide a language to facilitate deeper and more-meaningful discussions amongst practitioners, users, and toolmakers. 

How to Read Centile Statistical Vibration Data

I recently wrote about timescales and temporal variability in vibration environments.

In that post, and in a related talk I gave at ESTECH, I presented a set of data broken down statistically. That is to say, I took long-term monitoring data and calculated centile statistics for the period. Such a plot illustrates how often various vibration levels might be expected. Here's an example:

The above data illustrate the likelihood of encountering different vibration levels in a laboratory. Each underlying data point is a 30-second linear average. In this case, the statistics are based on 960 observations over the course of 480 minutes between 9AM and 5PM.

Like spatial statistics, temporal statistics are based on multiple observations. Unlike spatial statistics, however, far more data points may be collected. Considerably greater detail is available, and you can generate representations far more finely-grained than "min-max" ranges or averages.

For field or building-wide surveys, our practice is to supplement the spatial data gathered across the site with data from (at least) one location gathered over time. This really helps illustrate how much of the observed variability in the spatial data might actually be due to temporal variability. If vibrations from mechanical systems are present, and if they cycle on and off (like an air compressor), then it also helps you see those impacts.

But interpreting these data isn't intuitive for some people. So, I figured it would be helpful to explain a little bit about how to look at centile statistical vibration data. Returning to the example above: Each Ln curve illustrates the vibration level exceeded n% of the time. This is calculated for each frequency point in the spectrum. In other words, the L10 spectrum isn't the spectrum that was exceeded 10% of the time. Instead, the L10 spectrum shows the level that was exceeded at each individual frequency 10% of the time.
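
The per-frequency centile computation described above can be sketched in a few lines. This is a minimal illustration with randomly generated stand-in data (the array shapes and names are hypothetical, not from the dataset shown in the figure):

```python
import numpy as np

# Hypothetical monitoring data: 960 thirty-second averages, each a
# velocity spectrum at 100 frequency points (shapes are illustrative).
rng = np.random.default_rng(0)
spectra = rng.lognormal(mean=0.0, sigma=0.5, size=(960, 100))

def centile_spectrum(spectra, n):
    """Ln spectrum: the level exceeded n% of the time, computed
    independently at each frequency point (column)."""
    # "Exceeded n% of the time" is the (100 - n)th percentile.
    return np.percentile(spectra, 100 - n, axis=0)

L10 = centile_spectrum(spectra, 10)   # exceeded 10% of the time
L50 = centile_spectrum(spectra, 50)   # the median spectrum
L95 = centile_spectrum(spectra, 95)   # exceeded 95% of the time
```

Note that each Ln curve is assembled frequency point by frequency point, which is exactly why the L10 spectrum is not any single observed spectrum.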

One of the most striking features of centile statistical data is the presence of "bulges" and "pinches". The large bulge between 8 and 10Hz is indicative of a very wide range of vibration levels at this frequency – the distribution is skewed, such that higher vibration levels are more likely than at other frequencies. The pinch at 63Hz indicates that the range of vibration levels has collapsed, perhaps due to the dominance of near-constant vibrations emitted by a continuously-operating nearby machine.

The figure may be read to say that, for a typical work day on the 30-sec timescale, this particular environment meets VC-D/E 99.7% of the time; VC-E 99% of the time; and VC-E/F 95% of the time. There are other interpretations, but this is the most straightforward way to think about it.
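
A statement like "meets a criterion 99% of the time" can be computed directly from the underlying observations rather than read off the centile curves. Here is a sketch using hypothetical data and a flat placeholder criterion level (real VC curves are frequency-dependent; the values below are not from any actual criterion):

```python
import numpy as np

# Illustrative data: 960 thirty-second spectra at 100 frequency points.
rng = np.random.default_rng(1)
spectra = rng.lognormal(mean=0.0, sigma=0.4, size=(960, 100))
criterion = np.full(100, 3.0)  # hypothetical flat criterion curve

# An observation meets the criterion only if it is at or below the
# criterion level at every frequency point in its spectrum.
meets = np.all(spectra <= criterion, axis=1)
fraction_met = meets.mean()  # fraction of the time the criterion is met
```

The same comparison against a set of nested criterion curves yields the "meets VC-X n% of the time" statements above.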

Of course, your input data (the monitoring period) has to be representative. If you do a 24-hour measurement, then the statistics won't make perfect sense for a lab that operates 9-to-5. And if you are looking for rare events, then you'd better collect enough data to make credible statements about those rare events.

Temporal variability in vibration environments

I've written before about temporal variability in vibration environments, and I recently gave a talk on the subject at IEST's ESTECH conference. The issue is becoming important enough that there is discussion of addressing it in one of IEST's upcoming standards.

The problem is that no realistic environment is truly "stationary". This is especially true for many research-oriented environments (think: nanotechnology labs) for which low-vibration environments are critical. I've been asked to write some more about this for a working group, and I think the place to start is to think about the timescales of interest. So, when we say that an environment isn't perfectly stable, what exactly do we mean?

This environmental vibration variability can occur on many timescales. In many cases, it's driven by what most people call "cultural vibrations": those vibrations generated by the activities and movement of people in the area.

On the millisecond timescale, “near-instantaneous” transients might result from cars hitting potholes or the slam of an office door. Car and subway pass-by events are often seconds in duration. Long freight trains might generate impacts lasting minutes. Rush-hour and general transportation patterns typically create hours-long cycles of somewhat-higher and somewhat-lower average vibration levels. And these cycles are indeed important: when we perform campus-scale surveys of vibration sensitivities, one of the questions we ask research groups is whether they sometimes work at night to avoid interference.

But even longer timescales can be relevant. In some campus settings, reduced local traffic leads to lower weekend vibration levels, during which researchers might schedule their most sensitive experiments. Conversely, weekly visits to the building for gas deliveries and trash pickup can cause semi-regular spikes. Sometimes, the impact might be so regular that it can itself appear as a signal: an emergency generator test occurring, say, every Tuesday at noon could be said to produce a discrete signal at roughly 1.65µHz (once per week, or every 604,800 seconds). At the most-extreme timescale, eventually there will be an earthquake.
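
The once-per-week arithmetic is easy to check (plain unit conversion, nothing specific to any dataset):

```python
# A once-per-week event expressed as a (very low) frequency.
seconds_per_week = 7 * 24 * 3600   # 604,800 seconds
f_hz = 1.0 / seconds_per_week      # about 1.65e-6 Hz, i.e. ~1.65 µHz
```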

Extraordinary timescales are rarely relevant. And computing the return period on extremely rare events (like earthquakes) is notoriously fraught, and probably irrelevant to any contemporary lab uses, anyway. 

But timescale is an important parameter in considering vibration impacts. And while there are technical reasons to consider timescale (is my apparatus even sensitive to millisecond-scale excursions? what are the chances that I'm even doing something sensitive at the moment when a transient occurs?), economic and practical considerations can be just as important. If your lab executes experiments that take huge budgets and months of planning to pull off, then even rare events might be a real threat, if only because the consequences of failure (however unlikely) are so dire.