
Environmental vs local sources of building vibration

We previously wrote about receiver-based vibration isolation systems – isolation pads or platforms that sit right below electron microscopes and other sensitive nanoscale imaging tools. In that discussion, we pointed out that these systems are more effective at reducing micro-vibration at higher frequencies than at lower frequencies. In fact, a major component of the vendors’ marketing materials is to demonstrate good low-frequency performance, often by citing a system resonance or "isolation frequency". Just like machine vibration isolators, lower is better.

This is important because many common imaging tools are more sensitive at lower frequencies. What’s more, floor vibrations in labs aren’t uniform across the spectrum: there might be more or less low-frequency content to begin with. But what governs how much energy we see at different frequencies in the spectrum?

Here’s an example micro-vibration spectrum. These statistics are based on data taken across the footprint of an aging university laboratory. Obviously, there is a lot more high-frequency than low-frequency vibration. While not shown here, the narrowband (high-resolution) data indicated that building machinery vibrations dominate at high frequencies. Based on the data, this site meets the “VC-C” criterion of 500 micro-inches/sec (12.5 micro-meters/sec); however, it could perform far better if mechanical system vibrations were addressed.

This “mix” of energy content in the spectrum isn’t completely random: different kinds of sources contribute to or even dominate different parts of the spectrum. The first and most obvious distinction between different "kinds of sources" is local vs. environmental.

Local vibration sources are those things in and around the building itself: machinery, foot traffic, structural/foundation systems, the parking lot. Environmental sources are those things external to the building: roads, transit lines, rail corridors, even the “seismicity” of the regional geology.

Here’s another example micro-vibration spectrum. These statistics are based on data taken at a proposed site for a university imaging center that would house vibration-sensitive electron microscopes. As you can see, there is a lot of energy down at around 2Hz. Based on the data, this site meets the “VC-D” criterion of 250 micro-inches/sec (6.3 micro-meters/sec). There's not much you could do to improve this site, since the spectrum is dominated by ground vibrations arriving from outside the building. 

 

What is interesting about this differentiation is the degree of control that might be exerted over those sources. The owner of a vibration-sensitive laboratory building has far more control over local sources than environmental sources. During design and construction, the owner and design team can aggressively isolate new or existing building machinery or lay out the project so as to increase local distances between sensitive labs and vibration sources. But the owner typically has little or no control over environmental sources: traffic from city-owned streets or nearby rail systems might dominate, and nobody can do much to improve the regional geotechnical condition or alter the soil dynamics that determine long-range ground vibration propagation.

It’s true that a large entity like a university or a national lab will sometimes be able to influence the local authorities that operate and maintain nearby roadways or transit lines. Additionally, on large campuses the owner’s own roads might be relevant. These are important special cases, and the owner should use its resources and position to demand maintenance schedules or alignments that minimize the ground vibration impacts from these sorts of sources. But in general, you can think of environmental sources as things that are practically beyond anyone’s (straightforward) control.

It turns out that there is an important rule-of-thumb when it comes to micro-vibration frequency content from these two kinds of sources. In general, environmental sources tend to dominate at low frequencies, while local sources are often more important at higher frequencies. This is all down to physics: low frequencies travel farther in soil, so we don’t often see much high-frequency ground vibration from far away; meanwhile, machinery vibrations are strongest at frequencies corresponding to shaft speeds, so building machinery creates the greatest vibrations at mid- and upper-spectrum frequencies near 15Hz (900RPM), 30Hz (1800RPM), 60Hz (3600RPM), etc.
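The shaft-speed arithmetic above is just RPM divided by 60. A minimal sketch (the `shaft_hz` helper name is mine, purely for illustration):

```python
# Convert common machinery shaft speeds (RPM) to their fundamental
# vibration frequencies (Hz): frequency = RPM / 60.
def shaft_hz(rpm):
    return rpm / 60.0

for rpm in (900, 1800, 3600):
    print(f"{rpm} RPM -> {shaft_hz(rpm):.0f} Hz")
```

Harmonics of these fundamentals (blade-pass frequencies, gear mesh, and so on) land even higher in the spectrum, which only reinforces the rule-of-thumb.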

 
A decent rule-of-thumb for building vibration, including low-vibration labs and other engineered settings: environmental sources like traffic and nearby rail lines tend to dominate at low frequencies, while local sources like building machinery tend to control at high frequencies. Obviously, there are exceptions, but this is a reasonable starting point when trying to decide if a site can be made to work for sensitive uses like electron microscopy. 

 

This distinction is important because environmentally-driven building vibration is very difficult to mitigate. So it’s important to understand where those floor vibrations are coming from, and whether you have much ability to do something about them – other than moving to a low-vibration lab space at a remote location.

Upcoming paper: statistical descriptors in the use of vibration criteria

If you are a member of IEST and have some interest in low-vibration environments in research settings, I encourage you to sit in on a session on "Nanotechnology Case Studies" at ESTECH. I'll be presenting some work on the statistical methodologies that we have been developing over time. I know of at least three other presentations, and they all look intriguing. So, I hope that this session will be well-attended. See the abstract for my paper below.

"Nanotechnology Case Studies", ESTECH 2017, May 10 from 8AM to 10AM

Consideration of Statistical Descriptors in the Application of Vibration Criteria

Byron Davis, Vibrasure

Abstract:

Vibration is a significant “energy contaminant” in many manufacturing and research settings, and considerable effort has been put into developing generic and tool-specific criteria. It is important that data be developed and interpreted in a way that is consistent with these criteria. However, the criteria do not usually include much discussion of timescale or an appropriate statistical metric to use in determining compliance. Therefore, this dimension in interpretation is often neglected.

This leads to confusion, since two objectively different environments might superficially appear to meet the same criterion. Worse, this can lead to mis-design or mis-estimation of risk. For example, a tool vendor might publish a tighter-than-necessary criterion simply because an “averaging-oriented” dataset masks the influence of transients that are the actual source of interference. On the other hand, unnecessary risk may be encountered due to lack of information regarding low- (but not zero-) probability conditions or failure to appreciate the timescales of sensitivities. 

In this presentation, we explore some of the ways that important temporal components to vibration environments might be captured and evaluated. We propose a framework for data collection and interpretation with respect to vibration criteria. The goal is to provide a language to facilitate deeper and more-meaningful discussions amongst practitioners, users, and toolmakers. 
 

Can we isolate this microscope from floor vibrations?

For projects that house sensitive instruments and activities – like nanotech labs or vivariums – vibration and noise impacts from the outside world can interfere with research productivity. When it comes to these environmental (rather than locally-generated) building vibrations, location is often the single most important variable. Usually, the farther you can get from external sources -- like major roadways or rail alignments -- the better. 

Of course, most projects don't have the luxury of avoiding the sound and vibration sources that come with civilization: you have to put your building somewhere, and mostly due to cost and convenience, that somewhere is almost always going to be in a populated area. 

Since there's only so much you can do about the environment, we are often asked about local vibration isolation systems that act right at the tools themselves. These are devices like active isolation pads that sit under electron microscopes, as well as passive systems like pneumatically-floated slabs (sometimes built into a pit in the foundation) or spring-based systems that cradle the tool in a height-saving outrigger. Conceptually, they are similar to air tables but are designed to sit below an otherwise floor-mounted tool. These systems are only getting better as technology improves, and you can't ignore the possibilities that they offer when it comes to micro-vibration problems in sensitive buildings. 

 
An example transmissibility curve for a vibration isolation pad like those used to protect sensitive electron microscopes. Here, "transmissibility" can be thought of as the fraction of floor vibrations that get through the system and affect the microscope. Therefore, on this plot, lower is better: you would prefer that a lower fraction of building vibrations get through.

 

Of course, you'd probably prefer to avoid using these receiver-based vibration isolation systems in the first place: they are expensive; require at least a little maintenance; create elevation and/or footprint problems; and limit your flexibility when the system is “designed to the tool” or built into the foundation. What’s more, if you had to rely on an isolation system to meet a micro-vibration criterion for your nanotech lab, then what are you going to do when you buy or develop a new tool, with a more-demanding criterion? Anyway, quieter is almost always better, both for routine shared imaging suites as well as for lab groups who build or modify instruments.

From a technical perspective, though, the biggest thing to keep in mind is that these isolation schemes can only attenuate -- not eliminate -- floor vibrations. Furthermore, they aren’t equally effective at all frequencies: they universally work better at higher frequencies than at lower frequencies. Even the most-sophisticated receiver-based vibration isolation systems don't work very well below a few Hertz. This is important because many common imaging tools, like electron microscopes, are often more sensitive to micro-vibrations at lower frequencies than at higher frequencies.
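One way to see why isolation falls apart at low frequencies is the textbook transmissibility curve for an idealized single-degree-of-freedom isolator. This is a generic sketch, not any vendor's actual system; the natural frequency and damping ratio below are illustrative guesses:

```python
import math

def transmissibility(f, fn, zeta):
    """Magnitude of base-excitation transmissibility for an idealized
    single-degree-of-freedom isolator with natural frequency fn (Hz)
    and damping ratio zeta."""
    r = f / fn
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r * r) ** 2 + (2 * zeta * r) ** 2
    return math.sqrt(num / den)

# Below the natural frequency, T is near (or above) 1 -- no isolation;
# well above it, T falls off rapidly and the isolator earns its keep.
for f in (0.5, 2.0, 10.0, 40.0):
    print(f"{f:5.1f} Hz: T = {transmissibility(f, fn=2.0, zeta=0.1):.3f}")
```

This is why vendors advertise low "isolation frequencies": the lower the system resonance, the more of the low-frequency spectrum falls into the useful roll-off region.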

 

 
As before, lower transmissibility means that less building vibration gets past the isolation system and into our microscope. Universally, isolation systems perform better at higher frequencies than at lower frequencies. This is important, because imaging tools are not equally sensitive to all frequencies, and also because lab environments do not exhibit uniform micro-vibration across the spectrum.

 

So, when you ask whether a marginal (but desirable) site could be made workable by putting an isolation pad under your SEM or TEM, you need some data describing the frequency content of that building vibration environment. If there’s a lot of problematic floor vibration at very low frequencies, then your investment might not pay off the way you had hoped. On the other hand, if all the biggest problems are at middle and high frequencies, then an isolation system might be just the answer. 

What this all means is that tool-based vibration isolation schemes aren't silver bullets. They can't remedy all building vibration deficiencies. Of course, they can be very useful in the right situations and can even rescue an otherwise impossible laboratory site. Just be aware of these limitations, and make sure you are working with good data as you proceed with design of your lab.

How to Read Centile Statistical Vibration Data

I recently wrote about timescales and temporal variability in vibration environments.

In that post, and in a related talk I gave at ESTECH, I presented a set of data broken down statistically. That is to say, I took long-term monitoring data and calculated centile statistics for the period. This plot illustrates how often various vibration levels might be expected. Here's an example:

The above data illustrate the likelihood of encountering different vibration levels in a laboratory. Each underlying data point is a 30-second linear average. In this case, the statistics are based on 960 observations over the course of 480 minutes between 9AM and 5PM. 

Like spatial statistics, temporal statistics are based on multiple observations. Unlike spatial statistics, however, far more data points may be collected. Considerably greater detail is available, and you can generate representations far more finely-grained than "min-max" ranges or averages.

For field or building-wide surveys, our practice is to supplement the spatial data gathered across the site with data from (at least) one location gathered over time. This really helps illustrate how much of the observed variability in the spatial data might actually be due to temporal variability. If vibrations from mechanical systems are present, and if they cycle on and off (like an air compressor), then it also helps you see those impacts.

But interpreting these data isn't intuitive for some people. So, I figured it would be helpful to explain a little bit about how to look at centile statistical vibration data. Returning to the example above: Each Ln curve illustrates the vibration level exceeded n% of the time. This is calculated for each frequency point in the spectrum. In other words, the L10 spectrum isn't the spectrum that was exceeded 10% of the time. Instead, the L10 spectrum shows the level that was exceeded at each individual frequency 10% of the time.
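The per-frequency centile calculation described above can be sketched in a few lines. The data here are synthetic stand-ins (random spectra), not the real monitoring data from the plot; the point is only the frequency-by-frequency percentile logic:

```python
import numpy as np

# Hypothetical example: 960 half-minute observations (rows) across 100
# frequency points (columns). Ln is computed independently at each frequency.
rng = np.random.default_rng(0)
spectra = rng.lognormal(mean=2.0, sigma=0.5, size=(960, 100))

def centile_spectrum(spectra, n):
    """Level exceeded n% of the time, computed frequency point by
    frequency point (i.e. down each column, not across whole spectra)."""
    return np.percentile(spectra, 100 - n, axis=0)

L10 = centile_spectrum(spectra, 10)   # exceeded 10% of the time
L90 = centile_spectrum(spectra, 90)   # exceeded 90% of the time
print(L10.shape, bool(np.all(L10 >= L90)))
```

Note that because each frequency point is ranked independently, an Ln curve generally doesn't correspond to any single observed spectrum.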

One of the most striking features of centile statistical data is the presence of "bulges" and "pinches". The large bulge between 8 and 10Hz is indicative of a very wide range of vibration levels at this frequency – the distribution is skewed, such that higher vibration levels are more likely than at other frequencies. The pinch at 63Hz indicates that the range of vibration levels has collapsed, perhaps due to the dominance of near-constant vibrations emitted by a continuously-operating nearby machine.

The figure may be read to say that, for a typical work day on the 30-sec timescale, this particular environment meets VC-D/E 99.7% of the time; VC-E 99% of the time; and VC-E/F 95% of the time. There are other interpretations, but this is the most straightforward way to think about it.
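Statements like "meets VC-E 99% of the time" can be checked directly from the underlying observations. A sketch, again with synthetic stand-in data and a flat placeholder criterion (real VC curves are frequency-dependent):

```python
import numpy as np

def fraction_compliant(spectra, criterion):
    """Fraction of observations that fall at or below the criterion
    at every frequency point simultaneously."""
    below = np.all(spectra <= criterion, axis=1)   # per-observation pass/fail
    return below.mean()

rng = np.random.default_rng(1)
spectra = rng.lognormal(2.0, 0.4, size=(960, 100))   # hypothetical data
criterion = np.full(100, 30.0)                       # flat placeholder limit
print(f"meets criterion {100 * fraction_compliant(spectra, criterion):.1f}% of the time")
```

Requiring every frequency point to pass at once is a stricter (and usually more honest) reading than checking each frequency's centile in isolation.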

Of course, your input data (the monitoring period) has to be representative. If you do a 24-hour measurement, then the statistics won't make perfect sense in a lab that operates 9~5. And if you are looking for rare events, then you'd better collect enough data to be able to make credible statements about those rare events. 

How to think about footfall vibration from walkers in buildings

A lot of our work happens deep in underground basements, where high-end nanotech imaging tools are best-protected from both environmental and locally-generated vibrations. But even in buildings with cutting-edge imaging suites, there are often tens or hundreds of square feet of laboratory and office space for every square foot of basement-level SEM/TEM Room space. Those labs and office workers aren’t nearly as sensitive to vibrations as the molecular and atomic-scale imaging going on downstairs, but they are still sensitive. And that means that our job isn’t finished when we’ve made the electron microscopes and scanning probes happy; we still need to make everyone upstairs comfortable and productive, too.

Footfall-induced vibration on structural floors

On the upper floors of most facilities, people moving around the building are the dominant source of floor vibrations: in good designs, walkers — rather than mechanical systems — should control.

Walker-induced vibration criteria are usually specified in terms of an overall velocity limit, say “8,000 uin/sec”. In real structures, though, the impact of footfall vibration scales non-linearly with walking pace. Walker weight matters a little bit (and shoes and floor finish matter far less than you’d expect), but the walker speed is by far the most important variable for a given structure.

This means that we need to do some thinking about what walker speed we should use in any evaluation, whether it’s a vibration test of an existing structure or a model analyzing a proposed structural vibration design. 

Of course, some people walk faster than others; furthermore, the sensitivities of people, animals, and laboratory instruments vary, too. So, how exactly should we think about floor vibration due to footfall impacts from people walking around in buildings?

Outcome-based vibration criteria

These kinds of criteria are quasi-qualitative: we choose a sensitivity level (micro-vibration velocity; we’ll use micro-inches/sec, or uin/sec) and a walker speed (paces per minute, ppm), with the understanding that neither of these parameters is precisely applicable to all walkers and all sensitive receivers at all times. What this means is that any pair (velocity + pace) is attempting to guide the general result to a particular kind of outcome. 

Do we want the average person in a given setting -- office, bedroom, hospital room -- to frequently or infrequently notice nearby walkers? Do we want the work of laboratory users to be interrupted by only the highest-speed walkers? Just how often do “high-speed walkers” appear? And what are the consequences of that interruption?

Distributional thinking about sources and receivers

All of these parameters and questions fall along some sort of continuum, so it makes sense to think statistically about these building vibration problems.

We can start by accepting that both walker speeds and sensitivities aren’t single numbers; instead, they’re actually distributions:

Not everyone walks the same speed, and people can be more or less sensitive to vibrations. I don’t know what the actual distributions look like, but I think we can safely say that, on some level, a "normal" distribution is a decent first guess.


When it comes to vibration sensitivities, the threshold of perception varies for all kinds of reasons, from biomechanics to body awareness to setting. But overall, most people’s thresholds probably fall somewhere between 4,000 and 8,000 uin/sec:

 
Again, I’m just guessing at the shape of the distribution here, but suffice to say that for most people, the threshold of perception falls somewhere between 4,000 and 8,000. 


 

That means that only a few people even notice vibrations below 4,000 uin/sec. Conversely, relatively few people will fail to notice vibrations above 8,000 uin/sec.
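If we run with the guessed-at normal distribution, we can put rough numbers on "only a few people notice." Everything here is an illustrative assumption -- the mean and spread are not measured values:

```python
import math

def fraction_perceiving(level, mean=6000.0, sd=1000.0):
    """Fraction of people whose (assumed normal) perception threshold lies
    at or below the given vibration level, in micro-inches/sec.
    The mean and spread are illustrative guesses, not measured data."""
    z = (level - mean) / (sd * math.sqrt(2))
    return 0.5 * (1 + math.erf(z))

for v in (4000, 6000, 8000):
    print(f"{v} uin/sec: ~{100 * fraction_perceiving(v):.0f}% of people could notice")
```

Under these assumptions, only a couple percent of people would notice 4,000 uin/sec, while nearly everyone would notice 8,000 uin/sec -- consistent with the bracketing above.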

In laboratories, a similar thing is going on with vibration-sensitive instruments. Of course, there’s a huge variety of tools and experiments out there, so the distribution is comparatively really, really wide. There are a few landmarks along the way, as instrument vendors work to make their tools function in environments that meet a few standardized criteria. Even amongst tools, however, there is still some variation in vibration sensitivity. Equipment manufacturers might state their criteria more or less conservatively, or sell instrument options and accessories that result in minor changes (whether or not the vendor tells customers). Sometimes, extraordinary conservatism might be warranted, such as in cases where even rare vibration events could have outsized consequences.

Even the different uses of a single instrument matter, as some scans or experiments push the tool “harder” than others:

 
I have no idea if this distribution is remotely accurate; the point is that there is indeed a distribution, even amongst classes of tools. Note that there are plenty of instruments far more sensitive to floor vibrations than VC-D/E, but you probably shouldn’t be thinking of putting these on upper floors of buildings, to begin with.

 

It should now be clear that sensitivities are not singular numbers, but ranges. So, selecting a threshold means finding a place on the sensitivity curve that you can live with. A statistical perspective will help us think about these ideas from the sensitivity side. However, there’s still the matter of distributions in walker speeds, which is what determines how much vibration gets generated to begin with:

 
It’s no surprise that some people walk faster than others. In a given setting, the walker pace depends mostly on personal gait and just how anxious someone is to get somewhere else.


 

The distribution in walker speed depends a lot on setting. Since the vibration impact of walkers scales strongly with walker speed, we should probably pay attention to this:

 
The absolute numbers might vary, but on average, people move more briskly in long, straight, open corridors as compared with small rooms. We can think of this as two different distributions, each centered around its own average.


 

Vibration design for realistic walker speeds

Inside enclosed rooms, the majority of people will walk more slowly than 100 paces-per-minute. In corridors, speeds above 110 ppm aren’t unexpected. For some rooms, like laboratories with multiple parallel lab modules, most of the walking that happens is from one part of the bench to another. It’s not unreasonable to expect low average speeds for these walkers, since they’re not going far.

Of course, if all of the modules are tied together by a long pathway along one or both sides, then you should not be surprised to find that people moving between modules will walk considerably faster in these “ghost corridors” between benches. And in the corridor outside you’ll find the fastest walkers of all – although beware that “outside” in terms of partitions isn’t necessarily “outside” in terms of the structural grid. So, when we consider the distribution of walker speeds, we have to think about all of the walkers that we might encounter. This is true even if in the end we chose to ignore (or accept) the impacts of some of those walkers.

Using outcome-based criteria

Understanding how sensitivities and walker speeds can vary, we now have the tools to be able to speak intelligently about walker-induced floor vibrations, whether in labs or office buildings or hotels.

If we look inside an office, far from the corridors, we might guess that the “average” walker moves at 75 ppm. Since that means that only occasionally will a walker move at more than 90 ppm, and since most people can’t feel vibrations of 4,000 uin/sec or less, meeting a criterion like 4,000 uin/sec for walkers at 90 ppm effectively means that “most people won’t feel most walkers”.

Conversely, since the average walker moves at 75 ppm, and since most people can at least barely feel vibrations of 8,000 uin/sec or more, a criterion like 8,000uin/sec for walkers at 75ppm means that “most people will indeed feel the average walker”.
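The "most people won't feel most walkers" logic can be made concrete by combining the two assumed distributions. Every number here is an illustrative guess (walker pace ~ N(75, 8) ppm, perception threshold ~ N(6000, 1000) uin/sec), and treating pace and sensitivity as independent is itself a simplifying assumption:

```python
import math

def normal_cdf(x, mean, sd):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

# Illustrative scenario: the floor just meets 4,000 uin/sec for 90 ppm
# walkers, so only faster-than-90-ppm walkers can reach levels that the
# (assumed) most-sensitive observers could notice.
p_fast_walker = 1 - normal_cdf(90, mean=75, sd=8)      # walkers above 90 ppm
p_sensitive   = normal_cdf(4000, mean=6000, sd=1000)   # thresholds below 4,000

print(f"fast walkers: {100 * p_fast_walker:.1f}% of walks")
print(f"sensitive observers: {100 * p_sensitive:.1f}% of people")
# Assuming independence, a rough scale for "felt events":
print(f"felt-event pairs: {100 * p_fast_walker * p_sensitive:.2f}%")
```

Even with generous assumptions, the overlap of "fast walker" and "sensitive observer" is small -- which is exactly what an outcome-based criterion like "4,000 uin/sec at 90 ppm" is trying to buy.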

With this kind of thinking, you can speak to everything from the comfort of hospital patients to the anxieties of laboratory researchers. Do you demand 99.999% reliability in your experimental apparatus? Is it OK if your patients feel the floor tremble when residents walk past? What if they are jolted awake when the nursing staff scrambles for a rare (but not that rare) emergency? Can our office workers tolerate occasionally feeling people walk past their desks? How “occasional” is acceptable, anyway? 

Byron has been measuring people walking around in buildings for more than 20 years. Contact him if you’re worried about floor vibrations in your building, whether it’s a laboratory, office, or medical center.

A quick note regarding vibration and noise units

Just a quick note regarding expressions of vibration and acoustical data. Every now and then we come upon a vexing problem related to full expressions of the units of a measurement (or criterion).

I'm not talking about gross errors, like confusion of "inches-vs-centimeters" or "pounds-vs-newtons". Instead, I'm referring to some of the other, more subtle parts of the expression, like scaling and bandwidth.

Take a look at the plots below; this is from a vibration measurement in a university electron microscopy suite. Note that all of the data shown in this blog post are completely identical; however, they are expressed in different terms. I've re-cast this same singular spectrum in different terms so that you can see how much it matters to have a full expression of the units we're talking about.

To start, we surely won't confuse big-picture terms, like the difference between acceleration, velocity, and displacement. Which one you work with doesn't matter much, but we'd better be sure we understand the difference between them:

Data above are from a single measurement, expressed in acceleration, velocity, and displacement. Obviously, these units are all different, so the curves look different, despite the fact that each spectrum relates exactly the same information.

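For a sinusoidal component, the three quantities are linked by factors of angular frequency, which is why the same spectrum "tilts" depending on which one you plot. A quick sketch (the helper names are mine):

```python
import math

# For a sinusoidal component at frequency f (Hz):
#   velocity     = acceleration / (2*pi*f)
#   displacement = velocity     / (2*pi*f)
def accel_to_velocity(a, f):
    return a / (2 * math.pi * f)

def velocity_to_displacement(v, f):
    return v / (2 * math.pi * f)

# Dividing by f tilts the curve: the same acceleration amplitude implies
# far more velocity and displacement at 1 Hz than at 100 Hz.
a = 0.001  # example acceleration amplitude (m/s^2)
for f in (1.0, 100.0):
    v = accel_to_velocity(a, f)
    d = velocity_to_displacement(v, f)
    print(f"{f:6.1f} Hz: v = {v:.3e} m/s, d = {d:.3e} m")
```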

Also, we probably won't get fooled by the physical units, like inches-vs-meters. At the very least, we have a good "order-of-magnitude" sense of where things ought to land; if you have an electron microscopy suite with vibrations in double-digits, then they'd better be micro-inches/sec (rather than micro-meters/sec), or else you're in trouble:
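The inch-metric conversion itself is trivial, but it's worth keeping the factor at hand, since a factor of ~40 is exactly the kind of "order-of-magnitude" sanity check described above:

```python
# 1 inch = 25.4 mm, so 1 micro-inch/sec = 0.0254 micro-meters/sec.
def uin_to_um(uin_per_sec):
    return uin_per_sec * 0.0254

def um_to_uin(um_per_sec):
    return um_per_sec / 0.0254

print(uin_to_um(500))   # VC-C: 500 uin/sec is 12.7 um/sec
print(um_to_uin(0.8))   # 0.8 um/sec is about 31.5 uin/sec
```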

The same data from before, now expressed using two different sets of units of velocity. Obviously, if you have a criterion like "0.8um/sec" then you'd better compare against the curve expressed in um/sec rather than the one in uin/sec. But are we finished? Do we have a complete expression of the "units" yet?

We're not quite finished, though. Even though we've agreed on terms (velocity) and physical units (micro-meters/sec), we still have some work to do: we never said what the measurement bandwidth should be. We've been showing narrowband data, but what if the criterion is expressed in some other bandwidth? Maybe it's not even a constant bandwidth, but rather a proportional bandwidth, like (commonly-used) 1/3 octave bands:

This is still all the same data, only we are now showing it in narrowband (1Hz bandwidth) as well as in 1/3 octave bands. Note that the width of a 1/3 octave band scales with frequency (bandwidth ≈ 0.23 × center frequency), so at low frequencies (below about 4Hz) the 1/3 octave band is actually narrower than 1Hz.
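The ~23%-of-center-frequency figure comes straight from the band definition: a 1/3 octave band spans a factor of 2^(1/3) in frequency, centered geometrically. A sketch:

```python
# A 1/3 octave band runs from fc / 2**(1/6) to fc * 2**(1/6), so its
# bandwidth is fc * (2**(1/6) - 2**(-1/6)) ~= 0.2316 * fc.
def third_octave_bandwidth(fc):
    return fc * (2 ** (1 / 6) - 2 ** (-1 / 6))

for fc in (2.0, 4.0, 8.0, 63.0):
    bw = third_octave_bandwidth(fc)
    note = "narrower" if bw < 1.0 else "wider"
    print(f"{fc:5.1f} Hz band: bw = {bw:.2f} Hz ({note} than a 1 Hz narrowband)")
```

This is why narrowband and 1/3 octave curves cross over: above a few Hz, each 1/3 octave band collects energy from a progressively wider slice of the spectrum.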

OK, so now we have terms (velocity), physical units (micro-m/sec), and bandwidth (let's choose 1/3 octave band). But we're still not quite finished: we still need to say what signal scaling we're using. You might have seen this referred to using phrases like "RMS" or "Peak-to-Peak":

Again, these are the same data as above, but now we've chosen the 1/3 octave band velocity in micro-m/sec. But if we're supposed to compare against a criterion, which scaling do we use? There's a big difference between the RMS, zero-to-peak, and peak-to-peak values. 

If I told you that the limit was 0.8um/sec, then would you say that this room passes the test? As you might surmise, you can't answer that question if all I told you was that the limit was 0.8um/sec. You need to know exactly what I mean by "0.8um/sec". I know it sounds funny, but plain micro-meters-per-second is not a complete expression. You have to tell me whether we're talking 0.8um/sec RMS; or zero-to-peak; or peak-to-peak. You'll also have to tell me what bandwidth you want: PSD? 1/3 octave band? Narrowband, with some specific bandwidth? 
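For a pure sine wave, the three scalings differ by fixed factors, which gives a feel for how big the ambiguity is. (For real broadband signals the peak factors are larger and signal-dependent, so these ratios are a lower bound, not a conversion rule):

```python
import math

# For a pure sinusoid:
#   zero-to-peak = sqrt(2) * RMS      peak-to-peak = 2 * sqrt(2) * RMS
rms = 0.8  # um/sec, example value
zero_to_peak = math.sqrt(2) * rms
peak_to_peak = 2 * math.sqrt(2) * rms

print(f"RMS          : {rms:.2f} um/sec")
print(f"zero-to-peak : {zero_to_peak:.2f} um/sec")
print(f"peak-to-peak : {peak_to_peak:.2f} um/sec")
```

So "0.8um/sec" could mean values nearly a factor of three apart, before we've even settled the bandwidth question.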

If you were to tell me that you need to meet 0.8um/sec RMS in 1/3 octave bands, then we can plot the data appropriately and make some intelligent statements:

Since we've been given a full expression of the criterion (0.8um/sec RMS in 1/3 octave bands, which happens to be IEST's VC-G curve), we can plot the data with those units and overlay the criterion. This room passes the test, but without a full expression, we couldn't say one way or the other.

We see this kind of problem all the time. Most notably, we see people comparing narrowband measurement data against a 1/3 octave band criterion like those in the VC curves. This is just plain wrong, because the measurement and criterion are literally expressed in different units. The VC-G criterion isn't simply "0.8um/sec"; instead, it is actually "0.8um/sec RMS in 1/3 octave bands from 1 to 80Hz". 

This is important, and it matters a lot!