Vibrasure | Vibration + Acoustical Consulting

How instrument criteria are developed: regimes of interference

We previously wrote about how realistic laboratory vibration criteria for sensitive tools like electron microscopes (SEMs/TEMs) could be developed from data relating performance to vibration level. Having real data is important for both customers and consultants, since hand-waving, non-physical criteria aren’t helpful to anyone. We introduced an example “error-vs-vibration” curve to help non-practitioners understand what those underlying data might look like: 

In this hypothetical example, the error rate for a sensitive electron microscope climbs with laboratory building vibration level. For simplicity, we have collapsed numerous parameters into a straightforward “error rate” and “vibration level”.

In this discussion, we talk about where a useful criterion might fall on that curve.

As the floor vibration level increases, the microscope’s error rate increases. However, there are some notable landmarks along the way. The fact that there is some “shape” to this curve should come as no surprise. Complex physical systems like lab instruments do not behave in a way that creates a bright line between “zero vibration impact” and “total vibration interference”. Anyway, in this example, the error rate doesn’t increase in lockstep with vibration; instead, there are several different regimes:

The error rate doesn’t rise linearly with vibration, and we can identify a few important regimes of error-vs-vibration in this example. 

In Regime D, building vibrations are so strong that the error rate has pegged at 100%. No commercial vendor would publish a criterion at which the tool is useless (or on the knife's edge of uselessness). So, our exploration of realistic criteria will focus on the other regimes:

  • In Regime A, vibrations don’t affect performance at all. This is because there is a lower bound to the error rate, set by parameters unrelated to floor vibration. For the customer, an interesting feature of this regime is that you don’t get any more out of the tool if you reduce vibrations below this level. Any effort spent quieting the environment is wasted.
  • In Regime B, vibration clearly matters: there’s a step up in the error rate. The error rate plateaus only slightly higher than in Regime A. If this higher error rate isn’t too off-putting to customers, the vendor might choose to put the criterion at the top of Regime B. This would be wise if it turns out that a criterion in Regime A is too stringent to be easily met in real buildings. While researchers would prefer to operate in Regime A, it might simply be uneconomical.
  • In Regime C, the error rate climbs with increasing vibration level. Even if it is still low enough that most users are willing to put up with the hassle, individual customers might reasonably choose to invest in a quieter environment. Within this regime, the tool would indeed perform better if vibration levels were reduced.
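The regimes above amount to a piecewise relationship between vibration level and error rate. As a rough illustration (all boundaries and error rates below are invented for this sketch, not taken from any real instrument), the curve might be modeled like this:

```python
# A minimal sketch of the hypothetical error-vs-vibration curve described
# above. All numbers are invented for illustration; real curves come from
# vendor or site-specific data.

REGIME_BOUNDS = {          # upper vibration bound of each regime (arbitrary units)
    "A": 1.0,              # below this, floor vibration doesn't matter
    "B": 2.0,              # step up to a slightly higher plateau
    "C": 8.0,              # error rate climbs with vibration
    "D": float("inf"),     # tool is effectively useless
}

def classify_regime(level):
    """Return the regime ('A'..'D') for a given vibration level."""
    for regime, bound in REGIME_BOUNDS.items():
        if level < bound:
            return regime
    return "D"

def error_rate(level):
    """Piecewise error rate (%) vs. vibration level, invented for illustration."""
    regime = classify_regime(level)
    if regime == "A":
        return 2.0                       # floor set by non-vibration factors
    if regime == "B":
        return 5.0                       # slightly higher plateau
    if regime == "C":
        # linear climb from the Regime B plateau toward 100%
        frac = (level - REGIME_BOUNDS["B"]) / (REGIME_BOUNDS["C"] - REGIME_BOUNDS["B"])
        return 5.0 + frac * 95.0
    return 100.0                         # Regime D: total interference

print(classify_regime(0.5), error_rate(0.5))   # A 2.0
print(classify_regime(5.0))                    # C
```

The point of the sketch is simply that a single PASS/FAIL number hides all of this structure: two environments that both "pass" can sit in very different regimes.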

For the lab groups and their consulting teams, knowing where the criterion lands on this error-vs-vibration plot would be useful. A criterion in Regime A would mean that no investment could improve performance, and that a slightly worse environment might bring only a slightly greater error rate. A criterion in Regime B would mean that error rates could be reduced slightly if you could sufficiently reduce levels; however, exceeding the limit would mean that errors would start climbing. A criterion near the top of Regime C would mean that any lower level would be preferable; however, exceeding the criterion could be devastating. This kind of information would help laboratory users spend limited resources most wisely.

The vendor’s decision as to where to set the criterion will depend as much on economics as it does on physics. If a criterion in Regime A or Regime B is impossible to achieve without extraordinary interventions, then the toolmaker might never make a sale. It is possible that any realistically-achievable criterion will fall within Regime C. In that case, the vendor will need to make some difficult sales and marketing decisions regarding acceptable error rates, and possibly scramble the engineering team to work on improving robustness.

Unfortunately, for off-the-shelf tools, we are very rarely given this kind of background data. Instead, we are given simple PASS/FAIL criteria. If we’re lucky, the vendor will offer a multi-tiered criterion (e.g. “good / OK / bad”) that acknowledges that these different regimes exist. We can sometimes intuit how conservative or realistic the toolmaker might be in choosing criteria, and in critical situations we can even help develop custom or specialty criteria for unique installations. While it’s a safe bet that “quieter is better”, our job as vibration consultants is to help researchers understand that there’s far more to sensitivities than a simple binary test.
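In practice, applying a multi-tiered criterion reduces to comparing a measured level against a short list of thresholds. A minimal sketch, with invented thresholds (real criteria are instrument-specific and usually frequency-dependent):

```python
# A minimal sketch of applying a vendor-style multi-tiered criterion
# ("good / OK / bad") to a measured vibration level. The thresholds
# below are invented for illustration; real criteria are typically
# specified per frequency band.

TIERS = [
    (25.0, "good"),   # at or below 25 um/s RMS: comfortably within spec
    (50.0, "OK"),     # at or below 50 um/s RMS: usable, but little margin
]

def rate_environment(measured_um_s):
    """Return the tier label for a measured vibration level (um/s RMS)."""
    for limit, label in TIERS:
        if measured_um_s <= limit:
            return label
    return "bad"

print(rate_environment(12.0))  # good
print(rate_environment(40.0))  # OK
print(rate_environment(90.0))  # bad
```

Even this small step up from a single PASS/FAIL line gives a customer something to reason with: a "good" site has margin, while an "OK" site may be one renovation away from trouble.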


Contact us if you need help developing or working with environmental criteria, whether for an off-the-shelf instrument or for a one-of-a-kind installation. We have consulting experience with toolmakers as well as research institutions, and we can help make your product or project more successful.