Dyno data – and what it tells us about how to tune a shim stack and control the shape of the damping force curve

MXScandinavia Shim Factor Dyno Testing

MXScandinavia set out with a simple example to see if shim factors could be used to scale the number of face shims on a stack. Results of this first test have covered a lot of ground:

  • MXScandinavia verified his dyno damping force measurements against factory Ohlins dyno data

  • Finger press stack stiffness measurements were verified against FEA calculations and the Ohlins factory dyno data at shaft velocities equivalent to hitting a four inch bump at 50 mph

  • Finger press data was verified against FEA analysis at high edge lifts of 7 h/t. That is equivalent to hitting a four inch bump at 200 mph

  • Shim stack stiffness measured up to a port edge lift of 7 h/t shows none of the sudden stiffness increase or non-linearity that is commonly assumed

  • Shim factors missed the low speed change in damping force by a factor of 4

  • Shim factors missed the high speed (500 in/sec) stack deflections by a factor of 10

In this first test MXScandinavia did a lot more than test shim factors. The MXScandinavia data provides an important link between stack deflections measured on a finger press and damping force measured on a dyno. The finger press data measured stack stiffness at edge lifts far beyond what can be achieved with conventional dyno testing. That data dispels the common myth that shim stiffness becomes nonlinear at high lift.

This first MXScandinavia dyno test provides a boatload of important suspension tuning info in terms of how shim stacks behave.

 

MXScandinavia runs some pretty cool dyno tests, imo.

21b-shimfac-data.png

Edited by Clicked


The 4x40.3 stack was supposed to be 2% softer. The finger press data, at a stack lift of 1.2 mm, shows the stack was actually 21% softer. Shim factors missed the stack stiffness by a factor of 10 at high lift.

Shim factors missed the stack stiffness change by a factor of 10 at high lift, not the stack stiffness.  Big difference.


Shim factors missed the stack stiffness change by a factor of 10 at high lift, not the stack stiffness.  Big difference.

Huge difference



I guess what people are trying to say is that the results of the dyno and the finger press both make sense, and are actually closer to each other than had been expected. The differences between the two measurements fall within the experiment's measurement error.

Let me present a simple example. Let's say there is a ball bearing with an OD of exactly 50mm. First we measure it with a ruler, and it says 50mm. Then we measure it with a digital caliper, and the caliper says 49.98mm. First, it would be erroneous to conclude that the digital caliper is a million times less accurate than the wooden ruler. Second, both measurements are quite accurate and fall within a small fraction of the magnitude being measured.

It may not be advisable to compare and convert the dyno measurements directly to the finger press measurements, but both kinds of measurements appear to be reasonably accurate and trustworthy.

Edited by gphilip


Measurement Error

What is interesting about this MXScandinavia data is that instead of two measurements we had five.

 

On the theory side we had the thickness cubed “beam theory” of shim factors and FEA analysis.

 

On the test side we had low speed dyno data, finger press data and the high speed factory data.

  • Shim factor theory, after summing up all of the shims in the stack, showed the difference in stiffness for the two stacks was about 2%

  • At low speed the dyno data and FEA analysis both showed the stack stiffness difference was about 8%

  • At high speed the finger press data, FEA analysis and factory dyno data showed the stack stiffness increased, and at the limit of the finger press data the difference was 20%

If all you had was a wooden ruler, do you take the measurement at low speed where the stacks were about the same?

Or, take the measurement at high speed where the difference was about 20% and forget about the stack similarity at low speed?

 

If a wooden ruler can only measure at one speed, which measurement do you make to figure out how these stacks are going to “feel” on a test ride?

 

My thinking: Because those wooden rulers are only good to 1/16” you have to use a combination of theory, test and analysis to read between the lines of those coarse measurements to figure out what is going on.

Edited by Clicked


It goes back to a statement that Kyle made, in that the "change" (to shim stack stiffness) is relatively small in MXScandinavia's test case.  Thus it is potentially obscured by measurement "error" and inconsistent boundary conditions.  A better comparison would be with a larger change; however, I don't think we have that data.


It goes back to a statement that Kyle made, in that the "change" (to shim stack stiffness) is relatively small in MXScandinavia's test case.  Thus it is potentially obscured by measurement "error" and inconsistent boundary conditions.  A better comparison would be with a larger change; however, I don't think we have that data.

 

Non-Linear Shim Stack Stiffness Profile

You speak about the stacks tested here as if the “change in stiffness” was some subtle difference buried in the threshold of measurement uncertainty.

It is not. The data speaks for itself on that point:

21c-nonlinear.png

That stack deflection data unifies all of the peripheral data collected for these two shim stack configurations.  

  • At low deflections the stiffness difference is in the 2% range as expected by shim factors

  • At deflections of 0.01” tested by MXScandinavia the stiffness difference is in the 8% range as demonstrated by MXScandinavia dyno data

  • The roll over in stiffness for the 14x40.2 stack is consistent with the Ohlins factory data collected at stack deflections of 0.025” and shaft velocities up to 120 in/sec
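The progression in those bullets can be sketched numerically. This is only an illustration that linearly interpolates between the stiffness-difference points reported in this thread (2% near zero lift, 8% at 0.010", roughly 21% at the 1.2 mm / 0.047" finger press limit); the helper name and the straight-line assumption are mine, not measured curve data:

```python
# Reported stiffness-difference points for the 14x40.2 vs 4x40.3 stacks:
# (deflection in inches, difference in percent)
POINTS = [(0.000, 2.0), (0.010, 8.0), (0.047, 21.0)]

def stiffness_diff_pct(x, pts=POINTS):
    """Linearly interpolate the stiffness difference (%) at deflection x (in)."""
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return pts[-1][1]  # clamp outside the tested range
```

At the 0.025" deflection of the Ohlins factory tests this sketch lands between the MXScandinavia dyno range and the finger press limit, which is consistent with the "slightly higher than 8%" reading of the factory data.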

Addressing bmwpowere's statement: Kyle's statement assumes the 2% stack stiffness change predicted by shim factors was going to be accurate. That assumption is reasonable since all MXScandinavia did was change the face shims. Kyle also assumed the difference in stack stiffness was going to remain constant over the entire velocity range tested. Nothing wrong with making an assumption, or forming a theory, and extrapolating that to draw conclusions. That is what testing is all about.

 

In this case the MXScandinavia data shows there is not a single thing wrong with Kyle Tarry's assumptions, except they only apply at low speed. As soon as the shim stack cracks open that “wave structure” causes distortions in the stack, creating a nonlinear stiffness profile. The non-linearity is not huge. But it is enough that the 2% stack stiffness difference expected by shim factors increases to 8% in the range MXScandinavia tested, slightly higher in the range Ohlins tested and 20% at the limit of the tested range. That is an important effect and cannot be dismissed as dyno measurement error. There is too much corroborating data and, at this point, no data disputing the effect measured by MXScandinavia.

 

  • Learned someth'n eh? That is what MXScandinavia was hoping for.

 

The other thing worth mentioning is you can only “see” that non-linearity when the stack is tested at ultra high speed. Down in the 0.01 to 0.02” stack deflections of conventional dyno testing the curvature is so slight it is barely noticeable in the shape of the stack stiffness profile. It's there, and it shows up in the force measurement of a dyno but it is hard to “see” the non-linearity until the stack is deflected further and that is outside the range of most dyno testing.

 

The other interesting thing in this data is that the stiffness progression between the 14x40.2 stack and the 4x40.3 stack is different. The 14x40.2 stack got stiffer faster. Understanding that needs some more discussion and some more dyno data to figure out the why of that behavior.

 

 

bmwpowere: “A better comparison would be with a larger change, however I don't think we have that data.”

TT has plenty of dyno data over a very wide range of shim stack stiffnesses. Links to that data are at the top of this thread.

As already stated, those comparisons to shim factors are going to be disappointing.

21d-dyno-range.png

You should work through a couple of those cases for yourself and draw your own conclusions on the effectiveness of shim factors for the larger changes in shim stack stiffness.

Maybe you can find a form of the shim factor equation that works better?

Edited by Clicked


I have actually gone through this recently with a KTM rebound. I tried to roughly convert one stack to another (just in my head, not using the program) and I got a shock that was over-damped and the clicker ended up fully open. It could also be that I got my guess wrong, lol.


You speak about the stacks tested here as if the “change in stiffness” was some subtle difference buried in the threshold of measurement uncertainty.

It is not.

...

At low deflections the stiffness difference is in the 2% range as expected by shim factors

...

At deflections of 0.01” tested by MXScandinavia the stiffness difference is in the 8% range as demonstrated by MXScandinavia dyno data

I guess we'll agree to disagree on this. I think that 2% is definitely well within measurement uncertainty, and 8% very well might be. Since we don't have any data about the repeatability of these tests, it's sort of hard to say, but it's not an extreme statement to say that a single measurement with a hand fixture (which requires a large correction to even have the right y-intercept) might not be accurate to within a few percent.

I remember a post from Kevin Stillwell where disassembling and reassembling a high speed compression adjuster resulted in a big change to its dyno results, and that was without any changes to the stack.

 

Kyle's statement assumes the 2% stack stiffness change predicted by shim factors was going to be accurate.

No, it does not.

 

Kyle also assumed the difference in stack stiffness was going to remain constant over the entire velocity range tested.

No, I did not.

 

bmwpowere: “A better comparison would be with a larger change, however I don't think we have that data.”

TT has plenty of dyno data over a very wide range of shim stack stiffnesses. Links to that data are at the top of this thread.

As already stated, those comparisons to shim factors are going to be disappointing.

You should work through a couple of those cases for yourself and draw your own conclusions on the effectiveness of shim factors for the larger changes in shim stack stiffness.

Maybe you can find a form of the shim factor equation that works better?

Dyno data is not the only variable in question here. Unless we have 2 stacks on the same damper, dyno data, shim factor data, FEA data, and finger press data, we can't really make a full comparison. I'm not aware of a complete set of that data for a different pair of stacks (with a larger difference between them).

IMO, dyno data at 500 in/sec is irrelevant, as that is MUCH faster than we will ever see in a real application.

Edited by Kyle Tarry


I guess we'll agree to disagree on this. I think that 2% is definitely well within measurement uncertainty, and 8% very well might be. Since we don't have any data about the repeatability of these tests, it's sort of hard to say, but it's not an extreme statement to say that a single measurement with a hand fixture (which requires a large correction to even have the right y-intercept) might not be accurate to within a few percent.

I remember a post from Kevin Stillwell where disassembling and reassembling a high speed compression adjuster resulted in a big change to its dyno results, and that was without any changes to the stack.

 

 

Frankly, we have not discussed any of this data so it is not clear to me how we arrived at a disagreement.

Would you please state your disagreement without the "might be," "might not" and "hard to say" phrases so we can all understand exactly what your disagreement with this data provided by MXScandinavia might be?



 

Frankly, we have not discussed any of this data so it is not clear to me how we arrived at a disagreement.

Would you please state your disagreement without the "might be," "might not" and "hard to say" phrases so we can all understand exactly what your disagreement with this data provided by MXScandinavia might be?

 

 

Well, I've mentioned my concerns a couple of times and you've rebuffed or ignored them, so I interpreted that as disagreement.  Not a big deal, you can't reasonably expect to agree with everyone all the time. 

 

I put phrases like "might be" in there because we have insufficient data to be certain of the statistical significance of these differences.  I will not claim to be absolutely sure because I can't be.  All I know is that the differences you're calling significant (2%, 8%, etc) are really small, especially given the sample size and testing methodology.  I think a healthy dose of skepticism is important in experimentation, and standard scientific procedures require proof that differences between samples are statistically significant.

 

To me, it's obvious that differences between 2% and 8% are very questionable, when we are talking about all the variables in a test like this, not to mention the fact that some of the data (like the finger press data) needs to be "corrected" before it's even in the right ballpark.  I am NOT claiming outright that the data (or your conclusions) are wrong, but I am saying that we can't be certain that they are right, and that we need to better understand repeatability and reproducibility before we draw conclusions from the data.

 

Edit: It's not just the MXScandinavia data.  It's all of the data and the inherent inaccuracy of each test (any test is inaccurate, it's just a matter of how much), which makes me question conclusions drawn from very small differences.

Edited by Kyle Tarry


Kyle, you have consistently misquoted me on a couple of things.

Ordinarily I would let that go, but you have a tendency to pick up on some nuance, repeat it over and over again, and then re-quote it as fact. So let's straighten out some details here:

Well, I've mentioned my concerns a couple of times and you've rebuffed or ignored them, so I interpreted that as disagreement.  Not a big deal, you can't reasonably expect to agree with everyone all the time. 

 

I put phrases like "might be" in there because we have insufficient data to be certain of the statistical significance of these differences.  I will not claim to be absolutely sure because I can't be.  All I know is that the differences you're calling significant (2%, 8%, etc) are really small, especially given the sample size and testing methodology.  I think a healthy dose of skepticism is important in experimentation, and standard scientific procedures require proof that differences between samples are statistically significant.

 

To me, it's obvious that differences between 2% and 8% are very questionable, when we are talking about all the variables in a test like this, not to mention the fact that some of the data (like the finger press data) needs to be "corrected" before it's even in the right ballpark.  I am NOT claiming outright that the data (or your conclusions) are wrong, but I am saying that we can't be certain that they are right, and that we need to better understand repeatability and reproducibility before we draw conclusions from the data.

 

Edit: It's not just the MXScandinavia data.  It's all of the data and the inherent inaccuracy of each test (any test is inaccurate, it's just a matter of how much), which makes me question conclusions drawn from very small differences.

 

The only guy calling 2% versus 8% significant is you. Do a search on this thread to verify that fact.

 

I termed the 2% versus 8% difference consistent.

The MXScandinavia finger press data shows the difference in stiffness progression between the 14x40.2 and 4x40.3 stacks creates a 20% difference in shim stack stiffness at high speed. That is significant. Trace that stiffness progression back to the shim stack lifts produced in the MXScandinavia dyno tests and the 8% difference is consistent with the dyno data as well as the factory dyno data measured by Ohlins.

Consistent and significant. Different words, different meanings, different concepts. Those differences have important implications for interpretation of this data.

21c-nonlinear.png

The other item you continuously misquote me on is your invention of a 4% difference in shim stack stiffness. That is a misquote and misinterpretation of my statements.

Shim factors show the difference in the face shim stiffness for the 14x40.2 and 4x40.3 stacks to be 4%. But that is only for the face shims.

 

When the stiffness of the tapered section is added to the stiffness of the face shims, the overall stack is obviously stiffer than the face shims alone. Adding the tapered section and face shim stiffness together reduces the difference in stiffness to 2%. So for the 14x40.2 and 4x40.3 stacks we have:

  • 4% difference in face shim stiffness

  • 2% difference in shim stack stiffness

The tapered shim factor equation estimated the difference in stack stiffness to be 2%. Not the 4% difference that you bring up from time to time.
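The 4%-versus-2% arithmetic above can be sketched with the thickness-cubed shim factor sum. The taper-section factor below is a hypothetical illustrative value (the thread does not give it); the face shim numbers follow directly from summing t³ over the shims:

```python
# Thickness-cubed shim factor sums for the two face shim options.
face_14x02 = 14 * 0.20**3   # 14x40.2 face shims
face_4x03 = 4 * 0.30**3     # 4x40.3 face shims

# Relative face shim stiffness difference: roughly the "4%" figure.
diff_face = (face_14x02 - face_4x03) / face_14x02

# The tapered section is the same in both stacks; adding its factor to
# both sides dilutes the relative difference toward the "2%" figure.
taper = 0.10                # hypothetical taper-section factor, same for both
diff_stack = ((face_14x02 + taper) - (face_4x03 + taper)) / (face_14x02 + taper)
```

With these numbers diff_face comes out near 3.6% and diff_stack near 1.9%, matching the rounded 4% face shim and 2% whole-stack figures quoted in the post.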

Those are small things. Unfortunately, it seems they need to get straightened out before they become big things.


Well, I've mentioned my concerns a couple of times and you've rebuffed or ignored them, so I interpreted that as disagreement.  Not a big deal, you can't reasonably expect to agree with everyone all the time. 

 

I put phrases like "might be" in there because we have insufficient data to be certain of the statistical significance of these differences.  I will not claim to be absolutely sure because I can't be.  All I know is that the differences you're calling significant (2%, 8%, etc) are really small, especially given the sample size and testing methodology.  I think a healthy dose of skepticism is important in experimentation, and standard scientific procedures require proof that differences between samples are statistically significant.

 

To me, it's obvious that differences between 2% and 8% are very questionable, when we are talking about all the variables in a test like this, not to mention the fact that some of the data (like the finger press data) needs to be "corrected" before it's even in the right ballpark.  I am NOT claiming outright that the data (or your conclusions) are wrong, but I am saying that we can't be certain that they are right, and that we need to better understand repeatability and reproducibility before we draw conclusions from the data.

 

Edit: It's not just the MXScandinavia data.  It's all of the data and the inherent inaccuracy of each test (any test is inaccurate, it's just a matter of how much), which makes me question conclusions drawn from very small differences.

 

So, your statement claims you actually have no disagreement with the MXScandinavia data at all !

Instead, you are simply talking about generation of more data to see if some disagreement might possibly arise in the future.

Sort of a preemptive disagreement, if you will.

 

So, no!

I can not agree to a preemptive disagreement.

That's ridiculous!

 

The only way to identify a discrepancy is to have some basis to compare to. This thread is about establishing that basis. There is a bunch of dyno data here on TT. This thread is pulling up that dyno data, inspecting it, comparing it and trying to figure out what it means – like categorize it.

 

That process forms a baseline and a basis for comparison of other data. Your cry for statistical validation of the baseline, when that baseline does not even exist, is premature imo.

 

So this is the critical thing for this thread: “I am NOT claiming outright that the data (or your conclusions) are wrong”.

That is the best possible outcome of this thread. Some baseline set of data, and some baseline set of theories that collectively makes sense along with the supporting data.

That forms a baseline to move forward from, a basis for comparison of other data and a baseline set of supporting or conflicting theories we can all use to tune shim stacks.


 I was wrong on the 2%/4% thing, good clarification.  I was confused by your statement on page 1, but looking back again I understand now.

 

 

The only guy calling 2% versus 8% significant is you. Do a search on this thread to verify that fact.


You've been analyzing the differences between shim factors and finger press and dyno for 3 pages. I guess I just assumed you thought they were significant.  I mean, if you didn't think they were, why would you be dissecting it?

 

So, your statement claims you actually have no disagreement with the MXScandinavia data at all !
Instead, you are simply talking about generation of more data to see if some disagreement might possibly arise in the future.
Sort of a preemptive disagreement, if you will.
 
So, no!
I can not agree to a preemptive disagreement.
That's ridiculous!

 
Just standard scientific method...

Any time you are comparing two sets of data that show different results, the FIRST thing you do is test to see if the difference is significant (in a statistical sense), relative to the uncertainty and variation in the measurements.  I'm simply suggesting that we do this before we dedicate 3 more pages to the analysis of questionable data.  You have some numbers, and the differences between them, or rather the significance or validity of the differences between them, is questionable.  I am simply pointing that out.  You don't seem to be interested in talking about the significance of the differences between your datapoints, and I've pointed this out a few times, so I'm going to bow out and just let this play out however it does; no sense in continuing to be argumentative.

 

If you were to compare any datasets without the due diligence of significance tests and present them in any technical arena, the same question would be asked.
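The significance test being asked for could be as simple as a two-sample t statistic on repeat pulls. This is a minimal sketch, assuming we had repeat dyno measurements for each stack; the sample force numbers are hypothetical, not MXScandinavia's data:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Hypothetical peak damping forces (lbf) from repeat pulls of each stack:
t = welch_t([101.0, 103.0, 102.0], [95.0, 97.0, 96.0])
```

A large |t| relative to the t distribution for the sample size says the stacks differ beyond pull-to-pull scatter; with only one pull per stack, as in the data here, no such statistic can be computed at all, which is the crux of this exchange.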


You've been analyzing the differences between shim factors and finger press and dyno for 3 pages. I guess I just assumed you thought they were significant.  I mean, if you didn't think they were, why would you be dissecting it?

 

Just standard scientific method...

Any time you are comparing two sets of data that show different results, the FIRST thing you do is test to see if the difference is significant (in a statistical sense), relative to the uncertainty and variation in the measurements.  I'm simply suggesting that we do this before we dedicate 3 more pages to the analysis of questionable data.  You have some numbers, and the differences between them, or rather the significance or validity of the differences between them, is questionable.  I am simply pointing that out.  You don't seem to be interested in talking about the significance of the differences between your datapoints, and I've pointed this out a few times, so I'm going to bow out and just let this play out however it does; no sense in continuing to be argumentative.

 

If you were to compare any datasets without the due diligence of significance tests and present them in any technical arena, the same question would be asked.

Here we go again.

Kyle Tarry posting the same thing, over and over again, as if that makes it relevant.

 

Let's talk scientific method.......

MXScandinavia tested two shim stack configurations and demonstrated a difference in damping force. The FIRST thing we did was check that data against the theoretical difference. The data lines up, indicating MXScandinavia did a reasonably good job calibrating the instrumentation and qualifying his test technique.

 

The SECOND thing we did was check the data against the Ohlins factory dyno test data. The results line up. The “scientific method” calls that an independent verification of the tested configuration and independent validation of the test result. Testing at a separate test facility with a different crew reduces the possibility of repeating the same mistake over and over again, a common problem when trying to repeat tests on the same dyno.

 

The THIRD thing we did was check both the MXScandinavia and Ohlins factory data against the finger press measurements. Both tests line up with the finger press data. In the “scientific method” that is known as a third “independent validation” of the measurements obtained using a separate measurement technique. For the data here we have five measurements of two shim stack configurations.

 

If those validation and verification steps had shown some discrepancy, or anomaly, more testing would be needed to re-evaluate the test technique and re-qualify the instrumentation through parametric tests to try and figure out what went wrong. Those parametric tests are the LAST thing you would do.

Kyle needs to re-write his statement to make that more clear. Parametric testing is the LAST thing you would do.  

 

What about due diligence?

"If you were to compare any datasets without the due diligence of significance tests and present them in any technical arena, the same question would be asked."

Multiple repeat tests using different dynos at different test facilities, plus the additional verification using a finger press as a separate test technique, clearly demonstrate “due diligence”. Asserting otherwise simply emphasizes that “due diligence” was a vague concept in the first place.

 

What about data accuracy?

Kyle Tarry mentions my pandering on through page two comparing the two shim stacks, the dyno data, the finger press data and the comparison of finger press data to dyno data. Spending a whole page comparing test data and the differences between them was excessive in Kyle Tarry's opinion. Then:

"You don't seem to be interested in talking about the significance of the differences between your datapoints, and I've pointed this out a few times, so I'm going to bow out and just let this play out however it does, …...."

Heck, I am the only guy talking about measurement accuracy, comparison of data from multiple tests and different measurement techniques.  

 

What was Kyle Tarry's point anyway? As far as I can figure, Kyle's point is that dyno data uncertainty is in the range of 2% to 8%.

I think that 2% is definitely well within measurement uncertainty, and 8% very well might be. Since we don't have any data ....

 

You have to be careful interpreting dyno data. Page two was all about that.


We spent page four trying to figure out if Kyle Tarry may, or may not, have a comment possibly directly addressing something about a future aspect, or component, that may clearly relate the direct evaluation, on an absolute basis, that might or might not be related to configuration of shim stacks succinctly designed for the potential unambiguous demonstration of approx .......

 

Nonsense has a remedy. In the future let's dispense with it more directly:

  • SHOW ME THE DATA!

    • Show me the data where we can see the discrepancies for different shim stack configurations

    • Show me the data where measurements become “questionable”

    • Show me the data demonstrating inconsistent measurements

  • Show me the data and let me interpret the data for myself

SHOW ME THE DATA! A thorough, concise, friendly and fair reply.

Edited by Clicked


We started off the thread looking at shim factors. Here is where we are at:

  • MXScandinavia used shim factors to scale a stack of 14x40.2 face shims to an equivalent short stack of 4x40.3 shims

  • MXScandinavia dyno tested the two stacks

  • Dyno results for the 14x40.2 stack were verified against the factory dyno data

  • The stiffness differences measured for the two stacks were verified by a finger press

We have five measurements on these two shim stacks and made a bunch of comparisons between the measurements. Everything seems to line up.

21e-dyno-review.png

Here is what MXScandinavia was after: How well do the stiffness measurements for those two stacks line up against the basic theories for shim stack tuning?


