---
title: regression approach and tolerance bands - power law
---
## Dependency
It is assumed that the data can be described by a power law, expressed as either Y-dependent or X-dependent (referring to the second and first axes of a fatigue curve, respectively).
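For reference, a minimal sketch of the two forms, reusing the coefficient $C$ and exponent $m$ that appear in Step 7; the X-dependent expression is written here as the analogous inversion and is an assumption of this sketch:

```python
def power_law_y(x, C, m):
    """Y-dependent form: Y = C * X**m (consistent with the Step 7 coefficient)."""
    return C * x**m


def power_law_x(y, C, m):
    """X-dependent form, assumed analogous: X = C * Y**m."""
    return C * y**m
```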
where $k$ is the number of points in the data-series.
Perform _minmax_ normalization. Doing so ensures that the features used by the model have similar scales, which also aids faster convergence and accuracy.
```math
X_{\textrm{d,s}} = \frac{X_\textrm{d} - X_{\textrm{d,min}}}{X_{\textrm{d,max}} - X_{\textrm{d,min}}}, \quad
Y_{\textrm{d,s}} = \frac{Y_\textrm{d} - Y_{\textrm{d,min}}}{Y_{\textrm{d,max}} - Y_{\textrm{d,min}}}
```
```math
X_{\textrm{p,s}} = \frac{X_\textrm{p} - X_{\textrm{d,min}}}{X_{\textrm{d,max}} - X_{\textrm{d,min}}}, \quad
Y_{\textrm{p,s}} = \frac{Y_\textrm{p} - Y_{\textrm{d,min}}}{Y_{\textrm{d,max}} - Y_{\textrm{d,min}}}
```
where the subscript s is an abbreviation for scaled.
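As an illustration, a minimal NumPy sketch of this scaling step; the array names are hypothetical, with `X_d`, `Y_d` denoting the data-series and `X_p`, `Y_p` the model prediction:

```python
import numpy as np


def minmax_scale(data, prediction):
    """Scale data to [0, 1] and apply the same data-derived bounds to the prediction."""
    lo, hi = data.min(), data.max()
    return (data - lo) / (hi - lo), (prediction - lo) / (hi - lo)


# hypothetical usage with the arrays defined earlier on the page:
# X_ds, X_ps = minmax_scale(X_d, X_p)
# Y_ds, Y_ps = minmax_scale(Y_d, Y_p)
```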
The purpose is to sort the differences between the prediction and the data and determine the quantile of interest; a compact sketch of sub-steps **a)** to **d)** is given after step **d)**.
![quantile_exmaple_error](uploads/f6950315da8dfc9c3d3054c0dfd95cb5/quantile_exmaple_error.png){width=35%}
</div>
**a)** Compute the error/difference between the model and the data
```math
E = Y_{\textrm{p,s}} - Y_{\textrm{d,s}}
```
**b)** Extract the data with a positive error. We are only interested in the positive differences, as these indicate the points below the mean line.
```math
E = E\left[E > 0\right], \quad X_{\textrm{d,s}} = X_{\textrm{d,s}}[E>0], \quad Y_{\textrm{d,s}} = Y_{\textrm{d,s}}[E>0]
```
**c)** Sort the data based on the error
```math
E = E\left[\textrm{argsort}\left(E\right)\right],
\quad X_{\textrm{d,s}} = X_{\textrm{d,s}}\left[\textrm{argsort}\left(E\right)\right],
\quad Y_{\textrm{d,s}} = Y_{\textrm{d,s}}\left[\textrm{argsort}\left(E\right)\right]
```
**d)** Extract the data within the desired quantile range
```math
E = E\left[:-I_{\alpha}\right],
\quad X_{\textrm{d,s}} = X_{\textrm{d,s}}\left[:-I_{\alpha}\right],
\quad Y_{\textrm{d,s}} = Y_{\textrm{d,s}}\left[:-I_{\alpha}\right]
```
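A compact NumPy sketch of sub-steps **a)** to **d)**, assuming the scaled arrays from the previous step and taking the quantile index $I_{\alpha}$ as given (all names are hypothetical):

```python
import numpy as np


def quantile_filter(X_ds, Y_ds, Y_ps, I_alpha):
    """Sub-steps a) to d): filter, sort and truncate the scaled data by prediction error."""
    # a) error between the scaled prediction and the scaled data
    E = Y_ps - Y_ds
    # b) keep only the positive errors (points below the mean line)
    mask = E > 0
    E, X_ds, Y_ds = E[mask], X_ds[mask], Y_ds[mask]
    # c) sort everything by the error
    order = np.argsort(E)
    E, X_ds, Y_ds = E[order], X_ds[order], Y_ds[order]
    # d) drop the I_alpha largest errors; the last remaining entry is the quantile point
    return E[:-I_alpha], X_ds[:-I_alpha], Y_ds[:-I_alpha]
```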
**Step 6** <br>
Renormalize the data, i.e. map it back to its original scale.
```math
X_{\Gamma} = X_{\textrm{d,s}} \left(X_{\textrm{d,max}} - X_{\textrm{d,min}}\right) + X_{\textrm{d,min}}, \quad
Y_{\Gamma} = Y_{\textrm{d,s}} \left(Y_{\textrm{d,max}} - Y_{\textrm{d,min}}\right) + Y_{\textrm{d,min}}
```
where the last entry is the exact quantile of interest
```math
X_{\alpha} = X_{\Gamma}[-1], \quad
Y_{\alpha} = Y_{\Gamma}[-1]
```
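A short sketch of this rescaling and of picking out the quantile point, continuing the hypothetical names used above (inputs are NumPy arrays and the data bounds from Step 5):

```python
def rescale_and_extract(X_ds, Y_ds, X_min, X_max, Y_min, Y_max):
    """Map the filtered, sorted arrays back to the original scale and return the quantile point."""
    X_gamma = X_ds * (X_max - X_min) + X_min
    Y_gamma = Y_ds * (Y_max - Y_min) + Y_min
    # the last entries correspond to the quantile of interest
    return X_gamma[-1], Y_gamma[-1]
```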
**Step 7** <br>
Compute the adjustment coefficient associated with the quantile of interest:
```math
\textrm{if}\;\textrm{Y-dependent}\;\Rightarrow C_{\alpha} = \frac{Y_{\alpha}}{X_{\alpha}^m}
```
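A one-line sketch for the Y-dependent case, reusing the quantile point from Step 6 and the exponent $m$ of the fitted mean curve (the function name is hypothetical):

```python
def adjustment_coefficient_y(X_alpha, Y_alpha, m):
    """Adjustment coefficient for the Y-dependent form: C_alpha = Y_alpha / X_alpha**m."""
    return Y_alpha / X_alpha ** m
```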