---
title: regression approach - powerlaw
---
## Dependency
It is assumed that the data can be described by a power law, expressed as either Y- or X-dependent (referring to the second and first axes of a fatigue curve, respectively).
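For orientation, the two assumed forms can be stated compactly, using a coefficient $C$ and exponent $m$ to match the notation of the adjustment step further down:
```math
\textrm{Y-dependent:}\quad Y = C\, X^{m}, \qquad \textrm{X-dependent:}\quad X = C\, Y^{m}
```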
<br>
Set quantile of interest
```math
\alpha \in \left(0, 1\right), \quad \textrm{e.g.}\;\alpha = 0.75 \;\textrm{or}\; \alpha = 0.95
```
**Step 2**
Compute error limit
```math
\Gamma = 1 - \alpha
```
**Step 3**
Translate the error limit into an index interval on the active data range
```math
I_{\alpha} = \textrm{int}\left(\textrm{round}\left(k\cdot\Gamma\right)\right)
```
where $k$ is the number of points in the data-series.
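A minimal Python sketch of these three steps; the function name `tail_interval` and the example values are illustrative and not part of the original method description:
```python
def tail_interval(alpha: float, k: int) -> int:
    """Hypothetical helper: number of points expected to fall outside quantile alpha."""
    gamma = 1.0 - alpha            # Step 2: error limit
    return int(round(k * gamma))   # Step 3: interval on the active data range

# e.g. alpha = 0.95 with k = 120 data points gives an interval of 6
print(tail_interval(0.95, 120))
```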
Perform _minmax_ normalization. Doing so ensures that the features used by the model have similar scales, which aids faster convergence and accuracy.
```math
X_{d,s} = \frac{X_d - X_{\textrm{d,min}}}{X_{\textrm{d,max}} - X_{\textrm{d,min}}}, \quad
Y_{d,s} = \frac{Y_d - Y_{\textrm{d,min}}}{Y_{\textrm{d,max}} - Y_{\textrm{d,min}}}
```
```math
X_{p,s} = \frac{X_p - X_{\textrm{d,min}}}{X_{\textrm{d,max}} - X_{\textrm{d,min}}}, \quad
Y_{p,s} = \frac{Y_p - Y_{\textrm{d,min}}}{Y_{\textrm{d,max}} - Y_{\textrm{d,min}}}
```
where the lower index s is an abbreviation for scaled, d refers to the data, and p to the prediction.
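A short NumPy sketch of the scaling, assuming `X_d`, `Y_d` hold the data and `X_p`, `Y_p` the model prediction; the helper name `minmax_scale` is illustrative:
```python
import numpy as np

def minmax_scale(values: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Scale `values` into the unit range using the min/max of the data series `ref`."""
    return (values - ref.min()) / (ref.max() - ref.min())

# both the data and the prediction are scaled with the data's extremes,
# matching the equations above:
# X_ds, Y_ds = minmax_scale(X_d, X_d), minmax_scale(Y_d, Y_d)
# X_ps, Y_ps = minmax_scale(X_p, X_d), minmax_scale(Y_p, Y_d)
```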
The purpose is to sort the difference between the prediction and the data and determine the point corresponding to the quantile of interest.
Compute the error/difference between the model and the data
```math
E = Y_{\textrm{p,s}} - Y_{\textrm{d,s}}
```
Extract the data with a positive error. Only the positive differences are of interest, as they indicate points below the mean line.
```math
E = E\left[E > 0\right], \quad X_{d,s} = X_{d,s}[E>0], \quad Y_{d,s} = Y_{d,s}[E>0]
```
Sort the data based on the error
```math
E = E\left[\textrm{argsort}\left(E\right)\right],
\quad X_{d,s} = X_{d,s}\left[\textrm{argsort}\left(E\right)\right],
\quad Y_{d,s} = Y_{d,s}\left[\textrm{argsort}\left(E\right)\right]
```
Extract the data within the desired quantile range
```math
E = E\left[:-I_{\alpha}\right],
\quad X_{d,s} = X_{d,s}\left[:-I_{\alpha}\right],
\quad Y_{d,s} = Y_{d,s}\left[:-I_{\alpha}\right]
```
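The extraction above might look as follows in NumPy; the arrays are small illustrative stand-ins for the scaled error and data series:
```python
import numpy as np

E = np.array([0.08, -0.02, 0.15, 0.01, 0.11, -0.05])  # scaled error, prediction minus data
X_ds = np.linspace(0.0, 1.0, 6)                        # scaled data, illustrative values
Y_ds = np.linspace(1.0, 0.0, 6)
I_alpha = 1                                            # interval from Step 3

# keep only points below the mean line (positive error)
mask = E > 0
E, X_ds, Y_ds = E[mask], X_ds[mask], Y_ds[mask]

# sort everything by increasing error
order = np.argsort(E)
E, X_ds, Y_ds = E[order], X_ds[order], Y_ds[order]

# drop the I_alpha largest errors; the last remaining entry marks the quantile
E, X_ds, Y_ds = E[:-I_alpha], X_ds[:-I_alpha], Y_ds[:-I_alpha]
```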
**Step 6** <br>
Renormalize the data, i.e. invert the _minmax_ scaling.
```math
X_{\Gamma} = X_{d,s} \left(X_{\textrm{d,max}} - X_{\textrm{d,min}}\right) + X_{\textrm{d,min}}, \quad
Y_{\Gamma} = Y_{d,s} \left(Y_{\textrm{d,max}} - Y_{\textrm{d,min}}\right) + Y_{\textrm{d,min}}
```
Where the last entry is the exact quantile of interest
```math
Y_{\alpha} = Y_{\Gamma}[-1]
```
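A short sketch of the inverse scaling and the quantile read-off, reusing the `X_d`, `Y_d`, `X_ds`, `Y_ds` names from the sketches above; reading `X_alpha` off the same last entry is an assumption, as the text only states it for `Y_alpha`:
```python
import numpy as np

def renormalize(scaled: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Invert the minmax scaling using the original data series `ref`."""
    return scaled * (ref.max() - ref.min()) + ref.min()

# Y_alpha = renormalize(Y_ds, Y_d)[-1]   # exact quantile of interest
# X_alpha = renormalize(X_ds, X_d)[-1]   # assumed: taken at the same index
```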
**Step 7** <br>
Compute the adjustment coefficient associated with the quantile of interest
```math
\textrm{if}\;\textrm{Y-dependent}\;\Rightarrow C_{\alpha} = \frac{Y_{\alpha}}{X_{\alpha}^m}
```
```math
\textrm{if}\;\textrm{X-dependent}\;\Rightarrow C_{\alpha} = \frac{X_{\alpha}}{Y_{\alpha}^m}
```
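A compact sketch of this final step; the function name and argument order are illustrative:
```python
def adjustment_coefficient(x_alpha: float, y_alpha: float, m: float, y_dependent: bool = True) -> float:
    """C_alpha for a Y-dependent (Y = C * X**m) or X-dependent (X = C * Y**m) power law."""
    if y_dependent:
        return y_alpha / x_alpha**m
    return x_alpha / y_alpha**m
```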
The end results for an X- or Y-dependent fit using $\alpha=0.75$ and $\alpha=0.95$ are illustrated below.