Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Commits on Source (159)
Showing 561 additions and 124 deletions
before_script:
- apt-get update
# uncomment first time
#- rm -rf TestFiles
#- git submodule update --init
- git submodule sync
- git submodule update
test-3.4:
image: mmpe/wetb
......
[submodule "TestFiles"]
path = TestFiles
url = https://gitlab.windenergy.dtu.dk/toolbox/TestFiles.git
Contributions
-------------
If you make a change in the toolbox that others can benefit from, please make a merge request.
If you can, please submit a merge request with the fix or improvements, including tests.
The workflow to make a merge request is as follows:
- Create a feature branch, branching away from master
- Write tests and code
- Push the commit(s) to your fork
- Submit a merge request (MR) to the master branch
- Link any relevant issues in the merge request description and leave a comment on them with a link back to the MR
- Your tests should run as fast as possible, and if they use test files, these files should be as small as possible
- Please keep the changes in a single MR as small as possible. Split the functionality if you can
Subproject commit cf64c48ffb671d35a457d0539e078bfe3eb07f0b
......@@ -15,9 +15,9 @@ This tool comes handy in the following scenarios:
The generator of the cases uses an input spreadsheet where the cases are defined
in a more compact way.
The tool is based on the "tags" concept that is used for the generation of the htc files.
Main spreadsheet
----------------
A main spreadsheet is used to define all the DLCs of the DLB. The file specifies the tags that are then required in the htc files.
......@@ -26,13 +26,38 @@ The file has:
* a Main sheet where some wind turbine parameters are defined, the tags are initialized, and the definitions of turbulence and gusts are given.
* a series of other sheets, each defining a DLC. In these sheets the tags that change in that DLC are defined.
The tags are divided into three categories:
* Constants (C). Constants are tags that do not change in a DLC, e.g. simulation time, output format, ...;
* Variables (V). Variables are tags that define the number of cases in a DLC through their combinations, e.g. wind speed, number of turbulence seeds, wind direction, ...;
* Functions (F). Functions are tags that depend on other tags through an expression, e.g. turbulence intensity, case name, ....
In each sheet the type of tag is defined in the line above the tag by typing one of the letters C, V, or F.
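As a purely hypothetical illustration of such a sheet (tag names, values, and layout details are invented here; refer to the template spreadsheet shipped with the toolbox for the exact layout), a DLC sheet could look like this:

```
C            C       V       V        F
[sim_time]   [t0]    [wsp]   [wdir]   [case_name]
600          100     4       350      "wdir_[wdir]deg_wsp_[wsp]ms"
                     6       0
                     8       10
```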
Functions (F) tags
------------------
* Numbers can be converted to strings (for example when a tag refers to a file name)
by using double quotes ```"``` for Functions (F):
* ```"wdir_[wdir]deg_wsp_[wsp]ms"``` will result in the tags ``` [wdir]```
and ```[wsp]``` being replaced with formatted text.
* The following formatting rules are used (see the sketch after this list):
* ```[wsp]```, ```[gridgustdelay]``` : ```02i```
* ```[wdir]```, ```[G_phi0]``` : ```03i```
* ```[Hs]```, ```[Tp]``` : ```05.02f```
* all other tags: ```04i```
* Only numbers in tags with double quotes are formatted. In all other cases
no formatting takes place and hence no loss of precision occurs.
* In this context, when using quotes, always use double quotes like ```"```.
Do not use single quotes ```'``` or any other quote character.
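As a minimal sketch of the formatting rules above (the tag values are hypothetical; the actual substitution is performed by the toolbox when the htc files are written):

```python
# hypothetical tag values for one case
tags = {'[wdir]': 350, '[wsp]': 4}

# apply the formatting rules listed above: [wdir] -> 03i, [wsp] -> 02i
name = '"wdir_[wdir]deg_wsp_[wsp]ms"'.strip('"')
name = name.replace('[wdir]', '%03i' % tags['[wdir]'])
name = name.replace('[wsp]', '%02i' % tags['[wsp]'])
print(name)  # wdir_350deg_wsp_04ms
```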
Variable (V) tags
-----------------
* ```[seed]``` and ```[wave_seed]``` are special variable tags. Instead of defining
a range of seeds, the user indicates the number of seeds to be used.
* ```[wsp]``` is a required variable tag
* ```[seed]``` should be placed in a column BEFORE ```[wsp]```
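To see how the Variable tags set the number of cases in a DLC, consider this small sketch (values are hypothetical; remember that ```[seed]``` gives the number of seeds rather than a list of seed values):

```python
wsp = list(range(4, 27, 2))   # [wsp] values: 4, 6, ..., 26 m/s
wdir = [350, 0, 10]           # [wdir] values
nr_seeds = 6                  # [seed]: number of turbulence seeds per combination

nr_cases = len(wsp) * len(wdir) * nr_seeds
print(nr_cases)               # 12 * 3 * 6 = 216 htc files for this DLC
```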
Generate the files
------------------
......
......@@ -263,7 +263,7 @@ When there is a new version of HAWC2, or when a new license manager is released,
you can update your local wine directory as follows:
```
g-000 $ rsync -au /home/MET/hawc2exe/win32 /home/$USER/wine_exe/win32 --progress
```
The file ```hawc2-latest.exe``` will always be the latest HAWC2
......@@ -415,6 +415,21 @@ Optional
* ```[mooring_dir] = False```, all files and sub-folders copied to node
* ```[hydro_dir] = False```, all files and sub-folders copied to node
The mooring line dll expects an init file with a fixed name in the root of
the HAWC2 folder. When you have to use different init files (e.g. when the
water depth varies between load cases) it is convenient to be able to control
which init file is used for which case.
When running a load case for which the mooring lines will run in init mode:
* ```[copyback_f1] = 'ESYSMooring_init.dat'```
* ```[copyback_f1_rename] = 'mooringinits/ESYSMooring_init_vXYZ.dat'```
When using an a priori calculated init file for the mooring lines:
* ```[copyto_f1] = 'mooringinits/ESYSMooring_init_vXYZ.dat'```
* ```[copyto_generic_f1] = 'ESYSMooring_init.dat'```
Replace ```vXYZ``` with an appropriate identifier for your case.
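As a sketch of how these tags could be set per case (the version identifier and the idea of collecting the tags in a plain dictionary are illustrative only; in practice the values are set in your spreadsheet or in a customized ```dlctemplate.py```):

```python
# case that runs the mooring lines in init mode: copy the generated init file
# back and give it a case-specific name
tags_init_case = {
    '[copyback_f1]': 'ESYSMooring_init.dat',
    '[copyback_f1_rename]': 'mooringinits/ESYSMooring_init_v01.dat',
}

# case that re-uses an a priori calculated init file: copy it to the node under
# the fixed name the mooring dll expects
tags_reuse_case = {
    '[copyto_f1]': 'mooringinits/ESYSMooring_init_v01.dat',
    '[copyto_generic_f1]': 'ESYSMooring_init.dat',
}
```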
A zip file will be created which contains all files in the model root directory,
and all the contents (files and folders) of the following directories:
```[control_dir], [mooring_dir], [hydro_dir], 'externalforce/', [data_dir]```.
......@@ -424,7 +439,7 @@ during simulation time in the ```[log_dir]```, ```[res_dir]```,
```[animation_dir]```, and ```[eigenfreq_dir]``` will be copied back.
### Advanced configuration options by modifying dlctemplate.py
> Note that not all features are documented yet...
......@@ -432,8 +447,8 @@ Special tags: copy special result files from the compute node back to the HAWC2
working directory on the network drive, and optionally rename the file in case
it would otherwise be overwritten by other cases in your DLB:
* ```[copyback_files] = ['ESYSMooring_init.dat']```
* ```[copyback_frename] = ['path/to/ESYSMooring_init_vXYZ.dat']```, optionally
specify a different file path/name
Copy files from the HAWC2 working directory with a special name to the compute
node, for which a fixed file name is assumed
......@@ -470,10 +485,9 @@ tags:
* ```[hydro input name]```
* ```[wave_type]``` : see HAWC2 manual for options
* ```[wave_spectrum]``` : see HAWC2 manual for options
* ```[hydro_dir]```
* ```[wdepth]```
* ```[Hs]``` : see HAWC2 manual for options
* ```[Tp]``` : see HAWC2 manual for options
* ```[wave_seed]``` : see HAWC2 manual for options
And the corresponding section in the htc master file:
......@@ -734,25 +748,48 @@ g-000 $ qsub-wrap.py -f ../myturbine.py --years=25 --neq=1e7 --stats --check_log
Other options for the original ```dlctemplate.py``` script:
```
(wetb_py3) [dave@jess]$ python dlctemplate.py --help
usage: dlctemplate.py [-h] [--prep] [--check_logs]
[--pbs_failed_path PBS_FAILED_PATH] [--stats]
[--fatigue] [--AEP] [--csv] [--years YEARS]
[--no_bins NO_BINS] [--neq NEQ] [--rotarea ROTAREA]
[--save_new_sigs] [--dlcplot] [--envelopeblade]
[--envelopeturbine] [--zipchunks] [--pbs_turb]
[--walltime WALLTIME]
pre- or post-processes DLC's
optional arguments:
-h, --help show this help message and exit
--prep create htc, pbs, files
--check_logs check the log files
--pbs_failed_path PBS_FAILED_PATH
Copy pbs launch files of the failed cases to a new
directory in order to prepare a re-run. Default value:
pbs_in_failed.
--stats calculate statistics and 1Hz equivalent loads
--fatigue calculate Leq for a full DLC
--AEP calculate AEP, requires htc/DLCs/dlc_config.xlsx
--csv Save data also as csv file
--years YEARS Total life time in years
--no_bins NO_BINS Number of bins for fatigue loads
--neq NEQ Equivalent cycles Neq used for Leq fatigue lifetime
calculations.
--rotarea ROTAREA Rotor area for C_T, C_P
--save_new_sigs Save post-processed sigs
--dlcplot Plot DLC load basis results
--envelopeblade Compute envelopeblade
--envelopeturbine Compute envelopeturbine
--zipchunks Create PBS launch files forrunning in zip-chunk
find+xargs mode.
--pbs_turb Create PBS launch files to create the turbulence boxes
in stand alone mode using the 64-bit Mann turbulence
box generator. This can be usefull if your turbulence
boxes are too big for running in HAWC2 32-bit mode.
Only works on Jess.
--walltime WALLTIME Queue walltime for each case/pbs file, format:
HH:MM:SS Default: 04:00:00
```
The load envelopes are computed for sensors specified in the
......
......@@ -4,9 +4,9 @@ The Wind Energy Toolbox has a workflow for automatically running design load
bases (DLBs) on Gorm.
This workflow has the following steps:
1. Create a master Excel sheet defining each case in the DLB
2. [Create subordinate Excel sheets from each tab in the master Excel sheet](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/2-creating-subordinate-excels.md)
3. [Create htc files and PBS job scripts for each requisite simulation using
the subordinate Excel files and a master htc file.](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/3-creating-htc-pbs-files.md)
4. Submit all PBS job scripts to the cluster
5. Post-process results
6. Visualize results
......@@ -86,13 +86,15 @@ Generate the master Excel file in a few easy steps:
the Wind Energy Toolbox tutorials directory.
3. From a terminal/command window, run the code to generate the Excel file
from a folder of text files:
* Windows (from the wetb tutorials folder):
```python ..\..\wetb\prepost\write_master.py --folder data\DLCs_onshore --filename DLCs_onshore.xlsx```
* Mac/Linux (from the wetb tutorials folder):
```python ../../wetb/prepost/write_master.py --folder data/DLCs_onshore --filename DLCs_onshore.xlsx```
* Gorm (from any folder that contains a subfolder with your text files; note
that you must activate the wetb environment (see Step 5 [here](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/getting-started-with-dlbs.md))
before this command will work, and that this command assumes the folder with
your text files is called "DLCs_onshore" and is located in the working
directory):
```python /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/write_master.py --folder ./DLCs_onshore --filename ./DLCs_onshore.xlsx```
The master Excel file "DLCs_onshore.xlsx" should now be in your current
directory.
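As an optional sanity check of the generated file (a sketch that assumes pandas with Excel support is available in your Python environment):

```python
import pandas as pd

# the master file should contain a 'Main' sheet plus one sheet per DLC
xls = pd.ExcelFile('DLCs_onshore.xlsx')
print(xls.sheet_names)
```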
......
# Tutorial 2: Creating subordinate Excel files
The Wind Energy Toolbox has a workflow for automatically running design load
bases (DLBs) on Gorm.
This workflow has the following steps:
1. [Create a master Excel sheet defining each case in the DLB](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/1-creating-master-excel.md)
2. Create subordinate Excel sheets from each tab in the master Excel sheet
3. [Create htc files and PBS job scripts for each requisite simulation using
the subordinate Excel files and a master htc file.](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/3-creating-htc-pbs-files.md)
4. Submit all PBS job scripts to the cluster
5. Post-process results
6. Visualize results
This tutorial presents how to accomplish Step 2.
Note that it is possible to customize your simulations by skipping/modifying
steps.
Such a procedure will be discussed in a later tutorial.
If there are any problems with this tutorial, please [submit an issue](
https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues).
## 1. Background: Subordinate Excel Files
The subordinate Excel files are a series of basic Excel files that are
generated from the master Excel file. (See our tutorial on generating the
master Excel file [here](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/1-creating-master-excel.md).)
There is a different subordinate Excel file for every tab in the master Excel
file except for the "Main" tab, i.e. one file for each case to simulate (e.g.,
design load case 1.2 from IEC-61400-1).
Each subordinate Excel file has a single tab that lists the different tag
values for the htc master file in its columns; each row corresponds to a
different htc file to be generated.
The generation of the htc files from the subordinate Excel files is discussed
in the next tutorial.
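A quick way to see this structure is to open one of the generated files with pandas (a sketch; the file name is hypothetical and pandas with Excel support is assumed):

```python
import pandas as pd

# each column is a tag from the master htc file,
# each row is one htc file to be generated
df = pd.read_excel('dlc12_iec61400-1ed3.xlsx')
print(df.columns.tolist())   # the tag names
print(len(df))               # the number of htc files for this DLC
```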
## 2. Tutorial
The generation of the subordinate Excel files is done using the
[GenerateDLCs.py](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/wetb/prepost/GenerateDLCs.py)
script in the Wind Energy Toolbox.
On Gorm, the command can be executed from the htc directory with the master
Excel file as follows:
```
export PATH=/home/python/miniconda3/bin:$PATH
source activate wetb_py3
python /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/GenerateDLCs.py [--folder=$FOLDER_NAME] [--master=$MASTER_NAME]
source deactivate
```
The ```export PATH``` command adds the miniconda bin directory to the path,
which is necessary for the toolbox.
The ```source activate wetb_py3``` and ```source deactivate``` are
Gorm-specific commands to activate the Wind Energy Toolbox Python environment.
The ```--folder``` and ```--master``` flags are optional flags that specify,
respectively, the name of the folder to which the subordinate Excel files
should be written and the name of the master Excel file.
The default values for these two options are './' (i.e., the current
directory) and 'DLCs.xlsx', respectively.
After running the commands in the above box on Gorm, you should see a folder in
your htc directory with all of your subordinate Excel files.
## 3. Issues
If there are any problems with this tutorial, please [submit an issue](
https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues).
We will try to fix it as soon as possible.
# Tutorial 3: Creating htc and PBS files
The Wind Energy Toolbox has a workflow for automatically running design load
bases (DLBs) on Gorm.
This workflow has the following steps:
1. [Create a master Excel sheet defining each case in the DLB](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/1-creating-master-excel.md)
2. [Create subordinate Excel sheets from each tab in the master Excel sheet](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/2-creating-subordinate-excels.md)
3. Create htc files and PBS job scripts for each requisite simulation using
the subordinate Excel files and a master htc file.
4. Submit all PBS job scripts to the cluster
5. Post-process results
6. Visualize results
This tutorial presents how to accomplish Step 3.
Note that it is possible to customize your simulations by skipping/modifying
steps.
Such a procedure will be discussed in a later tutorial.
If there are any problems with this tutorial, please [submit an issue](
https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues).
## 1. Background: htc and PBS file creation
The main function used in this tutorial is [dlctemplate.py](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/wetb/prepost/dlctemplate.py),
which creates all htc and PBS job scripts for the cases specified in the
subordinate Excel file folder.
The htc files are the main input files for HAWC2 simulations.
They are created by copying the master htc file in the ```_master/``` folder in
your htc directory and replacing all of the tags with the values specified in
the subordinate Excel files.
All of the htc files for a single case are saved in a case-specific folder in
your htc folder (a conceptual sketch of the tag replacement is shown after the
folder layout below).
Thus, if you were running a standard DLB calculation for IEC 61400-1, your
folder structure after generating your htc files might look like this:
```
|-- $TURB_NAME/
| |-- $SET_ID/
| | |-- DLCs.xlsx
| | |-- _master/
| | | |-- $MASTER_NAME.htc
| | |-- DLCs/
| | |-- htc/
| | | |-- dlc12_iec61400-1ed3/
| | | | |-- dlc12_wsp04_wdir000_s2001.htc
| | | | |-- dlc12_wsp04_wdir000_s2002.htc
| | | | |-- ...
| | | |-- dlc13_iec61400-1ed3/
| | | |-- ...
```
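Conceptually, the htc generation is a simple tag replacement on the master file text, repeated for every row of every subordinate Excel file. A minimal sketch (the htc lines and tag values are made up for illustration; the toolbox handles this for you):

```python
# two lines of a hypothetical master htc file
master_text = "wsp   [wsp] ;\nseed  [seed] ;"

# one row of a subordinate Excel file
case_tags = {'[wsp]': '10', '[seed]': '2001'}

htc_text = master_text
for tag, value in case_tags.items():
    htc_text = htc_text.replace(tag, value)
print(htc_text)
```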
The PBS job scripts are a series of text files that are used to tell the job
scheduler on the high-performance computing (HPC) cluster how to run each job.
These files end with ".p", and are saved to a folder ```pbs_in/``` that is
created in the main set ID folder on Gorm.
## 2. Tutorial
There are two ways to call ```dlctemplate.py```.
The first is to call the function directly.
The second is to wrap it in a job scheduler to submit the job to the HPC cluster.
The first option is fine if you have only a few htc files or if the job
scheduler is not working for some reason.
The second option is generally preferred.
### 2.1 Directly generate htc files
The htc and PBS files can be directly generated by running the following
commands from the set ID directory:
```
export PATH=/home/python/miniconda3/bin:$PATH
source activate wetb_py3
python /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/dlctemplate.py --prep
source deactivate
```
The ```export PATH``` command adds the miniconda bin directory to the path,
which is necessary for the toolbox.
The ```source activate wetb_py3``` and ```source deactivate``` are
Gorm-specific commands to activate the Wind Energy Toolbox Python environment.
The ```--prep``` option tells the script to run in preparation mode, in which
case it creates the htc and pbs files.
After running the commands in the above box on Gorm, you should have all of your
PBS input files in ```pbs_in/``` and all of your htc files in ```htc/```.
### 2.2 Generate files using job scheduler
From the set ID folder, run the following code:
```
qsub-wrap.py -f /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/dlctemplate.py --prep
```
## 3. Issues
If there are any problems with this tutorial, please [submit an issue](
https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues).
We will try to fix it as soon as possible.
......@@ -12,7 +12,7 @@ Default constants: [ref_ti] [ref_wind_speed] [tsr] [hub_height] [diameter] [t0]
Default functions: [Turb base name] [time stop] [turb_dx] [wsp factor] [wind_ramp_t1] [wind_ramp_factor1] [time_start]
"""turb_wsp[wsp]_s[seed]""" [t0]+[sim_time] "[wsp]*[time stop]/8192,0" [tsr]/[wsp] [t0] [wsp factor] [t0]
"""turb_wsp[wsp]_s[seed]""" [t0]+[sim_time] "[wsp]*[sim_time]/8192,0" [tsr]/[wsp] [t0] [wsp factor] [t0]
......
# Add your requirements here like:
six
cython
numpy>=1.4
scipy>=0.9
matplotlib
......@@ -17,4 +18,5 @@ pbr
PyScaffold
pytest-cov
sshtunnel
xlsxwriter
......@@ -236,16 +236,20 @@ class DLCHighLevel(object):
pass
elif not hasattr(self, "res_folder") or self.res_folder == "":
files = glob.glob(os.path.join(self.res_path, "*"+ext)) + glob.glob(os.path.join(self.res_path, "*/*"+ext))
if len(files)==0:
raise Exception('No *%s files found in:\n%s or\n%s'%(ext, self.res_path, os.path.join(self.res_path, "*/")))
else:
files = []
for dlc_id in fatigue_dlcs:
dlc_id = str(dlc_id)
if "%" in self.res_folder:
folder = self.res_folder % dlc_id
else:
folder = self.res_folder
dlc_files = (glob.glob(os.path.join(self.res_path , folder, "*"+ext)))
if len(dlc_files)==0:
raise Exception('DLC%s included in fatigue analysis, but no *%s files found in:\n%s'%(dlc_id, ext, os.path.join(self.res_path , folder)))
files.extend(dlc_files)
keys = list(zip(*self.dist_value_keys))[1]
fmt = self.format_tag_value
tags = [[fmt(tag.replace(key, "")) for tag, key in zip(os.path.basename(f).split("_"), keys)] for f in files]
......
......@@ -19,6 +19,7 @@ from __future__ import print_function
from __future__ import unicode_literals
from __future__ import absolute_import
from future import standard_library
import warnings
standard_library.install_aliases()
import numpy as np
from wetb.fatigue_tools.rainflowcounting import rainflowcount
......@@ -104,7 +105,9 @@ def eq_load_and_cycles(signals, no_bins=46, m=[3, 4, 6, 8, 10, 12], neq=[10 ** 6
if 0: #to be similar to windap
ampl_bin_mean = (ampl_bin_edges[:-1] + ampl_bin_edges[1:]) / 2
cycles, ampl_bin_mean = cycles.flatten(), ampl_bin_mean.flatten()
with warnings.catch_warnings():
warnings.simplefilter("ignore")
eq_loads = [[((np.nansum(cycles * ampl_bin_mean ** _m) / _neq) ** (1. / _m)) for _m in np.atleast_1d(m)] for _neq in np.atleast_1d(neq)]
return eq_loads, cycles, ampl_bin_mean, ampl_bin_edges
......@@ -156,11 +159,12 @@ def cycle_matrix(signals, ampl_bins=10, mean_bins=10, rainflow_func=rainflow_win
if isinstance(ampl_bins, int):
ampl_bins = np.linspace(0, 1, num=ampl_bins + 1) * ampls[weights>0].max()
cycles, ampl_edges, mean_edges = np.histogram2d(ampls, means, [ampl_bins, mean_bins], weights=weights)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
ampl_bin_sum = np.histogram2d(ampls, means, [ampl_bins, mean_bins], weights=weights * ampls)[0]
ampl_bin_mean = np.nanmean(ampl_bin_sum / np.where(cycles,cycles,np.nan),1)
mean_bin_sum = np.histogram2d(ampls, means, [ampl_bins, mean_bins], weights=weights * means)[0]
mean_bin_mean = np.nanmean(mean_bin_sum / np.where(cycles, cycles, np.nan), 1)
cycles = cycles / 2 # to get full cycles
return cycles, ampl_bin_mean, ampl_edges, mean_bin_mean, mean_edges
......
......@@ -64,7 +64,8 @@ def rainflow_windap(signal, levels=255., thresshold=(255 / 50)):
check_signal(signal)
#type <double> is required by <find_extreme> and <rainflow>
signal = signal.astype(np.double)
if np.all(np.isnan(signal)):
return None
offset = np.nanmin(signal)
signal -= offset
if np.nanmax(signal) > 0:
......
......@@ -23,6 +23,36 @@ testfilepath = os.path.join(os.path.dirname(__file__), 'test_files/') # test fi
class TestFatigueTools(unittest.TestCase):
def test_leq_1hz(self):
"""Simple test of wetb.fatigue_tools.fatigue.eq_load using a sine
signal.
"""
amplitude = 1
m = 1
point_per_deg = 100
# sine signal with 10 periods (20 peaks)
nr_periods = 10
time = np.linspace(0, nr_periods*2*np.pi, point_per_deg*180)
neq = time[-1]
# mean value of the signal shouldn't matter
signal = amplitude * np.sin(time) + 5
r_eq_1hz = eq_load(signal, no_bins=1, m=m, neq=neq)[0]
r_eq_1hz_expected = ((2*nr_periods*amplitude**m)/neq)**(1/m)
np.testing.assert_allclose(r_eq_1hz, r_eq_1hz_expected)
# sine signal with 20 periods (40 peaks)
nr_periods = 20
time = np.linspace(0, nr_periods*2*np.pi, point_per_deg*180)
neq = time[-1]
# mean value of the signal shouldn't matter
signal = amplitude * np.sin(time) + 9
r_eq_1hz2 = eq_load(signal, no_bins=1, m=m, neq=neq)[0]
r_eq_1hz_expected2 = ((2*nr_periods*amplitude**m)/neq)**(1/m)
np.testing.assert_allclose(r_eq_1hz2, r_eq_1hz_expected2)
# 1hz equivalent load should be independent of the length of the signal
np.testing.assert_allclose(r_eq_1hz, r_eq_1hz2)
def test_astm1(self):
......@@ -59,7 +89,7 @@ class TestFatigueTools(unittest.TestCase):
# def test_windap3(self):
# data = Hawc2io.ReadHawc2(testfilepath + "test").ReadBinary([2]).flatten()
# from wetb.fatigue_tools.rainflowcounting import peak_trough
# self.assertTrue(peak_trough.__file__.lower()[-4:] == ".pyd" or peak_trough.__file__.lower()[-3:] == ".so",
# "not compiled, %s, %s\n%s"%(sys.executable, peak_trough.__file__, os.listdir(os.path.dirname(peak_trough.__file__))))
# np.testing.assert_array_equal(cycle_matrix(data, 4, 4, rainflow_func=rainflow_windap)[0], np.array([[ 14., 65., 39., 24.],
# [ 0., 1., 4., 0.],
......@@ -73,7 +103,7 @@ class TestFatigueTools(unittest.TestCase):
[ 0., 1., 4., 0.],
[ 0., 0., 0., 0.],
[ 0., 1., 2., 0.]]) / 2, 0.001)
def test_astm_weighted(self):
data = Hawc2io.ReadHawc2(testfilepath + "test").ReadBinary([2]).flatten()
np.testing.assert_allclose(cycle_matrix([(1, data),(1,data)], 4, 4, rainflow_func=rainflow_astm)[0], np.array([[ 24., 83., 53., 26.],
......
......@@ -38,9 +38,13 @@ from .gtsdf import save
from .gtsdf import load
from .gtsdf import append_block
from .gtsdf import load_pandas
from .gtsdf import add_statistic
from .gtsdf import load_statistic
from .gtsdf import compress2statistics
class Dataset(object):
def __init__(self, filename):
self.filename = filename
self.time, self.data, self.info = load(filename)
def __call__(self, id):
if isinstance(id, str):
......@@ -64,8 +68,8 @@ class Dataset(object):
try:
return self(name)
except:
# for i, n in enumerate(self.info['attribute_names']):
# print (i,n)
raise e
def __contains__(self, name):
......
......@@ -3,6 +3,7 @@ from builtins import zip
from builtins import range
from builtins import str
from future import standard_library
from wetb.fatigue_tools.fatigue import eq_load
standard_library.install_aliases()
import warnings
from wetb.gtsdf.unix_time import from_unix
......@@ -13,6 +14,7 @@ except ImportError as e:
import os
import numpy as np
import numpy.ma as ma
import pandas as pd
block_name_fmt = "block%04d"
def load(filename, dtype=None):
......@@ -89,80 +91,95 @@ def load(filename, dtype=None):
'type': 'General time series data format',
'description': 'MyDatasetDescription'}
"""
f = _open_h5py_file(filename)
try:
info = _load_info(f)
time, data = _load_timedata(f,dtype)
return time, data, info
finally:
try:
f.close()
except:
pass
def _open_h5py_file(filename):
if isinstance(filename, h5py.File):
f = filename
filename = f.filename
else:
assert os.path.isfile(filename), "File, %s, does not exists" % filename
f = h5py.File(filename, 'r')
return f
def decode(v):
if isinstance(v, bytes):
return v.decode('latin1')
elif hasattr(v,'len'):
return [decode(v_) for v_ in v]
return v
def _load_info(f):
info = {k: decode(v) for k, v in f.attrs.items()}
check_type(f)
if 'name' not in info:
info['name'] = os.path.splitext(os.path.basename(f.filename))[0]
if 'attribute_names' in f:
info['attribute_names'] = [v.decode('latin1') for v in f['attribute_names']]
if 'attribute_units' in f:
info['attribute_units'] = [v.decode('latin1') for v in f['attribute_units']]
if 'attribute_descriptions' in f:
info['attribute_descriptions'] = [v.decode('latin1') for v in f['attribute_descriptions']]
return info
def _load_timedata(f, dtype):
no_blocks = f.attrs['no_blocks']
if (block_name_fmt % 0) not in f:
raise ValueError("HDF5 file must contain a group named '%s'" % (block_name_fmt % 0))
block0 = f[block_name_fmt % 0]
if 'data' not in block0:
raise ValueError("group %s must contain a dataset called 'data'" % (block_name_fmt % 0))
_, no_attributes = block0['data'].shape
if dtype is None:
file_dtype = f[block_name_fmt % 0]['data'].dtype
if "float" in str(file_dtype):
dtype = file_dtype
elif file_dtype in [np.int8, np.uint8, np.int16, np.uint16]:
dtype = np.float32
else:
dtype = np.float64
time = []
data = []
for i in range(no_blocks):
try:
block = f[block_name_fmt % i]
except KeyError:
continue
no_observations, no_attributes = block['data'].shape
block_time = (block.get('time', np.arange(no_observations))[:]).astype(np.float64)
if 'time_step' in block.attrs:
block_time *= block.attrs['time_step']
if 'time_start' in block.attrs:
block_time += block.attrs['time_start']
time.extend(block_time)
block_data = block['data'][:].astype(dtype)
if "int" in str(block['data'].dtype):
block_data[block_data == np.iinfo(block['data'].dtype).max] = np.nan
if 'gains' in block:
block_data *= block['gains'][:]
if 'offsets' in block:
block_data += block['offsets'][:]
data.append(block_data)
if no_blocks > 0:
data = np.vstack(data)
return np.array(time).astype(np.float64), np.array(data).astype(dtype)
def save(filename, data, **kwargs):
"""Save a 'General Time Series Data Format'-hdf5 datafile
......@@ -226,36 +243,44 @@ def save(filename, data, **kwargs):
time_step=2,
dtype=np.float64)
"""
if not filename.lower().endswith('.hdf5'):
filename += ".hdf5"
# exist_ok does not exist in Python27
if not os.path.exists(os.path.dirname(os.path.abspath(filename))):
os.makedirs(os.path.dirname(os.path.abspath(filename))) #, exist_ok=True)
_save_info(filename, data.shape, **kwargs)
append_block(filename, data, **kwargs)
def _save_info(filename, data_shape, **kwargs):
f = h5py.File(filename, "w")
try:
f.attrs["type"] = "General time series data format"
no_observations, no_attributes = data_shape
if 'name' in kwargs:
f.attrs['name'] = kwargs['name']
if 'description' in kwargs:
f.attrs['description'] = kwargs['description']
f.attrs['no_attributes'] = no_attributes
if 'attribute_names' in kwargs:
if no_attributes:
assert len(kwargs['attribute_names']) == no_attributes, "len(attribute_names)=%d but data shape is %s" % (len(kwargs['attribute_names']), data_shape)
f.create_dataset("attribute_names", data=np.array([v.encode('utf-8') for v in kwargs['attribute_names']]))
if 'attribute_units' in kwargs:
if no_attributes:
assert(len(kwargs['attribute_units']) == no_attributes)
f.create_dataset("attribute_units", data=np.array([v.encode('utf-8') for v in kwargs['attribute_units']]))
if 'attribute_descriptions' in kwargs:
if no_attributes:
assert(len(kwargs['attribute_descriptions']) == no_attributes)
f.create_dataset("attribute_descriptions", data=np.array([v.encode('utf-8') for v in kwargs['attribute_descriptions']]))
f.attrs['no_blocks'] = 0
except Exception:
raise
finally:
f.close()
def append_block(filename, data, **kwargs):
"""Append a data block and corresponding time data to already existing file
......@@ -398,3 +423,42 @@ def check_type(f):
raise ValueError("HDF5 file must contain a 'type'-attribute with the value 'General time series data format'")
if 'no_blocks' not in f.attrs:
raise ValueError("HDF5 file must contain an attribute named 'no_blocks'")
def _get_statistic(time, data, statistics=['min','mean','max','std','eq3','eq4','eq6','eq8','eq10','eq12']):
def get_stat(stat):
if hasattr(np, stat):
return getattr(np,stat)(data,0)
elif (stat.startswith("eq") and stat[2:].isdigit()):
m = float(stat[2:])
return [eq_load(sensor, 46, m, time[-1]-time[0]+time[1]-time[0])[0][0] for sensor in data.T]
return np.array([get_stat(stat) for stat in statistics]).T
def _add_statistic_data(file, stat_data, statistics=['min','mean','max','std','eq3','eq4','eq6','eq8','eq10','eq12']):
f = h5py.File(file, "a")
stat_grp = f.create_group("Statistic")
stat_grp.create_dataset("statistic_names", data=np.array([v.encode('utf-8') for v in statistics]))
stat_grp.create_dataset("statistic_data", data=stat_data.astype(np.float))
f.close()
def add_statistic(file, statistics=['min','mean','max','std','eq3','eq4','eq6','eq8','eq10','eq12']):
time, data, info = load(file)
stat_data = _get_statistic(time, data, statistics)
_add_statistic_data(file, stat_data, statistics)
def load_statistic(filename):
f = _open_h5py_file(filename)
info = _load_info(f)
names = decode(f['Statistic']['statistic_names'])
data =np.array(f['Statistic']['statistic_data'])
return pd.DataFrame(data, columns=names), info
def compress2statistics(filename, statistics=['min','mean','max','std','eq3','eq4','eq6','eq8','eq10','eq12']):
time, data, info = load(filename)
stat_data = _get_statistic(time, data, statistics)
_save_info(filename, data.shape, **info)
_add_statistic_data(filename, stat_data, statistics)
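A minimal usage sketch of these new statistics helpers (the file name is hypothetical):

```python
from wetb import gtsdf

# add a 'Statistic' group (min/mean/max/std and equivalent loads) to an
# existing gtsdf file, then read it back as a pandas DataFrame
gtsdf.add_statistic('simulation.hdf5')
stats, info = gtsdf.load_statistic('simulation.hdf5')
print(stats)

# or replace the full time series with the statistics only (much smaller file)
gtsdf.compress2statistics('simulation.hdf5')
```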
'''
Created on 12/09/2013
@author: mmpe
'''
from __future__ import division
from __future__ import unicode_literals
from __future__ import print_function
from __future__ import absolute_import
from builtins import super
from builtins import range
from future import standard_library
standard_library.install_aliases()
import h5py
import numpy as np
from wetb import gtsdf
import unittest
import os
tmp_path = os.path.dirname(__file__) + "/tmp/"
tfp = os.path.dirname(os.path.abspath(__file__)) + "/test_files/"
class Test_gsdf(unittest.TestCase):
def setUp(self):
unittest.TestCase.setUp(self)
if not os.path.isdir(tmp_path):
os.makedirs(tmp_path)
@classmethod
def tearDownClass(cls):
super(Test_gsdf, cls).tearDownClass()
#shutil.rmtree(tmp_path)
def test_gtsdf_stat(self):
time, data, info = gtsdf.load(tfp+'test.hdf5')
print (data.shape)
fn = tmp_path + "test_stat.hdf5"
gtsdf.save(fn, data, time=time, **info)
gtsdf.add_statistic(fn)
stat_data,info = gtsdf.load_statistic(fn)
self.assertEqual(data[:,0].min(), stat_data.values[0,0])
self.assertEqual(stat_data.shape, (49,10))
def test_gtsdf_compress2stat(self):
time, data, info = gtsdf.load(tfp+'test.hdf5')
fn = tmp_path + "test_compress2stat.hdf5"
gtsdf.save(fn, data, time=time, **info)
gtsdf.save(tmp_path + "test_compress2stat2.hdf5", data, time=time, dtype=np.float, **info)
gtsdf.compress2statistics(fn)
self.assertLess(os.path.getsize(fn)*50, os.path.getsize(tfp+'test.hdf5'))
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
unittest.main()
......@@ -67,10 +67,10 @@ class AEFile(object):
ae_data = self.ae_sets[set_nr]
index = np.searchsorted(ae_data[:, 0], radius)
index = max(1, index)
setnrs = ae_data[index - 1:index + 1, 3]
if setnrs[0] != setnrs[-1]:
raise NotImplementedError
return setnrs[0]
......