Compare revisions

Changes are shown as if the source revision was being merged into the target revision.
Commits on Source (124)
Showing 698 additions and 222 deletions
before_script:
  - apt-get update
  # uncomment first time
  #- rm -rf TestFiles
  #- git submodule update --init
  - git submodule sync
  - git submodule update

test-3.4:
  image: mmpe/wetb
[submodule "TestFiles"]
path = TestFiles
url = https://gitlab.windenergy.dtu.dk/toolbox/TestFiles.git
Subproject commit cf64c48ffb671d35a457d0539e078bfe3eb07f0b
@@ -15,9 +15,9 @@ This tool comes handy in the following scenarios:

The generator of the cases uses an input spreadsheet where the cases are defined
in a more compact way.
The tool is based on the "tags" concept that is used for the generation of the htc files.

Main spreadsheet
----------------
A main spreadsheet is used to define all the DLCs of the DLB. The file specifies the tags that are then required in the htc files.

@@ -26,13 +26,38 @@ The file has:

* a Main sheet where some wind turbine parameters are defined, the tags are initialized, and the definitions of turbulence and gusts are given.
* a series of other sheets, each defining a DLC. In these sheets the tags that change in that DLC are defined.

The tags are divided into three categories:

* Constants (C). Constants are tags that do not change in a DLC, e.g. simulation time, output format, ...;
* Variables (V). Variables are tags that define the number of cases in a DLC through their combinations, e.g. wind speed, number of turbulence seeds, wind direction, ...;
* Functions (F). Functions are tags that depend on other tags through an expression, e.g. turbulence intensity, case name, ....

In each sheet the type of tag is defined in the line above the tag by typing one of the letters C, V, or F.
Functions (F) tags
------------------
* Numbers can be converted to strings (for example when a tag refers to a file name)
by using double quotes ```"``` for Functions (F):
* ```"wdir_[wdir]deg_wsp_[wsp]ms"``` will result in the tags ``` [wdir]```
and ```[wsp]``` being replaced with formatted text.
* following formatting rules are used:
* ```[wsp]```, ```[gridgustdelay]``` : ```02i```
* ```[wdir]```, ```[G_phi0]``` : ```03i```
* ```[Hs]```, ```[Tp]``` : ```05.02f```
* all other tags: ```04i```
* Only numbers in tags with double quotes are formatted. In all other cases
no formatting takes place, and hence no loss of precision occurs.
* In this context, when using quotes, always use double quotes like ```"```.
Do not use single quotes ```'``` or any other quote character.
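A minimal sketch of these rules (an illustration only; the ```formats``` table
and ```format_tags``` helper below are hypothetical, not the toolbox's own
implementation):

```
# Hypothetical helper illustrating the per-tag formatting rules listed above.
formats = {'[wsp]': '%02i', '[gridgustdelay]': '%02i',
           '[wdir]': '%03i', '[G_phi0]': '%03i',
           '[Hs]': '%05.02f', '[Tp]': '%05.02f'}

def format_tags(template, values, default='%04i'):
    # Replace each [tag] in a double-quoted F-tag expression with its
    # formatted numeric value; all other tags fall back to the 04i default.
    for tag, value in values.items():
        template = template.replace(tag, formats.get(tag, default) % value)
    return template

print(format_tags('wdir_[wdir]deg_wsp_[wsp]ms', {'[wdir]': 10, '[wsp]': 4}))
# -> 'wdir_010deg_wsp_04ms'
```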
Variable (V) tags
-----------------
* ```[seed]``` and ```[wave_seed]``` are special variable tags. Instead of defining
a range of seeds, the user indicates the number of seeds to be used.
* ```[wsp]``` is a required variable tag
* ```[seed]``` should be placed in a column BEFORE ```[wsp]```
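To see how the Variable tags set the number of cases in a DLC, consider a
hypothetical sheet (all values invented for illustration): every combination
of the V-tag values becomes one htc file:

```
# Illustration only: V-tag values multiply combinatorially into cases.
import itertools

wsp = [4, 6, 8]       # [wsp] values (m/s)
wdir = [350, 0, 10]   # [wdir] values (deg)
seeds = [1, 2]        # [seed]: the user gives the number of seeds to use

cases = list(itertools.product(wsp, wdir, seeds))
print(len(cases))     # -> 18 htc files for this DLC
```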
Generate the files
------------------
@@ -263,7 +263,7 @@ When there is a new version of HAWC2, or when a new license manager is released,

you can update your local wine directory as follows:

```
g-000 $ rsync -au /home/MET/hawc2exe/win32 /home/$USER/wine_exe/win32 --progress
```

The file ```hawc2-latest.exe``` will always be the latest HAWC2
@@ -421,12 +421,12 @@ depth is varying for different load cases) it would be convenient to be able

to control which init file is used for which case (e.g. water depth).

When running a load case for which the mooring lines will run in init mode:

* ```[copyback_f1] = 'ESYSMooring_init.dat'```
* ```[copyback_f1_rename] = 'mooringinits/ESYSMooring_init_vXYZ.dat'```

When using an a priori calculated init file for the mooring lines:

* ```[copyto_f1] = 'mooringinits/ESYSMooring_init_vXYZ.dat'```
* ```[copyto_generic_f1] = 'ESYSMooring_init.dat'```

Replace ```vXYZ``` with an appropriate identifier for your case.
@@ -748,25 +748,48 @@ g-000 $ qsub-wrap.py -f ../myturbine.py --years=25 --neq=1e7 --stats --check_log

Other options for the original ```dlctemplate.py``` script:

```
(wetb_py3) [dave@jess]$ python dlctemplate.py --help
usage: dlctemplate.py [-h] [--prep] [--check_logs]
                      [--pbs_failed_path PBS_FAILED_PATH] [--stats]
                      [--fatigue] [--AEP] [--csv] [--years YEARS]
                      [--no_bins NO_BINS] [--neq NEQ] [--rotarea ROTAREA]
                      [--save_new_sigs] [--dlcplot] [--envelopeblade]
                      [--envelopeturbine] [--zipchunks] [--pbs_turb]
                      [--walltime WALLTIME]

pre- or post-processes DLC's

optional arguments:
  -h, --help            show this help message and exit
  --prep                create htc, pbs, files
  --check_logs          check the log files
  --pbs_failed_path PBS_FAILED_PATH
                        Copy pbs launch files of the failed cases to a new
                        directory in order to prepare a re-run. Default value:
                        pbs_in_failed.
  --stats               calculate statistics and 1Hz equivalent loads
  --fatigue             calculate Leq for a full DLC
  --AEP                 calculate AEP, requires htc/DLCs/dlc_config.xlsx
  --csv                 Save data also as csv file
  --years YEARS         Total life time in years
  --no_bins NO_BINS     Number of bins for fatigue loads
  --neq NEQ             Equivalent cycles Neq used for Leq fatigue lifetime
                        calculations.
  --rotarea ROTAREA     Rotor area for C_T, C_P
  --save_new_sigs       Save post-processed sigs
  --dlcplot             Plot DLC load basis results
  --envelopeblade       Compute envelopeblade
  --envelopeturbine     Compute envelopeturbine
  --zipchunks           Create PBS launch files for running in zip-chunk
                        find+xargs mode.
  --pbs_turb            Create PBS launch files to create the turbulence boxes
                        in stand alone mode using the 64-bit Mann turbulence
                        box generator. This can be useful if your turbulence
                        boxes are too big for running in HAWC2 32-bit mode.
                        Only works on Jess.
  --walltime WALLTIME   Queue walltime for each case/pbs file, format:
                        HH:MM:SS Default: 04:00:00
```
The load envelopes are computed for sensors specified in the
@@ -4,9 +4,9 @@ The Wind Energy Toolbox has a workflow for automatically running design load

bases (DLBs) on Gorm.
This workflow has the following steps:
1. Create a master Excel sheet defining each case in the DLB
2. [Create subordinate Excel sheets from each tab in the master Excel sheet](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/2-creating-subordinate-excels.md)
3. [Create htc files and PBS job scripts for each requisite simulation using
the subordinate Excel files and a master htc file.](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/3-creating-htc-pbs-files.md)
4. Submit all PBS job scripts to the cluster
5. Post-process results
6. Visualize results
# Tutorial 2: Creating subordinate Excel files
The Wind Energy Toolbox has a workflow for automatically running design load
bases (DLBs) on Gorm.
This workflow has the following steps:
1. [Create a master Excel sheet defining each case in the DLB](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/1-creating-master-excel.md)
2. Create subordinate Excel sheets from each tab in the master Excel sheet
3. [Create htc files and PBS job scripts for each requisite simulation using
the subordinate Excel files and a master htc file.](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/3-creating-htc-pbs-files.md)
4. Submit all PBS job scripts to the cluster
5. Post-process results
6. Visualize results
This tutorial presents how to accomplish Step 2.
Note that it is possible to customize your simulations by skipping/modifying
steps.
Such a procedure will be discussed in a later tutorial.
If there are any problems with this tutorial, please [submit an issue](
https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues).
## 1. Background: Subordinate Excel Files

The subordinate Excel files are a series of basic Excel files that are
generated from the master Excel file. (See our tutorial on generating the
master Excel file [here](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/1-creating-master-excel.md).)
There is a different subordinate Excel file for every tab in the master Excel
file except for the "Main" tab: one for each case to simulate (e.g., design
load case 1.2 from IEC 61400-1).
Each subordinate Excel file has a single tab in which the columns list the
tag values for the htc master file, and each row corresponds to a different
htc file to be generated.
The generation of the htc files from the subordinate Excel files is discussed
in the next tutorial.
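As an invented illustration of such a tab (tag names and values are
hypothetical), each column holds a tag and each row defines one htc file:

```
[case_id]                    [wsp]  [wdir]  [seed]
dlc12_wsp04_wdir000_s1001      4      0      1001
dlc12_wsp04_wdir000_s1002      4      0      1002
dlc12_wsp06_wdir000_s1003      6      0      1003
```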
## 2. Tutorial
The generation of the subordinate Excel files is done using the
[GenerateDLCs.py](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/wetb/prepost/GenerateDLCs.py)
script in the Wind Energy Toolbox.
On Gorm, the command can be executed from the htc directory with the master
Excel file as follows:
```
export PATH=/home/python/miniconda3/bin:$PATH
source activate wetb_py3
python /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/GenerateDLCs.py [--folder=$FOLDER_NAME] [--master=$MASTER_NAME]
source deactivate
```
The ```export PATH``` command adds the miniconda bin directory to the path,
which is necessary for the toolbox.
The ```source activate wetb_py3``` and ```source deactivate``` are
Gorm-specific commands to activate the Wind Energy Toolbox Python environment.
The ```--folder``` and ```--master``` flags are optional and specify,
respectively, the folder to which the subordinate Excel files should be
written and the name of the master Excel file.
The default values for these two options are './' (i.e., the current
directory) and 'DLCs.xlsx', respectively.
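For example, to read a master file named ```DLCs.xlsx``` and write the
subordinate Excel files to a folder named ```DLCs/``` (both names are only
examples), the call in the box above becomes:

```
python /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/GenerateDLCs.py --folder=DLCs --master=DLCs.xlsx
```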
After running the commands in the above box on Gorm, you should see a folder in
your htc directory with all of your subordinate Excel files.
## 3. Issues
If there are any problems with this tutorial, please [submit an issue](
https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues).
We will try to fix it as soon as possible.
# Tutorial 3: Creating htc and PBS files
The Wind Energy Toolbox has a workflow for automatically running design load
bases (DLBs) on Gorm.
This workflow has the following steps:
1. [Create a master Excel sheet defining each case in the DLB](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/1-creating-master-excel.md)
2. [Create subordinate Excel sheets from each tab in the master Excel sheet](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/2-creating-subordinate-excels.md)
3. Create htc files and PBS job scripts for each requisite simulation using
the subordinate Excel files and a master htc file.
4. Submit all PBS job scripts to the cluster
5. Post-process results
6. Visualize results
This tutorial presents how to accomplish Step 3.
Note that it is possible to customize your simulations by skipping/modifying
steps.
Such a procedure will be discussed in a later tutorial.
If there are any problems with this tutorial, please [submit an issue](
https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues).
## 1. Background: htc and PBS file creation
The main function used in this tutorial is [dlctemplate.py](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/wetb/prepost/dlctemplate.py),
which creates all htc and PBS job scripts for the cases specified in the
subordinate Excel file folder.
The htc files are the main input files for HAWC2 simulations.
They are created by copying the master htc file in the ```_master/``` folder in
your htc directory and replacing all of the tags with the values specified in
the subordinate Excel files.
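The idea behind this tag replacement, sketched with an invented two-tag master
file (real master htc files and tag sets are much larger):

```
# Minimal sketch of htc tag replacement; not the toolbox's actual code.
master = "wind_speed [wsp] ;\nturb_seed [seed] ;"
tags = {'[wsp]': '4', '[seed]': '1001'}

htc = master
for tag, value in tags.items():
    htc = htc.replace(tag, value)  # every occurrence of the tag is replaced
print(htc)
# wind_speed 4 ;
# turb_seed 1001 ;
```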
All of the htc files for a single case are saved in a case-specific folder in
your htc folder.
Thus, if you were running a standard DLB calculation for IEC 61400-1, your
folder structure after generating your htc files might look like this:
```
|-- $TURB_NAME/
| |-- $SET_ID/
| | |-- DLCs.xlsx
| | |-- _master/
| | | |-- $MASTER_NAME.htc
| | |-- DLCs/
| | |-- htc/
| | | |-- dlc12_iec61400-1ed3/
| | | | |-- dlc12_wsp04_wdir000_s2001.htc
| | | | |-- dlc12_wsp04_wdir000_s2002.htc
| | | | |-- ...
| | | |-- dlc13_iec61400-1ed3/
| | | |-- ...
```
The PBS job scripts are a series of text files that are used to tell the job
scheduler on the high-performance computing (HPC) cluster how to run each job.
These files end with ".p", and are saved to a folder ```pbs_in/``` that is
created in the main set ID folder on Gorm.
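For orientation, a PBS launch file typically looks like the following minimal
sketch (the directives and paths are generic examples, not the exact files the
toolbox writes):

```
#PBS -N dlc12_wsp04_wdir000_s1001
#PBS -l walltime=04:00:00
#PBS -l nodes=1:ppn=1

cd $PBS_O_WORKDIR
# run HAWC2 under wine on the case's htc file (example path)
wine hawc2mb.exe htc/dlc12_iec61400-1ed3/dlc12_wsp04_wdir000_s1001.htc
```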
## 2. Tutorial
There are two ways to call ```dlctemplate.py```.
The first is to call the script directly.
The second is to wrap it in a job scheduler to submit the job to the HPC cluster.
The first option is fine if you have only a few htc files or if the job
scheduler is not working for some reason.
The second option is generally preferred.
### 2.1 Directly generate htc files
The htc and PBS files can be directly generated by running the following
commands from the set ID directory:
```
export PATH=/home/python/miniconda3/bin:$PATH
source activate wetb_py3
python /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/dlctemplate.py --prep
source deactivate
```
The ```export PATH``` command adds the miniconda bin directory to the path,
which is necessary for the toolbox.
The ```source activate wetb_py3``` and ```source deactivate``` are
Gorm-specific commands to activate the Wind Energy Toolbox Python environment.
The ```--prep``` option tells the script to run in preparation mode, in which
case it creates the htc and pbs files.
After running the commands in the above box on Gorm, you should have all of your
PBS input files in ```pbs_in/``` and all of your htc files in ```htc/```.
### 2.2 Generate files using job scheduler
From the set ID folder, run the following code:
```
qsub-wrap.py -f /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/dlctemplate.py --prep
```
## 3. Issues
If there are any problems with this tutorial, please [submit an issue](
https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues).
We will try to fix it as soon as possible.
@@ -12,7 +12,7 @@ Default constants: [ref_ti] [ref_wind_speed] [tsr] [hub_height] [diameter] [t0]

Default functions: [Turb base name] [time stop] [turb_dx] [wsp factor] [wind_ramp_t1] [wind_ramp_factor1] [time_start]
"""turb_wsp[wsp]_s[seed]""" [t0]+[sim_time] "[wsp]*[sim_time]/8192,0" [tsr]/[wsp] [t0] [wsp factor] [t0]
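Reading the ```[turb_dx]``` column above: the longitudinal grid spacing of the
turbulence box is the distance the wind travels during the simulation divided
by what is presumably the number of longitudinal grid points (8192), e.g. for
```[wsp]``` = 10 m/s and ```[sim_time]``` = 600 s:

```
turb_dx = 10 * 600 / 8192 = 0.73 m
```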
# Add your requirements here like:
six
cython
numpy>=1.4
scipy>=0.9
matplotlib
@@ -236,16 +236,20 @@ class DLCHighLevel(object):

            pass
        elif not hasattr(self, "res_folder") or self.res_folder == "":
            files = glob.glob(os.path.join(self.res_path, "*" + ext)) + glob.glob(os.path.join(self.res_path, "*/*" + ext))
            if len(files) == 0:
                raise Exception('No *%s files found in:\n%s or\n%s' % (ext, self.res_path, os.path.join(self.res_path, "*/")))
        else:
            files = []
            for dlc_id in fatigue_dlcs:
                dlc_id = str(dlc_id)
                if "%" in self.res_folder:
                    folder = self.res_folder % dlc_id
                else:
                    folder = self.res_folder
                dlc_files = glob.glob(os.path.join(self.res_path, folder, "*" + ext))
                if len(dlc_files) == 0:
                    raise Exception('DLC%s included in fatigue analysis, but no *%s files found in:\n%s' % (dlc_id, ext, os.path.join(self.res_path, folder)))
                files.extend(dlc_files)
        keys = list(zip(*self.dist_value_keys))[1]
        fmt = self.format_tag_value
        tags = [[fmt(tag.replace(key, "")) for tag, key in zip(os.path.basename(f).split("_"), keys)] for f in files]
@@ -19,6 +19,7 @@ from __future__ import print_function

from __future__ import unicode_literals
from __future__ import absolute_import
from future import standard_library
import warnings
standard_library.install_aliases()
import numpy as np
from wetb.fatigue_tools.rainflowcounting import rainflowcount

@@ -104,7 +105,9 @@ def eq_load_and_cycles(signals, no_bins=46, m=[3, 4, 6, 8, 10, 12], neq=[10 ** 6

    if 0:  # to be similar to windap
        ampl_bin_mean = (ampl_bin_edges[:-1] + ampl_bin_edges[1:]) / 2
    cycles, ampl_bin_mean = cycles.flatten(), ampl_bin_mean.flatten()
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        eq_loads = [[((np.nansum(cycles * ampl_bin_mean ** _m) / _neq) ** (1. / _m)) for _m in np.atleast_1d(m)] for _neq in np.atleast_1d(neq)]
    return eq_loads, cycles, ampl_bin_mean, ampl_bin_edges
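The expression wrapped by ```catch_warnings``` above is the standard
equivalent-load formula, Leq = (sum(n_i * S_i**m) / neq)**(1/m); a toy check
with invented numbers:

```
import numpy as np

cycles = np.array([100., 50., 10.])  # n_i: rainflow cycle counts
ampl = np.array([1., 2., 4.])        # S_i: cycle amplitude bin means
m, neq = 4, 600.                     # Woehler exponent, equivalent cycles

leq = (np.nansum(cycles * ampl ** m) / neq) ** (1. / m)
print(leq)  # -> ~1.55
```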
@@ -156,11 +159,12 @@ def cycle_matrix(signals, ampl_bins=10, mean_bins=10, rainflow_func=rainflow_win

    if isinstance(ampl_bins, int):
        ampl_bins = np.linspace(0, 1, num=ampl_bins + 1) * ampls[weights > 0].max()
    cycles, ampl_edges, mean_edges = np.histogram2d(ampls, means, [ampl_bins, mean_bins], weights=weights)
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        ampl_bin_sum = np.histogram2d(ampls, means, [ampl_bins, mean_bins], weights=weights * ampls)[0]
        ampl_bin_mean = np.nanmean(ampl_bin_sum / np.where(cycles, cycles, np.nan), 1)
        mean_bin_sum = np.histogram2d(ampls, means, [ampl_bins, mean_bins], weights=weights * means)[0]
        mean_bin_mean = np.nanmean(mean_bin_sum / np.where(cycles, cycles, np.nan), 1)
    cycles = cycles / 2  # to get full cycles
    return cycles, ampl_bin_mean, ampl_edges, mean_bin_mean, mean_edges
@@ -64,7 +64,8 @@ def rainflow_windap(signal, levels=255., thresshold=(255 / 50)):

    check_signal(signal)
    # type <double> is required by <find_extreme> and <rainflow>
    signal = signal.astype(np.double)
    if np.all(np.isnan(signal)):
        return None
    offset = np.nanmin(signal)
    signal -= offset
    if np.nanmax(signal) > 0:
@@ -38,9 +38,13 @@ from .gtsdf import save

from .gtsdf import load
from .gtsdf import append_block
from .gtsdf import load_pandas
from .gtsdf import add_statistic
from .gtsdf import load_statistic
from .gtsdf import compress2statistics

class Dataset(object):
    def __init__(self, filename):
        self.filename = filename
        self.time, self.data, self.info = load(filename)

    def __call__(self, id):
        if isinstance(id, str):

@@ -64,8 +68,8 @@ class Dataset(object):

        try:
            return self(name)
        except:
            # for i, n in enumerate(self.info['attribute_names']):
            #     print(i, n)
            raise e

    def __contains__(self, name):
@@ -3,6 +3,7 @@ from builtins import zip

from builtins import range
from builtins import str
from future import standard_library
from wetb.fatigue_tools.fatigue import eq_load
standard_library.install_aliases()
import warnings
from wetb.gtsdf.unix_time import from_unix

@@ -13,6 +14,7 @@ except ImportError as e:

import os
import numpy as np
import numpy.ma as ma
import pandas as pd

block_name_fmt = "block%04d"

def load(filename, dtype=None):
@@ -89,80 +91,95 @@ def load(filename, dtype=None):

           'type': 'General time series data format',
           'description': 'MyDatasetDescription'}
    """
    f = _open_h5py_file(filename)
    try:
        info = _load_info(f)
        time, data = _load_timedata(f, dtype)
        return time, data, info
    finally:
        try:
            f.close()
        except:
            pass

def _open_h5py_file(filename):
    if isinstance(filename, h5py.File):
        f = filename
        filename = f.filename
    else:
        assert os.path.isfile(filename), "File, %s, does not exists" % filename
        f = h5py.File(filename, 'r')
    return f

def decode(v):
    if isinstance(v, bytes):
        return v.decode('latin1')
    elif hasattr(v, 'len'):
        return [decode(v_) for v_ in v]
    return v

def _load_info(f):
    info = {k: decode(v) for k, v in f.attrs.items()}
    check_type(f)
    if 'name' not in info:
        info['name'] = os.path.splitext(os.path.basename(f.filename))[0]
    if 'attribute_names' in f:
        info['attribute_names'] = [v.decode('latin1') for v in f['attribute_names']]
    if 'attribute_units' in f:
        info['attribute_units'] = [v.decode('latin1') for v in f['attribute_units']]
    if 'attribute_descriptions' in f:
        info['attribute_descriptions'] = [v.decode('latin1') for v in f['attribute_descriptions']]
    return info

def _load_timedata(f, dtype):
    no_blocks = f.attrs['no_blocks']
    if (block_name_fmt % 0) not in f:
        raise ValueError("HDF5 file must contain a group named '%s'" % (block_name_fmt % 0))
    block0 = f[block_name_fmt % 0]
    if 'data' not in block0:
        raise ValueError("group %s must contain a dataset called 'data'" % (block_name_fmt % 0))
    _, no_attributes = block0['data'].shape
    if dtype is None:
        file_dtype = f[block_name_fmt % 0]['data'].dtype
        if "float" in str(file_dtype):
            dtype = file_dtype
        elif file_dtype in [np.int8, np.uint8, np.int16, np.uint16]:
            dtype = np.float32
        else:
            dtype = np.float64
    time = []
    data = []
    for i in range(no_blocks):
        try:
            block = f[block_name_fmt % i]
        except KeyError:
            continue
        no_observations, no_attributes = block['data'].shape
        block_time = (block.get('time', np.arange(no_observations))[:]).astype(np.float64)
        if 'time_step' in block.attrs:
            block_time *= block.attrs['time_step']
        if 'time_start' in block.attrs:
            block_time += block.attrs['time_start']
        time.extend(block_time)
        block_data = block['data'][:].astype(dtype)
        if "int" in str(block['data'].dtype):
            block_data[block_data == np.iinfo(block['data'].dtype).max] = np.nan
        if 'gains' in block:
            block_data *= block['gains'][:]
        if 'offsets' in block:
            block_data += block['offsets'][:]
        data.append(block_data)
    if no_blocks > 0:
        data = np.vstack(data)
    return np.array(time).astype(np.float64), np.array(data).astype(dtype)
def save(filename, data, **kwargs):
    """Save a 'General Time Series Data Format'-hdf5 datafile

@@ -226,36 +243,44 @@ def save(filename, data, **kwargs):

            time_step=2,
            dtype=np.float64)
    """
    if not filename.lower().endswith('.hdf5'):
        filename += ".hdf5"
    # exist_ok does not exist in Python27
    if not os.path.exists(os.path.dirname(os.path.abspath(filename))):
        os.makedirs(os.path.dirname(os.path.abspath(filename)))  # , exist_ok=True
    _save_info(filename, data.shape, **kwargs)
    append_block(filename, data, **kwargs)

def _save_info(filename, data_shape, **kwargs):
    f = h5py.File(filename, "w")
    try:
        f.attrs["type"] = "General time series data format"
        no_observations, no_attributes = data_shape
        if 'name' in kwargs:
            f.attrs['name'] = kwargs['name']
        if 'description' in kwargs:
            f.attrs['description'] = kwargs['description']
        f.attrs['no_attributes'] = no_attributes
        if 'attribute_names' in kwargs:
            if no_attributes:
                assert len(kwargs['attribute_names']) == no_attributes, "len(attribute_names)=%d but data shape is %s" % (len(kwargs['attribute_names']), data_shape)
            f.create_dataset("attribute_names", data=np.array([v.encode('utf-8') for v in kwargs['attribute_names']]))
        if 'attribute_units' in kwargs:
            if no_attributes:
                assert(len(kwargs['attribute_units']) == no_attributes)
            f.create_dataset("attribute_units", data=np.array([v.encode('utf-8') for v in kwargs['attribute_units']]))
        if 'attribute_descriptions' in kwargs:
            if no_attributes:
                assert(len(kwargs['attribute_descriptions']) == no_attributes)
            f.create_dataset("attribute_descriptions", data=np.array([v.encode('utf-8') for v in kwargs['attribute_descriptions']]))
        f.attrs['no_blocks'] = 0
    except Exception:
        raise
    finally:
        f.close()
def append_block(filename, data, **kwargs):
    """Append a data block and corresponding time data to already existing file

@@ -398,3 +423,42 @@ def check_type(f):

        raise ValueError("HDF5 file must contain a 'type'-attribute with the value 'General time series data format'")
    if 'no_blocks' not in f.attrs:
        raise ValueError("HDF5 file must contain an attribute named 'no_blocks'")
def _get_statistic(time, data, statistics=['min', 'mean', 'max', 'std', 'eq3', 'eq4', 'eq6', 'eq8', 'eq10', 'eq12']):
    def get_stat(stat):
        if hasattr(np, stat):
            return getattr(np, stat)(data, 0)
        elif stat.startswith("eq") and stat[2:].isdigit():
            m = float(stat[2:])
            return [eq_load(sensor, 46, m, time[-1] - time[0] + time[1] - time[0])[0][0] for sensor in data.T]
    return np.array([get_stat(stat) for stat in statistics]).T

def _add_statistic_data(file, stat_data, statistics=['min', 'mean', 'max', 'std', 'eq3', 'eq4', 'eq6', 'eq8', 'eq10', 'eq12']):
    f = h5py.File(file, "a")
    stat_grp = f.create_group("Statistic")
    stat_grp.create_dataset("statistic_names", data=np.array([v.encode('utf-8') for v in statistics]))
    stat_grp.create_dataset("statistic_data", data=stat_data.astype(np.float))
    f.close()

def add_statistic(file, statistics=['min', 'mean', 'max', 'std', 'eq3', 'eq4', 'eq6', 'eq8', 'eq10', 'eq12']):
    time, data, info = load(file)
    stat_data = _get_statistic(time, data, statistics)
    _add_statistic_data(file, stat_data, statistics)

def load_statistic(filename):
    f = _open_h5py_file(filename)
    info = _load_info(f)
    names = decode(f['Statistic']['statistic_names'])
    data = np.array(f['Statistic']['statistic_data'])
    return pd.DataFrame(data, columns=names), info

def compress2statistics(filename, statistics=['min', 'mean', 'max', 'std', 'eq3', 'eq4', 'eq6', 'eq8', 'eq10', 'eq12']):
    time, data, info = load(filename)
    stat_data = _get_statistic(time, data, statistics)
    _save_info(filename, data.shape, **info)
    _add_statistic_data(filename, stat_data, statistics)
'''
Created on 12/09/2013
@author: mmpe
'''
from __future__ import division
from __future__ import unicode_literals
from __future__ import print_function
from __future__ import absolute_import
from builtins import super
from builtins import range
from future import standard_library
standard_library.install_aliases()
import h5py
import numpy as np
from wetb import gtsdf
import unittest
import os
tmp_path = os.path.dirname(__file__) + "/tmp/"
tfp = os.path.dirname(os.path.abspath(__file__)) + "/test_files/"
class Test_gsdf(unittest.TestCase):
    def setUp(self):
        unittest.TestCase.setUp(self)
        if not os.path.isdir(tmp_path):
            os.makedirs(tmp_path)

    @classmethod
    def tearDownClass(cls):
        super(Test_gsdf, cls).tearDownClass()
        # shutil.rmtree(tmp_path)

    def test_gtsdf_stat(self):
        time, data, info = gtsdf.load(tfp + 'test.hdf5')
        print(data.shape)
        fn = tmp_path + "test_stat.hdf5"
        gtsdf.save(fn, data, time=time, **info)
        gtsdf.add_statistic(fn)
        stat_data, info = gtsdf.load_statistic(fn)
        self.assertEqual(data[:, 0].min(), stat_data.values[0, 0])
        self.assertEqual(stat_data.shape, (49, 10))

    def test_gtsdf_compress2stat(self):
        time, data, info = gtsdf.load(tfp + 'test.hdf5')
        fn = tmp_path + "test_compress2stat.hdf5"
        gtsdf.save(fn, data, time=time, **info)
        gtsdf.save(tmp_path + "test_compress2stat2.hdf5", data, time=time, dtype=np.float, **info)
        gtsdf.compress2statistics(fn)
        self.assertLess(os.path.getsize(fn) * 50, os.path.getsize(tfp + 'test.hdf5'))

if __name__ == "__main__":
    # import sys;sys.argv = ['', 'Test.testName']
    unittest.main()
@@ -67,10 +67,10 @@ class AEFile(object):

        ae_data = self.ae_sets[set_nr]
        index = np.searchsorted(ae_data[:, 0], radius)
        index = max(1, index)
        setnrs = ae_data[index - 1:index + 1, 3]
        if setnrs[0] != setnrs[-1]:
            raise NotImplementedError
        return setnrs[0]
@@ -49,8 +49,9 @@ class AtTimeFile(object):

        self.blade_radius = bladetip_radius
        with open(filename, encoding='utf-8') as fid:
            lines = fid.readlines()
        atttribute_name_line = [l.strip().startswith("# Radius_s") for l in lines].index(True)
        #self.attribute_names = lines[atttribute_name_line].lower().replace("#", "").split()
        self.attribute_names = [n.strip() for n in lines[atttribute_name_line].lower().split("#")[1:]]
        data = np.array([[float(l) for l in lines[i].split()] for i in range(atttribute_name_line + 1, len(lines))])
        self.data = data

        def func_factory(column):

@@ -97,10 +98,15 @@ if __name__ == "__main__":

    at = AtTimeFile(r"tests/test_files/at.dat", 86.3655)  # load file
    at = AtTimeFile(r'U:\hama\HAWC2-AVATAR\res/avatar-7ntm-scaled-rad.dat')
    at.attribute_names  # Attribute names
    at[:3, 1]  # first 3 twist rows
    print(len(at.attribute_names))
    print("\n".join(at.attribute_names))
    print(at.data.shape)
    #at.twist()[:3]  # Twist first 3 twist rows
    #print(at.twist(36, curved_length=True))  # Twist at curved_length = 36 (interpolated)
    #print(at.twist(36))  # Twist at 36 (interpolated)
@@ -14,53 +14,54 @@ from wetb.hawc2.st_file import StFile

# class BladeInfo(object):
#     def twist(self, radius=None):
#         """Aerodynamic twist [deg]. Negative when leading edge is twisted towards wind(opposite to normal definition)\n
#
# Parameters
# ---------
# radius : float or array_like, optional
# radius [m] of interest
# If None (default) the twist of all points are returned
#
# Returns
# -------
# twist [deg] : float
# """
# raise NotImplementedError()
#
# def chord(self, radius=None):
# """Aerodynamic chord
#
# Parameters
# ---------
# radius : float or array_like, optional
# radius [m] of interest
# If None (default) the twist of all points are returned
#
# Returns
# -------
# chord [m] : float
# """
# raise NotImplementedError()
#
#
#
# def c2nd(self, radius_nd):
# """Return center line position
#
# Parameters
# ---------
# radius_nd : float or array_like, optional
# non dimensional radius
#
# Returns
# -------
# x,y,z : float
# """

class H2aeroBladeInfo(PCFile, AtTimeFile):
    """Provide HAWC2 info about a blade

    From AE file:

@@ -85,45 +86,64 @@ class H2BladeInfo(BladeInfo, PCFile, AtTimeFile):

    """

    def __init__(self, htcfile, ae_filename=None, pc_filename=None, at_time_filename=None, blade_name=None):
        if isinstance(htcfile, str):
            assert htcfile.lower().endswith('.htc')
            htcfile = HTCFile(htcfile)
        self.htcfile = htcfile
        blade_name = blade_name or htcfile.aero.link[2]
        #s = htcfile.new_htc_structure
        at_time_filename = at_time_filename or ("output_at_time" in htcfile and os.path.join(htcfile.modelpath, htcfile.output_at_time.filename[0] + ".dat"))
        pc_filename = pc_filename or os.path.join(htcfile.modelpath, htcfile.aero.pc_filename[0])
        ae_filename = ae_filename or os.path.join(htcfile.modelpath, htcfile.aero.ae_filename[0])
        #mainbodies = [s[k] for k in s.keys() if s[k].name_ == "main_body"]
        #self.mainbody_blade = [mb for mb in mainbodies if mb.name[0] == blade_name][0]
        if os.path.isfile(pc_filename) and os.path.isfile(ae_filename):
            PCFile.__init__(self, pc_filename, ae_filename)
            blade_radius = self.ae_sets[1][-1, 0]
        if os.path.isfile(at_time_filename):
            AtTimeFile.__init__(self, at_time_filename, blade_radius)
            self.curved_length = self.radius_curved_ac()[-1]
        else:
            raise NotImplementedError
            #z_nd = (np.cos(np.linspace(np.pi, np.pi*2,len(curved_length)-1))+1)/2
            #self.curved_length = np.cumsum(np.sqrt(np.sum(np.diff(self.c2def[:, :3], 1, 0) ** 2, 1)))[-1]
        self.hawc2_splines_data = self.hawc2_splines()

    @property
    def c2def(self):
        if not hasattr(self, "_c2def"):
            self._c2def = np.array([v.values[1:5] for v in self.htcfile.blade_c2_def if v.name_ == "sec"])
        return self._c2def

    def c2nd(self, l_nd, curved_length=True):
        curve_l_nd, x, y, z, twist = self.hawc2_splines_data
        if curved_length:
            l_nd = (np.max([np.min([l_nd, np.ones_like(l_nd)], 0), np.zeros_like(l_nd) + self.c2def[0, 2] / self.c2def[-1, 2]], 0))
            return np.array([np.interp(l_nd, curve_l_nd, xyz) for xyz in [x, y, z, twist]]).T
        else:
            l_nd = (np.max([np.min([l_nd, np.ones_like(l_nd)], 0), np.zeros_like(l_nd)], 0))
            return np.array([np.interp(l_nd, z / z[-1], xyz) for xyz in [x, y, z, twist]]).T

    def c2(self, l, curved_length=True):
        if curved_length:
            L = self.curved_length
        else:
            L = self.c2def[-1, 2]
        return self.c2nd(l / L, curved_length)

    def hawc2_splines(self):
        curve_l = np.r_[0, np.cumsum(np.sqrt(np.sum(np.diff(self.c2def[:, :3], 1, 0) ** 2, 1)))]
        curve_l_nd = curve_l / curve_l[-1]

        def akima(x, y):
            n = len(x)

@@ -166,10 +186,11 @@ class H2BladeInfo(BladeInfo, PCFile, AtTimeFile):

            z = np.linspace(0, s[i + 1] - s[i], 10, endpoint=i >= co.shape[0] - 2)
            x.extend(s[i] + z)
            y.extend(p(z))
        return x, y

        x, y, z, twist = [coef2spline(curve_l_nd, akima(curve_l_nd, self.c2def[:, i]))[1] for i in range(4)]
        curve_l_nd = coef2spline(curve_l_nd, akima(curve_l_nd, self.c2def[:, 0]))[0]
        return curve_l_nd, x, y, z, twist
    def xyztwist(self, l=None, curved_length=False):
        """Return splined x,y,z and twist

@@ -188,44 +209,120 @@

        x,y,z,twist
        """
        if l is None:
            return self.c2def[:, :3]
        else:
            r_nd = np.linspace(0, 1, 1000)
            if curved_length:
                #curved_length = np.cumsum(np.sqrt((np.diff(self.c2nd(np.linspace(0,1,100)),1,0)[:,:3]**2).sum(1)))
                return self.c2nd(l / self.radius_curved_ac()[-1])
                # z_nd = (np.cos(np.linspace(np.pi, np.pi*2,len(curved_length)-1))+1)/2
                # assert np.all(l>=curved_length[0]) and np.all(l<=curved_length[-1])
                # return self.c2nd(r_nd[np.argmin(np.abs(curved_length-l))+1])
            else:
                assert np.all(l >= self.c2def[0, 2]) and np.all(l <= self.c2def[-1, 2])
                return self.c2nd(l / self.c2def[-1, 2])
class H2BladeInfo(H2aeroBladeInfo):
    """Provide HAWC2 info about a blade

    From AE file:
    - chord(radius=None, set_nr=1):
    - thickness(radius=None, set_nr=1)
    - radius_ae(radius=None, set_nr=1)

    From PC file
    - CL(radius, alpha, ae_set_nr=1)
    - CD(radius, alpha, ae_set_nr=1)
    - CM(radius, alpha, ae_set_nr=1)

    From at_time_filename
    - attribute_names
    - xxx(radius=None, curved_length=None) # xxx for each attribute name
    - radius_curved_ac(radius=None) # Curved length of nearest/all aerodynamic calculation points

    From ST file
    - radius_st(radius=None, mset=1, set=1)
    - xxx(radius=None, mset=1, set=1) # xxx for each of r, m, x_cg, y_cg, ri_x, ri_y, xs, ys, E, G, Ix, Iy, K, kx, ky, A, pitch, xe, ye
    with template
    """

    def __init__(self, htcfile, ae_filename=None, pc_filename=None, at_time_filename=None, st_filename=None, blade_name=None):
        if isinstance(htcfile, str):
            htcfile = HTCFile(htcfile)
        s = htcfile.new_htc_structure
        # at_time_filename = at_time_filename or ("output_at_time" in htcfile and os.path.join(htcfile.modelpath, htcfile.output_at_time.filename[0] + ".dat"))
        # pc_filename = pc_filename or os.path.join(htcfile.modelpath, htcfile.aero.pc_filename[0])
        # ae_filename = ae_filename or os.path.join(htcfile.modelpath, htcfile.aero.ae_filename[0])
        mainbodies = [s[k] for k in s.keys() if s[k].name_ == "main_body"]
        if blade_name is None:
            blade_name = htcfile.aero.link[2]
        self.mainbody_blade = [mb for mb in mainbodies if mb.name[0] == blade_name][0]
        H2aeroBladeInfo.__init__(self, htcfile, ae_filename=ae_filename, pc_filename=pc_filename, at_time_filename=at_time_filename, blade_name=blade_name)
        # def __init__(self, htcfile, ae_filename=None, pc_filename=None, at_time_filename=None, st_filename=None, blade_name=None):
        #     if isinstance(htcfile, str):
        #         assert htcfile.lower().endswith('.htc')
        #         htcfile = HTCFile(htcfile)
        #     blade_name = blade_name or htcfile.aero.link[2]
        st_filename = st_filename or os.path.join(htcfile.modelpath, self.mainbody_blade.timoschenko_input.filename[0])
        #     if os.path.isfile(pc_filename) and os.path.isfile(ae_filename):
        #         PCFile.__init__(self, pc_filename, ae_filename)
        #         blade_radius = self.ae_sets[1][-1,0]
        if os.path.isfile(st_filename):
            StFile.__init__(self, st_filename)
        #     if os.path.isfile(at_time_filename):
        #         AtTimeFile.__init__(self, at_time_filename, blade_radius)
        #         self.curved_length = self.radius_curved_ac()[-1]
        #     else:
        #         raise NotImplementedError
        #     #z_nd = (np.cos(np.linspace(np.pi, np.pi*2,len(curved_length)-1))+1)/2
        #     #self.curved_length = np.cumsum(np.sqrt(np.sum(np.diff(self.c2def[:, :3], 1, 0) ** 2, 1)))[-1]
        #     self.c2def = np.array([v.values[1:5] for v in mainbody_blade.c2_def if v.name_ == "sec"])
        #     self.hawc2_splines_data = self.hawc2_splines()

    @property
    def c2def(self):
        if not hasattr(self, "_c2def"):
            self._c2def = np.array([v.values[1:5] for v in self.mainbody_blade.c2_def if v.name_ == "sec"])
        return self._c2def
#
# class H2aeroBladeInfo(H2BladeInfo):
#
# def __init__(self, at_time_filename, ae_filename, pc_filename, htc_filename):
# """
# at_time_filename: file name of at time file containing twist and chord data
# """
# PCFile.__init__(self, pc_filename, ae_filename)
# blade_radius = self.ae_sets[1][-1,0]
# AtTimeFile.__init__(self, at_time_filename, blade_radius)
#
# assert('twist' in self.attribute_names)
# htcfile = HTCFile(htc_filename)
#
#
# self.c2def = np.array([v.values[1:5] for v in htcfile.blade_c2_def if v.name_ == "sec"])
# #self._c2nd = interp1d(self.c2def[:, 2] / self.c2def[-1, 2], self.c2def[:, :3], axis=0, kind='cubic')
#
# ac_radii = self.radius_curved_ac()
# self.hawc2_splines_data = self.hawc2_splines()
# self.curved_length = np.cumsum(np.sqrt(np.sum(np.diff(self.c2def[:, :3], 1, 0) ** 2, 1)))[-1]
#
# # c2_axis_length = np.r_[0, np.cumsum(np.sqrt((np.diff(self.c2def[:, :3], 1, 0) ** 2).sum(1)))]
# # self._c2nd = interp1d(c2_axis_length / c2_axis_length[-1], self.c2def[:, :3], axis=0, kind='cubic')
# #self._c2nd = interp1d(self.c2def[:,2]/self.c2def[-1,2], self.c2def[:, :3], axis=0, kind='cubic')
# #
# # def c2nd(self, r_nd):
# # r_nd_min = np.zeros_like(r_nd) + self.c2def[0, 2] / self.c2def[-1, 2]
# # r_nd_max = np.ones_like(r_nd)
# # r_nd = np.max([np.min([r_nd, r_nd_max], 0), r_nd_min], 0)
# # return self._c2nd(r_nd)
#