Compare revisions: toolbox/WindEnergyToolbox

Commits on Source (124)
Showing 698 additions and 222 deletions
before_script:
  - apt-get update
  # uncomment first time
  #- rm -rf TestFiles
  #- git submodule update --init
  - git submodule sync
  - git submodule update

test-3.4:
  image: mmpe/wetb
......
[submodule "TestFiles"]
path = TestFiles
url = https://gitlab.windenergy.dtu.dk/toolbox/TestFiles.git
Subproject commit cf64c48ffb671d35a457d0539e078bfe3eb07f0b
......@@ -15,9 +15,9 @@ This tool comes in handy in the following scenarios:
The generator of the cases uses an input spreadsheet where the cases are defined
in a more compact way.
The tool is based on the "tags" concept that is used for the generation of the htc files.
Main spreadsheet
----------------
A main spreadsheet is used to define all the DLCs of the DLB. The file specifies the tags that are then required in the htc files.
......@@ -26,13 +26,38 @@ The file has:
* a Main sheet where some wind turbine parameters are defined, the tags are initialized, and the definitions of turbulence and gusts are given.
* a series of other sheets, each defining a DLC. In these sheets the tags that change in that DLC are defined.
The tags are divided into three categories:
* Constants (C). Constants are tags that do not change in a DLC, e.g. simulation time, output format, ...;
* Variables (V). Variables are tags that define the number of cases in a DLC through their combinations, e.g. wind speed, number of turbulence seeds, wind direction, ...;
* Functions (F). Functions are tags that depend on other tags through an expression, e.g. turbulence intensity, case name, ....
In each sheet the type of tag is defined in the line above the tag by typing one of the letters C, V, or F.
Functions (F) tags
------------------
* Numbers can be converted to strings (for example when a tag refers to a file name)
by using double quotes ```"``` for Functions (F):
* ```"wdir_[wdir]deg_wsp_[wsp]ms"``` will result in the tags ``` [wdir]```
and ```[wsp]``` being replaced with formatted text.
* the following formatting rules are used (illustrated in the sketch after this list):
* ```[wsp]```, ```[gridgustdelay]``` : ```02i```
* ```[wdir]```, ```[G_phi0]``` : ```03i```
* ```[Hs]```, ```[Tp]``` : ```05.02f```
* all other tags: ```04i```
* Only numbers in tags with double quotes are formatted. In all other cases
no formatting takes place, and hence no loss of precision occurs.
* In this context, when using quotes, always use double quotes like ```"```.
Do not use single quotes ```'``` or any other quote character.
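
As a rough illustration of these rules, the following minimal Python sketch
(hypothetical ```fmt``` table and ```expand``` helper, not the toolbox's own
implementation) shows how such a template could expand:

```
# Hypothetical sketch of the formatting rules above; not toolbox code.
fmt = {'[wsp]': '%02i', '[gridgustdelay]': '%02i',
       '[wdir]': '%03i', '[G_phi0]': '%03i',
       '[Hs]': '%05.02f', '[Tp]': '%05.02f'}

def expand(template, tags):
    # Replace each [tag] in a double-quoted F-tag template with its
    # formatted value; tags not listed in fmt fall back to '%04i'.
    for name, value in tags.items():
        template = template.replace(name, fmt.get(name, '%04i') % value)
    return template

print(expand("wdir_[wdir]deg_wsp_[wsp]ms", {'[wdir]': 5, '[wsp]': 4}))
# -> wdir_005deg_wsp_04ms
```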
Variable (V) tags
-----------------
* ```[seed]``` and ```[wave_seed]``` are special variable tags. Instead of defining
a range of seeds, the user indicates the number of seeds to be used.
* ```[wsp]``` is a required variable tag
* ```[seed]``` should be placed in a column BEFORE ```[wsp]```
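
To illustrate how variable tags combine into cases, here is a minimal Python
sketch with made-up values (not toolbox code):

```
from itertools import product

wsps = [4, 6, 8]       # hypothetical [wsp] values from the sheet
n_seeds = 2            # for [seed] the user gives a count, not a range
cases = list(product(wsps, range(1, n_seeds + 1)))
print(len(cases))      # 6 combinations -> 6 cases in this DLC
```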
Generate the files
------------------
......
......@@ -263,7 +263,7 @@ When there is a new version of HAWC2, or when a new license manager is released,
you can update your local wine directory as follows:
```
g-000 $ rsync -au /home/MET/hawc2exe/win32 /home/$USER/wine_exe/win32 --progress
```
The file ```hawc2-latest.exe``` will always be the latest HAWC2
......@@ -421,12 +421,12 @@ depth is varying for different load cases) it would be convenient to be able
to control which init file is used for which case (e.g. water depth).
When running a load case for which the mooring lines will run in init mode:
* ```[copyback_f1] = 'ESYSMooring_init.dat'```
* ```[copyback_f1_rename] = 'mooringinits/ESYSMooring_init_vXYZ.dat'```
When using an a priori calculated init file for the mooring lines:
* ```[copyto_f1] = 'mooringinits/ESYSMooring_init_vXYZ.dat'```
* ```[copyto_generic_f1] = 'ESYSMooring_init.dat'```
Replace ```vXYZ``` with an appropriate identifier for your case.
......@@ -748,25 +748,48 @@ g-000 $ qsub-wrap.py -f ../myturbine.py --years=25 --neq=1e7 --stats --check_log
Other options for the original ```dlctemplate.py``` script:
```
(wetb_py3) [dave@jess]$ python dlctemplate.py --help
usage: dlctemplate.py [-h] [--prep] [--check_logs]
[--pbs_failed_path PBS_FAILED_PATH] [--stats]
[--fatigue] [--AEP] [--csv] [--years YEARS]
[--no_bins NO_BINS] [--neq NEQ] [--rotarea ROTAREA]
[--save_new_sigs] [--dlcplot] [--envelopeblade]
[--envelopeturbine] [--zipchunks] [--pbs_turb]
[--walltime WALLTIME]
pre- or post-processes DLC's
optional arguments:
-h, --help show this help message and exit
--prep create htc, pbs, files
--check_logs check the log files
--pbs_failed_path PBS_FAILED_PATH
Copy pbs launch files of the failed cases to a new
directory in order to prepare a re-run. Default value:
pbs_in_failed.
--stats calculate statistics and 1Hz equivalent loads
--fatigue calculate Leq for a full DLC
--AEP calculate AEP, requires htc/DLCs/dlc_config.xlsx
--csv Save data also as csv file
--years YEARS Total life time in years
--no_bins NO_BINS Number of bins for fatigue loads
--neq NEQ Equivalent cycles Neq used for Leq fatigue lifetime
calculations.
--rotarea ROTAREA Rotor area for C_T, C_P
--save_new_sigs Save post-processed sigs
--dlcplot Plot DLC load basis results
--envelopeblade Compute envelopeblade
--envelopeturbine Compute envelopeturbine
--zipchunks Create PBS launch files for running in zip-chunk
find+xargs mode.
--pbs_turb Create PBS launch files to create the turbulence boxes
in stand alone mode using the 64-bit Mann turbulence
box generator. This can be useful if your turbulence
boxes are too big for running in HAWC2 32-bit mode.
Only works on Jess.
--walltime WALLTIME Queue walltime for each case/pbs file, format:
HH:MM:SS Default: 04:00:00
```
The load envelopes are computed for sensors specified in the
......
......@@ -4,9 +4,9 @@ The Wind Energy Toolbox has a workflow for automatically running design load
bases (DLBs) on Gorm.
This workflow has the following steps:
1. Create a master Excel sheet defining each case in the DLB
2. [Create subordinate Excel sheets from each tab in the master Excel sheet](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/2-creating-subordinate-excels.md)
3. [Create htc files and PBS job scripts for each requisite simulation using
the subordinate Excel files and a master htc file.](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/3-creating-htc-pbs-files.md)
4. Submit all PBS job scripts to the cluster
5. Post-process results
6. Visualize results
......
# Tutorial 2: Creating subordinate Excel files
The Wind Energy Toolbox has a workflow for automatically running design load
bases (DLBs) on Gorm.
This workflow has the following steps:
1. [Create a master Excel sheet defining each case in the DLB](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/1-creating-master-excel.md)
2. Create subordinate Excel sheets from each tab in the master Excel sheet
3. [Create htc files and PBS job scripts for each requisite simulation using
the subordinate Excel files and a master htc file.](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/3-creating-htc-pbs-files.md)
4. Submit all PBS job scripts to the cluster
5. Post-process results
6. Visualize results
This tutorial presents how to accomplish Step 2.
Note that it is possible to customize your simulations by skipping/modifying
steps.
Such a procedure will be discussed in a later tutorial.
If there are any problems with this tutorial, please [submit an issue](
https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues).
## 1. Background: Subordinate Excel Files
The subordinate Excel files are a series of basic Excel files that are
generated from the master Excel file. (See our tutorial on generating the
master Excel file [here](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/1-creating-master-excel.md).)
There is one subordinate Excel file for every tab in the master Excel file
except the "Main" tab: one file for each case to simulate (e.g., design load
case 1.2 from IEC 61400-1).
Each subordinate Excel file has a single tab that lists the tag values for the
htc master file in its columns; each row corresponds to a different htc file
to be generated.
The generation of the htc files from the subordinate Excel files is discussed
in the next tutorial.
## 2. Tutorial
The generation of the subordinate Excel files is done using the
[GenerateDLCs.py](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/wetb/prepost/GenerateDLCs.py)
script in the Wind Energy Toolbox.
On Gorm, the command can be executed from the htc directory with the master
Excel file as follows:
```
export PATH=/home/python/miniconda3/bin:$PATH
source activate wetb_py3
python /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/GenerateDLCs.py [--folder=$FOLDER_NAME] [--master=$MASTER_NAME]
source deactivate
```
The ```export PATH``` command adds the miniconda bin directory to the path,
which is necessary for the toolbox.
The ```source activate wetb_py3``` and ```source deactivate``` are
Gorm-specific commands to activate the Wind Energy Toolbox Python environment.
The ```--folder``` and ```--master``` flags are optional flags that specify,
respectively, the name of the folder to which the subordinate Excel files
should be written and the name of the master Excel file.
The default values for these two options are './' (i.e., the current
directory) and 'DLCs.xlsx', respectively.
After running the commands in the above box on Gorm, you should see a folder in
your htc directory with all of your subordinate Excel files.
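
For example, an invocation that spells out those defaults explicitly would be:

```
python /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/GenerateDLCs.py --folder=./ --master=DLCs.xlsx
```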
## 3. Issues
If there are any problems with this tutorial, please [submit an issue](
https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues).
We will try to fix it as soon as possible.
# Tutorial 3: Creating htc and PBS files
The Wind Energy Toolbox has a workflow for automatically running design load
bases (DLBs) on Gorm.
This workflow has the following steps:
1. [Create a master Excel sheet defining each case in the DLB](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/1-creating-master-excel.md)
2. [Create subordinate Excel sheets from each tab in the master Excel sheet](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/2-creating-subordinate-excels.md)
3. Create htc files and PBS job scripts for each requisite simulation using
the subordinate Excel files and a master htc file.
4. Submit all PBS job scripts to the cluster
5. Post-process results
6. Visualize results
This tutorial presents how to accomplish Step 3.
Note that it is possible to customize your simulations by skipping/modifying
steps.
Such a procedure will be discussed in a later tutorial.
If there are any problems with this tutorial, please [submit an issue](
https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues).
## 1. Background: htc and PBS file creation
The main function used in this tutorial is [dlctemplate.py](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/wetb/prepost/dlctemplate.py),
which creates all htc and PBS job scripts for the cases specified in the
subordinate Excel file folder.
The htc files are the main input files for HAWC2 simulations.
They are created by copying the master htc file in the ```_master/``` folder in
your htc directory and replacing all of the tags with the values specified in
the subordinate Excel files.
All of the htc files for a single case are saved in a case-specific folder in your
htc folder.
Thus, if you were running a standard DLB calculation for IEC 61400-1, your
folder structure after generating your htc files might look like this:
```
|-- $TURB_NAME/
| |-- $SET_ID/
| | |-- DLCs.xlsx
| | |-- _master/
| | | |-- $MASTER_NAME.htc
| | |-- DLCs/
| | |-- htc/
| | | |-- dlc12_iec61400-1ed3/
| | | | |-- dlc12_wsp04_wdir000_s2001.htc
| | | | |-- dlc12_wsp04_wdir000_s2002.htc
| | | | |-- ...
| | | |-- dlc13_iec61400-1ed3/
| | | |-- ...
```
The PBS job scripts are a series of text files that are used to tell the job
scheduler on the high-performance computing (HPC) cluster how to run each job.
These files end with ".p", and are saved to a folder ```pbs_in/``` that is
created in the main set ID folder on Gorm.
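
For orientation, the header of such a launch script typically looks something
like the following sketch (hypothetical contents; the files the toolbox
actually writes include more, e.g. the wine setup and copy commands):

```
#!/bin/sh
### Hypothetical sketch of a generated PBS header, not verbatim toolbox output
#PBS -N dlc12_wsp04_wdir000_s2001
#PBS -l walltime=04:00:00
#PBS -l nodes=1:ppn=1
cd $PBS_O_WORKDIR
# ... copy the model to scratch, run HAWC2, copy results back ...
```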
## 2. Tutorial
There are two ways to call ```dlctemplate.py```.
The first is to call the function directly.
The second is to wrap it in a job scheduler to submit the job to the HPC cluster.
The first option is fine if you have only a few htc files or if the job
scheduler is not working for some reason.
The second option is generally preferred.
### 2.1 Directly generate htc files
The htc and PBS files can be directly generated by running the following
commands from the set ID directory:
```
export PATH=/home/python/miniconda3/bin:$PATH
source activate wetb_py3
python /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/dlctemplate.py --prep
source deactivate
```
The ```export PATH``` command adds the miniconda bin directory to the path,
which is necessary for the toolbox.
The ```source activate wetb_py3``` and ```source deactivate``` are
Gorm-specific commands to activate the Wind Energy Toolbox Python environment.
The ```--prep``` option tells the script to run in preparation mode, in which
case it creates the htc and pbs files.
After running the commands in the above box on Gorm, you should have all of your
PBS input files in ```pbs_in/``` and all of your htc files in ```htc/```.
### 2.2 Generate files using job scheduler
From the set ID folder, run the following code:
```
qsub-wrap.py -f /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/dlctemplate.py --prep
```
## 3. Issues
If there are any problems with this tutorial, please [submit an issue](
https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues).
We will try to fix it as soon as possible.
......@@ -12,7 +12,7 @@ Default constants: [ref_ti] [ref_wind_speed] [tsr] [hub_height] [diameter] [t0]
Default functions: [Turb base name] [time stop] [turb_dx] [wsp factor] [wind_ramp_t1] [wind_ramp_factor1] [time_start]
"""turb_wsp[wsp]_s[seed]""" [t0]+[sim_time] "[wsp]*[time stop]/8192,0" [tsr]/[wsp] [t0] [wsp factor] [t0]
"""turb_wsp[wsp]_s[seed]""" [t0]+[sim_time] "[wsp]*[sim_time]/8192,0" [tsr]/[wsp] [t0] [wsp factor] [t0]
......
# Add your requirements here like:
six
cython
numpy>=1.4
scipy>=0.9
matplotlib
......
......@@ -236,16 +236,20 @@ class DLCHighLevel(object):
            pass
        elif not hasattr(self, "res_folder") or self.res_folder == "":
            files = glob.glob(os.path.join(self.res_path, "*" + ext)) + glob.glob(os.path.join(self.res_path, "*/*" + ext))
            if len(files) == 0:
                raise Exception('No *%s files found in:\n%s or\n%s' % (ext, self.res_path, os.path.join(self.res_path, "*/")))
        else:
            files = []
            for dlc_id in fatigue_dlcs:
                dlc_id = str(dlc_id)
                if "%" in self.res_folder:
                    folder = self.res_folder % dlc_id
                else:
                    folder = self.res_folder
                dlc_files = glob.glob(os.path.join(self.res_path, folder, "*" + ext))
                if len(dlc_files) == 0:
                    raise Exception('DLC%s included in fatigue analysis, but no *%s files found in:\n%s' % (dlc_id, ext, os.path.join(self.res_path, folder)))
                files.extend(dlc_files)
        keys = list(zip(*self.dist_value_keys))[1]
        fmt = self.format_tag_value
        tags = [[fmt(tag.replace(key, "")) for tag, key in zip(os.path.basename(f).split("_"), keys)] for f in files]
......
......@@ -19,6 +19,7 @@ from __future__ import print_function
from __future__ import unicode_literals
from __future__ import absolute_import
from future import standard_library
import warnings
standard_library.install_aliases()
import numpy as np
from wetb.fatigue_tools.rainflowcounting import rainflowcount
......@@ -104,7 +105,9 @@ def eq_load_and_cycles(signals, no_bins=46, m=[3, 4, 6, 8, 10, 12], neq=[10 ** 6
    if 0:  # to be similar to windap
        ampl_bin_mean = (ampl_bin_edges[:-1] + ampl_bin_edges[1:]) / 2
    cycles, ampl_bin_mean = cycles.flatten(), ampl_bin_mean.flatten()
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        eq_loads = [[((np.nansum(cycles * ampl_bin_mean ** _m) / _neq) ** (1. / _m)) for _m in np.atleast_1d(m)] for _neq in np.atleast_1d(neq)]
    return eq_loads, cycles, ampl_bin_mean, ampl_bin_edges
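
For reference, the guarded list comprehension above evaluates the standard damage-equivalent-load expression: with rainflow cycle counts $n_i$ in amplitude bins of mean amplitude $S_i$, Wöhler exponent $m$, and equivalent cycle count $N_{eq}$,

$$L_{eq} = \left(\frac{\sum_i n_i S_i^m}{N_{eq}}\right)^{1/m}$$

which is what ```eq_loads``` computes for each ```_m``` and ```_neq```.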
......@@ -156,11 +159,12 @@ def cycle_matrix(signals, ampl_bins=10, mean_bins=10, rainflow_func=rainflow_win
    if isinstance(ampl_bins, int):
        ampl_bins = np.linspace(0, 1, num=ampl_bins + 1) * ampls[weights > 0].max()
    cycles, ampl_edges, mean_edges = np.histogram2d(ampls, means, [ampl_bins, mean_bins], weights=weights)
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        ampl_bin_sum = np.histogram2d(ampls, means, [ampl_bins, mean_bins], weights=weights * ampls)[0]
        ampl_bin_mean = np.nanmean(ampl_bin_sum / np.where(cycles, cycles, np.nan), 1)
        mean_bin_sum = np.histogram2d(ampls, means, [ampl_bins, mean_bins], weights=weights * means)[0]
        mean_bin_mean = np.nanmean(mean_bin_sum / np.where(cycles, cycles, np.nan), 1)
    cycles = cycles / 2  # to get full cycles
    return cycles, ampl_bin_mean, ampl_edges, mean_bin_mean, mean_edges
......
......@@ -64,7 +64,8 @@ def rainflow_windap(signal, levels=255., thresshold=(255 / 50)):
    check_signal(signal)
    #type <double> is required by <find_extreme> and <rainflow>
    signal = signal.astype(np.double)
    if np.all(np.isnan(signal)):
        return None
    offset = np.nanmin(signal)
    signal -= offset
    if np.nanmax(signal) > 0:
......
......@@ -38,9 +38,13 @@ from .gtsdf import save
from .gtsdf import load
from .gtsdf import append_block
from .gtsdf import load_pandas
from .gtsdf import add_statistic
from .gtsdf import load_statistic
from .gtsdf import compress2statistics
class Dataset(object):
    def __init__(self, filename):
        self.filename = filename
        self.time, self.data, self.info = load(filename)

    def __call__(self, id):
        if isinstance(id, str):
......@@ -64,8 +68,8 @@ class Dataset(object):
        try:
            return self(name)
        except Exception as e:
            # for i, n in enumerate(self.info['attribute_names']):
            #     print(i, n)
            raise e

    def __contains__(self, name):
......
......@@ -3,6 +3,7 @@ from builtins import zip
from builtins import range
from builtins import str
from future import standard_library
from wetb.fatigue_tools.fatigue import eq_load
standard_library.install_aliases()
import warnings
from wetb.gtsdf.unix_time import from_unix
......@@ -13,6 +14,7 @@ except ImportError as e:
import os
import numpy as np
import numpy.ma as ma
import pandas as pd
block_name_fmt = "block%04d"
def load(filename, dtype=None):
......@@ -89,80 +91,95 @@ def load(filename, dtype=None):
'type': 'General time series data format',
'description': 'MyDatasetDescription'}
"""
    f = _open_h5py_file(filename)
    try:
        info = _load_info(f)
        time, data = _load_timedata(f, dtype)
        return time, data, info
    finally:
        try:
            f.close()
        except:
            pass

def _open_h5py_file(filename):
    if isinstance(filename, h5py.File):
        f = filename
        filename = f.filename
    else:
        assert os.path.isfile(filename), "File, %s, does not exists" % filename
        f = h5py.File(filename, 'r')
    return f

def decode(v):
    if isinstance(v, bytes):
        return v.decode('latin1')
    elif hasattr(v, 'len'):
        return [decode(v_) for v_ in v]
    return v

def _load_info(f):
    info = {k: decode(v) for k, v in f.attrs.items()}
    check_type(f)
    if 'name' not in info:
        info['name'] = os.path.splitext(os.path.basename(f.filename))[0]
    if 'attribute_names' in f:
        info['attribute_names'] = [v.decode('latin1') for v in f['attribute_names']]
    if 'attribute_units' in f:
        info['attribute_units'] = [v.decode('latin1') for v in f['attribute_units']]
    if 'attribute_descriptions' in f:
        info['attribute_descriptions'] = [v.decode('latin1') for v in f['attribute_descriptions']]
    return info

def _load_timedata(f, dtype):
    no_blocks = f.attrs['no_blocks']
    if (block_name_fmt % 0) not in f:
        raise ValueError("HDF5 file must contain a group named '%s'" % (block_name_fmt % 0))
    block0 = f[block_name_fmt % 0]
    if 'data' not in block0:
        raise ValueError("group %s must contain a dataset called 'data'" % (block_name_fmt % 0))
    _, no_attributes = block0['data'].shape
    if dtype is None:
        file_dtype = f[block_name_fmt % 0]['data'].dtype
        if "float" in str(file_dtype):
            dtype = file_dtype
        elif file_dtype in [np.int8, np.uint8, np.int16, np.uint16]:
            dtype = np.float32
        else:
            dtype = np.float64
    time = []
    data = []
    for i in range(no_blocks):
        try:
            block = f[block_name_fmt % i]
        except KeyError:
            continue
        no_observations, no_attributes = block['data'].shape
        block_time = (block.get('time', np.arange(no_observations))[:]).astype(np.float64)
        if 'time_step' in block.attrs:
            block_time *= block.attrs['time_step']
        if 'time_start' in block.attrs:
            block_time += block.attrs['time_start']
        time.extend(block_time)
        block_data = block['data'][:].astype(dtype)
        if "int" in str(block['data'].dtype):
            block_data[block_data == np.iinfo(block['data'].dtype).max] = np.nan
        if 'gains' in block:
            block_data *= block['gains'][:]
        if 'offsets' in block:
            block_data += block['offsets'][:]
        data.append(block_data)
    if no_blocks > 0:
        data = np.vstack(data)
    return np.array(time).astype(np.float64), np.array(data).astype(dtype)
def save(filename, data, **kwargs):
"""Save a 'General Time Series Data Format'-hdf5 datafile
......@@ -226,36 +243,44 @@ def save(filename, data, **kwargs):
time_step=2,
dtype=np.float64)
"""
    if not filename.lower().endswith('.hdf5'):
        filename += ".hdf5"
    # exist_ok does not exist in Python27
    if not os.path.exists(os.path.dirname(os.path.abspath(filename))):
        os.makedirs(os.path.dirname(os.path.abspath(filename)))  # , exist_ok=True)
    _save_info(filename, data.shape, **kwargs)
    append_block(filename, data, **kwargs)

def _save_info(filename, data_shape, **kwargs):
    f = h5py.File(filename, "w")
    try:
        f.attrs["type"] = "General time series data format"
        no_observations, no_attributes = data_shape
        if 'name' in kwargs:
            f.attrs['name'] = kwargs['name']
        if 'description' in kwargs:
            f.attrs['description'] = kwargs['description']
        f.attrs['no_attributes'] = no_attributes
        if 'attribute_names' in kwargs:
            if no_attributes:
                assert len(kwargs['attribute_names']) == no_attributes, "len(attribute_names)=%d but data shape is %s" % (len(kwargs['attribute_names']), data_shape)
            f.create_dataset("attribute_names", data=np.array([v.encode('utf-8') for v in kwargs['attribute_names']]))
        if 'attribute_units' in kwargs:
            if no_attributes:
                assert(len(kwargs['attribute_units']) == no_attributes)
            f.create_dataset("attribute_units", data=np.array([v.encode('utf-8') for v in kwargs['attribute_units']]))
        if 'attribute_descriptions' in kwargs:
            if no_attributes:
                assert(len(kwargs['attribute_descriptions']) == no_attributes)
            f.create_dataset("attribute_descriptions", data=np.array([v.encode('utf-8') for v in kwargs['attribute_descriptions']]))
        f.attrs['no_blocks'] = 0
    except Exception:
        raise
    finally:
        f.close()
def append_block(filename, data, **kwargs):
"""Append a data block and corresponding time data to already existing file
......@@ -398,3 +423,42 @@ def check_type(f):
raise ValueError("HDF5 file must contain a 'type'-attribute with the value 'General time series data format'")
if 'no_blocks' not in f.attrs:
raise ValueError("HDF5 file must contain an attribute named 'no_blocks'")
def _get_statistic(time, data, statistics=['min', 'mean', 'max', 'std', 'eq3', 'eq4', 'eq6', 'eq8', 'eq10', 'eq12']):
    def get_stat(stat):
        if hasattr(np, stat):
            return getattr(np, stat)(data, 0)
        elif (stat.startswith("eq") and stat[2:].isdigit()):
            m = float(stat[2:])
            return [eq_load(sensor, 46, m, time[-1] - time[0] + time[1] - time[0])[0][0] for sensor in data.T]
    return np.array([get_stat(stat) for stat in statistics]).T

def _add_statistic_data(file, stat_data, statistics=['min', 'mean', 'max', 'std', 'eq3', 'eq4', 'eq6', 'eq8', 'eq10', 'eq12']):
    f = h5py.File(file, "a")
    stat_grp = f.create_group("Statistic")
    stat_grp.create_dataset("statistic_names", data=np.array([v.encode('utf-8') for v in statistics]))
    stat_grp.create_dataset("statistic_data", data=stat_data.astype(np.float))
    f.close()

def add_statistic(file, statistics=['min', 'mean', 'max', 'std', 'eq3', 'eq4', 'eq6', 'eq8', 'eq10', 'eq12']):
    time, data, info = load(file)
    stat_data = _get_statistic(time, data, statistics)
    _add_statistic_data(file, stat_data, statistics)

def load_statistic(filename):
    f = _open_h5py_file(filename)
    info = _load_info(f)
    names = decode(f['Statistic']['statistic_names'])
    data = np.array(f['Statistic']['statistic_data'])
    return pd.DataFrame(data, columns=names), info

def compress2statistics(filename, statistics=['min', 'mean', 'max', 'std', 'eq3', 'eq4', 'eq6', 'eq8', 'eq10', 'eq12']):
    time, data, info = load(filename)
    stat_data = _get_statistic(time, data, statistics)
    _save_info(filename, data.shape, **info)
    _add_statistic_data(filename, stat_data, statistics)
'''
Created on 12/09/2013

@author: mmpe
'''
from __future__ import division
from __future__ import unicode_literals
from __future__ import print_function
from __future__ import absolute_import
from builtins import super
from builtins import range
from future import standard_library
standard_library.install_aliases()
import h5py
import numpy as np
from wetb import gtsdf
import unittest
import os

tmp_path = os.path.dirname(__file__) + "/tmp/"
tfp = os.path.dirname(os.path.abspath(__file__)) + "/test_files/"

class Test_gsdf(unittest.TestCase):
    def setUp(self):
        unittest.TestCase.setUp(self)
        if not os.path.isdir(tmp_path):
            os.makedirs(tmp_path)

    @classmethod
    def tearDownClass(cls):
        super(Test_gsdf, cls).tearDownClass()
        #shutil.rmtree(tmp_path)

    def test_gtsdf_stat(self):
        time, data, info = gtsdf.load(tfp + 'test.hdf5')
        print(data.shape)
        fn = tmp_path + "test_stat.hdf5"
        gtsdf.save(fn, data, time=time, **info)
        gtsdf.add_statistic(fn)
        stat_data, info = gtsdf.load_statistic(fn)
        self.assertEqual(data[:, 0].min(), stat_data.values[0, 0])
        self.assertEqual(stat_data.shape, (49, 10))

    def test_gtsdf_compress2stat(self):
        time, data, info = gtsdf.load(tfp + 'test.hdf5')
        fn = tmp_path + "test_compress2stat.hdf5"
        gtsdf.save(fn, data, time=time, **info)
        gtsdf.save(tmp_path + "test_compress2stat2.hdf5", data, time=time, dtype=np.float, **info)
        gtsdf.compress2statistics(fn)
        self.assertLess(os.path.getsize(fn) * 50, os.path.getsize(tfp + 'test.hdf5'))

if __name__ == "__main__":
    #import sys;sys.argv = ['', 'Test.testName']
    unittest.main()
......@@ -67,10 +67,10 @@ class AEFile(object):
        ae_data = self.ae_sets[set_nr]
        index = np.searchsorted(ae_data[:, 0], radius)
        index = max(1, index)
        setnrs = ae_data[index - 1:index + 1, 3]
        if setnrs[0] != setnrs[-1]:
            raise NotImplementedError
        return setnrs[0]
......
......@@ -49,8 +49,9 @@ class AtTimeFile(object):
        self.blade_radius = bladetip_radius
        with open(filename, encoding='utf-8') as fid:
            lines = fid.readlines()
        atttribute_name_line = [l.strip().startswith("# Radius_s") for l in lines].index(True)
        #self.attribute_names = lines[atttribute_name_line].lower().replace("#", "").split()
        self.attribute_names = [n.strip() for n in lines[atttribute_name_line].lower().split("#")[1:]]
        data = np.array([[float(l) for l in lines[i].split()] for i in range(atttribute_name_line + 1, len(lines))])
        self.data = data

        def func_factory(column):
......@@ -97,10 +98,15 @@ class AtTimeFile(object):
if __name__ == "__main__":
    at = AtTimeFile(r'U:\hama\HAWC2-AVATAR\res/avatar-7ntm-scaled-rad.dat')
    at.attribute_names  # Attribute names
    at[:3, 1]  # first 3 twist rows
    print(len(at.attribute_names))
    print("\n".join(at.attribute_names))
    print(at.data.shape)
    #at.twist()[:3]  # Twist first 3 twist rows
    #print(at.twist(36, curved_length=True))  # Twist at curved_length = 36 (interpolated)
    #print(at.twist(36))  # Twist at 36 (interpolated)
......@@ -14,53 +14,54 @@ from wetb.hawc2.st_file import StFile
# class BladeInfo(object):
# def twist(self, radius=None):
# """Aerodynamic twist [deg]. Negative when leading edge is twisted towards wind(opposite to normal definition)\n
#
# Parameters
# ---------
# radius : float or array_like, optional
# radius [m] of interest
# If None (default) the twist of all points are returned
#
# Returns
# -------
# twist [deg] : float
# """
# raise NotImplementedError()
#
# def chord(self, radius=None):
# """Aerodynamic chord
#
# Parameters
# ---------
# radius : float or array_like, optional
# radius [m] of interest
# If None (default) the twist of all points are returned
#
# Returns
# -------
# chord [m] : float
# """
# raise NotImplementedError()
#
#
#
# def c2nd(self, radius_nd):
# """Return center line position
#
# Parameters
# ---------
# radius_nd : float or array_like, optional
# non dimensional radius
#
# Returns
# -------
# x,y,z : float
# """
class H2aeroBladeInfo(PCFile, AtTimeFile):
    """Provide HAWC2 info about a blade

    From AE file:
......@@ -85,45 +86,64 @@ class H2BladeInfo(BladeInfo, PCFile, AtTimeFile):
"""
    def __init__(self, htcfile, ae_filename=None, pc_filename=None, at_time_filename=None, blade_name=None):
        if isinstance(htcfile, str):
            assert htcfile.lower().endswith('.htc')
            htcfile = HTCFile(htcfile)
        self.htcfile = htcfile
        blade_name = blade_name or htcfile.aero.link[2]
        #s = htcfile.new_htc_structure
        at_time_filename = at_time_filename or ("output_at_time" in htcfile and os.path.join(htcfile.modelpath, htcfile.output_at_time.filename[0] + ".dat"))
        pc_filename = pc_filename or os.path.join(htcfile.modelpath, htcfile.aero.pc_filename[0])
        ae_filename = ae_filename or os.path.join(htcfile.modelpath, htcfile.aero.ae_filename[0])
        #mainbodies = [s[k] for k in s.keys() if s[k].name_ == "main_body"]
        #self.mainbody_blade = [mb for mb in mainbodies if mb.name[0] == blade_name][0]
        if os.path.isfile(pc_filename) and os.path.isfile(ae_filename):
            PCFile.__init__(self, pc_filename, ae_filename)
            blade_radius = self.ae_sets[1][-1, 0]
        if os.path.isfile(at_time_filename):
            AtTimeFile.__init__(self, at_time_filename, blade_radius)
            self.curved_length = self.radius_curved_ac()[-1]
        else:
            raise NotImplementedError
            #z_nd = (np.cos(np.linspace(np.pi, np.pi*2,len(curved_length)-1))+1)/2
            #self.curved_length = np.cumsum(np.sqrt(np.sum(np.diff(self.c2def[:, :3], 1, 0) ** 2, 1)))[-1]
        self.hawc2_splines_data = self.hawc2_splines()
    @property
    def c2def(self):
        if not hasattr(self, "_c2def"):
            self._c2def = np.array([v.values[1:5] for v in self.htcfile.blade_c2_def if v.name_ == "sec"])
        return self._c2def

    def c2nd(self, l_nd, curved_length=True):
        curve_l_nd, x, y, z, twist = self.hawc2_splines_data
        if curved_length:
            l_nd = (np.max([np.min([l_nd, np.ones_like(l_nd)], 0), np.zeros_like(l_nd) + self.c2def[0, 2] / self.c2def[-1, 2]], 0))
            return np.array([np.interp(l_nd, curve_l_nd, xyz) for xyz in [x, y, z, twist]]).T
        else:
            l_nd = (np.max([np.min([l_nd, np.ones_like(l_nd)], 0), np.zeros_like(l_nd)], 0))
            return np.array([np.interp(l_nd, z / z[-1], xyz) for xyz in [x, y, z, twist]]).T

    def c2(self, l, curved_length=True):
        if curved_length:
            L = self.curved_length
        else:
            L = self.c2def[-1, 2]
        return self.c2nd(l / L, curved_length)

    def hawc2_splines(self):
        curve_l = np.r_[0, np.cumsum(np.sqrt(np.sum(np.diff(self.c2def[:, :3], 1, 0) ** 2, 1)))]
        curve_l_nd = curve_l / curve_l[-1]

        def akima(x, y):
            n = len(x)
......@@ -166,10 +186,11 @@ class H2BladeInfo(BladeInfo, PCFile, AtTimeFile):
                z = np.linspace(0, s[i + 1] - s[i], 10, endpoint=i >= co.shape[0] - 2)
                x.extend(s[i] + z)
                y.extend(p(z))
            return x, y

        x, y, z, twist = [coef2spline(curve_l_nd, akima(curve_l_nd, self.c2def[:, i]))[1] for i in range(4)]
        curve_l_nd = coef2spline(curve_l_nd, akima(curve_l_nd, self.c2def[:, 0]))[0]
        return curve_l_nd, x, y, z, twist

    def xyztwist(self, l=None, curved_length=False):
        """Return splined x,y,z and twist
......@@ -188,44 +209,120 @@ class H2BladeInfo(BladeInfo, PCFile, AtTimeFile):
        x,y,z,twist
        """
        if l is None:
            return self.c2def[:, :3]
        else:
            r_nd = np.linspace(0, 1, 1000)
            if curved_length:
                #curved_length = np.cumsum(np.sqrt((np.diff(self.c2nd(np.linspace(0,1,100)),1,0)[:,:3]**2).sum(1)))
                return self.c2nd(l / self.radius_curved_ac()[-1])
                # z_nd = (np.cos(np.linspace(np.pi, np.pi*2,len(curved_length)-1))+1)/2
                # assert np.all(l>=curved_length[0]) and np.all(l<=curved_length[-1])
                # return self.c2nd(r_nd[np.argmin(np.abs(curved_length-l))+1])
            else:
                assert np.all(l >= self.c2def[0, 2]) and np.all(l <= self.c2def[-1, 2])
                return self.c2nd(l / self.c2def[-1, 2])
class H2BladeInfo(H2aeroBladeInfo):
    """Provide HAWC2 info about a blade

    From AE file:
    - chord(radius=None, set_nr=1):
    - thickness(radius=None, set_nr=1)
    - radius_ae(radius=None, set_nr=1)

    From PC file
    - CL(radius, alpha, ae_set_nr=1)
    - CD(radius, alpha, ae_set_nr=1)
    - CM(radius, alpha, ae_set_nr=1)

    From at_time_filename
    - attribute_names
    - xxx(radius=None, curved_length=None) # xxx for each attribute name
    - radius_curved_ac(radius=None) # Curved length of nearest/all aerodynamic calculation points

    From ST file
    - radius_st(radius=None, mset=1, set=1)
    - xxx(radius=None, mset=1, set=1) # xxx for each of r, m, x_cg, y_cg, ri_x, ri_y, xs, ys, E, G, Ix, Iy, K, kx, ky, A, pitch, xe, ye
    with template
    """

    def __init__(self, htcfile, ae_filename=None, pc_filename=None, at_time_filename=None, st_filename=None, blade_name=None):
        if isinstance(htcfile, str):
            htcfile = HTCFile(htcfile)
        s = htcfile.new_htc_structure
        mainbodies = [s[k] for k in s.keys() if s[k].name_ == "main_body"]
        if blade_name is None:
            blade_name = htcfile.aero.link[2]
        self.mainbody_blade = [mb for mb in mainbodies if mb.name[0] == blade_name][0]
        H2aeroBladeInfo.__init__(self, htcfile, ae_filename=ae_filename, pc_filename=pc_filename, at_time_filename=at_time_filename, blade_name=blade_name)
        st_filename = st_filename or os.path.join(htcfile.modelpath, self.mainbody_blade.timoschenko_input.filename[0])
        if os.path.isfile(st_filename):
            StFile.__init__(self, st_filename)

    @property
    def c2def(self):
        if not hasattr(self, "_c2def"):
            self._c2def = np.array([v.values[1:5] for v in self.mainbody_blade.c2_def if v.name_ == "sec"])
        return self._c2def
#
# class H2aeroBladeInfo(H2BladeInfo):
#
# def __init__(self, at_time_filename, ae_filename, pc_filename, htc_filename):
# """
# at_time_filename: file name of at time file containing twist and chord data
# """
# PCFile.__init__(self, pc_filename, ae_filename)
# blade_radius = self.ae_sets[1][-1,0]
# AtTimeFile.__init__(self, at_time_filename, blade_radius)
#
# assert('twist' in self.attribute_names)
# htcfile = HTCFile(htc_filename)
#
#
# self.c2def = np.array([v.values[1:5] for v in htcfile.blade_c2_def if v.name_ == "sec"])
# #self._c2nd = interp1d(self.c2def[:, 2] / self.c2def[-1, 2], self.c2def[:, :3], axis=0, kind='cubic')
#
# ac_radii = self.radius_curved_ac()
# self.hawc2_splines_data = self.hawc2_splines()
# self.curved_length = np.cumsum(np.sqrt(np.sum(np.diff(self.c2def[:, :3], 1, 0) ** 2, 1)))[-1]
#
# # c2_axis_length = np.r_[0, np.cumsum(np.sqrt((np.diff(self.c2def[:, :3], 1, 0) ** 2).sum(1)))]
# # self._c2nd = interp1d(c2_axis_length / c2_axis_length[-1], self.c2def[:, :3], axis=0, kind='cubic')
# #self._c2nd = interp1d(self.c2def[:,2]/self.c2def[-1,2], self.c2def[:, :3], axis=0, kind='cubic')
# #
# # def c2nd(self, r_nd):
# # r_nd_min = np.zeros_like(r_nd) + self.c2def[0, 2] / self.c2def[-1, 2]
# # r_nd_max = np.ones_like(r_nd)
# # r_nd = np.max([np.min([r_nd, r_nd_max], 0), r_nd_min], 0)
# # return self._c2nd(r_nd)
#