Tag categories (C = constant, F = function, V = variable):
C F V V V C F C C C C C
Tags:
[Case folder] [Case id.] [wdir] [seed] [wsp] [turb_format] [TI] [cutin_t0] [Induction] [Rotor azimuth] [Free shaft rot] [Rotor locked]
Example row (DLC 8.1):
DLC81_IEC61400-1Ed3 """DLC81_wsp[wsp]_wdir[wdir]_s[seed]""" 352 6 18 1 "([ref_ti]*(0,75*[wsp]+5,6))/[wsp]" 1000 0 180 ;
8
Master spreadsheets to generate the set of spreadsheets required as inputs to the DLB calculator.
Each sheet defines the tags of a DLC, except the main one. The main sheet defines: wind turbine parameters, default tag values, and gust and turbulence definitions.
Tags are divided into 3 categories: constants (C), variables (V), and functions (F). The category is specified in the line above the tag.
Constants do not change within a DLC. Variables define the number of cases within a DLC through their combinations (see the sketch below). Functions are tags that depend on other tags through an expression.
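To illustrate how the variable (V) tags generate the cases of a DLC, here is a minimal Python sketch. It is not part of the spreadsheet itself, and the tag values below are invented for the example:

```python
import itertools

# hypothetical variable-tag values for one DLC sheet
variables = {
    '[wsp]': [4, 6, 8],        # wind speeds
    '[wdir]': [350, 0, 10],    # wind directions
    '[seed]': [1001, 1002],    # turbulence seeds
}

# every combination of the variable-tag values becomes one simulation case
cases = [dict(zip(variables, combo))
         for combo in itertools.product(*variables.values())]
print(len(cases))  # 3 * 3 * 2 = 18 cases
```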
Parameters: Vrate Vout
12 26
Default constants: [ref_ti] [ref_wind_speed] [tsr] [hub_height] [diameter] [t0] [wdir] [shear_exp] [out_format] [gust] [gust_type] [G_A] [G_phi0] [G_t0] [G_T] [Rotor azimuth] [Free shaft rot] [init_wr] [Pitch 1 DLC22b] [Rotor locked] [Time stuck DLC22b] [Cut-in time] [Stop type] [Pitvel 1] [Pitvel 2] [Grid loss time] [Time pitch runaway] [Induction] [Dyn stall] [dis_setbeta] [long_scale_param] [t_flap_on] [turb_format] [staircase] [Rotor azimuth] [sim_time] [Cut-out time]
0.16 50 8.0 90 178 100 0 0.2 hawc_binary ; 0 0.5 0 ; -1 1 1 4 6 10000 10000 1 2 42 20 1 ; 0 600 10000
Default functions: [Turb base name] [time stop] [turb_dx] [wsp factor] [wind_ramp_t1] [wind_ramp_factor1] [time_start]
"""turb_wsp[wsp]_s[seed]""" [t0]+[sim_time] "[wsp]*[sim_time]/8192,0" [tsr]/[wsp] [t0] [wsp factor] [t0]
Gusts:
EOG "min([1,35*(0,8*1,4*[ref_wind_speed]-[wsp]);3,3*[TI]*[wsp]/(1+0,1*[diameter]/[long_scale_param])])"
ECD 15
EWS "(2,5+0,2*6,4*[TI]*[wsp]*([diameter]/[long_scale_param])**0,25)/[diameter]"
EDC "4*arctan([TI]/(1+0,1*[diameter]/[long_scale_param]))*180/pi"
Turbulence:
NTM "([ref_ti]*(0,75*[wsp]+5,6))/[wsp]"
ETM "2*[ref_ti]*(0,072*(0,2*[ref_wind_speed]/2+3)*([wsp]/2-4)+10)/[wsp]"
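The expressions above follow the spreadsheet locale (comma as decimal separator, semicolon as argument separator). As a readability aid only, the NTM function tag corresponds to the following Python transcription, assuming [ref_ti] and [wsp] map to plain variables:

```python
def ntm_ti(wsp, ref_ti=0.16):
    """Turbulence intensity of the IEC normal turbulence model at wind speed wsp [m/s]."""
    return ref_ti * (0.75 * wsp + 5.6) / wsp

print(ntm_ti(10.0))  # TI at 10 m/s, using the default [ref_ti] of 0.16
```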
Wind speeds:
4
6
8
10
12
14
16
18
20
22
24
26
28
30
32
34
36
38
40
42
44
46
48
50
# Update conda ```py36-wetb``` environment and ```wetb```
There are pre-configured miniconda/anaconda python environments installed on
Gorm and Jess at:
```
/home/python/miniconda3/envs/py36-wetb
```
Note that this path lives on the home drives of Gorm and Jess respectively, and
thus refers to two different directories that happen to share the same name.
Update the root Anaconda environment:
```
conda update --all
```
Activate the ```py36-wetb``` environment:
```
source activate py36-wetb
```
Update the ```py36-wetb``` environment:
```
conda update --all
```
Pull latest wetb changes and create re-distributable binary wheel package for ```py36-wetb```:
```
cd /home/MET/repositories/toolbox/WindEnergyToolbox
git pull
python setup.py bdist_wheel -d dist/
```
And install the wheel package (```*.whl```):
```
pip install --no-deps -U dist/wetb-X.Y.Z.post0.devXXXXXXXX-cp35m-linux_x86_64.whl
```
The option ```--no-deps``` is used here to avoid pip installing possibly newer
versions of packages that should be managed by conda. This only works when all
dependencies of ```wetb``` are met (which is assumed to be the case for the
```py36-wetb``` environment).
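As a quick sanity check (not part of the original instructions), you can confirm from within the activated environment that the freshly installed wheel is the version being imported:

```python
# run inside the activated py36-wetb environment
import wetb
print(wetb.__version__)
```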
How to use the Statistics DataFrame
===================================
Introduction
------------
The statistical data of your post-processed load cases are saved in the HDF
format. You can use Pandas to retrieve and organize that data. Pandas organizes
the data in a DataFrame; the library is powerful and comprehensive, but requires
some learning. There are extensive resources out in the wild that will help
you get started:
* A [list](http://pandas.pydata.org/pandas-docs/stable/tutorials.html)
of good tutorials can be found in the Pandas
[documentation](http://pandas.pydata.org/pandas-docs/version/0.16.2/).
* A short and simple
[tutorial](https://github.com/DTUWindEnergy/Python4WindEnergy/blob/master/lesson%204/pandas.ipynb)
as used for the Python 4 Wind Energy course.
The data is organized in a simple 2-dimensional table. However, since the statistics
of each channel are included for multiple simulations, the data set is effectively
3-dimensional (a short pivot example at the end of the next section makes this
explicit). As an example, this is what a table could look like:
```
[case_id]  [channel name]  [mean]  [std]  [windspeed]
sim_1      pitch           0       1      8
sim_1      rpm             1       7      8
sim_2      pitch           2       9      9
sim_2      rpm             3       2      9
sim_3      pitch           0       1      7
```
Each row is a channel of a certain simulation, and the columns represent the
following:
* a tag from the master file and the corresponding value for the given simulation
* the channel name, description, units and unique identifier
* the statistical parameters of the given channel
Load the statistics as a pandas DataFrame
-----------------------------------------
Pandas has some very powerful functions that help with analysing large and
complex DataFrames. The documentation is extensive and is supplemented with
various tutorials. You can use
[10 Minutes to pandas](http://pandas.pydata.org/pandas-docs/stable/10min.html)
as a first introduction.
Loading the pandas DataFrame table works as follows:
```python
import pandas as pd
df = pd.read_hdf('path/to/file/name.h5', 'table')
```
Some tips for inspecting the data:
```python
import numpy as np

# check the available data columns
for colname in sorted(df.keys()):
    print(colname)

# list all available channels
print(np.sort(df['channel'].unique()))

# list the different load cases
df['[Case folder]'].unique()
```
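As noted in the introduction, the table is effectively 3-dimensional (simulation × channel × statistic). One way to make this explicit is to pivot the table so that each simulation becomes a row and each channel a column. This is a minimal sketch, assuming ```df``` was loaded as shown above and that each case/channel pair occurs only once:

```python
# one row per simulation, one column per channel, cells holding the mean value
df_mean = df.pivot(index='[case_id]', columns='channel', values='mean')
print(df_mean.head())
```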
Reduce memory footprint using categoricals
------------------------------------------
When the DataFrame is consuming too much memory, you can try to reduce its
size by using categoricals. A more extensive introduction to categoricals can be
found
[here](http://pandas.pydata.org/pandas-docs/stable/faq.html#dataframe-memory-usage)
and [here](http://matthewrocklin.com/blog/work/2015/06/18/Categoricals/).
The basic idea is to replace each string value with an integer, and keep an index
that maps the integer back to the original string. This trick only pays off
when you have long strings that occur multiple times throughout your data set.
The following example shows how you can use categoricals to reduce the memory
usage of a pandas DataFrame:
```python
# load a certain DataFrame
df = pd.read_hdf(fname, 'table')

# print the total estimated memory usage
print('%10.02f MB' % (df.memory_usage(index=True).sum()/1024.0/1024.0))

# the data type of a column that contains strings is called object;
# convert objects to categories to reduce memory consumption
for column_name, column_dtype in df.dtypes.items():
    # applying categoricals mostly makes sense for objects, we ignore all others
    if column_dtype.name == 'object':
        df[column_name] = df[column_name].astype('category')

print('%10.02f MB' % (df.memory_usage(index=True).sum()/1024.0/1024.0))
```
Python has a garbage collector working in the background that deletes
un-referenced objects. In some cases it might help to actively trigger the
garbage collector as follows, in an attempt to free up memory while a script
is close to exhausting the available memory:
```python
import gc
gc.collect()
```
Load a DataFrame that is too big for memory in chunks
-----------------------------------------------------
When a DataFrame is too big to load into memory at once, and you have already
compressed your data using categoricals (as explained above), you can read
the DataFrame one chunk at a time. A chunk is simply a selection of rows: for
example, you can read 1000 rows at a time by setting ```chunksize=1000```
when calling ```pd.read_hdf()```:
```python
# load a large DataFrame in chunks of 1000 rows
for df_chunk in pd.read_hdf(fname, 'table', chunksize=1000):
    print('DataFrame chunk contains %i rows' % len(df_chunk))
```
We will read a large DataFrame as chunks into memory, and select only those
rows that belong to dlc12:
```python
# only select one DLC, and place it in one DataFrame. If the data
# belonging to one DLC is still too big for memory, this approach will fail

# create an empty DataFrame in which we collect the results we want
df_dlc12 = pd.DataFrame()
for df_chunk in pd.read_hdf(fname, 'table', chunksize=1000):
    # organize the chunk: all rows for which [Case folder] is the same
    # end up in a single group. Each group is now a DataFrame for which
    # [Case folder] has the same value.
    for group_name, group_df in df_chunk.groupby(df_chunk['[Case folder]']):
        # if we have the group with dlc12, save it for later
        if group_name == 'dlc12_iec61400-1ed3':
            df_dlc12 = pd.concat([df_dlc12, group_df])  # , ignore_index=True
```
Plot wind speed vs rotor speed
------------------------------
```python
# select the channels of interest
for group_name, group_df in df_dlc12.groupby(df_dlc12['channel']):
    # iterate over all channel groups, but only do something with the channels
    # we are interested in
    if group_name == 'Omega':
        # we save the case_id tag and the mean value of channel Omega
        df_rpm = group_df[['[case_id]', 'mean']].copy()
        # note we made a copy because we will change the DataFrame in the next line
        # rename the column mean to something more useful
        df_rpm.rename(columns={'mean': 'Omega-mean'}, inplace=True)
    elif group_name == 'windspeed-global-Vy-0.00-0.00--127.00':
        # we save the case_id tag, the mean value of the wind channel, and the
        # value of the Windspeed tag
        df_wind = group_df[['[case_id]', 'mean', '[Windspeed]']].copy()
        # note we made a copy because we will change the DataFrame in the next line
        # rename the mean of the wind channel to something more useful
        df_wind.rename(columns={'mean': 'wind-mean'}, inplace=True)

# join both results on the case_id value so the mean RPM and mean wind speed
# refer to the same simulation/case_id
df_res = pd.merge(df_wind, df_rpm, on='[case_id]', how='inner')

# now we can plot RPM vs wind speed
from matplotlib import pyplot as plt
plt.plot(df_res['wind-mean'].values, df_res['Omega-mean'].values, '*')
```
'''
Created on 28. jul. 2017
@author: mmpe
'''
import os
import subprocess
def _run_git_cmd(cmd, git_repo_path=None):
    git_repo_path = git_repo_path or os.getcwd()
    if not os.path.isdir(os.path.join(git_repo_path, ".git")):
        raise Warning("'%s' does not appear to be a Git repository." % git_repo_path)
    try:
        process = subprocess.Popen(cmd,
                                   stdout=subprocess.PIPE,
                                   stderr=subprocess.PIPE,
                                   universal_newlines=True,
                                   cwd=os.path.abspath(git_repo_path))
        stdout, stderr = process.communicate()
        if process.returncode != 0:
            raise EnvironmentError("%s\n%s" % (stdout, stderr))
        return stdout.strip()
    except EnvironmentError as e:
        raise e
        raise Warning("unable to run git")


def get_git_branch(git_repo_path=None):
    cmd = ["git", "rev-parse", "--abbrev-ref", "HEAD"]
    return _run_git_cmd(cmd, git_repo_path)


def get_git_version(git_repo_path=None):
    cmd = ["git", "describe", "--tags", "--dirty", "--always"]
    # format it will return: 'v0.1.0-12-g22668f0'
    v = _run_git_cmd(cmd, git_repo_path)
    # convert to something PyPI likes: 0.1.2.dev3.123456
    # see also https://setuptools.pypa.io/en/latest/userguide/distribution.html
    # and/or PEP440 https://peps.python.org/pep-0440/
    v = v.replace('-', '.post', 1)
    return v


def get_tag(git_repo_path=None, verbose=False):
    tag = _run_git_cmd(['git', 'describe', '--tags', '--always', '--abbrev=0'],
                       git_repo_path)
    if verbose:
        print(tag)
    return tag


def set_tag(tag, push, git_repo_path=None):
    _run_git_cmd(["git", "tag", tag], git_repo_path)
    if push:
        _run_git_cmd(["git", "push"], git_repo_path)
        _run_git_cmd(["git", "push", "--tags"], git_repo_path)


def update_git_version(version_module, git_repo_path=None):
    """Update <version_module>.__version__ to git version"""
    version_str = get_git_version(git_repo_path)
    assert os.path.isfile(version_module.__file__)
    with open(version_module.__file__, "w") as fid:
        fid.write("__version__ = '%s'" % version_str)
    # ensure file is written, closed and ready
    with open(version_module.__file__) as fid:
        fid.read()
    return version_str


def write_vers(vers_file='wetb/__init__.py', repo=None, skip_chars=1):
    """Writes out version string as follows:
    "last tag"-("nr commits since tag")-("branch name")-("hash commit")
    and where nr of commits since last tag is only included if >0,
    branch name is only included when not on master,
    and hash commit is only included when not at a tag (when nr of commits > 0)
    """
    if not repo:
        repo = os.getcwd()
    version_long = get_git_version(repo)
    branch = get_git_branch(repo)
    verel = version_long.split('-')
    # tag name
    version = verel[0][skip_chars:]
    # number of commits since last tag, only if >0
    nr_commits = 0
    if len(verel) > 1:
        try:
            nr_commits = int(verel[1])
        except ValueError:
            nr_commits = -1
    if nr_commits > 0:
        version += '-' + verel[1]
    # branch name, only when NOT on master
    if branch != 'master':
        version += '-' + branch
    # hash commit, only if not at tag
    if len(verel) > 2 and nr_commits > 0:
        # first character of the hash is always a g (not part of the hash)
        version += '-' + verel[2][1:]
    # "-HEAD" can end up appended to the version, which pypi does not like
    if version.endswith('-HEAD'):
        version = version[:-5]
    print(version_long)
    print('Writing version: {} in {}'.format(version, vers_file))
    with open(vers_file, 'r') as f:
        lines = f.readlines()
    for n, l in enumerate(lines):
        if l.startswith('__version__'):
            lines[n] = "__version__ = '{}'\n".format(version)
    for n, l in enumerate(lines):
        if l.startswith('__release__'):
            lines[n] = "__release__ = '{}'\n".format(version)
    with open(vers_file, 'w') as f:
        f.write(''.join(lines))
    return version


def rename_dist_file():
    for f in os.listdir('dist'):
        if f.endswith('whl'):
            split = f.split('linux')
            new_name = 'manylinux1'.join(split)
            old_path = os.path.join('dist', f)
            new_path = os.path.join('dist', new_name)
            os.rename(old_path, new_path)


def main():
    """Example of how to run (pytest-friendly)"""
    pass


if __name__ == '__main__':
    main()
[build-system]
requires = [
    "setuptools>=60",
    "setuptools-scm>=8.0",
]
build-backend = "setuptools.build_meta"
[project]
name = "wetb"
authors = [{name="DTU Wind and Energy Systems"}]
description = "The Wind Energy Toolbox (or ```wetb```, pronounced wee-tee-bee) is a collection of Python scripts that facilitate working with (potentially a lot of) HAWC2, HAWCStab2, FAST or other text-input-based simulation tools."
dependencies = [
'certifi',
'click',
'Cython',
'h5py',
'Jinja2',
'lxml',
'matplotlib',
'pillow',
'mock',
'numpy',
'numba',
'openpyxl',
'pandas',
'paramiko',
'psutil',
'pytest',
'pytest-cov',
'scipy',
'sshtunnel',
'tables',
'tqdm',
'xarray',
'xlwt',
'XlsxWriter',
]
license = {text = "wetb is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License (GPL, http://www.gnu.org/copyleft/gpl.html) as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version. wetb is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details http://www.gnu.org/licenses/ We encourage you to submit new code for possible inclusion in future versions of wetb."}
dynamic = ["version"]
[project.urls]
repository = "https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox"
documentation = "https://toolbox.pages.windenergy.dtu.dk/WindEnergyToolbox/"
[project.optional-dependencies]
prepost = ["openpyxl", "tables", "xlwt", "Cython"]
all = ["openpyxl", "tables", "xlwt", "Cython", "paramiko", "sshtunnel", 'pytest', 'mock', 'click']
[tool.setuptools_scm]
version_scheme = "no-guess-dev"
[tool.setuptools]
packages = ["wetb"]
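Note that the version is declared as ```dynamic``` and resolved by ```setuptools-scm``` at build time, so it is not hard-coded in this file. At runtime it can be queried from the installed package metadata; a minimal sketch, assuming ```wetb``` is installed:

```python
# read the version that setuptools-scm stamped into the installed metadata
from importlib.metadata import version

print(version("wetb"))
```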