@@ -3,17 +3,22 @@ dist/*
*.pyc
*.pyd
*egg-info*
*.so
.eggs
doctrees
/docs/_build
/.project
/.pydevproject
/.settings/org.eclipse.core.resources.prefs
/wetb/gtsdf/tests/tmp
/wetb/dlc/tests/test_files/res_all
/wetb/dlc/tests/test_files/res2_all
/wetb/hawc2/ascii2bin/ascii2bin_dist
/wetb/hawc2/tests/test_files/htcfiles/tmp.htc
/wetb/hawc2/ascii2bin/tests/test_files/Hawc2ascii_bin.sel
/wetb/hawc2/ascii2bin/tests/test_files/Hawc2ascii_bin.dat
docs/_build
.project
.pydevproject
.settings/org.eclipse.core.resources.prefs
wetb/gtsdf/tests/tmp
wetb/dlc/tests/test_files/res_all
wetb/dlc/tests/test_files/res2_all
wetb/hawc2/ascii2bin/ascii2bin_dist
wetb/hawc2/tests/test_files/htcfiles/tmp.htc
wetb/hawc2/ascii2bin/tests/test_files/Hawc2ascii_bin.sel
wetb/hawc2/ascii2bin/tests/test_files/Hawc2ascii_bin.dat
wetb/prepost/tests/data/demo_dlc/remote*
/wetb/fatigue_tools/rainflowcounting/compile.py
/docs/api
/htmlcov
before_script:
- apt-get update
# uncomment first time
#- rm -rf TestFiles
#- git submodule update --init
- git submodule sync
- git submodule update
test-3.4:
image: mmpe/wetb
script:
- python3 setup.py test
\ No newline at end of file
#- python3 setup.py test
- python3 -m pytest --cov=wetb
[submodule "TestFiles"]
path = TestFiles
url = https://gitlab.windenergy.dtu.dk/toolbox/TestFiles.git
@@ -3,4 +3,5 @@ Developers
==========
* Mads Mølgaard Pedersen
* David Verelst
* David R.S. Verelst
* Carlo Tibaldi
Contributions
-------------
If you make a change in the toolbox that others can benefit from, please make a merge request.
If you can, please submit a merge request with the fix or improvements including tests.
The workflow to make a merge request is as follows:
- Create a feature branch, branching away from master
- Write tests and code
- Push the commit(s) to your fork
- Submit a merge request (MR) to the master branch of the main repository
- Link any relevant issues in the merge request description and leave a comment on them with a link back to the MR
- Your tests should run as fast as possible, and if they use test files, these files should be as small as possible.
- Please keep the changes in a single MR as small as possible. Split the functionality if you can
\ No newline at end of file
|build status| |coverage report|
Introduction
============
The Wind Energy Toolbox (or ``wetb``, pronounced wee-tee-bee) is a
collection of Python scripts that facilitate working with (potentially a
lot of) HAWC2, HAWCStab2, FAST or other text-input-based simulation
tools.
Note that this toolbox is very much a WIP (work in progress). For
example, some of the functions in the `prepost <#prepost>`__ module have
similar functions in `Hawc2io <wetb/hawc2/Hawc2io.py>`__. These
different implementations will be merged in due time.
Both Python2 and Python3 are supported.
Installation
============
- `Simple user <docs/install.md>`__
- `Developer/contributor <docs/developer-guide.md>`__
Contents of WindEnergyToolbox, `wetb <wetb>`__
==============================================
Overview
~~~~~~~~
- `hawc2 <#hawc2>`__
- `gtsdf <#gtsdf>`__
- `fatigue\_tools <#fatigue_tools>`__
- `wind <#wind>`__
- `dlc <#dlc>`__
- `prepost <#prepost>`__
- `fast <#fast>`__
- `utils <#utils>`__
`hawc2 <wetb/hawc2>`__
~~~~~~~~~~~~~~~~~~~~~~
- `Hawc2io <wetb/hawc2/Hawc2io.py>`__: Read binary, ascii and flex
result files
- `sel\_file <wetb/hawc2/sel_file.py>`__: Read/write \*.sel (sensor
list) files
- `htc\_file <wetb/hawc2/htc_file.py>`__: Read/write/manipulate htc
files
- `ae\_file <wetb/hawc2/ae_file.py>`__: Read AE (aerodynamic blade
layout) files
- `pc\_file <wetb/hawc2/pc_file.py>`__: Read PC (profile coefficient)
files
- `st\_file <wetb/hawc2/st_file.py>`__: Read ST (structural properties)
files
- `shear\_file <wetb/hawc2/shear_file.py>`__: Create user defined shear
file
- `at\_time\_file <wetb/hawc2/at_time_file.py>`__: Read at
output\_at\_time files
- `log\_file <wetb/hawc2/log_file.py>`__: Read and interpret log files
- `ascii2bin <wetb/hawc2/ascii2bin>`__: Compress HAWC2 ascii result
files to binary
`gtsdf <wetb/gtsdf>`__
~~~~~~~~~~~~~~~~~~~~~~
General Time Series Data Format, a binary hdf5 data format for storing
time series data. - `gtsdf <wetb/gtsdf/gtsdf.py>`__: read/write/append
gtsdf files - `unix\_time <wetb/gtsdf/unix_time.py>`__: convert between
datetime and unix time (seconds since 1/1/1970)
`fatigue\_tools <wetb/fatigue_tools>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- `fatigue <wetb/fatigue_tools/fatigue.py>`__: Rainflow counting, cycle
matrix and equivalent loads
- `bearing\_damage <wetb/fatigue_tools/bearing_damage.py>`__: Calculate
a comparable measure of bearing damage
`wind <wetb/wind>`__
~~~~~~~~~~~~~~~~~~~~
- `shear <wetb/wind/shear.py>`__: Calculate and fit wind shear
`dlc <wetb/dlc>`__
~~~~~~~~~~~~~~~~~~
Module for working with "Design load cases" (code independent) -
`high\_level <wetb/dlc/high_level.py>`__: Class for working with the
high-level DLC Excel sheet
`prepost <wetb/prepost>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~
Module for creating an arbitrary number of HAWC2 simulations, and
optionally corresponding execution scripts for a PBS Torque cluster
(Linux), simple bash (Linux), or Windows batch scripts. A
post-processing module is also included that calculates statistical
parameters, performs rainflow counting for fatigue load calculations,
and creates load envelopes.
Additional documentation can be found here:
- `Getting started with DLBs <docs/getting-started-with-dlbs.md>`__
- `Generate DLB spreadsheets <docs/generate-spreadsheet.md>`__
- `Auto-generation of Design Load Cases <docs/howto-make-dlcs.md>`__
- `House rules for storing results on
``mimer/hawc2sim`` <docs/houserules-mimerhawc2sim.md>`__
- `How to use the Statistics
DataFrame <docs/using-statistics-df.md>`__
`fast <wetb/fast>`__
~~~~~~~~~~~~~~~~~~~~
Tools for working with NREL's FAST code (An aeroelastic computer-aided
engineering (CAE) tool for horizontal axis wind turbines) -
`fast\_io <wetb/fast/fast_io.py>`__: Read binary and ascii result files
`utils <wetb/utils>`__
~~~~~~~~~~~~~~~~~~~~~~
Other functions - `geometry <wetb/utils/geometry.py>`__: Different kind
of geometry conversion functions -
`process\_exec <wetb/utils/process_exec.py>`__: Run system command in
subprocess - `timing <wetb/utils/timing.py>`__: Decorators for
evaluating execution time of functions -
`caching <wetb/utils/caching.py>`__: Decorators to create cached
(calculate once) functions and properties
.. |build status| image:: https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/badges/master/build.svg
:target: https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/commits/master
.. |coverage report| image:: https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/badges/master/coverage.svg
:target: https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/commits/master
[![build status](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/badges/master/build.svg)](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/commits/master)
[![coverage report](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/badges/master/coverage.svg)](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/commits/master)
# Introduction
The Wind Energy Toolbox (or ```wetb```, pronounced wee-tee-bee) is a collection
@@ -10,72 +13,12 @@ some of the functions in the [prepost](#prepost) module have similar functions
in [Hawc2io](wetb/hawc2/Hawc2io.py). These different implementations will be
merged in due time.
# Python 3
This module currently only works under Python 3. If you are working in Python 2,
this could be a good moment to consider switching. If you are bound to Python 2
due to critical third-party dependencies, you are encouraged to cast your vote for
Python 2 compatibility in
[issue 1](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues/1).
Switching to Python 3 is in general a very good idea especially since Python 3.5
was released. Some even dare to say it
[is like eating your vegetables](http://nothingbutsnark.svbtle.com/porting-to-python-3-is-like-eating-your-vegetables).
You can automatically convert your code from Python 2 to 3 using the
[2to3](https://docs.python.org/2/library/2to3.html) utility which is included
in Python 2.7 by default. You can also write code that is compatible with both
2 and 3 at the same time (you can find additional resources in
[issue 1](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues/1)).
# Dependencies
* [numpy](http://www.numpy.org/)
* [scipy](http://scipy.org/scipylib/)
* [pandas](http://pandas.pydata.org/)
* xlrd
* h5py
* [matplotlib](http://matplotlib.org/)
* [pytables](http://www.pytables.org/)
* [pyscaffold](http://pyscaffold.readthedocs.org/en/)
* pytest
* six
Both Python2 and Python3 are supported.
# Installation
Detailed installation instructions, including how to install Python from scratch,
are described in the [detailed installation manual](docs/install-manual-detailed.md).
If you know what you are doing, you can install as follows:
```
python setup.py install
```
Or create a binary wheel distribution package with:
```
python setup.py bdist_wheel -d dist
```
# Tests
Only a small part of the code is currently covered by unit tests. More tests are
forthcoming.
- [Simple user](docs/install.md)
- [Developer/contributor](docs/developer-guide.md)
# Contents of WindEnergyToolbox, [wetb](wetb)
@@ -97,6 +40,7 @@ forthcoming.
- [htc_file](wetb/hawc2/htc_file.py): Read/write/manipulate htc files
- [ae_file](wetb/hawc2/ae_file.py): Read AE (aerodynamic blade layout) files
- [pc_file](wetb/hawc2/pc_file.py): Read PC (profile coefficient) files
- [st_file](wetb/hawc2/st_file.py): Read ST (structural properties) files
- [shear_file](wetb/hawc2/shear_file.py): Create user defined shear file
- [at_time_file](wetb/hawc2/at_time_file.py): Read at output_at_time files
- [log_file](wetb/hawc2/log_file.py): Read and interpret log files
@@ -112,7 +56,7 @@ General Time Series Data Format, a binary hdf5 data format for storing time seri
- [bearing_damage](wetb/fatigue_tools/bearing_damage.py): Calculate a comparable measure of bearing damage
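The equivalent-load concept used by the rainflow-counting tools can be illustrated with the standard damage-equivalent load formula, L_eq = (sum(n_i * S_i^m) / N_eq)^(1/m). This is a minimal sketch of that formula only, not the actual ```fatigue.py``` API:

```python
def equivalent_load(amplitudes, counts, m=10, neq=1e7):
    """Damage-equivalent load: (sum(n_i * S_i**m) / neq) ** (1/m).

    amplitudes : load-cycle amplitudes S_i (e.g. from rainflow counting)
    counts     : corresponding cycle counts n_i
    m          : Woehler exponent (material dependent)
    neq        : number of equivalent cycles
    """
    damage = sum(n * s ** m for s, n in zip(amplitudes, counts))
    return (damage / neq) ** (1.0 / m)
```

For example, a single load range of 2.0 occurring exactly ```neq``` times gives an equivalent load of 2.0 for any exponent.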
### [wind](wetb/wind)
- [shear](wetb/wind/shear.py): Calculate and fit wind shear
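For reference, fitting a power-law shear profile v(z) = v_ref * (z/z_ref)^alpha from two measurement heights reduces to a one-line expression. This is a generic sketch of the power law, not the ```shear.py``` API:

```python
import math

def power_law_exponent(v1, z1, v2, z2):
    """Shear exponent alpha from speeds v1, v2 measured at heights z1, z2:
    alpha = ln(v2/v1) / ln(z2/z1)."""
    return math.log(v2 / v1) / math.log(z2 / z1)

def extrapolate(v_ref, z_ref, z, alpha):
    """Wind speed at height z from a reference speed and shear exponent."""
    return v_ref * (z / z_ref) ** alpha
```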
### [dlc](wetb/dlc)
Module for working with "Design load cases" (Code independent)
@@ -125,6 +69,14 @@ corresponding execution scripts for a PBS Torque cluster (Linux), simple bash
that calculates statistical parameters, performs rainflow counting for fatigue
load calculations, and creates load envelopes.
Additional documentation can be found here:
- [Getting started with DLBs](docs/getting-started-with-dlbs.md)
- [Generate DLB spreadsheets](docs/generate-spreadsheet.md)
- [Auto-generation of Design Load Cases](docs/howto-make-dlcs.md)
- [House rules for storing results on ```mimer/hawc2sim```](docs/houserules-mimerhawc2sim.md)
- [How to use the Statistics DataFrame](docs/using-statistics-df.md)
### [fast](wetb/fast)
Tools for working with NREL's FAST code (An aeroelastic computer-aided engineering (CAE) tool for horizontal axis wind turbines)
- [fast_io](wetb/fast/fast_io.py): Read binary and ascii result files
@@ -136,9 +88,3 @@ Other functions
- [timing](wetb/utils/timing.py): Decorators for evaluating execution time of functions
- [caching](wetb/utils/caching.py): Decorators to create cached (calculate once) functions and properties
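The "calculate once" idea behind such caching decorators can be sketched with a plain memoizing wrapper. This is a generic illustration, not the actual ```wetb.utils.caching``` implementation:

```python
import functools

def cached(func):
    """Call func at most once per argument tuple; afterwards return the
    stored result instead of recomputing."""
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

calls = []  # records actual invocations, to show the caching effect

@cached
def slow_square(x):
    calls.append(x)  # only runs on a cache miss
    return x * x
```

Calling ```slow_square(4)``` twice computes the result once and serves the second call from the cache.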
# Note
This project has been set up using PyScaffold 2.5. For details and usage
information on PyScaffold see http://pyscaffold.readthedocs.org/.
Subproject commit cf64c48ffb671d35a457d0539e078bfe3eb07f0b
Background Information Regarding Wine
-------------------------------------
> Note that the steps described here are executed automatically by the
> configuration script
> [```config-wine-hawc2.sh```](https://gitlab.windenergy.dtu.dk/toolbox/pbsutils/blob/master/config-wine-hawc2.sh)
> in ```pbsutils```.
Configure Wine for Gorm
------------------------
You will also need to configure wine and place the HAWC2 executables in a
directory that wine knows about. First, activate the correct wine environment by
typing the following in a shell in your Gorm home directory (log in with
ssh (Linux, Mac) or PuTTY (MS Windows)):
```
g-000 $ WINEARCH=win32 WINEPREFIX=~/.wine32 wine test.exe
```
Optionally, you can also make an alias (a short name for a longer, more complex
command). In the ```.bashrc``` file in your home directory
(```/home/$USER/.bashrc```), add at the bottom of the file:
```
alias wine32='WINEARCH=win32 WINEPREFIX=~/.wine32 wine'
```
Add a folder called ```~/wine_exe/win32``` to your wine system's PATH so we can
copy all the HAWC2 executables in here:
```
EXE_DIR_WINE="z:/home/$USER/wine_exe/win32/"
printf 'REGEDIT4\n[HKEY_CURRENT_USER\\Environment]\n"PATH"="'"$EXE_DIR_WINE"'"\n' >> ./tmp.reg
WINEARCH=win32 WINEPREFIX=~/.wine32 wine regedit ./tmp.reg
rm ./tmp.reg
```
Now copy all the HAWC2 executables and DLLs (including the license manager)
to your wine directory. All the required executables, DLLs and the license
manager are located at ```/home/MET/hawc2exe```. The following
command will update your local directory with any new executables that have
been placed in ```/home/MET/hawc2exe/win32/```:
```
g-000 $ rsync -a /home/MET/hawc2exe/win32/* /home/$USER/wine_exe/win32/
```
Notice that the HAWC2 executables are named ```hawc2-latest.exe```,
```hawc2-118.exe```, etc. By default the latest version will be used and the user
does not need to specify it. However, when you need to compare different versions,
you can easily do so by specifying which case should be run with which
executable. The file ```hawc2-latest.exe``` will always be the latest HAWC2
version at ```/home/MET/hawc2exe/```. When a new HAWC2 is released you can
simply copy all the files from there again to update.
Configure Wine for Jess
------------------------
The same principles apply to Jess, and
[```config-wine-hawc2.sh```](https://gitlab.windenergy.dtu.dk/toolbox/pbsutils/blob/master/config-wine-hawc2.sh)
can be used to initialize and configure your wine environment.
Note that due to a bug in the specific version of wine that is installed on
Jess, ```config-wine-hawc2.sh``` applies the following command as a fix.
It is important to note that this fix has to be executed on each node at
the beginning of each new session:
```
j-000 $ WINEARCH=win32 WINEPREFIX=~/.wine32 winefix
```
```winefix``` is automatically included in the ```pbs_in``` scripts generated
by the toolbox.
# Developer guide
Thank you for your interest in developing wetb. This guide details how to
contribute to wetb in a way that is efficient for everyone.
## Contents
- [Fork](#fork-project)
- [Requirements](#requirements)
- [Install Python](#install-python)
- [Install/build dependencies](#installbuild-dependencies)
- [Get wetb](#get-wetb)
- [Install wetb](#install-wetb)
- [Contributions](#contributions)
- [Upload contributions](#upload-contributions)
- [Make and upload wheels](#make-and-upload-wheels)
## Fork project
We prefer that you make your contributions in your own fork of the project,
[make your changes](#contributions) and [make a merge request](#upload-contributions).
The project can be forked to your own user account via the \<Fork\> button on
the [frontpage](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox).
## Requirements
### Command line
This guide will use the command line (aka command prompt) frequently.
You can launch a Windows terminal as follows: press \<Start\> and type
"cmd" + \<Enter\>. A link to the command prompt should now be visible.
In case you want an alternative, more capable Windows terminal, you could consider
using [ConEmu](https://conemu.github.io/) (this is optional).
> ConEmu-Maximus5 is a Windows console emulator with tabs, which presents
> multiple consoles and simple GUI applications as one customizable GUI window
> with various features.
### Git
* Download and install Git version control system for Windows 64-bit
[here](https://git-scm.com/download/win). Only select the Windows Portable
option if you know what you are doing or if you do not have administrative
rights on your computer.
* Git comes with a simple GUI, but there are more and different options available
if you are not happy with it, see [here](https://git-scm.com/downloads/guis).
* On Windows we highly recommend [TortoiseGit](https://tortoisegit.org/). It
is a GUI integrated into the Windows Explorer.
## Install Python
For all platforms we recommend that you download and install Anaconda, a
professional-grade, full-blown scientific Python distribution.
### Installing Anaconda, activate root environment
* Download and install Anaconda (Python 3.5 version, 64 bit installer is
recommended) from <https://www.continuum.io/downloads>
> Note: The Python 2.7 or Python 3.5 choice of Anaconda only affects the
> root environment. You can always create additional environments using other
> Python versions, see below.
* Update the root Anaconda environment (type in a terminal):
```
>> conda update --all
```
* Activate the Anaconda root environment in a terminal as follows:
```
>> activate
```
and your terminal will do something like:
```
C:\Users\> activate
(root) C:\Users\>
```
Note that the name of the environment is now a prefix before the current path.
Use ```deactivate``` to deactivate the environment.
### Optionally, create other independent Anaconda environments
By using environments you can manage different Python installations with
different versions on your system. Creating environments is as easy as:
```
>> conda create -n py27 python=2.7
>> conda create -n py34 python=3.4
>> conda create -n py35 python=3.5
```
These environments can be activated as follows:
```
>> activate py27
>> activate py34
>> activate py35
```
The Python distribution in use will now be located in e.g. \<path_to_anaconda\>/env/py35/.
Use ```deactivate``` to deactivate the environment.
## Install/build dependencies
- Compiler (```wetb``` contains cython extensions that require a compiler):
- Linux: gcc (should be installed by default)
- Windows:
- Python 2.7: [Microsoft Visual C++ Compiler for Python 2.7](http://aka.ms/vcpython27),
or the [direct link](https://www.microsoft.com/en-gb/download/details.aspx?id=44266).
- Python 3.4: MS Visual Studio 2010
- Python 3.5: MS Visual Studio 2015 or [Visual C++ Build Tools](http://landinghub.visualstudio.com/visual-cpp-build-tools)
- Only one MS Visual Studio version can be installed, but you can for
example install MS Visual Studio 2010 alongside the Visual C++ Build Tools.
- [numpy](http://www.numpy.org/)
- [cython](http://cython.org/)
- [scipy](http://scipy.org/scipylib/)
- [pandas](http://pandas.pydata.org/)
- xlrd and xlwt from [python-excel](http://www.python-excel.org/)
- [openpyxl](http://openpyxl.readthedocs.org/en/default/)
- [h5py](http://www.h5py.org/)
- [matplotlib](http://matplotlib.org/)
- [pytables](http://www.pytables.org/)
- [pyscaffold](http://pyscaffold.readthedocs.org/en/)
- [pytest](https://pypi.python.org/pypi/pytest)
- [pytest-cov](https://pypi.python.org/pypi/pytest-cov/)
- six, [future](http://python-future.org/index.html)
- nose, sphinx, blosc, pbr, psutil, coverage, setuptools_scm
- [paramiko](http://www.paramiko.org/)
- [sshtunnel](https://github.com/pahaz/sshtunnel)
- [pandoc](http://pandoc.org/) , [pypandoc](https://pypi.python.org/pypi/pypandoc):
convert markdown formatted readme file to rst for PyPi compatibility. See also
issue #22. ```pandoc``` is available in Anaconda. When installing
```pypandoc``` via pip, you have to install ```pandoc``` via your package
manager (Linux/Mac).
- [twine](https://pypi.python.org/pypi/twine): upload package to
[PyPi](https://pypi.python.org/pypi)
Install the necessary Python dependencies using the conda package manager:
```
>> conda install setuptools_scm future h5py pytables pytest pytest-cov nose sphinx blosc pbr paramiko
>> conda install scipy pandas matplotlib cython xlrd coverage xlwt openpyxl psutil pandoc
>> conda install -c conda-forge pyscaffold sshtunnel twine pypandoc --no-deps
```
Note that ```--no-deps``` prevents newer packages from the ```conda-forge```
channel from being used instead of those from the default ```anaconda```
channel. Depending on which packages get overwritten, this might break your
Anaconda root environment. As such, ```--no-deps``` should be used for safety
(especially when operating from the root environment).
Note that:
- With Python 2.7, blosc fails to install.
- With Python 3.6, twine and pypandoc fail to install.
## Get wetb
Copy the https link from the front page of your fork of wetb:
```
>> git clone <https-link>
```
or via tortoise-git:
- Right-click in your working folder
- "Git Clone..."
- \<Ok\>
## Install wetb
```
>> cd WindEnergyToolbox
>> pip install -e . --no-deps
```
Note that the ```--no-deps``` option here is used for the same reason as explained
above for the ```conda-forge``` channel: it prevents pip from replacing packages
with newer versions than those available in the ```Anaconda``` channel.
## Update wetb
```
>> cd WindEnergyToolbox
>> git pull
>> pip install -e . --no-deps
```
## Run tests
Note that the tests should be executed from a clean repository that is not
used as a development installation with ```pip install -e .```. For example,
create a clone of your local git repository in which your development takes
place, but name the top-level folder something else:
```
>> git clone WindEnergyToolbox/ wetb_tests
>> cd wetb_tests
```
To make sure your git repository is clean, the following will remove all
untracked files and undo all uncommitted changes. WARNING: you will lose all
untracked files and changes!!
```
>> git clean -df && git checkout .
```
Now we have a clean repository that is not used as a development installation
directory, and we simply track our own local development git repository.
Use ```git pull``` to get the latest local commits.
```
>> python -m pytest --cov=wetb
```
## Contributions
If you make a change in the toolbox that others can benefit from, please make a merge request.
If you can, please submit a merge request with the fix or improvements including tests.
The workflow to make a merge request is as follows:
- Create a feature branch, branching away from master
- Write tests and code
- Push the commit(s) to your fork
- Submit a merge request (MR) to the master branch of the main repository
- Link any relevant issues in the merge request description and leave a comment on them with a link back to the MR
- Your tests should run as fast as possible, and if they use test files, these files should be as small as possible.
- Please keep the changes in a single MR as small as possible. Split the functionality if you can
## Upload contributions
To be written
## Make and upload wheels to PyPi
Workflow for creating and uploading wheels is as follows:
- Make tag: ```git tag "vX.Y.Z"```, and push tag to remote: ```git push --tags```
- In order to have a clean version number (which is determined automagically)
make sure your git working directory is clean (no uncommitted changes etc).
- ```pip install -e . --upgrade```
- ```python setup.py bdist_wheel -d dist``` (wheel includes compiled extensions)
- On Linux you will have to rename the binary wheel file
(see [PEP 513](https://www.python.org/dev/peps/pep-0513/) for a background discussion):
- from: ```wetb-0.0.5-cp35-cp35m-linux_x86_64.whl```
- to: ```wetb-0.0.5-cp35-cp35m-manylinux1_x86_64.whl```
- ```python setup.py sdist -d dist``` (for general source distribution installs)
- ```twine upload dist/*```
In case of problems:
- Make sure the version tag is compliant with
[PEP 440](https://www.python.org/dev/peps/pep-0440/), otherwise ```twine upload```
will fail. This means commit hashes cannot be part of the version number.
Note that when your git working directory is not clean, the scheme for automatic
version numbering will add ```dirty``` to the version number.
Auto-generation of DLB Spreadsheets
===================================
Introduction
------------
This manual explains how to automatically generate the set of spreadsheets that
defines a DLB and is required as input to the pre-processor.
This tool comes in handy in the following scenarios:
* a DLB for a new turbine needs to be generated;
* a different wind turbine class needs to be evaluated;
* a new parameter needs to be included in the htc file;
* different parameter variations are required, e.g. a different wind speed range or a different number of turbulence seeds.
The generator of the cases uses an input spreadsheet where the cases are defined
in a more compact way.
The tool is based on the "tags" concept that is used for the generation of the htc files.
Main spreadsheet
----------------
A main spreadsheet is used to define all the DLCs of the DLB. The file specifies the tags that are then required in the htc files.
The file has:
* a Main sheet where some wind turbine parameters are defined, the tags are initialized, and the definitions of turbulence and gusts are given;
* a series of other sheets, each defining a DLC. In these sheets the tags that change in that DLC are defined.
The tags are divided into three categories:
* Constants (C): tags that do not change in a DLC, e.g. simulation time, output format, ...;
* Variables (V): tags that define the number of cases in a DLC through their combinations, e.g. wind speed, number of turbulence seeds, wind direction, ...;
* Functions (F): tags that depend on other tags through an expression, e.g. turbulence intensity, case name, ....
In each sheet the type of tag is defined in the line above the tag by typing one of the letters C, V, or F.
Functions (F) tags
------------------
* Numbers can be converted to strings (for example when a tag refers to a file name)
by using double quotes ```"``` for Functions (F):
* ```"wdir_[wdir]deg_wsp_[wsp]ms"``` will result in the tags ``` [wdir]```
and ```[wsp]``` being replaced with formatted text.
* The following formatting rules are used:
* ```[wsp]```, ```[gridgustdelay]``` : ```02i```
* ```[wdir]```, ```[G_phi0]``` : ```03i```
* ```[Hs]```, ```[Tp]``` : ```05.02f```
* all other tags: ```04i```
* Only numbers in tags with double quotes are formatted. In all other cases
there is no formatting taking place and hence no loss of precision occurs.
* In this context, when using quotes, always use double quotes like ```"```.
Do not use single quotes ```'``` or any other quote character.
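In Python terms, the spreadsheet notation ```02i``` corresponds to the format specifier ```{:02d}```, ```05.02f``` to ```{:05.02f}```, and so on. A hypothetical helper (not the generator's actual code) that mirrors the rules above:

```python
def format_tag(name, value):
    """Apply the per-tag zero-padding rules listed in the manual."""
    rules = {
        '[wsp]': '{:02d}', '[gridgustdelay]': '{:02d}',   # 02i
        '[wdir]': '{:03d}', '[G_phi0]': '{:03d}',         # 03i
        '[Hs]': '{:05.02f}', '[Tp]': '{:05.02f}',         # 05.02f
    }
    # all other tags default to 04i
    return rules.get(name, '{:04d}').format(value)
```

With these rules, ```"wdir_[wdir]deg_wsp_[wsp]ms"``` evaluated for wdir=15, wsp=6 becomes ```wdir_015deg_wsp_06ms```.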
Variable (V) tags
-----------------
* ```[seed]``` and ```[wave_seed]``` are special variable tags. Instead of defining
a range of seeds, the user indicates the number of seeds to be used.
* ```[wsp]``` is a required variable tag
* ```[seed]``` should be placed in a column BEFORE ```[wsp]```
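Since the variable tags define the number of cases through their combinations, the resulting case count is the product of the value ranges, as in this illustration with made-up values (```itertools.product```, not the spreadsheet tool itself):

```python
import itertools

wsp = [4, 6, 8]        # hypothetical [wsp] values
wdir = [350, 0, 10]    # hypothetical [wdir] values
seeds = range(1, 7)    # six turbulence seeds per combination

cases = [dict(wsp=w, wdir=d, seed=s)
         for w, d, s in itertools.product(wsp, wdir, seeds)]
# 3 wind speeds * 3 directions * 6 seeds = 54 cases
```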
Generate the files
------------------
To generate the files defining the different DLCs, the following lines need to be executed:
```
export PATH=/home/python/miniconda3/bin:$PATH
source activate wetb_py3
python /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/GenerateDLCs.py --folder=DLCs
```
The first two lines activate the virtual environment. The third calls the routine *GenerateDLCs.py*, which generates the files.
The routine should be called from the folder *htc*, where the master spreadsheet *DLCs.xlsx* also needs to be located.
The generated files are placed in the folder *DLCs*.
# Getting started with generating DLBs for HAWC2
Note that DLB stands for Design Load Basis. It refers to a set of cases that are
used to evaluate the fitness of a certain design. An example of a DLB definition
is the IEC 61400-1ed3.
## Overview
This document intends to provide an extremely brief overview of how to run a set
of HAWC2 simulations using the Gorm cluster at DTU and the Mimer storage.
This document is a work in progress, and is by no means exhaustive.
## Resources
The majority of this information can be found in the Wind Energy Toolbox
documentation. In particular, [generate-spreadsheet](docs/generate-spreadsheet.md)
discusses how to use a "master" Excel spreadsheet to generate the subordinate
Excel spreadsheets that will later be used to create the necessary HTC files.
[howto-make-dlcs](docs/howto-make-dlcs.md) discusses how to create htc files
from the subordinate spreadsheets, submit those HTC files to the cluster,
and post-process results.
[houserules-mimerhawc2sim](docs/houserules-mimerhawc2sim.md) has some
"house rules" on storing simulations on mimer.
[using-statistics-df](docs/using-statistics-df.md) has some information
on loading the post-processing statistics using Python.
## Steps
##### 1. Make sure that you can access the cluster/mimer.
See the instructions on [this page](docs/howto-make-dlcs.md).
##### 2. Create a Set ID folder for this project/simulation.
You should find that, within a given turbine model, the folder structure is
similar to the following:
```
|-- DTU10MW/
| |-- AA0001
| | |-- ...
| |-- AA0002
| | |-- ...
| |-- ...
| |-- AB0001
| |-- ...
|-- AA_log_DTUMW.xlsx
|-- AB_log_DTUMW.xlsx
|-- ...
```
Here, each of these alphanumeric folders is a "set ID", and you should have a
unique set ID for each set of simulations. Detailed house rules on how you
should store data on mimer can be found in the
[houserules-mimerhawc2sim](docs/houserules-mimerhawc2sim.md) document.
There are two steps to creating your new set ID folder:
1. Determine if you need to create a new turbine model folder. You should only
do this when the turbulence box size changes (e.g., if the rotor size changes)
or if you have a model that's never been simulated on mimer.
2. Determine your set ID code. There are two scenarios:
* No one else in your project has run simulations on mimer. In this case,
create a new set ID alpha code (e.g., "AA", "AB", etc.).
* Simulations for this project/turbine configuration already exist. In this
case, use a pre-existing set ID alpha code and add one to the most recent
Set ID (e.g., if "AB0008" exists, your new folder should be "AB0009").
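The "add one to the most recent Set ID" step can be sketched as a small helper (hypothetical, not part of the toolbox):

```python
def next_set_id(current):
    """'AB0008' -> 'AB0009': keep the two-letter alpha code and
    increment the zero-padded four-digit number."""
    alpha, number = current[:2], int(current[2:])
    return '%s%04d' % (alpha, number + 1)
```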
##### 3. Add proper log files for your Set ID folder.
See the [house rules](docs/houserules-mimerhawc2sim.md) regarding log files.
##### 4. Add your model files.
Within your new Set ID folder, add your HAWC2 model files. Keep a folder
structure similar to this:
```
|-- control/
| |-- ...
|-- data/
| |-- ...
|-- htc/
| |-- _master/
| | |-- TURB_master_AA0001.htc
| |-- DLCs.xlsx
```
Your master htc file, stored in ```htc/_master/```, can take any desired naming
convention, but it must have ```_master_``` in the name or future scripts will
abort. ```htc/DLCs.xlsx``` is your master Excel file that will create the
subordinate Excel files in the coming steps.
##### 5. Create your subordinate Excel files.
From a terminal, change to your htc directory. Then run the following code:
```
$ export PATH=/home/python/miniconda3/bin:$PATH
$ source activate wetb_py3
$ python /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/GenerateDLCs.py --folder=DLCs
$ source deactivate
```
This will create a subfolder ```DLCs``` and fill it with the generated
subordinate Excel files.
##### 6. Create your htc files and PBS job scripts.
These files and scripts are generated from the subordinate Excel files from
Step 5. To do this, in the terminal, change up a level to your Set ID folder
(e.g., to folder "AB0001"). Then run this code:
```
$ qsub-wrap.py -f /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/dlctemplate.py --prep
```
Your htc files should now be placed in subfolders in the htc folder, and PBS
job files should be in folder ```pbs_in```.
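A quick way to verify the generation step is to count the created files; on the cluster you would run just the two ```find``` commands from the Set ID folder (the fabricated layout below only makes the sketch self-contained):

```bash
# Sketch: count generated htc files and PBS launch scripts after --prep.
sid=$(mktemp -d)
mkdir -p "$sid/htc/dlc12_iec61400-1ed3" "$sid/pbs_in/dlc12_iec61400-1ed3"
touch "$sid/htc/dlc12_iec61400-1ed3/dlc12_demo.htc"
touch "$sid/pbs_in/dlc12_iec61400-1ed3/dlc12_demo.p"
n_htc=$(find "$sid/htc" -name '*.htc' | wc -l)   # number of htc files
n_pbs=$(find "$sid/pbs_in" -name '*.p' | wc -l)  # number of PBS scripts
echo "htc files: $n_htc, PBS scripts: $n_pbs"
```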
##### 7. Launch the htc files to the cluster.
Use the ```launch.py``` script to launch the jobs on the cluster.
For example, the following code will launch the jobs in folder ```pbs_in``` on
100 nodes. You must be in the top-level Set ID folder for this to work (e.g.,
in folder "AB0001").
```
$ launch.py -n 100 -p pbs_in/
```
There are many launch options available. You can read more about the options
and querying the cluster configurations/status/etc. on
[this page](docs/howto-make-dlcs.md), or you can use the ```launch.py```
help function to print available launch options:
```
$ launch.py --help
```
##### 8. Post-process results.
The wetb script ```qsub-wrap.py``` can not only generate htc files but also
post-process results. For example, here is code to check the log files
and calculate the statistics, the AEP and the lifetime equivalent loads
(must be executed from the top-level Set ID folder):
```
$ qsub-wrap.py -f /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/dlctemplate.py --years=25 --neq=1e7 --stats --check_logs --fatigue
```
More details regarding loading the post-processed statistics dataframes
can be found here: [using-statistics-df](docs/using-statistics-df.md).
House Rules for ```mimer/hawc2sim``` and HAWC2 model folder structure
=====================================================================
Objectives
----------
* Re-use turbulence boxes (save disk space)
* Find each others simulations, review, re-run
* Find working examples of simulations, DLB's
* Avoid running the same DLB, simulations more than once
* Disk usage quota review: administrators will create an overview of disk usage
per turbine and user.
Basic structure
---------------
The HAWC2 simulations are located on the data capacitor
[mimer](http://mimer.risoe.dk/mimerwiki), on the following address:
```
# on Windows, use the following address when mapping a new network drive
\\mimer\hawc2sim
# on Linux you can use sshfs or mount -t cifs
//mimer.risoe.dk/hawc2sim
```
The following structure is currently used for this ```hawc2sim``` directory:
* turbine model (e.g. DTU10MW, NREL5MW, etc)
* set ID: 2 alphabetic characters followed by 4 numbers (e.g. AA0001)
* letters are task/project oriented, numbers are case oriented
For example:
* DTU10MW
* AA0001
* AA0002
* AB0001
* log_AA.xlsx
* log_BB.xlsx
* log_overview.xlsx
* NREL5MW
* AA0001
* AA0002
* BA0001
* log_AA.xlsx
* log_BB.xlsx
* log_overview.xlsx
House rules
-----------
* New Turbine model folder when a new size of the turbulence box is required
(i.e. when the rotor size is different)
* One "set ID" refers to one analysis, and it might contain more than one DLB
* If you realize more cases have to be included, add them in the same
"set ID". Don't start new "set ID" numbers.
* Each "set ID" number consists of 2 alphabetic characters followed by 4
numerical characters.
* Log file
* Indicate which DLB used for the given "set ID" in the log file
* Indicate the changes with respect to a previous "set ID"
* Write clear and concise log messages so others can understand what
analysis or which DLB is considered
* Indicate in the log if something works or not.
* Indicate if a certain "set ID" is used for a certain publication or report
* Keep a log file of the different letters. For instance AA might refer to load
simulations carried out within a certain project
* When results are outdated or wrong, delete the log and result files, but keep
the htc, data and pbs input files so the "set ID" could be re-run again in the
future. This is especially important if the given "set ID" has been used in a
publication, report or Master/PhD thesis.
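The clean-up rule above can be sketched as follows; the folder names are indicative (they follow the typical ```res```/```logfiles``` layout) and the demo uses a temporary folder instead of a real set ID:

```bash
# Sketch: delete result and log files but keep htc, data and pbs inputs so
# the set ID can be re-run later.
sid=$(mktemp -d)
mkdir -p "$sid/htc" "$sid/data" "$sid/pbs_in" "$sid/res" "$sid/logfiles"
touch "$sid/res/dlc12_demo.sel" "$sid/logfiles/dlc12_demo.log"
rm -rf "$sid/res" "$sid/logfiles"   # outdated results and logs are removed
ls "$sid"   # htc, data and pbs_in remain
```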
File permissions
----------------
* By default only the person who generated the simulations within a given
"set ID" can delete or modify the input files, other users have only read access.
If you want to give everyone read and write access, you do:
```
# replace demo/AA0001 with the relevant turbine/set id
g-000 $ cd /mnt/mimer/hawc2sim/demo
g-000 $ chmod 777 -R AA0001
```
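If you only want to grant read access to everyone while keeping write access to yourself, a more conservative alternative to ```chmod 777``` is sketched below (demo on a temporary folder; on mimer replace the path with the relevant turbine/set ID):

```bash
# Sketch: read + traverse access for everyone, write access only for the
# owner. Capital X sets the execute bit on directories only.
sid=$(mktemp -d)/AA0001
mkdir -p "$sid"
chmod -R a+rX "$sid"
ls -ld "$sid"
```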
HAWC2 folder structure
----------------------
The current DLB setup assumes the following HAWC2 model folder structure:
```
|-- control
| |-- ...
|-- data
| |-- ...
|-- htc
| |-- DLCs
| | |-- dlc12_iec61400-1ed3.xlsx
| | |-- dlc13_iec61400-1ed3.xlsx
| | |-- ...
| |-- _master
| | `-- dtu10mw_master_C0013.htc
```
The load case definitions should be placed in Excel spreadsheets with a
```*.xlsx``` extension. The above example shows one possible scenario whereby
all the load case definitions are placed in ```htc/DLCs``` (all folder names
are case sensitive). Alternatively, one can also place the spreadsheets in
separate sub folders, for example:
```
|-- control
| |-- ...
|-- data
| |-- ...
|-- htc
| |-- dlc12_iec61400-1ed3
| | |-- dlc12_iec61400-1ed3.xlsx
| |-- dlc13_iec61400-1ed3
| | |-- dlc13_iec61400-1ed3.xlsx
```
In order to use this auto-configuration mode, there can only be one master file
in ```_master``` that contains ```_master_``` in its file name.
For the NREL5MW and the DTU10MW HAWC2 models, you can find their respective
master files and DLC definition spreadsheet files on ```mimer/hawc2sim```.
Auto-generation of Design Load Cases
====================================
<!---
TODO, improvements:
do as on Arch Linux wiki: top line is the file name where you need to add stuff
explain the difference in the paths seen from a windows computer and the cluster

DONE:
- putty reference and instructions (fill in username in the address
username@gorm) [rink]
- how to mount gorm home on windows [rink]
- point to the gorm/jess wiki's [rink]
-->
> WARNING: these notes contain configuration settings that are specific to the
DTU Wind Energy cluster Gorm. Only follow this guide in another environment if
you know what you are doing!
Introduction
------------
......@@ -40,7 +44,7 @@ in the Excel spreadsheets): ```[Case folder]```, ```[Case id.]```, and
```[Turb base name]```.
The system will always force the values of the tags to be lower case anyway, and
when working on Windows, this might cause some confusing and unexpected behavior.
The tags themselves can have lower and upper case characters as can be seen
in the example above.
......@@ -54,69 +58,135 @@ line starts with ```g-000 $```. The command that needs to be entered starts
after the ```$```.
Pdap
----
You can also use the Pdap for post-processing, which includes a MS Word report
generator based on a full DLB, a GUI for easy plotting of HAWC2 result files,
and a Python scripting interface:
* [Pdap](http://www.hawc2.dk/Download/Post-processing-tools/Pdap)
* [Pdap report/docs](http://orbit.dtu.dk/en/publications/post-processing-of-design-load-cases-using-pdap%28827c432b-cf7d-44eb-899b-93e9c0648ca5%29.html)
Connecting to the cluster
-------------------------
We provide here an overview of how to connect to the cluster, but general,
up-to-date information can be found in the [HPC documentation](https://docs.hpc.ait.dtu.dk)
or on the [Gorm wiki](http://gorm.risoe.dk/gormwiki). Note that the
information from the Gorm wiki will be migrated into the HPC documentation
over time.
You connect to the cluster via an SSH terminal, and there are different SSH
terminals based on your operating system (see the platform-specific
instructions in the next subsections). The cluster can only be reached when
on the DTU network (wired, or only from a DTU computer when using a wireless
connection), when connected to the DTU VPN, or from one of the DTU
[databars](http://www.databar.dtu.dk/).
### Windows
Windows users are advised to use PuTTY, which can
be downloaded from
[this link](http://www.chiark.greenend.org.uk/~sgtatham/putty/).
Once you have installed PuTTY and placed the executable somewhere convenient
(e.g., the Desktop), double click on the executable. In the window that opens
up, enter/verify the following settings:
* Session > Host Name: gorm.risoe.dk
* Session > Port: 22
* Session > Connection type: SSH
* Session > Saved Sessions: Gorm
* Connection > Data > Auto-login username: your DTU username
* Connection > Data > When username is not specified: Use system username
* Window > Colours > Select a colour to adjust > ANSI Blue: RGB = 85, 85, 255
* Window > Colours > Select a colour to adjust > ANSI Bold Blue: RGB = 128, 128, 255
Note that these last two options are optional. We've found that the default
color for comments, ANSI Blue, is too dark to be seen on the black
background. The last two options in the list set ANSI Blue and ANSI Blue Bold
to be lighter and therefore easier to read when working in the terminal. Once
you have entered these options, click "Save" on the "Session" tab and close
the window.
With PuTTY configured, you can connect to Gorm by double-clicking the PuTTY
executable; then, in the window that opens select "Gorm" in "Saved Sessions",
click the "Load" button, and finally click the "Open" button. A terminal
window will open up. Type your DTU password in this new window when prompted
(your text will not appear in the window) and then hit the Enter key. You
should now be logged into Gorm.
To close the PuTTY window, you can either hit the red "X" in the upper-right
corner of the window or type "exit" in the terminal and hit enter.
More information on using PuTTY and how it works can be found in this
[PuTTY tutorial](http://www.ghacks.net/2008/02/09/about-putty-and-tutorials-including-a-putty-tutorial/)
or in the online
[documentation](http://the.earth.li/~sgtatham/putty/latest/htmldoc/).
You are also welcome to use Google and read the many online resources.
### Unix
Unlike Windows, SSH is supported out of the box for Linux and Mac OSX
terminals. To connect to the cluster, enter the following command into
the terminal:
```
ssh $USER@gorm.risoe.dk
```
Enter your DTU password when prompted. This will give you terminal access
to the Gorm cluster.
Mounting the cluster discs
--------------------------
When doing the HAWC2 simulations, you will interact regularly with the cluster
file system and discs. Thus, it can be very useful to have two discs mounted
locally so you can easily access them: 1) your home directory on Gorm and 2)
the HAWC2 simulation folder on Mimer.
You need to be connected to the DTU network (either directly or via VPN) for
the following instructions to work.
### Windows
On Windows, we recommend mapping the two drives to local network drives, which
means that you can navigate/copy/paste to/from them in Windows Explorer just as
you would do with normal folders on your computer. You may also use [WinSCP](http://winscp.net)
to interact with the cluster discs if you are more familiar with that option.
Here we provide instructions for mapping network drives in Windows 7. If these
instructions don't work for you, you can always find directions for your
version of Windows by Googling "map network drive windows $WIN_VERSION", where
$WIN_VERSION is your version number.
In Windows 7, you can map a network drive in the following steps:
1. Open a Windows Explorer window
2. Right-click on "Computer" and select "Map network drive"
3. Select any unused drive and type ```\\gorm.risoe.dk\$USER``` into the folder field,
replacing "$USER" with your DTU username (e.g., DTU user "ABCD" has a Gorm home
drive of ```\\gorm.risoe.dk\abcd```)
4. Check the "Reconnect at logon" box if you want to connect to this drive
every time you log into your computer (recommended)
5. Click the Finish button
6. Repeat Steps 1 through 5, replacing the Gorm home address in Step 3 with the
HAWC2 simulation folder address: ```\\mimer.risoe.dk\hawc2sim```
Note that by default Windows Explorer will hide some of the files you will need
to edit. In order to show all files on your Gorm home drive, you need to un-hide
system files: Explorer > Organize > Folder and search options > "View" tab >
Hidden files and folders > "Show hidden files, folders, and drives".
### Unix
From Linux/Mac, you should be able to mount using any of the following
addresses:
```
//mimer.risoe.dk/hawc2sim
//mimer.risoe.dk/well/hawc2sim
//gorm.risoe.dk/$USER
```
You can use either ```sshfs``` or ```mount -t cifs``` to mount the discs.
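For example, a typical ```sshfs``` invocation would look like the sketch below; the commands are only echoed here because they need the DTU network/VPN, and ```abcd``` is a placeholder username:

```bash
# Sketch: mount the Gorm home drive from Linux (echoed, not executed).
user=abcd                # replace with your DTU username
mnt="$HOME/mnt/gorm"
mkdir -p "$mnt"
echo "sshfs $user@gorm.risoe.dk:/home/$user $mnt"
# cifs alternative (usually needs root):
echo "sudo mount -t cifs //gorm.risoe.dk/$user $mnt -o user=$user"
```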
by editing the ```.bash_profile``` file in your Gorm's home directory
(or create a new file with this file name in case it doesn't exist):
```
export PATH=$PATH:/home/MET/repositories/toolbox/pbsutils/
```
(The corresponding open repository is on the DTU Wind Energy Gitlab server:
[pbsutils](https://gitlab.windenergy.dtu.dk/toolbox/pbsutils). Please
consider reporting bugs and/or suggesting improvements there. Your contributions
are much appreciated!)
> If you have been using an old version of this how-to, you might be pointing
to an earlier version of these tools/utils and any references containing
```cluster-tools``` or ```prepost``` should be removed
from your ```.bash_profile``` and/or ```.bashrc``` file on your Gorm home drive.
After modifying ```.bash_profile```, save and close it. Then, in the terminal,
run the command (or logout and in again to be safe):
```
g-000 $ source ~/.bash_profile
g-000 $ source ~/.bashrc
```
You will also need to configure wine and place the HAWC2 executables in your
local wine directory, which by default is assumed to be ```~/.wine32```, and
```pbsutils``` contains an automatic configuration script you can run:
```
g-000 $ /home/MET/repositories/toolbox/pbsutils/config-wine-hawc2.sh
```
If you need more information on what is going on, you can read a more detailed
description
[here](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/configure-wine.md).
All your HAWC2 executables and DLL's are now located
at ```/home/$USER/wine_exe/win32```.
Notice that the HAWC2 executable names are ```hawc2-latest.exe```,
```hawc2-118.exe```, etc. By default the latest version will be used and the user
does not need to specify this. However, when you need to compare different
versions you can easily do so by specifying which case should be run with which
executable.
Alternatively you can also include all the DLL's and executables in the root of
your HAWC2 model folder. Executables and DLL's placed in the root folder take
precedence over the ones placed in ```/home/$USER/wine_exe/win32```.
> IMPORTANT: log out and in again from the cluster (close and restart PuTTY)
> before trying to see if you can run HAWC2.
At this stage you can run HAWC2 as follows:
```
g-000 $ wine32 hawc2-latest htc/some-input-file.htc
```
Updating local HAWC2 executables
--------------------------------
When there is a new version of HAWC2, or when a new license manager is released,
you can update your local wine directory as follows:
```
g-000 $ rsync -au /home/MET/hawc2exe/win32 /home/$USER/wine_exe/win32 --progress
```
The file ```hawc2-latest.exe``` will always be the latest HAWC2
version at ```/home/MET/hawc2exe/```. When a new HAWC2 is released you can
simply copy all the files from there again to update.
HAWC2 model folder structure and results on mimer/hawc2sim
----------------------------------------------------------
See [house rules on
mimer/hawc2sim](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/houserules-mimerhawc2sim.md)
for a more detailed description.
Method A: Generating htc input files on the cluster (recommended)
-----------------------------------------------------------------
Use ssh (Linux, Mac) or putty (MS Windows) to connect to the cluster.
In order to simplify things, we're using ```qsub-wrap.py``` from ```pbsutils```
(which we added under the [preparation](#preparation) section) in order to
generate the htc files. It will execute, on a compute node, any given Python
script in a pre-installed Python environment that has the Wind Energy Toolbox
installed.
For the current implementation of the DLB the following template is available:
```
/home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/dlctemplate.py
```
And the corresponding definitions of all the different load cases can be copied
from here (valid for the DTU10MW):
```
/mnt/mimer/hawc2sim/DTU10MW/C0020/htc/DLCs
```
Note that ```dlctemplate.py``` does not require any changes or modifications
if you are only interested in running the standard DLB as explained here.
For example, in order to generate all the HAWC2 htc input files and the
corresponding ```*.p``` cluster launch files using this default DLB setup:
```
g-000 $ cd /mnt/mimer/hawc2sim/demo/A0001 # folder where the hawc2 model is located
g-000 $ qsub-wrap.py -f /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/dlctemplate.py --prep
```
You could consider adding ```dlctemplate.py``` into the turbine folder or in
the simulation set id folder for your convenience:
```
g-000 $ cd /mnt/mimer/hawc2sim/demo/
# copy the dlctemplate to your turbine model folder and rename to myturbine.py
g-000 $ cp /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/dlctemplate.py ./myturbine.py
g-000 $ cd A0001
g-000 $ qsub-wrap.py -f ../myturbine.py --prep
```
First activate the Anaconda Python environment by typing:
```bash
# add the Anaconda Python environment paths to the system PATH
g-000 $ export PATH=/home/python/miniconda3/bin:$PATH
# activate the custom python environment:
g-000 $ source activate wetb_py3
```
For example, launch the auto-generation of DLCs input files:
```
# folder where the HAWC2 model is located
g-000 $ cd /mnt/mimer/hawc2sim/demo/AA0001
# assuming myturbine.py is a copy of dlctemplate.py and is placed one level up
g-000 $ python ../myturbine.py --prep
```
Or start an interactive IPython shell:
Method C: Generating htc input files locally
--------------------------------------------
This approach gives you more flexibility and room for customizations, but you
will need to install a Python environment with all its dependencies locally.
Additionally, you need access to the cluster discs from your local workstation.
The installation procedure for wetb is outlined in the
[simple user](docs/install.md) or the
[developer/contributor](docs/developer-guide.md) installation manual.
Optional configuration
----------------------
Optional tags that can be set in the Excel spreadsheet and their corresponding
default values are given below. Beside a replacement value in the master htc
file, there are also special actions connected to these values. Consequently,
these tags have to be present. When removed, the system will stop working properly.
......@@ -355,6 +415,21 @@ Optional
* ```[mooring_dir] = False```, all files and sub-folders copied to node
* ```[hydro_dir] = False```, all files and sub-folders copied to node
The mooring line dll has a fixed name init file that has to be in the root of
the HAWC2 folder. When you have to use various init files (e.g. when the water
depth varies for different load cases) it would be convenient to be able
to control which init file is used for which case (e.g. water depth).
When running a load case for which the mooring lines will run in init mode:
* ```[copyback_f1] = 'ESYSMooring_init.dat'```
* ```[copyback_f1_rename] = 'mooringinits/ESYSMooring_init_vXYZ.dat'```
When using an a priori calculated init file for the mooring lines:
* ```[copyto_f1] = 'mooringinits/ESYSMooring_init_vXYZ.dat'```
* ```[copyto_generic_f1] = 'ESYSMooring_init.dat'```
Replace ```vXYZ``` with an appropriate identifier for your case.
A zip file will be created which contains all files in the model root directory,
and all the contents (files and folders) of the following directories:
```[control_dir], [mooring_dir], [hydro_dir], 'externalforce/', [data_dir]```.
......@@ -362,8 +437,73 @@ This zip file will be extracted into the execution directory (```[run_dir]```).
After the model has run on the node, only the files that have been created
during simulation time in the ```[log_dir]```, ```[res_dir]```,
```[animation_dir]```, and ```[eigenfreq_dir]``` will be copied back.
### Advanced configuration options by modifying dlctemplate.py
> Note that not all features are documented yet...
Special tags: copy special result files from the compute node back to the HAWC2
working directory on the network drive, and optionally rename the file in case
it would otherwise be overwritten by other cases in your DLB:
* ```[copyback_files] = ['ESYSMooring_init.dat']```
* ```[copyback_frename] = ['path/to/ESYSMooring_init_vXYZ.dat']```, optionally
specify a different file path/name
Copy files from the HAWC2 working directory with a special name to the compute
node for which a fixed file name is assumed:
* ```[copyto_files] = ['path/to/ESYSMooring_init_vXYZ.dat']```
* ```[copyto_generic] = ['ESYSMooring_init.dat']```
### Tags required for standalone Mann 64-bit turbulence generator
```dlctemplate.py``` has a flag named ```--pbs_turb```, which when activated
generates PBS input files containing the instructions to generate all required
turbulence boxes using the 64-bit version of the stand alone Mann turbulence
box generator. The appropriate input parameters are taken from the following
tags:
* ```[tu_model]```
* ```[Turb base name]```
* ```[MannAlfaEpsilon]```
* ```[MannL]```
* ```[MannGamma]```
* ```[seed]```
* ```[turb_nr_u]``` : number of grid points in the u direction
* ```[turb_nr_v]``` : number of grid points in the v direction
* ```[turb_nr_w]``` : number of grid points in the w direction
* ```[turb_dx]``` : grid spacing in meters in the u direction
* ```[turb_dy]``` : grid spacing in meters in the v direction
* ```[turb_dz]``` : grid spacing in meters in the w direction
* ```[high_freq_comp]```
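The grid tags determine the physical size of the turbulence box: the length in a given direction is commonly the number of grid points times the grid spacing. A quick sketch with made-up values (not recommended settings):

```bash
# Sketch: physical box length per direction = grid points * grid spacing.
turb_nr_u=8192   # [turb_nr_u], example value
turb_dx=0.9      # [turb_dx] in meters, example value
len_u=$(awk "BEGIN{print $turb_nr_u * $turb_dx}")
echo "box length in u: $len_u m"   # box length in u: 7372.8 m
```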
### Tags required for hydro file generation
* ```[hydro_dir]```
* ```[hydro input name]```
* ```[wave_type]``` : see HAWC2 manual for options
* ```[wave_spectrum]``` : see HAWC2 manual for options
* ```[wdepth]```
* ```[Hs]``` : see HAWC2 manual for options
* ```[Tp]``` : see HAWC2 manual for options
* ```[wave_seed]``` : see HAWC2 manual for options
And the corresponding section in the htc master file:
```
begin hydro;
begin water_properties;
rho 1027 ; kg/m^3
gravity 9.81 ; m/s^2
mwl 0.0;
mudlevel [wdepth];
wave_direction [wave_dir];
water_kinematics_dll ./wkin_dll.dll ./[hydro_dir][hydro input name].inp;
end water_properties;
end hydro;
```
Launching the jobs on the cluster
---------------------------------
number of cpu's requested (using ```-c``` or ```--nr_cpus```) and minimum
of required free cpu's on the cluster (using ```--cpu_free```, 48 by default).
Jobs will be launched after a predefined sleep time (as set by the
```--tsleep``` option, and set to 5 seconds by default). After the initial sleep
time a new job will be launched every 0.1 second. If the launch condition is not
met (```nr_cpus > cpu's used by user AND cpu's free on cluster > cpu_free```),
the program will wait 5 seconds before trying to launch a new job again.
time a new job will be launched every 0.5 second. If the launch condition is not
met:
```
nr_cpus > cpu's used by user
AND cpu's free on cluster > cpu_free
AND jobs queued by user < cpu_user_queue
```
the program will sleep 5 seconds before trying to launch a new job again.
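Read as a plain boolean test, the launch condition looks like the sketch below; all numbers are made up, since the real values come from the cluster status and the ```launch.py``` settings:

```bash
# Sketch of the launch condition with made-up numbers.
nr_cpus=100            # requested with -n/--nr_cpus
cpus_used_by_user=60   # currently running for this user
cpus_free=200          # free cpus on the cluster
cpu_free=48            # --cpu_free threshold
jobs_queued=10         # jobs queued by this user
cpu_user_queue=500     # --cpu_user_queue threshold
if [ "$nr_cpus" -gt "$cpus_used_by_user" ] \
   && [ "$cpus_free" -gt "$cpu_free" ] \
   && [ "$jobs_queued" -lt "$cpu_user_queue" ]; then
  verdict="launch next job"
else
  verdict="sleep 5 seconds and retry"
fi
echo "$verdict"   # launch next job
```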
Depending on the amount of jobs and the required computation time, it could
take a while before all jobs are launched. When running the launch script from
the login node, this might be a problem when you have to close your ssh/putty
session before all jobs are launched. In that case the user can use the
```--crontab``` argument: it will trigger the ```launch.py``` script every 5
minutes to check if more jobs can be launched until all jobs have been
executed. The user does not need to have an active ssh/putty session for this to
work. You can follow the progress and configuration of ```launch.py``` in
crontab mode in the following files:
* ```launch_scheduler_log.txt```
* ```launch_scheduler_config.txt```: you can change your launch settings on the fly
* ```launch_scheduler_state.txt```
* ```launch_pbs_filelist.txt```: remaining jobs, when a job is launched it is
removed from this list
You can check if ```launch.py``` is actually active as a crontab job with:
```
crontab -l
```
```launch.py``` will clean up the crontab after all jobs are launched, but if
you need to prevent it from launching new jobs before that, you can clean up your
crontab with:
```
crontab -r
```
The ```launch.py``` script has various different options, and you can read about
them by using the help function (the output is included for your convenience):
```bash
g-000 $ launch.py --help
usage: launch.py -n nr_cpus
launch.py --crontab when running a single iteration of launch.py as a crontab job every 5 minutes.
File list is read from "launch_pbs_filelist.txt", and the configuration can be changed on the fly
by editing the file "launch_scheduler_config.txt".
Options:
-h, --help show this help message and exit
--depend Switch on for launch depend method
-n NR_CPUS, --nr_cpus=NR_CPUS
full pbs file path. Escape backslashes! By default it
will select all *.p files in pbs_in/.
--dry dry run: do not alter pbs files, do not launch
--tsleep=TSLEEP Sleep time [s] when cluster is too bussy to launch new
jobs. Default=5 seconds
--tsleep_short=TSLEEP_SHORT
Sleep time [s] between between successive job
launches. Default=0.5 seconds.
--logfile=LOGFILE Save output to file.
-c, --cache If on, files are read from cache
--cpu_free=CPU_FREE No more jobs will be launched when the cluster does
not have the specified amount of cpus free. This will
make sure there is room for others on the cluster, but
might mean less cpus available for you. Default=48
--cpu_user_queue=CPU_USER_QUEUE
No more jobs will be launched after having
cpu_user_queue number of jobs in the queue. This
prevents users from filling the queue, while still
allowing to aim for a high cpu_free target. Default=5
--qsub_cmd=QSUB_CMD Is set automatically by --node flag
--node If executed on dedicated node. Although this works,
consider using --crontab instead. Default=False
--sort Sort pbs file list. Default=False
--crontab Crontab mode: %prog will check every 5 (default)
minutes if more jobs can be launched. Not compatible
with --node. When all jobs are done, crontab -r will
remove all existing crontab jobs of the current user.
Use crontab -l to inspect current crontab jobs, and
edit them with crontab -e. Default=False
--every_min=EVERY_MIN
Crontab update interval in minutes. Default=5
--debug Debug print statements. Default=False
```
Then launch the actual jobs (each job is a ```*.p``` file in ```pbs_in```) using
100 cpu's:
```bash
g-000 $ cd /mnt/mimer/hawc2sim/demo/A0001
g-000 $ launch.py -n 100 -p pbs_in/
```
If the launching process requires hours, and you have to close your SSH/PuTTY
session before it reaches the end, you can either use the ```--node``` or the
```--crontab``` argument. When using ```--node```, ```launch.py``` will run on
a dedicated cluster node, submitted as a PBS job. When using ```--crontab```,
```launch.py``` will be run once every 5 minutes as a ```crontab``` job on the
login node. This is preferred since you are not occupying a node with a very
simple and light job. ```launch.py``` will remove all the user's crontab jobs
at the end with ```crontab -r```.
```bash
g-000 $ cd /mnt/mimer/hawc2sim/demo/A0001
g-000 $ launch.py -n 100 -p pbs_in/ --crontab
```
your running HAWC2 model (replace 123456 with the relevant job id):
```
g-000 $ cd /scratch/$USER/123456.g-000.risoe.dk
```
You can find what HAWC2 (or whatever other executable you are running) is
outputting to the command line in the file:
```
/var/lib/torque/spool/JOBID.jess.dtu.dk.OU
```
Or, to watch what is happening at the end in real time:
```
# on Jess:
tail -f /var/lib/torque/spool/JOBID.jess.dtu.dk.OU
# on Gorm:
tail -f /var/spool/pbs/spool/JOBID.g-000.risoe.dk.OU
```
Re-launching failed jobs
------------------------
```
g-000 $ launch.py -n 100 --node -p pbs_in_failed
```
2. Use the ```--cache``` option, and edit the PBS file list in the file
```launch_pbs_filelist.txt``` so that only the simulations remain that have to be
run again. ```launch_pbs_filelist.txt``` is created every time you run
```launch.py```. You can use the option ```--dry``` to make a practice launch
run, and that will create a ```launch_pbs_filelist.txt``` file, but not a single
job will be launched.
3. Each pbs file can be launched manually as follows:
```
g-000 $ qsub pbs_in/<relative/path/to/pbs_file>.p
```

htc files, but now we set different flags. For example, for checking the log
files, calculating the statistics, the AEP and the life time equivalent loads:
```
# myturbine.py (copy of dlctemplate.py) is assumed to be located one folder up
g-000 $ qsub-wrap.py -f ../myturbine.py --years=25 --neq=1e7 --stats --check_logs --fatigue
```
Other options for the original ```dlctemplate.py``` script:
```
(wetb_py3) [dave@jess]$ python dlctemplate.py --help
usage: dlctemplate.py [-h] [--prep] [--check_logs]
[--pbs_failed_path PBS_FAILED_PATH] [--stats]
[--fatigue] [--AEP] [--csv] [--years YEARS]
[--no_bins NO_BINS] [--neq NEQ] [--rotarea ROTAREA]
[--save_new_sigs] [--dlcplot] [--envelopeblade]
[--envelopeturbine] [--zipchunks] [--pbs_turb]
[--walltime WALLTIME]
pre- or post-processes DLC's
optional arguments:
-h, --help show this help message and exit
--prep create htc, pbs, files
--check_logs check the log files
--pbs_failed_path PBS_FAILED_PATH
Copy pbs launch files of the failed cases to a new
directory in order to prepare a re-run. Default value:
pbs_in_failed.
--stats calculate statistics and 1Hz equivalent loads
--fatigue calculate Leq for a full DLC
--AEP calculate AEP, requires htc/DLCs/dlc_config.xlsx
--csv Save data also as csv file
--years YEARS Total life time in years
--no_bins NO_BINS Number of bins for fatigue loads
--neq NEQ Equivalent cycles Neq used for Leq fatigue lifetime
calculations.
--rotarea ROTAREA Rotor area for C_T, C_P
--save_new_sigs Save post-processed sigs
--dlcplot Plot DLC load basis results
--envelopeblade Compute envelopeblade
--envelopeturbine Compute envelopeturbine
--zipchunks Create PBS launch files forrunning in zip-chunk
find+xargs mode.
--pbs_turb Create PBS launch files to create the turbulence boxes
in stand alone mode using the 64-bit Mann turbulence
box generator. This can be usefull if your turbulence
boxes are too big for running in HAWC2 32-bit mode.
Only works on Jess.
--walltime WALLTIME Queue walltime for each case/pbs file, format:
HH:MM:SS Default: 04:00:00
```
The load envelopes are computed for sensors specified in the
```myturbine.py``` file. The sensors are specified in a list of lists. The
inner list contains the sensors at one location. The envelope is computed for
the first two sensors of the inner list and the other sensors are used to
retrieve the remaining loads defining the load state occurring at the same
instant. The outer list is used to specify sensors at different locations.
The default values for the blade envelopes are used to compute the Mx-My
envelopes and retrieve the Mz-Fx-Fy-Fz loads occurring at the same moment.
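As an illustration, the list-of-lists sensor specification could look like the following sketch (the channel names are hypothetical; check the actual names used in ```myturbine.py```):

```python
# Each inner list is one location. The envelope is computed from the
# first two channels (here Mx, My); the remaining channels are only used
# to read off the loads occurring at the same instant. Names are made up.
envelope_sensors = [
    # blade 1 root
    ["blade1-Mx", "blade1-My", "blade1-Mz",
     "blade1-Fx", "blade1-Fy", "blade1-Fz"],
    # blade 2 root
    ["blade2-Mx", "blade2-My", "blade2-Mz",
     "blade2-Fx", "blade2-Fy", "blade2-Fz"],
]
for sensors in envelope_sensors:
    envelope_channels = sensors[:2]   # Mx-My envelope
    companion_channels = sensors[2:]  # loads retrieved at the same instant
```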
Debugging
---------
Any output (everything that involves print statements) generated during the
post-processing of the simulations using ```myturbine.py``` is captured in
the ```pbs_out/qsub-wrap_myturbine.py.out``` file, while exceptions and errors
are redirected to the ```pbs_out/qsub-wrap_myturbine.py.err``` text file.
The output and errors of HAWC2 simulations can also be found in the ```pbs_out```
directory. The ```.err``` and ```.out``` files will be named exactly the same
# Installation manual
## Anaconda or Miniconda on Linux
```
conda update --all
conda create -n wetb_py3 python=3.5
source activate wetb_py3
conda install setuptools_scm future h5py pytables pytest nose sphinx blosc psutil
conda install scipy pandas matplotlib cython xlrd coverage xlwt openpyxl paramiko
conda install -c https://conda.anaconda.org/conda-forge pyscaffold pytest-cov
```
## Anaconda or Miniconda on Windows
```
conda update --all
conda create -n wetb_py3 python=3.4
source activate wetb_py3
conda install setuptools_scm future h5py pytables pytest nose sphinx psutil
conda install scipy pandas matplotlib cython xlrd coverage xlwt openpyxl paramiko
conda install -c https://conda.anaconda.org/conda-forge pyscaffold pytest-cov
```
!! This guide is not finished yet, it is a WIP (Work In Progress) !!
# Detailed Installation Manual
Installing Python packages with compiled extensions can be a challenge,
especially on Windows systems. However, when using Miniconda things can be
simplified to a great extent, as this manual hopefully will show you.
This approach will require you to use the command line, but it is as easy
as copy-pasting the commands from this page straight into your command prompt.
## Using Miniconda
* Download the latest Python 3 (!!) Miniconda installer for your platform
[here](http://conda.pydata.org/miniconda.html)
* No need to worry about Python 2 or 3 at this stage. You can still use the
Python 3 installer for creating Python 2 conda environments
* Install the necessary Python dependencies using the conda package manager:
```
conda install scipy pandas matplotlib cython xlrd pytables sphinx mingw
```
* Not all packages are available in the conda repositories, but they can be
easily installed with pip:
```
pip install pyscaffold pytest pytest-cov
```
# Anaconda (Windows/Mac/Linux)
## Installation
Install the necessary Python dependencies using the ```conda``` package manager:
```
>> conda install setuptools_scm future h5py pytables pytest pytest-cov nose sphinx blosc pbr paramiko
>> conda install scipy pandas matplotlib cython xlrd coverage xlwt openpyxl psutil
>> conda install -c conda-forge pyscaffold sshtunnel --no-deps
```
Now you can install ```wetb``` with ```pip``` (there is no ```conda``` package
available yet, see [issue 21](toolbox/WindEnergyToolbox#21)).
Since we prefer that ```conda``` manages and installs all dependencies we
explicitly tell ```pip``` to only install ```wetb``` and nothing more:
```
>> pip install wetb --upgrade --no-deps
```
## Update conda and ```wetb```
```
>> conda update --all
>> pip install wetb --upgrade --no-deps
```
# Pip (Windows/Mac/Linux)
## Installation and update
```
>> pip install --upgrade wetb
```
# Works with Python 2 and Python 3
This module is tested for Python 2.7 and 3.4+ compatibility, and works on both
Windows and Linux. Testing for Mac is on the way, but in theory it should work.
Python 2 and 3 compatibility is achieved with a single code base with the help
of the Python module [future](http://python-future.org/index.html).
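For example, a single-source module typically starts with ```__future__``` imports so the same file behaves identically under both interpreters (a minimal sketch):

```python
# With these imports, Python 2.7 gets Python 3 semantics for division
# and print, so the same file runs unchanged under both interpreters.
from __future__ import division, print_function

print(3 / 2)           # 1.5 on both Python 2.7 and Python 3
print(list(range(3)))  # wrapping range() in list() is 2/3-safe
```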
Switching to Python 3 is in general a very good idea especially since Python 3.5
was released. Some even dare to say it
[is like eating your vegetables](http://nothingbutsnark.svbtle.com/porting-to-python-3-is-like-eating-your-vegetables).
So if you are still on Python 2, we would recommend you to give Python 3 a try!
You can automatically convert your code from Python 2 to 3 using the
[2to3](https://docs.python.org/2/library/2to3.html) utility which is included
in Python 2.7 by default. You can also write code that is compatible with both
2 and 3 at the same time (you can find additional resources in
[issue 1](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues/1)).
# Note
This project has been set up using PyScaffold 2.5. For details and usage
information on PyScaffold see http://pyscaffold.readthedocs.org/.
# Tutorial 1: Creating master Excel file
The Wind Energy Toolbox has a workflow for automatically running design load
bases (DLBs) on Gorm.
This workflow has the following steps:
1. Create a master Excel sheet defining each case in the DLB
2. [Create subordinate Excel sheets from each tab in the master Excel sheet](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/2-creating-subordinate-excels.md)
3. [Create htc files and PBS job scripts for each requisite simulation using
the subordinate Excel files and a master htc file.](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/3-creating-htc-pbs-files.md)
4. Submit all PBS job scripts to the cluster
5. Post-process results
6. Visualize results
This tutorial presents how to accomplish Step 1.
Note that it is possible to customize your simulations by skipping/modifying
steps.
Such a procedure will be discussed in a later tutorial.
If there are any problems with this tutorial, please [submit an issue](
https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues).
## 1. Background: Master Excel File
The master Excel file is an Excel file that is used to create subordinate
Excel files for generation of htc files and PBS job scripts.
### Master file structure
The master Excel file has a main tab, called "Main", that defines default
values and necessary functions that are called in the other tabs.
Each other tab defines a new case, and one subordinate Excel file will be
generated for each case.
There are three variable types in the master Excel file:
- Constants: values that do not change within a case
- Variables: values that do change within a case, but are numbers that do not
depend on any other values (e.g., wind speed in DLC 1.2)
- Functions: values that depend on other values
### Tag names
The values that are defined in the master Excel file (and eventually the
subordinate Excel files) are used to replace "tags" in the master htc file.
These tags are of the form ```[$TAG_NAME]```.
Theoretically, a user can define any new tags they desire; there are no
required naming conventions.
However, there are some tags that are currently hard-coded into the Toolbox
that can result in errors if the tag names are changed.
Thus, **we do not recommend you change the tag names from those in the
tutorial**.
If you need new values that do not exist in the tutorial's master htc file
and produced master file, then it should be fine to add them.
There are a few tags that deserve special mention:
- ```[Case folder]```: the htc files for each case will be saved in this case
folder. We do not recommend changing the tag name or naming convention here
if you are doing a standard DLB.
- ```[Case id.]```: this defines the naming convention for each htc file. We
do not recommend changing the tag name or naming convention here if you are
doing a standard DLB.
- ```[seed]```: this variable indicates the desired number of seeds for each
set of variables. Thus, for example, in DLC 1.2, 1.3, the ```[seed]``` value
should be set to at least 6.
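To get a feel for how ```[seed]``` multiplies the case count, consider this rough sketch (the wind speed range, step, and number of wind directions are assumptions for illustration, not prescribed values):

```python
# Hypothetical DLC 1.2 setup: every combination of wind speed, wind
# direction, and turbulence seed becomes one htc file.
wind_speeds = range(4, 27, 2)  # 4, 6, ..., 26 m/s (12 values, assumed)
n_wind_dirs = 3                # e.g. -10, 0, +10 deg (assumed)
n_seeds = 6                    # [seed] set to at least 6

n_cases = len(wind_speeds) * n_wind_dirs * n_seeds
print(n_cases)  # 216 htc files for this hypothetical setup
```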
Lastly, it is extremely important that your tag names in your master Excel
file match the tag names in your master htc file.
Thus, **be sure to verify that your tag names in your master Excel and master
htc files are consistent**.
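Conceptually, the htc generation boils down to a string substitution like the following sketch (this is not the toolbox's actual code, and the tag values are invented):

```python
# A master htc fragment containing tags, and the values that one row of a
# subordinate Excel file could supply for them (all values invented).
master = "filename ./res/[Case folder]/[Case id.] ;"
tags = {
    "[Case folder]": "dlc12_iec61400-1ed3",
    "[Case id.]": "dlc12_wsp04_wdir000_s2001",
}

htc = master
for tag, value in tags.items():
    htc = htc.replace(tag, value)
print(htc)
```

A tag name mismatch between the Excel and htc files would simply leave the tag unreplaced in the generated htc file, which is why consistency matters.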
## 2. Tutorial
The procedure for creating the master Excel sheet is simple: each desired DLB
is defined in a tab-delimited text file, and these are loaded into a single
Excel file.
It is assumed that the user has a collection of text files in a folder for
all of the DLBs to be simulated.
This tutorial uses the text files located in
```wetb/docs/tutorials/data/DLCs_onshore```, which contain a series of text
files for a full DLB of an onshore turbine.
These text files correspond to the onshore DTU 10 MW master htc file that is
located in the same directory.
Generate the master Excel file in a few easy steps:
1. Open a command window.
2. If you are running the tutorial locally (i.e., not on Gorm), navigate to
the Wind Energy Toolbox tutorials directory.
3. From a terminal/command window, run the code to generate the Excel file
from a folder of text files:
* Windows (from the wetb tutorials folder):
```python ..\..\wetb\prepost\write_master.py --folder data\DLCs_onshore --filename DLCs_onshore.xlsx```
* Mac/Linux (from the wetb tutorials folder):
```python ../../wetb/prepost/write_master.py --folder data/DLCs_onshore --filename DLCs_onshore.xlsx```
* Gorm (from any folder that contains a subfolder with your text files. Note
you must activate the wetb environment (see Step 5 [here](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/getting-started-with-dlbs.md)
) before this command will work. This command also assumes the folder with your
text files is called "DLCs_onshore" and is located in the working directory.):
```python /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/write_master.py --folder ./DLCs_onshore --filename ./DLCs_onshore.xlsx```
The master Excel file "DLCs_onshore.xlsx" should now be in the your current
directory.
Note that we have used the parser options ```--folder``` and ```--filename```
to specify the folder with the text files and the name of the resulting Excel
file.
Other parser options are also available.
(See doc string in ```write_master.py``` function.)
## 3. Generation options
See doc string in ```write_master.py``` function.
## 4. Issues
If there are any problems with this tutorial, please [submit an issue](
https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues).
We will try to fix it as soon as possible.
# Tutorial 2: Creating subordinate Excel files
The Wind Energy Toolbox has a workflow for automatically running design load
bases (DLBs) on Gorm.
This workflow has the following steps:
1. [Create a master Excel sheet defining each case in the DLB](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/1-creating-master-excel.md)
2. Create subordinate Excel sheets from each tab in the master Excel sheet
3. [Create htc files and PBS job scripts for each requisite simulation using
the subordinate Excel files and a master htc file.](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/3-creating-htc-pbs-files.md)
4. Submit all PBS job scripts to the cluster
5. Post-process results
6. Visualize results
This tutorial presents how to accomplish Step 2.
Note that it is possible to customize your simulations by skipping/modifying
steps.
Such a procedure will be discussed in a later tutorial.
If there are any problems with this tutorial, please [submit an issue](
https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues).
## 1. Background: Subordinate Excel Files
The subordinate Excel files are a series of basic Excel files that are
generated from the master Excel file. (See our tutorial on generating the
master Excel file [here](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/1-creating-master-excel.md).)
There is a different subordinate Excel file for every tab in the master Excel
file, except for the "Main" tab, one for each case to simulate (e.g., design
load case 1.2 from IEC-61400-1).
Each subordinate Excel file has a single tab that lists the different tag
values for the htc master file in the column, and each row corresponds to a
different htc file to be generated.
The generation of the htc files from the subordinate Excel files is discussed
in the next tutorial.
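Schematically, the content of one subordinate Excel file can be pictured like this (the tag names follow the tutorial, but the values are invented):

```python
# Columns hold the tag names; every row becomes one htc file.
header = ["[Case folder]", "[Case id.]", "[wsp]", "[seed]"]
rows = [
    ["dlc12_iec61400-1ed3", "dlc12_wsp04_wdir000_s2001", 4, 2001],
    ["dlc12_iec61400-1ed3", "dlc12_wsp04_wdir000_s2002", 4, 2002],
]

# One dict of tag -> value per htc file to be generated.
cases = [dict(zip(header, row)) for row in rows]
print(len(cases))  # one htc file per row
```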
## 2. Tutorial
The generation of the subordinate Excel files is done using the
[GenerateDLCs.py](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/wetb/prepost/GenerateDLCs.py)
function in the Wind Energy Toolbox.
On Gorm, the command can be executed from the htc directory with the master
Excel file as follows:
```
export PATH=/home/python/miniconda3/bin:$PATH
source activate wetb_py3
python /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/GenerateDLCs.py [--folder=$FOLDER_NAME] [--master=$MASTER_NAME]
source deactivate
```
The ```export PATH``` command adds the miniconda bin directory to the path,
which is necessary for the toolbox.
The ```source activate wetb_py3``` and ```source deactivate``` are
Gorm-specific commands to activate the Wind Energy Toolbox Python environment.
The ```--folder``` and ```--master``` flags are optional flags to specify,
respectively, the name of the folder to which the subordinate Excel files
should be written to and the name of the master Excel file.
The default values for these two options are './' (i.e., the current
directory) and 'DLCs.xlsx', respectively.
After running the commands in the above box on Gorm, you should see a folder in
your htc directory with all of your subordinate Excel files.
## 3. Issues
If there are any problems with this tutorial, please [submit an issue](
https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues).
We will try to fix it as soon as possible.
# Tutorial 3: Creating htc and PBS files
The Wind Energy Toolbox has a workflow for automatically running design load
bases (DLBs) on Gorm.
This workflow has the following steps:
1. [Create a master Excel sheet defining each case in the DLB](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/1-creating-master-excel.md)
2. [Create subordinate Excel sheets from each tab in the master Excel sheet](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/tutorials/2-creating-subordinate-excels.md)
3. Create htc files and PBS job scripts for each requisite simulation using
the subordinate Excel files and a master htc file.
4. Submit all PBS job scripts to the cluster
5. Post-process results
6. Visualize results
This tutorial presents how to accomplish Step 3.
Note that it is possible to customize your simulations by skipping/modifying
steps.
Such a procedure will be discussed in a later tutorial.
If there are any problems with this tutorial, please [submit an issue](
https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues).
## 1. Background: htc and PBS file creation
The main function used in this tutorial is [dlctemplate.py](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/wetb/prepost/dlctemplate.py),
which creates all htc and PBS job scripts for the cases specified in the
subordinate Excel file folder.
The htc files are the main input files for HAWC2 simulations.
They are created by copying the master htc file in the ```_master/``` folder in
your htc directory and replacing all of the tags with the values specified in
the subordinate Excel files.
All of htc files for a single case are saved in a case-specific folder in your
htc folder.
Thus, if you were running a standard DLB calculation for IEC 61400-1, your
folder structure after generating your htc files might look like this:
```
|-- $TURB_NAME/
| |-- $SET_ID/
| | |-- DLCs.xlsx
| | |-- _master/
| | | |-- $MASTER_NAME.htc
| | |-- DLCs/
| | |-- htc/
| | | |-- dlc12_iec61400-1ed3/
| | | | |-- dlc12_wsp04_wdir000_s2001.htc
| | | | |-- dlc12_wsp04_wdir000_s2002.htc
| | | | |-- ...
| | | |-- dlc13_iec61400-1ed3/
| | | |-- ...
```
The PBS job scripts are a series of text files that are used to tell the job
scheduler on the high-performance computing (HPC) cluster how to run each job.
These files end with ".p", and are saved to a folder ```pbs_in/``` that is
created in the main set ID folder on Gorm.
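A PBS job script is a plain shell script whose header lines carry ```#PBS``` directives for the scheduler. A stripped-down sketch (the job name, paths, and resource values below are illustrative, not what the toolbox actually writes):

```
#!/bin/bash
### standard PBS/Torque directives; all values are only an example
#PBS -N dlc12_wsp04_wdir000_s2001
#PBS -o ./pbs_out/dlc12_iec61400-1ed3/dlc12_wsp04_wdir000_s2001.out
#PBS -e ./pbs_out/dlc12_iec61400-1ed3/dlc12_wsp04_wdir000_s2001.err
#PBS -l walltime=04:00:00
#PBS -l nodes=1:ppn=1
### the script body then copies the model to the node's scratch disk,
### runs HAWC2, and copies the results back to the network drive
```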
## 2. Tutorial
There are two ways to call ```dlctemplate.py```.
The first is to call the function directly.
The second is to wrap it in a job scheduler to submit the job to the HPC cluster.
The first option is fine if you have only a few htc files or if the job
scheduler is not working for some reason.
The second option is generally preferred.
### 2.1 Directly generate htc files
The htc and PBS files can be directly generated by running the following
commands from the set ID directory:
```
export PATH=/home/python/miniconda3/bin:$PATH
source activate wetb_py3
python /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/dlctemplate.py --prep
source deactivate
```
The ```export PATH``` command adds the miniconda bin directory to the path,
which is necessary for the toolbox.
The ```source activate wetb_py3``` and ```source deactivate``` are
Gorm-specific commands to activate the Wind Energy Toolbox Python environment.
The ```--prep``` option tells the script to run in preparation mode, in which
case it creates the htc and pbs files.
After running the commands in the above box on Gorm, you should have all of your
PBS input files in ```pbs_in/``` and all of your htc files in ```htc```.
### 2.2 Generate files using job scheduler
From the set ID folder, run the following code:
```
qsub-wrap.py -f /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/dlctemplate.py --prep
```
## 3. Issues
If there are any problems with this tutorial, please [submit an issue](
https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/issues).
We will try to fix it as soon as possible.