Auto-generation of Design Load Cases
WARNING: these notes contain configuration settings that are specific to the DTU Wind Energy cluster Gorm. Only follow this guide in another environment if you know what you are doing!
Introduction
For the auto-generation of load cases and the corresponding execution on the cluster, the following steps take place:
- Create an htc master file, and define the various tags in the exchange files (spreadsheets).
- Generate the htc files for all the corresponding load cases based on the master file and the tags defined in the exchange files. Besides the HAWC2 htc input file, a corresponding pbs script is created that includes the instructions to execute the relevant HAWC2 simulation on a cluster node. This includes copying the model to the node scratch disc, executing HAWC2, and copying the results from the node scratch disc back to the network drive.
- Submit all the load cases (or the pbs launch scripts) to the cluster queueing system. This is also referred to as launching the jobs.
Important note regarding file names: on Linux, file names and paths are case sensitive, but on Windows they are not. Additionally, HAWC2 will always generate result and log files with lower case file names, regardless of the user input. Hence, in order to avoid possible ambiguities at all times, make sure that there are no upper case symbols defined in the value of the following tags (as defined in the Excel spreadsheets): [Case folder], [Case id.], and [Turb base name].
The system will always force the tag values to be lower case anyway, and when working on Windows this might cause some confusing and unexpected behaviour. The tag names themselves can contain both lower and upper case characters, as can be seen in the tags listed above.
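For example, purely illustrative tag values that respect this rule could look as follows (the actual values depend entirely on your own load case definitions):
[Case folder] = 'dlc12_iec61400-1ed3'
[Case id.] = 'dlc12_wsp04_wdir000_s1001'
[Turb base name] = 'turb_wsp04_s1001'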
Notice that throughout the document $USER refers to your user name. You can either let the system fill that in for you (by using the variable $USER), or explicitly use your user name instead. This user name is the same as your DTU account name (or student account/number).
This document refers to commands to be entered in the terminal on Gorm when the line starts with g-000 $. The command that needs to be entered starts after the $.
Connecting to the cluster
You connect to the cluster via an SSH terminal. SSH is supported out of the box for Linux and Mac OSX terminals (such as bash), but requires a separate terminal client under Windows. Windows users are advised to use PuTTY, which can be downloaded at: http://www.chiark.greenend.org.uk/~sgtatham/putty/. Here's a random tutorial; you can use your favourite search engine if you need more or different instructions. More answers regarding PuTTY can also be found in the online documentation.
The cluster that is set up for using the pre- and post-processing tools for HAWC2 has the following address: gorm.risoe.dk.
On Linux/Mac connecting to the cluster is as simple as running the following command in the terminal:
g-000 $ ssh $USER@gorm.risoe.dk
Use your DTU password when asked. This will give you terminal access to the cluster called Gorm.
The cluster can only be reached when on the DTU network (wired, or only from a DTU computer when using a wireless connection), when connected to the DTU VPN, or from one of the DTU databars.
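If you connect from Linux/Mac regularly, you can optionally add a host entry to your local ~/.ssh/config file so that a short alias suffices (a minimal sketch: the alias name gorm is just an example, and you have to write out your actual user name since this file does not expand variables):

Host gorm
    HostName gorm.risoe.dk
    User yourusername

After that, connecting reduces to: ssh gorm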
More information about the cluster can be found on the Gorm-wiki.
Mounting the cluster discs
You need to be connected to the DTU network in order for this to work. You can also connect to the DTU network over VPN.
When doing the HAWC2 simulations, you will interact regularly with the cluster
file system and discs. It is convenient to map these discs as network
drives (in Windows terms). Map the following network drives (replace $USER
with your user name):
\\mimer\hawc2sim
\\gorm\$USER # this is your Gorm home directory
Alternatively, on Windows you can use WinSCP to interact with the cluster discs.
Note that by default Windows Explorer will hide some of the files you will need to edit. In order to show all files on your Gorm home drive, you need to un-hide system files: Explorer > Organize > Folder and search options > select tab "view" > select the option to show hidden files and folders.
From Linux/Mac, you should be able to mount using either of the following addresses:
//mimer.risoe.dk/hawc2sim
//mimer.risoe.dk/well/hawc2sim
//gorm.risoe.dk/$USER
You can use either sshfs or mount -t cifs to mount the discs.
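As an illustration, such mount commands could look as follows on Linux (a sketch only: the local mount points ~/gorm and /mnt/hawc2sim are arbitrary examples and have to exist beforehand, and the exact cifs options may differ on your system):

# mount your Gorm home directory with sshfs
sshfs $USER@gorm.risoe.dk:/home/$USER ~/gorm
# mount the hawc2sim share with cifs (asks for your DTU password)
sudo mount -t cifs //mimer.risoe.dk/hawc2sim /mnt/hawc2sim -o username=$USER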
Preparation
Add the cluster-tools script to your system's PATH of your Gorm environment, by editing the .bash_profile file in your Gorm home directory (/home/$USER/.bash_profile), and adding the following line (add at the end, or create a new file with this file name in case it doesn't exist):
export PATH=$PATH:/home/MET/STABCON/repositories/toolbox/pbsutils/
(The corresponding open repository is on the DTU Wind Energy Gitlab server: pbsutils. Please consider reporting bugs and/or suggesting improvements there. Your contributions are much appreciated!)
If you have been using an old version of this how-to, you might be pointing to an earlier version of these tools/utils and its reference should be removed from your .bash_profile file:
export PATH=$PATH:/home/MET/STABCON/repositories/cluster-tools/
After modifying .bash_profile, save and close it. Then, in the terminal, run the command:
g-000 $ source ~/.bash_profile
In order for any changes made in .bash_profile to take effect, you need to either source it (as shown above), or log out and in again.
You will also need to configure wine and place the HAWC2 executables in a directory that wine knows about. First, activate the correct wine environment by typing the following in a shell in your Gorm home directory (connect with ssh on Linux/Mac or PuTTY on MS Windows):
g-000 $ WINEARCH=win32 WINEPREFIX=~/.wine32 wine test.exe
Optionally, you can also make an alias (a short format for a longer, more complex command). In the .bashrc file in your home directory (/home/$USER/.bashrc), add at the bottom of the file:
alias wine32='WINEARCH=win32 WINEPREFIX=~/.wine32 wine'
Now copy all the HAWC2 executables and DLLs (including the license manager) to your wine directory. All the required executables, DLLs and the license manager are located at /home/MET/hawc2exe. The following command will do this copying:
g-000 $ cp /home/MET/hawc2exe/* /home/$USER/.wine32/drive_c/windows/system32
Notice that the HAWC2 executable names are hawc2-latest.exe, hawc2-118.exe, etc. By default the latest version will be used and the user does not need to specify this. However, when you need to compare different versions you can easily do so by specifying which case should be run with which executable. The file hawc2-latest.exe will always be the latest HAWC2 version at /home/MET/hawc2exe/. When a new HAWC2 version is released you can simply copy all the files from there again to update.
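For example, to run a given case with an older executable instead of the default, you could set the [hawc2_exe] tag in the spreadsheet accordingly (illustrative value; see also the Optional configuration section below):

[hawc2_exe] = 'hawc2-118'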
Log out and in again from the cluster (close and restart PuTTY).
At this stage you can run HAWC2 as follows:
g-000 $ wine32 hawc2-latest htc/some-input-file.htc
Method A: Generating htc input files on the cluster
Use ssh (Linux, Mac) or putty (MS Windows) to connect to the cluster.
With qsub-wrap.py the user can wrap a PBS launch script around any executable or Python/Matlab/... script. In doing so, the executable/Python script will be immediately submitted to the cluster for execution. By default, the Anaconda Python environment in /home/MET/STABCON/miniconda will be activated. The Anaconda Python environment is not relevant, and can be safely ignored, if the executable does not have anything to do with Python.
In order to see the different options of this qsub-wrap utility, do:
g-000 $ qsub-wrap.py --help
For example, in order to generate the default IEC DLCs:
g-000 $ cd path/to/HAWC2/model # folder where the hawc2 model is located
g-000 $ qsub-wrap.py -f /home/MET/STABCON/repositories/prepost/dlctemplate.py -c python --prep
Note that the following folder structure for the HAWC2 model is assumed:
|-- control
| |-- ...
|-- data
| |-- ...
|-- htc
| |-- DLCs
| | |-- dlc12_iec61400-1ed3.xlsx
| | |-- dlc13_iec61400-1ed3.xlsx
| | |-- ...
| |-- _master
| | `-- dtu10mw_master_C0013.htc
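As a convenience, the empty folder structure above can be created with a few shell commands on Gorm (a minimal sketch; the spreadsheets and the master htc file still have to be added by you):

g-000 $ cd path/to/HAWC2/model
g-000 $ mkdir -p control data htc/DLCs htc/_master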
The load case definitions should be placed in Excel spreadsheets with a *.xlsx extension. The above example shows one possible scenario whereby all the load case definitions are placed in htc/DLCs (all folder names are case sensitive). Alternatively, one can also place the spreadsheets in separate sub folders, for example:
|-- control
| |-- ...
|-- data
| |-- ...
|-- htc
| |-- dlc12_iec61400-1ed3
| | |-- dlc12_iec61400-1ed3.xlsx
| |-- dlc13_iec61400-1ed3
| | |-- dlc13_iec61400-1ed3.xlsx
In order to use this auto-configuration mode, there can only be one master file in _master that contains _master_ in its file name.
For the NREL5MW and the DTU10MW HAWC2 models, you can find their respective master files and DLC definition spreadsheet files on Mimer. When connected to Gorm over SSH/PuTTY, you will find these files at:
/mnt/mimer/hawc2sim # (when on Gorm)
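For example, to see which turbine models and files are available there (when logged in on Gorm):

g-000 $ ls /mnt/mimer/hawc2sim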
Method B: Generating htc input files interactively on the cluster
Use ssh (Linux, Mac) or putty (MS Windows) to connect to the cluster.
This approach gives you more flexibility, but requires more commands, and is hence considered more difficult compared to method A.
First activate the Anaconda Python environment by typing:
# add the Anaconda Python environment paths to the system PATH
g-000 $ export PATH=/home/MET/STABCON/miniconda/bin:$PATH
# activate the custom python environment:
g-000 $ source activate anaconda
# add the Python libraries to the PYTHONPATH
g-000 $ export PYTHONPATH=/home/MET/STABCON/repositories/prepost:$PYTHONPATH
g-000 $ export PYTHONPATH=/home/MET/STABCON/repositories/pythontoolbox/fatigue_tools:$PYTHONPATH
g-000 $ export PYTHONPATH=/home/MET/STABCON/repositories/pythontoolbox:$PYTHONPATH
g-000 $ export PYTHONPATH=/home/MET/STABCON/repositories/MMPE:$PYTHONPATH
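Since these exports are lost when you log out, you can optionally collect them in a small file in your home directory and source it whenever you need this environment (the file name activate-prepost.sh is just a suggestion and not part of the tools):

# example content of /home/$USER/activate-prepost.sh
export PATH=/home/MET/STABCON/miniconda/bin:$PATH
source activate anaconda
export PYTHONPATH=/home/MET/STABCON/repositories/prepost:$PYTHONPATH
export PYTHONPATH=/home/MET/STABCON/repositories/pythontoolbox/fatigue_tools:$PYTHONPATH
export PYTHONPATH=/home/MET/STABCON/repositories/pythontoolbox:$PYTHONPATH
export PYTHONPATH=/home/MET/STABCON/repositories/MMPE:$PYTHONPATH

g-000 $ source ~/activate-prepost.sh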
For example, launch the auto-generation of DLCs input files:
g-000 $ cd path/to/HAWC2/model # folder where the hawc2 model is located
g-000 $ python /home/MET/STABCON/repositories/prepost/dlctemplate.py --prep
Or start an interactive IPython shell:
g-000 $ ipython
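From within IPython you could, for example, run the same template script interactively using IPython's %run magic (same --prep option as shown above):

In [1]: %run /home/MET/STABCON/repositories/prepost/dlctemplate.py --prep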
Users should be aware that running computationally heavy loads on the login node is strictly discouraged. By overloading the login node, other users will experience slow login procedures, and the whole cluster could potentially be jammed.
Method C: Generating htc input files locally
This approach gives you total freedom, but is also more difficult since you will have to have a fully configured Python environment installed locally. Additionally, you need access to the cluster discs from your local workstation. Method C is not documented yet.
Optional configuration
Optional tags that can be set in the Excel spreadsheet, and their corresponding default values, are given below. Besides providing a replacement value in the master htc file, there are also special actions connected to these values. Consequently, these tags have to be present; when removed, the system will stop working properly.
Relevant for the generation of the PBS launch scripts (*.p files):
- [walltime] = '04:00:00' (format: HH:MM:SS)
- [hawc2_exe] = 'hawc2-latest'
The following directories have to be defined, and their default values are used when they are not set explicitly in the spreadsheets:
- [animation_dir] = 'animation/'
- [control_dir] = 'control/', all files and sub-folders copied to node
- [data_dir] = 'data/', all files and sub-folders copied to node
- [eigenfreq_dir] = False
- [htc_dir] = 'htc/'
- [log_dir] = 'logfiles/'
- [res_dir] = 'res/'
- [turb_dir] = 'turb/'
- [turb_db_dir] = '../turb/'
- [turb_base_name] = 'turb_'

Required, and used for the PBS output and post-processing:
- [pbs_out_dir] = 'pbs_out/'
- [iter_dir] = 'iter/'
Optional