David Verelst authored
Auto-generation of Design Load Cases
WARNING: these notes contain configuration settings that are specific to the DTU Wind Energy cluster Gorm. Only follow this guide in another environment if you know what you are doing!
Introduction
For the auto generation of load cases and the corresponding execution on the cluster, the following events will take place:
- Create an htc master file, and define the various tags in the exchange files (spreadsheets).
- Generate the htc files for all the corresponding load cases based on the master file and the tags defined in the exchange files. Besides the HAWC2 htc input file, a corresponding pbs script is created that includes the instructions to execute the relevant HAWC2 simulation on a cluster node. This includes copying the model to the node scratch disc, executing HAWC2, and copying the results from the node scratch disc back to the network drive.
- Submit all the load cases (or the pbs launch scripts) to the cluster queueing system. This is also referred to as launching the jobs.
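The copy-run-copy pattern that each generated pbs script follows can be simulated in a few lines of shell. This is a simplified sketch only, not one of the actual generated scripts: the temporary directories stand in for the network drive and the node scratch disc, and an `echo` stands in for the HAWC2 executable; all file names here are made up.

```shell
#!/bin/bash
# Simplified simulation of the copy-run-copy pattern that a generated
# pbs script follows. /tmp directories stand in for the network drive
# and the node scratch disc; 'echo' stands in for HAWC2.
set -e

workdir=$(mktemp -d)   # stands in for the model folder on the network drive
scratch=$(mktemp -d)   # stands in for the node scratch disc

# a tiny fake model on the "network drive"
mkdir -p "$workdir/model/htc"
echo "begin simulation;" > "$workdir/model/htc/case_a.htc"

# 1) copy the model to the node scratch disc
cp -R "$workdir/model/." "$scratch/"

# 2) execute the simulation on the node (HAWC2 in the real script)
cd "$scratch"
mkdir -p res log
echo "fake result" > res/case_a.sel
echo "fake log"    > log/case_a.log

# 3) copy the results from the scratch disc back to the network drive
cp -R res log "$workdir/"

echo "results on network drive: $(ls "$workdir/res")"
```

The real generated scripts additionally contain PBS directives (job name, wall time, node count) and the cluster-specific scratch and model paths.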
Important note regarding file names. On Linux, file names and paths are case sensitive, but on Windows they are not. Additionally, HAWC2 will always generate result and log files with lower case file names, regardless of the user input. Hence, in order to avoid possible ambiguities at all times, make sure that there are no upper case symbols in the values of the following tags (as defined in the Excel spreadsheets): [Case folder], [Case id.], and [Turb base name].
The system will always force the values of these tags to be lower case anyway, and when working on Windows this might cause confusing and unexpected behaviour. The tag names themselves can contain both lower and upper case characters, as the tags listed above show.
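The consequence of this case sensitivity is easy to demonstrate on Linux: a result file that HAWC2 wrote with a lower case name will not be found when referenced with a mixed case spelling from a tag value. A small illustration (the file names are made up):

```shell
#!/bin/bash
# Illustrates case sensitivity on Linux: HAWC2 writes lower case file
# names, so a mixed case reference from a spreadsheet tag value fails.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# HAWC2 writes the result file in lower case, regardless of user input
touch dlc12_wsp10_case_a.sel

# referencing the mixed case name from the tag value fails on Linux
if [ -f DLC12_wsp10_Case_A.sel ]; then
    echo "mixed case name found"
else
    echo "mixed case name NOT found"
fi
```

On Windows both spellings would resolve to the same file, which is exactly why the mismatch only surfaces once the cases run on the cluster.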
Notice that throughout the document $USER refers to your user name. You can either let the system fill that in for you (by using the variable $USER), or explicitly use your user name instead. This user name is the same as your DTU account name (or student account/number).
This document refers to commands to be entered in the terminal on Gorm when the line starts with g-000 $. The command that needs to be entered starts after the $.
Connecting to the cluster
You connect to the cluster via an SSH terminal. SSH is supported out of the box by Linux and Mac OSX terminals (such as bash), but requires a separate terminal client under Windows. Windows users are advised to use PuTTY, which can be downloaded at: http://www.chiark.greenend.org.uk/~sgtatham/putty/. Here's a random tutorial; use your favourite search engine if you need more or different instructions. More answers regarding PuTTY can also be found in the online documentation.
The cluster that is set up for using the pre- and post-processing tools for HAWC2 has the following address: gorm.risoe.dk.
On Linux/Mac connecting to the cluster is as simple as running the following command in the terminal:
ssh $USER@gorm.risoe.dk
Use your DTU password when asked. This will give you terminal access to the cluster called Gorm.
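If you connect often, an entry in ~/.ssh/config on Linux/Mac saves typing the full address every time. This is an optional convenience, not part of the required setup; the alias name gorm below is an arbitrary choice, and the user name shown is a placeholder you must replace with your own DTU account name:

```
Host gorm
    HostName gorm.risoe.dk
    User your_dtu_username
```

With this in place, ssh gorm is equivalent to the full command above.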
The cluster can only be reached when on the DTU network (wired, or only from a DTU computer when using a wireless connection), when connected to the DTU VPN, or from one of the DTU databars.
More information about the cluster can be found on the Gorm-wiki.
Mounting the cluster discs
You need to be connected to the DTU network in order for this to work. You can also connect to the DTU network over VPN.
When doing the HAWC2 simulations, you will interact regularly with the cluster
file system and discs. It is convenient to map these discs as network
drives (in Windows terms). Map the following network drives (replace $USER
with your user name):