
Auto-generation of Design Load Cases

WARNING: these notes contain configuration settings that are specific to the DTU Wind Energy cluster Gorm. Only follow this guide in another environment if you know what you are doing!

Introduction

For the auto generation of load cases and the corresponding execution on the cluster, the following events will take place:

  • Create an htc master file, and define the various tags in the exchange files (spreadsheets).
  • Generate the htc files for all the corresponding load cases based on the master file and the tags defined in the exchange files. Besides the HAWC2 htc input file, a corresponding pbs script is created that includes the instructions to execute the relevant HAWC2 simulation on a cluster node: copying the model to the node scratch disc, executing HAWC2, and copying the results from the node scratch disc back to the network drive (a schematic sketch of such a pbs script is given after this list).
  • Submit all the load cases (or the pbs launch scripts) to the cluster queueing system. This is also referred to as launching the jobs.
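
To give an idea of what such a generated pbs script contains, a schematic sketch is shown below. This is illustrative only: the job name, walltime, and file names are hypothetical, and the scripts actually generated by the toolbox differ in detail.

#PBS -N dlc12_wsp04_wdir000_s1001
#PBS -l nodes=1:ppn=1
#PBS -l walltime=04:00:00
# copy the model from the network drive to the node scratch disc
mkdir -p /scratch/$USER/$PBS_JOBID
cp -R $PBS_O_WORKDIR/* /scratch/$USER/$PBS_JOBID
cd /scratch/$USER/$PBS_JOBID
# execute HAWC2 under wine
wine32 hawc2-latest htc/dlc12_wsp04_wdir000_s1001.htc
# copy the results back from the scratch disc to the network drive
cp -R res/* $PBS_O_WORKDIR/res/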

Important note regarding file names. On Linux, file names and paths are case sensitive, but on Windows they are not. Additionally, HAWC2 will always generate result and log files with lower case file names, regardless of the user input. Hence, in order to avoid possible ambiguities at all times, make sure that there are no upper case symbols defined in the value of the following tags (as defined in the Excel spreadsheets): [Case folder], [Case id.], and [Turb base name].

The system will always force the values of the tags to be lower case anyway, and when working on Windows this might cause confusing and unexpected behavior. The tag names themselves can contain both lower and upper case characters, as in the example below.
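
For example, a hypothetical (but valid, all lower case) set of tag values could look as follows; these values are purely illustrative and not taken from an actual DLB definition:

[Case folder]     dlc12_iec61400-1ed3
[Case id.]        dlc12_wsp04_wdir000_s1001
[Turb base name]  turb_s1001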

Notice that throughout the document $USER refers to your user name. You can either let the system fill that in for you (by using the variable $USER), or explicitly use your user name instead. This user name is the same as your DTU account name (or student account/number).

This document refers to commands to be entered in the terminal on Gorm when the line starts with g-000 $. The command that needs to be entered starts after the $; for example, a line reading g-000 $ ls means you type ls at the Gorm prompt.

Pdap

You can also use Pdap for post-processing. It includes a MS Word report generator based on a full DLB, a GUI for easy plotting of HAWC2 result files, and a Python scripting interface.

Connecting to the cluster

We provide here an overview of how to connect to the cluster, but general, up-to-date information can be found in the HPC documentation or on the Gorm wiki. Note that the information from the Gorm wiki will be migrated into the HPC documentation over time.

You connect to the cluster via an SSH terminal, and there are different SSH terminals depending on your operating system (see the platform-specific instructions in the next subsections). The cluster can only be reached from within the DTU network (wired, or wireless only from a DTU computer), when connected to the DTU VPN, or from one of the DTU databars.

Windows

Windows users are advised to use PuTTY, which can be downloaded from the PuTTY website.

Once you have installed PuTTY and placed the executable somewhere convenient (e.g., the Desktop), double click on the executable. In the window that opens up, enter/verify the following settings:

  • Session > Host Name: gorm.risoe.dk
  • Session > Port: 22
  • Session > Connection type: SSH
  • Session > Saved Sessions: Gorm
  • Connection > Data > Auto-login username: your DTU username
  • Connection > Data > When username is not specified: Use system username
  • Window > Colours > Select a colour to adjust > ANSI Blue: RGB = 85, 85, 255
  • Window > Colours > Select a colour to adjust > ANSI Bold Blue: RGB = 128, 128, 255

Note that the last two colour settings are optional: we've found that the default color for comments, ANSI Blue, is too dark to be seen on the black background, and these values set ANSI Blue and ANSI Bold Blue to lighter shades that are easier to read when working in the terminal. Once you have entered these options, click "Save" on the "Session" tab and close the window.

With PuTTY configured, you can connect to Gorm by double-clicking the PuTTY executable; then, in the window that opens select "Gorm" in "Saved Sessions", click the "Load" button, and finally click the "Open" button. A terminal window will open up. Type your DTU password in this new window when prompted (your text will not appear in the window) and then hit the Enter key. You should now be logged into Gorm.

To close the PuTTY window, you can either hit the red "X" in the upper-right corner of the window or type "exit" in the terminal and hit enter.

More information on using PuTTY and how it works can be found in its online documentation and in the many PuTTY tutorials available online.

Unix

Unlike Windows, SSH is supported out of the box for Linux and Mac OSX terminals. To connect to the cluster, enter the following command into the terminal:

ssh $USER@gorm.risoe.dk

Enter your DTU password when prompted. This will give you terminal access to the Gorm cluster.
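
Optionally, you can store these connection settings in the file ~/.ssh/config on your local machine so that a plain ssh gorm suffices; the host alias gorm below is an arbitrary choice, and abcd is a hypothetical DTU username:

Host gorm
    HostName gorm.risoe.dk
    User abcd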

Mounting the cluster discs

When doing the HAWC2 simulations, you will interact regularly with the cluster file system and discs. Thus, it can be very useful to have two discs mounted locally so you can easily access them: 1) your home directory on Gorm and 2) the HAWC2 simulation folder on Mimer.

You need to be connected to the DTU network (either directly or via VPN) for the following instructions to work.

Windows

On Windows, we recommend mapping the two drives to local network drives, which means that you can navigate/copy/paste to/from them in Windows Explorer just as you would do with normal folders on your computer. You may also use WinSCP to interact with the cluster discs if you are more familiar with that option.

Here we provide instructions for mapping network drives in Windows 7. If these instructions don't work for you, you can always find directions for your version of Windows by Googling "map network drive windows $WIN_VERSION", where $WIN_VERSION is your version number.

In Windows 7, you can map a network drive in the following steps:

  1. Open a Windows Explorer window
  2. Right-click on "Computer" and select "Map network drive"
  3. Select any unused drive and type \\gorm.risoe.dk\$USER into the folder field, replacing "$USER" with your DTU username (e.g., DTU user "ABCD" has a Gorm home drive of \\gorm.risoe.dk\abcd)
  4. Check the "Reconnect at logon" box if you want to connect to this drive every time you log into your computer (recommended)
  5. Click the Finish button
  6. Repeat Steps 1 through 5, replacing the Gorm home address in Step 3 with the HAWC2 simulation folder address: \\mimer.risoe.dk\hawc2sim
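
Alternatively, the same mappings can be created from the Windows command prompt with the built-in net use command; the drive letters G: and H: are arbitrary choices, and abcd is again a hypothetical DTU username:

net use G: \\gorm.risoe.dk\abcd /persistent:yes
net use H: \\mimer.risoe.dk\hawc2sim /persistent:yes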

Note that by default Windows Explorer will hide some of the files you will need to edit. In order to show all files on your Gorm home drive, you need to un-hide system files: Explorer > Organize > Folder and search options > "View" tab > Hidden files and folders > "Show hidden files, folders, and drives".

Unix

From Linux/Mac, you should be able to mount using either of the following addresses:

//mimer.risoe.dk/hawc2sim
//gorm.risoe.dk/$USER

You can use either sshfs or mount -t cifs to mount the discs.
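
For example, with sshfs (the local mount point ~/gorm_home is an arbitrary choice):

mkdir -p ~/gorm_home
sshfs $USER@gorm.risoe.dk:/home/$USER ~/gorm_home

Or with cifs; the mount point is again arbitrary, the options may need adjusting for your distribution, and the domain=DTU option is an assumption:

sudo mkdir -p /mnt/hawc2sim
sudo mount -t cifs //mimer.risoe.dk/hawc2sim /mnt/hawc2sim -o username=$USER,domain=DTU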

Preparation

Add the cluster-tools script to the PATH of your Gorm environment by editing the .bash_profile file in your Gorm home directory (/home/$USER/.bash_profile) and adding the following line (add it at the end, or create the file in case it doesn't exist):

export PATH=$PATH:/home/MET/repositories/toolbox/pbsutils/

(The corresponding open repository is on the DTU Wind Energy GitLab server: pbsutils. Please consider reporting bugs and/or suggesting improvements there. Your contributions are much appreciated!)

If you have been using an old version of this how-to, you might be pointing to an earlier version of these tools/utils, and any references to cluster-tools or prepost should be removed from your .bash_profile and/or .bashrc file on your Gorm home drive.

After modifying .bash_profile, save and close it. Then, in the terminal, run the following commands (or log out and in again to be safe):

g-000 $ source ~/.bash_profile
g-000 $ source ~/.bashrc
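
To verify that the tools are now on your PATH, you can check that, for example, qsub-wrap.py can be found; given the PATH entry above, this should print its location under /home/MET/repositories/toolbox/pbsutils/:

g-000 $ which qsub-wrap.py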

You will also need to configure wine and place the HAWC2 executables in your local wine directory, which by default is assumed to be ~/.wine32. pbsutils contains an automatic configuration script you can run:

g-000 $ /home/MET/repositories/toolbox/pbsutils/config-wine-hawc2.sh

If you need more information on what is going on, you can read a more detailed description [here](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/configure-wine.md).

All your HAWC2 executables and DLL's are now located at /home/$USER/wine_exe/win32.

Notice that the HAWC2 executable names are hawc2-latest.exe, hawc2-118.exe, etc. By default the latest version will be used, and the user does not need to specify this. However, when you need to compare different versions you can easily do so by specifying which case should be run with which executable.
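
For example, the executable could be selected per case via a tag in the spreadsheet; note that the tag name [hawc2_exe] and the value below are assumptions for illustration only:

[hawc2_exe]    hawc2-118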

Alternatively you can also include all the DLL's and executables in the root of your HAWC2 model folder. Executables and DLL's placed in the root folder take precedence over the ones placed in /home/$USER/wine_exe/win32.

IMPORTANT: log out and in again from the cluster (close and restart PuTTY) before trying to see if you can run HAWC2.

At this stage you can run HAWC2 as follows:

g-000 $ wine32 hawc2-latest htc/some-input-file.htc

Updating local HAWC2 executables

When there is a new version of HAWC2, or when a new license manager is released, you can update your local wine directory as follows:

g-000 $ rsync -au /home/MET/hawc2exe/win32 /home/$USER/wine_exe/win32 --progress

The file hawc2-latest.exe will always be the latest HAWC2 version at /home/MET/hawc2exe/. When a new HAWC2 is released you can simply copy all the files from there again to update.

HAWC2 model folder structure and results on mimer/hawc2sim

See the [house rules on mimer/hawc2sim](https://gitlab.windenergy.dtu.dk/toolbox/WindEnergyToolbox/blob/master/docs/houserules-mimerhawc2sim.md) for a more detailed description.

Method A: Generating htc input files on the cluster (recommended)

Use ssh (Linux, Mac) or PuTTY (MS Windows) to connect to the cluster.

In order to simplify things, we're using qsub-wrap.py from pbsutils (which we added to the PATH in the [Preparation](#preparation) section) to generate the htc files. It will execute, on a compute node, any given Python script in a pre-installed Python environment that has the Wind Energy Toolbox installed.

For the current implementation of the DLB the following template is available:

/home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/dlctemplate.py

And the corresponding definitions of all the different load cases can be copied from here (valid for the DTU10MW):

/mnt/mimer/hawc2sim/DTU10MW/C0020/htc/DLCs

Note that dlctemplate.py does not require any changes or modifications if you are only interested in running the standard DLB as explained here.
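
As a preview of what this looks like in practice, generating the htc files then comes down to a single qsub-wrap.py call from the root of your model folder, sketched below; the --prep option is assumed here to select the file-generation step:

g-000 $ qsub-wrap.py -f /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/dlctemplate.py --prep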