diff --git a/docs/getting-started-with-dlbs.md b/docs/getting-started-with-dlbs.md
new file mode 100644
index 0000000000000000000000000000000000000000..7bf3d227bec56c450b96d8cbbac556dbf5d73eef
--- /dev/null
+++ b/docs/getting-started-with-dlbs.md
@@ -0,0 +1,152 @@
+# Getting started with generating DLBs for HAWC2
+
+Note that DLB stands for Design Load Basis: a set of load cases used to
+evaluate the fitness of a given design. An example of a DLB definition is
+IEC 61400-1 ed. 3.
+
+
+## Overview
+
+This document provides a brief overview of how to run a set of HAWC2
+simulations using the Gorm cluster at DTU and the Mimer storage system. It is
+a work in progress and by no means exhaustive.
+
+
+## Resources
+
+The majority of this information can be found in the Wind Energy Toolbox
+documentation. In particular, [generate-spreadsheet](docs/generate-spreadsheet.md)
+discusses how to use a "master" Excel spreadsheet to generate the subordinate
+Excel spreadsheets that will later be used to create the necessary HTC files.
+[howto-make-dlcs](docs/howto-make-dlcs.md) discusses how to create htc files
+from the subordinate spreadsheets, submit those HTC files to the cluster,
+and post-process results.
+[houserules-mimerhawc2sim](docs/houserules-mimerhawc2sim.md) has some
+"house rules" on storing simulations on Mimer.
+[using-statistics-df](docs/using-statistics-df.md) has some information
+on loading the post-processing statistics using Python.
+
+
+## Steps
+
+##### 1. Make sure that you can access the cluster/Mimer.
+See the instructions on [this page](docs/howto-make-dlcs.md).
+
+##### 2. Create a Set ID folder for this project/simulation.
+You should find that, within a given turbine model folder, the structure is
+similar to the following:
+
+```
+|-- DTU10MW/
+|   |-- AA0001
+|   |   |-- ...
+|   |-- AA0002
+|   |   |-- ...
+|   |-- ...
+|   |-- AB0001
+|   |-- ...
+|-- AA_log_DTU10MW.xlsx
+|-- AB_log_DTU10MW.xlsx
+|-- ...
+```
+
+Here, each of these alphanumeric folders is a "set ID", and you should have a
+unique set ID for each set of simulations. Detailed house rules on how you
+should store data on Mimer can be found in the
+[houserules-mimerhawc2sim](docs/houserules-mimerhawc2sim.md) document.
+
+There are two steps to creating your new set ID folder:
+1. Determine if you need to create a new turbine model folder. You should only
+do this when the turbulence box size changes (e.g., if the rotor size changes)
+or if you have a model that has never been simulated on Mimer.
+2. Determine your set ID code. There are two scenarios:
+    * No one else in your project has run simulations on Mimer. In this case,
+    create a new set ID alpha code (e.g., "AA", "AB", etc.).
+    * Simulations for this project/turbine configuration already exist. In
+    this case, use the pre-existing set ID alpha code and add one to the most
+    recent set ID (e.g., if "AB0008" exists, your new folder should be
+    "AB0009"); see the shell sketch after this list.
+
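+A minimal sketch of that second scenario, assuming you have mounted the
+```hawc2sim``` share locally (the mount point below is only an example):
+
+```
+$ cd /mnt/mimer/hawc2sim/DTU10MW   # adjust to your own mount point
+$ ls -d AB*                        # find the most recent "AB" set ID
+$ mkdir AB0009                     # one higher than the newest folder
+```
+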
+##### 3. Add proper log files for your Set ID folder.
+See the [house rules](docs/houserules-mimerhawc2sim.md) regarding log files.
+
+##### 4. Add your model files.
+Within your new Set ID folder, add your HAWC2 model files. Keep a folder
+structure similar to this:
+
+```
+|-- control/
+|   |-- ...
+|-- data/
+|   |-- ...
+|-- htc/
+|   |-- _master/
+|   |   |-- TURB_master_AA0001.htc
+|   |-- DLCs.xlsx
+```
+
+Your master htc file, stored in ```htc/_master/```, can follow any naming
+convention you like, but it must contain ```_master_``` in its name or later
+scripts will abort. ```htc/DLCs.xlsx``` is your master Excel file; it will be
+used to create the subordinate Excel files in the coming steps.
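+
+A quick sanity check for the naming requirement (a one-line sketch; the grep
+pattern is simply the required substring):
+
+```
+$ ls htc/_master/ | grep _master_ || echo "rename your master htc file!"
+```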
+
+##### 5. Create your subordinate Excel files.
+From a terminal, change to your htc directory. Then run the following code:
+
+```
+$ export PATH=/home/python/miniconda3/bin:$PATH
+$ source activate wetb_py3
+$ python /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/GenerateDLCs.py --folder=DLCs
+$ source deactivate
+```
+
+This will create a subfolder DLCs and fill it with the generated subordinate
+Excel files.
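+
+For example, the new subfolder might contain something like this (the file
+names below are hypothetical; they depend on your master Excel file):
+
+```
+$ ls DLCs
+dlc12_iec61400-1ed3.xlsx  dlc13_iec61400-1ed3.xlsx  ...
+```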
+
+##### 6. Move your DLCs.xlsx file from the htc folder to the ```_master``` folder.
+It will cause errors in later scripts if left in the htc folder.
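+
+Since you are still in the htc directory from the previous step:
+
+```
+$ mv DLCs.xlsx _master/
+```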
+
+##### 7. Create your htc files and PBS job scripts.
+These files and scripts are generated from the subordinate Excel files from
+Step 5. To do this, in the terminal, change up a level to your Set ID folder
+(e.g., to folder "AB0001"). Then run this code:
+
+```
+$ qsub-wrap.py -f /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/dlctemplate.py --prep
+```
+
+Your htc files should now be placed in subfolders in the htc folder, and PBS
+job files should be in folder ```pbs_in```.
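+
+After the ```--prep``` run, the relevant part of your Set ID folder should
+look something like this (the DLC subfolder names are hypothetical):
+
+```
+|-- htc/
+|   |-- DLCs/
+|   |-- _master/
+|   |-- dlc12_iec61400-1ed3/
+|   |   |-- ...
+|-- pbs_in/
+|   |-- dlc12_iec61400-1ed3/
+|   |   |-- ...
+```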
+
+##### 8. Launch the htc files to the cluster.
+Use the ```launch.py``` script to launch the jobs on the cluster.
+For example, the following code will launch the jobs in folder ```pbs_in``` on
+100 nodes. You must be in the top-level Set ID folder for this to work (e.g.,
+in folder "AB0001").
+
+```
+$ launch.py -n 100 -p pbs_in/
+```
+
+There are many launch options available. You can read more about the options
+and about querying the cluster configuration/status on
+[this page](docs/howto-make-dlcs.md), or you can use the ```launch.py```
+help flag to print the available launch options:
+
+```
+$ launch.py --help
+```
+
+##### 9. Post-process results.
+
+The wetb script ```qsub-wrap.py``` can not only generate htc files but also
+post-process results. For example, here is the code to check the log files
+and calculate the statistics, the AEP and the lifetime equivalent loads
+(it must be executed from the top-level Set ID folder):
+
+```
+$ qsub-wrap.py -f /home/MET/repositories/toolbox/WindEnergyToolbox/wetb/prepost/dlctemplate.py --years=25 --neq=1e7 --stats --check_logs --fatigue
+```
+
+More details regarding loading the post-processed statistics dataframes
+can be found here: [using-statistics-df](docs/using-statistics-df.md).
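+
+As a small teaser, here is a minimal Python sketch for loading those
+statistics. It assumes the post-processing wrote a pandas DataFrame to an
+HDF5 file under ```prepost-data/``` (the file name and the HDF5 key 'table'
+below are assumptions; see the linked document for the authoritative
+approach):
+
+```
+import pandas as pd
+
+# hypothetical path: one statistics file per sim_id under prepost-data/
+df_stats = pd.read_hdf('prepost-data/AB0001_statistics.h5', 'table')
+print(df_stats.head())
+```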
+
diff --git a/docs/howto-make-dlcs.md b/docs/howto-make-dlcs.md
index ffb46e824f3cf5922aecd7fd0ebf37d10c01c59a..cdd40cd2867027cff434e1b08f1c2e5cd1d6e0ee 100644
--- a/docs/howto-make-dlcs.md
+++ b/docs/howto-make-dlcs.md
@@ -8,7 +8,7 @@ do as on Arch Linux wiki: top line is the file name where you need to add stuff
 explain the difference in the paths seen from a windows computer and the cluster
 
 DONE:
-- putty reference and instructions (fill in username in the address 
+- putty reference and instructions (fill in username in the address
   username@gorm) [rink]
 - how to mount gorm home on windows [rink]
 - point to the gorm/jess wiki's [rink]
@@ -74,7 +74,7 @@ Connecting to the cluster
 
 We provide here an overview of how to connect to the cluster, but general,
 up-to-date information can be found in the [HPC documentation](https://docs.hpc.ait.dtu.dk)
-or on the [Gorm wiki](http://gorm.risoe.dk/gormwiki). Note that the 
+or on the [Gorm wiki](http://gorm.risoe.dk/gormwiki). Note that the
 information from the Gorm wiki will be migrated into the HPC documentation
 over time.
 
@@ -82,7 +82,7 @@ You connect to the cluster via an SSH terminal, and there are different SSH
 terminals based on your operating system (see the platform-specific
 instructions in the next subsections). The cluster can only be reached when
 on the DTU network (wired, or only from a DTU computer when using a wireless
-connection), when connected to the DTU VPN, or from one of the DTU 
+connection), when connected to the DTU VPN, or from one of the DTU
 [databars](http://www.databar.dtu.dk/).
 
 ### Windows
@@ -93,18 +93,18 @@ be downloaded from
 
 Once you have installed PuTTY and placed the executable somewhere convenient
 (e.g., the Desktop), double click on the executable. In the window that opens
-up, enter/verify the following settings:  
-* Session > Host Name: gorm.risoe.dk  
+up, enter/verify the following settings:
+* Session > Host Name: gorm.risoe.dk
 * Session > Port: 22
-* Session > Connection type: SSH  
-* Session > Saved Sessions: Gorm  
-* Connection > Data > Auto-login username: your DTU username  
+* Session > Connection type: SSH
+* Session > Saved Sessions: Gorm
+* Connection > Data > Auto-login username: your DTU username
 * Connection > Data > When username is not specified: Use system username
-* Window > Colours > Select a colour to adjust > ANSI Blue: RGB = 85, 85, 255  
-* Window > Colours > Select a colour to adjust > ANSI Bold Blue: RGB = 128, 128, 255  
+* Window > Colours > Select a colour to adjust > ANSI Blue: RGB = 85, 85, 255
+* Window > Colours > Select a colour to adjust > ANSI Bold Blue: RGB = 128, 128, 255
 
 Note that these last two options are optional. We've found that the default
-color for comments, ANSI Blue, is too dark to be seen on the black 
+color for comments, ANSI Blue, is too dark to be seen on the black
 background. The last two options in the list set ANSI Blue and ANSI Blue Bold
 to be lighter and therefore easier to read when working in the terminal. Once
 you have entered these options, click "Save" on the "Session" tab and close
@@ -129,7 +129,7 @@ You are also welcome to use Google and read the many online resources.
 ### Unix
 
 Unlike Windows, SSH is supported out of the box for Linux and Mac OSX
-terminals. To connect to the cluster, enter the following command into 
+terminals. To connect to the cluster, enter the following command into
 the terminal:
 
 ```
@@ -143,42 +143,42 @@ to the Gorm cluster.
 Mounting the cluster discs
 --------------------------
 
-When doing the HAWC2 simulations, you will interact regularly with the cluster 
-file system and discs. Thus, it can be very useful to have two discs mounted 
-locally so you can easily access them: 1) your home directory on Gorm and 2) 
+When doing the HAWC2 simulations, you will interact regularly with the cluster
+file system and discs. Thus, it can be very useful to have two discs mounted
+locally so you can easily access them: 1) your home directory on Gorm and 2)
 the HAWC2 simulation folder on Mimer.
 
-You need to be connected to the DTU network (either directly or via VPN) for 
-the following instructions to work. 
+You need to be connected to the DTU network (either directly or via VPN) for
+the following instructions to work.
 
 
 ### Windows
 
-On Windows, we recommend mapping the two drives to local network drives, which 
-means that you can navigate/copy/paste to/from them in Windows Explorer just as 
-you would do with normal folders on your computer. You may also use [WinSCP](http://winscp.net)  
+On Windows, we recommend mapping the two drives to local network drives, which
+means that you can navigate/copy/paste to/from them in Windows Explorer just as
+you would do with normal folders on your computer. You may also use [WinSCP](http://winscp.net)
 to interact with the cluster discs if you are more familiar with that option.
 
-Here we provide instructions for mapping network drives in Windows 7. If these 
-instructions don't work for you, you can always find directions for your 
-version of Windows by Googling "map network drive windows $WIN_VERSION", where 
+Here we provide instructions for mapping network drives in Windows 7. If these
+instructions don't work for you, you can always find directions for your
+version of Windows by Googling "map network drive windows $WIN_VERSION", where
 $WIN_VERSION is your version number.
 
-In Windows 7, you can map a network drive in the following steps:  
-1. Open a Windows Explorer window  
-2. Right-click on "Computer" and select "Map network drive"  
-3. Select any unused drive and type ```\\gorm.risoe.dk\$USER``` into the folder field, 
-replacing "$USER" with your DTU username (e.g., DTU user "ABCD" has a Gorm home 
-drive of ```\\gorm.risoe.dk\abcd```)  
-4. Check the "Reconnect at logon" box if you want to connect to this drive 
-every time you log into your computer (recommended)  
-5. Click the Finish button  
-6. Repeat Steps 1 through 5, replacing the Gorm home address in Step 3 with the 
+In Windows 7, you can map a network drive in the following steps:
+1. Open a Windows Explorer window
+2. Right-click on "Computer" and select "Map network drive"
+3. Select any unused drive and type ```\\gorm.risoe.dk\$USER``` into the folder field,
+replacing "$USER" with your DTU username (e.g., DTU user "ABCD" has a Gorm home
+drive of ```\\gorm.risoe.dk\abcd```)
+4. Check the "Reconnect at logon" box if you want to connect to this drive
+every time you log into your computer (recommended)
+5. Click the Finish button
+6. Repeat Steps 1 through 5, replacing the Gorm home address in Step 3 with the
 HAWC2 simulation folder address: ```\\mimer.risoe.dk\hawc2sim```
 
-Note that by default Windows Explorer will hide some of the files you will need 
-edit. In order to show all files on your Gorm home drive, you need to un-hide 
-system files: Explorer > Organize > Folder and search options > "View" tab > 
+Note that by default Windows Explorer will hide some of the files you will
+need to edit. In order to show all files on your Gorm home drive, un-hide
+system files: Explorer > Organize > Folder and search options > "View" tab >
 Hidden files and folders > "Show hidden files, folders, and drives".
 
 ### Unix
@@ -218,6 +218,7 @@ After modifying ```.bash_profile```, save and close it. Then, in the terminal,
 run the command (or logout and in again to be safe):
 ```
 g-000 $ source ~/.bash_profile
+g-000 $ source ~/.bashrc
 ```
 
 You will also need to configure wine and place the HAWC2 executables in your
@@ -245,7 +246,8 @@ Alternatively you can also include all the DLL's and executables in the root of
 your HAWC2 model folder. Executables and DLL's placed in the root folder take
 precedence over the ones placed in ```/home/$USER/wine_exe/win32```.
 
-Log out and in again from the cluster (close and restart PuTTY).
+> IMPORTANT: log out and in again from the cluster (close and restart PuTTY)
+> before checking whether you can run HAWC2.
 
 At this stage you can run HAWC2 as follows:
 
@@ -487,7 +489,7 @@ met:
 ```
 nr_cpus > cpu's used by user
 AND cpu's free on cluster > cpu_free
-AND jobs queued by user < cpu_user_queue)
+AND jobs queued by user < cpu_user_queue
 ```
 
 the program will sleep 5 seconds before trying to launch a new job again.
diff --git a/wetb/prepost/Simulations.py b/wetb/prepost/Simulations.py
index 3ebb7a34a2435d1474672b2b95afb46ae04d272a..6b80b1b2b70b91eb06e860fe0455049601e4a49a 100755
--- a/wetb/prepost/Simulations.py
+++ b/wetb/prepost/Simulations.py
@@ -1942,18 +1942,16 @@ class PBS(object):
         self.silent = silent
         self.pyenv = pyenv
         self.pyenv_cmd = 'source /home/python/miniconda3/bin/activate'
+        self.winebase = 'time WINEARCH=win32 WINEPREFIX=~/.wine32 '
+        self.wine = self.winebase + 'wine'
+        self.winenumactl = self.winebase + 'numactl --physcpubind=$CPU_NR wine'
 
-#        if server == 'thyra':
-#            self.maxcpu = 4
-#            self.secperiter = 0.020
         if server == 'gorm':
             self.maxcpu = 1
             self.secperiter = 0.012
-            self.wine = 'time WINEARCH=win32 WINEPREFIX=~/.wine32 wine'
         elif server == 'jess':
             self.maxcpu = 1
             self.secperiter = 0.012
-            self.wine = 'time WINEARCH=win32 WINEPREFIX=~/.wine32 wine'
         else:
             raise UserWarning('server support only for jess or gorm')
 
@@ -2296,7 +2294,9 @@ class PBS(object):
             self.pbs += '# ' + '-'*78 + '\n'
             self.pbs += '# find+xargs mode: 1 PBS job, multiple cases\n'
             self.pbs += "else\n"
-            param = (self.wine, hawc2_exe, self.htc_dir+case, self.wine_appendix)
+            # numactl --physcpubind=$CPU_NR
+            param = (self.winenumactl, hawc2_exe, self.htc_dir+case,
+                     self.wine_appendix)
             self.pbs += '  echo "execute HAWC2, do not fork and wait"\n'
             self.pbs += "  %s %s ./%s %s\n" % param
             self.pbs += '  echo "POST-PROCESSING"\n'
@@ -3835,8 +3835,10 @@ class Cases(object):
                 else:
                     tmp1, tmp2, tmp3 = self.load_stats()
                     self.stats_df = self.stats_df.append(tmp1)
-                    self.Leq_df = self.Leq_df.append(tmp2)
-                    self.AEP_df = self.AEP_df.append(tmp3)
+                    if isinstance(self.Leq_df, pd.DataFrame):
+                        self.Leq_df = self.Leq_df.append(tmp2)
+                    if isinstance(self.AEP_df, pd.DataFrame):
+                        self.AEP_df = self.AEP_df.append(tmp3)
 
         self.cases = cases_merged
         self.cases_fail = cases_fail_merged
diff --git a/wetb/prepost/dlctemplate.py b/wetb/prepost/dlctemplate.py
index 97a03824506b3d557765662744c5e2cbf0a9f8d1..dc5a6f430ba32580bff16c2c01c8e94b213e01f6 100644
--- a/wetb/prepost/dlctemplate.py
+++ b/wetb/prepost/dlctemplate.py
@@ -390,7 +390,7 @@ def post_launch(sim_id, statistics=True, rem_failed=True, check_logs=True,
         add = None
         # general statistics for all channels channel
         # set neq=None here to calculate 1Hz equivalent loads
-        df_stats = cc.statistics(calc_mech_power=True, i0=i0, i1=i1,
+        df_stats = cc.statistics(calc_mech_power=False, i0=i0, i1=i1,
                                  tags=tags, add_sensor=add, ch_fatigue=None,
                                  update=update, saveinterval=saveinterval,
                                  suffix=suffix, save_new_sigs=save_new_sigs,
diff --git a/wetb/prepost/tests/data/demo_dlc/ref/pbs_in/dlc01_demos/dlc01_steady_wsp10_s100.p b/wetb/prepost/tests/data/demo_dlc/ref/pbs_in/dlc01_demos/dlc01_steady_wsp10_s100.p
index c04a711b4c273f494106b628a16f3f3ec9750352..5fed071645069f93274c75bea5670a86862c0732 100644
--- a/wetb/prepost/tests/data/demo_dlc/ref/pbs_in/dlc01_demos/dlc01_steady_wsp10_s100.p
+++ b/wetb/prepost/tests/data/demo_dlc/ref/pbs_in/dlc01_demos/dlc01_steady_wsp10_s100.p
@@ -68,7 +68,7 @@ if [ -z ${LAUNCH_PBS_MODE+x} ] ; then
 # find+xargs mode: 1 PBS job, multiple cases
 else
   echo "execute HAWC2, do not fork and wait"
-  time WINEARCH=win32 WINEPREFIX=~/.wine32 wine hawc2-latest ./htc/dlc01_demos/dlc01_steady_wsp10_s100.htc 
+  time WINEARCH=win32 WINEPREFIX=~/.wine32 numactl --physcpubind=$CPU_NR wine hawc2-latest ./htc/dlc01_demos/dlc01_steady_wsp10_s100.htc 
   echo "POST-PROCESSING"
   python -c "from wetb.prepost import statsdel; statsdel.logcheck('logfiles/dlc01_demos/dlc01_steady_wsp10_s100.log')"
   python -c "from wetb.prepost import statsdel; statsdel.calc('res/dlc01_demos/dlc01_steady_wsp10_s100', no_bins=46, m=[3, 4, 6, 8, 10, 12], neq=20.0, i0=0, i1=None, ftype='.csv')"
diff --git a/wetb/prepost/tests/data/demo_dlc/ref/pbs_in/dlc01_demos/dlc01_steady_wsp11_s101.p b/wetb/prepost/tests/data/demo_dlc/ref/pbs_in/dlc01_demos/dlc01_steady_wsp11_s101.p
index 18df76b1fd637bc961b1f2acda5e9a037643cbe3..b2db53ba4f4a4b07fe83d39dec0f3b9b8b695c86 100644
--- a/wetb/prepost/tests/data/demo_dlc/ref/pbs_in/dlc01_demos/dlc01_steady_wsp11_s101.p
+++ b/wetb/prepost/tests/data/demo_dlc/ref/pbs_in/dlc01_demos/dlc01_steady_wsp11_s101.p
@@ -68,7 +68,7 @@ if [ -z ${LAUNCH_PBS_MODE+x} ] ; then
 # find+xargs mode: 1 PBS job, multiple cases
 else
   echo "execute HAWC2, do not fork and wait"
-  time WINEARCH=win32 WINEPREFIX=~/.wine32 wine hawc2-latest ./htc/dlc01_demos/dlc01_steady_wsp11_s101.htc 
+  time WINEARCH=win32 WINEPREFIX=~/.wine32 numactl --physcpubind=$CPU_NR wine hawc2-latest ./htc/dlc01_demos/dlc01_steady_wsp11_s101.htc 
   echo "POST-PROCESSING"
   python -c "from wetb.prepost import statsdel; statsdel.logcheck('logfiles/dlc01_demos/dlc01_steady_wsp11_s101.log')"
   python -c "from wetb.prepost import statsdel; statsdel.calc('res/dlc01_demos/dlc01_steady_wsp11_s101', no_bins=46, m=[3, 4, 6, 8, 10, 12], neq=20.0, i0=0, i1=None, ftype='.csv')"
diff --git a/wetb/prepost/tests/data/demo_dlc/ref/pbs_in/dlc01_demos/dlc01_steady_wsp8_noturb.p b/wetb/prepost/tests/data/demo_dlc/ref/pbs_in/dlc01_demos/dlc01_steady_wsp8_noturb.p
index d7db435f247fcdf476520ada8978a98aa25308fe..4fe00308ffdf95d83e3b21a96f3e6072be7d9831 100644
--- a/wetb/prepost/tests/data/demo_dlc/ref/pbs_in/dlc01_demos/dlc01_steady_wsp8_noturb.p
+++ b/wetb/prepost/tests/data/demo_dlc/ref/pbs_in/dlc01_demos/dlc01_steady_wsp8_noturb.p
@@ -68,7 +68,7 @@ if [ -z ${LAUNCH_PBS_MODE+x} ] ; then
 # find+xargs mode: 1 PBS job, multiple cases
 else
   echo "execute HAWC2, do not fork and wait"
-  time WINEARCH=win32 WINEPREFIX=~/.wine32 wine hawc2-latest ./htc/dlc01_demos/dlc01_steady_wsp8_noturb.htc 
+  time WINEARCH=win32 WINEPREFIX=~/.wine32 numactl --physcpubind=$CPU_NR wine hawc2-latest ./htc/dlc01_demos/dlc01_steady_wsp8_noturb.htc 
   echo "POST-PROCESSING"
   python -c "from wetb.prepost import statsdel; statsdel.logcheck('logfiles/dlc01_demos/dlc01_steady_wsp8_noturb.log')"
   python -c "from wetb.prepost import statsdel; statsdel.calc('res/dlc01_demos/dlc01_steady_wsp8_noturb', no_bins=46, m=[3, 4, 6, 8, 10, 12], neq=20.0, i0=0, i1=None, ftype='.csv')"
diff --git a/wetb/prepost/tests/data/demo_dlc/ref/pbs_in/dlc01_demos/dlc01_steady_wsp9_noturb.p b/wetb/prepost/tests/data/demo_dlc/ref/pbs_in/dlc01_demos/dlc01_steady_wsp9_noturb.p
index 40d3d16d6d97d2e805c0943f33fe61263f286451..237970bb3702b5fb37eeeae9cf553fe2930e3c68 100644
--- a/wetb/prepost/tests/data/demo_dlc/ref/pbs_in/dlc01_demos/dlc01_steady_wsp9_noturb.p
+++ b/wetb/prepost/tests/data/demo_dlc/ref/pbs_in/dlc01_demos/dlc01_steady_wsp9_noturb.p
@@ -68,7 +68,7 @@ if [ -z ${LAUNCH_PBS_MODE+x} ] ; then
 # find+xargs mode: 1 PBS job, multiple cases
 else
   echo "execute HAWC2, do not fork and wait"
-  time WINEARCH=win32 WINEPREFIX=~/.wine32 wine hawc2-latest ./htc/dlc01_demos/dlc01_steady_wsp9_noturb.htc 
+  time WINEARCH=win32 WINEPREFIX=~/.wine32 numactl --physcpubind=$CPU_NR wine hawc2-latest ./htc/dlc01_demos/dlc01_steady_wsp9_noturb.htc 
   echo "POST-PROCESSING"
   python -c "from wetb.prepost import statsdel; statsdel.logcheck('logfiles/dlc01_demos/dlc01_steady_wsp9_noturb.log')"
   python -c "from wetb.prepost import statsdel; statsdel.calc('res/dlc01_demos/dlc01_steady_wsp9_noturb', no_bins=46, m=[3, 4, 6, 8, 10, 12], neq=20.0, i0=0, i1=None, ftype='.csv')"