
Compare revisions

Changes are shown as if the source revision was being merged into the target revision.
Commits on Source (34)
Showing changes with 1319 additions and 315 deletions
before_script:
- apt-get update
# uncomment first time
#- rm -rf TestFiles
- git submodule sync
#- git submodule update --init
- git submodule update
......
...@@ -15,9 +15,9 @@ This tool comes handy in the following scenarios:

The generator of the cases uses an input spreadsheet where the cases are defined
in a more compact way.
The tool is based on the "tags" concept that is used for the generation of the htc files.

Main spreadsheet
----------------

A main spreadsheet is used to define all the DLCs of the DLB. The file specifies the tags that are then required in the htc files.
...@@ -26,13 +26,38 @@ The file has:

* a Main sheet where some wind turbine parameters are defined, the tags are initialized, and the definitions of turbulence and gusts are given.
* a series of other sheets, each defining a DLC. In these sheets the tags that change in that DLC are defined.

The tags are divided into three categories:

* Constants (C). Constants are tags that do not change in a DLC, e.g. simulation time, output format, ...;
* Variables (V). Variables are tags that define the number of cases in a DLC through their combinations, e.g. wind speed, number of turbulence seeds, wind direction, ...;
* Functions (F). Functions are tags that depend on other tags through an expression, e.g. turbulence intensity, case name, ....

In each sheet the type of tag is defined in the line above the tag by typing one of the letters C, V, or F.
Functions (F) tags
------------------
* Numbers can be converted to strings (for example when a tag refers to a file name)
by using double quotes ```"``` for Functions (F):
* ```"wdir_[wdir]deg_wsp_[wsp]ms"``` will result in the tags ``` [wdir]```
and ```[wsp]``` being replaced with formatted text.
* following formatting rules are used:
* ```[wsp]```, ```[gridgustdelay]``` : ```02i```
* ```[wdir]```, ```[G_phi0]``` : ```03i```
* ```[Hs]```, ```[Tp]``` : ```05.02f```
* all other tags: ```04i```
* Only numbers in tags with double quotes are formatted. In all other cases
there is no formatting taking place and hence no loss of precision occurs.
* In this context, when using quotes, always use double quotes like ```"```.
Do not use single quotes ```'``` or any other quote character.
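The substitution and formatting rules above can be sketched in a few lines of Python. This is illustrative only, not the toolbox's actual implementation: `format_tag` and `TAG_FORMATS` are hypothetical names, and the C-style `02i`/`03i`/`04i` specifiers from the list are written with Python's `d` conversion.

```python
import re

# Format spec per tag, following the list above; all other tags fall back to 04d.
# Hypothetical helper -- the toolbox's real substitution code may differ.
TAG_FORMATS = {"wsp": "02d", "gridgustdelay": "02d",
               "wdir": "03d", "G_phi0": "03d",
               "Hs": "05.02f", "Tp": "05.02f"}

def format_tag(template, values):
    """Replace [tag] placeholders in a double-quoted F-tag expression."""
    def repl(match):
        tag = match.group(1)
        fmt = TAG_FORMATS.get(tag, "04d")
        return format(values[tag], fmt)
    return re.sub(r"\[(\w+)\]", repl, template)

print(format_tag("wdir_[wdir]deg_wsp_[wsp]ms", {"wdir": 5, "wsp": 8}))
# wdir_005deg_wsp_08ms
```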
Variable (V) tags
-----------------
* ```[seed]``` and ```[wave_seed]``` are special variable tags. Instead of defining
a range of seeds, the user indicates the number of seeds to be used.
* ```[wsp]``` is a required variable tag
* ```[seed]``` should be placed in a column BEFORE ```[wsp]```
Generate the files
------------------
......
...@@ -263,7 +263,7 @@ When there is a new version of HAWC2, or when a new license manager is released,

you can update your local wine directory as follows:

```
g-000 $ rsync -au /home/MET/hawc2exe/win32 /home/$USER/wine_exe/win32 --progress
```

The file ```hawc2-latest.exe``` will always be the latest HAWC2
......
# Add your requirements here like:
six
cython
numpy>=1.4
scipy>=0.9
matplotlib
......
...@@ -44,6 +44,7 @@ from .gtsdf import compress2statistics

class Dataset(object):
    def __init__(self, filename):
        self.filename = filename
        self.time, self.data, self.info = load(filename)

    def __call__(self, id):
        if isinstance(id, str):
......
...@@ -67,10 +67,10 @@ class AEFile(object):

        ae_data = self.ae_sets[set_nr]
        index = np.searchsorted(ae_data[:, 0], radius)
        index = max(1, index)
        setnrs = ae_data[index - 1:index + 1, 3]
        if setnrs[0] != setnrs[-1]:
            raise NotImplementedError
        return setnrs[0]
......
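The change above replaces two-variable unpacking with a slice: when `radius` lies beyond the last tabulated radius, the slice has only one element, so `setnr1, setnr2 = ...` raised a `ValueError` while `setnrs[0] != setnrs[-1]` still works. A pure-Python sketch of the same lookup (using `bisect_left`, which matches `numpy.searchsorted`'s default side; the radii and set numbers are made-up sample data):

```python
import bisect

# Stand-in for the AE-set lookup above; radii must be sorted ascending.
radii = [10.0, 20.0, 30.0]
pc_sets = [1, 1, 2]

def pc_set_nr(radius):
    index = bisect.bisect_left(radii, radius)  # numpy.searchsorted equivalent
    index = max(1, index)
    setnrs = pc_sets[index - 1:index + 1]
    if setnrs[0] != setnrs[-1]:
        raise NotImplementedError
    return setnrs[0]

print(pc_set_nr(15))  # 1 (between the first two rows, same set)
print(pc_set_nr(35))  # 2 (beyond the last row: the slice has one element,
                      #    so the old two-variable unpacking would have failed)
```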
...@@ -49,8 +49,9 @@ class AtTimeFile(object):

        self.blade_radius = bladetip_radius
        with open(filename, encoding='utf-8') as fid:
            lines = fid.readlines()
        atttribute_name_line = [l.strip().startswith("# Radius_s") for l in lines].index(True)
        #self.attribute_names = lines[atttribute_name_line].lower().replace("#", "").split()
        self.attribute_names = [n.strip() for n in lines[atttribute_name_line].lower().split("#")[1:]]
        data = np.array([[float(l) for l in lines[i].split()] for i in range(atttribute_name_line + 1, len(lines))])
        self.data = data

    def func_factory(column):
...@@ -97,10 +98,15 @@ class AtTimeFile(object):

if __name__ == "__main__":
    at = AtTimeFile(r"tests/test_files/at.dat", 86.3655)  # load file
    at = AtTimeFile(r'U:\hama\HAWC2-AVATAR\res/avatar-7ntm-scaled-rad.dat')
    at.attribute_names  # Attribute names
    at[:3, 1]  # first 3 twist rows
    print(len(at.attribute_names))
    print("\n".join(at.attribute_names))
    print(at.data.shape)
    #at.twist()[:3]  # Twist first 3 twist rows
    #print(at.twist(36, curved_length=True))  # Twist at curved_length = 36 (interpolated)
    #print(at.twist(36))  # Twist at 36 (interpolated)
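The header-parsing change above switches from `replace("#", "").split()` to `split("#")[1:]`, which keeps attribute names containing spaces intact. A self-contained comparison (the header line is a constructed example, not taken from a real at-time file):

```python
# HAWC2 at-time file headers list column names separated by '#'.
header = "# Radius_s # twist angle # chord length"

# New approach: split on '#', keep multi-word names whole.
names = [n.strip() for n in header.lower().split("#")[1:]]
print(names)      # ['radius_s', 'twist angle', 'chord length']

# Old approach: whitespace split breaks multi-word names apart.
old_names = header.lower().replace("#", "").split()
print(old_names)  # ['radius_s', 'twist', 'angle', 'chord', 'length']
```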
...@@ -175,6 +175,16 @@ class HTCSection(HTCContents):

        s += "".join([c.__str__(level + 1) for c in self])
        s += "%send %s;%s\n" % (" " * level, self.name_, (("", "\t" + self.end_comments)[self.end_comments.strip() != ""]).replace("\t\n", "\n"))
        return s

    def get_subsection_by_name(self, name, field='name'):
        lst = [s for s in self if field in s and s[field][0] == name]
        if len(lst) == 1:
            return lst[0]
        else:
            if len(lst) == 0:
                raise ValueError("subsection '%s' not found" % name)
            else:
                raise NotImplementedError()


class HTCLine(HTCContents):
    values = None

...@@ -316,130 +326,4 @@ class HTCSensor(HTCLine):

        ("", "\t" + self.str_values())[bool(self.values)],
        ("", "\t" + self.comments)[bool(self.comments.strip())])
class HTCDefaults(object):

    empty_htc = """begin simulation;
time_stop 600;
solvertype 1; (newmark)
on_no_convergence continue;
convergence_limits 1E3 1.0 1E-7; ; . to run again, changed 07/11
begin newmark;
deltat 0.02;
end newmark;
end simulation;
;
;----------------------------------------------------------------------------------------------------------------------------------------------------------------
;
begin new_htc_structure;
begin orientation;
end orientation;
begin constraint;
end constraint;
end new_htc_structure;
;
;----------------------------------------------------------------------------------------------------------------------------------------------------------------
;
begin wind ;
density 1.225 ;
wsp 10 ;
tint 1;
horizontal_input 1 ; 0=false, 1=true
windfield_rotations 0 0.0 0.0 ; yaw, tilt, rotation
center_pos0 0 0 -30 ; hub heigth
shear_format 1 0;0=none,1=constant,2=log,3=power,4=linear
turb_format 0 ; 0=none, 1=mann,2=flex
tower_shadow_method 0 ; 0=none, 1=potential flow, 2=jet
end wind;
;
;----------------------------------------------------------------------------------------------------------------------------------------------------------------
;
begin dll;
end dll;
;
;----------------------------------------------------------------------------------------------------------------------------------------------------------------
;
begin output;
general time;
end output;
exit;"""

    def add_mann_turbulence(self, L=29.4, ae23=1, Gamma=3.9, seed=1001, high_frq_compensation=True,
                            filenames=None,
                            no_grid_points=(16384, 32, 32), box_dimension=(6000, 100, 100),
                            dont_scale=False,
                            std_scaling=None):
        wind = self.add_section('wind')
        wind.turb_format = 1
        mann = wind.add_section('mann')
        if 'create_turb_parameters' in mann:
            mann.create_turb_parameters.values = [L, ae23, Gamma, seed, int(high_frq_compensation)]
        else:
            mann.add_line('create_turb_parameters', [L, ae23, Gamma, seed, int(high_frq_compensation)], "L, alfaeps, gamma, seed, highfrq compensation")
        if filenames is None:
            fmt = "mann_l%.1f_ae%.2f_g%.1f_h%d_%dx%dx%d_%.3fx%.2fx%.2f_s%04d%c.turb"
            import numpy as np
            dxyz = tuple(np.array(box_dimension) / no_grid_points)
            filenames = ["./turb/" + fmt % ((L, ae23, Gamma, high_frq_compensation) + no_grid_points + dxyz + (seed, uvw)) for uvw in ['u', 'v', 'w']]
        if isinstance(filenames, str):
            filenames = ["./turb/%s_s%04d%s.bin" % (filenames, seed, c) for c in ['u', 'v', 'w']]
        for filename, c in zip(filenames, ['u', 'v', 'w']):
            setattr(mann, 'filename_%s' % c, filename)
        for c, n, dim in zip(['u', 'v', 'w'], no_grid_points, box_dimension):
            setattr(mann, 'box_dim_%s' % c, "%d %.4f" % (n, dim / (n - 1)))
        if dont_scale:
            mann.dont_scale = 1
        else:
            try:
                del mann.dont_scale
            except KeyError:
                pass
        if std_scaling is not None:
            mann.std_scaling = "%f %f %f" % std_scaling
        else:
            try:
                del mann.std_scaling
            except KeyError:
                pass

    def add_turb_export(self, filename="export_%s.turb", samplefrq=None):
        exp = self.wind.add_section('turb_export', allow_duplicate=True)
        for uvw in 'uvw':
            exp.add_line('filename_%s' % uvw, [filename % uvw])
        sf = samplefrq or max(1, int(self.wind.mann.box_dim_u[1] / (self.wind.wsp[0] * self.deltat())))
        exp.samplefrq = sf
        if "time" in self.output:
            exp.time_start = self.output.time[0]
        else:
            exp.time_start = 0
        exp.nsteps = (self.simulation.time_stop[0] - exp.time_start[0]) / self.deltat()
        for vw in 'vw':
            exp.add_line('box_dim_%s' % vw, self.wind.mann['box_dim_%s' % vw].values)

    def import_dtu_we_controller_input(self, filename):
        dtu_we_controller = [dll for dll in self.dll if dll.name[0] == 'dtu_we_controller'][0]
        with open(filename) as fid:
            lines = fid.readlines()
        K_r1 = float(lines[1].replace("K = ", '').replace("[Nm/(rad/s)^2]", ''))
        Kp_r2 = float(lines[4].replace("Kp = ", '').replace("[Nm/(rad/s)]", ''))
        Ki_r2 = float(lines[5].replace("Ki = ", '').replace("[Nm/rad]", ''))
        Kp_r3 = float(lines[7].replace("Kp = ", '').replace("[rad/(rad/s)]", ''))
        Ki_r3 = float(lines[8].replace("Ki = ", '').replace("[rad/rad]", ''))
        KK = lines[9].split("]")
        KK1 = float(KK[0].replace("K1 = ", '').replace("[deg", ''))
        KK2 = float(KK[1].replace(", K2 = ", '').replace("[deg^2", ''))
        cs = dtu_we_controller.init
        cs.constant__11.values[1] = "%.6E" % K_r1
        cs.constant__12.values[1] = "%.6E" % Kp_r2
        cs.constant__13.values[1] = "%.6E" % Ki_r2
        cs.constant__16.values[1] = "%.6E" % Kp_r3
        cs.constant__17.values[1] = "%.6E" % Ki_r3
        cs.constant__21.values[1] = "%.6E" % KK1
        cs.constant__22.values[1] = "%.6E" % KK2
'''
Created on 20/01/2014
@author: MMPE
See documentation of HTCFile below
'''
from __future__ import division
from __future__ import unicode_literals
from __future__ import print_function
from __future__ import absolute_import
from builtins import zip
from builtins import int
from builtins import str
from future import standard_library
import os
from wetb.wind.shear import log_shear, power_shear
standard_library.install_aliases()
class HTCDefaults(object):

    empty_htc = """begin simulation;
time_stop 600;
solvertype 1; (newmark)
on_no_convergence continue;
convergence_limits 1E3 1.0 1E-7; ; . to run again, changed 07/11
begin newmark;
deltat 0.02;
end newmark;
end simulation;
;
;----------------------------------------------------------------------------------------------------------------------------------------------------------------
;
begin new_htc_structure;
begin orientation;
end orientation;
begin constraint;
end constraint;
end new_htc_structure;
;
;----------------------------------------------------------------------------------------------------------------------------------------------------------------
;
begin wind ;
density 1.225 ;
wsp 10 ;
tint 1;
horizontal_input 1 ; 0=false, 1=true
windfield_rotations 0 0.0 0.0 ; yaw, tilt, rotation
center_pos0 0 0 -30 ; hub heigth
shear_format 1 0;0=none,1=constant,2=log,3=power,4=linear
turb_format 0 ; 0=none, 1=mann,2=flex
tower_shadow_method 0 ; 0=none, 1=potential flow, 2=jet
end wind;
;
;----------------------------------------------------------------------------------------------------------------------------------------------------------------
;
begin dll;
end dll;
;
;----------------------------------------------------------------------------------------------------------------------------------------------------------------
;
begin output;
general time;
end output;
exit;"""

    def add_mann_turbulence(self, L=29.4, ae23=1, Gamma=3.9, seed=1001, high_frq_compensation=True,
                            filenames=None,
                            no_grid_points=(16384, 32, 32), box_dimension=(6000, 100, 100),
                            dont_scale=False,
                            std_scaling=None):
        wind = self.add_section('wind')
        wind.turb_format = 1
        mann = wind.add_section('mann')
        if 'create_turb_parameters' in mann:
            mann.create_turb_parameters.values = [L, ae23, Gamma, seed, int(high_frq_compensation)]
        else:
            mann.add_line('create_turb_parameters', [L, ae23, Gamma, seed, int(high_frq_compensation)], "L, alfaeps, gamma, seed, highfrq compensation")
        if filenames is None:
            fmt = "mann_l%.1f_ae%.2f_g%.1f_h%d_%dx%dx%d_%.3fx%.2fx%.2f_s%04d%c.turb"
            import numpy as np
            dxyz = tuple(np.array(box_dimension) / no_grid_points)
            filenames = ["./turb/" + fmt % ((L, ae23, Gamma, high_frq_compensation) + no_grid_points + dxyz + (seed, uvw)) for uvw in ['u', 'v', 'w']]
        if isinstance(filenames, str):
            filenames = ["./turb/%s_s%04d%s.bin" % (filenames, seed, c) for c in ['u', 'v', 'w']]
        for filename, c in zip(filenames, ['u', 'v', 'w']):
            setattr(mann, 'filename_%s' % c, filename)
        for c, n, dim in zip(['u', 'v', 'w'], no_grid_points, box_dimension):
            setattr(mann, 'box_dim_%s' % c, "%d %.4f" % (n, dim / (n - 1)))
        if dont_scale:
            mann.dont_scale = 1
        else:
            try:
                del mann.dont_scale
            except KeyError:
                pass
        if std_scaling is not None:
            mann.std_scaling = "%f %f %f" % std_scaling
        else:
            try:
                del mann.std_scaling
            except KeyError:
                pass

    def add_turb_export(self, filename="export_%s.turb", samplefrq=None):
        exp = self.wind.add_section('turb_export', allow_duplicate=True)
        for uvw in 'uvw':
            exp.add_line('filename_%s' % uvw, [filename % uvw])
        sf = samplefrq or max(1, int(self.wind.mann.box_dim_u[1] / (self.wind.wsp[0] * self.deltat())))
        exp.samplefrq = sf
        if "time" in self.output:
            exp.time_start = self.output.time[0]
        else:
            exp.time_start = 0
        exp.nsteps = (self.simulation.time_stop[0] - exp.time_start[0]) / self.deltat()
        for vw in 'vw':
            exp.add_line('box_dim_%s' % vw, self.wind.mann['box_dim_%s' % vw].values)

    def import_dtu_we_controller_input(self, filename):
        dtu_we_controller = [dll for dll in self.dll if dll.name[0] == 'dtu_we_controller'][0]
        with open(filename) as fid:
            lines = fid.readlines()
        K_r1 = float(lines[1].replace("K = ", '').replace("[Nm/(rad/s)^2]", ''))
        Kp_r2 = float(lines[4].replace("Kp = ", '').replace("[Nm/(rad/s)]", ''))
        Ki_r2 = float(lines[5].replace("Ki = ", '').replace("[Nm/rad]", ''))
        Kp_r3 = float(lines[7].replace("Kp = ", '').replace("[rad/(rad/s)]", ''))
        Ki_r3 = float(lines[8].replace("Ki = ", '').replace("[rad/rad]", ''))
        KK = lines[9].split("]")
        KK1 = float(KK[0].replace("K1 = ", '').replace("[deg", ''))
        KK2 = float(KK[1].replace(", K2 = ", '').replace("[deg^2", ''))
        cs = dtu_we_controller.init
        cs.constant__11.values[1] = "%.6E" % K_r1
        cs.constant__12.values[1] = "%.6E" % Kp_r2
        cs.constant__13.values[1] = "%.6E" % Ki_r2
        cs.constant__16.values[1] = "%.6E" % Kp_r3
        cs.constant__17.values[1] = "%.6E" % Ki_r3
        cs.constant__21.values[1] = "%.6E" % KK1
        cs.constant__22.values[1] = "%.6E" % KK2
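The tuning-file line format assumed by `import_dtu_we_controller_input` (a `K = ` style prefix followed by the value and a unit in square brackets) is inferred from the `replace` calls above; the sample line below is constructed for illustration, not taken from a real controller tuning file.

```python
# One line of a DTU WE controller tuning file, as assumed above:
line = "K = 0.135000E+08 [Nm/(rad/s)^2]\n"

# Strip the prefix and the unit bracket; float() ignores surrounding whitespace.
K_r1 = float(line.replace("K = ", '').replace("[Nm/(rad/s)^2]", ''))
print(K_r1)  # 13500000.0
```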
class HTCExtensions(object):
    def get_shear(self):
        shear_type, parameter = self.wind.shear_format.values
        z0 = -self.wind.center_pos0[2]
        wsp = self.wind.wsp[0]
        if shear_type == 1:  # constant
            return lambda z: parameter
        elif shear_type == 3:
            return power_shear(parameter, z0, wsp)
        else:
            raise NotImplementedError
\ No newline at end of file
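For `shear_format 3`, `get_shear` returns a power-law profile built from the hub height and wind speed. A pure-Python stand-in for `wetb.wind.shear.power_shear` (the argument order `(alpha, z_ref, u_ref)` matches the calls in this diff; this sketch is not wetb's implementation):

```python
# Power-law shear: wsp(z) = u_ref * (z / z_ref) ** alpha
def power_shear(alpha, z_ref, u_ref):
    return lambda z: u_ref * (z / z_ref) ** alpha

shear = power_shear(0.2, 119, 10)   # alpha=0.2, hub height 119 m, 10 m/s at hub
print(shear(119))             # 10.0 at the reference height
print(round(shear(100), 4))   # the value the new test_get_shear asserts
```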
...@@ -18,8 +18,8 @@ from wetb.utils.cluster_tools.cluster_resource import unix_path_old

standard_library.install_aliases()
from collections import OrderedDict
from wetb.hawc2.htc_contents import HTCContents, HTCSection, HTCLine
from wetb.hawc2.htc_extensions import HTCDefaults, HTCExtensions
import os
from copy import copy

...@@ -27,7 +27,7 @@ from copy import copy

def fmt_path(path):
    return path.lower().replace("\\", "/")


class HTCFile(HTCContents, HTCDefaults, HTCExtensions):
    """Wrapper for HTC files

    Examples:
...@@ -197,21 +197,21 @@ class HTCFile(HTCContents, HTCDefaults):

        with open(filename, 'w', encoding='cp1252') as fid:
            fid.write(str(self))

    def set_name(self, name, subfolder=''):
        #if os.path.isabs(folder) is False and os.path.relpath(folder).startswith("htc" + os.path.sep):
        self.contents  # load if not loaded
        fmt_folder = lambda folder, subfolder: "./" + os.path.relpath(os.path.join(folder, subfolder)).replace("\\", "/")
        self.filename = os.path.abspath(os.path.join(self.modelpath, fmt_folder('htc', subfolder), "%s.htc" % name)).replace("\\", "/")
        if 'simulation' in self and 'logfile' in self.simulation:
            self.simulation.logfile = os.path.join(fmt_folder('log', subfolder), "%s.log" % name).replace("\\", "/")
            if 'animation' in self.simulation:
                self.simulation.animation = os.path.join(fmt_folder('animation', subfolder), "%s.dat" % name).replace("\\", "/")
            if 'visualization' in self.simulation:
                self.simulation.visualization = os.path.join(fmt_folder('visualization', subfolder), "%s.hdf5" % name).replace("\\", "/")
        elif 'test_structure' in self and 'logfile' in self.test_structure:  # hawc2aero
            self.test_structure.logfile = os.path.join(fmt_folder('log', subfolder), "%s.log" % name).replace("\\", "/")
        self.output.filename = os.path.join(fmt_folder('res', subfolder), "%s" % name).replace("\\", "/")
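The new `set_name(name, subfolder='')` signature replaces per-folder keyword arguments with a single optional subfolder that is appended uniformly under `htc/`, `log/`, `res/`, `animation/`, and `visualization/`. The folder helper in isolation (paths are relative to the model path):

```python
import os

# Same lambda as in set_name above, shown standalone.
fmt_folder = lambda folder, subfolder: "./" + os.path.relpath(os.path.join(folder, subfolder)).replace("\\", "/")

print(fmt_folder('htc', ''))       # ./htc
print(fmt_folder('htc', 'dlc12'))  # ./htc/dlc12
print(fmt_folder('res', 'dlc12'))  # ./res/dlc12
```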
    def set_time(self, start=None, stop=None, step=None):
        self.contents  # load if not loaded

...@@ -262,6 +262,14 @@ class HTCFile(HTCContents, HTCDefaults):

        if 'soil' in self:
            if 'soil_element' in self.soil:
                files.append(self.soil.soil_element.get('datafile', [None])[0])
        try:
            dtu_we_controller = self.dll.get_subsection_by_name('dtu_we_controller')
            theta_min = dtu_we_controller.init.constant__5[1]
            files.append(os.path.join(os.path.dirname(dtu_we_controller.filename[0]), "wpdata.%d" % theta_min).replace("\\", "/"))
        except:
            pass

        try:
            files.append(self.force.dll.dll[0])
        except:
...@@ -362,16 +370,18 @@ class HTCFile(HTCContents, HTCDefaults):

    def deltat(self):
        return self.simulation.newmark.deltat[0]

#
#     def get_body(self, name):
#         lst = [b for b in self.new_htc_structure if b.name_=="main_body" and b.name[0]==name]
#         if len(lst)==1:
#             return lst[0]
#         else:
#             if len(lst)==0:
#                 raise ValueError("Body '%s' not found"%name)
#             else:
#                 raise NotImplementedError()
#

class H2aeroHTCFile(HTCFile):
    def __init__(self, filename=None, modelpath=None):
......
...@@ -10,62 +10,168 @@ from __future__ import absolute_import

from builtins import range
from io import open
from future import standard_library
from wetb.hawc2.htc_file import HTCFile
standard_library.install_aliases()
import numpy as np
import os
class ShearFile(object):
    """HAWC2 user defined shear file

    Examples:
    ---------
    >>> sf = ShearFile([-55, 55], [30, 100, 160], u=np.array([[0.7, 1, 1.3], [0.7, 1, 1.3]]).T)
    >>> print (sf.uvw([-55, 0], [65, 135]))  # uvw factors
    [array([ 0.85 ,  1.175]), array([ 0.,  0.]), array([ 0.,  0.])]
    >>> from wetb.wind.shear import power_shear
    >>> print (sf.uvw([-55, 0], [65, 135], shear=power_shear(.2, 100, 10)))  # uvw wind speeds
    [array([  7.79832978,  12.47684042]), array([ 0.,  0.]), array([ 0.,  0.])]
    >>> sf.save('test.dat')
    """

    def __init__(self, v_positions, w_positions, u=None, v=None, w=None, shear=None):
        """
        Parameters
        ----------
        v_positions : array_like
            lateral coordinates
        w_positions : array_like
            vertical coordinates
        u : array_like, optional
            shear_u component, normalized with U_mean\n
            shape must be (#w_positions, #v_positions) or (#w_positions,)
        v : array_like, optional
            shear_v component, normalized with U_mean\n
            shape must be (#w_positions, #v_positions) or (#w_positions,)
        w : array_like, optional
            shear_w component, normalized with U_mean\n
            shape must be (#w_positions, #v_positions) or (#w_positions,)
        """
        self.v_positions = v_positions
        self.w_positions = w_positions
        shape = (len(w_positions), len(v_positions))
        uvw = [u, v, w]
        for i in range(3):
            if uvw[i] is None:
                if i == 0:
                    uvw[i] = np.ones((shape))
                else:
                    uvw[i] = np.zeros((shape))
            else:
                uvw[i] = np.array(uvw[i])
                if len(uvw[i].shape) == 1 and uvw[i].shape[0] == shape[0]:
                    uvw[i] = np.repeat(np.atleast_2d(uvw[i]).T, shape[1], 1)

                assert uvw[i].shape == shape, (i, uvw[i].shape, shape)
        self.u, self.v, self.w = uvw
        self.shear = shear

    def uvw(self, v, w, shear=None):
        """Calculate u,v,w wind speeds at position(s) (v,w)

        Parameters
        ----------
        v : int, float or array_like
            v-coordinate(s)
        w : int, float or array_like
            w-coordinates(s)
        shear : function or None
            if function: f(height)->wsp
            if None: self.shear is used, if not None.
            Otherwise wind speed factors instead of absolute wind speeds are returned

        Returns
        -------
        u,v,w
            wind speed(s) or wind speed factor(s) if shear not defined
        """
        shear = shear or self.shear or (lambda z: 1)
        from scipy.interpolate import RegularGridInterpolator
        wv = np.array([w, v]).T
        return [RegularGridInterpolator((self.w_positions, self.v_positions), uvw)(wv) * shear(w)
                for uvw in [self.u, self.v, self.w] if uvw is not None]

    def save(self, filename):
        """Save user defined shear file

        Parameters
        ----------
        filename : str:
            Filename
        """
        # exist_ok does not exist in Python27
        filename = os.path.abspath(filename)
        if not os.path.exists(os.path.dirname(filename)):
            os.makedirs(os.path.dirname(filename))  # , exist_ok=True)
        with open(filename, 'w', encoding='utf-8') as fid:
            fid.write(" # autogenerated shear file\n")
            fid.write(" %d %d\n" % (len(self.v_positions), len(self.w_positions)))
            for i, (l, vuw) in enumerate(zip(['v', 'u', 'w'], [self.v, self.u, self.w])):
                fid.write(" # shear %s component\n " % l)
                fid.write("\n ".join([" ".join(["%.10f" % v for v in r]) for r in vuw]))
                fid.write("\n")
            for yz, coor in (['v', self.v_positions], ['w', self.w_positions]):
                fid.write(" # %s coordinates\n " % yz)
                fid.write("\n ".join("%.10f" % v for v in coor))
                fid.write("\n")

    @staticmethod
    def load(filename):
        """Load shear file

        Parameters
        ----------
        filename : str
            Filename

        Returns
        -------
        shear file : ShearFile-object
        """
        with open(filename) as fid:
            lines = fid.readlines()
        no_V, no_W = map(int, lines[1].split())
        v, u, w = [np.array([row.split() for row in lines[3 + (no_W + 1) * i:3 + (no_W + 1) * (i + 1) - 1]], dtype=float) for i in range(3)]
        v_positions = np.array(lines[3 + (no_W + 1) * 3:3 + (no_W + 1) * 3 + no_V], dtype=float)
        w_positions = np.array(lines[3 + (no_W + 1) * 3 + (no_V + 1):3 + (no_W + 1) * 3 + (no_V + 1) + no_W], dtype=float)
        return ShearFile(v_positions, w_positions, u, v, w)

    @staticmethod
    def load_from_htc(htc_file):
        """Load shear file from HTC file including shear function

        Parameters
        ----------
        htc_file : str or HTCFile
            Filename or HTCFile

        Returns
        -------
        shear file : ShearFile-object
        """
        if isinstance(htc_file, str):
            htc_file = HTCFile(htc_file)
        user_defined_shear_filename = os.path.join(htc_file.modelpath, htc_file.wind.user_defined_shear[0])
        shear_file = ShearFile.load(user_defined_shear_filename)
        shear_file.shear = htc_file.get_shear()
        return shear_file


def save(filename, v_coordinates, w_coordinates, u=None, v=None, w=None):
    """Save shear file (deprecated)"""
    ShearFile(v_coordinates, w_coordinates, u, v, w).save(filename)


if __name__ == "__main__":
    from wetb.wind.shear import power_shear
    sf = ShearFile([-55, 55], [30, 100, 160], u=np.array([[0.7, 1, 1.3], [0.7, 1, 1.3]]).T)
    print(sf.uvw([-55, 0], [65, 135]))  # uvw factors
    # [array([ 0.85 ,  1.175]), array([ 0.,  0.]), array([ 0.,  0.])]
    print(sf.uvw([-55, 0], [65, 135], shear=power_shear(.2, 100, 10)))  # uvw wind speeds
    # [array([  7.79832978,  12.47684042]), array([ 0.,  0.]), array([ 0.,  0.])]
    sf.save('test.dat')
# autogenerated shear file
2 3
# shear v component
0.0000000000 0.0000000000
0.0000000000 0.0000000000
0.0000000000 0.0000000000
# shear u component
0.8000000000 0.6000000000
1.0000000000 1.0000000000
1.2000000000 1.4000000000
# shear w component
0.0000000000 0.0000000000
0.0000000000 0.0000000000
0.0000000000 0.0000000000
# v coordinates
-55.0000000000
55.0000000000
# w coordinates
30.0000000000
100.0000000000
160.0000000000
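For reference, the u-factors in a shear file like the one above are interpolated along the height (w) axis. A minimal standalone sketch of that behaviour, using the grid from the `__main__` example (`u_factor` is a hypothetical helper, not the wetb implementation, and assumes simple piecewise-linear interpolation):

```python
import numpy as np

# heights and u-factors from the ShearFile example above
w_positions = np.array([30.0, 100.0, 160.0])
u_factors = np.array([0.7, 1.0, 1.3])

def u_factor(height):
    """Piecewise-linear interpolation of the u shear factor."""
    return float(np.interp(height, w_positions, u_factors))

print(u_factor(65.0))   # midway between 30 m and 100 m -> 0.85
print(u_factor(135.0))  # matches the 1.175 printed in the example above
```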
@@ -260,6 +260,7 @@ begin wind;
   tower_shadow_method 3; 0=none, 1=potential flow, 2=jet
   scale_time_start 0;
   wind_ramp_factor 0.0 100 0.8 1.0;
+  user_defined_shear ./data/user_shear.dat;
   ; iec_gust;
   ;
   ; wind_ramp_abs 400.0 401.0 0.0 1.0; wsp. after the step: 5.0
...
'''
Created on 17/07/2014
@author: MMPE
'''
from __future__ import print_function
from __future__ import unicode_literals
from __future__ import division
from __future__ import absolute_import
from io import open
from builtins import str
from builtins import zip
from future import standard_library
standard_library.install_aliases()
import os
import unittest
from datetime import datetime
from wetb.hawc2.htc_file import HTCFile, HTCLine
import numpy as np
tfp = os.path.join(os.path.dirname(__file__), 'test_files/htcfiles/') # test file path
class TestHtcFile(unittest.TestCase):
def test_get_shear(self):
htc = HTCFile(tfp+'test.htc')
self.assertEqual(htc.get_shear()(100), 10*(100/119)**.2)
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
unittest.main()
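The expected value in `test_get_shear` is just the IEC power-law wind profile. A standalone sketch of that formula (mirroring the assumed call signature of `wetb.wind.shear.power_shear`):

```python
def power_shear(alpha, z_ref, u_ref):
    """Return u(z) for a power-law wind profile: u_ref * (z/z_ref)**alpha."""
    return lambda z: u_ref * (z / z_ref) ** alpha

# the profile checked by the test: alpha=0.2, 10 m/s at 119 m hub height
shear = power_shear(.2, 119, 10)
print(shear(100))  # == 10 * (100/119)**.2
```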
@@ -107,12 +107,15 @@ class TestHtcFile(unittest.TestCase):
     def test_htcfile_setname(self):
         htcfile = HTCFile(self.testfilepath + "test.htc")
-        htcfile.set_name("mytest", htc_folder="htcfiles")
-        self.assertEqual(os.path.relpath(htcfile.filename, self.testfilepath), r'mytest.htc')
+        htcfile.set_name("mytest")
+        self.assertEqual(os.path.relpath(htcfile.filename, self.testfilepath).replace("\\","/"), r'../htc/mytest.htc')
         self.assertEqual(htcfile.simulation.logfile[0], './log/mytest.log')
         self.assertEqual(htcfile.output.filename[0], './res/mytest')
+        htcfile.set_name("mytest", 'subfolder')
+        self.assertEqual(os.path.relpath(htcfile.filename, self.testfilepath).replace("\\","/"), r'../htc/subfolder/mytest.htc')
+        self.assertEqual(htcfile.simulation.logfile[0], './log/subfolder/mytest.log')
+        self.assertEqual(htcfile.output.filename[0], './res/subfolder/mytest')

     def test_set_time(self):
         htcfile = HTCFile(self.testfilepath + "test.htc")
@@ -241,6 +244,7 @@ end turb_export;"""
                       './control/mech_brake.dll',
                       './control/servo_with_limits.dll',
                       './control/towclearsens.dll',
+                      './data/user_shear.dat',
                       self.testfilepath.replace("\\","/") + 'test.htc'
                       ]:
             try:
@@ -248,6 +252,9 @@ end turb_export;"""
             except ValueError:
                 raise ValueError(f + " is not in list")
         self.assertFalse(input_files)
+        htcfile = HTCFile(self.testfilepath + "DTU_10MW_RWT.htc")
+        self.assertTrue('./control/wpdata.100' in htcfile.input_files())

     def test_input_files2(self):
         htcfile = HTCFile(self.testfilepath + "ansi.htc",'../')
@@ -307,7 +314,6 @@ end turb_export;"""
         htc = HTCFile(self.testfilepath + "test_2xoutput.htc","../")
         self.assertEqual(len(htc.res_file_lst()), 4)

 if __name__ == "__main__":
     #import sys;sys.argv = ['', 'Test.testName']
...
@@ -9,18 +9,19 @@ from __future__ import division
 from __future__ import absolute_import
 from io import open
 from future import standard_library
+from wetb.hawc2.shear_file import ShearFile
 standard_library.install_aliases()
 import unittest

 from wetb.hawc2 import shear_file
 import numpy as np
 import os
 import shutil
-testfilepath = 'test_files/'
+tfp = os.path.join(os.path.dirname(__file__), 'test_files/')

 class TestShearFile(unittest.TestCase):

-    def test_shearfile(self):
-        f = testfilepath + "tmp_shearfile1.dat"
+    def test_shearfile_save(self):
+        f = tfp + "tmp_shearfile1.dat"
         shear_file.save(f, [-55, 55], [30, 100, 160] , u=np.array([[0.7, 1, 1.3], [0.7, 1, 1.3]]).T)
         with open(f) as fid:
             self.assertEqual(fid.read(),
@@ -50,7 +51,7 @@ class TestShearFile(unittest.TestCase):

     def test_shearfile2(self):
-        f = testfilepath + "tmp_shearfile2.dat"
+        f = tfp + "tmp_shearfile2.dat"
         shear_file.save(f, [-55, 55], [30, 100, 160] , u=np.array([0.7, 1, 1.3]).T)
         with open(f) as fid:
             self.assertEqual(fid.read(),
@@ -79,10 +80,20 @@ class TestShearFile(unittest.TestCase):
         os.remove(f)

     def test_shear_makedirs(self):
-        f = testfilepath + "shear/tmp_shearfile2.dat"
+        f = tfp + "shear/tmp_shearfile2.dat"
         shear_file.save(f, [-55, 55], [30, 100, 160] , u=np.array([0.7, 1, 1.3]).T)
-        shutil.rmtree(testfilepath + "shear")
+        shutil.rmtree(tfp + "shear")
+
+    def test_shear_load(self):
+        shear_file = ShearFile.load(tfp+"data/user_shear.dat")
+        np.testing.assert_array_equal(shear_file.w_positions, [30,100,160])
+        self.assertEqual(shear_file.uvw(0,65)[0],.85)
+        self.assertEqual(shear_file.uvw(-55,65)[0],.9)
+        np.testing.assert_array_equal(shear_file.uvw([0,-55],[65,65])[0],[.85,.9])
+        shear_file = ShearFile.load_from_htc(tfp+"htcfiles/test.htc")
+        np.testing.assert_array_equal(shear_file.w_positions, [30,100,160])
+        np.testing.assert_array_almost_equal(shear_file.uvw([0,-55],[65,65])[0],np.array([.85,.9])*8.860807038)

 if __name__ == "__main__":
     #import sys;sys.argv = ['', 'Test.test_shearfile']
     unittest.main()
@@ -79,32 +79,33 @@ class GeneralDLC(object):
                 cases_len.append(len(v))

         cases_index = multi_for(list(map(range, cases_len)))

-#        for irow, row in enumerate(cases_index):
-#            counter = floor(irow/len(variables['[wsp]']))+1
-#            for icol, col in enumerate(row):
-#                if variables_order[icol] == '[seed]':
-#                    value = '%4.4i' % (1000*counter + row[variables_order.index('[wsp]')]+1)
-#                elif variables_order[icol] == '[wave_seed]': #shfe: wave_seed
-#                    value = '%4.4i' % (1000*counter + row[variables_order.index('[wsp]')]+1)
-#                else:
-#                    value = variables[variables_order[icol]][col]
-#                if not isinstance(value, float) and not isinstance(value, int):
-#                    value = str(value)
-#                dlc[variables_order[icol]].append(value)
+        # when no seeds are used, otherwise i_seed is not set
+        i_seed = -1
+        if '[wsp]' in variables_order:
+            i_wsp = variables_order.index('[wsp]')
+            len_wsp = len(variables['[wsp]'])
+        else:
+            raise ValueError('Missing VARIABLE (V) [wsp] tag!')
+        if '[seed]' in variables_order:
+            i_seed = variables_order.index('[seed]')
+        if '[wave_seed]' in variables_order:
+            i_wave_seed = variables_order.index('[wave_seed]')
+        if i_seed > i_wsp:
+            raise ValueError('column [seed] should come BEFORE [wsp] !!')

         for irow, row in enumerate(cases_index):
-            counter = floor(irow/len(variables['[wsp]']))+1
             for icol, col in enumerate(row):
                 if variables_order[icol] == '[seed]':
-                    value = '%4.4i' % (1000*counter + row[variables_order.index('[wsp]')]+1)
+                    counter = floor(irow/len_wsp) + 1
+                    value = '%4.4i' % (1000*counter + row[i_wsp] + 1)
                 elif variables_order[icol] == '[wave_seed]':
-                    value = '%4.4i' % ( 100*(row[variables_order.index('[wsp]')]+1) + \
-                                       row[variables_order.index('[wave_seed]')]+1)
+                    value = '%4.4i' % (100*(row[i_wsp]+1) + row[i_wave_seed] + 1)
+#                    value = '%4.4i' % (1000*counter + row[i_wsp] + 101)
 #                    value = '%4.4i' % (irow+1)
-#                    value = '%4.4i' % (10000*(row[variables_order.index('[wave_dir]')]+1) + \
-#                                       1000*(row[variables_order.index('[Hs]')]+1) + \
-#                                       10*(row[variables_order.index('[Tp]')]+1) +\
-#                                       row[variables_order.index('[seed]')]+1)
+#                    value = '%4.4i' % (10000*(row[i_wave_dir])] + 1) + \
+#                                      1000*(row[i_Hs])] + 1) + \
+#                                      10*(row[i_Tp])] + 1) +\
+#                                      row[i_seed])] + 1)
                 else:
                     value = variables[variables_order[icol]][col]
@@ -153,8 +154,8 @@ class GeneralDLC(object):
         # specify the precision of the tag as used in the formulas
         # this does NOT affect the precision of the tag itself, only when used
         # in a formula based tag.
-        formats = {'[wsp]':'%2.2i', '[gridgustdelay]':'%2.2i',
-                   '[wdir]':'%3.3i', '[G_phi0]':'%3.3i',
+        formats = {'[wsp]':'%02i', '[gridgustdelay]':'%02i',
+                   '[wdir]':'%03i', '[G_phi0]':'%03i',
                    '[sign]':'%s',
                    '[Hs]':'%05.02f', '[Tp]':'%05.02f'}
@@ -168,7 +169,7 @@ class GeneralDLC(object):
             try:
                 fmt = formats[key]
             except KeyError:
-                fmt = '%4.4i'
+                fmt = '%04i'
             try:
                 value = float(dlc[key][i])
             except ValueError:
@@ -209,14 +210,15 @@ class GenerateDLCCases(GeneralDLC):
     """

-    def execute(self, filename='DLCs.xlsx', folder=''):
+    def execute(self, filename='DLCs.xlsx', folder='', isheets=None):

         book = xlrd.open_workbook(filename)
-        nsheets = book.nsheets
+        if isheets is None:
+            isheets = list(range(1, book.nsheets))

         # Loop through all the sheets. Each sheet correspond to a DLC.
-        for isheet in range(1, nsheets):
+        for isheet in isheets:

             # Read all the initialization constants and functions in the
             # first sheet
@@ -270,6 +272,7 @@ class GenerateDLCCases(GeneralDLC):
             self.add_constants_tag(dlc, constants)
             self.add_formulas(dlc, formulas)
             self.add_formulas(dlc, general_functions)
+            # TODO: before eval, check if all tags in formula's are present
             self.eval_formulas(dlc)
             df = pd.DataFrame(dlc)
             if not os.path.exists(folder):
@@ -277,29 +280,6 @@ class GenerateDLCCases(GeneralDLC):
             df.to_excel(os.path.join(folder, sheet.name+'.xlsx'), index=False)

-class RunTest():
-    """
-    Class to perform basic testing of the GenerateDLCCases class. It writes the
-    spreadsheets and compare them with a reference set.
-    """
-    def execute(self):
-        from pandas.util.testing import assert_frame_equal
-        a = GenerateDLCCases()
-        a.execute()
-        book = xlrd.open_workbook('DLCs.xlsx')
-        nsheets = book.nsheets
-        for isheet in range(1, nsheets):
-            sheet = book.sheets()[isheet]
-            print('Sheet #%i' % isheet, sheet.name)
-            book1 = pd.read_excel('Reference/'+sheet.name+'.xlsx')
-            book2 = pd.read_excel(sheet.name+'.xls')
-            book2 = book2[book1.columns]
-            assert_frame_equal(book1, book2, check_dtype=False)

 if __name__ == '__main__':
     parser = ArgumentParser(description = "generator of DLB spreadsheets")
...
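The `[seed]` and `[wave_seed]` numbering introduced in this diff can be illustrated in isolation. A hypothetical mini-example (assuming three `[wsp]` values per seed block, indices as in the loop above):

```python
from math import floor

len_wsp = 3  # number of [wsp] values in the sheet

def seed_tag(irow, wsp_index):
    # one block of 1000 per full pass over all wind speeds
    counter = floor(irow / len_wsp) + 1
    return '%4.4i' % (1000 * counter + wsp_index + 1)

def wave_seed_tag(wsp_index, wave_seed_index):
    # wind speed index selects the hundreds, wave seed index the units
    return '%4.4i' % (100 * (wsp_index + 1) + wave_seed_index + 1)

print(seed_tag(0, 0))          # first row, first wind speed -> '1001'
print(seed_tag(3, 0))          # second pass over wind speeds -> '2001'
print(wave_seed_tag(1, 4))     # second wind speed, fifth wave seed -> '0205'
```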
@@ -21,7 +21,7 @@ import gc
 import numpy as np
 import matplotlib.pyplot as plt
-import matplotlib as mpl
+#import matplotlib as mpl
 #from matplotlib.figure import Figure
 #from matplotlib.backends.backend_qt4agg import FigureCanvasQTAgg as FigCanvas
 #from scipy import interpolate as interp
@@ -45,8 +45,8 @@ plt.rc('xtick', labelsize=10)
 plt.rc('ytick', labelsize=10)
 plt.rc('axes', labelsize=12)
 # do not use tex on Gorm and or Jess
-if not socket.gethostname()[:2] in ['g-', 'je', 'j-']:
-    plt.rc('text', usetex=True)
+#if not socket.gethostname()[:2] in ['g-', 'je', 'j-']:
+#    plt.rc('text', usetex=True)
 plt.rc('legend', fontsize=11)
 plt.rc('legend', numpoints=1)
 plt.rc('legend', borderaxespad=0)
@@ -55,6 +55,9 @@ plt.rc('legend', borderaxespad=0)
 def merge_sim_ids(sim_ids, post_dirs, post_dir_save=False):
     """
     """
+    cols_extra = ['[run_dir]', '[res_dir]', '[wdir]', '[DLC]', '[Case folder]']
+
     # map the run_dir to the same order as the post_dirs, labels
     run_dirs = []
     # avoid saving merged cases if there is only one!
@@ -80,13 +83,18 @@ def merge_sim_ids(sim_ids, post_dirs, post_dir_save=False):
             else:
                 wsp = '[Windspeed]'
             # columns we want to add from cc.cases (cases dict) to stats
-            cols_cc = set(['[run_dir]', wsp, '[res_dir]', '[wdir]', '[DLC]'])
+            cols_cc = set(cols_extra + [wsp])
             # do not add column twice, some might already be in df stats
             add_cols = list(cols_cc - set(df_stats.columns))
             add_cols.append('[case_id]')
             dfc = dfc[add_cols]
             df_stats = pd.merge(df_stats, dfc, on='[case_id]')
-            df_stats.rename(columns={wsp:'[Windspeed]'}, inplace=True)
+            # FIXME: this is very messy, we can end up with both [wsp] and
+            # [Windspeed] columns
+            if '[Windspeed]' in df_stats.columns and '[wsp]' in df_stats.columns:
+                df_stats.drop('[wsp]', axis=1, inplace=True)
+            if wsp != '[Windspeed]':
+                df_stats.rename(columns={wsp:'[Windspeed]'}, inplace=True)

             # map the run_dir to the same order as the post_dirs, labels
             run_dirs.append(df_stats['[run_dir]'].unique()[0])
@@ -120,8 +128,8 @@ def merge_sim_ids(sim_ids, post_dirs, post_dir_save=False):
             del df_stats, _, cc
             gc.collect()
         # and load the reduced combined set
-        print('loading merged stats: %s' % fpath)
-        df_stats = pd.read_hdf(fpath, 'table')
+        print('loading merged stats: %s' % fmerged)
+        df_stats = pd.read_hdf(fmerged, 'table')
     else:
         sim_id = sim_ids
         sim_ids = [sim_id]
@@ -135,18 +143,21 @@ def merge_sim_ids(sim_ids, post_dirs, post_dir_save=False):
         # stats has only a few columns identifying the different cases
         # add some more for selecting them
         dfc = cc.cases2df()
-        if 'wsp' in dfc.columns:
+        if '[wsp]' in dfc.columns:
             wsp = '[wsp]'
         else:
             wsp = '[Windspeed]'
         # columns we want to add from cc.cases (cases dict) to stats
-        cols_cc = set(['[run_dir]', wsp, '[res_dir]', '[wdir]', '[DLC]'])
+        cols_cc = set(cols_extra + [wsp])
         # do not add column twice, some might already be in df stats
         add_cols = list(cols_cc - set(df_stats.columns))
         add_cols.append('[case_id]')
        dfc = dfc[add_cols]
         df_stats = pd.merge(df_stats, dfc, on='[case_id]')
-        df_stats.rename(columns={wsp:'[Windspeed]'}, inplace=True)
+        if '[Windspeed]' in df_stats.columns and '[wsp]' in df_stats.columns:
+            df_stats.drop('[wsp]', axis=1, inplace=True)
+        if wsp != '[Windspeed]':
+            df_stats.rename(columns={wsp:'[Windspeed]'}, inplace=True)

     return run_dirs, df_stats
@@ -155,25 +166,27 @@ def merge_sim_ids(sim_ids, post_dirs, post_dir_save=False):
 # =============================================================================
 def plot_stats2(sim_ids, post_dirs, plot_chans, fig_dir_base=None, labels=None,
-                post_dir_save=False, dlc_ignore=['00'], figsize=(8,6)):
+                post_dir_save=False, dlc_ignore=['00'], figsize=(8,6),
+                eps=False, ylabels=None):
     """
     Map which channels have to be compared
     """
     # reduce required memory, only use following columns
     cols = ['[run_dir]', '[DLC]', 'channel', '[res_dir]', '[Windspeed]',
-            'mean', 'max', 'min', 'std', '[wdir]']
+            'mean', 'max', 'min', 'std', '[wdir]', '[Case folder]']

     run_dirs, df_stats = merge_sim_ids(sim_ids, post_dirs,
                                        post_dir_save=post_dir_save)

     plot_dlc_stats(df_stats, plot_chans, fig_dir_base, labels=labels,
-                   figsize=figsize, dlc_ignore=dlc_ignore)
+                   figsize=figsize, dlc_ignore=dlc_ignore, eps=eps,
+                   ylabels=ylabels)

 def plot_dlc_stats(df_stats, plot_chans, fig_dir_base, labels=None,
                    figsize=(8,6), dlc_ignore=['00'], run_dirs=None,
-                   sim_ids=[]):
+                   sim_ids=[], eps=False, ylabels=None):
     """Create for each DLC an overview plot of the statistics.

     df_stats required columns:
@@ -205,7 +218,7 @@ def plot_dlc_stats(df_stats, plot_chans, fig_dir_base, labels=None,
     figsize : tuple, default=(8,6)

-    dlc_ignore : list, default=['00']
+    dlc_ignore : list, default=['dlc00']
         By default all but dlc00 (stair case, wind ramp) are plotted. Add
         more dlc numbers here if necessary.
@@ -225,7 +238,8 @@ def plot_dlc_stats(df_stats, plot_chans, fig_dir_base, labels=None,
     mfcs3 = ['r', 'w']
     stds = ['r', 'b']

-    required = ['[DLC]', '[run_dir]', '[wdir]', '[Windspeed]', '[res_dir]']
+    required = ['[DLC]', '[run_dir]', '[wdir]', '[Windspeed]', '[res_dir]',
+                '[Case folder]']
     cols = df_stats.columns
     for col in required:
         if col not in cols:
@@ -243,9 +257,15 @@ def plot_dlc_stats(df_stats, plot_chans, fig_dir_base, labels=None,
             sim_ids.append(run_dir.split(os.path.sep)[-2])

     # first, take each DLC appart
-    for dlc_name, gr_dlc in df_stats.groupby(df_stats['[DLC]']):
+    for gr_name, gr_dlc in df_stats.groupby(df_stats['[Case folder]']):
+        dlc_name = gr_name
+        if dlc_name[:3].lower() == 'dlc':
+            # FIXME: this is messy since this places a hard coded dependency
+            # between [Case folder] and [Case id.] when the tag [DLC] is
+            # defined in dlcdefs.py
+            dlc_name = gr_name.split('_')[0]
         # do not plot the stats for dlc00
-        if dlc_name in dlc_ignore:
+        if dlc_name.lower() in dlc_ignore:
             continue
         # cycle through all the target plot channels
         for ch_dscr, ch_names in plot_chans.items():
@@ -253,8 +273,9 @@ def plot_dlc_stats(df_stats, plot_chans, fig_dir_base, labels=None,
             # identical, we need to manually pick them.
             # figure file name will be the first channel
             if isinstance(ch_names, list):
+                ch_name = ch_names[0]
                 df_chan = gr_dlc[gr_dlc.channel == ch_names[0]]
-                fname_base = ch_names[0].replace(' ', '_')
+                fname_base = ch_names[0]#.replace(' ', '_')
                 try:
                     df2 = gr_dlc[gr_dlc.channel == ch_names[1]]
                     df_chan = pd.concat([df_chan, df2], ignore_index=True)
@@ -262,9 +283,9 @@ def plot_dlc_stats(df_stats, plot_chans, fig_dir_base, labels=None,
                     pass
             else:
                 ch_name = ch_names
-                ch_names = [ch_name]
+                ch_names = [ch_names]
                 df_chan = gr_dlc[gr_dlc.channel == ch_names]
-                fname_base = ch_names.replace(' ', '_')
+                fname_base = ch_names#.replace(' ', '_')
             # if not, than we are missing a channel description, or the channel
             # is simply not available in the given result set
@@ -288,7 +309,7 @@ def plot_dlc_stats(df_stats, plot_chans, fig_dir_base, labels=None,
             elif len(ch_names) > 1 and len(lens)==2 and lens[1] < 1:
                 continue

-            print('start plotting: %s %s' % (str(dlc_name).ljust(7), ch_dscr))
+            print('start plotting: %s %s' % (dlc_name.ljust(10), ch_dscr))

             fig, axes = mplutils.make_fig(nrows=1, ncols=1,
                                           figsize=figsize, dpi=120)
@@ -318,8 +339,10 @@ def plot_dlc_stats(df_stats, plot_chans, fig_dir_base, labels=None,
             # for clarity, set off-set on wind speed when comparing two DLB's
             if len(lens)==2:
                 windoffset = [-0.2, 0.2]
+                dirroffset = [-5, 5]
             else:
                 windoffset = [0]
+                dirroffset = [0]
             # in case of a fully empty plot xlims will remain None and there
             # is no need to save the plot
             xlims = None
@@ -338,8 +361,8 @@ def plot_dlc_stats(df_stats, plot_chans, fig_dir_base, labels=None,
 #                sid_names.append(sid_name)
                 print('   sim_id/label:', sid_name)
                 # FIXME: will this go wrong in PY3?
-                if str(dlc_name) in ['61', '62']:
-                    xdata = gr_ch_dlc_sid['[wdir]'].values
+                if dlc_name.lower() in ['dlc61', 'dlc62']:
+                    xdata = gr_ch_dlc_sid['[wdir]'].values + dirroffset[ii]
                     xlabel = 'wind direction [deg]'
                     xlims = [0, 360]
                 else:
@@ -354,12 +377,12 @@ def plot_dlc_stats(df_stats, plot_chans, fig_dir_base, labels=None,
                     lab1 = 'mean'
                     lab2 = 'min'
                     lab3 = 'max'
-                    lab4 = 'std'
+#                    lab4 = 'std'
                 else:
                     lab1 = 'mean %s' % sid_name
                     lab2 = 'min %s' % sid_name
                     lab3 = 'max %s' % sid_name
-                    lab4 = 'std %s' % sid_name
+#                    lab4 = 'std %s' % sid_name
                 mfc1 = mfcs1[ii]
                 mfc2 = mfcs2[ii]
                 mfc3 = mfcs3[ii]
@@ -393,13 +416,18 @@ def plot_dlc_stats(df_stats, plot_chans, fig_dir_base, labels=None,
             ax.grid()
             ax.set_xlim(xlims)
-            leg = ax.legend(loc='best', ncol=3)
+            leg = ax.legend(bbox_to_anchor=(1, 1), loc='lower right', ncol=3)
             leg.get_frame().set_alpha(0.7)
-            ax.set_title(r'{DLC%s} %s' % (dlc_name, ch_dscr))
+#            ax.set_title(r'{%s} %s' % (dlc_name.replace('_', '\\_'), ch_dscr))
+#            fig.suptitle(r'{%s} %s' % (dlc_name.replace('_', '\\_'), ch_dscr))
+            fig.suptitle('%s %s' % (dlc_name, ch_dscr))
             ax.set_xlabel(xlabel)
+            if ylabels is not None:
+                ax.set_ylabel(ylabels[ch_name])
             fig.tight_layout()
-            fig.subplots_adjust(top=0.92)
-            fig_path = os.path.join(fig_dir, 'dlc%s' % dlc_name)
+            spacing = 0.92 - (0.065 * (ii + 1))
+            fig.subplots_adjust(top=spacing)
+            fig_path = os.path.join(fig_dir, dlc_name)
             if len(sim_ids)==1:
                 fname = fname_base + '.png'
             else:
@@ -408,6 +436,8 @@ def plot_dlc_stats(df_stats, plot_chans, fig_dir_base, labels=None,
                 os.makedirs(fig_path)
             fig_path = os.path.join(fig_path, fname)
             fig.savefig(fig_path)#.encode('latin-1')
+            if eps:
+                fig.savefig(fig_path.replace('.png', '.eps'))
             fig.clear()
             print('saved: %s' % fig_path)
...
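The grouping change above derives the DLC label from the `[Case folder]` tag instead of `[DLC]`. That mapping can be sketched standalone (folder names below are hypothetical):

```python
def folder_to_dlc(case_folder):
    # folders named like 'dlc12_iec61400-1ed3' collapse to 'dlc12';
    # anything not starting with 'dlc' is kept as-is
    if case_folder[:3].lower() == 'dlc':
        return case_folder.split('_')[0]
    return case_folder

print(folder_to_dlc('dlc12_iec61400-1ed3'))  # 'dlc12'
print(folder_to_dlc('remote'))               # 'remote'
```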
...@@ -265,7 +265,8 @@ def variable_tag_func_mod1(master, case_id_short=False): ...@@ -265,7 +265,8 @@ def variable_tag_func_mod1(master, case_id_short=False):
def launch_dlcs_excel(sim_id, silent=False, verbose=False, pbs_turb=False, def launch_dlcs_excel(sim_id, silent=False, verbose=False, pbs_turb=False,
runmethod=None, write_htc=True, zipchunks=False, runmethod=None, write_htc=True, zipchunks=False,
walltime='04:00:00', postpro_node=False): walltime='04:00:00', postpro_node=False,
dlcs_dir='htc/DLCs'):
""" """
Launch load cases defined in Excel files Launch load cases defined in Excel files
""" """
...@@ -279,7 +280,6 @@ def launch_dlcs_excel(sim_id, silent=False, verbose=False, pbs_turb=False, ...@@ -279,7 +280,6 @@ def launch_dlcs_excel(sim_id, silent=False, verbose=False, pbs_turb=False,
pyenv = None pyenv = None
# see if a htc/DLCs dir exists # see if a htc/DLCs dir exists
dlcs_dir = os.path.join(P_SOURCE, 'htc', 'DLCs')
# Load all DLC definitions and make some assumptions on tags that are not # Load all DLC definitions and make some assumptions on tags that are not
# defined # defined
if os.path.exists(dlcs_dir): if os.path.exists(dlcs_dir):
...@@ -458,11 +458,14 @@ def post_launch(sim_id, statistics=True, rem_failed=True, check_logs=True, ...@@ -458,11 +458,14 @@ def post_launch(sim_id, statistics=True, rem_failed=True, check_logs=True,
return df_stats, df_AEP, df_Leq return df_stats, df_AEP, df_Leq
def postpro_node_merge(tqdm=False): def postpro_node_merge(tqdm=False, zipchunks=False):
"""With postpro_node each individual case has a .csv file for the log file """With postpro_node each individual case has a .csv file for the log file
analysis and a .csv file for the statistics tables. Merge all these single analysis and a .csv file for the statistics tables. Merge all these single
files into one table/DataFrame. files into one table/DataFrame.
When using the zipchunk approach, all log file analysis and statistics
are grouped into tar archives in the prepost-data directory.
Parameters Parameters
---------- ----------
...@@ -470,12 +473,18 @@ def postpro_node_merge(tqdm=False): ...@@ -470,12 +473,18 @@ def postpro_node_merge(tqdm=False):
Set to True for displaying a progress bar (provided by the tqdm module) Set to True for displaying a progress bar (provided by the tqdm module)
when merging all csv files into a single table/pd.DataFrame. when merging all csv files into a single table/pd.DataFrame.
zipchunks : boolean, default=False
Set to True if merging post-processing files grouped into tar archives
as generated by the zipchunks approach.
""" """
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
# MERGE POSTPRO ON NODE APPROACH INTO ONE DataFrame # MERGE POSTPRO ON NODE APPROACH INTO ONE DataFrame
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
lf = windIO.LogFile() lf = windIO.LogFile()
path_pattern = os.path.join(P_RUN, 'logfiles', '*', '*.csv') path_pattern = os.path.join(P_RUN, 'logfiles', '*', '*.csv')
if zipchunks:
path_pattern = os.path.join(POST_DIR, 'loganalysis_chnk*.tar.xz')
csv_fname = '%s_ErrorLogs.csv' % sim_id csv_fname = '%s_ErrorLogs.csv' % sim_id
fcsv = os.path.join(POST_DIR, csv_fname) fcsv = os.path.join(POST_DIR, csv_fname)
mdf = AppendDataFrames(tqdm=tqdm) mdf = AppendDataFrames(tqdm=tqdm)
...@@ -489,6 +498,8 @@ def postpro_node_merge(tqdm=False): ...@@ -489,6 +498,8 @@ def postpro_node_merge(tqdm=False):
# -------------------------------------------------------------------------
path_pattern = os.path.join(P_RUN, 'res', '*', '*.csv')
csv_fname = '%s_statistics.csv' % sim_id
if zipchunks:
path_pattern = os.path.join(POST_DIR, 'statsdel_chnk*.tar.xz')
fcsv = os.path.join(POST_DIR, csv_fname)
mdf = AppendDataFrames(tqdm=tqdm)
# individual log file analysis does not have header, make sure to include
@@ -511,13 +522,22 @@ def postpro_node_merge(tqdm=False):
# -------------------------------------------------------------------------
# merge missing cols onto stats
required = ['[DLC]', '[run_dir]', '[wdir]', '[Windspeed]', '[res_dir]',
'[case_id]', '[Case folder]']
df = pd.read_hdf(fdf, 'table')
# df now has case_id as the path to the statistics file: res/dlc12_xxx/yyy
# while df_tags will have just yyy as case_id
tmp = df['[case_id]'].str.split('/', expand=True)
df['[case_id]'] = tmp[tmp.columns[-1]]
cc = sim.Cases(POST_DIR, sim_id)
df_tags = cc.cases2df()[required]
df_stats = pd.merge(df, df_tags, on=['[case_id]'])
# if the merge didn't work due to other misaligned case_id tags, do not
# overwrite our otherwise ok tables!
if len(df_stats) == len(df):
df_stats.to_hdf(fdf, 'table', mode='w')
df_stats.to_csv(fdf.replace('.h5', '.csv'))
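The normalisation and guard above can be illustrated on a toy example (the case names, columns, and values are made up): the statistics table carries `[case_id]` as a `res/dlc.../name` path, the tags table only the bare name, so the path is stripped down to its last component before merging, and the result is only kept when no rows were lost:

```python
# Toy reconstruction of the case_id normalisation + merge length guard.
import pandas as pd

df = pd.DataFrame({'[case_id]': ['res/dlc12_xxx/yyy', 'res/dlc12_xxx/zzz'],
                   'max': [10.0, 20.0]})
df_tags = pd.DataFrame({'[case_id]': ['yyy', 'zzz'],
                        '[DLC]': ['dlc12', 'dlc12']})

# keep only the last path component so both tables agree on [case_id]
tmp = df['[case_id]'].str.split('/', expand=True)
df['[case_id]'] = tmp[tmp.columns[-1]]

df_stats = pd.merge(df, df_tags, on=['[case_id]'])

# an inner merge silently drops rows whose keys did not match;
# equal lengths mean every statistics row found its tags
assert len(df_stats) == len(df)
```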
if __name__ == '__main__':
@@ -582,16 +602,21 @@ if __name__ == '__main__':
help='Merge all individual statistics and log file '
'analysis .csv files into one table/pd.DataFrame. '
'Requires that htc files have been created with '
'--prep --postpro_node. Combine with --zipchunks when '
'--prep --zipchunks was used for generating and '
'running all simulations.')
parser.add_argument('--gendlcs', default=False, action='store_true',
help='Generate DLC exchange files based on master DLC '
'spreadsheet.')
parser.add_argument('--dlcmaster', type=str, default='htc/DLCs.xlsx',
action='store', dest='dlcmaster',
help='Optionally define another location of the '
'Master spreadsheet file, default value is: '
'htc/DLCs.xlsx')
parser.add_argument('--dlcfolder', type=str, default='htc/DLCs/',
action='store', dest='dlcfolder', help='Optionally '
'define another destination folder location for the '
'generated DLC exchange files, default: htc/DLCs/')
opt = parser.parse_args()
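For the two options involved in the chunked merge, the parsing behaviour can be sketched in isolation; this reconstructs only the relevant flags from the parser above, not the full command line interface:

```python
# Isolated sketch of the store_true flags used by the chunked merge step.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--postpro_node_merge', default=False, action='store_true')
parser.add_argument('--zipchunks', default=False, action='store_true')

# merging chunked post-processing output requires both flags on the command line
opt = parser.parse_args(['--postpro_node_merge', '--zipchunks'])
```

With `action='store_true'` the flags default to `False` and flip to `True` when present, which is why `postpro_node_merge(zipchunks=opt.zipchunks)` can be called unconditionally further down.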
# -------------------------------------------------------------------------
@@ -633,7 +658,8 @@ if __name__ == '__main__':
print('Start creating all the htc files and pbs_in files...')
launch_dlcs_excel(sim_id, silent=False, zipchunks=opt.zipchunks,
pbs_turb=opt.pbs_turb, walltime=opt.walltime,
postpro_node=opt.postpro_node, runmethod=RUNMETHOD,
dlcs_dir=os.path.join(P_SOURCE, 'htc', 'DLCs'))
# post processing: check log files, calculate statistics
if opt.check_logs or opt.stats or opt.fatigue or opt.envelopeblade \
or opt.envelopeturbine or opt.AEP:
@@ -646,7 +672,7 @@ if __name__ == '__main__':
envelopeturbine=opt.envelopeturbine,
envelopeblade=opt.envelopeblade)
if opt.postpro_node_merge:
postpro_node_merge(zipchunks=opt.zipchunks)
if opt.dlcplot:
plot_chans = {}
plot_chans['$B1_{flap}$'] = ['setbeta-bladenr-1-flapnr-1']