wtlib / WindEnergyToolbox / Commits / 5df2c692

Commit 5df2c692, authored 8 years ago by David Verelst

    prepost.simchunks: add chunk fname to index, killall wineservers

Parent: 352cfc88
No related branches, tags, or merge requests found.
Changes: 1 changed file

wetb/prepost/simchunks.py — 11 additions, 2 deletions (+11 −2)
@@ -89,8 +89,9 @@ def create_chunks_htc_pbs(cases, sort_by_values=['[Windspeed]'], ppn=20,
         df_dst = df['[htc_dir]'] + df['[case_id]']
         # create an index so given the htc file, we can find the chunk nr
         df_index = pd.DataFrame(index=df['[case_id]'].copy(),
-                                columns=['chunk_nr'], dtype=np.int32)
+                                columns=['chunk_nr', 'name'])
         df_index['chunk_nr'] = ii
+        df_index['name'] = os.path.join(chunks_dir, '%s_chunk_%05i' % rpl)
         # Since df_src and df_dst are already Series, iterating is fast an it
         # is slower to first convert to a list
         for src, dst_rel in zip(df_src, df_dst):
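The first hunk adds a 'name' column to the per-case index, so that given an htc case id one can look up both the chunk number and the chunk file name. A minimal sketch of what that index enables, with made-up data and the same variable names (`chunks_dir`, `sim_id`, `ii`, `rpl`) as the diff:

```python
import os
import pandas as pd

# Hypothetical stand-ins mirroring names in the diff; the data is invented.
chunks_dir = 'chunks'
sim_id, ii = 'demo', 3
rpl = (sim_id, ii)

case_ids = ['case_a.htc', 'case_b.htc']
df_index = pd.DataFrame(index=case_ids, columns=['chunk_nr', 'name'])
df_index['chunk_nr'] = ii
df_index['name'] = os.path.join(chunks_dir, '%s_chunk_%05i' % rpl)

# Given an htc file we can now find its chunk number and chunk file name:
row = df_index.loc['case_a.htc']
print(row['chunk_nr'], row['name'])
```

Note that dropping `dtype=np.int32` is required here: a frame holding both integers and path strings cannot be a single int32 block.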
@@ -355,6 +356,11 @@ def create_chunks_htc_pbs(cases, sort_by_values=['[Windspeed]'], ppn=20,
         pbs += 'source deactivate\n'
         pbs += 'echo "DONE !!"\n'
         pbs += '\necho "%s"\n' % ('-'*70)
+        pbs += '# in case wine has crashed, kill any remaining wine servers\n'
+        pbs += '# caution: ALL the users wineservers will die on this node!\n'
+        pbs += 'echo "following wineservers are still running:"\n'
+        pbs += 'ps -u $USER -U $USER | grep wineserver\n'
+        pbs += 'killall -u $USER wineserver\n'
         pbs += 'exit\n'

         rpl = (sim_id, ii)
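The second hunk appends a cleanup epilogue to the generated PBS job script: if wine crashed mid-run, leftover `wineserver` processes owned by the user are listed and killed before the node is released. A sketch of the string being built (this only constructs the script text, it does not submit a job):

```python
# Minimal sketch of the PBS epilogue appended by this commit; the lines
# are taken from the diff, the surrounding job body is omitted.
pbs = ''
pbs += 'source deactivate\n'
pbs += 'echo "DONE !!"\n'
pbs += '\necho "%s"\n' % ('-'*70)
# kill leftover wineservers; killall -u restricts this to $USER's processes
pbs += '# in case wine has crashed, kill any remaining wine servers\n'
pbs += '# caution: ALL the users wineservers will die on this node!\n'
pbs += 'echo "following wineservers are still running:"\n'
pbs += 'ps -u $USER -U $USER | grep wineserver\n'
pbs += 'killall -u $USER wineserver\n'
pbs += 'exit\n'
print(pbs)
```

`killall -u $USER` scopes the kill to the submitting user, but as the comment in the diff warns, it still takes down every wineserver that user owns on the node, not just the one started by this job.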
@@ -525,6 +531,7 @@ def merge_from_tarfiles(df_fname, path, pattern, tarmode='r:xz', tqdm=False,
            store.close()
        return None, None

+
# TODO: make this class more general so you can also just give a list of files
# to be merged, excluding the tar archives.
class AppendDataFrames(object):
@@ -546,7 +553,7 @@ class AppendDataFrames(object):
        """
        """
-        # TODO: it seems that with treading you could parallelize this kind
+        # TODO: it seems that with threading you could parallelize this kind
        # of work: http://stackoverflow.com/q/23598063/3156685
        # http://stackoverflow.com/questions/23598063/
        # multithreaded-web-scraper-to-store-values-to-pandas-dataframe
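The TODO fixed here (a "treading" typo) points at parallelizing this I/O-bound merge work with threads, per the linked Stack Overflow question. A minimal sketch of that idea using the stdlib thread pool; `load_one` and the file list are hypothetical stand-ins, not wetb code:

```python
from concurrent.futures import ThreadPoolExecutor

def load_one(fname):
    # placeholder for reading and parsing one file; here it just
    # returns the length of the file name so the sketch is runnable
    return len(fname)

fnames = ['a.csv', 'bb.csv', 'ccc.csv']

# threads overlap the waiting on I/O; map preserves input order
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(load_one, fnames))
print(results)  # [5, 6, 7]
```

For CPU-bound parsing the GIL limits the benefit, but for reading many files from disk or network a thread pool like this is usually enough.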
@@ -585,6 +592,8 @@ class AppendDataFrames(object):
        # df = pd.DataFrame()
        return store

+    # FIXME: when merging log file analysis (files with header), we are still
+    # skipping over one case
    def txt2txt(self, fjoined, path, tarmode='r:xz', header=None, sep=';',
                fname_col=False):
        """
        Read as strings, write to another file as strings.
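The last hunk only adds a FIXME above `txt2txt`, whose job (per its docstring) is to read members as strings and write them to another file as strings. A hypothetical sketch of that pattern, merging text files out of a tar archive into one joined file while dropping repeated header rows; this is an illustration under stated assumptions, not the actual wetb implementation:

```python
import tarfile

def txt2txt_sketch(fjoined, ftar, tarmode='r:xz', header=None):
    # Read every member of the tar archive as text and append it to
    # fjoined; if header gives the header row index, drop the header
    # from every file after the first so it appears only once.
    with tarfile.open(ftar, mode=tarmode) as tar, open(fjoined, 'w') as fout:
        for imem, member in enumerate(tar.getmembers()):
            lines = tar.extractfile(member).read().decode().splitlines(True)
            if header is not None and imem > 0:
                lines = lines[header + 1:]
            fout.writelines(lines)
```

The FIXME in the diff notes that the real header-aware merge still skips one case, so off-by-one handling around `header` is exactly the fragile part in a function like this.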