Running out of memory with larger grid
Bug report
The data needed for the example and the code used to execute PyConTurb are attached.
All our inputs to PyConTurb use a total time of 1800 s and a frequency discretization from 0.25 to 2.5 Hz, on 41x41 and 51x51 grids. For the 51x51 grid we get an out-of-memory error. We run on a cluster node with 128 GB of RAM, so it should not be a physical-memory limitation. Do you have any idea whether this is a limitation of the Python libraries, or a parameter that can easily be changed? We also tried increasing the mem_gb variable from the default 0.1 up to 10, but that did not fix the issue.

A possibly useful observation: we ran the same inputs through the TurbSim constrained-turbulence generator on the same system and did not run out of memory.
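For scale, here is a rough back-of-envelope sketch of why a 51x51 grid can exhaust memory, assuming the simulator holds a dense cross-spectral matrix over all grid points and three velocity components (an illustrative assumption about the internals, not PyConTurb's documented implementation):

```python
# Hypothetical memory estimate for a dense spectral correlation matrix.
# Assumes complex128 storage and 3 velocity components per grid point;
# these are illustrative assumptions, not PyConTurb's documented internals.
n_y, n_z = 51, 51              # grid dimensions from the bug report
n_comp = 3                     # u, v, w components (assumed)
n_rows = n_y * n_z * n_comp    # rows of the dense matrix

bytes_per_entry = 16           # NumPy complex128
gb = n_rows**2 * bytes_per_entry / 1024**3
print(f"{n_rows} x {n_rows} matrix ~ {gb:.1f} GB per frequency")
# prints "7803 x 7803 matrix ~ 0.9 GB per frequency"
```

With 1800 s of signal and frequencies up to 2.5 Hz there are on the order of 1800 x 2.5 = 4500 frequency bins, so holding even a modest fraction of these slices in memory at once would exceed 128 GB, which would be consistent with the out-of-memory error.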
PyConTurb version
PyConTurb version: 2.4.dev0
Editable installation? Yes
Attachments
GridY_DTU10MW_Shear_00_SD1_08_TI00_9P_Cross_result_TstepPat0_5.csv
GridZ_DTU10MW_Shear_00_SD1_08_TI00_9P_Cross_result_TstepPat0_5.csv
Variables_DTU10MW_Shear_00_SD1_08_TI00_9P_Cross_result_TstepPat0_5.csv
con_tc_DTU10MW_Shear_00_SD1_08_TI00_9P_Cross_result_TstepPat0_5.csv