I think I found evidence of a memory leak in the CUDA backend of Hedge.
I ran a simulation; after about 30,000 steps, 12 GB of RAM were in use.
Using the method described here, I observed that Hedge creates
tens of thousands of Event objects in the "callables" list of
CUDAIntervalTimer. It seems that CUDAIntervalTimer.__call__, which
cleans out CUDAIntervalTimer.callables, is never called, so
CUDAIntervalTimer.callables keeps growing without bound.
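To make the pattern concrete, here is a minimal stand-in (illustrative names only, not Hedge's actual code): one entry is appended to the list per timed interval, and only __call__ drains it, so a timer whose value is never queried accumulates one object per time step:

```python
class IntervalTimer:
    """Illustrative stand-in for CUDAIntervalTimer (not Hedge's real code)."""

    def __init__(self):
        self.callables = []

    def add_timer_callable(self, clbl):
        # One entry is appended per timed interval (i.e. per time step).
        self.callables.append(clbl)

    def __call__(self):
        # Only here is the list drained -- if this is never invoked,
        # self.callables grows without bound.
        result = sum(c() for c in self.callables)
        del self.callables[:]
        return result


timer = IntervalTimer()
for step in range(30000):
    # Stands in for the callable wrapping a pair of CUDA events.
    timer.add_timer_callable(lambda: 0.001)

# __call__ was never invoked, so all 30000 entries are still held:
print(len(timer.callables))
```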
I am using the following lines in my simulation:
    from hedge.timestep import LSRK4TimeStepper
    stepper = LSRK4TimeStepper(dtype=dtype)

    from hedge.timestep import times_and_steps
    step_it = times_and_steps(
            final_time=final_time, logmgr=None,
            max_dt_getter=lambda t: op.estimate_timestep(discr,
                stepper=stepper, t=t, fields=fields))
I wrote a patch that, instead of storing all the callables, stores only
the sum of their values, which, according to the code, seems to be the
only data that is actually wanted. This might be slightly slower, because
each callable is now always evaluated, even if
CUDAIntervalTimer.__call__ is never invoked...
Can you please have a look at this?
Thanks in advance