I noticed something interesting today.
I am working on an image processing tool which loops several times over
each of a series of images. Everything is done in place and I should not be
growing my memory footprint between iterations.
When I tracked the actual GPU memory consumption, I found that I would
ultimately run out of GPU memory (just a short excerpt):
I double- and triple-checked that everything is happening in place, and
started deleting GPU objects as soon as I'm finished with them to try to
trigger the GC, but that only had limited success. I would expect the GC to
kick in before the GPU runs out of memory.
I then started manually calling gc.collect() every few iterations, and
suddenly everything started behaving and is now relatively stable. See here
(note the scale difference): http://i.imgur.com/Zzq5YdC.png
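For anyone hitting the same thing, here is a minimal self-contained sketch of the workaround (all names are hypothetical, not from my actual tool). The key point is that if a Python object wrapping a device allocation ends up in a reference cycle, reference counting alone never frees it; the memory is only returned when the cyclic garbage collector actually runs, which it may not do often enough to keep up with large GPU buffers:

```python
import gc
import weakref

class GPUBuffer:
    """Stand-in for a device-side allocation (hypothetical): the real
    GPU memory would be released by a finalizer, which only runs once
    the Python wrapper object is actually collected."""
    def __init__(self, size):
        self.data = bytearray(size)
        # Reference cycle: refcounting alone can never free this object,
        # so it lingers (holding "GPU" memory) until the GC runs.
        self.view = self

def run(iterations, collect_every=5):
    dead_refs = []
    for i in range(iterations):
        buf = GPUBuffer(1 << 20)
        dead_refs.append(weakref.ref(buf))
        del buf  # unreachable, but the cycle keeps it alive for now
        if (i + 1) % collect_every == 0:
            gc.collect()  # force a pass so stale buffers are released
    # Count how many buffers were actually freed by the forced collections.
    return sum(1 for r in dead_refs if r() is None)
```

In my real loop the periodic gc.collect() plays the same role: it breaks cycles promptly instead of letting dead wrappers pile up until the device allocator fails.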
Is this normal? Is this a bug?
IDIES/Johns Hopkins University
Performance @ Rational/IBM
(320) 496 6293
To know recursion, you must first know recursion.