Memory leak from using tf.constant in loop in TF 2.16 #68196
Labels: comp:ops (OPs related issues), stat:awaiting tensorflower (Status - Awaiting response from tensorflower), TF 2.16, type:performance (Performance Issue)
Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? No
Source: binary from pip
TensorFlow version: 2.16.1
Custom code: Yes
OS platform and distribution: Linux Ubuntu 24.04
Mobile device: No response
Python version: 3.12.3
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response
Current behavior?

I believe I tracked down a memory leak in my code to a `tf.constant` creation in a short-lived object of a class. I can reproduce it in an even simpler way by just creating the constant in a loop. No `@tf.function` decoration or model training is necessary to cause it. The leak also seems to happen if I replace the `tf.constant` with a `tf.random.uniform()`. I tried suggestions I've seen elsewhere, such as calling `del` on the variable followed by `gc.collect()`.

Some similar-looking bug reports:

It looks like upgrading to 2.15 solved it for some people, but it appears to be back in 2.16. Is this expected, or am I doing something wrong?
Standalone code to reproduce the issue
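The original snippet did not survive extraction. Below is a minimal sketch of the reproduction described above (creating a `tf.constant` in a plain loop, plus the `del`/`gc.collect()` workaround that was tried); the iteration count, tensor size, and RSS reporting via the stdlib `resource` module are illustrative assumptions, not the reporter's exact code.

```python
# Hypothetical minimal reproduction: create a tf.constant on each loop
# iteration and watch process memory grow. No @tf.function or model
# training involved, matching the description above.
import gc
import resource

import tensorflow as tf


def rss_kb() -> int:
    # Peak resident set size of this process (kB on Linux, bytes on macOS).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss


print("start RSS:", rss_kb())
for _ in range(1000):
    x = tf.constant([1.0] * 1000)  # fresh constant each iteration
    del x                          # suggested workaround: explicit del...
    gc.collect()                   # ...followed by gc.collect()
print("end RSS:  ", rss_kb())      # reportedly keeps growing on TF 2.16
```

Swapping the `tf.constant(...)` line for `tf.random.uniform((1000,))` should exercise the same allocation path the report says also leaks.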