Hi, when I call the Lambda function repeatedly, memory usage increases with every call.
This is how I'm debugging the code:

console.log("imgToTensor: memory before: " + JSON.stringify(tf.memory()));
const tensor = await tf.tidy(() => tf.tensor3d(values, [height, width, 3]));
console.log("imgToTensor: memory after: " + JSON.stringify(tf.memory()));
The first time I call the function I get this:
imgToTensor: memory before: {"unreliable":true,"numTensors":263,"numDataBuffers":263,"numBytes":47349088}
imgToTensor: memory after: {"unreliable":true,"numTensors":264,"numDataBuffers":264,"numBytes":76663648}
The second time I call the function I get the following:
imgToTensor: memory before: {"unreliable":true,"numTensors":264,"numDataBuffers":264,"numBytes":76663648}
imgToTensor: memory after: {"unreliable":true,"numTensors":265,"numDataBuffers":265,"numBytes":105978208}
Looks like the statement

const tensor = await tf.tidy(() => tf.tensor3d(values, [height, width, 3]))

is where the leak happens: if you look at the "numTensors" property, it increases by one after each function call.
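For anyone reading along, here is a minimal toy model of how I understand tf.tidy's bookkeeping (this is an illustrative sketch, not the real tfjs implementation): tidy disposes the intermediate tensors created inside its callback, but deliberately keeps the returned tensor alive, so numTensors still grows by one per call unless the caller disposes that result explicitly.

```javascript
const live = new Set(); // stands in for the engine's tensor registry

// Hypothetical helper: allocate a tracked "tensor"
function makeTensor(label) {
  const t = { label };
  live.add(t);
  return t;
}

// Toy tidy(): disposes tensors created inside fn, except the return value
function tidy(fn) {
  const before = new Set(live);
  const result = fn();
  for (const t of live) {
    if (!before.has(t) && t !== result) live.delete(t); // drop intermediates
  }
  return result; // the returned tensor survives tidy
}

function dispose(t) {
  live.delete(t);
}

// Each call leaks exactly one tensor if the result is never disposed:
const a = tidy(() => makeTensor("call 1"));
const b = tidy(() => makeTensor("call 2"));
console.log(live.size); // 2 -- mirrors numTensors creeping up per call

// Disposing the result after use keeps the count flat:
dispose(a);
dispose(b);
console.log(live.size); // 0
```

Under this model, the fix would be to call dispose() on the tensor returned from tidy once it is no longer needed, but I'm not sure that is the idiomatic approach here.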
After 5 Lambda executions, my function fails with:
Error: Runtime exited with error: signal: killed
Is there a way to clean up the resources left over from the previous Lambda invocation?
Thanks!