GPU
How can I find the memory usage on my GPU?
- For Nvidia GPUs:
nvidia-smi
- For Intel GPUs (the command from the intel-gpu-tools package):
intel_gpu_top
- For AMD GPUs:
aticonfig --odgc --odgt
- For real-time monitoring, e.g. refreshing every second:
watch -n 1 nvidia-smi
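nvidia-smi can also be queried in a machine-readable form (nvidia-smi --query-gpu=... --format=csv), which is handy for scripting. A minimal Python sketch parsing that CSV output; the sample string is hardcoded here so the sketch runs even without a GPU:

```python
import csv
import io

# Sample output of:
#   nvidia-smi --query-gpu=index,memory.used,memory.total --format=csv,noheader,nounits
# (hardcoded so the example works on machines without a GPU)
sample = "0, 1024, 8192\n1, 512, 8192\n"

def parse_gpu_memory(text):
    """Return a list of (gpu_index, used_mib, total_mib) tuples."""
    rows = []
    for row in csv.reader(io.StringIO(text)):
        idx, used, total = (field.strip() for field in row)
        rows.append((int(idx), int(used), int(total)))
    return rows

for idx, used, total in parse_gpu_memory(sample):
    print(f"GPU {idx}: {used} MiB / {total} MiB ({100 * used / total:.1f}% used)")
```

In a real script you would replace `sample` with the output of running the nvidia-smi command above via subprocess.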
Release GPU memory
- Terminal: list the processes holding the GPU devices, then kill them by PID (note that pkill -u expects a username, not a PID):
sudo fuser -v /dev/nvidia*
sudo kill -9 <PID>
- Enable memory growth (TensorFlow 1.x), so GPU memory is allocated on demand instead of all at once:
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)
- Don't allocate all of your GPU memory (e.g. cap it at 90%):
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.9
session = tf.Session(config=config)
- Clear the backend session to release the current graph's memory:
keras.backend.clear_session()
- Or configure the session directly in a notebook:
from keras import backend as K
cfg = K.tf.ConfigProto()
cfg.gpu_options.allow_growth = True
K.set_session(K.tf.Session(config=cfg))
- Add gc.collect() at the end of your custom generator; this can help get rid of memory errors.
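The gc.collect() tip above can be sketched as a plain batch generator (the data and batch size here are toy examples; in practice the batches would be arrays fed to the model):

```python
import gc

def batch_generator(data, batch_size):
    """Yield successive fixed-size batches from data."""
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]
    # As suggested above: forcing a collection once the pass is done
    # can release reference cycles left over from per-batch temporaries.
    gc.collect()

batches = list(batch_generator(list(range(10)), batch_size=4))
print(batches)  # → [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```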
- Reduce your batch size.
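Reducing the batch size can even be automated: retry with a halved batch on an out-of-memory error. A hedged sketch; `toy_train_step` is a hypothetical stand-in for a framework training step, and MemoryError stands in for the framework's OOM exception:

```python
def train_with_backoff(train_step, batch_size, min_batch_size=1):
    """Call train_step, halving batch_size on OOM until it fits."""
    while batch_size >= min_batch_size:
        try:
            train_step(batch_size)
            return batch_size  # this batch size fit in memory
        except MemoryError:
            batch_size //= 2   # halve and retry
    raise MemoryError("batch size reduced below minimum")

# Toy step that 'fits in memory' only for batches of 32 or fewer.
def toy_train_step(batch_size):
    if batch_size > 32:
        raise MemoryError

print(train_with_backoff(toy_train_step, batch_size=256))  # → 32
```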