
GPU memory handling

By default, a TensorFlow session grabs all of the GPU memory when it starts, even if its operations and variables are placed on only one GPU in a multi-GPU system. If another session starts executing at the same time, it receives an out-of-memory error. This can be solved in several ways:

  • For multi-GPU systems, set the environment variable CUDA_VISIBLE_DEVICES=<comma-separated list of device indices>:
os.environ['CUDA_VISIBLE_DEVICES']='0'

The code that executes after this setting sees only the listed GPU(s) and is able to grab all of their memory.
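
As a minimal sketch (assuming the TensorFlow 1.x Session API), the environment variable is set before TensorFlow is imported, so that device enumeration only ever sees GPU 0:

import os

# Restrict this process to the first GPU; this must happen before
# TensorFlow enumerates the CUDA devices.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

import tensorflow as tf

with tf.Session() as sess:
    # '/gpu:0' now refers to the only GPU visible to this process.
    print(sess.run(tf.constant('only GPU 0 is visible')))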

  • To let the session grab only a part of the GPU memory, use the config option per_process_gpu_memory_fraction to allocate a fraction of the memory:
config.gpu_options.per_process_gpu_memory_fraction = 0.5

This will allocate 50% of the memory on each of the visible GPU devices.
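
As a sketch of how this option is typically wired up (assuming the TensorFlow 1.x ConfigProto and Session APIs), the fraction is set on a ConfigProto object that is then passed to the session:

import tensorflow as tf

# Let this process pre-allocate roughly half of each visible GPU's memory.
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5

with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())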

  • By combining both of the preceding strategies, you can make only some of the GPUs visible to the process and allocate only a fraction of their memory.
  • Limit the TensorFlow process to grabbing only the minimum required memory at the start of the process. As the process executes further, a config option allows this memory to grow:
config.gpu_options.allow_growth = True

This option only allows the allocated memory to grow; memory is never released back to the device.
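
A minimal sketch of this setup, again assuming the TensorFlow 1.x ConfigProto and Session APIs, looks as follows:

import tensorflow as tf

# Start with a small allocation and let it grow as tensors are placed
# on the GPU; the allocation is never shrunk back.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())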

To find out more about techniques for distributing computation across multiple compute devices, refer to our book, Mastering TensorFlow.