
GPU memory handling

At the start of a TensorFlow session, by default, the session grabs all of the available GPU memory, even if the operations and variables are placed on only one GPU in a multi-GPU system. If another session starts execution at the same time, it will receive an out-of-memory error. This can be solved in several ways:

  • For multi-GPU systems, set the environment variable CUDA_VISIBLE_DEVICES=<list of device idx> before TensorFlow initializes the GPU devices:
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

The code that's executed after this setting will be able to grab all of the memory of the visible GPU.

  • To let the session grab only part of the GPU's memory, build a tf.ConfigProto, set its per_process_gpu_memory_fraction option, and pass it when creating the session:
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5

This will allocate 50% of the memory on each of the visible GPU devices.

  • By combining both of the preceding strategies, you can make only a fraction of the memory of only some of the GPUs visible to the process.
  • Limit the TensorFlow process to grab only the minimum required memory at the start of the process, and let the allocation grow as the process executes further, by setting this config option:
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

This option only allows the allocated memory to grow; memory is never released back, so the process's GPU memory footprint can only increase over its lifetime.
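The options above come together when the session is constructed. Here is a minimal sketch; it assumes the 1.x-style session API, accessed through the tf.compat.v1 namespace so that it also runs under TensorFlow 2:

```python
import os

# Expose only GPU 0 to this process; must be set before
# TensorFlow initializes the CUDA runtime.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

# tf.compat.v1 provides ConfigProto/Session under TensorFlow 2 as well.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

config = tf.ConfigProto()
# Cap this process at 50% of each visible GPU's memory...
config.gpu_options.per_process_gpu_memory_fraction = 0.5
# ...and start small, growing the allocation on demand up to that cap.
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    result = sess.run(tf.constant(21) * 2)
```

Note that per_process_gpu_memory_fraction and allow_growth can be combined: the fraction acts as an upper bound, while allow_growth controls how memory is claimed up to that bound.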

To find out more about techniques for distributing computation across multiple compute devices, refer to our book, Mastering TensorFlow.