
How to run Tensorflow on CPU

I have installed the GPU version of tensorflow on an Ubuntu 14.04 machine.

I am on a GPU server where tensorflow can access the available GPUs.

I want to run tensorflow on the CPUs.

Normally I can use env CUDA_VISIBLE_DEVICES=0 to run on GPU no. 0.

How can I pick the CPUs instead?

I am not interested in rewriting my code with with tf.device("/cpu:0"):

You can also set the environment variable to

CUDA_VISIBLE_DEVICES=""

without having to modify the source code.
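
For example, a minimal check (my own sketch): run the script with the variable already set in the environment, and it reports whether tensorflow can still see a GPU.

import tensorflow as tf

# Launched as, e.g., CUDA_VISIBLE_DEVICES="" python your_script.py, this should
# report that no GPU is visible, without any change to the model code itself.
if tf.test.gpu_device_name():
    print('GPU found')
else:
    print('No GPU found')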

    • Someone said running neural nets on CPUs after the training phase is as performant as running them on GPUs -- i.e., only the training phase really needs the GPU. Do you know if this is true? Thanks!
    • @Crashalot: This is not true. Look for various benchmarks for inference; CPUs are an order of magnitude slower there too.
    • @Thomas Thanks. Any suggestions on which benchmarks to consider? It probably also varies with the workload and the nature of the neural nets, right? Apparently the Google Translate app runs some neural nets directly on smartphones, presumably on the CPU and not the GPU?
    • This did not work for me. :/ I set the environment variable but tensorflow still uses the GPU. I'm using a conda virtual env, does this make a difference?

You can apply the device_count parameter per tf.Session:

config = tf.ConfigProto(
    device_count={'GPU': 0}
)
sess = tf.Session(config=config)

See also protobuf config file:

tensorflow/core/framework/config.proto
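
As a quick sanity check (my own TF 1.x sketch; log_device_placement is only enabled here to print where each op ends up):

import tensorflow as tf

config = tf.ConfigProto(
    device_count={'GPU': 0},       # hide all GPUs from this session
    log_device_placement=True      # log the device chosen for every op
)
sess = tf.Session(config=config)

a = tf.constant(3.0)
b = tf.constant(4.0)
print(sess.run(a + b))             # the placement log should only mention CPU devices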

    • Someone said running neural nets on CPUs after the training phase is as efficient as running them on GPUs -- i.e., only the training phase really needs the GPU. Do you know if this is true? Thanks!
    • @Nandeesh I guess it depends on your needs. So far there are at least 53 people who lean more towards environment variables and 35 who prefer to set the number of devices in code. The advantage of the first is simplicity, and of the other is more explicit control over (multiple) sessions from within the Python program itself (the zero doesn't have to be hardcoded; it can be a variable).
    • @Crashalot It depends on the nature of the network. For example, RNNs can be faster on CPUs for small batch sizes due to their sequential nature. CNNs will still benefit from a GPU in inference mode, but since you only need to run them once per example, a CPU may be fast enough for many practical purposes.

For me, only setting CUDA_VISIBLE_DEVICES to precisely -1 works:

Works:

import os
import tensorflow as tf

os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

if tf.test.gpu_device_name():
    print('GPU found')
else:
    print("No GPU found")

# No GPU found

Does not work:

import os
import tensorflow as tf

os.environ['CUDA_VISIBLE_DEVICES'] = ''    

if tf.test.gpu_device_name():
    print('GPU found')
else:
    print("No GPU found")

# GPU found

On some systems one has to specify:

import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]=""  # or even "-1"

BEFORE importing tensorflow.
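
Put together, a minimal sketch of the ordering (my own example; the final line just confirms that no GPU is visible):

import os

# These must be set before tensorflow is imported for the first time.
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tensorflow as tf

print(tf.test.gpu_device_name() or "No GPU found")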


You could use tf.config.set_visible_devices. One possible function that lets you choose whether to use GPUs, and which ones, is:

import tensorflow as tf

def set_gpu(gpu_ids_list):
    gpus = tf.config.list_physical_devices('GPU')
    if gpus:
        try:
            gpus_used = [gpus[i] for i in gpu_ids_list]
            tf.config.set_visible_devices(gpus_used, 'GPU')
            logical_gpus = tf.config.experimental.list_logical_devices('GPU')
            print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPU")
        except RuntimeError as e:
            # Visible devices must be set before GPUs have been initialized
            print(e)

Suppose you are on a system with 4 GPUs and you want to use only two of them, the one with id = 0 and the one with id = 2. Then the first command of your code, immediately after importing the libraries, would be:

set_gpu([0, 2])

In your case, to use only the CPU, you can invoke the function with an empty list:

set_gpu([])

For completeness, if you want to avoid having the runtime allocate all of the memory on the device at initialization, you can use tf.config.experimental.set_memory_growth. Finally, the function to manage which devices to use, occupying the GPUs' memory dynamically, becomes:

import tensorflow as tf

def set_gpu(gpu_ids_list):
    gpus = tf.config.list_physical_devices('GPU')
    if gpus:
        try:
            gpus_used = [gpus[i] for i in gpu_ids_list]
            tf.config.set_visible_devices(gpus_used, 'GPU')
            for gpu in gpus_used:
                tf.config.experimental.set_memory_growth(gpu, True)
            logical_gpus = tf.config.experimental.list_logical_devices('GPU')
            print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPU")
        except RuntimeError as e:
            # Visible devices must be set before GPUs have been initialized
            print(e)

Another possible solution at the installation level would be to look for the CPU-only variant: https://www.tensorflow.org/install/pip#package-location

In my case, this currently gives:

pip3 install https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow_cpu-2.2.0-cp38-cp38-win_amd64.whl

Just select the correct version. Bonus points for using a venv, as explained e.g. in this answer.
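
Once it is installed, a quick way to confirm the CPU-only build is active (my own sketch, assuming a TF 2.x wheel like the one above):

import tensorflow as tf

# A CPU-only build is not compiled with CUDA support and reports no GPU devices.
print(tf.test.is_built_with_cuda())            # expected: False
print(tf.config.list_physical_devices('GPU'))  # expected: []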

