
python - TensorFlow: no supported kernel for GPU devices is available

coder · 2023-08-24 · original post

The code below is part of my model; it tries to do linear interpolation, similar to numpy.interp(). In my model, t has shape (64, 64), x has shape (91,), and y has shape (91,).

def tf_interp(b, x, y):
    # height_sino is a module-level global elsewhere in the script
    # (here height_sino == 91, the length of x)
    xaxis_pad = tf.concat([[tf.minimum(b, tf.gather(x, 0))], x, [tf.maximum(b, tf.gather(x, height_sino - 1))]],
                          axis=0)
    yaxis_pad = tf.concat([[0.0], y, [0.0]], axis=0)

    # Locate the interval containing b: cmp is 1 while b > grid point, so
    # diff holds a single -1 at the index of the containing interval
    cmp = tf.cast(b > xaxis_pad, dtype=tf.float32)
    diff = cmp[1:] - cmp[:-1]
    idx = tf.argmin(diff)

    # Interpolate linearly inside that interval
    alpha = (b - xaxis_pad[idx]) / (xaxis_pad[idx + 1] - xaxis_pad[idx])
    res = alpha * yaxis_pad[idx + 1] + (1 - alpha) * yaxis_pad[idx]

    #def f1(): return 0.0

    #def f2(): return alpha * yaxis_pad[idx + 1] + (1 - alpha) * yaxis_pad[idx]

    #with tf.device('/gpu:0'):
        #res = tf.cond(pred=tf.is_nan(res), true_fn=f1, false_fn=f2)

    return res


def tf_interpolation(t, x, y):
    t1 = tf.reshape(t, [-1, ])  # flatten (64, 64) -> (4096,) so map_fn sees one scalar at a time
    t_return = tf.map_fn(lambda b: tf_interp(b, x, y), t1, dtype=tf.float32, name='t_return')
    t_return = tf.reshape(t_return, [width, height])  # width and height are module-level globals (64, 64)
    return t_return

When I try to define the Adam optimizer for my model, the following error appears:

Traceback (most recent call last):
  File "net_training_new.py", line 411, in <module>
    sess.run(tf.global_variables_initializer())
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 900, in run
    run_metadata_ptr)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1135, in _run
    feed_dict_tensor, options, run_metadata)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
    run_metadata)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation 'gradients_1/generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayGrad/TensorArrayGradV3/f_acc': Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
Colocation Debug Info:
Colocation group had the following types and devices: 
TensorArrayWriteV3: GPU CPU 
Add: GPU CPU 
Range: GPU CPU 
Const: GPU CPU 
Enter: GPU CPU 
StackPushV2: GPU CPU 
StackV2: GPU CPU 
TensorArrayV3: GPU CPU 
TensorArrayScatterV3: GPU CPU 
StackPopV2: CPU 
TensorArrayGatherV3: GPU CPU 
Identity: GPU CPU 
TensorArrayGradV3: GPU CPU 
Exit: GPU CPU 
TensorArrayReadV3: GPU CPU 
TensorArraySizeV3: GPU CPU 

Colocation members and user-requested devices:
  gradients_1/generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayGrad/TensorArrayGradV3/Const (Const) /device:GPU:0
  gradients_1/generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayGrad/TensorArrayGradV3/f_acc (StackV2) /device:GPU:0
  gradients_1/generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayGrad/TensorArrayGradV3/StackPopV2/Enter (Enter) /device:GPU:0
  gradients_1/generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayGrad/TensorArrayGradV3/Enter (Enter) /device:GPU:0
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayScatter/TensorArrayScatterV3/Const (Const) 
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayScatter/TensorArrayScatterV3/f_acc (StackV2) 
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayScatter/TensorArrayScatterV3/StackPopV2/Enter (Enter) 
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayScatter/TensorArrayScatterV3/Enter (Enter) 
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3/Const_1 (Const) /device:GPU:0
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3/f_acc_1 (StackV2) /device:GPU:0
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3/StackPopV2_1/Enter (Enter) 
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3/Enter_1 (Enter) 
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3/Const (Const) 
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3/f_acc (StackV2) 
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3/StackPopV2/Enter (Enter) 
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3/Enter (Enter) 
  gradients_1/generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayGrad/TensorArrayGradV3/StackPopV2 (StackPopV2) /device:GPU:0
  gradients_1/generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayGrad/TensorArrayGradV3/Enter_1 (Enter) /device:GPU:0
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayScatter/TensorArrayScatterV3/StackPopV2 (StackPopV2) 
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3/StackPopV2_1 (StackPopV2) 
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3/StackPopV2 (StackPopV2) 
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3 (TensorArrayGradV3) 
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/gradient_flow (Identity) 
  generator_model/backprojected/while/t_return/TensorArrayStack/range/delta (Const) 
  generator_model/backprojected/while/t_return/TensorArrayStack/range/start (Const) 
  generator_model/backprojected/while/t_return/TensorArray_1 (TensorArrayV3) 
  gradients_1/generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayGrad/TensorArrayGradV3/StackPushV2 (StackPushV2) /device:GPU:0
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3/StackPushV2 (StackPushV2) 
  generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3/Enter (Enter) /device:GPU:0
  generator_model/backprojected/while/t_return/while/Exit_1 (Exit) /device:GPU:0
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayGrad/TensorArrayGradV3/StackPushV2_1 (StackPushV2) 
  generator_model/backprojected/while/t_return/TensorArrayStack/TensorArraySizeV3 (TensorArraySizeV3) 
  generator_model/backprojected/while/t_return/TensorArrayStack/range (Range) 
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayScatter/TensorArrayScatterV3/StackPushV2 (StackPushV2) 
  generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3 (TensorArrayGatherV3) 
  generator_model/backprojected/while/t_return/while/add_7 (Add) /device:GPU:0
  generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3 (TensorArrayWriteV3) /device:GPU:0
  gradients_1/generator_model/backprojected/while/t_return/TensorArrayStack/TensorArrayGatherV3_grad/TensorArrayScatter/TensorArrayScatterV3 (TensorArrayScatterV3) 
  gradients_1/generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayGrad/TensorArrayGradV3 (TensorArrayGradV3) /device:GPU:0
  gradients_1/generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayGrad/gradient_flow (Identity) /device:GPU:0
  gradients_1/generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayReadV3 (TensorArrayReadV3) 
  gradients_1/generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3_grad/tuple/control_dependency (Identity) 
  gradients_1/generator_model/backprojected/while/t_return/while/add_7_grad/tuple/control_dependency_1 (Identity) 
  gradients_1/generator_model/backprojected/while/t_return/while/add_7_grad/tuple/control_dependency (Identity) 

Registered kernels:
  device='GPU'
  device='CPU'

     [[Node: gradients_1/generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayGrad/TensorArrayGradV3/f_acc = StackV2[_class=["loc:@generator_model/backprojected/while/t_return/TensorArray_1", "loc:@generator_model/backprojected/while/t_return/while/add_7"], elem_type=DT_RESOURCE, stack_name="", _device="/device:GPU:0"](gradients_1/generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayGrad/TensorArrayGradV3/Const)]]

Caused by op 'gradients_1/generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayGrad/TensorArrayGradV3/f_acc', defined at:
  File "net_training_new.py", line 397, in <module>
    g_trainer = tf.train.AdamOptimizer(learning_rate=lr).minimize(g_loss, var_list=gen_variables)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py", line 414, in minimize
    grad_loss=grad_loss)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py", line 526, in compute_gradients
    colocate_gradients_with_ops=colocate_gradients_with_ops)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py", line 494, in gradients
    gate_gradients, aggregation_method, stop_gradients)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py", line 636, in _GradientsHelper
    lambda: grad_fn(op, *out_grads))
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py", line 385, in _MaybeCompile
    return grad_fn()  # Exit early
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py", line 636, in <lambda>
    lambda: grad_fn(op, *out_grads))
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/tensor_array_grad.py", line 130, in _TensorArrayWriteGrad
    .grad(source=grad_source, flow=flow))
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/tensor_array_ops.py", line 849, in grad
    return self._implementation.grad(source, flow=flow, name=name)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/tensor_array_ops.py", line 241, in grad
    handle=self._handle, source=source, flow_in=flow, name=name)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 6229, in tensor_array_grad_v3
    name=name)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
    op_def=op_def)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1760, in __init__
    self._control_flow_post_processing()
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1771, in _control_flow_post_processing
    self._control_flow_context.AddOp(self)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2445, in AddOp
    self._AddOpInternal(op)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2466, in _AddOpInternal
    real_x = self.AddValue(x)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2398, in AddValue
    real_val = grad_ctxt.grad_state.GetRealValue(val)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 1155, in GetRealValue
    history_value = cur_grad_state.AddForwardAccumulator(cur_value)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 1022, in AddForwardAccumulator
    max_size=max_size, elem_type=value.dtype.base_dtype, name="f_acc")
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 5033, in stack_v2
    stack_name=stack_name, name=name)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
    op_def=op_def)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1718, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

...which was originally created as op 'generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3', defined at:
  File "net_training_new.py", line 365, in <module>
    Gz = generator(LD_placeholder)
  File "net_training_new.py", line 212, in generator
    backprojected = tf.map_fn(lambda s: tf_interpolation(t, x, s[:, i]), radon_filtered, name='backprojected')
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/functional_ops.py", line 423, in map_fn
    swap_memory=swap_memory)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 3224, in while_loop
    result = loop_context.BuildLoop(cond, body, loop_vars, shape_invariants)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2956, in BuildLoop
    pred, body, original_loop_vars, loop_vars, shape_invariants)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2893, in _BuildLoop
    body_result = body(*packed_vars_for_body)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/functional_ops.py", line 413, in compute
    packed_fn_values = fn(packed_values)
  File "net_training_new.py", line 212, in <lambda>
    backprojected = tf.map_fn(lambda s: tf_interpolation(t, x, s[:, i]), radon_filtered, name='backprojected')
  File "net_training_new.py", line 79, in tf_interpolation
    t_return = tf.map_fn(lambda b: tf_interp(b, x, y), t1, dtype=tf.float32, name='t_return')
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/functional_ops.py", line 423, in map_fn
    swap_memory=swap_memory)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 3224, in while_loop
    result = loop_context.BuildLoop(cond, body, loop_vars, shape_invariants)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2956, in BuildLoop
    pred, body, original_loop_vars, loop_vars, shape_invariants)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 2893, in _BuildLoop
    body_result = body(*packed_vars_for_body)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/functional_ops.py", line 416, in compute
    tas = [ta.write(i, value) for (ta, value) in zip(tas, flat_fn_values)]
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/functional_ops.py", line 416, in <listcomp>
    tas = [ta.write(i, value) for (ta, value) in zip(tas, flat_fn_values)]
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py", line 118, in wrapped
    return _add_should_use_warning(fn(*args, **kwargs))
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/tensor_array_ops.py", line 879, in write
    return self._implementation.write(index, value, name=name)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py", line 118, in wrapped
    return _add_should_use_warning(fn(*args, **kwargs))
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/tensor_array_ops.py", line 278, in write
    name=name)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/ops/gen_data_flow_ops.py", line 7344, in tensor_array_write_v3
    flow_in=flow_in, name=name)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3392, in create_op
    op_def=op_def)
  File "/home/xiehuidong/anaconda3/envs/CtProject/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1718, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): Cannot assign a device for operation 'gradients_1/generator_model/backprojected/while/t_return/while/TensorArrayWrite/TensorArrayWriteV3_grad/TensorArrayGrad/TensorArrayGradV3/f_acc': Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.

The traceback says the problem is in this line:

t_return = tf.map_fn(lambda b: tf_interp(b, x, y), t1, dtype=tf.float32, name='t_return') 

TensorFlow cannot assign a device to this operation because no supported GPU kernel is available for it.

I can run it perfectly on the CPU. Why can't I run this line on the GPU?

Best answer

Try this and let's see if it works:

def tf_interpolation(t, x, y):
    with tf.device('/device:GPU:2'):
        t1 = tf.reshape(t, [-1, ])
        t_return = tf.map_fn(lambda b: tf_interp(b, x, y), t1, dtype=tf.float32, name='t_return')
        t_return = tf.reshape(t_return, [width, height])
        return t_return

Also, try initializing your session object with this code:

with tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
                                      log_device_placement=True)) as sess:
    sess.run(tf.global_variables_initializer())
    ...

If the device you have specified does not exist, you will get an InvalidArgumentError:

> InvalidArgumentError: Invalid argument: Cannot assign a device to node
> 'b': Could not satisfy explicit device specification '/device:GPU:2'  
> [[Node: b = Const[dtype=DT_FLOAT, value=Tensor<type: float shape:
> [3,2]    values: 1 2 3...>, _device="/device:GPU:2"]()]]

If you would like TensorFlow to automatically choose an existing, supported device to run the operations when the specified one doesn't exist, set allow_soft_placement to True in the ConfigProto when creating the session.
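A side note beyond device placement (not from the original answer): the TensorArray ops named in the error are created by the inner tf.map_fn, so restructuring the graph to interpolate the whole grid in one vectorized call avoids both the per-element loop and its gradient ops. A NumPy sketch of that vectorized shape, with grid values made up for illustration (shapes match the question):

```python
import numpy as np

# Hypothetical data matching the question's shapes: t is (64, 64), x and y are (91,)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 91)
y = np.sin(3.0 * x)
t = rng.uniform(0.05, 0.95, size=(64, 64))

# One vectorized call interpolates every element of the grid at once,
# instead of mapping an interpolation subgraph over 4096 scalars
t_flat = np.interp(t.ravel(), x, y)
t_return = t_flat.reshape(t.shape)
```

The equivalent TF graph would use gather/searchsorted-style ops over the flattened grid; the point here is only that the map_fn loop is structurally avoidable.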

Regarding "python - TensorFlow: no supported kernel for GPU devices is available", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/51387027/
