
NotImplementedError: Cannot convert a symbolic Tensor (sequential_1/simple_rnn_1/strided_slice:0) to a numpy array


Calling model.fit fails with "NotImplementedError: Cannot convert a symbolic Tensor to a numpy array."

 
Epoch 1/100
 
---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
Cell In[4], line 1
----> 1 history = model.fit(x_train, y_train, batch_size=32, epochs=100, callbacks=[cp_callback])

File /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:66, in enable_multi_worker.<locals>._method_wrapper(self, *args, **kwargs)
     64 def _method_wrapper(self, *args, **kwargs):
     65   if not self._in_multi_worker_mode():  # pylint: disable=protected-access
---> 66     return method(self, *args, **kwargs)
     68   # Running inside `run_distribute_coordinator` already.
     69   if dc_context.get_current_worker_context():

File /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:848, in Model.fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
    841 with traceme.TraceMe(
    842     'TraceContext',
    843     graph_type='train',
    844     epoch_num=epoch,
    845     step_num=step,
    846     batch_size=batch_size):
    847   callbacks.on_train_batch_begin(step)
--> 848   tmp_logs = train_function(iterator)
    849   # Catch OutOfRangeError for Datasets of unknown size.
    850   # This blocks until the batch has finished executing.
    851   # TODO(b/150292341): Allow multiple async steps here.
    852   if not data_handler.inferred_steps:

File /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py:580, in Function.__call__(self, *args, **kwds)
    578     xla_context.Exit()
    579 else:
--> 580   result = self._call(*args, **kwds)
    582 if tracing_count == self._get_tracing_count():
    583   self._call_counter.called_without_tracing()

File /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py:627, in Function._call(self, *args, **kwds)
    624 try:
    625   # This is the first call of __call__, so we have to initialize.
    626   initializers = []
--> 627   self._initialize(args, kwds, add_initializers_to=initializers)
    628 finally:
    629   # At this point we know that the initialization is complete (or less
    630   # interestingly an exception was raised) so we no longer need a lock.
    631   self._lock.release()

File /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py:505, in Function._initialize(self, args, kwds, add_initializers_to)
    502 self._lifted_initializer_graph = lifted_initializer_graph
    503 self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph)
    504 self._concrete_stateful_fn = (
--> 505     self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
    506         *args, **kwds))
    508 def invalid_creator_scope(*unused_args, **unused_kwds):
    509   """Disables variable creation."""

File /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/eager/function.py:2446, in Function._get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
   2444   args, kwargs = None, None
   2445 with self._lock:
-> 2446   graph_function, _, _ = self._maybe_define_function(args, kwargs)
   2447 return graph_function

File /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/eager/function.py:2777, in Function._maybe_define_function(self, args, kwargs)
   2774   return self._define_function_with_shape_relaxation(args, kwargs)
   2776 self._function_cache.missed.add(call_context_key)
-> 2777 graph_function = self._create_graph_function(args, kwargs)
   2778 self._function_cache.primary[cache_key] = graph_function
   2779 return graph_function, args, kwargs

File /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/eager/function.py:2657, in Function._create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   2652 missing_arg_names = [
   2653     "%s_%d" % (arg, i) for i, arg in enumerate(missing_arg_names)
   2654 ]
   2655 arg_names = base_arg_names + missing_arg_names
   2656 graph_function = ConcreteFunction(
-> 2657     func_graph_module.func_graph_from_py_func(
   2658         self._name,
   2659         self._python_function,
   2660         args,
   2661         kwargs,
   2662         self.input_signature,
   2663         autograph=self._autograph,
   2664         autograph_options=self._autograph_options,
   2665         arg_names=arg_names,
   2666         override_flat_arg_shapes=override_flat_arg_shapes,
   2667         capture_by_value=self._capture_by_value),
   2668     self._function_attributes,
   2669     # Tell the ConcreteFunction to clean up its graph once it goes out of
   2670     # scope. This is not the default behavior since it gets used in some
   2671     # places (like Keras) where the FuncGraph lives longer than the
   2672     # ConcreteFunction.
   2673     shared_func_graph=False)
   2674 return graph_function

File /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py:981, in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    978 else:
    979   _, original_func = tf_decorator.unwrap(python_func)
--> 981 func_outputs = python_func(*func_args, **func_kwargs)
    983 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
    984 # TensorArrays and `None`s.
    985 func_outputs = nest.map_structure(convert, func_outputs,
    986                                   expand_composites=True)

File /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py:441, in Function._defun_with_scope.<locals>.wrapped_fn(*args, **kwds)
    426 # We register a variable creator with reduced priority. If an outer
    427 # variable creator is just modifying keyword arguments to the variable
    428 # constructor, this will work harmoniously. Since the `scope` registered
   (...)
    436 # better than the alternative, tracing the initialization graph but giving
    437 # the user a variable type they didn't want.
    438 with ops.get_default_graph()._variable_creator_scope(scope, priority=50):  # pylint: disable=protected-access
    439   # __wrapped__ allows AutoGraph to swap in a converted function. We give
    440   # the function a weak reference to itself to avoid a reference cycle.
--> 441   return weak_wrapped_fn().__wrapped__(*args, **kwds)

File /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py:968, in func_graph_from_py_func.<locals>.wrapper(*args, **kwargs)
    966 except Exception as e:  # pylint:disable=broad-except
    967   if hasattr(e, "ag_error_metadata"):
--> 968     raise e.ag_error_metadata.to_exception(e)
    969   else:
    970     raise

NotImplementedError: in user code:

    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:571 train_function  *
        outputs = self.distribute_strategy.run(
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:951 run  **
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
        return fn(*args, **kwargs)
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py:531 train_step  **
        y_pred = self(x, training=True)
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:927 __call__
        outputs = call_fn(cast_inputs, *args, **kwargs)
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/keras/engine/sequential.py:291 call
        outputs = layer(inputs, **kwargs)
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py:654 __call__
        return super(RNN, self).__call__(inputs, **kwargs)
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py:927 __call__
        outputs = call_fn(cast_inputs, *args, **kwargs)
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py:1530 call
        return super(SimpleRNN, self).call(
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py:721 call
        inputs, initial_state, constants = self._process_inputs(
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py:848 _process_inputs
        initial_state = self.get_initial_state(inputs)
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py:636 get_initial_state
        init_state = get_initial_state_fn(
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py:1343 get_initial_state
        return _generate_zero_filled_state_for_cell(self, inputs, batch_size, dtype)
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py:2926 _generate_zero_filled_state_for_cell
        return _generate_zero_filled_state(batch_size, cell.state_size, dtype)
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py:2944 _generate_zero_filled_state
        return create_zeros(state_size)
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/keras/layers/recurrent.py:2939 create_zeros
        return array_ops.zeros(init_state_size, dtype=dtype)
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:2677 wrapped
        tensor = fun(*args, **kwargs)
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:2721 zeros
        output = _constant_if_small(zero, shape, dtype, name)
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:2662 _constant_if_small
        if np.prod(shape) < 1000:
    <__array_function__ internals>:180 prod
        
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/numpy/core/fromnumeric.py:3045 prod
        return _wrapreduction(a, np.multiply, 'prod', axis, dtype, out,
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/numpy/core/fromnumeric.py:86 _wrapreduction
        return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
    /home/software/anaconda3/envs/mydlenv/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:748 __array__
        raise NotImplementedError("Cannot convert a symbolic Tensor ({}) to a numpy"

    NotImplementedError: Cannot convert a symbolic Tensor (sequential_1/simple_rnn_1/strided_slice:0) to a numpy array.
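For context, a minimal training script along the following lines (a sketch, not the original code: the layer width, data shapes and epoch count are assumptions) exercises the same Sequential -> SimpleRNN -> get_initial_state call path shown in the traceback, and raises the same error when tensorflow 2.2/2.3 is combined with a numpy 1.20+ release such as the 1.23.4 in this environment:

# Hypothetical minimal reproduction; shapes and sizes are made up.
# With tensorflow 2.2/2.3 and numpy >= 1.20 this fails inside
# SimpleRNN.get_initial_state (array_ops.zeros -> np.prod on a symbolic
# batch-size tensor); with numpy 1.19.x it trains normally.
import numpy as np
import tensorflow as tf

x_train = np.random.random((100, 5, 1)).astype("float32")  # (samples, timesteps, features)
y_train = np.random.random((100, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(80),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

history = model.fit(x_train, y_train, batch_size=32, epochs=1)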



Cause and fix:
The installed numpy is too new for this TensorFlow build; downgrade numpy. (The TensorFlow, numpy, pandas, and Python versions must be mutually compatible.)
The following combination (the mydlenv environment above) does NOT work:

tensorflow                2.2.0
pandas                    1.5.3
numpy                     1.23.4

 

The following combination works:

The error is caused by an incompatibility between the numpy and tensorflow versions; tensorflow 2.2 + numpy 1.19.2 is also reported to be fine.

numpy                              1.18.5
tensorflow                         2.3.0

Note that pandas 1.5.3 requires numpy>=1.20.3, so after downgrading numpy, pip will report a dependency conflict with pandas.
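One way to see the conflict from inside the environment (a small sketch using the standard-library importlib.metadata, available on Python 3.8+) is to print pandas' declared numpy requirement:

# Sketch: show why pip complains once numpy is pinned below 1.20.
from importlib.metadata import requires

numpy_reqs = [req for req in requires("pandas") if req.startswith("numpy")]
print(numpy_reqs)  # e.g. ['numpy>=1.20.3; python_version < "3.10"', ...]

If pip does flag the conflict after the downgrade, one option is to also move pandas back to a release whose numpy floor is below 1.20 (for example the 1.1.x line), so that tensorflow, numpy and pandas agree.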

pip uninstall numpy 
pip install numpy==1.19.2 -i https://pypi.douban.com/simple
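
After reinstalling, a quick check (a sketch; run it in the same mydlenv environment) confirms the versions before retrying model.fit:

# Sketch: verify the downgrade took effect before re-running training.
import numpy as np
import tensorflow as tf

print("numpy     :", np.__version__)   # expect 1.19.2 (or another release below 1.20)
print("tensorflow:", tf.__version__)   # expect 2.2.x / 2.3.x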

From: https://www.cnblogs.com/emanlee/p/17125214.html
