
Setting up an environment for running neural networks: problems encountered and explanations (1)

Posted: 2024-09-06 10:03:10

CUDA: 12.2
cuDNN: 8.9.7
TensorFlow: 2.17.0
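
As a quick sanity check (my addition, not part of the original post), you can ask TensorFlow itself which CUDA/cuDNN versions the installed wheel was built against and whether the GPU is visible; a minimal standalone snippet:

import tensorflow as tf

print("TensorFlow:", tf.__version__)

# Build-time CUDA/cuDNN versions of the installed wheel.
build = tf.sysconfig.get_build_info()
print("built with CUDA:", build.get("cuda_version"))
print("built with cuDNN:", build.get("cudnn_version"))

# GPUs TensorFlow can actually see at runtime.
print("visible GPUs:", tf.config.list_physical_devices("GPU"))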

(python310_test) {9:37}/home/code/python ➭ python mnist_test.py
2024-09-06 09:39:29.473128: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-09-06 09:39:29.473229: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-09-06 09:39:29.474765: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-09-06 09:39:29.486496: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-09-06 09:39:31.073095: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2024-09-06 09:39:33.295797: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:887] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-09-06 09:39:33.356094: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:887] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-09-06 09:39:33.356833: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:887] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-09-06 09:39:33.360581: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:887] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-09-06 09:39:33.361110: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:887] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-09-06 09:39:33.361697: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:887] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-09-06 09:39:36.333888: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:887] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-09-06 09:39:36.334434: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:887] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-09-06 09:39:36.334755: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2022] Could not identify NUMA node of platform GPU id 0, defaulting to 0.  Your kernel may not have been built with NUMA support.
2024-09-06 09:39:36.335313: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:887] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-09-06 09:39:36.335407: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1929] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 1300 MB memory:  -> device: 0, name: NVIDIA GeForce GTX 960, pci bus id: 0000:01:00.0, compute capability: 5.2
Epoch 1/5
2024-09-06 09:39:39.774178: I external/local_xla/xla/service/service.cc:168] XLA service 0x7fac628d10f0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2024-09-06 09:39:39.774274: I external/local_xla/xla/service/service.cc:176]   StreamExecutor device (0): NVIDIA GeForce GTX 960, Compute Capability 5.2
2024-09-06 09:39:39.784928: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:269] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2024-09-06 09:39:40.969447: I external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:454] Loaded cuDNN version 8907
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1725586781.124390    1795 device_compiler.h:186] Compiled cluster using XLA!  This line is logged at most once for the lifetime of the process.
1875/1875 [==============================] - 22s 10ms/step - loss: 0.3729 - accuracy: 0.8871
Epoch 2/5
1875/1875 [==============================] - 18s 10ms/step - loss: 0.1962 - accuracy: 0.9396
Epoch 3/5
1875/1875 [==============================] - 19s 10ms/step - loss: 0.1528 - accuracy: 0.9539
Epoch 4/5
1875/1875 [==============================] - 19s 10ms/step - loss: 0.1269 - accuracy: 0.9600
Epoch 5/5
1875/1875 [==============================] - 19s 10ms/step - loss: 0.1132 - accuracy: 0.9646
313/313 [==============================] - 3s 10ms/step - loss: 0.1134 - accuracy: 0.9652
1/1 [==============================] - 0s 115ms/step
[7 2 1 0 4]
[7 2 1 0 4]
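
The mnist_test.py script itself is not included in the post. Judging from the output (1875 steps per epoch, i.e. 60000 training images at the default batch size of 32, five epochs, one evaluation pass, then predictions for the first five test digits compared against their labels), a script of this shape could look like the sketch below; the model architecture here is an assumption, not the author's actual code:

import numpy as np
import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small dense classifier (layer sizes are an assumption).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Default batch size 32 -> 60000 / 32 = 1875 steps per epoch, as in the log.
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)

# Predict the first five test digits and compare with the true labels.
pred = model.predict(x_test[:5])
print(np.argmax(pred, axis=1))
print(y_test[:5])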

The problems behind the first two colors of messages (the "Unable to register ... factory" errors and the warnings near the top of the log) are discussed all over the web; I forgot to take screenshots, but in any case they can be ignored.
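
If you want a quieter console, one optional tweak (my addition, not from the original post) is to raise TensorFlow's C++ log threshold before importing it; note that this hides all messages below the chosen severity, not just these particular ones:

import os

# 0 = all messages, 1 = hide INFO, 2 = hide INFO + WARNING, 3 = hide INFO + WARNING + ERROR.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

import tensorflow as tf  # must be imported after the variable is set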

Explanation for the messages in the remaining two colors:

 https://blog.csdn.net/Deaohst/article/details/125708952
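
Most of the remaining noise is the repeated "could not open file to read NUMA node" line. For reference, the workaround usually quoted for this family of messages (a general recipe, not necessarily what the linked article does) is to write 0 into the GPU's numa_node entry in sysfs as root, using the PCI address 0000:01:00.0 from the log; if the file does not exist because the kernel really was built without NUMA support, the message is purely informational and can be ignored. A sketch:

from pathlib import Path

# PCI address of the GPU, taken from the log above.
numa_file = Path("/sys/bus/pci/devices/0000:01:00.0/numa_node")

if numa_file.exists():
    # Requires root; the setting resets after a reboot.
    numa_file.write_text("0\n")
else:
    print("numa_node entry not found; the kernel likely lacks NUMA support, "
          "so the TensorFlow message can safely be ignored")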

Tags: xla, neural networks, cuda, executor, explanations, installation
From: https://www.cnblogs.com/KingZhan/p/18399707

    SVI第四部分:提示和技巧¶pyro.ai/examples/svi_part_iv.html导致这一个的三个SVI教程(第一部分, 第二部分,& 第三部分)通过使用Pyro做变分推断所涉及的各个步骤。在这个过程中,我们定义了模型和指南(即,变分分布),设置了变分目标(特别是埃尔博斯),以及构造的优化器(pyro.opti......