
Deep Learning - Convolutional Neural Networks - Using TensorFlow - 49



1. 01_first_graph

import tensorflow as tf

x = tf.Variable(3, name='x')
y = tf.Variable(4, name='y')

f = x * x * y + y + 2

# log_device_placement=True makes the session log which device each op is placed on
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

sess.run(x.initializer)
sess.run(y.initializer)

result = sess.run(f)
print(result)  # 42
sess.close()

2. session run

import tensorflow as tf

x = tf.Variable(3, name='x')
y = tf.Variable(4, name='y')
f = x*x*y + y + 2

print(f)  # Tensor("add_1:0", shape=(), dtype=int32); "add_1:0" names the second add op, and shape=() means the output is a scalar


with tf.Session() as sess:
    x.initializer.run()  # equivalent to sess.run(x.initializer)
    y.initializer.run()
    result = f.eval()  # equivalent to sess.run(f)

print(result)


3. global_variables_initializer

import tensorflow as tf

x = tf.Variable(3, name='x')
y = tf.Variable(4, name='y')
f = x*x*y + y + 2


init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)

    result = f.eval()

print(result)
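Note that tf.global_variables_initializer() does not initialize anything by itself; it returns a single op that, when run, initializes every variable in the graph, replacing the one-run-per-variable pattern of the previous example.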

4. InteractiveSession

import tensorflow as tf

x = tf.Variable(3, name='x')
y = tf.Variable(4, name='y')

f = x*x*y + y + 2

init = tf.global_variables_initializer()


sess = tf.InteractiveSession()

init.run()

result = f.eval()

print(result)

sess.close()
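Unlike a plain Session, an InteractiveSession installs itself as the default session on creation, so init.run() and f.eval() work without a with block; the trade-off is that you have to call sess.close() yourself.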

5. get_default_graph

import tensorflow as tf

x1 = tf.Variable(1)
print(x1.graph is tf.get_default_graph())  # True: x1 lives in the default graph

graph = tf.Graph()
x3 = tf.Variable(3)  # created outside the with block, so it goes into the default graph
with graph.as_default():
    x2 = tf.Variable(2)  # created inside the with block, so it goes into `graph`


x4 = tf.Variable(3)  # the default graph is restored here

print(x2.graph is graph)  # True
print(x2.graph is tf.get_default_graph())  # False

print(x3.graph is tf.get_default_graph())  # True
print(x4.graph is tf.get_default_graph())  # True
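To run ops that live in a non-default graph, pass that graph to the Session constructor. A minimal sketch (not part of the original post):

with tf.Session(graph=graph) as sess:
    sess.run(x2.initializer)
    print(sess.run(x2))  # 2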

6. life_cycle

import tensorflow as tf

w = tf.Variable(3)

x = w + 2
y = x + 5
z = y + 3

with tf.Session() as sess:
    sess.run(w.initializer)
    sess.run(y)  # evaluates w and x, then y

    sess.run(z)  # evaluates w and x again, then z


with tf.Session() as sess:
    sess.run(w.initializer)
    y_val, z_val = sess.run([y, z])
    print(y_val, z_val)
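This demonstrates node value lifetimes: variable state (w) persists for the lifetime of the session, but every other node value is dropped between runs. The first session therefore computes x = w + 2 twice, once for y and once for z; the second session evaluates y and z in a single run, so x is computed only once.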

7. linear_regression

import tensorflow as tf
import numpy as np
from sklearn.datasets import fetch_california_housing

housing = fetch_california_housing(data_home="./scikit_learn_data", download_if_missing=True)

m, n = housing.data.shape
print(m, n)
print(housing.data, housing.target)
print(housing.feature_names)

housing_data_plus_bias = np.c_[np.ones((m, 1)), housing.data]

X = tf.constant(housing_data_plus_bias, dtype=tf.float32, name='X')
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name='y')

XT = tf.transpose(X)

# Normal equation: theta = (X^T X)^{-1} X^T y
theta = tf.matmul(tf.matmul(tf.matrix_inverse(tf.matmul(XT, X)), XT), y)

with tf.Session() as sess:
    theta_value = sess.run(theta)
print(theta_value)
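tf.matrix_inverse and the two matmuls implement the closed-form normal equation above. As a sanity check (a sketch, not part of the original post), the same solution can be computed in plain NumPy and compared with theta_value:

X_np = housing_data_plus_bias
y_np = housing.target.reshape(-1, 1)
theta_np = np.linalg.inv(X_np.T.dot(X_np)).dot(X_np.T).dot(y_np)
print(theta_np)  # matches theta_value up to float32 precision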

8. manual_gradient

import tensorflow as tf
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.preprocessing import StandardScaler


housing = fetch_california_housing(data_home="./scikit_learn_data", download_if_missing=True)

n_epochs = 36500
learning_rate = 0.001

m, n = housing.data.shape
housing_data_plus_bias = np.c_[np.ones((m, 1)), housing.data]

# Standardize features to zero mean and unit variance
scaler = StandardScaler(with_mean=True, with_std=True).fit(housing_data_plus_bias)
scaled_housing_data_plus_bias = scaler.transform(housing_data_plus_bias)
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name='X')
y = tf.constant(housing.target.reshape(-1,1), dtype=tf.float32, name='y')

# Initialize theta randomly in [-1, 1)
theta = tf.Variable(tf.random_uniform([n+1, 1], -1, 1), name='theta')
y_pred = tf.matmul(X, theta, name='predictions')

error = y_pred - y

mse = tf.reduce_mean(tf.square(error), name='mse')

# Gradient of the MSE loss: (2/m) * X^T (y_pred - y)
gradients = 2/m*tf.matmul(tf.transpose(X), error)

# Training is just repeatedly updating theta
training_op = tf.assign(theta, theta - learning_rate * gradients)

init = tf.global_variables_initializer()

with tf.Session() as sess:

    sess.run(init)

    for epoch in range(n_epochs):

        if epoch % 100 == 0:
            print("Epoch", epoch, "MSE =", mse.eval())
        sess.run(training_op)

    best_theta = theta.eval()
    print(best_theta)




9. auto_diff

import tensorflow as tf
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.preprocessing import StandardScaler


housing = fetch_california_housing(data_home="./scikit_learn_data", download_if_missing=True)

n_epochs = 36500
learning_rate = 0.001

m, n = housing.data.shape
housing_data_plus_bias = np.c_[np.ones((m, 1)), housing.data]

# Standardize features to zero mean and unit variance
scaler = StandardScaler(with_mean=True, with_std=True).fit(housing_data_plus_bias)
scaled_housing_data_plus_bias = scaler.transform(housing_data_plus_bias)
X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name='X')
y = tf.constant(housing.target.reshape(-1,1), dtype=tf.float32, name='y')

# Initialize theta randomly in [-1, 1)
theta = tf.Variable(tf.random_uniform([n+1, 1], -1, 1), name='theta')
y_pred = tf.matmul(X, theta, name='predictions')

error = y_pred - y

mse = tf.reduce_mean(tf.square(error), name='mse')

# Manual gradient of MSE: (2/m) * X^T (y_pred - y)
# gradients = 2/m*tf.matmul(tf.transpose(X), error)
# tf.gradients derives the same gradient automatically
gradients = tf.gradients(mse, [theta])[0]
# Training is just repeatedly updating theta
training_op = tf.assign(theta, theta - learning_rate * gradients)

init = tf.global_variables_initializer()

with tf.Session() as sess:

    sess.run(init)

    for epoch in range(n_epochs):

        if epoch % 100 == 0:
            print("Epoch", epoch, "MSE =", mse.eval())
        sess.run(training_op)

    best_theta = theta.eval()
    print(best_theta)
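In practice, TF 1.x goes one step further: an optimizer both computes the gradients and applies the update, replacing the tf.gradients / tf.assign pair. A minimal sketch of the change:

optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)  # replaces the manual gradient and assign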




12. softmax_regression

from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

my_mnist = input_data.read_data_sets("MNIST_data_bak/", one_hot=True)

# 55,000 training images
# 10,000 test images
# 5,000 validation images

# Input is a batch of images; None means any batch size, and 784 = 28*28 pixels flattened into one vector per image
x = tf.placeholder(dtype=tf.float32, shape=(None, 784))
y = tf.placeholder(dtype=tf.float32, shape=(None, 10))
W = tf.Variable(tf.random_uniform([784, 10]))
b = tf.Variable(tf.zeros([10]))
y_predict = tf.nn.softmax(tf.matmul(x, W) + b)

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y * tf.log(y_predict), reduction_indices=[1]))

train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

# tf.argmax returns the index of the largest value along the given axis, i.e. the digit with the highest predicted probability
correct_prediction = tf.equal(tf.argmax(y_predict, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    for _ in range(10000):
        batch_xs, batch_ys = my_mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})
        print("TrainSet batch acc : %s  " % accuracy.eval({x: batch_xs, y: batch_ys}))
        print("ValidSet acc : %s" % accuracy.eval({x: my_mnist.validation.images, y: my_mnist.validation.labels}))

    print("Test acc", accuracy.eval({x:my_mnist.test.images, y:my_mnist.test.labels}))

13. convolution


import numpy as np

from sklearn.datasets import load_sample_images
import tensorflow as tf
import matplotlib.pylab as plt

# Two sample images, each of shape (height, width, channels)
# A mini-batch has shape (batch_size, height, width, channels)
dataset = np.array(load_sample_images().images, dtype=np.float32)

batch_size, height, width, channels = dataset.shape
print("m: h: w: c====>", batch_size, height, width, channels)

# plt.imshow(load_sample_images().images[0])
# plt.show()
# plt.imshow(load_sample_images().images[1])
# plt.show()

# Create 2 filters of size 7x7 covering all input channels
filters_test = np.zeros(shape=(7, 7, channels, 2), dtype=np.float32)
filters_test[:, 3, :, 0] = 1  # first filter: middle column (index 3) set to 1 -> responds to vertical lines
filters_test[3, :, :, 1] = 1  # second filter: middle row (index 3) set to 1 -> responds to horizontal lines

X = tf.placeholder(tf.float32, shape=(None, height, width, channels))
convolution = tf.nn.conv2d(X, filter=filters_test, strides=[1, 1, 1, 1],
                           padding="SAME")

with tf.Session() as sess:
    output = sess.run(convolution, feed_dict={X: dataset})
    print(output.shape)

plt.imshow(load_sample_images().images[0])
plt.show()

plt.imshow(output[0, :, :, 0])  # first feature map of the first image
plt.show()

plt.imshow(output[0, :, :, 1])  # second feature map of the first image
plt.show()

###########################

plt.imshow(load_sample_images().images[1])
plt.show()

plt.imshow(output[1, :, :, 0])  # first feature map of the second image
plt.show()

plt.imshow(output[1, :, :, 1])  # second feature map of the second image
plt.show()
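Because padding="SAME" with strides of 1 pads the input so that each output keeps the input's spatial size, the printed shape is (2, height, width, 2): two images, one feature map per filter.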

14. pooling

import numpy as np
from sklearn.datasets import load_sample_images
import tensorflow as tf
import matplotlib.pylab as plt

dataset = np.array(load_sample_images().images, dtype=np.float32)

batch_size, height, width, channels = dataset.shape
print(batch_size, height, width, channels)


X = tf.placeholder(tf.float32, shape=(None, height, width, channels))


max_pool = tf.nn.max_pool(X, ksize=[1, 4, 4, 1], strides=[1, 4, 4, 1], padding="VALID")


with tf.Session() as sess:
    output = sess.run(max_pool, feed_dict={X: dataset})
    print(output.shape)

# First image, original and after pooling
plt.imshow(dataset[0].astype(np.uint8))
plt.show()
plt.imshow(output[0].astype(np.uint8))
plt.show()

# Second image, original and after pooling
plt.imshow(dataset[1].astype(np.uint8))
plt.show()
plt.imshow(output[1].astype(np.uint8))
plt.show()
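With padding="VALID" the output size per dimension is floor((in - k)/s) + 1, so 4x4 windows with stride 4 shrink each image to about a quarter of its height and width while keeping all 3 channels, which is why the pooled images look like low-resolution copies of the originals.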




