import tensorflow as tf
import numpy as np
"""
本例子是用来演示利用TensorFlow训练出假设的权重和偏置
"""
# Generate 100 random numbers with NumPy as the input data
x_data = np.random.rand(100).astype(np.float32)
# Target relationship the model should recover: y = 0.1*x + 0.3
y_data = x_data*0.1+0.3
# Build the linear model
Weights = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
biases = tf.Variable(tf.zeros([1]))
y_pred = Weights*x_data+biases
# Quadratic cost function (mean squared error)
loss = tf.reduce_mean(tf.square(y_pred-y_data))
# Define a gradient descent optimizer (learning rate 0.5) for training
optimizer = tf.train.GradientDescentOptimizer(0.5)
# Minimize the cost function
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()  # initialize_all_variables() is deprecated
# Define the session
sess = tf.Session()
sess.run(init)  # run the initializer to give the variables their starting values
# Start training
for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(Weights), sess.run(biases))
The output is:
0 [-0.43459344] [0.7752902]
20 [-0.05772059] [0.38117662]
40 [0.05961926] [0.3207834]
60 [0.08966146] [0.30532113]
80 [0.09735306] [0.30136237]
100 [0.09932232] [0.30034882]
120 [0.09982649] [0.3000893]
140 [0.0999556] [0.30002287]
160 [0.09998864] [0.30000585]
180 [0.09999712] [0.3000015]
200 [0.09999926] [0.3000004]
After 201 iterations the trained weight and bias converge to values close to 0.1 and 0.3, the settings of our assumed target function. (A variant of the training loop that also prints the loss is sketched below.)
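If you also want to watch the cost decrease, the training op and the loss can be fetched in a single sess.run call instead of calling sess.run several times per step. The loop below is only a sketch of that variant; it assumes the graph defined above (train, loss, Weights, biases) is still in scope.

# Variant of the training loop: fetch the train op, the loss, and the
# current parameter values together in one sess.run call per step.
for step in range(201):
    _, loss_val, w_val, b_val = sess.run([train, loss, Weights, biases])
    if step % 20 == 0:
        print(step, loss_val, w_val, b_val)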
Note:
Do not mix TensorFlow and NumPy constructors. The initial value passed to tf.Variable should be built with TensorFlow ops (tf.random_uniform, tf.zeros, ...), not NumPy ones; otherwise the dtypes will not match (NumPy defaults to float64, while the rest of this graph is float32).
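A minimal sketch of that mismatch, assuming NumPy's default float64 output and the float32 x_data above; the failing lines are left commented out so the snippet does not raise:

# np.random.uniform returns float64 by default, so this variable would be
# float64, and multiplying it with the float32 x_data raises a dtype
# mismatch error in TF 1.x (there is no implicit cast between float types):
# bad_weights = tf.Variable(np.random.uniform(-1.0, 1.0, [1]))
# bad_pred = bad_weights * x_data

# Using the TensorFlow constructor keeps everything float32:
good_weights = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
good_pred = good_weights * x_data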