How to do it...
Here is how we proceed with the single-layer perceptron:
- Import the modules needed:
import tensorflow as tf
import numpy as np
- Define the hyperparameters to be used:
# Hyperparameters
eta = 0.4         # learning rate parameter
epsilon = 1e-03   # minimum accepted error
max_epochs = 100  # maximum epochs
- Define the threshold function:
# Threshold activation function
def threshold(x):
    cond = tf.less(x, tf.zeros(tf.shape(x), dtype=x.dtype))
    out = tf.where(cond, tf.zeros(tf.shape(x)), tf.ones(tf.shape(x)))
    return out
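As a quick sanity check (our addition, not part of the original recipe), you can evaluate threshold on a few illustrative values in a session; negative inputs map to 0 and non-negative inputs to 1:

# Sanity check with illustrative inputs: expect [0. 1. 1.]
with tf.Session() as sess:
    print(sess.run(threshold(tf.constant([-1.5, 0.0, 0.7]))))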
- Specify the training data. In this example, we take a neuron with three inputs (A, B, C) plus a constant bias input (the fourth column of X_in, always 1) and train it to learn the logic Y = AB + BC:
# Training data: Y = AB + BC, the Boolean OR of two ANDs.
# The last input column is the constant bias term.
T, F = 1., 0.
X_in = [
    [T, T, T, T],
    [T, T, F, T],
    [T, F, T, T],
    [T, F, F, T],
    [F, T, T, T],
    [F, T, F, T],
    [F, F, T, T],
    [F, F, F, T],
]
Y = [
    [T],
    [T],
    [F],
    [F],
    [T],
    [F],
    [F],
    [F],
]
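Before building the graph, you can confirm that the labels really encode Y = AB + BC with a small plain-Python check (again our addition, not from the book), recomputing each label from the first three input columns:

# Verify the truth table: each label equals (A AND B) OR (B AND C)
for (a, b, c, bias), (y,) in zip(X_in, Y):
    assert float((a and b) or (b and c)) == y
print('Truth table matches Y = AB + BC')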
- Define the variables to be used and the computational graph that computes the weight updates; the update implements the perceptron learning rule, W ← W + η · Xᵀ(Y − Ŷ). Finally, execute the computational graph:
W = tf.Variable(tf.random_normal([4, 1], stddev=2, seed=0))
h = tf.matmul(X_in, W)    # weighted sum of the inputs
Y_hat = threshold(h)      # perceptron output
error = Y - Y_hat
mean_error = tf.reduce_mean(tf.square(error))
dW = eta * tf.matmul(X_in, error, transpose_a=True)  # perceptron learning rule
train = tf.assign(W, W + dW)
init = tf.global_variables_initializer()

err = 1
epoch = 0

with tf.Session() as sess:
    sess.run(init)
    while err > epsilon and epoch < max_epochs:
        epoch += 1
        err, _ = sess.run([mean_error, train])
        print('epoch: {0} mean error: {1}'.format(epoch, err))
    print('Training complete')
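After training, it is often useful to inspect the learned weights and compare the perceptron's predictions with the targets. The following is a minimal sketch, not part of the original recipe; the lines are meant to be placed inside the same with tf.Session() as sess: block, immediately after the training loop, so that they reuse the trained value of W:

# Our addition (hypothetical follow-up): inspect the trained weights and
# predictions; place inside the session, after the training loop above.
W_trained, Y_pred = sess.run([W, Y_hat])
print('Learned weights:\n', W_trained)
print('Predictions:', Y_pred.ravel())
print('Targets:    ', np.array(Y).ravel())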
The following is the output of the preceding training code: