50 real questions: An introduction to TensorFlow 2.x

This tutorial uses the TensorFlow low-level API. If you want to learn the high-level Keras API instead, you can go straight to the Keras tutorial "40 questions to max out Keras — life is short, I choose Keras".

TensorFlow is the second-generation machine learning system developed by Google, improving in many ways on DistBelief, the deep learning infrastructure Google built in 2011; it runs on thousands of data center servers and on all kinds of devices, and it is fully open source. Its name comes from its operating principle: Tensor means an N-dimensional array, Flow means computation based on a dataflow graph, and TensorFlow is the process of tensors flowing from one end of the graph to the other. It is a system that feeds complex data structures into artificial neural networks for analysis and processing, and it is used across machine learning and deep learning fields such as speech recognition and image recognition.

TensorFlow can be difficult to install, so I suggest learning in an online environment first; you can run this tutorial online at https://www.kesci.com/home/project/5e28030eb8c462002d64c517
# Import the necessary libraries
import numpy as np
import matplotlib.pyplot as plt
import os
import pickle

1. Import the tensorflow library, abbreviated as tf, and output its version

import tensorflow as tf
tf.__version__
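TF 2.x executes operations eagerly by default, which is why the snippets below return concrete values immediately. A quick check:

tf.executing_eagerly()  # -> True by default in TF 2.x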

1. Tensors

Constants

2. Create a 3x3 all-zero constant tensor

c = tf.zeros([3, 3])

3. Create an all-ones constant tensor with the same shape as the tensor in the previous question

tf.ones_like(c)

4. Create a 2x3 constant tensor filled with 6

tf.fill([2, 3], 6)  # a 2x3 constant tensor filled with 6

5. Create a 3x3 random normal tensor

tf.random.normal([3, 3])

6. Create a constant tensor from a two-dimensional array

a = tf.constant([[1, 2], [3, 4]])  # two-dimensional constant with shape (2, 2)
a

7. Extract the numpy array from the tensor

a.numpy()

8. Take 5 equally spaced numbers from 1.0 to 10.0 to form a constant tensor

tf.linspace(1.0, 10.0, 5)

9. Starting from 1, take a number every 2 until the limit of 10 is reached (exclusive)

tf.range(start=1, limit=10, delta=2)

Calculations

10. Add two tensors

a + a

11. Do matrix multiplication of two tensors

tf.matmul(a, a)

12. Compute the element-wise (Hadamard) product of two tensors

tf.multiply(a, a)
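The distinction between the two matters; with a = [[1, 2], [3, 4]] as defined above, the results (worked out by hand) are:

tf.matmul(a, a)    # [[ 7, 10], [15, 22]]  -- matrix product
tf.multiply(a, a)  # [[ 1,  4], [ 9, 16]]  -- element-wise product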

13. Transpose a tensor

tf.linalg.matrix_transpose(c)

14. Transform a 12-element tensor into a tensor with 3 rows

b = tf.linspace(1.0, 10.0, 12)
tf.reshape(b, [3, 4])
# Method two
tf.reshape(b, [3, -1])
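The -1 in method two tells reshape to infer that axis from the total number of elements; a quick standalone illustration:

tf.reshape(tf.range(6), [2, -1])  # -1 is inferred as 3, giving shape (2, 3)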

2. Automatic differentiation

This part will compute the derivative of y = x² at x = 1.

Variables

15. Create a new 1x1 variable with a value of 1

x = tf.Variable([1.0])
x

16. Create a new GradientTape to track the gradient and write the formula to be differentiated in it

with tf.GradientTape() as tape:  # track the gradient
    y = x * x

17. Find the derivative of y with respect to x

grad = tape.gradient(y, x)  # compute the gradient
grad
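Here grad comes out as 2.0, since dy/dx = 2x and x = 1. By default the tape only tracks tf.Variable objects; a minimal sketch of differentiating with respect to a plain tensor by calling tape.watch (the x2/y2 names are just for illustration):

x2 = tf.constant(3.0)
with tf.GradientTape() as tape:
    tape.watch(x2)  # explicitly track a tensor that is not a tf.Variable
    y2 = x2 * x2
tape.gradient(y2, x2)  # -> 6.0, since dy/dx = 2x = 6 at x = 3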

3. Linear regression example

In this part, 100 data points along with random noise will be generated, and then these data points will be fitted.

18. Generate the X, y data: X is 100 random numbers, noise is 100 random numbers, and y = 3X + 2 + noise

X = tf.random.normal([100, 1]).numpy()
noise = tf.random.normal([100, 1]).numpy()
y = 3 * X + 2 + noise

Visualize these points

plt.scatter(X, y)

19. Create the parameters W and b (variable tensors) to be learned

W = tf.Variable(np.random.randn())
b = tf.Variable(np.random.randn())
print('W: %f, b: %f' % (W.numpy(), b.numpy()))

20. Create a linear regression prediction model

def linear_regression(x):
    return W * x + b

21. Create a loss function; here the mean of the squared difference between the true and predicted values is used. The formula is: loss = mean((y_pred - y_true)^2)

def mean_square(y_pred, y_true):
    return tf.reduce_mean(tf.square(y_pred - y_true))

22. Create a GradientTape and write the computation to be differentiated inside it

with tf.GradientTape() as tape:
    pred = linear_regression(X)
    loss = mean_square(pred, y)

23. Find the partial derivatives of loss with respect to W and b

dW, db = tape.gradient(loss, [W, b])

24. Use plain gradient descent to update W and b, with learning_rate set to 0.1

W.assign_sub(0.1 * dW)
b.assign_sub(0.1 * db)
print('W: %f, b: %f' % (W.numpy(), b.numpy()))

25. The above is a single iteration. Now keep looping for 20 iterations, recording loss, W, and b at each step

for i in range(20):
    with tf.GradientTape() as tape:
        pred = linear_regression(X)
        loss = mean_square(pred, y)
    dW, db = tape.gradient(loss, [W, b])
    W.assign_sub(0.1 * dW)
    b.assign_sub(0.1 * db)
    print("step: %i, loss: %f, W: %f, b: %f" % (i + 1, loss, W.numpy(), b.numpy()))

Draw the final fitted curve

plt.plot(X, y, 'ro', label='Original data')
plt.plot(X, np.array(W * X + b), label='Fitted line')
plt.legend()
plt.show()
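As an aside, the manual assign_sub update above is exactly what plain stochastic gradient descent does; a minimal sketch of the same single step using TensorFlow's built-in tf.optimizers.SGD (W, b, X, y as defined above):

sgd = tf.optimizers.SGD(learning_rate=0.1)
with tf.GradientTape() as tape:
    loss = mean_square(linear_regression(X), y)
grads = tape.gradient(loss, [W, b])
sgd.apply_gradients(zip(grads, [W, b]))  # equivalent to W -= 0.1 * dW; b -= 0.1 * db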

4. Neural network example

This part will train the LeNet5 model on the CIFAR10 data set

The model structure is as follows: INPUT -> C1 (5x5 convolution, 6 kernels) -> S2 (2x2 max pooling) -> C3 (5x5 convolution, 16 kernels) -> S4 (2x2 max pooling) -> C5 (fully connected, 120) -> F6 (fully connected, 84) -> OUTPUT (fully connected, 10, softmax)

The CIFAR10 data set consists of 32x32, 3-channel images with 10 label classes

Define parameters

26. Define the parameters of convolutional layer C1

Input image: 3 x 32 x 32

Kernel size: 5 x 5

Number of kernels: 6

So a 5 x 5 x 3 x 6 weight variable and 6 bias variables need to be defined.

conv1_w = tf.Variable(tf.random.truncated_normal([5, 5, 3, 6], stddev=0.1))
conv1_b = tf.Variable(tf.zeros([6]))

27. Define the parameters of convolutional layer C3

Input: 14 x 14 x 6

Kernel size: 5 x 5

Number of kernels: 16

So a 5 x 5 x 6 x 16 weight variable and 16 bias variables need to be defined.

conv2_w = tf.Variable(tf.random.truncated_normal([5, 5, 6, 16], stddev=0.1))
conv2_b = tf.Variable(tf.zeros([16]))

28. Define the parameters of fully connected layer C5

Input: 5 x 5 x 16

Output: 120

fc1_w = tf.Variable(tf.random.truncated_normal([5 * 5 * 16, 120], stddev=0.1))
fc1_b = tf.Variable(tf.zeros([120]))

29. Define the parameters of fully connected layer F6

Input: 120

Output: 84

fc2_w = tf.Variable(tf.random.truncated_normal([120, 84], stddev=0.1))
fc2_b = tf.Variable(tf.zeros([84]))

30. Define the parameters of the fully connected OUTPUT layer

Input: 84

Output: 10

fc3_w = tf.Variable(tf.random.truncated_normal([84, 10], stddev=0.1))
fc3_b = tf.Variable(tf.zeros([10]))

Model

def lenet5(input_img):
    ## 31. Build INPUT->C1
    conv1_1 = tf.nn.conv2d(input_img, conv1_w, strides=[1, 1, 1, 1], padding="VALID")
    conv1_2 = tf.nn.relu(tf.nn.bias_add(conv1_1, conv1_b))
    ## 32. Build C1->S2
    pool1 = tf.nn.max_pool(conv1_2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
    ## 33. Build S2->C3
    conv2_1 = tf.nn.conv2d(pool1, conv2_w, strides=[1, 1, 1, 1], padding="VALID")
    conv2_2 = tf.nn.relu(tf.nn.bias_add(conv2_1, conv2_b))
    ## 34. Build C3->S4
    pool2 = tf.nn.max_pool(conv2_2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
    # Flatten the output of S4
    reshaped = tf.reshape(pool2, [-1, 16 * 5 * 5])
    ## 35. Build S4->C5
    fc1 = tf.nn.relu(tf.matmul(reshaped, fc1_w) + fc1_b)
    ## 36. Build C5->F6
    fc2 = tf.nn.relu(tf.matmul(fc1, fc2_w) + fc2_b)
    ## 37. Build F6->OUTPUT
    output = tf.nn.softmax(tf.matmul(fc2, fc3_w) + fc3_b)
    return output
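As a sanity check on the 5 * 5 * 16 flatten size, the VALID-padding shape arithmetic works out as follows (out = in - kernel + 1 for a stride-1 convolution, and each 2x2 pool halves the spatial dimensions):

# 32x32x3  --C1 (5x5 conv)-->  28x28x6   (32 - 5 + 1 = 28)
# 28x28x6  --S2 (2x2 pool)-->  14x14x6
# 14x14x6  --C3 (5x5 conv)-->  10x10x16  (14 - 5 + 1 = 10)
# 10x10x16 --S4 (2x2 pool)-->  5x5x16    -> flattened to 5 * 5 * 16 = 400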

38. Create an Adam optimizer with a learning rate of 0.02

optimizer = tf.optimizers.Adam(learning_rate=0.02)

Verify network correctness

(Just grab a bit of data to verify that the network can run.)

39. Create a random pair of x, y data, where x has shape (1, 32, 32, 3) and y has shape (10,)

test_x = tf.Variable(tf.random.truncated_normal([1, 32, 32, 3]))
test_y = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]

Send the data into the model for backpropagation

with tf.GradientTape() as tape:
    ## 40. Feed the data into the model
    prediction = lenet5(test_x)
    print("First prediction:", prediction)
    ## 41. Use cross entropy as the loss function to calculate the loss
    cross_entropy = -tf.reduce_sum(test_y * tf.math.log(prediction))
## 42. Calculate the gradient
trainable_variables = [conv1_w, conv1_b, conv2_w, conv2_b, fc1_w, fc1_b, fc2_w, fc2_b, fc3_w, fc3_b]  # the parameters to be optimized
grads = tape.gradient(cross_entropy, trainable_variables)
## 43. Apply the gradient update
optimizer.apply_gradients(zip(grads, trainable_variables))
print("Prediction after backpropagation:", lenet5(test_x))
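One hedged aside: taking the log of a softmax output can hit log(0) if the network saturates. tf.nn.softmax_cross_entropy_with_logits computes the same quantity more stably from the raw pre-softmax scores; a sketch of the idea (hypothetical, since it assumes lenet5 is changed to return its logits rather than softmax probabilities):

# assumes `logits` is the pre-softmax output of the last layer (a hypothetical change to lenet5)
labels = tf.constant([test_y], dtype=tf.float32)  # shape (1, 10)
cross_entropy = tf.reduce_sum(
    tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))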

Read in data, preprocess

## load_cifar() definition omitted
train_X, train_Y, test_X, test_Y = load_cifar('/home/kesci/input/cifar10')
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
train_X.shape, train_Y.shape, test_X.shape, test_Y.shape

Take a look at the data

plt.imshow(train_X[0])
plt.show()
print(classes[train_Y[0]])

44. Preprocessing step 1: normalize train_X and test_X

train_X = tf.cast(train_X, dtype=tf.float32) / 255
test_X = tf.cast(test_X, dtype=tf.float32) / 255

45. Preprocessing step 2: one-hot encode train_Y and test_Y

train_Y = tf.one_hot(train_Y, depth=10)
test_Y = tf.one_hot(test_Y, depth=10)
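To see what the encoding looks like, a quick standalone check with toy labels (not the CIFAR10 data):

tf.one_hot([0, 2], depth=4)
# -> [[1., 0., 0., 0.],
#     [0., 0., 1., 0.]]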

Train the network

Because the parameters were modified in the previous experiment, all parameters need to be reinitialized

conv1_w = tf.Variable(tf.random.truncated_normal([5, 5, 3, 6], stddev=0.1))
conv1_b = tf.Variable(tf.zeros([6]))
conv2_w = tf.Variable(tf.random.truncated_normal([5, 5, 6, 16], stddev=0.1))
conv2_b = tf.Variable(tf.zeros([16]))
fc1_w = tf.Variable(tf.random.truncated_normal([5 * 5 * 16, 120], stddev=0.1))
fc1_b = tf.Variable(tf.zeros([120]))
fc2_w = tf.Variable(tf.random.truncated_normal([120, 84], stddev=0.1))
fc2_b = tf.Variable(tf.zeros([84]))
fc3_w = tf.Variable(tf.random.truncated_normal([84, 10], stddev=0.1))
fc3_b = tf.Variable(tf.zeros([10]))

Then redefine an optimizer

optimizer2 = tf.optimizers.Adam(learning_rate=0.002)

Simply write a function to calculate the accuracy rate

def accuracy_fn(y_pred, y_true):
    preds = tf.argmax(y_pred, axis=1)  # index of the largest value, which corresponds to the class
    labels = tf.argmax(y_true, axis=1)
    return tf.reduce_mean(tf.cast(tf.equal(preds, labels), tf.float32))

46. Feed the data into the model and start training. Iterate over the training set 5 times, splitting each pass into 25 batches, and output the accuracy on the training set after each pass over the data set.

EPOCHS = 5  # number of passes over the entire data set
for epoch in range(EPOCHS):
    for i in range(25):  # each pass over the data set is split into 25 batches of 2000
        with tf.GradientTape() as tape:
            prediction = lenet5(train_X[i * 2000:(i + 1) * 2000])
            cross_entropy = -tf.reduce_sum(train_Y[i * 2000:(i + 1) * 2000] * tf.math.log(prediction))
        trainable_variables = [conv1_w, conv1_b, conv2_w, conv2_b, fc1_w, fc1_b, fc2_w, fc2_b, fc3_w, fc3_b]  # the parameters to be optimized
        grads = tape.gradient(cross_entropy, trainable_variables)  # compute the gradient
        optimizer2.apply_gradients(zip(grads, trainable_variables))  # apply the update
    # After each pass over the data set, output the accuracy on the training set
    accuracy = accuracy_fn(lenet5(train_X), train_Y)
    print('Epoch [{}/{}], Train Loss: {:.3f}, Train Accuracy: {:.3f}'.format(epoch + 1, EPOCHS, cross_entropy / 2000, accuracy))
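Manual slicing works here because the 50000 training images divide evenly into 25 batches of 2000. A common alternative is to let tf.data handle batching and shuffling; a minimal sketch reusing the tensors and variables defined above:

ds = tf.data.Dataset.from_tensor_slices((train_X, train_Y)).shuffle(10000).batch(2000)
for epoch in range(EPOCHS):
    for batch_x, batch_y in ds:
        with tf.GradientTape() as tape:
            loss = -tf.reduce_sum(batch_y * tf.math.log(lenet5(batch_x)))
        grads = tape.gradient(loss, trainable_variables)
        optimizer2.apply_gradients(zip(grads, trainable_variables))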

Use the network to make predictions

47. Make predictions on the test set

test_prediction = lenet5(test_X)
test_acc = accuracy_fn(test_prediction, test_Y)
test_acc.numpy()

Take some data to view the prediction results

plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.imshow(test_X[i], cmap=plt.cm.binary)
    title = classes[np.argmax(test_Y[i])] + '=>'
    title += classes[np.argmax(test_prediction[i])]
    plt.xlabel(title)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)

5. Saving and reading variables

In this part, we implement the simplest way to save & read variable values

48. Create a new Checkpoint object and fill it with the data that has just been trained

save = tf.train.Checkpoint()
save.listed = [fc3_b]
save.mapped = {'fc3_b': save.listed[0]}

49. Use the save() method to save, and record the returned save path

save_path = save.save('/home/kesci/work/data/tf_list_example')
print(save_path)

50. Create a new Checkpoint object and read data from it

restore = tf.train.Checkpoint()
fc3_b2 = tf.Variable(tf.zeros([10]))
print(fc3_b2.numpy())
restore.mapped = {'fc3_b': fc3_b2}
restore.restore(save_path)
print(fc3_b2.numpy())
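The same round trip can also be written by handing the variable to the Checkpoint constructor as a named attribute; a minimal sketch reusing fc3_b (the save path here is hypothetical):

ckpt = tf.train.Checkpoint(fc3_b=fc3_b)  # track the variable under the name 'fc3_b'
path = ckpt.save('/home/kesci/work/data/tf_kwargs_example')  # hypothetical path
fc3_b3 = tf.Variable(tf.zeros([10]))
tf.train.Checkpoint(fc3_b=fc3_b3).restore(path)  # fc3_b3 now holds the saved values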