A School Assignment
Python
Google Colab, Visual Studio 2019
Pandas, TensorFlow, Keras, NumPy, Matplotlib, DataFrames, CUDA, ReLU, GPU, TPU, CPU
This assignment experiments with Python packages such as TensorFlow, Keras, Pandas, NumPy, and Matplotlib. The class provided a few lines of code to complete and test, as demonstrated in the following answers to the assignment questions.
Use PyPI or pip to install TensorFlow on your machine.
Write a Python script to perform the following tasks:
Use tf.random.uniform( ) to create two 1x9 matrices and name them A1 and A2 (dtype = tf.float32)
Use print( ) to show the content of A1 and A2
Use tf.reshape( ) to transform A1 and A2 into 3x3 matrices B1 and B2, respectively
Use print( ) to show the content of B1 and B2
Use tf.matmul( ) to calculate B1 * B2 and save the product into C1
Use tf.transpose() to transform C1 to C2
Use print( ) to show the content of C1 and C2
Use tf.cast( ) to typecast C1 to D1 as an integer matrix (dtype=tf.int32)
Use print( ) to show the content of D1
This code is for exploratory research. TensorFlow is a platform for machine learning. In this program, the objective is to test TensorFlow's functions. To implement this, I used Visual Studio 2019: I downloaded TensorFlow with pip and installed the CUDA toolkit. This code can also be run inside Google Colab.
To see more test cases for this question, please view the documentation here.
import tensorflow as tf
A1 = tf.random.uniform(shape=(1, 9), dtype = tf.float32)
A2 = tf.random.uniform(shape=(1, 9), dtype = tf.float32)
print("\n A1")
print(A1)
print("\n A2: ")
print(A2)
print("\n")
B1 = tf.reshape(tensor = A1, shape = [3, 3])
B2 = tf.reshape(tensor = A2, shape = [3, 3])
print("Reshaped into B1 & B2:")
print("\n B1:")
print(B1)
print("\n B2:")
print(B2)
print("\n")
C1 = tf.matmul(B1,B2) # multiplies B1 by B2, producing the product B1 * B2
C2 = tf.transpose(C1)
print("Multiply B1 and B2 as C1 and transpose C1")
print("\n C1:")
print(C1)
print("\n C2:")
print(C2)
print("\n")
print("Type Cast C1 to DL as tf.int32")
D1 = tf.cast(C1, dtype= tf.int32)
print("\n D1:")
print(D1)
Visit Google Colab at the address colab.research.google.com.
Write a segment of Python code for the following graph:
Let b = a vector of 100 elements with all the values being zero
W = a 784 x 100 matrix with random values that range from -1.0 to 1.0
x = a 100 x 784 matrix with random values that range from 0.0 to 1.0
Use print( ) to show the content of the h matrix.
The objective of this program is to learn to use Python with the TensorFlow API inside Google Colab, an online notebook environment. The variable b is a vector containing 100 elements, all set to 0. The variable W is a 784 x 100 matrix with values from -1.0 to 1.0. The variable x is a 100 x 784 matrix with values from 0.0 to 1.0. In this program, we use ReLU, a rectified linear unit: it outputs its input when the input is positive and outputs 0 otherwise. Inside this function, we use tf.matmul to multiply the matrices x and W (100x784 times 784x100, so the shapes align) and then add the vector b.
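As a quick illustration of the ReLU behavior described above (separate from the assignment answer), here is a minimal sketch that applies tf.nn.relu to a small hand-written tensor:
import tensorflow as tf
# Toy tensor with a negative, a zero, and a positive entry.
t = tf.constant([-2.0, 0.0, 3.5])
# ReLU keeps positive values and maps everything else to 0,
# so this prints [0. 0. 3.5].
print(tf.nn.relu(t))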
Test Case 1: Content of H
import tensorflow as tf
#Vector of 100 elements all set to 0
b = tf.zeros([100], dtype=tf.dtypes.float32, name=None)
b = tf.Variable(b)
#W = 784 x 100 matrix with values from -1.0 to 1.0:
W = tf.random.uniform(shape= [784, 100], minval= -1.0, maxval= 1.0, dtype = tf.float32)
W = tf.Variable(W)
#x = 100 x 784 matrix with values from 0.0 to 1.0:
x = tf.random.uniform(shape= [100, 784], minval= 0.0, maxval= 1.0, dtype = tf.float32)
x = tf.Variable(x)
#h = ReLU(Wx + b), computed here as matmul(x, W) + b so the shapes (100x784 times 784x100) line up
h = tf.nn.relu(tf.matmul(x,W) + b)
print(h)
Download TensorFlow and use Keras. Based on our online lecture notes, perform the following tasks:
(1) Import numpy, pandas, and matplotlib.pyplot
(2) Set up the Fashion-MNIST dataset
(3) Load training and test data from the Fashion-MNIST dataset
(4) With Keras, create a neural network model through Sequential( ) with three layers:
a. Flatten
b. Dense relu
c. Dense softmax
(5) Compile the model
(6) Train the model with train data and epochs = 10
(7) Use test data to evaluate the accuracy of the trained model
(8) Use test data to show the prediction results for the first 10 test images
(9) Create a table of ten rows and four columns. In column 1, paste the ten test images; in column 2, give the predicted classification group name; in column 3, provide the computed probability value; and in column 4, indicate whether the prediction is correct or wrong. A sketch for assembling such a table is shown after the code below.
Create screenshots to show your results.
This program was created inside Google Colab using the CPU runtime. It demonstrates building a neural network and making predictions with Keras.
To see more test cases for this question, please view the documentation here.
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
class_names = ['top','trouser','pullover','dress','coat','sandal','shirt','sneaker','bag','ankle boot']
fashion_mnist = tf.keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
with tf.device('/cpu:0'):
  model = tf.keras.models.Sequential([
      tf.keras.layers.Flatten(input_shape=(28,28)),
      tf.keras.layers.Dense(512, activation = tf.nn.relu),
      tf.keras.layers.Dense(10, activation=tf.nn.softmax)
  ])
model.summary()
model.compile(optimizer = 'adam',loss = 'sparse_categorical_crossentropy',metrics=['accuracy'])
model.fit(x_train, y_train, epochs = 10)
model.evaluate(x_test, y_test)
pred = model.predict(x_test)
print("Prediction 1: " + class_names[np.argmax(pred[0])])
print(pred[0])
print("\n Prediction 2: " + class_names[np.argmax(pred[1])])
print(pred[1])
print("\n Prediction 3: " + class_names[np.argmax(pred[2])])
print(pred[2])
print("\n Prediction 4: " + class_names[np.argmax(pred[3])])
print(pred[3])
print("\n Prediction 5: " + class_names[np.argmax(pred[4])])
print(pred[4])
print("\n Prediction 6: " + class_names[np.argmax(pred[5])])
print(pred[5])
print("\n Prediction 7: " + class_names[np.argmax(pred[6])])
print(pred[6])
print("\n Prediction 8: " + class_names[np.argmax(pred[7])])
print(pred[7])
print("\n Prediction 9: " + class_names[np.argmax(pred[8])])
print(pred[8])
print("\n Prediction 10: " + class_names[np.argmax(pred[9])])
print(pred[9])
print("\n Image1:")
plt.figure()
plt.imshow(x_test[0])
plt.colorbar()
plt.show()
print("\n Image2:")
plt.figure()
plt.imshow(x_test[1])
plt.colorbar()
plt.show()
print("\n Image3:")
plt.figure()
plt.imshow(x_test[2])
plt.colorbar()
plt.show()
print("\n Image4:")
plt.figure()
plt.imshow(x_test[3])
plt.colorbar()
plt.show()
print("\n Image5:")
plt.figure()
plt.imshow(x_test[4])
plt.colorbar()
plt.show()
print("\n Image6:")
plt.figure()
plt.imshow(x_test[5])
plt.colorbar()
plt.show()
print("\n Image7:")
plt.figure()
plt.imshow(x_test[6])
plt.colorbar()
plt.show()
print("\n Image8:")
plt.figure()
plt.imshow(x_test[7])
plt.colorbar()
plt.show()
print("\n Image9:")
plt.figure()
plt.imshow(x_test[8])
plt.colorbar()
plt.show()
print("\n Image10:")
plt.figure()
plt.imshow(x_test[9])
plt.colorbar()
plt.show()
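Task (9) asks for a four-column summary table, which the code above does not assemble; it only prints the predictions and images separately. Below is a minimal sketch of how the table could be built with pandas. The column names and the placeholder used for the image column are my own choices, not specified by the assignment; the actual images in column 1 would be pasted from the plots above.
# Hedged sketch: summarize the first 10 predictions in a DataFrame.
rows = []
for i in range(10):
    predicted = np.argmax(pred[i])
    rows.append({
        'image': 'x_test[' + str(i) + ']',                            # placeholder for the pasted image
        'predicted class': class_names[predicted],                    # column 2: predicted group name
        'probability': float(np.max(pred[i])),                        # column 3: highest softmax probability
        'correct?': 'correct' if predicted == y_test[i] else 'wrong'  # column 4: prediction correct or wrong
    })
table = pd.DataFrame(rows)
print(table)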
Continue the code in Q10 of Assignment 3. Use Google Colab to set the runtime environment to CPU, GPU, and TPU, respectively. Run the same machine learning process for the Fashion problem and record the execution time for each epoch in these three settings. Draw a bar chart of the execution time per epoch against the runtime options of CPU, GPU, and TPU. Do you think that the GPU is the fastest way to run the problem?
In this question, the objective is to compare the runtimes of processing the epochs for the fashion dataset in the CPU, GPU, and TPU environments. In this experiment, the GPU was the fastest at executing the epochs; the results can be viewed below. This makes sense because a GPU is a performance accelerator: it has many cores, can execute many operations at the same time, and is very good at parallel processing. Therefore, in this case, the GPU is the fastest way to run this problem. A sketch for recording the execution time of each epoch and drawing the bar chart appears after the three runs below.
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
class_names = ['top','trouser','pullover','dress','coat','sandal','shirt','sneaker','bag','ankle boot']
fashion_mnist = tf.keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
with tf.device('/cpu:0'):
  model = tf.keras.models.Sequential([
      tf.keras.layers.Flatten(input_shape=(28,28)),
      tf.keras.layers.Dense(512, activation = tf.nn.relu),
      tf.keras.layers.Dense(10, activation=tf.nn.softmax)
  ])
model.summary()
model.compile(optimizer = 'adam',loss = 'sparse_categorical_crossentropy',metrics=['accuracy'])
model.fit(x_train, y_train, epochs = 10)
model.evaluate(x_test, y_test)
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
  raise SystemError("GPU device not found")
print('Found GPU at: {}'.format(device_name))
class_names = ['top','trouser','pullover','dress','coat','sandal','shirt','sneaker','bag','ankle boot']
fashion_mnist = tf.keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
with tf.device('/gpu:0'):
  model = tf.keras.models.Sequential([
      tf.keras.layers.Flatten(input_shape=(28,28)),
      tf.keras.layers.Dense(512, activation = tf.nn.relu),
      tf.keras.layers.Dense(10, activation=tf.nn.softmax)
  ])
model.summary()
model.compile(optimizer = 'adam',loss = 'sparse_categorical_crossentropy',metrics=['accuracy'])
model.fit(x_train, y_train, epochs = 10)
model.evaluate(x_test, y_test)
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
tf.keras.backend.clear_session()
resolver = tf.distribute.cluster_resolver.TPUClusterResolver('grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
print("All devices: ", tf.config.list_logical_devices('TPU'))
class_names = ['top','trouser','pullover','dress','coat','sandal','shirt','sneaker','bag','ankle boot']
fashion_mnist = tf.keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
def create_model():
  model = tf.keras.models.Sequential()
  model.add(tf.keras.layers.Flatten(input_shape=(28,28)))
  model.add(tf.keras.layers.Dense(512, activation = tf.nn.relu))
  model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
  return model
strategy = tf.distribute.experimental.TPUStrategy(resolver)
with strategy.scope():
  model = create_model()
  model.compile(
      optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
      loss='sparse_categorical_crossentropy',
      metrics=['accuracy']
  )

model.fit(
    x_train.astype(np.float32), y_train.astype(np.float32),
    epochs=10,
    # steps_per_epoch=60,
    validation_data=(x_test.astype(np.float32), y_test.astype(np.float32)),
    validation_freq=17
)
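The question also asks to record the execution time for each epoch and to draw a bar chart, which the runs above do not capture. Below is a minimal sketch of one way to do it, assuming a small custom Keras callback; the names EpochTimer and plot_epoch_times are my own, and the chart is drawn only from whatever timings are actually measured.
import time
import matplotlib.pyplot as plt
import tensorflow as tf

# Records how long each epoch takes, in seconds.
class EpochTimer(tf.keras.callbacks.Callback):
    def on_train_begin(self, logs=None):
        self.epoch_times = []
    def on_epoch_begin(self, epoch, logs=None):
        self._start = time.time()
    def on_epoch_end(self, epoch, logs=None):
        self.epoch_times.append(time.time() - self._start)

# Used in each of the three runs above, for example:
# timer = EpochTimer()
# model.fit(x_train, y_train, epochs=10, callbacks=[timer])
# avg_cpu = sum(timer.epoch_times) / len(timer.epoch_times)

# After repeating the run under each Colab runtime, plot the measured
# average time per epoch (in seconds) against the runtime option.
def plot_epoch_times(avg_cpu, avg_gpu, avg_tpu):
    plt.bar(['CPU', 'GPU', 'TPU'], [avg_cpu, avg_gpu, avg_tpu])
    plt.ylabel('average execution time per epoch (s)')
    plt.title('Fashion-MNIST training time per epoch by runtime')
    plt.show()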
Write a python code to do the following tasks:
Import numpy
Use np.random.random( ) to create a 3 by 3 matrix, called x. Show the content of x.
Multiply x with a constant 10 and save it as y. Show the content of y.
Use np.astype( ) to convert y to an integer matrix z. Show the content of z.
Reshape z to a 1 by 9 matrix, called w. Show the content of w.
Create a 2 by 9 matrix, v, with zeros for all its elements.
Concatenate w and v along axis =1 to form a new matrix, p. Show the content of p.
This program explores methods within the NumPy package. I created this program in Google Colab.
import numpy as np
print("x: ")
x = np.random.random((3,3))
print(x)
print('\n')
print("y: ")
y = np.multiply(x, 10)
print(y)
print('\n')
print("z: ")
z = np.array(y)
z.astype(int)
print(z)
print('\n')
print("w: ")
w = np.reshape(z, (1, 9))
print(w)
print('\n')
print("v: ")
v = np.zeros((2, 9))
print(v)
print('\n')
print("p: ")
p = np.concatenate((w, v), axis=0)
print(p)
Create a dataframe, df, by reading a given csv file (data.csv) and do the following tasks:
Show the content of df
Use describe( ) to show the statistical measures
data.csv
This program explores methods within the pandas package. I created this program in Visual Studio 2019.
import pandas as pd
d = pd.read_csv("data.csv")  # read_csv already returns a DataFrame
df = pd.DataFrame(data=d)
print("Dataframe: ")
print(df)
print("\n")
print("Description: ")
des = df.describe()
print(des)