Neural Networks
Google Tool
Part 1: Conceptual understanding and practice
Review the non-linear problem in Feature Crosses
https://developers.google.com/machine-learning/crash-course/feature-crosses/video-lecture?hl=en
Activation Function
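A quick way to get a feel for activation functions is to apply them to a few sample values. A minimal NumPy sketch (these are the standard mathematical definitions, not tied to any particular library API):

```python
import numpy as np

def relu(x):
    # ReLU: zero for negative inputs, identity for positive inputs
    return np.maximum(0.0, x)

def sigmoid(x):
    # Sigmoid: squashes any real input into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, 0.0, 3.0])
print(relu(x))       # negative values are clipped to 0
print(sigmoid(0.0))  # 0.5 at the origin
```

ReLU is the activation used in the Keras snippet later in these notes (activation='relu').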
First Playground exercise with neural network parameters
Background 1: Learning Rate
https://developers.google.com/machine-learning/crash-course/reducing-loss/learning-rate
https://developers.google.com/machine-learning/crash-course/reducing-loss/playground-exercise
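The effect the lecture describes can be seen in a tiny gradient-descent sketch on f(w) = w², whose gradient is 2w; the learning rate scales each update step (pure Python, the function name and step counts here are illustrative):

```python
# Gradient descent on f(w) = w**2: each step is w -= learning_rate * gradient.
def descend(learning_rate, steps=20, w=5.0):
    for _ in range(steps):
        w = w - learning_rate * 2.0 * w  # gradient of w**2 is 2w
    return w

print(descend(0.1))  # small rate: converges toward the minimum at 0
print(descend(1.1))  # too large: each step overshoots, and w diverges
```

This is the trade-off in the Playground exercise: too small a rate learns slowly, too large a rate oscillates or diverges.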
Background 2: Epochs, Batch Size
batch size: "The number of examples in a batch. For instance, if the batch size is 100, then the model processes 100 examples per iteration."
https://developers.google.com/machine-learning/glossary?hl=en#epoch
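Putting the two glossary definitions together: one epoch is a full pass over the training set, so with N examples and batch size B an epoch takes ceil(N / B) iterations. A small sketch using the hyperparameters from the snippet below (17000 is the row count of the crash-course California Housing training CSV; treat it as an assumption here):

```python
import math

num_examples = 17000  # assumed training-set size (MLCC California Housing)
batch_size = 1000
epochs = 15

# Each iteration processes one batch; one epoch covers every example once.
steps_per_epoch = math.ceil(num_examples / batch_size)
print(steps_per_epoch)           # 17 iterations per epoch
print(steps_per_epoch * epochs)  # 255 iterations over the whole run
```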
Background 3: Overfitting and Regularization
Task 4: Continue experimenting by adding or removing hidden layers and neurons per layer. Also feel free to change learning rates, regularization, and other learning settings. What is the smallest number of neurons and layers you can use that gives test loss of 0.177 or lower?
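The Playground's L2 regularization option adds a penalty proportional to the sum of squared weights to the loss, which pushes the model toward smaller weights and simpler fits. A minimal NumPy sketch of that idea (function name and numbers are illustrative, not any library's API):

```python
import numpy as np

def l2_regularized_loss(data_loss, weights, lam):
    # Total loss = data loss + lambda * sum of squared weights.
    # A larger lambda penalizes large weights more strongly.
    return data_loss + lam * np.sum(np.square(weights))

w = np.array([0.5, -2.0, 1.0])
print(l2_regularized_loss(0.25, w, lam=0.0))   # no penalty: just the data loss
print(l2_regularized_loss(0.25, w, lam=0.01))  # small L2 penalty added on top
```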
Part 2: Programming exercise
California Housing dataset
https://developers.google.com/machine-learning/crash-course/california-housing-data-description
Some Highlights
--> import tensorflow as tf
--> model = tf.keras.models.Sequential()
--> learning_rate = 0.01
    epochs = 15
    batch_size = 1000
--> model.add(tf.keras.layers.Dense(units=20,
                                    activation='relu',
                                    name='Hidden1'))
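Under the hood, a Dense layer like Hidden1 computes activation(inputs @ kernel + bias). A NumPy sketch of that forward pass (the batch and feature sizes are made up for illustration; only units=20 matches the snippet above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward pass of one Dense layer: y = relu(x @ W + b).
# Shapes: x is (batch, features), W is (features, units), b is (units,).
batch, features, units = 4, 8, 20
x = rng.normal(size=(batch, features))
W = rng.normal(size=(features, units)) * 0.1  # small random init
b = np.zeros(units)

y = np.maximum(0.0, x @ W + b)  # relu activation
print(y.shape)  # (4, 20): one 20-unit activation vector per example
```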
Other references
https://gist.github.com/SupremeLeaf/b94f2580e96fbef4e74570eecf5e5a90