Google Machine Learning Education
Getting to know MNIST, a handwritten digit dataset
Introduction-to-AI notebook: handwritten digit recognition
Artificial neural networks: Neural network (machine learning)
Softmax: the softmax function, also known as the normalized exponential function
Rectified Linear Unit (ReLU), also called the rectifier: a commonly used activation function in artificial neural networks, usually referring to the ramp function and its nonlinear variants
Sigmoid function (or logistic function): an S-shaped function
Hyperbolic tangent function (tanh)
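The four activation functions above are easy to state in code. Below is a minimal NumPy sketch (using NumPy is an assumption; the course notebooks may rely on framework built-ins instead):

```python
import numpy as np

def softmax(z):
    # Normalized exponential: subtract the max first for numerical stability.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def relu(z):
    # Ramp function: 0 for negative inputs, identity for positive ones.
    return np.maximum(0.0, z)

def sigmoid(z):
    # Logistic (S-shaped) function: maps any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([1.0, 2.0, -1.0])
print(softmax(z), relu(z), sigmoid(z), np.tanh(z))  # tanh maps into (-1, 1)
```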
Artificial neural networks (ANNs), usually shortened to neural networks (NNs)
Backpropagation (BP) is currently how most deep-learning neural network (NN) models compute the gradients used to update their parameters
Neural Networks and Deep Learning , Michael Nielsen
Neural Networks, Manifolds, and Topology, colah's blog
Distill (a scientific journal which operated 2016-2021): Machine Learning Research Should Be Clear, Dynamic and Vivid.
Randomly labeled data refers to a dataset where the labels assigned to the data points are completely random, not related to the underlying content or patterns in the data. For example, in an image classification task, pictures of cats might be assigned labels like "car" or "ant" without any connection to the actual image.
This type of labeling is often used in research to study how neural networks behave, especially to explore memorization versus generalization. Neural networks trained on randomly labeled data are unable to learn meaningful patterns, yet they may still memorize the input-label pairs, highlighting the model's capacity to overfit when there is no true relationship between inputs and outputs.
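The memorization-versus-generalization experiment described above is straightforward to reproduce. The following hedged sketch (model size, optimizer, and epoch count are illustrative assumptions, not a prescribed setup) shuffles the MNIST labels and trains on them; training accuracy can still climb well above the 10% chance level even though the labels carry no signal:

```python
import numpy as np
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

# Replace the true labels with uniformly random ones (0-9).
rng = np.random.default_rng(0)
random_labels = rng.integers(0, 10, size=y_train.shape)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Training accuracy rising above 10% here reflects pure memorization.
model.fit(x_train, random_labels, epochs=5, batch_size=128)
```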
Takoua Saadani, Understanding Data Labels and Data Labeling: Definition, Types, and How it Works for Machine Learning, Jun 29, 2023
TensorFlow: TensorFlow makes it easy to create ML models that can run in any environment. Learn how to use the intuitive APIs through interactive code samples. TensorFlow is a free and open-source software library for machine learning and artificial intelligence. It can be used across a range of tasks but has a particular focus on training and inference of deep neural networks. It is one of the two most popular deep learning libraries alongside PyTorch. (ref. https://en.wikipedia.org/wiki/TensorFlow)
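As an illustration of the "interactive code samples" mentioned above, here is a minimal Keras sketch for the MNIST digits from the top of these notes (layer sizes and epoch count are illustrative assumptions):

```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),                        # 28x28 image -> 784 vector
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one probability per digit
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```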
PyTorch: PyTorch is a machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, originally developed by Meta AI and now part of the Linux Foundation umbrella. It is one of the two most popular deep learning libraries alongside TensorFlow, offering free and open-source software released under the modified BSD license. Although the Python interface is more polished and the primary focus of development, PyTorch also has a C++ interface. (ref. https://en.wikipedia.org/wiki/PyTorch)
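For comparison, the same MNIST classifier sketched in PyTorch (again with illustrative, assumed hyperparameters; torchvision supplies the dataset):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_data = datasets.MNIST(root="data", train=True, download=True,
                            transform=transforms.ToTensor())
loader = DataLoader(train_data, batch_size=128, shuffle=True)

model = nn.Sequential(nn.Flatten(),
                      nn.Linear(28 * 28, 128), nn.ReLU(),
                      nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()              # applies softmax internally
optimizer = torch.optim.Adam(model.parameters())

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()                      # backpropagation
        optimizer.step()                     # gradient-descent update
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```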
Kaggle: "Your Machine Learning and Data Science Community". The world's largest data science community with powerful tools and resources to help you achieve your data science goals. Kaggle is a data science competition platform and online community for data scientists and machine learning practitioners under Google LLC. Kaggle enables users to find and publish datasets, explore and build models in a web-based data science environment, work with other data scientists and machine learning engineers, and enter competitions to solve data science challenges. (ref. https://en.wikipedia.org/wiki/Kaggle)
scikit-learn: Simple and efficient tools for predictive data analysis
Accessible to everybody, and reusable in various contexts
Built on NumPy, SciPy, and matplotlib
Open source, commercially usable - BSD license
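A minimal scikit-learn sketch in the same spirit, using its bundled 8x8 digits dataset (a small cousin of MNIST); the classifier choice here is an illustrative assumption:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000)  # raise the iteration cap so the solver converges
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```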
[Figure: an image (450×280) and a template (40×52), a template-matching example]
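Assuming the figure above illustrates template matching (an image scanned with a small template), the operation can be sketched with OpenCV as follows; the file names are placeholders, not files from these notes:

```python
import cv2

image = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)        # e.g. 450x280
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # e.g. 40x52

# Slide the template over the image and score every position by
# normalized cross-correlation, a convolution-style operation.
scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)
print("best match at", best_loc, "with score", best_score)
```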
Back-propagation: Backpropagation (BP) is currently how most deep-learning NN (Neural Network) models compute gradients for parameter updates. In machine learning, backpropagation is a gradient estimation method commonly used for training neural networks to compute the network parameter updates. (ref. https://en.wikipedia.org/wiki/Backpropagation)
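To make the definition concrete, here is a minimal NumPy sketch of backpropagation through a one-hidden-layer network (the sizes and learning rate are illustrative assumptions, and this is not code from the cited page):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))             # one input sample
t = np.array([[1.0]])                   # target output
W1 = rng.normal(size=(4, 8)) * 0.5
W2 = rng.normal(size=(8, 1)) * 0.5

for step in range(100):
    # Forward pass
    h = np.maximum(0.0, x @ W1)         # ReLU hidden layer
    y = h @ W2                          # linear output
    loss = 0.5 * np.sum((y - t) ** 2)

    # Backward pass: apply the chain rule layer by layer
    dy = y - t                          # dL/dy
    dW2 = h.T @ dy                      # dL/dW2
    dh = (dy @ W2.T) * (h > 0)          # dL/dh, gated by the ReLU
    dW1 = x.T @ dh                      # dL/dW1

    # Gradient-descent parameter update
    W1 -= 0.01 * dW1
    W2 -= 0.01 * dW2

print("final loss:", loss)
```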
Hebbian theory: Hebbian theory is a neuropsychological theory claiming that an increase in synaptic efficacy arises from a presynaptic cell's repeated and persistent stimulation of a postsynaptic cell. It is an attempt to explain synaptic plasticity, the adaptation of brain neurons during the learning process. It was introduced by Donald Hebb in his 1949 book The Organization of Behavior. The theory is also called Hebb's rule, Hebb's postulate, and cell assembly theory. (ref. https://en.wikipedia.org/wiki/Hebbian_theory)
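Hebb's rule ("cells that fire together wire together") reduces to a one-line update: the weight change is proportional to the product of pre- and post-synaptic activity. A minimal sketch (the learning rate and activities are illustrative assumptions):

```python
import numpy as np

x = np.array([1.0, 0.0, 1.0])  # presynaptic activities
w = np.full(3, 0.1)            # initial synaptic weights
eta = 0.1                      # learning rate

for _ in range(10):
    y = w @ x                  # postsynaptic activity
    w += eta * y * x           # Hebb's rule: dw_i = eta * x_i * y
print(w)                       # co-active synapses strengthen; note the unbounded growth
```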
Convolution operation demos: Convolution.ipynb
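The notebook's contents aren't reproduced here, but the core operation can be sketched in a few lines of NumPy; as in deep-learning libraries, the kernel is slid without flipping (strictly, cross-correlation):

```python
import numpy as np

def conv2d(image, kernel):
    # "Valid" convolution: slide the kernel over the image and sum the
    # element-wise products at each position.
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[1.0, -1.0]])  # simple horizontal edge detector
print(conv2d(image, edge_kernel))
```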
LeNet implementation: LeNet2MNIST.ipynb (a minimal LeNet-style sketch follows the examples below)
Example 1
Example 2
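The sketch promised above: a LeNet-5-style network for 28x28 MNIST inputs in Keras. This is not the notebook's code; the filter counts follow the classic 1998 architecture, with padding adapted to MNIST:

```python
import tensorflow as tf
from tensorflow.keras import layers

lenet = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(6, 5, padding="same", activation="tanh"),  # C1: 6 feature maps
    layers.AveragePooling2D(2),                              # S2: subsampling
    layers.Conv2D(16, 5, activation="tanh"),                 # C3: 16 feature maps
    layers.AveragePooling2D(2),                              # S4: subsampling
    layers.Flatten(),
    layers.Dense(120, activation="tanh"),                    # C5
    layers.Dense(84, activation="tanh"),                     # F6
    layers.Dense(10, activation="softmax"),                  # output: 10 digit classes
])
lenet.summary()
```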
Artificial Intelligence for Edge Computing, Editors: Mudhakar Srivatsa, Tarek Abdelzaher, Ting He
Coursera Deep Learning: y33-j3T/Coursera-Deep-Learning/tree/master