Although most modern machine learning models are complex, with intricate layouts and code structure, there are a few blocks that appear in almost every machine learning model you will encounter in your explorations. These blocks handle a variety of sub-functions, including preparing data, calculating outputs, and evaluating performance. This activity will help you explore these fundamental blocks of machine learning and how they are applied when creating a machine learning model.
Understand a few of the most common machine learning (ML) ‘blocks’ or functions as they are used in ML applications.
Understand the general layout or order of common ML models.
Explain what each fundamental block of ML is and understand how to use these blocks in creating models.
Access to a computer and a large screen (if you want to share with others)
A Google account to access Google Colab
A pen or pencil
Machine Learning Model - A set of algorithms, or programs, which can be trained and used to detect patterns in data in order to make predictions on future data.
Algorithm - A defined set of instructions that a computer follows to perform a task, such as learning from a set of data or making predictions on new data. In general, algorithms are the fundamental building blocks of most machine learning models.
Batch - A subset of the training dataset used in each iteration for updating the weights between neurons. Batches make it possible to split large datasets into more manageable pieces.
Epoch - A complete cycle of training a machine learning model on the entire training dataset. Completing an epoch means that the model has processed all of the data points within all of the batches in the training dataset.
Label - The known and correct output for an input data point, provided in the dataset in supervised machine learning models. The label gives the model a known outcome to compare against, so it can measure its performance from batch to batch and epoch to epoch.
Nodes/Neurons - A fundamental part of neural network machine learning models. A neuron is a mathematical formula that takes data as an input, performs an operation on the data, and provides an output to the next layer in the model.
Loss Function - A mathematical function which provides the numerical difference between a model’s predicted output and the labeled output, commonly considered as the model's error. A loss function is sometimes also known as the cost or objective function.
Activation Function - A mathematical function which determines whether a neuron’s output should be activated or not based on the input data provided. The activation function allows machine learning models to learn from nonlinear datasets. A short code sketch of a neuron, a loss function, and an activation function follows these definitions.
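To make these last few definitions concrete, the short Python sketch below (using NumPy) shows toy versions of a single neuron, a sigmoid activation function, and a mean squared error loss function. The function names and numbers are illustrative examples only, not code from the activity's Colab notebook.

import numpy as np

def neuron(inputs, weights, bias):
    # A single neuron: a weighted sum of its inputs plus a bias.
    return np.dot(inputs, weights) + bias

def sigmoid(z):
    # Sigmoid activation: squashes any number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def mse_loss(prediction, label):
    # Mean squared error: the squared difference between the model's
    # prediction and the known (labeled) output.
    return np.mean((prediction - label) ** 2)

# One data point with three features and a known label of 1.0
x = np.array([0.5, -1.2, 3.0])     # input data
w = np.array([0.4, 0.1, -0.2])     # the neuron's weights
b = 0.1                            # the neuron's bias

output = sigmoid(neuron(x, w, b))  # the neuron's activated output
print("prediction:", output)
print("loss:", mse_loss(output, 1.0))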
Machine learning models are composed of many elements, such as algorithms, batches, epochs, nodes (neurons), labels, and activation functions, along with many other key building blocks that come together to create a fully functional model. In this section we will walk through the fundamental building blocks that make up nearly every machine learning model and cover the basic structure of a few simple machine learning models.
We will walk through how these concepts interact to create a functioning machine learning model and then use that model to perform simple machine learning tasks. Understanding these fundamentals will prepare you to work through more complex models and to begin creating your own models for the specific goals and tasks you face in the future.
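As a preview of how these pieces fit together, here is a minimal sketch of a small model written with TensorFlow/Keras (which comes preinstalled in Google Colab). The dataset, layer sizes, and training settings are arbitrary placeholders chosen for illustration; they are not the model used in the activity's Colab notebook.

import numpy as np
import tensorflow as tf

# A made-up dataset: 200 data points with 4 features each, plus a 0/1 label.
features = np.random.rand(200, 4).astype("float32")
labels = (features.sum(axis=1) > 2.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    # Each Dense layer is a set of neurons paired with an activation function.
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# The loss function measures the error between the model's predictions
# and the labels; the optimizer uses that error to update the weights.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# batch_size sets how many data points go into each batch; epochs sets how
# many complete passes the model makes over the entire training dataset.
model.fit(features, labels, batch_size=32, epochs=5)

Running the last line completes 5 epochs, each made up of several 32-point batches, and reports the loss after each epoch so you can watch the error shrink as the model trains.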
Take some time to read through the keywords and definitions above.
Click through the image carousel above containing the Fundamental Blocks of Machine Learning. These images will provide you with the background information necessary to understand the activity.
Read and work through the Fundamental Blocks of ML - Handout and the corresponding Fundamental Blocks of ML - Colab Notebook.