"Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to effectively perform a specific task without using explicit instructions, relying on models and inference instead."
Definition of Machine Learning according to Wikipedia
I'd prefer to define it as the technique of making an algorithm learn structure in data without explicitly programming it to do so, which is essentially what the definition above says.
Machine Learning is a subset of a larger domain known as Artificial Intelligence, in which you try to build an intelligent system. By that, I mean a system that can perceive an environment full of variables and take actions that maximize its chance of success. These skills are in high demand because they are extremely important in various fields such as:
The possibilities are endless...
In machine learning we develop algorithms and statistical models to perform tasks, that is, to produce some output from given input. We essentially build a mathematical model of intelligence. That, in essence, is what machine learning is: using mathematical models to learn from data.
Machine Learning is subdivided based on the kind of problem being solved. There are three major subgroups into which most applications fall; let's go over them one by one.
Breakdown of Artificial Intelligence and its subset, Machine Learning.
In supervised learning, we use a labelled dataset and have to develop a model that gains insights from that data, so that when it is given inputs it hasn't seen in the dataset, it can predict the output with a reasonable degree of accuracy.
A dataset here is basically a set of input and output pairs over which an algorithm learns to predict the output given an input. Examples include but are not limited to:
This is probably the best-known and most widely used form of machine learning, given the amount of labelled data that we have and continuously generate.
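To make this concrete, here is a minimal supervised-learning sketch in Python. It is an assumed example (using scikit-learn and its bundled Iris dataset, not anything from this article): a model is fitted on labelled input/output pairs and then asked to predict outputs for inputs it has never seen.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Labelled dataset: X holds the inputs, y holds the known outputs (labels).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Learn from the labelled pairs the model is allowed to see.
model = LogisticRegression(max_iter=500)
model.fit(X_train, y_train)

# Predict outputs for inputs the model has never seen and check accuracy.
print("accuracy on unseen data:", model.score(X_test, y_test))
```

Any supervised model (decision trees, neural networks, and so on) slots into the same fit-then-predict pattern; logistic regression is just a simple choice for the sketch.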
In unsupervised learning, we have a dataset, but it is not labelled: we have a bunch of inputs but no clear, decisive output. We have to draw inferences from the data itself; the main purpose is to find groups or hidden patterns in the data.
A dataset here is basically a set of input values, over which an algorithm must find correlations and groupings between the various fields of the data. Examples include but are not limited to:
This is another widely used family of algorithms for finding relationships in datasets.
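As a rough illustration (again an assumed example, using scikit-learn's KMeans on a handful of made-up points rather than anything from the article), a clustering algorithm is handed unlabelled inputs and left to discover the groups on its own:

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabelled inputs only -- there is no output column to learn from.
X = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.1],
              [8.0, 8.0], [8.2, 7.9], [7.8, 8.3]])

# Ask the algorithm to find two groups hidden in the data.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print("cluster of each point:", kmeans.labels_)
print("cluster centres:", kmeans.cluster_centers_)
```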
In reinforcement learning, we develop an algorithm that takes sequential decisions to maximize its gains. The algorithm (formally, an agent) performs actions in an environment, and an interpreter rewards or penalizes the agent based on the outcome of those actions in the environment. This is usually more complicated than the approaches mentioned above, but a few applications include:
Wherever a sequential decision model is involved. A toy sketch of the agent/environment/reward loop is shown below.
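This is an assumed example in plain Python: a simple epsilon-greedy agent facing a three-armed bandit, where the payout probabilities and the exploration rate are my own choices, not something from the article. The agent repeatedly picks an action, observes the reward, and updates its estimates so that it gradually favours the most rewarding action.

```python
import random

true_rewards = [0.2, 0.5, 0.8]   # hidden payout probability of each action
estimates = [0.0, 0.0, 0.0]      # agent's running value estimate per action
counts = [0, 0, 0]
epsilon = 0.1                    # small chance of exploring a random action

for step in range(1000):
    if random.random() < epsilon:                      # explore
        action = random.randrange(len(true_rewards))
    else:                                              # exploit the best estimate so far
        action = max(range(len(estimates)), key=lambda a: estimates[a])

    # The "environment" rewards or penalizes the chosen action.
    reward = 1.0 if random.random() < true_rewards[action] else 0.0

    # Update the running average estimate for that action.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned value estimates:", [round(e, 2) for e in estimates])
```

The epsilon-greedy rule is one simple way to balance exploring new actions against exploiting what already looks best, which is the central tension in reinforcement learning.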
More information is available on GeeksforGeeks and Wikipedia.