Subjective probability expresses epistemic uncertainty by placing probabilities on possible outcomes, which sum to one over the outcome space. A subjective probability expresses someone's uncertainty about a true or false statement as P(TRUE), or about a number, e.g. as P(number < 5). There are parametric probability distributions for both continuous and discrete quantities.
A subjective probability can be interpreted as a weight put on an outcome (using some probability reference) or as a bet the person is willing to place on the outcome. The important thing is that a subjective probability is an expression of someone's uncertainty; it is not a relative frequency or the chance of a random outcome.
A probabilistic model for epistemic uncertainty can be placed on a model of the system of interest to allow evaluating the impact of epistemic uncertainty on something we are interested in (let us call it the quantity of interest). Uncertainties from different sources quantified by probability are easy to combine and propagate through an assessment model by probability calculus. In many situations, the combination is approximated by Monte Carlo simulation, which requires that a large number of independent samples be taken.
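As a minimal sketch of such propagation, consider a hypothetical assessment model with one uncertain parameter; the model, the lognormal distribution, and all numbers are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical assessment model: the quantity of interest as a function
# of a parameter k (e.g. a residual concentration after decay).
def model(k):
    return 100.0 * np.exp(-k)

# Epistemic uncertainty about k, expressed as a subjective probability
# distribution (a lognormal is assumed here purely for illustration).
n = 10_000
k_samples = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=n)

# Propagate the epistemic uncertainty through the model by Monte Carlo.
qoi = model(k_samples)

print(f"mean = {qoi.mean():.2f}, 5th-95th percentile = "
      f"[{np.percentile(qoi, 5):.2f}, {np.percentile(qoi, 95):.2f}]")
```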
When a model includes aleatory uncertainty (inherent randomness or variability), a two-dimensional Monte Carlo simulation (2DMC) can be performed, which separates aleatory from epistemic uncertainty. In short, a 2DMC begins by drawing a sample from the epistemic uncertainty, usually a sample of parameter values for the model (the outer loop). Then the quantity of interest is simulated from the model keeping the parameter values fixed (the inner loop). The inner loop is a one-dimensional Monte Carlo simulation over the aleatory uncertainty.
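A sketch of the nested structure, under the assumption that the variability is lognormal and that the epistemic uncertainty concerns its mean parameter (both are hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(1)

n_outer = 200   # epistemic samples (parameter values)
n_inner = 1000  # aleatory samples per parameter value

# Outer loop: draw parameter values from the epistemic distribution
# (assumed: uncertainty about the mean of a lognormal variability model).
mu = rng.normal(loc=1.0, scale=0.2, size=n_outer)

results = np.empty((n_outer, n_inner))
for i in range(n_outer):
    # Inner loop: a 1D Monte Carlo over aleatory variability,
    # keeping the parameter value fixed.
    results[i] = rng.lognormal(mean=mu[i], sigma=0.5, size=n_inner)

# Each row is one realization of the aleatory distribution; the variation
# across rows reflects epistemic uncertainty.
mean_per_row = results.mean(axis=1)
print("epistemic 90% interval for the aleatory mean:",
      np.percentile(mean_per_row, [5, 95]))
```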
Subjective probability can also express someone's uncertainty about a hypothesis or a model. For that to be possible, one has to define the total set of possible models. A common way is to specify a set of possible models and treat them as the full set, acknowledging that there could be other models not considered at that very moment.
Someone's uncertainty is always conditional on the knowledge she has; let us denote it by K1. Bayesian inference is a principle for updating subjective probability in light of new knowledge, say K2. It makes it possible to go from P(event A | K1) to P(event A | K1 and K2). Event A can be a combination of events (including the event that a model is the true one among a possible set of models).
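A small illustration of such an update over a set of models, where the two candidate models, the Poisson likelihood, and the observed count are all assumptions made for the example:

```python
# Hypothetical example: uncertainty about which of two models is the true
# one, updated with new knowledge K2 (an observed count).
from scipy.stats import poisson

models = {"model A": 2.0, "model B": 5.0}   # each model fixes a Poisson rate
prior = {"model A": 0.5, "model B": 0.5}    # P(model | K1)

observed = 4  # the new knowledge K2

# Bayes rule: P(model | K1 and K2) is proportional to
# P(K2 | model) * P(model | K1).
unnorm = {m: poisson.pmf(observed, rate) * prior[m]
          for m, rate in models.items()}
total = sum(unnorm.values())
posterior = {m: v / total for m, v in unnorm.items()}
print(posterior)
```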
A random variable is the mathematical name for a quantity that is characterized by a probability distribution. In Bayesian inference, both model variables and parameters within the model are treated as random variables. It is up to the assessor to maintain a separation between aleatory and epistemic uncertainty. As a general rule, uncertainty about a quantity that is fixed but unknown is epistemic, while uncertainty about a quantity that by chance takes different values over space and time is aleatory. Another way to recognize aleatory uncertainty is that it is described by a probability model for which there are parameters.
Models of unique events can be seen as only expressing epistemic uncertainty.
There are several ways to summarize and visualize a model implemented in a Bayesian framework. A Bayesian model is a joint probability distribution over variables and parameters. Let us denote it P(variables and parameters), which can be written as the product P(variables | parameters) P(parameters). Epistemic uncertainty in a parameter can be described by the marginal probability distribution for that parameter. Whenever we describe uncertainty in a variable (in situations where the variable is not a unique event), it is a combination of both aleatory and epistemic uncertainty. The marginal distribution for the variable (a predictive distribution) is a mixture of aleatory and epistemic uncertainty. Alternatively, the variable can be characterized by a two-dimensional distribution, which is a representation of epistemic uncertainty about the aleatory uncertainty. A 2D distribution of a continuous variable may be presented as a sample of probability density functions (which is why this presentation is often referred to as a spaghetti plot).
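A sketch of such a spaghetti plot, assuming (hypothetically) that a posterior sample of lognormal parameters is available, each pair of which fixes one aleatory density:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import lognorm

rng = np.random.default_rng(7)

# Hypothetical posterior sample of parameters (mu, sigma) of a lognormal
# variability model: each pair fixes one aleatory density function.
mus = rng.normal(1.0, 0.15, size=50)
sigmas = rng.normal(0.5, 0.05, size=50)

x = np.linspace(0.01, 15, 300)
for mu, sigma in zip(mus, sigmas):
    # One probability density function per epistemic sample.
    plt.plot(x, lognorm.pdf(x, s=sigma, scale=np.exp(mu)),
             color="grey", alpha=0.3)

plt.xlabel("variable")
plt.ylabel("density")
plt.title("Epistemic uncertainty about the aleatory distribution")
plt.show()
```

The spread among the curves shows the epistemic uncertainty; each individual curve shows the aleatory uncertainty under one fixed parameter value.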
The main purpose of 2D distributions is visualization. In an application, we are often interested in the epistemic uncertainty of a quantity of interest. If the quantity of interest is a parameter, all is clear: we simply derive its marginal distribution. But the quantity of interest is often a quantity derived from the model, e.g. the frequency of an event (such as how often a variable exceeds a value) or the value of a variable that will be exceeded in, say, 5 out of 100 occasions (the 95th percentile). If so, the epistemic uncertainty about the quantity of interest is derived for a chosen characteristic of the aleatory uncertainty.
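For instance, continuing the hypothetical lognormal example (the known sigma and the distribution family are assumptions for illustration), the epistemic uncertainty about the aleatory 95th percentile can be derived like this:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Hypothetical epistemic sample of the parameter of the aleatory model.
mus = rng.normal(1.0, 0.15, size=2000)
sigma = 0.5  # assumed known, for simplicity

# For each parameter value, compute the chosen characteristic of aleatory
# uncertainty: here the 95th percentile of a lognormal variable,
# exp(mu + sigma * z_0.95).
p95 = np.exp(mus + sigma * norm.ppf(0.95))

# Epistemic uncertainty about the quantity of interest:
print("median:", np.median(p95))
print("90% credible interval:", np.percentile(p95, [5, 95]))
```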
To quantify uncertainty by subjective probability, one starts by specifying probability distributions for parameters, often one at a time, sometimes with consideration of dependencies between them. When the uncertainty analysis involves inference from data, this joint probability distribution over parameters is referred to as the prior. The model to which the parameters belong can be further extended with models that fully express how data are related to the parameters (the data-generating model). When the prior and the (probabilistic) model are specified, Bayes' rule is used to derive uncertainty about the parameters given the data (the posterior). Bayesian inference can be done by analytical integration, conjugate models, or sampling (e.g. MCMC, ABC).
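A minimal sketch of the conjugate route, assuming a hypothetical Beta prior on a failure probability and a Binomial data-generating model (a standard conjugate pair; the numbers are invented for the example):

```python
from scipy.stats import beta

# Hypothetical conjugate example: epistemic uncertainty about a failure
# probability p.
# Prior: p ~ Beta(a, b); data-generating model: k failures in n trials.
a, b = 1.0, 1.0   # prior (uniform, an assumption for illustration)
k, n = 3, 50      # observed data

# Bayes' rule in closed form (conjugacy): the posterior is
# Beta(a + k, b + n - k).
post = beta(a + k, b + n - k)
print("posterior mean:", post.mean())
print("90% credible interval:", post.ppf([0.05, 0.95]))
```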