1. Important distributions used in statistics
The first course introduces the basic concepts of classical probability theory, with particular emphasis on the interpretation of probability and the concept of a random variable. Students will learn about the role of the cumulative distribution function and the probability density function, as well as the relationship between them in the discrete and continuous cases. In addition, the significance of the expected value, the variance, and higher-order (central) moments in characterizing random variables will be discussed. Furthermore, this course will cover the most important discrete and continuous distributions used in statistics, such as the binomial distribution, the Poisson distribution, the normal distribution, Student's t-distribution, the chi-square distribution, and the Fisher-Snedecor (F) distribution.
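As an illustration of the concepts above (not part of the course materials), the following standard-library Python sketch computes the pmf and cdf of a binomial distribution, the pdf and cdf of the standard normal distribution, and the expected value and variance of Binomial(10, 0.3), for which the textbook formulas give np = 3 and np(1-p) = 2.1:

```python
import math

# Binomial(n, p): discrete pmf via math.comb, cdf as a cumulative sum of the pmf
def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

# Standard normal: continuous pdf, and cdf expressed via the error function
def norm_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Expected value and variance of Binomial(10, 0.3) computed directly
# from the pmf; they should match np = 3 and np(1-p) = 2.1
mean = sum(k * binom_pmf(k, 10, 0.3) for k in range(11))
var = sum((k - mean) ** 2 * binom_pmf(k, 10, 0.3) for k in range(11))
```

The discrete cdf is a sum of pmf values, while the continuous cdf is an integral of the pdf; the sketch mirrors exactly that distinction.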
Bibliography
W. Feller, An Introduction to Probability Theory and Its Applications, vol. 1, 3rd edition, John Wiley, New York, 1991.
D. Khoshnevisan, Probability, Graduate Studies in Mathematics, American Mathematical Society, 2007.
2. The central limit theorem and the law of large numbers
When dealing with the wide variety of random phenomena that occur in reality, we often assume that the random variables in question are normally distributed. The reason for this is that the sum of a large number of independent random variables is approximately normally distributed, provided that the fluctuation of each individual variable is small compared to the random fluctuation of the sum, regardless of the probability distributions of the individual variables (see the central limit theorem). This course will cover the central limit theorem, the de Moivre-Laplace theorem for binomial distributions, the Markov and Chebyshev inequalities, and the weak law of large numbers.
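The de Moivre-Laplace theorem mentioned above can be illustrated numerically (this sketch is not part of the course materials): the exact probability P(45 ≤ X ≤ 55) for X ~ Binomial(100, 0.5) is compared with the normal approximation using a continuity correction.

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def norm_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

n, p = 100, 0.5
mu = n * p                          # expected value of the binomial
sigma = math.sqrt(n * p * (1 - p))  # its standard deviation

# Exact binomial probability P(45 <= X <= 55)
exact = sum(binom_pmf(k, n, p) for k in range(45, 56))

# de Moivre-Laplace normal approximation with continuity correction:
# P(45 <= X <= 55) ~ Phi((55.5 - mu)/sigma) - Phi((44.5 - mu)/sigma)
approx = norm_cdf((55.5 - mu) / sigma) - norm_cdf((44.5 - mu) / sigma)
```

Already at n = 100 the two values agree to about two decimal places, which is why the normal approximation is so widely used in practice.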
Bibliography
W. Feller, An Introduction to Probability Theory and Its Applications, vol. 1, 3rd edition, John Wiley, New York, 1991.
D. Khoshnevisan, Probability, Graduate Studies in Mathematics, American Mathematical Society, 2007.
3. Effectiveness of statistical estimates
The effectiveness of statistical estimators plays a key role in the reliability of data analysis and of the conclusions drawn from it. An unbiased estimator has expected value equal to the true value of the estimated parameter, while a consistent estimator gets closer and closer to the true parameter as the sample size increases. An asymptotically unbiased estimator becomes unbiased in the large-sample limit, even if it is biased for small samples. An efficient unbiased estimator has the smallest possible variance under the given conditions, thus allowing for the most accurate conclusions. This course is about the effectiveness of statistical estimators, discussing unbiased, consistent, asymptotically unbiased, and efficient estimators.
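A classic illustration of bias (this simulation sketch is not part of the course materials) is the sample variance: dividing the sum of squared deviations by n gives a biased estimator, while dividing by n - 1 gives an unbiased one. The sketch below checks this by averaging both estimators over many small samples drawn from the uniform distribution on [0, 1], whose true variance is 1/12.

```python
import random

random.seed(42)          # fixed seed so the simulation is reproducible

true_var = 1 / 12        # variance of the uniform distribution on [0, 1]
n, trials = 5, 20000     # small sample size, many repetitions

biased_avg = 0.0
unbiased_avg = 0.0
for _ in range(trials):
    sample = [random.random() for _ in range(n)]
    m = sum(sample) / n
    ss = sum((x - m) ** 2 for x in sample)
    biased_avg += ss / n           # divides by n: biased downward
    unbiased_avg += ss / (n - 1)   # divides by n - 1: unbiased
biased_avg /= trials
unbiased_avg /= trials

# Theory: E[biased] = (n-1)/n * true_var, E[unbiased] = true_var
```

The average of the n-denominator estimator settles near (n-1)/n times the true variance, while the (n-1)-denominator version settles near the true variance itself, exactly as the bias calculation predicts.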
Bibliography
W. Feller, An Introduction to Probability Theory and Its Applications, vol. 1, 3rd edition, John Wiley, New York, 1991.
D. Khoshnevisan, Probability, Graduate Studies in Mathematics, American Mathematical Society, 2007.
4. Confidence interval of the expected value and of the standard deviation
The confidence interval for the expected value shows the range within which the true population mean is likely to fall based on a sample, thus helping to assess the accuracy of the estimate. This is particularly important in statistical inference because it gives not just a single value but also expresses the degree of uncertainty. The interval can usually be calculated from the sample mean and the sample standard deviation; for a normally distributed population, for example, it is based on Student's t-distribution (due to William Sealy Gosset). The confidence interval for the standard deviation can be determined in a similar way, usually using the chi-square distribution, which takes into account the uncertainty of the variance estimate. In this course, I will use the Lagrange multiplier method to show why we construct a symmetric interval for the expected value, and how we approximate the appropriate quantiles using tables and Excel.
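The two interval constructions above can be sketched in a few lines of Python (not part of the course materials; the sample values are invented for illustration). In place of tables or Excel, the 95% quantiles for 9 degrees of freedom are hardcoded from standard tables: t_{0.975,9} = 2.262, chi²_{0.025,9} = 2.700, chi²_{0.975,9} = 19.023.

```python
import math
import statistics

# Hypothetical sample of n = 10 measurements (illustrative data only)
sample = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 5.0, 4.9, 5.1]
n = len(sample)
mean = statistics.fmean(sample)
s = statistics.stdev(sample)   # sample standard deviation (n - 1 denominator)

# Table values for 95% confidence and n - 1 = 9 degrees of freedom
t_crit = 2.262                       # Student's t quantile t_{0.975, 9}
chi2_lo, chi2_hi = 2.700, 19.023     # chi-square quantiles at 0.025 and 0.975

# Symmetric confidence interval for the expected value: mean +/- t * s / sqrt(n)
half = t_crit * s / math.sqrt(n)
ci_mean = (mean - half, mean + half)

# Confidence interval for the standard deviation from the chi-square quantiles:
# sqrt((n-1) s^2 / chi2_hi) < sigma < sqrt((n-1) s^2 / chi2_lo)
ci_sd = (math.sqrt((n - 1) * s**2 / chi2_hi),
         math.sqrt((n - 1) * s**2 / chi2_lo))
```

Note that the interval for the mean is symmetric about the sample mean, while the interval for the standard deviation is not, because the chi-square distribution is skewed.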
Bibliography
W. Feller, An Introduction to Probability Theory and Its Applications, vol. 1, 3rd edition, John Wiley, New York, 1991.
D. Khoshnevisan, Probability, Graduate Studies in Mathematics, American Mathematical Society, 2007.