🤖 We are excited to introduce an experimental project in innovative didactics in our Pattern Recognition and Computational Intelligence courses!
💡 Specifically, we have built custom GPTs, powered by GPT-4o and based on our course materials. These AI-powered tools are now available to free ChatGPT users.
Access the GPTs via the provided link below.
Interact with the AI to get explanations and answers related to the course content.
Free ChatGPT users face rate limits on usage.
No support for multimodality (no images or diagrams) for free users.
OpenAI is gradually rolling out GPT access to free accounts.
In the future, more topics will be covered, in particular NLP and LLMs.
Responses may sometimes be imprecise as the content is AI-generated (Please use with caution).
Enhances learning through interactive and personalized assistance.
Supports deeper understanding of complex topics.
👉 Please note this is an experimental project, and we welcome your feedback to improve it.
👉 Please ask the GPTs for their index of topics.
Prof. Antonello Rizzi - Pattern Recognition, for Information Engineering Master's Degrees, since 2002
Classroom course code: see here - the course is delivered in English
First part:
Introduction to pattern recognition. Classification and clustering problems.
Generalization capability. Deduction and induction. Induction principle over normed spaces. Choosing a metric. Non-metric spaces. Point-to-point, point-to-cluster, and cluster-to-cluster proximity measures. Mahalanobis distance.
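As an illustration of the proximity measures above: the Mahalanobis distance weights each deviation by the inverse covariance matrix, so it reduces to the Euclidean distance when the covariance is the identity. The sketch below is a toy 2-D version (function name and data are mine, with a hand-coded 2x2 matrix inverse):

```python
import math

def mahalanobis_2d(x, mu, cov):
    """Mahalanobis distance sqrt((x-mu)^T S^-1 (x-mu)) for 2-D points."""
    dx, dy = x[0] - mu[0], x[1] - mu[1]
    (a, b), (c, d) = cov
    det = a * d - b * c
    # inverse of the 2x2 covariance matrix
    inv = ((d / det, -b / det), (-c / det, a / det))
    q = dx * (inv[0][0] * dx + inv[0][1] * dy) + dy * (inv[1][0] * dx + inv[1][1] * dy)
    return math.sqrt(q)

# With identity covariance the measure reduces to the Euclidean distance:
print(mahalanobis_2d((3.0, 4.0), (0.0, 0.0), ((1.0, 0.0), (0.0, 1.0))))  # 5.0
```

With an anisotropic covariance, directions of high variance contribute proportionally less to the distance, which is exactly why the metric suits elliptical clusters.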
Representation and preprocessing functions. Normalization. Missing Data. Ordinal and nominal discrete data. The basic processing pipeline in machine learning and pattern recognition.
Clustering algorithms: k-means, BSAS. The cluster validity problem; sensitivity index; relative validation indexes; Davies-Bouldin index; Silhouette Index; clustering algorithms based on scale parameters; stability indexes; optimized clustering algorithms; constrained and unconstrained unsupervised modelling problems. Hierarchical clustering.
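The k-means alternation listed above can be sketched in a few lines on toy 1-D data (an illustrative mini-implementation, not the course's code): assign each point to its nearest centroid, then move each centroid to the mean of its cluster.

```python
def kmeans(points, centroids, iters=10):
    """Plain k-means on 1-D data: assign to nearest centroid, then re-average."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)), key=lambda j: abs(p - centroids[j]))
            clusters[i].append(p)
        # empty clusters keep their previous centroid
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

data = [1.0, 1.2, 0.8, 9.0, 9.2, 8.8]
print(kmeans(data, [0.0, 5.0]))  # two centroids near 1.0 and 9.0
```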
Decision rules: K-NN and condensed K-NN.
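The K-NN decision rule admits a compact sketch (toy data and names are mine): compute distances to all training samples, keep the k nearest, and take a majority vote on their labels.

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """K-NN decision rule: majority vote among the k nearest training samples."""
    # train is a list of (feature_vector, label) pairs
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda s: dist(s[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0, 0), 'A'), ((0, 1), 'A'), ((1, 0), 'A'),
         ((5, 5), 'B'), ((5, 6), 'B'), ((6, 5), 'B')]
print(knn_classify(train, (1, 1)))  # 'A'
print(knn_classify(train, (5, 4)))  # 'B'
```

The condensed variant mentioned above differs only in first pruning the training set to a consistent subset; the decision rule itself is unchanged.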
Classification systems: performance and sensitivity measures, Classification model synthesis based on cluster analysis. Robust classification: voting techniques. Ensembles of classifiers.
Structured data taxonomy. Dissimilarity measures on structured data. Data fusion. Variable length domains: sequences of events, graphs. Bellman optimality principle; edit distance. Dissimilarity measures on labelled graph spaces (Graph Matching). Automatic Feature selection algorithms.
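The role of the Bellman optimality principle in the edit distance can be made concrete: each subproblem stores the cheapest alignment of two prefixes, and the recurrence chooses among deletion, insertion, and substitution. A rolling-row Levenshtein sketch with unit costs (an illustrative cost model, not the only one used on structured data):

```python
def edit_distance(a, b):
    """Levenshtein distance via the Bellman optimality principle:
    each cell holds the cheapest way to align two prefixes."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution
    return dp[-1]

print(edit_distance("kitten", "sitting"))  # 3
```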
Introduction to Granular Computing. Metric learning. Local metrics. Representations in dissimilarity spaces. Symbolic histograms. Data Mining and Knowledge Discovery.
Agent based parallel and distributed algorithms for machine learning.
Hardware acceleration on FPGAs and GPUs.
Second part:
The second part of the course begins with the basics of hardware and software relevant to developing pattern recognition systems, including a detailed guide on setting up a robust Anaconda environment, which serves as the foundation for machine learning application prototyping. Students will familiarize themselves with integrated development environments and Jupyter notebooks, which are crucial for interactive programming and experimentation in Python.
As the course progresses, we delve into fundamental Python libraries that facilitate advanced computing and data analysis, setting the stage for the exploration of Python’s core data structures and their application in classical machine learning scenarios. Alongside this, the course introduces key statistical concepts and probabilistic frameworks that underpin pattern recognition, including Bayes Theorem and Bayesian learning methods. Through practical sessions, students engage with probabilistic learning models, focusing on concepts such as maximum a posteriori (MAP) criterion and maximum likelihood estimation.
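As a minimal illustration of maximum likelihood estimation (toy data and function name are mine): for a univariate Gaussian, the log-likelihood is maximized by the sample mean and the biased sample variance, so each estimator is a one-liner.

```python
def gaussian_mle(samples):
    """ML estimates for a univariate Gaussian: the log-likelihood is
    maximized by the sample mean and the (biased) sample variance."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, var

mu, var = gaussian_mle([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(mu, var)  # 5.0 4.0
```

The MAP criterion differs only in adding the log-prior to the objective; with a flat prior the two estimates coincide.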
Moreover, the Naïve Bayes Classifier, decision surfaces and discriminant functions for Bayesian classifiers are explored. Practical sessions in Python notebooks support the theoretical concepts by allowing students to apply Bayesian classification techniques in real-world scenarios. The exploration continues with linear and logistic regression, where students learn to manage the overfitting phenomenon and utilize validation techniques like N-Fold cross-validation to enhance model reliability.
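The N-fold cross-validation mentioned above can be pictured as a rotation of index sets: each fold serves once as the test set while the remainder trains the model. A minimal index generator (assuming the sample count is divisible by the number of folds; names are mine):

```python
def n_fold_indices(n_samples, n_folds):
    """Yield (train, test) index lists for N-fold cross-validation."""
    fold_size = n_samples // n_folds
    indices = list(range(n_samples))
    for f in range(n_folds):
        test = indices[f * fold_size:(f + 1) * fold_size]
        train = indices[:f * fold_size] + indices[(f + 1) * fold_size:]
        yield train, test

for train, test in n_fold_indices(6, 3):
    print(test)  # [0, 1] then [2, 3] then [4, 5]
```

Averaging the per-fold scores gives a performance estimate far less sensitive to one lucky split, which is precisely its role in controlling overfitting.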
Reinforcement learning is introduced as a critical area of machine learning, distinguishing between supervised, unsupervised, and reinforcement learning, and breaking down fundamental components and strategies of this domain. This segment aims to equip students with a robust understanding of learning algorithms and their practical applications.
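To make the reinforcement learning components concrete, here is a hedged tabular Q-learning sketch on a toy 4-state corridor (the environment, constants, and names are mine, not course material); Q-learning is off-policy, so it can learn the greedy values even while the behaviour policy acts at random.

```python
import random

def q_learning(episodes=500, alpha=0.5, gamma=0.9):
    """Tabular Q-learning on a 4-state corridor: action 1 moves right,
    action 0 moves left, and reaching the last state pays +1."""
    random.seed(0)
    n = 4
    q = [[0.0, 0.0] for _ in range(n)]
    for _ in range(episodes):
        s = 0
        while s < n - 1:
            a = random.randrange(2)  # fully exploratory behaviour policy
            s2 = s + 1 if a == 1 else max(0, s - 1)
            r = 1.0 if s2 == n - 1 else 0.0
            # Q-learning update: bootstrap on the greedy value of the next state
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
print([round(q[s][1], 2) for s in range(3)])  # discounted values: [0.81, 0.9, 1.0]
```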
The course also provides an in-depth look at deep learning technologies, comparing them against traditional machine learning approaches. Students explore neural network architectures, including convolutional neural networks and autoencoders, and engage in hands-on sessions using Python notebooks to implement these models.
In the context of natural language processing, the course addresses both traditional and modern approaches, focusing on text mining, word embedding techniques, and the revolutionary impact of neural language models like BERT and GPT-X. Practical exercises include working with transformers in TensorFlow to solidify students' understanding of sequence processing technologies.
Finally, the course culminates in a seminar on "The Cheshire Cat" framework, an innovative approach in AI system development using large language models. This includes discussions on Retrieval Augmented Generation systems versus model fine-tuning, supplemented by case studies to provide a real-world perspective on the theoretical constructs discussed.
Adopted texts
Sergios Theodoridis, Konstantinos Koutroumbas, Pattern Recognition, Fourth Edition, Academic Press, September 2008. ISBN: 978-1597492720.
Lecture notes and slides available at teacher's site
Prerequisites
Elementary notions of Geometry, Algebra, Differential Calculus, Signal Theory, Information Theory, Informatics.
Study modes
The course is organized as a series of lectures and case study illustrations.
Frequency modes
It is strongly recommended to attend classroom lessons.
Exam modes
The final exam consists of the evaluation of a homework assignment, whose topic is usually agreed upon with the teacher.
Prof. Antonello Rizzi - Computational Intelligence, for Information Engineering Master's Degrees, since 2005
Classroom course code: see here - the course is delivered in English
First part:
Introduction to Machine Learning and data driven modelling. Soft Computing, Computational Intelligence. Basic data driven modelling problems: clustering, classification, unsupervised modelling, function approximation, prediction. Generalization capability. Deduction and induction.
Induction inference principle over normed spaces. Models and training algorithms. Distance measures and basic preprocessing procedures.
Optimization problems. Optimality conditions. Linear regression. LSE algorithm. Numerical optimization algorithms: steepest descent and Newton's method.
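A minimal steepest-descent sketch for the LSE problem above (toy data, learning rate, and names are mine): fit y = w*x + b by repeatedly stepping against the gradient of the mean squared error.

```python
def lse_gradient_descent(xs, ys, lr=0.01, steps=5000):
    """Fit y = w*x + b by steepest descent on the mean squared error."""
    w = b = 0.0
    n = len(xs)
    for _ in range(steps):
        # gradient of (1/n) * sum (w*x + b - y)^2 w.r.t. w and b
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

w, b = lse_gradient_descent([0, 1, 2, 3], [1, 3, 5, 7])  # data on the line y = 2x + 1
print(round(w, 3), round(b, 3))  # 2.0 1.0
```

Newton's method would instead scale each step by the inverse Hessian, reaching this quadratic cost's minimum in a single step at the price of a matrix inversion.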
Fuzzy logic principles. Fuzzy induction inference principle. Fuzzy Rules.
Classification systems: performance and sensitivity measures. K-NN Classification rule.
The biological neuron and the central nervous system.
Perceptron. Feedforward networks: Multi-layer perceptron. Error Back Propagation algorithm. Support Vector Machines. Automatic modeling systems. Structural parameter sensitivity. Constructive and pruning algorithms.
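The perceptron learning rule above admits a compact sketch (bipolar toy AND dataset and names are mine): predict with a thresholded linear sum and nudge the weights only on misclassified samples; on linearly separable data this converges after finitely many updates.

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """Rosenblatt perceptron rule: nudge weights on each misclassified sample."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:  # target in {-1, +1}
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1
            if pred != target:
                w[0] += lr * target * x[0]
                w[1] += lr * target * x[1]
                b += lr * target
    return w, b

# Linearly separable toy set: logical AND with bipolar targets
data = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]
w, b = train_perceptron(data)
print(all((1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1) == t
          for x, t in data))  # True
```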
Swarm Intelligence. Evolutionary Computation. Genetic algorithms. Particle Swarm Optimization, Ant Colony Optimization. Automatic feature selection.
Fuzzy reasoning. Generalized modus ponens; fuzzy inference systems (FIS); fuzzification and defuzzification. ANFIS. Basic and advanced training algorithms: clustering in the joint input-output space, hyperplane clustering.
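A hedged sketch of the fuzzification and defuzzification steps of a FIS (a triangular membership function and a centroid defuzzifier on a coarsely sampled universe; all names and numbers are mine):

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def centroid_defuzzify(memberships, universe):
    """Centroid defuzzification: membership-weighted average of the universe."""
    num = sum(mu * x for mu, x in zip(memberships, universe))
    den = sum(memberships)
    return num / den if den else 0.0

# Aggregate output fuzzy set sampled on a coarse universe [0, 4]
universe = [i * 0.5 for i in range(9)]
memberships = [triangular(x, 1, 2, 3) for x in universe]
print(centroid_defuzzify(memberships, universe))  # 2.0 (symmetric about the peak)
```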
Outline of prediction and cross-prediction problems: embedding based on genetic algorithms.
Second part:
The second part of the course begins by exploring performance measures, which are fundamental for evaluating the effectiveness of machine learning algorithms. It addresses the problem of overfitting, discussing measures of structural complexity and the Occam's Razor criterion. The importance of cross-validation, particularly N-fold cross-validation, is a focal point to ensure the generalization of models. Additionally, specific topics related to classification and function approximation are covered, such as the costs of misclassification and the challenges posed by unbalanced datasets.
The course then delves deeper into Support Vector Machines (SVM), a powerful tool for supervised learning used both for classification and regression. The basic principles of SVMs are explored, such as the definition of support vectors and the optimization of the margin through an optimal hyperplane. The mathematical formulation is discussed along with the importance of the "kernel trick," which allows SVMs to tackle nonlinear classification problems. Solving dual problems through optimization techniques and practical considerations for implementing SVMs are integral parts of the module.
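The "kernel trick" can be illustrated with the Gaussian (RBF) kernel: it evaluates an inner product in an implicit high-dimensional feature space using only the original coordinates, so no explicit nonlinear mapping is ever computed. A toy sketch (the gamma value and names are mine):

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel: an inner product in an implicit feature space,
    computed from the squared distance in the original space alone."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

print(rbf_kernel((0.0, 0.0), (0.0, 0.0)))  # 1.0 (identical points)
print(rbf_kernel((0.0, 0.0), (1.0, 1.0)))  # exp(-1), about 0.368
```

Because the SVM dual problem touches training data only through such inner products, swapping the linear product for this kernel turns a linear separator into a nonlinear one.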
In the context of Smart Grids and energy management, a specific module introduces Computational Intelligence techniques for energy system management. Topics such as energy transition, microgrids, and renewable energies are discussed. Energy Management Systems (EMS), storage systems and their modeling, and Renewable Energy Communities (REC) are explored, applying Computational Intelligence techniques to optimize energy flow management. In particular, Fuzzy Systems optimized with Evolutionary Algorithms are discussed. The use of Dynamic Programming and multi-objective optimization is illustrated, with a focus on advanced prediction systems, such as LSTM networks, which are useful in forming effective prediction modules for EMS.
Deep learning occupies a significant portion of the course, with an overview of the fundamental principles and the differences between deep learning, machine learning, and artificial intelligence. Various neural architectures are analyzed, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Recurrent Neural Networks (RNN), with particular emphasis on LSTMs for managing sequential data. The main libraries for synthesizing deep learning models are explored, with a focus on TensorFlow. The concepts of the computational graph and of gradient calculation for complex computational structures through Automatic Differentiation techniques are discussed. A specific practical session demonstrates how to synthesize machine learning and deep learning algorithms using TensorFlow.
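The Automatic Differentiation idea mentioned above can be demonstrated without any framework: forward-mode AD propagates a (value, derivative) pair, a dual number, through each arithmetic operation. A minimal sketch supporting + and * (the class and names are mine; TensorFlow uses the reverse-mode variant over its computational graph):

```python
class Dual:
    """Forward-mode automatic differentiation via dual numbers:
    carry (value, derivative) through every arithmetic operation."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__

def derivative(f, x):
    """Exact derivative of f at x: no symbolic math, no finite differences."""
    return f(Dual(x, 1.0)).dot

# d/dx of f(x) = 3x^2 + 2x at x = 2 is 6x + 2 = 14
print(derivative(lambda x: 3 * x * x + 2 * x, 2.0))  # 14.0
```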
The concept of prediction and forecasting using traditional and deep learning techniques is introduced. The importance of prediction in the context of energy system optimization is highlighted.
Finally, the course addresses the problem of predictive maintenance, exploring techniques and models to predict failures and optimize maintenance in industrial sectors. The importance of applying machine learning techniques is discussed, and case studies are analyzed to illustrate best practices and challenges in implementing these technologies.
Adopted texts
Kruse, R., Borgelt, C., Braune, C., Mostaghim, S., & Steinbrecher, M. (2016). Computational intelligence: a methodological introduction. Springer.
Lecture notes and slides available at teacher's site
Prerequisites
Elementary notions of Geometry, Algebra, Differential Calculus, Signal Theory, Information Theory, Informatics, Digital Signal Processing.
Study modes
The course is organized as a series of lectures and case study illustrations.
Frequency modes
It is strongly recommended to attend classroom lessons.
Exam modes
The final exam consists of the evaluation of a homework assignment, whose topic is usually agreed upon with the teacher.
property of CIPAR TEAMS © - 2025