November 30, 2023

Flyer

11 30 23 - SPIE FLYER.pdf

Recording

11 30 23 SPIE TALK.mp4

Constrained Quantization for Probability Distributions 

Quantization theory has its roots in signal processing, where it was first employed by electrical engineers; quantization is the process of discretizing signals. In the context of probability distributions, quantization refers to finding the best approximation of a k-dimensional probability distribution P by a discrete probability distribution supported on a given number n of points (referred to as an optimal set of n-points). Equivalently, it seeks the best approximation of a k-dimensional random vector X with distribution P by a random vector Y taking at most n values in its range. A substantial body of research exists on quantization for probability distributions without any constraint.
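As an illustration of the unconstrained case, the sketch below (not part of the talk) approximates an optimal set of n points for the standard normal distribution by running Lloyd's algorithm on samples under squared-error distortion; the function name lloyd_quantizer and the sample-based setup are assumptions for illustration only.

```python
# A minimal sketch (not from the talk) of unconstrained n-point quantization
# for a 1-dimensional distribution, approximated by samples and refined with
# Lloyd's algorithm under squared-error distortion.
import numpy as np

def lloyd_quantizer(samples, n, iters=100, seed=0):
    """Return an approximately optimal set of n points and its quantization error."""
    rng = np.random.default_rng(seed)
    # Initialize the n supporting points from randomly chosen samples.
    points = rng.choice(samples, size=n, replace=False).astype(float)
    for _ in range(iters):
        # Assign each sample to its nearest supporting point.
        idx = np.argmin(np.abs(samples[:, None] - points[None, :]), axis=1)
        # Move each point to the mean of its cell (the conditional expectation).
        for j in range(n):
            cell = samples[idx == j]
            if cell.size:
                points[j] = cell.mean()
    # Empirical quantization error: mean squared distance to the nearest point.
    error = np.mean(np.min((samples[:, None] - points[None, :]) ** 2, axis=1))
    return np.sort(points), error

# Example: quantize the standard normal distribution with n = 4 points.
samples = np.random.default_rng(1).standard_normal(100_000)
points, error = lloyd_quantizer(samples, n=4)
print("optimal points (approx.):", points)
print("quantization error (approx.):", error)
```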

Recently, I worked jointly with Prof. M K Roychowdhury of the SMSS at UTRGV to introduce the idea of constrained quantization. This new approach categorizes quantization into two types, unconstrained quantization and constrained quantization, with unconstrained quantization arising as a special case of constrained quantization. In this presentation, I will discuss constrained quantization for various probability distributions and explore how constrained and unconstrained quantization differ in terms of quantization dimension, quantization coefficient, and quantization error. Constrained quantization holds great potential for interdisciplinary applications in fields such as signal processing, data compression, and machine learning.
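For orientation, a minimal sketch of the definition this distinction rests on, with notation assumed rather than taken from the talk: given a nonempty closed constraint set S in R^k, the nth constrained quantization error of P may be written as

```latex
% Hedged sketch of the nth constrained quantization error (notation assumed):
% P is a Borel probability measure on R^k, S a nonempty closed subset of R^k.
\[
  V_n(P) \;=\; \inf\Bigl\{ \int \min_{a \in \alpha} \lVert x - a \rVert^2 \, dP(x)
  \;:\; \alpha \subset S,\ 1 \le \operatorname{card}(\alpha) \le n \Bigr\}.
\]
```

Choosing S to be all of R^k removes the constraint, which is why unconstrained quantization appears as a special case of constrained quantization.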