Research

Motivated by the strong belief that mathematics provides a solid, yet flexible, basis that bridges the gap between diverse scientific areas, my research activities span a wide range of distinct fields and application domains. Do not hesitate to contact me for additional information and inquiries, or to establish new R&D collaborations!

Below you can find an overview of my current and past research activities, organized by research field or combination of fields. The period covered ranges from my graduate studies to the present day. Support for my research during this period has been provided by the European Union, the Greek General Secretariat for Research and Technology, and EONOS Investment Technologies. The distinct scientific and research areas where I have been active thus far, with published work in books, peer-reviewed international journals, and peer-reviewed conference proceedings, are summarized below, along with a brief description of my main work and achievements per research area.

1. (BAYESIAN) COMPRESSIVE SENSING

Related Research Areas: Signal Processing; Compressive Sensing; Statistics

Compressive sensing (CS) emerged in recent years as a novel mathematical theory that enables joint acquisition and compression of a signal at rates significantly lower than those dictated by the Nyquist-Shannon sampling theorem. The CS framework guarantees that if a signal of length N possesses a sparse or compressible representation in an appropriate transform domain, or over a suitable sparsifying basis, then an accurate reconstruction can be achieved from a highly reduced set of K projections (K ≪ N) onto a second (measurement) basis that is incoherent with the first (sparsifying) basis. We improved upon previous reconstruction techniques by developing novel Bayesian CS (BCS) algorithms that estimate the original signal, possibly recorded in a highly impulsive environment, from a small set of CS measurements. In particular, the sparsity prior is enforced by modeling the underlying signal statistics using heavy-tailed (specifically alpha-Stable) distributions, multivariate Cauchy distributions, or Gaussian Scale Mixture models. Additionally, we introduced two novel approaches, namely a subspace-augmented MUSIC technique and a Bayesian matching pursuit method, for accurately recovering a multi-signal ensemble, acquired by the nodes of a sensor network, in a distributed way by exploiting the joint sparsity structure among the signals of the ensemble. Within the framework of distributed CS, we proposed an efficient approach using duality theory and the method of subgradients, in conjunction with fractional lower-order moments, to satisfy the computational, power, and bandwidth constraints of a wireless sensor network.
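The core recovery principle can be illustrated with a minimal greedy sketch in plain numpy. This is a far simpler baseline (Orthogonal Matching Pursuit) than the Bayesian algorithms described above, and all signal parameters below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse signal of length N with only 3 nonzero entries (illustrative values).
N, K = 256, 64
x = np.zeros(N)
support = rng.choice(N, size=3, replace=False)
x[support] = [5.0, -4.0, 3.0]

# Random Gaussian measurement matrix with K << N rows.
Phi = rng.normal(size=(K, N)) / np.sqrt(K)
y = Phi @ x  # the K compressive measurements

def omp(Phi, y, n_iter):
    """Orthogonal Matching Pursuit: greedily select the column most
    correlated with the residual, then refit by least squares."""
    residual, idx = y.copy(), []
    for _ in range(n_iter):
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, 3)
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

In the noiseless, exactly sparse setting above, the greedy solver recovers the signal from K = 64 measurements instead of N = 256 samples; the Bayesian methods discussed here replace the hard sparsity assumption with heavy-tailed priors to handle noise and impulsive environments.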


1.1 Compressive Image and Video Sensing

Related Research Areas: Image & Video Processing; Compressive Sensing; Statistics; Machine Learning

Apart from our purely mathematical research, targeting the design of novel algorithmic tools for centralized and distributed CS reconstruction, significant focus has been placed on applying our theoretical results to real data and cutting-edge applications. To this end, we were among the first to apply CS to the compressive sampling and reconstruction of biomedical ultrasound images (RF echoes), in order to exploit the advantages of CS, including resilience to missing measurements, increased robustness to noise (by modeling ultrasound RF echoes using alpha-Stable distributions or Gaussian Scale Mixtures), decoupling of the encoder (compressor) from the decoder (decompressor), and inherent weak encryption of the data.

Furthermore, we explored the potential of CS for the design of a novel active range imaging (RI) system. Our results demonstrated the efficiency of the proposed compressed gated range imaging system, which is capable of accurately recovering the depth profile of a scene from far fewer frames (or convolved pulses) than required by classical RI systems, even under very challenging conditions (e.g. clouds, smoke, foliage). Moreover, our proposed solution is able to identify multiple reflected pulses introduced by semi-transparent elements in the scene, with a minimal increase in the number of frames.

To achieve the high compression ratios required by resource-restricted remote imaging systems to minimize payloads, we exploited the inherent property of CS of acting simultaneously as a sensing and compression framework, which also inherently suppresses the non-sparse part of the reconstruction that is due to noise. Our objective was to build an efficient compressive video sensing (CVS) system characterized by a lightweight encoder, with the main computational burden shifted to a ground-based decoder. The proposed CS-based video codec, equipped with an adaptive measurement allocation mechanism, demonstrated enhanced performance, in terms of the compression ratio-reconstruction quality trade-off, when compared against state-of-the-art video compression standards such as MPEG-x and M-JPEG. In addition, we demonstrated the effectiveness of CS for the design of a compressive video classification method that operates directly on the compressed measurements, without access to the full-resolution frames.


1.2 Localization in Wireless Sensor Networks

Related Research Areas: Signal Processing; Compressive Sensing; Statistics; Machine Learning

In a battery-powered wireless sensor network (WSN), energy and communication bandwidth are both limited. With this critical restriction in mind, we developed novel solutions for the indoor localization of mobile devices using signal strength measurements collected from several access points at different locations. Our first statistical fingerprint-based localization technique estimated the unknown location by minimizing the Kullback-Leibler divergence between multivariate Gaussian models. To overcome the drawbacks of this statistical approach (e.g. high sensitivity to noise and outliers), we employed CS for sparsity-based indoor localization of mobile devices, with weak encryption as an inherent side benefit. Performance evaluations conducted at real premises under real-life conditions revealed that our proposed CS-based localization techniques achieve substantial localization accuracy, while significantly reducing the amount of information transmitted from a resource-constrained wireless device to a central server. Furthermore, to improve the performance of single-measurement CS methods for estimating source (target) bearings, we proposed a Bayesian CS method based on multiple noisy CS measurement vectors to accurately estimate the direction of arrival (DOA) of multiple sources. This static localization framework was further extended into a computationally efficient path-tracking system by combining the power of CS with a Kalman filter.
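The fingerprint-matching idea can be sketched with the closed-form Kullback-Leibler divergence between multivariate Gaussians. The fingerprint map, access-point count, and RSS values below are hypothetical, chosen only to illustrate the matching step:

```python
import numpy as np

def kl_gauss(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence KL(N(mu0,cov0) || N(mu1,cov1))."""
    d = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    _, ld0 = np.linalg.slogdet(cov0)
    _, ld1 = np.linalg.slogdet(cov1)
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - d + ld1 - ld0)

# Hypothetical fingerprint map: per-location Gaussian models of received
# signal strength from 3 access points (means in dBm, diagonal covariances).
fingerprints = {
    "room_A": (np.array([-40.0, -60.0, -70.0]), np.eye(3) * 4.0),
    "room_B": (np.array([-65.0, -45.0, -72.0]), np.eye(3) * 4.0),
}

# Gaussian model fitted online at the unknown location.
mu_obs = np.array([-41.0, -58.0, -69.0])
cov_obs = np.eye(3) * 4.0

# Estimate the location as the fingerprint minimizing the KL divergence.
best = min(fingerprints, key=lambda k: kl_gauss(mu_obs, cov_obs, *fingerprints[k]))
```

The observed model is much closer (in KL divergence) to the first fingerprint, so the device is mapped to that location; the text above notes why this scheme is sensitive to outliers, motivating the CS-based alternative.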


1.3 Data Compression and Rate Control in Smart Water Networks

Related Research Areas: Signal Processing; Compressive Sensing; Machine Learning

The battery-powered, resource-limited devices used in smart water networks, such as smart meters and sensors, prohibit high-sampling-rate sensing, thereby limiting the knowledge we can obtain from the acquired data. To alleviate this problem, efficient data reduction techniques enable high-rate sampling while significantly reducing the required storage and bandwidth resources without sacrificing meaningful information content. To this end, we designed an efficient self-adaptive scheme that combines the accuracy of standard lossless compression with the efficiency of CS, and is capable of balancing the trade-offs of each technique by optimally selecting the best compression mode, minimizing the reconstruction error given the sensor node's battery state.
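A heavily simplified, hypothetical sketch of such a mode selector (not the published scheme; the mode names, error estimates, and energy costs are illustrative) picks, among the modes the battery can afford, the one with the smallest expected reconstruction error:

```python
def choose_mode(battery, err_lossless, err_cs, cost_lossless, cost_cs):
    """Hypothetical selector: among the compression modes whose energy
    cost fits the remaining battery, pick the one with the smallest
    expected reconstruction error."""
    affordable = {}
    if cost_lossless <= battery:
        affordable["lossless"] = err_lossless  # exact, but energy-hungry
    if cost_cs <= battery:
        affordable["cs"] = err_cs              # cheap random projections
    if not affordable:
        return "sleep"  # defer transmission until the node recharges
    return min(affordable, key=affordable.get)
```

With a full battery the selector favors lossless coding (zero error); as the battery drains it falls back to CS encoding, mirroring the accuracy-versus-efficiency trade-off described above.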

2. MULTISCALE DECOMPOSITIONS AND NON-GAUSSIAN MODELING FOR VARIOUS SIGNAL MODALITIES

Related Research Areas: Signal Processing; Image & Video Processing; Statistics; Machine Learning; Inverse Problems in Underwater Acoustics

Over the past decades, multiscale and multiorientation decomposition methods, such as the wavelet transform and its variants, steerable pyramids, and neural networks, have revolutionized signal processing. Wavelet transforms are particularly suited for yielding sparse and structured representations of piecewise smooth signals such as images, while steerable pyramids guarantee additional properties, such as rotation and translation invariance. Neural networks, on the other hand, have proven very efficient in extracting multilevel features for representing distinct types of signals. The sparsity and structure offered by a multiscale decomposition, or, in general, by a sparsifying transformation (e.g. an eigendecomposition), boost the performance of transform-domain or feature-domain statistical models and enable simple, yet powerful, signal processing algorithms. To exploit the combined advantages of multiscale decompositions and powerful statistical models, such as the family of alpha-Stable distributions, we first characterized accurately the sparsity of the wavelet or steerable pyramid coefficients using alpha-Stable statistical models. Subsequently, we designed and validated new algorithms based on these more accurate models in various application domains, including: i) snapshot mosaic hyperspectral image compression (note that this work is based on eigendecompositions and multiscale transformations without employing statistical modeling of the transform coefficients), ii) image retrieval, iii) image fusion, iv) image enhancement, and v) underwater acoustic signal characterization and classification.
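As a toy illustration of why multiscale decompositions pair well with heavy-tailed priors, the plain-numpy sketch below (with a synthetic, illustrative signal) computes one level of the orthonormal Haar DWT of a piecewise-smooth signal and checks that nearly all detail coefficients are close to zero:

```python
import numpy as np

rng = np.random.default_rng(1)

# Piecewise-smooth test signal: two constant segments plus mild noise.
n = 1024
signal = np.where(np.arange(n) < n // 2, 1.0, 3.0) + 0.01 * rng.normal(size=n)

def haar_level(x):
    """One level of the orthonormal Haar DWT: approximation + detail bands."""
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)
    detail = (even - odd) / np.sqrt(2)
    return approx, detail

approx, detail = haar_level(signal)

# Sparsity check: nearly all detail coefficients are tiny, i.e. the
# detail band is sharply peaked at zero -- exactly the shape that
# heavy-tailed (e.g. alpha-Stable) models capture better than a Gaussian.
frac_small = np.mean(np.abs(detail) < 0.1)
```

The smooth structure concentrates in the approximation band, while the detail band is mostly near-zero with a few significant coefficients; modeling that band with alpha-Stable statistics is the starting point of the work described above.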

3. RISK MANAGEMENT AND PORTFOLIO OPTIMIZATION

Related Research Areas: Financial Signal Processing; Signal Processing; Statistics; Machine Learning; Nonlinear Time Series Analysis

Traditional risk management and portfolio optimization methods are primarily based on statistical techniques. The main drawbacks of conventional statistical methodologies are: i) the high sensitivity to outliers and missing values, ii) the lack of a sample-by-sample treatment, which would account for the real movements of the market indices, iii) the processing of time-domain information only, without considering the increased representation capabilities of a joint time-frequency analysis, and iv) the relatively slow reaction to abnormal financial events. There is an ever-increasing demand to address all these issues, especially in today's rapidly evolving financial world, by designing modern data analysis techniques as a means of understanding the underlying complex dynamics of economic systems. Due to the complex behavior of financial time series data, most existing methods, which either rely on linear stochastic models or require long data series, may often lead to pitfalls concerning the accurate description of the inherent dynamics, and subsequently degrade the performance of decision making, risk management, and trading strategies based on the evolution of the data-generating process.

One of my major research activities in recent years has been the transfer of signal processing, nonlinear time series analysis, and machine learning methodologies to the financial industry, in order to overcome the above limitations of existing financial methods in the fields of risk management and portfolio optimization. In particular, we have demonstrated the efficiency of novel market integration and global risk measures based on multiscale time-frequency representations and probabilistic principal component analysis, improving the explanatory degree of the principal components as a percentage of global variance, along with increased robustness to high-frequency noise components.
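The idea of measuring market integration through the variance explained by leading components can be sketched as follows. The snippet uses synthetic one-factor returns and plain PCA via SVD (a simplification of the probabilistic PCA and multiscale representations used in the actual work):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily returns for 8 market indices driven by one common
# "global" factor plus idiosyncratic noise (hypothetical data).
T, m = 500, 8
global_factor = rng.normal(size=T)
loadings = rng.uniform(0.5, 1.0, size=m)
returns = np.outer(global_factor, loadings) + 0.3 * rng.normal(size=(T, m))

# PCA via SVD of the centered return matrix.
X = returns - returns.mean(axis=0)
_, svals, _ = np.linalg.svd(X, full_matrices=False)
explained = svals**2 / np.sum(svals**2)

# Share of global variance captured by the first principal component:
# a simple market-integration proxy (higher => more integrated markets).
integration = explained[0]
```

When markets co-move strongly, the first component absorbs most of the variance; tracking this share over time (and across scales) is the intuition behind the integration and global risk measures described above.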

Moreover, with the recent advances in complex data analytics, financial firms are better able to manage risk and alert their clients on time, while also increasing trading performance by merging clusters of diverse databases containing both structured and unstructured data. However, this process significantly increases the storage and processing requirements. Motivated by this, we successfully tackled the problem of capturing and reconstructing the highly varying microstructures of high-dimensional financial data in low-dimensional spaces, by designing an efficient sparse pattern analysis approach over learned dictionaries.

On the other hand, complex financial markets can hardly be analyzed by means of linear methods. In practice, the process underlying the generated time series is a priori unknown, while such signals usually contain both linear and nonlinear, as well as deterministic and stochastic, components. To accurately extract the underlying nonlinear dynamics of financial time series, and thus improve performance in identifying temporal interdependencies and time-synchronization profiles, we developed a new approach based on the powerful recurrence quantification analysis (RQA) and multiscale decompositions. To alleviate the problem of missing data, this method was further extended by incorporating a matrix completion module for restoring the missing entries of a low-rank matrix. Our evaluation on real data revealed the increased accuracy of the proposed framework in detecting switching volatility regimes and time-synchronization profiles between distinct market indexes, which is important for estimating the risk associated with a financial instrument.
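A minimal RQA-style sketch, using a synthetic return series with an injected volatility switch (illustrative data and parameters, not the published multiscale pipeline), shows why a recurrence measure separates volatility regimes:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy return series with a volatility regime switch at t = 250.
t = np.arange(500)
series = np.where(t < 250, 0.5, 2.0) * rng.normal(size=500)

def recurrence_rate(x, dim=2, delay=1, eps=0.5):
    """Recurrence rate of a delay-embedded series: the fraction of
    point pairs closer than eps in the reconstructed phase space."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return np.mean(dists < eps)

# The low-volatility regime revisits the same neighborhoods of phase
# space far more often than the high-volatility regime.
rr_calm = recurrence_rate(series[:250])
rr_wild = recurrence_rate(series[250:])
```

Scanning such recurrence measures along a sliding window localizes the regime switch; the full framework applies them per decomposition scale and across pairs of indexes to expose time-synchronization profiles.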

Recent studies have highlighted the emotional and social dimension of financial behavior, challenging the validity of the efficient market hypothesis (EMH), which states that prediction of stock prices cannot outperform the random walk model. Motivated by this, we explored the effectiveness of a financial indicator exploiting social media content, by examining the capability of messages exchanged on Twitter to model the EUR/USD exchange rate. Our quantitative and qualitative analysis provided additional evidence that our proposed behavioral indicator adapts efficiently to the variations of foreign exchange markets.

4. UNCERTAINTY-AWARE SIGNAL PROCESSING IN CYBER-PHYSICAL SYSTEMS

Related Research Areas: Signal Processing; Statistics; Machine Learning; Nonlinear Time Series Analysis

Recent advances in computing and communications have posed the constantly growing challenge of modernizing and decentralizing industrial processes by introducing dynamic architectures of cyber-physical systems (CPS). The resulting industrial CPS (iCPS) exploit their inherent relationship with wireless sensor networks (WSN) to provide cost-effective, scalable, and easily deployed solutions in industrial spaces, where the deployment of the WSN components is highly coupled to the objectives of the industrial process. This introduces spatio-temporal correlations between data streams originating from different positions. Motivated by the need to extract meaningful information about complex processes monitored by modern iCPS, as well as by the necessity to address typical WSN imperfections (e.g. limited bandwidth, computational complexity, and lifetime), we designed and implemented a novel signal and data processing framework for treating different layers of information abstraction in an iCPS. At the core of our proposed approach is the estimation and utilization of the inherent data uncertainty as an integral part of any inference task. Furthermore, we proposed a computationally tractable method for high-level data management and analysis that detects abnormal behavior in the recorded iCPS data by monitoring interrelations between large and heterogeneous sensor data streams in real time. The efficacy of the resulting framework was evaluated successfully in a real iCPS designed for the autonomous monitoring of water treatment plants.
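A heavily simplified stand-in for the stream-interrelation monitoring idea (not the actual framework; the two streams, the injected fault, and the correlation threshold are all illustrative) tracks the rolling correlation between two coupled sensor streams and flags windows where their usual relationship collapses:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two coupled sensor streams; the coupling breaks at t = 300 (injected fault).
n = 500
base = rng.normal(size=n).cumsum()
s1 = base + 0.1 * rng.normal(size=n)
s2 = base + 0.1 * rng.normal(size=n)
s2[300:] = rng.normal(size=200).cumsum()  # stream 2 decouples from stream 1

def rolling_corr(a, b, w=50):
    """Pearson correlation of two streams over a sliding window."""
    out = np.empty(len(a) - w + 1)
    for i in range(len(out)):
        out[i] = np.corrcoef(a[i : i + w], b[i : i + w])[0, 1]
    return out

corr = rolling_corr(s1, s2)
# Flag windows where the habitual inter-stream correlation collapses.
alarms = np.flatnonzero(corr < 0.5)
```

In the real system, such pairwise interrelations are monitored jointly across many heterogeneous streams and weighted by the estimated uncertainty of each measurement, rather than by a fixed threshold on a single correlation.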

5. WIRELESS NETWORK TRAFFIC MODELING

Related Research Areas: Signal Processing; Statistics; Nonlinear Time Series Analysis

Accurate load characterization in an IEEE 802.11 infrastructure can be beneficial for modeling network traffic and addressing various problems, such as coverage planning, resource reservation, and network monitoring for anomaly detection. We performed a statistical study employing the method of singular spectrum analysis, finding that the traffic load has a small intrinsic dimension and can be accurately modeled using a small number of leading (principal) components. This proved critical for understanding the main features of the components forming the network traffic and for designing a traffic predictor for the trend component.
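The core of singular spectrum analysis can be sketched in a few lines: embed the series into a trajectory (Hankel) matrix, take its SVD, and inspect how quickly the singular-value energy saturates. The synthetic "traffic load" below (trend + daily cycle + noise) and the window length are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic hourly traffic load: trend + daily cycle + noise (hypothetical).
n, window = 480, 48
t = np.arange(n)
load = 0.01 * t + np.sin(2 * np.pi * t / 24) + 0.2 * rng.normal(size=n)

# SSA step 1: embed the series into a trajectory (Hankel) matrix
# whose columns are lagged windows of the series.
K = n - window + 1
traj = np.column_stack([load[i : i + window] for i in range(K)])

# SSA step 2: SVD of the trajectory matrix; the singular spectrum
# reveals the intrinsic dimension of the traffic process.
U, svals, Vt = np.linalg.svd(traj, full_matrices=False)
energy = np.cumsum(svals**2) / np.sum(svals**2)

# A handful of leading components already capture most of the variance.
r = int(np.searchsorted(energy, 0.9) + 1)
```

For a series dominated by trend and periodic components, 90% of the energy is captured by only a few leading triplets; reconstructing the series from those triplets yields the trend/cycle components on which a predictor can then be built.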

6. RECOMMENDATION SYSTEMS

Related Research Areas: Statistics; Machine Learning

Recommendation (or recommender) systems have become increasingly popular in recent years, and are utilized in a variety of application areas including movies, music, news, books, research articles, search queries, social tags, and products in general. Many existing hybrid techniques improve recommendation quality by taking a linear combination of the final scores produced by the available recommendation techniques. However, this approach fails to provide explanations of its predictions or further insights into the data. To address this issue, we proposed a purely probabilistic scheme, which provides recommendations of better quality than other methods that combine numerical scores derived from each individual prediction method.