Bijective and average analysis

Context

The standard method for analyzing online algorithms, namely competitive analysis, does not always produce satisfactory results. The most significant example is the well-known paging problem in a virtual memory system. Here, extremely naive and inefficient strategies (such as Flush-When-Full) have the same competitive ratio as more sophisticated strategies that are very efficient in practice, such as Least-Recently-Used (LRU). This implies that competitive analysis sometimes fails to separate (practically) good algorithms from bad ones, which gives rise to an undesirable disconnect between theory and practice.
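
To fix ideas, the following minimal sketch (our own illustration, not part of the original text) simulates the two paging strategies mentioned above: Flush-When-Full evicts the entire cache upon a fault with a full cache, while LRU evicts only the least recently used page. Both strategies have competitive ratio k (the cache size), yet on sequences exhibiting locality of reference LRU typically incurs far fewer faults:

```python
def lru_faults(requests, k):
    """Count page faults for Least-Recently-Used with cache size k."""
    cache = []  # most recently used page is kept at the end
    faults = 0
    for page in requests:
        if page in cache:
            cache.remove(page)   # hit: refresh this page's recency
        else:
            faults += 1
            if len(cache) == k:  # fault with full cache: evict the LRU page
                cache.pop(0)
        cache.append(page)
    return faults

def fwf_faults(requests, k):
    """Count page faults for Flush-When-Full with cache size k."""
    cache = set()
    faults = 0
    for page in requests:
        if page not in cache:
            faults += 1
            if len(cache) == k:  # fault with full cache: flush everything
                cache.clear()
            cache.add(page)
    return faults

# A hypothetical request sequence with strong locality of reference:
requests = [1, 2, 3, 1, 2, 3, 1, 2, 4, 1, 2, 4] * 5
print(lru_faults(requests, 3), fwf_faults(requests, 3))
```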

In previous work we introduced bijective analysis as a novel, yet natural and intuitive, framework for comparing the performance of online algorithms. Informally, this technique compares two online algorithms, say A and B, based on the costs they incur over all possible request sequences of a fixed length. Algorithm A is deemed at least as good as algorithm B if there exists a bijection that maps every input of A to an input on which B incurs at least as high a cost. In a sense, bijective analysis is similar to the concept of dominance in set theory.
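
As a concrete (brute-force) sketch of this definition, under our own assumption of a finite page universe and small sequence length n: over the finite set of all length-n sequences, a bijection of the required kind exists if and only if the sorted cost profiles of the two algorithms compare pointwise, so the test reduces to sorting. The usage lines reuse the hypothetical paging functions from the sketch above:

```python
from itertools import product

def bijectively_dominates(cost_a, cost_b, universe, n):
    """True if A is at least as good as B under bijective analysis over
    all request sequences of length n drawn from a finite universe.

    A bijection pi with cost_a(s) <= cost_b(pi(s)) for every sequence s
    exists if and only if the sorted cost profiles compare pointwise:
    match the i-th cheapest input of A with the i-th cheapest input of B.
    """
    seqs = list(product(universe, repeat=n))
    costs_a = sorted(cost_a(s) for s in seqs)
    costs_b = sorted(cost_b(s) for s in seqs)
    return all(a <= b for a, b in zip(costs_a, costs_b))

# Hypothetical usage with the paging sketch above (cache size 2, 3 pages):
lru = lambda s: lru_faults(s, 2)
fwf = lambda s: fwf_faults(s, 2)
print(bijectively_dominates(lru, fwf, universe=(1, 2, 3), n=6))
```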

A relaxed version of bijective analysis is average analysis, in which one compares the average cost incurred by the two algorithms over all request sequences of the same length.
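
The relaxed criterion is even simpler to state in code (again our own sketch, under the same finite-universe assumption). Note that bijective dominance implies average dominance, since a bijection merely permutes the summands of the total cost:

```python
from itertools import product
from statistics import mean

def average_cost(cost, universe, n):
    """Average cost of an algorithm over all request sequences of length n."""
    return mean(cost(s) for s in product(universe, repeat=n))

# Average analysis: A is at least as good as B if its average cost is no
# larger, e.g. average_cost(lru, (1, 2, 3), 6) <= average_cost(fwf, (1, 2, 3), 6).
```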

Objectives

We have set two main objectives for this project concerning bijective and average analysis. First, to apply these (relatively new) analysis techniques to other online optimization problems. Second, to define well-motivated relaxations of the above measures that make them even more widely applicable than the current state of the art.

Results

Coming soon!