The central quest in the classical information-theoretic setting is the evaluation of the asymptotic fundamental limit of communication, wherein the blocklength is allowed to grow to infinity. However, when the blocklength is fixed at a finite value, the maximum rate of information transmission is known to depart from the asymptotic limits set by the classical theory. The exact evaluation of the finite blocklength limit of communication, however, is intractable in general; the difficulty is attributed to the inherent complex non-convex optimization over functionals. We focus on deriving finite blocklength converses for coding problems that are asymptotically tight. Towards this, we approach the problem from an optimizer's perspective and translate the non-convexity over functionals into an equivalent non-convexity in the space of joint probability distributions. We then develop a new tool for obtaining converses for finite blocklength coding problems by employing a linear programming relaxation of the non-convex optimization problem.
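The relaxation idea can be illustrated in miniature (this toy sketch is not the construction developed in the work itself): a bilinear program over a pair of probability simplices is non-convex, but lifting it to a single joint pmf and dropping the product-form (rank-one) constraint yields a linear program whose value lower-bounds the original problem.

```python
import numpy as np
from scipy.optimize import linprog

# Toy illustration (hypothetical instance, not the thesis's construction):
# the bilinear problem  min_{x in Δ_m, y in Δ_n}  x^T Q y  is non-convex.
# Rewriting the objective over the joint pmf P_ij = x_i y_j and dropping
# the product-form constraint gives an LP over all joint pmfs, whose
# optimal value lower-bounds the original problem.
rng = np.random.default_rng(0)
Q = rng.uniform(size=(3, 4))

# LP relaxation:  min  sum_ij Q_ij P_ij   s.t.  P_ij >= 0,  sum_ij P_ij = 1.
res = linprog(c=Q.ravel(),
              A_eq=np.ones((1, Q.size)), b_eq=[1.0],
              bounds=[(0, None)] * Q.size)
lp_bound = res.fun  # here this equals min_ij Q_ij

# Any feasible product distribution gives a value no smaller than the bound.
x = np.full(3, 1 / 3)
y = np.full(4, 1 / 4)
bilinear_val = x @ Q @ y
```

The LP bound is valid for every feasible `(x, y)` since product distributions are a subset of the joint pmfs the LP optimizes over; in the actual coding problems the lifted variable is a joint distribution over codewords, channel inputs, and outputs rather than this toy pair of simplices.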
Consider the channel coding problem in the presence of an intelligent jammer who controls the channel state by picking a state randomly from a finite collection. The problem can be formulated as a zero-sum game between the communication system, comprising an encoder-decoder team, and a finite-state jammer, with the average probability of error as the payoff. The game is parameterized by the rate of transmission and the blocklength. We explore the existence of a saddle-point solution for this zero-sum game. Even though only an epsilon-saddle-point value exists at finite blocklengths, we show that in the limit of large blocklengths, a saddle-point value exists for the zero-sum game for all but finitely many rates. Moreover, the upper value of the game coincides with the Shannon solution for the channel coding problem.
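For finite zero-sum games with mixed strategies, the minimax theorem guarantees a saddle-point value, computable by the standard linear programming formulation. The sketch below (a generic textbook computation, not the game analyzed in the work, whose strategy spaces are far richer) finds the value of a small payoff matrix:

```python
import numpy as np
from scipy.optimize import linprog

def game_value(A):
    """Value of the finite zero-sum matrix game with payoff matrix A
    (row player maximizes), via the standard LP formulation:
        maximize v  s.t.  (A^T x)_j >= v for all j,  sum(x) = 1,  x >= 0.
    """
    m, n = A.shape
    # Decision variables: the row player's mixed strategy x (length m)
    # followed by the scalar game value v.
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # linprog minimizes, so minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - (A^T x)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0                             # sum(x) = 1
    bounds = [(0, None)] * m + [(None, None)]     # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[-1]

# Matching pennies under the +/-1 payoff convention: the saddle-point
# value is 0, attained by playing each action with probability 1/2.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
v = game_value(A)
```

In the jamming game the payoff (average error probability) depends on the blocklength and rate, which is why only an epsilon-saddle point is available at finite blocklengths; the LP view applies once the strategy spaces are finite and mixing is allowed.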