Computable Analysis

Analytical Properties of Resource-Bounded Real Functionals

Abstract

Computable analysis extends classical discrete computability by enhancing the standard Turing machine model; it investigates mathematical analysis from the computability perspective. Though well developed at the computability level, the field is still underdeveloped at the complexity level, that is, when the available computational resources are bounded. Recently, Kawamura and Cook developed a framework to define the computational complexity of operators arising in analysis. Our goal is to understand the effects of complexity restrictions on the analytical properties of an operator. We focus on the case of norms over C[0, 1] and introduce the notion of the dependence of a norm on a point, relating it to the query complexity of the norm. We show that the dependence of almost every point is of the order of the query complexity of the norm: a norm with small complexity depends on few points but, in compensation, depends highly on them. We briefly show how to obtain similar results for non-deterministic time complexity. Finally, we characterize the functionals that are computable using one oracle call only and discuss the uniformity of that characterization.

Our goal in this article is to study the complexity of operators defined over the space C[0, 1], and particularly to understand the implications of complexity restrictions on the analytical properties of an operator. Looking for connections between mathematical analysis and the theory of computation is an old and fruitful field of investigation. The most famous example is the fact that on many sorts of topological spaces, a computable function must be continuous. Furthermore, the continuous functions are exactly the functions that are computable relative to some oracle. Topology is always hidden behind computability notions, which explains why higher-order recursion theory and computable analysis are intimately related to descriptive set theory. Such a correspondence between computation and topology also comes up in complexity theory: bounds on the resources available during computation are reflected in analytical constraints on the functions to be computed, confining them to a smaller space. Examples of this principle appear in several places. Townsend characterized relativized polynomial classes of type-2 relations by means of topological notions: for instance, if A is an alphabet, then a subset of (A^*)^{A^*} is in Σ_1^P relative to some oracle if and only if it is a ‘‘polynomially open set’’ in a certain sense. In analysis, a real function f : [0, 1] → R is polynomial time computable relative to an oracle if and only if it has a polynomial modulus of uniform continuity.
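To see why a polynomial modulus enables fast relative computation, here is a minimal Python sketch (our own illustration, not the machine model discussed above): if mod is a modulus of uniform continuity of f, meaning |x − y| ≤ 2^{−mod(n)} implies |f(x) − f(y)| ≤ 2^{−n}, then f(x) can be output within 2^{−n} after reading x only to precision 2^{−mod(n)}.

```python
from fractions import Fraction

def eval_to_precision(f, mod, x, n):
    """Approximate f(x) to error 2^{-n}, reading x only to precision
    2^{-mod(n)}.  The modulus mod tells us how closely x must be known
    (hypothetical illustration)."""
    m = mod(n)
    # round x to a dyadic rational with denominator 2^m
    d = Fraction(round(x * 2**m), 2**m)
    return f(d)

# example: f(x) = x^2 is 2-Lipschitz on [0, 1], so mod(n) = n + 1 works:
# |x - y| <= 2^{-(n+1)} implies |x^2 - y^2| <= 2|x - y| <= 2^{-n}
f = lambda x: x * x
approx = eval_to_precision(f, lambda n: n + 1, Fraction(1, 3), 10)
assert abs(approx - Fraction(1, 9)) <= Fraction(1, 2**10)
```

If mod is a polynomial, the dyadic approximation of x that the machine reads has only polynomially many bits, which is the heart of the characterization.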

This work is a first study along these lines of the complexity theory over C[0,1] recently developed by Kawamura and Cook, in which such correspondences are not known to date. Some typical questions are: What are the topological implications of limiting the resources of a machine computing a functional? What is the class of functionals that are computable in polynomial time relative to an oracle? Observe that a bound on a resource such as time imposes two conditions on the operation of the machine: it restricts its internal computation time as well as the queries submitted to the oracle. We mostly concentrate on the second constraint, expressed in terms of query complexity. The potential limitations imposed by resource bounds on the computation of functionals over C[0,1] come from the representation of the input functions f ∈ C[0,1], which does not give a global view of f but only local information. The whole function is not approximated, for example, by piecewise linear functions; rather, the oracle evaluates the function on demand at queried points, in addition to giving the machine a modulus of continuity of f. The penny-pinching character of the oracle describing the input is due to the huge amount of information a function contains. As a result, little can be known about f in polynomial time, and classical operators such as taking the supremum or the integral of a function are not polynomial time computable, because a machine needs exponential time to evaluate its input on the whole interval.
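The exponential cost of such global operators can be seen in a toy sketch (illustrative Python, not the actual oracle machine model): approximating sup f over [0,1] to error 2^{−n} by sampling on a grid of spacing 2^{−mod(n)} forces exponentially many oracle queries in n.

```python
from fractions import Fraction

def sup_by_sampling(f, mod, n):
    """Approximate sup f over [0,1] to error 2^{-n} by querying the
    'oracle' f on a grid of spacing 2^{-mod(n)}: this makes
    2^{mod(n)} + 1 queries, exponential in n (hypothetical sketch)."""
    m = mod(n)
    queries = 0
    best = None
    for k in range(2**m + 1):
        queries += 1
        v = f(Fraction(k, 2**m))
        best = v if best is None else max(best, v)
    return best, queries

# x(1-x) is 1-Lipschitz on [0,1], so mod(n) = n + 1 is a valid modulus
best, queries = sup_by_sampling(lambda x: x * (1 - x), lambda n: n + 1, 5)
# 2**6 + 1 = 65 grid points queried; the maximum 1/4 happens to lie on the grid
```

Of course this only shows that the naive algorithm is exponential; the point of the dependence analysis below is to turn such query counts into genuine lower bounds.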

In this work, we do not consider general functionals but focus on the simpler case of norms over C[0, 1]. We try to answer the general problem: what are the analytical effects of bounding the computational resources needed to compute a norm? As explained above, bounding the number of queries submitted to the oracle prevents the machine from evaluating its input f ∈ C[0,1] at too many points, hence a norm with low query complexity should ‘‘depend’’ on a small set of points. This idea is formalized by introducing two notions: the quantitative notion of the dependence of a norm on a point and the qualitative notion of the relevance of a point w.r.t. a norm. Intuitively, a norm ||.|| has a high degree of dependence on a point if changing f around that point results in a big change in the value ||f||. We then show how the query complexity of the norm influences these properties. A norm with low complexity depends on a small set but, in compensation, the dependence on that set is very high. In the above discussion we have been focusing exclusively on deterministic complexity. We also carry out a similar analysis in the case of non-deterministic complexity. We introduce the notions of an essential point and a sufficient set and establish a characterization of the norms that are computable in non-deterministic polynomial time relative to an oracle. We then investigate the extreme case when the machine computing a functional is allowed only one oracle call, and obtain a characterization of such functionals. Surprisingly, the argument needed to obtain this characterization is much more involved than expected: it contains subtleties that make it non-uniform in terms of complexity.

On the Extension of Computable Real Functions

Abstract

In this work we investigate interrelationships among different notions from mathematical analysis, effective topology, and classical computability theory. The main object of our study is the class of computable functions defined over an interval whose boundary is a left-c.e. real number. We investigate necessary and sufficient conditions under which such functions can be computably extended. It turns out that this depends on the behavior of the function near the boundary as well as on the class of left-c.e. real numbers to which the boundary belongs, that is, how it can be constructed. A class of functions of particular interest is investigated: sawtooth functions constructed from computable enumerations of c.e. sets.

This work can be viewed as a contribution to the understanding of which parts of classical mathematical analysis can be carried over to the computable setting, and to the organization of notions from computability theory using analytical concepts. One of the simplest problems in real analysis is to extend a continuous function defined on a set to a bigger set. Let us consider the following very simple case. Start from a real number a ∈ (0, 1) and a continuous function f : [0, a) → R. When can f be extended to a continuous function on [0, 1]? The answer is immediate: it can be extended exactly when f has a limit at a, and then the value of the extension of f at a must be that limit. In computable or constructive analysis, this construction can easily be carried out by effectivizing the assumptions accordingly. Specifically, if a is a computable real number and f : [0, a) → R is computable, then f has a computable extension on [0, 1] exactly when f converges effectively at a. This is exactly the counterpart of the classical result.
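For concreteness, ‘‘converges effectively at a’’ can be spelled out as follows (a standard formulation; the symbol e for the rate is our notation):

```latex
f \text{ converges effectively to } L \text{ at } a
\iff \exists \text{ computable } e : \mathbb{N} \to \mathbb{N}
\;\; \forall n \in \mathbb{N} \;\; \forall x \in (a - 2^{-e(n)}, a) :
\; |f(x) - L| \le 2^{-n}.
```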

In this work we raise the following question: what happens when a is not computable? Though very simple, this question does not admit a straightforward answer but instead reveals rich phenomena and deep relationships with computability theory. The complexity of the problem presumably makes a complete and general answer impossible. The reasons why some functions may or may not admit computable extensions are extremely diverse and involve different sorts of arguments and concepts. We prove results that can be organized in three categories: (i) general results that hold independently of a, (ii) a thorough study of the problem for a restricted class of functions called the sawtooth functions, and (iii) an investigation of the role of a in this problem, exhibiting several classes of reals for which the problem can be solved in different ways, with characterizations of these classes.

The main concepts involved in this investigation are: presentations of left-c.e. real numbers and their computational power, computable linear orderings, and notions of genericity for left-c.e. reals and c.e. sets. The general problem being too complex, we focus on a restricted problem. We assume that a is a left-c.e. real rather than an arbitrary real number, and we assume that the computable function f : [0, a) → R converges to 0 (or, equivalently, to any computable number) at a. These assumptions may seem very restrictive, but it turns out that the problem is already quite rich and difficult in this case. We briefly discuss the case when a is not left-c.e. and argue that the left-c.e. case is almost the only interesting one. We do not discuss the problem for functions that converge to non-computable numbers at a: it is a possible future direction that would need a complete study.

We prove a first result that will be very useful in the sequel, and that can be stated informally as follows.

Theorem A. If a is not right-c.e. and f : [0,a) → R has a computable extension g on [0, 1] then every effectively compact property satisfied by f must be essentially satisfied by g.

Here “essentially” means that g must satisfy the property on some interval [0,b] with b > a. Examples of such properties are being 1-Lipschitz, or having a sawtooth shape.

Sawtooth functions: We study a particular class of functions. To each sequence of natural numbers $(n_i)_{i∈N}$ we associate a function consisting of juxtaposed sawtooth pieces with heights $2^{-n_i}$. For this class of functions we obtain a characterization of the functions that admit a computable extension:

Theorem B. The sawtooth function associated with the computable sequence $(n_i)_{i∈N}$ admits a computable extension on [0,1] if and only if there exists a computable linear ordering over N in which the sequence $(n_i)_{i∈N}$ is a monotonically increasing initial segment.
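As a concrete illustration, a finite prefix of such a function can be evaluated as follows (a Python sketch under our own assumption that tooth i occupies the dyadic interval [1 − 2^{−i}, 1 − 2^{−(i+1)}]; the exact placement of the teeth in the paper may differ):

```python
from fractions import Fraction

def sawtooth(ns, x):
    """Evaluate a juxtaposed-sawtooth function at x.
    Tooth i lives on [1 - 2^{-i}, 1 - 2^{-(i+1)}] and rises linearly
    to height 2^{-ns[i]} at its midpoint, then falls back to 0.
    (The placement of the teeth is our illustrative choice.)"""
    for i, n in enumerate(ns):
        left = 1 - Fraction(1, 2**i)
        right = 1 - Fraction(1, 2**(i + 1))
        if left <= x < right:
            mid = (left + right) / 2
            h = Fraction(1, 2**n)
            w = mid - left
            return h * (1 - abs(x - mid) / w)
    return Fraction(0)

# the tooth of height 2^{-ns[0]} peaks at the midpoint of [0, 1/2]
assert sawtooth([3, 1, 2], Fraction(1, 4)) == Fraction(1, 8)
```

The teeth accumulate at 1, which plays the role of the boundary a; the extension problem asks how the function can be completed at that accumulation point.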

We then study in detail a sufficient and a necessary condition for admitting a computable extension.

A sufficient condition: First, the null extension of f : [0, a) → R is computable if and only if f converges effectively to 0 at a, so effective convergence to 0 is a sufficient condition. We will easily see that this condition is not necessary in general. However for some a’s, this sufficient condition is also necessary, and we obtain a characterization of this class of reals:

Theorem C. Let a be left-c.e. The following statements are equivalent:

Right generic real numbers can be thought of as typical, or generic among the left-c.e. reals.


A necessary condition: If f has a computable extension on [0,1] then f must have a computable modulus of continuity, which is then a necessary condition. Again, this condition is not sufficient in general. This is not immediate, but the sawtooth functions studied earlier provide counter-examples. We also obtain a characterization of the reals a for which this necessary condition is also sufficient:

Theorem D. Let a be left-c.e. The following statements are equivalent:

Separating the two classes: We met two classes of left-c.e. reals that are strongly related to the computable extension problem: the right-generic reals and the simple reals. The latter is a subclass of the former (outside the computable reals). Is it a proper subclass? We answer this question positively, introducing the notion of a generalized binary representation: if $(u_n)_{n∈N}$ is a computable sequence with a computable sum, such as $u_n = 1/n^2$, a generalized binary representation of a real a is a set $A⊆N$ such that $a=\sum_{n∈A} u_n$. While simple reals do not admit a c.e. generalized binary representation, we prove that right-generic reals can admit one, which separates the two classes.
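A quick sketch of how a c.e. generalized binary representation approximates its real from below (the sequence u and the enumeration of A below are illustrative, not from the paper):

```python
from fractions import Fraction

def partial_sum(A_enum, u, k):
    """Lower approximation of a = sum_{n in A} u(n) obtained from the
    first k elements enumerated into A.  As k grows, the partial sums
    increase to a, witnessing that a is left-c.e. (illustrative sketch)."""
    return sum((u(n) for n in A_enum[:k]), Fraction(0))

u = lambda n: Fraction(1, n**2)    # computable terms with a computable sum
A_enum = [2, 5, 3]                 # a finite prefix of some enumeration of A
approx = partial_sum(A_enum, u, 2) # 1/4 + 1/25 = 29/100
```

The point of Theorem E below is that for slowly decreasing $(u_n)_{n∈N}$ such a c.e. set A can be chosen so that the resulting sum is right-generic.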

Theorem E. If the sequence $(u_n)_{n∈N}$ converges slowly to 0, formally if $u_{n+1}/u_n$ converges to 1, then there exists a c.e. set A such that $\sum_{n∈A} u_n$ is right-generic (and such a sum is never a simple real).

To prove this theorem, instead of building the set A by hand, we carefully choose a topology such that if A is in some sense generic in that topology then $\sum_{n∈A} u_n$ is right-generic.

Complexity Analysis of the Reachability Problem

Abstract

Reachability for piecewise affine systems is known to be undecidable, starting from dimension 2. In this work we investigate the exact complexity of several decidable variants of reachability and control questions for piecewise affine systems. We show in particular that the region-to-region bounded time versions lead to NP-complete or co-NP-complete problems, starting from dimension 2. We also prove that a bounded precision version leads to PSPACE-complete problems.

In the context of verification, proving the safety of a system can often be reduced to a reachability question of the form point-to-region or region-to-region reachability. These variants are more general questions than point-to-point reachability, and their complexities do not follow from existing results. In this work we restrict to the case of piecewise affine maps and consider the following natural variant of the problem.

Contrast Rational and Real Computation

Abstract

There have been several different approaches to investigating computation over the real numbers. Among these, computable analysis seems to be the most amenable to physical realization and the most widely accepted model, one that can also act as a unifying framework. Computable analysis was introduced by A. Turing in 1936, A. Grzegorczyk in 1955, and D. Lacombe in 1955. A representation-based approach to the field was then developed by C. Kreitz and K. Weihrauch in 1983. Any typical representation is based on using the rationals, a countable dense subset of the reals with finite representations, to approximate the real numbers. The purpose of this work is to investigate the transition phenomena between rational computation and the completion of such computation (in some given representation) that induces a computability concept over the reals. This gives deeper insights into the nature of real computation (and, more generally, computation over infinite objects) and how it conceptually differs from the corresponding foundational notion of discrete computation. We have studied both the computability-theoretic and the complexity-theoretic transition phenomena. The main conclusion is the finding of a conceptual gap between rational and real computation, manifested, for instance, by the existence of computable rational functions whose extensions to the reals are not computable and vice versa. This gap can be attributed to two main reasons: (1) continuity and smoothness of real functions play necessary roles in their computability and complexity-theoretic properties, whereas the situation is the contrary in rational computation, and (2) real computation is approximate and hence robust, whereas rational computation is exact and rigid. Despite these negative results, if we relax our framework to include relative computation, then we can bridge the rational-real computation gap relative to ∆2 oracles of the arithmetical hierarchy.
We have shown that ∆2 is a tight bound for the rational-real computational equivalence. That is, if a continuous function over the rationals is computable, then its extension to the reals is computable relative to a ∆2 oracle, and vice versa. This result can also be considered an extension of Shoenfield's Limit Lemma from classical recursion theory to the computable analysis context.

As mentioned, the purpose of this work is to compare two notions of computability: computability over the representing set and computability over the represented set. This is exemplified here by investigating the space of real numbers with the dyadic rationals as the representing set. This helps provide deeper insights into the nature of real continuous computation (and hence computation over any space of infinite objects), its physical realizability, and how it differs from computation over the discrete representing rationals. It turns out there is a conceptual gap between these two notions, manifested by: (1) the existence of computable rational-preserving real functions whose restrictions to the rationals are not computable, (2) the existence of computable (continuous) rational functions whose extensions to the reals are not computable, and (3) the existence of polynomial time computable rational functions whose extensions to the reals are not polynomial time computable, and vice versa. This last point can actually be generalized to any complexity class. The difference between the two notions of computability is rooted in the fact that computation over the representing set is exact whereas it is only approximate over the represented one. On the one hand, approximate computation is robust, which is advantageous from both the computability- and the complexity-theoretic perspectives. On the other hand, it needs additional information about the neighborhood of the point being computed, which in some cases might be hard to extract or even not effectively available.

Despite the negative results mentioned above about the incompatibility of rational and real computation (in the context of computable analysis), the situation gets much better when we expand our framework to include relative computation. It requires us only to climb to the second level of the arithmetical hierarchy, specifically to ∆2 oracles, in order to reconcile rational and real computation. We will show that rational and real computation are equivalent relative to ∆2 oracles. In other words, if a continuous function over the rationals is computable, then its extension to the reals is computable relative to a ∆2 oracle, and vice versa. It turns out that ∆2 is a tight bound for such computational equivalence; that is, in general, oracles weaker than ∆2 are not strong enough. This relativization of computable analysis to ∆2 oracles can also be viewed as an extension of Shoenfield's Limit Lemma from classical recursion theory to the computable analysis context. The classical Limit Lemma characterizes the ∆2 sets as exactly those computable as limits of computable functions over the integers.
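The flavor of the Limit Lemma can be sketched as follows (a toy Python illustration with a hypothetical stage function h, not the construction used in this work): a ∆2 value at n is the limit over stages s of a computable approximation h(n, s), and any finite computation can only inspect finitely many stages.

```python
def limit_value(h, n, stages):
    """Shoenfield-style limit: the Delta_2 value at n is lim_s h(n, s).
    A machine can only inspect finitely many stages, so it returns the
    last approximation seen (illustrative sketch)."""
    last = None
    for s in range(stages):
        last = h(n, s)
    return last

# toy approximation: h(n, s) stabilizes to n % 2 once s >= n
h = lambda n, s: (n % 2) if s >= n else 0
assert limit_value(h, 5, 10) == 1
```

A ∆2 oracle hands the machine the stabilized values directly, which is exactly the extra power needed to pass from rational to real computation here.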
