Logic Seminar at NJU

We study the \Sigma_1 theory of the Turing degrees within the Ershov hierarchy, at both the finite and the transfinite levels. Building on the author's recent joint work with Slaman, Stephan and Yang, we prove that most of these degree structures have different \Sigma_1-theories when parameters are permitted.

We give computable Riesz representation theorems in two different settings. The first concerns the dual of C[a,b]. By the Riesz representation theorem for the dual of C[a,b], for every continuous linear operator F:C[a,b]\to \mathbb{R} there is a function g:[a,b]\to\mathbb{R} of bounded variation such that \[ F(f)=\int f\,dg \quad (f\in C[a,b]), \] and g can be normalized so that V(g)=\|F\|. In this paper we prove a computable version of this theorem in the framework of TTE: for given natural representations of the spaces, we prove that there are computable operators mapping F to g and mapping g to F. The second setting concerns locally compact Hausdorff spaces. The Riesz representation theorem plays an important role in functional analysis and general topology: if X is a locally compact Hausdorff space and I is a positive linear functional on C(X), then there is a unique regular Borel measure \mu on X such that \[ I(f)=\int f\,d\mu \] holds for each f in C(X) \cite{GP65}. In this paper we prove a computable version of this theorem as well.
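To make the pairing F(f)=\int f\,dg concrete, here is a minimal numerical sketch (our illustration only, not part of the TTE development in the paper): it approximates the Riemann-Stieltjes integral of f against an integrator g of bounded variation by a partition sum.

```python
import math

def riemann_stieltjes(f, g, a, b, n=10_000):
    """Approximate the Riemann-Stieltjes integral of f against g on [a, b]
    by the partition sum  sum_i f(t_i) * (g(x_{i+1}) - g(x_i)),
    where t_i is the midpoint of the i-th subinterval."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    total = 0.0
    for x0, x1 in zip(xs, xs[1:]):
        total += f((x0 + x1) / 2) * (g(x1) - g(x0))
    return total

# Example: g(x) = x recovers the ordinary integral, here of cos on
# [0, pi/2], whose exact value is 1.
print(riemann_stieltjes(math.cos, lambda x: x, 0.0, math.pi / 2))
```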

We first introduce the notion of an approximation of a formula, i.e. a sub-formula with fewer variables, and present a method for constructing the most accurate approximations. Furthermore, we consider variable minimal approximations with certain characteristics, namely variable minimal equivalence (VME), variable minimal satisfiability (VMS), and variable minimal unsatisfiability (VMU). We investigate these classes and show that the corresponding decision problems are intractable in general.

We introduce the concepts of complex and autocomplex sets, where a set A is complex if there is a recursive, nondecreasing and unbounded lower bound on the Kolmogorov complexity of the prefixes (of the characteristic sequence) of A, and autocomplex is defined likewise with recursive replaced by A-recursive. We observe that exactly the autocomplex sets can compute words of given Kolmogorov complexity, and demonstrate that a set computes a diagonally nonrecursive (DNR) function if and only if the set is autocomplex. The class of sets that compute DNR functions is intensively studied in recursion theory and is known to coincide with the class of sets that compute fixed-point-free functions. Consequently, the Recursion Theorem fails relative to a set if and only if the set is autocomplex; that is, we have a characterization of a fundamental concept of theoretical computer science in terms of Kolmogorov complexity. Moreover, we obtain that recursively enumerable sets are autocomplex if and only if they are complete, which yields an alternative proof of the well-known completeness criterion for recursively enumerable sets in terms of computing DNR functions. All the results on autocomplex sets mentioned above extend to complex sets if the oracle computations are restricted to truth-table or weak truth-table computations; for example, a set is complex if and only if it wtt-computes a DNR function. Moreover, we obtain a set that is complex but does not compute a Martin-Löf random set, which gives a partial answer to the open problem of whether all sets of positive constructive Hausdorff dimension compute Martin-Löf random sets. Furthermore, the following questions are addressed: given n, how difficult is it to find a word of length n that (a) has prefix-free Kolmogorov complexity at least n, (b) has plain Kolmogorov complexity at least n, or (c) has the maximum possible prefix-free Kolmogorov complexity among all words of length n? All these questions are investigated with respect to the oracles needed to carry out the task, and it is shown that (a) is easier than (b) and (b) is easier than (c). In particular, we argue that for plain Kolmogorov complexity exactly the PA-complete sets compute incompressible words, and that the class of sets that compute words of maximum plain complexity depends on the choice of the universal Turing machine, whereas for prefix-free Kolmogorov complexity exactly the complete sets can compute words of maximum complexity.
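For reference, the two central notions can be stated as follows (a standard formalization matching the abstract's wording, with C denoting Kolmogorov complexity and A\upharpoonright n the length-n prefix of the characteristic sequence of A):

\[
A \text{ is complex} \iff C(A\upharpoonright n) \ge h(n) \text{ for all } n, \text{ for some recursive, nondecreasing, unbounded } h;
\]
\[
A \text{ is autocomplex} \iff C(A\upharpoonright n) \ge h(n) \text{ for all } n, \text{ for some } A\text{-recursive, nondecreasing, unbounded } h.
\]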

Interpreting a structure N in a structure M is a model-theoretic technique for reducing properties of one kind of structure to those of another, e.g. decidability/undecidability, definability, rigidity. Investigations of the "big picture" of degree structures always involve such techniques. We will review two variations in the r.e. degrees, namely Harrington-Shelah coding and Slaman-Woodin coding, and a shortcoming of the known codings claimed by Nies. Finally we will suggest a way to work around this shortcoming, though our solution does not lead to any valuable insight. To this end we will briefly introduce the construction of the Harrington-Shelah coding.

Given a bounded linear mapping T of separable Hilbert spaces and an element y from the range of T, can we effectively solve the equation Tx = y? If one formulates this problem within the representation-based approach to effective analysis (i.e. using Turing machines operating on infinite binary sequences representing the objects of consideration), it is well known that the answer is ``No'' in general. Taking inspiration from a result in Information-Based Complexity and applying Tikhonov regularization, we show that the inverse of T can still be computed as a point in an L^2 space with respect to a Gaussian measure if additional information on the adjoint of T and on the L^2 norm of the inverse is available.
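As a finite-dimensional illustration of Tikhonov regularization (a numerical sketch under our own simplifying assumptions; the talk concerns the infinite-dimensional, representation-based setting), one replaces the ill-posed equation Tx = y by the well-posed normal equations (T^*T + \alpha I)x_\alpha = T^*y:

```python
import numpy as np

def tikhonov(T, y, alpha):
    """Tikhonov-regularized solution x_alpha = (T^T T + alpha I)^{-1} T^T y;
    for alpha > 0 the regularized system is well conditioned even if T is not."""
    n = T.shape[1]
    return np.linalg.solve(T.T @ T + alpha * np.eye(n), T.T @ y)

# An ill-conditioned T: direct inversion would amplify any noise in y,
# while the regularized solution degrades gracefully as alpha shrinks.
T = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-8]])
y = T @ np.array([1.0, 2.0])
print(tikhonov(T, y, 1e-6))
```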

This talk will survey some of the principal ideas and methods involved in measuring the strength of arithmetical proof systems in terms of the complexity of their provably recursive functions. Transfinite subrecursive hierarchies of bounding functions provide a linear scale against which proof-theoretic strength can be measured and compared. They also give a direct link to various well-known combinatorial independence results for theories. The methods are those of proof theory: cut elimination, ordinal analysis, etc.

Hilbert's Tenth Problem (HTP) asks for an effective algorithm to test whether an arbitrary polynomial equation with integer coefficients has integral solutions. Building on the important work of Davis, Putnam and Robinson, in 1970 Matijasevich took the last step in showing the unsolvability of HTP. Since 1970, reducing the number of unknowns involved and various extensions of HTP have become research themes in the field. In this talk we will survey problems and results in these two directions.

Based on a result of Nies on definability in the upper semilattice of computably enumerable degrees (denoted by R), we find that filters generated by definable subsets of R are also definable. As applications we exhibit two new definable filters and study their supremum. Finally we exhibit a counterexample.

We investigate aspects of effectivity and computability on closed and compact subsets of locally compact spaces. We use the framework of the representation approach, TTE, where continuity and computability on finite and infinite sequences of symbols are defined canonically and transferred to abstract sets by means of notations and representations. This work is a generalization of the concepts introduced in \cite{BW99} and \cite{Wei00} for the Euclidean case and in \cite{BP03} for metric spaces. Whenever reasonable, we transfer a representation of the set of closed or compact subsets to locally compact spaces and discuss its properties and the relations of the resulting representations to each other.

Hardness or completeness proofs are among the most fundamental methods for proving lower complexity bounds in computational complexity theory. The most prominent example of a completeness notion is \mathrm{NP}-completeness, which is shared by hundreds of important problems and which - assuming \mathrm{P} \not= \mathrm{NP} - implies intractability. While the fact that they give lower bounds makes completeness notions attractive for applications, since they allow the analysis of the complexity of individual problems, for the theorist the investigation of the structure of complete problems is of central interest. Since all problems of a given complexity class \mathrm{C} are coded into a corresponding complete problem A, it is natural to ask how the richness of the class \mathrm{C} is reflected in the structure of A, and which redundancies a problem A must have in order to allow all of these codings. We will discuss this question using the example of complete problems for the exponential-time class \mathrm{E} = \mathrm{DTIME}(2^{\mathrm{lin}(n)}). In particular, we will show how structure and redundancy properties of complete problems depend on the underlying reducibilities. We will also discuss how the structural analysis of complete sets might be of use for separating complexity classes.

A problem A is complete for a class \mathrm{C} if all problems in \mathrm{C} can be (feasibly) reduced to A. For a class \mathrm{C} containing intractable problems this implies intractability of the complete set A. So, for instance, completeness for the nondeterministic polynomial-time class \mathrm{NP} or for the exponential-time class \mathrm{E} = \mathrm{DTIME}(2^{\mathrm{lin}(n)}) implies intractability. In fact, there are numerous examples of interesting problems whose intractability has been shown this way. In the 1990s Lutz raised the question of whether this approach to intractability can be generalized by considering more general \emph{weak} completeness notions which still imply intractability. While for a set A complete for a class \mathrm{C} all of \mathrm{C} can be reduced to A, Lutz proposed to call a set A weakly complete if a \emph{nonnegligible} part of \mathrm{C} can be reduced to A. For the class \mathrm{E}, Lutz formalized these ideas by introducing a resource-bounded measure on this class and by saying that a subclass of \mathrm{E} is negligible if it has measure 0 in \mathrm{E}. Lutz justified this definition by showing that 1) there are weakly complete sets in his sense which are not complete, and 2) weak completeness still implies intractability. In our talk we will review a characterization of Lutz's weak completeness in terms of resource-bounded randomness (given by Ambos-Spies, Terwijn and Zheng) which shows that, in the sense of resource-bounded measure, weakly complete sets are abundant whereas complete sets are rare. Finally we discuss some recently studied, more general (and conceptually much simpler) weak completeness notions which do not require the machinery of resource-bounded measure but are defined in basic terms of complexity theory.
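In the notation of the abstract, Lutz's notion can be summarized compactly as follows (our restatement; \le^{\mathrm{p}} stands for the polynomial-time reducibility in use, typically many-one, and measure refers to Lutz's resource-bounded measure):

\[
A \text{ is weakly complete for } \mathrm{E} \iff \{\, B \in \mathrm{E} : B \le^{\mathrm{p}} A \,\} \text{ does not have measure } 0 \text{ in } \mathrm{E}.
\]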

Borel structures are analogs of computable structures on the continuum. We consider the question of the extent to which it matters that the representation is injective, and we compare Borel structures to structures representable by automata.

Barmpalias, Brodhead, Cenzer, Dashti and Weber (2007) introduced a probability distribution on nonempty closed subsets of Cantor space, and studied algorithmically random closed sets in this model. By comparing their model with percolation limit sets, studied by R. Lyons (1990) among others, we strengthen some of their results.

The LR degrees form a structure which stems from lowness considerations in algorithmic randomness. Recently, jointly with Lewis, Soskova and Stephan, we have proved a number of results about this structure. The focus of this talk will be on the methods that we used to prove some of these results. I will present a couple of priority and forcing constructions which prove some results in the LR degrees.

The r.e. degrees have been studied extensively since the 1960s. One can easily generalize the definition of the r.e. degrees to the so-called n-r.e. degrees. Some properties are shared by the r.e. degrees and the d.r.e. degrees (for example, lower density); other properties, however (for example, upper density), hold in the r.e. degrees but not in the n-r.e. degrees. Therefore the r.e. degrees are not elementarily equivalent to the n-r.e. degrees. However, it is surprisingly difficult to show that the 2-r.e. degrees are not elementarily equivalent to the 3-r.e. degrees, and the elementary equivalence of the 3-r.e. and 4-r.e. degrees remains open (Downey's Conjecture). In this talk, we review some facts about the n-r.e. degrees with an emphasis on Lachlan sets and splitting properties. Whether this approach could eventually answer Downey's Conjecture is yet to be seen.

Model theory is the study of algebraic structures, such as groups and rings, through the lens of first-order logic.  For any structure $M$ we can ask two related questions: Which first-order sentences are true in $M$? What subsets of $M^n$ are (internally) definable?  The techniques of model theory answer these questions for many of the structures arising in algebra, such as the field of real numbers and the ring of algebraic integers.  In the process, several notions of model-theoretic ``tameness'' emerge.  For example, the real field and several related structures possess the ``o-minimal'' property, which ensures that definable sets have well-defined dimensions, compact definable sets can be triangulated, and definable functions are piecewise continuous.  These tameness properties have been essential to the applications of model theory in algebraic geometry and additive combinatorics.  The ``NIP'' property is a more general notion of tameness--the class of NIP structures includes the o-minimal structures as well as many of the structures arising naturally in algebra, such as the field of p-adic numbers and the ring of formal power series.  In recent years, a conjectural classification of NIP fields has emerged.  Loosely speaking, we expect almost all NIP fields to be henselian valued fields.  In my talk, I will give an overview of model theory, discuss the NIP field conjectures, and state my partial results in the ``finite-dimensional'' NIP setting.

Hrushovski and Loeser used the space \widehat{V} of generically stable types concentrating on V to study the topology of the Berkovich analytification V^{an} of V. In this talk we will give a brief introduction to this object and present an alternative approach, based on lovely pairs of valued fields, to studying various analytifications using model theory. We will provide a model-theoretic counterpart \widetilde{V} of Huber's analytification of V. We show that, as for \widehat{V}, the space \widetilde{V} is strict pro-definable. Furthermore, we will discuss canonical liftings of the deformation retraction developed by Hrushovski and Loeser. This is a joint project with Pablo Cubides-Kovacsics and Martin Hils.

Left-c.e. reals are the reals which are limits of nondecreasing computable sequences of rationals, and d.c.e. reals are differences of two left-c.e. reals. Based on work about the derivation on the d.c.e. reals by Barmpalias and Lewis-Pye, and by Miller, we show that among all d.c.e. reals, the Martin-Löf random ones are exactly those which do not have computable approximations with different convergence rates. Among the left-c.e. reals, however, this characterization no longer holds. If we define another randomness notion to capture this property for left-c.e. reals, it can be shown that this notion is strictly weaker than Martin-Löf randomness; its position relative to other randomness notions such as Schnorr randomness, however, is still unclear. We will also explore this topic in terms of relativization. This is joint work with George Barmpalias and Liang Yu.
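For reference, the two classes of reals are defined as follows (standard definitions, consistent with the abstract):

\[
\alpha \text{ is left-c.e.} \iff \alpha = \lim_n q_n \text{ for a nondecreasing computable sequence } (q_n) \text{ of rationals};
\]
\[
\alpha \text{ is d.c.e.} \iff \alpha = \beta - \gamma \text{ for some left-c.e. reals } \beta \text{ and } \gamma.
\]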

Computable Lipschitz reducibility was introduced to measure relative complexity; it implies K-reducibility as well as C-reducibility. In this talk we show that the computable Lipschitz degrees of left-c.e. reals are not dense.
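Recall the standard definition (due to Downey, Hirschfeldt and LaForte): computable Lipschitz reducibility is Turing reducibility with use bounded by the identity plus a constant,

\[
X \le_{cl} Y \iff X = \Phi^Y \text{ for a Turing functional } \Phi \text{ whose use at argument } n \text{ is bounded by } n + c \text{ for some constant } c.
\]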

Using a special kind of Birkhoff lattice, we construct a permutation model in which there exists a finite-to-one map from the symmetric group of an infinite set A onto A; such a map cannot exist even in the presence of the axiom of countable choice. This is joint work with Jiachen Yuan.

Moore proved that it is consistent that there is a 2-element basis for the Aronszajn lines consisting of coherent lines. It is well known that such a basis can also have large size. We will introduce a forcing axiom which implies a 4-element basis for the Aronszajn lines consisting of 2 coherent lines and 2 non-coherent lines. We will also report some progress on related topics, e.g., a 3-element basis.

Peterzil and Steinhorn proved that every non-compact group definable in an o-minimal structure contains a 1-dimensional subgroup, which can be viewed as the ``tangent space'' at the point at infinity of an unbounded curve. Later, Peterzil and Starchenko generalized this result to higher dimensions. Kamensky, Starchenko and Jinhe Ye proved analogous results for algebraically closed fields and algebraically closed valued fields. I will give an introduction to these results together with corresponding examples. Finally, I will briefly describe recent results proved by W. Johnson and myself for p-adically closed fields.

Given a computable sequence (a_k) whose sum of squares \sum_k a_k^2 converges, one can change the signs of some of the terms so that the signed sum \sum_k x_k a_k also converges, where x_k \in \{+,-\} denotes the choice of signs. We will attempt to analyze these sign choices and to characterize them, to some extent, from the point of view of algorithmic randomness.
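As a simple deterministic illustration (our sketch; the talk asks which sign sequences work from the standpoint of algorithmic randomness, not about this particular strategy): since \sum_k a_k^2 < \infty forces a_k \to 0, greedily choosing each sign to keep the running sum as small as possible already produces a convergent signed series.

```python
def greedy_signs(a):
    """Choose x_k in {+1, -1} so that each partial sum of sum_k x_k * a_k
    has the smallest possible absolute value.  When a_k -> 0 (as when
    sum_k a_k^2 converges), these partial sums settle down to a limit,
    so the signed series converges."""
    s, signs = 0.0, []
    for ak in a:
        x = 1 if abs(s + ak) <= abs(s - ak) else -1
        s += x * ak
        signs.append(x)
    return signs, s

# Example: a_k = 1/(k+1) is square-summable but not summable;
# the greedy signed partial sums nevertheless stay near 0.
signs, s = greedy_signs([1.0 / (k + 1) for k in range(100_000)])
print(s)
```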

We use a new non-classical priority method to generalize, to higher levels of the fine hierarchy, the work of Melnikov, Selivanov and Yamaleev on new Turing degrees in the fine hierarchy.

Nonstandard analysis, a powerful machinery derived from mathematical logic, has had many applications in probability theory as well as stochastic processes. Nonstandard analysis allows the construction of a single object---a hyperfinite probability space---which satisfies all the first-order logical properties of a finite probability space, but which can simultaneously be viewed as a measure-theoretic probability space via the Loeb construction. As a consequence, the hyperfinite/measure duality has proven particularly useful in porting discrete results into their continuous settings. The connection between frequentist and Bayesian optimality in statistical decision theory is a longstanding open problem. For statistical decision problems with a finite parameter space, it is well known that a decision procedure is extended admissible (frequentist optimal) if and only if it is Bayes. This connection becomes fragile for decision problems with an infinite parameter space, and one must relax the notion of Bayes optimality to regain the equivalence between extended admissibility and Bayes optimality. Various attempts have been made in the literature, but they are subject to technical conditions which often rule out semi-parametric and nonparametric problems. Using nonstandard analysis, we develop a novel notion of nonstandard Bayes optimality (Bayes with infinitesimal excess risk). We show that, without any technical condition, a decision procedure is extended admissible if and only if it is nonstandard Bayes. We conclude by showing that several existing standard results in the literature can be derived from our main result.
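In symbols, writing R(\theta,\delta) for the risk of a procedure \delta and r(\pi,\delta)=\int R(\theta,\delta)\,d\pi(\theta) for its Bayes risk under a prior \pi, the two notions compared in the talk read roughly as follows (our gloss of the abstract's wording; the precise nonstandard formulation in the talk may differ in detail):

\[
\delta \text{ is extended admissible} \iff \text{for every } \varepsilon>0 \text{ no } \delta' \text{ satisfies } R(\theta,\delta') \le R(\theta,\delta)-\varepsilon \text{ for all } \theta;
\]
\[
\delta \text{ is nonstandard Bayes} \iff r(\pi,\delta) \le \inf_{\delta'} r(\pi,\delta') + \varepsilon \text{ for some prior } \pi \text{ and some infinitesimal } \varepsilon \ge 0.
\]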
