We will discuss two related measures of complexity for mathematical theorems and constructions. One asks what proof techniques (or, formally, which axioms) are needed to prove specific theorems. The other asks, for existence proofs, how complicated (in the sense of computability) the objects asserted to exist are.
For this talk we will consider some illustrative examples from combinatorics. In particular, we will consider several theorems of matching theory such as those of Frobenius, (M. and P.) Hall and König. While in the infinite case these theorems seem both different and yet somehow the same, an analysis of the countable case in terms of computability or provability clearly distinguishes among them and assigns precise levels to their complexity.
At the most complicated level we will consider lies the König Duality Theorem: every bipartite graph has a matching such that one can choose a vertex from each edge of the matching so as to produce a cover, i.e., a set containing an element of every edge of the graph. This theorem cannot be proved using algorithmic methods, even when combined with compactness (König's lemma for binary trees) or full König's lemma. We will show that it requires highly nonelementary methods, as typified by constructions by transfinite recursion, choice principles and, for some versions, even more.
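For finite bipartite graphs, by contrast, the duality is fully effective. The following minimal Python sketch (all names hypothetical, not from the talk) computes a maximum matching by augmenting paths and then, via König's alternating-path construction, extracts a cover consisting of one endpoint of each matched edge; it is exactly this step that has no algorithmic analogue in the infinite case.

```python
# Finite-case illustration only: the infinite König Duality Theorem discussed
# above admits no such algorithm. All names here are hypothetical.

def max_matching(adj, n_left, n_right):
    """Maximum matching via augmenting paths. adj[u] lists the right-side
    neighbors of left vertex u."""
    match_l = [-1] * n_left     # match_l[u] = right partner of u, or -1
    match_r = [-1] * n_right

    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match_r[v] == -1 or augment(match_r[v], seen):
                    match_l[u], match_r[v] = v, u
                    return True
        return False

    for u in range(n_left):
        augment(u, set())
    return match_l, match_r

def koenig_cover(adj, n_left, n_right):
    """Choose one endpoint of each matched edge so that the chosen set
    covers every edge of the graph (König's construction)."""
    match_l, match_r = max_matching(adj, n_left, n_right)
    # Z = vertices reachable by alternating paths from unmatched left vertices.
    visited_l, visited_r = set(), set()
    stack = [u for u in range(n_left) if match_l[u] == -1]
    visited_l.update(stack)
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in visited_r:
                visited_r.add(v)
                w = match_r[v]        # matched edges are traversed right-to-left
                if w != -1 and w not in visited_l:
                    visited_l.add(w)
                    stack.append(w)
    # Cover = (left vertices not in Z) union (right vertices in Z).
    cover = [('L', u) for u in range(n_left) if u not in visited_l] + \
            [('R', v) for v in sorted(visited_r)]
    return match_l, cover

# Tiny example: edges L0-R0, L1-R0, L1-R1.
matching, cover = koenig_cover([[0], [0, 1]], 2, 2)
```

In the example, the matching pairs L0 with R0 and L1 with R1, and the cover consists of L0 and L1: one endpoint per matched edge, together touching every edge of the graph.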
If time permits, we may also mention the calibration of some results of Ramsey theory that lie at the other (low) end of our classification scheme: Ramsey's theorem for n-tuples for different n and some of its consequences, such as the theorems of Dilworth and Erdős–Szekeres (every infinite partial order has an infinite chain or antichain, and every infinite linear order has an infinite ascending or descending sequence). We will not use, or even consider, any formal systems, and no knowledge of logic is presupposed.
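As a finite analogue of the Dilworth and Erdős–Szekeres results just mentioned, here is a short Python sketch (hypothetical names, not part of the talk) illustrating the classical finite Erdős–Szekeres theorem: any sequence of more than (r-1)(s-1) distinct numbers contains an increasing subsequence of length r or a decreasing subsequence of length s.

```python
# Finite Erdős–Szekeres illustration (a sketch, not from the talk):
# compute the lengths of the longest strictly increasing and strictly
# decreasing subsequences by simple O(n^2) dynamic programming.

def longest_monotone(seq):
    """Return (longest increasing length, longest decreasing length)."""
    n = len(seq)
    inc = [1] * n   # inc[i]: longest increasing subsequence ending at i
    dec = [1] * n   # dec[i]: longest decreasing subsequence ending at i
    for i in range(n):
        for j in range(i):
            if seq[j] < seq[i]:
                inc[i] = max(inc[i], inc[j] + 1)
            if seq[j] > seq[i]:
                dec[i] = max(dec[i], dec[j] + 1)
    return max(inc), max(dec)

# With r = s = 3, any (3-1)*(3-1) + 1 = 5 distinct numbers must contain
# a monotone subsequence of length 3.
up, down = longest_monotone([2, 4, 1, 5, 3])
```

Here the increasing subsequence 2, 4, 5 witnesses the theorem with up = 3, matching the guarantee that one of the two lengths reaches 3 for any 5 distinct numbers.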
We will work instead with an intuitive notion of what it means for a function to be computable, i.e., that there is a computer program that calculates it, given enough time and space and no mechanical failures. We will also explain the relevant combinatorial notions.
Title: Lattice embeddings into the computably enumerable Turing degrees.
Abstract: This talk will attempt to explain in lattice-theoretic terms the status quo of
the problem of which finite lattices can be embedded into the computably
enumerable Turing degrees, concentrating on the lattice theory rather than the
computability-theoretic constructions.
In 2000, Lerman proved a characterization of the embeddable finite lattices
for join-semidistributive lattices, building on joint work with Lempp in the
mid-1990s. (It is generally believed that once the join-semidistributive case
is solved, the full characterization will not be too difficult.) Subsequent work by
Lempp, Lerman and Solomon produced only minor improvements and ended in a
stalemate on this problem, which has been open for almost 50 years.
The problem with Lerman’s condition is that it is not known to be decidable
(and so, in particular, is not known to be expressible by a first-order
formula in the language of lattices); rather, it requires the existence of
finite sequences of sequences of lattice elements subject to fairly delicate
transition rules.
I will try to explain Lerman’s condition in detail, motivating it somewhat by
a hint of where each feature comes from in computability-theoretic terms.
Title: On the role of the Collection Principle for Sigma^0_2 formulas in second-order reverse mathematics.
Abstract:
This is joint work with Chitat Chong and Yue Yang.
We show that the principle PART from Hirschfeldt and Shore [2007] is
equivalent to the Sigma^0_2-Bounding principle BSigma^0_2 over RCA_0,
answering one of their open questions.
Our work also fills a gap in a proof in Cholak, Jockusch and Slaman [2001] by
showing that D^2_2 implies BSigma^0_2 and is thus indeed equivalent to Stable
Ramsey’s Theorem for Pairs SRT^2_2.
This also allows us to conclude that the combinatorial principles IPT, SPT and
SIPT defined by Dzhafarov and Hirst [2009] all imply BSigma^0_2, and thus that
SPT and SIPT are both equivalent to SRT^2_2 as well.
Our proof uses the notion of a bi-tame cut in models of arithmetic, the
existence of which we show to be equivalent, over RCA_0, to the failure of
BSigma^0_2.