The primary focus of my research is algorithmic randomness, especially its relation to randomness extraction and to probability and statistics. More generally I have worked in various areas of computability theory (recursion theory).

The basic idea of algorithmic randomness is to declare an object algorithmically random if it belongs to no “computable” set of measure zero. Since only computable sets can truly be understood by humans, an algorithmically random object is as random as one could desire. One can then formulate questions such as: can one compute a random object from an object that is almost random? Can one compute anything useful from a random object? Thus algorithmic randomness is a natural setting for randomness extraction (although not the only natural setting).

*Martin-Löf (ML-) randomness* is a notion of algorithmic randomness, arguably the most natural one. Roughly, it corresponds to the class of all Turing machines, whereas other randomness notions correspond, say, to those Turing machines that always halt. It turns out that the collection of all Martin-Löf random sets is simpler in terms of definability than its rivals: it is a \(\Sigma^0_2\) class (topologically, an \(F_\sigma\) set), whereas other notions tend to give \(\Pi^0_3\) classes (\(F_{\sigma\delta}\) sets) or worse. Thus each ML-random set belongs to a closed set containing nothing but ML-random sets, and this is a basic tool in the area.
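To fix notation (a standard sketch, not tied to any one paper): a Martin-Löf test is a uniformly \(\Sigma^0_1\) sequence \((U_n)\) of subsets of Cantor space \(2^\omega\) with \(\mu(U_n)\le 2^{-n}\), where \(\mu\) is the uniform measure. Then

```latex
\[
  X \in \mathrm{MLR}
  \iff
  X \notin \bigcap_{n \in \mathbb N} U_n
  \quad \text{for every Martin-L\"of test } (U_n).
\]
% Taking a universal test (\hat U_n) replaces the quantifier over all
% tests by a single test, exhibiting MLR as a countable union of
% closed sets, i.e. as a $\Sigma^0_2$ ($F_\sigma$) class:
\[
  \mathrm{MLR} \;=\; \bigcup_{n \in \mathbb N} \bigl( 2^{\omega} \setminus \hat U_n \bigr),
  \qquad \text{each } 2^{\omega} \setminus \hat U_n \text{ closed}.
\]
```

In particular, each ML-random \(X\) lies in some closed set \(2^\omega \setminus \hat U_n\) consisting entirely of ML-random sets, which is the basic tool just mentioned.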

**Open problem**: Let \(\text{MLR}\) denote the collection of all Martin-Löf random subsets of \(\mathbb N\) and let \(\le_T\) denote Turing reducibility.

Note: Hertling (2005) showed that it is not necessarily true that \(B\in\text{MLR}\).

This is one of many questions of the form: if a set is close to being random, then can we extract randomness from it? I have worked primarily on two notions of being close to random: being close in asymptotic Hamming distance, and being an infinite subset.

For infinite subsets, the current state of knowledge is that each Schnorr 2-random set \(X\) has an infinite subset \(Y\) such that no Martin-Löf random set is Turing reducible to \(Y\). This is the main result of the paper *A strong law of computationally weak subsets*. The main idea of the proof is to combine the Extinction Criterion for Galton–Watson processes with a construction from the paper *Comparing DNR and WWKL*.
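The Extinction Criterion says that a Galton–Watson branching process with mean offspring number \(m\) dies out almost surely if and only if \(m \le 1\) (excluding the degenerate case of always exactly one child); when \(m > 1\), the extinction probability is the smallest fixed point of the offspring generating function. As an illustration only (this is not code from the paper, and the offspring distributions below are made up), a short simulation:

```python
import random

def galton_watson_extinct(offspring_dist, max_generations=100, cap=1000, rng=None):
    """Simulate one Galton-Watson process started from a single individual.
    offspring_dist is a list of (children, probability) pairs.  Returns True
    if the line dies out within max_generations; returns False if it survives
    that long or the population exceeds cap (then survival is all but certain)."""
    rng = rng or random.Random()
    outcomes = [k for k, _ in offspring_dist]
    weights = [p for _, p in offspring_dist]
    population = 1
    for _ in range(max_generations):
        if population == 0:
            return True
        if population > cap:
            return False
        # Every individual reproduces independently with the same law.
        population = sum(rng.choices(outcomes, weights, k=population))
    return population == 0

def extinction_frequency(offspring_dist, trials=2000, seed=0):
    """Fraction of simulated runs that go extinct."""
    rng = random.Random(seed)
    return sum(
        galton_watson_extinct(offspring_dist, rng=rng) for _ in range(trials)
    ) / trials

# Hypothetical offspring laws for illustration:
subcritical = [(0, 0.50), (1, 0.25), (2, 0.25)]    # mean 0.75 <= 1: extinction a.s.
supercritical = [(0, 0.25), (1, 0.25), (2, 0.50)]  # mean 1.25 > 1: extinction prob. 1/2
```

Under these made-up laws, `extinction_frequency(subcritical)` comes out near 1, while `extinction_frequency(supercritical)` comes out near \(1/2\), the smallest fixed point of \(f(s) = 0.25 + 0.25s + 0.5s^2\), matching the criterion.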

In the case of asymptotic Hamming distance, the paper *Recovering randomness from an asymptotic Hamming distance* has some results. One can define asymptotic Hamming distance in two ways: require that the Hamming distance between the length-\(n\) prefixes of \(X\) and \(Y\) be small for all \(n\), or merely for infinitely many \(n\). It turned out to be most natural, in a way, to require small distance on an infinite computable set of values of \(n\). This is related to Michel Weber’s strengthening of the Law of the Iterated Logarithm. That law has a somewhat strange form, involving the logarithm of the logarithm function; but when one looks only at what happens along an infinite sparse set of times, the strangeness disappears. In terms of this notion of asymptotic Hamming distance I obtained the following type of result:
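For orientation, the classical Law of the Iterated Logarithm referred to here: if \(S_n\) is the \(n\)-th partial sum of an i.i.d. sequence of fair \(\pm 1\) coin flips, then

```latex
\[
  \limsup_{n \to \infty} \frac{S_n}{\sqrt{2 n \log \log n}} = 1
  \quad \text{almost surely.}
\]
```

The \(\log\log n\) factor is the “strange form” in question. Along a sufficiently sparse sequence of times \((n_k)\), for instance \(n_k = 2^k\), one has \(\log\log n_k \approx \log k\), a single logarithm in the index \(k\), which is one way to see the strangeness disappear.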

This has the corollary that from a set \(Y\) that is complex in the sense of Kjos-Hanssen, Merkle, and Stephan (TAMS, 2011) and simultaneously has positive effective packing dimension, one cannot uniformly compute any set \(Z\) that is even Mises-Wald-Church stochastic. One goal is to strengthen this so that \(Y\) has positive effective Hausdorff dimension. Another possible strengthening is to deal with all Turing reductions at once. Conceivably there is a route here to prove that for each sufficiently random set \(X\) there is a set \(Y\) so close to \(X\) that \(Y\) is forced to have effective Hausdorff dimension 1, and yet no 1-random set is Turing reducible to \(Y\). This would strengthen a result of Greenberg and Miller, which asserts that such sets \(Y\) exist if one drops the requirement of being close to a random set.