Category Archives: research

Complexity Option Game High Score Table

Complexity Option Game

Highest individual payoffs
Name Payoff
Rupert $11
Bjørn JavaScript $11
Rutger $11
Rupert $11
$3
$3
Steven Gum $3
$2
Thorsten $2
$2
$2
Rutger $2
$2
Jake $2
$2
$2
Nina $2
Alv $2
Jake $2
Jake $2
$2
$2
$2
$2
$2
Gabriele acquagassata $1
$1
Morten $1
Gabriele acquagassata $1
$1
$1
$1
$1
Gabriele acquagassata $1
$1
$1
$1
$1
Saurav $1
$1
God $1
$1
$1
$1
Lucas $1
3 $1
$1
ian2 $1
James $1
Lisa $1
$1
David $1
$1
x $1
x $1
$1
$1
$1
x $1
$1
$1
$1
$1
$1
$1
$1
$1
$1
$1
$1
Znevhf $1
Znevhf $1
$1
$1
$1
$1
$1
Thorsten $1
$1
nate $1
nate $1
Thorsten $1
nate $1
Jesus CHrist $1
$1
Jesus CHrist $1
$1
$1
$1
$1
$1
axs34 $1
Jake $1
Jake $1
Jake $1
Jake $1
Jake $1
Jake $1
Matthew $1
Jake $1
Jake $1
Jake $1
$1
Jake $1
$1
$1
bu $1
Rutger $1
Jake $1
Jake $1
$1
moo $1
dev $1
$1
Jake $1
Soumik $1
axs34 $1
$1
J $1
Mary $1
Mary $1
Daniel $1
Jake $1
Jake $1
Jake $1
Jake $1
$1
$1
$1
Dan $1
$1
Dan $1
Nina $1
$1
Yoshiki $1
$1
$1
$1
$1
Godot $1
Znevhf $0
Math $0
$0
J $0
$0
$0
$0
$0
$0
$0
A name $0
Herr God $0
Math $0
Znevhf $0
Znevhf $0
x $0
Znevhf $0
$0
$0
$0
Gev $0
Gev $0
$0
$0
Gev $0
$0
$0
$0
$0
Gev $0
$0
$0
$0
Yoshiki $0
Doralisa $0
Doralisa $0
H $0
Amir $0
$0
Gev $0
$0
$0
$0
Catiline $0
$0
$0
$0
$0
$0
$0
$0
$0
$0
$0
$0
$0
$0
$0
$0
$0
$0
$0
Thorsten $0
Nagel $0
$0
$0
Yoshiki $0
Buddenbroek $0
$0
$0
$0
J $0
$0
$0
Jesus CHrist $0
$0
$0
$0
Thorsten $0
Thorsten $0
$0
$0
$0
$0
Yoshiki $0
$0
$0
$0
$0
$0
$0
$0
$0
$0
Chris $0
$0
$0
$0
Nina $0
Nina $0
$0
$0
Gabriele acquagassata $0
$0
$0
$0
$0
$0
Lisa $0
TT $0
ian2 $0
$0
$0
$0
Lucas $0
$0
Chris $0
$0
$0
Daniel $0
$0
axs34 $0
axs34 $0
? $0
Jake $0
Jake $0
mra $0
Jake $0
? $0
Jake $0
dev $0
? $0
Thirty Fodders $0
S. $0
Jake $0
$0
$0
bu $0
Dan $0
dfsg $0
Godot $0
Jake $0
Jake $0
Jake $0
Jake $0
Jake $0
Jake $0
Jake $0
Jake $0
lol $0
$0
$0
$0
benjamin $0
$0
$0
danish $0
i $0
i $0
test $0
$0
$0
$0
$0
$0
$0
$0
$0
$0
$0
Rachel $0
Ulven! (you know my name) $0
$0
Robert $0
$0
$0
hi $0
ram $0
$0
$0
$0
$0
$0
$0
$0
$0
Neeraj $0
$0
$0
$0
$0
lol $0
$0
$0
$0
$0
$0
Jehan $0
$0
Kellie $0
$0
$0
$0
$0
$0
$0
$0
nate $0
nate $0
nate $0
Kellie $0
Kellie $0
nate $0
nate $0
$0

Here are the players' total earnings out of the market total of $204.

Name Earnings
Jake $23
Rupert $22
Rutger $14
Bjørn JavaScript $11
Thorsten $4
Gabriele acquagassata $3
nate $3
Nina $3
Steven Gum $3
x $3
Znevhf $2
axs34 $2
Mary $2
Jesus CHrist $2
Dan $2
Alv $2
J $1
Soumik $1
dev $1
Morten $1
Daniel $1
James $1
Godot $1
Lisa $1
moo $1
Yoshiki $1
3 $1
bu $1
David $1
Lucas $1
ian2 $1
Matthew $1
Saurav $1
God $1
Note: the highest payoff theoretically possible is currently $11. For comparison, Pakravan and Saadat calculated the following option prices:
Option prices
Expiry Price
0 or 1 $0
2 or 3 $0.5
4 or 5 $0.75
6 or 7 $0.875
8 or 9 $1.070313
10 or 11 $1.191406
12 or 13 $1.236328
14 or 15 $?
16 or 17 $?
18 or 19 $?
20 or 21 $?
22 or 23 $?
Infinity $Infinity?


Interactions between computability, universal algebra, and group theory

All talks will be in Keller Hall 314.

Scientific program

Thursday February 12

  • 10:30-11:20am Olga Kharlampovich (CUNY — Hunter College): Elementary classification questions for groups and algebras I: Groups.
  • 11:30am-1:20pm: CASUAL LUNCH. Suggestion: Manoa Gardens
  • 1:30-2:20pm Free discussion

Friday February 13

  • 9:30-10:20am Bakhadyr Khoussainov (University of Auckland): A quest for algorithmically random algebraic structures
  • 11:30-12:20pm Alexei Miasnikov (Stevens Institute of Technology): Elementary classification questions for groups and algebras II: Associative and Lie algebras.
  • 12:30-1:20pm BRIEF LUNCH. Suggestion: Sustainability Courtyard
  • 1:30-2:20pm Paul Kim Long V. Nguyen (University of Hawaii — Leeward Community College): $\Sigma^0_3$-completeness of subdirect irreducibility
Abstract (Kharlampovich and Miasnikov)

We consider some fundamental model-theoretic questions that can be asked about a given algebraic structure (a group, a ring, etc.), or a class of structures, to understand its principal algebraic and logical properties. These Tarski type questions include: elementary classification and decidability of the first-order theory.

In the case of free groups we proved that two non-abelian free groups of different ranks are elementarily equivalent, classified finitely generated groups elementarily equivalent to a finitely generated free group (also done by Sela) and proved decidability of the first-order theory.

We describe partial solutions to Tarski’s problems in the class of free associative and Lie algebras of finite rank and some open problems. In particular, we will show that unlike free groups, two free associative algebras of finite rank over the same field are elementarily equivalent if and only if they are isomorphic. Two free associative algebras of finite rank over different infinite fields are elementarily equivalent if and only if the fields are equivalent in the weak second order logic, and the ranks are the same. We will also show that for an infinite field the theory of a free associative algebra is undecidable.

Things to do in Honolulu

Pow!Wow! art festival

Travel: parking, getting to campus from your hotel, and beaches.

Prefix Markov chains

For lengths 1, 2, and 3 these graphs are planar; for length 4 they no longer are.
Displayed are postfix Markov chains, i.e., another bit is added at the end of the string.



To prove that the latter is nonplanar, let us first remove 0000 and 1111 and their incident edges:

Next let us remove 0110 and its incident edges; moreover, let’s remove the edges connecting 0101 and 1010 to each other:

Then we’ll remove the edges from 0100 to 1000 and from 1110 to 1101:

Then let’s remove 4 vertices by smoothing (i.e., using a homeomorphism):

Let’s try to identify a $K_{3,3}$ using red and blue for the two parts of the bipartite graph:

We see that one edge of the $K_{3,3}$ is missing; however, it can be obtained by removing further vertices and smoothing. Done!
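For a machine check of the same conclusion, here is a minimal sketch (assuming Python with the networkx library; the graph construction follows the description above, with the self-loops at 0000 and 1111 omitted since they do not affect planarity):

```python
# Build the length-4 postfix chain: vertices are 4-bit strings, and
# x1x2x3x4 is joined to x2x3x4b for b in {0,1}.  Then test planarity.
import itertools
import networkx as nx

G = nx.Graph()
for bits in itertools.product("01", repeat=4):
    v = "".join(bits)
    for b in "01":
        w = v[1:] + b
        if v != w:  # skip the self-loops at 0000 and 1111
            G.add_edge(v, w)

is_planar, certificate = nx.check_planarity(G)
print(is_planar)  # expected: False, matching the K_{3,3} argument above
```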

High Score Table

Complexity Guessing Game

The top spots are inhabited by computability theorists such as Mushfeq Khan, Rutger Kuyper, Rupert Hölzl, and Greg Igusa.

Rank Name # times correct ($a$) # times wrong ($b$) Bayesian proportion ($\frac{a+100}{a+b+200}$)
1 qefhsum 1000 0 0.917
2 Rutger 500 0 0.857
3 Rupert 333 3 0.808
4 Greg 105 39 0.596
5 Patrick Chambers 121 57 0.585
6 44 19 0 0.543
7 Peter Hengyao Han 25 7 0.539
8 David 25 13 0.525
9 Emma 10 1 0.521
10 domotorp 14 8 0.514
11 Bert 7 3 0.51
Alexey 6 2 0.51
Kayleigh Hyde 4 0 0.51
14 adam 7 4 0.507
2 6 3 0.507
amir 4 1 0.507
17 Noah 3 1 0.505
Znevhf 3 1 0.505
abelard 3 1 0.505
Erin 2 0 0.505
Bradley Valiente 2 0 0.505
Evgeny 2 0 0.505
reza 2 0 0.505
charli 2 0 0.505
d-wax 2 0 0.505
niels 2 0 0.505
Cameron 2 0 0.505
9 2 0 0.505
29 Jack 15 14 0.502
Kevin Buzzard 9 8 0.502
James Myer 6 5 0.502
Binh 5 4 0.502
Abdel 4 3 0.502
Lionel 4 3 0.502
Tyler Iyomasa 4 3 0.502
Pedro 2 1 0.502
Sonia 2 1 0.502
Victor 2 1 0.502
Шш 1 0 0.502
achraf 1 0 0.502
aaaa 1 0 0.502
hello 1 0 0.502
Matt 1 0 0.502
Sander 1 0 0.502
antoine 1 0 0.502
Uncultured Tramp 1 0 0.502
Paul 1 0 0.502
Stu 1 0 0.502
popo 1 0 0.502
Igor 1 0 0.502
pete 1 0 0.502
Morrison 1 0 0.502
pincopallino 1 0 0.502
efte 1 0 0.502
asd 1 0 0.502
thomas 1 0 0.502
Magnus Talberg 1 0 0.502
ewrerewrw 1 0 0.502
cc 1 0 0.502
Rune 1 0 0.502
as 1 0 0.502
12 1 0 0.502
;pl'p;l'p,l 1 0 0.502
NICOLAS BAEZ 1 0 0.502
BALAJI 1 0 0.502
Jesse 1 0 0.502
rajat 1 0 0.502
Onkar 1 0 0.502
Rune 1 0 0.502
fiona 1 0 0.502
Jeff 1 0 0.502
orlp 1 0 0.502
Zain 1 0 0.502
3 1 0 0.502
Damien 1 0 0.502
hgfhdgf 1 0 0.502
heptagon 1 0 0.502
Deepal 1 0 0.502
tes 1 0 0.502
Yarramsetti Durga Sai Ratna Kumar 1 0 0.502
han 1 0 0.502
Jane 1 0 0.502
sajal 1 0 0.502
xxxxxxxxxxxxx 1 0 0.502
tr 1 0 0.502
asdassdasdsad 1 0 0.502
mehdi 1 0 0.502
sdfa 1 0 0.502
swathi 1 0 0.502
Michal 1 0 0.502
juan 1 0 0.502
Nash 1 0 0.502
sdv 1 0 0.502
jass 1 0 0.502
95 Pinco pallino 13 13 0.5
Turbo 4 4 0.5
Peter 2 2 0.5
Neil 2 2 0.5
Lynne Higa 2 2 0.5
Morten 2 2 0.5
bald 2 2 0.5
Toby 2 2 0.5
Doralisa 1 1 0.5
chack 1 1 0.5
pluto 1 1 0.5
Dylan C. Beck 1 1 0.5
asdfadsf 1 1 0.5
kk 1 1 0.5
Daniel 1 1 0.5
manor 1 1 0.5
smilex 1 1 0.5
Anthony 1 1 0.5
David Cervantes Nava 1 1 0.5
Nick Gur 1 1 0.5
Fulker 1 1 0.5
Conrado 1 1 0.5
josh 1 1 0.5
James 1 1 0.5
Xukou Ling 1 1 0.5
weakmath 1 1 0.5
brikir 1 1 0.5
Muzamil 1 1 0.5

Note: Entries under nondescript and extremely short names, such as "n", may be deleted periodically.
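For reference, the ranking above uses the smoothed proportion \(\frac{a+100}{a+b+200}\) from the table header; here is a minimal sketch (the helper name is illustrative, not from the site's code):

```python
# Smoothed ("Bayesian") proportion used to rank the guessing game table:
# a correct guesses and b wrong guesses are pooled with 200 fictitious
# fifty-fifty guesses, so new players start near 0.5.
def bayesian_proportion(a: int, b: int) -> float:
    return (a + 100) / (a + b + 200)

# Examples from the table: qefhsum (1000, 0) and Rutger (500, 0).
print(round(bayesian_proportion(1000, 0), 3))  # 0.917
print(round(bayesian_proportion(500, 0), 3))   # 0.857
```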

Deterministic automaton complexity of strings

The deterministic automaton complexity of a finite binary string \(x=x_1\ldots x_k\) is defined to be the least number \(A(x)\) of states of a finite state automaton \(M\) such that \(x\) is the only string of length \(k\) in the language accepted by \(M\). It was introduced by Shallit and Wang (2001).

In a Discrete Mathematics class in Spring 2009, students Jason Axelson, Chris Ho and Aaron Kondo wrote a C program that calculated the complexity of all strings of length at most 7.
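Their program is not reproduced here, but a minimal brute-force sketch in Python of the same computation (an illustration of the definition above, not the students' code; the search is exponential and only practical for very short strings) could look like this:

```python
# Brute-force A(x) for short binary strings.  The start state is fixed to 0,
# which loses no generality up to renaming states.
from itertools import product

def end_state(d0, d1, w):
    """State reached from state 0 after reading w; d0[q] and d1[q] are the
    transitions from state q on '0' and '1' respectively."""
    q = 0
    for c in w:
        q = d0[q] if c == "0" else d1[q]
    return q

def has_n_state_witness(x, n):
    """Is there an n-state DFA accepting x and no other string of length |x|?"""
    words = ["".join(w) for w in product("01", repeat=len(x))]
    for d0 in product(range(n), repeat=n):
        for d1 in product(range(n), repeat=n):
            qx = end_state(d0, d1, x)
            # x is the unique accepted string of its length iff no other
            # word of that length reaches qx; then take accept set {qx}.
            if all(end_state(d0, d1, w) != qx for w in words if w != x):
                return True
    return False

def A(x):
    n = 1
    while not has_n_state_witness(x, n):
        n += 1
    return n

print(A("011100"))  # expected 4, as in the table below (slow: exhaustive search)
```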

Results:
\[A()=1,\quad A(0)=2,\]
\[A(x_1\ldots x_k)\le k \quad\text{ for } k\ge 2,\]
\[A(x_1\ldots x_k)\le k-1 \quad\text{ for } k\ge 5.\]
\[A(x_1\ldots x_k)\le 5 \quad\text{ for } k=7.\]

The string \(011100\) is the only string (starting with a 0) of length 6 or less with the property that its complexity is increased when read backwards:
\[A(011100)=4\ne 5=A(001110).\]


Automaton witnessing the complexity of the string 011100. (For simplicity, the trash state where any missing arrow goes is not shown.)

A table of complexities is available:

Table of constants \(c\) such that the bound in each column header holds for all \(x\in\{0,\dots,b-1\}^n\)

In the large-alphabet limit, Kayleigh graphs apply and the upper bound is \(\lfloor n/2\rfloor + 2\).
The example \(x=01\) shows that \(A(x)\) is not well defined unless we also specify the alphabet size \(b\):
http://math.hawaii.edu/~bjoern/dfa-complexity-of-01-as-3-ary

Length \(n\) \(b=1\), \(A(x)\le cn + 1\) \(b=2\), \(A(x)\le cn\) \(b=3\), \(A(x)\le cn\) \(b=4\), \(A(x)\le cn\)
0 0 \(\infty\) \(\infty\) \(\infty\)
1 0 2 2 2
2 0 1 3/2? 1
3 0 1 1 1
4 0 1 1 1
5 0 0.8 1 1
6 0 0.83 in progress
7 0 0.714285
8 0 0.75
9 0 0.777 or 0.666

News (September 6, 2015): For binary strings of length at most 8, you may view witnessing automata using the format

http://math.hawaii.edu/~bjoern/dfa-complexity-of-011

Thus, the following table is now partially obsolete.

Complete table for binary strings of length \(\leq 7\) and partial table for length \(8\)

In the following table we only consider strings starting with 0, since strings starting with 1 have the same complexity as the string obtained by flipping each bit (e.g., \(A(001)=A(110)\)).

Some witnessing automata are linked to; they were drawn using svg-edit.

In the following table, \(\delta_i = 0102\) means that \((\delta(0,i),\delta(1,i),\delta(2,i),\delta(3,i)) = (0,1,0,2)\)
in a witnessing DFA with 4 states, found by the MATH 301 students’ program.
Curiously, below there are no instances of permutation automata (where each \(\delta_i\) is a bijection) except for the empty string.

\(A^-(x)\) denotes incomplete DFA complexity, i.e., there is no requirement that the transition functions be total.
We have \(A_N(x)\le A^-(x)\le A(x).\)
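As an illustration of the \(\delta\) notation, the following sketch (a hypothetical helper, not from the students' program) checks the row for 010 below, where \(\delta_0=100\) and \(\delta_1=210\) should give a 3-state witness:

```python
# Decode "010 3 100 210": the q-th digit of delta_i is the state reached
# from state q on input i, with start state 0.  The row claims that, for a
# suitable accept set, 010 is the only accepted string of length 3.
from itertools import product

def end_state(d0, d1, w):
    q = 0
    for c in w:
        q = int(d0[q]) if c == "0" else int(d1[q])
    return q

x, d0, d1 = "010", "100", "210"
qx = end_state(d0, d1, x)
unique = all(end_state(d0, d1, "".join(w)) != qx
             for w in product("01", repeat=len(x)) if "".join(w) != x)
print(unique)  # expected True: accept set {qx} witnesses A(010) <= 3
```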

String \(x\) Complexity \(A(x)\) \(\delta_0\) \(\delta_1\) \(A^-(x)\)

(empty string) 1 0 0 1
0 2 00 10 1
00 2 01 11 1
01 2 00 10 2
000 2 01 11 1
001 3 110 200 2
010 3 100 210 2
011 3 111 220 2
0000 2 01 11 1
0001 3 022 122 2
0010 4 1120 2321 3
0011 3 202 100 3
0100 4 1130 2211 3
0101 3 122 202 2
0110 4 1110 2231 3
0111 3 122 212 (avoids 0) 2
00000 2 01 11
00001 3 022 122
00010 4 1120 2322
00011 4 1122 2320
00100 4 1102 2331
00101 4 1132 2301
00110 4 1101 2332
00111 4 1131 2302
01000 4 1302 2131
01001 4 1223 2320
01010 3 122 202
01011 4 1221 2320
01100 4 1001 2303
01101 4 1322 2120
01110 4 1220 2321
01111 3 122 212 (avoids 0)
000000 2 01 11
000001 3 022 122
000010 4 0313 2333
000011 4 0333 2313
000100 5 11240 23221
000101 5 11241 23220
000110 5 11220 23241
000111 4 2033 1002
001000 5 13240 22211
001001 4 1321 2220
001010 4 2313 1233 (avoids 0)
001011 4 2032 1003
001100 4 1320 2223
001101 5 13241 22230
001110 5 13220 22243 4 (\(A_N=3\))
001111 4 2313 3133
010000 4 2133 3313
010001 5 12243 23220
010010 4 1220 2321
010011 5 12231 23240
010100 4 2313 1303
010101 3 122 202
010110 5 12210 23241
010111 5 12211 23240
011000 5 13240 21221
011001 4 2002 1033
011010 5 13220 21241
011011 4 1221 2320
011100 4 1220 2323
011101 5 12241 23210
011110 4 2313 1323 (avoids 0)
011111 3 122 212
0000000 2
0000001 3
0000010 4
0000011 4
0000100 5
0000101 5
0000110 5
0000111 5
0001000 5
0001001 5
0001010 5
0001011 5
0001100 5
0001101 5
0001110 5
0001111 5
0010000 5
0010001 5
0010010 4
0010011 5
0010100 4
0010101 4
0010110 5
0010111 5
0011000 5
0011001 5
0011010 5
0011011 5
0011100 5
0011101 5
0011110 5
0011111 4
0100000 4
0100001 5
0100010 5
0100011 5
0100100 4
0100101 5
0100110 5
0100111 5
0101000 5
0101001 5
0101010 3
0101011 4
0101100 5
0101101 5
0101110 5
0101111 5
0110000 5
0110001 5
0110010 5
0110011 5
0110100 5
0110101 5
0110110 4
0110111 5
0111000 5
0111001 5
0111010 5
0111011 5
0111100 5
0111101 5
0111110 4
0111111 3
String Complexity Witness
00000000 2 \(0^*\)
00000001 3 \(0^* 1\)
00000010 4 \(0^* 10\)
00000011 4 \(0^* 11\)
00000100 5 \(0^* 100\)
00000101 5 \(0^* 101\)
00000110 5 \(0^* 110\)
00000111 5 \(0^* 111\)
00001000 6 \(0^* 1000\)
00001001 6 \(0^* 1001\)
00001010 6 \(0^* 1010\)
00001011 6 \(0^* 1011\)
00001100 6 \(0^* 1100\)
00001101 6 \(0^* 1101\)
00001110 6 \(0^* 1110\)
00001111 5 d0: (1, 2, 3, 3, 0), d1: (4, 0, 1, 2, 0) better than \(0^* 1111\)
00010000 6 \(0001 0^*\)
00010001 \(\le\)5 \((0001)^*\)
00010010 \(\le\)6 \(0(001)^* 0\)
00010011 \(\le\)6 \(0(001)^* 1\)
00010100 \(\le\)6 \( 00 (01010…)00\)
00010101 \(\le\)5 \( 00 (01)*\)
00010110 5 witness (K-H)
00010111 5 no trash
00011000 \(\le\)6 \( 00011,00011,…\)
00011001 \(\le\)6 \( 0 (0011,0011,…)\)
00011010 6 witness (K-H)
00011011 \(\le\)6 \( 00(011)^*\)
00011100 5 witness (K-H)
00011101 6 witness (K-H)
00011110 \(\le 6\) (Christina M) (Jiehua H)
00011111 \(\le 5\) (Jiehua H)
00100000 \(\le 5\) (Christina M) (Jiehua H)
00100001 \(\le 6\) (Christina M) (Jue W)
00100010 \(\le 6\) (Jue W)
00100011 \(\le 6\) (Jue W)
00100100 \(\le 6\) (Jue W)
00100101 \(\le 5\) (Jue W)
00100110 \(\le 6\) (Jue W)
00100111 \(\le 6\) (Jue W)
00101000 \(\le 6\): \(a \rightarrow_0 b \rightarrow_0 c \rightarrow_1 b\) and \(c \rightarrow_0 d \rightarrow_0 e\) plus one trash (Frank S)
00101001 \(\le 6\) (Jue W)
00101010 \(4\) (Christina M)
00101011 5 \(\le 7\) d0: (1, 2, 1, 0, 4), d1: (3, 0, 4, 0, 2)
00101100 \(\le 7\)
00101101 \(\le 7\)
00101110 \(\le 7\)
00101111 \(\le 6\) (Christina M)
00110000 \(\le 6\) (Christina M) (Jue W)
00110001 \(\le 6\) (Jue W)
00110010 \(\le 7\)
00110011 \(\le 7\)
00110100 \(\le 7\)
00110101 \(\le 7\)
00110110 \(\le 7\)
00110111 \(\le 7\)
00111000 \(\le 6\) (Jue W)
00111001 \(\le 7\)
00111010 \(\le 6\) (Jue W)
00111011 \(\le 7\)
00111100 \(\le 6\) (Christina M) (Jue W)
00111101 \(\le 6\) (Christina M) (Jue W)
00111110 \(\le 5\) (Christina M) (Jue W)
00111111 \(4\) (Christina M) (Jue W)
01000000 \(4\) (Christina M) (Jue W)
01000001 \(\le 5\) (Christina M) (Jue W)
01000010 \(\le 6\) (Christina M) (Jue W)
01000011 \(\le 6\) (Christina M) (Jue W)
01000100 \(\le 7\)
01000101 \(\le 7\)
01000110 \(\le 7\)
01000111 \(\le 7\)
01001000 \(\le 7\)
01001001 \(\le 7\)
01001010 \(\le 7\)
01001011 \(\le 7\)
01001100 \(\le 7\)
01001101 \(\le 7\)
01001110 \(\le 7\)
01001111 \(\le 6\) (Christina M)
01010000 \(\le 6\) (Christina M)
01010001 \(\le 6\) (Christina M)
01010010 \(\le 6\) (Christina M)
01010011 \(\le 6\) (Christina M)
01010100 4 \(\le 4\) (Christina M)
01010101 3 (\(\le 4\) Christina M)
01010110 \(\le 6\) (Christina M)
01010111 \(\le 6\) (Christina M)
01011000 \(\le 7\)
01011001 \(\le 7\)
01011010 \(\le 7\)
01011011 \(\le 7\)
01011100 \(\le 7\)
01011101 \(\le 7\)
01011110 \(\le 6\) (Christina M)
01011111 \(5\) (Christina M)
01100000 \(5\) (Christina M)
01100001 \(\le 6\) (Christina M)
01100010 \(\le 7\)
01100011 \(\le 7\)
01100100 \(\le 7\)
01100101 \(\le 7\)
01100110 \(\le 7\)
01100111 \(\le 7\)
01101000 \(\le 7\)
01101001 \(\le 7\)
01101010 \(\le 7\)
01101011 \(\le 7\)
01101100 \(\le 7\)
01101101 \(\le 7\)
01101110 \(\le 7\)
01101111 \(\le 6\) 4 consecutive
01110000 \(\le 6\) (Christina M)
01110001 \(\le 7\)
01110010 \(\le 7\)
01110011 \(\le 7\)
01110100 \(\le 7\)
01110101 \(\le 7\)
01110110 \(\le 7\)
01110111 \(5\) (Jue W)
01111000 \(\le 6\) (Christina M)
01111001 \(\le 6\) 4 consecutive
01111010 \(\le 6\) (Christina M)
01111011 \(\le 6\) (Chelsea T) (Christina M)
01111100 \(5\) (Chelsea T) (Christina M)
01111101 \(5\) (Chelsea T) (Christina M)
01111110 \(4\) (Chelsea T) (Christina M)
01111111 \(3\) (Chelsea T) (Christina M)

Research statement

The primary focus of my research is algorithmic randomness, especially its relation to randomness extraction and to probability and statistics. More generally I have worked in various areas of computability theory (recursion theory).

The basic idea of algorithmic randomness is to declare an object to be algorithmically random if it belongs to no “computable” set of measure zero. Since only computable sets can truly be understood by humans, an algorithmically random object will be as random as one can desire. This way one can formulate questions such as: can one compute a random object from an object that is almost random? Can one compute anything useful from a random object? etc. So algorithmic randomness is a natural setting for randomness extraction (although not the only natural setting).

Martin-Löf (ML-) randomness is a notion of algorithmic randomness, arguably the most natural one. It basically corresponds to the class of all Turing machines, whereas other randomness notions correspond to those Turing machines that always halt, say. It turns out that the collection of all Martin-Löf random sets is simpler in terms of definability than rival notions. It is a \(\Sigma^0_2\) class, whereas other notions tend to give \(\Pi^0_3\) classes or worse. It is an \(F_\sigma\) set rather than \(F_{\sigma\delta}\). Thus each ML-random set belongs to a closed set containing nothing but ML-random sets, and this is a basic tool in the area.

Open problem: Let \(\text{MLR}\) denote the collection of all Martin-Löf random subsets of \(\mathbb N\) and let \(\le_T\) denote Turing reducibility.

Suppose \(A\subseteq B\subseteq C\), \(A\in\text{MLR}\), and \(C\in\text{MLR}\). Then is it true that \((\exists X\in\text{MLR})(X\le_T B)\)?

Note: Hertling (2005) showed that it is not necessarily true that \(B\in\text{MLR}\).

This is one of many questions of the form: if a set is close to being random, then can we extract randomness from it? I have worked primarily on two notions of being close to random: being close in asymptotic Hamming distance, and being an infinite subset.

The current state of knowledge about infinite subsets is that each Schnorr 2-random set \(X\) has an infinite subset \(Y\) such that no Martin-Löf random set is Turing reducible to \(Y\). This is the main result of the paper A strong law of computationally weak subsets. The main idea of the proof is to combine the Extinction Criterion for Galton-Watson processes with a construction from the paper Comparing DNR and WWKL.

In the case of asymptotic Hamming distance, the paper Recovering randomness from an asymptotic Hamming distance has some results. One can define asymptotic Hamming distance in two ways: either require that the Hamming distance of the prefixes of \(X\) and \(Y\) of length \(n\) is small for all \(n\), or merely for infinitely many \(n\). It turned out to be most natural, in a way, to require small distance on an infinite computable set of values of \(n\). This is related to Michel Weber’s strengthening of the Law of the Iterated Logarithm. This law has a somewhat strange form, involving the logarithm of the logarithm function. However, when one looks at what happens at an infinite sparse set of times, this strangeness disappears. In terms of this notion of asymptotic Hamming distance I obtained the following type of result:

For each Turing reduction \(\Phi\) and each sufficiently random set \(X\), there is a set \(Y\) close to \(X\) such that \(\Phi^Y\) is not random.

This has the corollary that from a set \(Y\) that is complex in the sense of Kjos-Hanssen, Merkle, and Stephan (TAMS, 2011), and simultaneously has positive effective packing dimension, one cannot uniformly compute any set \(Z\) that is even Mises-Wald-Church stochastic. A goal is to strengthen this so that \(Y\) has positive effective Hausdorff dimension. Another possible way it can be strengthened is to deal with all Turing reductions at once. Conceivably there is a route here to prove that for each sufficiently random set \(X\), there is a set \(Y\) that is so close to \(X\) as to force \(Y\) to have effective Hausdorff dimension 1; and such that no 1-random set is Turing reducible to \(Y\). This would strengthen a result of Greenberg and Miller which asserts that such sets \(Y\) exist, if we remove the assumption of being close to a random set.