All posts by admin


The complexity of complexity

The paper “On the complexity of automatic complexity” is to appear in Theory of Computing Systems, in a post-conference journal issue for Computability, Complexity and Randomness 2015 held in Heidelberg, Germany.

While it is not known whether the set of strings of maximal nondeterministic automatic complexity is NP-complete (hence the paper is called “On the complexity…” rather than just “The complexity…”), the paper shows that the more general problem for automatic complexity of equivalence relations is NP-complete. It is also shown that the set of highly complex strings is not context-free.


Shift registers fool finite automata @ WoLLIC 2017

LFSRs (linear feedback shift registers) are popular pseudorandomness generators.
In a new project we show that they generate output (often called $m$-sequences) of maximal (nondeterministic path-based) automatic complexity. At this point we have an experimental result, one which would occur “by chance” with probability only $2^{-93}$, as well as a theoretical but sub-optimal result.
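
For concreteness, here is a minimal sketch (in Python) of a 5-bit LFSR. The feedback polynomial $x^5+x^2+1$ and the seed are illustrative assumptions and need not match the registers used in the paper; since the polynomial is primitive, the output is an $m$-sequence of period $2^5-1=31$.

```python
def lfsr_m_sequence(seed=0b00001, length=31):
    """Fibonacci LFSR for the primitive polynomial x^5 + x^2 + 1 over GF(2).

    With any nonzero 5-bit seed the output has period 2**5 - 1 = 31,
    i.e. it is an m-sequence. The seed here is an illustrative choice.
    """
    state = seed
    out = []
    for _ in range(length):
        bit = state & 1                    # output bit b_n (low-order bit)
        out.append(bit)
        fb = bit ^ ((state >> 2) & 1)      # recurrence b_{n+5} = b_n XOR b_{n+2}
        state = (state >> 1) | (fb << 4)   # shift right, feed back into bit 4
    return out

print("".join(map(str, lfsr_m_sequence())))  # one full period of a length-31 m-sequence
```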

Moreover, an $m$-sequence of length 31 provides an example of a word $x$ such that
$$
A^-(x)=A_N(x)+2
$$
where $A_N$ is nondeterministic path-based automatic complexity, and $A^-$ is non-total deterministic automatic complexity.
Such an example (where $A^-(x)-A_N(x)>1$) was not known before LFSRs were considered in this area, an idea due to Jason Castiglione.

The paper has been accepted at WoLLIC 2017.

This project was presented at the poster session of the SIAM Conference on Discrete Mathematics 2016 in Atlanta, Georgia. The session was otherwise dominated by interesting work on RNA pseudoknots and chord diagrams (3 of the 6 posters), which, in the case of the work of Biocomplexity Institute researchers Ricky Chen and Thomas Li, involves modeling with multiply context-free grammars.


Superposition as memory @ UCNC17

Imagine a lock with two states, locked and unlocked, which may be manipulated using two operations, called 0 and 1. Moreover, the only way to unlock it with certainty using four operations is to perform them in the sequence 0011, i.e., $0^n1^n$ where $n=2$. In this scenario one might think that the lock needs to be in certain further states after each operation, so that there is some memory of what has been done so far. Here we show that this memory can be entirely encoded in superpositions of the two basic states locked and unlocked, where, as dictated by quantum mechanics, the operations are given by unitary matrices. Moreover, we show using the Jordan–Schur lemma that a similar lock is not possible for $n=60$.
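
To make “unlock with certainty” concrete, here is a sketch (using NumPy) of a checker that, for two candidate unitaries $U_0$ and $U_1$, prints the probability that each four-operation sequence maps the locked state to the unlocked state. The matrices used below are placeholders (a rotation and a phase gate), not the paper's construction, and they do not reproduce its lock; the paper's unitaries are chosen so that only 0011 reaches the unlocked state with probability 1.

```python
# Checker sketch: unlock probability of every length-4 operation sequence.
# U0 and U1 below are illustrative placeholders only; substitute the paper's
# unitaries to obtain a lock opened with certainty only by 0011.
import itertools
import numpy as np

locked = np.array([1, 0], dtype=complex)
unlocked = np.array([0, 1], dtype=complex)

theta = np.pi / 4
U0 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]], dtype=complex)  # placeholder rotation
U1 = np.array([[1, 0],
               [0, 1j]], dtype=complex)                           # placeholder phase gate

ops = {0: U0, 1: U1}

for word in itertools.product([0, 1], repeat=4):
    state = locked
    for letter in word:                          # apply the operations in order
        state = ops[letter] @ state
    prob = abs(np.vdot(unlocked, state)) ** 2    # probability of ending up unlocked
    print("".join(map(str, word)), round(prob, 4))
```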

Details are in the paper “Superposition as memory: unlocking quantum automatic complexity”, which appeared in the Lecture Notes in Computer Science volume of the conference Unconventional Computation and Natural Computation (UCNC) 2017.

Slides


Dirac’s belt trick as a homotopy

Paint one of the “rungs” of the belt.

(When the belt is worn by someone who is standing up, a rung will be a vertical strip.)

Take a video of the performance of the belt trick. Notice how the painted rung is rotating.
The homotopy is $H:X\times [0,1]\to Y$ where $X$ is the interval $[0,1]$ and $Y$ is $\mathrm{SO}(3)$.
$x\mapsto H(x,0)$ is the twisted belt and $x\mapsto H(x,1)$ is the straightened belt.

Each $x$ is a rung of the belt.

$H(x,t)$ is the rotation of rung $x$ at time $t$ in the video.

The rungs do not merely rotate, they are also translated in space during the belt trick. However, we can ignore the translation and focus on the rotation.
For instance, the top rung (where the buckle is) is translated but does not rotate.
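
For the doubly twisted belt (a total twist of $4\pi$, as in the usual version of the trick), one explicit choice of $H$ can be written down using unit quaternions; the conventions below (the belt's long axis is $k$, and $i$ is a fixed perpendicular axis) are assumptions for illustration. Let
$$
q(x,t)=e^{\pi x k}\,e^{\pi x\, n(t)},\qquad n(t)=\cos(\pi t)\,k+\sin(\pi t)\,i,
$$
and let $H(x,t)\in\mathrm{SO}(3)$ be the rotation represented by the unit quaternion $q(x,t)$. Then $q(x,0)=e^{2\pi x k}$, so $H(x,0)$ is rotation by $4\pi x$ about $k$ (the twisted belt); $q(x,1)=e^{\pi x k}e^{-\pi x k}=1$, so $H(x,1)$ is the identity (the straightened belt); and $q(0,t)=q(1,t)=1$, so the two end rungs of the belt stay fixed throughout.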

ASL Annual Meetings in North America

| Midwest ($n-2$) | East ($n-1$) | West ($n$) | $n$ |
|---|---|---|---|
| Western Illinois | | | 2020 |
| UIUC | UConn | Boise State | 2017 |
| UW Madison | Waterloo, ON | UC Boulder | 2014 |
| Notre Dame | Washington, D.C.** | UC Berkeley | 2011 |
| Montréal, QC | U. Florida* | UC Irvine* | 2008 |
| UIC | Carnegie Mellon | Stanford* | 2005 |
| UIUC | U. Pennsylvania | Las Vegas | 2002 |

**invited plenary speaker
*special session speaker
(unmarked) attendee


Few paths, fewer words, and the maximum probability of writing 001 in a two-state Hidden Markov Model being $8/27$

The aim of this note is to give the simplest possible non-trivial calculation of the parameters of an HMM that maximize the probability of emitting a certain string.

Let $\{0,1\}$ be our alphabet.
Let $p$ be the probability of emitting 1 in state $s_0$.
Let $q$ be the probability of emitting 1 in state $s_1$.
Let $\epsilon$ be the probability of transitioning from $s_0$ to $s_1$.
Let $\delta$ be the probability of transitioning from $s_1$ to $s_0$.
Let $S(t)$ be the state after transitioning $t$ times, a random variable.
The probability of emitting the string 001 when starting in state $s_0$ is then
$$
\begin{aligned}
f(p,q,\epsilon,\delta)&=\Pr(001; S(1)=s_0=S(2))+\Pr(001; S(1)=s_0, S(2)=s_1)\\
&\quad+\Pr(001; S(1)=s_1, S(2)=s_0)+\Pr(001; S(1)=s_1=S(2))\\
&=\overline p^2 p \overline\epsilon^2 + \overline p^2q\overline\epsilon\epsilon + \overline p\overline q p\epsilon\delta + \overline p\overline q q \epsilon\overline\delta.
\end{aligned}
$$
Which choice of parameters $p, q, \epsilon, \delta$ will maximize this probability?
To answer this we first note that
$$\frac{\partial f}{\partial\delta}=\overline p\,\overline q\,\epsilon\,(p-q)=0\iff p=1, q=1, \epsilon=0\text{ or }p=q.$$
Going through these possibilities we keep finding values of $f$ bounded above by $1/4$ (note that $\overline p^2p\le 4/27<1/4$); a symbolic check of these cases appears after the list:

  1. $p=1$ immediately gives $f=0$.
  2. $q=1$ gives $f=\overline p^2 p \overline\epsilon^2 + \overline p^2\overline\epsilon\epsilon=\overline p^2\overline\epsilon(p\overline\epsilon+\epsilon)$; this is the same expression as in the boundary case $\delta=1$, $q=1$ below, and is at most $1/4$.
  3. $\epsilon=0$ gives $f=\overline p^2 p.$
  4. $p=q$ gives $f=\overline p^2 p \overline\epsilon^2 + \overline p^2p\overline\epsilon\epsilon + \overline p^2 p\epsilon\delta + \overline p^2 p (\epsilon\overline\delta)=\overline p^2p(\overline\epsilon^2 + \overline\epsilon\epsilon + \epsilon\delta + \epsilon\overline\delta)=\overline p^2p.$
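
These reductions can be double-checked symbolically; here is a small sketch using SymPy (assumed available), with ad hoc variable names.

```python
# Symbolic sanity check of the derivative and of the four cases above (SymPy).
import sympy as sp

p, q, e, d = sp.symbols('p q epsilon delta')
pb, qb, eb, db = 1 - p, 1 - q, 1 - e, 1 - d
f = pb**2*p*eb**2 + pb**2*q*eb*e + pb*qb*p*e*d + pb*qb*q*e*db

print(sp.factor(sp.diff(f, d)))   # epsilon*(1-p)*(1-q)*(p-q), up to sign/ordering
print(sp.simplify(f.subs(p, 1)))  # case 1: 0
print(sp.factor(f.subs(q, 1)))    # case 2: (1-p)**2*(1-epsilon)*(p*(1-epsilon)+epsilon), up to form
print(sp.factor(f.subs(e, 0)))    # case 3: p*(1-p)**2
print(sp.factor(f.subs(q, p)))    # case 4: p*(1-p)**2
```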

We next consider boundary values for $\delta$.

  1. $\delta=0$. We may assume $p=0$, since there is no use in allowing a positive probability of emitting a 1 in state $s_0$ when there is no chance of ever returning to that state. Setting $\partial f/\partial q=0$ then gives $\epsilon=2\overline q$, and hence $f=2q^2\overline q$. This is maximized at $q=2/3$, which corresponds to $\epsilon=2/3$ as well, and gives the value $f=8/27>1/4$.

    This $8/27$ decomposes as the sum of the probabilities of two disjoint scenarios, each of probability $4/27$:

    1. One is that after writing the first 0 we stay in state $s_0$, write another 0, and then transition to state $s_1$ to write a 1.
    2. The other is that after writing the first 0 we move to state $s_1$, write the 2nd zero there, and stay there to write the 3rd letter, 1.
  2. $\delta=1$. Then $f=\overline p^2 p \overline\epsilon^2 + \overline p^2q\overline\epsilon\epsilon + \overline p\overline q p\epsilon
    =\overline p(\overline p p \overline\epsilon^2 + \overline pq\overline\epsilon\epsilon +\overline q p\epsilon)$.
    Then $0=\partial f/\partial q = \overline p\epsilon(\overline p\overline\epsilon - p)$ when $p=\frac{\overline\epsilon}{1+\overline\epsilon}$. This gives $f=\frac{\overline\epsilon}{(1+\overline\epsilon)^3}$ (which turns out not to depend on $q$), maximized at $\epsilon=1/2$ with value $f=4/27$. So we consider boundary values for $q$:

    1. $q=0$. Then $f=\overline pp(\overline p\overline\epsilon^2+\epsilon)\le \frac14\cdot 1$.
    2. $q=1$. Then $f=\overline p^2p\overline\epsilon^2+\overline p^2\overline\epsilon\epsilon$ and $\partial f/\partial\epsilon=\overline p^2(1-2\epsilon-2p\overline\epsilon)=0$ if $\overline p=1/(2\overline\epsilon)$, which gives $f=\overline p/4\le 1/4$.

Now note how much easier this is if we only consider a single path. Then $1/4$ is clearly the maximum possible probability, attained via 3 of the 4 paths, because each path probability contains a factor of the form $a\overline a\le 1/4$.
Replacing each such occurrence by $1/4$, we upper bound $f$ by
$$\frac14\left(\overline p \overline\epsilon^2 + \overline p^2q + \overline q\epsilon\delta + \overline p \epsilon\overline\delta\right).
$$
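
As a numerical sanity check (not part of the argument above), one can brute-force the maximum over a grid; here is a sketch in Python with an ad hoc step size of $1/30$, chosen so that the claimed maximizer $p=0$, $q=\epsilon=2/3$, $\delta=0$ lies on the grid.

```python
# Brute-force check: maximize f(p, q, eps, delta) over a grid with step 1/30.
import itertools

def f(p, q, eps, delta):
    pb, qb, eb, db = 1 - p, 1 - q, 1 - eps, 1 - delta
    return (pb**2 * p * eb**2            # path s0 -> s0 -> s0
            + pb**2 * q * eb * eps       # path s0 -> s0 -> s1
            + pb * qb * p * eps * delta  # path s0 -> s1 -> s0
            + pb * qb * q * eps * db)    # path s0 -> s1 -> s1

grid = [i / 30 for i in range(31)]
best = max(f(p, q, e, d) for p, q, e, d in itertools.product(grid, repeat=4))
print(best, 8 / 27, f(0, 2/3, 2/3, 0))   # all three values ~ 0.296296
```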


Computability Theory List Server

As of June 15, 2006, we are not posting emails for ANY third party. To post, one must be a subscriber to the list. If you are having problems, first confirm that you are a subscriber (directions are below), and if that does not work, please remove yourself from the list and resubscribe. Directions on how to do this can be found by following the links below.

To use the list, just send email to comp-thy@lists.hawaii.edu; the list server will take care of the rest. You must be a member of the list to send mail to the list. Anyone is free to join the list. Use the list just as you would a normal email address, except for the fact that everyone subscribed to the list will receive a copy of your email. It may take some time before your message reaches everyone on the list. You may use the list as you see fit, although it would be best if it were used for short announcements of interest to all computability theorists.

A WORD OF CAUTION: Large files cause problems for many mailers.

Using the list server

The list server at the University of Hawaii maintains the mailing list. It can do many things. For example, it can be used to subscribe, unsubscribe, or look at the archive for the list. These and other tasks are completed by issuing commands to the list server. The easiest way to do this is to use the WWW interface at listserv.hawaii.edu.

Note that only COMP-THY subscribers may access the list archives. When you attempt to access the archives, you will be asked for your email address and a password. If you already have a password for listserv.hawaii.edu, use that password to access the COMP-THY list. If you do not have one, go to listserv.hawaii.edu to sign up.


Few paths, fewer words: model selection with automatic structure functions

The paper “Kolmogorov structure functions for automatic complexity in computational statistics” appeared in the Lecture Notes in Computer Science proceedings of the conference COCOA 2014 in Maui, Hawaii.
The paper then appeared in the journal Theoretical Computer Science in 2015.

The ideas are implemented in the Structure function calculator.

A new paper, “Few paths, fewer words: model selection with automatic structure functions”, has been conditionally accepted for publication in Experimental Mathematics.

Some slides