Superposition as memory: unlocking quantum automatic complexity

Imagine a lock with two states, locked and unlocked, which may be manipulated using two operations, called 0 and 1. Moreover, the only way to unlock it with certainty using four operations is to perform them in the sequence 0011, i.e., $0^n1^n$ where $n=2$. In this scenario one might think that the lock needs to pass through certain further states after each operation, so that there is some memory of what has been done so far. Here we show that this memory can be entirely encoded in superpositions of the two basic states, locked and unlocked, where, as dictated by quantum mechanics, the operations are given by unitary matrices. Moreover, we show using the Jordan–Schur lemma that a similar lock is not possible for $n=60$.
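The lock condition can be stated concretely: fix unit vectors for the two basic states and 2×2 unitaries $U_0$ and $U_1$; a word $w$ unlocks with certainty if and only if applying the corresponding sequence of unitaries to the locked state yields the unlocked state up to phase. Here is a minimal Python sketch of a checker for this condition (the two matrices at the end are placeholders chosen for illustration, not the construction from the paper):

```python
# States are 2-component complex vectors: index 0 = locked, 1 = unlocked.
LOCKED = (1 + 0j, 0j)

def apply(U, v):
    """Apply a 2x2 matrix U (given as a tuple of rows) to a vector v."""
    return (U[0][0] * v[0] + U[0][1] * v[1],
            U[1][0] * v[0] + U[1][1] * v[1])

def unlock_probability(word, U0, U1):
    """Probability of observing 'unlocked' after applying the operations
    named by word (a string over {0, 1}) to the locked state."""
    state = LOCKED
    for c in word:
        state = apply(U0 if c == "0" else U1, state)
    return abs(state[1]) ** 2

def is_lock_for_0011(U0, U1):
    """True iff 0011 is the only length-4 word that unlocks with certainty."""
    probs = {w: unlock_probability(w, U0, U1)
             for w in (format(i, "04b") for i in range(16))}
    return (abs(probs["0011"] - 1) < 1e-9
            and all(p < 1 - 1e-9 for w, p in probs.items() if w != "0011"))

# Placeholder unitaries (NOT the paper's construction): U0 = identity,
# U1 = bit flip. These fail, since the two flips in 0011 cancel out.
I = ((1, 0), (0, 1))
X = ((0, 1), (1, 0))
print(is_lock_for_0011(I, X))  # False
```

The paper's $n=2$ result says that unitaries making `is_lock_for_0011` return True do exist.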

Details are in the paper “Superposition as memory: unlocking quantum automatic complexity”, to appear in the Lecture Notes in Computer Science volume of the conference Unconventional Computation and Natural Computation (UCNC) 2017.


Few paths, fewer words, and the maximum probability of writing 001 in a two-state Hidden Markov Model being $8/27$

The aim of this note is to give the simplest possible non-trivial calculation of the parameters of an HMM that maximize the probability of emitting a given string.

Let $\{0,1\}$ be our alphabet.
Let $p$ be the probability of emitting 1 in state $s_0$.
Let $q$ be the probability of emitting 1 in state $s_1$.
Let $\epsilon$ be the probability of transitioning from $s_0$ to $s_1$.
Let $\delta$ be the probability of transitioning from $s_1$ to $s_0$.
Let $S(t)$ be the state after transitioning $t$ times, a random variable.
The probability of emitting the string 001 when starting in state $s_0$ is then
$$f(p,q,\epsilon,\delta)=\Pr(001; S(1)=s_0=S(2))+\Pr(001; S(1)=s_0, S(2)=s_1)$$
$$+\Pr(001; S(1)=s_1, S(2)=s_0)+\Pr(001; S(1)=s_1=S(2))$$
$$=\overline p^2 p \overline\epsilon^2 + \overline p^2q\overline\epsilon\epsilon + \overline p\overline q p\epsilon\delta + \overline p\overline q q \epsilon\overline\delta.$$
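As a sanity check, the closed form can be compared with a direct sum over the four hidden-state paths; a small Python sketch (my own, not from the note):

```python
from itertools import product

def f_closed(p, q, eps, delta):
    """The closed form above for the probability of emitting 001 from s0."""
    pb, qb, eb, db = 1 - p, 1 - q, 1 - eps, 1 - delta
    return (pb*pb*p*eb*eb + pb*pb*q*eb*eps
            + pb*qb*p*eps*delta + pb*qb*q*eps*db)

def f_paths(p, q, eps, delta):
    """The same probability as an explicit sum over the four hidden-state
    paths (S(1), S(2)), starting in s0 and emitting 0, 0, 1."""
    emit = {0: (1 - p, p), 1: (1 - q, q)}               # emit[s] = (P(0), P(1))
    trans = {0: (1 - eps, eps), 1: (delta, 1 - delta)}  # trans[s] = (P(s0), P(s1))
    return sum(emit[0][0] * trans[0][s1]      # emit 0 in s0, move to S(1)
               * emit[s1][0] * trans[s1][s2]  # emit 0 in S(1), move to S(2)
               * emit[s2][1]                  # emit 1 in S(2)
               for s1, s2 in product((0, 1), repeat=2))

print(abs(f_closed(0.3, 0.7, 0.4, 0.9) - f_paths(0.3, 0.7, 0.4, 0.9)) < 1e-12)
print(round(f_closed(0, 2/3, 2/3, 0), 6))  # 0.296296, i.e. 8/27
```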
Which choice of parameters $p, q, \epsilon, \delta$ will maximize this probability?
To answer this we first note that
$$\frac{\partial f}{\partial\delta}=0\iff p=1, q=1, \epsilon=0\text{ or }p=q.$$
Going through these possibilities we keep finding values of $f$ bounded above by $1/4$ (in cases 1, 3 and 4 even by $\overline p^2p\le 4/27$):

  1. $p=1$ immediately gives $f=0$.
  2. $q=1$ gives $f=\overline p^2 p \overline\epsilon^2 + \overline p^2\overline\epsilon\epsilon=\overline p\,(\overline p\overline\epsilon)(1-\overline p\overline\epsilon)\le \overline p/4\le 1/4.$
  3. $\epsilon=0$ gives $f=\overline p^2 p.$
  4. $p=q$ gives $f=\overline p^2 p \overline\epsilon^2 + \overline p^2p\overline\epsilon\epsilon + \overline p^2 p\epsilon\delta + \overline p^2 p \epsilon\overline\delta=\overline p^2p(\overline\epsilon^2 + \overline\epsilon\epsilon + \epsilon\delta + \epsilon\overline\delta)=\overline p^2p.$
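Case 4 can be spot-checked numerically: with $p=q$ the value of $f$ does not depend on $\epsilon$ or $\delta$, and $\overline p^2 p$ peaks at $p=1/3$ with value $4/27$. A quick sketch (my own illustration):

```python
import random

def f(p, q, eps, delta):
    """Probability of emitting 001 from s0, as in the closed form above."""
    pb, qb, eb, db = 1 - p, 1 - q, 1 - eps, 1 - delta
    return (pb*pb*p*eb*eb + pb*pb*q*eb*eps
            + pb*qb*p*eps*delta + pb*qb*q*eps*db)

random.seed(0)
# Case 4: with p = q, the transition parameters cancel out of f.
for _ in range(1000):
    p, eps, delta = (random.random() for _ in range(3))
    assert abs(f(p, p, eps, delta) - (1 - p)**2 * p) < 1e-12

# (1-p)^2 p is maximized at p = 1/3, giving 4/27.
assert abs((2/3)**2 * (1/3) - 4/27) < 1e-15
print("case 4 checked")
```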

We next consider boundary values for $\delta$.

  1. $\delta=0$. We may assume $p=0$, since there is no use in a positive probability of emitting a 1 in state $s_0$ if there is no chance of ever returning to that state. Setting $\partial f/\partial q=0$ we find $\epsilon=2\overline q$, which gives $f=2q^2\overline q$. This is maximized at $q=2/3$, which corresponds to $\epsilon=2/3$ as well, and gives the value $f=8/27>1/4$.

    This $8/27$ is decomposable as a sum of two disjoint scenarios of probability $4/27$:

    1. One is that after writing the first 0 we stay in state $s_0$, write another 0, and then transition to state $s_1$ to write a 1.
    2. The other is that after writing the first 0 we move to state $s_1$, write the second 0 there, and stay there to write the third letter, a 1.
  2. $\delta=1$. Then $f=\overline p^2 p \overline\epsilon^2 + \overline p^2q\overline\epsilon\epsilon + \overline p\overline q p\epsilon=\overline p(\overline p p \overline\epsilon^2 + \overline pq\overline\epsilon\epsilon +\overline q p\epsilon)$.
    Then $0=\partial f/\partial q = \overline p\epsilon(\overline p\overline\epsilon - p)$ when $p=\frac{\overline\epsilon}{1+\overline\epsilon}$. This gives $f=\frac{\overline\epsilon}{(1+\overline\epsilon)^3}$ (which turns out not to depend on $q$), maximized at $\epsilon=1/2$ with value $f=4/27$. So we consider boundary values for $q$:

    1. $q=0$. Then $f=\overline pp(\overline p\overline\epsilon^2+\epsilon)\le \frac14\cdot 1$.
    2. $q=1$. Then $f=\overline p^2p\overline\epsilon^2+\overline p^2\overline\epsilon\epsilon$ and $\partial f/\partial\epsilon=\overline p^2(1-2\epsilon-2p\overline\epsilon)=0$ if $\overline p=1/(2\overline\epsilon)$, which gives $f=\overline p/4\le 1/4$.
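The case analysis can be corroborated by brute force. A grid search over all four parameters, with step $1/15$ so that the claimed maximizer $p=0$, $q=\epsilon=2/3$, $\delta=0$ lies on the grid, recovers $8/27$:

```python
def f(p, q, eps, delta):
    """Probability of emitting 001 from s0, as in the closed form above."""
    pb, qb, eb, db = 1 - p, 1 - q, 1 - eps, 1 - delta
    return (pb*pb*p*eb*eb + pb*pb*q*eb*eps
            + pb*qb*p*eps*delta + pb*qb*q*eps*db)

n = 15  # step 1/15 puts 0, 2/3 and 1 on the grid
grid = [i / n for i in range(n + 1)]
best_val, best_pt = max(
    ((f(p, q, e, d), (p, q, e, d))
     for p in grid for q in grid for e in grid for d in grid),
    key=lambda t: t[0])
print(round(best_val, 6))  # 0.296296, i.e. 8/27
```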

Now note how much easier this is if we only consider a single path. Then clearly $1/4$ is the maximum possible, via 3 different paths, because of the presence of terms of the form $a\overline a$.
Replacing each such occurrence by $1/4$, we bound $f$ above by
$$\frac14\left(\overline p \overline\epsilon^2 + \overline p^2q + \overline q\epsilon\delta + \overline p \epsilon\overline\delta\right).$$
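That this expression really dominates $f$ everywhere is easy to check at random sample points (a quick sketch):

```python
import random

def f(p, q, eps, delta):
    """Probability of emitting 001 from s0, as in the closed form above."""
    pb, qb, eb, db = 1 - p, 1 - q, 1 - eps, 1 - delta
    return (pb*pb*p*eb*eb + pb*pb*q*eb*eps
            + pb*qb*p*eps*delta + pb*qb*q*eps*db)

def bound(p, q, eps, delta):
    """The upper bound: each a*(1-a) pair in a term is replaced by 1/4."""
    pb, qb, eb, db = 1 - p, 1 - q, 1 - eps, 1 - delta
    return 0.25 * (pb*eb*eb + pb*pb*q + qb*eps*delta + pb*eps*db)

random.seed(1)
assert all(f(*args) <= bound(*args) + 1e-12
           for args in ([random.random() for _ in range(4)]
                        for _ in range(10000)))
print("bound dominates f")
```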


Computability Theory List Server

As of June 15, 2006, we are not posting emails for ANY third party. To post, one must be a subscriber to the list. If you are having problems, first confirm that you are a subscriber (directions are below); if that does not work, please remove yourself from the list and resubscribe. Directions on how to do this can be found by following the links below.

To use the list just send email to comp-thy@lists.hawaii.edu; the list server will take care of the rest. You must be a member of the list to send mail to it, and anyone is free to join. Use the list just as you would a normal email address, except that everyone subscribed to the list will receive a copy of your email. It may take some time before your message reaches everyone on the list. You may use the list as you see fit.

That said, it would be best if it were used for short announcements of interest to all computability theorists.

A WORD OF CAUTION: Large files cause problems for many mailers.

Using the list server

The list server at the University of Hawaii maintains the mailing list. It can do many things: for example, it can be used to subscribe, unsubscribe, or look at the archive for the list. These and other tasks are completed by issuing commands to the list server. The easiest way to do this is to use the WWW interface at listserv.hawaii.edu.

Note that only COMP-THY subscribers may access the list archives. When you attempt to access the archives, you will be asked for your email address and a password. If you already have a password for listserv.hawaii.edu, use it to access the COMP-THY list. If you do not, go to listserv.hawaii.edu to sign up.


Few paths, fewer words: model selection with automatic structure functions

The paper “Kolmogorov structure functions for automatic complexity in computational statistics” appeared in the Lecture Notes in Computer Science proceedings of the conference COCOA 2014, Maui, Hawaii.
The paper then appeared in the journal Theoretical Computer Science in 2015.

The ideas are implemented in the Structure function calculator.

A new paper Few paths, fewer words: model selection with automatic structure functions has been conditionally accepted for publication in Experimental Mathematics.

Some slides


Covering the computable sets

I participated in the workshop on Algorithmic Randomness in Singapore and the conference on Computability, Complexity and Randomness.

Together with my host Frank Stephan and fellow participant Sebastiaan Terwijn, I wrote a paper entitled “Covering the recursive sets”, which appeared in Lecture Notes in Computer Science (Computability in Europe, 2015) and has now been published in the Annals of Pure and Applied Logic.



This article summarizes, for Japanese readers, the recent research of Prof. Bjørn Kjos-Hanssen of the University of Hawaii on automatic complexity.

1. Purpose

2. History of the field of automatic complexity

Late 1960s: Kolmogorov complexity
1973: the Kolmogorov structure function
2001: automatic complexity, introduced by Shallit and Wang
2014: a Vereshchagin-style structure function for automatic complexity

3. Expected results of this research

4. Expected impact within the field

5. Expected impact across the sciences



Saho Kobayashi, Undergraduate, Department of Mathematics and Informatics, Faculty of Science, Chiba University


A Conflict Between Some Semantic Conditions

Damir Dzhafarov, Stefan Kaufmann, Bjørn Kjos-Hanssen, Dave Ripley, et al., at the 2016 ASL Annual Meeting at UConn.


José Carmo and Andrew J.I. Jones have studied contrary-to-duties obligations in a series of papers.

They develop a logical framework for scenarios such as the following:

1. There ought to be no dog.
2. If there is a dog, there ought to be a fence.

One conjecture from Carmo and Jones 1997 was refuted in a rather technical way in my 1996 term paper at the University of Oslo.
The conjecture stated that one could simply add the condition
$$(Z \in \mathrm{ob}(X)) \land (Y \subseteq X) \land (Y \cap Z \ne \emptyset) \rightarrow (Z \in \mathrm{ob}(Y)) \tag{5e}$$
for the conditional obligation operator $\mathrm{ob}$.
In a follow-up paper (2001) they argued that (5e) could be added by weakening some other conditions.
In a new paper, to appear in Studia Logica and presented at the 2016 Association for Symbolic Logic Annual Meeting at UConn, I argue that (5d) and (5e) are in conflict with each other. The argument generalizes and strengthens the 1996 argument.
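Condition (5e) is also easy to model-check on a small universe. The sketch below (my own illustration; the two ob operators are hypothetical toy examples, not from the papers) tests (5e) by brute force over all subsets of a three-element universe:

```python
from itertools import chain, combinations

U = frozenset({0, 1, 2})  # a three-element universe of possible worlds

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def satisfies_5e(ob):
    """Brute-force check of (5e):
    Z in ob(X) and Y subseteq X and Y meets Z  imply  Z in ob(Y)."""
    return all(Z in ob(Y)
               for X in subsets(U)
               for Y in subsets(X)
               for Z in ob(X)
               if Y & Z)

def ob_all(X):
    """Hypothetical toy operator: every nonempty set is obligatory."""
    return {s for s in subsets(U) if s}

def ob_self(X):
    """Hypothetical toy operator: only X itself is obligatory (if nonempty).
    Violates (5e): take X = {0,1}, Y = {0}, Z = X."""
    return {X} if X else set()

print(satisfies_5e(ob_all), satisfies_5e(ob_self))  # True False
```

An analogous brute-force check could be written for (5d), and for the interaction between the two conditions, once (5d) is written out.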