
No modular iterative way to get joint distribution from covariance matrix

Suppose $A$ and $B$ are events with
$$\Pr(A)=3/12,\quad \Pr(B)=4/12,\quad\Pr(A\cap B)=0.$$
Suppose $A'$ and $B'$ are events with
$$\Pr(A')=3/12,\quad\Pr(B')=8/12,\quad\Pr(A'\cap B')=1/12.$$
Notice that the covariance matrix $M$ for the Bernoulli random variables $1_A$, $1_B$ is the same as the one for $1_{A'}$, $1_{B'}$.
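As a quick sanity check (a sketch of my own, not part of the original argument), we can compute both covariance matrices directly:

```python
# Check that (1_A, 1_B) and (1_A', 1_B') have the same covariance matrix.
import numpy as np

def cov_matrix(pA, pB, pAB):
    """Covariance matrix of the indicator variables 1_A, 1_B."""
    varA = pA * (1 - pA)
    varB = pB * (1 - pB)
    covAB = pAB - pA * pB
    return np.array([[varA, covAB], [covAB, varB]])

M  = cov_matrix(3/12, 4/12, 0)      # the events A, B
M2 = cov_matrix(3/12, 8/12, 1/12)   # the events A', B'
assert np.allclose(M, M2)
print(M)
```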
Now suppose we wanted to take any given joint distribution having covariance matrix $M$ and extend it to realize the covariance matrix of $1_A$, $1_B$, $1_C$, where $\Pr(C)=\Pr(C\setminus(A\cup B))=5/12$.

We claim this is impossible if we are given the joint distribution of $A'$ and $B'$. That is, we claim there is no choice of probabilities concerning $C'$ that will give the right covariance matrix.
Note that $\Pr(C')\in\{5/12, 7/12\}$ is necessary: we need $\mathrm{Var}(1_{C'})=\mathrm{Var}(1_C)=(5/12)(7/12)$, and $x(1-x)=(5/12)(7/12)$ has exactly the two solutions $x=5/12$ and $x=7/12$. Moreover
$$0-(3/12)(5/12)=\mathrm{Cov}(A,C)=\mathrm{Cov}(A',C')=E(1_{A'}1_{C'})-E(1_{A'})E(1_{C'})=\Pr(A'\cap C')-(3/12)(5/12\text{ or }7/12)$$
so
$$\Pr(A'\cap C')=(3/12)(0\text{ or }2/12)=0\text{ or }6/144$$
Similarly for $B'$,
$$0-(4/12)(5/12)=\mathrm{Cov}(B,C)=\mathrm{Cov}(B',C')=E(1_{B'}1_{C'})-E(1_{B'})E(1_{C'})=\Pr(B'\cap C')-(8/12)(5/12\text{ or }7/12)$$
so
$$\Pr(B'\cap C')=-20/144+(40\text{ or }56)/144= (20\text{ or }36)/144$$
The choice $\Pr(C')=5/12$, $\Pr(A'\cap C')=0$, $\Pr(B'\cap C')=20/144$ gives, by inclusion-exclusion (the triple intersection vanishes since $\Pr(A'\cap C')=0$), $\Pr(A'\cup B'\cup C')=(36+96+60-12-0-20)/144=160/144>1$, a contradiction.

The other choice $\Pr(C')=7/12$, $\Pr(A'\cap C')=6/144$, $\Pr(B'\cap C')=36/144$ gives $\Pr(C'\setminus(A'\cup B'))\ge 84/144-6/144-36/144=42/144=7/24$, so $\Pr(A'\cup B'\cup C')\ge 10/12 + 7/24 > 1$, also a contradiction.
Q.E.D.
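The case analysis above can be transcribed into a short script (again a sketch of my own) that recovers both contradictions numerically:

```python
# For each admissible value of Pr(C'), the intersection probabilities
# forced by the covariance constraints push the union above 1.
pA, pB, pAB = 3/12, 8/12, 1/12   # the primed events A', B'

for pC in (5/12, 7/12):
    pAC = pA * (pC - 5/12)    # forced by Cov(A',C') = Cov(A,C)
    pBC = -20/144 + pB * pC   # forced by Cov(B',C') = Cov(B,C)
    # lower bound: Pr(A' u B') + Pr(C') - Pr(A' n C') - Pr(B' n C')
    lower = (pA + pB - pAB) + pC - pAC - pBC
    print(f"Pr(C')={pC:.4f}: union >= {lower:.4f}")  # both exceed 1
```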


On the complexity of automatic complexity

The paper “On the complexity of automatic complexity” is to appear in Theory of Computing Systems, in a post-conference journal issue for Computability, Complexity and Randomness 2015 held in Heidelberg, Germany.

While it is not known whether the set of strings of maximal nondeterministic automatic complexity is NP-complete (hence the paper is called “On the complexity…” rather than just “The complexity…”), the paper shows that the more general problem for automatic complexity of equivalence relations is NP-complete. It is also shown that the set of highly complex strings is not context-free.


Shift registers fool finite automata @ WoLLIC 2017

LFSRs (linear feedback shift registers) are popular pseudorandomness generators.
In a new project we show that they generate output (often called $m$-sequences) of maximal (nondeterministic path-based) automatic complexity. At this point we have an experimental result, one which would have probability $2^{-93}$ of occurring “by chance”, as well as a theoretical but sub-optimal result.
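For concreteness, here is a minimal LFSR sketch (my own illustration, not code from the paper). Since $2^5-1=31$ is prime, every irreducible degree-5 feedback polynomial over $\mathrm{GF}(2)$ is primitive, so a 5-bit register with suitable taps outputs an $m$-sequence of length 31:

```python
# A minimal Fibonacci LFSR sketch (an illustration, not the paper's code).
# The taps below correspond to the feedback polynomial x^5 + x^2 + 1,
# which is primitive over GF(2), so the period is maximal: 2^5 - 1 = 31.
def m_sequence(seed=0b00001, n=5):
    state, out = seed, []
    for _ in range(2**n - 1):
        out.append(state & 1)                    # emit the low bit
        fb = (state ^ (state >> 3)) & 1          # XOR of the tap bits
        state = (state >> 1) | (fb << (n - 1))   # shift in the feedback bit
    return ''.join(map(str, out))

x = m_sequence()
print(x, len(x))   # one full period: a binary word of length 31
```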

Moreover, an $m$-sequence of length 31 provides an example of a word $x$ such that
$$
A^-(x)=A_N(x)+2
$$
where $A_N$ is nondeterministic path-based automatic complexity, and $A^-$ is non-total deterministic automatic complexity.
Such an example (where $A^-(x)-A_N(x)>1$) was not known before LFSRs were considered in this area, an idea due to Jason Castiglione.

The paper has been accepted at WoLLIC 2017.

This project was presented at the poster session of the SIAM Conference on Discrete Mathematics 2016 in Atlanta, Georgia. The session was otherwise dominated by interesting work on RNA pseudoknots and chord diagrams (3 out of 6 posters), which in the case of the work of the Biocomplexity Institute researchers Ricky Chen and Thomas Li involves modeling with multiply context-free grammars.

WoLLIC 2017 slides


Superposition as memory @ UCNC17

Imagine a lock with two states, locked and unlocked, which may be manipulated using two operations, called 0 and 1. Suppose moreover that the only way to unlock it with certainty using four operations is to perform them in the sequence 0011, i.e., $0^n1^n$ where $n=2$. In this scenario one might think that the lock needs to be in certain further states after each operation, so that there is some memory of what has been done so far. Here we show that this memory can be entirely encoded in superpositions of the two basic states locked and unlocked, where, as dictated by quantum mechanics, the operations are given by unitary matrices. Moreover, we show using the Jordan–Schur lemma that a similar lock is not possible for $n=60$.
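To make the setup concrete, here is a small checker (a sketch of my own; the matrices are placeholders, not the construction from the paper): given any two $2\times 2$ unitaries for the operations 0 and 1, it lists the length-4 words that send locked to unlocked with certainty.

```python
# List which length-4 words map |locked> to |unlocked> with certainty,
# for candidate unitaries U0, U1. The gates below are placeholders
# (a 45-degree rotation and a phase gate), NOT the paper's construction:
# they let 0011, 1001 and 1100 all unlock with certainty, illustrating
# that choosing U0, U1 making 0011 the UNIQUE such word is the hard part.
import itertools
import numpy as np

theta = np.pi / 4
U0 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])   # placeholder rotation
U1 = np.array([[1, 0],
               [0, 1j]])                           # placeholder phase gate

locked = np.array([1.0, 0.0])     # basis state "locked"
unlocked = np.array([0.0, 1.0])   # basis state "unlocked"

for word in itertools.product('01', repeat=4):
    psi = locked
    for c in word:
        psi = (U0 if c == '0' else U1) @ psi
    prob = abs(np.vdot(unlocked, psi)) ** 2   # Pr(measuring "unlocked")
    if np.isclose(prob, 1.0):
        print(''.join(word), 'unlocks with certainty')
```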

Details are in the paper “Superposition as memory: unlocking quantum automatic complexity”, which appeared in the Lecture Notes in Computer Science volume of the conference Unconventional Computation and Natural Computation (UCNC) 2017.

Slides


Dirac’s belt trick as a homotopy

Paint one of the “rungs” of the belt.

(When the belt is worn by someone who is standing up, a rung will be a vertical strip.)

Take a video of the performance of the belt trick. Notice how the painted rung is rotating.
The homotopy is $H:X\times [0,1]\to Y$ where $X$ is the interval $[0,1]$ and $Y$ is $\mathrm{SO}(3)$.
$x\mapsto H(x,0)$ is the twisted belt and $x\mapsto H(x,1)$ is the straightened belt.

Each $x$ is a rung of the belt.

$H(x,t)$ is the rotation of rung $x$ at time $t$ in the video.

The rungs do not merely rotate; they are also translated in space during the belt trick. However, we can ignore the translation and focus on the rotation.
For instance, the top rung (where the buckle is) is translated but does not rotate.
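For concreteness, one standard explicit formula for such a homotopy is $H(x,t)=R_z(2\pi x)\,R_{n(t)}(2\pi x)$, where the axis $n(t)$ tilts from $+z$ to $-z$; the code below (my own sketch, not from the post) verifies its defining properties numerically.

```python
# An explicit belt-trick homotopy: H(x,t) = R(z, 2*pi*x) @ R(n(t), 2*pi*x)
# with n(t) tilting from +z to -z. At t=0 this is the 4*pi twist
# R(z, 4*pi*x); at t=1 it is the identity; the ends x=0,1 stay fixed.
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a unit axis, via the Rodrigues formula."""
    x, y, z = axis
    K = np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])  # cross-product matrix
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def H(x, t):
    n = np.array([np.sin(np.pi * t), 0.0, np.cos(np.pi * t)])  # tilting axis
    return rot([0, 0, 1], 2 * np.pi * x) @ rot(n, 2 * np.pi * x)

assert np.allclose(H(0.3, 0), rot([0, 0, 1], 4 * np.pi * 0.3))  # twisted belt
assert np.allclose(H(0.3, 1), np.eye(3))                        # straightened
assert np.allclose(H(1.0, 0.5), np.eye(3))                      # fixed end
```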

ASL Annual Meetings in North America

| Midwest ($n-2$) | East ($n-1$) | West ($n$) | Year $n$ |
|---|---|---|---|
| Western Illinois | | | 2020 |
| UIUC | UConn | Boise State | 2017 |
| UW Madison | Waterloo, ON | UC Boulder | 2014 |
| Notre Dame | Washington, D.C.** | UC Berkeley | 2011 |
| Montréal, QC | U. Florida* | UC Irvine* | 2008 |
| UIC | Carnegie Mellon | Stanford* | 2005 |
| UIUC | U. Pennsylvania | Las Vegas | 2002 |

** = invited plenary speaker
* = special session speaker
no marker = attendee


Few paths, fewer words, and the maximum probability of writing 001 in a two-state Hidden Markov Model being $8/27$

The aim of this note is to give the simplest possible non-trivial calculation of the parameters of an HMM that maximize the probability of emitting a certain string.

Let $\{0,1\}$ be our alphabet.
Let $p$ be the probability of emitting 1 in state $s_0$.
Let $q$ be the probability of emitting 1 in state $s_1$.
Let $\epsilon$ be the probability of transitioning from $s_0$ to $s_1$.
Let $\delta$ be the probability of transitioning from $s_1$ to $s_0$.
Let $S(t)$ be the state after transitioning $t$ times, a random variable.
The probability of emitting the string 001 when starting in state $s_0$ is then
$$
f(p,q,\epsilon,\delta)=\Pr(001; S(1)=s_0=S(2))+\Pr(001; S(1)=s_0, S(2)=s_1)$$
$$+\Pr(001; S(1)=s_1, S(2)=s_0)+\Pr(001; S(1)=s_1=S(2))$$
$$=\overline p^2 p \overline\epsilon^2 + \overline p^2q\overline\epsilon\epsilon + \overline p\overline q p\epsilon\delta + \overline p\overline q q \epsilon\overline\delta.
$$
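Before optimizing, here is a quick sanity check of this four-path formula (a sketch of my own, not part of the note), comparing it against an explicit enumeration of the hidden-state paths:

```python
# Check the four-path formula against brute-force path enumeration.
import random

def f(p, q, eps, delta):
    pb, qb, eb, db = 1 - p, 1 - q, 1 - eps, 1 - delta
    return (pb**2 * p * eb**2 + pb**2 * q * eb * eps
            + pb * qb * p * eps * delta + pb * qb * q * eps * db)

def brute(p, q, eps, delta):
    emit = {0: p, 1: q}                      # Pr(emit 1 | state)
    trans = {0: {0: 1 - eps, 1: eps},        # transition probabilities
             1: {0: delta, 1: 1 - delta}}
    total = 0.0
    for s1 in (0, 1):                        # state after 1st transition
        for s2 in (0, 1):                    # state after 2nd transition
            pr = (1 - emit[0]) * trans[0][s1]     # emit 0 in s0, transition
            pr *= (1 - emit[s1]) * trans[s1][s2]  # emit 0, transition again
            pr *= emit[s2]                        # emit the final 1
            total += pr
    return total

args = [random.random() for _ in range(4)]
assert abs(f(*args) - brute(*args)) < 1e-12
```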
Which choice of parameters $p, q, \epsilon, \delta$ will maximize this probability?
To answer this we first note that
$$\frac{\partial f}{\partial\delta}=\overline p\,\overline q\,\epsilon\,(p-q)=0\iff p=1,\ q=1,\ \epsilon=0,\ \text{or } p=q.$$
Going through these possibilities we keep finding values of $f$ bounded above by $1/4$ (a symbolic spot-check follows the list):

  1. $p=1$ immediately gives $f=0$.
  2. $q=1$ gives $f=\overline p^2 p \overline\epsilon^2 + \overline p^2\overline\epsilon\epsilon=\overline p^2\overline\epsilon(p\overline\epsilon+\epsilon)$. Writing $u=p+\epsilon\overline p$, we have $\overline p\,\overline\epsilon=1-u$ and $p\overline\epsilon+\epsilon=u$, so $f=\overline p\,u(1-u)\le 1/4$.
  3. $\epsilon=0$ gives $f=\overline p^2 p.$
  4. $p=q$ gives $f=\overline p^2 p \overline\epsilon^2 + \overline p^2p\overline\epsilon\epsilon + \overline p^2 p\epsilon\delta + \overline p^2 p (\epsilon\overline\delta)=\overline p^2p(\overline\epsilon^2 + \overline\epsilon\epsilon + \epsilon\delta + \epsilon\overline\delta)=\overline p^2p.$
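These computations are easy to double-check symbolically (a check of my own, assuming sympy is available):

```python
# Symbolic verification of the delta-derivative factorization and of case 4.
import sympy as sp

p, q, e, d = sp.symbols('p q epsilon delta')
pb, qb, eb, db = 1 - p, 1 - q, 1 - e, 1 - d
f = pb**2*p*eb**2 + pb**2*q*eb*e + pb*qb*p*e*d + pb*qb*q*e*db

assert sp.simplify(sp.diff(f, d) - pb*qb*e*(p - q)) == 0  # df/d(delta)
assert sp.simplify(f.subs(q, p) - pb**2*p) == 0           # the case p = q
```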

We next consider boundary values for $\delta$.

  1. $\delta=0$. We may assume $p=0$, since there is no use in considering a positive probability of emitting a 1 in state $s_0$ if there is no chance of ever returning to that state. Then $f=q\epsilon(\overline\epsilon+\overline q)$, and setting $\partial f/\partial q=\epsilon(\overline\epsilon+\overline q-q)=0$ gives $\epsilon=2\overline q$, which yields $f=2q^2\overline q$. This is maximized at $q=2/3$, which corresponds to $\epsilon=2/3$ as well, and gives the value $f=8/27>1/4$ (confirmed numerically in the sketch after this list).

    This $8/27$ decomposes as the sum of the probabilities of two disjoint scenarios, each of probability $4/27$:

    1. One is that after writing the first 0 we stay in state $s_0$, write another 0, and then transition to state $s_1$ to write a 1.
    2. The other is that after writing the first 0 we move to state $s_1$, write the 2nd zero there, and stay there to write the 3rd letter, 1.
  2. $\delta=1$. Then $f=\overline p^2 p \overline\epsilon^2 + \overline p^2q\overline\epsilon\epsilon + \overline p\overline q p\epsilon
    =\overline p(\overline p p \overline\epsilon^2 + \overline pq\overline\epsilon\epsilon +\overline q p\epsilon)$.
    Then $0=\partial f/\partial q = \overline p\epsilon(\overline p\,\overline\epsilon - p)$ if $p=\frac{\overline\epsilon}{1+\overline\epsilon}$. This gives $f=\frac{\overline\epsilon}{(1+\overline\epsilon)^3}$ (which turns out not to depend on $q$); it is maximized at $\epsilon=1/2$, with value $f=4/27$. So we consider boundary values for $q$:

    1. $q=0$. Then $f=\overline pp(\overline p\overline\epsilon^2+\epsilon)\le \frac14\cdot 1$.
    2. $q=1$. Then $f=\overline p^2p\overline\epsilon^2+\overline p^2\overline\epsilon\epsilon$ and $\partial f/\partial\epsilon=\overline p^2(1-2\epsilon-2p\overline\epsilon)=0$ if $\overline p=1/(2\overline\epsilon)$, which gives $f=\overline p/4\le 1/4$.
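As promised above, a brute-force grid search (my own sketch) confirms that the overall maximum is $8/27\approx 0.2963$, attained at $p=0$, $q=\epsilon=2/3$, $\delta=0$:

```python
# Grid-search (p, q, epsilon, delta) to confirm that f peaks at ~ 8/27
# with p = 0, q = epsilon = 2/3, delta = 0.
import itertools

def f(p, q, eps, delta):
    pb, qb, eb, db = 1 - p, 1 - q, 1 - eps, 1 - delta
    return (pb**2 * p * eb**2 + pb**2 * q * eb * eps
            + pb * qb * p * eps * delta + pb * qb * q * eps * db)

grid = [i / 30 for i in range(31)]   # step 1/30, so 2/3 lies on the grid
best = max(itertools.product(grid, repeat=4), key=lambda v: f(*v))
print(best, f(*best))   # (0.0, 2/3, 2/3, 0.0) and 8/27 ~ 0.2963
```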

Now note how much easier this is if we only consider a single path: each of the four path probabilities contains a factor of the form $a\overline a\le 1/4$, so $1/4$ is the maximum possible probability along a single path (and it is attainable, via 3 of the 4 paths). Replacing each such factor by $1/4$, we can bound $f$ above by
$$\frac14\left(\overline p\, \overline\epsilon^2 + \overline p^2q + \overline q\epsilon\delta + \overline p \epsilon\overline\delta\right).
$$