All posts by admin

Kolmogorov complexity and the recursion theorem

Kolmogorov complexity and the recursion theorem (with Wolfgang Merkle and Frank Stephan). Transactions of the American Mathematical Society 363 (2011), no. 10, 5465–5480.
arXiv:0901.3933. Preliminary version in STACS 2006, Lecture Notes in Computer Science,
vol. 3884, Springer, Berlin, 2006, pp. 149–161.

The groundwork for this paper was laid in Berkeley in 2004, while Merkle and I were both visiting there. The paper fits into a long line of results of the form “DNR is equivalent to…”, in this case to computing a real whose initial-segment complexity is sufficiently high.

A strong law of computationally weak subsets

Journal of Mathematical Logic 11 (2011), no. 1, 1–10.
DOI: 10.1142/S0219061311000980
Electronic Colloquium on Computational Complexity, Report No. 150 (2010).

This paper establishes a new statistical law: for each random sequence
$$0111011101101101101\ldots$$
it is possible to replace some of the 1s by 0s (in other words, to pass to a subset of the 1s) in such a way that no random sequence can be recovered from the result by computational means.

To illustrate, imagine that the new sequence looks like
$$0111010101101000101\ldots$$
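To make the subset relation concrete, here is a minimal Python check of the example above (the two sequences and the helper name are just for illustration, not part of the paper):

    # Check that the second sequence is obtained from the first by
    # changing some 1s to 0s (i.e., it picks out a subset of the 1s).
    original = "0111011101101101101"
    modified = "0111010101101000101"

    def is_subset_of_ones(a: str, b: str) -> bool:
        """True if b arises from a by replacing some 1s with 0s."""
        return len(a) == len(b) and all(
            y == x or (x == "1" and y == "0") for x, y in zip(a, b)
        )

    print(is_subset_of_ones(original, modified))  # True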

Technically, the result is that each 2-random set has an infinite subset computing no 1-random set. It is perhaps the main result obtained under Prof. Kjos-Hanssen’s grant NSF DMS-0901020 (2009–2013).
Joseph S. Miller at the University of Wisconsin has established a strengthening of this result, replacing 2-random by 1-random, but that result is so far unpublished.

Dr. Bjørn Kjos-Hanssen is a professor in the Department of Mathematics at the University of Hawai‘i at Mānoa. His research deals with the abstract theory of computation, computability, randomness, and compression algorithms.

Kjos-Hanssen is the author of more than 20 papers in journals including the prestigious Mathematical Research Letters and the Transactions of the American Mathematical Society, and holds a PhD from UC Berkeley in Logic and the Methodology of Science.

Shreve Exercise I-1.6

Bank A has a long position in a derivative that pays either $V_1(H)=3$ or $V_1(T)=0$ at time 1.
The price of the derivative is $V_0=1.2$.
In order to hedge the long position, Bank A proceeds as follows:
First, Bank A borrows 1.2 from Bank B.
Then Bank A invests this amount 1.2 in the money market, by handing it over to Bank C.
Next Bank A sells $-\Delta_0=0.5$ shares of the underlying stock short.
This gives Bank A the amount $-\Delta_0 S_0=2.00$ on hand, which it invests in the money market, giving $(1+r)2.00=2.50$ at time 1.
The debt to Bank B becomes $(1+r)1.2=1.50$ at time 1. Thus the invested short-sale proceeds and the debt to Bank B
together net to $2.5-1.5=+1.00$. The short sale of the stock means that Bank A will be liable to pay either $-\Delta_0 uS_0=4$ or $-\Delta_0 dS_0=1$ at time 1.
Together with the +1.00 this leaves the bank in a position of $1-4=-3$ or $1-1=0$ at time 1. However, the bank also holds the long position in the derivative, which pays 3 or 0 and balances this out.
In the end, then, the only thing left is the investment in the money market via Bank C. It has grown to $(1+r)1.2=1.5$ at time 1.
Thus Bank A has successfully hedged a long position in the derivative.
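For what it’s worth, the bookkeeping can be checked mechanically. Below is a minimal Python sketch; the parameters $S_0=4$, $u=2$, $d=1/2$, $r=1/4$ are those implicit in the numbers above (e.g. $-\Delta_0 S_0=2.00$ and $(1+r)2.00=2.50$), and the variable names are mine:

    # Hedge of a long derivative position: short 0.5 shares, plus the
    # (self-cancelling) loan from Bank B invested with Bank C.
    S0, u, d, r = 4.0, 2.0, 0.5, 0.25
    V0, delta0 = 1.2, -0.5                      # derivative price; hedge ratio

    with_C = V0 * (1 + r)                       # 1.2 invested with Bank C -> 1.50
    owed_B = V0 * (1 + r)                       # 1.2 borrowed from Bank B -> 1.50 owed
    short_grown = (-delta0 * S0) * (1 + r)      # 2.00 of proceeds -> 2.50

    for outcome, S1, V1 in [("H", u * S0, 3.0), ("T", d * S0, 0.0)]:
        cover = -delta0 * S1                    # buy back 0.5 shares: 4 or 1
        print(outcome, with_C - owed_B + short_grown - cover + V1)  # 1.50 both ways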

Actually this can be simplified: the bank just sells 0.5 shares short.
The bank thus receives 2.00 at time 0, which it places in the money market, yielding 2.50 at time 1. When the bank has to pay the value of the stock at time 1, it must pay either 4 or 1. At the same time it receives 3 or 0 from the long position in the derivative. This gives a net loss of 1 at time 1 in either case, leaving the bank with $2.50-1.00=1.50$ at the end.
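The same kind of check for the simplified hedge, under the same assumed parameters:

    # Simplified hedge: just short 0.5 shares and invest the 2.00 of proceeds.
    S0, r = 4.0, 0.25
    grown = 0.5 * S0 * (1 + r)                  # 2.50 at time 1
    for S1, V1 in [(8.0, 3.0), (2.0, 0.0)]:     # stock ends at 8 or 2; payoff 3 or 0
        print(grown - 0.5 * S1 + V1)            # 1.50 in both cases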

The probability distribution as a computational resource for randomness testing

Journal of Logic and Analysis 2 (2010), no. 10, 1–13.

This paper grew out of my assignment to teach Math 472, Statistical Inference, in Spring 2009 at UH. By analyzing the proof of the law of large numbers and some other work, I showed that Hippocratic and Galenic randomness coincide for Bernoulli measures. [There has been follow-up work to this paper which it would make sense to describe here.]