- *Foundations of Mathematics*, Kenneth Kunen, parts of chapters I-IV.
- *Modal logic for open minds*, Johan van Benthem, some of chapters 1-11 and 16.
- *A Proof Assistant for Higher-Order Logic*, Isabelle tutorial, parts of chapters 1-7, or instead The Natural Number Game.

# Category Archives: teaching

# MATH 657 Spring 2020

Course title: “Recursive functions and complexity”

Textbook title: “A second course in formal languages and automata theory” by J. Shallit

Despite the intimidating titles, this is just a graduate-level introduction to automata, computability, and complexity.

Possible additional topics: Automatic complexity and Python programming.

# Statistics of rental prices

Spring 2017 saw the last MATH 373 class ever, as we transitioned to MATH 372 combining MATH 371 (probability) and 373 (statistics).

The term paper on apartment rental prices in Honolulu by final-cohort MATH 373 students Tiffany Eulalio and Jake Koki has been accepted for publication in the undergraduate journal Manoa Horizons, volume II.

# The number of segments on an autograph tree seed capsule

Spring 2016 MATH 472 student Daren Kuwaye’s paper “The number of segments on a *Clusia rosea* seed capsule” has been published in the new UH Manoa undergraduate journal, Manoa Horizons.

# The Tukey test: starting with a known variance

Consider normal random variables $Y_i$ with means $\mu_i$, $1\le i\le k$.

The Tukey test often seems overly complicated, but it becomes clearer if we first assume that the common variance of the $Y_i$, $\sigma^2$, is known.

Let’s draw $r$ samples from each, called $Y_{i,j}$, $1\le j\le r$.

Denote the sample averages by $\overline Y_{i+}$.

Let $W_i = \overline Y_{i+}-\mu_i$.

Let $Q=R$, the range of the $W_i$, defined by

$$

R = \max_i W_i - \min_i W_i = \max_{i,j} |W_i-W_j|.

$$
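The second equality, that the range equals the largest pairwise absolute difference, is easy to spot-check; the values below are arbitrary:

```python
# Quick check that the two expressions for the range R agree
# (the w values are made up for illustration).
w = [0.3, -1.2, 2.5, 0.0]
lhs = max(w) - min(w)                        # max_i W_i - min_i W_i
rhs = max(abs(a - b) for a in w for b in w)  # max_{i,j} |W_i - W_j|
assert lhs == rhs  # both equal 3.7
```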

Suppose we understand the distribution of $R$ well enough to find a number $Q_\alpha$ such that

$$

\Pr(R\le Q_\alpha) = 1-\alpha.

$$

Define the intervals

$$

I_{i,j} = (\overline Y_{i+}-\overline Y_{j+} - Q_\alpha,\ \overline Y_{i+}-\overline Y_{j+} + Q_\alpha).

$$

Then, observing that $(\overline Y_{i+}-\overline Y_{j+})-(\mu_i-\mu_j)=W_i-W_j$, so that all the intervals cover simultaneously exactly when $R< Q_\alpha$, it is easily seen that

$$

\Pr(\text{for all $i$, $j$, }\mu_i-\mu_j \in I_{i,j}) = 1-\alpha.

$$

and hence for all $i$, $j$,

$$

\Pr(\mu_i-\mu_j\in I_{i,j}) \ge 1-\alpha.

$$

(Larsen and Marx make a mistake here, mixing up the last two equations.)

Thus, the hypothesis that $\mu_i=\mu_j$ can be rejected if we observe $0\not\in I_{i,j}$.
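The known-variance procedure can be sketched end to end as a small Monte Carlo experiment; every parameter below ($k$, $r$, $\sigma$, the $\mu_i$, $\alpha$) is a made-up illustration value. We estimate $Q_\alpha$ from simulated ranges, then verify that the intervals $I_{i,j}$ cover all the $\mu_i-\mu_j$ simultaneously about $100(1-\alpha)\%$ of the time.

```python
# Monte Carlo sketch of the known-variance Tukey procedure
# (all parameters are hypothetical illustration values).
import random

random.seed(0)
k, r, sigma, alpha = 4, 5, 2.0, 0.05
mu = [1.0, 3.0, 0.5, 2.0]  # true group means (made up)

def sample_means():
    """Draw r observations from each group and return the k sample means."""
    return [sum(random.gauss(mu[i], sigma) for _ in range(r)) / r
            for i in range(k)]

# Step 1: estimate Q_alpha from the distribution of R = max_i W_i - min_i W_i.
ranges = []
for _ in range(20000):
    w = [ybar - mu[i] for i, ybar in enumerate(sample_means())]
    ranges.append(max(w) - min(w))
ranges.sort()
q_alpha = ranges[int((1 - alpha) * len(ranges))]  # empirical (1-alpha) quantile

# Step 2: the intervals I_{i,j} cover every mu_i - mu_j exactly when
# R < Q_alpha, so their simultaneous coverage should be about 1 - alpha.
trials, hits = 5000, 0
for _ in range(trials):
    ybar = sample_means()
    if all(abs((ybar[i] - ybar[j]) - (mu[i] - mu[j])) < q_alpha
           for i in range(k) for j in range(k)):
        hits += 1
coverage = hits / trials  # should land near 1 - alpha = 0.95
```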

Now, if the variance is unknown, we instead define $Q=R/S$ where $S$ is a certain estimator of the variance.

An important point is that we want to understand the joint distribution of $R$ and $S$, which is easiest if they are independent.

We do have an estimator of $\sigma^2$ that’s independent of the $W_i$, namely the residual sum of squares (a.k.a. sum of squares for error),

$$

\mathrm{SSE} = \sum_i \sum_j (Y_{i,j}-\overline Y_{i+})^2

$$

So we take $S^2$ to be a suitable constant times $\mathrm{SSE}$.

Namely, we want $S^2$ to be an unbiased estimator of $\sigma^2/r$, the variance of $W_i$. And we know that $\mathrm{SSE}/\sigma^2$ is $\chi^2(rk-k)$ distributed.

This leads us to define

$$

S^2 = \mathrm{MSE}/r

$$

where $\mathrm{MSE} = \mathrm{SSE}/(rk-k)$.

A point here is that $\mathrm{SSE}/\sigma^2$ is a sum of $k$ independent $\chi^2(r-1)$ random variables (one for each $i$), hence is itself $\chi^2(rk-k)$.
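As a sketch (with made-up data), the estimators above can be computed directly, and a quick simulation confirms that $\mathrm{MSE}$ is an unbiased estimator of $\sigma^2$:

```python
# Sketch with hypothetical data: compute SSE, MSE = SSE/(rk - k), and
# S^2 = MSE/r, then check by simulation that E[MSE] = sigma^2.
import random

random.seed(1)
k, r, sigma = 3, 8, 1.5  # made-up design and variance

def mse_of(Y):
    """MSE from a k-by-r table of observations Y[i][j]."""
    means = [sum(row) / r for row in Y]
    sse = sum((Y[i][j] - means[i]) ** 2 for i in range(k) for j in range(r))
    return sse / (r * k - k)

Y = [[random.gauss(10.0 * i, sigma) for _ in range(r)] for i in range(k)]
s2 = mse_of(Y) / r  # unbiased estimate of sigma^2 / r = Var(W_i)

# Long-run check of unbiasedness: the average MSE over many replications
# should be close to sigma^2 = 2.25.
reps = 4000
avg = sum(mse_of([[random.gauss(0.0, sigma) for _ in range(r)]
                  for _ in range(k)]) for _ in range(reps)) / reps
```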

# Graduate Program in Logic

The Department of Mathematics at University of Hawaii at Manoa has long had an informal graduate program in logic, lattice theory, and universal algebra going back to Alfred Tarski’s student William Hanf.

Starting in 2016, things are getting a little more formal.

We intend the following course rotation (repeating after two years):

Semester | Course number | Course title
---|---|---
Fall 2015 | MATH 649B | Graduate Seminar
Spring 2016 | MATH 649* | Applied Model Theory
Fall 2016 | MATH 654* | Graduate Introduction to Logic
Spring 2017 | MATH 657 | Computability and Complexity

*Actual course numbers may vary.

#### Faculty who may teach in the program

David A. Ross, Professor

Bjørn Kjos-Hanssen, Professor

Mushfeq Khan, Temporary Assistant Professor 2014-2017

Achilles Beros, Temporary Assistant Professor 2015-2017

# ANOVA and regression on BitBucket

# The noncentral $t$ distribution is an elementary function relative to erf

In this note we show that for each fixed number of degrees of freedom $\nu$, the noncentral $t$ distribution with noncentrality parameter (= the mean of the numerator) $\mu$ is an elementary function relative to erf.

The pdf is, per Wikipedia,

$$

f(x) =\frac{\nu^{\frac{\nu}{2}} \exp\left (-\frac{\nu\mu^2}{2(x^2+\nu)} \right )}{\sqrt{\pi}\Gamma(\frac{\nu}{2})2^{\frac{\nu-1}{2}}(x^2+\nu)^{\frac{\nu+1}{2}}} \int_0^\infty y^\nu\exp\left (-\frac{1}{2}\left(y-\frac{\mu x}{\sqrt{x^2+\nu}}\right)^2\right ) dy.

$$

The non-elementary part is the integral

$$

\int_0^\infty y^\nu\exp\left (-\frac{1}{2}\left(y-\frac{\mu x}{\sqrt{x^2+\nu}}\right)^2\right ) dy = g\left(\frac{\mu x}{\sqrt{x^2+\nu}}\right)

$$

where

$$

g(a)=\int_0^\infty y^\nu\exp\left (-\frac{1}{2}\left(y-a\right)^2\right ) dy.

$$

We claim that $g$ is elementary relative to the error function erf, or equivalently relative to the standard normal cdf $\Phi$.

Namely, by the change of variable $x=y-a$ and the binomial theorem,

$$

g(a)=\sum_{k=0}^\nu a^{\nu-k} {\nu\choose k} \alpha_k

$$

where

$$

\alpha_k = \int_{-a}^\infty x^k e^{-x^2/2}\,dx

$$

(note that the substitution $x=y-a$ makes the lower limit $-a$). Next, using $u=x^{k-1}$, $dv=x e^{-x^2/2}\,dx$ and integration by parts, we obtain a recurrence relation for $\alpha_k$:

$$\alpha_k = (-a)^{k-1}e^{-a^2/2} + (k-1)\alpha_{k-2}.$$

Finally, $\alpha_0=\sqrt{2\pi}\,\Phi(a)$ and $\alpha_1 = e^{-a^2/2}$, and we are done.
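As a sanity check, the closed form for $g$ (using the convention $\alpha_k=\int_{-a}^\infty x^k e^{-x^2/2}\,dx$, so that $\alpha_0=\sqrt{2\pi}\,\Phi(a)$) can be compared against direct numerical integration; the values of $a$ and $\nu$ below are arbitrary test values.

```python
# Numerical sanity check (sketch; a and nu are arbitrary test values).
# Conventions: alpha_k = int_{-a}^inf x^k exp(-x^2/2) dx, so that
# alpha_0 = sqrt(2*pi)*Phi(a), alpha_1 = exp(-a^2/2), and
# alpha_k = (-a)**(k-1) * exp(-a^2/2) + (k-1)*alpha_{k-2}.
import math

def g_closed(a, nu):
    """g(a) via the binomial expansion and the alpha_k recurrence."""
    phi_a = 0.5 * (1 + math.erf(a / math.sqrt(2)))  # standard normal cdf at a
    alpha = [math.sqrt(2 * math.pi) * phi_a, math.exp(-a * a / 2)]
    for k in range(2, nu + 1):
        alpha.append((-a) ** (k - 1) * math.exp(-a * a / 2)
                     + (k - 1) * alpha[k - 2])
    return sum(a ** (nu - k) * math.comb(nu, k) * alpha[k]
               for k in range(nu + 1))

def g_numeric(a, nu, n=200000, top=40.0):
    """Crude midpoint rule for int_0^inf y^nu exp(-(y-a)^2/2) dy."""
    h = top / n
    return h * sum(((i + 0.5) * h) ** nu
                   * math.exp(-(((i + 0.5) * h) - a) ** 2 / 2)
                   for i in range(n))

a, nu = 1.3, 3
err = abs(g_closed(a, nu) - g_numeric(a, nu))
```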