Title: Randomness notions and reverse mathematics
by Paul Shafer (University of Leeds) as part of Computability theory and applications
There are many notions of algorithmic randomness in addition to classic Martin-Löf randomness, such as 2-randomness, weak 2-randomness, computable randomness, and Schnorr randomness. For each notion of randomness, we consider the statement “For every set Z, there is a set X that is random relative to Z” as a set-existence principle in second-order arithmetic, and we compare the strengths of these principles. We also show that a well-known characterization of 2-randomness in terms of incompressibility can be proved in RCA_0, which is non-trivial because it requires avoiding the use of $\Sigma^0_2$ bounding.
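For context (a standard formulation, not part of the abstract): the incompressibility characterization of 2-randomness referred to here, due to Miller and, independently, Nies, Stephan, and Terwijn, says that $X$ is 2-random exactly when its initial segments are maximally incompressible with respect to plain Kolmogorov complexity $C$ infinitely often:

$$X \text{ is 2-random} \iff (\exists c)(\exists^\infty n)\; C(X \upharpoonright n) \geq n - c.$$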
Title: A family of metrics connecting Jaccard distance to normalized information distance
by Bjørn Kjos-Hanssen (University of Hawaii at Manoa) as part of Computability theory and applications
Distance metrics are used in a wide variety of scientific contexts. In a 2001 paper in the journal Bioinformatics, M. Li, Badger, Chen, Kwong, and Kearney introduced an information-based sequence distance. It is analogous to the famous Jaccard distance on finite sets. Soon thereafter, M. Li, Chen, X. Li, Ma and Vitányi (2004) rejected that distance in favor of what they call the normalized information distance (NID). Raff and Nicholas (2017) proposed a return to the Bioinformatics distance, based on further application-informed criteria.
We attempt to shed some light on this “dispute” by showing that the Jaccard distance and the NID analogue form the extreme points of the set of metrics within a family of semimetrics studied by Jiménez, Becerra, and Gelbukh (2013).
The NID is based on Kolmogorov complexity, and Terwijn, Torenvliet and Vitányi (2011) showed that it is neither upper semicomputable nor lower semicomputable. Our result gives a 2-dimensional family including the NID as an extreme point. It would be interesting to know if any of these functions are semicomputable.
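For readers unfamiliar with the set-theoretic side of the analogy: the Jaccard distance on finite sets is $d(A,B) = |A \mathbin{\triangle} B| / |A \cup B|$, while the NID replaces set cardinalities with (uncomputable) Kolmogorov complexities, roughly $\mathrm{NID}(x,y) = \max(K(x|y), K(y|x)) / \max(K(x), K(y))$. Only the Jaccard side is executable; a minimal sketch (the function name is mine):

```python
def jaccard_distance(A, B):
    """Jaccard distance on finite sets: |A symmetric-difference B| / |A union B|.

    By convention the distance between two empty sets is 0.
    """
    union = A | B
    if not union:
        return 0.0
    return len(A ^ B) / len(union)

# {1} and {4} are the non-shared elements out of 4 elements total:
print(jaccard_distance({1, 2, 3}, {2, 3, 4}))  # 0.5
```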
Title: The probability of commuting subgroups in arbitrary lattices of subgroups
by Seid Kassaw (University of Cape Town) as part of Topological Groups
Lecture held in Elysium.
The subgroup commutativity degree $sd(G)$ of a finite group $G$ was introduced almost ten years ago and measures the number of commuting subgroups in the subgroup lattice $L(G)$ of $G$. The extremal case $sd(G) = 1$ detects a class of groups classified by Iwasawa in 1941 (in fact, $sd(G)$ is a probabilistic measure of how far $G$ is from Iwasawa's groups): $sd(G) = 1$ if and only if $G$ is the direct product of its Sylow $p$-subgroups and these are all modular, or equivalently, $G$ is a nilpotent modular group. Therefore, $sd(G)$ is strongly related to structural properties of $L(G)$ and $G$.
In this talk, we introduce a new notion of probability $gsd(G)$ in which two arbitrary sublattices $S(G)$ and $T(G)$ of $L(G)$ are involved simultaneously. In the case $S(G) = T(G) = L(G)$, we recover exactly $sd(G)$. We establish upper and lower bounds for $gsd(G)$, and we study the behaviour of $gsd(G)$ with respect to subgroups and quotients, obtaining new numerical restrictions. We also present the commutativity degree and subgroup commutativity degree for infinite groups and pose some open problems for further generalization.
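As a concrete illustration (mine, not from the talk), $sd(G)$ can be computed by brute force for a small group, assuming the standard definition $sd(G) = |\{(H,K) \in L(G)^2 : HK = KH\}| / |L(G)|^2$. For $G = S_3$, which is not nilpotent, we expect $sd(G) < 1$:

```python
from itertools import permutations, combinations

# Elements of S3 as permutations of {0, 1, 2}, represented as tuples.
elements = list(permutations(range(3)))
identity = (0, 1, 2)

def compose(p, q):
    """(p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

def is_closed(subset):
    """A nonempty subset of a finite group closed under the operation is a subgroup."""
    return all(compose(a, b) in subset for a in subset for b in subset)

# Enumerate all subgroups of S3 by brute force over subsets containing the identity.
subgroups = [frozenset(s)
             for r in range(1, len(elements) + 1)
             for s in combinations(elements, r)
             if identity in s and is_closed(frozenset(s))]

def product_set(H, K):
    """Setwise product HK = {hk : h in H, k in K}."""
    return frozenset(compose(h, k) for h in H for k in K)

# sd(G): fraction of ordered pairs (H, K) of subgroups with HK = KH.
n = len(subgroups)
commuting = sum(1 for H in subgroups for K in subgroups
                if product_set(H, K) == product_set(K, H))
sd = commuting / n ** 2

print(n)   # 6 subgroups: trivial, three of order 2, A3, S3
print(sd)  # 5/6: only the distinct order-2 subgroups fail to permute
```

Here the six failing ordered pairs are the pairs of distinct subgroups generated by transpositions, so $sd(S_3) = 30/36 = 5/6$.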
Title: Complexity of root-taking in power series fields & related problems
by Karen Lange (Wellesley College) as part of Computability theory and applications
In earlier work with Knight and Solomon, we bounded the computational complexity of the root-taking process over Puiseux and Hahn series, two kinds of generalized power series, but it remains open whether the bounds given are optimal. By looking at the most basic steps in the root-taking process for Hahn series, we, together with Hall and Knight, became interested in the complexity of problems associated with well-ordered subsets of a fixed ordered abelian group. Here we provide an overview of the results so far in both of these settings.
Title: Bayesian Statistics, Topology and Machine Learning for Complex Data Analysis
by Farzana Nasrin (University of Hawaiʻi) as part of Topological Groups
Lecture held in Elysium.
Analyzing and classifying large, complex datasets is generally challenging, and topological data analysis, which builds on techniques from topology, is a natural fit for this task. The persistence diagram is a powerful tool, originating in topological data analysis, that allows retrieval of important topological and geometrical features latent in a dataset. Data analysis and classification involving persistence diagrams have been applied in numerous settings. In this talk, I will provide a brief introduction to topological data analysis, focusing primarily on persistence diagrams, and describe a Bayesian framework for inference with persistence diagrams. The goal is to provide a supervised machine learning algorithm in the space of persistence diagrams. This framework is applicable to a wide variety of datasets. I will present applications in materials science, biology, and neuroscience.
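To make the notion of a persistence diagram concrete (a minimal sketch of mine, not from the talk): in dimension 0, the diagram of a Vietoris–Rips filtration records when connected components of a point cloud appear and merge, and it can be computed with a union-find over edges sorted by length. Production libraries such as GUDHI or Ripser handle higher-dimensional features.

```python
from itertools import combinations
from math import dist

def h0_persistence(points):
    """Death times of the finite 0-dimensional persistence bars of the
    Vietoris-Rips filtration: every point is born at scale 0, and a
    component dies at the edge length that merges it into another.
    One component (one bar) survives forever and is not listed."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted((dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(len(points)), 2))
    deaths = []
    for length, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(length)  # a component dies at this scale
    return deaths

# Two well-separated pairs of points: two short bars (within each pair)
# and one long bar (the pairs merging at distance 10).
bars = h0_persistence([(0, 0), (0, 1), (10, 0), (10, 1)])
print(bars)  # [1.0, 1.0, 10.0]
```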