This work considers the optimization of electrode positions in head imaging by electrical impedance tomography. The study is motivated by maximizing the sensitivity of electrode measurements to conductivity changes when monitoring the condition of a stroke patient, which justifies adopting a linearized version of the complete electrode model as the forward model. The algorithm is based on finding a (locally) A-optimal measurement configuration via gradient descent with respect to the electrode positions. The efficient computation of the required derivatives of the complete electrode model is one of the focal points. Two algorithms are introduced and numerically tested on a three-layer head model. The first assumes a region of interest and a Gaussian prior for the conductivity in the brain, and it can be run offline, i.e., prior to taking any measurements. The second algorithm first computes a reconstruction of the conductivity anomaly caused by the stroke with an initial electrode configuration by combining lagged diffusivity iteration with sequential linearizations, which can be interpreted as producing an approximate Gaussian probability density for the conductivity perturbation. It then resorts to the first algorithm, with the constructed density as the prior, to find new, more informative positions for the available electrodes.
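As a rough sketch of the offline algorithm's core loop, the A-optimality criterion of a linear-Gaussian model can be minimized over electrode positions as below. The callable jacobian(e), the precision matrices noise_prec and prior_prec, and the finite-difference gradient are illustrative assumptions made here; the paper instead differentiates the complete electrode model directly.

```python
import numpy as np

def a_optimality(J, noise_prec, prior_prec):
    # Trace of the posterior covariance of a linear-Gaussian model:
    # a smaller trace means a more informative measurement configuration.
    post_prec = J.T @ noise_prec @ J + prior_prec
    return np.trace(np.linalg.inv(post_prec))

def optimize_electrodes(e0, jacobian, noise_prec, prior_prec,
                        step=1e-2, iters=100, h=1e-5):
    # Gradient descent on the electrode positions e; the gradient of the
    # criterion is approximated by central finite differences in this sketch.
    phi = lambda e: a_optimality(jacobian(e), noise_prec, prior_prec)
    e = e0.astype(float).copy()
    for _ in range(iters):
        g = np.zeros_like(e)
        for i in range(e.size):
            d = np.zeros_like(e)
            d.flat[i] = h
            g.flat[i] = (phi(e + d) - phi(e - d)) / (2 * h)
        e -= step * g
    return e
```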
Augmented reality for laparoscopic liver resection is a visualisation mode that allows a surgeon to localise tumours and vessels embedded within the liver by projecting them on top of a laparoscopic image. Preoperative 3D models extracted from CT or MRI data are registered to the intraoperative laparoscopic images during this process. In terms of 3D-2D fusion, most algorithms use anatomical landmarks to guide registration: the liver's inferior ridge, the falciform ligament, and the occluding contours. These landmarks are usually marked by hand in both the laparoscopic image and the 3D model, which is time-consuming and error-prone when done by an inexperienced user. There is therefore a need to automate this process so that augmented reality can be used effectively in the operating room. We present the Preoperative-to-Intraoperative Laparoscopic Fusion Challenge (P2ILF), held during the Medical Image Computing and Computer Assisted Intervention (MICCAI 2022) conference, which investigates the possibilities of detecting these landmarks automatically and using them in registration. The challenge was divided into two tasks: 1) a 2D and 3D landmark detection task and 2) a 3D-2D registration task. The teams were provided with training data consisting of 167 laparoscopic images and 9 preoperative 3D models from 9 patients, with the corresponding 2D and 3D landmark annotations. A total of 6 teams from 4 countries participated; their proposed methods were evaluated on 16 images and two preoperative 3D models from two patients. All the teams proposed deep learning-based methods for the 2D and 3D landmark segmentation tasks and differentiable rendering-based methods for the registration task. Based on the experimental outcomes, we propose three key hypotheses that determine current limitations and future directions for research in this domain.
We propose a diffusion approximation method for continuous-state Markov Decision Processes (MDPs) that can be utilized to address autonomous navigation and control in unstructured off-road environments. In contrast to most decision-theoretic planning frameworks, which assume fully known state transition models, we design a method that eliminates this strong assumption, which is often extremely difficult to satisfy in practice. We first take the second-order Taylor expansion of the value function. The Bellman optimality equation is then approximated by a partial differential equation that relies only on the first and second moments of the transition model. Combining this with a kernel representation of the value function, we design an efficient policy iteration algorithm whose policy evaluation step can be represented as a linear system of equations characterized by a finite set of supporting states. We first validate the proposed method through extensive simulations in 2D obstacle avoidance and 2.5D terrain navigation problems. The results show that the proposed approach substantially outperforms several baselines. We then develop a system that integrates our decision-making framework with onboard perception and conduct real-world experiments in both cluttered indoor and unstructured outdoor environments. The results from the physical systems further demonstrate the applicability of our method in challenging real-world environments.
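To make the approximation step concrete, here is the standard second-order expansion in notation chosen for this summary, with $\mu_a(s) = \mathbb{E}[s' - s]$ and $\Sigma_a(s) = \mathbb{E}[(s'-s)(s'-s)^\top]$ the first two moments of the state increment under action $a$, and $\gamma$ the discount factor:

$$ V(s') \approx V(s) + \nabla V(s)^\top (s' - s) + \tfrac{1}{2}\,(s' - s)^\top \nabla^2 V(s)\,(s' - s), $$

so that

$$ \mathbb{E}\big[V(s') \mid s, a\big] \approx V(s) + \nabla V(s)^\top \mu_a(s) + \tfrac{1}{2}\,\operatorname{tr}\!\big(\nabla^2 V(s)\,\Sigma_a(s)\big), $$

and the Bellman optimality equation $V(s) = \max_a \big\{ r(s,a) + \gamma\,\mathbb{E}[V(s') \mid s,a] \big\}$ turns into a partial differential equation in $V$ that depends on the transition model only through $\mu_a$ and $\Sigma_a$.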
Graph representation learning (GRL) is critical for extracting insights from complex network structures, but it also raises security concerns due to potential privacy vulnerabilities in these representations. This paper investigates the structural vulnerabilities of graph neural models through which sensitive topological information can be inferred by edge reconstruction attacks. Our research primarily addresses the theoretical underpinnings of cosine-similarity-based edge reconstruction attacks (COSERA), providing theoretical and empirical evidence that such attacks can perfectly reconstruct sparse Erdős–Rényi graphs with independent random features as the graph size increases. Conversely, we establish that sparsity is a critical factor for COSERA's effectiveness, as demonstrated through analysis and experiments on stochastic block models. Finally, we explore the resilience of (provably) private graph representations produced via the noisy aggregation (NAG) mechanism against COSERA. We empirically delineate instances in which COSERA succeeds, and others in which it fails, as an instrument for elucidating the trade-off between privacy and utility.
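As a minimal illustration of the attack family studied (not the paper's exact COSERA instantiation), one can predict the node pairs whose learned representations have the highest cosine similarity:

```python
import numpy as np

def cosine_edge_reconstruction(Z, num_edges):
    # Z: (n, d) matrix of node representations.
    # Predict the num_edges node pairs with the highest cosine similarity.
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    S = Zn @ Zn.T                          # pairwise cosine similarities
    iu = np.triu_indices(S.shape[0], k=1)  # candidate pairs i < j
    top = np.argsort(S[iu])[-num_edges:]   # most similar pairs
    return list(zip(iu[0][top], iu[1][top]))
```

The paper's analysis concerns when such a similarity threshold separates true edges from non-edges as the graph grows.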
The affine space of all tensor product B\'ezier patches of degree $n \times n$ with prescribed main diagonal curves is determined. First, the pairs of B\'ezier curves that can be the diagonals of a B\'ezier patch are characterized. Besides prescribing the diagonal curves, other related problems are considered, namely those in which boundary curves or tangent planes along boundary curves are also prescribed.
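For context, the standard Bernstein identity behind the problem (notation chosen here): the main diagonal of a tensor product patch of degree $n \times n$,

$$ \mathbf{P}(u,v) = \sum_{i=0}^{n}\sum_{j=0}^{n} B_i^n(u)\,B_j^n(v)\,\mathbf{p}_{ij}, \qquad B_i^n(t) = \binom{n}{i} t^i (1-t)^{n-i}, $$

is the degree-$2n$ B\'ezier curve

$$ \mathbf{d}(t) = \mathbf{P}(t,t) = \sum_{k=0}^{2n} B_k^{2n}(t)\,\mathbf{d}_k, \qquad \mathbf{d}_k = \sum_{i+j=k} \frac{\binom{n}{i}\binom{n}{j}}{\binom{2n}{k}}\,\mathbf{p}_{ij}, $$

since $B_i^n(t)\,B_j^n(t) = \binom{n}{i}\binom{n}{j}\binom{2n}{i+j}^{-1} B_{i+j}^{2n}(t)$; prescribing a diagonal therefore imposes linear conditions on the control net $\{\mathbf{p}_{ij}\}$.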
We develop an anytime-valid permutation test, where the dataset is fixed and the permutations are sampled sequentially one by one, with the objective of saving computational resources by sampling fewer permutations and stopping early. The core technical advance is the development of new test martingales (nonnegative martingales with initial value one) for testing exchangeability against a very particular alternative. These test martingales are constructed using new and simple betting strategies that smartly bet on the relative ranks of permuted test statistics. The betting strategies are guided by the derivation of a simple log-optimal betting strategy, and display excellent power in practice. In contrast to a well-known method by Besag and Clifford, our method yields a valid e-value or a p-value at any stopping time, and with particular stopping rules, it yields computational gains under both the null and the alternative without compromising power.
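A minimal sketch of this scheme, with a fixed-fraction bet standing in for the paper's log-optimal betting strategy and stat(x, labels) a user-supplied test statistic (both are illustrative assumptions):

```python
import numpy as np

def sequential_perm_test(x, labels, stat, alpha=0.05, max_perms=10_000,
                         bet_frac=0.5, seed=None):
    """Anytime-valid sequential permutation test (simplified sketch).

    Under the null, the observed statistic is exchangeable with the permuted
    ones, so the sequential rank of each newly sampled permuted statistic is
    uniform given the past; betting against the event {permuted stat exceeds
    the observed stat} yields a nonnegative martingale with initial value one.
    """
    rng = np.random.default_rng(seed)
    t_obs = stat(x, labels)
    wealth = 1.0   # test martingale / e-value
    s = 0          # permuted statistics seen so far that exceed t_obs
    for k in range(max_perms):
        t_perm = stat(x, rng.permutation(labels))
        p_above = (s + 1) / (k + 2)   # P(t_perm > t_obs | past) under the null
        b = bet_frac * p_above        # bet a fraction of the null probability
        if t_perm > t_obs:            # ties assumed to be negligible
            wealth *= b / p_above
            s += 1
        else:
            wealth *= (1 - b) / (1 - p_above)
        if wealth >= 1 / alpha:       # Ville's inequality gives a level-alpha test
            return True, wealth, k + 1
    return False, wealth, max_perms
```

Each multiplier has conditional expectation one under the null, so by Ville's inequality the wealth exceeds $1/\alpha$ with probability at most $\alpha$, and stopping at any time preserves validity.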
Direct reciprocity is a widespread mechanism for the evolution of cooperation. In repeated interactions, players can condition their behavior on previous outcomes. A well-known approach is given by reactive strategies, which respond to the co-player's previous move. Here we extend reactive strategies to longer memories. A reactive-$n$ strategy takes into account the sequence of the last $n$ moves of the co-player. A reactive-$n$ counting strategy records how often the co-player has cooperated during the last $n$ rounds. Partner strategies are those that ensure mutual cooperation without being exploited. We derive an algorithm to identify all partner strategies among reactive-$n$ strategies, and we give explicit conditions for all partner strategies among reactive-2 strategies, reactive-3 strategies, and reactive-$n$ counting strategies. We perform evolutionary simulations and find that longer memory increases the average cooperation rate for reactive-$n$ strategies but not for reactive counting strategies. Paying attention to the sequence of moves is thus necessary for reaping the advantages of longer memory.
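A minimal sketch of the two strategy classes (parameters hypothetical, chosen only for illustration):

```python
import random

def reactive_n_strategy(n, probs):
    # Reactive-n: cooperate with probability probs[h], where h is the tuple
    # of the co-player's last n moves ('C'/'D'); assumes the history already
    # holds at least n moves (e.g., seed it with n cooperative moves).
    def play(opp_history):
        return 'C' if random.random() < probs[tuple(opp_history[-n:])] else 'D'
    return play

def reactive_n_counting_strategy(n, probs_by_count):
    # Counting variant: only the number k of co-player cooperations in the
    # last n rounds matters, so n + 1 parameters suffice instead of 2^n.
    def play(opp_history):
        k = opp_history[-n:].count('C')
        return 'C' if random.random() < probs_by_count[k] else 'D'
    return play

# Hypothetical example: a generous reactive-2 counting strategy.
generous2 = reactive_n_counting_strategy(2, {0: 0.0, 1: 0.5, 2: 1.0})
```

The $2^n$ versus $n + 1$ parameter counts make explicit what the counting restriction discards: the order of the co-player's moves.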
This book provides an introduction to linear models and the theories behind them. Our goal is to give a rigorous introduction for readers with prior exposure to ordinary least squares. In machine learning, the output is usually a nonlinear function of the input; deep learning even aims to find a nonlinear dependence with many layers, which requires a large amount of computation. However, most of these algorithms build upon simple linear models. We therefore describe linear models from different perspectives and examine the properties and theories behind them. The linear model is the main technique in regression problems, and the primary tool for it is the least squares approximation, which minimizes a sum of squared errors. This is a natural choice when we are interested in finding the regression function that minimizes the corresponding expected squared error. This book is primarily a summary of the purpose and significance of important theories behind linear models, e.g., distribution theory and the minimum variance estimator. We first describe ordinary least squares from three different points of view, after which we perturb the model with random noise, in particular Gaussian noise. The Gaussian noise gives rise to a likelihood, through which we introduce the maximum likelihood estimator; it also yields several distribution theories. The distribution theory of least squares helps us answer various questions and introduces related applications. We then prove that least squares is the best linear unbiased estimator in the sense of mean squared error and, most importantly, that it actually approaches the theoretical limit. We conclude with linear models under the Bayesian approach and beyond.
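For readers who want the anchor formulas, the standard facts the book builds on are

$$ \hat{\beta} = \operatorname*{arg\,min}_{\beta}\,\|y - X\beta\|_2^2 = (X^\top X)^{-1} X^\top y $$

(assuming $X$ has full column rank), and, under the Gaussian model $y = X\beta + \varepsilon$ with $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$, this estimator coincides with the maximum likelihood estimator and satisfies $\hat{\beta} \sim \mathcal{N}\big(\beta,\ \sigma^2 (X^\top X)^{-1}\big)$, which is the starting point of the distribution theory mentioned above.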
A Gaussian process is proposed as a model for the posterior distribution of the local predictive ability of a model or expert, conditional on a vector of covariates, from historical predictions in the form of log predictive scores. Assuming Gaussian expert predictions and a Gaussian data generating process, a linear transformation of the predictive score follows a noncentral chi-squared distribution with one degree of freedom. Motivated by this, we develop a noncentral chi-squared Gaussian process regression to flexibly model local predictive ability, with the posterior distribution of the latent GP function and kernel hyperparameters sampled by Hamiltonian Monte Carlo. We show that a cube-root transformation of the log scores is approximately Gaussian with homoscedastic variance, which makes it possible to estimate the model much faster by marginalizing the latent GP function analytically. Linear pools based on learned local predictive ability are applied to predict daily bike usage in Washington DC.
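One way to see the distributional claim, in notation chosen for this summary: write the expert's predictive density as $\mathcal{N}(y; \mu, \sigma^2)$ and the data generating process as $y \sim \mathcal{N}(m, s^2)$. The log score is

$$ \mathrm{LS}(y) = -\tfrac{1}{2}\log(2\pi\sigma^2) - \frac{(y-\mu)^2}{2\sigma^2}, $$

so the linear transformation

$$ \frac{\sigma^2}{s^2}\Big(-2\,\mathrm{LS}(y) - \log(2\pi\sigma^2)\Big) = \Big(\frac{y-\mu}{s}\Big)^2 \sim \chi^2_1(\lambda), \qquad \lambda = \Big(\frac{m-\mu}{s}\Big)^2, $$

is noncentral chi-squared with one degree of freedom.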
We consider the problem of learning support vector machines that are robust to uncertainty. It has been established in the literature that typical loss functions, including the hinge loss, are sensitive to data perturbations and outliers, and thus perform poorly in this setting. In contrast, using the 0-1 loss or a suitable non-convex approximation results in robust estimators, at the expense of large computational costs. In this paper we use mixed-integer optimization techniques to derive a new loss function that better approximates the 0-1 loss than existing alternatives, while preserving the convexity of the learning problem. In our computational results, we show that the proposed estimator is competitive with standard SVMs with the hinge loss in outlier-free regimes and better in the presence of outliers.
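For reference, the losses involved, for a classifier $f$ and label $y \in \{-1, +1\}$ (the ramp loss is one common non-convex robust approximation; the paper's new convex loss is distinct from all three):

$$ \ell_{0\text{-}1} = \mathbb{1}\{y f(x) \le 0\}, \qquad \ell_{\mathrm{hinge}} = \max\{0,\ 1 - y f(x)\}, \qquad \ell_{\mathrm{ramp}} = \min\big\{1,\ \max\{0,\ 1 - y f(x)\}\big\}. $$

The hinge loss grows without bound as the margin $y f(x)$ decreases, which is precisely what makes it sensitive to outliers; the bounded ramp loss is robust but non-convex.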
Forecasts for key macroeconomic variables are almost always made simultaneously by the same organizations, presented together, and used together in policy analyses and decision-making. It is therefore important to know whether the forecasters are skillful enough to forecast the future values of those variables. Here a method for the joint evaluation of skill in directional forecasts of multiple variables is introduced. The method is simple to use and does not rely on the complicated assumptions required by conventional statistical methods for measuring the accuracy of directional forecasts. The GDP growth and inflation forecasts of three Thai organizations, namely the Bank of Thailand, the Fiscal Policy Office, and the Office of the National Economic and Social Development Council, together with the actual GDP growth and inflation of Thailand between 2001 and 2021, are employed to demonstrate how the method can be used to evaluate the skills of forecasters in practice. The overall results indicate that these three organizations are somewhat skillful in forecasting the direction-of-change of GDP growth and inflation when no band or a band of +/- 1 standard deviation of the forecasted outcome is considered. However, when a band of +/- 0.5% of the forecasted outcome is introduced, the skills of these three organizations in forecasting the direction-of-change of GDP growth and inflation are, at best, little better than intelligent guesswork.
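As a generic illustration only (the paper's joint multi-variable evaluation method is not spelled out in the abstract), a banded directional hit rate for a single variable might be computed along these lines; treating the band as a no-change zone is an assumption of this sketch:

```python
import numpy as np

def directional_hit_rate(forecast, actual, band=0.0):
    # Direction of change: +1 (up), -1 (down), or 0 if the change falls
    # within +/- band (treated as "no change" in this sketch).
    def direction(series):
        delta = np.diff(series)
        return np.where(np.abs(delta) <= band, 0, np.sign(delta))
    return np.mean(direction(forecast) == direction(actual))
```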