We consider the problem of inferring a contact network from nodal states observed during an epidemiological process. In a black-box Bayesian optimisation framework this problem reduces to a discrete likelihood optimisation over the set of possible networks. The high dimensionality of this set, which grows quadratically with the number of network nodes, makes this optimisation computationally challenging. Moreover, computing the likelihood of a network requires estimating the probability of the observed data being realised during the evolution of the epidemiological process on this network. A stochastic simulation algorithm struggles to estimate the rare event of observing the data (which corresponds to the ground-truth network) during the evolution on a significantly different network, and hence hinders optimisation of the likelihood. We replace the stochastic simulation with the solution of the chemical master equation for the probabilities of all network states. This equation also suffers from the curse of dimensionality owing to the exponentially large number of network states. We overcome this by approximating the probabilities of the states in the tensor-train decomposition. This enables fast and accurate computation of small probabilities and likelihoods. Numerical simulations demonstrate efficient black-box Bayesian inference of the network.
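A minimal sketch of the key computational idea, under assumptions not taken from the paper (random toy cores, binary nodal states, tiny dimensions): once the joint distribution over network states is stored in tensor-train (TT) format, the probability of any single state is obtained by a short chain of small matrix products instead of addressing an exponentially large array.

```python
# Minimal sketch (not the authors' implementation): evaluating the probability of one
# state from a tensor-train (TT) representation of a joint distribution over d binary
# variables. In TT format P(i_1,...,i_d) = G_1[i_1] @ G_2[i_2] @ ... @ G_d[i_d], with
# G_k[i] small matrices, so one probability costs O(d r^2) instead of storing 2^d values.
import numpy as np

def tt_prob(cores, state):
    """cores[k] has shape (r_k, 2, r_{k+1}); state is a tuple of 0/1 indices."""
    v = np.ones((1, 1))
    for core, i in zip(cores, state):
        v = v @ core[:, i, :]
    return float(v[0, 0])

# toy example: d = 4 binary nodes, TT rank 3 (random nonnegative cores, then normalised)
rng = np.random.default_rng(0)
d, r = 4, 3
ranks = [1] + [r] * (d - 1) + [1]
cores = [rng.random((ranks[k], 2, ranks[k + 1])) for k in range(d)]
Z = sum(tt_prob(cores, s) for s in np.ndindex(*([2] * d)))   # normalising constant
print(tt_prob(cores, (1, 0, 1, 0)) / Z)                       # probability of one state
```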
The maximum likelihood estimator for mixtures of elliptically symmetric distributions is shown to be consistent for estimating its population version, where the underlying distribution $P$ is nonparametric and does not necessarily belong to the class of mixtures on which the estimator is based. In a situation where $P$ is a mixture of sufficiently well-separated but nonparametric distributions, it is shown that the components of the population version of the estimator correspond to the well-separated components of $P$. This provides some theoretical justification for the use of such estimators for cluster analysis in case $P$ has well-separated subpopulations, even if these subpopulations differ from what the mixture model assumes.
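A hedged illustration of the practical message, under assumptions not from the paper (Gaussian components as the simplest elliptical family, scikit-learn's GaussianMixture as the ML estimator, uniform-on-a-box subpopulations): even though the data-generating $P$ is not itself a Gaussian mixture, well-separated subpopulations are still recovered by the fitted mixture components.

```python
# Illustrative sketch: the true P is a mixture of two uniform (non-Gaussian) subpopulations,
# but they are well separated, so a fitted 2-component Gaussian mixture recovers the clusters.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
cluster_a = rng.uniform(low=[0, 0], high=[1, 1], size=(300, 2))   # subpopulation 1
cluster_b = rng.uniform(low=[5, 5], high=[6, 6], size=(300, 2))   # subpopulation 2 (far away)
X = np.vstack([cluster_a, cluster_b])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)
# each fitted component should cover exactly one of the well-separated subpopulations
print(np.bincount(labels[:300], minlength=2), np.bincount(labels[300:], minlength=2))
```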
To date, most methods for simulating conditioned diffusions are limited to the Euclidean setting. The conditioned process can be constructed using a change of measure known as Doob's $h$-transform. The specific type of conditioning depends on a function $h$ which is typically unknown in closed form. To resolve this, we extend the notion of guided processes to a manifold $M$, where one replaces $h$ by a function based on the heat kernel on $M$. We consider the case of a Brownian motion with drift, constructed using the frame bundle of $M$, conditioned to hit a point $x_T$ at time $T$. We prove equivalence of the laws of the conditioned process and the guided process with a tractable Radon-Nikodym derivative. Subsequently, we show how one can obtain guided processes on any manifold $N$ that is diffeomorphic to $M$ without assuming knowledge of the heat kernel on $N$. We illustrate our results with numerical simulations and an example of parameter estimation where a diffusion process on the torus is observed discretely in time.
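For intuition, here is a hedged Euclidean analogue (not the manifold construction of the paper): a guided process for a drifted Brownian motion conditioned to hit $x_T$ at time $T$, where the guiding term $(x_T - x)/(T - t)$ is the gradient of the log heat kernel in $\mathbb{R}^d$; on a manifold $M$ this term is replaced by an expression built from the heat kernel on $M$. The function name and step count are illustrative choices.

```python
# Minimal sketch: Euler scheme for a guided bridge in R^d. On a manifold the guiding drift
# (x_T - x)/(T - t) would be replaced by a heat-kernel-based term, as in the paper.
import numpy as np

def guided_bridge(x0, x_T, T, drift, sigma, n_steps=1000, rng=None):
    rng = rng or np.random.default_rng()
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    path = [x.copy()]
    for k in range(n_steps - 1):          # stop one step early: the guiding drift blows up at T
        t = k * dt
        guide = (np.asarray(x_T) - x) / (T - t)
        x = x + (drift(x) + guide) * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
        path.append(x.copy())
    path.append(np.asarray(x_T, dtype=float))
    return np.array(path)

path = guided_bridge(x0=[0.0], x_T=[1.0], T=1.0,
                     drift=lambda x: 0.5 * np.ones_like(x), sigma=0.3)
print(path[0], path[-1])                  # starts at x0, ends at x_T
```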
Multi-contrast (MC) Magnetic Resonance Imaging (MRI) reconstruction aims to incorporate a reference image of an auxiliary modality to guide the reconstruction of the target modality. Known MC reconstruction methods perform well with a fully sampled reference image, but usually exhibit inferior performance, compared to single-contrast (SC) methods, when the reference image is missing or of low quality. To address this issue, we propose DuDoUniNeXt, a unified dual-domain MRI reconstruction network that can accommodate scenarios involving absent, low-quality, and high-quality reference images. DuDoUniNeXt adopts a hybrid backbone that combines CNN and ViT, enabling specific adjustment of the image-domain and k-space reconstruction. Specifically, an adaptive coarse-to-fine feature fusion module (AdaC2F) is devised to dynamically process the information from reference images of varying qualities. In addition, a partially shared shallow feature extractor (PaSS) is proposed, which uses shared and distinct parameters to handle consistent and discrepant information among contrasts. Experimental results demonstrate that the proposed model surpasses state-of-the-art SC and MC models significantly. Ablation studies show the effectiveness of the proposed hybrid backbone, AdaC2F, PaSS, and the dual-domain unified learning scheme.
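A hedged sketch of the "partially shared" idea behind PaSS; the layer sizes, channel split, and class name are assumptions for illustration, not the paper's exact architecture: shared convolutions extract features common to both contrasts, while per-contrast convolutions capture contrast-specific (discrepant) information.

```python
# Sketch (assumed architecture details): shared + private shallow convolutions per contrast.
import torch
import torch.nn as nn

class PartiallySharedExtractor(nn.Module):
    def __init__(self, in_ch=1, shared_ch=16, private_ch=16):
        super().__init__()
        self.shared = nn.Conv2d(in_ch, shared_ch, 3, padding=1)            # shared across contrasts
        self.private_target = nn.Conv2d(in_ch, private_ch, 3, padding=1)   # target contrast only
        self.private_ref = nn.Conv2d(in_ch, private_ch, 3, padding=1)      # reference contrast only

    def forward(self, target, reference):
        f_t = torch.cat([self.shared(target), self.private_target(target)], dim=1)
        f_r = torch.cat([self.shared(reference), self.private_ref(reference)], dim=1)
        return f_t, f_r

extractor = PartiallySharedExtractor()
t, r = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
f_t, f_r = extractor(t, r)
print(f_t.shape, f_r.shape)   # torch.Size([1, 32, 64, 64]) each
```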
We analyse the error introduced in reliability measures by the assumption of independence amongst the component lifetimes. In reliability theory, we come across different n-component structures such as series, parallel, and k-out-of-n systems. An n-component series system works only if all the n components work. While studying the reliability measures of an n-component series system, we mostly assume that all the components have independent lifetimes. Such an assumption eases mathematical complexity while analyzing the data and hence is very common. In reality, however, the lifetimes of the components are often strongly interdependent, so the assumption of independence leads to inaccurate analysis of the data. In many situations, such as studying a complex system with many components, we resort to assuming independence while allowing some room for error. However, if we have some knowledge of the behaviour of the errors, or an estimate of the error bound, we can decide whether to assume independence and prefer mathematical simplicity (if we know the error is within our allowed limit), or to retain the mathematical complexity and obtain accurate results without assuming independence. We aim to find the relative errors in the reliability measures for an n-component series system.
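A worked toy example of the kind of relative error in question, under an assumed dependence model not taken from the paper: the true lifetimes follow Gumbel's bivariate exponential, $S(x,y)=\exp(-x-y-\theta xy)$, whose marginals are standard exponentials, so an analyst assuming independence would use $S_{\mathrm{ind}}(t)=e^{-2t}$ for the 2-component series system.

```python
# Toy example (assumed dependence model): relative error in series-system reliability
# when independence is wrongly assumed.
import numpy as np

theta = 0.5
t = np.linspace(0.1, 3.0, 6)

true_reliability = np.exp(-2 * t - theta * t**2)   # P(T1 > t, T2 > t) under dependence
indep_reliability = np.exp(-2 * t)                 # same quantity if independence is assumed

relative_error = (indep_reliability - true_reliability) / true_reliability
for ti, err in zip(t, relative_error):
    print(f"t = {ti:.2f}: relative error = {err:.3f}")
```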
Temporal network data is often encoded as time-stamped interaction events between senders and receivers, such as co-authoring scientific articles or communication via email. A number of relational event frameworks have been proposed to address specific issues raised by complex temporal dependencies. These models attempt to quantify how individual behaviour, endogenous and exogenous factors, as well as interactions with other individuals modify the network dynamics over time. It is often of interest to determine whether changes in the network can be attributed to endogenous mechanisms reflecting natural relational tendencies, such as reciprocity or triadic effects. The propensity to form or receive ties can also, at least partially, be related to actor attributes. Nodal heterogeneity in the network is often modelled by including actor-specific or dyadic covariates. However, comprehensively capturing all personality traits is difficult in practice, if not impossible. A failure to account for heterogeneity may confound the substantive effect of key variables of interest. This work shows that failing to account for node-level sender and receiver effects can induce ghost triadic effects. We propose a random-effect extension of the relational event model to deal with these problems. We show that it is often more effective than more traditional approaches, such as including in-degree and out-degree statistics. These results suggest that the violation of the hierarchy principle due to insufficient information about nodal heterogeneity can be resolved by including random effects in the relational event model as standard.
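A minimal sketch of the modelling idea, with notation assumed for illustration rather than taken from the paper: the hazard of a sender-receiver event combines endogenous statistics (here, reciprocity) with actor-level random effects $a_s$ and $b_r$; omitting these effects pushes unexplained heterogeneity into the endogenous statistics, which is the "ghost triadic effect" mechanism described above.

```python
# Sketch (assumed notation): event rate with reciprocity and sender/receiver random effects.
import numpy as np

rng = np.random.default_rng(2)
n_actors = 5
beta0, beta_recip = -2.0, 0.8
a = rng.normal(0, 0.5, n_actors)        # sender random effects
b = rng.normal(0, 0.5, n_actors)        # receiver random effects

def rate(s, r, reciprocity):
    """Hazard of the next (s -> r) event given the current reciprocity statistic."""
    return np.exp(beta0 + beta_recip * reciprocity + a[s] + b[r])

print(rate(0, 1, reciprocity=1.0), rate(0, 1, reciprocity=0.0))
```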
Global concern over food prices and security has been exacerbated by the impacts of armed conflicts such as the Russia-Ukraine war, pandemic diseases, and climate change. Traditionally, analyses of global food prices and their associations with socioeconomic factors have relied on static linear regression models. However, the complexity of socioeconomic factors and their implications extends beyond simple linear relationships. By incorporating determinant analysis, identification of critical characteristics, and comparative model analysis, this study aimed to identify the critical socioeconomic characteristics and multidimensional relationships associated with the underlying factors of food prices and security. Machine learning tools were used to uncover the socioeconomic factors influencing global food prices from 2000 to 2022. A total of 105 key variables from the World Development Indicators and the Food and Agriculture Organization of the United Nations were selected. Machine learning identified four key dimensions of food price security: economic and population metrics, military spending, health spending, and environmental factors. The top 30 determinants were selected for feature extraction using data mining. The efficiency of the support vector regression model allowed for precise prediction and correlation analysis. Keywords: environment and growth, global economics, price fluctuation, support vector regression
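A hedged sketch of the feature-selection plus support vector regression pipeline described above, on synthetic stand-in data; the study's actual 105 World Development Indicators / FAO variables and its exact selection procedure are not reproduced here.

```python
# Sketch (synthetic data): select the most informative predictors, then fit SVR.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 50))                       # stand-in for socioeconomic indicators
y = X[:, :5] @ np.array([1.0, -0.5, 0.8, 0.3, -1.2]) + rng.normal(0, 0.5, 200)  # stand-in price index

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_regression, k=10),                 # keep the top-ranked predictors
    SVR(kernel="rbf", C=10.0),
)
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```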
We often rely on censuses of triangulations to guide our intuition in $3$-manifold topology. However, this can lead to misplaced faith in conjectures if the smallest counterexamples are too large to appear in our census. Since the number of triangulations increases super-exponentially with size, there is no way to expand a census beyond relatively small triangulations; the current census only goes up to $10$ tetrahedra. Here, we show that it is feasible to search for large and hard-to-find counterexamples by using heuristics to selectively (rather than exhaustively) enumerate triangulations. We use this idea to find counterexamples to three conjectures which ask, for certain $3$-manifolds, whether one-vertex triangulations always have a "distinctive" edge that would allow us to recognise the $3$-manifold.
We show that the known list-decoding algorithms for univariate multiplicity and folded Reed-Solomon (FRS) codes can be made to run in nearly-linear time. This yields, to our knowledge, the first known family of codes that can be decoded in nearly linear time, even as they approach the list decoding capacity. Univariate multiplicity codes and FRS codes are natural variants of Reed-Solomon codes that were discovered and studied for their applications to list-decoding. It is known that for every $\epsilon >0$, and rate $R \in (0,1)$, there exist explicit families of these codes that have rate $R$ and can be list-decoded from a $(1-R-\epsilon)$ fraction of errors with constant list size in polynomial time (Guruswami & Wang (IEEE Trans. Inform. Theory, 2013) and Kopparty, Ron-Zewi, Saraf & Wootters (SIAM J. Comput. 2023)). In this work, we present randomized algorithms that perform the above tasks in nearly linear time. Our algorithms have two main components. The first builds upon the lattice-based approach of Alekhnovich (IEEE Trans. Inf. Theory 2005), who designed a nearly linear time list-decoding algorithm for Reed-Solomon codes approaching the Johnson radius. As part of the second component, we design nearly-linear time algorithms for two natural algebraic problems. The first algorithm solves linear differential equations of the form $Q\left(x, f(x), \frac{df}{dx}, \dots,\frac{d^m f}{dx^m}\right) \equiv 0$ where $Q$ has the form $Q(x,y_0,\dots,y_m) = \tilde{Q}(x) + \sum_{i = 0}^m Q_i(x)\cdot y_i$. The second solves functional equations of the form $Q\left(x, f(x), f(\gamma x), \dots,f(\gamma^m x)\right) \equiv 0$ where $\gamma$ is a high-order field element. These algorithms can be viewed as generalizations of classical algorithms of Sieveking (Computing 1972) and Kung (Numer. Math. 1974) for computing the modular inverse of a power series, and might be of independent interest.
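For context, here is a toy version of the classical Sieveking-Kung idea the abstract refers to, not the paper's algorithm: the inverse of a power series $f$ with $f(0)\neq 0$ is computed modulo $x^n$ by the Newton iteration $g \leftarrow g\,(2 - fg)$, doubling the precision at each step. The prime modulus and the quadratic-time polynomial multiplication are simplifying assumptions for this sketch.

```python
# Toy Sieveking-Kung power series inversion over F_p via Newton iteration.
p = 998244353          # a prime modulus (assumption for the toy example)

def mul_mod(a, b, n):
    """Multiply polynomials a, b modulo x^n with coefficients in F_p (schoolbook)."""
    out = [0] * n
    for i, ai in enumerate(a[:n]):
        if ai:
            for j, bj in enumerate(b[: n - i]):
                out[i + j] = (out[i + j] + ai * bj) % p
    return out

def series_inverse(f, n):
    """Return g with f*g = 1 (mod x^n), assuming f[0] != 0."""
    g = [pow(f[0], p - 2, p)]                   # invert the constant term
    prec = 1
    while prec < n:
        prec = min(2 * prec, n)
        fg = mul_mod(f[:prec], g, prec)
        two_minus_fg = [(-c) % p for c in fg]
        two_minus_fg[0] = (two_minus_fg[0] + 2) % p
        g = mul_mod(g, two_minus_fg, prec)       # g <- g*(2 - f*g) mod x^prec
    return g

f = [1, 5, 7, 11, 3, 2, 9, 4]
g = series_inverse(f, 8)
print(mul_mod(f, g, 8))    # [1, 0, 0, 0, 0, 0, 0, 0]
```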
Under interference, the potential outcomes of a unit depend on treatments assigned to other units. A network interference structure is typically assumed to be given and accurate. In this paper, we study the problems resulting from misspecifying these networks. First, we derive bounds on the bias arising from estimating causal effects under a misspecified network. We show that the maximal possible bias depends on the divergence between the assumed network and the true one with respect to the induced exposure probabilities. Then, we propose a novel estimator that leverages multiple networks simultaneously and is unbiased if one of the networks is correct, thus providing robustness to network specification. Additionally, we develop a probabilistic bias analysis that quantifies the impact of a postulated misspecification mechanism on the causal estimates. We illustrate key issues in simulations and demonstrate the utility of the proposed methods in a social network field experiment and a cluster-randomized trial with suspected cross-cluster contamination.
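An illustrative sketch, with toy networks and a simple exposure mapping rather than the paper's estimator: the same Bernoulli(1/2) treatment assignment induces different exposure probabilities under an assumed network and the true one, which is the source of the bias bounded in the paper.

```python
# Sketch (toy networks, Monte Carlo): exposure probabilities under true vs. assumed network.
import numpy as np

rng = np.random.default_rng(4)
n = 6
A_true = np.zeros((n, n), dtype=int)
A_true[0, 1] = A_true[1, 0] = 1
A_true[1, 2] = A_true[2, 1] = 1
A_true[3, 4] = A_true[4, 3] = 1
A_assumed = A_true.copy()
A_assumed[0, 5] = A_assumed[5, 0] = 1          # a misspecified (extra) edge

def exposure_probs(A, n_draws=100_000):
    """P(unit has at least one treated neighbour) under Bernoulli(1/2) assignment."""
    Z = rng.integers(0, 2, size=(n_draws, n))
    exposed = (Z @ A.T) > 0
    return exposed.mean(axis=0)

print("true   :", exposure_probs(A_true))
print("assumed:", exposure_probs(A_assumed))
```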
Consider minimizing the entropy of a mixture of states by choosing each state subject to constraints. If the spectrum of each state is fixed, we expect that in order to reduce the entropy of the mixture, we should make the states less distinguishable in some sense. Here, we study a class of optimization problems that are inspired by this situation and shed light on the relevant notions of distinguishability. The motivation for our study is the recently introduced spin alignment conjecture. In the original version of the underlying problem, each state in the mixture is constrained to be a freely chosen state on a subset of $n$ qubits tensored with a fixed state $Q$ on each of the qubits in the complement. According to the conjecture, the entropy of the mixture is minimized by choosing the freely chosen state in each term to be a tensor product of projectors onto a fixed maximal eigenvector of $Q$, which maximally "aligns" the terms in the mixture. We generalize this problem in several ways. First, instead of minimizing entropy, we consider maximizing arbitrary unitarily invariant convex functions such as Fan norms and Schatten norms. To formalize and generalize the conjectured required alignment, we define alignment as a preorder on tuples of self-adjoint operators that is induced by majorization. We prove the generalized conjecture for Schatten norms of integer order, for the case where the freely chosen states are constrained to be classical, and for the case where only two states contribute to the mixture and $Q$ is proportional to a projector. The last case fits into a more general situation where we give explicit conditions for maximal alignment. The spin alignment problem has a natural "dual" formulation, versions of which have further generalizations that we introduce.
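A small numerical illustration, not a proof, of the alignment intuition in a two-qubit instance of the setup above: for the mixture $\rho = \tfrac12(\sigma_1 \otimes Q + Q \otimes \sigma_2)$, choosing the free states $\sigma_i$ as projectors onto the maximal eigenvector of $Q$ ("aligned") gives a lower entropy than choosing them orthogonal to it. The specific $Q$ below is an arbitrary assumption for the example.

```python
# Numerical illustration: entropy of an aligned vs. anti-aligned two-term spin-alignment mixture.
import numpy as np

def entropy(rho):
    """von Neumann entropy in nats."""
    vals = np.linalg.eigvalsh(rho)
    vals = vals[vals > 1e-12]
    return float(-(vals * np.log(vals)).sum())

Q = np.diag([0.8, 0.2])                      # fixed qubit state, maximal eigenvector = |0>
aligned = np.diag([1.0, 0.0])                # projector onto the maximal eigenvector of Q
anti = np.diag([0.0, 1.0])                   # projector onto the minimal eigenvector

def mixture(sigma1, sigma2):
    return 0.5 * (np.kron(sigma1, Q) + np.kron(Q, sigma2))

print("aligned:", entropy(mixture(aligned, aligned)))   # lower entropy
print("anti   :", entropy(mixture(anti, anti)))         # higher entropy
```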