In this work, we propose an efficient nullspace-preserving saddle search (NPSS) method for a class of phase transitions involving translational invariance, where the critical states are often degenerate. The NPSS method consists of two stages: escaping from the basin and searching for the index-1 generalized saddle point. To overcome the challenges posed by degeneracy, the method climbs upward from the generalized local minimum in segments. In each segment, an effective ascent direction is ensured by keeping the direction orthogonal to the nullspace of the segment's initial state. The method escapes the basin quickly and converges to the transition states. We apply the NPSS method to phase transitions between crystals, and between a crystal and a quasicrystal, based on the Landau-Brazovskii and Lifshitz-Petrich free energy functionals. Numerical results demonstrate the good performance of the NPSS method.
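To make the key projection step concrete, the following minimal sketch (not the authors' implementation) shows how an ascent direction can be kept orthogonal to a nullspace; the gradient callable `grad_fn` and the orthonormal nullspace basis `N` are assumed to be supplied.

```python
import numpy as np

def nullspace_orthogonal_direction(grad, N):
    """Remove from `grad` its components along the nullspace spanned by the
    orthonormal columns of N, keeping the ascent direction effective."""
    return grad - N @ (N.T @ grad)

def climb_segment(x, grad_fn, N, step=1e-2, n_steps=50):
    """One climbing segment: ascend along the projected gradient while
    staying orthogonal to the nullspace of the segment's initial state."""
    for _ in range(n_steps):
        x = x + step * nullspace_orthogonal_direction(grad_fn(x), N)
    return x
```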
This work develops two fast randomized algorithms for computing the generalized tensor singular value decomposition (GTSVD) based on the tubal product (t-product). Random projections are used to capture the essential actions of the underlying data tensors, yielding small sketches of the original tensors that are easier to handle. Owing to the small size of the sketch tensors, deterministic algorithms can be applied to compute their GTSVDs, from which the GTSVD of the original large-scale data tensors is then recovered. Experiments are conducted to show the effectiveness of the proposed approach.
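As a rough illustration of the sketching idea, here is the standard matrix analogue of the randomized scheme (the paper works with tensors under the t-product, which, roughly speaking, applies analogous steps in the Fourier domain along the tubes); `k` is a target rank and `p` an oversampling parameter.

```python
import numpy as np

def randomized_sketch_svd(A, k, p=5):
    """Randomized range finder followed by SVD recovery: capture the
    important action of A with a random projection, decompose the small
    sketch deterministically, then lift the factors back."""
    m, n = A.shape
    Omega = np.random.randn(n, k + p)     # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for the action of A
    B = Q.T @ A                           # small sketch, easy to handle
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub, s, Vt                  # approximate SVD of the original A
```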
Theoretical developments in sequential Bayesian analysis of multivariate dynamic models underlie new methodology for causal prediction. This extends the utility of existing models with computationally efficient methodology, enabling routine exploration of Bayesian counterfactual analyses with multiple selected time series as synthetic controls. Methodological contributions also define the concept of outcome adaptive modelling to monitor and inferentially respond to changes in experimental time series following interventions designed to explore causal effects. The benefits of sequential analyses with time-varying parameter models for causal investigations are inherited in this broader setting. A case study in commercial causal analysis, involving retail revenue outcomes related to marketing interventions, highlights the methodological advances.
This work characterizes equivariant polynomial functions from tuples of tensor inputs to tensor outputs. Loosely motivated by physics, we focus on functions that are equivariant with respect to the diagonal action of the orthogonal group on tensors, and we show how to extend this characterization to other linear algebraic groups, including the Lorentz and symplectic groups. Our goal in deriving these characterizations is to define equivariant machine learning models. In particular, we focus on the sparse vector estimation problem. This problem has been broadly studied in the theoretical computer science literature, where explicit spectral methods, derived via sum-of-squares techniques, can be shown to recover sparse vectors under certain assumptions. Our numerical results show that the proposed equivariant machine learning models can learn spectral methods that outperform the best theoretically known spectral methods in some regimes. The experiments also suggest that learned spectral methods can solve the problem in settings that have not yet been theoretically analyzed. This points to a promising direction in which theory can inform machine learning models and, conversely, machine learning models can inform theory.
In this work, we detail the GPU porting of an in-house pseudo-spectral solver tailored to large-scale, interface-resolved simulations of drop- and bubble-laden turbulent flows. The code relies on direct numerical simulation of the Navier-Stokes equations, which describe the flow field, coupled with a phase-field method, which describes the shape, deformation, and topological changes of the interfaces of the drops or bubbles. The governing equations, the Navier-Stokes and Cahn-Hilliard equations, are solved using a pseudo-spectral method that transforms the variables into wavenumber space. To target large-scale simulations, the code relies on multilevel parallelism. The first level of parallelism uses the message-passing interface (MPI) and targets multi-core, CPU-based infrastructures. The second level relies on OpenACC directives and cuFFT libraries and accelerates execution on GPU-based infrastructures. The resulting multiphase flow solver can be executed efficiently on heterogeneous computing infrastructures and exhibits a remarkable speed-up when GPUs are employed. Thanks to the modular structure of the code and the directive-based strategy for offloading execution to GPUs, only minor code modifications are required when targeting different computing architectures. This eases code maintenance, version control, and the implementation of additional modules or governing equations.
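For readers unfamiliar with the pseudo-spectral approach, the following one-dimensional sketch (illustrative only, far from the production MPI/OpenACC solver) shows how a standard semi-implicit Cahn-Hilliard step can be carried out in wavenumber space via FFTs; the interface parameter `eps` and domain length `L` are assumed inputs.

```python
import numpy as np

def cahn_hilliard_step(phi, dt, eps, L):
    """One semi-implicit pseudo-spectral step of the 1D Cahn-Hilliard
    equation: the stiff biharmonic term is treated implicitly, the
    nonlinearity explicitly, all in wavenumber space."""
    n = phi.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers
    phi_hat = np.fft.fft(phi)
    nonlin_hat = np.fft.fft(phi**3 - phi)        # nonlinear term, physical space
    phi_hat = (phi_hat - dt * k**2 * nonlin_hat) / (1 + dt * eps**2 * k**4)
    return np.real(np.fft.ifft(phi_hat))
```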
In this paper, we deal with sequential testing of multiple hypotheses. Within the general backward-induction scheme for constructing optimal tests, we propose a modification that provides a simplified (generally speaking, suboptimal) version of the optimal test for any particular optimization criterion. We call this the DBC (Dropped Backward Control) version of the optimal test. In particular, for the case of two simple hypotheses, dropping backward control in the Bayesian test produces the classical sequential probability ratio test (SPRT). Similarly, dropping backward control in the modified Kiefer-Weiss solutions produces Lorden's 2-SPRTs. In the case of more than two hypotheses, we obtain in this way new classes of sequential multi-hypothesis tests and investigate their properties. The efficiency of the DBC tests is evaluated with respect to the optimal Bayesian multi-hypothesis test and with respect to the matrix sequential probability ratio test (MSPRT) of Armitage. In a multi-hypothesis variant of the Kiefer-Weiss problem for binomial proportions, the performance of the DBC test is numerically compared with that of the exact solution. In a model of normal observations with a linear trend, the performance of the DBC test is numerically compared with that of the MSPRT. Some other numerical examples are presented. In all cases the proposed tests exhibit very high efficiency with respect to the optimal tests (more than 99.3\% when sampling from Bernoulli populations) and/or with respect to the MSPRT (even outperforming the latter in some scenarios).
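For context, the two-hypothesis SPRT that DBC recovers can be sketched in a few lines; this is the textbook version with Wald's approximate thresholds, not the paper's multi-hypothesis DBC construction.

```python
import numpy as np

def sprt(samples, logpdf0, logpdf1, alpha=0.05, beta=0.05):
    """Classical SPRT for two simple hypotheses: accumulate the
    log-likelihood ratio until it crosses one of Wald's thresholds."""
    a = np.log(beta / (1 - alpha))        # lower threshold: accept H0
    b = np.log((1 - beta) / alpha)        # upper threshold: accept H1
    llr, n = 0.0, 0
    for n, x in enumerate(samples, start=1):
        llr += logpdf1(x) - logpdf0(x)
        if llr <= a:
            return "accept H0", n
        if llr >= b:
            return "accept H1", n
    return "undecided", n                 # sample stream exhausted
```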
Topology optimization is used to systematically design contact-aided thermo-mechanical regulators, i.e., components whose effective thermal conductivity is tunable by mechanical deformation and contact. The thermo-mechanical interactions are modeled using a fully coupled nonlinear thermo-mechanical finite element framework. To obtain the intricate heat transfer response, the components leverage self-contact, which is modeled using a third-medium contact method. The effective heat transfer properties of the regulators are tuned by solving a topology optimization problem with a traditional gradient-based algorithm. Several designs of thermo-mechanical regulators in the form of switches, diodes, and triodes are presented.
Many combinatorial optimization problems can be formulated as the search for a subgraph that satisfies certain properties and minimizes the total weight. We assume here that the vertices correspond to points in a metric space and can take any position in given uncertainty sets. The cost function to be minimized is then the sum of the distances for the worst-case positions of the vertices in their uncertainty sets. We propose two types of polynomial-time approximation algorithms. The first relies on solving a deterministic counterpart of the problem in which the uncertain distances are replaced with maximum pairwise distances. We study in detail the resulting approximation ratio, which depends on the structure of the feasible subgraphs and on whether the metric space is Ptolemaic. The second algorithm is a fully polynomial-time approximation scheme for the special case of $s$-$t$ paths.
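As a small illustration of the deterministic counterpart, under the simplifying assumption that the uncertainty sets are Euclidean balls, the maximum pairwise distance has a closed form:

```python
import numpy as np

def max_pairwise_distance(c1, r1, c2, r2):
    """Worst-case distance between two ball uncertainty sets B(c1, r1)
    and B(c2, r2): it is attained on the line through the centres."""
    return np.linalg.norm(np.asarray(c1) - np.asarray(c2)) + r1 + r2

# Deterministic counterpart: re-weight each edge (i, j) by the maximum
# pairwise distance of its endpoints' uncertainty sets, then solve the
# nominal subgraph problem with any standard algorithm.
```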
In this work, we propose two information generating functions, the general weighted information generating function and the relative information generating function, and study their properties. It is shown that the general weighted information generating function (GWIGF) is shift-dependent and can be expressed in terms of the weighted Shannon entropy. The GWIGF of a transformed random variable is obtained in terms of the GWIGF of a known distribution. Several bounds on the GWIGF are proposed. We obtain sufficient conditions under which the GWIGFs of two distributions are comparable. Further, we establish a connection between the weighted varentropy and the varentropy via the proposed GWIGF. An upper bound for the GWIGF of the sum of two independent random variables is derived. The behaviour of the general weighted relative information generating function (GWRIGF) for two transformed random variables under strictly monotone functions is studied. These information generating functions are then studied for escort, generalized escort, and mixture distributions. In particular, we propose the weighted $\beta$-cross informational energy and establish a close connection with the GWIGF for escort distributions. The residual versions of the newly proposed generating functions are considered and several analogous properties are explored. A non-parametric estimator of the residual general weighted information generating function is proposed. A simulated data set and two real data sets are considered for the purpose of illustration. Finally, we compare the non-parametric approach with a parametric approach in terms of absolute bias and mean squared error.
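As a hedged illustration of the non-parametric idea, the sketch below estimates a weighted information generating function of the assumed form $\int w(x) f^{\beta}(x)\,dx$ by plugging in a kernel density estimate; the paper's exact definition, weight function, and estimator may differ.

```python
import numpy as np
from scipy.stats import gaussian_kde

def gwigf_plugin(data, beta, weight=lambda x: x, grid_size=1000):
    """Plug-in estimator of a weighted information generating function
    of the assumed form int w(x) f(x)**beta dx: estimate f by a Gaussian
    KDE, then integrate numerically over the sample range."""
    kde = gaussian_kde(data)
    grid = np.linspace(data.min(), data.max(), grid_size)
    f = kde(grid)                          # estimated density on the grid
    dx = grid[1] - grid[0]
    return float(np.sum(weight(grid) * f**beta) * dx)
```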
We study an interacting particle method (IPM) for computing the large deviation rate function of entropy production for diffusion processes, with emphasis on the vanishing-noise limit and high dimensions. The crucial ingredient for obtaining the rate function is the computation of the principal eigenvalue $\lambda$ of elliptic, non-self-adjoint operators. We show that this principal eigenvalue can be approximated in terms of the spectral radius of a discretized evolution operator, obtained from an operator splitting scheme and an Euler--Maruyama scheme with a small time step size, and that this spectral radius can be accessed through a large number of iterations of the discretized semigroup, which is suitable for the IPM. The IPM applies naturally to problems in unbounded domains, scales easily to high dimensions, and adapts to singular behaviors in the vanishing-noise limit. We show numerical examples in dimensions up to 16. The numerical results show that our numerical approximation of $\lambda$ converges to the analytical vanishing-noise limit within visual tolerance with a fixed number of particles and a fixed time step size. Our paper appears to be the first to obtain numerical results for principal eigenvalue problems of non-self-adjoint operators in such high dimensions.
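The eigenvalue extraction step can be sketched abstractly: given any callable `apply_semigroup` implementing the discretized evolution operator (e.g., one splitting plus Euler--Maruyama step of size `dt`), the principal eigenvalue is read off from the growth rate under power iteration. This is a schematic outline, not the interacting particle implementation itself.

```python
import numpy as np

def principal_eigenvalue(apply_semigroup, u0, dt, n_iters=500):
    """Estimate the principal eigenvalue of a (non-self-adjoint) generator
    from the spectral radius of its discretized evolution operator:
    lambda ~ log(growth per step) / dt under power iteration."""
    u = u0 / np.linalg.norm(u0)
    lam = 0.0
    for _ in range(n_iters):
        u = apply_semigroup(u, dt)     # one step of the discretized semigroup
        growth = np.linalg.norm(u)
        lam = np.log(growth) / dt      # current eigenvalue estimate
        u /= growth                    # renormalize, as in power iteration
    return lam
```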
In this paper, to address optimization problems on compact matrix manifolds, we introduce a novel algorithmic framework called the Transformed Gradient Projection (TGP) algorithm, which employs the projection onto the compact matrix manifold. Compared with existing algorithms, the key innovation of our approach lies in the use of a new class of search directions and various stepsizes, including the Armijo, nonmonotone Armijo, and fixed stepsizes, to guide the selection of the next iterate. Our framework offers flexibility: it encompasses the classical gradient projection algorithms as special cases and intersects the class of retraction-based line-search algorithms. Notably, our focus is on the Stiefel and Grassmann manifolds, for which many existing algorithms in the literature can be seen as specific instances of our proposed framework, which also induces several new special cases. We then conduct a thorough exploration of the convergence properties of these algorithms for various search directions and stepsizes. To this end, we extensively analyze the geometric properties of the projection onto compact matrix manifolds, which allows us to extend classical inequalities for retractions from the literature. Building upon these insights, we establish the weak convergence, convergence rate, and global convergence of TGP algorithms under the three distinct stepsizes. When the compact matrix manifold is the Stiefel or Grassmann manifold, our convergence results either encompass or surpass those in the literature. Finally, through a series of numerical experiments, we observe that the TGP algorithms, owing to their increased flexibility in choosing search directions, outperform the classical gradient projection and retraction-based line-search algorithms in several scenarios.
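For orientation, here is a minimal instance of such a scheme: gradient projection on the Stiefel manifold with Armijo backtracking, where the projection is the polar factor obtained from an SVD. The TGP framework allows far more general search directions; this sketch uses the plain negative Euclidean gradient.

```python
import numpy as np

def stiefel_projection(Y):
    """Projection onto the Stiefel manifold: the polar factor of Y."""
    U, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ Vt

def projected_gradient_armijo(X, f, grad_f, step0=1.0, c=1e-4, tau=0.5,
                              max_iter=200):
    """Gradient projection with Armijo backtracking on the Stiefel
    manifold (a minimal instance; not the full TGP framework)."""
    for _ in range(max_iter):
        G = grad_f(X)
        t = step0
        # backtrack until the sufficient-decrease condition holds
        while t > 1e-12 and \
                f(stiefel_projection(X - t * G)) > f(X) - c * t * np.sum(G * G):
            t *= tau
        X = stiefel_projection(X - t * G)
    return X
```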