In this paper we study the satisfiability of group equations and the structure of their solutions when combinatorial, algebraic and language-theoretic constraints are imposed on the solutions. We show that the solutions to equations with length, lexicographic order, abelianisation or context-free constraints added can be effectively produced in finitely generated virtually abelian groups. Crucially, we translate each of the constraints above into a rational set in an effective way, and so reduce each problem to solving equations with rational constraints, which is decidable and well understood in virtually abelian groups. A byproduct of our results is that the growth series of a virtually abelian group, with respect to any generating set and any weight, is effectively computable. This series is known to be rational by a result of Benson, but his proof is non-constructive.
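The rationality phenomenon behind Benson's theorem can be illustrated in the simplest case. The following sketch (an illustration only, not the paper's general algorithm) computes sphere sizes of $\mathbb{Z}^2$ with the standard generators by brute force and checks them against the Taylor coefficients of the known rational growth series $(1+x)^2/(1-x)^2$:

```python
# Sphere sizes of Z^2 with standard generators, compared against the
# known rational growth series (1 + x)^2 / (1 - x)^2.  This only
# illustrates rationality in the simplest abelian case.

def sphere_size(n, radius=60):
    """Number of points of Z^2 at word length exactly n (L1 norm)."""
    return sum(1 for x in range(-radius, radius + 1)
                 for y in range(-radius, radius + 1)
                 if abs(x) + abs(y) == n)

def series_coefficients(num, den, terms):
    """Taylor coefficients of num(x)/den(x) via long division;
    num and den are coefficient lists in increasing degree."""
    coeffs = []
    state = list(num) + [0] * terms
    for n in range(terms):
        c = state[n] / den[0]
        coeffs.append(c)
        for i, d in enumerate(den):
            state[n + i] -= c * d
    return coeffs

# (1+x)^2 / (1-x)^2 = 1 + 4x + 8x^2 + 12x^3 + ...
coeffs = series_coefficients([1, 2, 1], [1, -2, 1], 10)
spheres = [sphere_size(n) for n in range(10)]
print(spheres)   # [1, 4, 8, 12, 16, 20, 24, 28, 32, 36]
```

The sphere of radius $n \ge 1$ in $\mathbb{Z}^2$ has exactly $4n$ elements, matching the series coefficients term by term.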
Symmetry is a cornerstone of much of mathematics, and many probability distributions possess symmetries characterized by their invariance to a collection of group actions. Thus, many mathematical and statistical methods rely on such symmetry holding and ostensibly fail if symmetry is broken. This work considers under what conditions a sequence of probability measures asymptotically gains such symmetry or invariance to a collection of group actions. Considering the many symmetries of the Gaussian distribution, this work effectively proposes a non-parametric type of central limit theorem. That is, a Lipschitz function of a high dimensional random vector will be asymptotically invariant to the actions of certain compact topological groups. Applications of this include a partial law of the iterated logarithm for uniformly random points in an $\ell_p^n$-ball and an asymptotic equivalence between classical parametric statistical tests and their randomization counterparts even when invariance assumptions are violated.
A numerical method is proposed for the simulation of composite open quantum systems, based on Lindblad master equations and adiabatic elimination. Each subsystem is assumed to converge exponentially towards a stationary subspace, slightly impacted by some decoherence channels and weakly coupled to the other subsystems. The method rests on a perturbation analysis with an asymptotic expansion and exploits a formulation of the slow dynamics with reduced dimension. It relies on the invariant operators of the local and nominal dissipative dynamics attached to each subsystem. The second-order expansion can be computed using only local numerical calculations, avoiding computations on the tensor-product Hilbert space attached to the full system. This numerical method is particularly well suited for autonomous quantum error correction schemes. Simulations of such reduced models agree with full-model simulations for typical gates acting on one and two cat-qubits (Z, ZZ and CNOT) when the mean photon number of each cat-qubit is less than 8. For larger mean photon numbers and for gates with three cat-qubits (ZZZ and CCNOT), full-model simulations are almost impossible, whereas reduced-model simulations remain accessible. In particular, they capture both the dominant phase-flip error rate and the very small bit-flip error rate with its exponential suppression versus the mean photon number.
Models of complex technological systems inherently contain interactions and dependencies among their input variables that affect their joint influence on the output. Such models are often computationally expensive, and few sensitivity analysis methods can effectively process such complexities. Moreover, the sensitivity analysis field as a whole pays limited attention to the nature of interaction effects, whose understanding can prove critical for the design of safe and reliable systems. In this paper, we introduce and extensively test a simple binning approach for computing sensitivity indices and demonstrate how complementing it with the smart visualization method simulation decomposition (SimDec) can yield important insights into the behavior of complex engineering models. The simple binning approach computes first- and second-order effects as well as a combined sensitivity index, and is considerably more computationally efficient than Sobol' indices. Taken together, the sensitivity analysis framework provides an efficient and intuitive way to analyze the behavior of complex systems containing interactions and dependencies.
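The binning idea for first-order indices can be sketched as follows. This is a minimal illustration of the general principle $S_i = \mathrm{Var}(\mathbb{E}[Y \mid X_i]) / \mathrm{Var}(Y)$ with the conditional mean estimated per bin, not the authors' SimDec implementation; the Ishigami test function and its analytic indices are standard benchmarks:

```python
import numpy as np

# Sketch of a binning estimator for first-order sensitivity indices:
# bin the samples of X_i, average Y within each bin, and take the
# variance of the bin means relative to the total variance of Y.

def first_order_binning(x, y, bins=20):
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    var_cond = 0.0
    for b in range(bins):
        mask = idx == b
        if mask.any():
            var_cond += mask.mean() * (y[mask].mean() - y.mean()) ** 2
    return var_cond / y.var()

# Ishigami function: analytic indices S1 ~ 0.314, S2 ~ 0.442, S3 = 0.
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(100_000, 3))
Y = np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2 \
    + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])

S = [first_order_binning(X[:, i], Y) for i in range(3)]
```

Because only bin means are needed, the cost is a single pass over the sample, in contrast to the pick-and-freeze designs behind Sobol' index estimators.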
A new mechanical model of noncircular shallow tunnelling considering the initial stress field is proposed in this paper by constraining the far-field ground surface to eliminate the displacement singularity at infinity, and the originally unbalanced tunnel excavation problem in existing solutions is turned into an equilibrium problem with mixed boundaries. By applying analytic continuation, the mixed boundaries are transformed into a homogeneous Riemann-Hilbert problem, which is subsequently solved via an efficient and accurate iterative method with boundary conditions of static equilibrium, displacement single-valuedness, and traction along the tunnel periphery. The Lanczos filtering technique is used in the final stress and displacement solution to reduce the Gibbs phenomenon caused by the constrained far-field ground surface, yielding more accurate results. Several numerical cases are conducted to intensively verify the proposed solution by examining the boundary conditions and comparing with existing solutions, and all results are in good agreement. Further numerical cases are then conducted to investigate the stress and deformation distributions along the ground surface and tunnel periphery, and several pieces of engineering advice are given. The limitations of the proposed solution are also discussed for objectivity.
We study the average case complexity of the uniform membership problem for subgroups of free groups, and we show that it is orders of magnitude smaller than the worst case complexity of the best known algorithms. This applies to subgroups given by a fixed number of generators as well as to subgroups given by an exponential number of generators. The main idea behind this result is to exploit a generic property of tuples of words, called the central tree property. An application is given to the average case complexity of the relative primitivity problem, using Shpilrain's recent algorithm to decide primitivity, whose average case complexity is a constant depending only on the rank of the ambient free group.
Differential geometric approaches are ubiquitous in several fields of mathematics, physics and engineering, and their discretizations enable the development of network-based mathematical and computational frameworks, which are essential for large-scale data science. The Forman-Ricci curvature (FRC) - a statistical measure based on Riemannian geometry and designed for networks - is known for its high capacity for extracting geometric information from complex networks. However, extracting information from dense networks remains challenging due to the combinatorial explosion of high-order network structures. Motivated by this challenge, we develop a set-theoretic representation theory for high-order network cells and FRC, together with their associated concepts and properties, which provides an alternative and efficient formulation for computing high-order FRC in complex networks. We provide pseudo-code, a software implementation coined FastForman, and a benchmark comparison with alternative implementations. Crucially, our representation theory reveals previous computational bottlenecks and accelerates the computation of FRC. As a consequence, our findings open new research possibilities in complex systems where higher-order geometric computations are required.
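For orientation, in the simplest setting - an unweighted, undirected graph with no higher-order cells - the Forman-Ricci curvature of an edge reduces to the classical closed form $F(u,v) = 4 - \deg(u) - \deg(v)$. The sketch below illustrates only this edge-level formula, not the higher-order computation that FastForman addresses:

```python
# Edge-level Forman-Ricci curvature for an unweighted, undirected graph
# without higher-order cells: F(u, v) = 4 - deg(u) - deg(v).
# This is the classical special case, not the high-order FRC of the paper.

def forman_ricci(adjacency):
    """adjacency: dict mapping node -> set of neighbour nodes."""
    deg = {v: len(nbrs) for v, nbrs in adjacency.items()}
    return {(u, v): 4 - deg[u] - deg[v]
            for u in adjacency for v in adjacency[u] if u < v}

# A 4-cycle: every vertex has degree 2, so every edge has curvature 0.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(forman_ricci(cycle))   # every edge -> 0
```

The combinatorial explosion mentioned above arises once cliques or cells of higher dimension contribute to each edge's curvature, which is precisely where a compact set-theoretic representation pays off.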
In this paper we introduce a multilevel Picard approximation algorithm for general semilinear parabolic PDEs with gradient-dependent nonlinearities whose coefficient functions do not need to be constant. We also provide a full convergence and complexity analysis of our algorithm. To obtain our main results, we consider a particular stochastic fixed-point equation (SFPE) motivated by the Feynman-Kac representation and the Bismut-Elworthy-Li formula. We show that the PDE under consideration has a unique viscosity solution which coincides with the first component of the unique solution of the stochastic fixed-point equation. Moreover, if the PDE admits a strong solution, then the gradient of the unique solution of the PDE coincides with the second component of the unique solution of the stochastic fixed-point equation.
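The Feynman-Kac representation motivating the SFPE can be illustrated in its simplest linear instance (this is a toy sketch, not the paper's multilevel Picard scheme): for the heat equation $u_t = \tfrac12 u_{xx}$ with $u(0,x) = g(x)$, the solution is $u(t,x) = \mathbb{E}[g(x + W_t)]$, which a plain Monte Carlo average estimates directly:

```python
import numpy as np

# Monte Carlo evaluation of the Feynman-Kac formula for the heat
# equation u_t = (1/2) u_xx, u(0, x) = g(x):  u(t, x) = E[g(x + W_t)].

def heat_mc(g, t, x, n_samples=200_000, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, np.sqrt(t), size=n_samples)  # W_t ~ N(0, t)
    return g(x + w).mean()

# For g(x) = x^2 the exact solution is u(t, x) = x^2 + t.
approx = heat_mc(lambda z: z ** 2, t=0.5, x=1.0)
exact = 1.0 ** 2 + 0.5
```

Semilinear terms and gradient dependence turn this single expectation into a fixed-point equation in $(u, \nabla u)$, which is what the multilevel Picard iteration approximates at polynomial cost.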
In this paper, we view the statistical inverse problems of partial differential equations (PDEs) as PDE-constrained regression and focus on learning the prediction function of the prior probability measures. From this perspective, we propose general generalization bounds for learning infinite-dimensionally defined prior measures in the style of probably approximately correct (PAC) Bayesian learning theory. The theoretical framework is rigorously defined on an infinite-dimensional separable function space, which connects the theory intimately to the usual infinite-dimensional Bayesian inverse approach. Inspired by the concept of $\alpha$-differential privacy, we propose a generalized condition (covering the Gaussian measures widely employed in the statistical inverse problems of PDEs) that allows the learned prior measures to depend on the measured data; that is, a prediction function with the measured data as input and the prior measure as output can be introduced. After illustrating the general theory, we give specific settings for linear and nonlinear problems, which can easily be cast into our general theory to obtain concrete generalization bounds. Based on the obtained generalization bounds, we formulate practical algorithms that are well defined in infinite dimensions. Finally, numerical examples of the backward diffusion and Darcy flow problems are provided to demonstrate the potential applications of the proposed approach in learning the prediction function of the prior probability measures.
In this paper, we consider an inverse space-dependent source problem for a time-fractional diffusion equation. To deal with the ill-posedness of the problem, we transform it into an optimal control problem with total variation (TV) regularization. In contrast to the classical Tikhonov model incorporating $L^2$ penalty terms, the inclusion of a TV term proves advantageous in reconstructing solutions that exhibit discontinuities or piecewise constancy. The control problem is approximated by a fully discrete scheme, and convergence results are provided within this framework. Furthermore, a linearized primal-dual iterative algorithm is proposed to solve the discrete control model based on an equivalent saddle-point reformulation, and several numerical experiments are presented to demonstrate the efficiency of the algorithm.
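The saddle-point mechanics behind such primal-dual iterations can be seen on a toy problem. The sketch below applies a generic Chambolle-Pock-style iteration to 1D TV-regularized least squares, $\min_u \tfrac12\|u-f\|^2 + \lambda\|Du\|_1$; this is an assumption-laden illustration of the algorithmic idea only, since the paper's linearized algorithm targets the discretized fractional-diffusion control model rather than denoising:

```python
import numpy as np

# Generic primal-dual (Chambolle-Pock style) iteration for the toy
# saddle-point problem  min_u max_{|p|<=lam}  <Du, p> + 0.5*||u - f||^2,
# equivalent to 1D TV-regularized least squares.

def tv_denoise(f, lam=1.0, n_iter=500, tau=0.25, sigma=0.25):
    n = len(f)
    u, u_bar = f.copy(), f.copy()
    p = np.zeros(n - 1)                       # dual variable for D u
    for _ in range(n_iter):
        # dual ascent + projection onto the box |p| <= lam
        p = np.clip(p + sigma * np.diff(u_bar), -lam, lam)
        div = np.zeros(n)                     # div = -D^T p
        div[:-1] += p
        div[1:] -= p
        u_old = u
        # primal descent: prox of 0.5*||u - f||^2
        u = (u + tau * (div + f)) / (1 + tau)
        u_bar = 2 * u - u_old                 # over-relaxation step
    return u

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0], 50)        # piecewise-constant signal
noisy = clean + 0.1 * rng.normal(size=clean.size)
denoised = tv_denoise(noisy, lam=0.5)
```

The step sizes satisfy $\sigma\tau\|D\|^2 \le 1$ (here $\|D\|^2 \le 4$), the standard convergence condition for this class of methods; the TV term recovers the sharp jumps that an $L^2$ penalty would smear.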
This paper proposes a simple method for balancing distributions of covariates for causal inference based on observational studies. The method makes it possible to balance an arbitrary number of quantiles (e.g., medians, quartiles, or deciles) together with means if necessary. The proposed approach is based on the theory of calibration estimators (Deville and S\"arndal 1992), in particular, calibration estimators for quantiles, proposed by Harms and Duchesne (2006). By modifying the entropy balancing method and the covariate balancing propensity score method, it is possible to balance the distributions of the treatment and control groups. The method does not require numerical integration, kernel density estimation or assumptions about the distributions; valid estimates can be obtained by drawing on existing asymptotic theory. Results of a simulation study indicate that the method efficiently estimates average treatment effects on the treated (ATT), the average treatment effect (ATE), the quantile treatment effect on the treated (QTT) and the quantile treatment effect (QTE), especially in the presence of non-linearity and mis-specification of the models. The proposed methods are implemented in an open source R package jointCalib.
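The core idea - adding a quantile-level constraint to a calibration/entropy-balancing problem - can be sketched in a hypothetical one-covariate example. This is not the jointCalib implementation (which follows Harms and Duchesne's smoothed calibration equations); it only shows how an indicator moment alongside the mean moment reweights the control group:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

# Hypothetical sketch: reweight the control group so its weighted mean
# AND its weighted mass below the treated median match the treated group,
# via the convex dual of entropy balancing.

rng = np.random.default_rng(0)
control = rng.normal(0.0, 1.0, 2000)
treated = rng.normal(0.5, 1.2, 1000)

q = np.median(treated)
# calibration "features": the covariate itself and a quantile indicator
C = np.column_stack([control, (control <= q).astype(float)])
M = np.array([treated.mean(), 0.5])    # targets: treated mean, median level

def dual(lam):
    # minimizing this dual enforces the moment constraints C^T w = M
    return logsumexp(C @ lam) - lam @ M

lam = minimize(dual, np.zeros(2), method="BFGS").x
w = np.exp(C @ lam - logsumexp(C @ lam))   # normalized balancing weights
```

After solving, `w @ control` matches the treated mean and `w @ (control <= q)` is close to 0.5, i.e., the weighted control distribution has its median at the treated median; stacking more indicator features balances quartiles or deciles the same way.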