
The celebrated Kleene fixed point theorem is crucial in the mathematical modelling of recursive specifications in Denotational Semantics. In this paper we discuss whether the hypotheses of this result can be weakened. We answer the question in the affirmative by characterizing the properties that a self-mapping must satisfy in order to guarantee that its set of fixed points is non-empty when no notion of completeness is assumed on the partially ordered set. Moreover, the case in which the partially ordered set arises from a quasi-metric space is treated in depth. Finally, an application of the exposed theory is given: a mathematical method for analysing the asymptotic complexity of algorithms whose running time satisfies a recurrence equation. The method recovers the fixed-point-based methods that appear in the literature for the asymptotic complexity analysis of algorithms, but improves on them by imposing fewer requirements and by allowing upper and lower asymptotic bounds on the running time to be stated simultaneously.
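
As a concrete illustration of the kind of recurrence equation such fixed-point methods target (a standard textbook example, not one taken from the paper above), the running time of mergesort satisfies

    T(n) = \begin{cases} c, & n = 1, \\ 2\,T(\lceil n/2 \rceil) + d\,n, & n \ge 2, \end{cases} \qquad c, d > 0,

and $T$ can be regarded as a fixed point of the functional $\Phi(f)(n) = 2\,f(\lceil n/2 \rceil) + d\,n$ acting on a suitable partially ordered set of running-time functions; the asymptotic bound $T \in \Theta(n \log n)$ then follows by standard arguments.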

Related Content

Multiphysics simulations frequently require transferring solution fields between subproblems with non-matching spatial discretizations, typically using interpolation techniques. Standard methods are usually based on measuring the closeness between points by means of the Euclidean distance, which does not account for curvature, cuts, cavities or other non-trivial geometrical or topological features of the domain. This may lead to spurious oscillations in the interpolant in proximity to these features. To overcome this issue, we propose a modification of rescaled localized radial basis function (RL-RBF) interpolation that accounts for the geometry of the interpolation domain, yielding conformity and fidelity to geometrical and topological features. The proposed method, referred to as RL-RBF-G, relies on measuring the geodesic distance between data points. RL-RBF-G removes the spurious oscillations appearing in the RL-RBF interpolant, resulting in increased accuracy in domains with complex geometries. We demonstrate the effectiveness of RL-RBF-G interpolation through a convergence study in an idealized setting. Furthermore, we discuss the algorithmic aspects and the implementation of RL-RBF-G interpolation in a distributed-memory parallel framework, and present the results of a strong scalability test yielding nearly ideal results. Finally, we show the effectiveness of RL-RBF-G interpolation in multiphysics simulations by considering an application to a whole-heart cardiac electromechanics model.
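
A minimal sketch of the underlying idea (plain global RBF interpolation with an externally supplied distance matrix, not the rescaled localized RL-RBF-G method itself; the function name and toy data are illustrative assumptions):

    import numpy as np

    def rbf_interpolate(dist_centers, dist_eval, values, radius):
        """Interpolate data given at N centers onto M evaluation points.

        dist_centers : (N, N) pairwise distances between the data points
        dist_eval    : (M, N) distances from evaluation points to data points
        values       : (N,)   data values at the centers
        radius       : shape parameter of the Gaussian basis
        """
        phi = lambda d: np.exp(-(d / radius) ** 2)          # Gaussian radial basis
        weights = np.linalg.solve(phi(dist_centers), values)
        return phi(dist_eval) @ weights

    # Toy 1D usage with Euclidean distances; supplying geodesic distances instead
    # (e.g. from a shortest-path solver on the mesh) is the idea behind RL-RBF-G.
    x = np.linspace(0.0, 1.0, 8)[:, None]
    xe = np.linspace(0.0, 1.0, 50)[:, None]
    f = np.sin(2.0 * np.pi * x).ravel()
    approx = rbf_interpolate(np.abs(x - x.T), np.abs(xe - x.T), f, radius=0.3)
    print(approx.shape)  # (50,)

The only change needed for geometry awareness in this sketch is in how the two distance matrices are computed.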

We propose an efficient algorithm for matching two correlated Erd\H{o}s--R\'enyi graphs with $n$ vertices whose edges are correlated through a latent vertex correspondence. When the edge density $q= n^{- \alpha+o(1)}$ for a constant $\alpha \in [0,1)$, we show that our algorithm has polynomial running time and succeeds in recovering the latent matching as long as the edge correlation is non-vanishing. This is closely related to our previous work on a polynomial-time algorithm that matches two Gaussian Wigner matrices with non-vanishing correlation, and provides the first polynomial-time random graph matching algorithm (regardless of the regime of $q$) when the edge correlation is below the square root of Otter's constant (which is $\approx 0.338$).
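
For concreteness, one simple way to sample from the correlated model just described (an illustration of the model only; the matching algorithm itself is not reproduced here) is to draw the two adjacency indicators of each vertex pair jointly from a correlated Bernoulli law and then hide the correspondence behind a random permutation:

    import numpy as np

    def correlated_er(n, q, rho, seed=0):
        """Two Erdos-Renyi graphs with edge density q, edge correlation rho,
        and a latent vertex correspondence pi (B is returned relabelled by pi)."""
        rng = np.random.default_rng(seed)
        # Joint law of two Bernoulli(q) indicators X, Y with correlation rho.
        p11 = q**2 + rho * q * (1.0 - q)
        p10 = q - p11                           # = P(X=1, Y=0) = P(X=0, Y=1)
        p00 = 1.0 - 2.0 * p10 - p11
        iu = np.triu_indices(n, k=1)
        draw = rng.choice(4, size=len(iu[0]), p=[p00, p10, p10, p11])
        A = np.zeros((n, n), dtype=int)
        B = np.zeros((n, n), dtype=int)
        A[iu] = (draw % 2 == 1)                 # X = 1 for outcomes 1 and 3
        B[iu] = (draw >= 2)                     # Y = 1 for outcomes 2 and 3
        A, B = A + A.T, B + B.T
        pi = rng.permutation(n)                 # latent matching
        return A, B[np.ix_(pi, pi)], pi

    A, B, pi = correlated_er(200, q=0.05, rho=0.8)
    print(A.sum() // 2, B.sum() // 2)           # edge counts of the two graphs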

In this paper, we propose a RADI-type method for large-scale stochastic continuous-time algebraic Riccati equations with sparse and low-rank structures. The so-called ISC method is developed by combining the Incorporation idea with different Shifts to accelerate convergence and with Compressions to reduce storage and complexity. Numerical experiments are given to show its efficiency.

For problems of time-harmonic scattering by rational polygonal obstacles, embedding formulae express the far-field pattern induced by any incident plane wave in terms of the far-field patterns for a relatively small (frequency-independent) set of canonical incident angles. Although these remarkable formulae are exact in theory, here we demonstrate that: (i) they are highly sensitive to numerical errors in practice, and (ii) direct calculation of the coefficients in these formulae may be impossible for particular sets of canonical incident angles, even in exact arithmetic. Only by overcoming these practical issues can embedding formulae provide a highly efficient approach to computing the far-field pattern induced by a large number of incident angles. Here we address challenges (i) and (ii), supporting our theory with numerical experiments. Challenge (i) is solved using techniques from computational complex analysis: we reformulate the embedding formula as a complex contour integral and prove that this is much less sensitive to numerical errors. In practice, this contour integral can be efficiently evaluated by residue calculus. Challenge (ii) is addressed using techniques from numerical linear algebra: we oversample, considering more canonical incident angles than are necessary, thus expanding the set of valid coefficient vectors. The coefficient vector can then be selected using either a least squares approach or column subset selection.
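
The following sketch illustrates only the generic linear-algebra step behind challenge (ii) (the matrix M and right-hand side b below are random placeholders, not the actual embedding-formula system): with more canonical angles than strictly needed, a coefficient vector can be chosen by least squares over all columns, or by restricting to a well-conditioned column subset found with pivoted QR.

    import numpy as np
    from scipy.linalg import qr

    rng = np.random.default_rng(0)
    M = rng.standard_normal((40, 12))           # 40 constraints, 12 candidate angles
    b = rng.standard_normal(40)

    # Option 1: least squares over all candidate columns.
    c_ls, *_ = np.linalg.lstsq(M, b, rcond=None)

    # Option 2: column subset selection via pivoted QR, keeping 8 columns.
    _, _, piv = qr(M, pivoting=True)
    cols = np.sort(piv[:8])
    c_sub = np.zeros(M.shape[1])
    c_sub[cols], *_ = np.linalg.lstsq(M[:, cols], b, rcond=None)

    print(np.linalg.norm(M @ c_ls - b), np.linalg.norm(M @ c_sub - b))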

Deep learning methods can now be employed to solve physical systems governed by parametric partial differential equations (PDEs), thanks to the availability of massive scientific data. This line of work has been refined into operator learning, which focuses on learning nonlinear mappings between infinite-dimensional function spaces and offers an interface from observations to solutions. However, state-of-the-art neural operators are limited to constant and uniform discretizations, and therefore generalize poorly to arbitrary discretization schemes of the computational domain. In this work, we propose a novel operator learning algorithm, referred to as the Dynamic Gaussian Graph Operator (DGGO), which extends neural operators to learning parametric PDEs on arbitrarily discretized mechanics problems. The Dynamic Gaussian Graph (DGG) kernel learns to map observation vectors defined in general Euclidean space to metric vectors defined in a high-dimensional uniform metric space. The DGG integral kernel is parameterized by a Gaussian-kernel-weighted Riemann sum approximation and uses a dynamic message-passing graph to depict the interrelations within the integral term. A Fourier Neural Operator is selected to localize the metric vectors in the spatial and frequency domains. The metric vectors are regarded as lying on a latent uniform domain, on which the spatial and spectral transformations impose highly regular constraints on the solution space. The efficiency and robustness of DGGO are validated by applying it to arbitrarily discretized mechanics problems and comparing it with mainstream neural operators. Ablation experiments demonstrate the effectiveness of the spatial transformation in the DGG kernel. As an engineering application, the proposed method is used to forecast the stress field of a hyper-elastic material with a geometrically variable void.
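
A minimal sketch of the general mechanism (my reading of a "Gaussian kernel weighted Riemann sum" aggregation from scattered points onto a latent uniform set, with illustrative names and toy data; this is not the authors' DGG kernel):

    import numpy as np

    def gaussian_kernel_aggregate(x_query, x_obs, feats, sigma):
        """x_query: (M, d), x_obs: (N, d), feats: (N, c). Returns (M, c)."""
        d2 = ((x_query[:, None, :] - x_obs[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * sigma**2))        # (M, N) Gaussian weights
        w /= w.sum(axis=1, keepdims=True)         # normalized quadrature-like weights
        return w @ feats

    rng = np.random.default_rng(0)
    x_obs = rng.random((128, 2))                  # arbitrary discretization points
    feats = np.sin(2.0 * np.pi * x_obs)           # toy nodal features
    x_query = rng.random((32, 2))                 # points of a latent uniform domain
    print(gaussian_kernel_aggregate(x_query, x_obs, feats, sigma=0.1).shape)  # (32, 2)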

We prove non-asymptotic error bounds for particle gradient descent (PGD)~(Kuntz et al., 2023), a recently introduced algorithm for maximum likelihood estimation of large latent variable models obtained by discretizing a gradient flow of the free energy. We begin by showing that, for models satisfying a condition generalizing both the log-Sobolev and the Polyak--{\L}ojasiewicz inequalities (LSI and P{\L}I, respectively), the flow converges exponentially fast to the set of minimizers of the free energy. We achieve this by extending a result well-known in the optimal transport literature (that the LSI implies the Talagrand inequality) and its counterpart in the optimization literature (that the P{\L}I implies the so-called quadratic growth condition), and applying it to our new setting. We also generalize the Bakry--\'Emery Theorem and show that the LSI/P{\L}I generalization holds for models with strongly concave log-likelihoods. For such models, we further control PGD's discretization error, obtaining non-asymptotic error bounds. While we are motivated by the study of PGD, we believe that the inequalities and results we extend may be of independent interest.
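
For background only (constant conventions vary, and the paper works with a condition generalizing both), the standard log-Sobolev and Polyak--{\L}ojasiewicz inequalities read

    \operatorname{Ent}_{\pi}\!\big(f^{2}\big) \;\le\; \frac{2}{\lambda}\, \mathbb{E}_{\pi}\!\big[\lVert \nabla f \rVert^{2}\big] \qquad \text{(LSI with constant } \lambda > 0\text{)},

    F(\theta) - \inf F \;\le\; \frac{1}{2\mu}\, \lVert \nabla F(\theta) \rVert^{2} \qquad \text{(P{\L}I with constant } \mu > 0\text{)},

where $\pi$ is the target measure in the sampling setting and $F$ the objective in the optimization setting.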

The fundamental computational issues in Bayesian inverse problems (BIP) governed by partial differential equations (PDEs) stem from the requirement of repeated forward model evaluations. A popular strategy to reduce such costs is to replace expensive model simulations with computationally efficient approximations using operator learning, motivated by recent progress in deep learning. However, using the approximated model directly may introduce a modeling error, further exacerbating the ill-posedness of inverse problems. Thus, balancing accuracy and efficiency is essential for the effective implementation of such approaches. To this end, we develop an adaptive operator learning framework that can reduce the modeling error gradually by forcing the surrogate to be accurate in local areas. This is accomplished by adaptively fine-tuning the pre-trained approximate model with training points chosen by a greedy algorithm during the posterior computation. To validate our approach, we use DeepONet to construct the surrogate and unscented Kalman inversion (UKI) to approximate the BIP solution, respectively. Furthermore, we present a rigorous convergence guarantee in the linear case using the UKI framework. The approach is tested on a number of benchmarks, including the Darcy flow, the heat source inversion problem, and the reaction-diffusion problem. The numerical results show that our method can significantly reduce computational costs while maintaining inversion accuracy.
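
The loop below is a deliberately simplified 1D toy of the adapt-then-refine pattern (my own construction with a polynomial surrogate and an exhaustive misfit check, not the DeepONet/UKI algorithm of the paper):

    import numpy as np

    def expensive_model(x):                      # stand-in for a PDE forward solve
        return np.sin(3.0 * x) + 0.3 * x**2

    rng = np.random.default_rng(1)
    train_x = np.sort(rng.uniform(-2.0, 2.0, size=5))    # initial training data
    cands = np.linspace(-2.0, 2.0, 201)                  # candidate refinement points

    for step in range(8):
        deg = min(len(train_x) - 1, 8)
        surrogate = np.poly1d(np.polyfit(train_x, expensive_model(train_x), deg))
        # Greedy criterion (illustrative): largest observed misfit over the candidates;
        # in practice a cheaper error indicator replaces the exhaustive check.
        misfit = np.abs(surrogate(cands) - expensive_model(cands))
        train_x = np.append(train_x, cands[np.argmax(misfit)])   # one new expensive point

    print("max surrogate error after refinement:", misfit.max())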

We establish a theoretical framework for the particle relaxation method used for uniform particle generation in Smoothed Particle Hydrodynamics. We achieve this by reformulating particle relaxation as an optimization problem whose objective function is the integral difference between a discrete, particle-based volume fraction and a smoothed analytical one. The analysis demonstrates that the particle relaxation method in the domain interior is essentially equivalent to employing gradient descent on this optimization problem, and the equivalence extends to bounded domains once a proper boundary term is introduced. Additionally, each periodic particle distribution has a spatially uniform particle volume, denoted the characteristic volume. The relaxed particle distribution has the largest characteristic volume, and this volume is determined by the kernel cut-off radius. This insight enables us to control the relaxed particle distribution by selecting the target kernel cut-off radius for a given kernel function.
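
A minimal 2D sketch of the gradient-descent reading of particle relaxation (an illustration with a hand-tuned step size and a Wendland-type kernel, not the paper's exact formulation or boundary treatment):

    import numpy as np

    def kernel_grad(r_vec, h):
        """Gradient of a Wendland C2-type kernel W(q), q = |r|/h, up to a constant."""
        r = np.maximum(np.linalg.norm(r_vec, axis=-1, keepdims=True), 1e-12)
        q = np.minimum(r / h, 2.0)
        dWdq = -5.0 * q * (1.0 - q / 2.0) ** 3   # derivative of (1 - q/2)^4 (1 + 2q)
        return dWdq * r_vec / (r * h)

    rng = np.random.default_rng(0)
    L, h, n = 1.0, 0.1, 400
    x = rng.uniform(0.0, L, size=(n, 2))         # random initial particle positions

    for step in range(200):
        d = x[:, None, :] - x[None, :, :]
        d -= L * np.round(d / L)                 # periodic minimum-image convention
        near = (np.linalg.norm(d, axis=-1) < 2.0 * h) & ~np.eye(n, dtype=bool)
        grad = np.where(near[..., None], kernel_grad(d, h), 0.0).sum(axis=1)
        x = (x - 0.02 * h * h * grad) % L        # descent-like step, kept in the box

    d = x[:, None, :] - x[None, :, :]
    d -= L * np.round(d / L)
    nn = np.sort(np.linalg.norm(d, axis=-1) + 10.0 * np.eye(n), axis=1)[:, 0]
    print("nearest-neighbour distance spread:", nn.std() / nn.mean())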

Sylvester matrix equations are ubiquitous in scientific computing. However, few solution techniques exist for their generalized multiterm version, even though such equations now arise in an increasingly large number of applications. In this work, we consider algebraic parameter-free preconditioning techniques for the iterative solution of generalized multiterm Sylvester equations. They consist in constructing low Kronecker-rank approximations of either the operator itself or its inverse. While the former requires solving standard Sylvester equations in each iteration, the latter only requires matrix-matrix multiplications, which are highly optimized on modern computer architectures. Moreover, low Kronecker-rank approximate inverses can be easily combined with sparse approximate inverse techniques, thereby enhancing their performance with little or no damage to their effectiveness.
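
The toy below illustrates only the mechanics described above (the operators, sizes, and the crude rank-one choice are illustrative assumptions, not the paper's construction): applying the multiterm operator and a Kronecker-rank-one approximate inverse requires nothing beyond matrix-matrix products, here used inside a plain Richardson iteration.

    import numpy as np

    def multiterm_sylvester(X, A_list, B_list):
        # Operator X -> sum_k A_k X B_k^T, i.e. (sum_k B_k kron A_k) vec(X).
        return sum(A @ X @ B.T for A, B in zip(A_list, B_list))

    def approx_inverse_apply(R, P, Q):
        # A Kronecker-rank-one approximate inverse Q kron P acts on vec(R)
        # as vec(P R Q^T): only matrix-matrix products are needed.
        return P @ R @ Q.T

    rng = np.random.default_rng(0)
    n, m, K = 30, 20, 3
    A_list = [np.eye(n) + 0.05 * rng.standard_normal((n, n)) for _ in range(K)]
    B_list = [np.eye(m) + 0.05 * rng.standard_normal((m, m)) for _ in range(K)]
    C = rng.standard_normal((n, m))

    # Crude rank-one choice (an assumption for this toy):
    # sum_k B_k kron A_k  ~  mean(B_k) kron sum(A_k).
    P = np.linalg.inv(sum(A_list))
    Q = np.linalg.inv(sum(B_list) / K)

    X = np.zeros((n, m))
    for _ in range(200):                         # preconditioned Richardson iteration
        R = C - multiterm_sylvester(X, A_list, B_list)
        X += approx_inverse_apply(R, P, Q)
    print("final residual norm:", np.linalg.norm(C - multiterm_sylvester(X, A_list, B_list)))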

The noncommutative sum-of-squares (ncSoS) hierarchy was introduced by Navascu\'{e}s-Pironio-Ac\'{i}n as a sequence of semidefinite programming relaxations for approximating values of noncommutative polynomial optimization problems, which were originally intended to generalize quantum values of nonlocal games. Recent work has started to analyze the hierarchy for approximating ground energies of local Hamiltonians, initially through rounding algorithms which output product states for degree-2 ncSoS applied to Quantum Max-Cut. Some rounding methods are known which output entangled states, but they use degree-4 ncSoS. Based on this, Hwang-Neeman-Parekh-Thompson-Wright conjectured that degree-2 ncSoS cannot beat product state approximations for Quantum Max-Cut and gave a partial proof relying on a conjectural generalization of Borell's inequality. In this work we consider a family of Hamiltonians (called the quantum rotor model in the condensed matter literature or the lattice $O(k)$ vector model in quantum field theory) with infinite-dimensional local Hilbert space $L^{2}(S^{k - 1})$, and show that a degree-2 ncSoS relaxation approximates the ground state energy better than any product state.
