
We study monotone finite difference approximations for a broad class of reaction-diffusion problems, incorporating general symmetric Lévy operators. By employing an adaptive time-stepping discretization, we derive the discrete Fujita critical exponent for these problems. Additionally, under general consistency assumptions, we establish the convergence of discrete blow-up times to their continuous counterparts. As complementary results, we also present the asymptotic-in-time behavior of discrete heat-type equations as well as an extensive analysis of discrete eigenvalue problems.
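A minimal numerical sketch of the kind of scheme described above, restricted to the one-dimensional local model problem u_t = u_xx + u^p with Dirichlet boundary conditions (the Lévy/nonlocal part is omitted); the adaptive step-size rule and the blow-up threshold below are illustrative assumptions, not the paper's discretization.

```python
import numpy as np

# Sketch: explicit monotone finite differences for u_t = u_xx + u^p on [0, 1]
# with homogeneous Dirichlet data, adaptive time steps, and a crude blow-up detector.
# The step-size rule and the threshold are illustrative assumptions only.

def simulate(p=3.0, N=200, u0_amp=10.0, T=1.0, blowup_threshold=1e8):
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    u = u0_amp * np.sin(np.pi * x)           # positive initial datum
    t = 0.0
    while t < T:
        umax = u.max()
        if umax > blowup_threshold:
            return t, u                       # approximate discrete blow-up time
        # Adaptive step: respect the diffusion CFL bound (monotonicity) and keep
        # the reaction term under control as u grows.
        dt = min(0.4 * h**2, 0.1 / max(umax, 1.0) ** (p - 1), T - t)
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
        u = u + dt * (lap + u**p)
        u[0] = u[-1] = 0.0                    # Dirichlet boundary values
        t += dt
    return None, u                            # no blow-up detected before T

t_blow, u_final = simulate()
print("approximate discrete blow-up time:", t_blow)
```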

Related Content

This paper presents a physics-informed deep learning approach for learning the replicator equation, allowing accurate forecasting of population dynamics. This methodological innovation allows us to derive governing differential or difference equations for systems that lack explicit mathematical models. We use the SINDy model, first introduced by Fasel, Kaiser, Kutz, and Brunton (2016a), to recover the replicator equation, which will significantly advance our understanding of evolutionary biology, economic systems, and social dynamics. By refining predictive models across multiple disciplines, including ecology, social structures, and moral behaviours, our work offers new insights into the complex interplay of variables shaping evolutionary outcomes in dynamic systems.
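A minimal sketch of SINDy-style sparse identification applied to a one-dimensional replicator equation, assuming a hypothetical 2x2 payoff matrix and exact derivative data; the paper's physics-informed deep learning pipeline is not reproduced here.

```python
import numpy as np

# Sketch: recover a 1-D replicator equation xdot = x(1-x)(f1(x) - f2(x)) from data
# via a polynomial library and sequential thresholded least squares (SINDy-style).
# The payoff matrix, library, and threshold are illustrative assumptions.

A = np.array([[3.0, 0.0],
              [5.0, 1.0]])                    # hypothetical 2x2 payoff matrix

def replicator_rhs(x):
    f = A @ np.array([x, 1.0 - x])            # strategy fitnesses
    return x * (1.0 - x) * (f[0] - f[1])

# Generate trajectory data with a small explicit Euler step.
dt, steps = 1e-3, 4000
xs = []
for x0 in (0.1, 0.3, 0.6, 0.9):
    x = x0
    for _ in range(steps):
        xs.append(x)
        x += dt * replicator_rhs(x)
xs = np.array(xs)
xdots = np.array([replicator_rhs(x) for x in xs])   # in practice: finite differences of data

# Polynomial library up to degree 3 and sequential thresholded least squares.
Theta = np.column_stack([np.ones_like(xs), xs, xs**2, xs**3])
names = ["1", "x", "x^2", "x^3"]
xi = np.linalg.lstsq(Theta, xdots, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.05                        # sparsity threshold (assumed)
    xi[small] = 0.0
    big = ~small
    if big.any():
        xi[big] = np.linalg.lstsq(Theta[:, big], xdots, rcond=None)[0]

print("recovered xdot = " + " + ".join(f"{c:.3f}*{n}" for c, n in zip(xi, names) if c != 0.0))
```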

This paper presents a unifying framework for Trefftz-like methods, which allows the analysis and construction of discretization methods based on the decomposition into, and coupling of, local and global problems. We apply the framework to provide a comprehensive error analysis for the Embedded Trefftz discontinuous Galerkin method, for a wide range of second-order scalar elliptic partial differential equations and a scalar reaction-advection problem. We also analyze quasi-Trefftz methods within our framework and build bridges to other methods that are similar in spirit.

Combinatorial problems such as combinatorial optimization and constraint satisfaction problems arise in decision-making across various fields of science and technology. In real-world applications, when multiple optimal or constraint-satisfying solutions exist, enumerating all these solutions -- rather than finding just one -- is often desirable, as it provides flexibility in decision-making. However, combinatorial problems and their enumeration versions pose significant computational challenges due to combinatorial explosion. To address these challenges, we propose enumeration algorithms for combinatorial optimization and constraint satisfaction problems using Ising machines. Ising machines are specialized devices designed to efficiently solve combinatorial problems. Typically, they sample low-cost solutions in a stochastic manner. Our enumeration algorithms repeatedly sample solutions to collect all desirable solutions. The crux of the proposed algorithms is their stopping criteria for sampling, which are derived based on probability theory. In particular, the proposed algorithms have theoretical guarantees that the failure probability of enumeration is bounded above by a user-specified value, provided that lower-cost solutions are sampled more frequently and equal-cost solutions are sampled with equal probability. Many physics-based Ising machines are expected to (approximately) satisfy these conditions. As a demonstration, we applied our algorithm using simulated annealing to maximum clique enumeration on random graphs. We found that our algorithm enumerates all maximum cliques in large dense graphs faster than a conventional branch-and-bound algorithm specially designed for maximum clique enumeration. This demonstrates the promising potential of our proposed approach.
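The paper's stopping criteria are not reproduced here; the sketch below illustrates the general pattern with one simple rule in the same spirit: assuming equal-cost solutions are sampled with equal probability, it stops once so many consecutive optimal-cost samples have produced nothing new that the chance of a missed optimum falls below a user-specified delta. The toy `sample_solution` stands in for an Ising machine or simulated annealing call.

```python
import math
import random

# Sketch of enumeration by repeated sampling with a probabilistic stopping rule.
# `sample_solution` is a stand-in for an Ising machine; here it samples a toy
# problem whose minimum-cost solutions are hit with equal probability, the
# idealized assumption behind the stopping rule.

def sample_solution():
    # Toy problem: minimize |sum(x)| over x in {-1,+1}^4 -> optima are the balanced
    # assignments (cost 0). A real Ising machine would return low-cost samples.
    x = tuple(random.choice((-1, 1)) for _ in range(4))
    return x, abs(sum(x))

def enumerate_optima(delta=1e-6, max_samples=100_000):
    best_cost = math.inf
    optima = set()
    misses = 0                        # consecutive optimal-cost samples with nothing new
    for _ in range(max_samples):
        x, cost = sample_solution()
        if cost < best_cost:          # better cost found: restart the collection
            best_cost, optima, misses = cost, {x}, 0
        elif cost == best_cost:
            if x in optima:
                misses += 1
            else:
                optima.add(x)
                misses = 0
        # If at least one optimum were still unseen, each optimal-cost draw would reveal
        # a new one with probability >= 1/(len(optima)+1); stop once the chance of
        # `misses` consecutive failures of that event drops below delta.
        s = len(optima)
        if misses > 0 and misses * math.log(1.0 - 1.0 / (s + 1)) < math.log(delta):
            return best_cost, optima
    return best_cost, optima

cost, sols = enumerate_optima()
print(f"cost {cost}: {len(sols)} solutions enumerated")    # expect 6 balanced assignments
```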

Block majorization-minimization (BMM) is a simple iterative algorithm for constrained nonconvex optimization that sequentially minimizes majorizing surrogates of the objective function in each block while the others are held fixed. BMM entails a large class of optimization algorithms such as block coordinate descent and its proximal-point variant, expectation-minimization, and block projected gradient descent. We first establish that for general constrained nonsmooth nonconvex optimization, BMM with $\rho$-strongly convex and $L_g$-smooth surrogates can produce an $\epsilon$-approximate first-order optimal point within $\widetilde{O}((1+L_g+\rho^{-1})\epsilon^{-2})$ iterations and asymptotically converges to the set of first-order optimal points. Next, we show that BMM combined with trust-region methods with diminishing radius has an improved complexity of $\widetilde{O}((1+L_g) \epsilon^{-2})$, independent of the inverse strong convexity parameter $\rho^{-1}$, allowing improved theoretical and practical performance with `flat' surrogates. Our results hold robustly even when the convex sub-problems are solved inexactly, as long as the optimality gaps are summable. Central to our analysis is a novel continuous first-order optimality measure, by which we bound the worst-case sub-optimality in each iteration by the first-order improvement the algorithm makes. We apply our general framework to obtain new results on various algorithms such as the celebrated multiplicative update algorithm for nonnegative matrix factorization by Lee and Seung, regularized nonnegative tensor decomposition, and the classical block projected gradient descent algorithm. Lastly, we numerically demonstrate that the additional use of diminishing radius can improve the convergence rate of BMM in many instances.
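As an illustration of one algorithm covered by the BMM framework, here is a minimal sketch of the Lee-Seung multiplicative updates for nonnegative matrix factorization; the problem sizes, iteration count, and data are arbitrary.

```python
import numpy as np

# Sketch of the Lee-Seung multiplicative update for nonnegative matrix factorization:
# each factor update minimizes a majorizing surrogate of ||X - W H||_F^2 with the
# other factor held fixed, so iterates stay nonnegative and the objective is monotone.

rng = np.random.default_rng(0)
X = rng.random((50, 40))                  # nonnegative data matrix
r = 5                                     # target rank
W = rng.random((50, r)) + 1e-3
H = rng.random((r, 40)) + 1e-3
eps = 1e-12                               # guards against division by zero

for it in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + eps)  # surrogate minimization in the H block
    W *= (X @ H.T) / (W @ H @ H.T + eps)  # surrogate minimization in the W block
    if it % 50 == 0:
        print(it, np.linalg.norm(X - W @ H))
```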

Adaptive cubic regularization methods for solving nonconvex problems need the efficient computation of the trial step, involving the minimization of a cubic model. We propose a new approach in which this model is minimized in a low-dimensional subspace that, in contrast to classic approaches, is reused for a number of iterations. Whenever the trial step produced by the low-dimensional minimization process is unsatisfactory, we employ a regularized Newton step whose regularization parameter is a by-product of the model minimization over the low-dimensional subspace. We show that the worst-case complexity of classic cubic regularized methods is preserved, despite the possible regularized Newton steps. We focus on the large class of problems for which (sparse) direct linear system solvers are available and provide several experimental results showing the very large gains of our new approach when compared to standard implementations of adaptive cubic regularization methods based on direct linear solvers. Our first choice as projection space for the low-dimensional model minimization is the polynomial Krylov subspace; nonetheless, we also explore the use of rational Krylov subspaces in cases where the polynomial ones lead to less competitive numerical results.
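A minimal sketch of the subproblem step only: minimizing the cubic model over a polynomial Krylov subspace built from the gradient. The basis reuse across iterations, the regularized Newton fallback, and the update of the regularization parameter described above are omitted, and the use of a generic optimizer for the reduced model is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of minimizing the cubic model
#   m(s) = g.T s + 0.5 s.T H s + (sigma/3) ||s||^3
# over a low-dimensional polynomial Krylov subspace K_k(H, g).

def krylov_basis(H, g, k):
    # Arnoldi-style orthonormalization of {g, Hg, ..., H^{k-1} g}.
    Q = np.zeros((len(g), k))
    Q[:, 0] = g / np.linalg.norm(g)
    for j in range(1, k):
        w = H @ Q[:, j - 1]
        w -= Q[:, :j] @ (Q[:, :j].T @ w)   # orthogonalize against previous vectors
        nrm = np.linalg.norm(w)
        if nrm < 1e-12:
            return Q[:, :j]                # subspace became invariant; stop early
        Q[:, j] = w / nrm
    return Q

def cubic_step(H, g, sigma, k=10):
    Q = krylov_basis(H, g, k)
    gr, Hr = Q.T @ g, Q.T @ H @ Q          # reduced gradient and Hessian
    # Since Q has orthonormal columns, ||Q y|| = ||y||, so the reduced model agrees
    # with the full model restricted to the subspace.
    def model(y):
        return gr @ y + 0.5 * y @ Hr @ y + sigma / 3.0 * np.linalg.norm(y) ** 3
    res = minimize(model, np.zeros(Q.shape[1]), method="L-BFGS-B")
    return Q @ res.x                       # lift the reduced minimizer back to R^n

# Toy usage on a random symmetric (possibly indefinite) Hessian.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100))
H = A + A.T
g = rng.standard_normal(100)
s = cubic_step(H, g, sigma=1.0)
print("model value at step:", g @ s + 0.5 * s @ H @ s + np.linalg.norm(s) ** 3 / 3.0)
```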

This paper deals with the application of probabilistic time integration methods to semi-explicit partial differential-algebraic equations of parabolic type and their semi-discrete counterparts, namely semi-explicit differential-algebraic equations of index 2. The proposed methods iteratively construct a probability distribution over the solution of deterministic problems, enhancing the information obtained from the numerical simulation. Within this paper, we examine the efficacy of the randomized versions of the implicit Euler method, the midpoint scheme, and exponential integrators of first and second order. By demonstrating the consistency and convergence properties of these solvers, we illustrate their utility in capturing the sensitivity of the solution to numerical errors. Our analysis establishes the theoretical validity of randomized time integration for constrained systems and offers insights into the calibration of probabilistic integrators for practical applications.
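A minimal sketch of a randomized implicit Euler method for the scalar test ODE y' = lambda*y, in the spirit of additive-noise probabilistic integrators; the noise scaling constant is an assumption, and the index-2 differential-algebraic structure treated in the paper is not reproduced here.

```python
import numpy as np

# Sketch: randomized (probabilistic) implicit Euler for y' = lam * y. Each step adds
# a Gaussian perturbation whose standard deviation scales like h**1.5 (first-order
# method), so the ensemble spread reflects the discretization error.

lam, y0, T, h = -50.0, 1.0, 1.0, 0.01
n_steps, n_samples = int(T / h), 200
rng = np.random.default_rng(0)

paths = np.empty((n_samples, n_steps + 1))
paths[:, 0] = y0
for n in range(n_steps):
    # Implicit Euler for y' = lam*y has the closed form y_{n+1} = y_n / (1 - h*lam);
    # for nonlinear problems one would solve the implicit equation with Newton's method.
    drift = paths[:, n] / (1.0 - h * lam)
    paths[:, n + 1] = drift + h ** 1.5 * rng.standard_normal(n_samples)

mean, std = paths[:, -1].mean(), paths[:, -1].std()
print(f"y(T): ensemble mean {mean:.4e}, spread {std:.2e}, exact {np.exp(lam * T):.4e}")
```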

The augmented Lagrangian method is employed to address the optimal control problem involving pointwise state constraints in parabolic equations. The strong convergence of the primal variables and the weak convergence of the dual variables are rigorously established. The sub-problems arising in the algorithm are solved using the Method of Successive Approximations (MSA), derived from Pontryagin's principle. Numerical experiments are provided to validate the convergence of the proposed algorithm.
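A minimal finite-dimensional sketch of the augmented Lagrangian loop for an inequality-constrained problem, standing in for the pointwise state constraint; the parabolic PDE, the adjoint equations, and the MSA sub-problem solver are replaced here by a generic optimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the augmented Lagrangian loop for min f(x) s.t. g(x) <= 0,
# alternating a primal sub-problem solve with a dual (multiplier) update.

f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2     # objective
g = lambda x: x[0] + x[1] - 2.0                          # constraint g(x) <= 0

x, mu, rho = np.zeros(2), 0.0, 10.0
for k in range(20):
    # Augmented Lagrangian sub-problem (solved here by BFGS instead of MSA).
    AL = lambda x: f(x) + 0.5 / rho * (max(0.0, mu + rho * g(x)) ** 2 - mu ** 2)
    x = minimize(AL, x, method="BFGS").x
    mu = max(0.0, mu + rho * g(x))                       # dual (multiplier) update

print("primal:", x, "dual:", mu, "constraint value:", g(x))
```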

This paper presents both a priori and a posteriori error analyses for a really pressure-robust virtual element method to approximate the incompressible Brinkman problem. We construct a divergence-preserving reconstruction operator using the Raviart-Thomas element for the discretization on the right-hand side. The optimal priori error estimates are carried out, which imply the velocity error in the energy norm is independent of both the continuous pressure and the viscosity. Taking advantage of the virtual element method's ability to handle more general polygonal meshes, we implement effective mesh refinement strategies and develop a residual-type a posteriori error estimator. This estimator is proven to provide global upper and local lower bounds for the discretization error. Finally, some numerical experiments demonstrate the robustness, accuracy, reliability and efficiency of the method.

This paper investigates logical consequence defined in terms of probability distributions, for a classical propositional language using a standard notion of probability. We examine three distinct probabilistic consequence notions, which we call material consequence, preservation consequence, and symmetric consequence. While material consequence is fully classical for any threshold, preservation consequence and symmetric consequence are subclassical, with only symmetric consequence gradually approaching classical logic at the limit threshold equal to 1. Our results extend earlier results obtained by J. Paris in a SET-FMLA setting to the SET-SET setting, and consider open thresholds besides closed ones. In the SET-SET setting, in particular, they reveal that probability 1 preservation does not yield classical logic but supervaluationism, and, conversely, that positive probability preservation yields subvaluationism.
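A small sketch of one natural reading of preservation consequence in the SET-FMLA case, checked by linear programming over distributions on classical valuations; the paper's precise SET-SET definitions and open-threshold variants may differ from this reading.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Sketch: Gamma preservation-entails psi at threshold t iff every probability
# distribution over classical valuations giving each premise probability >= t
# also gives psi probability >= t. Checked by minimizing Pr(psi) via an LP.

def valuations(atoms):
    for bits in itertools.product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, bits))

def preserves(premises, conclusion, atoms, t):
    vals = list(valuations(atoms))
    c = np.array([1.0 if conclusion(v) else 0.0 for v in vals])   # minimize Pr(conclusion)
    A_ub = [[-1.0 if prem(v) else 0.0 for v in vals] for prem in premises]
    b_ub = [-t] * len(premises)                                   # Pr(premise) >= t
    A_eq, b_eq = [np.ones(len(vals))], [1.0]                      # probabilities sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    return res.fun >= t - 1e-9

# Example: does {p, p -> q} preserve probability >= t for q?  (fails for t < 1)
p = lambda v: v["p"]
q = lambda v: v["q"]
p_implies_q = lambda v: (not v["p"]) or v["q"]
for t in (1.0, 0.9, 0.8):
    print(t, preserves([p, p_implies_q], q, ["p", "q"], t))
```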

Deep learning is usually described as an experiment-driven field and is under continual criticism for lacking theoretical foundations. This problem has been partially addressed by a large body of literature that has so far not been well organized. This paper reviews and organizes the recent advances in deep learning theory. The literature is categorized into six groups: (1) complexity and capacity-based approaches for analyzing the generalizability of deep learning; (2) stochastic differential equations and their dynamic systems for modelling stochastic gradient descent and its variants, which characterize the optimization and generalization of deep learning, partially inspired by Bayesian inference; (3) the geometrical structures of the loss landscape that drive the trajectories of the dynamic systems; (4) the roles of over-parameterization of deep neural networks from both positive and negative perspectives; (5) theoretical foundations of several special structures in network architectures; and (6) the increasing concerns about ethics and security and their relationships with generalizability.
