
We investigate expansions of Presburger arithmetic, i.e., the theory of the integers with addition and order, with additional structure related to exponentiation: either a function that maps each number $x$ to $2^x$, or a predicate for the powers of $2$. The latter theory, denoted $\mathrm{PresPower}$, was introduced by B\"uchi as a first attempt at characterising the sets of tuples of numbers that can be expressed using finite automata; B\"uchi's method does not give an elementary upper bound, and the complexity of this theory has been open. The former theory, denoted $\mathrm{PresExp}$, was shown decidable by Semenov; while the decision procedure for this theory differs radically from B\"uchi's automata-based method, it is also non-elementary. In fact, the theory with the power function has a non-elementary lower bound. In this paper, we show that while Semenov's and B\"uchi's approaches yield non-elementary blow-ups for $\mathrm{PresPower}$, the theory is in fact decidable in triply exponential time, similar to the best known quantifier-elimination algorithm for Presburger arithmetic. We also provide an $\mathrm{NExpTime}$ upper bound for the existential fragment of $\mathrm{PresExp}$, a step towards a finer-grained analysis of its complexity. Both results are established by analysing a single parameterized satisfiability algorithm for $\mathrm{PresExp}$, which can be specialized either to the setting of $\mathrm{PresPower}$ or to the existential theory of $\mathrm{PresExp}$. Besides the new upper bounds for the existential theory of $\mathrm{PresExp}$ and for $\mathrm{PresPower}$, we believe our algorithm provides new intuition for the decidability of these theories, and for the features that lead to non-elementary blow-ups.
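As a toy illustration of the predicate side of this setting, the sketch below evaluates bounded existential sentences over the integers with a powers-of-2 predicate by brute force. It only illustrates the semantics of $\mathrm{PresPower}$; it is not the paper's decision procedure, and all names are ours.

```python
# Brute-force evaluation of bounded existential sentences in the signature of
# PresPower (integers with addition, order, and a powers-of-2 predicate).
# Illustration of the semantics only, not the paper's algorithm.

def is_pow2(x: int) -> bool:
    """The predicate Pow2(x): x = 2^k for some k >= 0."""
    return x >= 1 and (x & (x - 1)) == 0

def holds_exists(phi, bound: int) -> bool:
    """Decide whether some integer x with |x| <= bound satisfies phi."""
    return any(phi(x) for x in range(-bound, bound + 1))

# ∃x. Pow2(x) ∧ Pow2(x + 2): true, witnessed by x = 2 (2 and 4 are powers of 2).
print(holds_exists(lambda x: is_pow2(x) and is_pow2(x + 2), 100))
# ∃x. Pow2(x) ∧ Pow2(x + 5): false, since 2^a - 2^b = 5 forces b = 0 (5 is odd)
# and 2^a = 6 has no solution.
print(holds_exists(lambda x: is_pow2(x) and is_pow2(x + 5), 100))
```

Of course, truncating the quantifier range is sound only for refutations of this particular shape; handling genuine unbounded quantification is exactly what makes the theory interesting.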

Related content

In this article, we present the time-space Chebyshev pseudospectral method (TS-CPsM) to approximate solutions of the generalised Burgers-Fisher (gBF) equation. The proposed method collocates at the Chebyshev-Gauss-Lobatto (CGL) points in both the time and space directions. Further, using a mapping, the non-homogeneous initial-boundary value problem is transformed into a homogeneous problem, and a system of algebraic equations is obtained; this system is solved by the Newton-Raphson method. A stability analysis of the proposed method is carried out. Test problems considered by various researchers are explored to illustrate the robustness and practicality of the presented approach. The approximate solutions obtained with the proposed method are highly accurate and significantly better than existing results.
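The following minimal sketch shows only the starting point of such a method: generating the CGL nodes and checking the spectral accuracy of interpolation at them. The full TS-CPsM scheme (time-space collocation, homogenisation, Newton-Raphson solve) is considerably more involved.

```python
import numpy as np

def cgl_points(N: int) -> np.ndarray:
    """Chebyshev-Gauss-Lobatto nodes x_j = cos(pi*j/N), j = 0..N, on [-1, 1]."""
    return np.cos(np.pi * np.arange(N + 1) / N)

N = 8
x = cgl_points(N)          # includes both endpoints +1 and -1,
                           # with nodes clustering near the boundary

# Interpolate exp at the CGL nodes with a degree-N polynomial and measure the
# maximum error on a fine grid; for smooth functions it decays spectrally fast.
coeffs = np.polyfit(x, np.exp(x), N)
grid = np.linspace(-1.0, 1.0, 1001)
err = float(np.max(np.abs(np.polyval(coeffs, grid) - np.exp(grid))))
```

Already at $N = 8$ the interpolation error for $e^x$ is far below $10^{-6}$, which is the behaviour that makes Chebyshev collocation attractive.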

In this study, we examine numerical approximations for second-order linear and nonlinear differential equations with diverse boundary conditions, followed by residual corrections of the first approximations. We first obtain numerical results using the Galerkin weighted residual approach with Bernstein polynomials. Residuals arise because this first approximation is computed numerically. To minimize these residuals, we use a compact finite difference scheme of fourth-order convergence to solve the error differential equations in accordance with the error boundary conditions. We also introduce the formulation of the fourth-order compact finite difference method for nonlinear BVPs. The improved approximations are produced by adding the error values derived from the approximations of the error differential equation to the weighted residual values. Numerical results are compared to the exact solutions and to solutions available in the published literature to validate the proposed scheme, and high accuracy is achieved in all cases.

Two numerical schemes are proposed and investigated for the Yang--Mills equations, which can be seen as a nonlinear generalisation of the Maxwell equations set on Lie algebra-valued functions, with similarities to certain formulations of General Relativity. Both schemes are built on the Discrete de Rham (DDR) method and inherit its main features: an arbitrary order of accuracy, and applicability to generic polyhedral meshes. They make use of the complex property of the DDR, together with a Lagrange-multiplier approach, to preserve, at the discrete level, a nonlinear constraint associated with the Yang--Mills equations. We also show that the schemes satisfy a discrete energy dissipation property (the dissipation coming solely from the implicit time stepping). Issues around the practical implementation of the schemes are discussed; in particular, the assembly of the local contributions in a way that minimises the cost of handling the nonlinear terms, in conjunction with the tensorisation coming from the Lie algebra. Numerical tests are provided using a manufactured solution, and show that both schemes display convergence in the $L^2$-norm of the potential and electric fields of order $\mathcal O(h^{k+1})$ (provided that the time step is of that order), where $k$ is the polynomial degree chosen for the DDR complex. We also numerically demonstrate the preservation of the constraint.
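A toy analogue of the discrete energy dissipation property mentioned above: for the linear dissipative system $u' = -Au$ with $A$ symmetric positive definite, a backward (implicit) Euler step makes the energy $E_n = \|u_n\|^2/2$ nonincreasing, with the dissipation coming solely from the implicit stepping. The matrix, time step, and initial state below are arbitrary choices for the demonstration, far from the actual DDR schemes.

```python
import numpy as np

# Backward Euler for u' = -A u, A symmetric positive definite:
#   u_{n+1} = (I + dt*A)^{-1} u_n,
# so every eigenvalue of the propagator lies in (0, 1) and the energy
# E_n = ||u_n||^2 / 2 decreases at each step.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                    # SPD example matrix
dt = 0.1
prop = np.linalg.inv(np.eye(2) + dt * A)      # backward Euler propagator

u = np.array([1.0, -2.0])
energies = [0.5 * float(u @ u)]
for _ in range(50):
    u = prop @ u
    energies.append(0.5 * float(u @ u))

dissipative = all(b <= a for a, b in zip(energies, energies[1:]))
```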

For modern gradient-based optimization, a developmental landmark is Nesterov's accelerated gradient descent method, proposed in [Nesterov, 1983] and hence referred to as Nesterov-1983. An important subsequent advance is its proximal generalization, the fast iterative shrinkage-thresholding algorithm (FISTA), which is widely used in image science and engineering. However, it has been unknown whether Nesterov-1983 and FISTA converge linearly on strongly convex functions, a question listed as an open problem in the comprehensive review [Chambolle and Pock, 2016, Appendix B]. In this paper, we answer this question using the high-resolution differential equation framework. Along with the phase-space representation previously adopted, the key difference here in constructing the Lyapunov function is that the coefficient of the kinetic energy varies with the iteration. Furthermore, we point out that the linear convergence of both algorithms is independent of the parameter $r$ on strongly convex functions. We also obtain that the proximal subgradient norm converges linearly.
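For reference, the Nesterov-1983 iteration with the classical $(k-1)/(k+2)$ momentum can be sketched as below on a strongly convex quadratic; the test instance and step size are our own choices. The assertion only uses the standard $O(1/k^2)$ guarantee, while the paper's contribution is that the observed decay is in fact linear (geometric).

```python
import numpy as np

def nesterov_1983(grad, x0, step, iters):
    """Nesterov's 1983 scheme:
       x_k = y_{k-1} - step * grad(y_{k-1});
       y_k = x_k + (k-1)/(k+2) * (x_k - x_{k-1})."""
    x_prev = np.array(x0, dtype=float)
    y = x_prev.copy()
    for k in range(1, iters + 1):
        x = y - step * grad(y)                          # gradient step at y
        y = x + (k - 1.0) / (k + 2.0) * (x - x_prev)    # momentum extrapolation
        x_prev = x
    return x_prev

A = np.diag([1.0, 10.0])            # f(x) = x^T A x / 2: mu = 1, L = 10
grad = lambda x: A @ x
x = nesterov_1983(grad, [5.0, 5.0], step=0.1, iters=2000)
f_val = 0.5 * float(x @ A @ x)      # objective value at the last iterate
```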

We show that corner polyhedra and 3-connected Schnyder labelings join the growing list of planar structures that can be set in exact correspondence with (weighted) models of quadrant walks via a bijection due to Kenyon, Miller, Sheffield and Wilson. Our approach leads to a first polynomial-time algorithm to count these structures, and to the determination of their exact asymptotic growth constants: the number $p_n$ of corner polyhedra and $s_n$ of 3-connected Schnyder labelings of size $n$ respectively satisfy $(p_n)^{1/n}\to 9/2$ and $(s_n)^{1/n}\to 16/3$ as $n$ goes to infinity. While the growth rates are rational, as in previously known instances of such correspondences, the exponent of the asymptotic polynomial correction to the exponential growth does not appear to follow from the now-standard Denisov-Wachtel approach, due to a bimodal behavior of the step set of the underlying tandem walk. However, a heuristic argument suggests that these exponents are $-1-\pi/\arccos(9/16)\approx -4.23$ for $p_n$ and $-1-\pi/\arccos(22/27)\approx -6.08$ for $s_n$, which would imply that the associated series are not D-finite.
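Quadrant-walk models of this kind can be counted by elementary dynamic programming over positions, as sketched below. The simple N/S/E/W step set in the example is not the weighted, bimodal tandem step set of the paper; it is chosen only because its excursion counts are classical (the number of quarter-plane N/S/E/W excursions of length $2n$ is $\mathrm{Cat}(n)\,\mathrm{Cat}(n+1)$).

```python
from collections import defaultdict

def quadrant_walks(steps, length, end=(0, 0)):
    """Number of walks of the given length from the origin to `end`, using
    the given step set and staying in the nonnegative quadrant."""
    cur = defaultdict(int)
    cur[(0, 0)] = 1
    for _ in range(length):
        nxt = defaultdict(int)
        for (x, y), c in cur.items():
            for dx, dy in steps:
                nx, ny = x + dx, y + dy
                if nx >= 0 and ny >= 0:     # confinement to the quadrant
                    nxt[(nx, ny)] += c
        cur = nxt
    return cur[end]

NSEW = [(1, 0), (-1, 0), (0, 1), (0, -1)]
# Excursions of length 2 and 4: Cat(1)*Cat(2) = 2 and Cat(2)*Cat(3) = 10.
print(quadrant_walks(NSEW, 2), quadrant_walks(NSEW, 4))
```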

Classic learning theory suggests that proper regularization is the key to good generalization and robustness. In classification, current training schemes only target the complexity of the classifier itself, which can be misleading and ineffective. Instead, we advocate directly measuring the complexity of the decision boundary. The existing literature in this area is limited, with few well-established definitions of boundary complexity. As a proof of concept, we start by analyzing ReLU neural networks, whose boundary complexity can be conveniently characterized by the number of affine pieces. With the help of tropical geometry, we develop a novel method that can explicitly count the exact number of boundary pieces and, as a by-product, the exact number of total affine pieces. Numerical experiments are conducted, and distinctive properties of our boundary complexity are uncovered. First, the boundary piece count appears largely independent of other measures, e.g., the total piece count and the $l_2$ norm of the weights, during the training process. Second, the boundary piece count is negatively correlated with robustness: popular robust training techniques, e.g., adversarial training or random noise injection, are found to reduce the number of boundary pieces.
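In one input dimension the exact-piece-counting idea can be made completely explicit, as a small analogue of the general tropical-geometry method (which handles arbitrary input dimensions). For $f(x) = \sum_i a_i\,\mathrm{relu}(w_i x + b_i)$, the only candidate breakpoints are $t_i = -b_i/w_i$, crossing $t_i$ changes the slope by $a_i |w_i|$, and breakpoints whose slope jumps cancel exactly do not separate pieces.

```python
from collections import defaultdict

def count_pieces_1d(a, w, b, tol=1e-12):
    """Exact number of affine pieces of f(x) = sum_i a[i]*relu(w[i]*x + b[i]).
    Accumulate the net slope jump at each candidate breakpoint -b[i]/w[i];
    pieces = 1 + number of breakpoints with a nonzero net jump."""
    jump = defaultdict(float)
    for ai, wi, bi in zip(a, w, b):
        if wi != 0:
            jump[-bi / wi] += ai * abs(wi)
    return 1 + sum(1 for j in jump.values() if abs(j) > tol)

# Two active breakpoints (at x = 0 and x = 1) -> 3 pieces:
print(count_pieces_1d([1.0, 1.0], [1.0, 1.0], [0.0, -1.0]))
# Exact cancellation: relu(x) - relu(x) == 0 is a single affine piece:
print(count_pieces_1d([1.0, -1.0], [1.0, 1.0], [0.0, 0.0]))
```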

We show that the sum of a sequence of integers can be computed in linear time on a Turing machine. In particular, the most obvious algorithm for this problem, which appears to require quadratic time due to carry propagation, actually runs in linear time by amortized analysis.
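The amortized argument can be sketched concretely: keep the running sum as a little-endian bit list, add each number with schoolbook binary addition, and count one "bit operation" per position processed, including carry propagation. Taking the number of 1-bits in the accumulator as the potential, each carry step (except the last of a chain) destroys a 1-bit, so the total operation count is at most $2(\text{total input bits} + \text{number of inputs})$, i.e., linear.

```python
def to_bits(v):
    """Little-endian bits of v >= 0 (at least one bit)."""
    return [(v >> j) & 1 for j in range(v.bit_length())] or [0]

def linear_time_sum(nums):
    """Sum nums by repeated schoolbook binary addition, counting bit ops."""
    acc, ops = [], 0
    for v in nums:
        v_bits = to_bits(v)
        carry, i = 0, 0
        while i < len(v_bits) or carry:
            ops += 1                      # one full-adder step
            if i == len(acc):
                acc.append(0)
            s = acc[i] + (v_bits[i] if i < len(v_bits) else 0) + carry
            acc[i], carry = s & 1, s >> 1
            i += 1
    return sum(bit << j for j, bit in enumerate(acc)), ops

nums = list(range(1, 200))
total, ops = linear_time_sum(nums)
total_bits = sum(len(to_bits(v)) for v in nums)
# Potential argument: ops <= 2 * (total_bits + len(nums)).
```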

In this work we connect two notions: that of the nonparametric mode of a probability measure, defined by asymptotic small ball probabilities, and that of the Onsager-Machlup functional, a generalized density also defined via asymptotic small ball probabilities. We show that in a separable Hilbert space setting and under mild conditions on the likelihood, modes of a Bayesian posterior distribution based upon a Gaussian prior exist and agree with the minimizers of its Onsager-Machlup functional and thus also with weak posterior modes. We apply this result to inverse problems and derive conditions on the forward mapping under which this variational characterization of posterior modes holds. Our results show rigorously that in the limit case of infinite-dimensional data corrupted by additive Gaussian or Laplacian noise, nonparametric maximum a posteriori estimation is equivalent to Tikhonov-Phillips regularization. In comparison with the work of Dashti, Law, Stuart, and Voss (2013), the assumptions on the likelihood are relaxed so that they cover in particular the important case of white Gaussian process noise. We illustrate our results by applying them to a severely ill-posed linear problem with Laplacian noise, where we express the maximum a posteriori estimator analytically and study its rate of convergence in the small noise limit.
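A finite-dimensional sanity check of the "MAP estimation = Tikhonov-Phillips regularization" correspondence: for a linear forward map $A$, additive Gaussian noise $\mathcal N(0, \sigma^2 I)$ and a standard Gaussian prior, the negative log-posterior $\|Ax - y\|^2/(2\sigma^2) + \|x\|^2/2$ is minimized exactly by the Tikhonov solution with regularization parameter $\sigma^2$. The paper's point is that this equivalence survives in the infinite-dimensional, nonparametric setting; the instance below is an arbitrary small example.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))                 # toy linear forward map
sigma = 1.0
y = A @ rng.standard_normal(3) + sigma * rng.standard_normal(5)

# MAP estimate by plain gradient descent on the negative log-posterior
#   ||A x - y||^2 / (2 sigma^2) + ||x||^2 / 2.
x = np.zeros(3)
for _ in range(500):
    x -= 0.05 * (A.T @ (A @ x - y) / sigma**2 + x)

# Tikhonov-Phillips solution in closed form, with alpha = sigma^2:
x_tik = np.linalg.solve(A.T @ A + sigma**2 * np.eye(3), A.T @ y)
gap = float(np.max(np.abs(x - x_tik)))          # the two estimates coincide
```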

This paper considers the Westervelt equation, one of the most widely used models in nonlinear acoustics, and seeks to recover two spatially-dependent parameters of physical importance from time-trace boundary measurements. Specifically, these are the nonlinearity parameter $\kappa(x)$, often referred to as $B/A$ in the acoustics literature, and the wave speed $c_0(x)$. The determination of the spatial change in these quantities can be used as a means of imaging. We consider identifiability from one or two boundary measurements as relevant in these applications. For a reformulation of the problem in terms of the squared slowness $\mathfrak{s}=1/c_0^2$ and the combined coefficient $\eta=\frac{B/A+2}{\varrho_0 c_0^4}$ we devise a frozen Newton method and prove its convergence. The effectiveness (and limitations) of this iterative scheme are demonstrated by numerical examples.
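A scalar toy version of the frozen Newton idea, where the derivative is evaluated once at the initial guess and reused in every iteration, $x_{k+1} = x_k - F'(x_0)^{-1}(F(x_k) - y)$. The model problem $F(x) = x^3$ with data $y = 8$ (exact solution $x = 2$) is our own choice for illustration; the paper's scheme operates on an infinite-dimensional parameter-to-measurement map.

```python
def frozen_newton(F, dF_frozen, y, x0, iters=100):
    """Frozen Newton iteration: the derivative dF_frozen, computed once at
    the initial guess, is reused in every step (no re-linearization)."""
    x = x0
    for _ in range(iters):
        x -= (F(x) - y) / dF_frozen
    return x

x0 = 3.0
root = frozen_newton(lambda t: t**3, dF_frozen=3 * x0**2, y=8.0, x0=x0)
```

On this example the map $x \mapsto x - (x^3 - 8)/27$ is a contraction on $[2, 3]$ (derivative between $0$ and $5/9$), so the iteration converges linearly to $2$, which is the typical behaviour one trades against the cost of re-evaluating the Jacobian.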

An $(n,m)$-graph is a graph with $n$ types of arcs and $m$ types of edges. A homomorphism of an $(n,m)$-graph $G$ to another $(n,m)$-graph $H$ is a vertex mapping that preserves the adjacencies along with their types and directions. The order of a smallest (with respect to the number of vertices) such $H$ is the $(n,m)$-chromatic number of $G$. Moreover, an $(n,m)$-relative clique $R$ of an $(n,m)$-graph $G$ is a vertex subset of $G$ for which no two distinct vertices of $R$ get identified under any homomorphism of $G$. The $(n,m)$-relative clique number of $G$, denoted by $\omega_{r(n,m)}(G)$, is the maximum $|R|$ such that $R$ is an $(n,m)$-relative clique of $G$. In practice, $(n,m)$-relative cliques are often used for establishing lower bounds of $(n,m)$-chromatic number of graph families. Generalizing an open problem posed by Sopena [Discrete Mathematics 2016] in his latest survey on oriented coloring, Chakroborty, Das, Nandi, Roy and Sen [Discrete Applied Mathematics 2022] conjectured that $\omega_{r(n,m)}(G) \leq 2 (2n+m)^2 + 2$ for any triangle-free planar $(n,m)$-graph $G$ and that this bound is tight for all $(n,m) \neq (0,1)$. In this article, we positively settle this conjecture by improving the previous upper bound of $\omega_{r(n,m)}(G) \leq 14 (2n+m)^2 + 2$ to $\omega_{r(n,m)}(G) \leq 2 (2n+m)^2 + 2$, and by finding examples of triangle-free planar graphs that achieve this bound. As a consequence of the tightness proof, we also establish a new lower bound of $2 (2n+m)^2 + 2$ for the $(n,m)$-chromatic number for the family of triangle-free planar graphs.
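The homomorphism notion underlying these definitions can be illustrated by brute force in the simplest special case $(n, m) = (1, 0)$, i.e., plain digraphs with one arc type: a homomorphism $f : V(G) \to V(H)$ must send every arc $(u, v)$ of $G$ to an arc $(f(u), f(v))$ of $H$. For general $(n,m)$-graphs one additionally matches arc and edge types; the function and examples below are ours.

```python
from itertools import product

def exists_hom(nG, G_arcs, nH, H_arcs):
    """Brute-force test: is there a digraph homomorphism from G
    (vertices 0..nG-1, arcs G_arcs) to H (vertices 0..nH-1, arcs H_arcs)?"""
    H = set(H_arcs)
    return any(all((f[u], f[v]) in H for (u, v) in G_arcs)
               for f in product(range(nH), repeat=nG))

C3 = [(0, 1), (1, 2), (2, 0)]    # directed 3-cycle
C2 = [(0, 1), (1, 0)]            # directed 2-cycle
print(exists_hom(3, C3, 3, C3))  # the identity map is a homomorphism
print(exists_hom(3, C3, 2, C2))  # impossible: an odd cycle cannot alternate
```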
