
Considered herein is a Jacobian-free Newton method for the numerical solution of nonlinear equations in which the Jacobian is approximated using the complex-step derivative approximation. We demonstrate that this method converges for sufficiently small, though not necessarily tiny, complex-step values. Notably, in the case of scalar equations the convergence rate becomes quadratic as the complex step tends to zero, whereas in the case of systems of equations the rate is quadratic for any appropriately small value of the complex step, not just in the limit. This assertion is substantiated through numerical experiments. Furthermore, we demonstrate the method's seamless applicability to nonlinear systems arising from differential equations, employing it as a Jacobian-free Newton-Krylov method.
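
As a minimal illustration of the complex-step idea (a sketch only, not the paper's implementation; the function name and the choice of step h below are ours), a scalar Newton iteration can approximate the derivative as f'(x) ≈ Im f(x + ih)/h, provided f accepts complex arguments:

import numpy as np

def complex_step_newton(f, x0, h=1e-20, tol=1e-12, max_iter=50):
    # Scalar Newton iteration with the derivative approximated by the
    # complex-step formula f'(x) ~ Im(f(x + i*h)) / h.
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = np.imag(f(x + 1j * h)) / h   # complex-step derivative approximation
        x = x - fx / dfx                   # Newton update
    return x

# Example: find the real root of x^3 - 2x - 5 = 0.
print(complex_step_newton(lambda x: x**3 - 2*x - 5, x0=2.0))

For systems, the same formula yields Jacobian-vector products Jv ≈ Im F(x + ihv)/h, which is what enables the Jacobian-free Newton-Krylov usage mentioned above.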

Related content

We consider finite element approximations to the optimal constant for the Hardy inequality with exponent $p=2$ in bounded domains of dimension $n=1$ or $n \geq 3$. For finite element spaces of piecewise linear and continuous functions on a mesh of size $h$, we prove that the approximate Hardy constant converges to the optimal Hardy constant at a rate proportional to $1/| \log h |^2$. This result holds in dimension $n=1$, in any dimension $n \geq 3$ if the domain is the unit ball and the finite element discretization exploits the rotational symmetry of the problem, and in dimension $n=3$ for general finite element discretizations of the unit ball. In the first two cases, our estimates show excellent quantitative agreement with values of the discrete Hardy constant obtained computationally.
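
For reference, one classical form of the $p=2$ Hardy inequality in this setting (stated here only as background; the paper's precise weight, domain, and discrete constant are as described above) reads, for $u \in C_c^\infty(\Omega)$ with $\Omega \subset \mathbb{R}^n$, $n \geq 3$:

$$ \int_\Omega |\nabla u|^2 \, dx \;\geq\; \Big(\frac{n-2}{2}\Big)^2 \int_\Omega \frac{u^2}{|x|^2} \, dx , $$

and the analogous one-dimensional inequality carries the optimal constant $1/4$.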

A (left) group code of length n is a linear code which is the image of a (left) ideal of a group algebra FG via an isomorphism from FG to F^n which maps G to the standard basis of F^n. Many classical linear codes have been shown to be group codes. In this paper we obtain a criterion to decide when a linear code is a group code in terms of its intrinsic properties in the ambient space F^n, which does not assume an a priori group algebra structure on F^n. As an application we provide a family of groups (including metacyclic groups) for which every two-sided group code is an abelian group code. It is well known that Reed-Solomon codes are cyclic and that their parity-check extensions are elementary abelian group codes. These two classes of codes are included in the class of Cauchy codes. Using our criterion we classify the Cauchy codes of some lengths which are left group codes, together with the possible group code structures on these codes.
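
As a standard concrete instance of this definition (not taken from the paper): for the cyclic group $G = \langle g \rangle$ of order $n$ one has

$$ FG \;\cong\; F[x]/(x^n - 1), \qquad g \mapsto x , $$

and, under the isomorphism sending $g^i$ to the $(i+1)$-st standard basis vector of $F^n$, the ideals of $FG$ correspond exactly to the cyclic codes of length $n$; in particular, every cyclic code of length $n$ is a group code for the cyclic group of order $n$.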

We study the problem of identifying a small set of $k\sim n^\theta$, $0<\theta<1$, infected individuals within a large population of size $n$ by testing groups of individuals simultaneously. All tests are conducted concurrently. The goal is to minimise the total number of tests required. In this paper we make the (realistic) assumption that tests are noisy, i.e.\ a group that contains an infected individual may return a negative test result, and one that contains no infected individual may return a positive test result, each with a certain probability. The noise need not be symmetric. We develop an algorithm called SPARC that correctly identifies the set of infected individuals up to $o(k)$ errors with high probability with the asymptotically minimum number of tests. Additionally, we develop an algorithm called SPEX that exactly identifies the set of infected individuals w.h.p. with a number of tests that matches the information-theoretic lower bound for the constant column design, a powerful and well-studied test design.
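
A minimal sketch of the noisy non-adaptive measurement model described above (the Bernoulli design density, the parameter names, and the noise rates below are illustrative choices of ours; SPARC and SPEX themselves are described in the paper and not reproduced here):

import numpy as np

rng = np.random.default_rng(0)

n, k, m = 10_000, 100, 2_000      # population size, number infected, number of tests
p_fn, p_fp = 0.05, 0.02           # false-negative / false-positive rates (noise may be asymmetric)

infected = np.zeros(n, dtype=bool)
infected[rng.choice(n, size=k, replace=False)] = True

# Random pooling design: each individual joins each test independently.
design = rng.random((m, n)) < np.log(2) / k

# A pool is (truly) positive iff it contains at least one infected individual.
true_outcome = (design.astype(int) @ infected.astype(int)) > 0

# Flip each outcome with the corresponding noise probability.
flip = np.where(true_outcome, rng.random(m) < p_fn, rng.random(m) < p_fp)
observed = true_outcome ^ flip    # the noisy test results an algorithm would see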

We present a manifold-based autoencoder method for learning nonlinear dynamics in time, notably partial differential equations (PDEs), in which the manifold latent space evolves according to Ricci flow. This is accomplished by simulating Ricci flow in a physics-informed setting and matching manifold quantities so that Ricci flow is empirically achieved. With our methodology, the manifold is learned as part of the training procedure, so ideal geometries may be discerned, while the evolution simultaneously induces a more accommodating latent representation than static methods provide. We demonstrate our method on a range of numerical experiments with PDEs that exhibit desirable characteristics such as periodicity and randomness, reporting errors in both in-distribution and extrapolation scenarios.
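
For reference, the Ricci flow evolves the latent metric $g(t)$ by

$$ \partial_t g_{ij}(t) = -2\,R_{ij}(g(t)) , $$

where $R_{ij}$ is the Ricci curvature; in a physics-informed setting one would penalise the residual of this equation, e.g. through a loss term of the schematic form $\lambda\,\|\partial_t g_{ij} + 2 R_{ij}\|^2$ added to the reconstruction loss (a generic sketch; the paper's exact quantities and matching strategy may differ).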

The joint bidiagonalization (JBD) process iteratively reduces a matrix pair $\{A,L\}$ to two bidiagonal forms simultaneously, which can be used for computing a partial generalized singular value decomposition (GSVD) of $\{A,L\}$. The process has a nested inner-outer iteration structure, where the inner iteration usually cannot be computed exactly. In this paper, we study the inaccurately computed inner iterations of JBD by first investigating the influence of the computational error of the inner iteration on the outer iteration, and then proposing a reorthogonalized JBD (rJBD) process that maintains orthogonality of a part of the Lanczos vectors. An error analysis of the rJBD is carried out to build connections with Lanczos bidiagonalizations. The results are then used to investigate the convergence and accuracy of the rJBD-based GSVD computation. It is shown that the accuracy of the computed GSVD components depends on the computing accuracy of the inner iterations and the condition number of $(A^T,L^T)^T$, while the convergence rate is not much affected. For practical JBD-based GSVD computations, our results provide a guideline for choosing a proper computing accuracy of the inner iterations in order to obtain approximate GSVD components with a desired accuracy. Numerical experiments are carried out to confirm our theoretical results.
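
For background on the Lanczos bidiagonalization that the rJBD analysis connects to, a bare-bones Golub-Kahan (Lanczos) bidiagonalization of a single matrix is sketched below (illustrative only; the joint bidiagonalization of the pair {A, L}, its inner iteration, and the reorthogonalization in rJBD are as described in the paper):

import numpy as np

def golub_kahan_bidiag(A, b, steps):
    # Lower Golub-Kahan (Lanczos) bidiagonalization: in exact arithmetic
    # A @ V = U @ B, with alpha on the diagonal of B and beta on its subdiagonal.
    m, n = A.shape
    U = np.zeros((m, steps + 1))
    V = np.zeros((n, steps))
    alpha = np.zeros(steps)
    beta = np.zeros(steps + 1)

    beta[0] = np.linalg.norm(b)
    U[:, 0] = b / beta[0]
    r = A.T @ U[:, 0]
    alpha[0] = np.linalg.norm(r)
    V[:, 0] = r / alpha[0]

    for j in range(steps):
        p = A @ V[:, j] - alpha[j] * U[:, j]
        beta[j + 1] = np.linalg.norm(p)
        U[:, j + 1] = p / beta[j + 1]
        if j + 1 < steps:
            r = A.T @ U[:, j + 1] - beta[j + 1] * V[:, j]
            alpha[j + 1] = np.linalg.norm(r)
            V[:, j + 1] = r / alpha[j + 1]

    B = np.zeros((steps + 1, steps))          # (steps+1) x steps lower-bidiagonal factor
    B[np.arange(steps), np.arange(steps)] = alpha
    B[np.arange(1, steps + 1), np.arange(steps)] = beta[1:]
    return U, B, V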

In pure integer linear programming it is often desirable to work with polyhedra that are full-dimensional, and it is well known that any polyhedron can be reduced to a full-dimensional one in polynomial time. More precisely, using the Hermite normal form, it is possible to map a non-full-dimensional polyhedron to a full-dimensional isomorphic one in a lower-dimensional space, while preserving integer vectors. In this paper, we extend the above result simultaneously in two directions. First, we consider mixed integer vectors instead of integer vectors, by leveraging the concept of an "integer reflexive generalized inverse." Second, we replace polyhedra with convex quadratic sets, which are sets obtained from polyhedra by enforcing one additional convex quadratic inequality. We study structural properties of convex quadratic sets and use them to obtain polynomial time algorithms to recognize full-dimensional convex quadratic sets, and to find an affine function that maps a non-full-dimensional convex quadratic set to a full-dimensional isomorphic one in a lower-dimensional space, while preserving mixed integer vectors. We showcase the applicability and potential impact of these results by showing that they can be used to prove that mixed integer convex quadratic programming is fixed-parameter tractable with the number of integer variables as the parameter. Our algorithm unifies and extends the known polynomial-time solvability of pure integer convex quadratic programming in fixed dimension and of convex quadratic programming.
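
In symbols, a convex quadratic set as described above can be written (with illustrative symbol names) as

$$ S \;=\; \{\, x \in \mathbb{R}^d \;:\; W x \le w, \;\; x^\top Q x + q^\top x \le q_0 \,\}, \qquad Q \succeq 0 , $$

i.e. a polyhedron intersected with one convex quadratic inequality; the full-dimensionality and mapping results above concern the mixed integer points of such sets.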

Motivated by the desire to understand stochastic algorithms for nonconvex optimization that are robust to their hyperparameter choices, we analyze a mini-batched prox-linear iterative algorithm for the problem of recovering an unknown rank-1 matrix from rank-1 Gaussian measurements corrupted by noise. We derive a deterministic recursion that predicts the error of this method and show, using a non-asymptotic framework, that this prediction is accurate for any batch-size and a large range of step-sizes. In particular, our analysis reveals that this method, though stochastic, converges linearly from a local initialization with a fixed step-size to a statistical error floor. Our analysis also exposes how the batch-size, step-size, and noise level affect the (linear) convergence rate and the eventual statistical estimation error, and we demonstrate how to use our deterministic predictions to perform hyperparameter tuning (e.g. step-size and batch-size selection) without ever running the method. On a technical level, our analysis is enabled in part by showing that the fluctuations of the empirical iterates around our deterministic predictions scale with the error of the previous iterate.
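
For reference, the generic prox-linear update for a composite objective $h(c(x))$, with $h$ convex, $c$ smooth, and step-size $\alpha$, is

$$ x_{t+1} \;=\; \arg\min_{x}\; h\big( c(x_t) + \nabla c(x_t)(x - x_t) \big) + \frac{1}{2\alpha}\, \| x - x_t \|^2 ; $$

in a mini-batched variant, $c$ would collect only the measurements in the current batch (a generic statement of the scheme, given as orientation; the specific loss and rank-1 measurement model are as described above).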

A rectangulation is a decomposition of a rectangle into finitely many rectangles. Via natural equivalence relations, rectangulations can be seen as combinatorial objects with a rich structure, with links to lattice congruences, flip graphs, polytopes, lattice paths, Hopf algebras, etc. In this paper, we first revisit the structure of the respective equivalence classes: weak rectangulations that preserve rectangle-segment adjacencies, and strong rectangulations that preserve rectangle-rectangle adjacencies. We thoroughly investigate posets defined by adjacency in rectangulations of both kinds, and unify and simplify known bijections between rectangulations and permutation classes. This yields a uniform treatment of mappings between permutations and rectangulations that unifies the results from earlier contributions, and emphasizes parallelism and differences between the weak and the strong cases. Then, we consider the special case of guillotine rectangulations, and prove that they can be characterized - under all known mappings between permutations and rectangulations - by avoidance of two mesh patterns that correspond to "windmills" in rectangulations. This yields new permutation classes in bijection with weak guillotine rectangulations, and the first known permutation class in bijection with strong guillotine rectangulations. Finally, we address enumerative issues and prove asymptotic bounds for several families of strong rectangulations.

Several physical problems modeled by second-order elliptic equations can be efficiently solved using mixed finite elements of the Raviart-Thomas family RTk for N-simplexes, introduced in the seventies. In case Neumann conditions are prescribed on a curvilinear boundary, the normal component of the flux variable should preferably not take its values at nodes shifted to the boundary of the approximating polytope in the corresponding normal direction, since otherwise the method's accuracy degrades, as shown in previous papers by the first author and collaborators. In those works an order-preserving technique was studied, based on a parametric version of these elements with curved simplexes. In this article an alternative with straight-edged triangles is proposed for two-dimensional problems. The key point of this method is a Petrov-Galerkin formulation of the mixed problem, in which the test-flux space is slightly different from the shape-flux space. After describing the underlying variant of RTk we show that it gives rise to a uniformly stable and optimally convergent method, taking the Poisson equation as a model problem.
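
For orientation, with the flux $\sigma = -\nabla u$ and homogeneous Dirichlet data, the standard mixed formulation of the Poisson model problem $-\Delta u = f$ reads: find $(\sigma, u) \in \Sigma \times V$ such that

$$ \int_\Omega \sigma \cdot \tau \, dx - \int_\Omega u \, \nabla\!\cdot\tau \, dx = 0 \quad \forall \tau, \qquad \int_\Omega v \, \nabla\!\cdot\sigma \, dx = \int_\Omega f v \, dx \quad \forall v ; $$

in the Petrov-Galerkin variant proposed here, the discrete test-flux space for $\tau$ differs slightly from the trial (shape-flux) space for $\sigma$.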

Random probabilities are a key component of many nonparametric methods in Statistics and Machine Learning. To quantify comparisons between different laws of random probabilities, several works have started to use the elegant Wasserstein-over-Wasserstein distance. In this paper we prove that the infinite-dimensionality of the space of probabilities drastically deteriorates its sample complexity, which is slower than any polynomial rate in the sample size. We therefore propose a new distance that preserves many desirable properties of the former while achieving a parametric rate of convergence. In particular, our distance 1) metrizes weak convergence; 2) can be estimated numerically through samples with low complexity; 3) can be bounded analytically from above and below. The main ingredients are integral probability metrics, which lead to the name hierarchical IPM.
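
For reference, the integral probability metric associated with a function class $\mathcal{F}$ is

$$ d_{\mathcal{F}}(\mu, \nu) \;=\; \sup_{f \in \mathcal{F}} \Big| \int f \, d\mu - \int f \, d\nu \Big| ; $$

the hierarchical IPM applies this construction to laws of random probabilities, with the specific choice of $\mathcal{F}$ described in the paper.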
