
In this paper, we present, evaluate, and analyse the performance of parallel synchronous Jacobi algorithms under different partitioning procedures, including band-row splitting, band-row sparsity-pattern splitting, and substructuring splitting, when solving large sparse linear systems. Numerical experiments performed on a set of academic 3D Laplace equations and on real gravity matrices arising from the Chicxulub crater are presented, and show the impact of the splitting strategy on parallel synchronous iterations. The numerical results clearly show the advantage of substructuring methods over band-row splitting strategies.
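To make the baseline concrete, here is a minimal sketch of one synchronous Jacobi iteration under band-row splitting, with the per-processor row bands simulated sequentially; the contiguous partition into `nprocs` bands and the stopping test are illustrative assumptions, not the exact scheme evaluated in the paper.

```python
# Minimal sketch: synchronous Jacobi with band-row splitting (simulated).
import numpy as np

def jacobi_band_row(A, b, nprocs=4, tol=1e-8, max_iter=1000):
    n = A.shape[0]
    D = np.diag(A)                                  # Jacobi uses the diagonal
    x = np.zeros(n)
    bands = np.array_split(np.arange(n), nprocs)    # contiguous band-row split
    for _ in range(max_iter):
        x_new = np.empty(n)
        for rows in bands:                          # each band = one processor
            # Each worker updates only its own rows but reads the full
            # previous iterate: the synchronous exchange point.
            x_new[rows] = (b[rows] - A[rows] @ x + D[rows] * x[rows]) / D[rows]
        if np.linalg.norm(x_new - x) <= tol * np.linalg.norm(b):
            return x_new
        x = x_new
    return x
```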

Related content

We develop a class of mixed-integer formulations for disjunctive constraints intermediate to the big-M and convex hull formulations in terms of relaxation strength. The main idea is to capture the best of both the big-M and convex hull formulations: a computationally light formulation with a tight relaxation. The "$P$-split" formulations are based on a lifted transformation that splits convex additively separable constraints into $P$ partitions and forms the convex hull of the linearized and partitioned disjunction. We analyze the continuous relaxation of the $P$-split formulations and show that, under certain assumptions, the formulations form a hierarchy starting from a big-M equivalent and converging to the convex hull. The goal of the $P$-split formulations is to form a strong approximation of the convex hull through a computationally simpler formulation. We computationally compare the $P$-split formulations against big-M and convex hull formulations on 320 test instances. The test problems include $K$-means clustering, $P$-ball problems, and optimization over trained ReLU neural networks. The computational results show the promising potential of the $P$-split formulations. For many of the test problems, $P$-split formulations are solved with a similar number of explored nodes as the convex hull formulation, while reducing the solution time by an order of magnitude and outperforming big-M both in time and number of explored nodes.
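For readers unfamiliar with the two endpoints of this hierarchy, the textbook formulations of a bounded linear disjunction $(A_1 x \le b_1) \vee (A_2 x \le b_2)$ are as follows (standard material, not the paper's notation):

```latex
% Big-M: a single copy of x; light, but the relaxation is weak unless M is tight.
A_i x \le b_i + M (1 - y_i), \qquad y_1 + y_2 = 1, \quad y \in \{0,1\}^2.
% Convex hull (Balas): disaggregated copies of x; tight, but larger.
A_i x^i \le b_i y_i, \qquad x = x^1 + x^2, \qquad y_1 + y_2 = 1, \quad y \in \{0,1\}^2.
```

The $P$-split formulations sit between these two: each additively separable constraint is split into $P$ partitions, and the hull is taken only over the resulting linearized disjunction.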

In this paper, we study systems of autonomous algebraic ODEs in several differential indeterminates. We develop a notion of algebraic dimension for such systems by considering them as algebraic systems. We then apply differential elimination and analyze the behavior of the dimension in the resulting Thomas decomposition. For systems of algebraic dimension one, we show that all formal Puiseux series solutions can be approximated up to an arbitrary order by convergent solutions, and that the existence of Puiseux series and algebraic solutions can be decided algorithmically. Moreover, we present a symbolic algorithm to compute all algebraic solutions; the output can be represented either by triangular systems or by their minimal polynomials.
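As a toy illustration of the objects involved (our example, not taken from the paper), consider the autonomous algebraic ODE

```latex
y \,(y')^2 = 1, \qquad
y(x) = \Bigl(\tfrac{3}{2}(x + c)\Bigr)^{2/3}.
```

Around $x = -c$ this solution is a formal Puiseux series in $(x+c)^{1/3}$, and it is an algebraic function with minimal polynomial $Y^3 - \tfrac{9}{4}(x+c)^2$, the kind of representation the algorithm's output refers to.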

Many papers in the field of integer linear programming (ILP, for short) are devoted to problems of the type $\max\{c^\top x \colon A x = b,\, x \in \mathbb{Z}^n_{\geq 0}\}$, where all the entries of $A,b,c$ are integer, parameterized by the number of rows of $A$ and $\|A\|_{\max}$. This class of problems is known under the name of ILP problems in the standard form, with the word "bounded" added if $x \leq u$ for some integer vector $u$. Recently, many new sparsity, proximity, and complexity results were obtained for bounded and unbounded ILP problems in the standard form. In this paper, we consider ILP problems in the canonical form $$\max\{c^\top x \colon b_l \leq A x \leq b_r,\, x \in \mathbb{Z}^n\},$$ where $b_l$ and $b_r$ are integer vectors. We assume that the integer matrix $A$ has rank $n$, $(n + m)$ rows, and $n$ columns, and we parameterize the problem by $m$ and $\Delta(A)$, where $\Delta(A)$ is the maximum absolute value of the $n \times n$ sub-determinants of $A$. We show that any ILP problem in the standard form can be polynomially reduced to some ILP problem in the canonical form, preserving $m$ and $\Delta(A)$, but the reverse reduction is not always possible. More precisely, we define the class of generalized ILP problems in the standard form, which includes an additional group constraint, and prove its equivalence to ILP problems in the canonical form. We generalize known sparsity, proximity, and complexity bounds for ILP problems in the canonical form. Additionally, we sometimes strengthen previously known results for ILP problems in the canonical form, and we sometimes give shorter proofs. Finally, we consider the special cases of $m \in \{0,1\}$. In this way, we give specialised sparsity, proximity, and complexity bounds for problems on simplices, Knapsack problems, and Subset-Sum problems.
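As a small worked instance of this parameterization (our illustration), take $n = 2$ and $m = 1$:

```latex
A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{pmatrix}, \qquad
\Delta(A) = \max\left\{
\left|\det\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right|,
\left|\det\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}\right|,
\left|\det\begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}\right|
\right\} = 1,
```

so the canonical-form problem $\max\{c^\top x \colon b_l \leq A x \leq b_r,\, x \in \mathbb{Z}^2\}$ over this matrix belongs to the class with $m = 1$ and $\Delta(A) = 1$.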

We present a novel hybrid strategy based on machine learning to improve curvature estimation in the level-set method. The proposed inference system couples enhanced neural networks with standard numerical schemes to compute curvature more accurately. The core of our hybrid framework is a switching mechanism that relies on well-established numerical techniques to gauge curvature. If the curvature magnitude is larger than a resolution-dependent threshold, the system uses a neural network to yield a better approximation. Our networks are multilayer perceptrons fitted to synthetic data sets composed of sinusoidal- and circular-interface samples in various configurations. To reduce data set size and training complexity, we leverage the problem's characteristic symmetry and build our models on just half of the curvature spectrum. These savings lead to a powerful inference system able to outperform either of its numerical or neural components alone. Experiments with stationary, smooth interfaces show that our hybrid solver is notably superior to conventional numerical methods on coarse grids and along steep interface regions. Compared to prior research, we have observed outstanding gains in precision after training the regression model with data pairs from more than a single interface type and transforming the data with specialized input preprocessing. In particular, our findings confirm that machine learning is a promising avenue for reducing or removing mass loss in the level-set method.
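A minimal 2D sketch of the switching mechanism as we read it: the numerical estimate is kept unless the dimensionless curvature $h\kappa$ exceeds a threshold, in which case a trained regression model takes over. The feature layout, the threshold test, and the scikit-learn-style `mlp.predict` interface are all illustrative assumptions.

```python
# Sketch of the hybrid numerical/neural curvature estimator (assumptions noted).
import numpy as np

def numerical_curvature(phi, i, j, h):
    """Standard second-order finite-difference curvature of the level-set
    field phi at node (i, j) with grid spacing h."""
    px  = (phi[i+1, j] - phi[i-1, j]) / (2*h)
    py  = (phi[i, j+1] - phi[i, j-1]) / (2*h)
    pxx = (phi[i+1, j] - 2*phi[i, j] + phi[i-1, j]) / h**2
    pyy = (phi[i, j+1] - 2*phi[i, j] + phi[i, j-1]) / h**2
    pxy = (phi[i+1, j+1] - phi[i-1, j+1]
           - phi[i+1, j-1] + phi[i-1, j-1]) / (4*h**2)
    denom = (px**2 + py**2)**1.5 + 1e-30
    return (pxx*py**2 - 2*px*py*pxy + pyy*px**2) / denom

def hybrid_curvature(phi, i, j, h, mlp, threshold=0.1):
    kappa = numerical_curvature(phi, i, j, h)
    if abs(kappa) * h > threshold:          # resolution-dependent switch
        # Hypothetical features: the 3x3 level-set stencil around the node.
        features = phi[i-1:i+2, j-1:j+2].ravel() / h
        kappa = float(mlp.predict(features[None, :])[0])
    return kappa
```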

An evolving surface finite element discretisation is analysed for the evolution of a closed two-dimensional surface governed by a system coupling a generalised forced mean curvature flow and a reaction--diffusion process on the surface, inspired by a gradient flow of a coupled energy. Two algorithms are proposed, both based on a system coupling the diffusion equation to evolution equations for geometric quantities in the velocity law for the surface. One of the numerical methods is proved to be convergent in the $H^1$ norm with optimal order for finite elements of degree at least two. We present numerical experiments illustrating the convergence behaviour and demonstrating the qualitative properties of the flow: preservation of mean convexity, loss of convexity, weak maximum principles, and the occurrence of self-intersections.
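For orientation, spatially discrete evolving surface FEM systems of this kind typically take the following matrix--vector form (the standard ESFEM template, not the specific coupled scheme analysed here): with time-dependent mass and stiffness matrices $M(t)$ and $A(t)$ assembled on the moving mesh,

```latex
\frac{\mathrm{d}}{\mathrm{d}t}\bigl(M(t)\,u(t)\bigr) + A(t)\,u(t) = M(t)\,f(t),
```

coupled here to evolution equations for the geometric quantities entering the velocity law, which in turn moves the nodes that define $M(t)$ and $A(t)$.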

Consider using the right-preconditioned GMRES (AB-GMRES) for obtaining the minimum-norm solution of inconsistent underdetermined systems of linear equations. Morikuni (Ph.D. thesis, 2013) showed that for some inconsistent and ill-conditioned problems, the iterates may diverge. This is mainly because the Hessenberg matrix in the GMRES method becomes very ill-conditioned, so that the backward substitution of the resulting triangular system becomes numerically unstable. We propose a stabilized GMRES based on solving the normal equations corresponding to the above triangular system using the standard Cholesky decomposition. This has the effect of shifting upwards the tiny singular values of the Hessenberg matrix that would otherwise lead to an inaccurate solution. We analyze why the method works. Numerical experiments show that the proposed method is robust and efficient, not only for applying AB-GMRES to underdetermined systems, but also for applying GMRES to severely ill-conditioned range-symmetric systems of linear equations.
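A minimal sketch of the stabilisation step as we read it, with simplified notation: after $k$ Arnoldi steps, the small least-squares problem $\min_y \|\beta e_1 - H y\|_2$ with the $(k{+}1)\times k$ Hessenberg matrix $H$ is solved through its normal equations and a Cholesky factorisation, instead of back substitution on an ill-conditioned triangular factor.

```python
# Sketch: solve the GMRES least-squares problem via normal equations + Cholesky.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def stabilized_lsq(H, beta):
    """H: (k+1) x k Hessenberg matrix from Arnoldi; beta: initial residual norm.
    Returns y minimizing ||beta*e1 - H y||_2."""
    rhs = np.zeros(H.shape[0])
    rhs[0] = beta
    G = H.T @ H                    # k x k Gram matrix of the Hessenberg matrix
    # Rounding in forming G acts like a small upward shift of the tiny
    # singular values of H -- the stabilising effect described above.
    c, low = cho_factor(G)
    return cho_solve((c, low), H.T @ rhs)
```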

Proving linear inequalities and identities of Shannon's information measures, possibly with linear constraints on the information measures, is an important problem in information theory. For this purpose, ITIP and its variants have been developed and implemented, all of which are based on solving a linear program (LP). In particular, an identity $f = 0$ is verified by solving two LPs, one for $f \ge 0$ and one for $f \le 0$. In this paper, we develop a set of algorithms that can be implemented by symbolic computation. Based on these algorithms, procedures for verifying linear information inequalities and identities are devised. Compared with LP-based algorithms, our procedures can produce analytical proofs that are both human-verifiable and free of numerical errors. Our procedures are also computationally more efficient. For constrained inequalities, by taking advantage of the algebraic structure of the problem, the size of the LP that needs to be solved can be significantly reduced. For identities, instead of solving two LPs, the identity can be verified directly with very little computation.
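To illustrate the symbolic flavour (a toy of our own, not the paper's algorithms): an unconstrained identity can be checked by expanding every Shannon measure into joint-entropy coordinates and verifying that all coefficients cancel.

```python
# Toy: verify I(X;Y) + H(X|Y) - H(X) = 0 by coefficient cancellation.
from collections import Counter

def H(*vars):
    """Joint entropy H(vars) as a sparse coordinate vector."""
    return Counter({frozenset(vars): 1})

def scale(term, a):
    return Counter({k: a * v for k, v in term.items()})

def combine(*terms):
    total = Counter()
    for t in terms:
        total.update(t)                      # adds counts, negatives included
    return Counter({k: v for k, v in total.items() if v != 0})

# Expand the measures: I(X;Y) = H(X)+H(Y)-H(X,Y);  H(X|Y) = H(X,Y)-H(Y).
I_XY  = combine(H('X'), H('Y'), scale(H('X', 'Y'), -1))
H_XgY = combine(H('X', 'Y'), scale(H('Y'), -1))

residual = combine(I_XY, H_XgY, scale(H('X'), -1))
print(dict(residual))                        # {} -> the identity holds
```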

The maximum-cut problem is one of the fundamental problems in combinatorial optimization. With the advent of quantum computers, both the maximum-cut and the equivalent quadratic unconstrained binary optimization (QUBO) problem have attracted much interest in recent years. This article aims to advance the state of the art in the exact solution of both problems -- by using mathematical programming techniques on digital computers. The main focus lies on sparse problem instances, although dense ones can also be solved. We enhance several algorithmic components such as reduction techniques and cutting-plane separation algorithms, and combine them in an exact branch-and-cut solver. Furthermore, we provide a parallel implementation. The new solver is shown to significantly outperform existing state-of-the-art software for sparse MaxCut and QUBO instances. Furthermore, we improve the best known bounds for several instances from the 7th DIMACS Challenge and the QPLIB, and solve some of them to optimality for the first time.
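The equivalence referred to above is the standard change of variables between binary and spin variables (textbook material, independent of the solver described here):

```latex
x_i = \tfrac{1}{2}(1 + s_i), \quad s_i \in \{-1, +1\}
\quad\Longrightarrow\quad
\max_{x \in \{0,1\}^n} x^\top Q x
 = \tfrac{1}{4} \max_{s \in \{\pm 1\}^n} (\mathbf{1} + s)^\top Q\, (\mathbf{1} + s).
```

Expanding the right-hand side leaves a constant, linear terms in $s$, and quadratic terms $s_i s_j$; absorbing the linear terms into one auxiliary spin yields a MaxCut instance on $n + 1$ vertices.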

With the ever-growing importance of uncertainty and sensitivity analysis of complex model evaluations, and the difficulty of their timely realization, comes a need for more efficient numerical operations. Non-intrusive polynomial chaos methods are highly efficient and accurate for mapping input-output relationships when investigating complex models. There is considerable potential to increase the efficiency of the method through the choice of sampling scheme. We examine state-of-the-art sampling schemes, categorized into space-filling optimal designs such as Latin hypercube sampling, and L1-optimal sampling, and compare their empirical performance against standard random sampling. The analysis was performed in the context of L1 minimization, using the least-angle regression algorithm to fit the gPC regression models. The sampling schemes are thoroughly investigated by evaluating the quality of the constructed surrogate models on distinct test cases representing different problem classes, covering low-, medium-, and high-dimensional problems. Finally, the sampling schemes are tested on an application example estimating the sensitivity of the self-impedance of a probe used to measure the impedance of biological tissues at different frequencies. Due to the random nature of the sampling schemes, we compared them using statistical stability measures and evaluated the success rates for constructing a surrogate model with an accuracy of <0.1%. We observed strong differences in the convergence properties of the methods between the analyzed test functions.
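A minimal 1D sketch of the non-intrusive regression setup (our toy, with an assumed model function and random sampling standing in for the schemes compared in the study; `LassoLarsCV` is a cross-validated L1 variant of least-angle regression):

```python
# Toy non-intrusive gPC: Legendre basis + least-angle regression (L1).
import numpy as np
from numpy.polynomial import legendre
from sklearn.linear_model import LassoLarsCV

rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x) + 0.1 * x**3     # placeholder model

order, n_samples = 10, 40
x = rng.uniform(-1.0, 1.0, n_samples)            # random sampling scheme
Psi = legendre.legvander(x, order)               # design matrix of basis values
surrogate = LassoLarsCV().fit(Psi, f(x))         # sparse gPC coefficients

x_test = np.linspace(-1.0, 1.0, 200)
err = np.max(np.abs(surrogate.predict(legendre.legvander(x_test, order))
                    - f(x_test)))
print(f"max surrogate error: {err:.2e}")
```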

Superpixel segmentation has become an important research problem in image processing. In this paper, we propose an Iterative Spanning Forest (ISF) framework, based on sequences of Image Foresting Transforms, in which one can choose i) a seed sampling strategy, ii) a connectivity function, iii) an adjacency relation, and iv) a seed pixel recomputation procedure to generate improved sets of connected superpixels (supervoxels in 3D) per iteration. The superpixels in ISF structurally correspond to spanning trees rooted at those seeds. We present five ISF methods to illustrate different choices of its components, and compare them with state-of-the-art approaches in terms of effectiveness and efficiency. The experiments involve 2D and 3D datasets with distinct characteristics, as well as a high-level application, namely sky image segmentation. The theoretical properties of ISF are demonstrated in the supplementary material, and the results show that some of its methods are competitive with or superior to the best baselines in both effectiveness and efficiency.
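A compact sketch of one possible ISF configuration, with grid seed sampling, an additive colour-plus-space connectivity function, 4-adjacency, and centroid-based seed recomputation; published ISF variants use other choices (e.g. non-additive path costs), so this illustrates the loop structure rather than any of the five methods above.

```python
# Sketch of the ISF loop: IFT label propagation + seed recomputation.
import heapq
import numpy as np

def isf(img, n_seeds=100, alpha=5.0, n_iters=3):
    h, w = img.shape[:2]
    step = max(1, int(np.sqrt(h * w / n_seeds)))
    seeds = [(i, j) for i in range(step // 2, h, step)
                    for j in range(step // 2, w, step)]
    label = np.full((h, w), -1, dtype=int)
    for _ in range(n_iters):
        cost = np.full((h, w), np.inf)
        label.fill(-1)
        heap = []
        for k, (i, j) in enumerate(seeds):
            cost[i, j], label[i, j] = 0.0, k
            heapq.heappush(heap, (0.0, i, j))
        while heap:                               # image foresting transform
            c, i, j = heapq.heappop(heap)
            if c > cost[i, j]:
                continue                          # stale heap entry
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-adjacency
                a, b = i + di, j + dj
                if 0 <= a < h and 0 <= b < w:
                    # Additive connectivity: colour difference + spatial step.
                    c2 = c + float(np.linalg.norm(img[a, b] - img[i, j])) + alpha
                    if c2 < cost[a, b]:
                        cost[a, b], label[a, b] = c2, label[i, j]
                        heapq.heappush(heap, (c2, a, b))
        # Seed recomputation: move each seed to its superpixel's centroid.
        seeds = []
        for k in range(int(label.max()) + 1):
            ii, jj = np.nonzero(label == k)
            if ii.size:
                seeds.append((int(ii.mean()), int(jj.mean())))
    return label
```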
