
Linear inverse problems arise in diverse engineering fields, especially in signal and image reconstruction. The development of computational methods for sparse linear inverse problems is one of the recent trends in this area. The so-called optimal $k$-thresholding is a newly introduced method for sparse optimization and linear inverse problems. Compared to other sparsity-aware algorithms, the advantage of the optimal $k$-thresholding method is that it performs thresholding and error-metric reduction simultaneously and thus works stably and robustly for medium-sized linear inverse problems. However, its runtime remains high when the problem size is relatively large. The purpose of this paper is to propose an acceleration strategy for this method. Specifically, we propose a heavy-ball-based optimal $k$-thresholding (HBOT) algorithm and its relaxed variants for sparse linear inverse problems. The convergence of these algorithms is shown under the restricted isometry property. In addition, the numerical performance of the heavy-ball-based relaxed optimal $k$-thresholding pursuit (HBROTP) has been studied, and simulations indicate that HBROTP is robust for signal and image reconstruction even in noisy environments.
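The heavy-ball idea can be illustrated on the simpler hard-thresholding operator rather than the authors' optimal $k$-thresholding: a minimal sketch, assuming a Gaussian sensing matrix, noiseless measurements, and hypothetical step-size and momentum parameters. This is not the HBOT/HBROTP algorithm itself, only the momentum mechanism it builds on.

```python
import numpy as np

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def heavy_ball_iht(A, y, k, beta=0.3, iters=500):
    """Iterative hard thresholding with a heavy-ball momentum term."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the spectral norm
    x_prev = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # gradient step plus momentum beta * (x - x_prev), then threshold
        z = x + mu * A.T @ (y - A @ x) + beta * (x - x_prev)
        x_prev, x = x, hard_threshold(z, k)
    return x

rng = np.random.default_rng(0)
m, n, k = 50, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)   # columns roughly unit norm
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                  # noiseless measurements
x_hat = heavy_ball_iht(A, y, k)
```

With these dimensions the restricted isometry property holds with high probability, so the iteration contracts once the correct support is identified.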

Related Content

In a task where many similar inverse problems must be solved, evaluating costly simulations is impractical. Therefore, replacing the model $y$ with a surrogate model $y_s$ that can be evaluated quickly leads to a significant speedup. The approximation quality of the surrogate model depends strongly on the number, position, and accuracy of the sample points. Together with a finite computational budget, this leads to a problem of (computer) experimental design. In contrast to the selection of sample points, the trade-off between accuracy and effort has hardly been studied systematically. We therefore propose an adaptive algorithm to find an optimal design in terms of position and accuracy. Pursuing a sequential design by incrementally spending the computational budget leads to a convex and constrained optimization problem. As a surrogate, we construct a Gaussian process regression model. We measure the global approximation error in terms of its impact on the accuracy of the identified parameter and aim for a uniform absolute tolerance, assuming that $y_s$ is computed by finite element calculations. A priori error estimates and a coarse estimate of computational effort relate the expected improvement of the surrogate model error to computational effort, resulting in the most efficient combination of sample point and evaluation tolerance. We also allow for improving the accuracy of already existing sample points by continuing previously truncated finite element solution procedures.
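The sequential-design step can be sketched with a toy Gaussian process: sample where the posterior uncertainty is largest. This simplified version picks only the position of the next point with a fixed RBF kernel and hypothetical length scale; the paper's algorithm additionally optimizes the evaluation tolerance against computational effort.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def posterior_std(X, grid, noise=1e-6):
    """GP posterior standard deviation on `grid`, given samples at X."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(grid, X)
    # diag of Ks K^{-1} Ks^T computed row-wise
    var = 1.0 - np.sum(Ks @ np.linalg.inv(K) * Ks, axis=1)
    return np.sqrt(np.maximum(var, 0.0))

grid = np.linspace(0.0, 1.0, 201)
X = np.array([0.5])                    # start with a single sample point
std0 = posterior_std(X, grid).max()
for _ in range(5):                     # greedily sample where uncertainty peaks
    x_next = grid[np.argmax(posterior_std(X, grid))]
    X = np.append(X, x_next)
std5 = posterior_std(X, grid).max()
```

Each added point removes the uncertainty at its location, so the worst-case posterior standard deviation shrinks monotonically as the design grows.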

Variational regularization is commonly used to solve linear inverse problems, and involves augmenting a data-fidelity term with a regularizer. The regularizer is used to promote a priori information, and is weighted by a regularization parameter. Selection of an appropriate regularization parameter is critical, with various choices leading to very different reconstructions. Existing strategies such as the discrepancy principle and the L-curve can be used to determine a suitable parameter value, but in recent years a supervised machine learning approach called bilevel learning has been employed. Bilevel learning is a powerful framework to determine optimal parameters, and involves solving a nested optimization problem. While previous strategies enjoy various theoretical results, the well-posedness of bilevel learning in this setting is still a developing field. One necessary property is positivity of the determined regularization parameter. In this work, we provide a new condition that better characterizes positivity of optimal regularization parameters than the existing theory. Numerical results verify and explore this new condition for both small and large dimensional problems.
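As background, the classical discrepancy principle mentioned above can be sketched for Tikhonov regularization: bisect over the regularization parameter until the residual matches the (assumed known) noise level. The safety factor `tau` and the problem sizes below are hypothetical illustration choices, not values from the paper.

```python
import numpy as np

def tikhonov(A, y, lam):
    """Tikhonov solution of argmin ||Ax - y||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def discrepancy_lambda(A, y, delta, tau=1.2, iters=100):
    """Bisect in log-lambda for ||A x_lam - y|| = tau * delta.

    The residual is monotone increasing in lambda, so bisection applies."""
    lo, hi = 1e-10, 1e10
    for _ in range(iters):
        lam = np.sqrt(lo * hi)
        res = np.linalg.norm(A @ tikhonov(A, y, lam) - y)
        if res < tau * delta:
            lo = lam          # residual too small -> regularize more
        else:
            hi = lam
    return np.sqrt(lo * hi)

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 30))
x_true = rng.standard_normal(30)
noise = 0.05 * rng.standard_normal(60)
y = A @ x_true + noise
delta = np.linalg.norm(noise)          # noise level assumed known
lam = discrepancy_lambda(A, y, delta)
res = np.linalg.norm(A @ tikhonov(A, y, lam) - y)
```

Bilevel learning replaces this hand-tuned criterion with a parameter learned from training data, which is where the positivity question studied in the paper arises.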

We solve acoustic scattering problems by means of the isogeometric boundary integral equation method. In order to avoid spurious modes, we apply the combined field integral equations for either sound-hard scatterers or sound-soft scatterers. These integral equations are discretized by Galerkin's method, which especially enables the mathematically correct regularization of the hypersingular integral operator. In order to circumvent densely populated system matrices, we employ the isogeometric fast multipole method. The result is an algorithm that scales essentially linearly in the number of boundary elements. Numerical experiments are performed which show the feasibility and the performance of the approach.
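The reason fast multipole and related hierarchical methods avoid dense system matrices is that interaction blocks between well-separated point clusters are numerically low-rank. A minimal sketch of that effect, using a toy 1-D kernel $1/|x-y|$ rather than the Helmholtz kernel of the paper:

```python
import numpy as np

# Two well-separated 1-D point clusters and the kernel 1/|x - y|.
x = np.linspace(0.0, 1.0, 100)
y = np.linspace(3.0, 4.0, 100)
K = 1.0 / np.abs(x[:, None] - y[None, :])   # dense 100 x 100 interaction block

s = np.linalg.svd(K, compute_uv=False)      # singular values decay geometrically
rank_1e6 = int(np.sum(s > 1e-6 * s[0]))     # numerical rank at tolerance 1e-6
```

A 100 x 100 far-field block compresses to a rank of around a dozen, which is the structural fact the fast multipole method exploits at every level of its cluster tree.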

We establish globally optimal solutions to a class of fractional optimization problems on a class of constraint sets, whose key characteristics are as follows: 1) The numerator and the denominator of the objective function are both convex, semi-algebraic, Lipschitz continuous and differentiable with Lipschitz continuous gradients on the constraint set. 2) The constraint set is closed, convex and semi-algebraic. Compared with Dinkelbach's approach, our novelty lies in the following aspects: 1) Dinkelbach's approach has to solve a concave maximization problem in each iteration, for which a solution is nontrivial to obtain, while ours only needs one proximal gradient operation per iteration. 2) Dinkelbach's approach requires at least one point at which the numerator is nonnegative to initialize the algorithm, but ours does not, making it applicable to a much wider class of situations. 3) Dinkelbach's approach requires a closed and bounded constraint set, while ours needs only closedness, not boundedness. Therefore, our approach is viable for many more practical models, like optimizing the Sharpe ratio (SR) or the Information ratio in mathematical finance. Numerical experiments show that our approach achieves the ground-truth solutions in two simple examples. For real-world financial data, it outperforms several existing approaches for SR maximization.
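For context, the Dinkelbach baseline the paper compares against can be sketched on a toy scalar problem, maximizing $(t+1)/(t^2+1)$ over $[0,2]$, whose exact maximum is $(1+\sqrt{2})/2$ at $t = \sqrt{2}-1$. The inner concave maximization, which the paper's proximal-gradient method avoids, is handled here by brute-force grid search:

```python
import math

def dinkelbach(f, g, candidates, tol=1e-12, max_iter=50):
    """Dinkelbach's method: maximize f(x)/g(x) over a finite candidate set.

    Each iteration maximizes the parametric objective f(x) - lam * g(x),
    then updates lam to the new ratio; lam increases monotonically and
    the method stops when the parametric optimum reaches zero."""
    lam = f(candidates[0]) / g(candidates[0])
    for _ in range(max_iter):
        x = max(candidates, key=lambda t: f(t) - lam * g(t))
        if f(x) - lam * g(x) < tol:
            break
        lam = f(x) / g(x)
    return x, lam

f = lambda t: t + 1.0                        # concave (linear) numerator
g = lambda t: t * t + 1.0                    # convex, positive denominator
grid = [i / 1000.0 for i in range(2001)]     # feasible set [0, 2], step 0.001
x_star, lam_star = dinkelbach(f, g, grid)
```

Note that every iteration calls the inner maximizer over the whole candidate set, which is exactly the per-iteration cost the paper's one-proximal-step scheme removes.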

The inverse kinematics (IK) problem for many common robot manipulators may be decomposed into canonical subproblems which are solved by finding the angles on circles where they intersect with other geometric objects. We present new algebraic solutions and geometric interpretations for six subproblems using a linear algebra approach, and we demonstrate significant computational performance improvements over existing IK methods. We show that IK for any 6-dof all revolute (6R) robot with three intersecting or parallel joint axes may be solved in closed form using subproblem decomposition. For any other 6R robot, subproblem decomposition reduces finding all IK solutions to a search over one or two joint angles. The first three subproblems, called the Paden-Kahan subproblems, are Subproblem 1: Circle and Point, Subproblem 2: Two Circles, and Subproblem 3: Circle and Sphere. The other three subproblems, which have not been extensively covered in the literature, are Subproblem 4: Circle and Plane, Subproblem 5: Three Circles, and Subproblem 6: Four Circles. Our approach also finds the least-squares solutions for Subproblems 1-4 when the exact solution does not exist.
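Subproblem 1 (circle and point) has a simple closed form that matches the abstract's description: project both points onto the plane normal to the rotation axis and measure the signed angle between the projections. A minimal pure-Python sketch, following the standard Paden-Kahan formulation:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def subproblem1(k, p, q):
    """Subproblem 1 (circle and point): theta with rot(k, theta) p = q.

    k is a unit axis vector. Remove the axial components of p and q, then
    take the signed angle about k between the remaining projections."""
    u = [p[i] - dot(k, p) * k[i] for i in range(3)]  # p minus axial component
    v = [q[i] - dot(k, q) * k[i] for i in range(3)]  # q minus axial component
    return math.atan2(dot(k, cross(u, v)), dot(u, v))

# Recover a known rotation of 0.7 rad about the z-axis.
k = [0.0, 0.0, 1.0]
p = [1.0, 0.0, 0.0]
q = [math.cos(0.7), math.sin(0.7), 0.0]
theta = subproblem1(k, p, q)
```

When no exact solution exists, the same `atan2` expression returns the angle minimizing the distance between the rotated point and the target, consistent with the least-squares property noted in the abstract.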

The Multiple Drone-Delivery Scheduling Problem (MDSP) is a scheduling problem that optimizes the maximum reward earned by a set of $m$ drones executing a sequence of deliveries on a truck delivery route. The current best-known approximation algorithm for the problem is a $\frac{1}{4}$-approximation algorithm developed by Jana and Mandal (2022). In this paper, we propose exact and approximation algorithms for the general MDSP, as well as a unit-cost variant. We first propose a greedy algorithm that we show to be a $\frac{1}{3}$-approximation algorithm for the general MDSP, provided the number of conflicting intervals is less than the number of drones. We then introduce a unit-cost variant of MDSP and devise an exact dynamic programming algorithm that runs in polynomial time when the number of drones $m$ is constant.
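The flavor of the greedy step can be sketched as interval assignment by decreasing reward. This is a generic illustration under simplifying assumptions (half-open intervals, no battery budgets), not the paper's $\frac{1}{3}$-approximation algorithm:

```python
def greedy_schedule(deliveries, m):
    """Greedy sketch: assign deliveries (start, end, reward) to m drones.

    Scan deliveries in decreasing reward; give each to the first drone
    whose already-assigned intervals do not overlap it (half-open)."""
    drones = [[] for _ in range(m)]
    total = 0
    for s, e, r in sorted(deliveries, key=lambda d: -d[2]):
        for assigned in drones:
            if all(e <= s2 or e2 <= s for s2, e2, _ in assigned):
                assigned.append((s, e, r))
                total += r
                break                       # delivery placed; try the next one
    return total, drones

# Three pairwise-conflicting-in-sequence deliveries, two drones.
deliveries = [(0, 2, 10), (1, 3, 9), (2, 4, 8)]
total, plan = greedy_schedule(deliveries, 2)
```

Here the first and third deliveries share a drone because they only touch at their endpoints, and all reward 27 is collected.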

In this study, we examine numerical approximations for second-order linear and nonlinear differential equations with diverse boundary conditions, followed by residual corrections of the first approximations. We first obtain numerical results using the Galerkin weighted residual approach with Bernstein polynomials. Residuals arise because the first approximation is computed numerically. To minimize these residuals, we use a compact finite difference scheme of fourth-order convergence to solve the error differential equations subject to the error boundary conditions. We also formulate the fourth-order compact finite difference method for nonlinear BVPs. The improved approximations are produced by adding the error values, derived from the approximations of the error differential equation, to the weighted residual values. Numerical results are compared to the exact solutions and to solutions available in the published literature to validate the proposed scheme, and high accuracy is achieved in all cases.
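The fourth-order compact stencil for $u'' = f$ can be sketched on a plain Dirichlet model problem (the paper applies it to the error equations instead). The interior relation is $(u_{i-1} - 2u_i + u_{i+1})/h^2 = (f_{i-1} + 10 f_i + f_{i+1})/12$; halving $h$ should reduce the error by roughly $2^4 = 16$:

```python
import numpy as np

def compact_solve(f, N):
    """Fourth-order compact scheme for u'' = f on [0,1], u(0) = u(1) = 0."""
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    fx = f(x)
    n = N - 1                                   # interior unknowns
    A = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))         # tridiagonal Laplacian
    rhs = h * h * (fx[:-2] + 10.0 * fx[1:-1] + fx[2:]) / 12.0
    u = np.zeros(N + 1)
    u[1:-1] = np.linalg.solve(A, rhs)           # zero BCs need no correction
    return x, u

f = lambda x: -np.pi ** 2 * np.sin(np.pi * x)   # exact solution: sin(pi x)
x10, u10 = compact_solve(f, 10)
x20, u20 = compact_solve(f, 20)
e10 = np.max(np.abs(u10 - np.sin(np.pi * x10)))
e20 = np.max(np.abs(u20 - np.sin(np.pi * x20)))
```

Even on ten intervals the maximum error is already far below what a second-order stencil would give, and the observed refinement ratio is close to 16.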

We present an analysis of large-scale load balancing systems, where the processing time distribution of tasks depends on both the task and server types. Our study focuses on the asymptotic regime, where the number of servers and task types tend to infinity in proportion. In heterogeneous environments, commonly used load balancing policies such as Join Fastest Idle Queue and Join Fastest Shortest Queue exhibit poor performance and even shrink the stability region. Interestingly, prior to this work, finding a scalable policy with a provable performance guarantee in this setup remained an open question. To address this gap, we propose and analyze two asymptotically delay-optimal dynamic load balancing policies. The first policy efficiently reserves the processing capacity of each server for ``good" tasks and routes tasks using the vanilla Join Idle Queue policy. The second policy, called the speed-priority policy, significantly increases the likelihood of assigning tasks to the respective ``good" servers capable of processing them at high speeds. By leveraging a framework inspired by the graphon literature and employing the mean-field method and stochastic coupling arguments, we demonstrate that both policies achieve asymptotic zero queuing. Specifically, as the system scales, the probability of a typical task being assigned to an idle server approaches 1.
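The benefit of idle-aware routing can be illustrated with a toy finite-system simulation, homogeneous servers, Poisson arrivals, and exponential services, rather than the heterogeneous asymptotic regime analyzed in the paper. The sketch compares Join Idle Queue-style routing against uniformly random routing at moderate load; all parameters are hypothetical:

```python
import random

def simulate(policy, n=50, load=0.7, tasks=5000, seed=42):
    """FIFO servers; returns the mean waiting time over all arriving tasks."""
    rng = random.Random(seed)
    free = [0.0] * n              # time at which each server next becomes idle
    t, total_wait = 0.0, 0.0
    for _ in range(tasks):
        t += rng.expovariate(load * n)        # Poisson arrivals, rate load * n
        if policy == "jiq":
            idle = [s for s in range(n) if free[s] <= t]
            s = rng.choice(idle) if idle else rng.randrange(n)
        else:
            s = rng.randrange(n)              # uniformly random routing
        start = max(t, free[s])
        total_wait += start - t
        free[s] = start + rng.expovariate(1.0)  # exp(1) service times
    return total_wait / tasks

w_random = simulate("random")
w_jiq = simulate("jiq")
```

Random routing behaves like independent M/M/1 queues with nontrivial waiting, while idle-aware routing drives the waiting time toward zero, the finite-system analogue of the zero-queueing property established in the paper.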

We present a generalized FDTD scheme to simulate moving electromagnetic structures with arbitrary space-time configurations. This scheme is a local adaptation and 2+1-dimensional extension of the uniform 1+1-dimensional scheme recently reported in [1]. The local adaptation, enabled by the inherently matched nature of the generalized Yee cell to the conventional Yee cell, extends the range of applicability of the scheme in [1] to moving structures that involve multiple and arbitrary velocity profiles, while remaining fully compatible with conventional absorbing boundary conditions and standard treatments of medium dispersion. We show that a direct application of the conventional FDTD scheme predicts qualitatively correct spectral transitions but quantitatively erroneous scattering amplitudes. From this observation, we infer generalized, hybrid physical and auxiliary (non-physical) fields that automatically satisfy the moving boundary conditions in the laboratory frame, and we accordingly establish local update equations based on the related Maxwell's equations and constitutive relations. We finally validate and illustrate the proposed method with three canonical examples - a space-time interface, a space-time wedge and a space-time accelerated interface - whose combination represents arbitrary space-time configurations. The proposed scheme fills an important gap in the open literature on computational electromagnetics and offers an unprecedented, direct solution for moving structures in commercial software platforms.
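For reference, the conventional stationary-medium Yee/FDTD update that the generalized scheme extends can be sketched in one dimension with normalized units and a hypothetical Gaussian hard source; the moving-boundary machinery of the paper is deliberately absent here:

```python
import numpy as np

# Conventional 1-D Yee/FDTD leapfrog update in vacuum, normalized units.
cells, steps, c = 200, 300, 0.5     # Courant number 0.5 (stable for c <= 1)
Ez = np.zeros(cells)                # electric field at integer grid points
Hy = np.zeros(cells)                # magnetic field at staggered half points
for n in range(steps):
    Hy[:-1] += c * (Ez[1:] - Ez[:-1])          # H update from curl of E
    Ez[1:] += c * (Hy[1:] - Hy[:-1])           # E update from curl of H
    Ez[10] = np.exp(-((n - 30.0) / 8.0) ** 2)  # hard Gaussian source at cell 10
```

After 300 steps the Gaussian pulse has propagated roughly `0.5 * (steps - 30)` cells to the right of the source while the field amplitudes remain bounded, which is the baseline behavior the generalized scheme must reproduce when the structure is at rest.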

When and why can a neural network be successfully trained? This article provides an overview of optimization algorithms and theory for training neural networks. First, we discuss the issue of gradient explosion/vanishing and the more general issue of undesirable spectrum, and then discuss practical solutions including careful initialization and normalization methods. Second, we review generic optimization methods used in training neural networks, such as SGD, adaptive gradient methods and distributed methods, and theoretical results for these algorithms. Third, we review existing research on the global issues of neural network training, including results on bad local minima, mode connectivity, the lottery ticket hypothesis, and infinite-width analysis.
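The careful-initialization point can be sketched numerically: with He initialization (weight variance $2/\text{fan-in}$) activations in a deep ReLU network keep a stable scale, whereas a smaller variance makes them shrink geometrically with depth. Network width, depth, and seed below are hypothetical illustration choices:

```python
import numpy as np

def forward_depth(scale, depth=20, width=256, seed=0):
    """Propagate a random input through `depth` ReLU layers whose weights
    have variance scale / fan_in; return the final hidden-layer std."""
    rng = np.random.default_rng(seed)
    h = rng.standard_normal(width)
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * np.sqrt(scale / width)
        h = np.maximum(W @ h, 0.0)      # ReLU activation
    return h.std()

std_he = forward_depth(2.0)     # He initialization: Var(W) = 2 / fan_in
std_small = forward_depth(1.0)  # too-small variance: activations vanish
```

Because ReLU zeroes half the pre-activations on average, each layer with variance $1/\text{fan-in}$ halves the signal variance, so after 20 layers the mis-scaled network's activations are roughly $2^{10}$ times smaller, a concrete instance of the vanishing problem the survey discusses.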
