
We consider multilevel decompositions of piecewise constants on simplicial meshes that are stable in $H^{-s}$ for $s\in (0,1)$. Proofs are given in the case of uniformly and locally refined meshes. Our findings can be applied to define local multilevel diagonal preconditioners that lead to bounded condition numbers (independent of the mesh-sizes and levels) and have optimal computational complexity. Furthermore, we discuss multilevel norms based on local (quasi-)projection operators that allow the efficient evaluation of negative order Sobolev norms. Numerical examples and a discussion on several extensions and applications conclude this article.
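As a concrete illustration of the preconditioning idea, the following Python/NumPy sketch builds a generic additive multilevel diagonal (BPX-type) preconditioner from dyadically nested piecewise-constant spaces. The duplication-based prolongation, the Galerkin coarsening, and the toy tridiagonal matrix are illustrative assumptions; the level-independent condition number bound in the paper rests on its specific $H^{-s}$-stable decompositions, which this sketch does not reproduce.

```python
import numpy as np

def refine(n_coarse):
    """Prolongation for piecewise constants: each coarse cell splits into two fine cells."""
    P = np.zeros((2 * n_coarse, n_coarse))
    for j in range(n_coarse):
        P[2 * j, j] = P[2 * j + 1, j] = 1.0
    return P

def multilevel_diagonal(A, levels):
    """Return r -> sum_l P_l diag(A_l)^{-1} P_l^T r, an additive multilevel
    diagonal preconditioner with Galerkin coarse operators A_l = P_l^T A P_l."""
    Ps, diags = [], []
    P, Al = np.eye(A.shape[0]), A
    for _ in range(levels):
        Ps.append(P)
        diags.append(np.diag(Al).copy())
        if Al.shape[0] % 2:
            break
        Pl = refine(Al.shape[0] // 2)   # coarse -> fine on the current level
        Al = Pl.T @ Al @ Pl             # Galerkin coarsening
        P = P @ Pl                      # prolongation from new coarse level to finest
    return lambda r: sum(P @ ((P.T @ r) / d) for P, d in zip(Ps, diags))

# Usage on a toy SPD matrix (1D stiffness-like matrix on 64 cells):
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
C = multilevel_diagonal(A, levels=6)
z = C(np.random.default_rng(0).normal(size=n))   # apply the preconditioner
```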

Related content

iOS 8 provides the following features for functional interaction between apps and between apps and the system:
  • Today (iOS and OS X): widgets for the Today view of Notification Center
  • Share (iOS and OS X): post content to web services or share content with others
  • Actions (iOS and OS X): app extensions to view or manipulate inside another app
  • Photo Editing (iOS): edit a photo or video in Apple's Photos app with extensions from a third-party app
  • Finder Sync (OS X): remote file storage in the Finder with support for Finder content annotation
  • Storage Provider (iOS): an interface between files inside an app and other apps on a user's device
  • Custom Keyboard (iOS): system-wide alternative keyboards


This paper presents a linear-time prioritized local algorithm that computes large independent sets on random $d$-regular graphs with small, fixed degree $d$. We experimentally study the independence ratio obtained by the algorithm for $d \in [3,100]$. For all $d \in [5,100]$, the ratios we obtain exceed the lower bounds previously calculated by exact methods, thus providing improved estimates of these lower bounds.
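For orientation, here is a plain randomized greedy baseline in Python (using NetworkX to generate the random regular graph). It is not the paper's prioritized local algorithm, but it shows the quantity being measured: the independence ratio $|I|/n$.

```python
import random
import networkx as nx

def greedy_independent_set(G, rng=random.Random(0)):
    """Randomized greedy baseline: scan vertices in random order and add a
    vertex whenever none of its neighbors has been taken."""
    order = list(G.nodes())
    rng.shuffle(order)
    in_set, blocked = set(), set()
    for v in order:
        if v not in blocked:
            in_set.add(v)
            blocked.add(v)
            blocked.update(G.neighbors(v))
    return in_set

d, n = 3, 10_000
G = nx.random_regular_graph(d, n, seed=1)
I = greedy_independent_set(G)
# Plain greedy attains roughly 0.375 on random 3-regular graphs; prioritized
# local algorithms aim to do better.
print(f"independence ratio ~ {len(I) / n:.3f}")
```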

The Bregman proximal point algorithm (BPPA), one of the centerpieces of the optimization toolbox, has been witnessing emerging applications. With a simple and easy-to-implement update rule, the algorithm enjoys compelling intuition for its empirical success, yet rigorous justification remains largely unexplored. We study the computational properties of BPPA through classification tasks with separable data and demonstrate provable algorithmic regularization effects associated with BPPA. We show that BPPA attains a non-trivial margin, which closely depends on the condition number of the distance-generating function inducing the Bregman divergence. We further demonstrate that the dependence on the condition number is tight for a class of problems, thus showing the importance of the choice of divergence for the quality of the obtained solutions. In addition, we extend our findings to mirror descent (MD), for which we establish similar connections between the margin and the Bregman divergence. Through a concrete example, we show that BPPA/MD converges in direction to the maximum-margin solution with respect to the Mahalanobis distance. Our theoretical findings are among the first to demonstrate the benign learning properties of BPPA/MD, and they also corroborate the importance of a careful choice of divergence in algorithmic design.
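A minimal sketch of the mirror descent side of the story, assuming a quadratic distance-generating function $\phi(w) = \frac12 w^\top A w$ (so the Bregman geometry is a Mahalanobis one) and an illustrative choice of $A$; the data, step size, and loss below are likewise toy choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
# Separable toy data: labels satisfy y * (x @ w_star) > 0.
X = rng.normal(size=(200, 2))
w_star = np.array([1.0, -0.5])
y = np.sign(X @ w_star)

# Distance-generating function phi(w) = 0.5 * w @ A @ w (illustrative A).
A = np.array([[2.0, 0.0], [0.0, 0.5]])
A_inv = np.linalg.inv(A)

def loss_grad(w):
    """Gradient of the average logistic loss on the separable sample."""
    m = y * (X @ w)
    p = 1.0 / (1.0 + np.exp(np.clip(m, -30, 30)))  # sigmoid(-margin)
    return -(X.T @ (y * p)) / len(y)

# Mirror descent: grad phi(w+) = grad phi(w) - eta * grad f(w),
# which for quadratic phi reduces to w+ = w - eta * A^{-1} grad f(w).
w, eta = np.zeros(2), 0.5
for _ in range(5000):
    w = w - eta * A_inv @ loss_grad(w)

margins = y * (X @ w) / np.linalg.norm(w)
print("min normalized margin of the MD iterate's direction:", margins.min())
```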

In this paper, we propose an infinite-dimensional version of the Stein variational gradient descent (iSVGD) method for solving Bayesian inverse problems. The method can efficiently generate approximate samples from posteriors. Based on the concepts of operator-valued kernels and function-valued reproducing kernel Hilbert spaces, rigorous definitions are given for the infinite-dimensional objects, e.g., the Stein operator, which are proved to be limits of their finite-dimensional counterparts. Moreover, a more efficient iSVGD with preconditioning operators is constructed by generalizing the change-of-variables formula and introducing a regularity parameter. The proposed algorithms are applied to an inverse problem for the steady-state Darcy flow equation. Numerical results confirm our theoretical findings and demonstrate the potential of the proposed approach for posterior sampling in large-scale nonlinear statistical inverse problems.
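For reference, the finite-dimensional SVGD update that the infinite-dimensional objects generalize can be written in a few lines of NumPy; the RBF kernel with a median-heuristic bandwidth and the Gaussian target are standard illustrative choices, not the paper's operator-valued setting:

```python
import numpy as np

def svgd_step(x, grad_log_p, eps=0.1):
    """One finite-dimensional SVGD update with an RBF kernel.
    x: (n, d) particles; grad_log_p: (n, d) -> (n, d) score of the target."""
    n = x.shape[0]
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)  # pairwise sq. dists
    h = np.median(d2) / max(np.log(n), 1.0) + 1e-12             # bandwidth heuristic
    K = np.exp(-d2 / h)
    # sum_j grad_{x_j} K(x_j, x_i) = (2/h) * sum_j K_ji (x_i - x_j)
    grad_K = (2.0 / h) * (K.sum(axis=1, keepdims=True) * x - K @ x)
    phi = (K @ grad_log_p(x) + grad_K) / n
    return x + eps * phi

# Illustrative target: standard 2D Gaussian, so grad log p(x) = -x.
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, size=(100, 2))
for _ in range(500):
    x = svgd_step(x, lambda z: -z)
print(x.mean(axis=0), x.std(axis=0))  # should approach ~0 mean, ~1 std
```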

We establish uniform error bounds of time-splitting Fourier pseudospectral (TSFP) methods for the nonlinear Klein--Gordon equation (NKGE) with weak power-type nonlinearity and $O(1)$ initial data, where the nonlinearity strength is characterized by $\varepsilon^{p}$ with a constant $p \in \mathbb{N}^+$ and a dimensionless parameter $\varepsilon \in (0, 1]$, for the long-time dynamics up to time $O(\varepsilon^{-\beta})$ with $0 \leq \beta \leq p$. In fact, when $0 < \varepsilon \ll 1$, the problem is equivalent to the long-time dynamics of the NKGE with small initial data and $O(1)$ nonlinearity strength, where the amplitude of the initial data (and of the solution) is $O(\varepsilon)$. By reformulating the NKGE as a relativistic nonlinear Schr\"{o}dinger equation, we adapt the TSFP method to discretize it numerically. Using mathematical induction to bound the numerical solution, we prove uniform error bounds of $O(h^{m}+\varepsilon^{p-\beta}\tau^2)$ for the TSFP method, where $h$ is the mesh size, $\tau$ is the time step, and $m\ge2$ depends on the regularity of the solution. The error bounds are uniformly accurate for long-time simulation up to time $O(\varepsilon^{-\beta})$ and uniformly valid for $\varepsilon\in(0,1]$. In particular, the error bounds are uniformly second order for the large time step $\tau = O(\varepsilon^{-(p-\beta)/2})$ in the parameter regime $0\le\beta <p$. Numerical results are reported to confirm our error bounds in the long-time regime. Finally, the TSFP method and its error bounds are extended to a highly oscillatory complex NKGE which propagates waves with wavelength $O(1)$ in space and $O(\varepsilon^{\beta})$ in time and wave velocity $O(\varepsilon^{-\beta})$.
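To show the splitting machinery in its simplest form, here is a Strang time-splitting Fourier pseudospectral step for a cubic nonlinear Schrödinger equation with periodic boundary conditions. This is a deliberately simplified stand-in: the paper applies TSFP to a relativistic NLS reformulation of the NKGE, which this sketch does not implement.

```python
import numpy as np

# Strang splitting for  i u_t = -u_xx + |u|^2 u  on [0, 2*pi), periodic BCs.
M = 256
x = 2 * np.pi * np.arange(M) / M
k = np.fft.fftfreq(M, d=1.0 / M)        # integer wavenumbers
tau = 1e-3

u = np.exp(-2 * (x - np.pi) ** 2).astype(complex)  # smooth initial datum

def strang_step(u):
    u = u * np.exp(-1j * np.abs(u) ** 2 * tau / 2)               # half nonlinear flow
    u = np.fft.ifft(np.exp(-1j * k ** 2 * tau) * np.fft.fft(u))  # full linear flow
    u = u * np.exp(-1j * np.abs(u) ** 2 * tau / 2)               # half nonlinear flow
    return u

for _ in range(1000):
    u = strang_step(u)
# Both substeps are unitary, so the discrete mass is conserved exactly.
print("discrete mass:", (np.abs(u) ** 2).sum() * (2 * np.pi / M))
```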

Recent years have witnessed a rapid growth of distributed machine learning (ML) frameworks, which exploit the massive parallelism of computing clusters to expedite ML training. However, the proliferation of distributed ML frameworks also introduces many unique technical challenges in computing system design and optimization. In a networked computing cluster that supports a large number of training jobs, a key question is how to design efficient scheduling algorithms to allocate workers and parameter servers across different machines to minimize the overall training time. Toward this end, in this paper, we develop an online scheduling algorithm that jointly optimizes resource allocation and locality decisions. Our main contributions are three-fold: i) We develop a new analytical model that considers both resource allocation and locality; ii) Based on an equivalent reformulation and observations on the worker-parameter server locality configurations, we transform the problem into a mixed packing and covering integer program, which enables approximation algorithm design; iii) We propose a meticulously designed approximation algorithm based on randomized rounding and rigorously analyze its performance. Collectively, our results contribute to the state of the art of distributed ML system optimization and algorithm design.
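The randomized-rounding primitive at the heart of contribution (iii) can be illustrated on a toy packing program; the scaling factor, the problem data, and the SciPy LP solver below are assumptions for illustration, not the paper's mixed packing-and-covering formulation for worker/parameter-server placement:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy packing program: maximize c @ x subject to B @ x <= b, x in {0,1}^n.
n, m = 30, 5
c = rng.uniform(1, 10, size=n)
B = rng.uniform(0, 1, size=(m, n))
b = np.full(m, n / 8)

# 1) Solve the LP relaxation (linprog minimizes, so negate the objective).
res = linprog(-c, A_ub=B, b_ub=b, bounds=[(0, 1)] * n, method="highs")
x_frac = res.x

# 2) Scale down and round each coordinate independently; the scaling trades a
#    constant factor in the objective for feasibility with high probability.
alpha = 0.5
x_int = (rng.uniform(size=n) < alpha * x_frac).astype(int)

print("LP value:", c @ x_frac, " rounded value:", c @ x_int,
      " feasible:", bool(np.all(B @ x_int <= b)))
```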

For time-dependent problems with high-contrast multiscale coefficients, the time step size of explicit methods is restricted by the magnitude of the coefficient. With a suitable construction of the multiscale space, one can achieve a stable temporal splitting scheme whose time step size is independent of the contrast. For the parabolic equation with a heterogeneous diffusion coefficient, the flow rates vary significantly across regions due to the high-contrast features of the diffusivity. In this work, we introduce a multirate partially explicit splitting scheme to achieve efficient simulation with the desired accuracy. We first design multiscale subspaces to handle flows of different speeds. For the fast flow, we obtain a low-dimensional subspace associated with the high-diffusive component and adopt an implicit time discretization. The other multiscale subspace takes care of the slow flow, and the corresponding degrees of freedom are treated explicitly. A multirate time stepping is then introduced for the two parts. We analyze the stability of the multirate method for the partially explicit scheme. Moreover, we derive local error estimators for the two components of the solution and provide an upper bound on the errors. An adaptive local temporal refinement framework is then proposed to achieve higher computational efficiency. Several numerical tests are presented to demonstrate the performance of the proposed method.
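A minimal sketch of the partially explicit idea on a 1D high-contrast heat equation: the high-contrast part of the operator is stepped implicitly, the background part explicitly. The finite-difference discretization and the crude fast/slow split by coefficient magnitude are illustrative assumptions; the paper instead splits via carefully constructed multiscale subspaces and substeps the two parts at different rates.

```python
import numpy as np

def laplacian_1d(n, kappa, h):
    """FD Laplacian with interface diffusivities kappa (length n+1), Dirichlet BCs."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = (kappa[i] + kappa[i + 1]) / h ** 2
        if i > 0:
            A[i, i - 1] = -kappa[i] / h ** 2
        if i < n - 1:
            A[i, i + 1] = -kappa[i + 1] / h ** 2
    return A

n, h, dt = 50, 1.0 / 51, 1e-4
kappa = np.ones(n + 1)
kappa[20:30] = 1e4                                    # high-contrast inclusion
A_fast = laplacian_1d(n, np.where(kappa > 1, kappa, 0.0), h)  # contrast part
A_slow = laplacian_1d(n, np.where(kappa > 1, 0.0, kappa), h)  # background part

u = np.sin(np.pi * np.linspace(h, 1 - h, n))
M = np.eye(n) + dt * A_fast                           # implicit (stiff) operator
for _ in range(200):
    # IMEX Euler: fast part implicit, slow part explicit (CFL set by kappa ~ 1 only).
    u = np.linalg.solve(M, u - dt * (A_slow @ u))
print("max |u| after 200 steps:", np.abs(u).max())
```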

In the MAXSPACE problem, given a set of ads A, one wants to place a subset A' of A into K slots B_1, ..., B_K of size L. Each ad A_i in A has size s_i and frequency w_i. A schedule is feasible if the total size of ads in any slot is at most L, and each ad A_i in A' appears in exactly w_i slots. The goal is to find a feasible schedule that maximizes the space occupied in all slots. We introduce MAXSPACE-RDWV, a generalization of MAXSPACE with release dates, deadlines, variable frequency, and generalized profit. In MAXSPACE-RDWV each ad A_i has a release date r_i >= 1, a deadline d_i >= r_i, a profit v_i that may be unrelated to s_i, and lower and upper bounds w^min_i and w^max_i on its frequency. In this problem, an ad may only appear in a slot B_j with r_i <= j <= d_i, and the goal is to find a feasible schedule that maximizes the sum of values of the scheduled ads. This paper presents algorithms based on the meta-heuristics Greedy Randomized Adaptive Search Procedure (GRASP), Variable Neighborhood Search (VNS), local search, and tabu search for MAXSPACE and MAXSPACE-RDWV. We compare our algorithms with the Hybrid-GA proposed by Kumar et al. (2006). We also create a version of Hybrid-GA for MAXSPACE-RDWV and compare it with our meta-heuristics for this problem. Some of the meta-heuristics, such as VNS and GRASP+VNS, obtain better results than Hybrid-GA for both problems.
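For intuition about the feasibility constraints, here is a simple greedy construction for MAXSPACE in Python; it is only a baseline of the kind such metaheuristics start from, not the paper's GRASP/VNS/tabu search procedures:

```python
def greedy_maxspace(ads, K, L):
    """Greedy MAXSPACE baseline: consider ads by decreasing total occupied
    space s_i * w_i, and schedule an ad only if w_i distinct slots still
    have s_i free capacity (so the schedule stays feasible)."""
    free = [L] * K                        # remaining capacity of each slot
    schedule = [[] for _ in range(K)]
    for i, (s, w) in sorted(enumerate(ads), key=lambda t: -t[1][0] * t[1][1]):
        slots = [j for j in range(K) if free[j] >= s]
        if len(slots) >= w:
            for j in sorted(slots, key=lambda j: -free[j])[:w]:
                free[j] -= s
                schedule[j].append(i)
    return schedule, sum(L - f for f in free)

ads = [(3, 2), (5, 1), (2, 4), (4, 2)]    # (size s_i, frequency w_i)
schedule, occupied = greedy_maxspace(ads, K=4, L=8)
print(schedule, "space occupied:", occupied)
```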

Adaptive mesh refinement is a key component of efficient unstructured space-time finite element methods. Underlying any adaptive mesh refinement scheme is, of course, a method for local refinement of simplices. However, simplex bisection algorithms in dimension greater than three have strict mesh preconditions which can be hard to satisfy. We prove that certain four-dimensional simplex space-time meshes can be handled with a relaxed precondition. Namely, we prove that if a tetrahedral mesh is 4-colorable, then we can produce a 4D simplex mesh which always satisfies the bisection precondition. We also briefly discuss strategies to handle tetrahedral meshes which are not 4-colorable.
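One way to probe the 4-colorability precondition in practice is to greedily color the face-adjacency graph of the tetrahedra; since greedy coloring may use more colors than the chromatic number, this gives a sufficient check only. The tiny mesh below and the reading of "4-colorable" as coloring face-adjacent tetrahedra with distinct colors are assumptions for illustration:

```python
import networkx as nx

# Tetrahedra as 4-tuples of vertex ids; two tets are adjacent iff they
# share a triangular face (3 common vertices).
tets = [(0, 1, 2, 3), (1, 2, 3, 4), (2, 3, 4, 5), (0, 1, 2, 6)]

G = nx.Graph()
G.add_nodes_from(range(len(tets)))
for i in range(len(tets)):
    for j in range(i + 1, len(tets)):
        if len(set(tets[i]) & set(tets[j])) == 3:   # shared face
            G.add_edge(i, j)

coloring = nx.greedy_color(G, strategy="largest_first")
n_colors = max(coloring.values()) + 1
print(coloring, "->", n_colors, "colors; 4-colorable:", n_colors <= 4)
```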

We study active learning of homogeneous $s$-sparse halfspaces in $\mathbb{R}^d$ in the setting where the unlabeled data distribution is isotropic log-concave and each label is flipped with probability at most $\eta$ for a parameter $\eta \in \big[0, \frac12\big)$, known as bounded noise. Even in the presence of mild label noise, i.e., when $\eta$ is a small constant, this is a challenging problem, and only recently have label complexity bounds of the form $\tilde{O}\big(s \cdot \mathrm{polylog}(d, \frac{1}{\epsilon})\big)$ been established in [Zhang, 2018] for computationally efficient algorithms. In contrast, under high levels of label noise, the label complexity bounds achieved by computationally efficient algorithms are much worse: the best known result of [Awasthi et al., 2016] provides a computationally efficient algorithm with label complexity $\tilde{O}\big((\frac{s \ln d}{\epsilon})^{2^{\mathrm{poly}(1/(1-2\eta))}} \big)$, which is label-efficient only when the noise rate $\eta$ is a fixed constant. In this work, we substantially improve on this result by designing a polynomial-time algorithm for active learning of $s$-sparse halfspaces with a label complexity of $\tilde{O}\big(\frac{s}{(1-2\eta)^4} \mathrm{polylog} (d, \frac 1 \epsilon) \big)$. This is the first efficient algorithm with label complexity polynomial in $\frac{1}{1-2\eta}$ in this setting, and it is label-efficient even for $\eta$ arbitrarily close to $\frac12$. Our active learning algorithm and its theoretical guarantees also immediately translate to new state-of-the-art label and sample complexity results for full-dimensional active and passive halfspace learning under arbitrary bounded noise. The key insight of our algorithm and analysis is a new interpretation of online learning regret inequalities, which may be of independent interest.
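The following sketch shows the generic margin-based active learning template that such results build on: query labels only inside a shrinking band around the current hyperplane and enforce sparsity by hard thresholding. All constants, the update rule, and the noise model below are illustrative assumptions; this is not the paper's algorithm or its analysis.

```python
import numpy as np

def hard_threshold(w, s):
    """Keep the s largest-magnitude coordinates of w (enforces s-sparsity)."""
    idx = np.argsort(np.abs(w))[:-s]
    w = w.copy()
    w[idx] = 0.0
    return w

rng = np.random.default_rng(0)
d, s, n_pool = 50, 5, 5000
w_star = np.zeros(d)
w_star[:s] = 1.0
X = rng.normal(size=(n_pool, d))
noisy = rng.uniform(size=n_pool) < 0.1           # bounded label noise, eta = 0.1
y = np.sign(X @ w_star) * np.where(noisy, -1, 1)

w, b, queries = np.ones(d) / np.sqrt(d), 0.5, 0
for t in range(20):
    near = np.abs(X @ w) < b                     # query band around the boundary
    Xb, yb = X[near], y[near]
    queries += len(yb)
    w = hard_threshold(w + (Xb.T @ yb) / max(len(yb), 1), s)  # averaged update
    w /= np.linalg.norm(w)
    b *= 0.8                                     # shrink the band each round
print("queries:", queries, "alignment:", w @ w_star / np.linalg.norm(w_star))
```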

Methods that align distributions by minimizing an adversarial distance between them have recently achieved impressive results. However, these approaches are difficult to optimize with gradient descent, and they often do not converge well without careful hyperparameter tuning and proper initialization. We investigate whether replacing the inner maximization with its dual, which turns the adversarial min-max problem into a plain minimization problem, improves the quality of the resulting alignment, and we explore its connections to Maximum Mean Discrepancy. Our empirical results suggest that, for the restricted family of linear discriminators, the dual formulation converges more stably to a desirable solution than both a primal min-max GAN-like objective and an MMD objective under the same restrictions. We test our hypothesis on the problem of aligning two synthetic point clouds on a plane and on a real-image domain adaptation problem on digits. In both cases, the dual formulation yields an iterative procedure that gives more stable and monotonic improvement over time.
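For comparison with the adversarial distances discussed above, the (biased) squared MMD between two samples takes only a few lines; the RBF kernel and the fixed bandwidth are illustrative choices:

```python
import numpy as np

def mmd2_rbf(X, Y, bandwidth=1.0):
    """Squared MMD between samples X and Y with an RBF kernel (biased V-statistic).
    Minimizing this aligns the two point clouds without any adversarial game."""
    def k(A, B):
        d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * bandwidth ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
Y_far = rng.normal(loc=3.0, size=(500, 2))
Y_near = rng.normal(loc=0.1, size=(500, 2))
print(mmd2_rbf(X, Y_far), ">", mmd2_rbf(X, Y_near))  # distance shrinks as clouds align
```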
