
The Kaczmarz method for solving a linear system $Ax = b$ interprets the system as a collection of equations $\left\langle a_i, x\right\rangle = b_i$, where $a_i$ is the $i$-th row of $A$, then picks one such equation and corrects $x_{k+1} = x_k + \lambda a_i$, where $\lambda$ is chosen so that the $i$-th equation is satisfied. Convergence rates are difficult to establish. Assuming the rows to be normalized, $\|a_i\|_{\ell^2}=1$, Strohmer \& Vershynin established that if the order of equations is chosen at random, $\mathbb{E}~ \|x_k - x\|_{\ell^2}$ converges exponentially. We prove that if the $i$-th row is selected with likelihood proportional to $\left|\left\langle a_i, x_k \right\rangle - b_i\right|^{p}$, where $0<p<\infty$, then $\mathbb{E}~\|x_k - x\|_{\ell^2}$ converges faster than for the purely random method. As $p \rightarrow \infty$, the method de-randomizes and explains, among other things, why the maximal-correction method works well. We empirically observe that the method computes approximations of small singular vectors of $A$ as a byproduct.
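The projection step and the residual-weighted row selection described above can be sketched in a few lines; the parameter values below (system size, iteration count) are illustrative, not from the paper.

```python
import numpy as np

def weighted_kaczmarz(A, b, p=2.0, iters=2000, seed=None):
    """Kaczmarz iteration where row i is sampled with probability
    proportional to |<a_i, x_k> - b_i|**p, as described above.
    Rows of A are assumed normalized to unit Euclidean norm."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        r = A @ x - b                      # residuals of all m equations
        w = np.abs(r) ** p
        if w.sum() == 0:                   # system already solved exactly
            break
        i = rng.choice(m, p=w / w.sum())   # residual-weighted row choice
        # project x onto the hyperplane <a_i, x> = b_i (||a_i|| = 1)
        x = x - (A[i] @ x - b[i]) * A[i]
    return x

# toy consistent system with normalized rows
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
A /= np.linalg.norm(A, axis=1, keepdims=True)
x_true = rng.standard_normal(10)
x_hat = weighted_kaczmarz(A, A @ x_true, p=2.0, seed=1)
```

Taking $p \to \infty$ in the sampling weights recovers the maximal-correction rule: the row with the largest residual is chosen deterministically.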

Related content

We describe a new approach to derive numerical approximations of boundary conditions for high-order accurate finite-difference approximations. The approach, called the Local Compatibility Boundary Condition (LCBC) method, uses boundary conditions and compatibility boundary conditions derived from the governing equations, as well as interior and boundary grid values, to construct a local polynomial, whose degree matches the order of accuracy of the interior scheme, centered at each boundary point. The local polynomial is then used to derive a discrete formula for each ghost point in terms of the data. This approach leads to centered approximations that are generally more accurate and stable than one-sided approximations. Moreover, the stencil approximations are local since they do not couple to neighboring ghost-point values, a coupling that can occur with traditional compatibility conditions. The local polynomial is derived using continuous operators and derivatives, which enables the automatic construction of stencil approximations at different orders of accuracy. The LCBC method is developed here for problems governed by second-order partial differential equations, and it is verified for a wide range of sample problems, both time-dependent and time-independent, in two space dimensions and for schemes up to sixth-order accuracy.
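A tiny illustration of the compatibility-condition idea (for the 1-D heat equation, not the LCBC construction itself): for $u_t = u_{xx}$ with Dirichlet data $u(0,t) = g(t)$, the equation itself forces $u_{xx}(0,t) = g'(t)$, which converts into a centered ghost-point formula instead of a one-sided extrapolation.

```python
import numpy as np

# Compatibility condition u_xx(0, t) = g'(t) with a centered second
# difference gives the ghost-point formula
#     u_{-1} = 2 u_0 - u_1 + h**2 * g'(t).
h = 0.05
t = 0.3
u = lambda x: np.exp(-t) * np.cos(x)   # exact solution of u_t = u_xx
gp = -np.exp(-t)                       # g'(t) for g(t) = u(0, t)

ghost = 2 * u(0.0) - u(h) + h**2 * gp  # compatibility-based ghost value
err = abs(ghost - u(-h))               # fourth-order accurate: O(h**4)
```

The centered formula reproduces the exact ghost value to $O(h^4)$, consistent with a second-order interior scheme.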

We propose a novel method for sampling and optimization tasks based on a stochastic interacting particle system. We explain how this method can be used for the following two goals: (i) generating approximate samples from a given target distribution; (ii) optimizing a given objective function. The approach is derivative-free and affine invariant, and is therefore well-suited for solving inverse problems defined by complex forward models: (i) allows generation of samples from the Bayesian posterior and (ii) allows determination of the maximum a posteriori estimator. We investigate the properties of the proposed family of methods in terms of various parameter choices, both analytically and by means of numerical simulations. The analysis and numerical simulation establish that the method has potential for general purpose optimization tasks over Euclidean space; contraction properties of the algorithm are established under suitable conditions, and computational experiments demonstrate wide basins of attraction for various specific problems. The analysis and experiments also demonstrate the potential for the sampling methodology in regimes in which the target distribution is unimodal and close to Gaussian; indeed we prove that the method recovers a Laplace approximation to the measure in certain parametric regimes and provide numerical evidence that this Laplace approximation attracts a large set of initial conditions in a number of examples.
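A generic consensus-type interacting-particle minimizer conveys the derivative-free flavor of goal (ii); the scheme and all parameter values below are an illustrative sketch, not the authors' exact method.

```python
import numpy as np

def consensus_optimize(f, x0, beta=30.0, lam=1.0, sigma=0.5,
                       dt=0.05, steps=400, seed=None):
    """Derivative-free interacting-particle minimization: particles
    drift toward a Gibbs-weighted consensus point and diffuse with
    noise that shrinks as consensus forms (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    X = np.array(x0, dtype=float)              # (J, d) ensemble
    for _ in range(steps):
        fx = np.array([f(x) for x in X])
        w = np.exp(-beta * (fx - fx.min()))    # stabilized Gibbs weights
        m = (w[:, None] * X).sum(0) / w.sum()  # weighted consensus point
        noise = rng.standard_normal(X.shape)
        X = X - lam * dt * (X - m) + sigma * np.sqrt(dt) * (X - m) * noise
    return m

f = lambda x: np.sum((x - 1.0) ** 2)           # minimum at (1, 1)
rng = np.random.default_rng(0)
x_star = consensus_optimize(f, rng.standard_normal((60, 2)) * 2, seed=1)
```

Only evaluations of `f` are used (no gradients), which is what makes this family of methods attractive for inverse problems with complex forward models.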

Reallocation scheduling is one of the most fundamental problems in areas such as supply chain management, logistics, and transportation science. In this paper, we introduce a reallocation problem that models schedules in which products have fixed cost, are non-fungible, and are reallocated in parallel, and we comprehensively study the complexity of the problem under various settings of the transition time, product size, and capacities. We show that the problem can be solved in polynomial time in the fundamental setting where the product size and transition time are both uniform. We also show that deciding feasibility of the problem is NP-complete even for slightly more general settings, which implies that no polynomial-time algorithm constructs a feasible schedule of the problem unless P$=$NP. We then consider a relaxation of the problem, which we call capacity augmentation, and derive a reallocation schedule that is feasible under the augmentation and whose completion time is at most the optimum of the original problem. When the warehouse capacity is sufficiently large, we design constant-factor approximation algorithms under all the settings. We also show a relationship between the reallocation problem and the bin packing problem when the warehouse and carry-in capacities are sufficiently large.

We study the greedy-based online algorithm for edge-weighted matching with (one-sided) vertex arrivals in bipartite graphs, and edge arrivals in general graphs. This algorithm was first studied more than a decade ago by Korula and P\'al for the bipartite case in the random-order model. While the weighted bipartite matching problem is solved in the random-order model, this is not the case in recent and exciting online models in which the online player is provided with a sample, and the arrival order is adversarial. The greedy-based algorithm is arguably the most natural and practical algorithm to be applied in these models. Despite its simplicity and appeal, and despite being studied in multiple works, the greedy-based algorithm was not fully understood in any of the studied online models, and its actual performance remained an open question for more than a decade. We provide a thorough analysis of the greedy-based algorithm in several online models. For vertex arrivals in bipartite graphs, we characterize the exact competitive ratio of this algorithm in the random-order model, for any arrival order of the vertices subsequent to the sampling phase (adversarial and random orders in particular). We use it to derive a tight analysis in the recent adversarial-order model with a sample (AOS model) for any sample size, providing the first result in this model beyond the simple secretary problem. Then, we generalize and strengthen the black-box method of converting results in the random-order model to single-sample prophet inequalities, and use it to derive the state-of-the-art single-sample prophet inequality for the problem. Finally, we use our new techniques to analyze the greedy-based algorithm for edge arrivals in general graphs and derive results in all the mentioned online models. In this case as well, we improve upon the state-of-the-art single-sample prophet inequality.
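A bare-bones version of the greedy rule for edge arrivals, stripped of the sampling phase, makes the arrival-order sensitivity concrete; the graph and weights below are illustrative.

```python
import random

def greedy_matching(edges, order=None, seed=0):
    """Scan edges in the given arrival order and accept an edge iff
    both endpoints are still free (plain greedy, no sampling phase)."""
    if order is None:
        order = list(range(len(edges)))
        random.Random(seed).shuffle(order)  # random-order arrivals
    matched, total = set(), 0
    for idx in order:
        u, v, w = edges[idx]
        if u not in matched and v not in matched:
            matched |= {u, v}
            total += w
    return total

# path a-b-c-d plus edge (a, d); optimal matching weight is 10 + 7 = 17
edges = [("a", "b", 10), ("b", "c", 9), ("c", "d", 7), ("a", "d", 1)]
w_good = greedy_matching(edges, order=[0, 1, 2, 3])  # heavy edges first: 17
w_bad = greedy_matching(edges, order=[1, 3, 0, 2])   # adversarial order: 10
```

The gap between `w_good` and `w_bad` on this four-edge example is exactly why the adversarial-order-with-a-sample analysis is delicate.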

In this work, we investigate the recovery of a parameter in a diffusion process, namely the order of derivation in time, for a class of diffusion-type equations, including both classical and time-fractional diffusion equations, from flux measurements observed at one point on the boundary. The mathematical model for time-fractional diffusion equations involves a Djrbashian-Caputo fractional derivative in time. We prove a uniqueness result in an unknown medium (e.g., diffusion coefficient, obstacle, initial condition, and source), i.e., the recovery of the order of derivation in a diffusion process with several pieces of unknown information. The proof relies on the analyticity of the solution at large time, its asymptotic decay behavior, the strong maximum principle for the elliptic problem, and a suitable application of the Hopf lemma. Further, we provide an easy-to-implement reconstruction algorithm based on a nonlinear least-squares formulation, and several numerical experiments are presented to complement the theoretical analysis.
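The large-time decay used in the proof suggests a toy version of the least-squares recovery: assuming the observed flux decays roughly like $C t^{-\alpha}$ at large time, the order $\alpha$ can be fit by linear least squares in log-log coordinates. The synthetic data below is illustrative and bypasses the paper's actual forward model.

```python
import numpy as np

# Synthetic flux observations h(t) ~ C * t**(-alpha) with 1% noise
# (assumed decay law for illustration only).
alpha_true = 0.7
t = np.linspace(10.0, 100.0, 50)
noise = 1 + 0.01 * np.random.default_rng(0).standard_normal(50)
h = 2.3 * t ** (-alpha_true) * noise

# log h = log C - alpha * log t  -->  linear least squares
A = np.vstack([np.ones_like(t), -np.log(t)]).T
coef, *_ = np.linalg.lstsq(A, np.log(h), rcond=None)
alpha_hat = coef[1]
```

In the paper's setting the misfit is nonlinear in the unknowns; this linearized fit only conveys how the decay rate encodes the order.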

We develop approximation algorithms for set-selection problems with deterministic constraints, but random objective values, i.e., stochastic probing problems. When the goal is to maximize the objective, approximation algorithms for probing problems are well-studied. On the other hand, few techniques are known for minimizing the objective, especially in the adaptive setting, where information about the random objective is revealed during the set-selection process and allowed to influence it. For minimization problems in particular, incorporating adaptivity can have a considerable effect on performance. In this work, we seek approximation algorithms that compare well to the optimal adaptive policy. We develop new techniques for adaptive minimization, applying them to a few problems of interest. The core technique we develop here is an approximate reduction from an adaptive expectation-minimization problem to a set of adaptive probability-minimization problems which we call threshold problems. By providing near-optimal solutions to these threshold problems, we obtain bicriteria adaptive policies. We apply this method to obtain an adaptive approximation algorithm for the MIN-ELEMENT problem, where the goal is to adaptively pick random variables to minimize the expected minimum value seen among them, subject to a knapsack constraint. This partially resolves an open problem raised in Goel et al.'s "How to probe for an extreme value". We further consider three extensions of the MIN-ELEMENT problem, where the objective is the sum of the smallest $k$ element weights, or the weight of the min-weight basis of a given matroid, or where the constraint is given not by a knapsack but by a matroid constraint. For all three variations, we develop adaptive approximation algorithms for their corresponding threshold problems, and prove their near-optimality via coupling arguments.
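A toy policy shows where adaptivity enters MIN-ELEMENT: the decision to keep probing depends on the values already observed. The stopping rule, distributions, and costs below are purely illustrative and are not the paper's threshold-reduction policy.

```python
import random

def adaptive_min_element(dists, costs, budget, threshold, seed=0):
    """Probe variables cheapest-first within the knapsack budget, but
    stop early once a realized value falls to the threshold; return
    the minimum value seen (illustrative adaptive policy)."""
    rng = random.Random(seed)
    best = float("inf")
    order = sorted(range(len(dists)), key=lambda i: costs[i])
    for i in order:
        if costs[i] > budget:
            continue
        budget -= costs[i]
        best = min(best, rng.choice(dists[i]))  # observe a sample of X_i
        if best <= threshold:                   # adaptivity: stop early
            break
    return best

dists = [[1, 9], [2, 8], [3, 7]]  # each X_i uniform over two values
r = adaptive_min_element(dists, costs=[1, 1, 1], budget=2, threshold=1)
```

A non-adaptive policy must commit to a probe set up front; the early stop here is exactly the information feedback that the adaptive analysis must handle.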

We adapt recent tools developed for the analysis of Stochastic Gradient Descent (SGD) in non-convex optimization to obtain convergence guarantees and sample complexities for the vanilla policy gradient (PG) methods -- REINFORCE and GPOMDP. Our only assumptions are that the expected return is smooth w.r.t. the policy parameters and that the second moment of its gradient satisfies a certain \emph{ABC assumption}. The ABC assumption allows the second moment of the gradient to be bounded by $A\geq 0$ times the suboptimality gap, $B \geq 0$ times the norm of the full-batch gradient, and an additive constant $C \geq 0$, or any combination of the aforementioned. We show that the ABC assumption is more general than the commonly used assumptions on the policy space to prove convergence to a stationary point. We provide a single convergence theorem under the ABC assumption, and show that, despite the generality of the ABC assumption, we recover the $\widetilde{\mathcal{O}}(\epsilon^{-4})$ sample complexity of PG. Our convergence theorem also affords greater flexibility in the choice of hyperparameters such as the step size and places no restriction on the batch size $m$. Even the single-trajectory case (i.e., $m=1$) fits within our analysis. We believe that the generality of the ABC assumption may provide theoretical guarantees for PG in a much broader range of problems that have not been previously considered.
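A minimal instance of the PG setting analyzed above is REINFORCE with a softmax policy on a bandit (one-step "trajectories", batch size $m$); the running-mean baseline below is a common implementation choice for variance reduction, not part of the paper's assumptions.

```python
import numpy as np

def reinforce_bandit(means, steps=4000, lr=0.05, m=1, seed=0):
    """Vanilla REINFORCE on a Gaussian-reward bandit: score-function
    gradient estimates over a batch of m one-step trajectories."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(len(means))      # softmax policy parameters
    baseline = 0.0
    for _ in range(steps):
        grad = np.zeros_like(theta)
        for _ in range(m):
            pi = np.exp(theta - theta.max())
            pi /= pi.sum()
            a = rng.choice(len(means), p=pi)
            r = means[a] + 0.05 * rng.standard_normal()
            score = -pi               # gradient of log pi(a) w.r.t. theta
            score[a] += 1.0
            grad += (r - baseline) * score     # REINFORCE estimator
            baseline += 0.01 * (r - baseline)  # running-mean baseline
        theta += lr * grad / m        # stochastic gradient ascent
    return theta

theta = reinforce_bandit([0.0, 1.0, 0.2])  # arm 1 has the best mean reward
```

Note that $m=1$ works here, matching the single-trajectory case covered by the analysis.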

Knowledge graph reasoning, which aims at predicting missing facts through reasoning over the observed facts, is critical to many applications. This problem has been widely explored by traditional logic rule-based approaches and recent knowledge graph embedding methods. A principled logic rule-based approach is the Markov Logic Network (MLN), which is able to leverage domain knowledge with first-order logic and meanwhile handle uncertainty. However, inference in MLNs is usually very difficult due to the complicated graph structures. Different from MLNs, knowledge graph embedding methods (e.g., TransE, DistMult) learn effective entity and relation embeddings for reasoning, which are much more effective and efficient. However, they are unable to leverage domain knowledge. In this paper, we propose the probabilistic Logic Neural Network (pLogicNet), which combines the advantages of both methods. A pLogicNet defines the joint distribution of all possible triplets by using a Markov logic network with first-order logic, which can be efficiently optimized with the variational EM algorithm. In the E-step, a knowledge graph embedding model is used for inferring the missing triplets, while in the M-step, the weights of logic rules are updated based on both the observed and predicted triplets. Experiments on multiple knowledge graphs demonstrate the effectiveness of pLogicNet over many competitive baselines.
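To make the embedding side concrete, here is a minimal TransE-style scorer (one of the models named above): a triplet $(h, r, t)$ is plausible when $e_h + e_r \approx e_t$. The entities and relation below are a contrived example where the translation holds exactly.

```python
import numpy as np

# TransE scores a triplet (h, r, t) by -|| e_h + e_r - e_t ||.
rng = np.random.default_rng(0)
ent = {"paris": rng.standard_normal(8), "france": rng.standard_normal(8)}
rel = {"capital_of": ent["france"] - ent["paris"]}  # fact holds exactly

def score(h, r, t):
    """Higher (closer to 0) means more plausible."""
    return -np.linalg.norm(ent[h] + rel[r] - ent[t])

s_true = score("paris", "capital_of", "france")  # 0 by construction
s_false = score("france", "capital_of", "paris")
```

In pLogicNet's E-step, scores of this kind supply beliefs about unobserved triplets, which the M-step then uses to reweight the logic rules.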

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
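The local smoothing behind DRS can be illustrated with a generic two-point Gaussian-smoothing gradient estimator for a non-smooth function; this is a sketch of the smoothing idea only, not the distributed algorithm.

```python
import numpy as np

def smoothed_grad(f, x, gamma=0.05, samples=200, seed=None):
    """Estimate the gradient of the smoothed surrogate
    f_gamma(x) = E[f(x + gamma Z)], Z ~ N(0, I), using symmetric
    two-point evaluations of the (possibly non-smooth) f."""
    rng = np.random.default_rng(seed)
    g = np.zeros_like(x)
    for _ in range(samples):
        z = rng.standard_normal(x.shape)
        g += (f(x + gamma * z) - f(x - gamma * z)) / (2 * gamma) * z
    return g / samples

# descend the non-smooth f(x) = ||x - 1||_1 using only function values
f = lambda x: np.abs(x - 1.0).sum()
x = np.zeros(3)
for k in range(300):
    x -= 0.05 * smoothed_grad(f, x, seed=k)
```

The smoothing parameter trades off bias (the surrogate differs from $f$ by $O(\gamma)$) against the variance of the estimator, which is the source of the dimension factor in the analysis.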

In this paper, we study the optimal convergence rate for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(x) \triangleq \sum_{i=1}^{m} f_i(x)$ is strongly convex and smooth, strongly convex, smooth, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
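The building block run on the dual problem above is Nesterov's accelerated gradient method; a standard strongly-convex variant on a toy quadratic is sketched below (centralized, not the distributed dual implementation).

```python
import numpy as np

def nesterov(grad, x0, L, mu, steps=200):
    """Nesterov's accelerated gradient for an L-smooth, mu-strongly
    convex objective, with the constant momentum coefficient
    q = (sqrt(L) - sqrt(mu)) / (sqrt(L) + sqrt(mu))."""
    x = y = np.array(x0, dtype=float)
    q = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    for _ in range(steps):
        x_new = y - grad(y) / L       # gradient step from the lookahead
        y = x_new + q * (x_new - x)   # momentum extrapolation
        x = x_new
    return x

# diagonal quadratic f(x) = 0.5 x^T D x - b^T x, minimizer b / D
D = np.array([10.0, 1.0])
b = np.array([1.0, 1.0])
x_star = nesterov(lambda x: D * x - b, [0.0, 0.0], L=10.0, mu=1.0)
```

The accelerated rate depends on $\sqrt{L/\mu}$ rather than $L/\mu$; in the distributed setting the analogous condition number involves the spectral gap of the interaction matrix.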
