
In this paper, we consider non-diffusive variational problems with mixed boundary conditions and (distributional and weak) gradient constraints. The upper bound in the constraint is either a function or a Borel measure, leading to a state space that is either a Sobolev space or the space of functions of bounded variation. We address existence and uniqueness of solutions to the model under low regularity assumptions, and rigorously identify its Fenchel pre-dual problem. The latter is, in some cases, posed on a non-standard space of Borel measures with square-integrable divergences. We also establish existence and uniqueness of solutions to this pre-dual problem under some assumptions. We conclude the paper by introducing a mixed finite-element method to solve the primal-dual system. The numerical examples confirm our theoretical findings.
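To fix ideas, a representative model problem of this type is written out below in LaTeX; this is only an illustrative instance, and the paper's precise energy, data $f$, and boundary partition may differ.

```latex
% Illustrative non-diffusive problem with a gradient constraint (not
% necessarily the paper's exact formulation).
\begin{equation*}
  \min_{u}\ \frac{1}{2}\int_\Omega u^2 \,\mathrm{d}x
            - \int_\Omega f\,u \,\mathrm{d}x
  \qquad \text{subject to} \qquad
  |\nabla u| \le \psi \ \text{a.e. in } \Omega,
\end{equation*}
% with mixed (Dirichlet/Neumann) conditions on complementary parts of
% $\partial\Omega$; when the upper bound is a Borel measure instead of a
% function $\psi$, the constraint is understood distributionally and the
% state space becomes $BV(\Omega)$.
```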

Related content

We consider decentralized consensus optimization when workers sample data from non-identical distributions and perform variable amounts of work due to slow nodes known as stragglers. The problems of non-identical distributions and of variable amounts of work have previously been studied separately. In our work we analyze them together under a unified system model. We study the convergence of the optimization algorithm when combining worker outputs under two heuristic methods: (1) weighting equally, and (2) weighting by the amount of work completed by each worker. We prove convergence of the two methods under perfect consensus, assuming straggler statistics are independent and identical across all workers for all iterations. Our numerical results show that under approximate consensus the second method outperforms the first for both convex and non-convex objective functions. We make use of the theory of minimum variance unbiased estimators (MVUE) to evaluate whether an optimal method for combining worker outputs exists, concluding not only that neither of the two heuristics is optimal, but also that no optimal method exists.
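The two combining heuristics can be sketched in a few lines; this is an illustrative stand-in (not the paper's code), where `grads[i]` is the update worker $i$ produced and `work[i]` is the number of mini-batches it completed:

```python
# Sketch of the two heuristic rules for combining worker outputs.
import numpy as np

def combine_equal(grads):
    # Method (1): every worker's output counts the same.
    return np.mean(grads, axis=0)

def combine_by_work(grads, work):
    # Method (2): weight each worker by the amount of work it completed.
    w = np.asarray(work, dtype=float)
    return np.tensordot(w / w.sum(), np.asarray(grads), axes=1)

# Hypothetical usage: 3 workers, one straggler that finished only 1 batch.
grads = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([5.0, 5.0])]
work = [8, 8, 1]
print(combine_equal(grads))
print(combine_by_work(grads, work))
```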

In this work, we introduce two algorithmic frameworks, named the Bregman extragradient method and the Bregman extrapolation method, for solving saddle point problems. The proposed frameworks not only include the well-known extragradient and optimistic gradient methods as special cases, but also generate new variants such as sparse extragradient and extrapolation methods. With the help of the recent concept of relative Lipschitzness and some Bregman-distance-related tools, we are able to show certain upper bounds in terms of Bregman distances for gap-type measures. Further, we use those bounds to deduce a convergence rate of $\mathcal{O}(1/k)$ for the Bregman extragradient and Bregman extrapolation methods applied to smooth convex-concave saddle point problems. Our theory recovers the main discovery made in [Mokhtari et al. (2020), SIAM J. Optim., 30, pp. 3230-3251] for more general algorithmic frameworks, under weaker assumptions, via a conceptually different approach.
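As a minimal sketch: with the Euclidean mirror map (so the Bregman distance is the squared Euclidean distance), the Bregman extragradient method reduces to the classical extragradient iteration below; other mirror maps would yield the sparse variants mentioned above. The bilinear test problem is a hypothetical example.

```python
# Euclidean special case of the Bregman extragradient method.
# F(z) stacks (grad_x f, -grad_y f) of the saddle function f(x, y).
import numpy as np

def bregman_extragradient(F, z0, eta=0.1, iters=1000):
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        z_half = z - eta * F(z)      # extrapolation (leading) step
        z = z - eta * F(z_half)      # update evaluated at the midpoint
    return z

# Bilinear toy saddle point f(x, y) = x * y, so F(z) = (y, -x).
F = lambda z: np.array([z[1], -z[0]])
print(bregman_extragradient(F, [1.0, 1.0]))  # converges toward (0, 0)
```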

We propose local space-time approximation spaces for parabolic problems that are optimal in the sense of Kolmogorov and may be employed in multiscale and domain decomposition methods. The diffusion coefficient can be arbitrarily rough in space and time. To construct local approximation spaces we consider a compact transfer operator that acts on the space of local solutions and covers the full time dimension. The optimal local spaces are then given by the left singular vectors of the transfer operator. To prove compactness of the latter we combine a suitable parabolic Caccioppoli inequality with the compactness theorem of Aubin-Lions. In contrast to the elliptic setting [I. Babu\v{s}ka and R. Lipton, Multiscale Model. Simul., 9 (2011), pp. 373-406], we need an additional regularity result to combine the two. Furthermore, we employ the generalized finite element method to couple local spaces and construct an approximation of the global solution. Since our approach yields reduced space-time bases, the computation of the global approximation does not require a time-stepping method and is thus computationally efficient. Moreover, we derive rigorous local and global a priori error bounds. In detail, we bound the global approximation error in a graph norm by the local errors in the $L^2(H^1)$-norm, noting that the space the transfer operator maps to is equipped with this norm. Numerical experiments demonstrate an exponential decay of the singular values of the transfer operator and of the local and global approximation errors for problems with high contrast or multiscale structure in space and time.
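The final construction step admits a compact sketch: once the transfer operator has been discretized as a matrix (here a random stand-in, not an actual parabolic solver), the optimal local space of dimension $n$ is spanned by its first $n$ left singular vectors.

```python
# Sketch: optimal local basis from the SVD of a discretized transfer
# operator T (random placeholder mapping local data to local solutions).
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((200, 80))   # stand-in transfer matrix

U, s, _ = np.linalg.svd(T, full_matrices=False)
n = 10
local_space = U[:, :n]               # optimal n-dim space (Kolmogorov sense)

# For the actual operator the singular values s decay exponentially,
# which is what the paper's experiments observe; this random T does not.
print(s[:n])
```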

Bayesian statistical inference for Generalized Linear Models (GLMs) with parameters lying on a constrained space is of general interest (e.g., in monotonic or convex regression), but constructing valid prior distributions supported on a subspace spanned by a set of linear inequality constraints can be challenging, especially when some of the constraints might be binding, leading to a lower-dimensional subspace. For the general case with canonical link, it is shown that a generalized truncated multivariate normal supported on a desired subspace can be used. Moreover, it is shown that such a prior distribution facilitates the construction of a general-purpose product slice sampling method to obtain (approximate) samples from the corresponding posterior distribution, making the inferential method computationally efficient for a wide class of GLMs with an arbitrary set of linear inequality constraints. The proposed product slice sampler is shown to be uniformly ergodic, with a geometric convergence rate, under a set of mild regularity conditions satisfied by many popular GLMs (e.g., logistic and Poisson regressions with constrained coefficients). A primary advantage of the proposed Bayesian estimation method over classical methods is that uncertainty in the parameter estimates is easily quantified using the samples simulated along the path of the Markov chain of the slice sampler. Numerical illustrations using simulated data sets demonstrate the superiority of the proposed methods over some existing methods in terms of sampling bias and variance. In addition, real case studies are presented using data sets on fertilizer-crop production and on estimating the SCRAM rate in nuclear power plants.
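For illustration, a single coordinate-wise slice sampling update restricted to linear inequality constraints $A\beta \le b$ might look as follows; this shrinkage-based sketch is a generic stand-in, not the paper's product slice sampler.

```python
# Generic coordinate-wise slice sampling under A @ beta <= b.
import numpy as np

rng = np.random.default_rng(1)

def slice_update(logpdf, beta, j, A, b, width=1.0):
    # Draw the auxiliary level, place a width-`width` bracket around
    # beta[j], then sample by rejection, shrinking the bracket toward
    # the current (always valid) point so the loop terminates.
    logu = logpdf(beta) + np.log(rng.uniform())
    left = beta[j] - width * rng.uniform()
    right = left + width
    prop = beta.copy()
    while True:
        prop[j] = rng.uniform(left, right)
        if np.all(A @ prop <= b) and logpdf(prop) >= logu:
            return prop
        if prop[j] < beta[j]:
            left = prop[j]
        else:
            right = prop[j]

# Hypothetical usage: standard bivariate normal restricted to b1 + b2 <= 1.
A, b = np.array([[1.0, 1.0]]), np.array([1.0])
logpdf = lambda beta: -0.5 * beta @ beta
beta = np.zeros(2)
for _ in range(1000):
    for j in range(2):
        beta = slice_update(logpdf, beta, j, A, b)
```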

The use of orthogonal projections on high-dimensional input and target data in learning frameworks is studied. First, we investigate the relation between two standard objectives in dimension reduction: maximizing variance and preserving pairwise relative distances. The derivation of their asymptotic correlation, together with numerical experiments, indicates that a projection usually cannot satisfy both objectives. In a standard classification problem we determine projections on the input data that balance them and compare the subsequent results. Next, we extend our application of orthogonal projections to deep learning frameworks. We introduce new variational loss functions that enable the integration of additional information via transformations and projections of the target data. In two supervised learning problems, clinical image segmentation and music information classification, the application of the proposed loss functions increases accuracy.
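The trade-off between the two objectives can be observed numerically; the sketch below (with synthetic data, purely illustrative) compares the variance retained and the correlation of pairwise distances under a top-$k$ PCA projection versus a random orthogonal projection.

```python
# Variance retention vs. pairwise-distance preservation of two projections.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 50)) @ rng.standard_normal((50, 50))
X -= X.mean(axis=0)
k = 5

# PCA projection: top-k right singular vectors (maximizes variance).
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P_pca = Vt[:k].T

# Random orthogonal projection via QR (typical distance-preserving choice).
Q, _ = np.linalg.qr(rng.standard_normal((50, k)))

for name, P in [("pca", P_pca), ("random", Q)]:
    Y = X @ P
    var_ratio = Y.var(axis=0).sum() / X.var(axis=0).sum()
    dist_corr = np.corrcoef(pdist(X), pdist(Y))[0, 1]
    print(name, round(var_ratio, 3), round(dist_corr, 3))
```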

Proximal Policy Optimization (PPO) is a highly popular model-free reinforcement learning (RL) approach. However, with continuous state and action spaces and a Gaussian policy -- common in computer animation and robotics -- PPO is prone to getting stuck in local optima. In this paper, we observe a tendency of PPO to prematurely shrink the exploration variance, which naturally leads to slow progress. Motivated by this, we borrow ideas from CMA-ES, a black-box optimization method designed for intelligent adaptive Gaussian exploration, to derive PPO-CMA, a novel proximal policy optimization approach that can expand the exploration variance on objective function slopes and shrink the variance when close to the optimum. This is implemented by using separate neural networks for the policy mean and variance and training the mean and variance in separate passes. Our experiments demonstrate a clear improvement over vanilla PPO in many difficult OpenAI Gym MuJoCo tasks.
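The core idea, variance and mean updated in separate passes so exploration can expand along improvement directions, can be illustrated on a toy single-state problem without neural networks; this is a conceptual sketch, not the paper's implementation.

```python
# Toy two-pass Gaussian exploration update in the spirit of PPO-CMA.
import numpy as np

rng = np.random.default_rng(3)
mean, var = np.zeros(2), np.ones(2)
objective = lambda a: -np.sum((a - 3.0) ** 2, axis=-1)  # toy reward

for _ in range(100):
    actions = mean + np.sqrt(var) * rng.standard_normal((64, 2))
    adv = objective(actions) - objective(actions).mean()
    pos = actions[adv > 0]                 # keep only improving actions
    # Pass 1: fit the variance to the spread of improving actions around
    # the *old* mean, letting exploration expand along the slope.
    var = 0.9 * var + 0.1 * ((pos - mean) ** 2).mean(axis=0)
    # Pass 2: move the mean toward the improving actions.
    mean = 0.9 * mean + 0.1 * pos.mean(axis=0)

print(mean, var)  # mean approaches the optimum at (3, 3)
```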

This paper proposes a model-free Reinforcement Learning (RL) algorithm to synthesise policies for an unknown Markov Decision Process (MDP) such that a linear time property is satisfied. We convert the given property into a Limit Deterministic Büchi Automaton (LDBA) and then construct a synchronized (product) MDP between the automaton and the original MDP. Based on the resulting LDBA, a reward function is then defined over the state-action pairs of the product MDP. With this reward function, our algorithm synthesises a policy whose traces satisfy the linear time property: as such, the policy synthesis procedure is "constrained" by the given specification. Additionally, we show that the RL procedure sets up an online value iteration method to calculate the maximum probability of satisfying the given property at any given state of the MDP; a convergence proof for the procedure is provided. Finally, the performance of the algorithm is evaluated via a set of numerical examples. We observe an improvement of one order of magnitude in the number of iterations required for the synthesis compared to existing approaches.
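A single transition of the synchronized product MDP, with the reward driven by the automaton's accepting condition, can be sketched as follows; `mdp_step`, `ldba_step`, and `label` are hypothetical placeholders for the user's models.

```python
# Sketch of one step of the product MDP with an automaton-derived reward.
def product_step(s, q, a, mdp_step, ldba_step, label, accepting, r=1.0):
    """One transition of the synchronized (product) MDP.

    s: MDP state, q: LDBA state, a: action.
    Returns the next product state and the RL reward.
    """
    s_next = mdp_step(s, a)               # original MDP dynamics
    q_next = ldba_step(q, label(s_next))  # automaton reads the new label
    reward = r if q_next in accepting else 0.0
    return (s_next, q_next), reward
```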

We consider the exploration-exploitation trade-off in reinforcement learning and show that an agent imbued with a risk-seeking utility function is able to explore efficiently, as measured by regret. The parameter that controls how risk-seeking the agent is can be optimized exactly, or annealed according to a schedule. We call the resulting algorithm K-learning and show that the corresponding K-values are optimistic for the expected Q-values at each state-action pair. The K-values induce a natural Boltzmann exploration policy for which the `temperature' parameter is equal to the risk-seeking parameter. This policy achieves an expected regret bound of $\tilde O(L^{3/2} \sqrt{S A T})$, where $L$ is the time horizon, $S$ is the number of states, $A$ is the number of actions, and $T$ is the total number of elapsed time-steps. This bound is only a factor of $L$ larger than the established lower bound. K-learning can be interpreted as mirror descent in the policy space; it is similar to other well-known methods in the literature, including Q-learning, soft Q-learning, and maximum entropy policy gradient, and is closely related to optimism and count-based exploration methods. K-learning is simple to implement, as it only requires adding a bonus to the reward at each state-action pair and then solving a Bellman equation. We conclude with a numerical example demonstrating that K-learning is competitive with other state-of-the-art algorithms in practice.
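In a tabular, discounted toy setting (a simplification of the paper's finite-horizon analysis), the recipe of adding a bonus and solving a soft Bellman equation might look like the sketch below; the bonus term here is a stand-in.

```python
# Tabular sketch: bonus-augmented soft Bellman iteration and the induced
# Boltzmann policy with temperature tau.
import numpy as np

def k_iteration(R, P, bonus, tau=1.0, gamma=0.9, iters=500):
    # R: (S, A) rewards, P: (S, A, S) transitions, bonus: (S, A).
    K = np.zeros_like(R)
    for _ in range(iters):
        V = tau * np.log(np.exp(K / tau).sum(axis=1))  # soft state value
        K = R + bonus + gamma * P @ V                  # fixed-point update
    policy = np.exp(K / tau)
    return K, policy / policy.sum(axis=1, keepdims=True)

# Hypothetical 2-state, 2-action toy model.
R = np.array([[1.0, 0.0], [0.0, 0.5]])
P = np.array([[[0.9, 0.1], [0.1, 0.9]],
              [[0.5, 0.5], [0.2, 0.8]]])
bonus = 0.1 * np.ones_like(R)
K, pi = k_iteration(R, P, bonus)
print(pi)
```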

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
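The local smoothing underlying DRS replaces the non-smooth $f$ by its Gaussian smoothing $f_\gamma(x) = \mathbb{E}[f(x + \gamma Z)]$; a standard sampled gradient estimator for $f_\gamma$ is sketched below (illustrative only, not the full distributed algorithm).

```python
# Monte Carlo gradient estimator for the Gaussian smoothing of f.
import numpy as np

rng = np.random.default_rng(4)

def smoothed_grad(f, x, gamma=0.1, m=64):
    # Unbiased estimator of grad f_gamma(x):
    #   E[ Z * (f(x + gamma Z) - f(x)) / gamma ]  with Z ~ N(0, I);
    # subtracting f(x) is a variance-reducing control variate.
    Z = rng.standard_normal((m, x.size))
    fx = f(x)
    vals = np.array([f(x + gamma * z) for z in Z])
    return (Z * (vals - fx)[:, None]).mean(axis=0) / gamma

f = lambda x: np.abs(x).sum()        # non-smooth convex test function
x = np.array([1.0, -2.0, 0.5])
print(smoothed_grad(f, x))           # approximately sign(x)
```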

Current image captioning methods are usually trained via (penalized) maximum likelihood estimation. However, the log-likelihood score of a caption does not correlate well with human assessments of quality. Standard syntactic evaluation metrics, such as BLEU, METEOR and ROUGE, are also not well correlated. The newer SPICE and CIDEr metrics are better correlated, but have traditionally been hard to optimize for. In this paper, we show how to use a policy gradient (PG) method to directly optimize a linear combination of SPICE and CIDEr (a combination we call SPIDEr): the SPICE score ensures our captions are semantically faithful to the image, while the CIDEr score ensures our captions are syntactically fluent. The PG method we propose improves on the prior MIXER approach by using Monte Carlo rollouts instead of mixing MLE training with PG. We show empirically that our algorithm leads to easier optimization and improved results compared to MIXER. Finally, we show that using our PG method we can optimize any of the metrics, including the proposed SPIDEr metric, which results in image captions that are strongly preferred by human raters compared to captions generated by the same model but trained to optimize MLE or the COCO metrics.
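The SPIDEr reward and a REINFORCE-style update with a rollout baseline can be sketched as follows; `spice` and `cider` are hypothetical stand-ins for real metric implementations, and the caption model itself is omitted.

```python
# Sketch of the SPIDEr reward and a REINFORCE loss with a rollout baseline.
import numpy as np

def spider_reward(caption, refs, spice, cider, w=0.5):
    # SPIDEr: a linear combination of the semantic (SPICE) and
    # consensus/fluency (CIDEr) scores of a sampled caption.
    return w * spice(caption, refs) + (1 - w) * cider(caption, refs)

def pg_loss(logprobs, rewards, baseline):
    # REINFORCE with a Monte Carlo rollout baseline: captions scoring
    # above the baseline have their sampled tokens reinforced.
    logprobs, rewards = np.asarray(logprobs), np.asarray(rewards)
    return -np.mean(logprobs * (rewards - baseline))
```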
