
The probabilistic satisfiability of a logical expression is a fundamental quantity, known as the partition function in statistical physics and field theory, as an evaluation of a related graph's Tutte polynomial in mathematics, and as the Moore-Shannon network reliability of that graph in engineering. It is a crucial ingredient for decision-making under uncertainty. Not surprisingly, it is provably hard to compute exactly or even to approximate. Many applications, however, are concerned only with the subset of problems whose solutions are monotonic functions. Here we extend the weak- and strong-coupling methods of statistical physics to heterogeneous satisfiability problems and introduce a novel approach to constructing lower and upper bounds on the approximation error for monotonic problems. The resulting bounds combine information from both perturbative analyses and are tight, in the sense that each is saturated by some problem instance compatible with all the information contained in either approximation.
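The quantity being bounded is monotone in the underlying edge probabilities, which is what makes two-sided bounds possible. As a point of reference only (this is not the paper's perturbative construction), here is a minimal Monte Carlo sketch of the Moore-Shannon two-terminal reliability of a small graph; the function name and graph are illustrative assumptions:

```python
import random

def two_terminal_reliability(edges, s, t, p, trials=20_000):
    """Estimate P(s and t remain connected) when each edge survives
    independently with probability p; monotone non-decreasing in p."""
    hits = 0
    for _ in range(trials):
        alive = [e for e in edges if random.random() < p]
        adj = {}
        for u, v in alive:
            adj.setdefault(u, []).append(v)
            adj.setdefault(v, []).append(u)
        # Depth-first search over the surviving edges.
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v in adj.get(u, ()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        hits += t in seen
    return hits / trials

# A 4-cycle: two disjoint s-t paths of length two.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(two_terminal_reliability(edges, 0, 2, 0.9))
```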

Related content

The journal publishes high-quality papers that expand the scope of operations research and computing. It seeks original research papers on theory, methods, experiments, systems, and applications; novel survey and tutorial papers; and papers describing new and useful software tools.
July 22, 2022

In this paper, we study a sequential decision-making problem faced by e-commerce carriers: when to send a vehicle out from the central depot to serve customer requests, and in which order to provide the service, under the assumption that the times at which parcels arrive at the depot are stochastic and dynamic. The objective is to maximize the number of parcels that can be delivered during the service hours. We propose two reinforcement learning approaches for solving this problem, one based on a policy function approximation (PFA) and the other on a value function approximation (VFA). Both methods are combined with a look-ahead strategy, in which future release dates are sampled in a Monte-Carlo fashion and a tailored batch approach is used to approximate the value of future states. Our PFA and VFA make good use of branch-and-cut-based exact methods to improve the quality of decisions. We also establish sufficient conditions for a partial characterization of the optimal policy and integrate them into PFA/VFA. In an empirical study based on 720 benchmark instances, we conduct a competitive analysis against upper bounds computed with perfect information, and we show that PFA and VFA greatly outperform two alternative myopic approaches. Overall, PFA provides the best solutions, while VFA (which benefits from a two-stage stochastic optimization model) achieves a better tradeoff between solution quality and computing time.
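The shape of the Monte-Carlo look-ahead is easy to sketch, even though the paper's batch evaluation and exact routing components are far richer. In the hedged sketch below, `evaluate` is a hypothetical placeholder for the value of taking an action under one sampled scenario, and the exponential release model is an assumption for illustration only:

```python
import random

def lookahead_decision(actions, evaluate, release_rate, horizon, n_scenarios=50):
    """Pick the action with the best average value over sampled futures."""
    def sample_scenario():
        # Hypothetical arrival model: exponential inter-release times.
        t, releases = 0.0, []
        while True:
            t += random.expovariate(release_rate)
            if t > horizon:
                return releases
            releases.append(t)

    scenarios = [sample_scenario() for _ in range(n_scenarios)]
    def score(action):
        return sum(evaluate(action, s) for s in scenarios) / n_scenarios
    return max(actions, key=score)
```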

Shape-constrained density estimation is an important topic in mathematical statistics. We focus on densities on $\mathbb{R}^d$ that are log-concave, and we study geometric properties of the maximum likelihood estimator (MLE) for weighted samples. Cule, Samworth, and Stewart showed that the logarithm of the optimal log-concave density is piecewise linear and supported on a regular subdivision of the samples. This defines a map from the space of weights to the set of regular subdivisions of the samples, i.e. the face poset of their secondary polytope. We prove that this map is surjective. In fact, every regular subdivision arises in the MLE for some set of weights with positive probability, but coarser subdivisions appear to be more likely to arise than finer ones. To quantify these results, we introduce a continuous version of the secondary polytope, whose dual we name the Samworth body. This article establishes a new link between geometric combinatorics and nonparametric statistics, and it suggests numerous open problems.
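The map in question sends a weight vector to the regular subdivision obtained by lifting each sample to a height and projecting the lower convex hull back down; the MLE's log-density is linear on each cell. A minimal sketch of that lifting construction follows (using scipy; the heights here are arbitrary inputs supplied for illustration, not the output of the MLE optimization):

```python
import numpy as np
from scipy.spatial import ConvexHull

def regular_subdivision(points, heights):
    """Cells of the regular subdivision induced by lifting `points`
    to `heights` and projecting the lower convex hull back down."""
    lifted = np.column_stack([points, heights])
    hull = ConvexHull(lifted)
    cells = []
    for simplex, eq in zip(hull.simplices, hull.equations):
        if eq[-2] < 0:  # facet normal points downward: lower hull
            cells.append(sorted(simplex))
    return cells

# Unit square lifted so the hull folds along one diagonal:
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
h = np.array([0.0, 1.0, 1.0, 0.0])
print(regular_subdivision(pts, h))  # two triangles sharing a diagonal
```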

The monotone minimal perfect hash function (MMPHF) problem is the following indexing problem. Given a set $S= \{s_1,\ldots,s_n\}$ of $n$ distinct keys from a universe $U$ of size $u$, create a data structure $DS$ that answers the following query: \[ \mathrm{RankOp}(q) = \begin{cases} \text{rank of } q \text{ in } S & \text{if } q\in S,\\ \text{arbitrary} & \text{otherwise.} \end{cases} \] Solutions to the MMPHF problem are in widespread use in both theory and practice. The best known upper bound for the problem encodes $DS$ in $O(n\log\log\log u)$ bits and performs queries in $O(\log u)$ time. It has been an open problem either to improve the space upper bound or to show that this somewhat odd-looking bound is tight. In this paper, we show the latter: specifically, any data structure (deterministic or randomized) for monotone minimal perfect hashing of any collection of $n$ elements from a universe of size $u$ requires $\Omega(n \log\log\log{u})$ expected bits to answer every query correctly. We achieve our lower bound by defining a graph $\mathbf{G}$ whose nodes are the ${u \choose n}$ possible inputs and in which two nodes are adjacent if they cannot share the same $DS$. The size of $DS$ is then lower bounded by the logarithm of the chromatic number of $\mathbf{G}$. Finally, we show that the fractional chromatic number (and hence the chromatic number) of $\mathbf{G}$ is at least $2^{\Omega(n \log\log\log u)}$.
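To make the query contract concrete, here is a deliberately naive baseline that stores $S$ outright: it uses $\Theta(n \log u)$ bits, far more than the optimal $\Theta(n\log\log\log u)$, but it shows why the "arbitrary answer" clause for $q \notin S$ is what space-efficient solutions exploit. This sketch is for illustration and is not one of the data structures discussed in the paper:

```python
import bisect

class ExactRank:
    """Stores the sorted keys explicitly: n log u bits of space."""
    def __init__(self, keys):
        self.keys = sorted(keys)

    def rank(self, q):
        # For q in S this is its rank; for q not in S the MMPHF
        # contract would have allowed any answer at all.
        return bisect.bisect_left(self.keys, q)

ds = ExactRank([3, 17, 42, 99])
print(ds.rank(42))  # 2
```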

With the aid of hardware and software developments, there has been a surge of interest in solving partial differential equations with deep learning techniques, and integration with domain decomposition strategies has recently attracted considerable attention due to the enhanced representation and parallelization capacity of the network solution. While several works already substitute the numerical solvers of overlapping Schwarz methods with deep learning approaches, the non-overlapping counterpart has not been thoroughly studied, because of the inevitable interface overfitting problem, which propagates errors to neighbouring subdomains and eventually hampers the convergence of the outer iteration. In this work, a novel learning approach, the compensated deep Ritz method, is proposed to enable flux transmission across subregion interfaces with guaranteed accuracy, thereby allowing us to construct effective learning algorithms for realizing more general non-overlapping domain decomposition methods in the presence of overfitted interface conditions. Numerical experiments on a series of elliptic boundary value problems, including regular and irregular interfaces, low and high dimensions, and smooth and high-contrast coefficients on multiple subdomains, are carried out to validate the effectiveness of the proposed domain decomposition learning algorithms.
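For readers unfamiliar with the underlying building block, here is a minimal single-domain deep Ritz sketch in PyTorch (the compensated, non-overlapping multi-domain version in the paper adds interface flux terms that are omitted here). It minimizes the Ritz energy of $-u'' = \pi^2 \sin(\pi x)$ on $(0,1)$ with $u(0)=u(1)=0$; the network size, penalty weight, and sample counts are arbitrary assumptions:

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
f = lambda x: torch.pi**2 * torch.sin(torch.pi * x)

for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True)        # interior samples
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    energy = (0.5 * du**2 - f(x) * u).mean()          # Ritz functional
    xb = torch.tensor([[0.0], [1.0]])
    bc = (net(xb) ** 2).mean()                        # boundary penalty
    loss = energy + 500.0 * bc
    opt.zero_grad(); loss.backward(); opt.step()
```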

Stochastic rounding (SR) offers an alternative to the deterministic IEEE-754 floating-point rounding modes. In some applications, such as PDEs, ODEs and neural networks, SR empirically improves numerical behavior and convergence to accurate solutions, although no sound theoretical background has been provided. Recent works by Ipsen, Zhou, Higham, and Mary have computed SR probabilistic error bounds for basic linear algebra kernels. For example, the SR probabilistic bound on the forward error of the inner product is proportional to $\sqrt{n}u$ rather than the $nu$ of the default rounding mode, where $n$ is the vector length and $u$ the unit roundoff. To compute the bounds, these works show that the errors accumulated in the computation form a martingale. This paper proposes an alternative framework, based on computing the variance, to characterize SR errors. We pinpoint common error patterns in numerical algorithms and propose a lemma that bounds their variance. For any given probability, the Bienaymé-Chebyshev inequality then turns this variance bound into a probabilistic error bound that is tighter in several situations. Our method has the advantage of providing a tight probabilistic bound for all algorithms fitting our model. We show how the method can be applied to give SR error bounds for the inner product and for Horner polynomial evaluation.
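The $\sqrt{n}u$ versus $nu$ gap is easy to observe experimentally. The toy below uses a fixed-point grid rather than true floating point, and compares SR against deliberately biased truncation (round toward zero), since on random data unbiased round-to-nearest shows little drift; both simplifications are assumptions made for brevity:

```python
import random

STEP = 2.0**-12  # grid spacing; plays the role of the unit roundoff u

def sr(x):
    """Round x to the grid, up with probability equal to the fraction."""
    q, frac = divmod(x / STEP, 1.0)
    return (q + (random.random() < frac)) * STEP

def inner(xs, ys, rnd):
    acc = 0.0
    for x, y in zip(xs, ys):
        acc = rnd(acc + rnd(x * y))  # round after every operation
    return acc

n = 10_000
xs = [random.random() for _ in range(n)]
ys = [random.random() for _ in range(n)]
exact = sum(x * y for x, y in zip(xs, ys))
down = lambda x: (x // STEP) * STEP      # biased deterministic rounding
print("SR error:   ", abs(inner(xs, ys, sr) - exact))    # ~ sqrt(n)*u
print("trunc error:", abs(inner(xs, ys, down) - exact))  # ~ n*u drift
```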

The flow-driven spectral chaos (FSC) is a recently developed method for tracking and quantifying uncertainties in the long-time response of stochastic dynamical systems using the spectral approach. The method uses a novel concept called 'enriched stochastic flow maps' to construct an evolving finite-dimensional random function space that is both accurate and computationally efficient in time. In this paper, we present a multi-element version of the FSC method (ME-FSC for short) to tackle, in particular, dynamical systems that are inherently discontinuous over the probability space. In ME-FSC, the random domain is partitioned into several elements, and the problem is then solved separately on each random element using the FSC method. The results are subsequently aggregated to compute the probability moments of interest using the law of total probability. To demonstrate the effectiveness of the ME-FSC method in dealing with discontinuities and long-time integration of stochastic dynamical systems, four representative numerical examples are presented, including the Van der Pol oscillator problem and the Kraichnan-Orszag three-mode problem. Results show that the ME-FSC method is capable of solving problems that have strong nonlinear dependencies over the probability space, both reliably and at low computational cost.
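The aggregation step via the law of total probability is standard and worth spelling out: if element $k$ has probability $p_k$, local mean $m_k$, and local second moment $s_k$, then the total mean is $\sum_k p_k m_k$ and the total variance is $\sum_k p_k s_k - (\sum_k p_k m_k)^2$. A minimal sketch (the per-element moments would come from FSC solves, here they are just inputs):

```python
def aggregate(elements):
    """elements: list of (prob, mean, second_moment), one per element."""
    mean = sum(p * m for p, m, _ in elements)
    second = sum(p * s for p, _, s in elements)
    return mean, second - mean**2  # total mean and total variance

# Two equally likely random elements with different local statistics:
print(aggregate([(0.5, 1.0, 2.0), (0.5, 3.0, 10.0)]))  # (2.0, 2.0)
```

Note that the total variance (2.0) exceeds each local variance (1.0): the spread of the local means across elements contributes the extra term.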

Functors with an instance of the Traversable type class can be thought of as data structures that permit a traversal of their elements. This has been made precise by the correspondence between traversable functors and finitary containers (also known as polynomial functors), established in the context of total, necessarily terminating, functions. However, the Haskell language is non-strict and permits functions that do not terminate. It has long been observed that traversals can in fact operate over infinite lists, for example when distributing the Reader applicative. The result of such a traversal remains an infinite structure, yet it is nonetheless productive: each successive amount of finite computation yields either termination or a further result. To investigate this phenomenon, we draw on tools from guarded recursion, making use of equational reasoning directly in Haskell.
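The paper's reasoning lives in Haskell with guarded recursion; purely as a loose analogue, Python generators exhibit the same productivity phenomenon. Distributing the Reader applicative turns a stream of environment-reading functions into a function returning a stream, and demanding a finite prefix of the result requires only finite work even though the input is infinite:

```python
from itertools import count, islice

def traverse_reader(fs):
    """Distribute a (possibly infinite) stream of Reader-style
    functions over a shared environment, yielding a stream of results:
    the shape of `traverse` with the Reader applicative."""
    def run(env):
        return (f(env) for f in fs)
    return run

# An infinite stream of environment-reading functions:
fs = ((lambda env, n=n: env + n) for n in count(0))
stream = traverse_reader(fs)(10)
print(list(islice(stream, 5)))  # [10, 11, 12, 13, 14] -- productive
```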

Policy learning using historical observational data is an important problem with widespread applications, such as choosing which offers, prices, or advertisements to send to customers, or which medication to prescribe to a patient. However, the existing literature rests on the crucial assumption that the future environment in which the learned policy will be deployed is the same as the past environment that generated the data, an assumption that is often false or too coarse an approximation. In this paper, we lift this assumption and aim to learn a distributionally robust policy from incomplete observational data. We first present a policy evaluation procedure that allows us to assess how well the policy does under a worst-case environment shift. We then establish a central-limit-theorem-type guarantee for this policy evaluation scheme. Leveraging this evaluation scheme, we further propose a novel learning algorithm that can learn a policy robust to adversarial perturbations and unknown covariate shifts, with a performance guarantee based on the theory of uniform convergence. Finally, we empirically test the effectiveness of our proposed algorithm on synthetic datasets and demonstrate that it provides the robustness that is missing from standard policy learning algorithms. We conclude the paper with a comprehensive application of our methods in the context of a real-world voting dataset.
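To convey the flavor of worst-case policy evaluation, here is a hedged sketch under one simple shift model that is not necessarily the paper's: the future environment reweights the data distribution with likelihood ratios bounded in $[1/\Gamma, \Gamma]$. The worst-case mean reward is then attained by a threshold rule in sorted order, which a short scan finds:

```python
def worst_case_value(rewards, gamma):
    """Min over w_i in [1/gamma, gamma] of sum(w*r)/sum(w).
    The optimum puts maximal weight on the k worst outcomes."""
    r = sorted(rewards)
    n = len(r)
    best = float("inf")
    for k in range(n + 1):  # first k points get weight gamma
        num = gamma * sum(r[:k]) + sum(r[k:]) / gamma
        den = gamma * k + (n - k) / gamma
        best = min(best, num / den)
    return best

# Nominal mean is 8/3; an adversary with gamma=2 can push it down:
print(worst_case_value([1.0, 2.0, 5.0], gamma=2.0))  # ~1.83
```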

We present a new approach, the ALVar estimator, to the estimation of asymptotic variance in sequential Monte Carlo methods, or particle filters. The method, which adaptively adjusts the lag of the estimator proposed in [Olsson, J. and Douc, R. (2019). Numerically stable online estimation of variance in particle filters. Bernoulli, 25(2), pp. 1504-1535], applies to very general distribution flows and particle filters, including auxiliary particle filters with adaptive resampling. The algorithm operates entirely online, in the sense that it is able to monitor the variance of the particle filter in real time with, on average, constant computational complexity and memory requirements per iteration. Crucially, it does not require the calibration of any algorithmic parameter. Since it estimates the variance solely from the genealogy of the propagated particle cloud, without additional simulations, the routine requires only minor code additions to the underlying particle algorithm. Finally, we prove that the ALVar estimator is consistent for the true asymptotic variance as the number of particles tends to infinity and illustrate numerically its superiority to existing approaches.
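For orientation, the genealogy-based estimators in this line of work descend from the Chan-Lai-style construction: group the weighted residuals of the test function by each particle's time-zero ancestor ("Eve" index) and sum the squared group totals. A minimal sketch of that basic estimator is below; the adaptive-lag mechanism that defines ALVar itself is the paper's contribution and is deliberately omitted:

```python
import numpy as np

def genealogy_variance(phi, weights, eve):
    """phi: test-function values at the current particles;
    weights: normalized importance weights;
    eve: index of each particle's time-zero ancestor."""
    phi = np.asarray(phi, dtype=float)
    weights = np.asarray(weights, dtype=float)
    eve = np.asarray(eve, dtype=int)
    n = len(phi)
    est = np.sum(weights * phi)              # particle-filter estimate
    resid = weights * (phi - est)
    per_eve = np.zeros(n)
    np.add.at(per_eve, eve, resid)           # sum residuals per Eve
    return est, n * np.sum(per_eve**2)       # estimate, asymptotic var
```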

Non-convex optimization is ubiquitous in modern machine learning. Researchers devise non-convex objective functions and optimize them using off-the-shelf optimizers such as stochastic gradient descent and its variants, which leverage the local geometry and update iteratively. Even though solving non-convex functions is NP-hard in the worst case, the optimization quality in practice is often not an issue -- optimizers are largely believed to find approximate global minima. Researchers hypothesize a unified explanation for this intriguing phenomenon: most of the local minima of the practically-used objectives are approximately global minima. We rigorously formalize it for concrete instances of machine learning problems.
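One concrete instance where this hypothesis is known to hold (a standard result in the low-rank factorization literature, not a claim from the abstract above): for a positive semidefinite matrix $M$, the objective $\|M - UU^\top\|_F^2$ has no spurious local minima, so plain gradient descent from a random start reliably reaches a global minimum. A quick numpy check, with sizes and step size chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))
M = A @ A.T                       # rank-3 PSD target
U = rng.normal(size=(20, 3))      # random initialization

for _ in range(5000):
    G = 4 * (U @ U.T - M) @ U     # gradient of ||M - U U^T||_F^2
    U -= 1e-3 * G

print(np.linalg.norm(M - U @ U.T))  # ~0: a global minimum is reached
```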
