
For $m, d \in {\mathbb N}$, a jittered sampling point set $P$ having $N = m^d$ points in $[0,1)^d$ is constructed by partitioning the unit cube $[0,1)^d$ into $m^d$ axis-aligned cubes of equal size and then placing one point independently and uniformly at random in each cube. We show that there are constants $c, C > 0$ such that for all $d$ and all $m \ge d$ the expected non-normalized star discrepancy of a jittered sampling point set satisfies \[c \,dm^{\frac{d-1}{2}} \sqrt{1 + \log(\tfrac md)} \le {\mathbb E} D^*(P) \le C\, dm^{\frac{d-1}{2}} \sqrt{1 + \log(\tfrac md)}.\] This discrepancy is thus smaller by a factor of $\Theta\big(\sqrt{\frac{1+\log(m/d)}{m/d}}\,\big)$ than that of a uniformly distributed random point set of $m^d$ points. This result improves both the upper and the lower bound for the discrepancy of jittered sampling given by Pausinger and Steinerberger (Journal of Complexity, 2016). It also removes the asymptotic requirement that $m$ be sufficiently large compared to $d$.
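The construction itself is short enough to state in code. Below is a minimal sketch in Python (numpy assumed); the function name `jittered_sample` is ours for illustration, not from the paper.

```python
import numpy as np

def jittered_sample(m: int, d: int, rng=None) -> np.ndarray:
    """Return N = m**d 'jittered' points in [0,1)^d: one uniform point
    per axis-aligned subcube of side 1/m."""
    rng = np.random.default_rng(rng)
    # Lower-left corners of the m**d subcubes of the grid partition.
    grid = np.stack(np.meshgrid(*[np.arange(m)] * d, indexing="ij"), axis=-1)
    corners = grid.reshape(-1, d) / m
    # One independent uniform offset inside each subcube of side 1/m.
    return corners + rng.random(corners.shape) / m

points = jittered_sample(m=4, d=2)   # 16 points, one per cell of a 4x4 grid
```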

Related Content

The multidisciplinary Journal of Complexity publishes original research papers containing substantial mathematical results on complexity, broadly conceived. In computational complexity, the emphasis is on complexity over the reals, with lower bounds and optimal algorithms. The Journal of Complexity also publishes articles that provide major new algorithms or make significant progress on upper bounds, as well as work on other models of computation, such as the Turing machine model.
January 14, 2022

We introduce a computational origami problem which we call the segment folding problem: given a set of $n$ line segments in the plane, the aim is to make creases along all segments in the minimum number of folding steps. Note that a folding step might alter the relative position of the segments, and a segment could split into two. We show that it is NP-hard to determine whether $n$ line segments can be folded using $n$ simple folding operations.

This paper deals with robust inference for parametric copula models. Estimation using Canonical Maximum Likelihood can be unstable, especially in the presence of outliers. We propose to use a procedure based on the Maximum Mean Discrepancy (MMD) principle. We derive non-asymptotic oracle inequalities, consistency and asymptotic normality of this new estimator. In particular, the oracle inequality holds without any assumption on the copula family, and can be applied in the presence of outliers or under misspecification. Moreover, in our MMD framework, statistical inference becomes feasible for copula models that have no density with respect to the Lebesgue measure on $[0,1]^d$, such as the Marshall-Olkin copula. A simulation study shows the robustness of our new procedures, especially compared to pseudo-maximum likelihood estimation. An R package implementing the MMD estimator for copula models is available.
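To make the estimation principle concrete, here is a hedged sketch, not the paper's exact procedure or the mentioned R package: we fit a one-parameter Clayton copula by grid search over a biased squared-MMD estimate with a Gaussian kernel. All names (`sample_clayton`, `mmd2`) and the bandwidth are our choices.

```python
import numpy as np

def sample_clayton(theta, n, rng):
    """Marshall-Olkin sampler for the bivariate Clayton copula, theta > 0."""
    w = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n, 1))
    e = rng.exponential(size=(n, 2))
    return (1.0 + e / w) ** (-1.0 / theta)

def mmd2(x, y, bw=0.1):
    """Biased estimate of MMD^2 between samples x, y (Gaussian kernel)."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bw**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
u = sample_clayton(2.0, 500, rng)      # observations on [0,1]^2, true theta = 2
u[:25] = rng.random((25, 2))           # contaminate with 5% outliers
thetas = np.linspace(0.5, 4.0, 36)     # grid search keeps the sketch short
scores = [mmd2(u, sample_clayton(t, 500, rng)) for t in thetas]
print("MMD estimate of theta:", thetas[int(np.argmin(scores))])
```

In practice one would minimize the MMD criterion with a gradient-based method rather than a grid, but the robustness to the injected outliers is already visible in this toy setup.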

Petri nets, equivalently presentable as vector addition systems with states, are an established model of concurrency with widespread applications. The reachability problem, where we ask whether from a given initial configuration there exists a sequence of valid execution steps reaching a given final configuration, is the central algorithmic problem for this model. The complexity of the problem had remained, until recently, one of the hardest open questions in the verification of concurrent systems. A first upper bound was provided only in 2015 by Leroux and Schmitz, then refined by the same authors to a non-primitive-recursive Ackermannian upper bound in 2019. The exponential space lower bound, shown by Lipton already in 1976, remained the only known lower bound for over 40 years, until a breakthrough non-elementary lower bound by Czerwi{\'n}ski, Lasota, Lazi{\'c}, Leroux and Mazowiecki in 2019. Finally, a matching Ackermannian lower bound, announced this year by Czerwi{\'n}ski and Orlikowski, and independently by Leroux, established the complexity of the problem. Our primary contribution is an improvement of the former construction, making it conceptually simpler and more direct. On the way we improve the lower bound for vector addition systems with states in fixed dimension (or, equivalently, Petri nets with a fixed number of places): while Czerwi{\'n}ski and Orlikowski prove $F_k$-hardness (hardness for the $k$th level of the Grzegorczyk hierarchy) in dimension $6k$, our simplified construction yields $F_k$-hardness already in dimension $3k+2$.
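To fix intuition for the model (not for the lower-bound construction), the following naive breadth-first exploration of a small VASS illustrates configurations and steps. Since reachability is Ackermann-complete and the reachable set can be infinite, the search is capped and is not a decision procedure; all names are ours.

```python
from collections import deque

def bounded_reach(transitions, init, target, max_configs=100_000):
    """transitions: list of (state, state', delta-vector).
    A configuration (q, v) steps to (q', v + delta) if v + delta >= 0."""
    seen, queue = {init}, deque([init])
    while queue and len(seen) < max_configs:
        q, v = queue.popleft()
        if (q, v) == target:
            return True
        for (p, p2, delta) in transitions:
            if p != q:
                continue
            w = tuple(a + b for a, b in zip(v, delta))
            if min(w) >= 0 and (p2, w) not in seen:
                seen.add((p2, w))
                queue.append((p2, w))
    return False  # not found within the explored portion

# Two counters: transfer the first counter into the second, one unit at a time.
ts = [("q", "q", (-1, +1))]
print(bounded_reach(ts, ("q", (3, 0)), ("q", (0, 3))))  # True
```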

Robust discrete optimization is a highly active field of research where a plenitude of combinations between decision criteria, uncertainty sets and underlying nominal problems are considered. Usually, a robust problem becomes harder to solve than its nominal counterpart, even if it remains in the same complexity class. For this reason, specialized solution algorithms have been developed. To further drive the development of stronger solution algorithms and to facilitate the comparison between methods, a set of benchmark instances is necessary but so far missing. In this paper we take a further step towards this goal by proposing several instance generation procedures for combinations of min-max, min-max regret, two-stage and recoverable robustness with interval, discrete or budgeted uncertainty sets. Besides sampling methods that go beyond uniform sampling, the de facto standard for producing instances, we also consider optimization models that construct hard instances. Using the selection problem as the nominal ground problem, we are able to generate instances that are several orders of magnitude harder to solve than uniformly sampled instances when solving them with a general mixed-integer programming solver. All instances and generator codes are made available online.
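As an illustration of the uniform-sampling baseline the paper improves on, here is a minimal generator plus a brute-force solver for min-max selection with discrete (scenario) uncertainty; parameter ranges and all names are our assumptions, and a MIP solver would replace the enumeration for realistic sizes.

```python
import itertools
import numpy as np

def uniform_instance(n, p, k, c_max=100, rng=None):
    """n items, choose exactly p, k cost scenarios, costs ~ U{1..c_max}."""
    rng = np.random.default_rng(rng)
    return rng.integers(1, c_max + 1, size=(k, n)), p

def minmax_selection_bruteforce(costs, p):
    """min over p-subsets S of (max over scenarios of the cost of S)."""
    k, n = costs.shape
    best = None
    for s in itertools.combinations(range(n), p):
        val = costs[:, list(s)].sum(axis=1).max()
        if best is None or val < best[0]:
            best = (val, s)
    return best

costs, p = uniform_instance(n=10, p=4, k=3, rng=1)
print(minmax_selection_bruteforce(costs, p))
```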

In this article we prove that a class of Goppa codes whose Goppa polynomial is of the form $g(x) = x + x^q + \cdots + x^{q^{m-1}}$ with $m \geq 3$ (i.e., $g(x)$ is a trace polynomial from a field extension of degree $m \geq 3$) has a better minimum distance than the Goppa bound $d \geq 2\deg(g(x)) + 1$ implies. Our improvement is based on finding another Goppa polynomial $h$ such that $C(L, g) = C(M, h)$ but $\deg(h) > \deg(g)$. This stands in contrast to trace Goppa codes over quadratic field extensions (i.e., the case $m = 2$), for which the Goppa bound is sharp.
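As a small worked instance of the bound being improved (the parameters $q = 2$, $m = 3$ are our choice for illustration): here \[ g(x) = x + x^{2} + x^{4} = \operatorname{Tr}_{\mathbb{F}_{2^{3}}/\mathbb{F}_{2}}(x), \qquad \deg g = q^{m-1} = 4, \] so the classical Goppa bound only guarantees $d \geq 2\deg g + 1 = 9$; exhibiting an equivalent representation $C(L, g) = C(M, h)$ with $\deg h > \deg g$ immediately strengthens this guarantee.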

Moment methods are an important means of density estimation, but they generally depend strongly on the choice of feasible functions, which severely affects performance. We propose a non-classical parameterization for density estimation using sample moments, one that does not require choosing such functions. The parameterization is induced by the Kullback-Leibler distance, and its solution, which we prove exists and is unique subject to a simple prior that does not depend on the data, can be obtained by convex optimization. Simulation results show the performance of the proposed estimator in estimating multi-modal densities that are mixtures of different types of functions.
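For flavor, here is a sketch of the classical KL/maximum-entropy route to moment-based estimation (density proportional to $\exp(\sum_i \lambda_i x^i)$, with $\lambda$ chosen so model moments match sample moments). It illustrates the convex-optimization character of such approaches, not the paper's exact parameterization; all names are ours.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

grid = np.linspace(-5, 5, 2001)   # numerical support for integration
dx = grid[1] - grid[0]

def dual(lam, mu_hat, feats):
    # Convex dual: log partition function minus <lam, sample moments>.
    return logsumexp(feats @ lam) + np.log(dx) - lam @ mu_hat

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.7, 500), rng.normal(2, 0.7, 500)])
K = 4                                         # match moments of order 1..K
feats = np.stack([grid**i for i in range(1, K + 1)], axis=1)
mu_hat = np.array([(x**i).mean() for i in range(1, K + 1)])
lam = minimize(dual, np.zeros(K), args=(mu_hat, feats)).x
# Normalized density estimate on the grid (bimodal, matching the mixture).
dens = np.exp(feats @ lam - logsumexp(feats @ lam) - np.log(dx))
```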

Sampling-based motion planning algorithms are widely used in robotics because they are very effective in high-dimensional spaces. However, the success rate and quality of the solutions are determined by an adequate selection of their parameters, such as the distance between states, the local planner, and the sampling distribution. For robots with large configuration spaces or dynamic restrictions, selecting these parameters is a challenging task. This paper proposes a method for improving the performance of one of the most popular families of sampling-based algorithms, the Rapidly-exploring Random Trees (RRTs), by adjusting the sampling method. The idea is to replace the uniform probability density function (U-PDF) with a custom distribution (C-PDF) learned from previously successful queries in similar tasks. With a few samples, our method builds a custom distribution that allows the RRT to grow toward promising states that will lead to a solution. We tested our method in several autonomous driving tasks, such as parking maneuvers, obstacle clearance and narrow-passage scenarios. The results show that the proposed method outperforms the original RRT and several improved versions in terms of success rate, tree density and computation time. In addition, the proposed method requires a relatively small set of examples, unlike current deep learning techniques that require vast amounts of examples.
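A minimal 2-D sketch of the idea follows: swap the RRT's uniform sampler (U-PDF) for a distribution fitted to states from earlier successful queries. We use a Gaussian KDE purely for illustration (the paper's C-PDF construction may differ), collision checking is omitted, and all names are ours.

```python
import numpy as np
from scipy.stats import gaussian_kde

def rrt(start, goal, sampler, steps=500, step_size=0.05, rng=None):
    rng = np.random.default_rng(rng)
    nodes = [np.asarray(start, float)]
    for _ in range(steps):
        s = sampler(rng)                              # C-PDF instead of U-PDF
        near = min(nodes, key=lambda n: np.linalg.norm(n - s))
        new = near + step_size * (s - near) / (np.linalg.norm(s - near) + 1e-9)
        nodes.append(new)                             # (collision check omitted)
        if np.linalg.norm(new - goal) < step_size:
            return nodes, True
    return nodes, False

# "Learned" sampler: KDE over states taken from earlier successful solutions.
prior_states = np.random.default_rng(1).normal([0.8, 0.8], 0.1, size=(200, 2)).T
kde = gaussian_kde(prior_states)
c_pdf = lambda rng: kde.resample(1, seed=rng)[:, 0]   # learned distribution
u_pdf = lambda rng: rng.random(2)                     # uniform baseline
print(rrt([0.1, 0.1], np.array([0.8, 0.8]), c_pdf)[1])  # typically True
```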

In this study, we analyze the efficiency of a protocol with discrete modulation of continuous-variable non-Gaussian states: coherent states with one photon added and then one photon subtracted (PASCS). We calculate the secure key generation rate against collective attacks using the fact that Eve's information can be bounded via the protocol with Gaussian modulation, which in turn is unconditionally secure. Our results for a four-state protocol show that the PASCS always outperform the equivalent coherent-state protocol under the same environmental conditions. Interestingly, we find that for the protocol using discrete-modulated PASCS, the noisier the line, the better its performance compared to the protocol using coherent states. Thus, our proposal proves advantageous for performing quantum key distribution in non-ideal situations.

We propose a general and scalable approximate sampling strategy for probabilistic models with discrete variables. Our approach uses gradients of the likelihood function with respect to its discrete inputs to propose updates in a Metropolis-Hastings sampler. We show empirically that this approach outperforms generic samplers in a number of difficult settings, including Ising models, Potts models, restricted Boltzmann machines, and factorial hidden Markov models. We also demonstrate the use of our improved sampler for training deep energy-based models on high-dimensional discrete data. This approach outperforms variational auto-encoders and existing energy-based models. Finally, we give bounds showing that our approach is near-optimal in the class of samplers that propose local updates.
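On an Ising model the gradient-informed proposal is easy to sketch. Below, the first-order change from flipping each spin (exact for this quadratic case) weights a categorical proposal over coordinates, followed by a Metropolis-Hastings correction; this is our compact reading of the idea, and variable names are ours.

```python
import numpy as np

def proposal_logits(x, J):
    # Unnormalized log-probability f(x) = x @ J @ x, so grad f = 2 J x.
    # Flipping spin i changes x_i by -2 x_i, giving delta_i = -4 x_i (J x)_i;
    # the proposal weights each coordinate by delta_i / 2.
    return -2.0 * x * (J @ x)

def step(x, J, rng):
    logits = proposal_logits(x, J)
    p = np.exp(logits - logits.max()); p /= p.sum()
    i = rng.choice(len(x), p=p)
    y = x.copy(); y[i] = -y[i]
    logits_y = proposal_logits(y, J)
    q = np.exp(logits_y - logits_y.max()); q /= q.sum()
    # MH correction: exact change in f plus the reverse/forward proposal ratio.
    log_alpha = (y @ J @ y - x @ J @ x) + np.log(q[i]) - np.log(p[i])
    return y if np.log(rng.random()) < log_alpha else x

rng = np.random.default_rng(0)
n = 20
J = 0.1 * rng.standard_normal((n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
x = rng.choice([-1.0, 1.0], size=n)
for _ in range(1000):
    x = step(x, J, rng)
```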

The Normalized Cut (NCut) objective function, widely used in data clustering and image segmentation, quantifies the cost of graph partitioning in a way that favors balanced clusters or segments, assigning them lower costs than unbalanced partitionings. However, this bias is so strong that it avoids singleton partitions entirely, even when vertices are only very weakly connected to the rest of the graph. Motivated by the B\"uhler-Hein family of balanced cut costs, we propose the family of Compassionately Conservative Balanced (CCB) Cut costs, which are indexed by a parameter that can be used to strike a compromise between the desire to avoid too many singleton partitions and the notion that all partitions should be balanced. We show that CCB-Cut minimization can be relaxed into an orthogonally constrained $\ell_{\tau}$-minimization problem that coincides with the problem of computing Piecewise Flat Embeddings (PFE) for one particular index value, and we present an algorithm for solving the relaxed problem by iteratively minimizing a sequence of reweighted Rayleigh quotients (IRRQ). Using images from the BSDS500 database, we show that image segmentation based on CCB-Cut minimization provides better accuracy with respect to ground truth and greater variability in region size than NCut-based image segmentation.
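For context, the classical relaxed NCut bipartition that the CCB family generalizes can be sketched in a few lines (second eigenvector of the normalized Laplacian, thresholded); the paper's IRRQ algorithm for the $\ell_{\tau}$ relaxation is not reproduced here, and all names are ours.

```python
import numpy as np

def ncut_bipartition(W):
    """W: symmetric nonnegative affinity matrix; returns boolean labels."""
    d = W.sum(axis=1)
    d_isqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    # Normalized Laplacian L_sym = I - D^{-1/2} W D^{-1/2}.
    L_sym = np.eye(len(W)) - d_isqrt[:, None] * W * d_isqrt[None, :]
    vals, vecs = np.linalg.eigh(L_sym)
    fiedler = d_isqrt * vecs[:, 1]          # second-smallest eigenvector
    return fiedler > np.median(fiedler)

# Two cliques of 5 vertices joined by one weak edge.
W = np.zeros((10, 10))
W[:5, :5] = 1; W[5:, 5:] = 1; W[0, 5] = W[5, 0] = 0.1
np.fill_diagonal(W, 0)
print(ncut_bipartition(W))   # separates {0..4} from {5..9}
```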
