
Log-concave sampling has witnessed remarkable algorithmic advances in recent years, but the corresponding problem of proving lower bounds for this task has remained elusive, with lower bounds previously known only in dimension one. In this work, we establish the following query lower bounds: (1) sampling from strongly log-concave and log-smooth distributions in dimension $d\ge 2$ requires $\Omega(\log \kappa)$ queries, which is sharp in any constant dimension, and (2) sampling from Gaussians in dimension $d$ (hence also from general log-concave and log-smooth distributions in dimension $d$) requires $\widetilde \Omega(\min(\sqrt\kappa \log d, d))$ queries, which is nearly sharp for the class of Gaussians. Here $\kappa$ denotes the condition number of the target distribution. Our proofs rely upon (1) a multiscale construction inspired by work on the Kakeya conjecture in geometric measure theory, and (2) a novel reduction that demonstrates that block Krylov algorithms are optimal for this problem, as well as connections to lower bound techniques based on Wishart matrices developed in the matrix-vector query literature.
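
As a hypothetical illustration of the matrix-vector query model behind the second bound, the sketch below builds a block Krylov subspace for an ill-conditioned Gaussian covariance using only matrix-vector queries; the toy setup, function names, and parameters are our own and are not taken from the paper.

```python
import numpy as np

def block_krylov_queries(matvec, d, block_size, num_iters, rng):
    """Build an orthonormal basis of the block Krylov subspace
    span{V, A V, ..., A^{q-1} V} using only matrix-vector queries."""
    V = rng.standard_normal((d, block_size))
    blocks = [V]
    for _ in range(num_iters - 1):
        V = matvec(V)            # one block of matrix-vector queries
        blocks.append(V)
    K = np.hstack(blocks)
    Q, _ = np.linalg.qr(K)       # orthonormal basis of the queried subspace
    return Q

# Toy target: Gaussian covariance with condition number kappa.
rng = np.random.default_rng(0)
d, kappa = 50, 100.0
Sigma = np.diag(np.linspace(1.0, kappa, d))
Q = block_krylov_queries(lambda V: Sigma @ V, d, block_size=5, num_iters=4, rng=rng)
print(Q.shape)  # (50, 20): 20 query directions spanning the Krylov subspace
```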

Related Content

Malware detection is an important topic in current cybersecurity, and Machine Learning appears to be one of the main solutions under consideration, even though problems generalizing to new malware remain. With the aim of exploring the potential of quantum machine learning in this domain, our previous work showed that quantum neural networks do not perform well on image-based malware detection when using a few qubits. In order to enhance the performance of our quantum algorithms for malware detection using images, without increasing the resources needed in terms of qubits, we implement a new preprocessing of our dataset using the Grayscale method, and we couple it with a model composed of five distributed quantum convolutional networks and a scoring function. We obtain an increase of around 20\% in our results, both in test accuracy and in F1-score.
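
For context, here is a minimal sketch of the common bytes-to-grayscale preprocessing for malware binaries; the fixed image width, the resizing step, and the function name are illustrative assumptions, not the authors' exact Grayscale pipeline.

```python
import numpy as np
from PIL import Image

def binary_to_grayscale(path, width=64, height=64):
    """Interpret a binary's raw bytes as pixel intensities and resize to
    a fixed-size grayscale image (a common malware-imaging step).
    Assumes the file holds at least `width` bytes."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    rows = len(data) // width
    img = Image.fromarray(data[: rows * width].reshape(rows, width), mode="L")
    return img.resize((width, height))  # fixed-size input for the model
```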

This paper is concerned with recovering the solution of a final value problem associated with a parabolic equation involving a nonlinear source and a non-local term, which to the best of our knowledge has not been studied earlier. It is shown that the considered problem is ill-posed, and thus some regularization method has to be employed in order to obtain stable approximations. In this regard, we obtain regularized approximations by solving nonlinear integral equations derived by considering a truncated version of the Fourier expansion of the sought solution. Under different Gevrey smoothness assumptions on the exact solution, we provide parameter choice strategies and obtain error estimates. A key tool in deriving such estimates is a version of Gr{\"o}nwall's inequality for iterated integrals, which, perhaps, is proposed and analysed here for the first time.
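
As a schematic of the truncation idea (our notation, and for the linear equation without the nonlinear source), consider a backward heat problem with final data $g = u(\cdot, T)$: expanding in eigenpairs $(\lambda_n, \phi_n)$ of the spatial operator, the exact solution's Fourier coefficients are amplified by the factor $e^{\lambda_n(T-t)}$, so keeping only the first $N$ modes yields the stable approximation

$$ u_N(x,t) \;=\; \sum_{n=1}^{N} e^{\lambda_n (T-t)}\,\langle g, \phi_n\rangle\, \phi_n(x), $$

where the truncation level $N$ plays the role of the regularization parameter and is chosen according to the noise level and the Gevrey smoothness of the exact solution.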

The regularity of refinable functions has been analysed in an extensive literature and is well understood in two cases: 1) univariate; 2) multivariate with an isotropic dilation matrix. The general (non-isotropic) case offered great resistance and was settled only recently with the development of the matrix method. In this paper we make the next step and extend the Littlewood-Paley type method, which is very efficient in the aforementioned special cases, to general equations with arbitrary dilation matrices. This gives formulas for the higher-order regularity in $W_2^k(\mathbb{R}^n)$ by means of the Perron eigenvalue of a finite-dimensional linear operator on a special cone. Applying these results to the recently introduced tile B-splines, we prove that they can have higher smoothness than the classical ones of the same order. Moreover, the two-digit tile B-splines have the minimal support of the mask among all refinable functions with the same order of approximation. This proves, in particular, the lowest algorithmic complexity of the corresponding subdivision schemes. Examples and numerical results are provided.
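
For reference (standard definitions, not quoted from the paper), a refinable function $\varphi$ with dilation matrix $M \in \mathbb{Z}^{n \times n}$ satisfies the refinement equation

$$ \varphi(x) \;=\; \sum_{k \in \mathbb{Z}^n} c_k\, \varphi(Mx - k), \qquad x \in \mathbb{R}^n, $$

where the coefficients $c_k$ form the mask; the isotropic case includes $M = 2I$, while tile B-splines arise from non-isotropic $M$ together with a digit set whose attractor tiles $\mathbb{R}^n$.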

Spatial data can come in a variety of different forms, but two of the most common generating models for such observations are random fields and point processes. Whilst it is known that spectral analysis can unify these two data forms, specific methodology for the related estimation has yet to be developed. In this paper, we solve this problem by extending multitaper estimation to estimate the spectral density matrix function for multivariate spatial data, where the processes can be any combination of point processes and random fields. We discuss finite-sample and asymptotic theory for the proposed estimators, as well as specific details of the implementation, including how to perform estimation on non-rectangular domains and the correct implementation of multitapering for processes sampled in different ways, e.g. continuously vs. on a regular grid.
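
A minimal sketch of multitaper spectral estimation on a regular 2-D grid is shown below, using separable sine tapers; the paper's estimator additionally covers point processes, multivariate cross-spectra, and irregular domains, none of which this toy handles. All names and choices here are ours.

```python
import numpy as np

def sine_taper(n, k):
    """k-th orthonormal sine taper of length n (k = 1, 2, ...)."""
    t = np.arange(1, n + 1)
    return np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * k * t / (n + 1))

def multitaper_spectrum_2d(field, num_tapers=3):
    """Average the periodograms of the field weighted by separable 2-D
    tapers (up to an overall density normalization)."""
    n1, n2 = field.shape
    est = np.zeros((n1, n2))
    for k in range(1, num_tapers + 1):
        for l in range(1, num_tapers + 1):
            taper = np.outer(sine_taper(n1, k), sine_taper(n2, l))
            est += np.abs(np.fft.fft2(field * taper)) ** 2
    return est / num_tapers**2

rng = np.random.default_rng(1)
S_hat = multitaper_spectrum_2d(rng.standard_normal((64, 64)))
```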

In this article, we study nonparametric inference for a covariate-adjusted regression function. This parameter captures the average association between a continuous exposure and an outcome after adjusting for other covariates. In particular, under certain causal conditions, this parameter corresponds to the average outcome had all units been assigned to a specific exposure level, known as the causal dose-response curve. We propose a debiased local linear estimator of the covariate-adjusted regression function, and demonstrate that our estimator converges pointwise to a mean-zero normal limit distribution. We use this result to construct asymptotically valid confidence intervals for function values and differences thereof. In addition, we use approximation results for the distribution of the supremum of an empirical process to construct asymptotically valid uniform confidence bands. Our methods do not require undersmoothing and permit the use of data-adaptive estimators of nuisance functions, and our estimator attains the optimal rate of convergence for a twice-differentiable function. We illustrate the practical performance of our estimator using numerical studies and an analysis of the effect of air pollution exposure on cardiovascular mortality.
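
To fix ideas, here is a toy univariate local linear smoother; the paper's estimator additionally debiases and adjusts for covariates via estimated nuisance functions, which this sketch omits.

```python
import numpy as np

def local_linear(x0, X, Y, h):
    """Local linear estimate of E[Y | X = x0] with a Gaussian kernel of
    bandwidth h (weighted least squares on an intercept and slope)."""
    sw = np.exp(-0.25 * ((X - x0) / h) ** 2)        # sqrt of kernel weights
    D = np.column_stack([np.ones_like(X), X - x0])  # local design matrix
    beta, *_ = np.linalg.lstsq(D * sw[:, None], Y * sw, rcond=None)
    return beta[0]                                  # intercept = fit at x0

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 500)
Y = np.sin(2 * np.pi * X) + 0.1 * rng.standard_normal(500)
print(local_linear(0.5, X, Y, h=0.05))  # close to sin(pi) = 0
```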

Many existing covariate shift adaptation methods estimate sample weights to be used in the risk estimation in order to mitigate the gap between the source and the target distribution. However, non-parametrically estimating the optimal weights typically involves computationally expensive hyper-parameter tuning that is crucial to the final performance. In this paper, we propose a new non-parametric approach to covariate shift adaptation which avoids estimating weights and has no hyper-parameter to be tuned. Our basic idea is to label unlabeled target data according to the $k$-nearest neighbors in the source dataset. Our analysis indicates that setting $k = 1$ is an optimal choice. Thanks to this property, there is no need to tune any hyper-parameters, unlike other non-parametric methods. Moreover, our method achieves a running time quasi-linear in the sample size with a theoretical guarantee, which to the best of our knowledge is the first such result in the literature. Our results include sharp rates of convergence for estimating the joint probability distribution of the target data. In particular, the variance of our estimators has the same rate of convergence as for standard parametric estimation despite their non-parametric nature. Our numerical experiments show that the proposed method brings a drastic reduction in the running time with accuracy comparable to that of the state-of-the-art methods.
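
A minimal sketch of the 1-nearest-neighbor labeling idea is given below; the function names and interface are ours, and the paper's quasi-linear running time relies on efficient neighbor search that plain defaults need not match.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_target_risk(X_src, y_src, X_tgt, predictions, loss):
    """Estimate the target-domain risk of `predictions` by pseudo-labeling
    each unlabeled target point with its nearest source label (k = 1)."""
    nn = NearestNeighbors(n_neighbors=1).fit(X_src)
    idx = nn.kneighbors(X_tgt, return_distance=False).ravel()
    pseudo_labels = y_src[idx]          # 1-NN pseudo-labels, no sample weights
    return np.mean(loss(pseudo_labels, predictions))

# Example with squared loss:
loss = lambda y, yhat: (y - yhat) ** 2
```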

Recently defined expectile regions capture the idea of centrality with respect to a multivariate distribution, but they fail to describe tail behavior, and it is not at all clear what should be understood by the tail of a multivariate distribution. Therefore, cone expectile sets are introduced, which take into account a vector preorder for the multi-dimensional data points. This provides a way of describing and clustering a multivariate distribution or data cloud with respect to an order relation. Fundamental properties of cone expectiles, including dual representations of both expectile regions and cone expectile sets, are established. It is shown that set-valued sublinear risk measures can be constructed from cone expectile sets in the same way as in the univariate case. Inverse functions of cone expectiles are defined, which should be considered rank functions rather than depth functions. Finally, expectile orders for random vectors are introduced and characterized via expectile rank functions.
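
For reference, the standard univariate definition that cone expectiles generalize (our notation): the $\tau$-expectile of a square-integrable random variable $X$ is the unique minimizer

$$ e_\tau(X) \;=\; \operatorname*{arg\,min}_{m \in \mathbb{R}} \; \mathbb{E}\!\left[ \tau\,\bigl((X-m)_+\bigr)^2 + (1-\tau)\,\bigl((X-m)_-\bigr)^2 \right], \qquad \tau \in (0,1), $$

with $e_{1/2}(X) = \mathbb{E}[X]$; the cone version replaces the scalar order on $\mathbb{R}$ by a vector preorder.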

We introduce an extension of first-order logic that comes equipped with additional predicates for reasoning about an abstract state. Sequents in the logic comprise a main formula together with pre- and postconditions in the style of Hoare logic, and the axioms and rules of the logic ensure that assertions about the state compose in the correct way. The main result of the paper is a realizability interpretation of our logic that extracts programs into a mixed functional/imperative language. All programs expressible in this language act on the state in a sequential manner, and we make this intuition precise by interpreting them in a semantic metatheory using the state monad. Our basic framework is very general, and our intention is that it can be instantiated and extended in a variety of different ways. We outline one such extension in detail: a monadic version of Heyting arithmetic with a well-founded while rule. We conclude by outlining several other directions for future work.
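
To make the state-monad reading concrete, here is a minimal Python rendering (illustrative only; the paper's extraction target is its own mixed functional/imperative language, and all names here are ours).

```python
# State monad: a computation is a function state -> (result, new_state).
def unit(x):
    return lambda s: (x, s)

def bind(m, f):
    def run(s):
        x, s1 = m(s)         # run m, threading the state sequentially
        return f(x)(s1)      # feed result and new state to the continuation
    return run

get = lambda s: (s, s)                  # read the current state
def put(s_new):
    return lambda s: (None, s_new)      # overwrite the state

# Example: increment a counter stored in the state, return its old value.
prog = bind(get, lambda n: bind(put(n + 1), lambda _: unit(n)))
print(prog(41))   # (41, 42)
```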

The log-rank conjecture, a longstanding problem in communication complexity, has persistently eluded resolution for decades. Consequently, some recent efforts have focused on potential approaches for establishing the conjecture in the special case of XOR functions, where the communication matrix is lifted from a boolean function, and the rank of the matrix equals the Fourier sparsity of the function, which is the number of its nonzero Fourier coefficients. In this note, we refute two conjectures. The first has origins in Montanaro and Osborne (arXiv'09) and is considered in Tsang et al. (FOCS'13), and the second one is due to Mande and Sanyal (FSTTCS'20). These conjectures were proposed in order to improve the best-known bound of Lovett (STOC'14) regarding the log-rank conjecture in the special case of XOR functions. Both conjectures speculate that the set of nonzero Fourier coefficients of the boolean function has some strong additive structure. We refute these conjectures by constructing two specific boolean functions tailored to each.
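
For reference (standard notation, not the authors'), every boolean function $f\colon \{0,1\}^n \to \{-1,1\}$ has a unique Fourier expansion

$$ f(x) \;=\; \sum_{S \subseteq [n]} \widehat{f}(S)\, \chi_S(x), \qquad \chi_S(x) = (-1)^{\sum_{i \in S} x_i}, $$

and its Fourier sparsity is $|\{S : \widehat{f}(S) \neq 0\}|$, which equals the rank of the communication matrix of the lifted XOR function $F(x,y) = f(x \oplus y)$.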

In recent years, object detection has experienced impressive progress. Despite these improvements, there is still a significant gap between the performance of detectors on small and large objects. We analyze a current state-of-the-art model, Mask R-CNN, on a challenging dataset, MS COCO. We show that the overlap between small ground-truth objects and the predicted anchors is much lower than the expected IoU threshold. We conjecture this is due to two factors: (1) only a few images contain small objects, and (2) even within images that contain them, small objects do not appear often enough. We thus propose to oversample images with small objects and to augment each of these images by copy-pasting small objects many times. This allows us to trade off the quality of the detector on large objects against its quality on small objects. We evaluate different pasting augmentation strategies and ultimately achieve a 9.7\% relative improvement on instance segmentation and a 7.1\% improvement on object detection of small objects, compared to the current state-of-the-art method on MS COCO.
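
A simplified sketch of the copy-paste oversampling step is given below; it pastes rectangular crops at random locations without mask blending or overlap checks, whereas the pasting strategies evaluated in the paper are more careful. All names are ours.

```python
import numpy as np

def paste_small_objects(image, objects, num_copies=3, rng=None):
    """Augment an image by pasting crops of small objects at random
    locations (a naive version of copy-paste oversampling).
    Assumes every crop is strictly smaller than the image."""
    rng = rng or np.random.default_rng()
    out = image.copy()
    H, W = out.shape[:2]
    for crop in objects:                   # each crop: (h, w, 3) pixel array
        h, w = crop.shape[:2]
        for _ in range(num_copies):
            y = rng.integers(0, H - h)
            x = rng.integers(0, W - w)
            out[y:y + h, x:x + w] = crop   # naive paste, no blending
    return out
```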
