
We study deviations of U-statistics when the samples have a heavy-tailed distribution, so that the kernel of the U-statistic does not have finite exponential moments at any positive point. We obtain an exponential upper bound on the tail of the U-statistic that clearly exhibits two regimes of decay: the first is Gaussian, and the second behaves like the tail of the kernel. For several common U-statistics, we also show that the upper bound has the right rate of decay as well as sharp constants by obtaining rough logarithmic limits, which in turn can be used to develop large deviation principles (LDPs) for U-statistics. In contrast to the usual LDP results in the literature, the processes considered in this work have an LDP speed slower than their sample size $n$.
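Schematically, the two regimes can be pictured as in the following display; this is only an illustration of the shape described above, with generic constants $c_1, c_2$, a variance proxy $\sigma^2$, and a kernel $h$ of order $m$, and it is not the precise bound derived in the paper:

\[
\mathbb{P}\bigl(U_n - \mathbb{E}U_n \ge t\bigr) \;\lesssim\; \exp\!\Bigl(-\frac{c_1\, n\, t^2}{\sigma^2}\Bigr) \;+\; n\,\mathbb{P}\bigl(h(X_1,\dots,X_m) \ge c_2\, n\, t\bigr).
\]

The first term captures the Gaussian regime at moderate $t$, while the second inherits the heavy tail of the kernel and dominates for large $t$.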



We consider a high-dimensional dynamic pricing problem under non-stationarity, where a firm sells products to $T$ sequentially arriving consumers who behave according to an unknown demand model with potential changes at unknown times. The demand model is assumed to be a high-dimensional generalized linear model (GLM), allowing for a feature vector in $\mathbb R^d$ that encodes product and consumer information. To achieve optimal revenue (i.e., least regret), the firm needs to learn and exploit the unknown GLMs while monitoring for potential change-points. To tackle such a problem, we first design a novel penalized likelihood-based online change-point detection algorithm for high-dimensional GLMs, which is the first algorithm in the change-point literature to achieve the optimal minimax localization error rate for high-dimensional GLMs. A change-point detection assisted dynamic pricing (CPDP) policy is further proposed and achieves a near-optimal regret of order $O(s\sqrt{\Upsilon_T T}\log(Td))$, where $s$ is the sparsity level and $\Upsilon_T$ is the number of change-points. This regret is accompanied by a minimax lower bound, demonstrating the optimality of CPDP (up to logarithmic factors). In particular, the optimality with respect to $\Upsilon_T$ appears for the first time in the dynamic pricing literature and is achieved via a novel accelerated exploration mechanism. Extensive simulation experiments and a real data application on online lending illustrate the efficiency of the proposed policy and the importance and practical value of handling non-stationarity in dynamic pricing.
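To fix ideas, a generic split-and-compare test illustrates how a penalized-likelihood criterion can flag a change-point in a GLM. This is a deliberately simplified sketch: the sparsity penalty is the off-the-shelf L1-penalized logistic regression from scikit-learn, the threshold and segment lengths are placeholders, and none of this reproduces the paper's CPDP algorithm or its guarantees.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def penalized_loglik(X, y, C=0.1):
    """In-sample log-likelihood of an L1-penalized (sparse) logistic GLM."""
    model = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
    p = model.predict_proba(X)[:, 1]
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def detect_change(X, y, threshold=10.0, min_seg=30):
    """Return the split point that most improves the penalized fit over a
    single GLM, or None if no split exceeds the (placeholder) threshold."""
    full = penalized_loglik(X, y)
    best_gain, best_tau = -np.inf, None
    for tau in range(min_seg, len(y) - min_seg):
        # Both segments need two response classes to fit a logistic GLM.
        if len(np.unique(y[:tau])) < 2 or len(np.unique(y[tau:])) < 2:
            continue
        gain = penalized_loglik(X[:tau], y[:tau]) + penalized_loglik(X[tau:], y[tau:]) - full
        if gain > best_gain:
            best_gain, best_tau = gain, tau
    return best_tau if best_gain > threshold else None
```

In an online setting, a test of this kind would be rerun over a sliding window as new observations arrive.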

Privacy protection methods, such as differentially private mechanisms, introduce noise into the resulting statistics, which often leads to complex and intractable sampling distributions. In this paper, we propose to use the simulation-based "repro sample" approach to produce statistically valid confidence intervals and hypothesis tests based on privatized statistics. We show that this methodology is applicable to a wide variety of private inference problems, appropriately accounts for biases introduced by privacy mechanisms (such as clamping), and improves over other state-of-the-art inference methods, such as the parametric bootstrap, in terms of the coverage and type I error of the private inference. We also develop significant improvements and extensions of the repro sample methodology for general models (not necessarily related to privacy), including 1) modifying the procedure to ensure guaranteed coverage and type I errors, even accounting for Monte Carlo error, and 2) proposing efficient numerical algorithms to implement the confidence intervals and $p$-values.
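The core simulation-based idea can be sketched as follows. The `privatize` mechanism (a clamped mean with Laplace noise), the data-generating model, and all tuning constants are hypothetical stand-ins chosen only for illustration, and the sketch omits the refinements for guaranteed coverage and Monte Carlo error mentioned above: a candidate parameter enters the confidence set whenever the observed privatized statistic is consistent with statistics simulated under that candidate.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize(mean, scale, n, eps, rng):
    """Simulate a privatized sample mean: clamp data to [0, 1], add Laplace noise."""
    x = np.clip(rng.normal(mean, scale, size=n), 0.0, 1.0)
    return x.mean() + rng.laplace(0.0, 1.0 / (n * eps))

def repro_ci(obs_stat, n, eps, grid, n_rep=2000, alpha=0.05, scale=0.1, rng=rng):
    """Keep every candidate mean whose simulated privatized statistics place the
    observed statistic inside the central 1 - alpha region."""
    keep = []
    for mu in grid:
        sims = np.array([privatize(mu, scale, n, eps, rng) for _ in range(n_rep)])
        lo, hi = np.quantile(sims, [alpha / 2, 1 - alpha / 2])
        if lo <= obs_stat <= hi:
            keep.append(mu)
    return (min(keep), max(keep)) if keep else None

# Hypothetical usage: observed privatized mean 0.42 from n = 200 points at eps = 1.
print(repro_ci(obs_stat=0.42, n=200, eps=1.0, grid=np.linspace(0.0, 1.0, 101)))
```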

It is a common phenomenon that for high-dimensional and nonparametric statistical models, rate-optimal estimators balance squared bias and variance. Although this balancing is widely observed, little is known about whether methods exist that can avoid the trade-off between bias and variance. We propose a general strategy to obtain lower bounds on the variance of any estimator with bias smaller than a prespecified bound. This shows to what extent the bias-variance trade-off is unavoidable and allows us to quantify the loss of performance for methods that do not obey it. The approach is based on a number of abstract lower bounds for the variance involving the change of expectation with respect to different probability measures as well as information measures such as the Kullback-Leibler or $\chi^2$-divergence. In the second part of the article, the abstract lower bounds are applied to several statistical models, including the Gaussian white noise model, a boundary estimation problem, the Gaussian sequence model and the high-dimensional linear regression model. For these specific statistical applications, different types of bias-variance trade-offs occur that vary considerably in their strength. For the trade-off between integrated squared bias and integrated variance in the Gaussian white noise model, we propose to combine the general strategy for lower bounds with a reduction technique. This allows us to reduce the original problem to a lower bound on the bias-variance trade-off for estimators with additional symmetry properties in a simpler statistical model. In the Gaussian sequence model, different phase transitions of the bias-variance trade-off occur. Although there is a non-trivial interplay between bias and variance, the rates of the squared bias and the variance do not have to be balanced in order to achieve the minimax estimation rate.
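A classical example of the kind of change-of-measure bound alluded to above is the Hammersley-Chapman-Robbins inequality: for an estimator $\hat\theta$ with bias function $B(\theta) = \mathbb{E}_\theta[\hat\theta] - \theta$ and any pair of parameters $\theta_0, \theta_1$,

\[
\operatorname{Var}_{\theta_0}(\hat\theta) \;\ge\; \frac{\bigl((\theta_1 - \theta_0) + (B(\theta_1) - B(\theta_0))\bigr)^2}{\chi^2\bigl(P_{\theta_1}\,\|\,P_{\theta_0}\bigr)},
\]

so a constraint on the bias directly forces a lower bound on the variance. This display is included only to illustrate the mechanism; the lower bounds developed in the article itself are more refined.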

In the design of wireless receivers, deep neural networks (DNNs) can be combined with traditional model-based receiver algorithms to realize modular hybrid model-based/data-driven architectures that can account for domain knowledge. Such architectures typically include multiple modules, each carrying out a different functionality. Conventionally trained DNN-based modules are known to produce poorly calibrated, typically overconfident, decisions. This implies that an incorrect decision may propagate through the architecture without any indication of its insufficient accuracy. To address this problem, we present a novel combination of Bayesian learning with hybrid model-based/data-driven architectures for wireless receiver design. The proposed methodology, referred to as modular model-based Bayesian learning, results in better calibrated modules, improving the accuracy and calibration of the overall receiver. We demonstrate this approach for the recently proposed DeepSIC MIMO receiver, showing significant improvements with respect to state-of-the-art learning methods.

We introduce and analyze a discontinuous Galerkin method for the numerical modelling of the equations of Multiple-Network Poroelastic Theory (MPET) in the dynamic formulation. The MPET model can comprehensively describe functional changes in the brain, considering multiple scales of fluids. Concerning the spatial discretization, we employ a high-order discontinuous Galerkin method on polygonal and polyhedral grids and derive stability and a priori error estimates. The temporal discretization is based on a coupling between a Newmark $\beta$-method for the momentum equation and a $\theta$-method for the pressure equations. After presenting some numerical verification tests, we perform a convergence analysis using an agglomerated mesh of a brain-slice geometry. Finally, we present a simulation on a three-dimensional patient-specific brain geometry reconstructed from magnetic resonance images. The model presented in this paper can be regarded as a preliminary attempt to model perfusion in the brain.
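For the pressure equations, a single $\theta$-method step for a generic linear first-order system can be sketched as below; the matrices $A$ and $B$ and the forcing terms are placeholders for whatever the spatial discretization produces, and the sketch ignores the coupling with the Newmark $\beta$-update of the momentum equation.

```python
import numpy as np

def theta_step(A, B, f_n, f_next, p, dt, theta=0.5):
    """One theta-method step for the linear system A dp/dt = B p + f.
    theta = 0.5 gives Crank-Nicolson, theta = 1 gives backward Euler."""
    lhs = A - theta * dt * B
    rhs = (A + (1.0 - theta) * dt * B) @ p + dt * ((1.0 - theta) * f_n + theta * f_next)
    return np.linalg.solve(lhs, rhs)
```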

Computing accurate splines of degree greater than three is still a challenging task in today's applications. In this type of interpolation, high-order derivatives are needed on the given mesh. As these derivatives are rarely known and are often not easy to approximate accurately, high-degree splines are difficult to obtain using standard approaches. In Beaudoin (1998), Beaudoin and Beauchemin (2003), and Pepin et al. (2019), a new method to compute spline approximations of low or high degree from equidistant interpolation nodes based on the discrete Fourier transform is analyzed. The accuracy of this method depends heavily on the accuracy of the boundary conditions. An algorithm for the computation of the boundary conditions can be found in Beaudoin (1998) and Beaudoin and Beauchemin (2003). However, this algorithm lacks robustness, since the approximation of the boundary conditions depends strongly on the choice of $\theta$ arbitrary parameters, $\theta$ being the degree of the spline. The goal of this paper is therefore to propose two new robust algorithms, independent of arbitrary parameters, for the computation of the boundary conditions, in order to obtain accurate splines of any degree. Numerical results are presented to show the efficiency of these new approaches.

Like most modern blockchain networks, Ethereum has relied on economic incentives to promote honest participation in the chain's consensus. The distributed character of the platform, together with the "randomness" or "luck" factor that both proof of work (PoW) and proof of stake (PoS) introduce when electing the next block proposer, has pushed the industry to model and improve the platform's reward system. After several improvements aimed at predicting PoW block proposal rewards and maximizing the rewards extractable from them, Ethereum's ultimate transition to PoS in the Paris hard fork, more commonly known as "The Merge", has meant a significant modification of the platform's reward system. In this paper, we aim to break down, both theoretically and empirically, the new reward system in this post-merge era. We present a highly detailed description of the different rewards and their share of validators' total rewards. Finally, we offer a study that uses the presented reward model to analyze the performance of the network during this transition.

We present and analyze a high-order discontinuous Galerkin method for the space discretization of a wave propagation model in thermo-poroelastic media. The proposed scheme supports general polytopal grids. Stability analysis and $hp$-version error estimates in suitable energy norms are derived for the semi-discrete problem. The fully discrete scheme is then obtained by employing an implicit Newmark-$\beta$ time integration scheme. A wide set of numerical simulations is reported, both for the verification of the theoretical estimates and for examples of physical interest. A comparison with the results of the poroelastic model is also provided, highlighting the differences between the predictive capabilities of the two models.
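For reference, one implicit Newmark-$\beta$ step for a generic linear second-order system $M\ddot{u} + C\dot{u} + Ku = f$ can be sketched as follows (standard parameters $\beta = 1/4$, $\gamma = 1/2$; the thermo-poroelastic coupling terms of the actual scheme are not reproduced here).

```python
import numpy as np

def newmark_beta_step(M, C, K, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
    """One implicit Newmark-beta step for M a + C v + K u = f (linear case)."""
    # Predictors collecting all known (time-level n) terms.
    u_pred = u + dt * v + 0.5 * dt**2 * (1.0 - 2.0 * beta) * a
    v_pred = v + dt * (1.0 - gamma) * a
    # Effective linear system for the new acceleration.
    lhs = M + gamma * dt * C + beta * dt**2 * K
    rhs = f_next - C @ v_pred - K @ u_pred
    a_next = np.linalg.solve(lhs, rhs)
    # Correctors for displacement and velocity.
    u_next = u_pred + beta * dt**2 * a_next
    v_next = v_pred + gamma * dt * a_next
    return u_next, v_next, a_next
```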

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
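A minimal sketch of the resulting re-weighting is given below, with hypothetical per-class counts and the normalization chosen so that the weights sum to the number of classes.

```python
import numpy as np

def class_balanced_weights(counts, beta=0.999):
    """Weights proportional to the inverse effective number of samples
    E_n = (1 - beta**n) / (1 - beta), normalized to sum to the number of classes."""
    counts = np.asarray(counts, dtype=float)
    effective_num = (1.0 - np.power(beta, counts)) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights * len(counts) / weights.sum()

# Hypothetical long-tailed class counts: the head class has 5000 samples, the tail 10.
print(class_balanced_weights([5000, 500, 50, 10]))
```

These per-class weights can then multiply the loss of each sample according to its class label, yielding the class-balanced loss described above.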
