
We introduce a multiple testing procedure that controls the median of the proportion of false discoveries (FDP) in a flexible way. The procedure only requires a vector of p-values as input and is comparable to the Benjamini-Hochberg method, which controls the mean of the FDP. Our method allows freely choosing one or several values of alpha after seeing the data -- unlike Benjamini-Hochberg, which can be very liberal when alpha is chosen post hoc. We prove these claims and illustrate them with simulations. Our procedure is inspired by a popular estimator of the total number of true hypotheses. We adapt this estimator to provide median-unbiased estimators of the FDP that are valid simultaneously over all thresholds, in finite samples. This simultaneity allows for the claimed flexibility. Our approach does not assume independence. The time complexity of our method is linear in the number of hypotheses, once the p-values are sorted.
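
For context, here is a minimal sketch of the two ingredients the abstract mentions: the Benjamini-Hochberg baseline and a plug-in FDP estimate built from a Storey-type estimator of the number of true hypotheses. The plug-in is only the inspiration; the paper's contribution is to turn such estimates into median-unbiased estimators valid simultaneously over all thresholds, which this sketch does not reproduce. Function names and the choice lam=0.5 are ours.

```python
import numpy as np

def storey_fdp_estimate(pvals, t, lam=0.5):
    """Plug-in FDP estimate at p-value threshold t, built from Storey's
    estimator of the number of true hypotheses (the 'popular estimator'
    the abstract alludes to). This is only the inspiration: the paper
    adjusts such estimates to be median-unbiased *simultaneously* over
    all thresholds, which is what licenses post hoc choices of alpha."""
    pvals = np.asarray(pvals, float)
    m0_hat = (1.0 + np.sum(pvals > lam)) / (1.0 - lam)  # est. #true hypotheses
    discoveries = max(int(np.sum(pvals <= t)), 1)       # #rejections at t
    return min(1.0, m0_hat * t / discoveries)

def benjamini_hochberg(pvals, alpha):
    """Classical BH step-up baseline (controls the *mean* of the FDP):
    reject the k smallest p-values, k = max{i : p_(i) <= i*alpha/m}."""
    p = np.sort(np.asarray(pvals, float))
    m = p.size
    ok = np.nonzero(p <= alpha * np.arange(1, m + 1) / m)[0]
    return 0 if ok.size == 0 else int(ok[-1]) + 1       # number of rejections
```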

Related content

Device redundancy is one of the best-known mechanisms in distributed systems for increasing overall fault tolerance and, consequently, trustworthiness. Existing algorithms in this regard require exchanging a significant number of messages among nodes to identify and agree on which communication links or nodes are faulty. This message flooding greatly degrades the performance of wireless communication networks operating under limited bandwidth and/or energy budgets. Lately, quantum-assisted mechanisms have been envisaged as an appealing alternative for improving performance in this kind of communication network, and have been shown to reach levels of performance close to those achieved under ideal conditions. The purpose of this paper is to explore this approach further by using super-additivity and superposed quantum trajectories in the quantum Internet to obtain higher system trustworthiness. More specifically, the wireless communication network that supports the permafrost telemetry service for Antarctica has been simulated together with five operational modes (three using classical techniques and two using quantum-assisted mechanisms). The results show that the new quantum-assisted mechanisms can increase system performance by up to 28%.

Experimental particle physics uses machine learning for many tasks, one of which is the classification of signal and background events. The classification can be used to bin an analysis region so as to enhance the expected significance of a mass resonance search. In natural language processing, one of the leading neural network architectures is the transformer. In this work, an event classifier transformer is proposed for binning an analysis region, with the network trained using special techniques. The techniques developed here can enhance the significance and reduce the correlation between the network's output and the reconstructed mass. The trained network is found to perform better than boosted decision trees and feed-forward networks.
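
As an illustration of the figure of merit involved, the sketch below bins events by classifier score and combines per-bin Asimov significances in quadrature. This is a generic recipe under our own naming, not the paper's training techniques, which additionally decorrelate the score from the reconstructed mass.

```python
import numpy as np

def asimov_significance(s, b):
    """Median expected discovery significance for s signal events on b
    background events (standard Asimov formula; ~ s/sqrt(b) for s << b)."""
    s, b = np.asarray(s, float), np.asarray(b, float)
    return np.sqrt(2.0 * ((s + b) * np.log1p(s / b) - s))

def binned_significance(scores_sig, scores_bkg, edges):
    """Bin events by classifier output and combine per-bin significances
    in quadrature -- the quantity the binning is chosen to enhance."""
    s, _ = np.histogram(scores_sig, bins=edges)
    b, _ = np.histogram(scores_bkg, bins=edges)
    z = asimov_significance(s, np.maximum(b, 1e-9))  # guard empty bins
    return np.sqrt(np.sum(z ** 2))
```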

The implication problem for conditional independence (CI) asks whether the fact that a probability distribution obeys a given finite set of CI relations implies that a further CI statement also holds in this distribution. This problem has a long and fascinating history, cumulating in positive results about implications now known as the semigraphoid axioms as well as impossibility results about a general finite characterization of CI implications. Motivated by violation of faithfulness assumptions in causal discovery, we study the implication problem in the special setting where the CI relations are obtained from a directed acyclic graphical (DAG) model along with one additional CI statement. Focusing on the Gaussian case, we give a complete characterization of when such an implication is graphical by using algebraic techniques. Moreover, prompted by the relevance of strong faithfulness in statistical guarantees for causal discovery algorithms, we give a graphical solution for an approximate CI implication problem, in which we ask whether small values of one additional partial correlation entail small values for yet a further partial correlation.
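
For background (a standard fact, not the paper's characterization): in the Gaussian case a CI statement is equivalent to the vanishing of a partial correlation, i.e. of an almost-principal minor of the covariance matrix, which is what makes algebraic techniques available. The rational recursion below also indicates why small partial correlations propagate in a controlled way, the setting of the approximate implication problem.

```latex
% Gaussian CI is algebraic: independence is a vanishing partial
% correlation, and partial correlations obey a rational recursion.
\[
  X_i \perp\!\!\!\perp X_j \mid X_S
  \iff
  \rho_{ij\cdot S} = 0,
  \qquad
  \rho_{ij\cdot S\cup\{k\}}
    = \frac{\rho_{ij\cdot S}-\rho_{ik\cdot S}\,\rho_{jk\cdot S}}
           {\sqrt{\bigl(1-\rho_{ik\cdot S}^{2}\bigr)\bigl(1-\rho_{jk\cdot S}^{2}\bigr)}}.
\]
```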

Several branches of computing use a system's physical dynamics to do computation. We show that the dynamics of an underdamped harmonic oscillator can perform multifunctional computation, solving distinct problems at distinct times within a single dynamical trajectory. Oscillator computing usually focuses on the oscillator's phase as the information-carrying component. Here we focus on the time-resolved amplitude of an oscillator whose inputs influence its frequency, which has a natural parallel as the activity of a time-dependent neural unit. Because the activity of the unit at fixed time is a nonmonotonic function of the input, the unit can solve nonlinearly-separable problems such as XOR. Because the activity of the unit at fixed input is a nonmonotonic function of time, the unit is multifunctional in a temporal sense, able to carry out distinct nonlinear computations at distinct times within the same dynamical trajectory. Time-resolved computing of this nature can be done in or out of equilibrium, with the natural time evolution of the system giving us multiple computations for the price of one.
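
A closed-form toy version of this mechanism (undamped for simplicity, with illustrative parameters not taken from the paper): the two binary inputs shift the oscillator's frequency, and reading the amplitude at two different times yields two different logic functions from the same trajectory.

```python
import numpy as np

def activity(s1, s2, t, base=np.pi, gain=np.pi):
    """Amplitude x(t) = cos(omega * t) of a unit oscillator released from
    x = 1, whose frequency omega is set by the binary inputs. Nonmonotonic
    in the inputs at fixed t, and nonmonotonic in t at fixed inputs."""
    omega = base + gain * (s1 + s2)   # inputs influence the frequency
    return np.cos(omega * t)          # time-resolved "neural activity"

for s1, s2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    xor_out = activity(s1, s2, t=1.0) > 0        # high iff exactly one input on
    and_out = activity(s1, s2, t=2.0 / 3.0) > 0  # high iff both inputs on
    print(s1, s2, "XOR:", int(xor_out), "AND:", int(and_out))
```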

In inverse problems, one attempts to infer spatially variable functions from indirect measurements of a system. To practitioners of inverse problems, the concept of "information" is familiar when discussing key questions such as which parts of the function can be inferred accurately and which cannot. For example, it is generally understood that we can identify system parameters accurately only close to detectors, or along ray paths between sources and detectors, because we have "the most information" for these places. Although referenced in many publications, the "information" that is invoked in such contexts is not a well-understood and clearly defined quantity. Herein, we present a definition of information density that is based on the variance of coefficients as derived from a Bayesian reformulation of the inverse problem. We then discuss three areas in which this information density can be useful in practical algorithms for the solution of inverse problems, and illustrate its usefulness in one of these areas -- how to choose the discretization mesh for the function to be reconstructed -- using numerical experiments.
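
A minimal sketch of such a definition in a linear Gaussian setting, where the posterior covariance is available in closed form; the forward map, the priors, and all names here are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def information_density(A, noise_var, prior_var):
    """Pointwise 'information density' for the linear Gaussian inverse
    problem y = A m + noise, defined via the posterior variance of the
    coefficients of the discretized function m: low posterior variance
    means high information. Returns one value per coefficient."""
    n = A.shape[1]
    post_prec = A.T @ A / noise_var + np.eye(n) / prior_var  # posterior precision
    var = np.diag(np.linalg.inv(post_prec))                  # posterior variances
    return 1.0 / var

# Toy usage: the measurements "see" the first coefficients most strongly,
# so the information density decays with the index -- the regions where
# one would coarsen (rather than refine) the reconstruction mesh.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 30)) * np.exp(-0.2 * np.arange(30))
print(information_density(A, noise_var=0.1, prior_var=1.0).round(2))
```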

Cross-validation is usually employed to evaluate the performance of a given statistical methodology. When such a methodology depends on a number of tuning parameters, cross-validation proves helpful for selecting the parameters that optimize the estimated performance. In this paper, however, a very different and nonstandard use of cross-validation is investigated. Instead of focusing on the cross-validated parameters, the main interest is switched to the estimated value of the error criterion at optimal performance. It is shown that this approach is able to provide consistent and efficient estimates of some density functionals, with the noteworthy feature that these estimates do not rely on the choice of any further tuning parameter, so that, in that sense, they can be considered to be purely empirical. Here, a base case of application of this new paradigm is developed in full detail, while many other possible extensions are hinted at as well.
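
A plausible concrete instance of this paradigm (our own choice of estimator, not necessarily the paper's base case): least-squares cross-validation for a Gaussian kernel density estimator. The criterion is an unbiased estimate of ISE(h) minus the constant ∫ f(x)^2 dx, so the negative of its minimized value is a tuning-free estimate of that density functional.

```python
import numpy as np

def lscv(x, h):
    """Least-squares CV criterion for a Gaussian KDE with bandwidth h:
    LSCV(h) = int fhat^2 - (2/n) * sum_i fhat_{-i}(x_i)."""
    x = np.asarray(x, float)
    n = x.size
    d = x[:, None] - x[None, :]
    phi = lambda u, s: np.exp(-0.5 * (u / s) ** 2) / (s * np.sqrt(2 * np.pi))
    int_fhat2 = phi(d, h * np.sqrt(2)).sum() / n**2             # int fhat_h^2
    loo = (phi(d, h).sum() - n * phi(0.0, h)) / (n * (n - 1))   # mean fhat_{-i}(x_i)
    return int_fhat2 - 2.0 * loo

# The nonstandard use described above: minus the *minimized* criterion
# estimates int f(x)^2 dx without choosing any further tuning parameter
# (true value 1/(2*sqrt(pi)) ~ 0.2821 for a standard normal density).
x = np.random.default_rng(1).normal(size=500)
print(-min(lscv(x, h) for h in np.geomspace(0.05, 1.0, 60)))
```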

The complexity of the list homomorphism problem for signed graphs appears difficult to classify. Existing results focus on special classes of signed graphs, such as trees and reflexive signed graphs. Irreflexive signed graphs are in a certain sense the heart of the problem, as noted by a recent paper of Kim and Siggers. We focus on a special class of irreflexive signed graphs, namely those in which the unicoloured edges form a spanning path or cycle, which we call separable signed graphs. We classify the complexity of list homomorphisms to these separable signed graphs; we believe that these signed graphs will play an important role in the general resolution of the irreflexive case. We also relate our results to a conjecture of Kim and Siggers concerning the special case of semi-balanced irreflexive signed graphs; we have proved the conjecture in another paper, and the present results add structural information to that topic.

Any interactive protocol between a pair of parties can be reliably simulated in the presence of noise with a multiplicative overhead on the number of rounds (Schulman 1996). The reciprocal of the best (least) overhead is called the interactive capacity of the noisy channel. In this work, we present lower bounds on the interactive capacity of the binary erasure channel. Our lower bound improves the best known bound, due to Ben-Yishai et al. (2021), by roughly a factor of 1.75. The improvement comes from a tighter analysis of the correctness of the simulation protocol using error pattern analysis. More precisely, instead of using the well-known technique of bounding the least number of erasures needed to make the simulation fail, we identify and bound the probability of the specific erasure patterns that cause simulation failure. We remark that error pattern analysis can be useful in solving other problems involving stochastic noise, such as bounding the interactive capacity of different channels.
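
To illustrate the distinction the abstract draws (a generic toy calculation, not the paper's analysis): instead of asking how few erasures suffice to break the simulation in the worst case, one can union-bound the probability that a harmful erasure pattern actually occurs, which is typically far smaller.

```python
from scipy.stats import binom

def failure_prob_union_bound(n_rounds, window, thresh, eps):
    """Union bound on the probability that *some* window of `window`
    consecutive rounds contains at least `thresh` erasures, when each
    round is erased independently with probability eps. The window
    length and threshold are hypothetical stand-ins for whatever
    pattern makes a given simulation protocol fail."""
    p_window = binom.sf(thresh - 1, window, eps)  # P(Bin(window, eps) >= thresh)
    return min(1.0, (n_rounds - window + 1) * p_window)

print(failure_prob_union_bound(n_rounds=10_000, window=100, thresh=40, eps=0.2))
```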

We introduce a new approach for estimating the invariant density of a multidimensional diffusion when dealing with high-frequency observations blurred by independent noise. We consider the intermediate regime, where observations occur at discrete time instants $k\Delta_n$ for $k=0,\dots,n$, under the conditions $\Delta_n\to 0$ and $n\Delta_n\to\infty$. Our methodology involves the construction of a kernel density estimator that uses a pre-averaging technique to effectively remove noise from the data while preserving the analytical characteristics of the underlying signal and its asymptotic properties. The rate of convergence of our estimator depends on both the anisotropic regularity of the density and the intensity of the noise. We establish conditions on the intensity of the noise that ensure the recovery of convergence rates similar to those achievable without any noise. Furthermore, we prove a Bernstein concentration inequality for our estimator, from which we derive an adaptive procedure for kernel bandwidth selection.
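
A one-dimensional sketch of the pre-averaging idea (the paper treats the multidimensional, anisotropic case and selects the bandwidth adaptively; the block size, bandwidth, and all names here are illustrative choices):

```python
import numpy as np

def preaveraged_kde(Y, block, grid, bandwidth):
    """Kernel estimator of the invariant density from noisy observations
    Y_k = X_{k*Delta_n} + eps_k (one-dimensional for simplicity).
    Averaging over blocks of `block` consecutive observations shrinks the
    i.i.d. noise by ~1/sqrt(block) while barely moving the slowly varying
    diffusion; a Gaussian KDE is then applied to the block means."""
    Y = np.asarray(Y, float)
    m = Y.size // block
    Z = Y[: m * block].reshape(m, block).mean(axis=1)   # pre-averaged data
    u = (grid[:, None] - Z[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).mean(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

# Toy usage: Ornstein-Uhlenbeck path (invariant density N(0, 1)) observed
# with additive Gaussian noise of standard deviation 0.5.
rng = np.random.default_rng(0)
n, dt = 100_000, 0.005
X = np.zeros(n)
for k in range(n - 1):
    X[k + 1] = X[k] - X[k] * dt + np.sqrt(2 * dt) * rng.normal()
Y = X + 0.5 * rng.normal(size=n)
density = preaveraged_kde(Y, block=20, grid=np.linspace(-3, 3, 61), bandwidth=0.2)
```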

This is a preliminary work. Overdamped Langevin dynamics are reversible stochastic differential equations that are commonly used to sample probability measures in high-dimensional spaces, such as the ones appearing in computational statistical physics and Bayesian inference. By varying the diffusion coefficient, there are in fact infinitely many reversible overdamped Langevin dynamics which preserve the target probability measure at hand. This suggests optimizing the diffusion coefficient in order to increase the convergence rate of the dynamics, as measured by the spectral gap of the generator associated with the stochastic differential equation. We analytically study this problem here, obtaining in particular necessary conditions on the optimal diffusion coefficient. We also derive an explicit expression for the optimal diffusion in a homogenized limit. Numerical results, both relying on discretizations of the spectral gap problem and on Monte Carlo simulations of the stochastic dynamics, demonstrate the increased quality of the sampling arising from an appropriate choice of the diffusion coefficient.
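
A toy version of the discretized spectral-gap computation in one dimension, for a double-well potential; the grid, the candidate diffusion coefficients, and the normalization to equal average diffusion are our own illustrative choices, not the paper's optimized solution.

```python
import numpy as np

def spectral_gap(x, V, Dmid):
    """Spectral gap of the 1D generator L f = (1/mu) (D mu f')' with
    mu = exp(-V), discretized by finite differences on the grid x.
    V holds the potential at the nodes, Dmid the diffusion coefficient
    at the midpoints. The matrix is symmetrized by conjugation with
    sqrt(mu), so its eigenvalues are those of -L; eigenvalue 0 is the
    constant mode and the gap is the second-smallest eigenvalue."""
    h = x[1] - x[0]
    mu = np.exp(-V)
    c = Dmid * np.exp(-0.5 * (V[:-1] + V[1:])) / h**2   # conductances D*mu/h^2
    s = np.sqrt(mu)
    A = np.zeros((x.size, x.size))
    for i in range(x.size - 1):
        A[i, i] += c[i] / mu[i]
        A[i + 1, i + 1] += c[i] / mu[i + 1]
        A[i, i + 1] = A[i + 1, i] = -c[i] / (s[i] * s[i + 1])
    return np.sort(np.linalg.eigvalsh(A))[1]

# Double-well potential: compare a constant diffusion with one of equal
# average that is larger near the barrier at x = 0, where the dynamics
# has its bottleneck.
x = np.linspace(-2.5, 2.5, 400)
xm = 0.5 * (x[:-1] + x[1:])
V = (x**2 - 1.0) ** 2
bump = 0.2 + np.exp(-4.0 * xm**2)
for D in [np.ones_like(xm), bump / bump.mean()]:
    print(spectral_gap(x, V, D))
```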
