
We address the problem of testing the conditional mean and conditional variance of non-stationary data. We build e-values and p-values for four types of non-parametric composite hypotheses with specified mean and variance, as well as other conditions on the shape of the data-generating distribution. These shape conditions include symmetry, unimodality, and their combination. Using the obtained e-values and p-values, we construct tests via e-processes, also known as testing by betting, as well as tests based on combining p-values for comparison. Although we mainly focus on one-sided tests, the two-sided test for the mean is also studied. Simulation and empirical studies are conducted under several settings and illustrate the features of the e-process-based methods.
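
As a minimal illustration of the testing-by-betting idea, the sketch below (in Python with NumPy) builds an e-process for the one-sided null hypothesis that the mean is at most m for observations in [0, 1], with a fixed betting fraction. This is a generic toy example, not the paper's construction for specified mean and variance under shape constraints; the Beta-distributed data and the rejection threshold 1/alpha are assumptions.

```python
import numpy as np

def e_process_mean_test(x, m, lam=0.5, alpha=0.05):
    """One-sided test of H0: E[X_i] <= m for data in [0, 1] via betting.

    Wealth M_t = prod_{i<=t} (1 + lam * (x_i - m)) is an e-process under H0
    (each factor has expectation at most 1); Ville's inequality justifies
    rejecting H0 as soon as M_t >= 1/alpha.
    """
    lam = min(lam, 1.0 / m)  # keep each betting factor nonnegative on [0, 1]
    factors = 1.0 + lam * (np.asarray(x) - m)
    wealth = np.cumprod(factors)
    crossed = wealth >= 1.0 / alpha
    reject_at = int(np.argmax(crossed)) if np.any(crossed) else None
    return wealth, reject_at

rng = np.random.default_rng(0)
x = rng.beta(4, 2, size=500)          # true mean 2/3, well above m = 0.5
wealth, t_reject = e_process_mean_test(x, m=0.5)
print("final wealth:", wealth[-1], "first rejection index:", t_reject)
```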

Related Content

We consider M-estimation problems in which the target value is determined by a minimizer of an expected functional of a Lévy process. From discrete observations of the Lévy process, we can produce a "quasi-path" by shuffling the increments of the process; we call this a quasi-process. Under a suitable sampling scheme, a quasi-process converges weakly to the true process, owing to the stationarity and independence of the increments. Using this resampling technique, we can estimate objective functionals in a manner similar to Monte Carlo simulation, and the resulting estimate is available as a contrast function. The M-estimator based on these quasi-processes can be shown to be consistent and asymptotically normal.
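
The increment-shuffling idea can be sketched as follows, assuming an equidistantly observed Brownian motion with drift (a particular Lévy process chosen only for the toy example) and a running-maximum functional as the quantity of interest; quasi-paths are obtained by permuting the observed increments and re-accumulating them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Equidistant observations of a Levy process (here Brownian motion with drift).
n, dt, mu, sigma = 1000, 0.01, 0.5, 1.0
increments = mu * dt + sigma * np.sqrt(dt) * rng.normal(size=n)

def quasi_paths(increments, n_resample, rng):
    """Build quasi-paths by shuffling the observed increments.

    Because a Levy process has stationary, independent increments, a permuted
    increment sequence has the same joint law on the observation grid.
    """
    for _ in range(n_resample):
        perm = rng.permutation(increments)
        yield np.concatenate([[0.0], np.cumsum(perm)])

# Approximate an expected functional of the path (here the running maximum over
# the observation window), much like a Monte Carlo average, so that it can play
# the role of a contrast function for M-estimation.
estimates = [qp.max() for qp in quasi_paths(increments, 500, rng)]
print("quasi-process estimate of E[max_t X_t]:", np.mean(estimates))
```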

The performance of machine learning models can be impacted by changes in data over time. A promising approach to address this challenge is invariant learning, with a particular focus on a method known as invariant risk minimization (IRM). This technique aims to identify a stable data representation that remains effective with out-of-distribution (OOD) data. While numerous studies have developed IRM-based methods adapted to data augmentation scenarios, limited attention has been paid to directly assessing how well these representations preserve their invariant performance under varying conditions. In this paper, we propose a novel method to evaluate invariant performance, specifically tailored to IRM-based methods. We establish a bridge, via the likelihood ratio, between the conditional expectations of an invariant predictor across different environments. Our proposed criterion offers a robust basis for evaluating invariant performance. We validate our approach with theoretical support and demonstrate its effectiveness through extensive numerical studies. These experiments illustrate how our method can assess the invariant performance of various representation techniques.
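
A hedged, simplified sketch of the likelihood-ratio idea: with a fixed toy representation and predictor, the ratio of environment densities on the representation is estimated with a probabilistic classifier, and environment 1 is reweighted to see whether it reproduces the predictor's behavior in environment 2. The two-environment data, the classifier-based density-ratio estimate, and the marginal (rather than conditional) comparison are all assumptions of this illustration, not the paper's criterion.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Two environments with shifted covariates; phi(x) = x[:, :1] is invariant here.
def sample_env(n, shift):
    x = rng.normal(shift, 1.0, size=(n, 2))
    y = 2.0 * x[:, 0] + rng.normal(0.0, 0.5, size=n)   # depends only on phi
    return x, y

(x1, y1), (x2, y2) = sample_env(2000, 0.0), sample_env(2000, 1.0)
phi1, phi2 = x1[:, :1], x2[:, :1]

# Estimate the density ratio p_e2(phi) / p_e1(phi) with a probabilistic
# classifier: ratio = P(e=2 | phi) / P(e=1 | phi) * (n1 / n2).
clf = LogisticRegression().fit(np.vstack([phi1, phi2]),
                               np.r_[np.zeros(len(phi1)), np.ones(len(phi2))])
p2 = clf.predict_proba(phi1)[:, 1]
ratio = p2 / (1.0 - p2) * (len(phi1) / len(phi2))

# If the representation is invariant, reweighting environment 1 by the
# likelihood ratio should reproduce environment 2's behavior of the predictor;
# a large gap flags a violation.
pred = lambda phi: 2.0 * phi[:, 0]                      # fixed toy predictor
gap = np.mean(ratio * pred(phi1)) / np.mean(ratio) - np.mean(pred(phi2))
print("reweighted gap (small => invariance plausible):", gap)
```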

We present a new approach to computing eigenvalues and eigenvectors of locally definite multiparameter eigenvalue problems, indexed by their signed multiindex. The method can be interpreted as a semismooth Newton method applied to certain functions that have a unique zero. We can therefore show local quadratic convergence and, for certain extreme eigenvalues, even global linear convergence of the method. Local definiteness is a weaker condition than right and left definiteness, which are often assumed for multiparameter eigenvalue problems. These conditions are naturally satisfied for the multiparameter Sturm-Liouville problems that arise when separation of variables is applied to multidimensional boundary eigenvalue problems.
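
In its simplest scalar form, a semismooth Newton iteration replaces the derivative with an element of the generalized (Clarke) derivative. The sketch below is a generic illustration of that iteration on an assumed piecewise-linear function with a unique zero, not the multiparameter eigenvalue algorithm itself.

```python
import numpy as np

def f(x):
    # Piecewise-smooth function with a unique zero at x = 1.
    return x + 0.5 * abs(x) - 1.5

def gen_deriv(x):
    # An element of the Clarke generalized derivative of f at x.
    return 1.0 + 0.5 * np.sign(x) if x != 0 else 1.0

def semismooth_newton(x0, tol=1e-12, max_iter=50):
    x = x0
    for k in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, k
        x = x - fx / gen_deriv(x)   # Newton step with a generalized derivative
    return x, max_iter

root, iters = semismooth_newton(-5.0)
print("root:", root, "iterations:", iters)
```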

This paper addresses the inverse scattering problem in a domain $\Omega$. The input data, measured outside $\Omega$, consist of the waves generated by the interaction of plane waves, coming from various directions, with unknown scatterers fully occluded inside $\Omega$. The output of the problem is the spatially varying dielectric constant of these scatterers. Our approach to solving this problem consists of two primary stages. First, we eliminate the unknown dielectric constant from the governing equation, resulting in a system of partial differential equations. Then, we develop the Carleman contraction mapping method to effectively solve this system. It is worth highlighting the robustness of this method: it does not require a precise initial guess of the true solution, and its computational cost is not expensive. Some numerical examples are presented.
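
As a generic stand-in for the second stage, the sketch below runs a plain Banach fixed-point (contraction mapping) iteration on an assumed toy nonlinear system; the paper's Carleman contraction mapping instead iterates on the derived PDE system with Carleman-weighted norms, which is not reproduced here.

```python
import numpy as np

def fixed_point(T, v0, tol=1e-10, max_iter=200):
    """Banach fixed-point iteration v_{k+1} = T(v_k) for a contraction T."""
    v = np.asarray(v0, dtype=float)
    for k in range(max_iter):
        v_next = T(v)
        if np.linalg.norm(v_next - v) < tol:
            return v_next, k
        v = v_next
    return v, max_iter

# Toy contraction on R^2: T(v) = 0.5 * cos(v) + c (Lipschitz constant 0.5 < 1),
# so the iteration converges to the unique fixed point from any start.
c = np.array([0.2, -0.3])
T = lambda v: 0.5 * np.cos(v) + c
v_star, iters = fixed_point(T, np.zeros(2))
print("fixed point:", v_star, "reached in", iters, "iterations")
```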

When analyzing large datasets, it is common to select a model prior to making inferences. For reliable inferences, it is important to make adjustments that account for the model selection process, resulting in selective inferences. Our paper introduces an asymptotic pivot for inference about the effects of selected variables on conditional quantile functions. Utilizing estimators from smoothed quantile regression, our proposed pivot is easy to compute and ensures asymptotically exact selective inferences without strict distributional assumptions about the response variable. At the core of the pivot is the use of external randomization, which enables us to utilize the full sample for both selection and inference without the need to partition the data into independent subsets or discard data at either step. On simulated data, we find that: (i) the asymptotic confidence intervals based on our pivot achieve the desired coverage rates, even in cases where sample splitting fails due to insufficient sample size for inference; and (ii) our intervals are consistently shorter than those produced by sample splitting across various models and signal settings. We report similar findings when we apply our approach to study risk factors for low birth weight in a publicly accessible dataset of US birth records from 2022.
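
A toy sketch of the external-randomization idea: the selection stage sees a Gaussian-perturbed response, so the full unperturbed sample can then be reused for quantile regression on the selected variables. The lasso selector, the plain (unsmoothed) quantile estimator, and the simulated data are assumptions; the paper's pivot additionally conditions on the selection event, which this sketch omits.

```python
import numpy as np
from sklearn.linear_model import Lasso, QuantileRegressor

rng = np.random.default_rng(3)
n, p = 400, 10
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 1.0 * X[:, 2] + rng.standard_t(df=5, size=n)

# External randomization: the selection stage sees y + omega instead of y,
# so the full sample can be reused for inference (after conditioning on the
# selection event, which the paper's pivot handles and this toy omits).
omega = rng.normal(scale=1.0, size=n)
selector = Lasso(alpha=0.1).fit(X, y + omega)
selected = np.flatnonzero(selector.coef_ != 0)
print("selected variables:", selected)

# Quantile (median) regression on the selected variables with the full,
# unperturbed data; the paper uses a smoothed quantile estimator instead.
qr = QuantileRegressor(quantile=0.5, alpha=0.0, solver="highs").fit(X[:, selected], y)
print("median-regression coefficients:", qr.coef_)
```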

Gaussian random number generators attract widespread interest due to their applications in several fields. Important requirements include ease of implementation, tail accuracy, and a flat spectrum. In this work, we study the applicability of uniform pseudorandom binary generators, in combination with the central limit theorem, to propose an easy-to-implement, efficient, and flexible algorithm that leverages the properties of the pseudorandom binary generator used as input, especially with respect to the correlation measure of higher order, to guarantee the quality of the generated samples. Our main result provides a relationship between the pseudorandomness of the input and the statistical moments of the output. We propose a design based on the combination of pseudonoise sequences commonly used in wireless communications, with known hardware implementations, which can generate sequences with guaranteed statistical distribution properties sufficient for many real-life applications using simple machinery. Initial computer simulations of this construction show promising results in the quality of the output and in the computational resources required in terms of memory and complexity.
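
A minimal sketch of the construction: a pseudorandom binary sequence from a linear-feedback shift register is mapped to +/-1 symbols, and blocks are summed and scaled per the central limit theorem to obtain approximately Gaussian samples. The 16-bit register width, the feedback taps, and the block length are assumptions of the example, not the paper's design.

```python
import numpy as np

def lfsr_bits(n_bits, state=0b1010110011100001, taps=(16, 14, 13, 11)):
    """Fibonacci LFSR producing a pseudorandom binary sequence.

    The 16-bit tap set (16, 14, 13, 11) is a commonly cited maximal-length
    choice; both the width and the taps are assumptions for this sketch.
    """
    bits = np.empty(n_bits, dtype=np.int8)
    for i in range(n_bits):
        bits[i] = state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << 15)
    return bits

def clt_gaussian(n_samples, block=64):
    """Approximately N(0, 1) samples: sum blocks of +/-1 symbols and rescale."""
    sym = 2.0 * lfsr_bits(n_samples * block) - 1.0      # map {0, 1} -> {-1, +1}
    return sym.reshape(n_samples, block).sum(axis=1) / np.sqrt(block)

z = clt_gaussian(1000)
print("mean:", z.mean(), "std:", z.std())
```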

In relational verification, judicious alignment of computational steps facilitates proofs of relations between programs using simple relational assertions. Relational Hoare logics (RHL) provide compositional rules that embody various alignments of executions. Seemingly more flexible alignments can be expressed in terms of product automata based on program transition relations. A single degenerate alignment rule (self-composition), atop a complete Hoare logic, constitutes an RHL for $\forall\forall$ properties that is complete in the ordinary logical sense (Cook'78). The notion of alignment completeness was previously proposed as a more satisfactory measure, and some rules were shown to be alignment complete with respect to a few ad hoc forms of alignment automata. This paper proves alignment completeness with respect to a general class of $\forall\forall$ alignment automata, for an RHL comprising standard rules together with a rule of semantics-preserving rewrites based on Kleene algebra with tests. A new logic for $\forall\exists$ properties is introduced and shown to be alignment complete. The $\forall\forall$ and $\forall\exists$ automata are shown to be semantically complete. Thus the logics are both complete in the ordinary sense. Recent work by D'Osualdo et al. highlights the importance of completeness relative to assumptions (which we term entailment completeness), and presents $\forall\forall$ examples seemingly beyond the scope of RHLs. Additional rules enable these examples to be proved in our RHL, shedding light on the open problem of entailment completeness.
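
As a code-level illustration (not a proof in the logic) of why alignment matters, the toy product program below runs two copies of an assumed simple loop in lockstep, so the relational invariant y1 <= y2 can be asserted at every aligned step; sequential self-composition would instead need a full functional summary of the first run before examining the second.

```python
def run_lockstep(x1, x2, n):
    """Lockstep product of two copies of: y = 0; repeat n times: y += x.

    With both copies aligned iteration by iteration, the relational invariant
    `y1 <= y2` (given x1 <= x2) is checkable at every step; self-composition
    would instead require the functional fact y = n * x about the first run.
    """
    assert x1 <= x2
    y1 = y2 = 0
    for _ in range(n):
        y1 += x1
        y2 += x2
        assert y1 <= y2      # simple relational invariant, per aligned step
    return y1, y2

print(run_lockstep(2, 5, n=10))   # forall-forall property: outputs are ordered
```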

Evaluations of model editing currently only use the `next few token' completions after a prompt. As a result, the impact of these methods on longer natural language generation is largely unknown. We introduce long-form evaluation of model editing (LEME), a novel evaluation protocol that measures the efficacy and impact of model editing in long-form generative settings. Our protocol consists of a machine-rated survey and a classifier that correlates well with human ratings. Importantly, we find that our protocol has very little relationship with previous short-form metrics (despite being designed to extend efficacy, generalization, locality, and portability into a long-form setting), indicating that our method introduces a novel set of dimensions for understanding model editing methods. Using this protocol, we benchmark a number of model editing techniques and present several findings, including that, while some methods (ROME and MEMIT) perform well in making consistent edits within a limited scope, they suffer much more from factual drift than other methods. Finally, we present a qualitative analysis that illustrates common failure modes in long-form generative settings, including internal consistency, lexical cohesion, and locality issues.
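
A minimal sketch of the validation step mentioned above, checking that machine-produced ratings track human ratings via Spearman rank correlation; the score arrays here are placeholders, not data from the paper.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical ratings for the same set of long-form generations:
# human annotations vs. machine-rated survey scores (placeholder values).
human = np.array([4.5, 3.0, 2.0, 5.0, 1.5, 4.0, 3.5, 2.5])
machine = np.array([4.2, 3.1, 2.4, 4.8, 1.9, 3.7, 3.6, 2.2])

rho, pval = spearmanr(human, machine)
print(f"Spearman rho = {rho:.3f} (p = {pval:.3g})")
```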

In data assimilation, an ensemble provides a nonintrusive way to evolve a probability density described by a nonlinear prediction model. Although a large ensemble size is required for statistical accuracy, the ensemble size is typically limited to a small number by the computational cost of running the prediction model, which leads to sampling error. Several methods, such as localization, exist to mitigate this sampling error, but they often require problem-dependent fine-tuning and design. This work introduces another sampling-error mitigation method that uses a smoothness constraint in Fourier space. In particular, the method smooths out the spectrum of the system to increase stability and accuracy even with a small ensemble size. The efficacy of the new idea is validated through a suite of stringent test problems, including the Lorenz 96 and Kuramoto-Sivashinsky turbulence models.
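
A hedged sketch of the Fourier-space smoothing idea: each member's perturbation from the ensemble mean is transformed with an FFT, high wavenumbers are damped with a smooth taper, and the result is transformed back. The taper shape, the cutoff, and the 1-D periodic toy state are assumptions, not the paper's specific constraint.

```python
import numpy as np

def taper_ensemble(ensemble, cutoff_frac=0.3):
    """Damp high-wavenumber noise in ensemble perturbations (sampling error).

    ensemble: array of shape (n_members, n_grid) for a periodic 1-D state.
    """
    mean = ensemble.mean(axis=0)
    perts = ensemble - mean
    spec = np.fft.rfft(perts, axis=1)
    k = np.arange(spec.shape[1])
    kc = cutoff_frac * k[-1]
    taper = 1.0 / (1.0 + (k / max(kc, 1.0)) ** 8)   # smooth low-pass weights
    smoothed = np.fft.irfft(spec * taper, n=ensemble.shape[1], axis=1)
    return mean + smoothed

rng = np.random.default_rng(4)
n_members, n_grid = 20, 128
truth = np.sin(2 * np.pi * np.arange(n_grid) / n_grid)
ensemble = truth + 0.3 * rng.normal(size=(n_members, n_grid))
filtered = taper_ensemble(ensemble)
print("raw spread:", ensemble.std(axis=0).mean(),
      "filtered spread:", filtered.std(axis=0).mean())
```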

We consider the problem of causal inference based on observational data (or the related missing data problem) with a binary or discrete treatment variable. In that context we study counterfactual density estimation, which provides more nuanced information than counterfactual mean estimation (i.e., the average treatment effect). We impose the shape constraint of log-concavity (a unimodality constraint) on the counterfactual densities, develop doubly robust estimators of the log-concave counterfactual density (based on an augmented inverse-probability weighted pseudo-outcome), and show the consistency of these estimators in various global metrics. Based on this estimator, we also develop asymptotically valid pointwise confidence intervals for the counterfactual density.
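
A minimal sketch of the augmented inverse-probability weighting (AIPW) idea behind the pseudo-outcome: with an estimated propensity score and an assumed Gaussian working outcome model, one obtains doubly robust pointwise estimates of the treated counterfactual distribution function on a grid. The paper instead feeds a pseudo-outcome into a log-concave maximum likelihood density estimator, which is not reproduced here; the simulated data and working models are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(5)
n = 4000
X = rng.normal(size=(n, 2))
propensity = 1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))
A = rng.binomial(1, propensity)
Y = 1.0 + X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)   # Y(1) observed when A = 1

# Nuisance estimates: propensity score and a Gaussian working outcome model.
pi_hat = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
out = LinearRegression().fit(X[A == 1], Y[A == 1])
mu_hat = out.predict(X)
sigma_hat = np.std(Y[A == 1] - out.predict(X[A == 1]))

# AIPW (doubly robust) estimate of the counterfactual CDF F_{Y(1)}(y) on a grid.
grid = np.linspace(-2, 5, 50)
F_hat = np.empty_like(grid)
for j, y0 in enumerate(grid):
    F_x = norm.cdf(y0, loc=mu_hat, scale=sigma_hat)      # model-based CDF given X
    F_hat[j] = np.mean(A / pi_hat * ((Y <= y0) - F_x) + F_x)
print("AIPW estimate of P(Y(1) <= 1):", np.interp(1.0, grid, F_hat))
```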
