
The recent thought-provoking paper by Hansen [2022, Econometrica] proved that the Gauss-Markov theorem continues to hold without the requirement that competing estimators are linear in the vector of outcomes. Despite the elegant proof, the authors and other researchers showed that the main result in the earlier version of Hansen's paper does not extend the classic Gauss-Markov theorem, because no nonlinear unbiased estimator exists under his conditions. To address the issue, Hansen [2022] added statements in the latest version with new conditions under which nonlinear unbiased estimators exist. Motivated by the lively discussion, we study a fundamental problem: which estimators are unbiased for a given class of linear models? We first review a line of highly relevant work dating back to the 1960s, which, unfortunately, has not drawn enough attention. Then, we introduce notation that allows us to restate and unify results from earlier work and Hansen [2022]. The new framework also allows us to highlight differences among previous conclusions. Lastly, we establish new representation theorems for unbiased estimators under different restrictions on the linear model, allowing the coefficients and covariance matrix to take only a finite number of values, the higher moments of the estimator and the dependent variable to exist, and the error distribution to be discrete, absolutely continuous, or dominated by another probability measure. Our results substantially generalize the claims of parallel commentaries on Hansen [2022] and a remarkable result by Koopmann [1982].
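For orientation, the classical Gauss-Markov setting under discussion can be written as follows (a standard textbook formulation, assuming $X$ has full column rank; it is not quoted from Hansen [2022]):

```latex
% Classical Gauss-Markov setting (standard textbook formulation)
\[
  y = X\beta + \varepsilon, \qquad
  \mathbb{E}[\varepsilon] = 0, \qquad
  \operatorname{Var}(\varepsilon) = \sigma^2 I_n,
\]
\[
  \hat\beta_{\mathrm{OLS}} = (X^\top X)^{-1} X^\top y, \qquad
  \operatorname{Var}(\hat\beta_{\mathrm{OLS}}) \preceq \operatorname{Var}(\tilde\beta)
  \quad \text{for every linear unbiased } \tilde\beta = Ay .
\]
```

Hansen's result, and the ensuing discussion, concerns whether the comparison class of linear unbiased estimators $\tilde\beta = Ay$ can be enlarged to all unbiased estimators.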

Related Content

Bayesian optimization (BO) primarily uses Gaussian processes (GP) as the key surrogate model, mostly with a simple stationary and separable kernel function such as the widely used squared-exponential kernel with automatic relevance determination (SE-ARD). However, such simple kernel specifications are deficient in learning functions with complex features, such as being nonstationary, nonseparable, and multimodal. Approximating such functions with a local GP requires a large number of samples even in a low-dimensional space, let alone in a high-dimensional setting. In this paper, we propose to use Bayesian Kernelized Tensor Factorization (BKTF) -- as a new surrogate model -- for BO in a D-dimensional Cartesian product space. Our key idea is to approximate the underlying D-dimensional solid with a fully Bayesian low-rank tensor CP decomposition, in which we place GP priors on the latent basis functions for each dimension to encode local consistency and smoothness. With this formulation, information from each sample can be shared not only with neighbors but also across dimensions. Although BKTF no longer has an analytical posterior, we can still efficiently approximate the posterior distribution through Markov chain Monte Carlo (MCMC) and obtain prediction and full uncertainty quantification (UQ). We conduct numerical experiments on both standard BO testing problems and machine learning hyperparameter tuning problems, and our results confirm the superiority of BKTF in terms of sample efficiency.
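As a rough illustration of the low-rank structure only (not the paper's fully Bayesian MCMC scheme, and with made-up factor values), the sketch below evaluates a rank-R CP surrogate at one point of a D-dimensional Cartesian grid; in BKTF the per-dimension factor columns would carry GP priors and be sampled rather than fixed:

```python
import numpy as np

def cp_surrogate_value(factors, index):
    """Evaluate a rank-R CP decomposition at one grid point.

    factors : list of D arrays, factors[d] has shape (n_d, R);
              column r of factors[d] is the r-th latent basis
              function for dimension d evaluated on its grid.
    index   : tuple (i_1, ..., i_D) of grid indices.
    """
    R = factors[0].shape[1]
    value = 0.0
    for r in range(R):
        term = 1.0
        for d, i in enumerate(index):
            term *= factors[d][i, r]   # product of per-dimension basis values
        value += term                  # sum over the R rank-one components
    return value

# Toy example: a rank-2 surrogate on a 3-dimensional 10x10x10 grid.
rng = np.random.default_rng(0)
factors = [rng.normal(size=(10, 2)) for _ in range(3)]
print(cp_surrogate_value(factors, (3, 7, 1)))
```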

We propose the use of U-statistics to reduce variance for gradient estimation in importance-weighted variational inference. The key observation is that, given a base gradient estimator that requires $m > 1$ samples and a total of $n > m$ samples to be used for estimation, lower variance is achieved by averaging the base estimator on overlapping batches of size $m$ than disjoint batches, as currently done. We use classical U-statistic theory to analyze the variance reduction, and propose novel approximations with theoretical guarantees to ensure computational efficiency. We find empirically that U-statistic variance reduction can lead to modest to significant improvements in inference performance on a range of models, with little computational cost.
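A minimal sketch of the batching idea, with a placeholder base estimator standing in for the actual importance-weighted gradient estimator; the subset subsampling below is a generic incomplete U-statistic, not necessarily the paper's proposed approximation:

```python
import itertools
import numpy as np

def disjoint_batch_average(base_estimator, samples, m):
    """Average the base estimator over disjoint batches of size m (the usual scheme)."""
    n = len(samples)
    batches = [samples[i:i + m] for i in range(0, n - n % m, m)]
    return np.mean([base_estimator(b) for b in batches], axis=0)

def u_statistic_average(base_estimator, samples, m, max_subsets=None, seed=0):
    """Average the base estimator over overlapping size-m subsets:
    a complete U-statistic, or an incomplete one if max_subsets is set."""
    n = len(samples)
    subsets = list(itertools.combinations(range(n), m))
    if max_subsets is not None and len(subsets) > max_subsets:
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(subsets), size=max_subsets, replace=False)
        subsets = [subsets[i] for i in idx]
    return np.mean([base_estimator([samples[i] for i in s]) for s in subsets], axis=0)

# Toy base estimator: any symmetric function of an m-sample batch.
base = lambda batch: np.max(batch)
samples = list(np.random.default_rng(1).normal(size=12))
print(disjoint_batch_average(base, samples, m=3))
print(u_statistic_average(base, samples, m=3, max_subsets=50))
```

Both estimators are unbiased for the same quantity; the overlapping (U-statistic) average reuses each sample in many batches and so has lower variance.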

In this work, we focus on the Neumann-Neumann method (NNM), which is one of the most popular non-overlapping domain decomposition methods. Even though the NNM is widely used and has proven very efficient when applied to discrete problems in practical applications, it is in general not well defined at the continuous level when the geometric decomposition involves cross-points. Our goals are to investigate this well-posedness issue and to provide a complete analysis of the method at the continuous level, when applied to a simple elliptic problem on a configuration involving one cross-point. More specifically, we prove that the algorithm generates solutions that are singular near the cross-point. We also exhibit the type of singularity introduced by the method, and show how it propagates through the iterations. Then, based on this analysis, we design a new set of transmission conditions that makes the new NNM geometrically convergent for this simple configuration. Finally, we illustrate our results with numerical experiments.
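For readers unfamiliar with the method, one iteration of the classical two-subdomain NNM for $-\Delta u = f$ on $\Omega = \Omega_1 \cup \Omega_2$ with interface $\Gamma$ and relaxation parameter $\theta > 0$ reads as follows (a standard cross-point-free formulation, not the modified transmission conditions proposed in the paper):

```latex
\begin{align*}
  \text{Dirichlet step: } & -\Delta u_i^{\,n} = f \text{ in } \Omega_i,
    \quad u_i^{\,n} = \lambda^n \text{ on } \Gamma,
    \quad u_i^{\,n} = 0 \text{ on } \partial\Omega \cap \partial\Omega_i, \\
  \text{Neumann step: }   & -\Delta \psi_i^{\,n} = 0 \text{ in } \Omega_i,
    \quad \partial_{n_i}\psi_i^{\,n} = \partial_{n_1}u_1^{\,n} + \partial_{n_2}u_2^{\,n} \text{ on } \Gamma,
    \quad \psi_i^{\,n} = 0 \text{ on } \partial\Omega \cap \partial\Omega_i, \\
  \text{Update: }         & \lambda^{n+1} = \lambda^n
    - \theta\bigl(\psi_1^{\,n}|_\Gamma + \psi_2^{\,n}|_\Gamma\bigr).
\end{align*}
```

The well-posedness issue analyzed in the paper arises when more than two subdomains meet at a single point, so that the Neumann data above is no longer well defined near that point.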

The increasing scale of large language models (LLMs) brings emergent abilities to various complex tasks requiring reasoning, such as arithmetic and commonsense reasoning. It is known that the effective design of task-specific prompts is critical for LLMs' ability to produce high-quality answers. In particular, an effective approach for complex question-and-answer tasks is example-based prompting with chain-of-thought (CoT) reasoning, which significantly improves the performance of LLMs. However, current CoT methods rely on a fixed set of human-annotated exemplars, which are not necessarily the most effective examples for different tasks. This paper proposes a new method, Active-Prompt, to adapt LLMs to different tasks with task-specific example prompts (annotated with human-designed CoT reasoning). For this purpose, we propose a solution to the key problem of determining which questions are the most important and helpful ones to annotate from a pool of task-specific queries. By borrowing ideas from the related problem of uncertainty-based active learning, we introduce several metrics to characterize the uncertainty so as to select the most uncertain questions for annotation. Experimental results demonstrate the superiority of our proposed method, achieving state-of-the-art performance on eight complex reasoning tasks. Further analyses of different uncertainty metrics, pool sizes, zero-shot learning, and the accuracy-uncertainty relationship demonstrate the effectiveness of our method. Our code will be available at //github.com/shizhediao/active-prompt.
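A minimal sketch of disagreement- and entropy-based question selection; `sample_llm_answers` is a hypothetical stand-in for an actual LLM call, and the metrics below are generic uncertainty scores rather than the paper's exact definitions:

```python
import math
import random
from collections import Counter

def sample_llm_answers(question, k):
    """Hypothetical stand-in for an LLM call: returns k final answers from k
    sampled chain-of-thought completions. Here it fakes disagreement with a
    question-seeded RNG so the example runs without a model."""
    rng = random.Random(sum(map(ord, question)))
    return [rng.choice(["A", "B", "C"]) for _ in range(k)]

def disagreement(answers):
    """Fraction of distinct final answers among the k samples."""
    return len(set(answers)) / len(answers)

def entropy(answers):
    """Empirical entropy of the sampled answer distribution."""
    k = len(answers)
    return -sum((c / k) * math.log(c / k) for c in Counter(answers).values())

def select_for_annotation(questions, k=10, n_select=2, metric=disagreement):
    """Rank questions by uncertainty; return the most uncertain ones for
    human chain-of-thought annotation."""
    return sorted(questions, reverse=True,
                  key=lambda q: metric(sample_llm_answers(q, k)))[:n_select]

pool = ["What is 17 * 24?", "How many primes are below 20?", "Is 91 prime?"]
print(select_for_annotation(pool, metric=entropy))
```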

Recent developments in high-dimensional statistical inference have necessitated concentration inequalities for a broader range of random variables. We focus on sub-Weibull random variables, which extend sub-Gaussian or sub-exponential random variables to allow heavy-tailed distributions. This paper presents concentration inequalities for independent sub-Weibull random variables with finite Generalized Bernstein-Orlicz norms, providing generalized Bernstein inequalities and Rosenthal-type moment bounds. The tightness of the proposed bounds is shown through lower bounds of the concentration inequalities obtained via the Paley-Zygmund inequality. The results are applied to a graphical model inference problem, improving previous sample complexity bounds.
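For context, a standard definition of the sub-Weibull class via the $\psi_\alpha$ Orlicz norm, together with the corresponding tail bound; the Generalized Bernstein-Orlicz norm used in the paper refines this basic notion:

```latex
% Sub-Weibull random variable with tail parameter \alpha > 0 (standard definition):
\[
  \|X\|_{\psi_\alpha}
    = \inf\Bigl\{ t > 0 : \mathbb{E}\exp\bigl(|X|^\alpha / t^\alpha\bigr) \le 2 \Bigr\},
  \qquad
  \mathbb{P}\bigl(|X| \ge u\bigr)
    \le 2 \exp\Bigl( -\bigl(u / \|X\|_{\psi_\alpha}\bigr)^{\alpha} \Bigr)
  \ \text{for all } u \ge 0 .
\]
% \alpha = 2 recovers the sub-Gaussian case, \alpha = 1 the sub-exponential case,
% and \alpha < 1 allows tails heavier than exponential.
```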

The problem of permutation-invariant learning over set representations is particularly relevant in the field of multi-agent systems -- a few potential applications include unsupervised training of aggregation functions in graph neural networks (GNNs), neural cellular automata on graphs, and prediction of scenes with multiple objects. Yet existing approaches to set encoding and decoding tasks present a host of issues, including non-permutation-invariance, fixed-length outputs, reliance on iterative methods, non-deterministic outputs, computationally expensive loss functions, and poor reconstruction accuracy. In this paper we introduce a Permutation-Invariant Set Autoencoder (PISA), which tackles these problems and produces encodings with significantly lower reconstruction error than existing baselines. PISA also provides other desirable properties, including a similarity-preserving latent space, and the ability to insert or remove elements from the encoding. After evaluating PISA against baseline methods, we demonstrate its usefulness in a multi-agent application. Using PISA as a subcomponent, we introduce a novel GNN architecture which serves as a generalised communication scheme, allowing agents to use communication to gain full observability of a system.
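A minimal sketch of sum-pooling permutation invariance (a DeepSets-style encoder with arbitrary random weights), included only to make the invariance property concrete; PISA's actual encoder/decoder, similarity-preserving latent space, and element insertion/removal are not reproduced here:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def encode_set(elements, W_phi, b_phi, W_rho, b_rho):
    """Permutation-invariant set encoder: embed each element with phi,
    sum-pool, then map the pooled vector with rho.

    elements : array of shape (n_elements, d_in); row order is irrelevant.
    """
    h = relu(elements @ W_phi + b_phi)    # per-element embedding
    pooled = h.sum(axis=0)                # order-independent pooling
    return relu(pooled @ W_rho + b_rho)   # latent code for the whole set

# Quick check: a permuted set yields the same encoding.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
params = (rng.normal(size=(3, 16)), rng.normal(size=16),
          rng.normal(size=(16, 8)), rng.normal(size=8))
assert np.allclose(encode_set(X, *params), encode_set(X[::-1], *params))
```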

Given a set of points $P$ and a set of regions $\mathcal{O}$, an incidence is a pair $(p,o ) \in P \times \mathcal{O}$ such that $p \in o$. We obtain a number of new results on a classical question in combinatorial geometry: What is the number of incidences (under certain restrictive conditions)? We prove a bound of $O\bigl( k n(\log n/\log\log n)^{d-1} \bigr)$ on the number of incidences between $n$ points and $n$ axis-parallel boxes in $\mathbb{R}^d$, if no $k$ boxes contain $k$ common points, that is, if the incidence graph between the points and the boxes does not contain $K_{k,k}$ as a subgraph. This new bound improves over previous work, by Basit, Chernikov, Starchenko, Tao, and Tran (2021), by more than a factor of $\log^d n$ for $d >2$. Furthermore, it matches a lower bound implied by the work of Chazelle (1990), for $k=2$, thus settling the question for points and boxes. We also study several other variants of the problem. For halfspaces, using shallow cuttings, we get a linear bound in two and three dimensions. We also present linear (or near linear) bounds for shapes with low union complexity, such as pseudodisks and fat triangles.
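To make the definitions concrete, here is a naive brute-force utility for counting point-box incidences and checking the $K_{k,k}$-free condition on toy inputs; the paper's results concern worst-case combinatorial bounds, not this algorithm:

```python
from itertools import combinations

def is_incident(point, box):
    """An axis-parallel box is given as ((lo_1, hi_1), ..., (lo_d, hi_d))."""
    return all(lo <= x <= hi for x, (lo, hi) in zip(point, box))

def count_incidences(points, boxes):
    """Number of pairs (p, o) with p in o."""
    return sum(is_incident(p, b) for p in points for b in boxes)

def is_Kkk_free(points, boxes, k):
    """True if no k boxes contain k common points, i.e. the incidence graph
    has no K_{k,k} subgraph. Exponential brute force; toy sizes only."""
    for box_subset in combinations(boxes, k):
        common = [p for p in points if all(is_incident(p, b) for b in box_subset)]
        if len(common) >= k:
            return False
    return True

points = [(0, 0), (1, 1), (2, 2)]
boxes = [((0, 1), (0, 1)), ((1, 2), (1, 2))]
print(count_incidences(points, boxes), is_Kkk_free(points, boxes, k=2))
```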

The Work Disability Functional Assessment Battery (WD-FAB) is a multidimensional item response theory (IRT) instrument designed for assessing work-related mental and physical function based on responses to an item bank. In prior iterations it was developed using traditional means -- linear factorization and null hypothesis statistical testing for item partitioning/selection, and finally, posthoc calibration of disjoint unidimensional IRT models. As a result, the WD-FAB, like many other IRT instruments, is a posthoc model. Its item partitioning, based on exploratory factor analysis, is blind to the final nonlinear IRT model and is not performed in a manner consistent with goodness of fit to the final model. In this manuscript, we develop a Bayesian hierarchical model for self-consistently performing the following simultaneous tasks: scale factorization, item selection, parameter identification, and response scoring. This method uses sparsity-based shrinkage to obviate the linear factorization and null hypothesis statistical tests that are usually required for developing multidimensional IRT models, so that item partitioning is consistent with the ultimate nonlinear factor model. We also analogize our multidimensional IRT model to probabilistic autoencoders, specifying an encoder function that amortizes the inference of ability parameters from item responses. The encoder function is equivalent to the "VBE" step in a stochastic variational Bayesian expectation maximization (VBEM) procedure that we use for approximate Bayesian inference on the entire model. We use the method on a sample of WD-FAB item responses and compare the resulting item discriminations to those obtained using the traditional posthoc method.
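For reference, a standard multidimensional two-parameter logistic (2PL) response model of the kind underlying such instruments; the paper's model additionally places sparsity-inducing shrinkage on the loadings and amortizes ability inference with an encoder, neither of which is shown here:

```latex
% Multidimensional 2PL item response model (standard form):
\[
  \Pr\bigl(y_{ij} = 1 \mid \boldsymbol{\theta}_i\bigr)
    = \sigma\!\bigl(\mathbf{a}_j^\top \boldsymbol{\theta}_i - b_j\bigr),
  \qquad \sigma(z) = \frac{1}{1 + e^{-z}},
\]
% where \boldsymbol{\theta}_i are respondent i's latent abilities, \mathbf{a}_j the
% item-j discriminations (loadings), and b_j its difficulty; an amortized encoder
% maps the response vector y_i to an approximate posterior over \boldsymbol{\theta}_i.
```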

Variational Inference (VI) is an attractive alternative to Markov Chain Monte Carlo (MCMC) due to its computational efficiency in the case of large datasets and/or complex models with high-dimensional parameters. However, evaluating the accuracy of variational approximations remains a challenge. Existing methods characterize the quality of the whole variational distribution, which is almost always poor in realistic applications, even if specific posterior functionals such as the component-wise means or variances are accurate. Hence, these diagnostics are of practical value only in limited circumstances. To address this issue, we propose the TArgeted Diagnostic for Distribution Approximation Accuracy (TADDAA), which uses many short parallel MCMC chains to obtain lower bounds on the error of each posterior functional of interest. We also develop a reliability check for TADDAA to determine when the lower bounds should not be trusted. Numerical experiments validate the practical utility and computational efficiency of our approach on a range of synthetic distributions and real-data examples, including sparse logistic regression and Bayesian neural network models.
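A toy sketch of the underlying idea: launch short MCMC chains from variational draws and measure how far a chosen posterior functional moves, which signals error in that functional. This is only an illustration on a one-dimensional Gaussian example, not TADDAA's actual lower-bound construction or reliability check:

```python
import numpy as np

def short_chain_shift(log_post, q_sample, functional, n_chains=200, n_steps=30,
                      step_size=0.5, seed=0):
    """Start short random-walk Metropolis chains at draws from the variational
    approximation q and report the shift in a Monte Carlo estimate of a
    posterior functional. A large shift suggests q misestimates that functional."""
    rng = np.random.default_rng(seed)
    x = q_sample(rng, n_chains)                 # chains start at VI draws
    start_value = functional(x)
    for _ in range(n_steps):
        prop = x + step_size * rng.normal(size=n_chains)
        accept = np.log(rng.uniform(size=n_chains)) < log_post(prop) - log_post(x)
        x = np.where(accept, prop, x)           # Metropolis accept/reject per chain
    return functional(x) - start_value

# Example: true posterior N(1, 1), variational approximation N(0, 1);
# the mean functional should drift toward the true posterior mean.
log_post = lambda x: -0.5 * (x - 1.0) ** 2
q_sample = lambda rng, n: rng.normal(loc=0.0, scale=1.0, size=n)
print(short_chain_shift(log_post, q_sample, functional=np.mean))
```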

In a seminal paper in 2013, Witt showed that the (1+1) Evolutionary Algorithm with standard bit mutation needs time $(1+o(1))n \ln n/p_1$ to find the optimum of any linear function, as long as the probability $p_1$ to flip exactly one bit is $\Theta(1)$. In this paper we investigate how this result generalizes if standard bit mutation is replaced by an arbitrary unbiased mutation operator. This situation is notably different, since the stochastic domination argument used for the lower bound by Witt no longer holds. In particular, starting closer to the optimum is not necessarily an advantage, and OneMax is no longer the easiest function for arbitrary starting position. Nevertheless, we show that Witt's result carries over if $p_1$ is not too small and if the number of flipped bits has bounded expectation $\mu$. Notably, this includes some of the heavy-tail mutation operators used in fast genetic algorithms, but not all of them. We also give examples showing that algorithms with unbounded $\mu$ have qualitatively different trajectories close to the optimum.
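For readers unfamiliar with the algorithm, a minimal implementation of the (1+1) EA with standard bit mutation maximizing a linear function with positive weights; replacing the mutation step with another unbiased operator is exactly the generalization studied here:

```python
import numpy as np

def one_plus_one_ea(weights, max_iters=200_000, seed=0):
    """(1+1) EA with standard bit mutation (flip each bit independently with
    probability 1/n), maximizing the linear function w . x with w > 0.
    Returns the number of iterations until the all-ones optimum is found."""
    rng = np.random.default_rng(seed)
    n = len(weights)
    x = rng.integers(0, 2, size=n)             # random initial bit string
    fx = weights @ x
    for t in range(1, max_iters + 1):
        flips = rng.random(n) < 1.0 / n        # standard bit mutation
        y = np.where(flips, 1 - x, x)
        fy = weights @ y
        if fy >= fx:                           # accept if not worse
            x, fx = y, fy
        if fx == weights.sum():                # optimum for positive weights
            return t
    return None

# OneMax (all weights equal) and a general positive linear function:
print(one_plus_one_ea(np.ones(100)))
print(one_plus_one_ea(np.arange(1, 101, dtype=float)))
```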
