
In Bayesian inference, a widespread technique to approximately sample from and compute statistics of a high-dimensional posterior is the Laplace approximation, a Gaussian proxy to the posterior. The accuracy of the Laplace approximation improves as the sample size grows, but the question of how fast the dimension $d$ can grow with the sample size $n$ while retaining accuracy has not been fully resolved. Prior works have shown that $d^3\ll n$ is a sufficient condition for accuracy of the approximation. By deriving the leading-order contribution to the total variation (TV) error, we show that $d^2\ll n$ is in fact sufficient. We show for a logistic regression posterior that this growth condition is necessary.
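As a concrete illustration of the technique discussed above, the following sketch builds a Laplace approximation to a logistic-regression posterior: the Gaussian proxy is centred at the MAP estimate, with covariance given by the inverse Hessian of the negative log-posterior. The synthetic data, the standard-normal prior, and the toy values of $n$ and $d$ are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a Laplace approximation for a logistic-regression posterior.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d = 500, 5                                   # toy sample size and dimension
X = rng.normal(size=(n, d))
beta_true = rng.normal(size=d)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

def neg_log_post(beta):
    # negative log-posterior: logistic log-likelihood plus a N(0, I) prior (assumed)
    eta = X @ beta
    return np.sum(np.logaddexp(0.0, eta) - y * eta) + 0.5 * beta @ beta

def neg_log_post_hess(beta):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    return X.T @ (X * (p * (1 - p))[:, None]) + np.eye(d)

# mean of the Gaussian proxy = MAP estimate
map_fit = minimize(neg_log_post, np.zeros(d), method="BFGS")
mu_hat = map_fit.x
# covariance of the proxy = inverse Hessian of the negative log-posterior at the MAP
Sigma_hat = np.linalg.inv(neg_log_post_hess(mu_hat))

# approximate posterior samples drawn from the Gaussian proxy
samples = rng.multivariate_normal(mu_hat, Sigma_hat, size=1000)
print("MAP estimate:", np.round(mu_hat, 3))
```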

Related content

Randomized trials balance all covariates on average and provide the gold standard for estimating treatment effects. Chance imbalances nevertheless arise to varying degrees in realized treatment allocations, raising an important question: what should we do if the treatment groups differ with respect to some important baseline characteristics? A common strategy is to conduct a {\it preliminary test} of the balance of baseline covariates after randomization, and to invoke covariate adjustment for subsequent inference if and only if the realized allocation fails some prespecified criterion. Although this practice is intuitive and popular among practitioners, the existing literature has so far evaluated its properties only under strong parametric model assumptions, in theory and in simulation, yielding results of limited generality. To fill this gap, we examine two strategies for conducting preliminary test-based covariate adjustment by regression, and evaluate the validity and efficiency of the resulting inferences from the randomization-based perspective. As it turns out, the preliminary-test estimator based on the analysis of covariance can be even less efficient than the unadjusted difference in means, and risks anticonservative confidence intervals based on the normal approximation even with the robust standard error. The preliminary-test estimator based on the fully interacted specification is, on the other hand, less efficient than its counterpart under the {\it always-adjust} strategy, and yields overconservative confidence intervals based on the normal approximation. Based on theory and simulation, we echo the existing literature and do not recommend the preliminary-test procedure for covariate adjustment in randomized trials.
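The following sketch makes the preliminary-test strategy concrete on simulated data: a balance test on the baseline covariate decides between the unadjusted difference in means and regression (ANCOVA) adjustment. The simulated trial, the t-test criterion, and the 0.1 threshold are illustrative assumptions rather than the paper's exact setup.

```python
# Minimal sketch of preliminary-test-based covariate adjustment in a simulated trial.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)                       # baseline covariate
z = rng.binomial(1, 0.5, size=n)             # randomized treatment assignment
y = 0.5 * z + 1.0 * x + rng.normal(size=n)   # outcome; true effect 0.5 (assumed)

# Step 1: preliminary balance test on the baseline covariate
_, p_balance = stats.ttest_ind(x[z == 1], x[z == 0])

if p_balance < 0.1:
    # Step 2a: imbalance flagged -> ANCOVA (additive covariate adjustment), robust SE
    design = sm.add_constant(np.column_stack([z, x]))
    fit = sm.OLS(y, design).fit(cov_type="HC2")
    est, se = fit.params[1], fit.bse[1]
else:
    # Step 2b: balance accepted -> unadjusted difference in means with Neyman SE
    est = y[z == 1].mean() - y[z == 0].mean()
    se = np.sqrt(y[z == 1].var(ddof=1) / (z == 1).sum()
                 + y[z == 0].var(ddof=1) / (z == 0).sum())

print(f"estimate {est:.3f}, standard error {se:.3f}")
```

The fully interacted specification examined in the paper would instead include the treatment-by-centred-covariate interaction as an additional regressor in the adjusted branch.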

We consider a non-linear Bayesian data assimilation model for the periodic two-dimensional Navier-Stokes equations with initial condition modelled by a Gaussian process prior. We show that if the system is updated with sufficiently many discrete noisy measurements of the velocity field, then the posterior distribution eventually concentrates near the ground truth solution of the time evolution equation, and in particular that the initial condition is recovered consistently by the posterior mean vector field. We further show that the convergence rate cannot in general be faster than inverse logarithmic in the sample size, but we describe specific conditions on the initial condition under which faster rates are possible. In the proofs we provide an explicit quantitative estimate for backward uniqueness of solutions of the two-dimensional Navier-Stokes equations.
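As a rough illustration of the observation model (not of the analysis), the sketch below draws a periodic, divergence-free Gaussian random velocity field on the torus, standing in for a draw from a Gaussian process prior on the initial condition, and generates discrete noisy point measurements of it. The Fourier cutoff, spectral decay rate, and noise level are illustrative assumptions.

```python
# Minimal sketch: periodic divergence-free Gaussian random field plus noisy point data.
import numpy as np

rng = np.random.default_rng(2)
K, alpha, sigma = 8, 2.0, 0.05   # Fourier cutoff, smoothness, noise std (assumed)

# random Fourier coefficients with polynomially decaying variances
modes = [(k1, k2) for k1 in range(-K, K + 1) for k2 in range(-K, K + 1)
         if (k1, k2) != (0, 0)]
coef = {k: rng.normal(scale=(k[0] ** 2 + k[1] ** 2) ** (-alpha / 2), size=2)
        for k in modes}

def velocity(x, y):
    """Divergence-free field: each Fourier mode points along the direction k-perp."""
    u = np.zeros(2)
    for (k1, k2), (a, b) in coef.items():
        phase = 2 * np.pi * (k1 * x + k2 * y)
        perp = np.array([-k2, k1]) / np.hypot(k1, k2)
        u += (a * np.cos(phase) + b * np.sin(phase)) * perp
    return u

# discrete noisy measurements of the velocity field at scattered locations
pts = rng.uniform(size=(50, 2))
data = np.array([velocity(px, py) for px, py in pts]) + sigma * rng.normal(size=(50, 2))
print(data[:3])
```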

The criticality problem in nuclear engineering asks for the principal eigen-pair of a Boltzmann operator describing neutron transport in a reactor core. Being able to reliably design and control such reactors requires assessing these quantities within quantifiable accuracy tolerances. In this paper we propose a paradigm that deviates from the common practice of approximately solving the corresponding spectral problem with a fixed, presumably sufficiently fine discretization. Instead, the present approach is based on first contriving iterative schemes, formulated in function space, that are shown to converge at a quantitative rate without assuming any a priori excess regularity properties, and that exploit only properties of the optical parameters in the underlying radiative transfer model. We develop the analytical and numerical tools for approximately realizing each iteration step within judiciously chosen accuracy tolerances, verified by a posteriori estimates, so as to still warrant quantifiable convergence to the exact eigen-pair. This is carried out in full first for a Newton scheme. Since this scheme is only locally convergent, we analyze in addition the convergence of a power iteration in function space to produce sufficiently accurate initial guesses. Here we have to deal with intrinsic difficulties posed by compact but non-symmetric operators, which prevent the standard arguments used in the finite-dimensional case. Our main point is that we can avoid any condition requiring an initial guess to already lie in a small neighborhood of the exact solution. We close with a discussion of remaining intrinsic obstructions to a certifiable numerical implementation, mainly related to not knowing the gap between the principal eigenvalue and the next smaller one in modulus.
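A finite-dimensional stand-in for the power iteration mentioned above is sketched below: an entrywise-positive, non-symmetric matrix plays the role of a discretized transport operator (its Perron eigen-pair mirroring the principal eigen-pair of the criticality problem), and the iteration stops once an a posteriori residual tolerance is met. The matrix, tolerance, and eigenvalue estimate are illustrative assumptions.

```python
# Minimal sketch: power iteration with residual-based (a posteriori) stopping.
import numpy as np

rng = np.random.default_rng(3)
n = 200
A = rng.uniform(size=(n, n))      # entrywise positive, non-symmetric stand-in operator
# positive matrices have a simple dominant (Perron) eigen-pair, mirroring the
# principal eigen-pair sought in the criticality problem

phi = np.ones(n) / np.sqrt(n)
for it in range(10_000):
    w = A @ phi
    lam = phi @ w                              # Rayleigh-quotient eigenvalue estimate
    phi = w / np.linalg.norm(w)
    residual = np.linalg.norm(A @ phi - lam * phi)
    if residual < 1e-10:                       # a posteriori stopping criterion
        break

print(f"iterations: {it}, principal eigenvalue approx {lam:.6f}")
print("check against dense eigensolver:", np.max(np.abs(np.linalg.eigvals(A))).round(6))
```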

In this paper we derive tight lower bounds resolving the hardness status of several fundamental weighted matroid problems. One notable example is budgeted matroid independent set, for which we show there is no fully polynomial-time approximation scheme (FPTAS), indicating that the Efficient PTAS of [Doron-Arad, Kulik and Shachnai, SOSA 2023] is the best possible. Furthermore, we show that there is no pseudo-polynomial time algorithm for exact-weight matroid independent set, implying that the algorithm of [Camerini, Galbiati and Maffioli, J. Algorithms 1992] for representable matroids cannot be generalized to arbitrary matroids. Similarly, we show there is no FPTAS for constrained minimum basis of a matroid or for knapsack cover with a matroid, implying that the existing Efficient PTAS for the former is optimal. For all of the above problems, we obtain unconditional lower bounds in the oracle model, where the independent sets of the matroid can be accessed only via a membership oracle. We complement these results by showing that the same lower bounds hold under standard complexity assumptions, even if the matroid is encoded as part of the instance. All of our bounds are based on a specifically structured family of paving matroids.
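To make the oracle model referred to above concrete, the sketch below exposes a matroid only through a membership oracle and runs the classic greedy algorithm for maximum-weight independent set against it. The partition matroid and the weights are illustrative assumptions; the paper's contributions are lower bounds in this access model, not algorithms.

```python
# Minimal sketch of the membership-oracle model for matroids.

def make_partition_matroid_oracle(blocks, capacities):
    """Membership oracle: S is independent iff it meets each block within its capacity."""
    def is_independent(S):
        return all(sum(1 for e in S if e in blk) <= cap
                   for blk, cap in zip(blocks, capacities))
    return is_independent

def greedy_max_weight(elements, weights, is_independent):
    """Classic matroid greedy: scan by decreasing weight, keep elements that stay independent."""
    chosen = set()
    for e in sorted(elements, key=lambda e: -weights[e]):
        if is_independent(chosen | {e}):       # the only access to the matroid is this oracle call
            chosen.add(e)
    return chosen

elements = range(6)
weights = {0: 5, 1: 4, 2: 4, 3: 3, 4: 2, 5: 1}
oracle = make_partition_matroid_oracle(blocks=[{0, 1, 2}, {3, 4, 5}], capacities=[1, 2])
print(greedy_max_weight(elements, weights, oracle))   # greedy optimum for this instance
```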

This work addresses a version of the two-armed Bernoulli bandit problem where the sum of the means of the arms is one (the symmetric two-armed Bernoulli bandit). In a regime where the gap between these means goes to zero as the number of prediction periods approaches infinity, i.e., the difficulty of detecting the gap increases as the sample size increases, we obtain the leading-order terms of the minimax optimal regret and pseudoregret for this problem by associating each of them with a solution of a linear heat equation. Our results improve upon the previously known results; specifically, we explicitly compute these leading-order terms in three different scaling regimes for the gap. Additionally, we obtain new non-asymptotic bounds for any given time horizon. Although optimal player strategies are not known for more general bandit problems, there is significant interest in how regret accumulates under specific player strategies, even when they are not known to be optimal. We expect the methods of this paper to be useful in settings of that type.
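The sketch below simulates the symmetric two-armed Bernoulli bandit described above, with arm means $p$ and $1-p$ and a gap shrinking with the horizon, and records both the realized regret and the pseudoregret under a standard UCB1 player. The player strategy and the particular gap scaling are illustrative assumptions, not the strategies analyzed in the paper.

```python
# Minimal sketch: symmetric two-armed Bernoulli bandit, regret and pseudoregret under UCB1.
import numpy as np

rng = np.random.default_rng(4)
T = 100_000
gap = 1.0 / np.sqrt(T)                              # gap shrinking with the horizon (assumed scaling)
means = np.array([0.5 + gap / 2, 0.5 - gap / 2])    # arm means sum to one

counts, sums = np.zeros(2), np.zeros(2)
pulls_of_worse, reward_total = 0, 0.0

for t in range(1, T + 1):
    if t <= 2:
        arm = t - 1                                 # pull each arm once
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))
    r = rng.binomial(1, means[arm])
    counts[arm] += 1
    sums[arm] += r
    reward_total += r
    pulls_of_worse += (arm == 1)

pseudoregret = gap * pulls_of_worse                 # gap times pulls of the inferior arm
regret = T * means.max() - reward_total             # realized regret
print(f"pseudoregret {pseudoregret:.2f}, regret {regret:.2f}")
```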

In this paper, we extend the Generalized Finite Difference Method (GFDM) to unknown compact submanifolds of Euclidean space, identified by randomly sampled data that (almost surely) lie in the interior of the manifolds. Theoretically, we formalize GFDM by exploiting a representation of smooth functions on the manifolds with Taylor expansions of polynomials defined on the tangent bundles. We illustrate the approach by approximating the Laplace-Beltrami operator, where a stable approximation is achieved by combining the Generalized Moving Least-Squares (GMLS) algorithm with a novel linear program that relaxes the diagonal-dominance constraint on the estimator, allowing a feasible solution even when higher-order polynomials are employed. We establish the theoretical convergence of GFDM in solving Poisson PDEs and numerically demonstrate the accuracy on simple smooth manifolds of low and moderately high co-dimension as well as on unknown 2D surfaces. For the Dirichlet Poisson problem where no data points on the boundary are available, we employ GFDM with the volume-constraint approach, which imposes the boundary conditions on data points close to the boundary. When the location of the boundary is unknown, we introduce a novel technique to detect points close to the boundary without needing to estimate the distance of the sampled data points to the boundary. We demonstrate the effectiveness of the volume constraint imposed on the data points detected by this new technique, compared with imposing the boundary conditions on all points within a certain distance from the boundary; the latter is sensitive to the choice of truncation distance and requires knowledge of the boundary location.
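A low-dimensional sketch of the local-polynomial idea behind GFDM/GMLS follows: on scattered samples of the unit circle, the Laplace-Beltrami operator is approximated at each point by a least-squares quadratic fit in an estimated tangent coordinate. The neighborhood size and the circle test case are illustrative assumptions, and the paper's linear-programming stabilization is not reproduced.

```python
# Minimal sketch: local-polynomial estimate of the Laplace-Beltrami operator on the circle.
import numpy as np

rng = np.random.default_rng(5)
N, k = 400, 12
theta = np.sort(rng.uniform(0, 2 * np.pi, N))
P = np.column_stack([np.cos(theta), np.sin(theta)])   # random samples on the unit circle
f = np.cos(theta)                                     # test function; Lap_M f = -cos(theta)

lap_est = np.zeros(N)
for i in range(N):
    d2 = np.sum((P - P[i]) ** 2, axis=1)
    nbrs = np.argsort(d2)[:k]                         # k nearest neighbors (including the point)
    Q = P[nbrs] - P[i]
    # tangent direction at P[i] from the dominant singular vector of the local point cloud
    _, _, Vt = np.linalg.svd(Q, full_matrices=False)
    t = Q @ Vt[0]                                     # tangent coordinates of the neighbors
    # least-squares quadratic fit f ~ a0 + a1 t + a2 t^2 in the tangent coordinate
    A = np.column_stack([np.ones_like(t), t, t ** 2])
    a = np.linalg.lstsq(A, f[nbrs], rcond=None)[0]
    lap_est[i] = 2 * a[2]                             # second derivative w.r.t. arc length

print("max pointwise error:", np.max(np.abs(lap_est + np.cos(theta))))
```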

Computing the diameter of a graph, i.e., the largest shortest-path distance between any pair of vertices, is a fundamental problem that is central in fine-grained complexity. In undirected graphs, the Strong Exponential Time Hypothesis (SETH) yields a lower bound on the time vs. approximation trade-off that is quite close to the upper bounds. In \emph{directed} graphs, however, where only some of the upper bounds apply, much larger gaps remain. Since $d(u,v)$ may not be the same as $d(v,u)$, there are multiple ways to define the problem, the two most natural being the \emph{(one-way) diameter} ($\max_{(u,v)} d(u,v)$) and the \emph{roundtrip diameter} ($\max_{u,v} (d(u,v)+d(v,u))$). In this paper we make progress on the outstanding open question for each of them. -- We design the first algorithm for diameter in sparse directed graphs to achieve $n^{1.5-\varepsilon}$ time with an approximation factor better than $2$. The new upper bound trade-off makes the directed case appear more similar to the undirected case. Notably, this is the first algorithm for diameter in sparse graphs that benefits from fast matrix multiplication. -- We design new hardness reductions separating roundtrip diameter from directed and undirected diameter. In particular, a $1.5$-approximation in subquadratic time would refute the All-Nodes $k$-Cycle hypothesis, and any $(2-\varepsilon)$-approximation would imply a breakthrough algorithm for approximate $\ell_{\infty}$-Closest-Pair. Notably, these are the first conditional lower bounds for diameter that are not based on SETH.
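For reference, the two definitions contrasted above are computed exactly below by running BFS from every vertex of a small directed graph. This is the textbook quadratic-time baseline, not the paper's subquadratic algorithm, and the example graph is an illustrative assumption.

```python
# Minimal sketch: exact one-way and roundtrip diameter of an unweighted directed graph.
from collections import deque

def bfs_dist(adj, s):
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def diameters(adj, nodes):
    d = {u: bfs_dist(adj, u) for u in nodes}
    one_way = max(d[u][v] for u in nodes for v in nodes if v in d[u])
    roundtrip = max(d[u][v] + d[v][u]
                    for u in nodes for v in nodes
                    if v in d[u] and u in d[v])
    return one_way, roundtrip

adj = {0: [1], 1: [2, 0], 2: [3], 3: [0]}   # small strongly connected example
print(diameters(adj, range(4)))             # (one-way diameter, roundtrip diameter)
```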

The widespread use of maximum Jeffreys'-prior penalized likelihood in binomial-response generalized linear models, and in logistic regression in particular, is supported by the results of Kosmidis and Firth (2021, Biometrika), who show that the resulting estimates are always finite-valued, even in cases where the maximum likelihood estimates are not, which is a practical issue regardless of the size of the data set. In logistic regression, the implied adjusted score equations are formally bias-reducing in asymptotic frameworks with a fixed number of parameters, and appear to deliver a substantial reduction in the persistent bias of the maximum likelihood estimator in high-dimensional settings where the number of parameters grows asymptotically linearly and slower than the number of observations. In this work, we develop and present two new variants of iteratively reweighted least squares (IWLS) for estimating generalized linear models with adjusted score equations for mean bias reduction and for maximization of the likelihood penalized by a positive power of the Jeffreys'-prior penalty, which eliminate the requirement of storing $O(n)$ quantities in memory and can operate with data sets that exceed computer memory or even hard-drive capacity. We achieve that through incremental QR decompositions, which enable IWLS iterations to have access only to data chunks of predetermined size. We assess the procedures through a real-data application with millions of observations, and in high-dimensional logistic regression, where a large-scale simulation experiment produces concrete evidence for the existence of a simple adjustment to the maximum Jeffreys'-penalized likelihood estimates that delivers high accuracy in terms of signal recovery, even in cases where estimates from ML and other recently proposed corrective methods do not exist.
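The sketch below illustrates the chunk-wise mechanism described above: each IWLS iteration streams the data in fixed-size chunks and folds the weighted rows into a running QR factorization, so only $O(p^2)$ state is kept in memory. Plain maximum-likelihood logistic IWLS is used here; the adjusted-score and Jeffreys-penalty modifications of the paper are not reproduced, and the data generation and chunk size are illustrative assumptions.

```python
# Minimal sketch: logistic-regression IWLS with incremental QR over data chunks.
import numpy as np

rng = np.random.default_rng(6)
n, p, chunk = 20_000, 10, 1_000
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = rng.normal(scale=0.5, size=p)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

def chunks():
    for start in range(0, n, chunk):        # stands in for reading chunks from disk
        yield X[start:start + chunk], y[start:start + chunk]

beta = np.zeros(p)
for _ in range(8):                          # IWLS iterations
    R = np.zeros((0, p + 1))                # running triangular factor of [W^{1/2}X | W^{1/2}z]
    for Xc, yc in chunks():
        eta = Xc @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = mu * (1.0 - mu)                 # IWLS weights
        z = eta + (yc - mu) / w             # working response
        rows = np.sqrt(w)[:, None] * np.column_stack([Xc, z])
        # incremental QR: re-triangularize the old factor stacked on the new weighted rows
        R = np.linalg.qr(np.vstack([R, rows]), mode="r")
    beta = np.linalg.solve(R[:p, :p], R[:p, p])   # back-substitute for the WLS update

print("estimation error:", np.round(beta - beta_true, 3))
```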

The last decade has seen many attempts to generalise the definition of modes, or MAP estimators, of a probability distribution $\mu$ on a space $X$ to the case that $\mu$ has no continuous Lebesgue density, and in particular to infinite-dimensional Banach and Hilbert spaces $X$. This paper examines the properties of and connections among these definitions. We construct a systematic taxonomy -- or `periodic table' -- of modes that includes the established notions as well as large hitherto-unexplored classes. We establish implications between these definitions and provide counterexamples to distinguish them. We also distinguish those definitions that are merely `grammatically correct' from those that are `meaningful' in the sense of satisfying certain `common-sense' axioms for a mode, among them the correct handling of discrete measures and those with continuous Lebesgue densities. However, despite there being 17 such `meaningful' definitions of mode, we show that none of them satisfy the `merging property', under which the modes of $\mu|_{A}$, $\mu|_{B}$ and $\mu|_{A \cup B}$ enjoy a straightforward relationship for well-separated positive-mass events $A,B \subseteq X$.
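As a numerical illustration of one established notion covered by the taxonomy above, namely the small-ball (strong) mode, the sketch below locates the point whose ball probabilities $\mu(B_r(x))$ are maximal as the radius $r$ shrinks, for an asymmetric bimodal measure on the real line. The mixture and the radius grid are illustrative assumptions; the paper's full taxonomy is not reproduced.

```python
# Minimal sketch: locating the small-ball maximizer of a 1D measure as the radius shrinks.
import numpy as np
from scipy import stats

def ball_prob(cdf, x, r):
    return cdf(x + r) - cdf(x - r)

# an asymmetric bimodal measure: a broad heavy component and a sharp light one
mix_cdf = lambda x: (0.6 * stats.norm.cdf(x, loc=-2, scale=0.5)
                     + 0.4 * stats.norm.cdf(x, loc=2, scale=0.3))

grid = np.linspace(-4, 4, 4001)
for r in [1.0, 0.3, 0.1, 0.03]:
    probs = ball_prob(mix_cdf, grid, r)
    x_star = grid[np.argmax(probs)]
    print(f"r = {r:5.2f}: argmax_x mu(B_r(x)) approx {x_star:+.2f}")
# For large r the heavy component wins; as r -> 0 the maximizer settles at the
# sharper peak near +2, the point of maximal density.
```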

Analysis of high-dimensional data, where the number of covariates is larger than the sample size, is a topic of current interest. In such settings, an important goal is to estimate the signal level $\tau^2$ and noise level $\sigma^2$, i.e., to quantify how much variation in the response variable can be explained by the covariates, versus how much of the variation is left unexplained. This thesis considers the estimation of these quantities in a semi-supervised setting, where for many observations only the vector of covariates $X$ is given, with no response $Y$. Our main research question is: how can one use the unlabeled data to better estimate $\tau^2$ and $\sigma^2$? We consider two frameworks: a linear regression model and a linear projection model in which linearity is not assumed. In the first framework, while linear regression is used, no sparsity assumptions on the coefficients are made. In the second framework, the linearity assumption is also relaxed and we aim to estimate the signal and noise levels defined by the linear projection. We first propose a naive estimator which is unbiased and consistent, under some assumptions, in both frameworks. We then show how the naive estimator can be improved by using zero-estimators, where a zero-estimator is a statistic arising from the unlabeled data whose expected value is zero. In the first framework, we calculate the optimal zero-estimator improvement and discuss ways to approximate it. In the second framework, such optimality no longer holds, and we suggest two zero-estimators that improve the naive estimator, although not necessarily optimally. Furthermore, we show that our approach reduces the variance for general initial estimators, and we present an algorithm that can potentially improve any initial estimator. Lastly, we consider four datasets and study the performance of our suggested methods.
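The sketch below reduces the zero-estimator idea summarized above to the simpler task of estimating $E[Y]$ rather than the signal level: a statistic with expectation zero, built from the labeled and unlabeled covariates, is subtracted with an estimated coefficient to shrink the variance of an initial labeled-data estimator. The data-generating model and the plug-in coefficient are illustrative assumptions, not the thesis's construction.

```python
# Minimal sketch: variance reduction of an initial estimator via a zero-estimator.
import numpy as np

rng = np.random.default_rng(7)
n, m, reps = 100, 2_000, 5_000      # labeled size, unlabeled size, Monte Carlo replications
naive, improved = [], []

for _ in range(reps):
    x_lab = rng.normal(size=n)                          # labeled covariates
    x_unlab = rng.normal(size=m)                        # unlabeled covariates, same distribution
    y = 2.0 + x_lab + rng.normal(scale=0.5, size=n)     # labeled responses; target is E[Y] = 2

    theta_hat = y.mean()                                # initial (naive) estimator
    z = x_lab.mean() - x_unlab.mean()                   # zero-estimator: E[Z] = 0 by construction
    # plug-in coefficient approximating the variance-minimizing choice Cov(theta_hat, Z)/Var(Z)
    c_hat = np.cov(y, x_lab)[0, 1] / np.concatenate([x_lab, x_unlab]).var(ddof=1)

    naive.append(theta_hat)
    improved.append(theta_hat - c_hat * z)

print("Monte Carlo variance, naive   :", round(float(np.var(naive)), 5))
print("Monte Carlo variance, improved:", round(float(np.var(improved)), 5))
```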
