
Sparse linear regression methods for high-dimensional data often assume that residuals have constant variance. When this assumption is violated, it can lead to biased coefficient estimates, prediction intervals of inappropriate width, and inflated type I error rates. This paper proposes a heteroscedastic (H) high-dimensional linear regression model fitted through a partitioned empirical Bayes Expectation Conditional Maximization (H-PROBE) algorithm. H-PROBE is a computationally efficient maximum a posteriori (MAP) estimation approach based on a Parameter-Expanded Expectation-Conditional-Maximization (PX-ECM) algorithm. It requires minimal prior assumptions on the regression parameters through plug-in empirical Bayes estimates of hyperparameters. The variance model uses recent advances in multivariate log-Gamma distribution theory and can include covariates hypothesized to impact heterogeneity. Our approach is motivated by a study relating Aphasia Quotient (AQ) to high-resolution T2 neuroimages of brain damage in stroke patients. AQ is a vital measure of language impairment that informs treatment decisions, but it is challenging to measure and subject to heteroscedastic errors. It is therefore of clinical importance -- and the goal of this paper -- to use high-dimensional neuroimages to predict, and to provide prediction intervals for, AQ that accurately reflect the heterogeneity in the residual variance. Our analysis demonstrates that H-PROBE can use markers of heterogeneity to provide prediction intervals that are narrower than those of standard methods without sacrificing coverage. Further, extensive simulation studies show that the proposed approach yields superior prediction, variable selection, and predictive inference relative to competing methods.
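As a hedged illustration (not the H-PROBE algorithm itself, which relies on a PX-ECM with empirical Bayes hyperparameter estimates and a multivariate log-Gamma variance model), the sketch below shows the generic two-step idea of letting covariates that mark heterogeneity drive observation-specific prediction-interval widths; all variable names, dimensions, and tuning values are assumptions.

```python
# A hedged sketch, NOT the H-PROBE algorithm: a generic two-step
# heteroscedastic fit in which variance covariates ("markers of
# heterogeneity") give observation-specific prediction intervals.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 500
X = rng.standard_normal((n, p))      # high-dimensional mean predictors
Z = rng.standard_normal((n, 2))      # hypothesized variance covariates
beta = np.zeros(p); beta[:5] = 1.0
log_var = 0.5 * Z[:, 0] - 0.5        # true heteroscedastic log-variance
y = X @ beta + rng.standard_normal(n) * np.exp(0.5 * log_var)

# Step 1: sparse fit of the mean model.
mean_fit = Lasso(alpha=0.1).fit(X, y)
resid = y - mean_fit.predict(X)

# Step 2: regress log squared residuals on the variance covariates.
var_fit = LinearRegression().fit(Z, np.log(resid**2 + 1e-8))
sigma = np.exp(0.5 * var_fit.predict(Z))

# Observation-specific (approximate) 95% prediction intervals.
pred = mean_fit.predict(X)
lower, upper = pred - 1.96 * sigma, pred + 1.96 * sigma
print("mean interval width:", (upper - lower).mean())
```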

Related content

This article shows that a large class of posterior measures that are absolutely continuous with respect to a Gaussian prior have strong maximum a posteriori estimators in the sense of Dashti et al. (2013). This result holds in any separable Banach space and applies in particular to nonparametric Bayesian inverse problems with additive noise. When applied to Bayesian inverse problems, this significantly extends existing results on maximum a posteriori estimators by relaxing the conditions on the log-likelihood and on the space in which the inverse problem is set.
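For reference, the small-ball notion of a MAP estimator from Dashti et al. (2013) that this result builds on can be stated as follows, with $\mu$ the posterior measure on a separable Banach space $X$ and $B_\delta(u)$ the ball of radius $\delta$ centred at $u$:

```latex
% u^* is a strong MAP estimator if it asymptotically maximizes the
% posterior mass of vanishing balls:
\[
  \lim_{\delta \to 0}
  \frac{\mu\bigl(B_\delta(u^*)\bigr)}{\sup_{u \in X}\, \mu\bigl(B_\delta(u)\bigr)} = 1 .
\]
```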

In the field of explainable Artificial Intelligence (XAI), sequential counterfactual (SCF) examples are often used to alter the decision of a trained classifier by applying a sequence of modifications to the input instance. Although certain test-time algorithms optimize for each new instance individually, Reinforcement Learning (RL) methods have recently been proposed that learn policies for discovering SCFs, thereby enhancing scalability. As is typical in RL, the formulation of the problem, including the specification of the state space, actions, and rewards, can often be ambiguous. In this work, we identify shortcomings in existing methods that can result in policies with undesired properties, such as a bias towards specific actions. To mitigate this effect, we propose using the output probabilities of the classifier to construct a more informative reward, as sketched below.
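A minimal sketch of the probability-based reward idea, assuming a scikit-learn-style classifier exposing predict_proba; the cost term and function names are illustrative, not the paper's exact formulation.

```python
# Illustrative sketch of a probability-based SCF reward; the cost term
# and names are assumptions, not the paper's exact formulation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def scf_reward(clf, state_before, state_after, target_class, action_cost=0.01):
    """Reward an action by how much it moves the classifier's predicted
    probability of the target class, rather than by a sparse 0/1 flip."""
    p_before = clf.predict_proba(state_before.reshape(1, -1))[0, target_class]
    p_after = clf.predict_proba(state_after.reshape(1, -1))[0, target_class]
    return (p_after - p_before) - action_cost

# Toy usage on a synthetic classifier.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
y = (X[:, 0] > 0).astype(int)
clf = LogisticRegression().fit(X, y)
s = np.array([-1.0, 0.0, 0.0])
print(scf_reward(clf, s, s + np.array([0.5, 0.0, 0.0]), target_class=1))
```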

We establish a general convergence theory of the Rayleigh--Ritz method and the refined Rayleigh--Ritz method for computing a simple eigenpair $(\lambda_{*},x_{*})$ of a given analytic nonlinear eigenvalue problem (NEP). In terms of the deviation $\varepsilon$ of $x_{*}$ from a given subspace $\mathcal{W}$, we establish a priori convergence results on the Ritz value, the Ritz vector and the refined Ritz vector, and present sufficient conditions for their convergence. The results show that, as $\varepsilon\rightarrow 0$, there is a Ritz value that unconditionally converges to $\lambda_*$ and the corresponding refined Ritz vector does so too, but the Ritz vector may fail to converge and may not even be unique. We also present an error bound for the approximate eigenvector in terms of the computable residual norm of a given approximate eigenpair, and give lower and upper bounds for the errors of the refined Ritz vector and the Ritz vector as well as for the corresponding residual norms. These results nontrivially extend some convergence results on these two methods from the linear eigenvalue problem to the NEP. Examples are constructed to illustrate some of the results.
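For orientation, the standard Rayleigh--Ritz step for an NEP $T(\lambda)x = 0$ over a subspace $\mathcal{W}$ with orthonormal basis matrix $W$ reads:

```latex
% Project the NEP onto the subspace and solve the small problem
\[
  W^{H} T(\theta)\, W\, y = 0, \qquad y \neq 0,
\]
% accepting (theta, W y) as a Ritz pair; the refined variant instead
% minimizes || T(theta) W y || over unit-norm y for each Ritz value theta.
```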

We propose a data segmentation methodology for the high-dimensional linear regression problem where regression parameters are allowed to undergo multiple changes. The proposed methodology, MOSEG, proceeds in two stages: first, the data are scanned for multiple change points using a moving window-based procedure, which is followed by a location refinement stage. MOSEG enjoys computational efficiency thanks to the adoption of a coarse grid in the first stage, and achieves theoretical consistency in estimating both the total number and the locations of the change points, under general conditions permitting serial dependence and non-Gaussianity. We also propose MOSEG.MS, a multiscale extension of MOSEG which, while comparable to MOSEG in terms of computational complexity, achieves theoretical consistency for a broader parameter space where large parameter shifts over short intervals and small changes over long stretches of stationarity are simultaneously allowed. We demonstrate good performance of the proposed methods in comparative simulation studies and in an application to predicting the equity premium.
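A simplified sketch of the moving-window scan idea in the spirit of the first stage (plain lasso fits, a Euclidean contrast, no refinement stage); the window width G, grid step, and penalty are illustrative assumptions, not MOSEG's exact choices.

```python
# A simplified moving-window scan in the spirit of the first stage of
# MOSEG; the window width G, grid step, and penalty are illustrative,
# and the location refinement stage is omitted.
import numpy as np
from sklearn.linear_model import Lasso

def moving_window_scan(X, y, G, grid_step, alpha=0.1):
    """Contrast lasso fits on [k - G, k) and [k, k + G) over a coarse grid."""
    n = len(y)
    stats = {}
    for k in range(G, n - G + 1, grid_step):
        left = Lasso(alpha=alpha).fit(X[k - G:k], y[k - G:k]).coef_
        right = Lasso(alpha=alpha).fit(X[k:k + G], y[k:k + G]).coef_
        stats[k] = np.linalg.norm(left - right)  # large values hint at a change
    return stats

# Toy usage: a single parameter change at t = 150.
rng = np.random.default_rng(2)
n, p = 300, 50
X = rng.standard_normal((n, p))
beta1 = np.zeros(p); beta1[:3] = 1.0
beta2 = np.zeros(p); beta2[:3] = -1.0
y = np.r_[X[:150] @ beta1, X[150:] @ beta2] + 0.5 * rng.standard_normal(n)
stats = moving_window_scan(X, y, G=60, grid_step=10)
print(max(stats, key=stats.get))  # grid point with the largest contrast
```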

Motivated by models of human decision making proposed to explain commonly observed deviations from conventional expected value preferences, we formulate two stochastic multi-armed bandit problems with distorted probabilities on the reward distributions: the classic $K$-armed bandit and the linearly parameterized bandit settings. We consider both problems in the regret minimization as well as the best arm identification framework. For the regret minimization setting in the $K$-armed as well as the linear bandit problem, we propose algorithms that are inspired by Upper Confidence Bound (UCB) algorithms, incorporate reward distortions, and exhibit sublinear regret. For the $K$-armed bandit setting, we derive an upper bound on the expected regret of our proposed algorithm and then prove a matching lower bound to establish its order-optimality. For the linearly parameterized setting, our algorithm achieves a regret upper bound of the same order as that of the standard Optimism in the Face of Uncertainty Linear bandit (OFUL) algorithm and, unlike OFUL, handles distortions and an arm-dependent noise model. For the best arm identification problem in the $K$-armed bandit setting, we propose algorithms, derive guarantees on their performance, and show that these algorithms are order-optimal by proving matching fundamental limits on performance. For best arm identification in linear bandits, we propose an algorithm and establish sample complexity guarantees. Finally, we present simulation experiments which demonstrate the advantages of distortion-aware learning algorithms in a vehicular traffic routing application.
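To make the distortion concrete: one can estimate a distorted value of an arm by applying a probability weighting function to the empirical decumulative reward distribution and then add a UCB-style exploration bonus. The sketch below uses the Tversky-Kahneman weighting function; the confidence term and constants are illustrative, not the paper's exact algorithms.

```python
# Sketch of a distortion-aware UCB index; the weighting function and
# confidence term are illustrative, not the paper's exact algorithms.
import numpy as np

def w_tk(p, a=0.65):
    """Tversky-Kahneman probability weighting function."""
    return p**a / (p**a + (1 - p)**a) ** (1 / a)

def distorted_mean(rewards, w=w_tk):
    """Apply the weight function w to the empirical decumulative
    distribution of the observed rewards (a CPT-style estimate)."""
    x = np.sort(rewards)[::-1]              # rewards in descending order
    n = len(x)
    probs = np.arange(1, n + 1) / n
    return float(np.sum((w(probs) - w(probs - 1.0 / n)) * x))

def ucb_index(rewards, t, c=1.0):
    """Distorted empirical value plus a UCB-style exploration bonus."""
    return distorted_mean(np.asarray(rewards, dtype=float)) \
        + c * np.sqrt(np.log(t) / len(rewards))
```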

Selecting the step size of the Metropolis-adjusted Langevin algorithm (MALA) is necessary to obtain satisfactory performance. However, finding an adequate step size for an arbitrary target distribution can be a difficult task, and even the best step size can perform poorly in specific regions of the space when the target distribution is sufficiently complex. To resolve this issue we introduce autoMALA, a new Markov chain Monte Carlo algorithm based on MALA that automatically sets its step size at each iteration based on the local geometry of the target distribution. We prove that autoMALA has the correct invariant distribution, despite continual automatic adjustments of the step size. Our experiments demonstrate that autoMALA is competitive with related state-of-the-art MCMC methods in terms of the number of log-density evaluations per effective sample, and that it outperforms state-of-the-art samplers on targets with varying geometries. Furthermore, we find that autoMALA tends to find step sizes comparable to those of an optimally tuned MALA when a fixed step size suffices for the whole domain.
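For context, the following is a standard fixed-step MALA implementation; autoMALA's automatic, locally adaptive step-size selection is the paper's contribution and is not reproduced here.

```python
# Standard fixed-step MALA for reference (no automatic step-size
# selection); `step` plays the role of h^2 / 2 in some parametrizations.
import numpy as np

def mala(log_density, grad_log_density, x0, step, n_steps, rng):
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        prop = x + step * grad_log_density(x) + np.sqrt(2 * step) * noise

        def log_q(a, b):  # log proposal density q(a | b), up to a constant
            diff = a - b - step * grad_log_density(b)
            return -np.sum(diff**2) / (4 * step)

        log_accept = (log_density(prop) + log_q(x, prop)
                      - log_density(x) - log_q(prop, x))
        if np.log(rng.uniform()) < log_accept:  # Metropolis-Hastings correction
            x = prop
        samples.append(x.copy())
    return np.array(samples)

# Toy usage: standard Gaussian target.
rng = np.random.default_rng(3)
draws = mala(lambda x: -0.5 * np.sum(x**2), lambda x: -x,
             np.zeros(2), step=0.5, n_steps=1000, rng=rng)
print(draws.mean(axis=0))
```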

Contraction in Wasserstein 1-distance with explicit rates is established for generalized Hamiltonian Monte Carlo with stochastic gradients under possibly nonconvex conditions. The algorithms considered include splitting schemes of kinetic Langevin diffusion. As a consequence, quantitative Gaussian concentration bounds are provided for empirical averages. Convergence in Wasserstein 2-distance, total variation, and relative entropy is also given, together with numerical bias estimates.
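A sketch of the kind of scheme the abstract refers to: a BAOAB-type splitting of kinetic Langevin dynamics with a subsampled (stochastic) gradient. The step size, friction, and minibatch handling here are illustrative assumptions rather than the paper's exact algorithms.

```python
# A BAOAB-type splitting of kinetic Langevin dynamics with a subsampled
# gradient; step size, friction, and minibatching are illustrative.
import numpy as np

def sg_baoab(grad_batch, data, x0, h, gamma, n_steps, batch_size, rng):
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    c = np.exp(-gamma * h)                    # Ornstein-Uhlenbeck damping
    n = len(data)

    def g(x):
        idx = rng.choice(n, size=batch_size, replace=False)
        return grad_batch(x, data[idx]) * (n / batch_size)  # unbiased estimate

    for _ in range(n_steps):
        v = v - 0.5 * h * g(x)                # B: half kick
        x = x + 0.5 * h * v                   # A: half drift
        v = c * v + np.sqrt(1 - c**2) * rng.standard_normal(x.shape)  # O
        x = x + 0.5 * h * v                   # A: half drift
        v = v - 0.5 * h * g(x)                # B: half kick
    return x, v

# Toy usage: U(x) = sum_i (x - d_i)^2 / 2, minibatch gradient sums (x - d_i).
rng = np.random.default_rng(5)
data = rng.standard_normal(1000)
x, v = sg_baoab(lambda x, batch: np.sum(x - batch), data, x0=np.zeros(1),
                h=0.01, gamma=1.0, n_steps=2000, batch_size=100, rng=rng)
print(x)  # concentrates near the data mean
```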

Numerical resolution of high-dimensional nonlinear PDEs remains a formidable challenge due to the curse of dimensionality. Starting from the weak formulation of the Lawson-Euler scheme, this paper proposes a stochastic particle method (SPM) that tracks the deterministic motion, random jumps, resampling, and reweighting of particles. SPM adopts real-valued weighted particles to approximate the high-dimensional solution, automatically adjusting the point distribution to match the relevant features of the solution. A piecewise constant reconstruction on a virtual uniform grid is employed to evaluate the nonlinear terms, fully exploiting the intrinsic adaptive character of SPM. Combining the two allows SPM to achieve adaptive sampling in time. Numerical experiments on the 6-D Allen-Cahn equation and the 7-D Hamilton-Jacobi-Bellman equation demonstrate the potential of SPM for solving high-dimensional nonlinear PDEs efficiently while maintaining acceptable accuracy.
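As a hedged illustration of one ingredient such signed-particle methods require, the sketch below resamples real-valued weighted particles by drawing indices proportionally to |weight| while retaining each particle's sign; SPM's reconstruction and reweighting details are not reproduced here.

```python
# Hedged sketch of resampling signed (real-valued) weighted particles:
# draw indices proportionally to |w| and retain each particle's sign.
# SPM's reconstruction and reweighting steps are not reproduced here.
import numpy as np

def resample_signed(points, weights, rng):
    """Return equally-weighted signed particles whose expectation matches
    the signed measure sum_i w_i * delta(x - x_i)."""
    w_abs = np.abs(weights)
    total = w_abs.sum()
    idx = rng.choice(len(weights), size=len(weights), p=w_abs / total)
    new_weights = np.sign(weights[idx]) * (total / len(weights))
    return points[idx], new_weights

# Toy usage with particles in 6 dimensions.
rng = np.random.default_rng(6)
pts = rng.standard_normal((500, 6))
w = rng.standard_normal(500)                    # signed weights
new_pts, new_w = resample_signed(pts, w, rng)
print(new_pts.shape, np.unique(np.abs(new_w)))  # equal magnitudes, mixed signs
```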

Most identification methods for the unknown parameters of linear regression equations (LRE) ensure only boundedness of the parameter error in the presence of additive perturbations, which is almost always unacceptable in practical scenarios. In this paper, a new identification law is proposed that overcomes this drawback and guarantees asymptotic convergence of the unknown-parameter estimation error to zero, provided that the aforementioned additive perturbation satisfies special averaging conditions. Theoretical results are illustrated by numerical simulations.
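For reference, the classical gradient identification law for an LRE $y(t)=\phi(t)^{\top}\theta$, which under perturbations guarantees only boundedness and which the paper improves upon, can be sketched in discrete time as follows; the gain, regressor, and noise level are illustrative.

```python
# The classical gradient identification law theta_dot = gamma * phi * e,
# e = y - phi^T theta_hat, discretized with an Euler step; the gain,
# regressor, and noise level are illustrative.
import numpy as np

def gradient_estimator(phi_seq, y_seq, gamma, dt):
    theta_hat = np.zeros(phi_seq.shape[1])
    for phi, y in zip(phi_seq, y_seq):
        e = y - phi @ theta_hat                        # prediction error
        theta_hat = theta_hat + dt * gamma * phi * e   # Euler step of the law
    return theta_hat

# Toy usage with a persistently exciting regressor and a small perturbation.
rng = np.random.default_rng(4)
t = np.arange(0.0, 50.0, 0.01)
phi = np.c_[np.sin(t), np.cos(0.5 * t)]
theta = np.array([2.0, -1.0])
y = phi @ theta + 0.01 * rng.standard_normal(len(t))
print(gradient_estimator(phi, y, gamma=5.0, dt=0.01))  # close to [2, -1]
```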

In this paper we consider the finite element approximation of Maxwell's problem and analyse the weak prescription of essential boundary conditions using Nitsche's method. To avoid indefiniteness of the problem, the original equations are augmented with the gradient of a scalar field that allows one to impose the zero divergence of the magnetic induction, even though the exact solution for this scalar field is zero. Two finite element approximations are considered: one in which the approximation spaces are assumed to satisfy the appropriate inf-sup condition that renders the standard Galerkin method stable, and an augmented and stabilised one that permits finite element interpolations of arbitrary order. Stability and convergence results are provided for both formulations.
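For readers unfamiliar with Nitsche's method, its generic structure for weakly imposing a Dirichlet condition $u = g$ is shown below for the Poisson model problem; the paper's Maxwell formulation involves additional terms and the scalar-field augmentation.

```latex
% Symmetric Nitsche formulation for -Delta u = f with u = g on the
% boundary: find u such that for all test functions v,
\[
  \int_\Omega \nabla u \cdot \nabla v \,dx
  - \int_{\partial\Omega} (\partial_n u)\, v \,ds
  - \int_{\partial\Omega} (\partial_n v)\,(u - g) \,ds
  + \frac{\gamma}{h} \int_{\partial\Omega} (u - g)\, v \,ds
  = \int_\Omega f\, v \,dx ,
\]
% where h is the mesh size and gamma > 0 a sufficiently large penalty.
```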
