
One of the most prominent methods for uncertainty quantification in high-dimensional statistics is the desparsified LASSO, which relies on unconstrained $\ell_1$-minimization. The majority of initial works focused on real (sub-)Gaussian designs. However, in many applications, such as magnetic resonance imaging (MRI), the measurement process possesses a certain structure due to the nature of the problem. The measurement operator in MRI can be described by a subsampled Fourier matrix. The purpose of this work is to extend the uncertainty quantification process using the desparsified LASSO to design matrices originating from a bounded orthonormal system, which naturally generalizes the subsampled Fourier case and also allows for the treatment of the case where the sparsity basis is not the standard basis. In particular, we construct honest confidence intervals for every pixel of an MR image that is sparse in the standard basis, provided the number of measurements satisfies $n \gtrsim \max\{s \log^2 s \log p,\; s \log^2 p\}$, or that is sparse with respect to the Haar wavelet basis, provided a slightly larger number of measurements.
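
For orientation, the following is a minimal numerical sketch of the generic desparsified LASSO with nodewise regression, not the paper's construction for bounded orthonormal systems; the regularization levels are standard-rate placeholders to be tuned in practice.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import Lasso

def desparsified_lasso_ci(X, y, alpha=0.05, lam=None, lam_node=None):
    """Entrywise confidence intervals from the generic desparsified LASSO.

    An approximate precision matrix Theta is built by nodewise LASSO
    regressions; regularization defaults to the usual sqrt(log p / n) rate
    (an assumption, not the paper's tuning)."""
    n, p = X.shape
    lam = lam if lam is not None else np.sqrt(np.log(p) / n)
    lam_node = lam_node if lam_node is not None else np.sqrt(np.log(p) / n)

    beta = Lasso(alpha=lam, fit_intercept=False).fit(X, y).coef_
    resid = y - X @ beta
    dof = max(n - np.count_nonzero(beta), 1)
    sigma_hat = np.sqrt(resid @ resid / dof)          # noise-level estimate

    Sigma = X.T @ X / n
    Theta = np.zeros((p, p))
    for j in range(p):                                # nodewise regressions
        X_mj = np.delete(X, j, axis=1)
        gamma = Lasso(alpha=lam_node, fit_intercept=False).fit(X_mj, X[:, j]).coef_
        tau2 = X[:, j] @ (X[:, j] - X_mj @ gamma) / n
        Theta[j] = np.insert(-gamma, j, 1.0) / tau2

    b = beta + Theta @ X.T @ resid / n                # de-sparsified estimator
    se = sigma_hat * np.sqrt(np.diag(Theta @ Sigma @ Theta.T) / n)
    z = norm.ppf(1 - alpha / 2)
    return b - z * se, b + z * se                     # (1 - alpha) intervals
```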

Related Content

Compact finite-difference (FD) schemes specify derivative approximations implicitly, so achieving parallelism through domain decomposition requires suitable partitioning of the associated linear systems. In wave-propagation problems it is crucial to maintain a consistent order of accuracy, dispersion, and dissipation, so that the spectra of the discretized problems are not deformed too severely. In this work we numerically tune the spectral error at fixed formal order of accuracy to automatically devise new compact FD schemes. Grid-convergence tests indicate an error reduction of at least an order of magnitude over standard FD. A proposed hybrid matching-communication strategy maintains the aforementioned properties under domain decomposition. The improvement is found to remain robust when evolving linear wave-propagation problems with exponential integration or explicit Runge-Kutta methods. We provide a first demonstration that compact FD methods may be applied to the Z4c formulation of numerical relativity by coupling our header-only, templated C++ implementation to the highly performant GR-Athena++ code. Evolving Z4c on test-bed problems shows at least an order-of-magnitude reduction in phase error compared to standard FD for propagated metric components. Stable binary-black-hole evolution utilizing compact FD, together with improved convergence, is also demonstrated.
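
For context, here is a minimal sketch of the classical fourth-order compact (Padé) first derivative on a periodic grid, not one of the spectrally tuned schemes devised in the paper; a dense solve stands in for the (cyclic) tridiagonal solver a production code would use.

```python
import numpy as np

def compact_first_derivative(f, h):
    """Classical 4th-order compact first derivative on a periodic grid:
        (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3/(4h) * (f_{i+1} - f_{i-1}).
    The implicit left-hand side is what makes compact schemes nonlocal."""
    n = len(f)
    A = np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
    A[0, -1] = A[-1, 0] = 0.25                       # periodic wrap-around
    rhs = 3.0 / (4.0 * h) * (np.roll(f, -1) - np.roll(f, 1))
    return np.linalg.solve(A, rhs)

# grid-convergence check against an exact derivative
for n in (32, 64, 128):
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    err = np.max(np.abs(compact_first_derivative(np.sin(x), x[1] - x[0]) - np.cos(x)))
    print(n, err)    # error drops roughly 16x per doubling (4th order)
```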

In this paper we present an active-set method for the solution of $\ell_1$-regularized convex quadratic optimization problems. It is derived by combining a proximal method of multipliers (PMM) strategy with a standard semismooth Newton method (SSN). The resulting linear systems are solved using a Krylov-subspace method, accelerated by certain general-purpose preconditioners which are shown to be optimal with respect to the proximal parameters. Practical efficiency is further improved by warm-starting the algorithm using a proximal alternating direction method of multipliers. We show that the outer PMM achieves global convergence under mere feasibility assumptions. Under additional standard assumptions, the PMM scheme achieves global linear and local superlinear convergence. The SSN scheme is locally superlinearly convergent, assuming that its associated linear systems are solved accurately enough, and globally convergent under certain additional regularity assumptions. We provide numerical evidence to demonstrate the effectiveness of the approach by comparing it against OSQP and IP-PMM (an ADMM and a regularized IPM solver, respectively) on several elastic-net linear regression and $L^1$-regularized PDE-constrained optimization problems.
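
To fix notation, a baseline sketch of the problem class follows: a plain proximal-gradient (ISTA) iteration built on the $\ell_1$ soft-thresholding operator. This is emphatically not the paper's PMM-SSN method; it only illustrates the objective being solved.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (the core nonsmooth ingredient)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_l1_qp(Q, c, lam, iters=500):
    """Solve min_x 0.5 x'Qx + c'x + lam * ||x||_1 (Q symmetric PSD) by
    proximal gradient; a simple baseline, not the paper's active-set scheme."""
    L = np.linalg.norm(Q, 2)               # Lipschitz constant of the gradient
    x = np.zeros_like(c, dtype=float)
    for _ in range(iters):
        x = soft_threshold(x - (Q @ x + c) / L, lam / L)
    return x
```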

We present a multigrid algorithm to efficiently solve the large saddle-point systems of equations that typically arise in PDE-constrained optimization under uncertainty. The algorithm is based on a collective smoother that at each iteration sweeps over the nodes of the computational mesh and solves a reduced saddle-point system whose size depends on the number $N$ of samples used to discretize the probability space. We show that this reduced system can be solved with optimal $O(N)$ complexity. We test the multigrid method on three problems: a linear-quadratic problem, for which the multigrid method is used to solve the linear optimality system directly; a nonsmooth problem with box constraints and $L^1$-norm penalization on the control, in which the multigrid scheme is used within a semismooth Newton iteration; and a risk-averse problem with the smoothed CVaR risk measure, where the multigrid method is called within a preconditioned Newton iteration. In all cases, the multigrid algorithm exhibits very good performance and robustness with respect to all parameters of interest.
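
To illustrate the $O(N)$ claim, here is a toy scalar version of a node-local reduced saddle-point solve: eliminating the per-sample state and adjoint unknowns leaves a single Schur-complement equation for the shared control, so the cost is linear in $N$. The scalar setting and variable names are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def collective_node_solve(a, b, f, h, beta):
    """O(N) Schur-complement solve of a toy node-local saddle-point system:
        min over (y_i, u):  (1/N) * sum_i 0.5 * (y_i - f_i)**2 + 0.5 * beta * u**2
        subject to          a_i * y_i - b_i * u = h_i   for i = 1..N.
    Eliminating each (y_i, p_i) pair leaves one scalar equation for the
    shared control u, so the cost scales linearly with the sample count N."""
    w = b / a**2
    u = np.mean(w * (a * f - h)) / (beta + np.mean(b * w))   # shared control
    p = (a * f - h - b * u) / a**2                           # per-sample adjoints
    y = f - a * p                                            # per-sample states
    return y, u, p
```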

Deep neural networks (DNNs) have achieved tremendous success in making accurate predictions in computer vision, natural language processing, and science and engineering domains. However, it is also well recognized that DNNs sometimes make unexpected, incorrect, yet overconfident predictions. This can cause serious consequences in high-stakes applications, such as autonomous driving, medical diagnosis, and disaster response. Uncertainty quantification (UQ) aims to estimate the confidence of DNN predictions beyond prediction accuracy. In recent years, many UQ methods have been developed for DNNs. It is of great practical value to systematically categorize these UQ methods and compare their advantages and disadvantages. However, existing surveys mostly categorize UQ methodologies from a neural-network-architecture perspective or a Bayesian perspective and ignore the source of uncertainty that each methodology can incorporate, making it difficult to select an appropriate UQ method in practice. To fill the gap, this paper presents a systematic taxonomy of UQ methods for DNNs based on the types of uncertainty sources (data uncertainty versus model uncertainty). We summarize the advantages and disadvantages of the methods in each category. We show how our taxonomy of UQ methodologies can potentially help guide the choice of UQ method in different machine learning problems (e.g., active learning, robustness, and reinforcement learning). We also identify current research gaps and propose several future research directions.
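
As one concrete example from the model-uncertainty branch of such a taxonomy, the sketch below implements Monte Carlo dropout: dropout is left active at test time, and the spread over stochastic forward passes estimates model uncertainty. The toy architecture is an illustrative assumption.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model, x, T=50):
    """Monte Carlo dropout: keep dropout stochastic at inference and use the
    spread over T forward passes as a model-uncertainty estimate."""
    model.train()                          # keeps nn.Dropout layers active
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(T)])
    return preds.mean(dim=0), preds.std(dim=0)

# toy regression network (illustrative assumption)
net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1))
mean, std = mc_dropout_predict(net, torch.randn(8, 10))
```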

Measurement error is ubiquitous in many variables - from blood pressure recordings in physiology to intelligence measures in psychology. Structural equation models (SEMs) account for the process of measurement by explicitly distinguishing between latent variables and their measurement indicators. Users often fit entire SEMs to data, but this can fail if some model parameters are not identified. The model-implied instrumental variables (MIIVs) approach is a more flexible alternative that can estimate subsets of model parameters in identified equations. Numerous methods to identify individual parameters also exist in the field of graphical models (such as DAGs), but many of these do not account for measurement effects. Here, we take the concept of "latent-to-observed" (L2O) transformation from the MIIV approach and develop an equivalent graphical L2O transformation that allows applying existing graphical criteria to latent parameters in SEMs. We combine L2O transformation with graphical instrumental variable criteria to obtain an efficient algorithm for non-iterative parameter identification in SEMs with latent variables. We prove that this graphical L2O transformation with the instrumental set criterion is equivalent to the state-of-the-art MIIV approach for SEMs, and show that it can lead to novel identification strategies when combined with other graphical criteria.
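
Since the MIIV approach estimates identified equations one at a time with instrumental variables, a minimal two-stage least squares (2SLS) sketch may help fix ideas; here the L2O substitution that yields an observed-variable equation is assumed to have already been carried out.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """2SLS for a single identified equation y = X @ beta + error, where X is
    correlated with the error and Z collects the (model-implied) instruments.
    Stage 1 projects the regressors onto the instrument space; stage 2
    regresses y on the projected regressors."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]    # stage 1: fitted X
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]     # stage 2
    return beta
```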

In this work, we extend the data-driven Itô stochastic differential equation (SDE) framework for the pathwise assessment of short-term forecast errors to account for the time-dependent upper bound that naturally constrains the observable historical data and forecast. We propose a new nonlinear and time-inhomogeneous SDE model with a Jacobi-type diffusion term for the phenomenon of interest, simultaneously driven by the forecast and the constraining upper bound. We rigorously demonstrate the existence and uniqueness of a strong solution to the SDE model by imposing a condition on the time-varying mean-reversion parameter appearing in the drift term. The normalized forecast function is thresholded to keep this mean-reversion parameter bounded. The SDE model parameter calibration also covers the thresholding parameter of the normalized forecast by applying a novel iterative two-stage optimization procedure to user-selected approximations of the likelihood function. Another novel contribution is estimating the transition density of the forecast error process, which is not known analytically in closed form, through a tailored kernel smoothing technique with the control variate method. We fit the model to the 2019 photovoltaic (PV) solar power daily production and forecast data in Uruguay and compute estimates of the daily maximum solar PV production. Two statistical versions of the constrained SDE model are fit, with the beta and truncated normal distributions as proxies for the transition density. Empirical results include simulations of the normalized solar PV power production and pathwise confidence bands generated through an indirect inference method. An objective comparison of the optimal parametric points associated with the two selected statistical approximations is provided by applying the innovative kernel density estimation technique to the transition function of the forecast error process.
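
A minimal simulation sketch of a generic time-inhomogeneous Jacobi-type SDE on $[0,1]$ follows, with the normalized forecast entering through the drift; the parametrization is an illustrative assumption, not the paper's calibrated model.

```python
import numpy as np

def simulate_jacobi_path(x0, theta, sigma, forecast, T=1.0, n=1000, seed=0):
    """Euler-Maruyama path of a Jacobi-type SDE on [0, 1]:
        dX_t = theta * (forecast(t) - X_t) dt
               + sigma * sqrt(X_t * (1 - X_t)) dW_t.
    The square-root diffusion vanishes at the boundaries; clipping is added
    here purely as a numerical safeguard for the discretized path."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        drift = theta * (forecast(k * dt) - x[k])
        diffusion = sigma * np.sqrt(max(x[k] * (1.0 - x[k]), 0.0))
        x[k + 1] = x[k] + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = min(max(x[k + 1], 0.0), 1.0)
    return x

path = simulate_jacobi_path(0.5, theta=2.0, sigma=0.4,
                            forecast=lambda t: 0.5 + 0.3 * np.sin(2 * np.pi * t))
```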

Many large-scale recommender systems consist of two stages. The first stage efficiently screens the complete pool of items for a small subset of promising candidates, from which the second-stage model curates the final recommendations. In this paper, we investigate how to ensure group fairness to the items in this two-stage architecture. In particular, we find that existing first-stage recommenders might select an irrecoverably unfair set of candidates such that there is no hope for the second-stage recommender to deliver fair recommendations. To address this, motivated by recent advances in uncertainty quantification, we propose two threshold-policy selection rules that can provide distribution-free and finite-sample guarantees on fairness in first-stage recommenders. More concretely, given any relevance model of queries and items and a point-wise lower confidence bound on the expected number of relevant items for each threshold-policy, the two rules find near-optimal sets of candidates that contain enough relevant items in expectation from each group of items. To instantiate the rules, we demonstrate how to derive such confidence bounds from potentially partial and biased user feedback data, which are abundant in many large-scale recommender systems. In addition, we provide both finite-sample and asymptotic analyses of how close the two threshold selection rules are to the optimal thresholds. Beyond this theoretical analysis, we show empirically that these two rules can consistently select enough relevant items from each group while minimizing the size of the candidate sets for a wide range of settings.
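
The selection rules admit a simple schematic: scan candidate thresholds from most to least restrictive and return the first whose per-group lower confidence bounds all clear their targets. The sketch below assumes the bounds have already been computed from feedback data; its inputs are hypothetical placeholders.

```python
def smallest_fair_candidate_set(thresholds, lcb, targets):
    """Pick the most restrictive threshold policy whose lower confidence
    bound (LCB) on the expected number of relevant items meets the target
    for every group.

    thresholds : policies sorted from most to least restrictive
    lcb        : lcb[g][t] = LCB for group g at thresholds[t], nondecreasing
                 in t since looser thresholds admit more items
    targets    : required expected number of relevant items per group
    """
    for t, threshold in enumerate(thresholds):
        if all(lcb[g][t] >= targets[g] for g in range(len(targets))):
            return threshold
    return thresholds[-1]    # fall back to the least restrictive policy
```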

Uncertain fractional differential equations (UFDEs) are differential equations driven by uncertain processes. As a significant mathematical tool for describing the evolution of dynamic systems, a UFDE improves on ordinary differential equations with integer-order derivatives thanks to its heredity and memory properties. In most instances, however, precise analytical solutions of UFDEs are difficult to obtain due to the complex form of the UFDE itself. To date, research on numerical methods for UFDEs remains limited, and the accuracy of existing algorithms is low. In this work, building on the interval weighting method, a class of fractional Adams methods is proposed to solve UFDEs; these methods extend the traditional predictor-corrector method to higher-order cases. The stability and truncation error bounds of the proposed algorithm are analyzed. As applications, several numerical simulations (including the $\alpha$-path, extreme value, and the first hitting time of the UFDE) are provided to demonstrate the higher accuracy and efficiency of the proposed numerical method.
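
For context, the classical fractional Adams (predictor-corrector) scheme of Diethelm, Ford, and Freed for a Caputo initial value problem is sketched below; the $\alpha$-path approach reduces a UFDE to a family of such deterministic problems. The paper's higher-order interval-weighted variants are not reproduced here.

```python
import numpy as np
from math import gamma

def fractional_adams(f, y0, alpha, T=1.0, n=200):
    """Predictor-corrector method for the Caputo initial value problem
        D^alpha y(t) = f(t, y(t)),  y(0) = y0,  0 < alpha < 1."""
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    y = np.empty(n + 1); y[0] = y0
    fv = np.empty(n + 1); fv[0] = f(t[0], y0)
    for k in range(n):
        j = np.arange(k + 1)
        b = (k + 1 - j) ** alpha - (k - j) ** alpha        # predictor weights
        yp = y0 + h**alpha / gamma(alpha + 1) * np.dot(b, fv[:k + 1])
        a = ((k - j + 2) ** (alpha + 1) + (k - j) ** (alpha + 1)
             - 2.0 * (k - j + 1) ** (alpha + 1))           # corrector weights
        a[0] = k ** (alpha + 1) - (k - alpha) * (k + 1) ** alpha
        y[k + 1] = y0 + h**alpha / gamma(alpha + 2) * (
            f(t[k + 1], yp) + np.dot(a, fv[:k + 1]))
        fv[k + 1] = f(t[k + 1], y[k + 1])
    return t, y

# example: D^0.5 y = -y, y(0) = 1
t, y = fractional_adams(lambda t, y: -y, y0=1.0, alpha=0.5)
```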

This paper proposes a paradigm of uncertainty injection for training deep learning models to solve robust optimization problems. The majority of existing studies on deep learning focus on the model's learning capability, while assuming that the quality and accuracy of the input data can be guaranteed. However, in realistic applications of deep learning for solving optimization problems, the accuracy of the inputs, which are the problem parameters in this case, plays a large role. This is because, in many situations, it is often costly or sometimes impossible to obtain the problem parameters accurately; correspondingly, it is highly desirable to develop learning algorithms that can account for the uncertainties in the input and produce solutions that are robust against these uncertainties. This paper presents a novel uncertainty injection scheme for training machine learning models that are capable of implicitly accounting for the uncertainties and producing statistically robust solutions. We further identify wireless communications as an application field where uncertainties are prevalent in problem parameters such as the channel coefficients. We show the effectiveness of the proposed training scheme in two applications: robust power loading for multiuser multiple-input multiple-output (MIMO) downlink transmissions, and robust power control for device-to-device (D2D) networks.
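
A schematic of such a training loss is sketched below: the model computes its solution from the estimated parameters, and the objective is averaged over random perturbations standing in for the unknown true parameters. The Gaussian noise model, sample count, and function names are illustrative assumptions, not the paper's exact scheme.

```python
import torch

def uncertainty_injected_loss(model, h_est, objective, noise_std=0.1, K=8):
    """Uncertainty-injection training step (schematic).

    model     : maps estimated parameters (e.g., channel coefficients) to a solution
    h_est     : batch of estimated problem parameters
    objective : cost of applying a solution under given true parameters
    The solution is computed once from h_est, then scored against K random
    hypotheses for the true parameters, so gradients favor robust solutions."""
    x = model(h_est)
    losses = [objective(x, h_est + noise_std * torch.randn_like(h_est))
              for _ in range(K)]
    return torch.stack(losses).mean()
```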

Ensembles over neural network weights trained from different random initializations, known as deep ensembles, achieve state-of-the-art accuracy and calibration. The recently introduced batch ensembles provide a drop-in replacement that is more parameter-efficient. In this paper, we design ensembles not only over weights, but also over hyperparameters, to improve the state of the art in both settings. For best performance independent of budget, we propose hyper-deep ensembles, a simple procedure that involves a random search over different hyperparameters, themselves stratified across multiple random initializations. Its strong performance highlights the benefit of combining models with both weight and hyperparameter diversity. We further propose a parameter-efficient version, hyper-batch ensembles, which builds on the layer structure of batch ensembles and self-tuning networks. The computational and memory costs of our method are notably lower than those of typical ensembles. On image classification tasks with MLP, LeNet, and Wide ResNet 28-10 architectures, our methodology improves upon both deep and batch ensembles.
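
The core recipe of hyper-deep ensembles can be sketched in a few lines: draw hyperparameter configurations at random, train each one from several random initializations, and average member predictions. The sketch omits the paper's greedy member-selection step, and `train_fn` / `sample_hparams` are user-supplied placeholders.

```python
import numpy as np

def hyper_deep_ensemble(train_fn, sample_hparams, n_hparams=5, n_inits=3):
    """Build a simplified hyper-deep ensemble (no greedy selection step).

    train_fn(hparams, seed) -> fitted model exposing a .predict(X) method
    sample_hparams()        -> one random hyperparameter configuration"""
    configs = [sample_hparams() for _ in range(n_hparams)]
    members = [train_fn(cfg, seed)                 # stratify each config
               for cfg in configs                  # over several random
               for seed in range(n_inits)]         # initializations
    return lambda X: np.mean([m.predict(X) for m in members], axis=0)
```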
