
Major progress has been made in the previous decade to characterize the asymptotic behavior of regularized M-estimators in high-dimensional regression problems in the proportional asymptotic regime, where the sample size $n$ and the number of features $p$ increase simultaneously such that $n/p\to \delta \in(0,\infty)$, using powerful tools such as Approximate Message Passing or the Convex Gaussian Min-Max Theorem (CGMT). The asymptotic error and behavior of the regularized M-estimator are then typically described by a system of nonlinear equations with a few scalar unknowns, and the solution to this system precisely characterizes the asymptotic error. Application of the CGMT and related machinery requires the existence of a solution to this low-dimensional system of equations. This paper resolves the question of existence of a solution to this low-dimensional system for the case of linear models with independent additive noise, when both the data-fitting loss function and the regularization penalty are separable and convex. Such existence results for the nonlinear system were previously known under strong convexity or for specific estimators such as the Lasso. The main idea behind this existence result is inspired by an argument developed in \cite{montanari2019generalization,celentano2020lasso} in different contexts: by constructing an ad-hoc convex minimization problem in an infinite-dimensional Hilbert space, the existence of a Lagrange multiplier for this optimization problem makes it possible to explicitly construct solutions to the low-dimensional system of interest. The conditions under which we derive this existence result correspond exactly to the side of the phase transition where perfect recovery $\hat x = x_0$ fails, so that these conditions are optimal.
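For concreteness, a well-known instance of such a low-dimensional system is the two-scalar fixed point characterizing the Lasso risk under i.i.d. Gaussian design (cf. Bayati and Montanari); it is only a representative example, not the general system studied in this paper:
$$\tau^2 = \sigma^2 + \frac{1}{\delta}\,\mathbb{E}\big[\big(\eta(X_0+\tau Z;\,\alpha\tau)-X_0\big)^2\big], \qquad \lambda = \alpha\tau\Big(1-\frac{1}{\delta}\,\mathbb{E}\big[\eta'(X_0+\tau Z;\,\alpha\tau)\big]\Big),$$
where $\eta(\cdot;\theta)$ is the soft-thresholding function, $X_0$ is distributed as an entry of the signal $x_0$, $Z$ is an independent standard normal, and $\sigma^2$ is the noise variance. The existence question addressed here is whether the analogous system admits a solution for general separable convex losses and penalties.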

Related Content

A discretization method with non-matching grids is proposed for the coupled Stokes-Darcy problem that uses a mortar variable at the interface to couple the marker and cell (MAC) method in the Stokes domain with the Raviart-Thomas mixed finite element pair in the Darcy domain. Due to this choice, the method conserves linear momentum and mass locally in the Stokes domain and exhibits local mass conservation in the Darcy domain. The MAC scheme is reformulated as a mixed finite element method on a staggered grid, which allows for the proposed scheme to be analyzed as a mortar mixed finite element method. We show that the discrete system is well-posed and derive a priori error estimates that indicate first order convergence in all variables. The system can be reduced to an interface problem concerning only the mortar variables, leading to a non-overlapping domain decomposition method. Numerical examples are presented to illustrate the theoretical results and the applicability of the method.
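For orientation, a standard form of the coupled model (written here under common simplifying assumptions; the precise interface conditions used by the authors may differ) is
$$-\nabla\cdot\big(2\mu\,\mathbf{D}(\mathbf{u}_S)\big)+\nabla p_S=\mathbf{f}_S,\quad \nabla\cdot\mathbf{u}_S=0 \ \text{ in } \Omega_S, \qquad K^{-1}\mathbf{u}_D+\nabla p_D=0,\quad \nabla\cdot\mathbf{u}_D=f_D \ \text{ in } \Omega_D,$$
coupled on the interface by continuity of normal flux, balance of normal stress, and the Beavers-Joseph-Saffman condition; the mortar variable then acts as an interface pressure (Lagrange multiplier) enforcing the flux coupling between the non-matching grids.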

The current study investigates the asymptotic spectral properties of a finite difference approximation of nonlocal Helmholtz equations with a Caputo fractional Laplacian and a variable-coefficient wave number $\mu$, as arises when modeling wave propagation in complex media characterized by nonlocal interactions and spatially varying wave speeds. More specifically, using tools from Toeplitz and generalized locally Toeplitz theory, the present research delves into the spectral analysis of nonpreconditioned and preconditioned matrix-sequences. We report numerical evidence supporting the theoretical findings. Finally, open problems and potential extensions in various directions are presented and briefly discussed.

The possibility of unmeasured confounding is one of the main limitations for causal inference from observational studies. There are different methods for partially assessing the plausibility of unconfoundedness empirically. However, most currently available methods require (at least partial) assumptions about the confounding structure, which may be difficult to know in practice. In this paper we describe a simple strategy for empirically assessing the plausibility of conditional unconfoundedness (i.e., whether the candidate set of covariates suffices for confounding adjustment) which does not require any assumptions about the confounding structure, requiring instead assumptions about the temporal ordering between covariates, exposure and outcome (which can be guaranteed by design), measurement error and selection into the study. The proposed method essentially relies on testing the association between a subset of covariates (those associated with the exposure given all other covariates) and the outcome, conditional on the remaining covariates and the exposure. We describe the assumptions underlying the method, provide proofs, use simulations to corroborate the theory, and illustrate the method with an applied example assessing the causal effect of length-for-age measured in childhood on intelligence quotient measured in adulthood, using data from the 1982 Pelotas (Brazil) birth cohort. We also discuss the implications of measurement error and some important limitations.
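A minimal sketch of this testing strategy in Python (the simulated data, variable names, and the 0.05 screening threshold are illustrative assumptions, not taken from the paper): step 1 screens for covariates associated with the exposure given the others, and step 2 jointly tests those covariates against the outcome conditional on the exposure and the remaining covariates.

# Illustrative sketch only: simulated data, hypothetical names and thresholds.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, p = 2000, 6
L = rng.normal(size=(n, p))                                            # candidate covariates
A = (L[:, 0] + 0.5 * L[:, 1] + rng.normal(size=n) > 0).astype(float)   # exposure
Y = 1.0 * A + L[:, 0] + L[:, 2] + rng.normal(size=n)                   # outcome

# Step 1: covariates associated with the exposure given all other covariates.
exp_fit = sm.Logit(A, sm.add_constant(L)).fit(disp=False)
assoc = [j for j in range(p) if exp_fit.pvalues[j + 1] < 0.05]         # skip the intercept

# Step 2: jointly test those covariates in an outcome model that also
# conditions on the exposure and the remaining covariates.
X = sm.add_constant(np.column_stack([A, L]))                           # columns: const, A, L_1..L_p
out_fit = sm.OLS(Y, X).fit()
hypothesis = ", ".join(f"x{j + 2} = 0" for j in assoc)                 # x1 is A; L_j maps to x{j+2}
print(out_fit.f_test(hypothesis))
# The resulting joint test is the empirical check the strategy builds on; how its
# outcome is interpreted depends on the assumptions stated in the paper.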

The broad class of multivariate unified skew-normal (SUN) distributions has recently been shown to possess fundamental conjugacy properties. When used as priors for the vector of parameters in general probit, tobit, and multinomial probit models, these distributions yield posteriors that still belong to the SUN family. Although such a core result has led to important advancements in Bayesian inference and computation, its applicability beyond likelihoods associated with fully-observed, discretized, or censored realizations from multivariate Gaussian models remains unexplored. This article fills this important gap by proving that the wider family of multivariate unified skew-elliptical (SUE) distributions, which extends SUNs to more general perturbations of elliptical densities, guarantees conjugacy for broader classes of models, beyond those relying on fully-observed, discretized or censored Gaussians. This result leverages the closure of the SUE family under linear combinations, conditioning and marginalization to prove that this family is conjugate to the likelihood induced by general multivariate regression models for fully-observed, censored or dichotomized realizations from skew-elliptical distributions. This advancement substantially enlarges the set of models that enable conjugate Bayesian inference, extending it to general formulations arising from elliptical and skew-elliptical families, including the multivariate Student's t and skew-t, among others.

The present article is concerned with scattered data approximation for higher-dimensional data sets which exhibit anisotropic behavior in the different dimensions. Tailoring sparse polynomial interpolation to this specific situation, we derive very efficient degenerate kernel approximations which we then use in a dimension-weighted fast multipole method. This dimension-weighted fast multipole method makes it possible to deal with many more dimensions than the standard black-box fast multipole method based on interpolation. A thorough analysis of the method is provided, including rigorous error estimates. The accuracy and the cost of the approach are validated by extensive numerical results. As a relevant application, we apply the approach to a shape uncertainty quantification problem.
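For background, the interpolation-based degenerate kernel expansion underlying the standard black-box fast multipole method reads
$$k(x,y) \;\approx\; \sum_{i}\sum_{j} L_i(x)\,k(\xi_i,\eta_j)\,L_j(y),$$
where $\xi_i,\eta_j$ are interpolation points in the source and target boxes and $L_i$ are the associated Lagrange basis polynomials. In the standard method this is a full tensor-product construction; the display is only meant to indicate where the article's anisotropic, dimension-weighted sparse interpolation enters.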

It is well known that one can construct solutions to the nonlocal Cahn-Hilliard equation with singular potentials via Yosida approximation with parameter $\lambda \to 0$. The usual method is based on compactness arguments and does not provide any rate of convergence. Here, we fill this gap and obtain an explicit convergence rate of $\sqrt{\lambda}$. The proof is based on the theory of maximal monotone operators and the observation that the nonlocal operator is of Hilbert-Schmidt type. Our estimate also yields convergence results for Galerkin methods in which the parameter $\lambda$ is linked to the discretization parameters, together with the corresponding error estimates.
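For reference, the Yosida approximation of a maximal monotone operator $\beta$ with parameter $\lambda>0$ is
$$\beta_\lambda \;=\; \tfrac{1}{\lambda}\big(I-(I+\lambda\beta)^{-1}\big),$$
which is Lipschitz continuous with constant $1/\lambda$ and converges to $\beta$ as $\lambda\to 0$; the result above quantifies the corresponding error between the approximate and limiting solutions of the nonlocal Cahn-Hilliard equation at the rate $\sqrt{\lambda}$.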

This paper addresses structured normwise, mixed, and componentwise condition numbers (CNs) for a linear function of the solution to the generalized saddle point problem (GSPP). We present a general framework that enables us to measure the structured CNs of the individual solution components and derive their explicit formulae when the input matrices have symmetric, Toeplitz, or some general linear structures. In addition, compact formulae for the unstructured CNs are obtained, which recover previous results on CNs for GSPPs for specific choices of the linear function. Furthermore, the derived structured CNs are applied to determine the structured CNs for weighted Toeplitz regularized least-squares problems and Tikhonov regularization problems, recovering some previous results from the literature.
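A generalized saddle point problem is typically written in the $2\times 2$ block form
$$\begin{pmatrix} A & B^{\mathsf T}\\ B & -C\end{pmatrix}\begin{pmatrix} x\\ y\end{pmatrix}=\begin{pmatrix} f\\ g\end{pmatrix},$$
shown here only as a common convention (the exact block structure assumed in the paper may differ); the structured CNs then measure the sensitivity of a linear function of $(x,y)$ when the perturbations of the blocks are constrained to preserve their symmetric, Toeplitz, or other linear structure.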

Composite quantile regression has been used to obtain robust estimators of regression coefficients in linear models with good statistical efficiency. By revealing an intrinsic link between the composite quantile regression loss function and the Wasserstein distance from the residuals to the set of quantiles, we establish a generalization of composite quantile regression to the multiple-output setting. Theoretical convergence rates of the proposed estimator are derived both under the setting where the additive error possesses only a finite $\ell$-th moment (for $\ell > 2$) and where it exhibits a sub-Weibull tail. In doing so, we develop novel techniques for analyzing M-estimation problems that involve a Wasserstein distance in the loss. Numerical studies confirm the practical effectiveness of our proposed procedure.
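In the classical single-output setting, composite quantile regression combines check losses over a grid of quantile levels, typically
$$\min_{\beta,\,b_1,\dots,b_K}\;\sum_{k=1}^{K}\sum_{i=1}^{n}\rho_{\tau_k}\big(y_i-b_k-x_i^{\top}\beta\big),\qquad \rho_\tau(u)=u\big(\tau-\mathbf{1}\{u<0\}\big),\quad \tau_k=\tfrac{k}{K+1};$$
averaging the check losses over the levels $\tau_k$ is what connects this objective to a Wasserstein-type distance between the residuals and the set of quantiles, the link exploited here for the multiple-output extension (the display is the standard single-output formulation, not the paper's multivariate objective).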

We introduce a lower bounding technique for the min-max correlation clustering problem and, based on this technique, a combinatorial 4-approximation algorithm for complete graphs. This improves upon the previously best known approximation guarantees of 5, achieved via a linear programming formulation (Kalhan et al., 2019), and 40, for a combinatorial algorithm (Davies et al., 2023a). We extend this algorithm with a greedy joining heuristic and show empirically that it improves the state of the art in solution quality and runtime on several benchmark datasets.
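As a point of reference, the min-max (local) objective attained by a given clustering of a complete signed graph can be evaluated as in the sketch below; this is an illustrative evaluation routine, not the paper's lower bounding technique or algorithm.

# Illustrative evaluation of the min-max correlation clustering objective.
import itertools
import numpy as np

def local_disagreements(signs, labels):
    # signs[u, v] = +1 for a "similar" edge and -1 for a "dissimilar" edge;
    # labels[u] is the cluster of vertex u.
    n = len(labels)
    dis = np.zeros(n, dtype=int)
    for u, v in itertools.combinations(range(n), 2):
        same = labels[u] == labels[v]
        # A disagreement is a positive edge cut by the clustering
        # or a negative edge kept inside a cluster.
        if (signs[u, v] > 0) != same:
            dis[u] += 1
            dis[v] += 1
    return dis

def minmax_cost(signs, labels):
    # Min-max correlation clustering minimizes this quantity over clusterings.
    return local_disagreements(signs, labels).max()

# Tiny example: two well-separated pairs plus one noisy positive edge.
signs = np.array([[ 0,  1, -1, -1],
                  [ 1,  0, -1,  1],
                  [-1, -1,  0,  1],
                  [-1,  1,  1,  0]])
print(minmax_cost(signs, np.array([0, 0, 1, 1])))  # prints 1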

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
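For reference, Shepard's law states that perceived similarity (and hence generalization) decays exponentially with distance in the psychological similarity space,
$$s(x,y)\;=\;\exp\big(-c\,d(x,y)\big),\qquad c>0,$$
and in the proposed theory this similarity governs how strongly an explainee transfers their own would-be explanation and decision to the AI; the specific distance and the calibration of $c$ are details of the paper, not of this display.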
