
We present an $\ell^2_2+\ell_1$-regularized discrete least squares approximation over general regions under assumptions of hyperinterpolation, named hybrid hyperinterpolation. Hybrid hyperinterpolation uses a soft thresholding operator and a filter function to shrink the Fourier coefficients of a given continuous function, approximated by a high-order quadrature rule with respect to some orthonormal basis; it thus combines Lasso and filtered hyperinterpolation, and inherits the features of both for handling noisy data, provided the regularization parameter and the filter function are chosen well. We not only provide theoretical $L_2$ error bounds for hybrid hyperinterpolation of continuous functions, with and without noise, but also decompose the $L_2$ error into three exactly computed terms with the aid of an a priori regularization parameter choice rule. This rule, which makes full use of the hyperinterpolation coefficients to choose the regularization parameter, reveals that the $L_2$ error of hybrid hyperinterpolation declines sharply and then increases slowly as the sparsity of the coefficients ranges from one to large values. Numerical examples show the enhanced performance of hybrid hyperinterpolation as regularization parameters and noise vary. The theoretical $L_2$ error bounds are verified in numerical examples on the interval, the unit disk, the unit sphere, the unit cube, and the union of disks.
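The shrinkage step at the heart of the method can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the coefficient vector and the filter values `h` below are made-up placeholders for quadrature-approximated Fourier coefficients and a filter function evaluated at the relevant degrees.

```python
import numpy as np

def soft_threshold(a, lam):
    """Soft-thresholding operator S_lam(a) = sign(a) * max(|a| - lam, 0)."""
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def hybrid_coefficients(coeffs, lam, filter_vals):
    """Shrink approximated Fourier coefficients by combining a filter
    (as in filtered hyperinterpolation) with soft thresholding (as in Lasso)."""
    return filter_vals * soft_threshold(coeffs, lam)

coeffs = np.array([2.0, -0.5, 0.05, -1.2])   # hypothetical quadrature coefficients
h = np.array([1.0, 1.0, 0.5, 0.5])           # hypothetical filter values
print(hybrid_coefficients(coeffs, 0.1, h))   # -> [ 1.9  -0.4   0.   -0.55]
```

Small coefficients (here 0.05) are zeroed by the threshold, while the filter damps the higher-degree ones, which is the mechanism the abstract describes for coping with noise.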

Related content

Profile likelihoods are rarely used in geostatistical models due to the computational burden imposed by repeated decompositions of large variance matrices. Accounting for uncertainty in covariance parameters can be highly consequential in geostatistical models because some covariance parameters are poorly identified; the problem is severe enough that the differentiability parameter of the Matérn correlation function is typically treated as fixed. The problem is compounded in anisotropic spatial models, which have two additional parameters to consider. In this paper, we make the following contributions: (1) we develop a methodology for profile likelihoods in Gaussian spatial models with the Matérn family of correlation functions, including anisotropic models; it adopts a novel reparametrization for generating representative points and uses GPUs for parallel computation of profile likelihoods in the software implementation. (2) We show that the profile likelihood of the Matérn shape parameter is often quite flat but still identifiable: it can usually rule out very small values. (3) Simulation studies and applications to real data show that profile-based confidence intervals for covariance and regression parameters have superior coverage to traditional Wald-type confidence intervals.
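The profiling idea itself is simple to demonstrate. The following is a minimal sketch in a toy Gaussian location model, not the spatial Matérn model of the abstract: for each fixed mean, the variance is maximized out analytically, and the resulting one-dimensional profile log-likelihood is maximized over a grid.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=500)
n = x.size

def profile_loglik(mu):
    # For fixed mu, the MLE of sigma^2 is mean((x - mu)^2); substituting it
    # back gives the profile log-likelihood up to an additive constant.
    s2_hat = np.mean((x - mu) ** 2)
    return -0.5 * n * np.log(s2_hat)

grid = np.linspace(2.0, 4.0, 2001)
pl = np.array([profile_loglik(m) for m in grid])
mu_hat = grid[np.argmax(pl)]
print(mu_hat)  # the maximizer coincides with the sample mean
```

In the spatial setting of the abstract, the same outer-grid/inner-maximization structure applies, but each inner evaluation requires a large variance-matrix decomposition, which is exactly why the authors turn to reparametrization and GPU parallelism.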

This paper focuses on optimal beamforming to maximize the mean signal-to-noise ratio (SNR) for a reconfigurable intelligent surface (RIS)-aided MISO downlink system under correlated Rician fading. The beamforming problem is non-convex because of the unit-modulus constraint on the passive RIS elements. To tackle this, we propose a semidefinite relaxation-based iterative algorithm that obtains a statistically optimal transmit beamforming vector and RIS phase-shift matrix. Further, we analyze the outage probability (OP) and ergodic capacity (EC) to measure the performance of the proposed beamforming scheme. As in existing works, these OP and EC evaluations rely on numerical computation of the iterative algorithm, which does not clearly reveal the functional dependence of system performance on key parameters. Therefore, we derive closed-form expressions for the optimal beamforming vector and phase-shift matrix, along with their OP performance, for special cases of the general setup. Our analysis reveals that i.i.d. fading is more beneficial than correlated fading in the presence of LoS components; this fact is analytically established for the setting in which the LoS is blocked. Furthermore, we demonstrate that the maximum mean SNR improves linearly/quadratically with the number of RIS elements in the absence/presence of a LoS component under i.i.d. fading.
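The quadratic SNR scaling with a LoS component has a simple mechanism that can be sketched numerically. The toy below is a single-antenna cascaded-channel computation with instantaneous co-phasing, not the paper's correlated-Rician MISO analysis: with unit-modulus (pure LoS-like) channels, choosing each RIS phase to align its cascaded path makes the received amplitude equal to the number of elements N, hence SNR = N².

```python
import numpy as np

def cophased_snr(h, g):
    """SNR after choosing RIS phases theta_n = -(arg h_n + arg g_n),
    which aligns every cascaded path h_n * g_n in phase."""
    theta = -(np.angle(h) + np.angle(g))
    return np.abs(np.sum(h * g * np.exp(1j * theta))) ** 2

N = 32
rng = np.random.default_rng(0)
# Unit-modulus deterministic channels (a pure-LoS caricature).  Co-phasing
# yields amplitude N, hence SNR = N^2: quadratic in the number of elements.
h = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # BS -> RIS paths
g = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # RIS -> user paths
print(cophased_snr(h, g))  # -> 1024.0 (= N^2)
```

When the mean channel is zero (LoS blocked) and only statistical CSI is available, this co-phasing gain is unavailable, which is consistent with the linear scaling the abstract reports for that case.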

We provide a comprehensive characterisation of the theoretical properties of the divide-and-conquer sequential Monte Carlo (DaC-SMC) algorithm. We firmly establish it as a well-founded method by showing that it possesses the same basic properties as conventional sequential Monte Carlo (SMC) algorithms do. In particular, we derive pertinent laws of large numbers, $L^p$ inequalities, and central limit theorems; and we characterize the bias in the normalized estimates produced by the algorithm and argue the absence thereof in the unnormalized ones. We further consider its practical implementation and several interesting variants; obtain expressions for its globally and locally optimal intermediate targets, auxiliary measures, and proposal kernels; and show that, in comparable conditions, DaC-SMC proves more statistically efficient than its direct SMC analogue. We close the paper with a discussion of our results, open questions, and future research directions.

In health and social sciences, it is critically important to identify subgroups of the study population where there is notable heterogeneity of treatment effects (HTE) with respect to the population average. Decision trees have been proposed and commonly adopted for data-driven discovery of HTE due to their high level of interpretability. However, single-tree discovery of HTE can be unstable and oversimplified. This paper introduces Causal Rule Ensemble (CRE), a new method for HTE discovery and estimation through an ensemble-of-trees approach. CRE offers several key features, including 1) an interpretable representation of the HTE; 2) the ability to explore complex heterogeneity patterns; and 3) high stability in subgroup discovery. The discovered subgroups are defined in terms of interpretable decision rules. Estimation of subgroup-specific causal effects is performed via a two-stage approach for which we provide theoretical guarantees. Via simulations, we show that the CRE method is highly competitive when compared to state-of-the-art techniques. Finally, we apply CRE to discover the heterogeneous health effects of exposure to air pollution on mortality for 35.3 million Medicare beneficiaries across the contiguous U.S.
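The second-stage idea (estimating a causal effect within a decision rule) can be illustrated with a toy randomized simulation. This is a hedged sketch only: CRE learns its rules from tree ensembles and uses a more careful two-stage estimator, whereas here the rule `X > 0` is known in advance and the effect is a simple difference in means.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
X = rng.normal(size=n)
T = rng.integers(0, 2, size=n)           # randomized treatment
tau = np.where(X > 0, 2.0, 0.5)          # true rule-defined heterogeneous effect
Y = X + tau * T + rng.normal(size=n)

def rule_cate(rule_mask):
    """Difference-in-means estimate of the causal effect inside a decision rule."""
    m = rule_mask
    return Y[m & (T == 1)].mean() - Y[m & (T == 0)].mean()

print(rule_cate(X > 0), rule_cate(X <= 0))  # ≈ 2.0 and ≈ 0.5
```

A single tree fit to one realization of such data could split anywhere near zero; averaging rules over an ensemble is what gives CRE its stability in recovering subgroups like these.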

Linear wave equations sourced by a Dirac delta distribution $\delta(x)$ and its derivative(s) can serve as a model for many different phenomena. We describe a discontinuous Galerkin (DG) method to numerically solve such equations with source terms proportional to $\partial^n \delta /\partial x^n$. Despite the presence of singular source terms, which imply discontinuous or potentially singular solutions, our DG method achieves global spectral accuracy even at the source's location. Our DG method is developed for the wave equation written in fully first-order form. The first-order reduction is carried out using a distributional auxiliary variable that removes some of the source term's singular behavior. While this is helpful numerically, it gives rise to a distributional constraint. We show that a time-independent spurious solution can develop if the initial constraint violation is proportional to $\delta(x)$. Numerical experiments verify this behavior and our scheme's convergence properties by comparing against exact solutions.

Randomized controlled trials (RCTs) are the gold standard for causal inference, but they are often powered only for average effects, making estimation of heterogeneous treatment effects (HTEs) challenging. Conversely, large-scale observational studies (OS) offer a wealth of data but suffer from confounding bias. Our paper presents a novel framework to leverage OS data for enhancing the efficiency in estimating conditional average treatment effects (CATEs) from RCTs while mitigating common biases. We propose an innovative approach to combine RCTs and OS data, expanding the traditionally used control arms from external sources. The framework relaxes the typical assumption of CATE invariance across populations, acknowledging the often unaccounted systematic differences between RCT and OS participants. We demonstrate this through the special case of a linear outcome model, where the CATE is sparsely different between the two populations. The core of our framework relies on learning potential outcome means from OS data and using them as a nuisance parameter in CATE estimation from RCT data. We further illustrate through experiments that using OS findings reduces the variance of the estimated CATE from RCTs and can decrease the required sample size for detecting HTEs.
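The variance-reduction mechanism (learning an outcome-mean nuisance from OS data and residualizing the RCT outcomes) can be sketched in a toy linear simulation. This is not the paper's estimator: here the OS control arm is assumed unconfounded and the outcome model invariant, assumptions the paper's framework deliberately relaxes; `m0` is a hypothetical nuisance fit.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, rct=True):
    X = rng.normal(size=n)
    T = rng.integers(0, 2, size=n) if rct else np.zeros(n, dtype=int)
    Y = 2.0 * X + 1.0 * T + rng.normal(size=n)   # true treatment effect = 1
    return X, T, Y

# Large observational control arm: learn the control outcome mean m0(X).
Xo, _, Yo = simulate(200_000, rct=False)
m0 = np.poly1d(np.polyfit(Xo, Yo, 1))            # linear fit of Y on X

# Small RCT: plain difference-in-means vs. the OS-augmented estimator.
Xr, Tr, Yr = simulate(500, rct=True)
dim = Yr[Tr == 1].mean() - Yr[Tr == 0].mean()
R = Yr - m0(Xr)                                  # subtract OS-learned nuisance
aug = R[Tr == 1].mean() - R[Tr == 0].mean()
print(dim, aug)  # both near 1; residualizing removes the 2X outcome variation
```

Because the residuals have much smaller variance than the raw outcomes, the augmented contrast needs fewer RCT samples to detect the same effect, which is the efficiency gain the abstract describes.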

This paper explores the use of reconfigurable intelligent surfaces (RIS) in mitigating cross-system interference in spectrum sharing and secure wireless applications. Unlike conventional RIS that can only adjust the phase of the incoming signal and essentially reflect all impinging energy, or active RIS, which also amplify the reflected signal at the cost of significantly higher complexity, noise, and power consumption, an absorptive RIS (ARIS) is considered. An ARIS can in principle modify both the phase and modulus of the impinging signal by absorbing a portion of the signal energy, providing a compromise between its conventional and active counterparts in terms of complexity, power consumption, and degrees of freedom (DoFs). We first use a toy example to illustrate the benefit of ARIS, and then we consider three applications: (1) Spectral coexistence of radar and communication systems, where a convex optimization problem is formulated to minimize the Frobenius norm of the channel matrix from the communication base station to the radar receiver; (2) Spectrum sharing in device-to-device (D2D) communications, where a max-min scheme that maximizes the worst-case signal-to-interference-plus-noise ratio (SINR) among the D2D links is developed and then solved via fractional programming; (3) The physical layer security of a downlink communication system, where the secrecy rate is maximized and the resulting nonconvex problem is solved by a fractional programming algorithm together with a sequential convex relaxation procedure. Numerical results are then presented to show the significant benefit of ARIS in these applications.
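The extra degree of freedom an ARIS buys can be shown with a deterministic single-element toy, in the spirit of the abstract's own toy example (the channel values below are arbitrary assumptions): a phase-only RIS must reflect at full modulus and so cannot null a direct path of different strength, while an ARIS can absorb energy and cancel it exactly whenever the reflected path is strong enough.

```python
import numpy as np

# Direct interference channel h_d plus one RIS-reflected path through g.
h_d = 0.6 * np.exp(1j * 0.8)
g = 1.0 * np.exp(1j * 2.1)

# ARIS: choose modulus a in [0, 1] and phase theta so a * e^{j theta} * g = -h_d.
a = np.abs(h_d) / np.abs(g)          # feasible since |h_d| <= |g|
theta = np.angle(-h_d) - np.angle(g)
residual_aris = np.abs(h_d + a * np.exp(1j * theta) * g)

# Phase-only RIS: best case anti-aligns the full-modulus reflection,
# leaving residual interference | |h_d| - |g| |.
residual_ris = np.abs(np.abs(h_d) - np.abs(g))

print(residual_aris, residual_ris)   # ARIS drives the interference to ~0
```

The modulus control is exactly the extra degree of freedom exploited in the three applications, e.g. when minimizing the Frobenius norm of the communication-to-radar channel.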

Univariate and multivariate normal probability distributions are widely used when modeling decisions under uncertainty. Computing the performance of such models requires integrating these distributions over specific domains, which can vary widely across models. Apart from some special cases, no general analytical expressions, standard numerical methods, or software exist for these integrals. Here we present mathematical results and open-source software that provide (i) the probability in any domain of a normal in any dimensions with any parameters, (ii) the probability density, cumulative distribution, and inverse cumulative distribution of any function of a normal vector, (iii) the classification errors among any number of normal distributions, the Bayes-optimal discriminability index, and its relation to the operating characteristic, (iv) dimension reduction and visualizations for such problems, and (v) tests for how reliably these methods may be used on given data. We demonstrate these tools with vision-research applications: detecting occluding objects in natural scenes and detecting camouflage.
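For the special cases that do have known answers, general-purpose numerical integrators can be checked directly. The sketch below uses SciPy's `multivariate_normal.cdf` (not the software described in the abstract) against the classical quadrant-probability formula for a standard bivariate normal with correlation rho.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Probability that a standard bivariate normal with correlation rho falls in
# the quadrant {x <= 0, y <= 0}.  The closed form is 1/4 + arcsin(rho)/(2*pi);
# for rho = 0 it is exactly 1/4.
rho = 0.5
rv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
p = rv.cdf([0.0, 0.0])
print(p, 0.25 + np.arcsin(rho) / (2 * np.pi))  # both ≈ 1/3
```

For non-rectangular domains, arbitrary functions of a normal vector, or dimensions beyond a few, no such closed forms or standard routines exist, which is the gap the abstract's software aims to fill.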

Along with the increasing availability of health data has come the rise of data-driven models to inform decision-making and policy. These models have the potential to benefit both patients and health care providers but can also exacerbate health inequities. Existing "algorithmic fairness" methods for measuring and correcting model bias fall short of what is needed for health policy in two key ways. First, methods typically focus on a single grouping along which discrimination may occur rather than considering multiple, intersecting groups. Second, in clinical applications, risk prediction is typically used to guide treatment, creating distinct statistical issues that invalidate most existing techniques. We present summary unfairness metrics that build on existing techniques in "counterfactual fairness" to address both challenges. We also develop a complete framework of estimation and inference tools for our metrics, including the unfairness value ("u-value"), used to determine the relative extremity of unfairness, and standard errors and confidence intervals employing an alternative to the standard bootstrap. We demonstrate application of our framework to a COVID-19 risk prediction model deployed in a major Midwestern health system.

This PhD thesis contains several contributions to the field of statistical causal modeling. Statistical causal models are statistical models embedded with causal assumptions that allow for the inference and reasoning about the behavior of stochastic systems affected by external manipulation (interventions). This thesis contributes to the research areas concerning the estimation of causal effects, causal structure learning, and distributionally robust (out-of-distribution generalizing) prediction methods. We present novel and consistent linear and non-linear causal effects estimators in instrumental variable settings that employ data-dependent mean squared prediction error regularization. Our proposed estimators show, in certain settings, mean squared error improvements compared to both canonical and state-of-the-art estimators. We show that recent research on distributionally robust prediction methods has connections to well-studied estimators from econometrics. This connection leads us to prove that general K-class estimators possess distributional robustness properties. We, furthermore, propose a general framework for distributional robustness with respect to intervention-induced distributions. In this framework, we derive sufficient conditions for the identifiability of distributionally robust prediction methods and present impossibility results that show the necessity of several of these conditions. We present a new structure learning method applicable in additive noise models with directed trees as causal graphs. We prove consistency in a vanishing identifiability setup and provide a method for testing substructure hypotheses with asymptotic family-wise error control that remains valid post-selection. Finally, we present heuristic ideas for learning summary graphs of nonlinear time-series models.
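The K-class family mentioned above interpolates between familiar estimators and is compact enough to sketch. This is a hedged toy simulation, not the thesis's analysis; it implements the textbook formula beta = (X'(I - kappa·M_Z)X)^{-1} X'(I - kappa·M_Z)y, where kappa = 0 recovers OLS and kappa = 1 recovers two-stage least squares (2SLS).

```python
import numpy as np

def k_class(y, X, Z, kappa):
    """General K-class estimator with instrument matrix Z.
    Uses I - kappa*M_Z = (1 - kappa)*I + kappa*P_Z to avoid n x n matrices,
    where P_Z projects onto the column span of Z."""
    Xp = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)   # P_Z X
    yp = Z @ np.linalg.solve(Z.T @ Z, Z.T @ y)   # P_Z y
    A_X = (1 - kappa) * (X.T @ X) + kappa * (X.T @ Xp)
    A_y = (1 - kappa) * (X.T @ y) + kappa * (X.T @ yp)
    return np.linalg.solve(A_X, A_y)

rng = np.random.default_rng(0)
n = 5_000
Z = rng.normal(size=(n, 1))                      # instrument
u = rng.normal(size=n)                           # unobserved confounder
X = (Z[:, 0] + u + rng.normal(size=n)).reshape(-1, 1)
y = 1.0 * X[:, 0] + u + rng.normal(size=n)       # true coefficient = 1

print(k_class(y, X, Z, 0.0))  # OLS: biased upward by the confounder (≈ 1.33)
print(k_class(y, X, Z, 1.0))  # 2SLS: consistent, close to 1
```

The thesis's result that general K-class estimators enjoy distributional robustness properties concerns exactly this family, including intermediate kappa values between the two endpoints shown here.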
