
The use of propensity score (PS) methods has become ubiquitous in causal inference. At the heart of these methods is the positivity assumption. Violation of the positivity assumption leads to extreme PS weights when estimating average causal effects of interest, such as the average treatment effect (ATE) or the average treatment effect on the treated (ATT), which renders the related statistical inference invalid. To circumvent this issue, trimming or truncating extreme estimated PSs has been widely used. However, these methods require that we specify a threshold a priori, and sometimes an additional smoothing parameter. While there are a number of methods dealing with the lack of positivity when estimating the ATE, surprisingly much less attention has been paid to the same issue for the ATT. In this paper, we first review widely used methods, such as trimming and truncation, for the ATT. We emphasize the underlying intuition behind these methods to better understand their applications and highlight their main limitations. Then, we argue that the current methods simply target estimands that are scaled versions of the ATT (and thus move the goalpost to a different target of interest), where we specify the scale and the target populations. We further propose a PS weight-based alternative for the average causal effect on the treated, called the overlap weighted average treatment effect on the treated (OWATT). The appeal of our proposed method lies in its ability to obtain similar or even better results than trimming and truncation while relaxing the constraint of choosing a threshold (or even a smoothing parameter) a priori. The performance of the proposed method is illustrated via a series of Monte Carlo simulations and a data analysis on racial disparities in health care expenditures.
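
To make the weighting schemes discussed above concrete, here is a minimal Python sketch (using scikit-learn for the propensity model) of ATT-type weights under trimming, truncation, and an overlap-style alternative that needs no threshold. The threshold value, the logistic propensity model, and the exact form of the overlap-style weights are illustrative assumptions; the OWATT estimator itself is defined in the paper.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def att_weights(X, treat, method="truncate", eps=0.05):
        """ATT-type weights from estimated propensity scores (treat is a 0/1 array)."""
        ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
        w = np.zeros_like(ps)
        if method == "trim":
            w[treat == 1] = 1.0
            w[treat == 0] = ps[treat == 0] / (1 - ps[treat == 0])
            w[(ps < eps) | (ps > 1 - eps)] = 0.0          # drop units with extreme scores
        elif method == "truncate":
            ps_c = np.clip(ps, eps, 1 - eps)              # cap extreme scores instead of dropping units
            w[treat == 1] = 1.0
            w[treat == 0] = ps_c[treat == 0] / (1 - ps_c[treat == 0])
        elif method == "overlap":
            w[treat == 1] = 1 - ps[treat == 1]            # smooth downweighting of extreme scores,
            w[treat == 0] = ps[treat == 0]                #   no a priori threshold required
        return w

    def weighted_contrast(y, treat, w):
        """Weighted difference in means between treated and control units."""
        t, c = treat == 1, treat == 0
        return np.average(y[t], weights=w[t]) - np.average(y[c], weights=w[c])

The trim and truncate branches make explicit how the choice of eps changes the effective target population, which is the point made above about moving the goalpost.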

Related Content

Randomized matrix algorithms have become workhorse tools in scientific computing and machine learning. To use these algorithms safely in applications, they should be coupled with posterior error estimates to assess the quality of the output. To meet this need, this paper proposes two diagnostics: a leave-one-out error estimator for randomized low-rank approximations and a jackknife resampling method to estimate the variance of the output of a randomized matrix computation. Both diagnostics are rapid to compute for randomized low-rank approximation algorithms such as the randomized SVD and randomized Nyström approximation, and they provide useful information for assessing the quality of the computed output and guiding algorithmic parameter choices.
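
As a rough illustration of the resampling idea (not the paper's fast diagnostics, which avoid recomputation), the Python sketch below recomputes a randomized low-rank approximation with one Gaussian sketch column left out at a time and aggregates the spread of the results; the oversampling amount and the Frobenius-norm aggregation are illustrative choices.

    import numpy as np

    def randomized_svd(A, k, oversample=5, rng=None):
        """Basic randomized SVD: Gaussian sketch, range finder, small SVD."""
        rng = np.random.default_rng(rng)
        Omega = rng.standard_normal((A.shape[1], k + oversample))
        Q, _ = np.linalg.qr(A @ Omega)
        U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
        return (Q @ U_small)[:, :k], s[:k], Vt[:k]

    def naive_jackknife_spread(A, k, oversample=5, seed=0):
        """Leave-one-sketch-column-out jackknife estimate of the approximation's variability."""
        rng = np.random.default_rng(seed)
        Omega = rng.standard_normal((A.shape[1], k + oversample))
        replicates = []
        for j in range(Omega.shape[1]):
            Q, _ = np.linalg.qr(A @ np.delete(Omega, j, axis=1))
            replicates.append(Q @ (Q.T @ A))              # approximation without sketch column j
        replicates = np.stack(replicates)
        mean = replicates.mean(axis=0)
        s = Omega.shape[1]
        var = (s - 1) / s * ((replicates - mean) ** 2).sum(axis=0)
        return np.sqrt(var.sum())                         # Frobenius-norm-style aggregate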

We prove explicit uniform two-sided bounds for the phase functions of Bessel functions and of their derivatives. As a consequence, we obtain new enclosures for the zeros of Bessel functions and their derivatives in terms of inverse values of some elementary functions. These bounds are valid, with a few exceptions, for all zeros and all Bessel functions with non-negative indices. We provide numerical evidence showing that our bounds either improve or closely match the best previously known ones.
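
For a quick numerical sense of what such enclosures control, the snippet below (an illustration, not the paper's bounds) compares SciPy's Bessel function zeros with the first-order McMahon asymptotic approximation, the kind of elementary estimate that rigorous two-sided bounds sharpen into guaranteed enclosures.

    import numpy as np
    from scipy.special import jn_zeros

    def mcmahon_zero(nu, s):
        """First-order McMahon approximation to the s-th positive zero of J_nu."""
        beta = (s + nu / 2 - 0.25) * np.pi
        return beta - (4 * nu**2 - 1) / (8 * beta)

    for nu in (0, 1):
        exact = jn_zeros(nu, 5)                               # first five zeros of J_nu
        approx = np.array([mcmahon_zero(nu, s) for s in range(1, 6)])
        print(nu, np.max(np.abs(exact - approx)))             # the gap shrinks for larger zeros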

Digital credentials represent a cornerstone of digital identity on the Internet. To achieve privacy, credentials should support certain functionalities; one of them is selective disclosure, which allows users to disclose only the claims or attributes they choose. This paper presents a novel approach to selective disclosure that combines Merkle hash trees and Boneh-Lynn-Shacham (BLS) signatures. Combining these primitives, we achieve selective disclosure of claims in a single credential as well as the creation of a verifiable presentation containing selectively disclosed claims from multiple credentials signed by different parties. Beyond selective disclosure, the same approach enables issuing credentials signed by multiple issuers.
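
The Merkle-tree half of this construction can be sketched in a few lines of Python; the BLS signature over the root is abstracted away here (a pairing library would supply it), and the claim encoding and padding rule are illustrative assumptions rather than the paper's exact format.

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root_and_proofs(claims):
        """Hash each claim into a leaf, build the tree, and record one inclusion proof per leaf."""
        leaves = [h(c.encode()) for c in claims]
        proofs = {i: [] for i in range(len(leaves))}
        positions = {i: i for i in range(len(leaves))}
        level = leaves[:]
        while len(level) > 1:
            if len(level) % 2 == 1:
                level.append(level[-1])                         # pad odd levels by duplicating the last node
            for leaf, pos in positions.items():
                proofs[leaf].append((pos % 2, level[pos ^ 1]))  # record (is_right_child, sibling hash)
                positions[leaf] = pos // 2
            level = [h(level[2 * j] + level[2 * j + 1]) for j in range(len(level) // 2)]
        return level[0], proofs

    def verify(claim, proof, root):
        node = h(claim.encode())
        for is_right, sibling in proof:
            node = h(sibling + node) if is_right else h(node + sibling)
        return node == root

    claims = ["name=Alice", "dob=1990-01-01", "degree=PhD"]
    root, proofs = merkle_root_and_proofs(claims)
    # The issuer would sign `root` with BLS; a holder disclosing only the first
    # claim presents it together with proofs[0], and the verifier checks the path:
    assert verify(claims[0], proofs[0], root)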

Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: While taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential of transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.
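
As a small, self-contained example of the kind of pitfall catalogued here (a well-known one, not necessarily stated in these terms in the paper): on a heavily imbalanced segmentation task, overall accuracy can look excellent even when the structure of interest is missed entirely, while an overlap-based metric such as the Dice score exposes the failure.

    import numpy as np

    target = np.zeros((100, 100), dtype=bool)
    target[48:52, 48:52] = True                 # a small lesion: 16 of 10,000 pixels
    prediction = np.zeros_like(target)          # a model that predicts "no lesion" everywhere

    accuracy = (prediction == target).mean()
    dice = 2 * (prediction & target).sum() / (prediction.sum() + target.sum() + 1e-12)
    print(f"accuracy={accuracy:.4f}, dice={dice:.4f}")   # accuracy ~0.998, Dice = 0.0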

The adaptive quasi-likelihood analysis is developed for a degenerate diffusion process. Asymptotic normality and moment convergence are proved for the quasi-maximum likelihood estimators and quasi-Bayesian estimators, in the adaptive scheme.

Numerical simulation of moving immersed solid bodies in fluids is now practiced routinely, following pioneering work by Peskin and co-workers on the immersed boundary method (IBM), Glowinski and co-workers on the fictitious domain method (FDM), and others on related methods. A variety of variants of the IBM and FDM approaches have been published, most of which rely on a background mesh for the fluid equations and track the solid body using Lagrangian points. The key idea common to these methods is to treat the entire fluid-solid domain as a fluid and then to constrain the fluid within the solid domain to move in accordance with the solid governing equations. The immersed solid body can be rigid or deforming. Thus, in all these methods the fluid domain is extended into the solid domain. In this review, we provide a mathematical perspective on various immersed methods by recasting the governing equations in an extended-domain form for the fluid. The solid equations are used to impose appropriate constraints on the fluid that is extended into the solid domain. This leads to extended-domain constrained fluid-solid governing equations that provide a unified framework for various immersed-body techniques. The unified constrained governing equations in the strong form are independent of the temporal or spatial discretization schemes. We show that particular choices of time stepping and spatial discretization lead to different techniques reported in the literature, ranging from freely moving rigid bodies to elastic self-propelling bodies. These techniques have wide-ranging applications, including aquatic locomotion, underwater vehicles, car aerodynamics, organ physiology (e.g., cardiac flow, esophageal transport, respiratory flows), and wave energy converters, among others. We conclude with comments on outstanding challenges and future directions.
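
As a schematic of such an extended-domain constrained form (the notation and the incompressible Newtonian setting here are illustrative simplifications, not the review's general formulation), the fluid equations are posed on the whole domain and a constraint force supported on the solid region enforces the solid kinematics:

    \[
    \begin{aligned}
    \rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
      &= -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \boldsymbol{\lambda}\,\chi_{\Omega_s}
      && \text{in } \Omega = \Omega_f \cup \Omega_s, \\
    \nabla\cdot\mathbf{u} &= 0 && \text{in } \Omega, \\
    \mathbf{u} &= \mathbf{u}_s && \text{in } \Omega_s,
    \end{aligned}
    \]

where $\chi_{\Omega_s}$ is the indicator of the solid region, $\mathbf{u}_s$ is the solid velocity (prescribed for rigid motion, or obtained from the solid constitutive law for deforming bodies), and $\boldsymbol{\lambda}$ is the Lagrange multiplier, i.e. the constraint force whose discretization and enforcement distinguish the IBM- and FDM-type variants.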

Vascular segmentation represents a crucial clinical task, yet its automation remains challenging. With the recent strides in deep learning, vesselness filters, which can significantly aid the learning process, have been largely overlooked. This study introduces an innovative filter-fusion method crafted to amplify the effectiveness of vessel segmentation models. Our investigation seeks to establish the merits of a filter-based learning approach through a comparative analysis. Specifically, we contrast the performance of a U-Net model trained on CT images with an identical U-Net configuration trained on vesselness hyper-volumes using matching parameters. Our findings, based on two vascular datasets, highlight improved segmentations, especially for small vessels, when the model's learning is exposed to vessel-enhanced inputs.
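
A minimal sketch of what a vesselness hyper-volume might look like in code, assuming the multi-scale Frangi filter from scikit-image as the vesselness filter and a hypothetical load_ct_volume helper; the scale sets and channel layout are illustrative, not the study's exact configuration.

    import numpy as np
    from skimage.filters import frangi

    def vesselness_hypervolume(volume, sigma_sets=((0.5, 1.0), (1.5, 2.0), (2.5, 3.0))):
        """Stack the raw CT volume with one multi-scale vesselness channel per scale set."""
        channels = [volume.astype(np.float32)]
        for sigmas in sigma_sets:
            channels.append(frangi(volume, sigmas=sigmas, black_ridges=False).astype(np.float32))
        return np.stack(channels, axis=0)          # shape (C, D, H, W), fed to the U-Net

    # volume = load_ct_volume(...)                 # hypothetical loader
    # x = vesselness_hypervolume(volume)           # replaces the raw CT as the network input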

We study the performance of stochastic first-order methods for finding saddle points of convex-concave functions. A notorious challenge faced by such methods is that the gradients can grow arbitrarily large during optimization, which may result in instability and divergence. In this paper, we propose a simple and effective regularization technique that stabilizes the iterates and yields meaningful performance guarantees even if the domain is unbounded and the gradient noise scales linearly with the size of the iterates (and is thus itself potentially unbounded). Besides providing a set of general results, we also apply our algorithm to a specific problem in reinforcement learning, where it leads to performance guarantees for finding near-optimal policies in an average-reward MDP without prior knowledge of the bias span.
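
The toy Python sketch below conveys the flavor of the setting rather than the paper's algorithm: stochastic gradient descent-ascent on a bilinear saddle-point problem, with gradient noise that grows with the iterates and a small quadratic regularizer (an illustrative choice) keeping the iterates bounded.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 5
    A = rng.standard_normal((d, d))              # f(x, y) = x^T A y, saddle point at the origin
    x, y = np.ones(d), np.ones(d)
    avg_x, avg_y = np.zeros(d), np.zeros(d)
    eta, reg = 0.01, 0.5                         # step size and regularization strength (illustrative)

    for t in range(1, 5001):
        # gradient noise whose scale grows with the size of the iterates
        s = 0.1 * (1.0 + np.linalg.norm(x) + np.linalg.norm(y))
        gx = A @ y + reg * x + s * rng.standard_normal(d)
        gy = A.T @ x - reg * y + s * rng.standard_normal(d)
        x, y = x - eta * gx, y + eta * gy
        avg_x += (x - avg_x) / t                 # running averages of the iterates
        avg_y += (y - avg_y) / t

    # The quadratic regularizer keeps the iterates bounded despite the
    # iterate-dependent noise; the averaged iterates settle near the saddle point.
    print(np.linalg.norm(avg_x), np.linalg.norm(avg_y))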

We present a unification and generalization of sequentially and hierarchically semi-separable (SSS and HSS) matrices called tree semi-separable (TSS) matrices. Our main result shows that any dense matrix can be expressed in TSS format, where the dimensions of the generators are specified by the ranks of the Hankel blocks of the matrix. TSS matrices satisfy a graph-induced rank structure (GIRS) property. We show that TSS matrices generalize the algebraic properties of SSS and HSS matrices under addition, products, and inversion. Consequently, TSS matrices admit linear-time matrix-vector multiplication, matrix-matrix multiplication, matrix-matrix addition, inversion, and solvers.
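
The computational payoff of such rank-structured formats is easiest to see in a toy case; the Python sketch below uses a single 2x2 block partition with rank-r off-diagonal blocks (real SSS/HSS/TSS formats recurse on the diagonal blocks as well), not the TSS construction itself.

    import numpy as np

    rng = np.random.default_rng(1)
    n, r = 500, 3
    D1, D2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))   # diagonal blocks
    U1, V2 = rng.standard_normal((n, r)), rng.standard_normal((n, r))   # A12 = U1 @ V2.T, rank r
    U2, V1 = rng.standard_normal((n, r)), rng.standard_normal((n, r))   # A21 = U2 @ V1.T, rank r

    def matvec(x):
        """Apply the 2x2 block matrix without ever forming the off-diagonal blocks."""
        x1, x2 = x[:n], x[n:]
        y1 = D1 @ x1 + U1 @ (V2.T @ x2)          # the low-rank product costs O(n r)
        y2 = U2 @ (V1.T @ x1) + D2 @ x2
        return np.concatenate([y1, y2])

    x = rng.standard_normal(2 * n)
    A = np.block([[D1, U1 @ V2.T], [U2 @ V1.T, D2]])
    print(np.linalg.norm(A @ x - matvec(x)))     # matches the dense product up to round-off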

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
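
A schematic sketch of the quantitative ingredient named above, Shepard's exponential generalization law, applied to saliency maps; the normalization, distance measure, and sensitivity parameter are illustrative assumptions, not the authors' fitted model.

    import numpy as np

    def shepard_similarity(ai_saliency, own_saliency, sensitivity=1.0):
        """Exponential generalization gradient over a distance in similarity space."""
        ai = ai_saliency / (np.linalg.norm(ai_saliency) + 1e-12)
        own = own_saliency / (np.linalg.norm(own_saliency) + 1e-12)
        distance = np.linalg.norm(ai - own)
        return np.exp(-sensitivity * distance)

    # Toy usage with random 8x8 "saliency maps"; in the actual setting the first map
    # would come from the AI and the second from the participant's own explanation.
    rng = np.random.default_rng(0)
    ai_map, own_map = rng.random((8, 8)), rng.random((8, 8))
    print(shepard_similarity(ai_map.ravel(), own_map.ravel()))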
