
In many causal studies, outcomes are censored by death, in the sense that they are neither observed nor defined for units who die. In such studies, the focus is usually on the stratum of always-survivors up to a single fixed time s. Building on a recent strand of the literature, we propose an extended framework for the analysis of longitudinal studies, where units can die at different time points, and the main endpoints are observed and well defined only up to the death time. We develop a Bayesian longitudinal principal stratification framework, where units are cross-classified according to their longitudinal death status. Under this framework, the focus is on causal effects for the principal strata of units that would be alive up to a time point s irrespective of their treatment assignment, where these strata may vary as a function of s. We can gain valuable insights into the effects of treatment by inspecting the distribution of baseline characteristics within each longitudinal principal stratum, and by investigating the time trends of both principal stratum membership and survivor-average causal effects. We illustrate our approach with the analysis of a longitudinal observational study aimed at assessing, under the assumption of strong ignorability of treatment assignment, the causal effects of a policy promoting start-ups on firms' survival and hiring policies, where a firm's hiring status is censored by death.
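To fix notation for the estimand discussed above, here is a minimal sketch in potential-outcome terms (the symbols are chosen here for illustration and are not taken from the paper): with $Y_i(z,s)$ the potential outcome of unit $i$ at time $s$ under assignment $z$ and $D_i(z)$ the potential death time, the survivor-average causal effect at time $s$ is

\[
\mathrm{SACE}(s) \;=\; \mathbb{E}\bigl[\, Y_i(1,s) - Y_i(0,s) \,\bigm|\, D_i(1) > s,\ D_i(0) > s \,\bigr],
\]

i.e., a contrast defined only within the principal stratum of units that would survive past $s$ under either treatment assignment.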

Related Content

Prediction is a central problem in Statistics, and there is currently a renewed interest in the so-called predictive approach in Bayesian statistics. What is the latter about? One has to return to foundational concepts, which we do in this paper, starting from the role of exchangeability and reviewing forms of partial exchangeability for more structured data, with the aim of discussing their use and implications in Bayesian statistics. We highlight the underlying concept that, in Bayesian statistics, a predictive rule is meant as a learning rule - how one conveys past information into information on future events. This concept has implications for the use of exchangeability and more generally pervades all statistical problems, including inference. It applies to classic contexts and to less explored situations, such as the use of predictive algorithms that can be read as Bayesian learning rules. The paper offers a historical overview, but also includes a few new results, presents some recent developments and poses some open questions.
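As a concrete, textbook-level instance of a predictive rule acting as a learning rule (an illustration, not a result specific to this paper): for exchangeable binary observations with a Beta$(a,b)$ prior, the one-step-ahead predictive probability is

\[
\Pr\bigl(X_{n+1} = 1 \mid X_1, \dots, X_n\bigr) \;=\; \frac{a + \sum_{i=1}^{n} X_i}{a + b + n},
\]

the Beta-Bernoulli (Pólya urn) rule, which converts the information in past observations directly into a probability statement about the next one.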

Modeling the behavior of biological tissues and organs often requires knowledge of their shape in the absence of external loads. However, when their geometry is acquired in vivo through imaging techniques, bodies are typically subject to mechanical deformation due to the presence of external forces, and the load-free configuration needs to be reconstructed. This paper addresses this crucial and frequently overlooked topic, known as the inverse elasticity problem (IEP), by delving into both theoretical and numerical aspects, with a particular focus on cardiac mechanics. In this work, we extend Shield's seminal work to determine the structure of the IEP with arbitrary material inhomogeneities and in the presence of both body and active forces. These aspects are fundamental in computational cardiology, and we show that they may break the variational structure of the inverse problem. In addition, we show that the inverse problem might have no solution even in the presence of constant Neumann boundary conditions and a polyconvex strain energy functional. We then present the results of extensive numerical tests to validate our theoretical framework and to characterize the computational challenges associated with a direct numerical approximation of the IEP. Specifically, we show that this framework outperforms existing approaches, such as Sellier's iterative procedure, in terms of both robustness and optimality, even when the latter is improved with acceleration techniques. A notable finding is that, in contrast to standard elasticity, multigrid preconditioners are not efficient for the IEP, whereas one-level additive Schwarz and generalized Dryja-Smith-Widlund preconditioners provide a much more reliable alternative. Finally, we successfully address the IEP for a full-heart geometry, demonstrating that the IEP formulation can compute the stress-free configuration in real-life scenarios.
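For readers unfamiliar with the baseline the abstract compares against, the following is a rough Python sketch of a Sellier-type fixed-point update for recovering the reference configuration; the solver interface, names, and tolerance are placeholders of our own, not the paper's implementation.

```python
import numpy as np

def sellier_inverse(x_target, solve_forward, max_iter=100, tol=1e-8):
    """Schematic Sellier-type fixed-point iteration for the IEP.

    x_target      : measured (loaded) nodal coordinates, shape (n_nodes, dim)
    solve_forward : callable mapping a candidate reference configuration X
                    to the deformed configuration x(X) under the given loads
    Returns an estimate of the load-free reference configuration.
    """
    X = x_target.copy()                  # initial guess: the imaged geometry
    for _ in range(max_iter):
        x = solve_forward(X)             # forward (standard) elasticity solve
        residual = x - x_target          # mismatch in the deformed state
        X = X - residual                 # shift the reference by the mismatch
        if np.linalg.norm(residual) < tol:
            break
    return X
```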

Non-Hermitian topological phases can produce remarkable properties compared with their Hermitian counterparts, such as the breakdown of conventional bulk-boundary correspondence and non-Hermitian topological edge modes. Here, we introduce several deep learning algorithms, based on the multi-layer perceptron (MLP) and the convolutional neural network (CNN), to predict the winding of the eigenvalues of non-Hermitian Hamiltonians. Subsequently, we use the smallest module of the periodic circuit as one unit to construct high-dimensional circuit data features. Further, we use the Dense Convolutional Network (DenseNet), a type of convolutional neural network that utilizes dense connections between layers, to design a non-Hermitian topolectrical Chern circuit, as the DenseNet architecture is better suited to processing high-dimensional data. Our results demonstrate the effectiveness of deep learning networks in capturing the global topological characteristics of a non-Hermitian system based on training data.
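For intuition about the label being learned, here is a small numpy sketch of the spectral winding number of a Bloch Hamiltonian around a reference energy, evaluated on the Hatano-Nelson model as a stand-in example; the model, reference energy, and parameter values are illustrative choices of ours, not those of the paper.

```python
import numpy as np

def spectral_winding(h_of_k, e_ref=0.0, n_k=2001):
    """Winding of det(H(k) - E_ref) as k sweeps the Brillouin zone [0, 2*pi]."""
    ks = np.linspace(0.0, 2.0 * np.pi, n_k)
    dets = []
    for k in ks:
        h = h_of_k(k)
        dets.append(np.linalg.det(h - e_ref * np.eye(h.shape[0])))
    phase = np.unwrap(np.angle(np.array(dets)))   # continuous phase of the determinant
    return (phase[-1] - phase[0]) / (2.0 * np.pi)

# Example: single-band Hatano-Nelson chain with asymmetric hoppings t_r > t_l
t_r, t_l = 1.0, 0.5
h_of_k = lambda k: np.array([[t_r * np.exp(1j * k) + t_l * np.exp(-1j * k)]])
print(spectral_winding(h_of_k))                   # ~1.0 for this choice
```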

Retinopathy of prematurity (ROP) is a severe condition affecting premature infants, leading to abnormal retinal blood vessel growth, retinal detachment, and potential blindness. While semi-automated systems have been used in the past to diagnose ROP-related plus disease by quantifying retinal vessel features, traditional machine learning (ML) models face challenges such as limited accuracy and overfitting. Recent advancements in deep learning (DL), especially convolutional neural networks (CNNs), have significantly improved ROP detection and classification. The i-ROP deep learning (i-ROP-DL) system also shows promise in detecting plus disease, offering the potential for reliable ROP diagnosis. This research comprehensively examines the contemporary progress and challenges associated with using retinal imaging and artificial intelligence (AI) to detect ROP, offering valuable insights that can guide further investigation in this domain. Based on 89 original studies in this field (out of 1487 studies that were comprehensively reviewed), we concluded that traditional methods for ROP diagnosis suffer from subjectivity and manual analysis, leading to inconsistent clinical decisions. AI holds great promise for improving ROP management. This review explores AI's potential in ROP detection, classification, diagnosis, and prognosis.

In the realm of cost-sharing mechanisms, the vulnerability to Sybil strategies - also known as false-name strategies, where agents create fake identities to manipulate outcomes - has not yet been studied. In this paper, we delve into the details of different cost-sharing mechanisms proposed in the literature, highlighting their non-Sybil-resistant nature. Furthermore, we prove that, under mild conditions, a Sybil-proof cost-sharing mechanism for public excludable goods is at least $(n+1)/2$-approximate. This finding reveals an exponential increase in the worst-case social cost relative to environments where agents are restricted from using Sybil strategies. To circumvent these negative results, we introduce the concept of \textit{Sybil Welfare Invariant} mechanisms, where a mechanism does not decrease its welfare under Sybil strategies when agents choose weakly dominant strategies and have subjective prior beliefs over other players' actions. Finally, we prove that the Shapley value mechanism for symmetric and submodular cost functions holds this property, and hence deduce that the worst-case social cost of this mechanism under equilibrium with Sybil strategies is the $n$th harmonic number $\mathcal H_n$, matching the worst-case social cost bound for cost-sharing mechanisms. This finding suggests that any group of agents, each with private valuations, can fund public excludable goods in a permissionless and anonymous manner, achieving efficiency comparable to that of permissioned and non-anonymous domains, even when the total number of participants is unknown.
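To make the mechanism referenced at the end concrete, here is a small Python sketch of the iterative (Moulin-style) Shapley value mechanism for a symmetric cost function, under which every served agent pays an equal share of the cost; the function names and the example values are our own illustration, not code from the paper.

```python
def shapley_mechanism(values, cost):
    """Iterative Shapley-value (Moulin-style) mechanism for a symmetric cost function.

    values : dict mapping agent id -> reported value for being served
    cost   : callable, cost(k) = cost of serving any set of k agents
             (symmetric and submodular in k, with cost(0) == 0)
    Returns (served_agents, payments): agents are dropped until everyone
    remaining accepts the equal cost share cost(k) / k.
    """
    served = set(values)
    while served:
        share = cost(len(served)) / len(served)     # equal Shapley share under symmetry
        refusers = {i for i in served if values[i] < share}
        if not refusers:
            return served, {i: share for i in served}
        served -= refusers                          # drop agents unwilling to pay
    return set(), {}

# Example: public excludable good with fixed cost 1
served, pay = shapley_mechanism({"a": 0.9, "b": 0.6, "c": 0.2}, lambda k: 1.0 if k else 0.0)
# served == {"a", "b"}, each paying 0.5
```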

In this work, we consider the notion of "criterion collapse," in which optimization of one metric implies optimality in another, with a particular focus on conditions for collapse into error probability minimizers under a wide variety of learning criteria, ranging from DRO and OCE risks (CVaR, tilted ERM) to non-monotonic criteria underlying recent ascent-descent algorithms explored in the literature (Flooding, SoftAD). We show how collapse in the context of losses with a Bernoulli distribution goes far beyond existing results for CVaR and DRO, then expand our scope to include surrogate losses, showing conditions where monotonic criteria such as tilted ERM cannot avoid collapse, whereas non-monotonic alternatives can.
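As a worked instance of the Bernoulli-loss setting (notation chosen here for illustration): if the zero-one loss of a hypothesis $h$ is Bernoulli with error probability $p(h)$, then the tilted ERM objective reduces to

\[
R_\gamma(h) \;=\; \frac{1}{\gamma} \log \mathbb{E}\bigl[e^{\gamma L(h)}\bigr] \;=\; \frac{1}{\gamma} \log\!\bigl(1 - p(h) + p(h)\,e^{\gamma}\bigr),
\]

which is strictly increasing in $p(h)$ for every tilt $\gamma \neq 0$, so minimizing the tilted risk is equivalent to minimizing the error probability - the kind of criterion collapse described above.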

Following initial work by JaJa and Ahlswede/Cai, and inspired by a recent renewed surge in interest in deterministic identification via noisy channels, we consider the problem in its generality for memoryless channels with finite output, but arbitrary input alphabets. Such a channel is essentially given by (the closure of) the subset of its output distributions in the probability simplex. Our main findings are that the maximum number of messages thus identifiable scales super-exponentially as $2^{R\,n\log n}$ with the block length $n$, and that the optimal rate $R$ is upper and lower bounded in terms of the covering (aka Minkowski, or Kolmogorov, or entropy) dimension $d$ of the output set: $\frac14 d \leq R \leq d$. Leading up to the general case, we treat the important special case of the so-called Bernoulli channel with input alphabet $[0;1]$ and binary output, which has $d=1$, to gain intuition. Along the way, we show a certain Hypothesis Testing Lemma (generalising an earlier insight of Ahlswede regarding the intersection of typical sets) that implies that for the construction of a deterministic identification code, it is sufficient to ensure pairwise reliable distinguishability of the output distributions. These results are then shown to generalise directly to classical-quantum channels with finite-dimensional output quantum system (but arbitrary input alphabet), and in particular to quantum channels on finite-dimensional quantum systems under the constraint that the identification code can only use tensor product inputs.
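To read the bounds in the simplest case (an illustrative instantiation only): for the Bernoulli channel the output set is the segment of binary distributions, so $d = 1$, and the number $N(n)$ of deterministically identifiable messages satisfies

\[
2^{\frac{1}{4}\, n \log n\,(1 + o(1))} \;\leq\; N(n) \;\leq\; 2^{\, n \log n\,(1 + o(1))}.
\]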

Motivated by the important statistical role of sparsity, the paper uncovers four reparametrizations for covariance matrices in which sparsity is associated with conditional independence graphs in a notional Gaussian model. The intimate relationship between the Iwasawa decomposition of the general linear group and the open cone of positive definite matrices allows a unifying perspective. Specifically, the positive definite cone can be reconstructed without loss or redundancy from the exponential map applied to four Lie subalgebras determined by the Iwasawa decomposition of the general linear group. This accords geometric interpretations to the reparametrizations and the corresponding notion of sparsity. Conditions that ensure legitimacy of the reparametrizations for statistical models are identified. While the focus of this work is on understanding population-level structure, there are strong methodological implications. In particular, since the population-level sparsity manifests in a vector space, imposition of sparsity on relevant sample quantities produces a covariance estimate that respects the positive definite cone constraint.
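For background on the sparsity-graph link invoked here (a standard fact about Gaussian models, not a statement of the paper's four reparametrizations): writing $\Omega = \Sigma^{-1}$ for the precision matrix of a Gaussian vector $X$,

\[
\Omega_{jk} = 0 \quad \Longleftrightarrow \quad X_j \perp\!\!\!\perp X_k \mid X_{-\{j,k\}},
\]

so zeros in a suitable parametrization correspond to missing edges of the conditional independence graph; the reparametrizations studied in the paper move such structure into a vector space where sparsity can be imposed without leaving the positive definite cone.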

A cyclic proof system is a proof system whose proof figures are trees with cycles. Cut-elimination is a fundamental property of a proof system. It has been conjectured that cut-elimination does not hold in the cyclic proof system for first-order logic with inductive definitions. This paper shows that the conjecture is correct by giving a sequent that is provable in this cyclic proof system but not provable without the cut rule.

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparison to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
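For reference, the comparison step relies on Shepard's universal law of generalization (stated here in generic notation): the probability of generalizing from stimulus $x$ to stimulus $y$ decays exponentially with their distance in the psychological similarity space,

\[
g(x, y) \;=\; \exp\bigl(-\kappa\, d(x, y)\bigr), \qquad \kappa > 0,
\]

so an AI explanation close to the explanation a participant would give is predicted to be treated as nearly equivalent to it, while larger distances drive divergent inferences about the AI.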
