
Outcome-dependent sampling designs are extensively utilized in various scientific disciplines, including epidemiology, ecology, and economics, with retrospective case-control studies being specific examples of such designs. If, in addition, the outcome used for sample selection is mismeasured, estimating the average treatment effect (ATE) accurately becomes even more challenging. To our knowledge, no existing method can address these two issues simultaneously. In this paper, we establish the identifiability of the ATE and propose a novel method for estimating it in the context of a generalized linear model. The estimator is shown to be consistent under some regularity conditions. To relax the model assumption, we also consider a generalized additive model, propose to estimate the ATE using penalized B-splines, and establish asymptotic properties for the proposed estimator. Our methods are evaluated through extensive simulation studies and an application to a dataset from the UK Biobank, with alcohol intake as the treatment and gout as the outcome.
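
For readers unfamiliar with the outcome-regression route to the ATE, the sketch below fits a plain logistic GLM on simulated data and standardizes over the empirical covariate distribution. It is a minimal illustration only: it deliberately omits the corrections for outcome-dependent sampling and outcome misclassification that the paper develops, and all variable names are illustrative.

```python
# A minimal sketch of the standardization (g-formula) route to the ATE under a
# logistic GLM, ignoring the sampling and misclassification corrections the
# paper develops; data and variable names are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                         # covariate
a = rng.binomial(1, 0.5, size=n)               # binary treatment (e.g. alcohol intake)
logit = -1.0 + 0.8 * a + 0.5 * x
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # binary outcome (e.g. gout)

X = np.column_stack([np.ones(n), a, x])
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

# Standardize: predict for everyone under a=1 and a=0, then average the difference.
X1 = np.column_stack([np.ones(n), np.ones(n), x])
X0 = np.column_stack([np.ones(n), np.zeros(n), x])
ate_hat = np.mean(fit.predict(X1) - fit.predict(X0))
print(f"ATE estimate (risk difference): {ate_hat:.3f}")
```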

Related content

Motivation: Curation of literature in the life sciences is a growing challenge. The continued increase in the rate of publication, coupled with the relatively fixed number of curators worldwide, presents a major challenge to developers of biomedical knowledgebases. Very few knowledgebases have the resources to scale to the whole relevant literature, and all have to prioritise their efforts. Results: In this work, we take a first step toward alleviating the lack of curator time in RNA science by generating summaries of literature for non-coding RNAs using large language models (LLMs). We demonstrate that high-quality, factually accurate summaries with accurate references can be automatically generated from the literature using a commercial LLM and a chain of prompts and checks. Manual assessment was carried out for a subset of summaries, with the majority being rated extremely high quality. We also applied the most commonly used automated evaluation approaches, finding that they do not correlate with human assessment. Finally, we apply our tool to a selection of over 4,600 ncRNAs and make the generated summaries available via the RNAcentral resource. We conclude that automated literature summarization is feasible with the current generation of LLMs, provided careful prompting and automated checking are applied. Availability: Code used to produce these summaries can be found at https://github.com/RNAcentral/litscan-summarization, and the dataset of contexts and summaries can be found at https://huggingface.co/datasets/RNAcentral/litsumm-v1. Summaries are also displayed on the RNA report pages in RNAcentral (https://rnacentral.org/).
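
The sketch below illustrates the general shape of a "chain of prompts and checks" loop. The `call_llm` function is a stand-in for whatever commercial LLM client is used, the reference check is a bare-bones string match, and none of the prompt text is taken from the LitScan/litsumm pipeline.

```python
# A hedged sketch of a generate-then-check summarization loop. `call_llm` is a
# placeholder callable (prompt -> text); the only check performed here is that
# every cited PMID actually appears in the supplied context.
import re
from typing import Callable

def summarize_with_checks(rna_id: str, abstracts: dict[str, str],
                          call_llm: Callable[[str], str],
                          max_attempts: int = 3) -> str:
    context = "\n\n".join(f"[{pmid}] {text}" for pmid, text in abstracts.items())
    prompt = (f"Summarize what is known about {rna_id} using only the abstracts "
              f"below. Cite sources as [PMID].\n\n{context}")
    feedback = ""
    for _ in range(max_attempts):
        summary = call_llm(prompt + feedback)
        cited = set(re.findall(r"\[(\d+)\]", summary))
        bogus = cited - set(abstracts)          # references not present in the context
        if not bogus:
            return summary
        feedback = (f"\n\nYour previous draft cited unknown references {sorted(bogus)}. "
                    f"Rewrite it citing only the provided PMIDs.")
    raise RuntimeError(f"Could not produce a summary with valid references for {rna_id}")
```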

As quantum theory allows for information processing and computing tasks that are otherwise not possible with classical systems, there is a need for, and use of, a quantum Internet beyond existing network systems. At the same time, the realization of a desirably functional quantum Internet is hindered by fundamental and practical challenges, such as high loss during transmission of quantum systems, decoherence due to interaction with the environment, and the fragility of quantum states. We study the implications of these constraints by analyzing the limitations on the scaling and robustness of the quantum Internet. Considering quantum networks, we present practical bottlenecks for secure communication, delegated computing, and resource distribution among end nodes. Motivated by the power of abstraction in graph theory (in association with quantum information theory), we consider graph-theoretic quantifiers to assess network robustness and provide critical values of communication lines for viable communication over the quantum Internet. In particular, we begin by discussing limitations on the usefulness of isotropic states as device-independent quantum key repeaters, which otherwise could be useful for device-independent quantum key distribution. We consider some quantum networks of practical interest, ranging from satellite-based networks connecting far-off spatial locations to currently available quantum processor architectures within computers, and analyze their robustness to perform quantum information processing tasks. Some of these tasks form primitives for delegated quantum computing, e.g., entanglement distribution and quantum teleportation. For some examples of quantum networks, we present algorithms to perform different quantum network tasks of interest, such as constructing the network structure, finding the shortest path between a pair of end nodes, and optimizing the flow of resources at a node.
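
As a minimal illustration of the graph-theoretic viewpoint, the sketch below models a small network in networkx with edge weights standing in for link loss, finds the lowest-loss route between two end nodes, and reports a crude robustness indicator. The topology and loss values are invented, not taken from the paper's example networks.

```python
# A toy quantum network as a weighted graph: edge weights are link losses in dB
# (additive along a path), so the weighted shortest path is the lowest-loss route.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("Alice", "Repeater1", 3.0),   # fibre segment, loss in dB
    ("Repeater1", "Repeater2", 2.5),
    ("Repeater2", "Bob", 3.5),
    ("Alice", "Satellite", 30.0),  # a much lossier free-space link
    ("Satellite", "Bob", 30.0),
])

path = nx.shortest_path(G, "Alice", "Bob", weight="weight")
loss = nx.shortest_path_length(G, "Alice", "Bob", weight="weight")
print(path, f"total loss = {loss} dB")

# A crude robustness indicator: how many link failures disconnect Alice from Bob.
print("edge connectivity:", nx.edge_connectivity(G, "Alice", "Bob"))
```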

Completely random measures (CRMs) and their normalizations (NCRMs) offer flexible models in Bayesian nonparametrics. But their infinite dimensionality presents challenges for inference. Two popular finite approximations are truncated finite approximations (TFAs) and independent finite approximations (IFAs). While the former have been well-studied, IFAs lack similarly general bounds on approximation error, and there has been no systematic comparison between the two options. In the present work, we propose a general recipe to construct practical finite-dimensional approximations for homogeneous CRMs and NCRMs, in the presence or absence of power laws. We call our construction the automated independent finite approximation (AIFA). Relative to TFAs, we show that AIFAs facilitate more straightforward derivations and use of parallel computing in approximate inference. We upper bound the approximation error of AIFAs for a wide class of common CRMs and NCRMs -- and thereby develop guidelines for choosing the approximation level. Our lower bounds in key cases suggest that our upper bounds are tight. We prove that, for worst-case choices of observation likelihoods, TFAs are more efficient than AIFAs. Conversely, we find that in real-data experiments with standard likelihoods, AIFAs and TFAs perform similarly. Moreover, we demonstrate that AIFAs can be used for hyperparameter estimation even when other potential IFA options struggle or do not apply.
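
To convey the flavor of an independent finite approximation, the sketch below uses the textbook finite approximation to the Dirichlet process (a normalized CRM): K independent Gamma(alpha/K, 1) weights, normalized. This classical special case is for intuition only and is not the AIFA construction proposed in the paper.

```python
# The idea of an independent finite approximation, in its simplest classical
# form: K i.i.d. unnormalized weights whose normalization approaches a Dirichlet
# process draw as K grows. Not the paper's AIFA recipe.
import numpy as np

def ifa_dirichlet_process(alpha: float, K: int, rng) -> np.ndarray:
    """Return K normalized weights approximating a draw from DP(alpha)."""
    unnormalized = rng.gamma(alpha / K, 1.0, size=K)   # independent CRM weights
    return unnormalized / unnormalized.sum()

rng = np.random.default_rng(1)
for K in (10, 100, 1000):
    w = ifa_dirichlet_process(alpha=2.0, K=K, rng=rng)
    # As K grows, the weight vector concentrates on a few large atoms, mimicking the DP.
    print(K, "largest weight:", round(w.max(), 3))
```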

Work in AI ethics and fairness has made much progress in regulating LLMs to reflect certain values, such as fairness, truth, and diversity. However, it has taken the problem of how LLMs might 'mean' anything at all for granted. Without addressing this, it is not clear what imbuing LLMs with such values even means. In response, we provide a general theory of meaning that extends beyond humans. We use this theory to explicate the precise nature of LLMs as meaning-agents. We suggest that the LLM, by virtue of its position as a meaning-agent, already grasps the constructions of human society (e.g. morality, gender, and race) in concept. Consequently, under certain ethical frameworks, currently popular methods for model alignment are limited at best and counterproductive at worst. Moreover, unaligned models may help us better develop our moral and social philosophy.

We establish conditions under which latent causal graphs are nonparametrically identifiable and can be reconstructed from unknown interventions in the latent space. Our primary focus is the identification of the latent structure in measurement models without parametric assumptions such as linearity or Gaussianity. Moreover, we do not assume the number of hidden variables is known, and we show that at most one unknown intervention per hidden variable is needed. This extends a recent line of work on learning causal representations from observations and interventions. The proofs are constructive and introduce two new graphical concepts -- imaginary subsets and isolated edges -- that may be useful in their own right. As a matter of independent interest, the proofs also involve a novel characterization of the limits of edge orientations within the equivalence class of DAGs induced by unknown interventions. These are the first results to characterize the conditions under which causal representations are identifiable without making any parametric assumptions in a general setting with unknown interventions and without faithfulness.

This paper investigates the multiple testing problem for high-dimensional sparse binary sequences, motivated by the crowdsourcing problem in machine learning. We study the empirical Bayes approach for multiple testing in the high-dimensional Bernoulli model with a conjugate spike and uniform slab prior. We first show that the hard thresholding rule deduced from the posterior distribution is suboptimal. Consequently, the $\ell$-value procedure constructed using this posterior tends to be overly conservative in estimating the false discovery rate (FDR). We then propose two new procedures, based on adjusted $\ell$-values and $q$-values, to correct this issue. Sharp frequentist theoretical results are obtained, demonstrating that both procedures can effectively control the FDR under sparsity. Numerical experiments are conducted to validate our theory in finite samples. To the best of our knowledge, this work provides the first uniform FDR control result in multiple testing for high-dimensional sparse binary data.
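
For intuition, the sketch below shows the usual way local posterior null probabilities ($\ell$-values) are turned into a rejection set with estimated FDR below a target level: sort them and reject while their running mean stays under the level. It is a generic illustration on simulated values, not the adjusted $\ell$-value or $q$-value procedures proposed here.

```python
# Generic local-fdr-to-FDR conversion: reject the coordinates with the smallest
# posterior null probabilities as long as the running mean of those
# probabilities (an estimate of the FDR of the rejection set) stays below the
# target level. Simulated values only; not the paper's corrected procedures.
import numpy as np

def reject_by_running_mean(ell_values: np.ndarray, level: float) -> np.ndarray:
    order = np.argsort(ell_values)               # most promising coordinates first
    running_mean = np.cumsum(ell_values[order]) / np.arange(1, len(ell_values) + 1)
    k = np.sum(running_mean <= level)            # largest prefix with estimated FDR <= level
    rejected = np.zeros(len(ell_values), dtype=bool)
    rejected[order[:k]] = True
    return rejected

rng = np.random.default_rng(0)
ell = np.concatenate([rng.uniform(0.0, 0.05, 20),    # likely signals
                      rng.uniform(0.5, 1.0, 180)])   # likely nulls
print("rejections:", reject_by_running_mean(ell, level=0.1).sum())
```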

Large-sample Bayesian analogs exist for many frequentist methods but are less well known for the widely used 'sandwich' or 'robust' variance estimates. We review existing approaches to Bayesian analogs of sandwich variance estimates and propose a new analog, obtained as the Bayes rule under a form of balanced loss function that combines elements of standard parametric inference with fidelity of the data to the model. Our development is general, applying to essentially any regression setting with independent outcomes. We show by simulation that, being the large-sample equivalent of its frequentist counterpart, the Bayesian robust standard error estimate can faithfully quantify the variability of parameter estimates even under model misspecification -- thus retaining the major attraction of the original frequentist version. We demonstrate our Bayesian analog of standard error estimates when studying the association between age and systolic blood pressure in NHANES.
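
As a reminder of the frequentist object being mimicked, the sketch below computes the classical sandwich estimator $A^{-1} B A^{-1}$ for a least-squares fit on deliberately heteroscedastic simulated data and compares it with the model-based variance. This is the frequentist baseline, not the Bayesian analog developed in the paper.

```python
# The classical sandwich (HC0) variance for ordinary least squares:
# bread = (X'X)^{-1}, meat = sum_i e_i^2 x_i x_i'. Simulated data with
# variance growing in x, so the model-based variance is misspecified.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0, 10, n)
y = 1.0 + 0.5 * x + rng.normal(scale=0.2 * x)   # heteroscedastic errors
X = np.column_stack([np.ones(n), x])

beta = np.linalg.solve(X.T @ X, X.T @ y)        # OLS point estimate
resid = y - X @ beta
bread_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * resid[:, None] ** 2)          # sum_i e_i^2 x_i x_i'
sandwich = bread_inv @ meat @ bread_inv

sigma2 = resid @ resid / (n - 2)                # usual residual variance estimate
model_based = sigma2 * bread_inv
print("robust SEs:     ", np.sqrt(np.diag(sandwich)))
print("model-based SEs:", np.sqrt(np.diag(model_based)))
```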

Biological and artificial information processing systems form representations that they can use to categorize, reason, plan, navigate, and make decisions. How can we measure the extent to which the representations formed by these diverse systems agree? Do similarities in representations then translate into similar behavior? How can a system's representations be modified to better match those of another system? These questions pertaining to the study of representational alignment are at the heart of some of the most active research areas in cognitive science, neuroscience, and machine learning. For example, cognitive scientists measure the representational alignment of multiple individuals to identify shared cognitive priors, neuroscientists align fMRI responses from multiple individuals into a shared representational space for group-level analyses, and ML researchers distill knowledge from teacher models into student models by increasing their alignment. Unfortunately, there is limited knowledge transfer between research communities interested in representational alignment, so progress in one field often ends up being rediscovered independently in another. Thus, greater cross-field communication would be advantageous. To improve communication between these fields, we propose a unifying framework that can serve as a common language between researchers studying representational alignment. We survey the literature from all three fields and demonstrate how prior work fits into this framework. Finally, we lay out open problems in representational alignment where progress can benefit all three of these fields. We hope that our work can catalyze cross-disciplinary collaboration and accelerate progress for all communities studying and developing information processing systems. We note that this is a working paper and encourage readers to reach out with their suggestions for future revisions.
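
As one concrete example of an alignment measure used across these fields, the sketch below computes linear centered kernel alignment (CKA) between two systems' responses to the same stimuli. The representation matrices are random stand-ins, and CKA is only one of many metrics such a framework would cover.

```python
# Linear CKA between two representation matrices whose rows index the same
# stimuli (e.g. fMRI responses and a model's hidden activations). Values near 1
# indicate highly aligned representations.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """CKA between (n_stimuli, d1) and (n_stimuli, d2) representation matrices."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
stimuli = rng.normal(size=(100, 20))
system_a = stimuli @ rng.normal(size=(20, 64))                      # one system's representation
system_b = stimuli @ rng.normal(size=(20, 32)) + rng.normal(size=(100, 32))  # a noisier second system
print("CKA(A, B):", round(linear_cka(system_a, system_b), 3))
```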

We introduce PUNQ, a novel quantum programming language with quantum control, which features higher-order programs that can be superposed, enabling quantum control via quantum conditionals. Our language boasts a type system guaranteeing both unitarity and polynomial-time normalization. Unitarity is achieved by using a special modality for superpositions while requiring orthogonality among superposed terms. Polynomial-time normalization is achieved using a linear-logic-based type discipline employing Barber and Plotkin duality along with a specific modality to account for potential duplications. This type discipline also guarantees that derived values have polynomial size. PUNQ seamlessly combines the two modalities: quantum circuit programs uphold unitarity, and all programs are evaluated in polynomial time, ensuring their feasibility.
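
The link between orthogonality of superposed branches and unitarity can be seen with plain linear algebra: a map sending basis states to the superposed branches is unitary exactly when those branches are orthonormal. The check below is a small NumPy illustration of that fact, not PUNQ syntax.

```python
# Orthonormal superposed branches stacked as columns give a unitary map;
# dropping orthogonality breaks unitarity. Plain linear algebra, not PUNQ.
import numpy as np

def is_unitary(U: np.ndarray) -> bool:
    return np.allclose(U.conj().T @ U, np.eye(U.shape[1]))

plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)          # orthogonal to |+>
skew = np.array([1, 0])                          # not orthogonal to |+>

print(is_unitary(np.column_stack([plus, minus])))  # True: orthonormal branches
print(is_unitary(np.column_stack([plus, skew])))   # False: orthogonality violated
```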

The Q-learning algorithm is known to be affected by the maximization bias, i.e. the systematic overestimation of action values, an important issue that has recently received renewed attention. Double Q-learning has been proposed as an efficient algorithm to mitigate this bias. However, this comes at the price of an underestimation of action values, in addition to increased memory requirements and a slower convergence. In this paper, we introduce a new way to address the maximization bias in the form of a "self-correcting algorithm" for approximating the maximum of an expected value. Our method balances the overestimation of the single estimator used in conventional Q-learning and the underestimation of the double estimator used in Double Q-learning. Applying this strategy to Q-learning results in Self-correcting Q-learning. We show theoretically that this new algorithm enjoys the same convergence guarantees as Q-learning while being more accurate. Empirically, it performs better than Double Q-learning in domains with rewards of high variance, and it even attains faster convergence than Q-learning in domains with rewards of zero or low variance. These advantages transfer to a Deep Q Network implementation that we call Self-correcting DQN and which outperforms regular DQN and Double DQN on several tasks in the Atari 2600 domain.
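
To see the two biases this abstract contrasts, the Monte Carlo sketch below compares the single estimator (max of pooled sample means, as in Q-learning) with the double estimator (select with one batch of samples, evaluate with the other, as in Double Q-learning) on a toy bandit where one action is truly best. The self-correcting estimator itself is not reproduced here.

```python
# All actions are noisy; action 0 has true value 0.5 and the rest 0. The single
# estimator overestimates the true maximum, the double estimator underestimates it.
import numpy as np

rng = np.random.default_rng(0)
true_values = np.array([0.5] + [0.0] * 9)        # true maximum is 0.5
n_samples, n_trials = 10, 5000
single, double = [], []
for _ in range(n_trials):
    a = true_values[:, None] + rng.normal(size=(10, n_samples))   # first batch of rewards
    b = true_values[:, None] + rng.normal(size=(10, n_samples))   # independent second batch
    mean_a, mean_b = a.mean(axis=1), b.mean(axis=1)
    single.append(((mean_a + mean_b) / 2).max())         # max of pooled sample means
    double.append(mean_b[np.argmax(mean_a)])             # select with one batch, evaluate with the other
print("true max value:         ", true_values.max())
print("single estimator (mean):", round(np.mean(single), 3))   # above 0.5: overestimation
print("double estimator (mean):", round(np.mean(double), 3))   # below 0.5: underestimation
```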
