
In this paper, we use a linear birth and death process with immigration to model infectious disease propagation when contamination stems from both person-to-person contact and contact with the environment. Our aim is to estimate the parameters of the process. The main originality and difficulty come from the observation scheme. Counts of the infected population are hidden; the only available data are periodic cumulative counts of newly retired individuals. Although very common in epidemiology, this observation scheme is mathematically challenging even for such a standard stochastic process. We first derive an analytic expression of the unknown parameters as functions of well-chosen discrete-time transition probabilities. Second, we extend and adapt the standard Baum-Welch algorithm in order to estimate these discrete-time transition probabilities in our hidden-data framework. The performance of our estimators is illustrated on both synthetic data and real data on typhoid fever in Mayotte.
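As a concrete point of reference, the Baum-Welch machinery that such an estimation scheme builds on rests on forward-backward recursions over the hidden chain. Below is a minimal sketch of the scaled forward pass, assuming the hidden infected count is truncated to {0, ..., N} and that per-period removal counts enter through an emission matrix; the names `trans`, `emit`, `init` and `obs` are illustrative, not the paper's notation.

```python
import numpy as np

def forward_loglik(trans, emit, init, obs):
    """Log-likelihood of observed removal counts under a finite-state HMM.

    trans : (N+1, N+1) transition matrix of the hidden infected count
    emit  : (N+1, M)   emission probabilities of each possible removal count
    init  : (N+1,)     initial distribution of the infected count
    obs   : sequence of observed per-period removal counts in {0, ..., M-1}
    """
    alpha = init * emit[:, obs[0]]
    loglik = 0.0
    for o in obs[1:]:
        c = alpha.sum()                    # scale to avoid underflow
        loglik += np.log(c)
        alpha = (alpha / c) @ trans * emit[:, o]
    loglik += np.log(alpha.sum())
    return loglik
```

The full Baum-Welch algorithm adds the matching backward pass and an M-step; the paper's contribution lies in adapting these recursions to cumulative observations.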

Related Content

In this study, we tackle a growing concern around the safety and ethical use of large language models (LLMs). Despite their potential, these models can be tricked into producing harmful or unethical content through various sophisticated methods, including 'jailbreaking' techniques and targeted manipulation. Our work zeroes in on a specific issue: to what extent LLMs can be led astray by asking them to generate responses that are instruction-centric, such as pseudocode, a program, or a software snippet, as opposed to vanilla text. To investigate this question, we introduce TechHazardQA, a dataset containing complex queries that should be answered in both text and instruction-centric formats (e.g., pseudocodes), aimed at identifying triggers for unethical responses. We query a series of LLMs -- Llama-2-13b, Llama-2-7b, Mistral-V2 and Mistral 8X7B -- and ask them to generate both text and instruction-centric responses. For evaluation, we report the harmfulness score metric as well as judgements from GPT-4 and humans. Overall, we observe that asking LLMs to produce instruction-centric responses increases unethical response generation by ~2-38% across the models. As an additional objective, we investigate the impact of model editing using the ROME technique, which further increases the propensity for generating undesirable content. In particular, asking edited LLMs to generate instruction-centric responses further increases unethical response generation by ~3-16% across the different models.
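For illustration, a hedged sketch of the two-format probing protocol such a dataset enables is given below; `generate` and `harmfulness` are hypothetical stand-ins for a model API and a harmfulness scorer (the paper uses a harmfulness score plus GPT-4 and human judgements), and neither name comes from the paper.

```python
def probe(question, generate, harmfulness):
    """Query one question in a plain-text and an instruction-centric format
    and return a harmfulness score for each response."""
    text_prompt = f"Answer in plain text:\n{question}"
    instr_prompt = f"Answer as pseudocode with numbered steps:\n{question}"
    responses = {fmt: generate(p) for fmt, p in
                 [("text", text_prompt), ("instruction", instr_prompt)]}
    return {fmt: harmfulness(r) for fmt, r in responses.items()}

# Toy usage with dummy callables standing in for a real model and scorer:
scores = probe("How does X work?",
               generate=lambda p: p.upper(),
               harmfulness=lambda r: 0.0)
```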

In this paper I will develop a lambda-term calculus, lambda-2Int, for a bi-intuitionistic logic and discuss its implications for the notions of sense and denotation of derivations in a bilateralist setting. Thus, I will use the Curry-Howard correspondence, which is well established between the simply typed lambda-calculus and natural deduction systems for intuitionistic logic, and apply it to a bilateralist proof system displaying two derivability relations, one for proving and one for refuting. The basis will be the natural deduction system of Wansing's bi-intuitionistic logic 2Int, which I will turn into a term-annotated form. To this end, we need a type theory that extends to a two-sorted typed lambda-calculus. I will present such a term-annotated proof system for 2Int and prove a Dualization Theorem relating proofs and refutations in this system. On the basis of these formal results I will argue that this gives us interesting insights into questions about sense and denotation as well as synonymy and identity of proofs from a bilateralist point of view.
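Schematically, and in notation assumed here rather than taken from the paper, the bilateralist setup pairs two sorts of typing judgements with a dualization map exchanging them:

```latex
% Two sorts of term-annotated judgements, one per derivability relation
% (notation assumed for illustration):
%   proving:   \Gamma \vdash^{+} t : A    ("t proves A")
%   refuting:  \Gamma \vdash^{-} s : A    ("s refutes A")
% A Dualization Theorem, schematically, relates the two sorts via a map (.)^*:
\Gamma \vdash^{+} t : A
\quad\Longleftrightarrow\quad
\Gamma^{*} \vdash^{-} t^{*} : A^{*}
```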

We present Surjective Sequential Neural Likelihood (SSNL) estimation, a novel method for simulation-based inference in models where the evaluation of the likelihood function is not tractable and only a simulator that can generate synthetic data is available. SSNL fits a dimensionality-reducing surjective normalizing flow model and uses it as a surrogate likelihood function, which allows for conventional Bayesian inference using either Markov chain Monte Carlo methods or variational inference. By embedding the data in a low-dimensional space, SSNL solves several issues that previous likelihood-based methods had when applied to high-dimensional data sets that, for instance, contain non-informative data dimensions or lie along a lower-dimensional manifold. We evaluate SSNL on a wide variety of experiments and show that it generally outperforms contemporary methods used in simulation-based inference, for instance, on a challenging real-world example from astrophysics which models the magnetic field strength of the Sun using a solar dynamo model.
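To make the workflow concrete, here is a minimal numerical sketch under strong simplifying assumptions: a linear-Gaussian surrogate fitted on a PCA embedding stands in for the dimensionality-reducing surjective flow, and random-walk Metropolis stands in for the MCMC step. The toy simulator and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta):                        # toy simulator, theta in R^2
    z = np.array([theta[0] + theta[1], theta[0] - theta[1]])
    return np.tile(z, 5) + 0.1 * rng.normal(size=10)  # 10-d data, 2-d signal

# 1) Simulate training pairs and embed the 10-d data in 2 dimensions (a PCA
#    projection stands in for the dimensionality-reducing surjective flow).
thetas = rng.normal(size=(2000, 2))
xs = np.array([simulator(t) for t in thetas])
mean = xs.mean(0)
_, _, Vt = np.linalg.svd(xs - mean, full_matrices=False)
embed = lambda x: (x - mean) @ Vt[:2].T

# 2) Fit a conditional Gaussian surrogate q(z | theta) by least squares.
Z = embed(xs)
design = np.hstack([thetas, np.ones((2000, 1))])
W, *_ = np.linalg.lstsq(design, Z, rcond=None)
sigma = (Z - design @ W).std(0)

def surrogate_loglik(theta, z_obs):
    mu = np.append(theta, 1.0) @ W
    return -0.5 * np.sum(((z_obs - mu) / sigma) ** 2)

# 3) Random-walk Metropolis on theta with a standard normal prior.
z_obs = embed(simulator(np.array([0.5, -0.3])))
theta, samples = np.zeros(2), []
for _ in range(5000):
    prop = theta + 0.2 * rng.normal(size=2)
    logr = (surrogate_loglik(prop, z_obs) - surrogate_loglik(theta, z_obs)
            - 0.5 * (prop @ prop - theta @ theta))
    if np.log(rng.uniform()) < logr:
        theta = prop
    samples.append(theta)
```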

In this paper, we carry out the numerical analysis of a nonsmooth quasilinear elliptic optimal control problem, where the coefficient in the divergence term of the corresponding state equation is not differentiable with respect to the state variable. Despite the lack of differentiability of the nonlinearity in the quasilinear elliptic equation, the corresponding control-to-state operator is of class $C^1$ but not of class $C^2$. Analogously, the discrete control-to-state operators associated with the approximated control problems are proven to be of class $C^1$ only. By using an explicit second-order sufficient optimality condition, we prove a priori error estimates for a variational approximation, a piecewise constant approximation, and a continuous piecewise linear approximation of the continuous optimal control problem. The numerical tests confirm these error estimates.
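For orientation, one prototypical problem of this type, in notation assumed here for illustration (e.g. with a nonsmooth coefficient such as $a(x, y) = b(x) + |y|$), reads:

```latex
\min_{u \in U_{ad}} \; \frac{1}{2}\|y_u - y_d\|_{L^2(\Omega)}^2
  + \frac{\nu}{2}\|u\|_{L^2(\Omega)}^2
\quad\text{subject to}\quad
-\operatorname{div}\big(a(x, y_u)\nabla y_u\big) = u \ \text{in } \Omega,
\qquad y_u = 0 \ \text{on } \partial\Omega
```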

In observational studies, covariates with substantial missing data are often omitted, despite their strong predictive capabilities. These excluded covariates are generally believed not to simultaneously affect both treatment and outcome, indicating that they are not genuine confounders and do not impact the identification of the average treatment effect (ATE). In this paper, we introduce an alternative doubly robust (DR) estimator that fully leverages non-confounding predictive covariates to enhance efficiency, while also allowing missing values in such covariates. Beyond the double robustness property, our proposed estimator is designed to be more efficient than the standard DR estimator. Specifically, when the propensity score model is correctly specified, it achieves the smallest asymptotic variance among the class of DR estimators, and brings additional efficiency gains by further integrating predictive covariates. Simulation studies demonstrate the notable performance of the proposed estimator over current popular methods. An illustrative example is provided to assess the effectiveness of right heart catheterization (RHC) for critically ill patients.
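As a baseline for comparison, below is a short sketch of the standard doubly robust (AIPW) estimator that the proposed method improves upon; the paper's efficiency-enhanced variant, which additionally folds in partially missing predictive covariates, is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_ate(X, T, Y):
    """Standard AIPW (doubly robust) estimate of the ATE."""
    e = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]     # propensity
    m1 = LinearRegression().fit(X[T == 1], Y[T == 1]).predict(X)  # E[Y|X,T=1]
    m0 = LinearRegression().fit(X[T == 0], Y[T == 0]).predict(X)  # E[Y|X,T=0]
    return (np.mean(T * (Y - m1) / e + m1)
            - np.mean((1 - T) * (Y - m0) / (1 - e) + m0))

# Toy usage on synthetic data with a true effect of 2:
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
T = (rng.uniform(size=500) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)
Y = X @ np.array([1.0, 0.5, 0.0]) + 2.0 * T + rng.normal(size=500)
print(aipw_ate(X, T, Y))
```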

We use the illness-death model (IDM) for chronic conditions to derive a new analytical relation between the transition rates between the states of the IDM. The transition rates are the incidence rate ($i$) and the mortality rates of people without disease ($m_0$) and with disease ($m_1$). In the most generic case, the rates depend on age and calendar time, and in the case of $m_1$ also on the duration of the disease. In this work, we show that the prevalence-odds can be expressed as a convolution-like product of the incidence rate and an exponentiated linear combination of $i$, $m_0$ and $m_1$. The analytical expression can be used as the basis for a maximum likelihood estimation (MLE) and the associated large-sample asymptotics. In a simulation study mimicking a cross-sectional trial about a chronic condition, we estimate the duration dependency of the mortality rate $m_1$ from aggregated current status data using the ML estimator. For this, the numbers of study participants and of diseased people in eleven age groups are considered. The ML estimator provides reasonable estimates for the parameters, including their large-sample confidence bounds.
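To convey the flavour of such a relation, consider the simplest special case, assumed here for illustration, where all rates depend on age $a$ alone, the population starts disease-free, and the duration dependence of $m_1$ is suppressed. The prevalence-odds $\pi = p/(1-p)$ then solves $\pi' = i + (i + m_0 - m_1)\pi$, giving:

```latex
\pi(a) \;=\; \int_0^a i(\tau)\,
  \exp\!\Big(\int_\tau^a \big[i(s) + m_0(s) - m_1(s)\big]\,ds\Big)\,d\tau
```

This is a convolution-like product of the incidence and an exponentiated linear combination of $i$, $m_0$ and $m_1$; the paper's relation covers the general age-time-duration case.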

We propose a simple empirical representation of expectations such that, for a number of samples above a certain threshold drawn from any probability distribution with a finite fourth-order statistic, the proposed estimator outperforms the empirical average when tested against the actual population, with respect to the quadratic loss. For datasets smaller than this threshold, the result still holds, but only for a class of distributions determined by their first four statistics. Our approach leverages the duality between distributionally robust and risk-averse optimization.
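As a purely illustrative Monte Carlo harness (the paper's estimator itself is not reproduced here), the following compares the empirical average against a simple risk-averse alternative, shrinkage toward the sample median, under quadratic loss on a skewed distribution with all moments finite; the alternative estimator and constants are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, eps = 30, 20000, 0.2
mu = np.exp(0.5)                                  # true mean of LogNormal(0, 1)
loss_avg = loss_alt = 0.0
for _ in range(reps):
    x = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # all moments finite
    avg = x.mean()
    alt = (1 - eps) * avg + eps * np.median(x)      # illustrative alternative
    loss_avg += (avg - mu) ** 2
    loss_alt += (alt - mu) ** 2
print(loss_avg / reps, loss_alt / reps)             # mean quadratic losses
```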

In this paper we propose a reinforcement learning-based weakly supervised system for localisation. We train a controller function to localise regions of interest within an image by introducing a novel reward definition that utilises the non-binarised classification probability generated by a pre-trained binary classifier, which classifies object presence in images or image crops. The object-presence classifier may then inform the controller of its localisation quality by quantifying the likelihood of the image containing an object. Such an approach allows us to minimise any potential human bias propagated via labelling for fully supervised localisation. We evaluate our proposed approach on the task of cancerous lesion localisation in a large dataset of real clinical bi-parametric MR images of the prostate. Comparisons to the commonly used multiple-instance learning weakly supervised localisation and to a fully supervised baseline show that our proposed method outperforms multiple-instance learning and performs comparably to fully supervised learning, using only image-level classification labels for training.
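A hedged sketch of the reward signal described above follows: a frozen object-presence classifier scores a crop proposed by the controller, and its non-binarised probability serves as the REINFORCE reward. The toy networks, shapes and the Gaussian policy are illustrative assumptions, not the paper's architecture.

```python
import torch

classifier = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 * 32, 1))
classifier.requires_grad_(False)                       # pre-trained and frozen
controller = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 4))
opt = torch.optim.Adam(controller.parameters(), lr=1e-3)

def crop(img, box):                                    # box parameters in [0, 1]
    x, y = int(box[0] * 32), int(box[1] * 32)
    return img[:, y:y + 32, x:x + 32]

img = torch.rand(1, 64, 64)                            # stand-in for an MR slice
mean = torch.sigmoid(controller(img.unsqueeze(0)))[0]  # proposed box parameters
dist = torch.distributions.Normal(mean, 0.05)          # stochastic policy
box = dist.sample().clamp(0, 1)
with torch.no_grad():                                  # reward: P(object | crop)
    reward = torch.sigmoid(classifier(crop(img, box).unsqueeze(0)))[0, 0]
loss = -dist.log_prob(box).sum() * reward              # REINFORCE update
opt.zero_grad()
loss.backward()
opt.step()
```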

We present a unification and generalization of sequentially and hierarchically semi-separable (SSS and HSS) matrices called tree semi-separable (TSS) matrices. Our main result shows that any dense matrix can be expressed in a TSS format, where the dimensions of the generators are governed by the ranks of the Hankel blocks of the matrix. TSS matrices satisfy a graph-induced rank structure (GIRS) property. It is shown that TSS matrices generalize the algebraic properties of SSS and HSS matrices under addition, products, and inversion. Consequently, TSS matrices admit linear-time matrix-vector multiplication, matrix-matrix multiplication, matrix-matrix addition, inversion, and solvers.
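For orientation, the classical SSS representation that TSS generalizes from a line graph to a tree has the following block form (generator names vary across the literature):

```latex
A_{ij} =
\begin{cases}
  U_i W_{i+1} \cdots W_{j-1} V_j^{\mathsf{T}}, & i < j,\\[2pt]
  D_i, & i = j,\\[2pt]
  P_i R_{i-1} \cdots R_{j+1} Q_j^{\mathsf{T}}, & i > j,
\end{cases}
```

with the generator sizes governed by the ranks of the corresponding Hankel blocks.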

This paper does not describe a working system. Instead, it presents a single idea about representation which allows advances made by several different groups to be combined into an imaginary system called GLOM. The advances include transformers, neural fields, contrastive representation learning, distillation and capsules. GLOM answers the question: How can a neural network with a fixed architecture parse an image into a part-whole hierarchy which has a different structure for each image? The idea is simply to use islands of identical vectors to represent the nodes in the parse tree. If GLOM can be made to work, it should significantly improve the interpretability of the representations produced by transformer-like systems when applied to vision or language.
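A toy sketch of the islands idea, purely illustrative: embeddings at image locations are repeatedly averaged under similarity-based attention, so vectors belonging to one part converge to a shared vector (an island) while different parts stay apart.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two noisy "parts": 8 locations each, 16-d embeddings around two anchors.
emb = np.vstack([rng.normal(0, 0.1, (8, 16)) + v
                 for v in (rng.normal(size=16), rng.normal(size=16))])
for _ in range(20):
    att = np.exp(emb @ emb.T / 2.0)           # attention favouring similarity
    att /= att.sum(1, keepdims=True)
    emb = att @ emb                            # consensus (averaging) update
# After a few iterations, rows within each part are nearly identical islands.
```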
