
Recurrent COVID-19 outbreaks have placed immense strain on the hospital system in Quebec. We develop a Bayesian three-state coupled Markov switching model to analyze COVID-19 outbreaks across Quebec based on admissions in the 30 largest hospitals. Within each catchment area, we assume the disease occupies one of three states: absence (a new state introduced to account for the many zero counts in some of the smaller areas), endemic, and outbreak. We then assume the disease switches between the three states in each area according to a series of coupled nonhomogeneous hidden Markov chains. Unlike previous approaches, the transition probabilities may depend on covariates and on the occurrence of outbreaks in neighboring areas, accounting for geographical outbreak spread. Additionally, to prevent rapid switching between endemic and outbreak periods, we introduce clone states into the model that enforce minimum endemic and outbreak durations. Among our findings, mobility in retail and recreation venues was positively associated with the development and persistence of new COVID-19 outbreaks in Quebec. Based on model comparison, our contributions show promise in improving state estimation both retrospectively and in real time, especially when areas are small and outbreaks are highly spatially synchronized. Furthermore, our approach offers new epidemiological interpretations, such as the ability to estimate the effect of covariates on disease extinction.
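A toy sketch of the clone-state idea described above, not the authors' model: expanding the outbreak state into d clone states forces every outbreak to last at least d steps. All state names and transition probabilities here are illustrative assumptions.

```python
import numpy as np

def build_clone_chain(p_enter=0.2, p_exit=0.3, d=3):
    """States: 0 = endemic, 1..d = outbreak clones.
    Endemic may enter clone 1; clones 1..d-1 advance deterministically,
    so an outbreak lasts at least d steps; only the last clone may
    return to endemic."""
    n = d + 1
    P = np.zeros((n, n))
    P[0, 0] = 1 - p_enter
    P[0, 1] = p_enter
    for k in range(1, d):            # early clones must advance
        P[k, k + 1] = 1.0
    P[d, 0] = p_exit                 # last clone: leave, or stay in outbreak
    P[d, d] = 1 - p_exit
    return P

def simulate(P, steps, rng):
    s, path = 0, [0]
    for _ in range(steps):
        s = rng.choice(len(P), p=P[s])
        path.append(s)
    return path

rng = np.random.default_rng(0)
P = build_clone_chain(d=3)
path = simulate(P, 500, rng)
binary = [1 if s >= 1 else 0 for s in path]   # collapse clones to outbreak=1

# Collect the lengths of completed outbreak runs (a trailing,
# still-open run at the end of the simulation is not recorded).
runs, cur = [], 0
for b in binary:
    if b:
        cur += 1
    elif cur:
        runs.append(cur)
        cur = 0
complete = runs
assert all(r >= 3 for r in complete)          # minimum duration enforced
```

Collapsing the clones back to a binary endemic/outbreak label shows why this device prevents the rapid state flickering the abstract mentions.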

Related content

Single-chain Markov chain Monte Carlo simulates realizations from a Markov chain to estimate expectations with the empirical average. The single-chain simulation is generally of considerable length, which forgoes many of the advantages of modern parallel computation. This paper constructs a novel many-short-chains Monte Carlo (MSC) estimator by averaging over multiple independent sums from Markov chains of a guaranteed short length. The computational advantage is that the independent Markov chain simulations can be fast and may be run in parallel. The MSC estimator requires an importance sampling proposal and a drift condition on the Markov chain, without requiring a convergence analysis of the Markov chain. A non-asymptotic error analysis is developed for the MSC estimator under both geometric and multiplicative drift conditions. Empirical performance is illustrated on an autoregressive process and on the Pólya–Gamma Gibbs sampler for Bayesian logistic regression to predict cardiovascular disease.
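A minimal sketch of the many-short-chains idea, illustrative only and not the paper's estimator: average the empirical means of many independent, deliberately short AR(1) chains whose stationary mean is 0. The crude normal initialization stands in for the importance sampling proposal the paper requires.

```python
import numpy as np

rng = np.random.default_rng(1)
rho, n_chains, length = 0.5, 2000, 20     # many chains, each very short

def short_chain_mean(rng):
    x = rng.normal()                      # crude start; the paper instead
    total = 0.0                           # draws from an importance proposal
    for _ in range(length):
        x = rho * x + rng.normal()        # AR(1): X_{t+1} = rho*X_t + eps
        total += x
    return total / length

estimates = np.array([short_chain_mean(rng) for _ in range(n_chains)])
msc = estimates.mean()                    # many-short-chains average
assert abs(msc) < 0.1                     # stationary mean of the AR(1) is 0
```

Because the chains are independent, the `short_chain_mean` calls could be distributed across workers, which is the parallelism advantage the abstract highlights.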

In recent years, many positivity-preserving schemes for initial value problems have been constructed by modifying a Runge--Kutta (RK) method by weighting the right-hand side of the system of differential equations with solution-dependent factors. These include the classes of modified Patankar--Runge--Kutta (MPRK) and Geometric Conservative (GeCo) methods. Compared to traditional RK methods, the analysis of accuracy and stability of these methods is more complicated. In this work, we provide a comprehensive and unifying theory of order conditions for such RK-like methods, which differ from original RK schemes in that their coefficients are solution-dependent. The resulting order conditions are themselves solution-dependent and obtained using the theory of NB-series, and thus can easily be read off from labeled N-trees. We present, for the first time, order conditions for MPRK and GeCo schemes of arbitrary order; for MPRK schemes, the order conditions are given implicitly in terms of the stages. From these results, we recover as particular cases all known order conditions from the literature for first- and second-order GeCo as well as first-, second- and third-order MPRK methods. Additionally, we derive necessary and sufficient conditions in explicit form for third- and fourth-order GeCo schemes as well as fourth-order MPRK methods. We also present a new fourth-order MPRK method within this framework and numerically confirm its convergence rate.
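To make the Patankar-type weighting concrete, here is a toy first-order modified Patankar--Euler step for the linear, conservative production-destruction system u' = b*v - a*u, v' = a*u - b*v (the system and the names a, b are illustrative choices, not from the paper). Weighting each production/destruction term by the unknown ratio y^{n+1}/y^n turns the update into a small linear system whose solution stays positive and conserves u + v for any step size h.

```python
import numpy as np

def mpe_step(u, v, a, b, h):
    # Patankar weighting of u' = b*v - a*u, v' = a*u - b*v gives:
    #   (1 + h*a) u1 - h*b  v1 = u
    #   -h*a  u1 + (1 + h*b) v1 = v
    A = np.array([[1 + h * a, -h * b],
                  [-h * a,    1 + h * b]])
    u1, v1 = np.linalg.solve(A, np.array([u, v]))
    return u1, v1

u, v, a, b, h = 0.9, 0.1, 5.0, 1.0, 10.0   # deliberately huge step size
for _ in range(50):
    u, v = mpe_step(u, v, a, b, h)
    assert u > 0 and v > 0                  # positivity preserved
assert abs(u + v - 1.0) < 1e-9              # u + v conserved exactly
```

An explicit Euler step with h = 10 would immediately produce negative values here; the solution-dependent weighting is what buys unconditional positivity, at the cost of the more involved order-condition analysis the paper addresses.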

In this paper we present a new high order semi-implicit DG scheme on two-dimensional staggered triangular meshes applied to different nonlinear systems of hyperbolic conservation laws such as advection-diffusion models, incompressible Navier-Stokes equations and natural convection problems. While the temperature and pressure fields are defined on a triangular main grid, the velocity field is defined on a quadrilateral edge-based staggered mesh. A semi-implicit time discretization is proposed, which separates slow and fast time scales by treating them explicitly and implicitly, respectively. The nonlinear convection terms are evolved explicitly using a semi-Lagrangian approach, whereas we consider an implicit discretization for the diffusion terms and the pressure contribution. High order of accuracy in time is achieved using a new flexible and general framework of IMplicit-EXplicit (IMEX) Runge-Kutta schemes specifically designed to operate with semi-Lagrangian methods. To improve the efficiency in the computation of the DG divergence operator and the mass matrix, we propose to approximate the numerical solution with a less regular polynomial space on the edge-based mesh, which is defined on two sub-triangles that split the staggered quadrilateral elements. Due to the implicit treatment of the fast scale terms, the resulting numerical scheme is unconditionally stable for the considered governing equations. In contrast to a genuinely space-time discontinuous Galerkin scheme, the IMEX discretization preserves the symmetry and positive semi-definiteness of the arising linear system for the pressure, which can be solved with the aid of an efficient matrix-free implementation of the conjugate gradient method. We present several convergence results, including nonlinear transport and density currents, up to third order of accuracy in both space and time.
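The slow/fast splitting can be illustrated with a scalar toy problem, far simpler than the paper's DG scheme: for y' = f_slow(y, t) - lam*y with a stiff linear fast part, a first-order IMEX Euler step treats the slow term explicitly and the fast term implicitly. The right-hand side and all parameter values below are illustrative.

```python
import numpy as np

# IMEX Euler:  y^{n+1} = y^n + h*f_slow(y^n, t^n) - h*lam*y^{n+1}
#          =>  y^{n+1} = (y^n + h*f_slow(y^n, t^n)) / (1 + h*lam)
lam, h = 1.0e4, 0.1            # step far above the explicit stability limit
f_slow = lambda y, t: np.sin(t)

y, t = 1.0, 0.0
for _ in range(100):
    y = (y + h * f_slow(y, t)) / (1 + h * lam)
    t += h
    assert abs(y) < 10.0       # stable despite h*lam = 1000
```

A fully explicit Euler step would amplify by |1 - h*lam| = 999 per step and blow up immediately; implicit treatment of only the fast term keeps the update cheap while removing the stiff stability restriction, which is the essence of the semi-implicit strategy described above.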

Multi-modal Large Language Models (MLLMs) have shown impressive abilities in generating reasonable responses with respect to multi-modal contents. However, there is still a wide gap between the performance of recent MLLM-based applications and the expectations of the broad public, even though the most powerful models, OpenAI's GPT-4 and Google's Gemini, have been deployed. This paper strives to enhance understanding of the gap through the lens of a qualitative study on the generalizability, trustworthiness, and causal reasoning capabilities of recent proprietary and open-source MLLMs across four modalities, i.e., text, code, image, and video, ultimately aiming to improve the transparency of MLLMs. We believe these properties are representative factors that define the reliability of MLLMs in supporting various downstream applications. Specifically, we evaluate the closed-source GPT-4 and Gemini and six open-source LLMs and MLLMs. Overall, we evaluate 230 manually designed cases, whose qualitative results are summarized into 12 scores (i.e., 4 modalities times 3 properties). In total, we uncover 14 empirical findings that are useful for understanding the capabilities and limitations of both proprietary and open-source MLLMs, towards more reliable downstream multi-modal applications.

Recent years have witnessed tremendous success in Self-Supervised Learning (SSL), which has been widely utilized to facilitate various downstream tasks in Computer Vision (CV) and Natural Language Processing (NLP) domains. However, attackers may steal such SSL models and commercialize them for profit, making it crucial to verify the ownership of the SSL models. Most existing ownership protection solutions (e.g., backdoor-based watermarks) are designed for supervised learning models and cannot be used directly since they require that the models' downstream tasks and target labels be known and available during watermark embedding, which is not always possible in the domain of SSL. To address such a problem, especially when downstream tasks are diverse and unknown during watermark embedding, we propose a novel black-box watermarking solution, named SSL-WM, for verifying the ownership of SSL models. SSL-WM maps watermarked inputs of the protected encoders into an invariant representation space, which causes any downstream classifier to produce expected behavior, thus allowing the detection of embedded watermarks. We evaluate SSL-WM on numerous tasks, such as CV and NLP, using various SSL models, including both contrastive-based and generative-based ones. Experimental results demonstrate that SSL-WM can effectively verify the ownership of stolen SSL models in various downstream tasks. Furthermore, SSL-WM is robust against model fine-tuning, pruning, and input preprocessing attacks. Lastly, SSL-WM can also evade detection from evaluated watermark detection approaches, demonstrating its promising application in protecting the ownership of SSL models.
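A heavily simplified sketch of the invariant-representation intuition, not SSL-WM itself: a watermarked encoder maps trigger inputs to nearly the same embedding, so their representations are far more concentrated than under a clean encoder. The encoders, the trigger set, and the spread statistic below are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)
dim, n_triggers = 16, 50
anchor = rng.normal(size=dim)             # the "invariant" representation

def watermarked_encoder(x):
    # collapses every trigger input near the anchor point
    return anchor + 0.01 * rng.normal(size=dim)

def clean_encoder(x):
    # an unwatermarked encoder spreads triggers out like any other input
    return rng.normal(size=dim)

triggers = rng.normal(size=(n_triggers, dim))

def spread(encoder):
    embs = np.array([encoder(x) for x in triggers])
    return embs.std(axis=0).mean()        # average per-dimension spread

wm_spread = spread(watermarked_encoder)
clean_spread = spread(clean_encoder)
assert wm_spread < 0.05 and clean_spread > 0.5
```

In the black-box setting of the paper, one would observe the concentration only indirectly, through the consistent behavior any downstream classifier exhibits on the trigger inputs.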

We propose a multilevel Markov chain Monte Carlo (MCMC) method for the Bayesian inference of random field parameters in PDEs using high-resolution data. Compared to existing multilevel MCMC methods, we additionally consider level-dependent data resolution and introduce a suitable likelihood scaling to enable consistent cross-level comparisons. We theoretically show that this approach attains the same convergence rates as when using level-independent treatment of data, but at significantly reduced computational cost. Additionally, we show that assumptions of exponential covariance and log-normality of random fields, widely held in multilevel Monte Carlo literature, can be extended to a wide range of covariance structures and random fields. These results are illustrated using numerical experiments for a 2D plane stress problem, where the Young's modulus is estimated from discretisations of the displacement field.
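A toy illustration of likelihood scaling across data resolutions (illustrative only, not the paper's multilevel estimator): when a coarse level only sees every k-th observation, rescaling its log-likelihood by N/m makes its magnitude comparable to the full-resolution level. The Gaussian model and all values below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, theta_true, sigma = 4096, 1.5, 0.3
data = theta_true + sigma * rng.normal(size=N)   # full-resolution data

def loglik(theta, obs, scale=1.0):
    # Gaussian log-likelihood (up to a constant), optionally rescaled
    return scale * np.sum(-0.5 * ((obs - theta) / sigma) ** 2)

theta = 1.0
full = loglik(theta, data)
coarse = data[::4]                               # level-dependent resolution
scaled = loglik(theta, coarse, scale=N / len(coarse))
# The scaled coarse log-likelihood tracks the full one in magnitude,
# enabling consistent comparisons between levels.
assert abs(scaled - full) / abs(full) < 0.2
```

Without the scale factor, the coarse level's log-likelihood would be roughly four times smaller in magnitude, and acceptance decisions made across levels would not be comparable.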

This paper develops a class of robust weak Galerkin methods for the stationary incompressible convective Brinkman-Forchheimer equations. The methods adopt piecewise polynomials of degrees $m\ (m\geq1)$ and $m-1$ respectively for the approximations of velocity and pressure variables inside the elements and piecewise polynomials of degrees $k \ ( k=m-1,m)$ and $m$ respectively for their numerical traces on the interfaces of elements, and are shown to yield globally divergence-free velocity approximation. Existence and uniqueness results for the discrete schemes, as well as optimal a priori error estimates, are established. A convergent linearized iterative algorithm is also presented. Numerical experiments are provided to verify the performance of the proposed methods.

Estimating the lengths-of-stay (LoS) of hospitalised COVID-19 patients is key for predicting the demand for hospital beds and planning mitigation strategies, as overwhelming the healthcare system has critical consequences for disease mortality. However, accurately mapping the time-to-event of hospital outcomes, such as the LoS in the intensive care unit (ICU), requires understanding patient trajectories while adjusting for covariates and observation bias, such as incomplete data. Standard methods, such as the Kaplan-Meier estimator, require prior assumptions that are untenable given current knowledge. Using real-time surveillance data from the first weeks of the COVID-19 epidemic in Galicia (Spain), we aimed to model the time-to-event and event probabilities of hospitalised patients, without parametric priors and adjusting for individual covariates. We applied a non-parametric mixture cure model and compared its performance in estimating hospital ward (HW)/ICU LoS to the performances of commonly used methods to estimate survival. We showed that the proposed model outperformed standard approaches, providing more accurate ICU and HW LoS estimates. Finally, we applied our model estimates to simulate COVID-19 hospital demand using a Monte Carlo algorithm. We provided evidence that adjusting for sex, generally overlooked in prediction models, together with age is key for accurately forecasting HW and ICU occupancy, as well as discharge or death outcomes.
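A minimal sketch of the Monte Carlo occupancy step, not the paper's algorithm: draw a length-of-stay for each admitted patient and accumulate bed occupancy over time. The admission process and the LoS draws below are dummy placeholders; the paper would instead sample from the fitted mixture cure model, stratified by age and sex.

```python
import numpy as np

rng = np.random.default_rng(3)
horizon = 60                                        # days simulated
admissions = rng.poisson(10, size=horizon)          # dummy daily admissions
los_samples = np.array([3, 5, 5, 7, 8, 10, 12, 14]) # dummy LoS draws (days)

occupancy = np.zeros(horizon, dtype=int)
for day, n_adm in enumerate(admissions):
    los = rng.choice(los_samples, size=n_adm)       # one LoS per patient
    for stay in los:
        occupancy[day:day + stay] += 1              # bed held for `stay` days

assert occupancy.min() >= 0
assert occupancy.max() <= admissions.sum()          # bounded by total patients
```

Repeating this simulation many times with LoS distributions drawn per covariate group yields the forecast bands for HW and ICU occupancy that the abstract describes.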

Disaggregated evaluation is a central task in AI fairness assessment, with the goal of measuring an AI system's performance across different subgroups defined by combinations of demographic or other sensitive attributes. The standard approach is to stratify the evaluation data across subgroups and compute performance metrics separately for each group. However, even for moderately-sized evaluation datasets, sample sizes quickly get small once intersectional subgroups are considered, which greatly limits the extent to which intersectional groups are considered in many disaggregated evaluations. In this work, we introduce a structured regression approach to disaggregated evaluation that we demonstrate can yield reliable system performance estimates even for very small subgroups. We also provide corresponding inference strategies for constructing confidence intervals and explore how goodness-of-fit testing can yield insight into the structure of fairness-related harms experienced by intersectional groups. We evaluate our approach on two publicly available datasets, and several variants of semi-synthetic data. The results show that our method is considerably more accurate than the standard approach, especially for small subgroups, and goodness-of-fit testing helps identify the key factors that drive differences in performance.
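A toy sketch of one structured-regression idea, illustrative and not the paper's estimator: instead of estimating each intersectional cell mean separately, fit accuracy as an additive function of the attributes, so that every cell borrows strength from the others. The attributes, effect sizes, and the 2x4 grid below are hypothetical.

```python
import numpy as np

# Hypothetical attributes: 2 sexes x 4 age groups = 8 intersectional cells.
sexes, ages = 2, 4
sex_eff = np.array([0.00, -0.05])
age_eff = np.array([0.00, -0.02, -0.04, -0.10])
base = 0.90
# Exact cell accuracies under an additive ground truth.
truth = base + sex_eff[:, None] + age_eff[None, :]

# Design matrix: intercept + dummy indicators for sex and age group.
X, y = [], []
for s in range(sexes):
    for a in range(ages):
        x = [1.0] + [float(s == j) for j in range(1, sexes)] \
                  + [float(a == j) for j in range(1, ages)]
        X.append(x)
        y.append(truth[s, a])
X, y = np.array(X), np.array(y)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = (X @ beta).reshape(sexes, ages)
# A 5-parameter main-effects model recovers all 8 cells exactly here.
assert np.abs(pred - truth).max() < 1e-10
```

With noisy per-cell data, the 5-parameter model would have far lower variance than 8 separate cell means, which is the small-subgroup advantage the abstract claims; goodness-of-fit testing then probes whether the additive structure actually holds.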

Recent advances in 3D fully convolutional networks (FCN) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from the large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafting features or training class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that will first use a 3D FCN to roughly define a candidate region, which will then be used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on more detailed segmentation of the organs and vessels. We utilize training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital that includes 150 CT scans, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve a significantly higher performance in small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download at: https://github.com/holgerroth/3Dunet_abdomen_cascade.
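A toy sketch of the coarse-to-fine cropping step: stage 1 produces a rough candidate mask, and stage 2 only processes the voxels inside its (padded) bounding box. The dummy "coarse model" below is simple thresholding, standing in for the first 3D FCN; the volume, organ position, and padding are illustrative.

```python
import numpy as np

volume = np.zeros((64, 64, 64))
volume[20:30, 25:40, 30:45] = 1.0          # dummy organ
volume += 0.01                              # faint background intensity

coarse_mask = volume > 0.5                  # stand-in for the coarse 3D FCN
zs, ys, xs = np.where(coarse_mask)
pad = 2                                     # small safety margin
lo = np.maximum([zs.min(), ys.min(), xs.min()] - np.array([pad] * 3), 0)
hi = [zs.max() + 1 + pad, ys.max() + 1 + pad, xs.max() + 1 + pad]
crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]   # stage-2 input

reduction = crop.size / volume.size
assert reduction < 0.05     # the fine FCN sees only a few % of the voxels
```

The reduction here is even stronger than the ~10% reported above because the dummy organ is compact; the point is simply that restricting the second network to the candidate region lets it spend its capacity on fine boundaries.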
