
This paper examines the impact of natural disasters on affected populations and underscores the need to reduce disaster-related fatalities through proactive strategies. On average, approximately 45,000 people die each year in natural disasters, while the associated economic losses continue to surge. The paper surveys catastrophe models for loss projection, emphasizes the necessity of accounting for volatility in disaster risk, and introduces an innovative model that integrates historical data, addresses data skewness, and accommodates temporal dependence to forecast shifts in mortality. To this end, we introduce a time-varying skew Brownian motion model and prove the existence and uniqueness of its solution. In this model the parameters change over time, and past occurrences enter through the volatility.
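
A minimal simulation sketch of this kind of process is given below, assuming a simple Euler-type scheme in which the drift mu(t), volatility sigma(t), and skewness p(t) are user-supplied functions of time; the function name simulate_tv_skew_bm and the near-zero sign-biasing rule are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

def simulate_tv_skew_bm(mu, sigma, p, x0=0.0, T=1.0, n_steps=1000, seed=0):
    """Euler-type simulation of a diffusion with time-varying drift mu(t),
    volatility sigma(t), and skewness p(t) in (0, 1): away from zero the path
    behaves like a drifted Brownian motion; near zero the sign of the step is
    biased upward with probability p(t). An illustrative approximation only."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        t = k * dt
        step = mu(t) * dt + sigma(t) * rng.normal(0.0, np.sqrt(dt))
        if abs(x[k]) < sigma(t) * np.sqrt(dt):          # near the skew point
            step = abs(step) * (1.0 if rng.random() < p(t) else -1.0)
        x[k + 1] = x[k] + step
    return x

# Hypothetical example: trending drift, slowly growing volatility, upward skew.
path = simulate_tv_skew_bm(mu=lambda t: 0.1,
                           sigma=lambda t: 0.3 + 0.1 * t,
                           p=lambda t: 0.7)
```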

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry practitioners. MODELS 2019 is a forum in which participants exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition gives the modeling community an opportunity to further advance the foundations of modeling and to propose innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability. Keywords: MoDELS · Bayesian inference · Likelihood · Inference
November 2, 2023

Count data with complex features arise in many disciplines, including ecology, agriculture, criminology, medicine, and public health. Zero inflation, spatial dependence, and non-equidispersion are common features in count data. Two classes of models allow for these features -- the mode-parameterized Conway--Maxwell--Poisson (COMP) distribution and the generalized Poisson model. However, both require either constraints on the parameter space or a parameterization that hinders interpretability. We propose a spatial mean-parameterized COMP model that retains the flexibility of these models while resolving the above issues. We use a Bayesian spatial filtering approach to efficiently handle high-dimensional spatial data, and reversible-jump MCMC to automatically choose the basis vectors for spatial filtering. The COMP distribution poses two additional computational challenges -- an intractable normalizing function in the likelihood and no closed-form expression for the mean. We propose a fast computational approach that addresses these challenges by, respectively, introducing an efficient auxiliary variable algorithm and pre-computing key approximations for fast likelihood evaluation. We illustrate the application of our methodology to simulated and real datasets, including Texas HPV-cancer data and US vaccine refusal data.
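
To make the mean-parameterization point concrete, the sketch below approximates the COMP normalizing series by truncation and numerically inverts the mean map to find the rate parameter matching a target mean. The helper names (comp_log_weights, lambda_from_mean), the truncation limit, and the root-finding bracket are assumptions for illustration; the paper's own auxiliary-variable and pre-computation machinery is more sophisticated.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammaln

def comp_log_weights(lam, nu, y_max=200):
    """Log of the unnormalized COMP weights lambda^y / (y!)^nu, truncated at y_max."""
    y = np.arange(y_max + 1)
    return y * np.log(lam) - nu * gammaln(y + 1)

def comp_mean(lam, nu, y_max=200):
    """Approximate COMP mean via the truncated, numerically stabilized series."""
    lw = comp_log_weights(lam, nu, y_max)
    w = np.exp(lw - lw.max())
    return np.sum(np.arange(y_max + 1) * w) / np.sum(w)

def lambda_from_mean(mu, nu, y_max=200):
    """Numerically invert the mean map, since no closed form exists."""
    return brentq(lambda lam: comp_mean(lam, nu, y_max) - mu, 1e-8, 1e4)

lam = lambda_from_mean(mu=3.0, nu=1.5)   # rate giving mean 3 when nu = 1.5
```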

The rapid digitization of real-world data offers an unprecedented opportunity for optimizing healthcare delivery and accelerating biomedical discovery. In practice, however, such data is most abundantly available in unstructured forms, such as clinical notes in electronic medical records (EMRs), and it is generally plagued by confounders. In this paper, we present TRIALSCOPE, a unifying framework for distilling real-world evidence from population-level observational data. TRIALSCOPE leverages biomedical language models to structure clinical text at scale, employs advanced probabilistic modeling for denoising and imputation, and incorporates state-of-the-art causal inference techniques to combat common confounders. Using clinical trial specifications as a generic representation, TRIALSCOPE provides a turn-key solution to generate and reason with clinical hypotheses using observational data. In extensive experiments and analyses on a large-scale real-world dataset with over one million cancer patients from a large US healthcare network, we show that TRIALSCOPE can produce high-quality structuring of real-world data and generate results comparable to marquee cancer trials. In addition to facilitating in-silico clinical trial design and optimization, TRIALSCOPE may be used to empower synthetic controls, pragmatic trials, and post-market surveillance, as well as to support fine-grained patient-like-me reasoning in precision diagnosis and treatment.
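
As a generic illustration of the kind of confounder adjustment such a framework relies on (not TRIALSCOPE's actual implementation), the sketch below estimates an average treatment effect by inverse probability weighting; the function name ipw_ate and the logistic propensity model are assumptions made only for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, treat, outcome):
    """Inverse-probability-weighted average treatment effect.
    X: confounder matrix, treat: 0/1 treatment indicator, outcome: observed outcome.
    A generic confounder-adjustment illustration, not TRIALSCOPE's machinery."""
    ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
    w_t, w_c = treat / ps, (1 - treat) / (1 - ps)       # weights per arm
    return (np.sum(w_t * outcome) / np.sum(w_t)
            - np.sum(w_c * outcome) / np.sum(w_c))
```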

Recent pandemics have underscored the need for effective solutions in healthcare data analysis. One particular challenge in this domain is the manual examination of medical images, such as X-rays and CT scans: the process is time-consuming and involves the logistical complexity of transferring images to centralized cloud computing servers. The speed and accuracy of image analysis are therefore vital for efficient healthcare image management. This paper introduces a healthcare architecture that tackles both analysis efficiency and accuracy by harnessing Artificial Intelligence (AI). Specifically, the proposed architecture uses fog computing and presents a modified Convolutional Neural Network (CNN) designed for image analysis. Different arrangements of CNN layers are explored and evaluated to optimize overall performance. To demonstrate the effectiveness of the proposed approach, a dataset of X-ray images is used for analysis and evaluation. Comparative assessments are conducted against recent models such as VGG16, VGG19, and MobileNet, and against related work. Notably, the proposed approach achieves an accuracy of 99.88% in classifying normal cases, with a validation accuracy of 96.5% and precision, recall, and F1 scores of 100%. These results highlight the potential of fog computing and modified CNNs for healthcare image analysis and diagnosis, during pandemics and beyond. By leveraging these technologies, healthcare professionals can improve the efficiency and accuracy of medical image analysis, leading to better patient care and outcomes.
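
A minimal sketch of a small CNN classifier for X-ray images is shown below, assuming TensorFlow/Keras; the layer counts, filter sizes, and input shape are illustrative choices, not the paper's actual modified architecture.

```python
from tensorflow.keras import layers, models

def build_xray_cnn(input_shape=(224, 224, 1), n_classes=2):
    """Small CNN for X-ray classification; layer counts and filter sizes are
    illustrative, not the paper's exact modified architecture."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_xray_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```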

Accurate prediction of the populations and spatial distributions of wild animal species is critical from a species management and conservation perspective. Culling is a measure taken for various reasons, including when overpopulation of a species is observed or suspected; accurate estimates of population numbers are therefore essential for specifying, monitoring, and evaluating the impact of such programmes. Population data for wild animals are generally collated from various sources and at differing spatial resolutions. Citizen science projects typically provide point-referenced data, whereas site surveys, hunter reports, and official government data may be aggregated and released at a small-area or regional level. Jointly modelling these data resources requires overcoming spatial misalignment. In this article, we develop an N-mixture modelling methodology for jointly modelling species populations in the presence of spatially misaligned data, motivated by the three main species of wild deer in the Republic of Ireland: fallow, red, and sika. Previous studies of deer populations investigated distribution and abundance on a species-by-species basis, failing to account for possible correlation between species and the impact of ecological covariates on their distributions.
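
For intuition, the sketch below computes the marginal log-likelihood of repeated counts at a single site under a basic N-mixture model, summing out the latent abundance up to a truncation point. The function name, the truncation limit n_max, and the single-species, single-detection-probability setup are simplifying assumptions relative to the joint, spatially misaligned model developed in the paper.

```python
import numpy as np
from scipy.stats import binom, poisson

def nmixture_site_loglik(counts, lam, p, n_max=200):
    """Marginal log-likelihood of repeated counts at one site under a basic
    N-mixture model: latent abundance N ~ Poisson(lam), each observed count
    y_j ~ Binomial(N, p). The latent abundance is summed out up to n_max."""
    N = np.arange(n_max + 1)
    log_terms = poisson.logpmf(N, lam)
    for y in counts:
        log_terms = log_terms + binom.logpmf(y, N, p)   # broadcast over all N
    return np.log(np.sum(np.exp(log_terms)))

# Hypothetical example: three repeat visits to one site.
ll = nmixture_site_loglik(counts=[4, 6, 5], lam=8.0, p=0.6)
```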

This paper studies the problem of uncovering a hidden population distribution from the viewpoint of a variational non-Bayesian approach. It asserts that if the hidden probability density function (PDF) has continuous partial derivatives of order at least half the dimension, it can be perfectly reconstructed from a stationary ergodic process. First, we establish that if the PDF belongs to the Wiener algebra, its canonical ensemble form is uniquely determined through Fr\'echet differentiation of the Kullback-Leibler divergence, with the aim of minimizing their cross-entropy. Second, we use the fact that differentiability of the PDF implies its membership in the Wiener algebra. Third, since the energy function of the canonical ensemble is defined as a series, the problem becomes one of solving the equations of analytic series for the coefficients of the energy function. By using truncated polynomial series and demonstrating the convergence of the partial sums of the energy function, we ensure efficient approximation with a finite number of data points. Finally, through numerical experiments, we approximate the PDF from a random sample drawn from a bivariate normal distribution and recover approximations of the mean and covariance from the estimated PDF. These results support the accuracy and practical applicability of the approach.
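
The sketch below illustrates the general idea under simplifying assumptions: the energy function is written as a truncated polynomial series in two variables, the normalizing constant is evaluated on a grid, and the coefficients are chosen by minimizing the cross-entropy (empirical negative log-likelihood) of a bivariate normal sample. The basis, grid, and optimizer are illustrative choices, not the paper's exact construction.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(0)
sample = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.4], [0.4, 1.0]], size=2000)

def features(x):
    """Truncated polynomial basis (degree <= 2) for the energy function E(x)."""
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([x1, x2, x1**2, x2**2, x1 * x2], axis=-1)

# Grid used to normalize p(x) = exp(-E(x)) / Z numerically.
g = np.linspace(-5.0, 5.0, 201)
gx, gy = np.meshgrid(g, g)
grid = np.stack([gx.ravel(), gy.ravel()], axis=-1)
log_cell = 2.0 * np.log(g[1] - g[0])

phi_grid = features(grid)                 # precomputed basis on the grid
phi_mean = features(sample).mean(axis=0)  # empirical moments of the sample

def cross_entropy(theta):
    """Empirical cross-entropy of the canonical ensemble
    p_theta(x) = exp(-theta . phi(x)) / Z(theta)."""
    log_z = logsumexp(-phi_grid @ theta) + log_cell
    return phi_mean @ theta + log_z

theta_hat = minimize(cross_entropy, x0=np.zeros(5)).x   # fitted coefficients
```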

The signaling capacity of a neural population depends on the scale and orientation of its covariance across trials. Estimating this "noise" covariance is challenging and is thought to require a large number of stereotyped trials. New approaches are therefore needed to interrogate the structure of neural noise across rich, naturalistic behaviors and sensory experiences, with few trials per condition. Here, we exploit the fact that conditions are smoothly parameterized in many experiments and leverage Wishart process models to pool statistical power from trials in neighboring conditions. We demonstrate that these models perform favorably on experimental data from the mouse visual cortex and monkey motor cortex relative to standard covariance estimators. Moreover, they produce smooth estimates of covariance as a function of stimulus parameters, enabling estimates of noise correlations in entirely unseen conditions as well as continuous estimates of Fisher information--a commonly used measure of signal fidelity. Together, our results suggest that Wishart processes are broadly applicable tools for quantification and uncertainty estimation of noise correlations in trial-limited regimes, paving the way toward understanding the role of noise in complex neural computations and behavior.
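
Since the full Wishart process machinery is involved, the sketch below shows a much simpler stand-in for the core idea of pooling statistical power across neighboring conditions: per-condition covariance estimates are kernel-weighted across a smoothly varying stimulus parameter. The function name, Gaussian kernel, and bandwidth are assumptions for illustration only and do not reproduce the Wishart process model itself.

```python
import numpy as np

def smoothed_covariances(responses, query_stims, bandwidth=0.2):
    """Kernel-weighted 'noise' covariance estimates across smoothly
    parameterized conditions. responses maps a stimulus value to a
    (trials x neurons) response array; query_stims may include unseen
    stimulus values. A simplified stand-in for the Wishart process model."""
    out = {}
    for s0 in query_stims:
        w_sum, cov_sum = 0.0, 0.0
        for s, r in responses.items():
            w = np.exp(-0.5 * ((s - s0) / bandwidth) ** 2)   # Gaussian weight
            centered = r - r.mean(axis=0)
            cov_sum = cov_sum + w * (centered.T @ centered) / max(len(r) - 1, 1)
            w_sum += w
        out[s0] = cov_sum / w_sum
    return out
```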

Disinformation is one of today's most pressing societal crises -- about 67% of the American population believes that disinformation produces a great deal of uncertainty, and 10% of them knowingly propagate it. Evidence shows that disinformation can manipulate democratic processes and public opinion, causing disruption in stock markets, panic and anxiety in society, and even death during crises. Disinformation should therefore be identified promptly and, where possible, mitigated. With approximately 3.2 billion images and 720,000 hours of video shared online daily on social media platforms, scalable detection of multimodal disinformation requires efficient fact verification. Despite progress in automatic text-based fact verification (e.g., FEVER, LIAR), the research community has devoted comparatively little effort to multimodal fact verification. To address this gap, we introduce FACTIFY 3M, a dataset of 3 million samples that pushes the boundaries of fact verification via a multimodal fake news dataset, while offering explainability through the concept of 5W question-answering. Salient features of the dataset include: (i) textual claims, (ii) ChatGPT-generated paraphrased claims, (iii) associated images, (iv) stable-diffusion-generated additional images (i.e., visual paraphrases), (v) pixel-level image heatmaps to foster image-text explainability of the claim, (vi) 5W QA pairs, and (vii) adversarial fake news stories.

Faults in ad-hoc robot networks may fatally perturb their topologies, leading to the disconnection of subsets of those networks. Optimal topology synthesis is generally too resource-intensive and time-consuming to perform in real time for large ad-hoc robot networks. Topology re-computations should therefore be performed only if the probability that the topology is recoverable after a fault exceeds the probability that it is not. We formulate this problem as a binary classification problem and develop a two-pathway data-driven model based on Bayesian Gaussian mixture models that predicts the outcome via separate pre-fault and post-fault prediction pathways. The results, obtained by integrating the predictions of the two pathways, clearly indicate the success of our model in solving the topology (ir)recoverability prediction problem compared to the best current strategies in the literature.
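
A minimal sketch of the underlying classification idea is given below, assuming scikit-learn: one Bayesian Gaussian mixture is fit per class, and a sample is labeled according to the larger class-conditional log-likelihood. The class name and the single-feature-set setup are illustrative; the paper's model combines two distinct pre-fault and post-fault pathways.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

class MixtureRecoverabilityClassifier:
    """Binary classifier that fits one Bayesian Gaussian mixture per class and
    labels a sample by the larger class-conditional log-likelihood. A generic
    sketch; the paper combines separate pre-fault and post-fault pathways."""

    def __init__(self, n_components=5):
        self.models = {c: BayesianGaussianMixture(n_components=n_components,
                                                  random_state=0)
                       for c in (0, 1)}

    def fit(self, X, y):
        for c, model in self.models.items():
            model.fit(X[y == c])            # class-conditional density estimate
        return self

    def predict(self, X):
        scores = np.column_stack([self.models[c].score_samples(X)
                                  for c in (0, 1)])
        return scores.argmax(axis=1)        # 1 = recoverable, 0 = irrecoverable
```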

In this paper, we investigate the federated clustering (FedC) problem, which aims to accurately partition unlabeled data samples distributed over massive clients into a finite number of clusters under the orchestration of a parameter server, while preserving data privacy. Although this is an NP-hard optimization problem involving real variables (the cluster centroids) and binary variables (the cluster membership of each data sample), we reformulate the FedC problem as a non-convex optimization problem with only one convex constraint, yielding a soft clustering solution. We then propose a novel FedC algorithm using differential privacy (DP), referred to as DP-FedC, which also accounts for partial client participation and multiple local model-update steps. Furthermore, we characterize the proposed DP-FedC through theoretical analyses of its privacy protection and convergence rate, in particular for non-identically and independently distributed (non-i.i.d.) data, and these analyses serve as guidelines for its design. Experimental results on two real datasets demonstrate the efficacy of DP-FedC, its superior performance over state-of-the-art FedC algorithms, and its consistency with the presented analytical results.
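
To make the mechanism concrete, the sketch below implements one round of a generic differentially private federated clustering update: each client norm-clips its points, assigns them to the nearest centroid, perturbs its local sufficient statistics with Gaussian noise, and the server aggregates. The function name, clipping rule, and noise scale are assumptions for illustration and do not reproduce the paper's DP-FedC algorithm or its privacy accounting.

```python
import numpy as np

def dp_fedc_round(client_data, centroids, clip=1.0, noise_std=0.1, rng=None):
    """One illustrative round of differentially private federated clustering.
    client_data: list of (n_i x d) arrays, one per client; centroids: (k x d).
    A sketch of the general idea, not the paper's DP-FedC algorithm."""
    rng = rng or np.random.default_rng(0)
    k, d = centroids.shape
    sums, counts = np.zeros((k, d)), np.zeros(k)
    for X in client_data:                                     # per-client step
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        Xc = X * np.minimum(1.0, clip / np.maximum(norms, 1e-12))  # norm clipping
        assign = np.argmin(((Xc[:, None, :] - centroids) ** 2).sum(-1), axis=1)
        local_sums = np.array([Xc[assign == j].sum(0) if (assign == j).any()
                               else np.zeros(d) for j in range(k)])
        local_counts = np.array([(assign == j).sum() for j in range(k)], float)
        sums += local_sums + rng.normal(0.0, noise_std, size=(k, d))   # DP noise
        counts += local_counts + rng.normal(0.0, noise_std, size=k)
    return sums / np.maximum(counts, 1.0)[:, None]            # server update
```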

This paper assesses the equity impacts of for-hire autonomous vehicles (AVs) and investigates regulatory policies that promote spatial and social equity in future autonomous mobility ecosystems. To this end, we consider a multimodal transportation network, where a ride-hailing platform operates a fleet of AVs to offer mobility-on-demand services in competition with a public transit agency that offers transit services on a transportation network. A game-theoretic model is developed to characterize the intimate interactions between the ride-hailing platform, the transit agency, and multiclass passengers with distinct income levels. An algorithm is proposed to compute the Nash equilibrium of the game and conduct an ex-post evaluation of the performance of the obtained solution. Based on the proposed framework, we evaluate the spatial and social equity in transport accessibility using the Theil index, and find that although the proliferation of for-hire AVs in the ride-hailing network improves overall accessibility, the benefits are not fairly distributed among distinct locations or population groups, implying that the deployment of AVs will enlarge the existing spatial and social inequity gaps in the transportation network if no regulatory intervention is in place. To address this concern, we investigate two regulatory policies that can improve transport equity: (a) a minimum service-level requirement on ride-hailing services, which improves the spatial equity in the transport network; (b) a subsidy on transit services by taxing ride-hailing services, which promotes the use of public transit and improves the spatial and social equity of the transport network. We show that the minimum service-level requirement entails a trade-off: as a higher minimum service level is imposed, the spatial inequity reduces, but the social inequity will be exacerbated. On the other hand ...
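
The Theil index used for the equity evaluation has a simple closed form, T = (1/n) Σ_i (x_i / x̄) log(x_i / x̄) for strictly positive values x_i with mean x̄; a minimal implementation is sketched below, with the example accessibility vectors being purely hypothetical.

```python
import numpy as np

def theil_index(x):
    """Theil index T = mean((x/mean(x)) * log(x/mean(x))) for strictly
    positive accessibility values; 0 means perfect equality."""
    x = np.asarray(x, dtype=float)
    ratio = x / x.mean()
    return float(np.mean(ratio * np.log(ratio)))

# Hypothetical per-zone accessibility before and after a regulatory policy.
print(theil_index([1.0, 2.0, 6.0]), theil_index([2.0, 3.0, 4.0]))
```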
