
During the COVID-19 pandemic, governments faced the challenge of managing population behavior to prevent their healthcare systems from collapsing. Sweden adopted a strategy centered on voluntary sanitary recommendations, while Belgium resorted to mandatory measures. The consequences of these approaches for pandemic progression and the associated economic impacts remain insufficiently understood. This study leverages the divergent policies of Belgium and Sweden during the COVID-19 pandemic to relax the unrealistic -- but persistently used -- assumption that social contacts are not influenced by an epidemic's dynamics. We develop an epidemiological-economic co-simulation model in which pandemic-induced behavioral changes are a superposition of voluntary actions, driven by fear, prosocial behavior, or social pressure, and compulsory compliance with government directives. Our findings emphasize the importance of early responses, which reduce the stringency of the measures needed to safeguard healthcare systems and minimize the ensuing economic damage. Voluntary behavioral changes lead to a pattern of recurring epidemics, which should be regarded as the natural long-term course of pandemics. Governments should be wary of prolonging lockdowns beyond what is necessary, as this causes greater economic damage and a potentially larger second surge once measures are lifted. Our model can aid policymakers in selecting an appropriate long-term strategy that minimizes economic damage.
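
As a minimal sketch of the behavioral superposition described above (illustrative SIRS dynamics and parameter values of our own, not the paper's calibrated co-simulation model):

```python
import numpy as np

def simulate(beta0=0.3, gamma=0.1, waning=0.005, days=1000, fear=200.0, mandate=0.0):
    """SIR model with waning immunity, where the effective contact rate is
    reduced both by voluntary behavior (increasing with current prevalence)
    and by a fixed government-mandated reduction. Illustrative values only."""
    S, I, R = 0.999, 0.001, 0.0
    history = []
    for _ in range(days):
        voluntary = 1.0 / (1.0 + fear * I)          # fear-driven contact reduction
        beta = beta0 * voluntary * (1.0 - mandate)  # superposition with mandates
        new_inf = beta * S * I
        new_rec = gamma * I
        new_sus = waning * R
        S, I, R = S - new_inf + new_sus, I + new_inf - new_rec, R + new_rec - new_sus
        history.append(I)
    return history

# Compare peak prevalence with and without a mandated contact reduction:
print(max(simulate(mandate=0.0)), max(simulate(mandate=0.5)))
```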

Related content

This paper investigates extremal quantiles under two-way cluster dependence. We demonstrate that unconditional intermediate order quantiles in the tails are asymptotically Gaussian. This is remarkable because two-way cluster dependence entails potential non-Gaussianity in general, yet extremal quantiles do not suffer from this issue. Building upon this result, we extend our analysis to extremal quantile regressions of intermediate order.
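
Schematically (in illustrative notation of our own, not the paper's), asymptotic normality for an intermediate order quantile takes the form

```latex
\sqrt{n\tau_n}\,\frac{\hat{Q}_n(\tau_n) - Q(\tau_n)}{\sigma_n} \;\xrightarrow{d}\; \mathcal{N}(0,1),
\qquad \tau_n \to 0, \quad n\tau_n \to \infty,
```

where $\sigma_n$ is a scaling sequence that, under two-way cluster dependence, must account for the cluster structure.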

In several countries, including Italy, a prominent approach to population health surveillance involves conducting repeated cross-sectional surveys at short intervals of time. These surveys gather information on the health status of individual respondents, including details on their behaviors, risk factors, and relevant socio-demographic information. While the collected data undoubtedly provides valuable information, modeling such data presents several challenges. For instance, in health risk models, it is essential to consider behavioral information, spatio-temporal dynamics, and disease co-occurrence. In response to these challenges, our work proposes a multivariate spatio-temporal logistic model for chronic disease diagnoses. Predictors are modeled using individual risk factor covariates and a latent individual propensity toward the disease. Leveraging a state space formulation of the model, we construct a framework in which spatio-temporal heterogeneity in regression parameters is informed by exogenous spatial information, corresponding to different spatial contextual risk factors that may affect health and the occurrence of chronic diseases in different ways. To explore the utility and effectiveness of our method, we analyze behavioral and risk factor surveillance data collected in Italy (PASSI), a country characterized by pronounced administrative, social, and territorial diversity, reflected in high variability in morbidity across population subgroups.
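
In schematic form (our illustrative notation, not necessarily the paper's exact specification), such a model couples a logistic observation equation with a state equation for spatio-temporally varying coefficients:

```latex
\operatorname{logit} \Pr(y_{idt} = 1 \mid x_{it}) = x_{it}^{\top} \beta_{s(i),t}^{(d)} + u_{i}^{(d)},
\qquad
\beta_{s,t}^{(d)} = \beta_{s,t-1}^{(d)} + \Gamma^{(d)} z_{s} + \eta_{s,t}^{(d)},
```

with $i$ indexing respondents, $d$ diseases, $t$ survey waves, and $s(i)$ the area of residence; $u_i^{(d)}$ is the latent individual propensity and $z_s$ collects the exogenous spatial contextual covariates.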

The COVID-19 pandemic is a recent example of the spread of a harmful contagion through large populations. The spread of harmful contagions is not restricted to infectious diseases: it is also relevant to computer viruses and malware in computer networks, and the spread of fake news and propaganda in online social networks is of equally major concern. In this study, we introduce the measure-based spread minimization problem (MBSMP), which can help policymakers minimize the spread of harmful contagions in large networks. We develop exact solution methods based on branch-and-Benders-cut algorithms that apply Benders decomposition to two different mixed-integer programming formulations of the MBSMP: an arc-based formulation and a path-based formulation. We show that for both formulations the Benders optimality cuts can be generated by a combinatorial procedure rather than by solving the dual subproblems via linear programming. Additional improvements, such as scenario-dependent extended seed sets, initial cuts, and a starting heuristic, are also incorporated into our branch-and-Benders-cut algorithms. We investigate the contribution of the various components of the solution algorithms to overall performance, using computational results on a set of instances derived from existing ones in the literature.
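
As a toy illustration of the objective being minimized (not of the branch-and-Benders-cut machinery itself; the graph, scenarios, and uniform weights below are made up):

```python
from collections import deque

def spread(adj, seeds, protected):
    """BFS reach of `seeds` in graph `adj`, with `protected` nodes removed."""
    seen = set(seeds) - protected
    queue = deque(seen)
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in seen and v not in protected:
                seen.add(v)
                queue.append(v)
    return len(seen)

def expected_spread(adj, scenarios, protected):
    """Average spread over scenario-dependent seed sets (uniform weights)."""
    return sum(spread(adj, s, protected) for s in scenarios) / len(scenarios)

adj = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
scenarios = [{0}, {2}]
print(expected_spread(adj, scenarios, protected={3}))  # blocking node 3 caps the spread
```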

Traditional approaches to urban income segregation focus on static residential patterns, often failing to capture the dynamic nature of social mixing at the neighborhood level. Leveraging high-resolution location-based data from mobile phones, we capture the interplay of three income groups (high, medium, low) based on their daily routines. We propose a three-dimensional space to analyze social mixing, embedded in the temporal dynamics of urban activities. This framework offers a more detailed perspective on social interactions, closely linked to the geographical features of each neighborhood. While residential areas fail to encourage social mixing at night, working hours foster inclusion, with the city center showing a heightened level of interaction. As evening sets in, leisure areas emerge as potential facilitators of social interaction, depending on urban features such as public transport and the variety of Points Of Interest. These characteristics significantly modulate the magnitude and type of social stratification involved in social mixing, underscoring the significance of urban design in either bridging or widening socio-economic divides.
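
A hedged sketch of one way to quantify hourly social mixing from co-presence counts (an illustrative entropy metric, not necessarily the paper's exact three-dimensional construction):

```python
import math

def mixing_entropy(counts):
    """Normalized Shannon entropy of income-group shares present in a
    neighborhood; 1.0 means perfectly even mixing of the three groups."""
    total = sum(counts.values())
    if total == 0:
        return 0.0
    h = -sum((c / total) * math.log(c / total) for c in counts.values() if c > 0)
    return h / math.log(len(counts))

# Hypothetical co-presence counts by income group at two times of day:
night = {"high": 90, "medium": 8, "low": 2}     # residential area, segregated
midday = {"high": 35, "medium": 33, "low": 32}  # city center, working hours
print(round(mixing_entropy(night), 2), round(mixing_entropy(midday), 2))
```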

With growing mineral consumption, the management of mining waste is crucial. Cemented paste backfill (CPB) is one of the techniques developed by the mining industry to fill the voids generated by the excavation of underground spaces. The CPB process is the subject of various studies aimed at optimizing its implementation in the field. In this article, we focus on modelling the backfill phase, for which it has been shown in [Vigneaux et al., Cem. Concr. Res. 164 (2023) 107038] that a viscoplastic lubrication model can describe CPB experiments. The aim here is to propose an accelerated method for estimating the properties of the paste (typically its rheological properties), via an inverse problem procedure based on observed height profiles of the paste. The inversion procedure relies on a metamodel built from an initial partial differential equation model, using a Polynomial Chaos Expansion coupled with a Principal Component Analysis.
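
The sketch below illustrates the surrogate-based inversion idea on a stand-in forward model (a plain polynomial fit replaces the chaos expansion, and the "profile" function is invented for the example; the paper's PDE model and procedure are richer):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)

def forward(theta):
    """Stand-in for the PDE model: a height profile controlled by a single
    rheological parameter `theta` (purely illustrative)."""
    return np.exp(-theta * x) * (1.0 - x)

# 1) Snapshots of the forward model over the prior range of theta.
thetas = rng.uniform(0.5, 5.0, 200)
snaps = np.array([forward(t) for t in thetas])

# 2) PCA compression of the profiles.
mean = snaps.mean(axis=0)
U, S, Vt = np.linalg.svd(snaps - mean, full_matrices=False)
modes = Vt[:3]                         # keep 3 principal components
scores = (snaps - mean) @ modes.T

# 3) Polynomial surrogate mapping theta to PCA scores (degree-4 fit,
#    a cheap stand-in for a Polynomial Chaos Expansion).
coefs = [np.polyfit(thetas, scores[:, k], 4) for k in range(3)]
def surrogate(theta):
    z = np.array([np.polyval(c, theta) for c in coefs])
    return mean + z @ modes

# 4) Inversion: least-squares fit of theta to an "observed" profile.
observed = forward(2.7) + rng.normal(0, 0.005, x.size)
grid = np.linspace(0.5, 5.0, 1000)
theta_hat = grid[np.argmin([np.sum((surrogate(t) - observed) ** 2) for t in grid])]
print(theta_hat)  # close to the true value 2.7
```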

Safe and reliable disclosure of information from confidential data is a challenging statistical problem. A common approach considers the generation of synthetic data, to be disclosed instead of the original data. Efficient approaches ought to deal with the trade-off between reliability and confidentiality of the released data. Ultimately, the aim is to reproduce as accurately as possible statistical analyses of the original data using the synthetic data. Bayesian networks are a model-based approach that can be used to parsimoniously estimate the underlying distribution of the original data and generate synthetic datasets. These should not only approximate the results of analyses of the original data but also robustly quantify the uncertainty involved in the approximation. This paper proposes a fully Bayesian approach to generating and analyzing synthetic data based on the posterior predictive distribution of statistics of the synthetic data, allowing for efficient uncertainty quantification. The methodology exploits probability properties of the model to devise a computationally efficient algorithm that obtains the target predictive distributions via Monte Carlo. Model parsimony is handled by proposing a general class of penalizing priors for Bayesian network models. Finally, the efficiency and applicability of the proposed methodology are empirically investigated through simulated and real examples.
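
A minimal sketch of the Monte Carlo idea, assuming a single binary attribute with a conjugate Beta prior (a drastic simplification of the paper's Bayesian network setting):

```python
import numpy as np

rng = np.random.default_rng(1)

# "Confidential" data: one binary attribute (toy example).
original = rng.binomial(1, 0.3, size=200)

# Conjugate Beta(1, 1) prior gives a Beta posterior for the success probability.
a, b = 1 + original.sum(), 1 + (original == 0).sum()

# Posterior predictive distribution of a statistic (here, the sample mean)
# of synthetic datasets, approximated by Monte Carlo.
stats = []
for _ in range(5000):
    p = rng.beta(a, b)                        # draw a parameter from the posterior
    synthetic = rng.binomial(1, p, size=200)  # draw a synthetic dataset
    stats.append(synthetic.mean())

# The interval quantifies the uncertainty an analyst of the synthetic
# data faces when targeting the original-data mean:
print(original.mean(), np.quantile(stats, [0.025, 0.975]))
```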

Algorithmic fairness in the context of personalized recommendation presents significantly different challenges from those commonly encountered in classification tasks. Researchers studying classification have generally considered fairness to be a matter of achieving equality of outcomes between a protected and an unprotected group, and have built algorithmic interventions on this basis. We argue that fairness in real-world application settings in general, and especially in the context of personalized recommendation, is much more complex and multi-faceted, requiring a more general approach. We propose a model that formalizes multistakeholder fairness in recommender systems as a two-stage social choice problem. In particular, we express recommendation fairness as a novel combination of an allocation and an aggregation problem, which together integrate fairness concerns and personalized recommendation provisions, and we derive new recommendation techniques based on this formulation. Simulations demonstrate the ability of the framework to integrate multiple fairness concerns in a dynamic way.
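
A hedged sketch of the two-stage structure (the concern names, deficit weights, scores, and the linear aggregation rule are all invented for illustration; the paper derives its own mechanisms):

```python
import random

random.seed(0)

# Stage 1 (allocation): choose which fairness concern is active for this
# recommendation opportunity, weighted by how far each is below its target.
deficits = {"provider_parity": 0.4, "geographic_coverage": 0.1}
concern = random.choices(list(deficits), weights=list(deficits.values()))[0]

# Stage 2 (aggregation): combine the personalized relevance score with the
# active fairness agent's score for each candidate item (toy values).
candidates = {"item_a": (0.9, 0.1), "item_b": (0.7, 0.8)}  # (relevance, fairness)

def aggregate(rel, fair, lam=0.3):
    """Simple convex combination; lam trades personalization for fairness."""
    return (1 - lam) * rel + lam * fair

ranked = sorted(candidates, key=lambda i: aggregate(*candidates[i]), reverse=True)
print(concern, ranked)
```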

Principal component analysis (PCA) is a longstanding and well-studied approach for dimension reduction. It rests upon the assumption that the underlying signal in the data has low rank, and thus can be well-summarized using a small number of dimensions. The output of PCA is typically represented using a scree plot, which displays the proportion of variance explained (PVE) by each principal component. While the PVE is extensively reported in routine data analyses, to the best of our knowledge the notion of inference on the PVE remains unexplored. In this paper, we consider inference on the PVE. We first introduce a new population quantity for the PVE with respect to an unknown matrix mean. Critically, our interest lies in the PVE of the sample principal components (as opposed to unobserved population principal components); thus, the population PVE that we introduce is defined conditional on the sample singular vectors. We show that it is possible to conduct inference, in the sense of confidence intervals, p-values, and point estimates, on this population quantity. Furthermore, we can conduct valid inference on the PVE of a subset of the principal components, even when the subset is selected using a data-driven approach such as the elbow rule. We demonstrate the proposed approach in simulation and in an application to a gene expression dataset.
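
For concreteness, the sample quantity displayed on a scree plot is easy to compute (sketch below); the paper's contribution is inference on a population analogue of it, conditional on the sample singular vectors, which naive reuse of these point estimates after data-driven selection (e.g., the elbow rule) does not provide:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 20))
X[:, 0] += 3 * rng.normal(size=100)   # inject one dominant direction

# Proportion of variance explained by each sample principal component.
Xc = X - X.mean(axis=0)
sv = np.linalg.svd(Xc, compute_uv=False)
pve = sv**2 / np.sum(sv**2)
print(np.round(pve[:5], 3))  # the first component dominates by construction
```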

The Laplace eigenvalue problem on circular sectors has eigenfunctions with corner singularities, for which standard methods may produce suboptimal approximation results. To address this issue, this paper proposes a novel numerical algorithm that enhances standard isogeometric analysis with a single-patch graded mesh refinement scheme. Numerical tests demonstrate optimal convergence rates for both the eigenvalues and the eigenfunctions. Furthermore, the results show that smooth splines possess a superior approximation constant compared to their $C^0$-continuous counterparts for the lower part of the Laplace spectrum, extending previous findings on the excellent spectral approximation properties of smooth splines from rectangular domains to circular sectors. In addition, graded meshes prove particularly advantageous for the accurate approximation of a limited number of eigenvalues. The algorithm has one drawback: the singularity of the isogeometric parameterization causes some basis functions not to belong to the solution space of the corresponding weak problem, which constitutes a variational crime. The approach nevertheless proves robust. Finally, a hierarchical mesh structure is presented that avoids anisotropic elements, omits redundant degrees of freedom, and keeps the number of basis functions contributing to the variational crime constant, independent of the mesh size. Numerical results validate the effectiveness of hierarchical mesh grading for the simulation of eigenfunctions with and without corner singularities.
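
The essence of graded refinement is to concentrate elements near the singular corner; a minimal one-dimensional sketch (the grading exponent and node formula are a textbook construction, not the paper's specific scheme):

```python
import numpy as np

def graded_mesh(n, beta):
    """Nodes on [0, 1] graded toward the corner at 0: x_i = (i/n)**beta.
    beta = 1 gives a uniform mesh; beta > 1 concentrates elements near
    the singularity (illustrative radial grading only)."""
    return (np.arange(n + 1) / n) ** beta

print(np.round(graded_mesh(8, 1.0), 3))  # uniform
print(np.round(graded_mesh(8, 3.0), 3))  # elements cluster at the corner
```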

Knowledge graphs (KGs) of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge graphs are typically incomplete, it is useful to perform knowledge graph completion or link prediction, i.e., to predict whether a relationship absent from the knowledge graph is likely to be true. This paper serves as a comprehensive survey of embedding models of entities and relationships for knowledge graph completion, summarizing up-to-date experimental results on standard benchmark datasets and pointing out potential future research directions.
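
As a flavor of the model class such surveys cover, the sketch below scores triples with TransE, one of the classic translation-based embedding models (random untrained embeddings here, purely to show the scoring and ranking mechanics):

```python
import numpy as np

rng = np.random.default_rng(3)
dim, n_ent, n_rel = 16, 5, 2

# Randomly initialized entity and relation embeddings (training omitted).
E = rng.normal(size=(n_ent, dim))
R = rng.normal(size=(n_rel, dim))

def transe_score(h, r, t):
    """TransE plausibility: embeddings of true triples should satisfy
    h + r ~ t, so a smaller distance means a more plausible triple."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

# Link prediction: rank all candidate tail entities for the query (h=0, r=1, ?).
ranking = sorted(range(n_ent), key=lambda t: -transe_score(0, 1, t))
print(ranking)
```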
