
Many countries faced challenges in their health workforce supply, such as impending retirement waves, negative population growth, or a suboptimal distribution of resources across medical sectors, even before the pandemic struck. Current quantitative models are often of limited usability, as they either require extensive individual-level data to be properly calibrated or (in the absence of such data) become too simplistic to capture key demographic changes or disruptive epidemiological shocks like the SARS-CoV-2 pandemic. We propose a novel population-dynamical and stock-flow-consistent approach to health workforce supply forecasting that is complex enough to address dynamically changing behaviors while requiring only publicly available time-series data for complete calibration. We demonstrate the usefulness of this model by applying it to 21 European countries to forecast the supply of generalist and specialist physicians until 2040, as well as how Covid-related mortality and increased healthcare utilization might impact this supply. Compared to the staffing levels required to keep physician density constant at 2019 levels, we find that in many countries there is indeed a significant trend toward decreasing densities of generalist physicians alongside increasing densities of specialists. The trends for specialists are exacerbated in many Southern and Eastern European countries by expectations of negative population growth. Compared to the expected demographic changes in the population and the health workforce, we expect a limited impact of Covid on these trends, even under conservative modelling assumptions. It is of the utmost importance to devise tools that allow decision makers to influence the allocation and supply of physicians across fields and sectors to combat these imbalances.
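The stock-flow idea behind such supply forecasts can be illustrated with a minimal cohort sketch: the physician stock is indexed by age, everyone ages by one year per step, new graduates flow in at the bottom, and retirees flow out at the top. All names and numbers below are hypothetical, not the paper's calibrated model.

```python
import numpy as np

def project_supply(stock_by_age, inflow_per_year, retire_age, years):
    """Project physician headcount with a toy stock-flow cohort model.

    stock_by_age: headcount per age index (e.g. ages 25..65).
    inflow_per_year: new graduates entering at the youngest age each year.
    retire_age: index at which physicians leave the stock.
    Returns the total headcount after each projected year.
    """
    stock = np.asarray(stock_by_age, dtype=float).copy()
    totals = []
    for _ in range(years):
        stock[1:] = stock[:-1]        # everyone ages by one year
        stock[0] = inflow_per_year    # inflow: new entrants
        stock[retire_age:] = 0.0      # outflow: retirement
        totals.append(stock.sum())
    return totals
```

With inflow matched to the retiring cohort the total stays constant; cutting inflow produces the declining density the abstract warns about.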

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its attendees come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community an opportunity to further advance the foundations of modeling, and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
March 16, 2023

A distributed system is permissionless when participants can join and leave the network without permission from a central authority. Many modern distributed systems are naturally permissionless, in the sense that a central permissioning authority would defeat their design purpose: this includes blockchains, filesharing protocols, some voting systems, and more. By their permissionless nature, such systems are heterogeneous: participants may only have a partial view of the system, and they may also have different goals and beliefs. Thus, the traditional notion of consensus -- i.e. system-wide agreement -- may not be adequate, and we may need to generalise it. This is a challenge: how should we understand what heterogeneous consensus is; what mathematical framework might this require; and how can we use this to build understanding and mathematical models of robust, effective, and secure permissionless systems in practice? We analyse heterogeneous consensus using semitopology as a framework. This is like topology, but without the restriction that intersections of opens be open. Semitopologies have a rich theory which is related to topology, but with its own distinct character and mathematics. We introduce novel well-behavedness conditions, including an anti-Hausdorff property and a new notion of `topen set', and we show how these structures relate to consensus. We give a restriction of semitopologies to witness semitopologies, which are an algorithmically tractable subclass corresponding to Horn clause theories, having particularly good mathematical properties. We introduce and study several other basic notions that are specific and novel to semitopologies, and study how known quantities in topology, such as dense subsets and closures, display interesting and useful new behaviour in this new semitopological context.
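The defining relaxation (unions of opens are open, but intersections need not be) can be checked mechanically on a finite carrier set. The sketch below is an illustrative toy, not part of the paper's formal development; on a finite set, closure under pairwise unions implies closure under arbitrary unions.

```python
from itertools import combinations

def is_semitopology(points, opens):
    """Check the semitopology axioms on a finite carrier set: the empty
    set and the whole set are open, and unions of opens are open.
    Unlike a topology, intersections of opens need not be open."""
    opens = {frozenset(o) for o in opens}
    if frozenset() not in opens or frozenset(points) not in opens:
        return False
    # On a finite set it suffices to check pairwise unions.
    return all(a | b in opens for a, b in combinations(opens, 2))
```

For example, `{∅, {1,2}, {2,3}, {1,2,3}}` is a semitopology on `{1,2,3}` but not a topology, since the intersection `{1,2} ∩ {2,3} = {2}` is not open.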

With the increased interest in immersive experiences, point clouds have emerged and been widely adopted as the first choice for representing 3D media. Besides the several distortions that can affect 3D content anywhere from acquisition to rendering, efficient transmission of such volumetric content over traditional communication systems often comes at the expense of the delivered perceptual quality. To estimate the magnitude of such degradation, employing quality metrics has become an inevitable solution. In this work, we propose a novel deep-learning-based no-reference quality metric that operates directly on the whole point cloud without requiring extensive pre-processing, enabling real-time evaluation at both the transmission and rendering levels. To do so, we use a novel model design consisting primarily of cross- and self-attention layers, in order to learn the best set of local semantic affinities while keeping the best combination of geometry and color information at multiple levels, from basic feature extraction to deep representation modeling.

This paper proposes a new approach to estimating the distribution of a response variable conditioned on observing some factors. The proposed approach possesses desirable properties of flexibility, interpretability, tractability and extendability. The conditional quantile function is modeled by a mixture (weighted sum) of basis quantile functions, with the weights depending on factors. The calibration problem is formulated as a convex optimization problem. It can be viewed as conducting quantile regressions for all confidence levels simultaneously while avoiding quantile crossing by definition. The calibration problem is equivalent to minimizing the continuous ranked probability score (CRPS). Based on the canonical polyadic (CP) decomposition of tensors, we propose a dimensionality reduction method that reduces the rank of the parameter tensor, together with an alternating algorithm for estimation. Additionally, based on the Risk Quadrangle framework, we generalize the approach to conditional distributions defined by Conditional Value-at-Risk (CVaR), expectile and other functions of uncertainty measures. Although this paper focuses on using splines as the weight functions, the approach can be extended to neural networks. Numerical experiments demonstrate the effectiveness of our approach.
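The crossing-free property follows from the construction: each basis quantile function is nondecreasing in the confidence level, so any nonnegatively weighted sum is too. A toy sketch with two hypothetical basis functions (not the paper's spline bases):

```python
import math

# Hypothetical basis quantile functions, each nondecreasing in alpha.
def q_uniform(a):
    """Quantile function of U(0, 1)."""
    return a

def q_logistic(a):
    """Quantile function of the standard logistic distribution."""
    return math.log(a / (1 - a))

def mixture_quantile(alpha, weights, basis=(q_uniform, q_logistic)):
    """Conditional quantile as a weighted sum of basis quantile functions.
    With nonnegative weights the sum is nondecreasing in alpha, so the
    quantile curves cannot cross by construction."""
    return sum(w * q(alpha) for w, q in zip(weights, basis))
```

In the full method the weights would be functions of the observed factors, fitted by the convex CRPS-minimization problem the abstract describes.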

Causal inference in spatial settings is met with unique challenges and opportunities. On one hand, a unit's outcome can be affected by the exposure at many locations, leading to interference. On the other hand, unmeasured spatial variables can confound the effect of interest. Our work has two overarching goals. First, using causal diagrams, we illustrate that spatial confounding and interference can manifest as each other, meaning that investigating the presence of one can lead to wrongful conclusions in the presence of the other, and that statistical dependencies in the exposure variable can render standard analyses invalid. This can have crucial implications for analyzing data with spatial or other dependencies, and for understanding the effect of interventions on dependent units. Second, we propose a parametric approach to mitigate bias from local and neighborhood unmeasured spatial confounding and account for interference simultaneously. This approach is based on simultaneous modeling of the exposure and the outcome while accounting for the presence of spatially-structured unmeasured predictors of both variables. We illustrate our approach with a simulation study and with an analysis of the local and interference effects of sulfur dioxide emissions from power plants on cardiovascular mortality.

Groundwater flow modeling is commonly used to calculate groundwater heads, estimate groundwater flow paths and travel times, and provide insights into solute transport processes within an aquifer. However, the values of input parameters that drive groundwater flow models are often highly uncertain due to subsurface heterogeneity and geologic complexity, in combination with a lack of measurements or unreliable measurements. This uncertainty affects the accuracy and reliability of model outputs. Therefore, parameter uncertainty must be quantified before adopting the model as an engineering tool. In this study, we model the uncertain parameters as random variables and use a Bayesian inversion approach to obtain a posterior, data-informed, probability density function (pdf) for them: in particular, the likelihood function we consider takes into account both well measurements and our prior knowledge about the extent of the springs in the domain under study. To keep the modeling and computational complexities under control, we assume Gaussianity of the posterior pdf of the parameters. To corroborate this assumption, we run an identifiability analysis of the model: we apply the inversion procedure to several sets of synthetic data polluted by increasing levels of noise, and we determine at which levels of noise we can effectively recover the "true value" of the parameters. We then move to real well data (coming from the Ticino River basin, in northern Italy, and spanning a month in summer 2014), and use the posterior pdf of the parameters as a starting point to perform an Uncertainty Quantification analysis on groundwater travel-time distributions.
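The identifiability analysis pattern (invert synthetic data at increasing noise levels, check whether the "true value" is recovered) can be sketched with a deliberately trivial forward model. This toy linear model and all its numbers are hypothetical stand-ins for the paper's groundwater flow solver.

```python
import random

def recover_parameter(true_k, noise_sd, n=200, seed=0):
    """Identifiability check in miniature: generate synthetic 'well
    measurements' y = k * x corrupted by Gaussian noise of a given
    standard deviation, then recover k by least squares.
    (Toy linear forward model, not the groundwater PDE.)"""
    rng = random.Random(seed)
    xs = [i / n for i in range(1, n + 1)]
    ys = [true_k * x + rng.gauss(0, noise_sd) for x in xs]
    # Closed-form least-squares estimate for a one-parameter linear model.
    k_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return k_hat
```

Running this across a grid of `noise_sd` values shows at which noise level the recovered estimate drifts away from `true_k`, which is the criterion the abstract describes.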

Accurate forecasting of the U.K. gross value added (GVA) is fundamental for measuring the growth of the U.K. economy. A common nonstationarity in GVA data, such as the ABML series, is its increase in variance over time due to inflation. Transformed or inflation-adjusted series can still be challenging for classical stationarity-assuming forecasters. We adopt a different approach that works directly with the GVA series by advancing recent forecasting methods for locally stationary time series. Our approach results in more accurate and reliable forecasts, and continues to work well even when the ABML series becomes highly variable during the COVID pandemic.

Forecasting the water level of the Han river is important for controlling traffic and avoiding natural disasters. There are many variables related to the Han river, and they are intricately connected. In this work, we propose a novel transformer that exploits the causal relationships among the variables, based on prior knowledge, and forecasts the water level at the Jamsu bridge on the Han river. Our proposed model considers both spatial and temporal causation by formalizing the causal structure as a multilayer network and using masking methods. Due to this approach, we obtain interpretability that is consistent with prior knowledge. In a real-data analysis, we use the Han river dataset from 2016 to 2021 and compare the proposed model with deep learning models.
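The masking idea can be sketched as follows: a prior-knowledge causal graph is turned into an additive attention mask, so softmax attention can only flow along assumed causal edges. This is an illustrative reading of the abstract, not the paper's exact architecture.

```python
import math

def causal_attention_mask(adj):
    """Build an additive attention mask from a prior-knowledge causal
    graph. adj[i][j] is truthy when variable j is a known cause of
    variable i. The mask is 0 along causal edges (and on the diagonal)
    and -inf elsewhere, so masked entries vanish after the softmax."""
    n = len(adj)
    return [[0.0 if (i == j or adj[i][j]) else -math.inf
             for j in range(n)] for i in range(n)]
```

Adding this mask to the attention scores before the softmax zeroes out every attention weight that would contradict the assumed causal structure, which is what makes the learned attention interpretable against prior knowledge.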

Time series forecasting is widely used in business intelligence, e.g., forecasting stock market prices and sales, and aiding the analysis of data trends. Most time series of interest are macroscopic time series that are aggregated from microscopic data. However, instead of directly modeling the macroscopic time series, little prior work has studied forecasting macroscopic time series by leveraging data at the microscopic level. In this paper, we assume that the microscopic time series follow some unknown mixture of probability distributions. We theoretically show that as we identify the ground-truth latent mixture components, the estimation of the time series from each component improves because of lower variance, thus benefitting the estimation of the macroscopic time series as well. Inspired by the power of Seq2seq and its variants for modeling time series data, we propose Mixture of Seq2seq (MixSeq), an end-to-end mixture model to cluster microscopic time series, where all the components come from a family of Seq2seq models parameterized by different parameters. Extensive experiments on both synthetic and real-world data show the superiority of our approach.
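The variance-reduction claim can be illustrated with a toy example: microscopic series pooled across two latent components have higher variance than either component alone, so estimating per component (as a clustering model like MixSeq enables) is easier. The numbers below are made up for illustration.

```python
import statistics

def variance_reduction(series_a, series_b):
    """Toy illustration of the mixture argument: the pooled sample
    drawn from two latent components has higher variance than either
    component on its own, so identifying the components first yields
    lower-variance per-component estimates."""
    pooled = series_a + series_b
    return (statistics.pvariance(pooled),
            statistics.pvariance(series_a),
            statistics.pvariance(series_b))
```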

The accurate and interpretable prediction of future events in time-series data often requires capturing the representative patterns (referred to as states) underpinning the observed data. To this end, most existing studies focus on the representation and recognition of states, but ignore the changing transitional relations among them. In this paper, we present the evolutionary state graph, a dynamic graph structure designed to systematically represent the evolving relations (edges) among states (nodes) over time. We analyze the dynamic graphs constructed from time-series data and show that changes in the graph structures (e.g., edges connecting certain state nodes) can inform the occurrences of events (i.e., time-series fluctuations). Inspired by this, we propose a novel graph neural network model, the Evolutionary State Graph Network (EvoNet), to encode the evolutionary state graph for accurate and interpretable time-series event prediction. Specifically, EvoNet models both node-level (state-to-state) and graph-level (segment-to-segment) propagation, and captures node-graph (state-to-segment) interactions over time. Experimental results on five real-world datasets show that our approach not only achieves clear improvements over 11 baselines, but also provides more insight toward explaining the results of event predictions.
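One snapshot of such a state graph can be built very simply: discretize a segment into a small set of states (the nodes) and count transitions between consecutive states (the weighted edges). This is a minimal sketch under assumed equal-width binning, not EvoNet's actual state-recognition step.

```python
from collections import Counter

def state_transition_graph(series, n_states=3, lo=0.0, hi=1.0):
    """Build one snapshot of a state-transition graph: discretize a
    univariate segment into n_states equal-width bins over [lo, hi]
    (the state nodes) and count transitions between consecutive
    observations (the weighted edges)."""
    width = (hi - lo) / n_states
    states = [min(int((x - lo) / width), n_states - 1) for x in series]
    return Counter(zip(states, states[1:]))
```

Tracking how these edge counts change from segment to segment is the kind of evolving-graph signal the abstract describes feeding into a graph neural network.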

Modeling multivariate time series has long been a subject that has attracted researchers from a diverse range of fields including economics, finance, and traffic. A basic assumption behind multivariate time series forecasting is that its variables depend on one another but, upon looking closely, it is fair to say that existing methods fail to fully exploit latent spatial dependencies between pairs of variables. In recent years, meanwhile, graph neural networks (GNNs) have shown high capability in handling relational dependencies. GNNs require well-defined graph structures for information propagation, which means they cannot be applied directly to multivariate time series where the dependencies are not known in advance. In this paper, we propose a general graph neural network framework designed specifically for multivariate time series data. Our approach automatically extracts the uni-directed relations among variables through a graph learning module, into which external knowledge like variable attributes can be easily integrated. A novel mix-hop propagation layer and a dilated inception layer are further proposed to capture the spatial and temporal dependencies within the time series. The graph learning, graph convolution, and temporal convolution modules are jointly learned in an end-to-end framework. Experimental results show that our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets and achieves on-par performance with other approaches on the two traffic datasets that provide extra structural information.
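One way to realize the uni-directed property the abstract mentions is to build the adjacency from a skew-symmetric core of two node-embedding matrices: if the learned edge (i, j) is active, the reverse edge (j, i) is suppressed by construction. The sketch below, with made-up embedding matrices, illustrates this mechanism rather than reproducing the paper's full module.

```python
import numpy as np

def learn_unidirected_adjacency(e1, e2, alpha=3.0):
    """Sketch of a uni-directed graph-learning step: from two node
    embedding matrices, form an adjacency whose skew-symmetric core
    guarantees that whenever edge (i, j) survives the ReLU, the
    reverse edge (j, i) is zeroed out."""
    core = np.tanh(alpha * (e1 @ e2.T - e2 @ e1.T))  # skew-symmetric
    return np.maximum(core, 0.0)  # ReLU keeps at most one direction per pair
```

In a full model the embedding matrices would be trainable parameters, learned end-to-end alongside the graph and temporal convolution modules.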
