
Structured epidemic models can be formulated as first-order hyperbolic PDEs, where the "spatial" variables represent individual traits, called structures. For models with two structures, we propose a numerical technique to approximate $R_{0}$, which measures the transmissibility of an infectious disease and, rigorously, is defined as the dominant eigenvalue of a next-generation operator. Via bivariate collocation and cubature on tensor grids, the latter is approximated with a finite-dimensional matrix, so that its dominant eigenvalue can easily be computed with standard techniques. We use test examples to investigate experimentally the behavior of the approximation: the convergence order appears to be infinite when the corresponding eigenfunction is smooth, and finite for less regular eigenfunctions. To demonstrate the effectiveness of the technique for more realistic applications, we present a new epidemic model structured by demographic age and immunity, and study the approximation of $R_{0}$ in some particular cases of interest.
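To make the operator-to-matrix step concrete, here is a minimal sketch (assuming a hypothetical smooth kernel, not the paper's model): discretize a next-generation integral operator on a Gauss-Legendre tensor grid via collocation and cubature, then take the dominant eigenvalue of the resulting matrix as the approximation of $R_0$.

```python
import numpy as np

# Gauss-Legendre nodes/weights on [0, 1] for each structuring variable
def gl(n, a=0.0, b=1.0):
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w

def k(a, i, a2, i2):
    # hypothetical smooth next-generation kernel, NOT the paper's model
    return np.exp(-(a - a2) ** 2) * (1.0 + 0.5 * i * i2)

n = 20
a_nodes, a_w = gl(n)   # e.g., demographic age
i_nodes, i_w = gl(n)   # e.g., immunity level

# tensor grid: flatten (a, i) pairs; cubature weights are products
A, I = np.meshgrid(a_nodes, i_nodes, indexing="ij")
W = np.outer(a_w, i_w).ravel()
pts = np.column_stack([A.ravel(), I.ravel()])

# collocation matrix: K[p, q] = k(x_p, x_q) * w_q approximates the operator
K = k(pts[:, 0][:, None], pts[:, 1][:, None],
      pts[:, 0][None, :], pts[:, 1][None, :]) * W[None, :]

R0 = np.max(np.abs(np.linalg.eigvals(K)))  # dominant eigenvalue approximates R0
print(R0)
```

For a smooth kernel like this one, refining the grid should exhibit the spectral (infinite-order) convergence observed in the paper's experiments.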

Related Content

This paper deals with the large-scale behaviour of dynamical optimal transport on $\mathbb{Z}^d$-periodic graphs with general lower semicontinuous and convex energy densities. Our main contribution is a homogenisation result that describes the effective behaviour of the discrete problems in terms of a continuous optimal transport problem. The effective energy density can be explicitly expressed in terms of a cell formula, which is a finite-dimensional convex programming problem that depends non-trivially on the local geometry of the discrete graph and the discrete energy density. Our homogenisation result is derived from a $\Gamma$-convergence result for action functionals on curves of measures, which we prove under very mild growth conditions on the energy density. We investigate the cell formula in several cases of interest, including finite-volume discretisations of the Wasserstein distance, where non-trivial limiting behaviour occurs.
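As a toy illustration of how the cell formula depends on the local geometry of the graph, consider a quadratic energy density on a $\mathbb{Z}^2$-periodic graph with horizontal, vertical, and diagonal edge orbits: the cell problem is then a small equality-constrained quadratic program, solvable from its KKT system. The edge set and weights below are hypothetical, chosen only to show that adding diagonal edges changes the effective density.

```python
import numpy as np

# Toy cell problem: one node per cell, three edge orbits (directions below),
# edge energy F_e(f) = c_e * f^2. Hypothetical weights, not from the paper.
dirs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # edge directions
c = np.array([1.0, 1.0, 2.0])                          # convex edge costs

def f_hom(xi):
    """Effective density: min sum_e c_e f_e^2  s.t.  sum_e f_e dir_e = xi,
    solved via the KKT system of this equality-constrained QP."""
    m = len(c)
    A = dirs.T                                  # 2 x m constraint matrix
    kkt = np.block([[2 * np.diag(c), A.T],
                    [A, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(m), xi])
    f = np.linalg.solve(kkt, rhs)[:m]
    return float(c @ f ** 2)

print(f_hom(np.array([1.0, 0.0])))  # 0.75: diagonal orbit offloads flux
print(f_hom(np.array([1.0, 1.0])))  # 1.00: cheaper than twice the above
```

Even in this tiny instance the effective density is anisotropic in a way that reflects the edge geometry, which is the phenomenon the cell formula captures.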

We consider the problem of testing for long-range dependence in time-varying coefficient regression models. The covariates and errors are assumed to be locally stationary, which allows complex temporal dynamics and heteroscedasticity. We develop KPSS-, R/S-, V/S-, and K/S-type statistics based on the nonparametric residuals, and propose bootstrap approaches equipped with a difference-based long-run covariance matrix estimator for practical implementation. Under the null hypothesis, as well as under local and fixed alternatives, we derive the limiting distributions of the test statistics, establish the uniform consistency of the difference-based long-run covariance estimator, and justify the bootstrap algorithms theoretically. In particular, the exact local asymptotic power of our testing procedure is of order $O( \log^{-1} n)$, the same as that of the classical KPSS test for long memory in strictly stationary series without covariates. We demonstrate the effectiveness of our tests through extensive simulation studies. The proposed tests are applied to a COVID-19 dataset, yielding evidence of long-range dependence in the cumulative confirmed-case series of several countries, and to a Hong Kong circulatory and respiratory dataset, identifying a new type of 'spurious long memory'.
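A minimal sketch of the kind of residual-based KPSS-type statistic described above, under simplifying assumptions (a local-constant coefficient estimate and a standard Bartlett long-run variance estimator in place of the paper's difference-based construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
t = np.arange(1, n + 1) / n
x = rng.normal(size=n)
beta = 1.0 + np.sin(2 * np.pi * t)         # time-varying coefficient
y = beta * x + rng.normal(size=n)          # short-memory errors (the null)

# kernel-weighted local estimate of beta(t_i), then nonparametric residuals
h = 0.1
res = np.empty(n)
for i in range(n):
    w = np.exp(-0.5 * ((t - t[i]) / h) ** 2)
    bhat = np.sum(w * x * y) / np.sum(w * x * x)
    res[i] = y[i] - bhat * x[i]

# KPSS-type statistic: n^{-2} * sum of squared partial sums / long-run var
e = res - res.mean()
S = np.cumsum(e)
q = int(4 * (n / 100) ** 0.25)             # Bartlett bandwidth rule of thumb
lrv = e @ e / n + 2 * sum((1 - k / (q + 1)) * (e[k:] @ e[:-k]) / n
                          for k in range(1, q + 1))
kpss = (S @ S) / (n ** 2 * lrv)
print(kpss)   # large values indicate long-range dependence in the errors
```

In the paper's setting the critical values come from the derived limiting distributions or the bootstrap rather than from the classical KPSS table.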

The integration and transfer of information from multiple sources to multiple targets is a core motive of neural systems. The emerging field of partial information decomposition (PID) provides a novel information-theoretic lens into these mechanisms by identifying synergistic, redundant, and unique contributions to the mutual information between one and several variables. While many works have studied aspects of PID for Gaussian and discrete distributions, the case of general continuous distributions is still uncharted territory. In this work we present a method for estimating the unique information in continuous distributions, for the case of one versus two variables. Our method solves the associated optimization problem over the space of distributions with fixed bivariate marginals by combining copula decompositions and techniques developed to optimize variational autoencoders. We obtain excellent agreement with known analytic results for Gaussians, and illustrate the power of our new approach in several brain-inspired neural models. Our method is capable of recovering the effective connectivity of a chaotic network of rate neurons, and uncovers a complex trade-off between redundancy, synergy and unique information in recurrent networks trained to solve a generalized XOR task.
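For orientation on the Gaussian benchmark mentioned above, here is a minimal sketch of the closed-form minimal-mutual-information (MMI) PID, to which many PID measures reduce for Gaussian sources (Barrett, 2015); this is a baseline for comparison, not the paper's copula/variational estimator, and the covariance values are hypothetical.

```python
import numpy as np

def mi_gauss(cov, ix, iy):
    """Mutual information I(X;Y) in nats for jointly Gaussian blocks of cov."""
    sx = cov[np.ix_(ix, ix)]
    sy = cov[np.ix_(iy, iy)]
    sxy = cov[np.ix_(ix + iy, ix + iy)]
    return 0.5 * np.log(np.linalg.det(sx) * np.linalg.det(sy)
                        / np.linalg.det(sxy))

# joint covariance of (target T, sources S1, S2); hypothetical numbers
cov = np.array([[1.0, 0.6, 0.5],
                [0.6, 1.0, 0.3],
                [0.5, 0.3, 1.0]])

i1 = mi_gauss(cov, [0], [1])        # I(T; S1)
i2 = mi_gauss(cov, [0], [2])        # I(T; S2)
i12 = mi_gauss(cov, [0], [1, 2])    # I(T; S1, S2)

red = min(i1, i2)                   # MMI redundancy
uniq1, uniq2 = i1 - red, i2 - red   # unique informations
syn = i12 - i1 - i2 + red           # synergy closes the PID ledger
print(red, uniq1, uniq2, syn)
```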

Bayesian models based on the Dirichlet process and other stick-breaking priors have been proposed as core ingredients for clustering, topic modeling, and other unsupervised learning tasks. Prior specification is, however, relatively difficult for such models, given that their flexibility implies that the consequences of prior choices are often relatively opaque. Moreover, these choices can have a substantial effect on posterior inferences. Thus, considerations of robustness need to go hand in hand with nonparametric modeling. In the current paper, we tackle this challenge by exploiting the fact that variational Bayesian methods, in addition to having computational advantages in fitting complex nonparametric models, also yield sensitivities with respect to parametric and nonparametric aspects of Bayesian models. In particular, we demonstrate how to assess the sensitivity of conclusions to the choice of concentration parameter and stick-breaking distribution for inferences under Dirichlet process mixtures and related mixture models. We provide both theoretical and empirical support for our variational approach to Bayesian sensitivity analysis.
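As a toy illustration of sensitivity to the concentration parameter (a prior-side analogue, not the paper's variational posterior sensitivities): under a Dirichlet process with concentration $\alpha$, the prior expected number of clusters among $n$ observations is $\sum_{i=0}^{n-1} \alpha/(\alpha+i)$, and its derivative in $\alpha$ can be checked against finite differences.

```python
import numpy as np

def expected_clusters(alpha, n):
    """Prior E[#clusters] for n draws from DP(alpha), via the CRP recursion."""
    i = np.arange(n)
    return np.sum(alpha / (alpha + i))

def sensitivity(alpha, n):
    """Closed-form d E[#clusters] / d alpha."""
    i = np.arange(n)
    return np.sum(i / (alpha + i) ** 2)

alpha, n, eps = 1.0, 200, 1e-5
fd = (expected_clusters(alpha + eps, n)
      - expected_clusters(alpha - eps, n)) / (2 * eps)
print(sensitivity(alpha, n), fd)   # analytic and numeric derivatives agree
```

The paper's contribution is to extract analogous derivatives for posterior conclusions from the variational objective itself, where no such closed form is available.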

The Glivenko-Cantelli theorem states that the empirical distribution function converges uniformly almost surely to the theoretical distribution for a random variable $X \in \mathbb{R}$. This is an important result because it establishes that sampling does capture the dispersion that the distribution function $F$ imposes. In essence, sampling permits one to learn and infer the behavior of $F$ by looking only at observations from $X$. The probabilities inferred from samples $\mathbf{X}$ become more precise as the sample size increases and more data become available; therefore, it is valid to study distributions via samples. The proof presented here is constructive, meaning that the result is derived directly from the fact that the empirical distribution function converges pointwise almost surely to the theoretical distribution. The work includes a proof of this preliminary statement and attempts to motivate the intuition one gains from sampling techniques when studying the regions in which a model concentrates probability. The sets where dispersion is described with precision by the empirical distribution function will eventually cover the entire sample space.
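A small simulation makes the statement tangible: the sup-distance between the empirical and true distribution functions, computable exactly at the jump points of the empirical CDF, shrinks as the sample grows. The standard normal is an arbitrary choice here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
for n in (10**2, 10**3, 10**4, 10**5):
    x = np.sort(rng.normal(size=n))
    F = norm.cdf(x)                    # true CDF at the order statistics
    i = np.arange(1, n + 1)
    # sup |F_n - F| is attained at the jumps of the empirical CDF
    dn = max(np.max(i / n - F), np.max(F - (i - 1) / n))
    print(n, dn)                       # decreases toward 0, roughly n^{-1/2}
```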

We formulate a free probabilistic analog of the Wasserstein manifold on $\mathbb{R}^d$ (the formal Riemannian manifold of smooth probability densities on $\mathbb{R}^d$), and we use it to study smooth non-commutative transport of measure. The points of the free Wasserstein manifold $\mathscr{W}(\mathbb{R}^{*d})$ are smooth tracial non-commutative functions $V$ with quadratic growth at $\infty$, which correspond to minus the log-density in the classical setting. The space of smooth tracial non-commutative functions used here is a new one whose definition and basic properties we develop in the paper; they are scalar-valued functions of self-adjoint $d$-tuples from arbitrary tracial von Neumann algebras that can be approximated by trace polynomials. The space of non-commutative diffeomorphisms $\mathscr{D}(\mathbb{R}^{*d})$ acts on $\mathscr{W}(\mathbb{R}^{*d})$ by transport, and the basic relationship between tangent vectors for $\mathscr{D}(\mathbb{R}^{*d})$ and tangent vectors for $\mathscr{W}(\mathbb{R}^{*d})$ is described using the Laplacian $L_V$ associated to $V$ and its pseudo-inverse $\Psi_V$ (when defined). Following similar arguments to arXiv:1204.2182, arXiv:1701.00132, and arXiv:1906.10051 in the new setting, we give a rigorous proof for the existence of smooth transport along any path $t \mapsto V_t$ when $V$ is sufficiently close to $(1/2) \sum_j \operatorname{tr}(x_j^2)$, as well as smooth triangular transport.
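For orientation, the classical counterparts of the objects above are standard (this is background, not a definition from the paper): a potential $V$ corresponds to the density $e^{-V}$, and $L_V$ to the associated Langevin generator, which is symmetric with respect to $\mu_V$:

```latex
% Classical analogues of V and L_V on R^d (standard background):
\mu_V(dx) \propto e^{-V(x)}\,dx, \qquad
L_V f = \Delta f - \nabla V \cdot \nabla f, \qquad
\int (L_V f)\, g \, d\mu_V = -\int \nabla f \cdot \nabla g \, d\mu_V .
```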

Discrete gene regulatory networks (GRNs) play a vital role in the study of robustness and modularity. A common method of evaluating the robustness of GRNs is to measure their ability to regulate a set of perturbed gene activation patterns back to their unperturbed forms. Usually, perturbations are obtained by collecting random samples produced by a predefined distribution of gene activation patterns. This sampling method introduces stochasticity, in turn inducing dynamicity. This dynamicity is imposed on top of an already complex fitness landscape. So where sampling is used, it is important to understand which effects arise from the structure of the fitness landscape, and which arise from the dynamicity imposed on it. Stochasticity of the fitness function also causes difficulties in reproducibility and in post-experimental analyses. We develop a deterministic distributional fitness evaluation by considering the complete distribution of gene activity patterns, so as to avoid stochasticity in fitness assessment. This fitness evaluation facilitates repeatability. Its determinism permits us to ascertain theoretical bounds on the fitness, and thus to identify whether the algorithm has reached a global optimum. It enables us to differentiate the effects of the problem domain from those of the noisy fitness evaluation, and thus to resolve two remaining anomalies in the behaviour of the problem domain of~\citet{espinosa2010specialization}. We also reveal some properties of solution GRNs that lead them to be robust and modular, leading to a deeper understanding of the nature of the problem domain. We conclude by discussing potential directions toward simulating and understanding the emergence of modularity in larger, more complex domains, which is key both to generating more useful modular solutions, and to understanding the ubiquity of modularity in biological systems.
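A minimal sketch of the deterministic distributional evaluation on a toy Boolean GRN (hypothetical weights, not the model of \citet{espinosa2010specialization}): enumerate the complete distribution of perturbed activation patterns and weight each pattern by its exact probability, so the fitness is deterministic and its theoretical maximum of 1 is known.

```python
import numpy as np
from itertools import product

# Toy Boolean GRN: state s in {0,1}^n, update s' = [W s >= theta]
W = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
theta = np.array([1, 1, 1])
target = np.array([1, 1, 1])      # a fixed point of the update rule
n, p = 3, 0.1                     # p = per-gene perturbation probability

def settle(s, steps=20):
    for _ in range(steps):
        s = (W @ s >= theta).astype(int)
    return s

# Deterministic distributional fitness: instead of sampling perturbations,
# enumerate every gene activity pattern and weight it by its exact
# perturbation probability -- no stochasticity, exact repeatability, and
# a known upper bound (1.0) for recognizing a global optimum.
fit = 0.0
for bits in product([0, 1], repeat=n):
    s = np.array(bits)
    h = int(np.sum(s != target))              # Hamming distance from target
    weight = p ** h * (1 - p) ** (n - h)      # exact pattern probability
    fit += weight * np.array_equal(settle(s), target)
print(fit)   # 0.999 here: only the all-zero pattern fails to recover
```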

This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators so that the error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the power of estimating the causal quantities between the classical and the proposed estimators. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural-network-based models, under simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approach to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.
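To illustrate why the naive estimator fails and how a deconfounded one recovers the effect, here is a toy simulation using inverse-propensity weighting as a stand-in (one standard correction, not necessarily the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
u = rng.normal(size=n)                       # confounder: creditworthiness
pi = 1 / (1 + np.exp(-2 * u))                # lender favors strong borrowers
d = rng.binomial(1, pi)                      # credit decision
y = 1.0 * d + 3.0 * u + rng.normal(size=n)   # repayment; true effect = 1.0

naive = y[d == 1].mean() - y[d == 0].mean()  # confounded difference in means

# inverse-propensity weighting with the (here, known) propensity score;
# in practice pi would itself be estimated from the covariates
ipw = np.mean(d * y / pi) - np.mean((1 - d) * y / (1 - pi))
print(naive, ipw)   # naive is badly biased upward; IPW recovers ~1.0
```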

Long Short-Term Memory (LSTM) captures long-term dependencies through a cell state maintained by the input and forget gate structures, each of which models its output as a value in [0,1] through a sigmoid function. However, because the sigmoid produces a single, smoothly varying value, sigmoid gates are not flexible enough to represent multi-modality or skewness. Moreover, previous models do not capture the correlation between the gates, which offers a natural way to encode an inductive bias about the relationship between previous and current inputs. This paper proposes a new gate structure based on the bivariate Beta distribution. The proposed gate structure enables probabilistic modeling of the gates within the LSTM cell, so that modelers can customize the cell-state flow with priors and distributions. Moreover, we show theoretically that the proposed gate admits a higher upper bound on the gradient than the sigmoid function, and we empirically observe that the bivariate Beta gate structure provides larger gradient values in training. We demonstrate the effectiveness of the bivariate Beta gate structure on sentence classification, image classification, polyphonic music modeling, and image caption generation.
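A minimal sketch of one standard way to sample a correlated pair of gates in $(0,1)$, via an Olkin-Liu-type bivariate Beta construction with a shared Gamma component (an illustration of the idea, not the paper's parameterization):

```python
import numpy as np

rng = np.random.default_rng(0)

def bivariate_beta_gates(a1, a2, a3, size):
    # shared Gamma component g3 couples the two gates
    g1 = rng.gamma(a1, size=size)
    g2 = rng.gamma(a2, size=size)
    g3 = rng.gamma(a3, size=size)
    # each marginal is Beta(a_k, a3); both gates are in (0, 1)
    return g1 / (g1 + g3), g2 / (g2 + g3)

i_gate, f_gate = bivariate_beta_gates(2.0, 2.0, 1.0, 10_000)
print(np.corrcoef(i_gate, f_gate)[0, 1])   # positive gate correlation
```

In a trainable cell the three concentration parameters would be produced by the recurrent layers and the sampling made reparameterizable, so that gradients flow through the gate samples.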

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
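The smoothing idea behind DRS can be sketched in a few lines: Gaussian smoothing $f_\gamma(x) = \mathbb{E}[f(x+\gamma Z)]$ turns a non-smooth convex objective into a smooth one, and a one-point difference gives an unbiased estimate of its gradient. This is a single-machine toy of the smoothing step only, not the decentralized algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
f = lambda x: np.abs(x).sum()      # non-smooth convex toy objective, min = 0

x, gamma = np.full(d, 1.0), 0.1
for it in range(20_000):
    z = rng.normal(size=d)
    # unbiased estimator of grad f_gamma(x): E[z f(x)] = 0, so subtracting
    # f(x) changes nothing in expectation but reduces variance
    g = (f(x + gamma * z) - f(x)) / gamma * z
    x -= 0.1 / np.sqrt(it + 1) * g # decaying step size
print(f(x))   # far below f at the start; the exact minimum is 0
```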
