
We develop a modelling framework for detecting shape shifts in functional profiles that combines the notion of the Fr\'echet mean with the concept of deformation models. The generalized mean sense offered by the Fr\'echet mean is employed to capture the typical pattern of the profiles under study, while the concept of deformation models, and in particular the shape invariant model, allows for interpretable parameterizations of a profile's deviations from the typical shape. EWMA-type control charts compatible with the functional nature of the data and the employed deformation model are built, exploiting certain shape characteristics of the profiles with respect to the generalized mean and allowing for the identification of potential shifts in the shape and/or the deformation process. Potential shifts in the shape deformation process are further distinguished into significant shifts with respect to the amplitude and/or the phase of the profile under study. The proposed modelling and shift-detection framework is applied to a real-world case study, in which daily concentration profiles of air pollutants from an area in the city of Athens are modelled, and profiles indicating hazardous concentration levels are successfully identified in most cases.
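The paper's charts are functional EWMA-type charts; as a minimal point of reference, the classical scalar EWMA statistic and its time-varying control limits can be sketched as follows (all parameter values here are illustrative, not taken from the paper):

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0, mu0=0.0, sigma0=1.0):
    """Classical scalar EWMA chart: returns the EWMA statistic, the
    time-varying control limits, and a boolean alarm indicator."""
    z = np.empty(len(x))
    prev = mu0
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev   # EWMA recursion
        z[t] = prev
    t = np.arange(1, len(x) + 1)
    half_width = L * sigma0 * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    ucl, lcl = mu0 + half_width, mu0 - half_width
    return z, lcl, ucl, (z > ucl) | (z < lcl)

# Synthetic stream with a mean shift at t = 50.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(1.5, 1, 30)])
z, lcl, ucl, alarm = ewma_chart(x)
print("first alarm at t =", np.argmax(alarm))
```

The functional charts in the paper replace the scalar observation by shape characteristics of each profile relative to the Fr\'echet mean, but the smoothing-and-limits logic is the same.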

Related content

Simulation-based inference (SBI) provides a powerful framework for inferring posterior distributions of stochastic simulators in a wide range of domains. In many settings, however, the posterior distribution is not the end goal itself -- rather, the derived parameter values and their uncertainties are used as a basis for deciding what actions to take. Unfortunately, because posterior distributions provided by SBI are (potentially crude) approximations of the true posterior, the resulting decisions can be suboptimal. Here, we address the question of how to perform Bayesian decision making on stochastic simulators, and how one can circumvent the need to compute an explicit approximation to the posterior. Our method trains a neural network on simulated data to predict the expected cost given any data and action; it can thus be used directly to infer the action with the lowest cost. We apply our method to several benchmark problems and demonstrate that it induces costs similar to those of the true posterior distribution. We then apply the method to infer optimal actions in a real-world simulator from the medical neurosciences, the Bayesian Virtual Epileptic Patient, and demonstrate that it allows us to infer actions associated with low cost after few simulations.
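The core idea -- regress the cost directly on (data, action) pairs from the simulator and pick the action minimising the prediction -- can be sketched on a toy Gaussian simulator. Least-squares on polynomial features stands in for the paper's neural network; the simulator and quadratic cost below are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy simulator: theta ~ N(0,1), observation x = theta + N(0,1).
def simulate(n):
    theta = rng.normal(0, 1, n)
    x = theta + rng.normal(0, 1, n)
    return theta, x

def cost(theta, a):
    return (a - theta) ** 2            # quadratic action cost

# Training tuples (x, a, cost) with randomly proposed candidate actions.
theta, x = simulate(20000)
a = rng.uniform(-3, 3, len(x))
c = cost(theta, a)

# Stand-in for the neural network: least squares on polynomial features,
# which can represent E[cost | x, a] exactly for this toy problem.
def feats(x, a):
    return np.column_stack([np.ones_like(x), x, x**2, a, a*x, a**2])

w, *_ = np.linalg.lstsq(feats(x, a), c, rcond=None)

# Choose the action minimising the predicted expected cost.
def best_action(x_obs, grid=np.linspace(-3, 3, 601)):
    pred = feats(np.full_like(grid, x_obs), grid) @ w
    return grid[np.argmin(pred)]

print(best_action(2.0))  # Bayes-optimal action is E[theta | x=2] = 1.0
```

No explicit posterior is ever formed: the regression of cost on (data, action) implicitly marginalises over the posterior, which is the point the abstract makes.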

We describe a simple algorithm for estimating the $k$-th normalized Betti number of a simplicial complex over $n$ elements using the path integral Monte Carlo method. For a general simplicial complex, the running time of our algorithm is $n^{O\left(\frac{1}{\sqrt{\gamma}}\log\frac{1}{\varepsilon}\right)}$ with $\gamma$ measuring the spectral gap of the combinatorial Laplacian and $\varepsilon \in (0,1)$ the additive precision. In the case of a clique complex, the running time of our algorithm improves to $\left(n/\lambda_{\max}\right)^{O\left(\frac{1}{\sqrt{\gamma}}\log\frac{1}{\varepsilon}\right)}$ with $\lambda_{\max} \geq k$, where $\lambda_{\max}$ is the maximum eigenvalue of the combinatorial Laplacian. Our algorithm provides a classical benchmark for a line of quantum algorithms for estimating Betti numbers. On clique complexes it matches their running time when, for example, $\gamma \in \Omega(1)$ and $k \in \Omega(n)$.
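For orientation, the quantity being estimated can be computed exactly for tiny complexes from the ranks of the boundary maps, since $\beta_k = \dim C_k - \operatorname{rank}\partial_k - \operatorname{rank}\partial_{k+1}$ (equivalently, the kernel dimension of the combinatorial Laplacian). A minimal worked example on the hollow triangle:

```python
import numpy as np

# Hollow triangle: vertices {0,1,2}, edges {01, 02, 12}, no 2-face.
# Columns of d1 are the boundaries of the edges 01, 02, 12.
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)
rank_d2 = 0                      # no 2-simplices, so rank(d2) = 0

rank = np.linalg.matrix_rank
beta0 = 3 - rank(d1)             # number of connected components
beta1 = d1.shape[1] - rank(d1) - rank_d2   # number of 1-dimensional holes
print(beta0, beta1)              # 1 1: one component, one hole
```

The Monte Carlo algorithm in the abstract estimates the *normalized* Betti number (kernel dimension divided by the number of $k$-simplices) without ever forming these matrices, which is what makes it feasible at scale.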

This paper focuses on representing the $L^{\infty}$-norm of finite-dimensional linear time-invariant systems with parameter-dependent coefficients. Previous studies tackled the problem in a non-parametric scenario by reducing it to finding the maximum $y$-projection of the real solutions $(x, y)$ of a system of the form $\Sigma=\{P=0, \, \partial P/\partial x=0\}$, where $P \in \mathbb{Z}[x, y]$; standard computer algebra methods for solving this problem were employed and analyzed in \cite{bouzidi2021computation}. In this paper, we extend this approach to the parametric case, aiming to represent the "maximal" $y$-projection of the real solutions of $\Sigma$ as a function of the given parameters. To accomplish this, we utilize cylindrical algebraic decomposition, which allows us to determine the desired value as a function of the parameters within specific regions of parameter space.
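In the non-parametric case, the reduction can be carried out concretely with a computer algebra system. A small sketch for the (illustrative, not from the paper) transfer function $G(s) = 1/(s^2+s+1)$, whose $L^\infty$-norm is the maximal $y$-projection of the critical system:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# On s = i*x, |G(i x)|^2 = 1 / ((1 - x^2)^2 + x^2).
# Encode y = |G(i x)|^2 as the polynomial equation P(x, y) = 0.
P = y * ((1 - x**2)**2 + x**2) - 1

# Critical points of the y-projection: P = 0 and dP/dx = 0.
sols = sp.solve([P, sp.diff(P, x)], [x, y], dict=True)
ymax = max(s[y] for s in sols)           # maximal y-projection = peak gain^2
hinf = sp.sqrt(ymax)                     # L-infinity norm: 2/sqrt(3) ~ 1.1547
print(sp.simplify(hinf))
```

The parametric extension in the paper replaces this single `solve` call by a cylindrical algebraic decomposition over the parameter space, so the answer becomes a piecewise function of the parameters rather than a single number.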

A new scheme is proposed to construct an $n$-times differentiable extension of an $n$-times differentiable function defined on a smooth domain $D$ in $d$ dimensions. The extension scheme relies on an explicit formula consisting of a linear combination of $n+1$ function values in $D$, which extends the function along directions normal to the boundary; smoothness tangent to the boundary is automatic. The performance of the scheme is illustrated by using function extension as a step in a numerical solver for the inhomogeneous Poisson equation on multiply connected domains with complex geometry in two and three dimensions. We show that the modest additional work needed to do function extension leads to considerably more accurate solutions of the partial differential equation.
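The flavour of such a formula can be seen in the classical one-dimensional Hestenes-style extension, which is an analogue (not the paper's exact scheme) of the normal-direction construction: for $f$ defined on $x \ge 0$, set $f(-t) = \sum_{j=1}^{n+1} c_j f(jt)$, where the $c_j$ solve $\sum_j c_j j^m = (-1)^m$ for $m = 0,\dots,n$, so the first $n$ derivatives match at the boundary:

```python
import numpy as np

def extension_coeffs(n):
    """Solve sum_j c_j * j^m = (-1)^m for m = 0..n (a Vandermonde system)."""
    j = np.arange(1, n + 2, dtype=float)
    V = np.vander(j, n + 1, increasing=True).T   # rows: j^0, j^1, ..., j^n
    rhs = (-1.0) ** np.arange(n + 1)
    return np.linalg.solve(V, rhs)

def extend(f, n):
    """Extend f (defined on x >= 0) to x < 0 with C^n smoothness at 0."""
    c = extension_coeffs(n)
    def f_ext(x):
        x = np.asarray(x, dtype=float)
        t = np.where(x < 0, -x, 0.0)
        neg = sum(cj * f((j + 1) * t) for j, cj in enumerate(c))
        return np.where(x < 0, neg, f(np.maximum(x, 0.0)))
    return f_ext

f_ext = extend(np.cos, n=3)     # smooth test function on x >= 0
h = 1e-3
# One-sided first differences across x = 0 should nearly agree.
left = (f_ext(0.0) - f_ext(-h)) / h
right = (f_ext(h) - f_ext(0.0)) / h
print(abs(left - right))        # small: derivative is continuous at 0
```

The paper's scheme applies the same matching idea along each boundary normal of a smooth domain $D$, which is why only $n+1$ interior values per normal line are needed.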

Spatial statistics is traditionally based on stationary models on $\mathbb{R}^d$, such as Mat\'ern fields. The adaptation of traditional spatial statistical methods, originally designed for stationary models in Euclidean spaces, to effectively model phenomena on linear networks such as stream systems and urban road networks is challenging. The current study aims to analyze the incidence of traffic accidents on road networks using three different methodologies and to compare the model performance of each. Initially, we analyzed the application of a spatial triangulation defined directly on the road network instead of on a traditional continuous region. However, this approach posed challenges in areas with complex boundaries, leading to the emergence of artificial spatial dependencies. To address this, we applied an alternative computational method to construct nonstationary barrier models. Finally, we explored a recently proposed class of Gaussian processes on compact metric graphs, the Whittle-Mat\'ern fields, defined by a fractional SPDE on the metric graph. The latter fields are a natural extension of Gaussian fields with Mat\'ern covariance functions on Euclidean domains to non-Euclidean metric graph settings. A ten-year period (2010-2019) of daily traffic-accident records from Barcelona, Spain, has been used to evaluate the three models described above. When comparing model performance, we observed that the Whittle-Mat\'ern fields defined directly on the network outperformed the network triangulation and barrier models. Due to their flexibility, the Whittle-Mat\'ern fields can be applied to a wide range of environmental problems on linear networks, such as spatio-temporal modeling of water contamination in stream networks or modeling air quality or accidents on urban road networks.

We develop an NLP-based procedure for detecting systematic nonmeritorious consumer complaints, simply called systematic anomalies, among complaint narratives. While classification algorithms are used to detect pronounced anomalies, in the case of smaller and frequent systematic anomalies the algorithms may falter for a variety of reasons, including technical ones as well as the natural limitations of human analysts. Therefore, as the next step after classification, we convert the complaint narratives into quantitative data, which are then analyzed using an algorithm for detecting systematic anomalies. We illustrate the entire procedure using complaint narratives from the Consumer Complaint Database of the Consumer Financial Protection Bureau.

A $3$-uniform hypergraph is a generalization of a simple graph in which each hyperedge is a subset of vertices of size $3$. The degree of a vertex in a hypergraph is the number of hyperedges incident with it, and the degree sequence of a hypergraph is the sequence of the degrees of its vertices. The degree sequence problem for $3$-uniform hypergraphs is to decide whether a $3$-uniform hypergraph exists with a prescribed degree sequence; such a hypergraph is called a realization. Recently, Deza \emph{et al.} proved that the degree sequence problem for $3$-uniform hypergraphs is NP-complete. Some special cases are easy; however, polynomial algorithms have so far been known only for some very restricted degree sequences. The main result of our research is the following: if all degrees in a degree sequence $D$ are between $\frac{2n^2}{63}+O(n)$ and $\frac{5n^2}{63}-O(n)$, the number of vertices is at least $45$, and the degree sum is divisible by $3$, then $D$ has a $3$-uniform hypergraph realization. Our proof is constructive and, in fact, constructs a hypergraph realization in polynomial time for any degree sequence satisfying the properties mentioned above. To our knowledge, this is the first polynomial running time algorithm to construct a $3$-uniform hypergraph realization of a highly irregular and dense degree sequence.
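The sufficient conditions in the theorem are easy to test; a small checker (ignoring the unspecified $O(n)$ slack terms, so the bounds below are only an approximation of the exact hypothesis) might look like:

```python
# Approximate check of the theorem's sufficient conditions for a degree
# sequence D to have a 3-uniform hypergraph realization.  The O(n) slack
# terms in the stated bounds are omitted here for simplicity.
def satisfies_sufficient_conditions(D):
    n = len(D)
    if n < 45 or sum(D) % 3 != 0:          # size and divisibility conditions
        return False
    lo, hi = 2 * n * n / 63, 5 * n * n / 63
    return all(lo <= d <= hi for d in D)   # all degrees in the middle band

n = 63
D = [3 * n] * n   # each degree 189, inside [2n^2/63, 5n^2/63] = [126, 315]
print(satisfies_sufficient_conditions(D))
```

The divisibility condition is necessary for any realization (every hyperedge contributes exactly $3$ to the degree sum); the density band is what makes the constructive argument go through.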

We analyze a goal-oriented adaptive algorithm that aims to efficiently compute the quantity of interest $G(u^\star)$ with a linear goal functional $G$ and the solution $u^\star$ to a general second-order nonsymmetric linear elliptic partial differential equation. The current state of the analysis of iterative algebraic solvers for nonsymmetric systems lacks the contraction property in the norms that are prescribed by the functional analytic setting. This seemingly prevents their application in the optimality analysis of goal-oriented adaptivity. As a remedy, this paper proposes a goal-oriented adaptive iteratively symmetrized finite element method (GOAISFEM). It employs a nested loop with a contractive symmetrization procedure, e.g., the Zarantonello iteration, and a contractive algebraic solver, e.g., an optimal multigrid solver. The various iterative procedures require well-designed stopping criteria such that the adaptive algorithm can effectively steer the local mesh refinement and the computation of the inexact discrete approximations. The main results consist of full linear convergence of the proposed adaptive algorithm and the proof of optimal convergence rates with respect to both degrees of freedom and total computational cost (i.e., optimal complexity). Numerical experiments confirm the theoretical results and investigate the selection of the parameters.

Most studies of adaptive algorithm behavior consider performance measures based on mean values such as the mean-square error. The derived models are useful for understanding the algorithm behavior under different environments and can be used for design. Nevertheless, from a practical point of view, the adaptive filter user has only one realization of the algorithm to obtain the desired result. This letter derives a model for the variance of the squared-error sample curve of the least-mean-square (LMS) adaptive algorithm, so that the achievable cancellation level can be predicted based on the properties of the steady-state squared error. The derived results provide the user with useful design guidelines.
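For readers who want to see the object being modelled, a minimal LMS system-identification run produces exactly the squared-error sample curve whose variance the letter characterises (filter length, step size, and noise level below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 5000, 4                        # samples, filter taps
w_true = np.array([1.0, -0.5, 0.25, 0.1])
x = rng.normal(size=N)                # white input
d = np.convolve(x, w_true)[:N] + 0.01 * rng.normal(size=N)  # noisy desired

mu = 0.01                             # LMS step size
w = np.zeros(M)
e2 = np.empty(N - M)                  # squared-error sample curve
for k in range(M, N):
    u = x[k - M + 1:k + 1][::-1]      # regressor, most recent sample first
    e = d[k] - w @ u                  # a-priori error
    w = w + mu * e * u                # LMS weight update
    e2[k - M] = e ** 2

print(np.mean(e2[-1000:]))            # steady-state mean squared error
```

A single run like this is all the practitioner has; the letter's contribution is a model for the *variance* of `e2` in steady state, so that the achievable cancellation level of that one realization can be predicted in advance.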

Since the invention of word2vec, the skip-gram model has significantly advanced research on network embedding, as seen in the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into a matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) as an extension of LINE, PTE can be viewed as the joint factorization of multiple networks' Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacians. Finally, we present the NetMF method, as well as its approximation algorithm, for computing network embeddings. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.
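The closed form behind NetMF can be sketched directly for a small dense graph: build the window-averaged transition matrix, apply the element-wise truncated logarithm, and factorize with an SVD. The toy graph below is a hypothetical example, and this dense sketch omits the approximation algorithm the paper uses at scale:

```python
import numpy as np

def netmf_embedding(A, dim=2, T=10, b=1.0):
    """NetMF-style embedding of a small dense adjacency matrix A:
    factorize M = log(max(1, vol(G)/(b*T) * (sum_{r=1}^T P^r) D^{-1}))
    where P is the random-walk matrix, T the window size, b the
    number of negative samples."""
    vol = A.sum()
    d = A.sum(axis=1)
    P = A / d[:, None]                  # row-stochastic transition matrix
    S = np.zeros_like(P)
    Pr = np.eye(len(A))
    for _ in range(T):
        Pr = Pr @ P
        S += Pr                         # sum of P^r for r = 1..T
    M = np.maximum(vol / (b * T) * S / d[None, :], 1.0)   # S D^{-1}, truncated
    U, s, _ = np.linalg.svd(np.log(M))
    return U[:, :dim] * np.sqrt(s[:dim])

# Hypothetical toy graph: two triangles joined by a single edge.
A = np.zeros((6, 6))
for i, j in [(0,1), (1,2), (0,2), (3,4), (4,5), (3,5), (2,3)]:
    A[i, j] = A[j, i] = 1.0
emb = netmf_embedding(A)
print(emb.shape)  # one dim-dimensional embedding per vertex
```

This is the sense in which DeepWalk "is" matrix factorization: the skip-gram objective with negative sampling implicitly factorizes the log-transformed matrix constructed above.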
