Gaussian random fields (GFs) are fundamental tools in spatial modeling and can be represented flexibly and efficiently as solutions to stochastic partial differential equations (SPDEs). The SPDEs depend on specific parameters, which enforce various field behaviors and can be estimated using Bayesian inference. However, the likelihood typically only provides limited insights into the covariance structure under in-fill asymptotics. In response, it is essential to leverage priors to achieve appropriate, meaningful covariance structures in the posterior. This study introduces a smooth, invertible parameterization of the correlation length and diffusion matrix of an anisotropic GF and constructs penalized complexity (PC) priors for the model when the parameters are constant in space. The formulated prior is weakly informative, effectively penalizing complexity by pushing the correlation range toward infinity and the anisotropy to zero.
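
As a hedged illustration of the penalized complexity construction (the general PC-prior recipe, not necessarily the paper's specific derivation): the flexible model $f_\xi$ is measured against the base model $f_0$ through the Kullback-Leibler divergence, and an exponential prior is placed on the resulting distance,

$$ d(\xi) = \sqrt{2\,\mathrm{KLD}\big(f_\xi \,\|\, f_0\big)}, \qquad \pi(d) = \lambda e^{-\lambda d}, \qquad \pi(\xi) = \lambda e^{-\lambda d(\xi)} \left|\frac{\partial d(\xi)}{\partial \xi}\right|, $$

so that the base model (here: infinite correlation range and zero anisotropy) carries the highest prior density and departures from it are penalized at a user-chosen rate $\lambda$.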

Related Content

Threshold selection is a fundamental problem in any threshold-based extreme value analysis. While models are asymptotically motivated, selecting an appropriate threshold for finite samples is difficult and, with standard methods, highly subjective. Inference for high quantiles can also be highly sensitive to the choice of threshold. Too low a threshold leads to bias in the fit of the extreme value model, while too high a threshold leads to unnecessary additional uncertainty in the estimation of model parameters. We develop a novel methodology for automated threshold selection that directly tackles this bias-variance trade-off. We also develop a method to account for the uncertainty in the threshold estimation and propagate this uncertainty through to high quantile inference. Through a simulation study, we demonstrate the effectiveness of our method for threshold selection and subsequent extreme quantile estimation, relative to the leading existing methods, and show that the method's effectiveness is not sensitive to its tuning parameters. We apply our method to the well-known, troublesome example of the River Nidd dataset.
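
For context, a minimal Python sketch (assuming numpy and scipy; the helper gpd_quantile is illustrative, and this is the standard fixed-threshold generalized Pareto approach rather than the automated selection method proposed above) of estimating a high quantile from exceedances of a candidate threshold:

import numpy as np
from scipy.stats import genpareto

def gpd_quantile(data, u, p):
    """Fit a GPD to exceedances of threshold u and estimate the p-quantile."""
    data = np.asarray(data, dtype=float)
    excess = data[data > u] - u
    zeta_u = excess.size / data.size                 # empirical exceedance probability P(X > u)
    xi, _, sigma = genpareto.fit(excess, floc=0.0)   # shape and scale, location fixed at 0
    if abs(xi) < 1e-8:                               # exponential tail in the limit xi -> 0
        return u + sigma * np.log(zeta_u / (1.0 - p))
    return u + (sigma / xi) * (((1.0 - p) / zeta_u) ** (-xi) - 1.0)

# Example: 0.999 quantile of a heavy-tailed sample, threshold at the empirical 95% level.
rng = np.random.default_rng(1)
x = rng.pareto(3.0, size=5000) + 1.0
print(gpd_quantile(x, u=np.quantile(x, 0.95), p=0.999))

Repeating such a fit over a grid of candidate thresholds makes the bias-variance trade-off visible: the quantile estimates stabilize once the threshold is high enough for the GPD approximation to hold, at the price of fewer exceedances and hence larger variance.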

We propose a new second-order accurate lattice Boltzmann formulation for linear elastodynamics that is stable for arbitrary combinations of material parameters under a CFL-like condition. The construction of the numerical scheme uses an equivalent first-order hyperbolic system of equations as an intermediate step, for which a vectorial lattice Boltzmann formulation is introduced. The only difference from conventional lattice Boltzmann formulations is the use of vector-valued populations, so that all computational benefits of the algorithm are preserved. Using the asymptotic expansion technique and the notion of pre-stability structures, we further establish second-order consistency as well as analytical stability estimates. Lastly, we introduce a second-order consistent initialization of the populations as well as a boundary formulation for Dirichlet boundary conditions on 2D rectangular domains. All theoretical derivations are numerically verified by convergence studies using manufactured solutions and long-term stability tests.
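
To make the vector-valued populations concrete, a schematic BGK-type update (written in generic notation, not necessarily the paper's) keeps the usual stream-and-collide structure of a lattice Boltzmann scheme but replaces the scalar populations $f_i$ by vectors $\boldsymbol{f}_i$,

$$ \boldsymbol{f}_i(\boldsymbol{x} + \boldsymbol{c}_i \Delta t,\, t + \Delta t) = \boldsymbol{f}_i(\boldsymbol{x}, t) - \frac{\Delta t}{\tau}\Big(\boldsymbol{f}_i(\boldsymbol{x}, t) - \boldsymbol{f}_i^{\mathrm{eq}}(\boldsymbol{x}, t)\Big), $$

where the equilibria $\boldsymbol{f}_i^{\mathrm{eq}}$ are chosen so that the moments of the populations recover the first-order hyperbolic form of the elastodynamic equations.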

Infinitary and cyclic proof systems are proof systems for logical formulas with fixed-point operators or inductive definitions. A cyclic proof system is a restriction of the corresponding infinitary proof system, so the two are in general not equivalent: the cyclic system may be weaker than the infinitary one. For several logics, the infinitary proof systems have been shown to be cut-free complete. For cyclic proof systems, however, many questions about (cut-free) completeness and the cut-elimination property remain open. In this study, we show that, for some propositional logics with fixed-point operators or inductive definitions, provability in the infinitary and cyclic proof systems coincides and the cyclic proof systems are cut-free complete.

Topology optimization has matured into a powerful engineering design tool capable of designing extraordinary structures and materials while taking various physical phenomena into account. Despite the method's great advancements in recent years, several unanswered questions remain. This paper takes a step towards answering one of the larger questions, namely: how far from the global optimum is a given topology optimized design? This is typically a hard question to answer, as almost all interesting topology optimization problems are non-convex. Unfortunately, this non-convexity implies that local minima may plague the design space, resulting in optimizers ending up in suboptimal designs. In this work, we investigate performance bounds for topology optimization via a computational framework that utilizes Lagrange duality theory. This approach provides a viable measure of how "close" a given design is to the global optimum for a subset of optimization formulations. The method's capabilities are exemplified via several numerical examples, including the design of mode converters and resonating plates.
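
The duality-based bound rests on weak duality, stated here generically (the paper applies this idea to specific topology optimization formulations): for a minimization problem with optimal value $p^\star$, objective $f$, and Lagrange dual function $g$,

$$ g(\boldsymbol{\lambda}) \;\le\; p^\star \;\le\; f(\boldsymbol{x}_{\mathrm{feas}}) \qquad \text{for any dual-feasible } \boldsymbol{\lambda} \text{ and any feasible design } \boldsymbol{x}_{\mathrm{feas}}, $$

so the gap $f(\boldsymbol{x}_{\mathrm{feas}}) - g(\boldsymbol{\lambda})$ bounds how far a computed design can be from the (unknown) global optimum.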

Discontinuous Galerkin (DG) methods for solving elliptic equations are gaining popularity in the computational physics community for their high-order spectral convergence and their potential for parallelization on computing clusters. However, problems in numerical relativity with extremely stretched grids, such as initial data problems for binary black holes that impose boundary conditions at large distances from the black holes, have proven challenging for DG methods. To alleviate this problem, we have developed a primal DG scheme that is generically applicable to a large class of elliptic equations, including problems on curved and extremely stretched grids. The DG scheme accommodates two widely used initial data formulations in numerical relativity, namely the puncture formulation and the extended conformal thin-sandwich (XCTS) formulation. We find that our DG scheme is able to stretch the grid by a factor of $\sim 10^9$ and hence allows boundary conditions to be imposed at large distances. The scheme converges exponentially with resolution both for the smooth XCTS problem and for the nonsmooth puncture problem. With this method we are able to generate high-quality initial data for binary black hole problems using a parallelizable DG scheme. The code is publicly available in the open-source SpECTRE numerical relativity code.
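
As a point of reference for what a primal DG discretization of an elliptic operator looks like, the standard symmetric interior penalty form for $-\Delta u = f$ reads (the paper's scheme generalizes this type of formulation to curved and extremely stretched grids and to the XCTS and puncture systems)

$$ a_h(u, v) = \sum_{K} \int_K \nabla u \cdot \nabla v \,\mathrm{d}x - \sum_{F} \int_F \Big( \{\!\{\nabla u\}\!\}\cdot\boldsymbol{n}\,[\![v]\!] + \{\!\{\nabla v\}\!\}\cdot\boldsymbol{n}\,[\![u]\!] - \frac{\sigma}{h_F}\,[\![u]\!]\,[\![v]\!] \Big)\,\mathrm{d}s, $$

where $\{\!\{\cdot\}\!\}$ and $[\![\cdot]\!]$ denote averages and jumps across element faces $F$ and $\sigma$ is a penalty parameter.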

Ridge functions are used to describe and study lower bounds on the approximation achieved by neural networks that can be written as a linear combination of activation functions. If the activation functions are themselves ridge functions, these networks are called explainable neural networks. In this brief paper, we first show, using matrix notation, that quantum neural networks based on variational quantum circuits can be written as a linear combination of ridge functions. Consequently, the interpretability and explainability of such quantum neural networks can be directly considered and studied as an approximation by a linear combination of ridge functions.
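
For reference, these are standard definitions rather than results specific to this paper: a ridge function is a multivariate function of the form $g(\boldsymbol{a}\cdot\boldsymbol{x})$ for a direction $\boldsymbol{a}\in\mathbb{R}^n$ and a univariate profile $g$, and a one-hidden-layer network with activation $\sigma$ is precisely a linear combination of ridge functions,

$$ f(\boldsymbol{x}) = \sum_{i=1}^{m} c_i\,\sigma(\boldsymbol{a}_i\cdot\boldsymbol{x} + b_i). $$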

In the modelling of stochastic phenomena, such as quasi-reaction systems, parameter estimation of kinetic rates can be challenging, particularly when the time gap between consecutive measurements is large. Local linear approximation approaches account for the stochasticity in the system but fail to capture the nonlinear nature of the underlying process. At the mean level, the dynamics of the system can be described by a system of ODEs, which have an explicit solution only for simple unitary systems. An analytical solution for generic quasi-reaction systems is proposed via a first-order Taylor approximation of the hazard rate. This allows a nonlinear forward prediction of the future dynamics given the current state of the system. Predictions and corresponding observations are embedded in a nonlinear least-squares approach for parameter estimation. The performance of the algorithm is compared to existing SDE- and ODE-based methods via a simulation study. Besides the increased computational efficiency of the approach, the results show an improvement in kinetic rate estimation, particularly for data observed at large time intervals. Additionally, the availability of an explicit solution makes the method robust to stiffness, which is often present in biological systems. An illustration on Rhesus Macaque data shows the applicability of the approach to the study of cell differentiation.
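
To illustrate the kind of linearized forward prediction described above, here is a small Python sketch under my own assumptions (the helper name, the stoichiometric matrix V, and the hazard functions are illustrative, not taken from the paper): the mean ODE $\dot{x} = V^{\top} h(x;\theta)$ is replaced by its first-order Taylor expansion around the current state, and the resulting affine ODE is solved exactly with a matrix exponential.

import numpy as np
from scipy.linalg import expm

def linearized_forecast(x0, dt, V, hazard, hazard_jac):
    """Forward-predict the mean state of a quasi-reaction system over a time gap dt.

    The mean dynamics dx/dt = V.T @ hazard(x) are replaced by their first-order
    Taylor expansion around x0, giving an affine ODE dx/dt = A x + b whose exact
    solution follows from the matrix exponential of an augmented matrix.
    """
    x0 = np.asarray(x0, dtype=float)
    A = V.T @ hazard_jac(x0)                 # Jacobian of the drift at x0
    b = V.T @ hazard(x0) - A @ x0            # constant part of the affine drift
    n = x0.size
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A
    M[:n, n] = b                             # augmented system: d/dt [x; 1] = M [x; 1]
    return (expm(M * dt) @ np.append(x0, 1.0))[:n]

# Example: a single birth-death species with kinetic rates theta = (birth, death).
V = np.array([[1.0], [-1.0]])                # stoichiometry of the two reactions
theta = np.array([2.0, 0.5])
hazard = lambda x: np.array([theta[0], theta[1] * x[0]])
hazard_jac = lambda x: np.array([[0.0], [theta[1]]])
print(linearized_forecast(x0=[10.0], dt=1.0, V=V, hazard=hazard, hazard_jac=hazard_jac))

Comparing such predictions with the observations at the measurement times inside a nonlinear least-squares objective over the kinetic rates mirrors the estimation strategy described above.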

Psychometrics and quantitative psychology rely strongly on statistical models to measure psychological processes. As a branch of mathematics, geometry is inherently connected to measurement and focuses on properties such as distance and volume. Yet despite this common root in measurement, geometry currently plays only a minor role in psychological measurement. In this paper, my aim is to illustrate how ideas from non-Euclidean geometry may be relevant for psychometrics.

Unique continuation principles are fundamental properties of elliptic partial differential equations, giving conditions that guarantee that a solution to an elliptic equation must be identically zero. Since finite-element discretizations are a natural tool for gaining insight into elliptic equations, it is reasonable to ask whether such principles also hold at the discrete level. In this work, we prove a version of the unique continuation principle for piecewise-linear and -bilinear finite-element discretizations of the Laplacian eigenvalue problem on polygonal domains in $\mathbb{R}^2$. Namely, we show that any solution to the discretized equation $-\Delta u = \lambda u$ with vanishing Dirichlet and Neumann traces must be identically zero under certain geometric and topological assumptions on the resulting triangulation. We also provide a counterexample, showing that a nonzero \emph{inner solution} exists when the topological assumptions are not satisfied. Finally, we give an application to an eigenvalue interlacing problem, where the space of inner solutions makes an explicit appearance.
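
Schematically, and as my own paraphrase of the discrete statement: with $V_h$ the piecewise-linear or -bilinear space on the triangulation, if $u_h \in V_h$ satisfies

$$ (\nabla u_h, \nabla v_h)_{L^2(\Omega)} = \lambda\,(u_h, v_h)_{L^2(\Omega)} \qquad \text{for all } v_h \in V_h, $$

and both its Dirichlet trace and its discrete Neumann trace vanish on $\partial\Omega$, then $u_h \equiv 0$, provided the triangulation satisfies the geometric and topological assumptions of the paper.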

Gaussian graphical models are nowadays commonly applied to the comparison of groups sharing the same variables, by jointly learning their independence structures. We consider the case where there are exactly two dependent groups and the association structure is represented by a family of coloured Gaussian graphical models suited to deal with paired data problems. To learn the two dependent graphs, together with their across-graph association structure, we implement a fused graphical lasso penalty. We carry out a comprehensive analysis of this approach, with special attention to the role played by some relevant submodel classes. In this way, we provide a broad set of tools for the application of Gaussian graphical models to paired data problems. These include results useful for the specification of penalty values in order to obtain a path of lasso solutions, and an ADMM algorithm that solves the fused graphical lasso optimization problem. Finally, we present an application of our method to cancer genomics, where it is of interest to compare cancer cells with a control sample from histologically normal tissues adjacent to the tumor. All the methods described in this article are implemented in the $\texttt{R}$ package $\texttt{pdglasso}$ available at: //github.com/savranciati/pdglasso.
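
For orientation, the fused graphical lasso objective in its standard two-group form (up to sample-size weights) maximizes, over positive-definite precision matrices $\Theta^{(1)}, \Theta^{(2)}$,

$$ \sum_{k=1}^{2}\Big(\log\det\Theta^{(k)} - \mathrm{tr}\big(S^{(k)}\Theta^{(k)}\big)\Big) - \lambda_1 \sum_{k}\sum_{i \neq j}\big|\Theta^{(k)}_{ij}\big| - \lambda_2 \sum_{i,j}\big|\Theta^{(1)}_{ij} - \Theta^{(2)}_{ij}\big|, $$

where $S^{(k)}$ are the sample covariance matrices and $\lambda_1, \lambda_2$ control sparsity within and similarity across the two groups; the coloured-model framework for paired data described above adapts this type of penalty to a single joint model for the two dependent groups.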
