
A dynamic factor model with factor series following a VAR$(p)$ model is shown to have a VARMA$(p,p)$ model representation. Reduced-rank structures are identified for the VAR and VMA components of the resulting VARMA model. It is also shown how the VMA component parameters can be computed numerically from the original model parameters via the innovations algorithm, and connections of this approach to non-linear matrix equations are made. Some VAR models related to the resulting VARMA model are also discussed.
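A worked sketch of the $p=1$ case may help fix ideas. The notation below ($y_t$, $\Lambda$, $f_t$, $\Phi$, and the use of a Moore-Penrose inverse) is generic and not taken from the paper; it shows one standard route to the VARMA representation and the reduced-rank AR coefficient.

```latex
% Dynamic factor model with VAR(1) factors (generic notation):
\[
  y_t = \Lambda f_t + \epsilon_t, \qquad f_t = \Phi f_{t-1} + \eta_t,
\]
% where \Lambda is an n x r loading matrix of full column rank, r < n.
% Substituting f_{t-1} = \Lambda^{+}(y_{t-1} - \epsilon_{t-1}), with
% \Lambda^{+} the Moore-Penrose inverse of \Lambda, gives
\[
  y_t = \Lambda \Phi \Lambda^{+} y_{t-1}
        + \Lambda \eta_t + \epsilon_t - \Lambda \Phi \Lambda^{+} \epsilon_{t-1},
\]
% a VARMA(1,1) representation: the AR coefficient \Lambda\Phi\Lambda^{+}
% has rank at most r (a reduced-rank structure), and the error term is a
% first-order moving average of the two noise sequences.
```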


This paper presents a novel approach to construct regularizing operators for severely ill-posed Fredholm integral equations of the first kind by introducing parametrized discretization. The optimal values of discretization and regularization parameters are computed simultaneously by solving a minimization problem formulated based on a regularization parameter search criterion. The effectiveness of the proposed approach is demonstrated through examples of noisy Laplace transform inversions and the deconvolution of nuclear magnetic resonance relaxation data.
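The following is a minimal sketch of the joint parameter search the abstract describes, not the paper's method: the kernel, grids, noise level, and the use of generalized cross-validation (GCV) as the stand-in search criterion are all assumptions for illustration.

```python
# Hypothetical sketch: jointly choosing a discretization size n and a Tikhonov
# parameter lam for a discretized Laplace-transform inversion, scored by GCV.
import numpy as np

def laplace_matrix(s, n, T=5.0):
    """Midpoint-rule discretization of (Kf)(s) = int_0^T exp(-s*t) f(t) dt."""
    t = (np.arange(n) + 0.5) * (T / n)
    return np.exp(-np.outer(s, t)) * (T / n), t

def gcv_score(K, g, lam):
    """Tikhonov solution and its generalized cross-validation score."""
    m, n = K.shape
    A = K.T @ K + lam * np.eye(n)
    f = np.linalg.solve(A, K.T @ g)
    H = K @ np.linalg.solve(A, K.T)           # influence (hat) matrix
    return np.sum((g - K @ f) ** 2) / (m - np.trace(H)) ** 2, f

s = np.linspace(0.5, 10.0, 60)                # where the transform is sampled
K_fine, t_fine = laplace_matrix(s, 400)       # fine grid to synthesize data
g = K_fine @ (t_fine * np.exp(-t_fine))       # noiseless transform values
g += 1e-4 * np.random.default_rng(0).standard_normal(g.size)

best = None
for n in (10, 20, 40, 80):                    # candidate discretization levels
    K, t = laplace_matrix(s, n)
    for lam in np.logspace(-10, -2, 30):      # candidate regularization levels
        score, f = gcv_score(K, g, lam)
        if best is None or score < best[0]:
            best = (score, n, lam)

print(f"selected n = {best[1]}, lambda = {best[2]:.2e}")
```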

We develop a hybrid scheme based on a finite difference scheme and a rescaling technique to approximate the solution of a nonlinear wave equation. In order to numerically reproduce the blow-up phenomena, we propose a rule of scaling transformation, a variant of one successfully used for nonlinear parabolic equations. A careful study of the convergence of the proposed scheme is carried out, and several numerical examples are presented as illustration.
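As a rough caricature of the idea (not the paper's scheme), the sketch below integrates $u_{tt} = u_{xx} + u^2$ with explicit finite differences and shrinks the time step like $1/\|u\|_\infty$ as the solution grows, so that the blow-up time is approached gradually; all parameter values are invented.

```python
# Illustrative only: explicit finite differences for u_tt = u_xx + u^2 with a
# crude adaptive rule dt ~ 1/||u||_inf, mimicking the spirit of rescaling time
# near blow-up. Not the paper's transformation rule.
import numpy as np

N, L = 200, 1.0
dx = L / N
x = np.linspace(0.0, L, N + 1)
u = 50.0 * np.sin(np.pi * x)          # large bump to trigger blow-up
v = np.zeros_like(u)                  # v = u_t
t, dt0 = 0.0, 0.2 * dx

def rhs(u):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return lap + u**2                 # Dirichlet endpoints stay zero

while np.max(np.abs(u)) < 1e8 and t < 1.0:
    dt = min(dt0, 1.0 / np.max(np.abs(u)))    # shrink the step as u grows
    ku, kv = v, rhs(u)                        # Heun (RK2) step for (u, v)
    u1, v1 = u + dt * ku, v + dt * kv
    u, v = u + 0.5 * dt * (ku + v1), v + 0.5 * dt * (kv + rhs(u1))
    u[0] = u[-1] = 0.0
    t += dt

print(f"numerical blow-up time ~ {t:.4f}")
```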

Developing an efficient computational scheme for high-dimensional Bayesian variable selection in generalised linear models and survival models has always been a challenging problem, due to the absence of closed-form solutions for the marginal likelihood. The RJMCMC approach can be employed to sample models and coefficients jointly, but effective design of the transdimensional jumps of RJMCMC can be challenging, making it hard to implement. Alternatively, the marginal likelihood can be derived using a data-augmentation scheme (e.g., Polya-gamma data augmentation for logistic regression) or through other estimation methods. However, suitable data-augmentation schemes are not available for every generalised linear or survival model, and using estimators such as the Laplace approximation or the correlated pseudo-marginal method to approximate the marginal likelihood within a locally informed proposal can be computationally expensive in the "large n, large p" settings. In this paper, three main contributions are presented. Firstly, we present an extended Point-wise implementation of the Adaptive Random Neighbourhood Informed proposal (PARNI) to efficiently sample models directly from the marginal posterior distribution in both generalised linear models and survival models. Secondly, in light of the approximate Laplace approximation, we describe an efficient and accurate estimation method for the marginal likelihood that involves adaptive parameters. Additionally, we describe a new method to adapt the algorithmic tuning parameters of the PARNI proposal by replacing the Rao-Blackwellised estimates with a combination of a warm-start estimate and an ergodic average. We present numerous numerical results from simulated data and eight high-dimensional gene fine-mapping data-sets to showcase the efficiency of the novel PARNI proposal compared to the baseline add-delete-swap proposal.
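For reference, here is a hedged sketch of the baseline add-delete-swap sampler the paper compares against. To stay self-contained it uses a linear model with a g-prior, where the marginal likelihood is closed-form; the paper's GLM and survival settings are precisely those where this closed form is unavailable. All data, the inclusion prior h, and g are invented.

```python
# Add-delete-swap Metropolis-Hastings over inclusion vectors gamma, with a
# closed-form g-prior marginal likelihood (linear model stand-in).
import numpy as np

rng = np.random.default_rng(1)
n, p, g = 100, 50, 100.0
X = rng.standard_normal((n, p))
y = X[:, 0] - 2 * X[:, 3] + rng.standard_normal(n)   # truth: variables {0, 3}

def log_post(gamma, h=0.1):
    """log marginal likelihood (g-prior) + Bernoulli(h) model prior."""
    k = int(gamma.sum())
    rss = y @ y
    if k > 0:
        Xg = X[:, gamma]
        beta = np.linalg.lstsq(Xg, y, rcond=None)[0]
        rss -= (g / (1 + g)) * (y @ Xg @ beta)        # fitted sum of squares
    return (-0.5 * k * np.log(1 + g) - 0.5 * n * np.log(rss)
            + k * np.log(h) + (p - k) * np.log(1 - h))

gamma = np.zeros(p, dtype=bool)
lp = log_post(gamma)
for it in range(5000):
    prop, log_q = gamma.copy(), 0.0
    move = rng.choice(("add", "delete", "swap"))
    ones, zeros = np.flatnonzero(gamma), np.flatnonzero(~gamma)
    if move == "add" and zeros.size:
        prop[rng.choice(zeros)] = True
        log_q = np.log(zeros.size) - np.log(ones.size + 1)  # Hastings term
    elif move == "delete" and ones.size:
        prop[rng.choice(ones)] = False
        log_q = np.log(ones.size) - np.log(zeros.size + 1)
    elif move == "swap" and ones.size and zeros.size:
        prop[rng.choice(ones)], prop[rng.choice(zeros)] = False, True
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp + log_q:        # MH acceptance
        gamma, lp = prop, lp_prop

print("included variables:", np.flatnonzero(gamma))
```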

Machine-learning (ML) based discretization has been developed to simulate complex partial differential equations (PDEs) with tremendous success across various fields. These learned PDE solvers can effectively resolve the underlying solution structures of interest and achieve a level of accuracy that would often require an order-of-magnitude finer grid for a conventional numerical method using polynomial-based approximations. In previous work [13], we introduced a learned finite volume discretization that further incorporates the semi-Lagrangian (SL) mechanism, enabling larger CFL numbers for stability. However, the efficiency and effectiveness of such a methodology heavily rely on the availability of abundant high-resolution training data, which can be prohibitively expensive to obtain. To address this challenge, in this paper we propose a novel multi-fidelity ML-based SL method for transport equations. The method leverages a combination of a small amount of high-fidelity data and sufficient but cheaper low-fidelity data. The approach is built on a composite convolutional neural network architecture that explores the inherent correlation between high-fidelity and low-fidelity data. The proposed method achieves a reasonable level of accuracy, particularly in scenarios where a single-fidelity model fails to generalize effectively. We further extend the method to the nonlinear Vlasov-Poisson system by employing high-order Runge-Kutta exponential integrators. A collection of numerical tests is provided to validate the efficiency and accuracy of the proposed method.
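A schematic composite network in this spirit (not the paper's exact architecture) pairs a cheap linear branch, which captures the dominant correlation between low- and high-fidelity solutions, with a small nonlinear branch that learns the residual. The shapes, channel counts, and training data below are placeholders.

```python
# Composite CNN sketch for multi-fidelity learning: high-fidelity prediction =
# linear map of the low-fidelity solution + learned nonlinear correction.
import torch
import torch.nn as nn

class MultiFidelityNet(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        # linear branch: dominant (often near-linear) cross-fidelity correlation
        self.linear = nn.Conv1d(1, 1, kernel_size=5, padding=2, bias=False)
        # nonlinear branch: learns what the linear map misses
        self.nonlinear = nn.Sequential(
            nn.Conv1d(1, channels, 5, padding=2), nn.GELU(),
            nn.Conv1d(channels, channels, 5, padding=2), nn.GELU(),
            nn.Conv1d(channels, 1, 5, padding=2),
        )

    def forward(self, u_low):                 # u_low: (batch, 1, n_grid)
        return self.linear(u_low) + self.nonlinear(u_low)

# training sketch: few high-fidelity targets, stand-in random tensors here
model = MultiFidelityNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
u_low = torch.randn(8, 1, 128)                # placeholder low-fidelity inputs
u_high = torch.randn(8, 1, 128)               # placeholder high-fidelity targets
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(u_low), u_high)
    loss.backward()
    opt.step()
```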

Let $G$ be a graph on $n$ vertices with adjacency matrix $A$, and let $\mathbf{1}$ be the all-ones vector. We call $G$ controllable if the set of vectors $\mathbf{1}, A\mathbf{1}, \dots, A^{n-1}\mathbf{1}$ spans the whole space $\mathbb{R}^n$. We characterize the isomorphism problem of controllable graphs in terms of other combinatorial, geometric and logical problems. We also describe a polynomial time algorithm for graph isomorphism that works for almost all graphs.
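The definition translates directly into a rank test on the Krylov vectors; the sketch below uses a Gram-Schmidt sweep (numerically safer than forming the raw Krylov matrix, whose columns grow exponentially). The examples are ours, not from the paper.

```python
# G is controllable iff 1, A1, ..., A^{n-1}1 span R^n; Gram-Schmidt detects
# when the Krylov space stops growing.
import numpy as np

def is_controllable(A, tol=1e-10):
    n = A.shape[0]
    Q = np.zeros((n, 0))
    b = np.ones(n)
    for _ in range(n):
        b = b - Q @ (Q.T @ b)             # project out the span found so far
        nb = np.linalg.norm(b)
        if nb < tol:                      # Krylov space became invariant
            return False
        Q = np.column_stack([Q, b / nb])
        b = A @ (b / nb)
    return True

# A d-regular graph is never controllable for n >= 2, since A1 = d*1, while a
# random graph is controllable with high probability.
rng = np.random.default_rng(0)
n = 12
U = np.triu(rng.integers(0, 2, (n, n)), 1).astype(float)
K4 = np.ones((4, 4)) - np.eye(4)
print(is_controllable(U + U.T), is_controllable(K4))   # very likely True, False
```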

Besov priors are nonparametric priors that can model spatially inhomogeneous functions. They are routinely used in inverse problems and imaging, where they exhibit attractive sparsity-promoting and edge-preserving features. A recent line of work has initiated the study of their asymptotic frequentist convergence properties. In the present paper, we consider the theoretical recovery performance of the posterior distributions associated to Besov-Laplace priors in the density estimation model, under the assumption that the observations are generated by a possibly spatially inhomogeneous true density belonging to a Besov space. We improve on existing results and show that carefully tuned Besov-Laplace priors attain optimal posterior contraction rates. Furthermore, we show that hierarchical procedures involving a hyper-prior on the regularity parameter lead to adaptation to any smoothness level.

The HEat modulated Infinite DImensional Heston (HEIDIH) model and its numerical approximation are introduced and analyzed. This model falls into the general framework of infinite dimensional Heston stochastic volatility models of (F.E. Benth, I.C. Simonsen '18), introduced for the pricing of forward contracts. The HEIDIH model consists of a one-dimensional stochastic advection equation coupled with a stochastic volatility process, defined as a Cholesky-type decomposition of the tensor product of a Hilbert-space valued Ornstein-Uhlenbeck process, the mild solution to the stochastic heat equation on the real half-line. The advection and heat equations are driven by independent space-time Gaussian processes which are white in time and colored in space, with the latter covariance structure expressed by two different kernels. First, a class of weight-stationary kernels is given, under which regularity results for the HEIDIH model in fractional Sobolev spaces are formulated. In particular, the class includes weighted Matérn kernels. Second, numerical approximation of the model is considered. An error decomposition formula, pointwise in space and time, for a finite-difference scheme is proven. For a special case, essentially sharp convergence rates are obtained when this is combined with a fully discrete finite element approximation of the stochastic heat equation. The analysis takes into account a localization error, a pointwise-in-space finite element discretization error and an error stemming from the noise being sampled pointwise in space. The rates obtained in the analysis are higher than what would be obtained using a standard Sobolev embedding technique. Numerical simulations illustrate the results.
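One small ingredient, simulating a stochastic heat equation driven by noise that is white in time and Matérn-correlated in space via a Cholesky factor of the kernel, can be sketched as follows. The kernel choice (Matérn-1/2, i.e. exponential), grid sizes, and correlation length are all invented for illustration; the paper's scheme and analysis are far more refined.

```python
# Euler-Maruyama finite differences for dX = X_xx dt + dW on (0,1), with dW
# white in time and exponentially correlated in space (Cholesky sampling).
import numpy as np

n, T, steps = 64, 0.05, 500
x = np.linspace(0.0, 1.0, n)
dx, dt = x[1] - x[0], T / steps               # dt < dx^2/2 for stability

# Matern-1/2 (exponential) spatial covariance and its Cholesky factor
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)
Lc = np.linalg.cholesky(C + 1e-10 * np.eye(n))

rng = np.random.default_rng(0)
X = np.sin(np.pi * x)                         # initial condition
for _ in range(steps):
    lap = np.zeros(n)
    lap[1:-1] = (X[2:] - 2 * X[1:-1] + X[:-2]) / dx**2
    dW = np.sqrt(dt) * (Lc @ rng.standard_normal(n))  # colored-in-space noise
    X = X + dt * lap + dW
    X[0] = X[-1] = 0.0                        # Dirichlet boundary
```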

Suppose that $S \subseteq [n]^2$ contains no three points of the form $(x,y), (x,y+\delta), (x+\delta,y')$, where $\delta \neq 0$. How big can $S$ be? Trivially, $n \le |S| \le n^2$. Slight improvements on these bounds are obtained from Shkredov's upper bound for the corners problem [Shk06], which shows that $|S| \le O(n^2/(\log \log n)^c)$ for some small $c > 0$, and a construction due to Petrov [Pet23], which shows that $|S| \ge \Omega(n \log n/\sqrt{\log \log n})$. Could it be that for all $\varepsilon > 0$, $|S| \le O(n^{1+\varepsilon})$? We show that if so, this would rule out obtaining $\omega = 2$ using a large family of abelian groups in the group-theoretic framework of Cohn, Kleinberg, Szegedy and Umans [CU03,CKSU05] (which is known to capture the best bounds on $\omega$ to date), for which no barriers are currently known. Furthermore, an upper bound of $O(n^{4/3 - \varepsilon})$ for any fixed $\varepsilon > 0$ would rule out a conjectured approach to obtain $\omega = 2$ of [CKSU05]. Along the way, we encounter several problems that have much stronger constraints and that would already have these implications.
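To get a concrete feel for the constraint, the brute-force helper below (ours, illustrative only, and feasible only for small $n$) tests whether adding a point creates a forbidden triple and grows a pattern-free set greedily; the greedy sizes can be compared against the $n \log n$-type lower bound.

```python
# Forbidden pattern: (x,y), (x,y+d) in S with d != 0, plus any (x+d, y') in S.
from itertools import product

def creates_violation(S, q):
    pts = S | {q}
    xs = {px for px, _ in pts}                # first coordinates present
    for (x, y), (x2, y2) in product(pts, repeat=2):
        d = y2 - y
        if x2 == x and d != 0 and (x + d) in xs:
            return True
    return False

def greedy(n):
    S = set()
    for q in product(range(n), repeat=2):     # scan [n]^2 in row-major order
        if not creates_violation(S, q):
            S.add(q)
    return S

for n in (4, 8, 16):
    print(n, len(greedy(n)))
```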

A novel overlapping domain decomposition splitting algorithm based on a Crank-Nicolson method is developed for the stochastic nonlinear Schrödinger equation driven by a multiplicative noise with non-periodic boundary conditions. The proposed algorithm can significantly reduce the computational cost while maintaining similar conservation laws. Numerical experiments illustrate the capability of the algorithm for different spatial dimensions, as well as for various initial conditions. In particular, we compare the performance of the overlapping domain decomposition splitting algorithm with the stochastic multi-symplectic method in [S. Jiang, L. Wang and J. Hong, Commun. Comput. Phys., 2013] and the finite difference splitting scheme in [J. Cui, J. Hong, Z. Liu and W. Zhou, J. Differ. Equ., 2019]. We observe that our proposed algorithm has excellent computational efficiency and is highly competitive. It provides a useful tool for solving stochastic partial differential equations.
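A deterministic caricature of one ingredient, Strang splitting with a Crank-Nicolson linear step for the cubic Schrödinger equation $i u_t = -u_{xx} + |u|^2 u$, is sketched below; the paper's algorithm adds overlapping domain decomposition and multiplicative noise on top, and all parameters here are invented. Both substeps are unitary in the discrete $L^2$ norm, which is why the mass drift printed at the end stays at machine precision.

```python
# Strang splitting: exact half-steps for the nonlinear phase rotation around a
# full Crank-Nicolson step for the linear part (periodic grid).
import numpy as np

n, L, dt, steps = 256, 20.0, 1e-3, 500
dx = L / n
x = np.arange(n) * dx - L / 2
u = np.exp(-x**2).astype(complex)             # Gaussian initial profile
mass0 = np.sum(np.abs(u)**2) * dx

# periodic second-difference operator and the unitary Crank-Nicolson propagator
D2 = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1) - 2 * np.eye(n)
D2[0, -1] = D2[-1, 0] = 1.0
D2 /= dx**2
CN = np.linalg.solve(np.eye(n) - 0.5j * dt * D2, np.eye(n) + 0.5j * dt * D2)

for _ in range(steps):
    u *= np.exp(-0.5j * dt * np.abs(u)**2)    # half nonlinear step (exact)
    u = CN @ u                                # full linear step (Crank-Nicolson)
    u *= np.exp(-0.5j * dt * np.abs(u)**2)    # half nonlinear step

print("relative mass drift:", abs(np.sum(np.abs(u)**2) * dx - mass0) / mass0)
```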

In 1-equation URANS models of turbulence the eddy viscosity is given by $\nu_{T}=0.55l(x,t)\sqrt{k(x,t)}$. The length scale $l$ must be pre-specified, and $k(x,t)$ is determined by solving a nonlinear partial differential equation. We show that in interesting cases the spatial mean of $k(x,t)$ satisfies a simple ordinary differential equation. Using its solution in $\nu_{T}$ results in a 1/2-equation model. This model has attractive analytic properties. Further, in comparative tests in 2d and 3d, the velocity statistics produced by the 1/2-equation model are comparable to those of the full 1-equation model.
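To illustrate the mechanics only: once the mean $\bar{k}(t)$ obeys an ODE, the eddy viscosity follows by quadrature. The ODE below, $d\bar{k}/dt = -\bar{k}^{3/2}/l$ (the standard dissipation scaling for decaying turbulence), is an assumed stand-in and not the equation derived in the paper; $l$, $k_0$, and the time horizon are invented.

```python
# Hypothetical 1/2-equation workflow: integrate an assumed mean-k ODE, then
# evaluate nu_T = 0.55 * l * sqrt(k) along the solution.
import numpy as np

l, k0, T, steps = 0.1, 1.0, 2.0, 2000
dt = T / steps
k = np.empty(steps + 1)
k[0] = k0
for i in range(steps):                        # explicit Euler in time
    k[i + 1] = k[i] - dt * k[i] ** 1.5 / l    # assumed dk/dt = -k^{3/2}/l
nu_T = 0.55 * l * np.sqrt(k)                  # Prandtl-Kolmogorov eddy viscosity
print(nu_T[::500])                            # exact solution: k = k0/(1+5t)^2
```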
