
The propagation of internal gravity waves in stratified media, such as those found in ocean basins and lakes, leads to the development of geometrical patterns called "attractors". These structures accumulate much of the wave energy and make the fluid flow highly singular. In analytical terms, this phenomenon has been attributed to the presence of a continuous spectrum in certain nonlocal zeroth-order pseudo-differential operators. In this work, we analyze the generation of these attractors from a numerical-analysis perspective. First, we propose a high-order pseudo-spectral method to solve the evolution problem (whose long-term behaviour is known not to be square-integrable). Then, we use similar tools to discretize the corresponding eigenvalue problem. Since the eigenvalues are embedded in a continuous spectrum, we compute them using viscous approximations. Finally, we explore the effect that the embedded eigenmodes have on the long-term evolution of the system.


We develop the no-propagate algorithm for sampling the linear response of random dynamical systems, i.e., non-uniformly hyperbolic deterministic systems perturbed by noise with smooth density. We first derive a Monte Carlo-type formula and then the algorithm, which differs from ensemble (stochastic-gradient) algorithms, finite-element algorithms, and fast-response algorithms: it does not involve the propagation of vectors or covectors, and only the density of the noise is differentiated, so the formula is not cursed by gradient explosion, dimensionality, or non-hyperbolicity. We demonstrate the algorithm on a tent map perturbed by noise and on a chaotic neural network with 51 layers $\times$ 9 neurons. By itself, the algorithm approximates the linear response of non-hyperbolic deterministic systems, with an additional error proportional to the noise. We also discuss its potential as a building block within larger algorithms with smaller error.
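To illustrate the core idea (only the Gaussian noise density is differentiated; no tangent vectors or covectors are propagated), here is a hedged sketch of a likelihood-ratio estimator of the linear response. The map is a toy linear system, chosen as an assumption so that the truncated-horizon answer is known in closed form; the paper's actual targets are a noisy tent map and a chaotic neural network.

```python
import numpy as np

def lr_response(gamma=0.0, a=0.5, eps=0.3, horizon=12,
                n_steps=200_000, burn=1_000, seed=0):
    """Likelihood-ratio estimate of d E[Phi(X)] / d gamma at stationarity
    for X_{n+1} = a X_n + gamma + eps * xi_n: only the Gaussian noise
    density is differentiated; nothing is propagated along trajectories."""
    rng = np.random.default_rng(seed)
    x, window = 0.1, []
    phis, scores = [], []
    for n in range(n_steps):
        z = rng.standard_normal()
        x_next = a * x + gamma + eps * z
        # d/dgamma log p(x_next | x) = (x_next - a*x - gamma) / eps**2 = z / eps
        window.append(z / eps)
        if len(window) > horizon:          # truncate to a finite horizon
            window.pop(0)
        if n > burn:
            phis.append(x_next)            # observable Phi(x) = x
            scores.append(sum(window))
        x = x_next
    phis, scores = np.array(phis), np.array(scores)
    # centering Phi reduces variance without changing the expectation
    return float(np.mean((phis - phis.mean()) * scores))

est = lr_response()
# truncated-horizon ground truth for the linear map: sum_{k<h} a^k
exact = (1 - 0.5**12) / (1 - 0.5)
```

For this linear toy the exact response is about 2, and the Monte Carlo estimate matches it to within sampling error; for chaotic systems the same formula avoids the gradient explosion that trajectory-propagation methods suffer from.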

We prove closed-form equations for the exact high-dimensional asymptotics of a family of first-order gradient-based methods that learn an estimator (e.g., an M-estimator, a shallow neural network, ...) from observations on Gaussian data via empirical risk minimization. This includes widely used algorithms such as stochastic gradient descent (SGD) and Nesterov acceleration. The obtained equations match those resulting from the discretization of dynamical mean-field theory (DMFT) equations from statistical physics when applied to gradient flow. Our proof method allows us to give an explicit description of how memory kernels build up in the effective dynamics and to include non-separable update functions, allowing datasets with non-identity covariance matrices. Finally, we provide numerical implementations of the equations for SGD with generic extensive batch size and constant learning rate.
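The setting being analyzed can be simulated directly. Below is a minimal sketch of SGD with a constant learning rate and an extensive batch size on Gaussian data; the teacher-student regression task and all parameter values are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 100
X = rng.standard_normal((n, d))                  # Gaussian design
w_star = rng.standard_normal(d) / np.sqrt(d)     # hypothetical teacher weights
y = X @ w_star + 0.1 * rng.standard_normal(n)

w = np.zeros(d)
lr, batch, steps = 0.2, 200, 300                 # constant step, batch of size O(n)
risk0 = float(np.mean((X @ w - y) ** 2))         # empirical risk at initialization
for t in range(steps):
    idx = rng.choice(n, size=batch, replace=False)
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch   # minibatch gradient
    w -= lr * grad
risk = float(np.mean((X @ w - y) ** 2))          # empirical risk after training
```

In the proportional limit $n, d \to \infty$ with $d/n$ fixed, the trajectory of such risk curves is exactly what the closed-form DMFT-type equations describe.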

The simulation of supersonic or hypersonic flows often suffers from numerical shock instabilities when the flow field contains strong shocks, limiting the further application of shock-capturing schemes. In this paper, we develop a unified matrix stability analysis method for schemes with three-point stencils and present MSAT, an open-source tool for quantitatively analyzing the shock instability problem. Based on the finite-volume approach on structured grids, MSAT can be employed to investigate the mechanism of the shock instability problem, evaluate the robustness of numerical schemes, and thereby help to develop robust schemes. MSAT can also analyze practical simulations of supersonic or hypersonic flows, assess whether they will suffer from shock instabilities, and assist in selecting appropriate numerical schemes accordingly. As a result, MSAT is a helpful tool for investigating the shock instability problem and helping to cure it.
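The essence of matrix stability analysis is to assemble the matrix of the (linearized) semi-discrete scheme and inspect its eigenvalues: any eigenvalue with positive real part signals an unstable discretization. The sketch below applies this to a deliberately simple stand-in, 1-D linear advection with three-point stencils, rather than the 2-D Euler equations that MSAT targets.

```python
import numpy as np

N, c, dx = 64, 1.0, 1.0 / 64

def advection_matrix(scheme):
    """Matrix A of the semi-discrete scheme du/dt = A u for u_t + c u_x = 0
    on a periodic grid, using a three-point stencil."""
    A = np.zeros((N, N))
    for i in range(N):
        im, ip = (i - 1) % N, (i + 1) % N
        if scheme == "upwind":        # backward difference, correct side for c > 0
            A[i, i], A[i, im] = -c / dx, c / dx
        elif scheme == "downwind":    # forward difference, wrong side for c > 0
            A[i, i], A[i, ip] = c / dx, -c / dx
    return A

# Max real part of the spectrum: > 0 means the scheme amplifies perturbations.
max_re_upwind = np.linalg.eigvals(advection_matrix("upwind")).real.max()
max_re_downwind = np.linalg.eigvals(advection_matrix("downwind")).real.max()
```

The upwind matrix has spectrum $(c/\Delta x)(e^{-i\theta} - 1)$ with non-positive real parts (stable), while the downwind matrix has eigenvalues with real parts up to $2c/\Delta x$ (unstable); MSAT performs the analogous computation for shock-capturing schemes linearized around a steady shock.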

Medical studies of chronic disease are often interested in the relationship between longitudinal risk-factor profiles and individuals' later-life disease outcomes. These profiles are typically subject to intermediate structural changes due to treatment or environmental influences. Such studies may be analyzed within the joint-model framework. However, current joint models account neither for structural changes in the residual variability of the risk profile nor for the influence of subject-specific residual variability on the time-to-event outcome. In the present paper, we extend the joint-model framework to address these two heterogeneous intra-individual variabilities. A Bayesian approach is used to estimate the unknown parameters, and simulation studies are conducted to investigate the performance of the method. The proposed joint model is applied to the Framingham Heart Study to investigate the influence of anti-hypertensive medication on systolic blood pressure variability together with its effect on the risk of developing cardiovascular disease. We show that anti-hypertensive medication is associated with elevated systolic blood pressure variability and that increased variability elevates the risk of developing cardiovascular disease.

Multi-reference alignment (MRA) is the problem of recovering a signal from its multiple noisy copies, each acted upon by a random group element. MRA is mainly motivated by single-particle cryo-electron microscopy (cryo-EM) that has recently joined X-ray crystallography as one of the two leading technologies to reconstruct biological molecular structures. Previous papers have shown that in the high noise regime, the sample complexity of MRA and cryo-EM is $n=\omega(\sigma^{2d})$, where $n$ is the number of observations, $\sigma^2$ is the variance of the noise, and $d$ is the lowest-order moment of the observations that uniquely determines the signal. In particular, it was shown that in many cases, $d=3$ for generic signals, and thus the sample complexity is $n=\omega(\sigma^6)$. In this paper, we analyze the second moment of the MRA and cryo-EM models. First, we show that in both models the second moment determines the signal up to a set of unitary matrices, whose dimension is governed by the decomposition of the space of signals into irreducible representations of the group. Second, we derive sparsity conditions under which a signal can be recovered from the second moment, implying sample complexity of $n=\omega(\sigma^4)$. Notably, we show that the sample complexity of cryo-EM is $n=\omega(\sigma^4)$ if at most one third of the coefficients representing the molecular structure are non-zero; this bound is near-optimal. The analysis is based on tools from representation theory and algebraic geometry. We also derive bounds on recovering a sparse signal from its power spectrum, which is the main computational problem of X-ray crystallography.
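The role of the second moment can be seen in a toy MRA model over the cyclic group: each observation is a random circular shift of the signal plus Gaussian noise, and the power spectrum (the shift-invariant part of the second moment) survives averaging. The sketch below is an illustrative assumption-laden toy, not the paper's cryo-EM setting, and it uses no sparsity.

```python
import numpy as np

rng = np.random.default_rng(0)
L, n, sigma = 21, 20_000, 0.5
x = rng.standard_normal(L)                        # unknown signal

# Observations: random cyclic shift of x plus i.i.d. Gaussian noise.
obs = np.empty((n, L))
for i in range(n):
    obs[i] = np.roll(x, rng.integers(L)) + sigma * rng.standard_normal(L)

# The power spectrum |x_hat|^2 is invariant to cyclic shifts, so it can be
# estimated from the data's second moment after debiasing the noise term
# (E|noise_hat_k|^2 = L * sigma^2 for each frequency k).
ps_true = np.abs(np.fft.fft(x)) ** 2
ps_est = np.mean(np.abs(np.fft.fft(obs, axis=1)) ** 2, axis=0) - L * sigma**2
rel_err = float(np.linalg.norm(ps_est - ps_true) / np.linalg.norm(ps_true))
```

Recovering $x$ itself from the power spectrum (phase retrieval) is exactly the computational problem of X-ray crystallography mentioned above; the paper's sparsity conditions govern when that final step is well-posed.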

Bayesian linear mixed-effects models and Bayesian ANOVA are increasingly used in the cognitive sciences to perform null hypothesis tests, where a null hypothesis that an effect is zero is compared with an alternative hypothesis that the effect exists and differs from zero. While software tools for Bayes factor null hypothesis tests are easily accessible, how to specify the data and the model correctly is often unclear. In Bayesian approaches, many authors aggregate data at the by-subject level and estimate Bayes factors on the aggregated data. Here, we use simulation-based calibration for model inference, applied to several example experimental designs, to demonstrate that, as with frequentist analysis, such null hypothesis tests on aggregated data can be problematic in Bayesian analysis. Specifically, when random slope variances differ (i.e., the sphericity assumption is violated), Bayes factors are too conservative for contrasts where the variance is small and too liberal for contrasts where the variance is large. If the sphericity assumption is violated, running Bayesian ANOVA on aggregated data can likewise lead to biased Bayes factor results. Moreover, Bayes factors for by-subject aggregated data are biased (too liberal) when random item slope variance is present but ignored in the analysis. These problems can be circumvented or reduced by running Bayesian linear mixed-effects models on non-aggregated data, such as individual trials, and by explicitly modeling the full random-effects structure. Reproducible code is available from \url{//osf.io/mjf47/}.

Stochastic space-time fractional diffusion equations often appear in modeling heat propagation in non-homogeneous media. In this paper, we first investigate the Mittag--Leffler Euler integrator for a class of stochastic space-time fractional diffusion equations, whose superconvergence order we obtain by developing a decomposition of the time-fractional integral; this decomposition is the key to dealing with the singularity of the solution operator. Moreover, we study the Freidlin--Wentzell type large deviation principles of the underlying equation and of its Mittag--Leffler Euler integrator based on the weak convergence approach. In particular, we prove that the large deviation rate function of the Mittag--Leffler Euler integrator $\Gamma$-converges to that of the underlying equation.

Formation control of multi-agent systems has been a prominent research topic, spanning both theoretical and practical domains, over the past two decades. Our study focuses on the leader-follower framework and addresses two critical, previously overlooked aspects. First, we investigate the impact of an unknown nonlinear manifold, which adds complexity to the formation control problem. Second, we address the practical constraint of a limited follower sensing range, which makes it difficult for followers to localize the leader accurately. Our core objective is to employ Koopman operator theory and Extended Dynamic Mode Decomposition to build a reliable prediction algorithm that lets the follower robot anticipate the leader's position effectively. Experiments on an elliptical paraboloid manifold with two omni-directional wheeled robots validate the prediction algorithm's effectiveness.
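The prediction machinery can be sketched with a minimal Extended Dynamic Mode Decomposition (EDMD): fit a finite-dimensional Koopman approximation from snapshot pairs by least squares, then iterate it to predict future states. The toy dynamics and dictionary below are illustrative assumptions (chosen so that the dictionary is Koopman-invariant and the approximation is exact), not the robots' manifold dynamics.

```python
import numpy as np

lam, mu = 0.9, 0.5
def step(z):
    """Toy nonlinear dynamics with a known finite Koopman-invariant subspace."""
    x, y = z
    return np.array([lam * x, mu * y + (lam**2 - mu) * x**2])

def psi(z):
    """Dictionary of observables (x, y, x^2); invariant under the dynamics."""
    x, y = z
    return np.array([x, y, x * x])

rng = np.random.default_rng(0)
Z = rng.uniform(-1, 1, size=(200, 2))                 # snapshot states
Psi_X = np.array([psi(z) for z in Z])
Psi_Y = np.array([psi(step(z)) for z in Z])
K, *_ = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)     # Psi_Y ~ Psi_X @ K

# Predict 10 steps ahead from a new state using only the Koopman matrix K.
z0 = np.array([0.7, -0.3])
phi = psi(z0)
for _ in range(10):
    phi = phi @ K
pred = phi[:2]                       # first two dictionary entries are (x, y)
true = z0.copy()
for _ in range(10):
    true = step(true)
err = float(np.linalg.norm(pred - true))
```

In the leader-follower setting, the same least-squares fit would be trained on logged leader trajectories, with the dictionary chosen to respect the manifold geometry.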

This paper presents a robust numerical solution to the electromagnetic scattering problem involving multiple multi-layered cavities in both transverse magnetic and transverse electric polarizations. A transparent boundary condition is introduced at the open aperture of the cavity to reduce the problem from an unbounded domain to the bounded cavities. By employing a Fourier series expansion of the solution, we reduce the original boundary value problem to a two-point boundary value problem, represented as an ordinary differential equation for the Fourier coefficients. The analytical derivation of a connection formula for the solution enables us to construct a small-scale system that involves only the Fourier coefficients on the aperture, streamlining the solution process. Furthermore, we propose accurate numerical quadrature formulas designed to efficiently handle the weakly singular integrals that arise in the transparent boundary condition. A series of numerical experiments demonstrates the effectiveness and versatility of the proposed method.

The quantitative characterization of the evolution of the error distribution as the step size tends to zero is a fundamental problem in the analysis of stochastic numerical methods. In this paper, we address this problem by proving that the error of numerical methods for linear stochastic differential equations satisfies limit theorems and a large deviation principle. To the best of our knowledge, this is the first result quantitatively characterizing the evolution of the error distribution of a stochastic numerical method. As an application, we provide a new perspective on the superiority of symplectic methods for stochastic Hamiltonian systems in long-time computation. Specifically, taking the linear stochastic oscillator as the test equation, we show that in long-time computation the probability that the error deviates from its typical value is smaller for symplectic methods than for non-symplectic methods, which reveals that stochastic symplectic methods are more stable than non-symplectic ones.
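The long-time contrast between symplectic and non-symplectic methods is easy to see numerically on the test equation named above, the linear stochastic oscillator $dX = Y\,dt$, $dY = -X\,dt + \sigma\,dW$, whose exact second moment grows linearly: $\mathbb{E}[X^2 + Y^2] = 1 + \sigma^2 t$ from $(X_0, Y_0) = (1, 0)$. The step size, horizon, and ensemble size below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, h, n_steps, n_paths = 1.0, 0.05, 2000, 2000    # horizon T = h * n_steps = 100
dW = np.sqrt(h) * rng.standard_normal((n_steps, n_paths))

def run(symplectic):
    """Mean of X^2 + Y^2 at time T over an ensemble of sample paths."""
    x = np.ones(n_paths)
    y = np.zeros(n_paths)
    for k in range(n_steps):
        if symplectic:                # symplectic (semi-implicit) Euler
            y = y - h * x + sigma * dW[k]
            x = x + h * y             # uses the updated y
        else:                         # explicit Euler--Maruyama
            x_new = x + h * y
            y = y - h * x + sigma * dW[k]
            x = x_new
    return float(np.mean(x**2 + y**2))

m_sym = run(True)
m_em = run(False)
exact = 1.0 + sigma**2 * h * n_steps  # exact linear growth of the second moment
```

The explicit Euler-Maruyama map multiplies the second moment by $1 + h^2$ per step, so its error grows geometrically in long-time computation, whereas the symplectic update (whose one-step map has determinant one) tracks the exact linear growth; the large deviation analysis in the paper quantifies how rare large error deviations are for each class of method.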
