
The limitations of turbulence closure models in Reynolds-averaged Navier-Stokes (RANS) simulations contribute significantly to the uncertainty of Computational Fluid Dynamics (CFD). Perturbing the spectral representation of the Reynolds stress tensor within physical limits is common practice in several commercial and open-source CFD solvers in order to obtain estimates for the epistemic uncertainty of RANS turbulence models. Recent research has revealed the need to moderate the amount of Reynolds stress tensor perturbation in order to avoid solver stability issues. In this paper we point out that the resulting common implementation can lead to unintended states of the perturbed Reynolds stress tensor. The combination of eigenvector perturbation and moderation factor may produce moderated eigenvalues that are no longer linearly dependent on the original unperturbed and fully perturbed eigenvalues. Hence, the computational implementation is no longer in accordance with the conceptual idea of the Eigenspace Perturbation Framework. We verify the implementation of the conceptual description with respect to its self-consistency and, by adequately representing the basic concept, formulate a computational implementation that improves the self-consistency of the Reynolds stress tensor perturbation.
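
To make the self-consistency requirement concrete, the following minimal Python sketch shows the eigenvalue-blending step only: the moderated eigenvalues stay a linear combination of the unperturbed and fully perturbed (limiting-state) eigenvalues, which is the property the abstract argues can be lost in common implementations. The function name, the moderation factor `f`, and the chosen limiting state are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Illustrative sketch (not the authors' implementation): blend the eigenvalues
# of the anisotropy tensor linearly between the unperturbed state and a
# limiting (fully perturbed) state, controlled by a moderation factor f.
def perturb_reynolds_stress_anisotropy(a, limiting_eigvals, f):
    """Return the perturbed anisotropy tensor for a moderation factor f in [0, 1]."""
    eigvals, eigvecs = np.linalg.eigh(a)                  # ascending eigenvalues
    blended = (1.0 - f) * eigvals + f * np.asarray(limiting_eigvals)
    return eigvecs @ np.diag(blended) @ eigvecs.T         # reassemble the tensor

# Example: moderate a perturbation toward the one-component limit, whose
# anisotropy eigenvalues are (-1/3, -1/3, 2/3) in ascending order.
a = np.diag([-0.2, 0.0, 0.2])
print(perturb_reynolds_stress_anisotropy(a, [-1/3, -1/3, 2/3], f=0.3))
```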

Related content

Temporal irreversibility, often referred to as the arrow of time, is a fundamental concept in statistical mechanics. Markers of irreversibility also provide a powerful characterisation of information processing in biological systems. However, current approaches tend to describe temporal irreversibility in terms of a single scalar quantity, without disentangling the underlying dynamics that contribute to irreversibility. Here we propose a broadly applicable information-theoretic framework to characterise the arrow of time in multivariate time series, which yields qualitatively different types of irreversible information dynamics. This multidimensional characterisation reveals previously unreported high-order modes of irreversibility, and establishes a formal connection between recent heuristic markers of temporal irreversibility and metrics of information processing. We demonstrate the prevalence of high-order irreversibility in the hyperactive regime of a biophysical model of brain dynamics, showing that our framework is both theoretically principled and empirically useful. This work challenges the view of the arrow of time as a monolithic entity, enhancing both our theoretical understanding of irreversibility and our ability to detect it in practical applications.
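
As a toy illustration of the kind of scalar irreversibility marker that the framework generalises, the snippet below estimates the KL divergence between forward and time-reversed transition probabilities of a coarse-grained univariate series. This is an assumed, simplified marker, not the multivariate, multidimensional decomposition proposed above.

```python
import numpy as np

# Assumed toy marker (not the framework above): KL divergence between the
# forward and time-reversed transition probabilities of a binarized series.
def irreversibility_kl(x, eps=1e-12):
    s = (x > np.median(x)).astype(int)            # coarse-grain into two states
    counts = np.zeros((2, 2))
    for a, b in zip(s[:-1], s[1:]):
        counts[a, b] += 1.0
    p_fwd = counts / counts.sum()
    p_rev = counts.T / counts.sum()               # time reversal swaps (a, b) -> (b, a)
    return float(np.sum(p_fwd * np.log((p_fwd + eps) / (p_rev + eps))))

rng = np.random.default_rng(0)
print(irreversibility_kl(np.cumsum(rng.standard_normal(10_000))))  # ~0 for a reversible walk
```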

Quantile treatment effects (QTEs) can characterize the potentially heterogeneous causal effect of a treatment on different points of the entire outcome distribution. Propensity score (PS) methods are commonly employed for estimating QTEs in non-randomized studies. Empirical and theoretical studies have shown that insufficient and unnecessary adjustment for covariates in PS models can lead to bias and efficiency loss in estimating treatment effects. Striking a balance between bias and efficiency through variable selection is a crucial concern in causal inference. It is essential to acknowledge that the covariates related to the treatment and the outcome may vary across different quantiles of the outcome distribution. However, previous studies have overlooked adjusting for different covariates separately in the PS models when estimating different QTEs. In this article, we propose the quantile regression outcome-adaptive lasso (QROAL) method to select covariates that can provide unbiased and efficient estimates of QTEs. A distinctive feature of our proposed method is the use of linear quantile regression models for constructing penalty weights, enabling covariate selection in PS models separately when estimating different QTEs. We conducted simulation studies to show the superiority of our proposed method over the outcome-adaptive lasso (OAL) method in variable selection. Moreover, the proposed method exhibited favorable performance compared to the OAL method in terms of root mean square error in a range of settings, including both homogeneous and heterogeneous scenarios. Additionally, we applied the QROAL method to datasets from the China Health and Retirement Longitudinal Study (CHARLS) to explore the impact of smoking status on the severity of depression symptoms.
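
The sketch below illustrates the core mechanism described above under stated assumptions: a linear quantile regression of the outcome at level `tau` supplies covariate-specific penalty weights, and an adaptive-lasso propensity score model is then fitted via the standard covariate-rescaling trick. It is a simplified rendering of the idea, not the authors' QROAL implementation.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch of the QROAL idea (not the authors' code): quantile
# regression coefficients at level tau define adaptive penalty weights, which
# are imposed on an L1-penalized propensity score model via covariate rescaling.
def qroal_ps_sketch(X, A, Y, tau=0.5, gamma=1.0, C=1.0):
    qr = sm.QuantReg(Y, sm.add_constant(X)).fit(q=tau)
    beta = np.asarray(qr.params)[1:]                    # drop the intercept
    w = 1.0 / (np.abs(beta) ** gamma + 1e-8)            # adaptive lasso weights
    Xs = X / w                                          # rescaling trick for adaptive lasso
    ps_fit = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(Xs, A)
    coef = ps_fit.coef_.ravel() / w                     # coefficients on the original scale
    return coef, ps_fit.predict_proba(Xs)[:, 1]         # selected covariates and PS estimates

rng = np.random.default_rng(0)
n, p = 500, 10
X = rng.standard_normal((n, p))
A = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))     # treatment depends on the first covariate
Y = X[:, 0] + 0.5 * X[:, 1] + A + rng.standard_normal(n)
coef, ps = qroal_ps_sketch(X, A, Y, tau=0.25)
print(np.round(coef, 2))
```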

This work puts forth low-complexity Riemannian subspace descent algorithms for the minimization of functions over the symmetric positive definite (SPD) manifold. Different from existing Riemannian gradient descent variants, the proposed approach utilizes carefully chosen subspaces that allow the update to be written as a product of the Cholesky factor of the iterate and a sparse matrix. The resulting updates avoid costly matrix operations such as matrix exponentiation and dense matrix multiplication, which are generally required in almost all other Riemannian optimization algorithms on the SPD manifold. We further identify a broad class of functions, arising in diverse applications such as kernel matrix learning, covariance estimation of Gaussian distributions, maximum likelihood parameter estimation of elliptically contoured distributions, and parameter estimation in Gaussian mixture models, over which the Riemannian gradients can be calculated efficiently. The proposed uni-directional and multi-directional Riemannian subspace descent variants incur per-iteration complexities of $\mathcal{O}(n)$ and $\mathcal{O}(n^2)$, respectively, compared to the $\mathcal{O}(n^3)$ or higher complexity incurred by all existing Riemannian gradient descent variants. The superior runtime and low per-iteration complexity of the proposed algorithms are also demonstrated via numerical tests on large-scale covariance estimation problems.
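
A toy example of the kind of update structure described above (an assumption, not the paper's algorithm): the iterate $X = LL^\top$ is maintained through its Cholesky factor, and the factor is right-multiplied by a sparse elementary matrix, so the iterate remains SPD without any matrix exponentiation.

```python
import numpy as np

# Toy illustration of the update structure (an assumption, not the paper's
# algorithm): keep X = L @ L.T via its Cholesky factor L and right-multiply L
# by a sparse elementary matrix; the update itself costs O(n).
def sparse_factor_update(L, i, j, step):
    """Right-multiply L by (I + step * e_i e_j^T)."""
    L = L.copy()
    L[:, j] += step * L[:, i]
    return L

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
L = np.linalg.cholesky(A @ A.T + n * np.eye(n))        # factor of an SPD starting point
L_new = sparse_factor_update(L, i=2, j=0, step=0.1)    # i >= j keeps L lower triangular
X_new = L_new @ L_new.T
print(np.all(np.linalg.eigvalsh(X_new) > 0))           # the iterate remains SPD
```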

We consider the problem of estimating the roughness of the volatility in a stochastic volatility model that arises as a nonlinear function of fractional Brownian motion with drift. To this end, we introduce a new estimator that measures the so-called roughness exponent of a continuous trajectory, based on discrete observations of its antiderivative. We provide conditions on the underlying trajectory under which our estimator converges in a strictly pathwise sense. Then we verify that these conditions are satisfied by almost every sample path of fractional Brownian motion (with drift). As a consequence, we obtain strong consistency theorems in the context of a large class of rough volatility models. Numerical simulations show that our estimation procedure performs well after passing to a scale-invariant modification of our estimator.
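
For intuition only, the following generic scaling-based sketch (not the estimator proposed in the paper) differences the observed antiderivative numerically and regresses log quadratic variations on the log time lag to recover a roughness exponent; the crude differencing step and all names are assumptions.

```python
import numpy as np

# Generic scaling-based sketch (NOT the estimator proposed in the paper):
# difference the observed antiderivative to get a crude proxy for the
# trajectory, then regress log quadratic variations on the log time lag.
def roughness_from_antiderivative(Y, n_levels=8):
    x = np.diff(Y)                                 # proxy for the trajectory (up to scale)
    log_lag, log_qv = [], []
    for m in range(1, n_levels + 1):
        step = 2 ** m
        sub = x[::step]
        incr = sub[1:] - sub[:-1]
        log_lag.append(np.log(step / len(x)))
        log_qv.append(np.log(np.sum(incr ** 2)))
    slope = np.polyfit(log_lag, log_qv, 1)[0]      # sum of squares ~ lag^(2R - 1)
    return 0.5 * (slope + 1.0)

N = 2 ** 17
dt = 1.0 / N
rng = np.random.default_rng(0)
W = np.cumsum(np.sqrt(dt) * rng.standard_normal(N))    # Brownian path, roughness 1/2
Y = np.cumsum(W) * dt                                  # its discrete antiderivative
print(roughness_from_antiderivative(Y))                # roughly 0.5
```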

We model a family of closed kinematic chains, known as Kaleidocycles, using the theory of discrete spatial curves. By leveraging the connection between the deformation of discrete curves and semi-discrete integrable systems, we describe the motion of a Kaleidocycle in terms of elliptic theta functions. This study showcases an interesting example in which an integrable system generates an orbit in the space of real solutions of polynomial equations defined by geometric constraints.

A new information-theoretic condition is presented for reconstructing a discrete random variable $X$ based on the knowledge of a set of discrete functions of $X$. The reconstruction condition is derived from Shannon's 1953 lattice theory, equipped with the two entropic metrics of Shannon and Rajski. Because this theoretical material is relatively little known and appears dispersed across different references, we first provide a synthetic description (with complete proofs) of its concepts, such as total, common and complementary information. Definitions and properties of the two entropic metrics are also fully detailed and shown to be compatible with the lattice structure. A new geometric interpretation of this lattice structure is then investigated, leading to a necessary (and sometimes sufficient) condition for reconstructing the discrete random variable $X$ given a set $\{ X_1,\ldots,X_{n} \}$ of elements in the lattice generated by $X$. Finally, this condition is illustrated in five specific examples of perfect reconstruction problems: reconstruction of a symmetric random variable from the knowledge of its sign and absolute value, reconstruction of a word from a set of linear combinations, reconstruction of an integer from its prime signature (fundamental theorem of arithmetic) and from its remainders modulo a set of coprime integers (Chinese remainder theorem), and reconstruction of the sorting permutation of a list from a minimal set of pairwise comparisons.
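
The first example in the list above can be checked numerically: a symmetric discrete random variable $X$ is perfectly reconstructible from $(\operatorname{sign}(X), |X|)$, i.e. the conditional entropy $H(X \mid \operatorname{sign}(X), |X|)$ vanishes. The short script below verifies this empirically; the estimator and variable names are illustrative.

```python
import numpy as np
from collections import Counter

# Worked example from the list above: X is determined by (sign(X), |X|),
# so the empirical conditional entropy H(X | sign(X), |X|) should be ~0 bits.
def cond_entropy(joint_counts, given_counts):
    n = sum(joint_counts.values())
    h = 0.0
    for key, c in joint_counts.items():
        p_xy = c / n
        p_y = given_counts[key[1:]] / n
        h -= p_xy * np.log2(p_xy / p_y)
    return h

rng = np.random.default_rng(0)
X = rng.choice([-2, -1, 1, 2], size=100_000)      # a symmetric discrete variable
sgn, mag = np.sign(X), np.abs(X)
joint = Counter(zip(X, sgn, mag))
given = Counter(zip(sgn, mag))
print(cond_entropy(joint, given))                 # ~0 bits: perfect reconstruction
```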

We consider an unknown multivariate function representing a system, such as a complex numerical simulator, taking both deterministic and uncertain inputs. Our objective is to estimate the set of deterministic inputs leading to outputs whose probability (with respect to the distribution of the uncertain inputs) of belonging to a given set is less than a given threshold. This problem, which we call Quantile Set Inversion (QSI), occurs for instance in the context of robust (reliability-based) optimization problems, when looking for the set of solutions that satisfy the constraints with sufficiently large probability. To solve the QSI problem, we propose a Bayesian strategy based on Gaussian process modeling and the Stepwise Uncertainty Reduction (SUR) principle, to sequentially choose the points at which the function should be evaluated to efficiently approximate the set of interest. We illustrate the performance and interest of the proposed SUR strategy through several numerical experiments.
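
As a simplified, assumed sketch of the QSI setting (not the paper's SUR criterion), the code below fits a Gaussian process surrogate to a toy simulator $f(x,u)$, estimates for each deterministic input $x$ the probability over the uncertain input $u$ that the output exceeds a critical level, and reports the inputs whose exceedance probability lies below a threshold $\alpha$.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Assumed, simplified QSI sketch (not the paper's SUR criterion): a GP surrogate
# of a toy simulator f(x, u) is used to estimate P(f(x, U) > threshold) for each
# deterministic input x, and the set {x : probability < alpha} is reported.
def f(x, u):
    return np.sin(3.0 * x) + 0.5 * u

rng = np.random.default_rng(0)
XU_train = rng.uniform(0.0, 1.0, (30, 2))               # columns: (x, u)
y_train = f(XU_train[:, 0], XU_train[:, 1])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(XU_train, y_train)

x_grid = np.linspace(0.0, 1.0, 50)
u_samples = rng.uniform(0.0, 1.0, 200)                  # samples of the uncertain input
prob = np.array([
    np.mean(gp.predict(np.column_stack([np.full_like(u_samples, x), u_samples])) > 0.8)
    for x in x_grid
])
alpha = 0.1
print(x_grid[prob < alpha])                             # estimated QSI set
```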

We present a rigorous and precise analysis of the maximum degree and the average degree in a dynamic duplication-divergence graph model introduced by Sol\'e, Pastor-Satorras et al., in which the graph grows according to a duplication-divergence mechanism, i.e. by iteratively creating a copy of some node and then randomly altering the neighborhood of the new node with probability $p$. This model captures the growth of some real-world processes, e.g. biological or social networks. In this paper, we prove that for some $0 < p < 1$ the maximum degree and the average degree of a duplication-divergence graph on $t$ vertices are asymptotically concentrated with high probability around $t^p$ and $\max\{t^{2 p - 1}, 1\}$, respectively, i.e. they are within at most a polylogarithmic factor of these values with probability at least $1 - t^{-A}$ for any constant $A > 0$.
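
The model is simple to simulate, which makes the stated concentration targets easy to eyeball. The sketch below grows a graph by pure duplication with edge-retention probability $p$ (one reasonable reading of the mechanism above, used here only as an assumption) and reports the maximum and average degree for comparison with $t^p$ and $\max\{t^{2p-1}, 1\}$.

```python
import random

# Simulation sketch of a duplication-divergence mechanism (assumed reading:
# each inherited edge is kept independently with probability p); compare the
# printed degrees with t**p and max(t**(2*p - 1), 1).
def duplication_divergence(t, p, seed=0):
    random.seed(seed)
    adj = {0: {1}, 1: {0}}                         # start from a single edge
    for new in range(2, t):
        parent = random.randrange(new)
        kept = {v for v in adj[parent] if random.random() < p}
        adj[new] = kept
        for v in kept:
            adj[v].add(new)
    return adj

adj = duplication_divergence(t=20_000, p=0.4)
degrees = [len(nb) for nb in adj.values()]
print(max(degrees), sum(degrees) / len(degrees))
```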

We present a practical and effective method for rigorously estimating quantities associated to top eigenvalues of transfer operators to very high precision. The method combines explicit error bounds of the Lagrange-Chebyshev approximation with an established min-max method. We illustrate its applicability by significantly improving rigorous estimates on various ergodic quantities associated to the Bolyai-R\'enyi map.
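
For orientation, here is a non-rigorous numerical illustration of the ingredients named above; the paper's contribution is the rigorous error control, which this sketch does not provide, and the classical Gauss map is used here as a stand-in example rather than the Bolyai-R\'enyi map studied in the paper. The snippet builds a Lagrange-Chebyshev collocation of the Gauss-map transfer operator $(\mathcal{L}f)(x) = \sum_{k \ge 1} f(1/(k+x))/(k+x)^2$, whose top eigenvalue equals 1.

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

# Non-rigorous illustration: Lagrange-Chebyshev collocation of the Gauss-map
# transfer operator; its leading eigenvalue is 1 (eigenfunction 1/(1+x)).
N, K = 24, 2000                                   # collocation nodes, series truncation
nodes = 0.5 * (1 - np.cos(np.pi * np.arange(N) / (N - 1)))   # Chebyshev points on [0, 1]

M = np.zeros((N, N))
k = np.arange(1, K + 1)
for j in range(N):
    e_j = np.zeros(N)
    e_j[j] = 1.0
    ell_j = BarycentricInterpolator(nodes, e_j)   # j-th Lagrange basis polynomial
    for i, x in enumerate(nodes):
        M[i, j] = np.sum(ell_j(1.0 / (k + x)) / (k + x) ** 2)

print(np.max(np.abs(np.linalg.eigvals(M))))       # close to 1
```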

The main computational cost per iteration of adaptive cubic regularization methods for solving large-scale nonconvex problems is the computation of the step $s_k$, which requires an approximate minimizer of the cubic model. We propose a new approach in which this minimizer is sought in a low dimensional subspace that, in contrast to classical approaches, is reused for a number of iterations. A regularized Newton step to correct $s_k$ is also incorporated whenever needed. We show that our method increases efficiency while preserving the worst-case complexity of classical cubic regularized methods. We also explore the use of rational Krylov subspaces for the subspace minimization, to overcome some of the issues encountered when using polynomial Krylov subspaces. We provide several experimental results illustrating the gains of the new approach when compared to classic implementations.
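
The subspace step discussed above can be illustrated with a small assumed sketch: minimize the cubic model $m(s) = g^\top s + \tfrac{1}{2} s^\top H s + \tfrac{\sigma}{3}\|s\|^3$ over a fixed orthonormal (Krylov) basis $V$, which is the kind of low-dimensional problem that could be reused across iterations. This is not the paper's solver; the basis construction and the inner optimizer are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed sketch of the subspace step (not the paper's solver): minimize the
# cubic model m(s) = g^T s + 0.5 s^T H s + (sigma/3) ||s||^3 over span(V),
# where V has orthonormal columns (e.g. a reusable Krylov basis).
def cubic_model_min_in_subspace(g, H, sigma, V):
    gs, Hs = V.T @ g, V.T @ H @ V                  # reduced gradient and Hessian
    def m(z):                                      # ||V z|| = ||z|| since V is orthonormal
        return gs @ z + 0.5 * z @ Hs @ z + (sigma / 3.0) * np.linalg.norm(z) ** 3
    z = minimize(m, np.zeros(V.shape[1]), method="BFGS").x
    return V @ z                                   # lift the step back to the full space

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, n))
H = 0.5 * (A + A.T)
g = rng.standard_normal(n)
K = [g]
for _ in range(d - 1):                             # Krylov basis {g, Hg, ..., H^(d-1) g}
    K.append(H @ K[-1])
V, _ = np.linalg.qr(np.column_stack(K))
s = cubic_model_min_in_subspace(g, H, sigma=1.0, V=V)
print(np.linalg.norm(s))
```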
