
We use backward error analysis for differential equations to obtain modified or distorted equations describing the behaviour of the Newmark scheme applied to the transient structural dynamics equation. Using these results, we show how to construct compensation terms from the original parameters of the system, which improve the performance of Newmark simulations without changing the time step or modifying the scheme itself. Two such compensations are given: one eliminates numerical damping, while the other achieves fourth-order accurate calculations using the traditionally second-order Newmark method.
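The compensation terms above are constructed in the paper itself; as context, the baseline scheme they modify is the standard Newmark-beta integrator for $M\ddot u + C\dot u + Ku = f(t)$. Below is a minimal NumPy sketch of that baseline only (the parameter choices and toy oscillator are illustrative assumptions, not the paper's setup, and no compensation terms are included).

```python
import numpy as np

def newmark(M, C, K, f, u0, v0, dt, n_steps, gamma=0.5, beta=0.25):
    """Minimal Newmark-beta integrator for M u'' + C u' + K u = f(t).

    gamma = 1/2, beta = 1/4 is the second-order, unconditionally stable
    average-acceleration variant; gamma > 1/2 introduces numerical damping.
    """
    u, v = u0.copy(), v0.copy()
    a = np.linalg.solve(M, f(0.0) - C @ v - K @ u)   # consistent initial acceleration
    S = M + gamma * dt * C + beta * dt**2 * K        # effective (constant) system matrix
    history = [u.copy()]
    for n in range(n_steps):
        t_next = (n + 1) * dt
        u_star = u + dt * v + dt**2 * (0.5 - beta) * a   # displacement predictor
        v_star = v + dt * (1.0 - gamma) * a              # velocity predictor
        a = np.linalg.solve(S, f(t_next) - C @ v_star - K @ u_star)
        u = u_star + beta * dt**2 * a
        v = v_star + gamma * dt * a
        history.append(u.copy())
    return np.array(history)

# Toy usage: a single undamped oscillator with K/M = omega^2 = 4.
M = np.array([[1.0]]); C = np.zeros((1, 1)); K = np.array([[4.0]])
traj = newmark(M, C, K, lambda t: np.zeros(1), np.array([1.0]), np.zeros(1),
               dt=0.05, n_steps=200)
```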

Related content

Functional data analysis almost always involves smoothing discrete observations into curves, because they are never observed in continuous time and rarely without error. Although smoothing parameters affect the subsequent inference, data-driven methods for selecting these parameters are not well developed, frustrated by the difficulty of using all the information shared by curves while remaining computationally efficient. On the one hand, smoothing individual curves in an isolated, albeit sophisticated, way ignores useful signals present in the other curves. On the other hand, bandwidth selection by automatic procedures such as cross-validation after pooling all the curves together quickly becomes computationally infeasible due to the large number of data points. In this paper we propose a new data-driven, adaptive kernel smoothing method, specifically tailored for functional principal components analysis through the derivation of sharp, explicit risk bounds for the eigen-elements. The minimization of these quadratic risk bounds provides refined, yet computationally efficient, bandwidth rules for each eigen-element separately. Both common and independent design cases are allowed. Rates of convergence for the estimators are derived. An extensive simulation study, designed in a versatile manner to closely mimic the characteristics of real data sets, supports our methodological contribution. An illustration on a real data application is provided.
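As background for the smoothing step discussed above, here is a minimal sketch of curve-by-curve kernel smoothing followed by FPCA on a common grid. It uses a plain Nadaraya-Watson smoother with a single bandwidth `h`; the paper's eigen-element-specific bandwidth rules derived from the risk bounds are not reproduced, and the function names and toy data are assumptions.

```python
import numpy as np

def nw_smooth(t_obs, y_obs, t_grid, h):
    """Nadaraya-Watson smoother with a Gaussian kernel and bandwidth h."""
    w = np.exp(-0.5 * ((t_grid[:, None] - t_obs[None, :]) / h) ** 2)
    return (w @ y_obs) / np.maximum(w.sum(axis=1), 1e-12)

def fpca_from_noisy_curves(obs, t_grid, h):
    """Smooth each curve onto a common grid, then estimate the FPCA eigen-elements.

    obs is a list of (t_obs, y_obs) pairs; an independent design is allowed.
    """
    X = np.array([nw_smooth(t, y, t_grid, h) for t, y in obs])
    Xc = X - X.mean(axis=0)
    dt = t_grid[1] - t_grid[0]
    cov = Xc.T @ Xc / len(obs)                    # empirical covariance on the grid
    vals, vecs = np.linalg.eigh(cov * dt)         # quadrature-weighted eigenproblem
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order] / np.sqrt(dt)   # eigenvalues, eigenfunctions

# Toy usage: 50 noisy curves observed at random design points.
rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 1.0, 101)
obs = []
for _ in range(50):
    t = np.sort(rng.uniform(0.0, 1.0, 30))
    score = rng.normal(size=2)
    y = (score[0] * np.sin(2 * np.pi * t) + score[1] * np.cos(2 * np.pi * t)
         + 0.2 * rng.normal(size=30))
    obs.append((t, y))
eigvals, eigfuns = fpca_from_noisy_curves(obs, t_grid, h=0.08)
```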

We give a new, constructive uniqueness theorem for tensor decomposition. It applies to order 3 tensors of format $n \times n \times p$ and can prove uniqueness of decomposition for generic tensors up to rank $r=4n/3$ as soon as $p \geq 4$. One major advantage over Kruskal's uniqueness theorem is that our theorem has an algorithmic proof, and the resulting algorithm is efficient. Like the uniqueness theorem, it applies in the range $n \leq r \leq 4n/3$. As a result, we obtain the first efficient algorithm for overcomplete decomposition of generic tensors of order 3. For instance, prior to this work it was not known how to efficiently decompose generic tensors of format $n \times n \times n$ and rank $r=1.01n$ (or rank $r \leq (1+\epsilon) n$, for some constant $\epsilon >0$). Efficient overcomplete decomposition of generic tensors of format $n \times n \times 3$ remains an open problem. Our results are based on the method of commuting extensions pioneered by Strassen for the proof of his $3n/2$ lower bound on tensor rank and border rank. In particular, we rely on an algorithm for the computation of commuting extensions recently proposed in a companion paper, and on the classical diagonalization-based "Jennrich algorithm" for undercomplete tensor decomposition.
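For reference, the classical Jennrich (simultaneous-diagonalization) algorithm mentioned above, which handles the undercomplete regime $r \le n$, can be sketched as follows. This is not the paper's commuting-extensions algorithm; the helper names and toy tensor are illustrative.

```python
import numpy as np

def jennrich(T, r):
    """Jennrich's algorithm for T = sum_i a_i (x) b_i (x) c_i of shape (n, n, p),
    in the undercomplete regime r <= n with generic factors (A, B of full
    column rank, no two columns of C proportional)."""
    n, _, p = T.shape
    rng = np.random.default_rng(0)
    x, y = rng.normal(size=p), rng.normal(size=p)
    Mx = np.tensordot(T, x, axes=([2], [0]))          # = A diag(C^T x) B^T
    My = np.tensordot(T, y, axes=([2], [0]))          # = A diag(C^T y) B^T
    # Eigenvectors of Mx My^+ recover the a_i; eigenvectors of (My^+ Mx)^T
    # recover the b_i, with matching eigenvalues (C^T x)_i / (C^T y)_i.
    wa, A = np.linalg.eig(Mx @ np.linalg.pinv(My))
    wb, B = np.linalg.eig((np.linalg.pinv(My) @ Mx).T)
    ia = np.argsort(-np.abs(wa))[:r]                  # keep the r nonzero eigenvalues
    ib = np.argsort(-np.abs(wb))[:r]
    ia = ia[np.argsort(wa[ia].real)]                  # align A- and B-columns
    ib = ib[np.argsort(wb[ib].real)]                  # via their common eigenvalues
    A, B = A[:, ia].real, B[:, ib].real
    # Recover C from a linear least-squares problem over the flattened slices.
    KR = np.stack([np.kron(A[:, i], B[:, i]) for i in range(r)], axis=1)
    C = np.linalg.lstsq(KR, T.reshape(n * n, p), rcond=None)[0].T
    return A, B, C

# Toy usage: a random rank-3 tensor of format 5 x 5 x 4.
rng = np.random.default_rng(1)
A0, B0, C0 = rng.normal(size=(5, 3)), rng.normal(size=(5, 3)), rng.normal(size=(4, 3))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = jennrich(T, r=3)
```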

Solving partial differential equations (PDEs) on fine spatio-temporal scales for high-fidelity solutions is critical for numerous scientific breakthroughs. Yet, this process can be prohibitively expensive, owing to the inherent complexities of the problems, including nonlinearity and multiscale phenomena. To speed up large-scale computations, a process known as downscaling is employed, which generates high-fidelity approximate solutions from their low-fidelity counterparts. In this paper, we propose a novel Physics-Guided Diffusion Model (PGDM) for downscaling. Our model, initially trained on a dataset comprising low- and high-fidelity paired solutions across coarse and fine scales, generates new high-fidelity approximations from any new low-fidelity inputs. These outputs are subsequently refined through fine-tuning, aimed at minimizing the physical discrepancies as defined by the discretized PDEs at the finer scale. We evaluate and benchmark our model's performance against other downscaling baselines in three categories of nonlinear PDEs. Our numerical experiments demonstrate that our model not only outperforms the baselines but also achieves a computational acceleration exceeding tenfold, while maintaining the same level of accuracy as the conventional fine-scale solvers.
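To make the refinement stage concrete, here is a heavily simplified sketch on a steady Poisson problem: a coarse field is interpolated onto the fine grid and then adjusted so that the discrete fine-grid residual of the PDE shrinks. Plain Jacobi relaxation serves as the residual-reduction step, standing in for the paper's learned fine-tuning; the problem, grids, and sweep counts are illustrative assumptions.

```python
import numpy as np

def laplacian(u, h):
    """5-point discrete Laplacian with homogeneous Dirichlet boundary."""
    p = np.pad(u, 1)
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u) / h**2

def upsample(coarse):
    """Refine an (m, m) interior grid to the (2m+1, 2m+1) interior grid of
    half the mesh size by linear interpolation (zero Dirichlet boundary)."""
    m = coarse.shape[0]
    cp = np.pad(coarse, 1)                                     # include the zero boundary
    fine = np.zeros((2 * m + 1, 2 * m + 1))
    fine[1::2, 1::2] = coarse                                  # coinciding nodes
    fine[0::2, 1::2] = 0.5 * (cp[:-1, 1:-1] + cp[1:, 1:-1])    # new rows
    fine[1::2, 0::2] = 0.5 * (cp[1:-1, :-1] + cp[1:-1, 1:])    # new columns
    fine[0::2, 0::2] = 0.25 * (cp[:-1, :-1] + cp[:-1, 1:] +
                               cp[1:, :-1] + cp[1:, 1:])       # new cell centers
    return fine

def physics_refine(u, f, h, n_sweeps=300):
    """Drive down the fine-grid residual of -Laplace(u) = f with Jacobi sweeps,
    standing in for the residual-driven fine-tuning step."""
    for _ in range(n_sweeps):
        r = f + laplacian(u, h)           # discrete residual r = f - A u
        u = u + (h**2 / 4.0) * r          # Jacobi update reduces the residual
    return u

# Toy usage: a low-fidelity coarse-grid solve of -Laplace(u) = 1, upsampled
# and then refined against the fine-grid discretization.
m, H, h = 15, 1.0 / 16, 1.0 / 32
u_coarse = physics_refine(np.zeros((m, m)), np.ones((m, m)), H)   # low-fidelity stand-in
f_fine = np.ones((2 * m + 1, 2 * m + 1))
u_fine = physics_refine(upsample(u_coarse), f_fine, h)
print("fine-grid residual norm:", np.linalg.norm(f_fine + laplacian(u_fine, h)))
```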

In this work we consider the Allen--Cahn equation, a prototypical model problem in nonlinear dynamics that exhibits bifurcations corresponding to variations of a deterministic bifurcation parameter. Going beyond the state-of-the-art, we introduce a random coefficient function in the linear reaction part of the equation, thereby accounting for random, spatially-heterogeneous effects. Importantly, we assume a spatially constant, deterministic mean value of the random coefficient. We show that this mean value is in fact a bifurcation parameter in the Allen--Cahn equation with random coefficients. Moreover, we show that the bifurcation points and bifurcation curves become random objects. We consider two distinct modelling situations: (i) for a spatially homogeneous coefficient we derive analytical expressions for the distribution of the bifurcation points and show that the bifurcation curves are random shifts of a fixed reference curve; (ii) for a spatially heterogeneous coefficient we employ a generalized polynomial chaos expansion to approximate the statistical properties of the random bifurcation points and bifurcation curves. We present numerical examples in 1D physical space, where we combine the popular software package Continuation Core and Toolboxes (CoCo) for numerical continuation and the Sparse Grids Matlab Kit for the polynomial chaos expansion. Our exposition addresses both dynamical systems and uncertainty quantification, highlighting how analytical and numerical tools from both areas can be combined efficiently for the challenging uncertainty quantification analysis of bifurcations in random differential equations.
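For case (i), the random-shift structure can be illustrated with a few lines of Monte Carlo. The sketch below assumes a concrete model form, $u_t = \varepsilon^2 u_{xx} + (\bar\lambda + \xi)u - u^3$ on $(0,1)$ with homogeneous Dirichlet conditions and a Gaussian $\xi$, which is an illustrative choice rather than the paper's exact setting: linearizing about $u = 0$, the trivial branch bifurcates when $\bar\lambda + \xi$ reaches the principal eigenvalue $\varepsilon^2\pi^2$ of $-\varepsilon^2\partial_{xx}$, so each sample of $\xi$ shifts the deterministic bifurcation point.

```python
import numpy as np

# Monte Carlo sketch for the spatially homogeneous case: each sample of the
# random coefficient xi shifts the deterministic bifurcation point
# lam* = eps^2 * pi^2 of the trivial branch by -xi.
rng = np.random.default_rng(0)
eps = 0.2
lam_det = eps**2 * np.pi**2                          # deterministic reference point
xi = rng.normal(loc=0.0, scale=0.1, size=100_000)    # samples of the random coefficient
lam_star = lam_det - xi                              # random bifurcation points

print(f"reference bifurcation point: {lam_det:.4f}")
print(f"Monte Carlo mean / std of the random bifurcation point: "
      f"{lam_star.mean():.4f} / {lam_star.std():.4f}")
```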

We present and analyze an a posteriori error estimator for a space-time hybridizable discontinuous Galerkin discretization of the time-dependent advection-diffusion problem. The residual-based error estimator is proven to be reliable and locally efficient. In the reliability analysis we combine a Péclet-robust coercivity-type result and a saturation assumption, while the local efficiency analysis is based on bubble functions. The analysis considers both local space and time adaptivity and is verified by numerical simulations on problems that include boundary and interior layers.
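As generic background on how residual indicators drive adaptivity (not the space-time HDG estimator analyzed above), the sketch below computes element-wise residual indicators for a stationary 1D advection-diffusion discretization and refines a Dörfler-marked set of elements; the indicator scaling, parameters, and marking fraction are standard textbook choices assumed for illustration.

```python
import numpy as np

def solve_adv_diff(x, eps, b, f=1.0):
    """P1 finite elements for -eps u'' + b u' = f on (0, 1), u(0) = u(1) = 0,
    on the (possibly nonuniform) node vector x."""
    n = len(x)
    A, rhs = np.zeros((n, n)), np.zeros(n)
    for k in range(n - 1):
        h = x[k + 1] - x[k]
        Ak = (eps / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
              + b / 2.0 * np.array([[-1.0, 1.0], [-1.0, 1.0]]))
        A[k:k + 2, k:k + 2] += Ak
        rhs[k:k + 2] += f * h / 2.0
    A[0, :], A[-1, :] = 0.0, 0.0          # homogeneous Dirichlet conditions
    A[0, 0] = A[-1, -1] = 1.0
    rhs[0] = rhs[-1] = 0.0
    return np.linalg.solve(A, rhs)

def residual_indicators(x, u, eps, b, f=1.0):
    """Classical residual indicator per element: interior residual plus
    diffusive-flux jumps at the element end points."""
    h = np.diff(x)
    du = np.diff(u) / h                              # piecewise-constant u_h'
    interior = h**2 * (f - b * du)**2 * h            # h_K^2 * ||f - b u_h'||_K^2
    jump = np.zeros(len(x))
    jump[1:-1] = eps * (du[1:] - du[:-1])            # [eps u_h'] at interior nodes
    eta2 = interior + 0.5 * h * (jump[:-1]**2 + jump[1:]**2)
    return np.sqrt(eta2)

def dorfler_mark(eta, theta=0.5):
    """Mark a minimal set of elements holding a theta-fraction of the estimated error."""
    order = np.argsort(eta**2)[::-1]
    cumulative = np.cumsum(eta[order]**2)
    return order[:np.searchsorted(cumulative, theta * cumulative[-1]) + 1]

# A few adaptive cycles on a boundary-layer problem.
x = np.linspace(0.0, 1.0, 21)
for _ in range(8):
    u = solve_adv_diff(x, eps=5e-2, b=1.0)
    eta = residual_indicators(x, u, eps=5e-2, b=1.0)
    marked = dorfler_mark(eta)
    x = np.sort(np.concatenate([x, 0.5 * (x[marked] + x[marked + 1])]))  # bisect marked cells
```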

Signal cancellation provides a radically new and efficient approach to exploratory factor analysis, without matrix decomposition or presetting the required number of factors. Its current implementation requires that each factor has at least two unique indicators. Its principle is that it is always possible to combine two indicator variables exclusive to the same factor with weights that cancel their common factor information. Successful combinations, consisting of noise only, are recognized by their null correlations with all remaining variables. The optimal combinations of multifactorial indicators, though, typically retain correlations with some other variables. Their signal, however, can be cancelled through combinations with unifactorial indicators of their contributing factors. The loadings are estimated from the relative signal-cancellation weights of the variables involved, along with their observed correlations. The factor correlations are obtained from those of their unifactorial indicators, corrected by their factor loadings. The method is illustrated with synthetic data from a complex six-factor structure that even includes two doublet factors. Another example using actual data shows that signal cancellation can rival confirmatory factor analysis.
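The cancellation principle can be illustrated on a toy one-factor example: for two indicators unique to the same factor, the combination $x_1 - w x_2$ with $w = \lambda_1/\lambda_2$ removes the common factor, so the cancelling weight can be found by driving the correlation with any remaining variable to zero. The synthetic loadings and noise levels below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
F = rng.normal(size=n)                            # common factor
lam1, lam2, lam3 = 0.8, 0.6, 0.7                  # assumed true loadings
x1 = lam1 * F + rng.normal(scale=0.6, size=n)     # unique indicator of F
x2 = lam2 * F + rng.normal(scale=0.6, size=n)     # unique indicator of F
x3 = lam3 * F + rng.normal(scale=0.6, size=n)     # a remaining variable sharing F

# The cancelling weight solves cov(x1 - w * x2, x3) = 0, i.e.
# w = cov(x1, x3) / cov(x2, x3), which equals the loading ratio lam1 / lam2.
w_hat = np.cov(x1, x3)[0, 1] / np.cov(x2, x3)[0, 1]
print(f"cancelling weight {w_hat:.3f} vs loading ratio {lam1 / lam2:.3f}")
# The combination x1 - w_hat * x2 carries essentially no factor signal:
print(f"corr with x3 after cancellation: {np.corrcoef(x1 - w_hat * x2, x3)[0, 1]:.4f}")
```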

Learning approximations to smooth target functions of many variables from finite sets of pointwise samples is an important task in scientific computing and its many applications in computational science and engineering. Despite well over half a century of research on high-dimensional approximation, this remains a challenging problem. Yet, significant advances have been made in the last decade towards efficient methods for doing this, commencing with so-called sparse polynomial approximation methods and continuing most recently with methods based on Deep Neural Networks (DNNs). In tandem, there have been substantial advances in the relevant approximation theory and analysis of these techniques. In this work, we survey this recent progress. We describe the contemporary motivations for this problem, which stem from parametric models and computational uncertainty quantification; the relevant function classes, namely, classes of infinite-dimensional, Banach-valued, holomorphic functions; fundamental limits of learnability from finite data for these classes; and finally, sparse polynomial and DNN methods for efficiently learning such functions from finite data. For the latter, there is currently a significant gap between the approximation theory of DNNs and the practical performance of deep learning. Aiming to narrow this gap, we develop the topic of practical existence theory, which asserts the existence of dimension-independent DNN architectures and training strategies that achieve provably near-optimal generalization errors in terms of the amount of training data.
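To fix ideas on the basic task of learning a smooth multivariate function from pointwise samples, here is a small least-squares polynomial approximation sketch in a total-degree Legendre basis. It is a toy illustration only, not the sparse polynomial or DNN methods surveyed above, and the target function, degree, and sample sizes are assumptions.

```python
import numpy as np
from numpy.polynomial import legendre
from itertools import product

def legendre_features(X, degree):
    """Tensorized Legendre basis restricted to total degree <= degree,
    evaluated at sample points X in [-1, 1]^d."""
    d = X.shape[1]
    vals = [legendre.legvander(X[:, j], degree) for j in range(d)]   # 1D evaluations
    idx = [a for a in product(range(degree + 1), repeat=d) if sum(a) <= degree]
    return np.stack([np.prod([vals[j][:, a[j]] for j in range(d)], axis=0) for a in idx],
                    axis=1)

# Toy usage: learn a smooth 2D function from 400 random samples.
rng = np.random.default_rng(0)
f = lambda x: np.exp(-x[:, 0] ** 2) * np.cos(2.0 * x[:, 1])
X = rng.uniform(-1.0, 1.0, size=(400, 2))
coef, *_ = np.linalg.lstsq(legendre_features(X, 10), f(X), rcond=None)

X_test = rng.uniform(-1.0, 1.0, size=(2000, 2))
err = np.abs(legendre_features(X_test, 10) @ coef - f(X_test)).max()
print(f"max test error: {err:.2e}")
```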

We explore the possibility of simulating the grade-two fluid model in a geometry related to a contraction rheometer, and we provide details on several key aspects of the computation. We show how the results can be used to determine the viscosity $\nu$ from experimental data. We also explore the identifiability of the grade-two parameters $\alpha_1$ and $\alpha_2$ from experimental data. In particular, as the flow rate varies, the force data appear to be nearly the same for certain distinct pairs of values of $\alpha_1$ and $\alpha_2$; however, we determine a regime for $\alpha_1$ and $\alpha_2$ in which the parameters may be identifiable with a contraction rheometer.

We consider the computation of model-free bounds for multi-asset options in a setting that combines dependence uncertainty with additional information on the dependence structure. More specifically, we consider the setting where the marginal distributions are known and partial information, in the form of known prices for multi-asset options, is also available in the market. We provide a fundamental theorem of asset pricing in this setting, as well as a superhedging duality that allows us to transform the maximization problem over probability measures into a more tractable minimization problem over trading strategies. The latter is solved using a penalization approach combined with a deep learning approximation based on artificial neural networks. The numerical method is fast, and the computational time scales linearly with respect to the number of traded assets. We finally examine the significance of various pieces of additional information. Empirical evidence suggests that "relevant" information, i.e. prices of derivatives with the same payoff structure as the target payoff, is more useful than other information and should be prioritized in view of the trade-off between accuracy and computational efficiency.
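A stripped-down sketch of the penalization idea is given below: the superhedging portfolio is restricted to cash, vanilla calls on each asset, and one additional-information derivative with a known price (a static strategy in place of the paper's neural-network parametrization), and its cost plus a penalty on sampled superhedging violations is minimized. The instrument choices, reference scenarios, prices, and penalty weight are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d, n_scen, strikes = 3, 20_000, np.array([0.8, 1.0, 1.2])

# Reference scenarios with lognormal marginals (used only to evaluate the penalty).
S = np.exp(rng.normal(-0.02, 0.2, size=(n_scen, d)))

# "Market" prices of the hedging instruments, here computed by Monte Carlo as
# placeholders for quoted prices.
call_payoff = np.maximum(S[:, :, None] - strikes[None, None, :], 0.0)   # (n_scen, d, K)
call_prices = call_payoff.mean(axis=0)                                   # (d, K)
g_payoff = np.maximum(S[:, :2].mean(axis=1) - 1.0, 0.0)   # an "additional information" option
g_price = g_payoff.mean()

target = np.maximum(S.mean(axis=1) - 1.0, 0.0)            # basket call to be bounded

def objective(theta, beta=50.0):
    a, lam = theta[0], theta[1]
    c = theta[2:].reshape(d, len(strikes))
    portfolio = a + (call_payoff * c).sum(axis=(1, 2)) + lam * g_payoff
    cost = a + (c * call_prices).sum() + lam * g_price
    shortfall = np.maximum(target - portfolio, 0.0)        # superhedging violation
    return cost + beta * np.mean(shortfall**2)             # penalized superhedging objective

res = minimize(objective, np.zeros(2 + d * len(strikes)), method="L-BFGS-B")
a, lam = res.x[0], res.x[1]
c = res.x[2:].reshape(d, len(strikes))
print(f"approximate upper bound (portfolio cost): "
      f"{a + (c * call_prices).sum() + lam * g_price:.4f}")
```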

Causal inference with observational studies often suffers from unmeasured confounding, yielding biased estimators based on the unconfoundedness assumption. Sensitivity analysis assesses how the causal conclusions change with respect to different degrees of unmeasured confounding. Most existing sensitivity analysis methods work well for specific types of statistical estimation or testing strategies. We propose a flexible sensitivity analysis framework that can deal with commonly used inverse probability weighting, outcome regression, and doubly robust estimators simultaneously. It is based on the well-known parametrization of the selection bias as comparisons of the observed and counterfactual outcomes conditional on observed covariates. It is attractive for practical use because it only requires simple modifications of the standard estimators. Moreover, it naturally extends to many other causal inference settings, including the causal risk ratio or odds ratio, the average causal effect on the treated units, and studies with survival outcomes. We also develop an R package saci to implement our sensitivity analysis estimators.
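As a small illustration of the "simple modification of standard estimators" idea (not the saci package itself), the sketch below corrects an outcome-regression estimator of the average causal effect using a constant confounding-function parametrization of the selection bias; this specific parametrization, the linear outcome models, and the toy data are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Sensitivity parametrization assumed here (constant in X for simplicity):
#   delta1 = E[Y(1) | Z=1, X] - E[Y(1) | Z=0, X],
#   delta0 = E[Y(0) | Z=1, X] - E[Y(0) | Z=0, X].
# delta1 = delta0 = 0 recovers the usual unconfoundedness-based estimator.

def sensitivity_ate(X, Z, Y, delta1, delta0):
    m1 = LinearRegression().fit(X[Z == 1], Y[Z == 1]).predict(X)   # E[Y | Z=1, X]
    m0 = LinearRegression().fit(X[Z == 0], Y[Z == 0]).predict(X)   # E[Y | Z=0, X]
    ey1 = np.mean(Z * Y + (1 - Z) * (m1 - delta1))   # corrected mean of Y(1)
    ey0 = np.mean((1 - Z) * Y + Z * (m0 + delta0))   # corrected mean of Y(0)
    return ey1 - ey0

# Toy data with a true treatment effect of 2.
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 2))
Z = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X[:, 0]))).astype(float)
Y = 2.0 * Z + X @ np.array([1.0, -0.5]) + rng.normal(size=n)

for delta in [-0.5, 0.0, 0.5]:
    print(f"delta = {delta:+.1f}:  ATE estimate = {sensitivity_ate(X, Z, Y, delta, delta):.3f}")
```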
