
In this paper, we investigate an ill-posed Cauchy problem for a stochastic parabolic equation. We first establish a Carleman estimate for this equation. Leveraging this estimate, we derive the conditional stability and the convergence rate of the Tikhonov regularization method for the aforementioned ill-posed Cauchy problem. To complement our theoretical analysis, we employ kernel-based learning theory to implement the Tikhonov regularization method for several numerical examples.
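The abstract does not spell out the kernel-based implementation, but the general pattern can be sketched as kernel-based Tikhonov regularization (kernel ridge regression). Everything below is an illustrative stand-in: the RBF kernel, the length scale, and the regularization parameter are our own choices, not the paper's.

```python
import numpy as np

def rbf_kernel(s, t, ell=0.2):
    # Gaussian (RBF) kernel; the length scale ell is an illustrative choice
    return np.exp(-(s[:, None] - t[None, :])**2 / (2 * ell**2))

def tikhonov_kernel_fit(x_obs, y_obs, alpha=1e-3):
    # Kernel-based Tikhonov regularization: minimize ||K c - y||^2 + alpha c^T K c.
    # For invertible K the minimizer solves (K + alpha I) c = y.
    K = rbf_kernel(x_obs, x_obs)
    return np.linalg.solve(K + alpha * np.eye(len(x_obs)), y_obs)

# noisy samples of a smooth function (toy stand-in for noisy Cauchy data)
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(40)

c = tikhonov_kernel_fit(x, y)
y_fit = rbf_kernel(x, x) @ c   # regularized reconstruction at the sample points
print(np.max(np.abs(y_fit - np.sin(2 * np.pi * x))))
```

The regularization parameter `alpha` trades data fidelity against stability, which is precisely the trade-off the conditional stability and convergence-rate analysis quantifies.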


In this paper, I present three closed-form approximations of the two-sample Pearson Bayes factor. The techniques rely on some classical asymptotic results about gamma functions. These approximations permit simple closed-form calculation of the Pearson Bayes factor in cases where only the summary statistics are available (i.e., the t-score and degrees of freedom).
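The classical asymptotic results alluded to are of Stirling type; ratios of gamma functions such as $\Gamma((\nu+1)/2)/\Gamma(\nu/2)$ appear in $t$-distribution normalizing constants and admit simple asymptotic replacements. As a generic illustration (not the paper's actual approximations), the sketch below compares Stirling's series for $\log\Gamma$ against the exact value:

```python
import math

def stirling_lgamma(x):
    # Stirling's series for log Gamma(x), truncated after the 1/(12x) term:
    # (x - 1/2) log x - x + (1/2) log(2 pi) + 1/(12 x)
    return (x - 0.5) * math.log(x) - x + 0.5 * math.log(2 * math.pi) + 1.0 / (12 * x)

for x in (5.0, 20.0, 100.0):
    # the error shrinks like O(1/x^3) as the degrees of freedom grow
    print(x, math.lgamma(x) - stirling_lgamma(x))
```

Approximations of this kind are what make a closed-form Bayes factor possible when only the t-score and degrees of freedom are reported.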

Testing of hypotheses is a well-studied topic in mathematical statistics. Recently, this issue has also been addressed in the context of Inverse Problems, where the quantity of interest is not directly accessible but only after the inversion of a (potentially) ill-posed operator. In this study, we propose a regularized approach to hypothesis testing in Inverse Problems in the sense that the underlying estimators (or test statistics) are allowed to be biased. Under mild source-condition type assumptions, we derive a family of tests with prescribed level $\alpha$ and subsequently analyze how to choose the test with maximal power out of this family. As one major result, we prove that regularized testing is always at least as good as (classical) unregularized testing. Furthermore, using tools from convex optimization, we provide an adaptive test by maximizing the power functional, which then outperforms previous unregularized tests in numerical simulations by several orders of magnitude.
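The basic object here, a biased (regularized) estimator used as a test statistic, can be sketched for a linear model $y = Ax + \varepsilon$ with a Tikhonov-regularized estimator. The forward operator, noise level, and regularization parameter below are all illustrative choices of ours, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
# an ill-conditioned forward operator (illustrative smoothing matrix)
A = np.array([[np.exp(-abs(i - j) / 5.0) for j in range(n)] for i in range(n)]) / n

x_true = np.zeros(n)                      # null hypothesis H0: x = 0
y = A @ x_true + 0.01 * rng.standard_normal(n)

def regularized_estimate(y, A, alpha):
    # Tikhonov-regularized (hence biased) estimator:
    # x_alpha = (A^T A + alpha I)^{-1} A^T y
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(d), A.T @ y)

# a schematic test statistic built on the regularized estimator; the paper's
# tests calibrate such statistics to a prescribed level alpha
x_alpha = regularized_estimate(y, A, alpha=1e-2)
T = np.linalg.norm(x_alpha)
print(T)
```

The point of regularization is visible here: without the `alpha * np.eye(d)` term, the inversion amplifies the noise and the statistic becomes useless for testing.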

We study step-wise time approximations of non-linear hyperbolic initial value problems. The technique used here is a generalization of the minimizing movements method, using two time scales: one for the velocity and another (potentially much larger) one for the acceleration. The main applications come from elastodynamics, namely so-called generalized solids undergoing large deformations. The evolution follows an underlying variational structure exploited by step-wise minimization. We show for a large family of (elastic) energies that the introduced scheme is stable, allowing for non-linearities of highest order. If the highest-order term can be assumed to be linear, we show that the limit solutions are regular and that the minimizing movements scheme converges with optimal linear rate. Thus, this work extends numerical time-stepping minimization methods to the realm of hyperbolic problems.
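A single step of a minimizing-movements scheme for a second-order (hyperbolic) evolution can be sketched as follows. This is a generic one-time-scale variant, not the paper's two-time-scale scheme, and the toy energy is our own choice:

```python
import numpy as np
from scipy.optimize import minimize

def energy(u):
    # toy convex elastic energy (illustrative stand-in for the elastic
    # energies treated in the paper)
    return 0.5 * np.sum(np.diff(u)**2) + 0.25 * np.sum(u**4)

def hyperbolic_mm_step(u_prev, u_curr, tau):
    # one minimizing-movements step for a hyperbolic flow (schematic):
    # u_next = argmin_u  E(u) + ||u - 2 u_curr + u_prev||^2 / (2 tau^2),
    # i.e. the quadratic penalty discretizes the acceleration term
    obj = lambda u: energy(u) + np.sum((u - 2*u_curr + u_prev)**2) / (2 * tau**2)
    return minimize(obj, u_curr).x

n, tau = 20, 0.05
u0 = np.sin(np.linspace(0, np.pi, n))
u_prev, u_curr = u0.copy(), u0.copy()      # start at rest (zero initial velocity)
for _ in range(3):
    u_prev, u_curr = u_curr, hyperbolic_mm_step(u_prev, u_curr, tau)
print(energy(u_curr))
```

Using two step sizes, a fine one for the velocity and a coarser one for the acceleration, is the refinement the paper introduces on top of this basic step-wise minimization.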

We investigate the algebra and geometry of general interventions in discrete DAG models. To this end, we introduce a theory for modeling soft interventions in the more general family of staged tree models and develop the formalism to study these models as parametrized subvarieties of a product of probability simplices. We then consider the problem of finding their defining equations, and we derive a combinatorial criterion for identifying interventional staged tree models for which the defining ideal is toric. We apply these results to the class of discrete interventional DAG models and establish a criterion to determine when these models are toric varieties.
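The distinction between hard and soft interventions that underlies these models can be made concrete on the smallest possible example, a two-node binary DAG $X \to Y$. All probabilities below are illustrative numbers of ours:

```python
import numpy as np

# two-node DAG X -> Y with binary variables (toy example; the paper treats
# general soft interventions on staged tree models)
p_x = np.array([0.6, 0.4])                    # P(X)
p_y_given_x = np.array([[0.7, 0.3],           # P(Y | X = 0)
                        [0.2, 0.8]])          # P(Y | X = 1)

# observational joint via the DAG factorization: P(x, y) = P(x) P(y | x)
joint = p_x[:, None] * p_y_given_x

# hard intervention do(X = 1): the truncated factorization drops the factor
# P(X) entirely, so P(y | do(X = 1)) = P(y | X = 1)
p_y_do_x1 = p_y_given_x[1]

# a soft intervention instead replaces P(X) by a new kernel q(X),
# keeping the rest of the factorization intact
q_x = np.array([0.1, 0.9])
joint_soft = q_x[:, None] * p_y_given_x

print(joint.sum(), p_y_do_x1, joint_soft.sum())
```

The interventional model is then the set of all joints obtainable as the intervention parameters vary, which is exactly the parametrized subvariety of a product of probability simplices studied in the paper.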

In this paper, we investigate building a sequence-to-sequence architecture for motion-to-language translation and synchronization. The aim is to translate motion capture inputs into English natural-language descriptions such that the descriptions are generated synchronously with the actions performed, enabling semantic segmentation as a byproduct, but without requiring synchronized training data. We propose a new recurrent formulation of local attention that is suited for synchronous/live text generation, as well as an improved motion encoder architecture better suited to smaller data and to synchronous generation. We evaluate both contributions in individual experiments, using the standard BLEU4 metric as well as a simple semantic equivalence measure, on the KIT motion language dataset. In a follow-up experiment, we assess the quality of the synchronization of generated text in our proposed approaches through multiple evaluation metrics. We find that the contributions to the attention mechanism and to the encoder architecture additively improve not only the quality of the generated text (BLEU and semantic equivalence) but also its synchronization. Our code will be made available at \url{https://github.com/rd20karim/M2T-Segmentation/tree/main}.
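The core idea of local attention, restricting attention weights to a window around a predicted position so that text can be emitted while the motion is still streaming in, can be sketched in NumPy. This is a generic windowed-attention sketch, not the paper's recurrent formulation, and the window shape and sizes are our own choices:

```python
import numpy as np

def local_attention_weights(query, keys, center, width=3):
    # Local attention around a window center (schematic): scaled dot-product
    # scores, a Gaussian position prior, and a hard cut outside the window.
    T = keys.shape[0]
    positions = np.arange(T)
    scores = keys @ query / np.sqrt(keys.shape[1])
    scores = scores - (positions - center)**2 / (2 * (width / 2)**2)
    scores[np.abs(positions - center) > width] = -np.inf  # outside the window
    e = np.exp(scores - scores[np.isfinite(scores)].max())
    return e / e.sum()

rng = np.random.default_rng(0)
keys = rng.standard_normal((10, 4))   # toy encoded motion frames
query = rng.standard_normal(4)        # toy decoder state
w = local_attention_weights(query, keys, center=5)
print(w)
```

Because the weights vanish outside the window, the argmax of `w` over time gives a frame-to-word alignment, which is how synchronous generation yields semantic segmentation as a byproduct.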

In this paper we consider the numerical approximation of infinite horizon problems via the dynamic programming approach. The value function of the problem solves a Hamilton-Jacobi-Bellman (HJB) equation that is approximated by a fully discrete method. It is known that the numerical problem is difficult to handle due to the so-called curse of dimensionality. To mitigate this issue, we apply a reduction of the order by means of a new proper orthogonal decomposition (POD) method based on time derivatives. We carry out the error analysis of the method using recently proved optimal bounds for the fully discrete approximations. Moreover, the use of snapshots based on time derivatives allows us to bound some terms of the error that could not be bounded in a standard POD approach. Numerical experiments show the good performance of the method in practice.

In this paper we consider the numerical solution of fractional differential equations. In particular, we study a step-by-step graded mesh procedure based on an expansion of the vector field using orthonormal Jacobi polynomials. Under mild hypotheses, the proposed procedure is capable of getting spectral accuracy. A few numerical examples are reported to confirm the theoretical findings.
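Two ingredients of such a procedure can be sketched concretely: a graded mesh, which clusters points near $t=0$ where solutions of fractional differential equations typically have weak singularities, and Jacobi polynomials, which are orthogonal with respect to the weight $(1-x)^\alpha(1+x)^\beta$. The grading exponent and parameters below are illustrative choices:

```python
import numpy as np
from scipy.special import eval_jacobi

# graded mesh on [0, T]: t_j = T * (j/N)^r clusters points near t = 0
T_end, N, r = 1.0, 10, 3.0
mesh = T_end * (np.arange(N + 1) / N)**r

# orthogonality check for Jacobi polynomials P_n^{(a,b)} on [-1, 1] with
# weight (1-x)^a (1+x)^b, approximated by Gauss-Legendre quadrature
a, b = 0.5, 0.5
nodes, weights = np.polynomial.legendre.leggauss(50)
wfun = (1 - nodes)**a * (1 + nodes)**b
P2 = eval_jacobi(2, a, b, nodes)
P3 = eval_jacobi(3, a, b, nodes)
inner_23 = np.sum(weights * wfun * P2 * P3)   # should be near zero
print(mesh[:4], inner_23)
```

Expanding the vector field in such an orthogonal basis on each graded subinterval is what allows the step-by-step procedure to reach spectral accuracy despite the endpoint singularity.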

This article proposes entropy stable discontinuous Galerkin (DG) schemes for two-fluid relativistic plasma flow equations. These equations couple the flow of relativistic fluids via electromagnetic quantities evolved using Maxwell's equations. The proposed schemes are based on the Gauss-Lobatto quadrature rule, which has the summation by parts (SBP) property. We exploit the structure of the equations, whose flux has three independent parts coupled via nonlinear source terms. We design entropy stable DG schemes for each flux part; combined with the fact that the source terms do not affect the entropy, this results in an entropy stable scheme for the complete system. The proposed schemes are then tested on various test problems in one and two dimensions to demonstrate their accuracy and stability.
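The SBP property of Gauss-Lobatto collocation, which is the discrete integration-by-parts identity that entropy stability proofs rest on, can be verified numerically on a small operator. For the four-point Legendre-Gauss-Lobatto rule the nodes and weights are known in closed form, and the property reads $Q + Q^T = B$ with $Q = WD$:

```python
import numpy as np

# 4-point Legendre-Gauss-Lobatto nodes and weights on [-1, 1]
x = np.array([-1.0, -1.0 / np.sqrt(5), 1.0 / np.sqrt(5), 1.0])
w = np.array([1.0 / 6, 5.0 / 6, 5.0 / 6, 1.0 / 6])
n = len(x)

# collocation differentiation matrix: D[i, j] = l_j'(x_i), where l_j is
# the j-th Lagrange basis polynomial on the LGL nodes
D = np.zeros((n, n))
for j in range(n):
    roots = np.delete(x, j)
    c = np.poly(roots) / np.prod(x[j] - roots)   # coefficients of l_j
    D[:, j] = np.polyval(np.polyder(c), x)

W = np.diag(w)
Q = W @ D
B = np.diag([-1.0, 0.0, 0.0, 1.0])              # boundary matrix

# summation-by-parts property: Q + Q^T = B
print(np.allclose(Q + Q.T, B))
```

This identity mimics $\int u v' + \int u' v = [uv]$ at the discrete level, and it is what lets per-element entropy estimates telescope into a global entropy inequality for the DG scheme.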

In this study, we present a precise anisotropic interpolation error estimate for the Morley finite element method (FEM) and apply it to fourth-order elliptic equations. The analysis does not impose a shape-regularity condition on the mesh; therefore, anisotropic meshes can be used. The main contribution of this study is a new proof of the consistency term, which enables an anisotropic consistency error estimate. The core idea of the proof is to use the relationship between the Raviart--Thomas and Morley finite element spaces. Our results show optimal convergence rates and imply that the modified Morley FEM can be effective in reducing the error.

Complexity is a fundamental concept underlying statistical learning theory that aims to inform generalization performance. Parameter count, while successful in low-dimensional settings, is not well-justified for overparameterized settings when the number of parameters exceeds the number of training samples. We revisit complexity measures based on Rissanen's principle of minimum description length (MDL) and define a novel MDL-based complexity (MDL-COMP) that remains valid for overparameterized models. MDL-COMP is defined via an optimality criterion over the encodings induced by a good ridge estimator class. We provide an extensive theoretical characterization of MDL-COMP for linear models and kernel methods and show that it is not just a function of parameter count, but rather a function of the singular values of the design or the kernel matrix and the signal-to-noise ratio. For a linear model with $n$ observations, $d$ parameters, and i.i.d. Gaussian predictors, MDL-COMP scales linearly with $d$ when $d<n$, but the scaling is exponentially smaller, namely $\log d$, for $d>n$. For kernel methods, we show that MDL-COMP informs the minimax in-sample error and can decrease as the dimensionality of the input increases. We also prove that MDL-COMP upper bounds the in-sample mean squared error (MSE). Via an array of simulations and real-data experiments, we show that a data-driven Prac-MDL-COMP informs hyper-parameter tuning for optimizing test MSE with ridge regression in limited data settings, sometimes improving upon cross-validation while always saving computational costs. Finally, our findings also suggest that the recently observed double descent phenomenon in overparameterized models might be a consequence of the choice of non-ideal estimators.
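The claim that complexity should depend on the singular values of the design rather than on the raw parameter count has a standard, loosely related illustration: the effective degrees of freedom of ridge regression. This is not the paper's MDL-COMP definition, only a familiar quantity with the same qualitative dependence:

```python
import numpy as np

def ridge_effective_dof(X, lam):
    # effective degrees of freedom of ridge regression: sum_i s_i^2 / (s_i^2 + lam),
    # a classical complexity proxy driven by the singular values s_i of the design
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(s**2 / (s**2 + lam))

rng = np.random.default_rng(0)
n, d = 50, 200                      # overparameterized regime: d > n
X = rng.standard_normal((n, d))
for lam in (0.1, 10.0, 1000.0):
    # despite d = 200 parameters, the effective complexity never exceeds n = 50
    # and shrinks as the regularization strength grows
    print(lam, ridge_effective_dof(X, lam))
```

The same qualitative picture, complexity bounded by the spectrum rather than by $d$, is what MDL-COMP makes precise through the MDL lens.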
