
Inferring the parameters of ordinary differential equations (ODEs) from noisy observations is an important problem in many scientific fields. Currently, most parameter estimation methods that bypass numerical integration tend to rely on basis functions or Gaussian processes to approximate the ODE solution and its derivatives. Because the ODE solution is sensitive to its derivatives, these methods can be hindered by derivative-estimation error, especially when only sparse time-course observations are available. We present a Bayesian collocation framework that operates on the integrated form of the ODEs and also avoids the expensive use of numerical solvers. Our methodology has the capability to handle general nonlinear ODE systems. We demonstrate the accuracy of the proposed method through a simulation study, where the estimated parameters and recovered system trajectories are compared with other recent methods. A real data example is also provided.
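The integrated-form idea can be illustrated with a toy example (a hypothetical logistic-growth setup, not the authors' Bayesian collocation method): for $x' = \theta x(1-x)$, the integrated form $x(t_i) - x(0) = \theta \int_0^{t_i} x(1-x)\,ds$ lets the integral be approximated directly from the noisy observations, so $\theta$ can be fit by least squares with no numerical ODE solver.

```python
import numpy as np

def fit_theta_integrated(t, x_obs):
    """Least-squares estimate of theta in x' = theta * x * (1 - x) using the
    integrated form x(t_i) - x(0) = theta * int_0^{t_i} x(1 - x) ds.
    The integral is approximated from the observations themselves with the
    trapezoidal rule, so no numerical ODE solver is needed."""
    g = x_obs * (1.0 - x_obs)                        # integrand at the data points
    G = np.concatenate([[0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t))])
    y = x_obs - x_obs[0]                             # left-hand side of the integrated form
    return float(G @ y / (G @ G))                    # one-parameter linear least squares

# synthetic data from the exact logistic solution, plus small noise
rng = np.random.default_rng(0)
theta_true, x0 = 1.5, 0.1
t = np.linspace(0.0, 4.0, 25)
x_exact = x0 * np.exp(theta_true * t) / (1.0 - x0 + x0 * np.exp(theta_true * t))
x_obs = x_exact + 0.005 * rng.standard_normal(t.size)

theta_hat = fit_theta_integrated(t, x_obs)
print(theta_hat)  # close to theta_true = 1.5
```

The design choice mirrored here is the one the abstract emphasizes: working with the integral of the observations (which smooths noise) rather than with estimated derivatives (which amplify it).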

Related content

In this paper, we study the deep Ritz method for solving the linear elasticity equation from a numerical analysis perspective. A modified Ritz formulation using the $H^{1/2}(\Gamma_D)$ norm is introduced and analyzed for the linear elasticity equation in order to deal with the (essential) Dirichlet boundary condition. We show that the resulting deep Ritz method provides the best approximation among the set of deep neural network (DNN) functions with respect to the ``energy'' norm. Furthermore, we demonstrate that the total error of the deep Ritz simulation is bounded by the sum of the network approximation error and the numerical integration error, disregarding the algebraic error. To effectively control the numerical integration error, we propose an adaptive quadrature-based numerical integration technique with a residual-based local error indicator. This approach enables efficient approximation of the modified energy functional. Through numerical experiments involving smooth and singular problems, as well as problems with stress concentration, we validate the effectiveness and efficiency of the proposed deep Ritz method with adaptive quadrature.
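The adaptive-quadrature ingredient can be sketched in isolation (a hypothetical 1D integral using a trapezoid-vs-Simpson difference as the local error indicator, not the paper's residual-based indicator for the elasticity energy functional):

```python
import numpy as np

def adaptive_quadrature(f, a, b, tol=1e-5, max_iter=60):
    """Adaptive composite quadrature driven by a local error indicator:
    each interval compares a trapezoid and a Simpson estimate, and intervals
    whose indicator exceeds the (length-scaled) tolerance are bisected."""
    intervals = [(a, b)]
    for _ in range(max_iter):
        new_intervals, total, worst = [], 0.0, 0.0
        for lo, hi in intervals:
            m = 0.5 * (lo + hi)
            trap = 0.5 * (hi - lo) * (f(lo) + f(hi))
            simp = (hi - lo) / 6.0 * (f(lo) + 4.0 * f(m) + f(hi))
            indicator = abs(simp - trap)              # local error indicator
            if indicator > tol * (hi - lo):
                new_intervals += [(lo, m), (m, hi)]   # refine where it is large
            else:
                new_intervals.append((lo, hi))
            total += simp
            worst = max(worst, indicator)
        intervals = new_intervals
        if worst <= tol * (b - a):
            break
    return total

# a near-singular integrand, where uniform quadrature would need many points
val = adaptive_quadrature(lambda x: 1.0 / np.sqrt(x + 1e-4), 0.0, 1.0)
print(val)  # exact value is 2*sqrt(1 + 1e-4) - 2*sqrt(1e-4), about 1.9801
```

The refinement concentrates quadrature points near the singularity at $x = 0$, which is the same mechanism the paper uses to resolve stress concentrations when approximating the energy functional.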

To speed up online testing, adaptive traffic experimentation through multi-armed bandit algorithms is emerging as an essential complement to fixed-horizon A/B testing. Building on recent research on best-arm identification and statistical inference with adaptively collected data, this paper derives and evaluates four Bayesian batch bandit algorithms (NB-TS, WB-TS, NB-TTTS, WB-TTTS), formed by combining two ways of weighting batches (Naive Batch and Weighted Batch) with two Bayesian sampling strategies (Thompson Sampling and Top-Two Thompson Sampling) to adaptively determine traffic allocation. The derived algorithms operate on summary batch statistics of a reward metric for pilot experiments; to our knowledge, one of the combinations, WB-TTTS, has not previously been discussed. The comprehensive evaluation of the four algorithms covers the trustworthiness, sensitivity, and regret of a testing methodology. Moreover, the evaluation includes 4 real-world eBay experiments and 40 reproducible synthetic experiments, spanning both stationary and non-stationary situations. Our evaluation reveals that: (a) false-positive inflation occurs when several arms are equivalently best, an issue seldom discussed in the literature; (b) to control false positives, we uncover connections between the convergence of posterior optimal probabilities and neutral posterior reshaping; (c) WB-TTTS shows competitive recall, higher precision, and robustness against non-stationary trends; (d) NB-TS excels at minimizing regret, though not in precision or robustness; (e) WB-TTTS is a promising alternative when the regret of A/B testing is affordable, while NB-TS remains a powerful choice when regret matters in pilot experiments.
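A minimal sketch of the batch Thompson Sampling ingredient (Bernoulli rewards with Beta posteriors built from summary batch statistics; a simplified stand-in for NB-TS, not the paper's exact algorithms or the Top-Two variants):

```python
import numpy as np

def batch_ts_allocation(successes, trials, batch_size, n_draws=10_000, rng=None):
    """Naive-batch Thompson Sampling for Bernoulli arms: Beta posteriors are
    built from summary batch statistics (successes / trials per arm), and the
    next batch of traffic is split in proportion to each arm's Monte Carlo
    posterior probability of being the best arm."""
    rng = rng if rng is not None else np.random.default_rng()
    a = 1.0 + np.asarray(successes, float)
    b = 1.0 + np.asarray(trials, float) - np.asarray(successes, float)
    draws = rng.beta(a, b, size=(n_draws, len(a)))    # posterior samples per arm
    p_best = np.bincount(draws.argmax(axis=1), minlength=len(a)) / n_draws
    return np.round(batch_size * p_best).astype(int), p_best

# two pilot batches observed so far: arm 1 converts at 11% vs 8% for arm 0
alloc, p_best = batch_ts_allocation(successes=[40, 55], trials=[500, 500],
                                    batch_size=1000, rng=np.random.default_rng(1))
print(p_best)   # most posterior mass on arm 1
print(alloc)    # arm 1 receives most of the next batch
```

Because only per-batch summary statistics enter the update, this style of algorithm fits pilot-experiment pipelines where individual-level logs are unavailable, which is the practical setting the abstract describes.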

Code verification plays an important role in establishing the credibility of computational simulations by assessing the correctness of the implementation of the underlying numerical methods. In computational electromagnetics, the numerical solution to integral equations incurs multiple interacting sources of numerical error, as well as other challenges, which render traditional code-verification approaches ineffective. In this paper, we provide approaches to separately measure the numerical errors arising from these different error sources for the method-of-moments implementation of the combined-field integral equation. We demonstrate the effectiveness of these approaches for cases with and without coding errors.
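The core of most code-verification studies is comparing the observed order of accuracy against the theoretical order on refined meshes; a generic sketch of that check (a central-difference toy problem with a manufactured solution, not the method-of-moments setting of the paper):

```python
import numpy as np

def observed_order(errors, hs):
    """Observed order of accuracy p from errors e_i ~ C * h_i^p on a
    sequence of refined meshes: p = log(e1/e2) / log(h1/h2)."""
    e, h = np.asarray(errors, float), np.asarray(hs, float)
    return np.log(e[:-1] / e[1:]) / np.log(h[:-1] / h[1:])

# verify a second-order central difference for u'' against the
# manufactured solution u(x) = sin(x), for which u''(x) = -sin(x)
errors, hs = [], []
for n in (20, 40, 80, 160):
    h = np.pi / n
    x = np.linspace(0.0, np.pi, n + 1)
    u = np.sin(x)
    u_xx = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2   # interior points only
    errors.append(np.max(np.abs(u_xx + np.sin(x[1:-1]))))
    hs.append(h)

orders = observed_order(errors, hs)
print(orders)  # close to [2, 2, 2] when the implementation is correct
```

An observed order below the theoretical one flags a coding error; the difficulty the abstract highlights is that in integral-equation codes several error sources converge at different rates and must be isolated before this check becomes meaningful.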

For computational physics simulations, code verification plays a major role in establishing the credibility of the results by assessing the correctness of the implementation of the underlying numerical methods. In computational electromagnetics, surface integral equations, such as the method-of-moments implementation of the magnetic-field integral equation, are frequently used to solve Maxwell's equations on the surfaces of electromagnetic scatterers. These electromagnetic surface integral equations yield many code-verification challenges due to the various sources of numerical error and their possible interactions. In this paper, we provide approaches to separately measure the numerical errors arising from these different error sources. We demonstrate the effectiveness of these approaches for cases with and without coding errors.

Constraint satisfaction problems form a nicely behaved class of problems that lends itself to complexity classification results. From the point of view of parameterized complexity, a natural task is to classify the parameterized complexity of MinCSP problems parameterized by the number of unsatisfied constraints. In other words, we ask whether we can delete at most $k$ constraints, where $k$ is the parameter, to get a satisfiable instance. In this work, we take a step towards classifying the parameterized complexity for an important infinite-domain CSP: Allen's interval algebra (IA). This CSP has closed intervals with rational endpoints as domain values and employs a set $A$ of 13 basic comparison relations such as ``precedes'' or ``during'' for relating intervals. IA is a highly influential and well-studied formalism within AI and qualitative reasoning that has numerous applications in, for instance, planning, natural language processing and molecular biology. We provide an FPT vs. W[1]-hard dichotomy for MinCSP$(\Gamma)$ for all $\Gamma \subseteq A$. IA is sometimes extended with unions of the relations in $A$ or first-order definable relations over $A$, but extending our results to these cases would require first solving the parameterized complexity of Directed Symmetric Multicut, which is a notorious open problem. Already in this limited setting, we uncover connections to new variants of graph cut and separation problems. This includes hardness proofs for simultaneous cuts or feedback arc set problems in directed graphs, as well as new tractable cases with algorithms based on the recently introduced flow augmentation technique. Given the intractability of MinCSP$(A)$ in general, we then consider (parameterized) approximation algorithms and present a factor-$2$ fpt-approximation algorithm.
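The parameterization by the number $k$ of deleted constraints can be illustrated with a brute-force toy (a hypothetical finite pool of candidate intervals and a subset of the 13 basic relations; the actual MinCSP ranges over all rational endpoints and the paper's algorithms are far more sophisticated):

```python
from itertools import combinations, product

# five of the 13 Allen basic relations, on closed intervals (a, b)
REL = {
    "precedes": lambda i, j: i[1] < j[0],
    "meets":    lambda i, j: i[1] == j[0],
    "overlaps": lambda i, j: i[0] < j[0] < i[1] < j[1],
    "during":   lambda i, j: j[0] < i[0] and i[1] < j[1],
    "equals":   lambda i, j: i == j,
}

def satisfiable(n_vars, constraints, pool):
    """Can intervals from the finite pool be assigned to the variables so
    that every constraint (u, v, relation) holds?"""
    return any(all(REL[r](assign[u], assign[v]) for (u, v, r) in constraints)
               for assign in product(pool, repeat=n_vars))

def min_csp(n_vars, constraints, pool, k_max):
    """Smallest k <= k_max such that deleting k constraints leaves a
    satisfiable instance: brute force over deletion sets, mirroring the
    parameterization by the number of unsatisfied constraints."""
    for k in range(k_max + 1):
        for deleted in combinations(range(len(constraints)), k):
            kept = [c for i, c in enumerate(constraints) if i not in deleted]
            if satisfiable(n_vars, kept, pool):
                return k
    return None

pool = [(0, 1), (1, 2), (0, 3), (2, 4)]
constraints = [(0, 1, "precedes"),   # x before y ...
               (1, 0, "precedes"),   # ... and y before x: jointly unsatisfiable
               (0, 2, "precedes")]
k_min = min_csp(3, constraints, pool, k_max=2)
print(k_min)  # 1: deleting one of the two conflicting constraints suffices
```

The brute force costs $\binom{m}{k}$ satisfiability checks; the FPT results in the paper achieve $f(k)\cdot\text{poly}(n)$ time without enumerating deletion sets naively.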

This article describes a numerical method based on the dual reciprocity boundary element method (DRBEM) for solving some well-known nonlinear parabolic partial differential equations (PDEs). The equations include the classic and generalized Fisher's equations, the Allen-Cahn equation, the Newell-Whitehead equation, the FitzHugh-Nagumo equation, and the generalized FitzHugh-Nagumo equation with time-dependent coefficients. The concept of dual reciprocity is used to convert the domain integral to the boundary, which leads to an integration-free method. We employ a time-stepping scheme to approximate the time derivative, and linear radial basis functions (RBFs) are used as approximating functions in the presented method. The nonlinear terms are treated iteratively within each time step. The developed formulation is verified on several numerical test examples, and the results are compared with analytical solutions to confirm the accuracy and efficiency of the presented scheme.
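The RBF ingredient alone can be sketched as follows (1D interpolation with the linear RBF $\varphi(r) = r$; the DRBEM itself additionally involves boundary integrals of the fundamental solution, which are omitted here):

```python
import numpy as np

def rbf_interpolate(centers, values, x_eval):
    """Interpolation with linear radial basis functions phi(r) = r:
    solve A c = f with A_ij = |x_i - x_j| (nonsingular for distinct 1D
    points), then evaluate s(x) = sum_j c_j * |x - x_j|."""
    A = np.abs(centers[:, None] - centers[None, :])
    c = np.linalg.solve(A, values)
    return np.abs(x_eval[:, None] - centers[None, :]) @ c

centers = np.linspace(0.0, 1.0, 9)
f = np.sin(2.0 * np.pi * centers)
s = rbf_interpolate(centers, f, np.linspace(0.0, 1.0, 101))  # dense evaluation
err = np.max(np.abs(rbf_interpolate(centers, f, centers) - f))
print(err)  # near machine precision: the interpolant reproduces the data
```

Linear RBFs keep the interpolation matrix simple and well behaved, which is one reason they are a common choice for the dual reciprocity approximation of the domain terms.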

Elliptic interface boundary value problems play a major role in numerous applications involving heat, fluids, materials, and proteins, to name a few. As an example, in implicit variational solvation, for the construction of biomolecular shapes, the electrostatic contributions satisfy the Poisson-Boltzmann equation with discontinuous dielectric constants across the interface. When interface motions are involved, one often needs not only accurate solution values, but accurate derivatives as well, such as the normal derivatives at the interface. We introduce here the Compact Coupling Interface Method (CCIM), a finite difference method for the elliptic interface problem with interfacial jump conditions. The CCIM can calculate solution values and their derivatives up to second-order accuracy in arbitrary ambient space dimensions. It combines elements of Chern and Shu's Coupling Interface Method and Mayo's approach for elliptic interface boundary value problems, leading to more compact finite difference stencils that are applicable to more general situations. Numerical results on a variety of geometric interfacial shapes and on complex protein molecules in three dimensions support the efficacy of our approach and reveal advantages in accuracy and robustness.

This study aims to resolve the over-reliance on rank-estimation strategies in standard tensor-factorization-based tensor recovery and the large computational cost of standard t-SVD-based tensor recovery. To this end, we propose a new tensor norm with a dual low-rank constraint, which utilizes the low-rank prior and rank information at the same time. In the proposed tensor norm, a series of surrogate functions of the tensor tubal rank can be used to better harness the low-rankness within tensor data. We prove theoretically that the resulting tensor completion model effectively avoids the performance degradation caused by inaccurate rank estimation. Meanwhile, thanks to the proposed dual low-rank constraint, the t-SVD of a smaller tensor, rather than the original large one, is computed via a simple trick. Based on this, the total cost of each iteration of the optimization algorithm is reduced to $\mathcal{O}(n^3\log n + kn^3)$ from the $\mathcal{O}(n^4)$ of standard methods, where $k$ is an estimate of the true tensor rank and is far smaller than $n$. Our method was evaluated on synthetic and real-world data, and it demonstrates superior performance and efficiency over several existing state-of-the-art tensor completion methods.
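The underlying t-SVD computation (an FFT along the third mode followed by slice-wise SVDs) can be sketched as follows; this is the standard construction whose per-iteration cost the paper reduces, not the proposed dual-constraint model itself:

```python
import numpy as np

def tprod(A, B):
    """t-product of 3-way tensors: FFT along mode 3, slice-wise matrix
    products in the Fourier domain, then inverse FFT."""
    Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    Cf = np.einsum('irk,rjk->ijk', Af, Bf)
    return np.fft.ifft(Cf, axis=2).real

def tsvd_stats(T, tol=1e-10):
    """Tubal rank and tensor nuclear norm via the t-SVD: FFT along mode 3,
    then an SVD of every frontal slice in the Fourier domain."""
    Tf = np.fft.fft(T, axis=2)
    svals = [np.linalg.svd(Tf[:, :, k], compute_uv=False) for k in range(T.shape[2])]
    tubal_rank = max(int((s > tol * s[0]).sum()) for s in svals)
    tnn = sum(s.sum() for s in svals) / T.shape[2]   # tensor nuclear norm
    return tubal_rank, tnn

# a tensor of tubal rank 2, built as the t-product of two random factors
rng = np.random.default_rng(0)
n, r = 8, 2
T = tprod(rng.standard_normal((n, r, n)), rng.standard_normal((r, n, n)))
rank, tnn = tsvd_stats(T)
print(rank)  # 2
```

The slice-wise SVDs dominate the cost for an $n \times n \times n$ tensor; shrinking the tensor on which they are run is exactly where the paper's $\mathcal{O}(n^4) \to \mathcal{O}(n^3\log n + kn^3)$ saving comes from.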

Integral equations (IEs) are equations that model spatiotemporal systems with non-local interactions. They have found important applications throughout theoretical and applied sciences, including in physics, chemistry, biology, and engineering. While efficient algorithms exist for solving given IEs, no method exists that can learn an IE and its associated dynamics from data alone. In this paper, we introduce Neural Integral Equations (NIE), a method that learns an unknown integral operator from data through an IE solver. We also introduce Attentional Neural Integral Equations (ANIE), where the integral is replaced by self-attention, which improves scalability, capacity, and results in an interpretable model. We demonstrate that (A)NIE outperforms other methods in both speed and accuracy on several benchmark tasks in ODE, PDE, and IE systems of synthetic and real-world data.
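For context, a classical solver for a *given* Fredholm IE of the second kind is the Nyström method (a generic sketch; (A)NIE instead learns the unknown integral operator from data):

```python
import numpy as np

def nystrom_solve(kernel, f, lam, a, b, n):
    """Nystrom method for the Fredholm integral equation of the second kind
        u(x) = f(x) + lam * int_a^b K(x, t) u(t) dt:
    discretize the integral with trapezoid weights and solve the dense
    linear system (I - lam * K * W) u = f at the quadrature nodes."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5                                     # trapezoidal weights
    K = kernel(x[:, None], x[None, :])
    u = np.linalg.solve(np.eye(n) - lam * K * w[None, :], f(x))
    return x, u

# toy problem with known solution u(x) = x:
#   u(x) = x - x/3 + int_0^1 (x*t) u(t) dt, since int_0^1 t*t dt = 1/3
x, u = nystrom_solve(lambda s, t: s * t, lambda s: s - s / 3.0, 1.0, 0.0, 1.0, 201)
print(np.max(np.abs(u - x)))  # small discretization error
```

The gap the abstract identifies is precisely that solvers like this assume $K$ is known; NIE replaces the hand-specified kernel with a learned operator, and ANIE replaces the integral with self-attention.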

The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of NDEs. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
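The residual-network-as-discretisation correspondence can be sketched directly (a toy tanh vector field, chosen for illustration and not taken from the thesis):

```python
import numpy as np

def residual_net(x0, weights, h):
    """A residual network read as the explicit-Euler discretisation of the
    neural ODE dx/dt = f_theta(x):  x_{k+1} = x_k + h * tanh(W @ x_k),
    with one residual block per Euler step."""
    x = x0
    for W in weights:
        x = x + h * np.tanh(W @ x)
    return x

# shared weights, shrinking step size: as depth grows, the discrete network
# converges to the continuous-time solution x(1) of the underlying ODE
rng = np.random.default_rng(0)
W = 0.5 * rng.standard_normal((3, 3))
x0 = np.ones(3)
coarse = residual_net(x0, [W] * 10, 1.0 / 10)
fine = residual_net(x0, [W] * 1000, 1.0 / 1000)
ref = residual_net(x0, [W] * 4000, 1.0 / 4000)       # near-converged reference
print(np.linalg.norm(coarse - ref), np.linalg.norm(fine - ref))  # error shrinks with h
```

Reading the skip connection as an Euler step is what lets the NDE literature import ODE theory (stability, adaptive solvers, adjoint backpropagation) into architectures like this one.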
