
For a general class of nonlinear port-Hamiltonian systems we develop a high-order time discretization scheme with certain structure-preservation properties. The possibly infinite-dimensional system under consideration possesses a Hamiltonian function, which represents an energy in the system and is conserved or dissipated along solutions. The numerical scheme is energy-consistent in the sense that the Hamiltonian of the approximate solutions at the time grid points behaves accordingly. This structure-preservation property is achieved by a specific design of a continuous Petrov-Galerkin (cPG) method in time. It coincides with standard cPG methods in special cases, in which the latter are energy-consistent. Examples of port-Hamiltonian ODEs and PDEs are presented to illustrate the framework. In numerical experiments the energy consistency is verified and the convergence behavior is investigated.
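The cPG construction of the abstract is not reproduced here, but the notion of energy consistency it targets can be illustrated with a minimal sketch: the implicit midpoint rule (a standard structure-preserving scheme, used purely as a stand-in for the paper's method) conserves the quadratic Hamiltonian of a harmonic oscillator at the time grid points up to round-off.

```python
import numpy as np

# Minimal sketch, not the paper's cPG scheme: the implicit midpoint rule
# applied to the harmonic oscillator z' = J z, with skew-symmetric
# J = [[0, 1], [-1, 0]] and Hamiltonian H(q, p) = (q^2 + p^2) / 2.
# The update matrix is a Cayley transform, hence orthogonal, so H is
# conserved exactly at the grid points (up to round-off).

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
I = np.eye(2)

def implicit_midpoint_step(z, dt):
    # z_{n+1} = (I - dt/2 J)^{-1} (I + dt/2 J) z_n
    return np.linalg.solve(I - 0.5 * dt * J, (I + 0.5 * dt * J) @ z)

def hamiltonian(z):
    q, p = z
    return 0.5 * (q ** 2 + p ** 2)

z = np.array([1.0, 0.0])
H0 = hamiltonian(z)
for _ in range(1000):
    z = implicit_midpoint_step(z, 0.01)

drift = abs(hamiltonian(z) - H0)
print(drift)  # close to machine precision
```

A non-conservative scheme such as explicit Euler would instead show a systematic energy drift over the same number of steps, which is exactly the behavior energy-consistent discretizations rule out.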

Related Content

The estimation of directed couplings between the nodes of a network from indirect measurements is a central methodological challenge in scientific fields such as neuroscience, systems biology, and economics. Unfortunately, the problem is generally ill-posed due to the possible presence of unknown delays in the measurements. In this paper, we offer a solution to this problem using a variational Bayes framework, in which the uncertainty over the delays is marginalized in order to obtain conservative coupling estimates. To overcome the well-known overconfidence of classical variational methods, we use a hybrid-VI scheme in which the (possibly flat or multimodal) posterior over the measurement parameters is estimated using a forward KL loss, while the (nearly convex) conditional posterior over the couplings is estimated using highly scalable gradient-based VI. In our ground-truth experiments, we show that the method provides reliable and conservative estimates of the couplings, greatly outperforming similar methods such as regression DCM.

A fully discrete semi-convex-splitting finite-element scheme with stabilization for a Cahn-Hilliard cross-diffusion system is analyzed. The system consists of parabolic fourth-order equations for the volume fraction of the fiber phase and solute concentration, modeling pre-patterning of lymphatic vessel morphology. The existence of discrete solutions is proved, and it is shown that the numerical scheme is energy stable up to stabilization, conserves the solute mass, and preserves the lower and upper bounds of the fiber phase fraction. Numerical experiments in two space dimensions using FreeFEM illustrate the phase segregation and pattern formation.

This work develops two fast randomized algorithms for computing the generalized tensor singular value decomposition (GTSVD) based on the tubal product (t-product). The random projection method is used to capture the important actions of the underlying data tensors and to form small sketches of the original data tensors, which are easier to handle. Due to the small size of the sketch tensors, deterministic approaches can be applied to compute their GTSVDs. Then, from the GTSVDs of the small sketch tensors, the GTSVD of the original large-scale data tensors is recovered. Experiments are conducted to show the effectiveness of the proposed approach.
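The sketch-then-decompose idea behind this approach can be shown in the simpler matrix case (the paper itself works with tensors under the t-product, which is not reproduced here): a random projection captures the dominant actions of the data, a deterministic SVD is applied to the small sketch, and an approximate decomposition of the original data is recovered from it.

```python
import numpy as np

# Hedged sketch of randomized sketching for the SVD (matrix case only;
# the paper's t-product/GTSVD machinery is not reproduced here).

rng = np.random.default_rng(0)

def randomized_svd(A, rank, oversample=10):
    m, n = A.shape
    # Random test matrix; A @ Omega captures the dominant column space of A.
    Omega = rng.standard_normal((n, rank + oversample))
    Q, _ = np.linalg.qr(A @ Omega)        # orthonormal basis for the sketch
    B = Q.T @ A                           # small (rank+oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

# Exactly low-rank test data, so the sketch recovers it almost perfectly.
A = rng.standard_normal((200, 15)) @ rng.standard_normal((15, 100))
U, s, Vt = randomized_svd(A, rank=15)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(err)  # near machine precision for exactly rank-15 data
```

The deterministic SVD is only ever applied to the small matrix `B`, which is the source of the speedup when the original data is large.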

This study proposes a novel image contrast enhancement method based on image projection onto the squared eigenfunctions of the two-dimensional Schrödinger operator. This projection depends on a design parameter γ, which is proposed to control the pixel intensity during image reconstruction. The performance of the proposed method is investigated through its application to color images. The selection of γ values is performed using k-means clustering, which helps preserve the spatial adjacency information of the image. Furthermore, multi-objective optimization using the Non-dominated Sorting Genetic Algorithm II (NSGA-II) is proposed to select the optimal values of γ and of the semi-classical parameter h of the 2DSCSA. The results demonstrate the effectiveness of the proposed method in enhancing image contrast while preserving the inherent characteristics of the original image, producing the desired enhancement with almost no artifacts.
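The k-means-based selection step can be sketched in isolation: cluster the pixel intensities of a toy grayscale image and assign one γ value per cluster. The cluster-to-γ mapping below is purely illustrative (an assumption for the sketch, not taken from the paper), e.g. stronger enhancement in darker regions.

```python
import numpy as np

# Hedged sketch: per-cluster selection of a design parameter gamma via a
# small 1-D k-means on pixel intensities.  The gamma values themselves are
# illustrative assumptions, not values from the paper.

rng = np.random.default_rng(1)

def kmeans_1d(x, k, iters=50):
    # Spread initial centers across the intensity range via quantiles.
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = x[labels == c].mean()
    return labels, centers

image = rng.uniform(0.0, 1.0, size=(32, 32))   # toy grayscale image
labels, centers = kmeans_1d(image.ravel(), k=3)

# Assign one illustrative gamma per intensity cluster: darkest -> 1.5,
# mid -> 1.0, brightest -> 0.7.
gamma_by_cluster = np.empty(3)
gamma_by_cluster[np.argsort(centers)] = [1.5, 1.0, 0.7]
gamma_map = gamma_by_cluster[labels].reshape(image.shape)
print(gamma_map.shape)  # one gamma per pixel, constant on each cluster
```

Because every pixel in a cluster shares the same γ, spatially adjacent pixels with similar intensities are reconstructed consistently, which is the adjacency-preservation effect the abstract refers to.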

This work characterizes equivariant polynomial functions from tuples of tensor inputs to tensor outputs. Loosely motivated by physics, we focus on equivariant functions with respect to the diagonal action of the orthogonal group on tensors. We show how to extend this characterization to other linear algebraic groups, including the Lorentz and symplectic groups. Our goal behind these characterizations is to define equivariant machine learning models. In particular, we focus on the sparse vector estimation problem. This problem has been broadly studied in the theoretical computer science literature, and explicit spectral methods, derived by techniques from sum-of-squares, can be shown to recover sparse vectors under certain assumptions. Our numerical results show that the proposed equivariant machine learning models can learn spectral methods that outperform the best theoretically known spectral methods in some regimes. The experiments also suggest that learned spectral methods can solve the problem in settings that have not yet been theoretically analyzed. This is an example of a promising direction in which theory can inform machine learning models and machine learning models could inform theory.

We develop adaptive discretization algorithms for locally optimal experimental design of nonlinear prediction models. With these algorithms, we refine and improve a pertinent state-of-the-art algorithm in various respects. We establish novel termination, convergence, and convergence rate results for the proposed algorithms. In particular, we prove a sublinear convergence rate result under very general assumptions on the design criterion and, most notably, a linear convergence result under the additional assumptions that the design criterion is strongly convex and the design space is finite. Additionally, we prove finite termination at approximately optimal designs, including upper bounds on the number of iterations until termination. Finally, we illustrate the practical use of the proposed algorithms by means of two application examples from chemical engineering: one with a stationary model and one with a dynamic model.

A new coupling rule for the Lighthill-Whitham-Richards model at merging junctions is introduced that imposes preservation of the ratio between the inflow from a given road and the total inflow into the junction. This rule is considered both in the context of the original traffic flow model and in a relaxation setting, giving rise to two different Riemann solvers that are discussed for merging 2-to-1 junctions. Numerical experiments are shown suggesting that the relaxation-based Riemann solver is capable of suitable predictions in both free-flow and congestion scenarios without relying on flow maximization.
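The ratio-preservation idea can be sketched as a toy flux allocation at a 2-to-1 merge (the function and variable names below are illustrative, not from the paper): when the combined demand of the incoming roads exceeds the supply of the outgoing road, both inflows are scaled by the same factor, so each road's share of the total inflow is preserved instead of being redistributed by flow maximization.

```python
# Hedged sketch: toy ratio-preserving allocation at a 2-to-1 merge.
# d1, d2 are the demands of the incoming roads, supply is the capacity
# of the outgoing road (all names are illustrative assumptions).

def merge_fluxes(d1, d2, supply):
    total = d1 + d2
    if total <= supply:            # free flow: all demand passes through
        return d1, d2
    scale = supply / total         # congestion: scale both inflows equally,
    return scale * d1, scale * d2  # preserving the ratio d1 : d2

# Congested case: demands 0.6 and 0.3 against supply 0.6.
q1, q2 = merge_fluxes(0.6, 0.3, 0.6)
print(q1, q2)  # the 2:1 ratio between the incoming roads is preserved
```

A flow-maximizing solver would instead fill the outgoing road according to fixed priority parameters, which can starve one incoming road entirely; the ratio rule distributes congestion proportionally.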

We consider the application of the generalized Convolution Quadrature (gCQ) to approximate the solution of an important class of sectorial problems. The gCQ is a generalization of Lubich's Convolution Quadrature (CQ) that allows for variable steps. The available stability and convergence theory for the gCQ requires unrealistic regularity assumptions on the data, which do not hold in many applications of interest, such as the approximation of subdiffusion equations. It is well known that for insufficiently smooth data the original CQ, with uniform steps, exhibits an order reduction close to the singularity. We generalize the analysis of the gCQ to data satisfying realistic regularity assumptions and provide sufficient conditions for stability and convergence on arbitrary sequences of time points. We consider the particular case of graded meshes and show how to choose them optimally, according to the behaviour of the data. An important advantage of the gCQ method is that it allows for a fast and memory-efficient implementation. We describe how the fast and oblivious gCQ can be implemented and illustrate our theoretical results with several numerical experiments.
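Graded meshes of the common form t_j = T (j/N)^γ concentrate time points near t = 0, where solutions of subdiffusion-type problems are typically singular; γ = 1 recovers uniform steps. The optimal grading exponent depends on the data regularity and is not reproduced here, but the construction itself is a one-liner:

```python
import numpy as np

# Hedged sketch: graded time mesh t_j = T * (j/N)**gamma.  With gamma > 1
# the steps are small near the singularity at t = 0 and grow towards T;
# gamma = 1 gives uniform steps.  The optimal gamma from the paper's
# analysis depends on the data and is not reproduced here.

def graded_mesh(T, N, gamma):
    j = np.arange(N + 1)
    return T * (j / N) ** gamma

t = graded_mesh(T=1.0, N=10, gamma=2.0)
steps = np.diff(t)
print(t[1], steps[0] < steps[-1])  # first point near 0, steps increase
```

Variable-step methods like the gCQ can run directly on such a sequence of time points, whereas the original CQ is tied to the uniform mesh γ = 1 and therefore suffers the order reduction mentioned above.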

This article presents a simple distance-field-based collision detection scheme to detect collisions of an object with its environment. Through the clever use of back-face culling and z-buffering, the solution is precise and very easy to implement. Since the complete scheme relies on the graphics pipeline, the collision detection is performed by the GPU. It is easy to use and only requires the meshes of the object and the scene; it does not rely on special representations. It can natively handle collisions with primitives emitted directly on the pipeline. Our scheme is efficient, and we present several possible variants (notably an adaptation to certain particle systems). The main limitation of our scheme is that it imposes some restrictions on the shape of the considered objects, but not on their environment. We evaluate our scheme by first comparing it with the FCL (Flexible Collision Library), then testing a more complete scene (involving geometry, tessellation, and compute shaders), and finally illustrating it with a particle system.

Graph-centric artificial intelligence (graph AI) has achieved remarkable success in modeling interacting systems prevalent in nature, from dynamical systems in biology to particle physics. The increasing heterogeneity of data calls for graph neural architectures that can combine multiple inductive biases. However, combining data from various sources is challenging because the appropriate inductive bias may vary by data modality. Multimodal learning methods fuse multiple data modalities while leveraging cross-modal dependencies to address this challenge. Here, we survey 140 studies in graph-centric AI and find that diverse data types are increasingly brought together using graphs and fed into sophisticated multimodal models. These models stratify into image-, language-, and knowledge-grounded multimodal learning. Based on this categorization, we put forward an algorithmic blueprint for multimodal graph learning. The blueprint serves as a way to group state-of-the-art architectures that treat multimodal data by appropriately choosing four components. This effort can pave the way toward standardizing the design of sophisticated multimodal architectures for highly complex real-world problems.
