In this paper we develop a Neumann-Neumann type domain decomposition method for elliptic problems on metric graphs. We describe the iteration in the continuous and discrete settings and rewrite the latter as a preconditioner for the Schur complement system. We then formulate the discrete iteration as an abstract additive Schwarz iteration and prove that it converges to the finite element solution at a rate that is independent of the finite element mesh size. We show that the condition number of the Schur complement is also independent of the finite element mesh size. We provide an implementation, test it on various examples of interest, and compare it to other preconditioners.
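
To make the role of the Schur complement system concrete, here is a minimal, hypothetical sketch (not the paper's Neumann-Neumann construction): a discrete elliptic system is partitioned into interior (I) and interface (G) unknowns, the interface problem is solved matrix-free with preconditioned CG, and the preconditioner M is a placeholder where a Neumann-Neumann-type operator would be supplied.

```python
# Hypothetical sketch: reduce A u = f to the interface (Schur complement) system
# S u_G = g and solve it with preconditioned CG. The block partitioning and the
# preconditioner M are illustrative placeholders, not the paper's construction.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def schur_solve(A_II, A_IG, A_GI, A_GG, f_I, f_G, M=None):
    """Solve S u_G = g with S = A_GG - A_GI A_II^{-1} A_IG and
    g = f_G - A_GI A_II^{-1} f_I, then recover the interior unknowns."""
    solve_II = spla.factorized(sp.csc_matrix(A_II))            # reusable interior solver

    def apply_S(x):                                             # matrix-free action of S
        return A_GG @ x - A_GI @ solve_II(A_IG @ x)

    n_G = A_GG.shape[0]
    S_op = spla.LinearOperator((n_G, n_G), matvec=apply_S, dtype=float)
    g = f_G - A_GI @ solve_II(f_I)

    u_G, info = spla.cg(S_op, g, M=M)                           # M: e.g. a Neumann-Neumann-type preconditioner
    assert info == 0, "CG did not converge"
    u_I = solve_II(f_I - A_IG @ u_G)                            # back-substitute for interior unknowns
    return u_I, u_G
```

Keeping S matrix-free means each CG iteration costs one interior solve, so the quality of the interface preconditioner governs the overall iteration count.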

Related content

Forgedit, a test-time finetuning, text-guided image editing method, is capable of tackling general and complex image editing problems given only the input image and the target text prompt. During the finetuning stage, using the same set of finetuning hyper-parameters every time, for every given image, Forgedit remembers and understands the input image in 30 seconds. During the editing stage, the workflow of Forgedit might seem complicated; in fact, however, the editing process of Forgedit is no more complex than that of the previous SOTA method Imagic, yet it completely solves the overfitting problem of Imagic. In this paper, we elaborate on the workflow of the Forgedit editing stage with examples and show how to tune the hyper-parameters efficiently to obtain ideal editing results.

We present DEF-oriCORN, a framework for language-directed manipulation tasks. By leveraging a novel object-based scene representation and a diffusion-model-based state estimation algorithm, our framework enables efficient and robust manipulation planning in response to verbal commands, even in tightly packed environments with sparse camera views and without any demonstrations. Unlike traditional representations, ours affords efficient collision checking and language grounding. Compared to state-of-the-art baselines, our framework achieves superior estimation and motion planning performance from sparse RGB images and generalizes zero-shot to real-world scenarios with diverse materials, including transparent and reflective objects, despite being trained exclusively in simulation. Our code for data generation, training, and inference, along with pre-trained weights, is publicly available at: //sites.google.com/view/def-oricorn/home.

In this paper, we introduce a novel, data-driven approach for solving high-dimensional Bayesian inverse problems based on partial differential equations (PDEs), called Weak Neural Variational Inference (WNVI). The method complements real measurements with virtual observations derived from the physical model. In particular, weighted residuals are employed as probes of the governing PDE in order to formulate and solve a Bayesian inverse problem without ever formulating or solving a forward model. The formulation treats the state variables of the physical model as latent variables, inferred using Stochastic Variational Inference (SVI) along with the usual unknowns. The approximate posterior uses neural networks to approximate the inverse mapping from state variables to the unknowns. We illustrate the proposed method in a biomedical setting, where we infer spatially varying material properties from noisy tissue deformation data. We demonstrate that WNVI is not only as accurate as, and more efficient than, traditional methods that rely on repeatedly solving the (non)linear forward problem as a black box, but that it can also handle ill-posed forward problems (e.g., with insufficient boundary conditions).
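
As an illustration of the idea of virtual observations (with generic notation, not necessarily the exact formulation used in the paper), for an elliptic model problem $-\nabla\cdot(x\,\nabla u) = f$ one can probe the PDE with weighting functions $w_j$ and define weighted residuals

$$
r_j(u, x) = \int_\Omega \nabla w_j \cdot \big(x\,\nabla u\big)\,\mathrm{d}\Omega - \int_\Omega w_j\, f\,\mathrm{d}\Omega, \qquad j = 1, \dots, J,
$$

which are treated as noisy virtual observations of zero through a Gaussian virtual likelihood

$$
p(\hat r \mid u, x) = \prod_{j=1}^{J} \mathcal{N}\big(0 \mid r_j(u, x), \sigma_r^2\big),
$$

so that the SVI objective (ELBO) combines this term with the likelihood of the real measurements and an approximate posterior $q_\phi(u, x)$; no forward solve is required because the PDE enters only through the residuals.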

The aim of this paper is to provide a coherent framework for transforming boundary pairs of digital images from one resolution to another without knowledge of the full images. It is intended to facilitate the simultaneous usage of multiresolution processing and boundary reduction, primarily for algorithms in computational dynamics and computational control theory.

In this paper we consider functional data with heterogeneity in time and in the population. We propose a mixture model with a segmentation of time to represent this heterogeneity while keeping the functional structure. The maximum likelihood estimator is considered and proved to be identifiable and consistent. In practice, an EM algorithm, combined with dynamic programming for the maximization step, is used to approximate the maximum likelihood estimator. The method is illustrated on a simulated dataset and applied to a real dataset of electricity consumption.
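
For illustration, the maximization over segment boundaries in the M-step can be carried out by a standard dynamic program over contiguous segments; the sketch below is a generic version of such a program (the per-segment cost matrix and the interface are hypothetical, not the paper's code).

```python
# Hypothetical sketch of the dynamic program for splitting time steps 0..T-1 into
# K contiguous segments, minimizing a sum of per-segment costs (e.g. a negative
# expected complete log-likelihood in the EM M-step). cost[a, b] is the cost of
# the segment covering [a, b), with a < b.
import numpy as np

def optimal_segmentation(cost, T, K):
    """cost: (T+1) x (T+1) array; returns (best total cost, list of (start, end) segments)."""
    dp = np.full((K + 1, T + 1), np.inf)      # dp[k, t]: best cost of splitting [0, t) into k segments
    back = np.zeros((K + 1, T + 1), dtype=int)
    dp[0, 0] = 0.0
    for k in range(1, K + 1):
        for t in range(k, T + 1):
            # last segment is [s, t); the previous k-1 segments cover [0, s)
            cands = dp[k - 1, k - 1:t] + cost[k - 1:t, t]
            best = int(np.argmin(cands))
            dp[k, t] = cands[best]
            back[k, t] = best + (k - 1)
    # recover the breakpoints by walking back from (K, T)
    segments, t = [], T
    for k in range(K, 0, -1):
        s = back[k, t]
        segments.append((s, t))
        t = s
    return dp[K, T], segments[::-1]
```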

The paper focuses on first-order invariant-domain preserving approximations of hyperbolic systems. We propose a new way to estimate the artificial viscosity that has to be added to make explicit, conservative, consistent numerical methods invariant-domain preserving and entropy inequality compliant. Instead of computing an upper bound on the maximum wave speed in Riemann problems, we estimate a minimum wave speed in the said Riemann problems such that the approximation satisfies predefined invariant-domain properties and predefined entropy inequalities. This technique eliminates non-essential fast waves from the construction of the artificial viscosity, while preserving pre-assigned invariant-domain properties and entropy inequalities.
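
For context, a hedged sketch of the type of scheme being discussed (the notation is generic and may differ from the paper's): a first-order graph-viscosity update of the form

$$
\mathsf{U}_i^{n+1} = \mathsf{U}_i^{n} - \frac{\Delta t}{m_i} \sum_{j \in \mathcal{I}(i)\setminus\{i\}} \Big( \mathbf{F}(\mathsf{U}_j^{n})\, \mathbf{c}_{ij} - d_{ij}^{n}\,\big(\mathsf{U}_j^{n} - \mathsf{U}_i^{n}\big) \Big), \qquad d_{ij}^{n} = \lambda_{ij}^{n}\, \|\mathbf{c}_{ij}\|,
$$

where $\lambda_{ij}^{n}$ is classically taken as an upper bound on the maximum wave speed of the Riemann problem with data $(\mathsf{U}_i^{n}, \mathsf{U}_j^{n})$; the point of the paper is to replace this upper bound with a smaller wave-speed estimate that is just large enough for the update to satisfy the prescribed invariant-domain properties and entropy inequalities.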

In this paper, a two-dimensional Dirichlet-to-Neumann (DtN) finite element method (FEM) is developed to analyze the scattering of SH guided waves by an interface delamination in a bi-material plate. In the finite element analysis, it is necessary to determine the far-field DtN conditions at virtual boundaries where both displacements and tractions are unknown. In this study, firstly, the scattered waves at the virtual boundaries are represented by a superposition of guided waves with unknown scattering coefficients. Secondly, utilizing mode orthogonality, the unknown tractions at the virtual boundaries are expressed in terms of the unknown scattered displacements there via the scattering coefficients. Thirdly, this relationship is assembled into the global DtN-FEM matrix to solve the problem. The method is simple and elegant: it has advantages in dimension reduction and, compared with traditional FEM, requires no absorbing medium or perfectly matched layer to suppress reflected waves. Furthermore, the reflection and transmission coefficients of each guided mode can be obtained directly without post-processing. The proposed DtN-FEM is compared with the boundary element method (BEM) and validated on several benchmark problems.
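
A schematic (with illustrative symbols, not necessarily those of the paper) of the modal construction behind the DtN condition: on a virtual boundary at $x = x_B$ the scattered displacement and traction are expanded in the guided modes,

$$
u^{sc}(x_B, z) = \sum_{n} a_n\, U_n(z)\, e^{\mathrm{i} k_n x_B}, \qquad
t^{sc}(x_B, z) = \sum_{n} a_n\, T_n(z)\, e^{\mathrm{i} k_n x_B},
$$

and eliminating the scattering coefficients $a_n$ via the orthogonality of the mode pairs $\{U_n, T_n\}$ yields a linear map $t^{sc} = \mathcal{K}\, u^{sc}$ (the DtN map) on the virtual boundary, whose discrete counterpart is what gets assembled into the global FEM matrix.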

This paper presents a graph autoencoder architecture capable of performing projection-based model-order reduction (PMOR) on advection-dominated flows modeled by unstructured meshes. The autoencoder is coupled with the time integration scheme from a traditional deep least-squares Petrov-Galerkin projection and provides the first deployment of a graph autoencoder into a PMOR framework. The presented graph autoencoder is constructed with a two-part process that consists of (1) generating a hierarchy of reduced graphs to emulate the compressive abilities of convolutional neural networks (CNNs) and (2) training a message passing operation at each step in the hierarchy of reduced graphs to emulate the filtering process of a CNN. The resulting framework provides improved flexibility over traditional CNN-based autoencoders because it is extendable to unstructured meshes. To highlight the capabilities of the proposed framework, which is named geometric deep least-squares Petrov-Galerkin (GD-LSPG), we benchmark the method on a one-dimensional Burgers' equation problem with a structured mesh and demonstrate the flexibility of GD-LSPG by deploying it to a two-dimensional Euler equations model that uses an unstructured mesh. The proposed framework provides considerable improvement in accuracy for very low-dimensional latent spaces in comparison with traditional affine projections.
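
To illustrate the two ingredients named above (a hierarchy of reduced graphs and a learned message-passing operation), here is a minimal numpy sketch of one encoder level, assuming mean-aggregation message passing and a precomputed hard assignment of fine nodes to coarse nodes; the paper's actual operators and training procedure are more elaborate.

```python
# Hypothetical sketch of one level of a hierarchical graph encoder: a single
# mean-aggregation message-passing step followed by pooling onto a reduced graph.
# The assignment matrix S (fine nodes -> coarse nodes) is assumed precomputed.
import numpy as np

def message_pass(H, A, W, b):
    """H: (n, d) node features, A: (n, n) adjacency, W: (d, d_out), b: (d_out,)."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)    # mean aggregation over neighbors
    return np.maximum(D_inv * (A_hat @ H) @ W + b, 0.0)   # linear transform + ReLU

def pool(H, A, S):
    """Pool features and connectivity onto the coarse graph; S is (n, n_coarse)."""
    H_c = S.T @ H                                     # aggregate features per coarse node
    A_c = (S.T @ A @ S > 0).astype(float)             # coarse-graph connectivity
    return H_c, A_c

# toy usage with random data
rng = np.random.default_rng(0)
n, d, d_out, n_coarse = 8, 3, 4, 3
H = rng.normal(size=(n, d))
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)                                # symmetrize
np.fill_diagonal(A, 0)                                # self-loops are added in message_pass
S = np.eye(n_coarse)[rng.integers(0, n_coarse, size=n)]   # hard fine-to-coarse assignment
W, b = rng.normal(size=(d, d_out)), np.zeros(d_out)
H_c, A_c = pool(message_pass(H, A, W, b), A, S)
```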

In this paper, we study Physics-Informed Neural Networks (PINNs) for approximating solutions to one-dimensional boundary value problems for linear elliptic equations and establish error estimates for PINNs that are robust with respect to the magnitudes of the coefficients. In particular, we rigorously establish the existence and uniqueness of solutions using Sobolev space theory based on a variational approach. Deriving $L^2$-contraction estimates, we show that the error, defined as the mean squared difference between the true solution and our trial function at the sample points, is dominated by the training loss. Furthermore, we show that as the magnitudes of the coefficients of the differential equation increase, the error-to-loss ratio rapidly decreases. Our theoretical and experimental results confirm the robustness of the error with respect to the magnitudes of the coefficients.
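
As a concrete, generic instance of the setting (the specific equation and loss below are illustrative, not necessarily the exact form studied): for a 1D linear elliptic boundary value problem $-(a u')' + b u' + c u = f$ on $(0,1)$ with Dirichlet data $u(0) = \alpha$, $u(1) = \beta$, a typical PINN training loss over sample points $x_1, \dots, x_N$ is

$$
\mathcal{L}(\theta) = \frac{1}{N} \sum_{i=1}^{N} \Big| -\big(a\,u_\theta'\big)'(x_i) + b(x_i)\,u_\theta'(x_i) + c(x_i)\,u_\theta(x_i) - f(x_i) \Big|^2 + \lambda \Big( |u_\theta(0) - \alpha|^2 + |u_\theta(1) - \beta|^2 \Big),
$$

and the error analyzed in the paper, $\frac{1}{N}\sum_{i=1}^{N} |u(x_i) - u_\theta(x_i)|^2$, is shown to be dominated by this training loss, with an error-to-loss ratio that decreases as the coefficient magnitudes grow.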

The paper presents a framework for online learning of the Koopman operator using streaming data. Many complex systems for which data-driven modeling and control are sought provide streaming sensor data, the abundance of which can present computational challenges but cannot be ignored. Streaming data can intermittently sample dynamically different regimes or rare events which could be critical to model and control. Using ideas from subspace identification, we present a method in which the Grassmannian distance between the subspace of an extended observability matrix and that of the streaming segment of data is used to assess the 'novelty' of the data. If this distance is above a threshold, the segment is added to an archive and the Koopman operator is updated; otherwise, it is discarded. Our method therefore identifies data from segments of trajectories of a dynamical system that come from different dynamical regimes, prioritizes minimizing the amount of data needed to update the Koopman model, and reduces the number of basis functions by learning them adaptively. By dynamically adjusting the amount of data used and learning the basis functions, our method optimizes the model's accuracy and the system order.
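
A minimal sketch of such a novelty test, assuming the archived data and the new segment are compared through principal angles between rank-$r$ column subspaces (the names, the rank truncation, and the update trigger are illustrative placeholders, not the paper's exact procedure):

```python
# Hypothetical sketch: compare the subspace spanned by the archived data with that
# of a new streaming segment via the Grassmannian distance (principal angles), and
# archive the segment only if it is novel enough to warrant a Koopman update.
import numpy as np

def grassmann_distance(X, Y, r):
    """Distance between the rank-r column subspaces of X and Y (r <= min column counts)."""
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx[:, :r].T @ Qy[:, :r], compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))          # principal angles
    return np.linalg.norm(theta)

def process_segment(segment, archive, r=5, tol=0.5):
    """Append the segment to the archive only if it is 'novel' enough."""
    if archive and grassmann_distance(np.hstack(archive), segment, r) <= tol:
        return archive, False                          # discard: nothing new to learn
    archive.append(segment)                            # keep; caller triggers a Koopman update
    return archive, True
```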
