
Fast and accurate prediction of complex physical dynamics is a significant challenge across many applications, and real-time prediction on resource-constrained hardware is even more demanding in real-world settings. The deep operator network (DeepONet) has recently been proposed as a framework for learning nonlinear mappings between function spaces. However, DeepONet requires many parameters and incurs a high computational cost when learning operators, particularly those with complex (discontinuous or non-smooth) target functions. This study proposes HyperDeepONet, which uses the expressive power of a hypernetwork to learn complex operators with a smaller set of parameters. DeepONet and its variants can be viewed as methods for injecting information about the input function into the target function; from this perspective, these models are particular cases of HyperDeepONet. We analyze the complexity of DeepONet and conclude that HyperDeepONet requires lower complexity to reach a desired accuracy in operator learning. HyperDeepONet successfully learns various operators with fewer computational resources than competing benchmarks.
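To make the architecture concrete, here is a minimal sketch of the hypernetwork idea described above, written in PyTorch under illustrative assumptions (100 sensor points, a two-layer target network, and the layer sizes below are not from the paper): a hypernetwork maps the discretized input function to all weights of a small target network, which is then evaluated at query points.

```python
# A minimal HyperDeepONet-style sketch (illustrative sizes, not the paper's).
import torch
import torch.nn as nn

class HyperDeepONetSketch(nn.Module):
    def __init__(self, m=100, hidden=20, y_dim=1):
        super().__init__()
        self.hidden, self.y_dim = hidden, y_dim
        # Total parameter count of the small target network.
        n_params = y_dim * hidden + hidden + hidden + 1
        self.hyper = nn.Sequential(
            nn.Linear(m, 128), nn.ReLU(), nn.Linear(128, n_params)
        )

    def forward(self, u, y):
        # u: (batch, m) sensor values of the input function
        # y: (batch, n_query, y_dim) query points of the target function
        theta = self.hyper(u)
        h, d = self.hidden, self.y_dim
        i = 0
        W1 = theta[:, i:i + d * h].reshape(-1, d, h); i += d * h
        b1 = theta[:, i:i + h].unsqueeze(1);          i += h
        W2 = theta[:, i:i + h].reshape(-1, h, 1);     i += h
        b2 = theta[:, i:i + 1].unsqueeze(1)
        z = torch.tanh(y @ W1 + b1)        # hidden layer of the target net
        return (z @ W2 + b2).squeeze(-1)   # G(u)(y), shape (batch, n_query)

G = HyperDeepONetSketch()
out = G(torch.randn(8, 100), torch.rand(8, 50, 1))  # -> shape (8, 50)
```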

Related content

Signal processing in the time-frequency plane has a long history and remains a field of methodological innovation. For instance, detection and denoising based on the zeros of the spectrogram have been proposed since 2015, in contrast to a long history of focusing on the spectrogram's larger values. Yet, unlike neighboring fields such as optimization and machine learning, time-frequency signal processing lacks widely adopted benchmarking tools. In this work, we contribute an open-source, Python-based toolbox termed MCSM-Benchs for benchmarking multi-component signal analysis methods, and we demonstrate it on three time-frequency benchmarks. First, we compare different methods for signal detection based on the zeros of the spectrogram, including unexplored variations of previously proposed detection tests. Second, we compare zero-based denoising methods to both classical and novel methods based on large values and ridges of the spectrogram. Finally, we compare the denoising performance of these methods against typical spectrogram thresholding strategies, in terms of the post-processing artifacts commonly referred to as musical noise. At a low level, the obtained results provide new insights into the assessed approaches and, in particular, point to research directions for further developing zero-based methods. At a higher level, our benchmarks exemplify the benefits of a public, collaborative, common framework for benchmarking.
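As a toy illustration of the central object here (not MCSM-Benchs itself), the sketch below computes an STFT with SciPy and marks strict local minima of the spectrogram magnitude, which approximate the zeros that zero-based detectors operate on; the signal and window length are arbitrary choices.

```python
# Locating approximate spectrogram zeros as strict local minima (toy example).
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(0)
t = np.arange(1024)
signal = np.cos(2 * np.pi * 0.1 * t) + 0.5 * rng.standard_normal(t.size)

_, _, Z = stft(signal, nperseg=128)
S = np.abs(Z)  # spectrogram magnitude

# A pixel is a candidate zero if it is smaller than all 8 neighbours.
# (np.roll wraps at the edges, which is acceptable for a toy example.)
mins = np.ones(S.shape, dtype=bool)
for di in (-1, 0, 1):
    for dj in (-1, 0, 1):
        if di == dj == 0:
            continue
        mins &= S < np.roll(np.roll(S, di, axis=0), dj, axis=1)

print("candidate spectrogram zeros:", np.argwhere(mins).shape[0])
```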

Autoencoders (AEs) are a simple yet powerful class of neural networks that compress data by projecting the input into a low-dimensional latent space (LS). While the LS is shaped by minimizing the loss function during training, its properties and topology are not controlled directly. In this paper we focus on the properties of the AE latent space and propose two methods for obtaining an LS with a desired topology, which we call an LS configuration. The proposed methods include loss configuration, using a geometric loss term that acts directly in the LS, and encoder configuration. We show that the former reliably yields an LS with the desired configuration by defining the positions and shapes of LS clusters for a supervised AE (SAE). Knowing the LS configuration allows a similarity measure to be defined in the LS, so that labels can be predicted, or similarity estimated, for multiple inputs without using decoders or classifiers. We also show that this leads to more stable and interpretable training. An SAE trained for clothes-texture classification using the proposed method generalizes well to unseen data from the LIP, Market1501, and WildTrack datasets without fine-tuning, and even allows similarity to be evaluated for unseen classes. We further illustrate the advantages of pre-configured LS similarity estimation with cross-dataset searches and with text-based search using a text query without language models.
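A minimal sketch of the loss-configuration idea, assuming PyTorch and hypothetical per-class anchor positions, network sizes, and a 2-D latent space (all illustrative, not the paper's settings): a geometric term pulls each latent code toward a pre-defined class anchor, and at inference labels come from latent distances alone.

```python
# Supervised AE with a geometric loss term acting directly in the LS (sketch).
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 2))
dec = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 784))
anchors = torch.tensor([[-2.0, 0.0], [2.0, 0.0], [0.0, 2.0]])  # 3 classes

def loss_fn(x, labels, lam=1.0):
    z = enc(x)
    recon = nn.functional.mse_loss(dec(z), x)
    # Geometric term: pull each code toward its class anchor in the LS.
    geometric = ((z - anchors[labels]) ** 2).sum(dim=1).mean()
    return recon + lam * geometric

def predict(x):
    # Labels from LS distances alone: no decoder or classifier needed.
    z = enc(x)
    return torch.cdist(z, anchors).argmin(dim=1)
```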

This paper presents a statistical forward model for a Compton imaging system, called a Compton imager. This system, under development at the University of Illinois Urbana-Champaign, is a variant of the Compton camera with a single type of sensor that can act simultaneously as scatterer and absorber, which makes it convenient for imaging situations requiring a wide field of view. The proposed statistical forward model is then used to solve the inverse problem of estimating the location and energy of point-like sources from observed data. This inverse problem is formulated and solved in a Bayesian framework, using a Metropolis-within-Gibbs algorithm to estimate the location and an expectation-maximization algorithm to estimate the energy. This approach yields more accurate estimates than the standard deterministic back-projection approach, with the additional benefit of uncertainty quantification in the low-photon imaging setting.
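To illustrate the sampling step named above, here is a toy Metropolis-within-Gibbs sampler for a 2-D source location; the log-posterior is a hypothetical stand-in, since the real one comes from the Compton forward model.

```python
# Toy Metropolis-within-Gibbs for source localization (stand-in posterior).
import numpy as np

rng = np.random.default_rng(1)

def log_post(x, y):
    # Hypothetical log-posterior; the paper's comes from the forward model.
    return -((x - 3.0) ** 2 + (y + 1.0) ** 2) / 0.5

x, y, chain = 0.0, 0.0, []
for _ in range(5000):
    # Update one coordinate at a time with a Metropolis step.
    for coord in ("x", "y"):
        prop_x = x + 0.3 * rng.standard_normal() if coord == "x" else x
        prop_y = y + 0.3 * rng.standard_normal() if coord == "y" else y
        if np.log(rng.uniform()) < log_post(prop_x, prop_y) - log_post(x, y):
            x, y = prop_x, prop_y
    chain.append((x, y))

print("posterior mean location:", np.mean(chain[1000:], axis=0))
```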

The dynamic mode decomposition (DMD) is a simple and powerful data-driven modeling technique capable of revealing coherent spatiotemporal patterns in data. The method's linear-algebra-based formulation additionally allows for a variety of optimizations and extensions that make the algorithm practical and viable for real-world data analysis. As a result, DMD has grown to become a leading method for dynamical system analysis across multiple scientific disciplines. PyDMD is a Python package that implements DMD and several of its major variants. In this work, we expand the PyDMD package to include a number of cutting-edge DMD methods and tools specifically designed to handle dynamics that are noisy, multiscale, parameterized, prohibitively high-dimensional, or even strongly nonlinear. We provide a complete overview of the features available in PyDMD as of version 1.0, along with a brief overview of the theory behind the DMD algorithm, information for developers, tips regarding practical DMD usage, and introductory coding examples. All code is available at https://github.com/PyDMD/PyDMD .
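For readers new to the method, the following is a compact from-scratch "exact DMD" in NumPy, the core algorithm that PyDMD implements and extends; the toy snapshot data at the end are an illustrative assumption.

```python
# Exact DMD from scratch: snapshots are columns of X, r is the SVD rank.
import numpy as np

def dmd(X, r=10):
    X1, X2 = X[:, :-1], X[:, 1:]                     # paired snapshots
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]               # rank-r truncation
    Atilde = U.conj().T @ X2 @ Vh.conj().T / s       # low-rank operator
    eigvals, W = np.linalg.eig(Atilde)               # DMD eigenvalues
    modes = X2 @ Vh.conj().T / s @ W / eigvals       # exact DMD modes
    return eigvals, modes

# Toy usage: snapshots of two decaying traveling waves.
t = np.linspace(0, 4 * np.pi, 100)
x = np.linspace(-5, 5, 64)[:, None]
X = np.exp(-0.1 * t) * np.sin(x + t) + np.exp(-0.3 * t) * np.cos(2 * x - t)
eigvals, modes = dmd(X, r=4)
print(np.abs(eigvals))  # moduli < 1 indicate decaying dynamics
```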

The evaluation of text-generative vision-language models is a challenging yet crucial endeavor. By addressing the limitations of existing Visual Question Answering (VQA) benchmarks and proposing innovative evaluation methodologies, our research seeks to advance the understanding of these models' capabilities. We propose a novel VQA benchmark based on well-known visual classification datasets that allows a granular evaluation of text-generative vision-language models and their comparison with discriminative vision-language models. To improve the assessment of coarse answers on fine-grained classification tasks, we suggest using the semantic hierarchy of the label space to ask automatically generated follow-up questions about the ground-truth category. Finally, we compare traditional NLP and LLM-based metrics for the problem of evaluating model predictions given ground-truth answers, and perform a human evaluation study on which we base our choice of the final metric. We apply our benchmark to a suite of vision-language models and show a detailed comparison of their abilities on object, action, and attribute classification. Our contributions aim to lay the foundation for more precise and meaningful assessments, facilitating targeted progress in the exciting field of vision-language modeling.
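A minimal sketch of the hierarchy-driven follow-up idea: if the model answers with an ancestor of the ground-truth label, generate a question that narrows down among that ancestor's children. The tiny hierarchy and question template below are illustrative assumptions, not the paper's.

```python
# Follow-up question generation from a label hierarchy (illustrative sketch).
children = {
    "animal": ["dog", "cat"],
    "dog": ["labrador", "poodle", "beagle"],
}

def in_subtree(node, target):
    # Depth-first search for target below node in the hierarchy.
    stack = [node]
    while stack:
        n = stack.pop()
        if n == target:
            return True
        stack.extend(children.get(n, []))
    return False

def follow_up(answer, ground_truth):
    # Only ask a follow-up if the answer is a (coarse) ancestor of the truth.
    if answer in children and in_subtree(answer, ground_truth):
        options = ", ".join(children[answer])
        return f"Which kind of {answer} is it: {options}?"
    return None  # answer is already leaf-level, or simply wrong

print(follow_up("dog", "beagle"))  # "Which kind of dog is it: ...?"
```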

In the present work, the applicability of physics-augmented neural network (PANN) constitutive models for complex electro-elastic finite element analysis is demonstrated. For the investigations, PANN models for electro-elastic material behavior at finite deformations are calibrated to different synthetically generated datasets, including an analytical isotropic potential, a homogenised rank-one laminate, and a homogenised metamaterial with a spherical inclusion. Subsequently, boundary value problems inspired by engineering applications of composite electro-elastic materials are considered. Scenarios with large electrically induced deformations and instabilities are particularly challenging and thus necessitate extensive investigation of the PANN constitutive models in the context of finite element analyses. First, an excellent prediction quality of the model is required for the very general load cases occurring in the simulation. Furthermore, the simulation of large deformations and instabilities poses challenges for the stability of the numerical solver, which is closely related to the constitutive model. In all cases studied, the PANN models yield excellent prediction quality and stable numerical behavior, even in highly nonlinear scenarios. This can be traced back to the PANN models' excellent performance in learning both the first and second derivatives of the ground-truth electro-elastic potentials, even though they are calibrated only on the first derivatives. Overall, this work demonstrates the applicability of PANN constitutive models for the efficient and robust simulation of engineering applications of composite electro-elastic materials.
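A minimal sketch of the calibration idea: a neural network represents a scalar potential, and automatic differentiation supplies its first derivatives (stress-like quantities), which are fitted to data. The inputs (three generic invariants), network size, and synthetic targets below are illustrative assumptions; the smooth Softplus activation keeps second derivatives well defined.

```python
# Fitting a neural potential through its first derivatives (sketch).
import torch
import torch.nn as nn

potential = nn.Sequential(nn.Linear(3, 32), nn.Softplus(), nn.Linear(32, 1))

def first_derivatives(inv):
    # Differentiate the scalar potential w.r.t. its inputs via autograd.
    inv = inv.requires_grad_(True)
    psi = potential(inv).sum()
    return torch.autograd.grad(psi, inv, create_graph=True)[0]

inv = torch.rand(64, 3)      # e.g. deformation/electric invariants (toy)
target = torch.rand(64, 3)   # synthetic "stress" data (stand-in)
opt = torch.optim.Adam(potential.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = ((first_derivatives(inv) - target) ** 2).mean()
    loss.backward()          # gradients flow through the autograd graph
    opt.step()
```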

We introduce a novel continual learning method based on multifidelity deep neural networks. This method learns the correlation between the outputs of previously trained models and the desired output of the model on the current training dataset, limiting catastrophic forgetting. On its own, the multifidelity continual learning method robustly limits forgetting across several datasets. Additionally, we show that it can be combined with existing continual learning methods, including replay and memory-aware synapses, to further limit catastrophic forgetting. The proposed method is especially suited to physical problems where the data satisfy the same physical laws on each domain, or to physics-informed neural networks, because in these cases we expect a strong correlation between the output of the previous model and that of the model on the current training domain.
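A minimal sketch of the multifidelity idea under illustrative assumptions (1-D toy task, small networks): on a new task, the previous model is frozen as a low-fidelity prior, and a small correction network learns the map from (input, previous output) to the new target, reusing old knowledge rather than overwriting it.

```python
# Multifidelity continual learning: frozen prior + learned correction (sketch).
import torch
import torch.nn as nn

prev_model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
for p in prev_model.parameters():
    p.requires_grad_(False)  # low-fidelity prior: frozen during new task

corr = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

def predict(x):
    # High-fidelity prediction conditioned on the previous model's output.
    return corr(torch.cat([x, prev_model(x)], dim=1))

x = torch.linspace(-1, 1, 128).unsqueeze(1)  # current-domain inputs (toy)
y = torch.sin(3 * x)                         # current-task targets (toy)
opt = torch.optim.Adam(corr.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = ((predict(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
```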

When generating in-silico clinical electrophysiological outputs, such as electrocardiograms (ECGs) and body surface potential maps (BSPMs), mathematical models have relied on a single physics, namely cardiac electrophysiology (EP), neglecting the role of heart motion. Since the heart is the most powerful source of electrical activity in the human body, its motion dynamically shifts the position of the principal electrical sources in the torso, influencing the electrical potential distribution and potentially altering the EP outputs. In this work, we propose a computational model for the simulation of ECGs and BSPMs that couples a cardiac electromechanical model with a model of EP signal propagation in the torso, through a flexible numerical approach that simulates the torso-domain deformation induced by the myocardial displacement. Our model accounts for the major mechano-electrical feedbacks, along with unidirectional displacement and potential couplings from the heart to the surrounding body. For the numerical discretization, we employ a versatile intergrid transfer operator that allows different finite element spaces to be used in the cardiac and torso domains. Our numerical results are obtained on a realistic 3D biventricular-torso geometry and cover both sinus rhythm and ventricular tachycardia (VT), solving both the electromechanical-torso model in dynamical domains and the classical electrophysiology-torso model in static domains. By comparing standard 12-lead ECGs and BSPMs, we highlight the non-negligible effects of myocardial contraction on the EP outputs, especially in pathological conditions such as VT.
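As a toy illustration of why source motion matters for surface potentials (all geometry and kernels below are stand-ins, not the paper's finite element model): a point source whose position oscillates, mimicking contraction, produces a visibly different electrode signal than the same source held static.

```python
# Toy comparison: static vs. moving electrical source seen at one electrode.
import numpy as np

t = np.linspace(0, 1, 500)
source_amp = np.sin(2 * np.pi * 5 * t)        # EP activity over time (toy)
electrode = np.array([0.0, 10.0])             # a torso-surface point (toy)

def lead_signal(source_pos):
    # Inverse-square decay as a stand-in for the torso transfer kernel.
    r = np.linalg.norm(electrode[None, :] - source_pos, axis=1)
    return source_amp / r**2

static = lead_signal(np.tile([0.0, 2.0], (t.size, 1)))
moving_pos = np.stack(
    [np.zeros_like(t), 2.0 + 0.5 * np.sin(2 * np.pi * t)], axis=1
)
moving = lead_signal(moving_pos)
print("max relative signal difference:",
      np.max(np.abs(static - moving)) / np.max(np.abs(static)))
```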

Computationally efficient surrogates for parametrized physical models play a crucial role in science and engineering. Operator learning provides data-driven surrogates that map between function spaces. However, instead of full-field measurements, often the available data are only finite-dimensional parametrizations of model inputs or finite observables of model outputs. Building on Fourier Neural Operators, this paper introduces the Fourier Neural Mappings (FNMs) framework, which accommodates such finite-dimensional inputs and outputs, and develops universal approximation theorems for the method. Moreover, in many applications the underlying parameter-to-observable (PtO) map is defined implicitly through an infinite-dimensional operator, such as the solution operator of a partial differential equation. A natural question is whether it is more data-efficient to learn the PtO map end-to-end or to first learn the solution operator and subsequently compute the observable from the full-field solution. A theoretical analysis of Bayesian nonparametric regression of linear functionals, which is of independent interest, suggests that the end-to-end approach can actually have worse sample complexity. Extending beyond the theory, numerical results for the FNM approximation of three nonlinear PtO maps demonstrate the benefits of the operator learning perspective that this paper adopts.
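A minimal sketch of the FNM structure under illustrative assumptions (a single FNO-style spectral layer, arbitrary grid size and widths): a finite parameter vector is lifted to a function on a grid, processed in Fourier space, and reduced back to finite observables by averaging (a stand-in for integration against functionals).

```python
# Finite params -> lifted function -> spectral layer -> finite observables.
import torch
import torch.nn as nn

class FNMSketch(nn.Module):
    def __init__(self, p_dim=4, q_dim=2, n_grid=64, width=16, modes=8):
        super().__init__()
        self.n_grid, self.width, self.modes = n_grid, width, modes
        self.lift = nn.Linear(p_dim, n_grid * width)
        self.weights = nn.Parameter(
            torch.randn(width, width, modes, dtype=torch.cfloat) / width
        )
        self.out = nn.Linear(width, q_dim)

    def forward(self, p):
        v = self.lift(p).reshape(-1, self.width, self.n_grid)
        vh = torch.fft.rfft(v)                      # to Fourier space
        out_h = torch.zeros_like(vh)
        out_h[..., :self.modes] = torch.einsum(     # mix retained modes
            "bim,iom->bom", vh[..., :self.modes], self.weights
        )
        v = torch.fft.irfft(out_h, n=self.n_grid)   # back to the grid
        v = torch.nn.functional.gelu(v)
        return self.out(v.mean(dim=-1))             # average -> observables

fnm = FNMSketch()
print(fnm(torch.randn(8, 4)).shape)  # -> torch.Size([8, 2])
```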

Graph representation learning for hypergraphs can be used to extract patterns among the higher-order interactions that are critically important in many real-world problems. Current approaches designed for hypergraphs, however, are unable to handle different types of hypergraphs and are typically not generic across learning tasks; indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention-based graph neural network, called Hyper-SAGNN, applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics, and demonstrate that Hyper-SAGNN significantly outperforms state-of-the-art methods on traditional tasks while also achieving strong performance on a new task called outsider identification. Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in diverse applications.
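A minimal sketch of the self-attention scoring idea: attention over the node embeddings of a candidate hyperedge produces "dynamic" embeddings, which are compared to per-node "static" embeddings, and the mismatch is turned into a probability that the tuple forms a true hyperedge. The dimensions and the scoring head below are illustrative assumptions, not the paper's exact architecture.

```python
# Self-attention hyperedge scoring for variable-size node tuples (sketch).
import torch
import torch.nn as nn

d = 32
attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
static = nn.Linear(d, d)   # per-node "static" embedding
score = nn.Linear(d, 1)

def hyperedge_prob(node_feats):
    # node_feats: (batch, k, d) for candidate tuples of k nodes each;
    # k may vary between batches, which suits variable hyperedge sizes.
    dynamic, _ = attn(node_feats, node_feats, node_feats)
    diff = (dynamic - static(node_feats)) ** 2   # static/dynamic mismatch
    logits = score(diff).mean(dim=1)             # average over the k nodes
    return torch.sigmoid(logits).squeeze(-1)

print(hyperedge_prob(torch.randn(5, 3, d)))  # 5 candidate triplets
```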
