
We present a rotation-equivariant, quasi-monolithic graph neural network framework for the reduced-order modeling of fluid-structure interaction systems. With the aid of an arbitrary Lagrangian-Eulerian formulation, the system states are evolved temporally with two sub-networks. The movement of the mesh is reduced to the evolution of several coefficients via complex-valued proper orthogonal decomposition, and the prediction of these coefficients over time is handled by a single multi-layer perceptron. A finite element-inspired hypergraph neural network is employed to predict the evolution of the fluid state based on the state of the whole system. The structural state is implicitly modeled by the movement of the mesh on the solid-fluid interface, making the framework quasi-monolithic. The effectiveness of the proposed framework is assessed on two prototypical fluid-structure systems, namely the flow around an elastically mounted cylinder and the flow around a hyperelastic plate attached to a fixed cylinder. The framework tracks the interface description and provides stable and accurate system state predictions during roll-out for at least 2000 time steps, and even demonstrates some capability to self-correct erroneous predictions. It also enables direct calculation of the lift and drag forces from the predicted fluid and mesh states, in contrast to existing convolution-based architectures. The proposed graph-neural-network-based reduced-order model has implications for the development of physics-based digital twins involving moving boundaries and fluid-structure interactions.
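
A minimal sketch of the mesh-motion branch described above, assuming synthetic displacement snapshots: complex-valued POD compresses the mesh displacements into a few coefficients, and a small MLP advances those coefficients one step at a time. The array shapes, the `MLPRegressor` surrogate, and the roll-out loop are illustrative assumptions, not the paper's implementation.

```python
# Mesh-motion branch sketch: complex POD reduction + MLP time-stepping.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
T, N, r = 400, 500, 4                  # time steps, mesh nodes, retained modes

# Hypothetical mesh displacement snapshots: (x, y) encoded as complex numbers.
Z = rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))

U, s, Vh = np.linalg.svd(Z, full_matrices=False)
Ur = U[:, :r]                          # complex-valued POD basis
A = Ur.conj().T @ Z                    # modal coefficients, shape (r, T)

# Real-valued features for the MLP: stack real and imaginary parts.
feats = np.concatenate([A.real, A.imag], axis=0).T     # (T, 2r)
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
mlp.fit(feats[:-1], feats[1:])         # one-step-ahead coefficient prediction

# Roll-out: feed predictions back in, then lift back to mesh displacements.
a = feats[-1]
for _ in range(10):
    a = mlp.predict(a[None, :])[0]
mesh_next = Ur @ (a[:r] + 1j * a[r:])  # predicted complex displacements
```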

Related content

Inspired by the success of WaveNet in multi-subject speech synthesis, we propose a novel neural network based on causal convolutions for multi-subject motion modeling and generation. The network can capture the intrinsic characteristics of the motion of different subjects, such as the influence of skeleton scale variation on motion style. Moreover, after fine-tuning the network using a small motion dataset for a novel skeleton that is not included in the training dataset, it is able to synthesize high-quality motions with a personalized style for the novel skeleton. The experimental results demonstrate that our network can model the intrinsic characteristics of motions well and can be applied to various motion modeling and synthesis tasks.
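
A minimal sketch of the kind of causal convolution the abstract builds on, in the WaveNet style: left-padding a `Conv1d` so each output frame depends only on past frames. The channel counts, kernel size, and dilation are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """Conv1d that only looks at past frames: pad on the left, never the right."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                        # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))  # left-pad only => causal
        return self.conv(x)

block = CausalConv1d(in_ch=63, out_ch=128, kernel_size=3, dilation=2)
motion = torch.randn(8, 63, 120)   # e.g. 63 joint channels over 120 frames
out = block(motion)                # (8, 128, 120), no future-frame leakage
```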

In the domain of Mobility Data Science, interpreting models trained on trajectory data and elucidating the spatio-temporal movement of entities have persistently posed significant challenges. Conventional XAI techniques, despite their potential, frequently overlook the distinct structure and nuances inherent in trajectory data. To address this deficiency, we introduce a comprehensive framework that harmonizes pivotal XAI techniques: LIME (Local Interpretable Model-agnostic Explanations), SHAP (SHapley Additive exPlanations), saliency maps, attention mechanisms, direct trajectory visualization, and Permutation Feature Importance (PFI). Unlike conventional strategies that deploy these methods in isolation, our unified approach capitalizes on their collective efficacy, yielding deeper and more granular insights for models reliant on trajectory data. In crafting this synthesis, we address the multifaceted nature of trajectories, achieving not only amplified interpretability but also a nuanced, contextually rich understanding of model decisions. To validate and refine the framework, we conducted a survey gauging its reception among various user demographics. Our findings revealed a dichotomy: professionals with academic orientations, particularly those in roles such as Data Scientist, IT Expert, and ML Engineer, showed a deep technical understanding and often preferred combined methods for interpretability. Conversely, end-users and individuals less acquainted with AI and Data Science preferred simpler presentations, such as bar plots indicating timestep significance or visual depictions pinpointing pivotal segments of a vessel's trajectory.
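
A hedged illustration of pairing two of the listed techniques, PFI and SHAP, on trajectories flattened into fixed-length tabular features. The synthetic data, the `RandomForestClassifier` surrogate, and the feature encoding are assumptions made for this sketch; the paper's actual pipeline may differ.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# 200 trajectories, 20 timesteps, (lat, lon) -> flattened 40-dim features.
X = rng.standard_normal((200, 40))
y = (X[:, 0] + X[:, 7] > 0).astype(int)        # synthetic label

model = RandomForestClassifier(n_estimators=100).fit(X, y)

# Global view: permutation importance per (timestep, coordinate) feature.
pfi = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print("top timestep-features:", np.argsort(pfi.importances_mean)[::-1][:5])

# Local view: SHAP attributions for a single vessel trajectory (per class).
explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X, 50))
local = explainer.shap_values(X[:1])
```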

We present compact semi-implicit finite difference schemes on structured grids for the numerical solution of advection by an external velocity and by a speed in the normal direction, as required in level set methods. The most involved numerical scheme is third-order accurate for linear advection with a space-dependent velocity and unconditionally stable in the sense of von Neumann stability analysis. We also present a simple high-resolution scheme that gives a TVD (Total Variation Diminishing) approximation of the spatial derivative of the advected level set function. In the case of nonlinear advection, a semi-implicit discretization is proposed to linearize the problem. The compact implicit stencil, which contains unknowns only in the upwind direction, allows the application of efficient algebraic solvers such as fast sweeping methods. Numerical tests evolving smooth and non-smooth interfaces, together with an example with a large variation of velocity, confirm the good accuracy of the methods and the fast convergence of the algebraic solver even for very large Courant numbers.
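
To illustrate why such a stencil admits fast sweeping, here is a first-order semi-implicit analogue in 1D (the paper's scheme is third-order): because the implicit stencil couples each unknown only to its upwind neighbour, the linear system is triangular, so a single sweep in the flow direction solves it exactly, even at very large Courant numbers. The grid size and initial profile are illustrative.

```python
import numpy as np

nx, L, dt, v = 200, 1.0, 0.05, 1.0     # chosen so v*dt/dx = 10
dx = L / nx
c = v * dt / dx                        # Courant number, deliberately large

x = np.linspace(0.0, L, nx)
u = np.exp(-200 * (x - 0.3) ** 2)      # smooth initial profile

# Implicit upwind: (1 + c) u_i^{n+1} - c u_{i-1}^{n+1} = u_i^n  (for v > 0).
# Unconditionally stable; unknowns sit only upwind, so one sweep suffices.
for _ in range(5):
    u_new = np.empty_like(u)
    u_new[0] = u[0] / (1 + c)          # inflow boundary (zero upwind value)
    for i in range(1, nx):             # single sweep in the upwind direction
        u_new[i] = (u[i] + c * u_new[i - 1]) / (1 + c)
    u = u_new
```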

Evaluating the reliability of complex technical networks, such as those in energy distribution, logistics, and transportation systems, is of paramount importance. These networks are often represented as multistate flow networks (MFNs). While there has been considerable research on assessing MFN reliability, many studies overlook a critical factor: transmission distance constraints. These constraints are typical in real-world applications, such as Internet infrastructure, where controlling the distances between data centers, network nodes, and end-users is vital for ensuring low latency and efficient data transmission. This paper addresses the evaluation of MFN reliability under distance constraints. Specifically, it focuses on determining the probability that a minimum of $d$ flow units can be transmitted successfully from a source node to a sink node using only paths whose lengths do not exceed a predefined distance limit $\lambda$. We introduce an effective algorithm to tackle this challenge, provide a benchmark example to illustrate its application, and analyze its computational complexity.
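
A brute-force Monte Carlo sketch of the quantity being estimated, not the paper's algorithm: sample arc capacities, then check whether at least $d$ units can be routed over paths of at most $\lambda$ edges. The tiny network, the capacity distribution, the unit arc lengths, and the greedy routing over enumerated short paths (a lower bound on the true constrained flow) are all illustrative assumptions.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
edges = [("s", "a"), ("a", "t"), ("s", "b"), ("b", "a"), ("b", "t")]
states = [0, 1, 2]                     # possible capacity levels per arc
probs = [0.1, 0.3, 0.6]                # P(capacity == level)
d, lam = 2, 2                          # demand; unit arc lengths, so
                                       # path length = number of edges

def feasible(caps):
    G = nx.DiGraph()
    for e, cap in zip(edges, caps):
        G.add_edge(*e, cap=int(cap))
    flow = 0
    # Greedily push flow along each simple path with <= lam edges.
    for path in nx.all_simple_paths(G, "s", "t", cutoff=lam):
        arcs = list(zip(path, path[1:]))
        push = min(G[u][v]["cap"] for u, v in arcs)
        for u, v in arcs:
            G[u][v]["cap"] -= push
        flow += push
    return flow >= d

n_trials = 5000
hits = sum(
    feasible(rng.choice(states, size=len(edges), p=probs))
    for _ in range(n_trials)
)
print("estimated reliability:", hits / n_trials)
```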

Physics-informed neural networks (PINNs) effectively embed physical principles into machine learning, but often struggle with complex or varying geometries. We propose a novel method for integrating geometric transformations within PINNs to robustly accommodate geometric variations. Our method incorporates a diffeomorphism as a mapping of a reference domain and adapts the derivative computation of the physics-informed loss function accordingly. This generalizes the applicability of PINNs not only to smoothly deformed domains, but also to lower-dimensional manifolds, and allows for direct shape optimization while training the network. We demonstrate the effectiveness of our approach on several problems: (i) the Eikonal equation on an Archimedean spiral, (ii) a Poisson problem on a surface manifold, (iii) incompressible Stokes flow in a deformed tube, and (iv) shape optimization with the Laplace operator. Through these examples, we demonstrate enhanced flexibility over traditional PINNs, especially under geometric variations. The proposed framework presents an outlook for training deep neural operators over parametrized geometries, paving the way for advanced modeling with PDEs on complex geometries in science and engineering.
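
A one-dimensional sketch of the pullback idea under stated assumptions: the network is trained on a reference interval, a hypothetical diffeomorphism `phi` maps it to the physical domain, and physical derivatives in the residual are recovered via the chain rule du/dx = (du/dξ)/(dφ/dξ). The network size, the mapping, and the residual are illustrative, not the paper's examples.

```python
import math
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

def phi(xi):   # hypothetical diffeomorphism: reference -> physical domain
    return xi + 0.2 * torch.sin(math.pi * xi)

xi = torch.rand(128, 1, requires_grad=True)   # collocation points (reference)
u = net(xi)
du_dxi = torch.autograd.grad(u.sum(), xi, create_graph=True)[0]
dphi_dxi = torch.autograd.grad(phi(xi).sum(), xi, create_graph=True)[0]
du_dx = du_dxi / dphi_dxi                     # physical-space derivative

# Example residual for u'(x) = 1 on the deformed domain (Eikonal-like in 1D).
loss = ((du_dx - 1.0) ** 2).mean()
loss.backward()
```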

We prove closed-form equations for the exact high-dimensional asymptotics of a family of first-order gradient-based methods, learning an estimator (e.g. M-estimator, shallow neural network, ...) from observations on Gaussian data with empirical risk minimization. This includes widely used algorithms such as stochastic gradient descent (SGD) and Nesterov acceleration. The obtained equations match those resulting from the discretization of dynamical mean-field theory (DMFT) equations from statistical physics when applied to gradient flow. Our proof method allows us to give an explicit description of how memory kernels build up in the effective dynamics, and to include non-separable update functions, allowing datasets with non-identity covariance matrices. Finally, we provide numerical implementations of the equations for SGD with generic extensive batch sizes and constant learning rates.
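
The simplest concrete instance of the setting above, for orientation only: SGD with a constant learning rate and an extensive batch size on the empirical squared-error risk of a linear estimator over Gaussian data. The dimensions, step size, and noise level are illustrative; the paper characterizes the exact asymptotics of such dynamics, which this sketch does not compute.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 400                        # proportional high-dimensional regime
X = rng.standard_normal((n, d)) / np.sqrt(d)
w_star = rng.standard_normal(d)
y = X @ w_star + 0.1 * rng.standard_normal(n)

w = np.zeros(d)
lr, batch = 0.5, 200                    # constant rate, extensive batch size
for step in range(500):
    idx = rng.choice(n, size=batch, replace=False)
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch   # squared-loss gradient
    w -= lr * grad

print("training risk:", np.mean((X @ w - y) ** 2))
```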

Methods that distinguish dynamical regimes in networks of active elements make it possible to design the dynamics of models of realistic networks. A particularly salient example is partial synchronization, which may play a pivotal role in elucidating the dynamics of biological neural networks. Such emergent partial synchronization in structurally homogeneous networks is commonly denoted as chimera states. While several methods for detecting chimeras in networks of spiking neurons have been proposed, these are less effective when applied to networks of bursting neurons. Here we introduce the correlation dimension as a novel approach to identifying dynamic network states. To assess the viability of this new method, we study a network of intrinsically bursting Hindmarsh-Rose neurons with non-local connections. In comparison to other measures of chimera states, the correlation dimension effectively characterizes chimeras in bursting neurons, whether the incoherence arises in spikes or in bursts. The generality of the dimensionality measure underlying the correlation dimension renders this approach applicable to any dynamical system, facilitating the comparison of simulated and experimental data. We anticipate that this methodology will enable the tuning and simulation of models of intricate network processes, contributing to a deeper understanding of neural dynamics.
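
A minimal Grassberger-Procaccia style estimate of the correlation dimension, the measure proposed above: the slope of log C(r) against log r, where C(r) is the fraction of point pairs closer than r. The test data (a noisy circle, whose dimension should come out close to 1) and the radius range are illustrative assumptions.

```python
import numpy as np

def correlation_dimension(points, radii):
    """Slope of log C(r) vs log r, with C(r) the correlation sum:
    the fraction of point pairs closer than r."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    iu = np.triu_indices(len(points), k=1)   # distinct pairs only
    pair_d = dists[iu]
    C = np.array([(pair_d < r).mean() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope

# Sanity check: points on a noisy circle should give a dimension near 1.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 800)
pts = np.c_[np.cos(t), np.sin(t)] + 0.01 * rng.standard_normal((800, 2))
print(correlation_dimension(pts, radii=np.logspace(-1.2, -0.2, 8)))
```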

We prove discrete-to-continuum convergence for dynamical optimal transport on $\mathbb{Z}^d$-periodic graphs with energy density having linear growth at infinity. This result provides an answer to a problem left open by Gladbach, Kopfer, Maas, and Portinale (Calc Var Partial Differential Equations 62(5), 2023), where the convergence behaviour of discrete boundary-value dynamical transport problems is proved under the stronger assumption of superlinear growth. Our result extends the known literature to some important classes of examples, such as scaling limits of 1-Wasserstein transport problems. Similarly to what happens in the quadratic case, the geometry of the graph plays a crucial role in the structure of the limit cost function, as we discuss in the final part of this work, which includes some visual representations.

We study the optimal sample complexity of neighbourhood selection in linear structural equation models, and compare this to best subset selection (BSS) for linear models under general design. We show by example that -- even when the structure is \emph{unknown} -- the existence of underlying structure can reduce the sample complexity of neighbourhood selection. This result is complicated by the possibility of path cancellation, which we study in detail, and show that improvements are still possible in the presence of path cancellation. Finally, we support these theoretical observations with experiments. The proof introduces a modified BSS estimator, called klBSS, and compares its performance to BSS. The analysis of klBSS may also be of independent interest since it applies to arbitrary structured models, not necessarily those induced by a structural equation model. Our results have implications for structure learning in graphical models, which often relies on neighbourhood selection as a subroutine.

We hypothesize that due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the other modalities. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate the model's dependence on each modality, we compute the gain on the accuracy when the model has access to it in addition to another modality. We refer to this gain as the conditional utilization rate. In the experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since conditional utilization rate cannot be computed efficiently during training, we introduce a proxy for it based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
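
A direct transcription of the conditional utilization rate as described: the accuracy gain from granting the model modality m1 on top of modality m2. The accuracies below are hypothetical numbers for an imagined audio-visual classifier, purely to show the bookkeeping.

```python
def conditional_utilization_rate(acc_both, acc_other_only):
    """u(m1 | m2) = accuracy(m1, m2) - accuracy(m2 alone)."""
    return acc_both - acc_other_only

# Hypothetical accuracies: both modalities, audio only, video only.
acc_av, acc_a, acc_v = 0.91, 0.88, 0.62
u_video_given_audio = conditional_utilization_rate(acc_av, acc_a)  # 0.03
u_audio_given_video = conditional_utilization_rate(acc_av, acc_v)  # 0.29
# The gap (0.29 vs 0.03) shows the model leaning on audio: exactly the kind
# of imbalance the proposed balancing algorithm aims to correct.
```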
