
This paper presents a thorough discussion of the locality of the local neural operator (LNO), the property at the core of the LNO's flexibility across varied computational domains when solving transient partial differential equations (PDEs). We investigate this locality through the LNO's receptive field and receptive range, with a central concern for how locality affects LNO training and application. In a large set of LNO training experiments on fluid dynamics, we find that a receptive range compatible with the learning task is crucial for the LNO to perform well. On the one hand, a receptive range that is too small is fatal and usually drives the LNO into numerical oscillation; on the other hand, a receptive range that is too large prevents the LNO from reaching its best accuracy. We expect the rules identified here to hold generally when LNOs are applied to learn and solve transient PDEs in diverse fields. Practical examples of applying pre-trained LNOs to flow prediction are presented to further confirm the findings. Overall, with an architecture designed to have a compatible receptive range, the pre-trained LNO shows commendable accuracy and efficiency in solving practical cases.
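
As a rough illustration of how a receptive range can be reasoned about, the sketch below computes the receptive field of a stack of 1-D convolutional layers; the layer parameters and mesh spacing are hypothetical, not taken from the paper, and the physical receptive range is simply the receptive field scaled by the mesh spacing.

```python
# Minimal sketch: receptive field of stacked 1-D convolutional layers.
# The layer parameters and mesh spacing below are hypothetical, for illustration only.

def receptive_field(layers):
    """layers: list of (kernel_size, stride) pairs, input-to-output order."""
    rf, jump = 1, 1               # receptive field (in cells) and cumulative stride
    for kernel_size, stride in layers:
        rf += (kernel_size - 1) * jump
        jump *= stride
    return rf

layers = [(3, 1), (3, 2), (3, 2), (3, 1)]    # hypothetical local architecture
cells = receptive_field(layers)
dx = 0.01                                    # hypothetical mesh spacing
print(f"receptive field: {cells} cells, receptive range: {cells * dx:.2f} length units")
```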

Related content

When applying Hamiltonian operator splitting methods for the time integration of multi-species Vlasov-Maxwell-Landau systems, the reliable and efficient numerical approximation of the Landau equation is a fundamental component of the entire algorithm. Substantial computational issues arise from the treatment of the physically most relevant three-dimensional case with Coulomb interaction. This work is concerned with the introduction and numerical comparison of novel approaches for the evaluation of the Landau collision operator. In the spirit of collocation, common tools are the identification of fundamental integrals, series expansions of the integral kernel and the density function on the main part of the velocity domain, and interpolation as well as quadrature approximation near the singularity of the kernel. Focusing on the favourable choice of a Fourier spectral method, the practical implementation relies on reduction to basic integrals, fast Fourier techniques, and summations along certain directions. Moreover, an important observation is that a significant percentage of the overall computational effort can be transferred to precomputations that are independent of the density function. For the purpose of exposition and numerical validation, the cases of constant, regular, and singular integral kernels are distinguished, and the procedure is adapted accordingly to the increasing complexity of the problem. With regard to the time integration of the Landau equation, the most expedient approach is applied in such a manner that the conservation of mass is ensured.
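
To illustrate the precomputation pattern mentioned above, the sketch below evaluates a convolution-type operator spectrally: a Fourier multiplier for a kernel (here an artificial smooth stand-in, not the Landau kernel) is precomputed once, independently of the density, and each subsequent evaluation on a new density costs only two FFTs and a pointwise product.

```python
# Minimal sketch of the precomputation pattern: a convolution-type operator
# evaluated spectrally. The kernel below is an artificial smooth stand-in,
# not the Landau collision kernel.
import numpy as np

n = 64
v = np.linspace(-4.0, 4.0, n, endpoint=False)
dv = v[1] - v[0]

# --- precomputation (independent of the density function) ---------------------
kernel = np.exp(-v**2)                                    # hypothetical regular kernel
multiplier = np.fft.fft(np.fft.ifftshift(kernel)) * dv    # precomputed Fourier multiplier

def apply_operator(density):
    """Evaluate (kernel * density)(v) spectrally; cost per call: two FFTs."""
    return np.real(np.fft.ifft(multiplier * np.fft.fft(density)))

# --- per-time-step evaluation --------------------------------------------------
f = np.exp(-(v - 1.0)**2)                                 # example density
print(apply_operator(f)[:4])
```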

This manuscript investigates the problem of locational complexity, a type of complexity that emanates from a company's territorial strategy. Using an entropy-based measure for supply chain structural complexity (pars-complexity), we develop a theoretical framework for analysing the effects of locational complexity on the profitability of service/manufacturing networks. The proposed model is used to shed light on why network restructuring strategies may prove ineffective at reducing complexity-related costs. Our contribution is three-fold. First, we develop a novel mathematical formulation of a facility location problem that integrates the pars-complexity measure into the decision process. Second, using this model, we propose a decomposition of the penalties imposed by locational complexity into (a) an intrinsic cost of structural complexity and (b) an avoidable cost of ignoring such complexity in the decision process. This decomposition is a valuable tool for identifying more effective measures for tackling locational complexity; moreover, it allows us to explain the so-called addiction to growth in the locational context. Finally, we propose three alternative strategies that mimic approaches used in practice by companies that have engaged in network restructuring processes. The impact of these strategies is evaluated through extensive numerical experiments. Our experimental results suggest that network restructuring efforts that are not accompanied by a substantial reduction in the company's target market fail to reduce complexity-related costs and therefore have a limited impact on the company's profitability.
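
As a hedged illustration of an entropy-based structural complexity measure, the sketch below computes the plain Shannon entropy of the demand shares served by each open facility; this is only a stand-in, since the exact pars-complexity definition used in the paper may differ.

```python
# Illustrative stand-in for an entropy-based structural complexity measure:
# Shannon entropy of the demand shares handled by each open facility.
# The actual pars-complexity measure used in the paper may differ.
import numpy as np

def entropy_complexity(demand_by_facility):
    shares = np.asarray(demand_by_facility, dtype=float)
    shares = shares[shares > 0]
    shares /= shares.sum()
    return -np.sum(shares * np.log2(shares))     # in bits

# A concentrated network is "simpler" than an evenly spread one.
print(entropy_complexity([90, 5, 5]))            # ~0.57 bits
print(entropy_complexity([25, 25, 25, 25]))      # 2.00 bits
```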

Approximation of high-dimensional functions is a central task in machine learning and data-based scientific computing. In many applications, empirical risk minimisation over nonlinear model classes is employed; neural networks, kernel methods, and tensor decomposition techniques are among the most popular such classes. We provide a numerical study comparing the performance of these methods on various high-dimensional functions, with a focus on optimal control problems, where the dataset is generated by applying the State-Dependent Riccati Equation.
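
A minimal sketch of the empirical risk minimisation setting compared in the study is given below, using a kernel method on a toy high-dimensional function; the target function, sampling scheme, and hyperparameters are illustrative and not those of the paper.

```python
# Minimal empirical-risk-minimisation sketch on a high-dimensional function
# using a kernel method; the function and hyperparameters are illustrative.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
d, n = 10, 2000
X = rng.uniform(-1.0, 1.0, size=(n, d))
y = np.sin(X.sum(axis=1)) + 0.5 * np.linalg.norm(X, axis=1)**2   # toy target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = KernelRidge(alpha=1e-3, kernel="rbf", gamma=0.5).fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"test RMSE: {rmse:.3e}")
```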

In this paper we propose and analyze a novel multilevel version of Stein variational gradient descent (SVGD), a recent particle-based variational inference method. For Bayesian inverse problems with computationally expensive likelihood evaluations, SVGD can become prohibitive, since it requires evolving a discrete dynamical system over many time steps, each of which requires likelihood evaluations at every particle location. To address this, we introduce a multilevel variant that runs several interacting particle dynamics in parallel, corresponding to different approximation levels of the likelihood. By carefully tuning the number of particles at each level, we prove that a significant reduction in computational complexity can be achieved. As an application, we provide a numerical experiment for a PDE-driven inverse problem, which confirms the speed-up suggested by our theoretical results.
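
For reference, a single-level SVGD update step looks like the sketch below (standard RBF-kernel form); the multilevel variant of the paper runs coupled copies of this dynamic with likelihoods of increasing fidelity and level-dependent particle counts, which is not shown here.

```python
# One standard SVGD step with an RBF kernel (single level, for reference only).
# In a Bayesian inverse problem, grad_log_p involves the expensive likelihood.
import numpy as np

def svgd_step(x, grad_log_p, step=1e-1, bandwidth=1.0):
    """x: (n, d) particles; grad_log_p: callable returning (n, d) gradients."""
    n = x.shape[0]
    diff = x[:, None, :] - x[None, :, :]                # pairwise differences x_j - x_i
    sq = np.sum(diff**2, axis=-1)
    k = np.exp(-sq / (2.0 * bandwidth**2))              # kernel matrix k(x_j, x_i)
    grad_k = -diff / bandwidth**2 * k[:, :, None]       # grad_{x_j} k(x_j, x_i)
    phi = (k @ grad_log_p(x) + grad_k.sum(axis=0)) / n  # steepest-descent direction
    return x + step * phi

# Example: particles drifting toward a standard normal target.
particles = np.random.default_rng(0).normal(size=(50, 2)) * 3.0
for _ in range(200):
    particles = svgd_step(particles, lambda x: -x)      # grad log N(0, I) = -x
print(particles.mean(axis=0), particles.std(axis=0))
```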

This study introduces a two-scale Graph Neural Operator (GNO), LatticeGraphNet (LGN), designed as a surrogate model for costly nonlinear finite-element simulations of three-dimensional latticed parts and structures. LGN comprises two networks: LGN-i, which learns the reduced dynamics of the lattices, and LGN-ii, which learns the mapping from the reduced representation onto the tetrahedral mesh. LGN can predict deformation for arbitrary lattices, hence the name operator. Our approach significantly reduces inference time while maintaining high accuracy on unseen simulations, establishing GNOs as efficient surrogate models for evaluating the mechanical responses of lattices and structures.
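
A single message-passing update of the kind graph neural operators build on can be sketched as below, with plain mean aggregation over mesh edges and random placeholder weights; the actual LGN-i/LGN-ii architectures are learned and considerably more elaborate.

```python
# Sketch of one message-passing update over a mesh graph: each node feature is
# updated from the mean of its neighbours' features. Weights are random
# placeholders; the actual LGN layers are learned and more elaborate.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_feat = 5, 4
features = rng.normal(size=(n_nodes, n_feat))
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]         # undirected mesh edges

# adjacency with self-loops, row-normalised (mean aggregation)
adj = np.eye(n_nodes)
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0
adj /= adj.sum(axis=1, keepdims=True)

w = rng.normal(size=(n_feat, n_feat)) * 0.1              # placeholder weights
updated = np.tanh(adj @ features @ w)                    # one message-passing step
print(updated.shape)                                     # (5, 4)
```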

Pretrained large language models (LLMs) are surprisingly effective at performing zero-shot tasks, including time-series forecasting. However, understanding the mechanisms behind such capabilities remains highly challenging due to the complexity of the models. In this paper, we study LLMs' ability to extrapolate the behavior of dynamical systems whose evolution is governed by principles of physical interest. Our results show that LLaMA 2, a language model trained primarily on text, achieves accurate predictions of dynamical system time series without fine-tuning or prompt engineering. Moreover, the accuracy of the learned physical rules increases with the length of the input context window, revealing an in-context version of a neural scaling law. Along the way, we present a flexible and efficient algorithm for extracting probability density functions of multi-digit numbers directly from LLMs.
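
The flavour of the density-extraction step can be sketched as follows: given per-position probabilities over the digits 0-9, a distribution over the full multi-digit value is assembled. The digit distributions below are synthetic stand-ins rather than actual LLaMA 2 token probabilities, and the independence assumption across digit positions is a simplification of the paper's procedure.

```python
# Sketch: assemble a probability mass function over two-digit values from
# per-position digit probabilities. The digit distributions here are synthetic
# stand-ins, not actual LLaMA 2 outputs, and independence across positions is
# a simplification of the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

def normalise(x):
    return x / x.sum()

p_tens = normalise(rng.random(10))      # P(first digit)
p_ones = normalise(rng.random(10))      # P(second digit), assumed independent

values = np.arange(100)
pmf = np.outer(p_tens, p_ones).ravel()  # P(value = 10*a + b) = P(a) * P(b)
mean = np.sum(values * pmf)
print(f"pmf sums to {pmf.sum():.6f}, expected value {mean:.2f}")
```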

Conventional neural network elastoplasticity models are often perceived as lacking interpretability. This paper introduces a two-step machine learning approach that returns mathematical models interpretable by human experts. In particular, we introduce a surrogate model in which yield surfaces are expressed in terms of a set of single-variable feature mappings obtained from supervised learning. A post-processing step then re-interprets the set of single-variable neural network mappings into mathematical form through symbolic regression. This divide-and-conquer approach provides several important advantages. First, it enables us to overcome the scaling issue of symbolic regression algorithms. From a practical perspective, it enhances the portability of learned models to partial differential equation solvers written in different programming languages. Finally, it enables a concrete understanding of material model attributes, such as convexity and symmetry, through automated derivation and reasoning. Numerical examples are provided, along with open-source code to enable third-party validation.
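
The two-step idea can be illustrated as follows: first fit a single-variable mapping from data, then recover a closed-form expression for it. The sketch uses polynomial least squares as a stand-in for both the 1-D neural networks and the symbolic-regression step of the paper, and the synthetic data is purely illustrative.

```python
# Two-step sketch: (1) learn a single-variable feature mapping from data,
# (2) re-interpret it in symbolic form. Polynomial least squares stands in
# for both the 1-D neural network and the symbolic-regression step.
import numpy as np

rng = np.random.default_rng(0)
pressure = np.linspace(0.0, 2.0, 200)
# Synthetic "measurements" of one feature of a yield function (illustrative only).
feature = 1.0 - 0.3 * pressure**2 + 0.01 * rng.normal(size=pressure.size)

# Step 1: supervised fit of the single-variable mapping (stand-in model).
coeffs = np.polyfit(pressure, feature, deg=2)

# Step 2: symbolic re-interpretation, here simply the fitted polynomial.
print("recovered mapping:", np.poly1d(coeffs))
```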

This paper develops a flexible and computationally efficient multivariate volatility model that allows for dynamic conditional correlations and volatility spillover effects among financial assets. The new model has desirable properties such as identifiability and computational tractability for many assets. A sufficient condition for strict stationarity is derived for the new process. Two quasi-maximum likelihood estimation methods are proposed for the new model, with and without low-rank constraints on the coefficient matrices, and the asymptotic properties of both estimators are established. Moreover, a Bayesian information criterion with selection consistency is developed for order selection, and testing for volatility spillover effects is carefully discussed. The finite-sample performance of the proposed methods is evaluated in simulation studies for small and moderate dimensions. The usefulness of the new model and its inference tools is illustrated by two empirical examples, for 5 stock markets and 17 industry portfolios, respectively.
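
To fix ideas on quasi-maximum likelihood estimation for such processes, the sketch below evaluates a Gaussian quasi-log-likelihood along a simple scalar covariance recursion; the recursion is only an illustrative stand-in and is not the model proposed in the paper.

```python
# Gaussian quasi-log-likelihood along a simple covariance recursion.
# The scalar recursion below is an illustrative stand-in for a multivariate
# volatility model; it is NOT the model proposed in the paper.
import numpy as np

def quasi_loglik(returns, a, b, c_scale=0.05):
    T, k = returns.shape
    C = c_scale * np.eye(k)
    H = np.cov(returns.T)                       # initial conditional covariance
    ll = 0.0
    for t in range(T):
        r = returns[t]
        sign, logdet = np.linalg.slogdet(H)
        ll -= 0.5 * (logdet + r @ np.linalg.solve(H, r))
        H = C + a * np.outer(r, r) + b * H      # next-period conditional covariance
    return ll

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3)) * 0.01         # placeholder return series
print(quasi_loglik(data, a=0.05, b=0.90))
```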

The innovation of next-generation sequencing (NGS) techniques has significantly reduced the cost of genome sequencing, lowering barriers to future medical research; it is now feasible to apply genome sequencing in studies where it would previously have been cost-inefficient. Identifying damaging or pathogenic mutations in vast amounts of complex, high-dimensional genome sequencing data is of particular interest to researchers. The aim of this paper was therefore to train machine learning models on the attributes of a genetic mutation to predict LoFtool scores, which measure a gene's intolerance to loss-of-function mutations. These attributes included, but were not limited to, the position of a mutation on a chromosome, changes in amino acids, and changes in codons caused by the mutation. Models were built using the univariate feature selection technique f-regression combined with K-nearest neighbors (KNN), Support Vector Machine (SVM), Random Sample Consensus (RANSAC), Decision Trees, Random Forest, and Extreme Gradient Boosting (XGBoost). The models were evaluated using five-fold cross-validated averages of r-squared, mean squared error, root mean squared error, mean absolute error, and explained variance. The findings of this study include multiple trained models achieving test-set r-squared values of 0.97.
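
A minimal version of the modelling pipeline described above might look like the sketch below; synthetic data stands in for the mutation attributes and LoFtool targets, only the KNN model is shown, and the feature count and hyperparameters are illustrative.

```python
# Minimal sketch of the described pipeline: univariate feature selection with
# f_regression followed by a KNN regressor, scored by five-fold cross-validation.
# Synthetic data stands in for the mutation attributes and LoFtool targets.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))                        # placeholder mutation attributes
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=500) # placeholder LoFtool-like score

model = make_pipeline(
    SelectKBest(f_regression, k=10),
    KNeighborsRegressor(n_neighbors=5),
)
scores = cross_validate(model, X, y, cv=5,
                        scoring=("r2", "neg_mean_squared_error"))
print("mean r2:", scores["test_r2"].mean())
print("mean MSE:", -scores["test_neg_mean_squared_error"].mean())
```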

In large-scale systems, centralised techniques for task allocation face fundamental challenges: the number of interactions is limited by resource constraints on computation, storage, and network communication. Scalability can be increased by implementing the system as a distributed task-allocation system that shares tasks across many agents; however, this also increases the resource cost of communication and synchronisation and is itself difficult to scale. In this paper we present four algorithms to address these problems. In combination, they enable each agent to improve its task-allocation strategy through reinforcement learning while adjusting how much it explores the system according to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems in which agents' behaviour is constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate the algorithms in a simulated environment in which agents receive a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, who then carry out those subtasks. We also simulate real-life effects such as network instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum in the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and has been tested on systems of up to 100 agents with less than a 9% impact on algorithm performance.
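
The core exploration-adaptation idea can be sketched as follows: an agent maintains an estimate of how good its current allocation strategy is and scales its exploration rate accordingly. The bandit-style environment and the confidence signal below are placeholders, not the algorithms proposed in the paper.

```python
# Sketch of exploration that adapts to how optimal the agent believes its
# current strategy is. The bandit-style environment and the confidence signal
# are placeholders, not the algorithms proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)
true_reward = np.array([0.2, 0.5, 0.8])      # hidden quality of 3 candidate agents
q = np.zeros(3)                              # estimated value of allocating to each
counts = np.zeros(3)

for step in range(2000):
    # Confidence: how much better the best estimate is than the average one.
    confidence = np.clip(q.max() - q.mean(), 0.0, 1.0)
    epsilon = 0.5 * (1.0 - confidence)       # explore less as confidence grows
    if rng.random() < epsilon:
        action = rng.integers(3)             # explore
    else:
        action = int(q.argmax())             # exploit the current strategy
    reward = true_reward[action] + 0.1 * rng.normal()
    counts[action] += 1
    q[action] += (reward - q[action]) / counts[action]   # incremental mean update

print("estimated values:", np.round(q, 2), "allocations:", counts.astype(int))
```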
