
We establish guaranteed and practically computable a posteriori error bounds for source problems and eigenvalue problems involving linear Schr{\"o}dinger operators with atom-centered potentials, discretized with linear combinations of atomic orbitals. We show that the energy norm of the discretization error can be estimated by the dual energy norm of the residual, which further decomposes into atomic contributions characterizing the error localized on each atom. Moreover, we show that the practical computation of the dual norms of the atomic residuals involves diagonalizing radial Schr{\"o}dinger operators, which can easily be precomputed in practice. We provide numerical illustrations of the performance of this a posteriori analysis on several test cases, showing that the error bounds accurately estimate the error and that the localized error components allow for optimized adaptive basis sets.
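As a schematic illustration of the type of bound described above (the notation $u$, $u_h$, $R$, and $\eta_k$ is ours; the precise decomposition in the paper may differ), the estimate takes the form
\[
\|u - u_h\|_{E} \;\le\; \|R(u_h)\|_{E'}, \qquad \|R(u_h)\|_{E'}^2 \;=\; \sum_{k=1}^{N_{\mathrm{at}}} \eta_k^2 ,
\]
where $u_h$ is the discrete solution, $R(u_h)$ its residual, and each $\eta_k$ is the dual norm of the residual contribution localized on atom $k$, computable from the eigendecomposition of a precomputed radial Schr{\"o}dinger operator.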


Text-to-image generation and text-guided image manipulation have received considerable attention in the field of image generation. However, the mainstream evaluation methods for these tasks mainly assess the overall alignment between the input text and the generated images, and have difficulty evaluating whether every piece of information in the input text is accurately reflected in the generated images. This paper proposes new evaluation metrics that assess the alignment between the input text and the generated images for every individual object. First, ChatGPT is used to produce questions about the generated images from the input text. We then use Visual Question Answering (VQA) to measure the relevance of the generated images to the input text, which allows for a more detailed evaluation of the alignment than existing methods. In addition, we use No-Reference Image Quality Assessment (NR-IQA) to evaluate not only the text-image alignment but also the quality of the generated images. Experimental results show that the proposed approach can simultaneously assess fine-grained text-image alignment and image quality, outperforming existing metrics while allowing the relative weighting of the two criteria to be adjusted.
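A minimal sketch of such a per-object evaluation pipeline, with `generate_questions`, `answer_question`, and `nr_iqa_score` as placeholders for whichever LLM, VQA, and NR-IQA models are actually used (the paper's exact scoring rule is not reproduced here):

```python
def evaluate(prompt, image, generate_questions, answer_question, nr_iqa_score, alpha=0.5):
    """Combine per-object alignment (via VQA) with no-reference image quality.

    alpha controls the weighting between alignment and quality (the adjustable
    ratio mentioned in the abstract). All three model callables are hypothetical.
    """
    # 1) Ask an LLM to turn the prompt into yes/no questions, one per object/attribute.
    questions = generate_questions(prompt)          # e.g. ["Is there a red car?", ...]

    # 2) Answer each question on the generated image with a VQA model and count "yes".
    answers = [answer_question(image, q) for q in questions]
    alignment = sum(a.strip().lower() == "yes" for a in answers) / max(len(questions), 1)

    # 3) No-reference image quality score, assumed normalised to [0, 1].
    quality = nr_iqa_score(image)

    # 4) Weighted combination of per-object alignment and image quality.
    return alpha * alignment + (1 - alpha) * quality
```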

We propose novel optimal and parameter-free algorithms for computing an approximate solution with small (projected) gradient norm. Specifically, for computing an approximate solution whose (projected) gradient norm does not exceed $\varepsilon$, we obtain the following results: a) for the convex case, the total number of gradient evaluations is bounded by $O(1)\sqrt{L\|x_0 - x^*\|/\varepsilon}$, where $L$ is the Lipschitz constant of the gradient and $x^*$ is any optimal solution; b) for the strongly convex case, the total number of gradient evaluations is bounded by $O(1)\sqrt{L/\mu}\log(\|\nabla f(x_0)\|/\varepsilon)$, where $\mu$ is the strong convexity modulus; and c) for the nonconvex case, the total number of gradient evaluations is bounded by $O(1)\sqrt{Ll}(f(x_0) - f(x^*))/\varepsilon^2$, where $l$ is the lower curvature constant. Our complexity results match the lower complexity bounds for the convex and strongly convex cases, and achieve the best-known complexity bound for the nonconvex case for the first time in the literature. Our results can also be extended to problems with constraints and composite objectives. Moreover, for all the convex, strongly convex, and nonconvex cases, we propose parameter-free algorithms that do not require the input of any problem parameters or the convexity status of the problem. To the best of our knowledge, no such parameter-free methods existed before, especially for the strongly convex and nonconvex cases. Since most regularity conditions (e.g., strong convexity and lower curvature) are imposed over a global scope, the corresponding problem parameters are notoriously difficult to estimate. However, gradient norm minimization equips us with a convenient tool to monitor the progress of algorithms, and thus the ability to estimate such parameters in situ.
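To illustrate the in-situ estimation idea in its simplest form (this is plain backtracking gradient descent, not the paper's optimal algorithms, and achieves a worse complexity), the gradient norm serves both as the stopping criterion and as the progress monitor while the smoothness constant $L$ is estimated on the fly:

```python
import numpy as np

def gd_small_grad(f, grad, x0, eps=1e-6, L0=1.0, max_iter=10_000):
    """Minimal sketch: gradient descent that estimates L by backtracking and
    terminates once the gradient norm drops below eps."""
    x, L = np.asarray(x0, dtype=float), L0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            break
        # Increase the local estimate of L until the sufficient-decrease
        # inequality f(x - g/L) <= f(x) - ||g||^2 / (2L) holds.
        while f(x - g / L) > f(x) - np.dot(g, g) / (2 * L):
            L *= 2.0
        x = x - g / L
        L /= 2.0  # allow the estimate to shrink again on the next iteration
    return x, L

# Example: quadratic with an unknown smoothness constant.
A = np.diag([1.0, 10.0])
x_sol, L_est = gd_small_grad(lambda x: 0.5 * x @ A @ x, lambda x: A @ x, x0=[5.0, -3.0])
```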

We consider a class of Wasserstein distributionally robust Nash equilibrium problems, where agents construct heterogeneous data-driven Wasserstein ambiguity sets using private samples and radii, in line with their individual risk-averse behaviour. By leveraging relevant properties of this class of games, we show that equilibria of the original, seemingly infinite-dimensional problem can be obtained as solutions to a finite-dimensional Nash equilibrium problem. We then reformulate the problem as a finite-dimensional variational inequality and establish the connection between the corresponding solution sets. Our reformulation scales well with the data size and maintains a fixed number of constraints, independently of the number of samples. To compute a solution, we employ two algorithmic schemes based on the golden ratio algorithm. The efficiency of both schemes is corroborated through extensive simulation studies on an illustrative example and on a stochastic portfolio allocation game in which behavioural coupling among investors is modelled.
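For reference, a minimal sketch of the underlying golden ratio algorithm for a monotone variational inequality VI($F$, $C$), as I understand the base method the abstract builds on (the paper's two schemes are more elaborate; `F` and `proj_C` below are illustrative placeholders):

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # golden ratio

def golden_ratio_vi(F, proj_C, x0, step, n_iter=1000):
    """Find x* in C with <F(x*), x - x*> >= 0 for all x in C (sketch only)."""
    x, x_bar = np.asarray(x0, float), np.asarray(x0, float)
    for _ in range(n_iter):
        x_bar = ((PHI - 1) * x + x_bar) / PHI   # averaging (convex combination) step
        x = proj_C(x_bar - step * F(x))         # projected step using the operator at x
    return x

# Toy unconstrained affine VI: F(x) = A x + b with a monotone A, so proj_C is the identity.
A = np.array([[2.0, 1.0], [-1.0, 2.0]])
b = np.array([1.0, -1.0])
sol = golden_ratio_vi(lambda x: A @ x + b, lambda y: y, np.zeros(2), step=0.3)
```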

Numerical modeling errors are unavoidable in finite element analysis. The presence of model errors inherently reflects both model accuracy and uncertainty. To date, there have been few methods for explicitly quantifying errors at points of interest (e.g., at finite element nodes). The lack of explicit model-error approximators has been addressed recently with the emergence of machine learning (ML), which closes the loop between numerical model features/solutions and explicit model error approximations. In this paper, we propose physics-informed neural networks (PINNs) for simultaneous numerical model error approximation and superresolution. To test our approach, numerical data were generated using finite element simulations on a two-dimensional elastic plate with a central opening. Four- and eight-node quadrilateral elements were used in the discretization to represent the reduced-order and higher-order models, respectively. We found that the developed PINNs effectively predict model errors in both the x and y displacement fields, with small differences between predictions and ground truth. Our findings demonstrate that the integration of physics-informed loss functions enables neural networks (NNs) to surpass a purely data-driven approach for approximating model errors.
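A framework-agnostic sketch of what such a composite loss can look like (not the paper's exact formulation; `model` and `pde_residual` are hypothetical callables standing in for the error surrogate and the governing-equation residual):

```python
import numpy as np

def pinn_loss(model, params, coords, error_data, pde_residual, w_data=1.0, w_phys=0.1):
    """Physics-informed loss for learning the model error field at FE nodes.

    model(params, coords)        -> predicted error field at the node coordinates
    pde_residual(params, coords) -> residual of the governing equations (e.g. elastic
                                    equilibrium) evaluated on the prediction
    """
    pred = model(params, coords)
    data_term = np.mean((pred - error_data) ** 2)            # supervised mismatch to FE error data
    phys_term = np.mean(pde_residual(params, coords) ** 2)   # penalise violation of the physics
    return w_data * data_term + w_phys * phys_term           # weighted composite loss
```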

Splitting methods are widely used for solving initial value problems (IVPs) due to their ability to simplify complicated evolutions into more manageable subproblems which can be solved efficiently and accurately. Traditionally, these methods are derived using analytic and algebraic techniques from numerical analysis, including truncated Taylor series and their Lie-algebraic analogue, the Baker--Campbell--Hausdorff formula. These tools enable the development of high-order numerical methods that provide exceptional accuracy for small timesteps. Moreover, these methods often (nearly) conserve important physical invariants, such as mass, unitarity, and energy. However, in many practical applications the computational resources are limited. Thus, it is crucial to identify methods that achieve the best accuracy within a fixed computational budget, which might require taking relatively large timesteps. In this regime, high-order methods derived with traditional techniques often exhibit large errors since they are only designed to be asymptotically optimal. Machine learning techniques offer a potential solution, since they can be trained to solve a given IVP efficiently with fewer computational resources. However, they are often purely data-driven, come with limited convergence guarantees in the small-timestep regime, and do not necessarily conserve physical invariants. In this work, we propose a framework for finding machine-learned splitting methods that are computationally efficient for large timesteps and have provable convergence and conservation guarantees in the small-timestep limit. We demonstrate numerically that the learned methods, which by construction converge quadratically in the timestep size, can be significantly more efficient than established methods for the Schr\"{o}dinger equation when the computational budget is limited.
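For context, a classical Strang splitting step for the Schr\"{o}dinger equation $i u_t = -\tfrac{1}{2} u_{xx} + V(x) u$ on a periodic grid, i.e. the kind of second-order baseline the learned methods are measured against (the learned coefficients themselves are not reproduced here):

```python
import numpy as np

def strang_step(u, V, dx, dt):
    """One Strang-splitting step: half potential flow, full kinetic flow, half potential flow."""
    k = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)                  # Fourier wavenumbers
    u = np.exp(-0.5j * dt * V) * u                                 # half-step in the potential part
    u = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(u))    # exact kinetic step in Fourier space
    u = np.exp(-0.5j * dt * V) * u                                 # second half-step in the potential part
    return u

# Example: Gaussian wave packet in a harmonic potential.
x = np.linspace(-10, 10, 256, endpoint=False)
dx, dt = x[1] - x[0], 0.01
V = 0.5 * x**2
u = np.exp(-(x - 1.0) ** 2)
u = u / np.sqrt(np.sum(np.abs(u) ** 2) * dx)                       # normalise the wavefunction
for _ in range(100):
    u = strang_step(u, V, dx, dt)
```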

When solving inverse problems, one has to deal with numerous potential sources of model inexactness, such as object motion, calibration errors, or simplified data models. Regularized Sequential Subspace Optimization (ReSeSOp) compensates for such inaccuracies within the reconstruction step by employing consecutive projections onto suitably defined subspaces. However, this approach relies on a priori estimates of the model inexactness levels, which are typically unknown. In dynamic imaging applications, where inaccuracies arise from the unpredictable dynamics of the object, these estimates are particularly challenging to determine in advance. To overcome this limitation, we propose a learned version of ReSeSOp which approximates the inexactness levels on the fly. The proposed framework generalizes established unrolled iterative reconstruction schemes to inexact forward operators and is particularly tailored to the structure of dynamic problems. We also present a comprehensive mathematical analysis of the effect of dependencies within the forward problem, clarifying when and why dividing the overall problem into subproblems is essential. The proposed method is evaluated on various examples from dynamic imaging, including datasets from a rheological CT experiment, brain MRI, and real-time cardiac MRI. The results show improvements in reconstruction quality while ensuring adequate data consistency.
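A minimal sketch of the elementary building block behind such sequential-subspace corrections, as I understand it: the metric projection of an iterate onto a "stripe" whose half-width is the (possibly learned) inexactness level. The full learned ReSeSOp scheme chains such projections inside an unrolled reconstruction network; the function names here are illustrative.

```python
import numpy as np

def project_onto_stripe(x, u, alpha, xi):
    """Project x onto the stripe {z : |<u, z> - alpha| <= xi}.

    u     -- search direction (e.g. a gradient of the data-fidelity term)
    alpha -- target value of <u, .>
    xi    -- inexactness level (half-width of the stripe), possibly learned
    """
    t = np.dot(u, x) - alpha
    norm2 = np.dot(u, u)
    if abs(t) <= xi or norm2 == 0.0:       # already inside the stripe: nothing to do
        return x
    shift = t - np.sign(t) * xi            # signed distance (in <u, .>) to the nearest face
    return x - (shift / norm2) * u         # orthogonal step back onto the stripe
```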

Analytical workflows in functional magnetic resonance imaging are highly flexible, with limited best practices for choosing a pipeline. While it has been shown that different pipelines might lead to different results, there is still a lack of understanding of the factors that drive these differences and of the stability of these differences across contexts. We use community detection algorithms to explore the pipeline space and assess the stability of pipeline relationships across different contexts. We show that there are subsets of pipelines that give similar results, especially those sharing specific parameters (e.g. the number of motion regressors, the software package, etc.). These pipeline-to-pipeline patterns are stable across groups of participants but not across different tasks. By visualizing the differences between communities, we show that the pipeline space is mainly driven by the size of the activation area in the brain and the scale of the statistic values in the statistic maps.
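A hedged sketch of the general approach (the paper's actual similarity measure and community detection algorithm may differ): build a weighted graph whose nodes are pipelines and whose edges encode the similarity of their statistic maps, then extract communities of pipelines that give similar results.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def pipeline_communities(stat_maps, threshold=0.8):
    """stat_maps: dict {pipeline_name: 1-D array of statistic values (flattened map)}."""
    names = list(stat_maps)
    G = nx.Graph()
    G.add_nodes_from(names)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = np.corrcoef(stat_maps[a], stat_maps[b])[0, 1]   # similarity of the two maps
            if r >= threshold:
                G.add_edge(a, b, weight=r)                      # keep only strongly similar pairs
    return [set(c) for c in greedy_modularity_communities(G, weight="weight")]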

Solving multiscale diffusion problems is often computationally expensive due to the spatial and temporal discretization challenges arising from high-contrast coefficients. To address this issue, a partially explicit temporal splitting scheme is proposed. By appropriately constructing multiscale spaces, the spatial multiscale property is effectively managed, and it has been demonstrated that the temporal step size is independent of the contrast. To enhance simulation speed, we propose a parallel algorithm for the multiscale flow problem that leverages the partially explicit temporal splitting scheme. The idea is first to evolve the partially explicit system using a coarse time step, and then to correct the solution on each coarse time interval with a fine propagator, for which we consider both a sequential solver and an all-at-once solver. This procedure is performed iteratively until convergence. We analyze the stability and convergence of the proposed algorithm. Numerical experiments demonstrate that the proposed algorithm achieves high accuracy for high-contrast problems and converges in a relatively small number of iterations. The number of iterations stays stable as the number of coarse intervals increases, thus significantly improving computational efficiency through parallel processing.
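The coarse-predict/fine-correct iteration described above follows the familiar parareal pattern. A generic driver sketch, with the coarse propagator `G` (here, the partially explicit scheme with a coarse step) and the fine propagator `F` supplied as callables (the paper's specific multiscale propagators are not reproduced):

```python
import numpy as np

def parareal(u0, G, F, n_intervals, n_iter=10, tol=1e-8):
    """Parareal-style iteration: G(u, n) and F(u, n) advance u over coarse interval n."""
    U = [np.array(u0, float)]
    for n in range(n_intervals):                              # initial sequential coarse sweep
        U.append(G(U[n], n))
    for _ in range(n_iter):
        F_vals = [F(U[n], n) for n in range(n_intervals)]     # fine solves, parallel in practice
        G_old = [G(U[n], n) for n in range(n_intervals)]
        U_new = [U[0]]
        for n in range(n_intervals):                          # sequential correction sweep
            U_new.append(G(U_new[n], n) + F_vals[n] - G_old[n])
        diff = max(np.linalg.norm(a - b) for a, b in zip(U_new, U))
        U = U_new
        if diff < tol:                                        # stop once coarse-interval values settle
            break
    return U
```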

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as those on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and is itself difficult to scale. In this paper we present four algorithms to solve these problems. In combination, these algorithms enable each agent to improve its task-allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum within the system configurations considered. It provides 5x better performance recovery than no-knowledge-retention approaches when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
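A hedged sketch of one ingredient described above: an agent that learns which peer to allocate a subtask to from average rewards, and widens or narrows its exploration depending on how confident it is in its current strategy. The confidence heuristic (the gap between the best and second-best peer) is illustrative, not the paper's algorithm.

```python
import random

class AllocatingAgent:
    def __init__(self, peers, eps_min=0.05, eps_max=0.5):
        self.q = {p: 0.0 for p in peers}     # running average reward per peer
        self.n = {p: 0 for p in peers}       # allocation counts per peer
        self.eps_min, self.eps_max = eps_min, eps_max

    def _epsilon(self):
        best, *rest = sorted(self.q.values(), reverse=True)
        gap = best - (rest[0] if rest else 0.0)        # how clearly one peer dominates
        confidence = min(1.0, max(0.0, gap))           # crude confidence in the current strategy
        return self.eps_max - confidence * (self.eps_max - self.eps_min)

    def choose_peer(self):
        if random.random() < self._epsilon():          # explore more when unsure
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)             # otherwise exploit the best-known peer

    def update(self, peer, reward):
        self.n[peer] += 1
        self.q[peer] += (reward - self.q[peer]) / self.n[peer]   # incremental average update
```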

Graph representation learning for hypergraphs can be used to extract patterns among higher-order interactions that are critically important in many real-world problems. Current approaches designed for hypergraphs, however, are unable to handle different types of hypergraphs and are typically not generic across learning tasks. Indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention based graph neural network called Hyper-SAGNN applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms the state-of-the-art methods on traditional tasks while also achieving great performance on a new task called outsider identification. Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in different applications.
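A generic illustration, not the Hyper-SAGNN architecture itself: scoring a candidate hyperedge of variable size by running single-head self-attention over its node embeddings and pooling the result. It only shows why self-attention handles variable-sized, heterogeneous tuples naturally; all weights here are random placeholders.

```python
import numpy as np

def hyperedge_score(X, Wq, Wk, Wv, w_out):
    """X: (k, d) embeddings of the k nodes in a candidate hyperedge (k may vary)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = Q @ K.T / np.sqrt(K.shape[1])                   # pairwise attention logits among the nodes
    A = np.exp(A - A.max(axis=1, keepdims=True))
    A = A / A.sum(axis=1, keepdims=True)                # row-wise softmax
    H = A @ V                                           # context-aware node representations
    return 1 / (1 + np.exp(-(H.mean(axis=0) @ w_out)))  # pooled probability of being a hyperedge

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(3, d))                             # a candidate hyperedge with 3 nodes
p = hyperedge_score(X, *(rng.normal(size=(d, d)) for _ in range(3)), rng.normal(size=d))
```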
