
Recently, the multi-step inertial randomized Kaczmarz (MIRK) method for solving large-scale linear systems was proposed in [17]. In this paper, we incorporate the greedy probability criterion into the MIRK method and introduce a tighter threshold parameter for this criterion. We prove that the resulting greedy MIRK (GMIRK) method enjoys an improved deterministic linear convergence rate compared with both the MIRK method and the greedy randomized Kaczmarz method. Furthermore, we show that the multi-step inertial extrapolation approach can be interpreted geometrically as an orthogonal projection method, and we establish its relationship with the sketch-and-project method [15] and the oblique projection technique [22]. Numerical experiments are provided to confirm our results.
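To make the ingredients concrete, the following sketch combines a single-step inertial (heavy-ball) extrapolation with a relaxed greedy row-selection rule for a consistent system $Ax=b$. It is illustrative only: the paper's GMIRK method uses a multi-step extrapolation and a tighter threshold than the simple convex-combination threshold `theta` assumed here.

```python
import numpy as np

def greedy_inertial_kaczmarz(A, b, iters=3000, beta=0.2, theta=0.5, seed=0):
    """Illustrative greedy Kaczmarz iteration with one-step inertial
    extrapolation; a sketch of the idea, not the paper's exact GMIRK."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.sum(A * A, axis=1)
    x_prev = x = np.zeros(n)
    for _ in range(iters):
        y = x + beta * (x - x_prev)          # inertial extrapolation point
        r = b - A @ y                        # residual at the extrapolated point
        scores = r**2 / row_norms2           # squared distance to each hyperplane
        if scores.max() == 0.0:              # exact solution reached
            break
        # Relaxed greedy criterion: keep rows whose score exceeds a convex
        # combination of the max score and the mean score ||r||^2 / ||A||_F^2.
        tau = theta * scores.max() + (1 - theta) * (r @ r) / row_norms2.sum()
        idx = np.flatnonzero(scores >= tau)
        i = rng.choice(idx, p=scores[idx] / scores[idx].sum())
        # Orthogonal projection of y onto the hyperplane a_i^T x = b_i.
        x_prev, x = x, y + (r[i] / row_norms2[i]) * A[i]
    return x

# Consistent random system: the iteration recovers the exact solution.
A = np.random.default_rng(1).normal(size=(50, 10))
x_true = np.arange(10.0)
x_hat = greedy_inertial_kaczmarz(A, A @ x_true)
```

The argmax row always passes the threshold (its score dominates the weighted mean), so the candidate set `idx` is never empty.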


The numerical solution of continuum damage mechanics (CDM) problems suffers from convergence-related challenges during the material softening stage, and consequently existing iterative solvers are subject to a trade-off between computational expense and solution accuracy. In this work, we present a novel unified arc-length (UAL) method, and we derive the formulation of the analytical tangent matrix and governing system of equations for both local and non-local gradient damage problems. Unlike existing versions of arc-length solvers that monolithically scale the external force vector, the proposed method treats the external force vector as an independent variable and determines the position of the system on the equilibrium path based on all of its nodal variations. This approach renders the proposed solver substantially more efficient and robust than existing solvers used in CDM problems. We demonstrate the considerable advantages of the proposed algorithm through several benchmark 1D problems with sharp snap-backs and 2D examples under various boundary conditions and loading scenarios. The proposed UAL approach exhibits a superior ability to overcome critical increments along the equilibrium path. Moreover, the proposed UAL method is one to two orders of magnitude faster than force-controlled arc-length and monolithic Newton-Raphson solvers.
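For context, conventional arc-length solvers close the equilibrium equations with a single scalar load factor $\lambda$ that monolithically scales the external force vector; a standard Crisfield-type spherical constraint (textbook material, shown for contrast with the UAL formulation) reads:

```latex
\begin{aligned}
\mathbf{R}(\mathbf{u},\lambda) &= \lambda\,\mathbf{F}_{\mathrm{ext}} - \mathbf{F}_{\mathrm{int}}(\mathbf{u}) = \mathbf{0}, \\
g(\Delta\mathbf{u},\Delta\lambda) &= \Delta\mathbf{u}^{\mathsf{T}}\Delta\mathbf{u}
  + \psi^{2}\,\Delta\lambda^{2}\,\mathbf{F}_{\mathrm{ext}}^{\mathsf{T}}\mathbf{F}_{\mathrm{ext}}
  - \Delta l^{2} = 0,
\end{aligned}
```

where $\Delta l$ is the prescribed arc-length increment and $\psi$ a scaling parameter. The UAL method described above departs from this scheme by promoting the nodal entries of $\mathbf{F}_{\mathrm{ext}}$ to independent unknowns instead of scaling them through the single factor $\lambda$.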

The unified gas-kinetic wave-particle method (UGKWP) has been developed in recent years for multiscale gas, plasma, and multiphase flow transport processes. In this work, we propose an implicit unified gas-kinetic wave-particle (IUGKWP) method to remove the CFL time step constraint. Based on the local integral solution of the radiative transfer equation (RTE), the particle transport processes are categorized into a long-$\lambda$ streaming process and a short-$\lambda$ streaming process relative to a local physical characteristic time $t_p$. In the construction of the IUGKWP method, the long-$\lambda$ streaming process is tracked by the implicit Monte Carlo (IMC) method; the short-$\lambda$ streaming process is evolved by solving the implicit moment equations; and the photon distribution is closed by a local integral solution of the RTE. In the IUGKWP method, the multiscale flux of radiation energy and the multiscale closure of the photon distribution are constructed based on the local integral solution. The IUGKWP method preserves the second-order asymptotic expansion of the RTE in the optically thick regime and adapts its computational complexity to the flow regime. The numerical dissipation is well controlled, and the teleportation error is significantly reduced in the optically thick regime. The computational complexity of the IUGKWP method decreases exponentially as the Knudsen number approaches zero, and the computational efficiency is remarkably improved in the optically thick regime. The IUGKWP method is formulated on a generalized unstructured mesh, and 2D and 3D algorithms are developed. Numerical tests are presented to validate the capability of the IUGKWP method in capturing multiscale photon transport processes. The algorithm and code will be applied to engineering problems in inertial confinement fusion (ICF).

This work presents a systematic methodology for describing the transient dynamics of coarse-grained molecular systems inferred from all-atom simulation data. We suggest Langevin-type dynamics where the coarse-grained interaction potential depends explicitly on time to efficiently approximate the transient coarse-grained dynamics. We apply the path-space force matching approach in the transient dynamics regime to learn the proposed model parameters. In particular, we parameterize the coarse-grained potential with respect to both the pair distance of the CG particles and time, and we obtain an evolution model that is explicitly time-dependent. Moreover, we follow a data-driven approach to estimate the friction kernel, given by appropriate correlation functions, directly from the underlying all-atom molecular dynamics simulations. To explore and validate the proposed methodology, we study a benchmark system of a moving particle in a box. We examine the suggested model's effectiveness in terms of the system's correlation time and find that the model can approximate the transient time regime well, depending on that correlation time. As a result, in the less correlated case, it can represent the dynamics over a longer time interval. We present an extensive study of our approach on a realistic high-dimensional molecular water system. Starting the water system out of thermal equilibrium, we collect all-atom trajectories over the empirically estimated transient time regime. Then, we infer the suggested model and strengthen its validity by comparing it with simplified Markovian models.
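The model class described above can be sketched with a minimal Euler-Maruyama integrator for overdamped Langevin dynamics driven by an explicitly time-dependent force. The toy potential below (a harmonic well whose stiffness relaxes in time) is a hypothetical stand-in, not the paper's fitted coarse-grained potential, and the memoryless friction is a simplification of the estimated friction kernel.

```python
import numpy as np

def overdamped_langevin(x0, grad_U, gamma=1.0, kT=1.0, dt=1e-3, steps=1000, seed=0):
    """Euler-Maruyama integration of dx = -grad_U(x, t)/gamma dt + sqrt(2 kT/gamma) dW,
    i.e. overdamped Langevin dynamics with a time-dependent potential."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    sigma = np.sqrt(2.0 * kT * dt / gamma)   # noise amplitude per step
    for k in range(steps):
        t = k * dt
        x = x - (dt / gamma) * grad_U(x, t) + sigma * rng.normal(size=x.shape)
        traj.append(x.copy())
    return np.array(traj)

# Toy time-dependent force: harmonic stiffness k(t) = 1 + exp(-t) relaxing to 1.
grad_U = lambda x, t: (1.0 + np.exp(-t)) * x
traj = overdamped_langevin(np.ones(3), grad_U, steps=5000)
```

The same loop applies unchanged when `grad_U` is replaced by a force field parameterized in pair distance and time, as in the force-matching construction.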

Neuro-evolutionary methods have proven effective in addressing a wide range of tasks. However, the study of the robustness and generalisability of evolved artificial neural networks (ANNs) has remained limited. This has immense implications in fields like robotics, where such controllers are used in control tasks. Unexpected morphological or environmental changes during operation can risk failure if the ANN controllers are unable to handle these changes. This paper proposes an algorithm that aims to enhance the robustness and generalisability of the controllers. This is achieved by introducing morphological variations during the evolutionary process. As a result, it is possible to discover generalist controllers that can handle a wide range of morphological variations sufficiently well without requiring information about their morphologies or adaptation of their parameters. We perform an extensive experimental analysis in simulation that demonstrates the trade-off between specialist and generalist controllers. The results show that generalists are able to control a range of morphological variations at the cost of underperforming on a specific morphology relative to a specialist. This research contributes to the field by addressing the limited understanding of robustness and generalisability in neuro-evolutionary methods and proposes a method by which to improve these properties.
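The core selection mechanism can be illustrated with a minimal (1+1) evolution strategy in which each candidate is scored by its average fitness across a set of morphological variations, so selection favours generalists. Everything here is a toy assumption for illustration: the scalar "morphology" parameter, the quadratic fitness, and the linear genotype stand in for the paper's evolved ANN controllers and simulated robot morphologies.

```python
import random

def evolve_generalist(fitness, dim=4, variations=(0.5, 1.0, 2.0),
                      generations=200, sigma=0.1, seed=0):
    """(1+1) evolution strategy whose fitness is averaged over morphological
    variations, rewarding controllers that work across all of them."""
    rng = random.Random(seed)
    parent = [0.0] * dim

    def score(w):
        # Generalist objective: mean fitness over every morphology variant.
        return sum(fitness(w, m) for m in variations) / len(variations)

    best = score(parent)
    for _ in range(generations):
        child = [w + rng.gauss(0, sigma) for w in parent]   # Gaussian mutation
        s = score(child)
        if s >= best:                                       # elitist replacement
            parent, best = child, s
    return parent, best

# Toy task: a controller gain w[0] should ideally match the morphology scale m.
# The generalist settles on a compromise value across all variations.
fit = lambda w, m: -(w[0] - m) ** 2
w, f = evolve_generalist(fit)
```

With variations 0.5, 1.0, and 2.0, the generalist optimum is their mean (about 1.17): the evolved controller underperforms any single specialist on its own morphology, which is exactly the trade-off the abstract describes.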

Solving linear systems is of great importance in numerous fields. In particular, circulant systems are especially valuable for efficiently finding numerical solutions to physics-related differential equations. Current quantum algorithms like HHL or variational methods are either resource-intensive or may fail to find a solution. We present an efficient algorithm based on convex optimization over combinations of quantum states to solve banded circulant linear systems whose non-zero terms are within distance $K$ of the main diagonal. By decomposing banded circulant matrices into cyclic permutations, our approach produces approximate solutions to such systems with a number of quantum states linear in $K$, significantly improving over previous convergence guarantees, which require a number of quantum states exponential in $K$. We propose a hybrid quantum-classical algorithm using the Hadamard test and the quantum Fourier transform as subroutines and show its PromiseBQP-hardness. Additionally, we introduce a quantum-inspired algorithm with similar performance given sample and query access. We validate our methods with classical simulations and an implementation on an actual IBM quantum computer, showcasing their applicability for solving physical problems such as heat transfer.
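The structure being exploited is easy to state classically: a circulant matrix is a polynomial in the cyclic permutation matrix $P$, i.e. $C = \sum_k c_k P^k$ (with only $2K+1$ non-zero terms when the band has width $K$), and it is diagonalized by the discrete Fourier transform. The sketch below is the classical FFT baseline for such systems, not the paper's quantum or quantum-inspired algorithm.

```python
import numpy as np

def solve_banded_circulant(c, b):
    """Solve Cx = b where C is circulant with first column c, via FFT
    diagonalization: the eigenvalues of C are the DFT of c."""
    eigvals = np.fft.fft(c)
    return np.real(np.fft.ifft(np.fft.fft(b) / eigvals))

# Banded circulant with bandwidth K = 1: a shifted 1-D Laplacian stencil.
n = 8
c = np.zeros(n)
c[0], c[1], c[-1] = 2.1, -1.0, -1.0
# Build C explicitly as sum_k c_k P^k: column k of C is c cyclically shifted by k.
C = np.column_stack([np.roll(c, k) for k in range(n)])
b = np.ones(n)
x = solve_banded_circulant(c, b)
```

The shift by 2.1 (rather than 2.0) keeps every eigenvalue strictly positive, so the system is invertible.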

Point source localisation is generally modelled as a Lasso-type problem on measures. However, optimisation methods in non-Hilbert spaces, such as the space of Radon measures, are much less developed than in Hilbert spaces. Most numerical algorithms for point source localisation are based on the Frank-Wolfe conditional gradient method, for which ad hoc convergence theory is developed. We develop extensions of proximal-type methods to spaces of measures. This includes forward-backward splitting, its inertial version, and primal-dual proximal splitting. Their convergence proofs follow standard patterns. We demonstrate their numerical efficacy.
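In the familiar finite-dimensional (Hilbert-space) setting, the inertial forward-backward scheme the abstract refers to is FISTA applied to the Lasso; the measure-space extension is the paper's contribution, so the sketch below shows only the standard Hilbert-space analogue with a synthetic sparse recovery problem.

```python
import numpy as np

def inertial_forward_backward(A, b, lam, iters=1000):
    """Inertial forward-backward splitting (FISTA) for the Lasso
    min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        g = y - (A.T @ (A @ y - b)) / L      # forward (gradient) step
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # prox: soft-threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # inertial extrapolation
        x, t = x_new, t_new
    return x

# Synthetic sparse recovery: 3 spikes, 40 noiseless Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x0 = np.zeros(100)
x0[[3, 17, 42]] = [1.0, -2.0, 1.5]
x = inertial_forward_backward(A, A @ x0, lam=0.1)
```

In the point-source setting, the unknown becomes a sparse measure (spike positions and weights) rather than a sparse vector on a fixed grid, which is why the proximal machinery must be rebuilt in the space of Radon measures.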

In this paper, two kinds of generalizations of ideal matrices, namely generalized ideal matrices and double ideal matrices, are obtained and studied. The concepts of generalized ideal matrices and double ideal matrices are proposed, and their ranks and maximal linearly independent groups are determined. The initial motivation for studying double ideal matrices is to study quasi-cyclic codes of fractional index. In this paper, a generalized form of quasi-cyclic codes, namely $\phi$-quasi-cyclic codes, and the construction of their generator matrices are given via double ideal matrices.

We propose an approach to compute inner- and outer-approximations of the sets of values satisfying constraints expressed as arbitrarily quantified formulas. Such formulas arise, for instance, when specifying important problems in control such as robustness, motion planning, or controller comparison. We propose an interval-based method which allows for tractable but tight approximations. We demonstrate its applicability through a series of examples and benchmarks using a prototype implementation.

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communications and synchronisation, and is difficult to scale. In this paper we present four algorithms to solve these problems. The combination of these algorithms enables each agent to improve its task allocation strategy through reinforcement learning, while changing how much it explores the system in response to how optimal it believes its current strategy is, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, limiting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum within the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
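The exploration-adaptation idea, exploring less as the current strategy looks closer to optimal, can be illustrated with a minimal epsilon-greedy learner whose exploration rate shrinks as the gap between its best and second-best value estimates grows. The bandit framing, reward values, and the specific confidence heuristic are all illustrative assumptions, not the paper's four algorithms.

```python
import random

def adaptive_eps_greedy(arm_means, steps=5000, seed=0):
    """Epsilon-greedy action selection where epsilon is reduced as the agent
    becomes more confident that its greedy choice dominates the alternatives."""
    rng = random.Random(seed)
    n = len(arm_means)
    est = [0.0] * n          # running reward estimates per allocation choice
    cnt = [0] * n            # pull counts
    for _ in range(steps):
        # Confidence heuristic: the larger the estimated gap between the best
        # and second-best option, the less the agent explores.
        top = sorted(est, reverse=True)
        eps = max(0.01, 0.5 / (1.0 + 5.0 * (top[0] - top[1])))
        if rng.random() < eps:
            a = rng.randrange(n)                        # explore
        else:
            a = max(range(n), key=est.__getitem__)      # exploit
        reward = arm_means[a] + rng.gauss(0, 0.1)       # noisy task outcome
        cnt[a] += 1
        est[a] += (reward - est[a]) / cnt[a]            # incremental mean update
    return est, cnt

est, cnt = adaptive_eps_greedy([0.2, 0.5, 0.9])
```

Keeping a small exploration floor (here 0.01) is what allows recovery when conditions change, mirroring the knowledge-retention behaviour evaluated in the abstract.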

We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
