
We present the first implementation of the Active Flux method on adaptively refined Cartesian grids. The Active Flux method is a third order accurate finite volume method for hyperbolic conservation laws, based on the use of point values as well as cell average values of the conserved quantities. The resulting method has a compact stencil in space and time and good stability properties. The method is implemented as a new solver in ForestClaw, a software library for parallel adaptive mesh refinement of patch-based solvers. On each Cartesian grid patch the single-grid Active Flux method can be applied. The exchange of data between grid patches is organised via ghost cells. The local stencil in space and time, together with the availability of the point values used in the reconstruction, leads to an efficient implementation. The resulting method is third order accurate, conservative, and allows the use of subcycling in time.
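
To make the structure of the update concrete, the following is a minimal single-grid sketch of Active Flux for 1D linear advection with periodic boundaries: point values live at cell interfaces, each cell carries a quadratic reconstruction matching its average and the two interface values, the point values are evolved exactly along characteristics, and Simpson's rule in time yields the interface flux. This is an illustrative toy under those assumptions, not the ForestClaw solver described above.

```python
import numpy as np

# Active Flux for u_t + a u_x = 0, a > 0, periodic domain, CFL nu <= 1.
# ubar[i]: average of cell i; up[i]: point value at the LEFT interface
# of cell i (interface i-1/2).

def active_flux_step(ubar, up, nu):
    """One time step; nu = a*dt/dx is the CFL number."""
    uL = up                       # left interface value of cell i
    uR = np.roll(up, -1)          # right interface value of cell i
    # Quadratic reconstruction p(s) = c0 + c1*s + c2*s^2 on s in [0,1],
    # matching p(0)=uL, p(1)=uR and integral = ubar.
    c0 = uL
    c1 = 6.0 * ubar - 4.0 * uL - 2.0 * uR
    c2 = 3.0 * (uL + uR) - 6.0 * ubar

    def eval_upwind(s):
        # Evaluate the reconstruction of the upwind cell (i-1) at s.
        return np.roll(c0 + c1 * s + c2 * s * s, 1)

    # Exact characteristic evolution of the point values: for nu <= 1
    # the foot of the characteristic stays inside the upwind cell.
    up_half = eval_upwind(1.0 - 0.5 * nu)    # value at t + dt/2
    up_new  = eval_upwind(1.0 - nu)          # value at t + dt
    # Simpson's rule in time gives a third-order accurate flux;
    # flux[i] already carries the factor a*dt/dx = nu.
    flux = nu * (up + 4.0 * up_half + up_new) / 6.0
    # Conservative update of the cell averages.
    ubar_new = ubar - (np.roll(flux, -1) - flux)
    return ubar_new, up_new

# Usage: advect a smooth profile once around the unit periodic domain.
n, nu = 100, 0.8
x_c = (np.arange(n) + 0.5) / n               # cell centres
ubar = np.sin(2 * np.pi * x_c)               # pointwise init, fine for a demo
up = np.sin(2 * np.pi * np.arange(n) / n)    # interface point values
for _ in range(int(round(n / nu))):
    ubar, up = active_flux_step(ubar, up, nu)
```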

Related content

In this chapter, we identify fundamental geometric structures that underlie the problems of sampling, optimisation, inference and adaptive decision-making. Based on this identification, we derive algorithms that exploit these geometric structures to solve these problems efficiently. We show that a wide range of geometric theories emerge naturally in these fields, including measure-preserving processes, information divergences, Poisson geometry, and geometric integration. Specifically, we explain how (i) leveraging the symplectic geometry of Hamiltonian systems enables us to construct (accelerated) sampling and optimisation methods, (ii) the theory of Hilbertian subspaces and Stein operators provides a general methodology for obtaining robust estimators, and (iii) preserving the information geometry of decision-making yields adaptive agents that perform active inference. Throughout, we emphasise the rich connections between these fields; for example, inference draws on sampling and optimisation, and adaptive decision-making assesses decisions by inferring their counterfactual consequences. Our exposition provides a conceptual overview of the underlying ideas rather than a technical discussion, which can be found in the references herein.
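
As a concrete instance of point (i), the sketch below shows how a symplectic integrator (leapfrog) drives Hamiltonian Monte Carlo: because leapfrog nearly conserves the Hamiltonian $H(q,p) = U(q) + \tfrac{1}{2}\|p\|^2$, proposals are accepted with high probability. The quadratic toy potential and all parameter values are illustrative choices, not taken from the chapter.

```python
import numpy as np

# Hamiltonian Monte Carlo with a leapfrog (symplectic) integrator.
# Target: unnormalised density exp(-U(q)); here U is a toy quadratic.

def U(q):      return 0.5 * np.dot(q, q)     # potential = -log density
def grad_U(q): return q

def hmc_step(q, rng, eps=0.1, n_leap=20):
    p = rng.standard_normal(q.shape)         # resample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * grad_U(q_new)       # initial half kick
    for _ in range(n_leap - 1):
        q_new += eps * p_new                 # drift
        p_new -= eps * grad_U(q_new)         # full kick
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_U(q_new)       # final half kick
    # Metropolis correction for the residual energy error.
    dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
    return q_new if np.log(rng.uniform()) < -dH else q

rng = np.random.default_rng(0)
q = np.zeros(5)
samples = []
for _ in range(1000):
    q = hmc_step(q, rng)
    samples.append(q.copy())
```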

Many organisms, including various species of spiders and caterpillars, change their shape to switch gaits and adapt to different environments. Recent technological advances, ranging from stretchable circuits to highly deformable soft robots, have begun to make shape-changing robots a possibility. However, it is currently unclear how and when shape change should occur, and what capabilities could be gained, leading to a wide range of unsolved design and control problems. To begin addressing these questions, here we simulate, design, and build a soft robot that utilizes shape change to achieve locomotion over both flat and inclined surfaces. Modeling this robot in simulation, we explore its capabilities in two environments and demonstrate the existence of environment-specific shapes and gaits that successfully transfer to the physical hardware. We find that the shape-changing robot traverses these environments better than an equivalent but non-morphing robot, both in simulation and in reality.

Although generative facial priors and geometric priors have recently demonstrated high-quality results for blind face restoration, producing fine-grained facial details faithful to the inputs remains a challenging problem. Motivated by classical dictionary-based methods and the recent vector quantization (VQ) technique, we propose a VQ-based face restoration method, VQFR. VQFR takes advantage of high-quality low-level feature banks extracted from high-quality faces and can thus help recover realistic facial details. However, a naive application of the VQ codebook does not achieve good results with faithful details and identity preservation. We therefore introduce two special network designs. (1) We first investigate the compression patch size in the VQ codebook and find that a VQ codebook designed with a proper compression patch size is crucial for balancing quality and fidelity. (2) To further fuse low-level features from the inputs without "contaminating" the realistic details generated from the VQ codebook, we propose a parallel decoder consisting of a texture decoder and a main decoder. The two decoders then interact through a texture warping module with deformable convolution. Equipped with the VQ codebook as a facial detail dictionary and the parallel decoder design, VQFR largely enhances the restored quality of facial details while maintaining fidelity comparable to previous methods.
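
The dictionary-lookup idea at the heart of a VQ codebook fits in a few lines: each feature vector is replaced by its nearest codebook entry, so decoded features are drawn from a bank learned from high-quality faces. The sketch below shows this plain quantization step with made-up sizes; it is not the VQFR architecture itself.

```python
import numpy as np

# Nearest-neighbour vector quantization against a learned codebook.

def vq_lookup(z, codebook):
    """z: (n, d) feature vectors; codebook: (K, d) entries."""
    # Squared Euclidean distance between every feature and every code.
    d2 = (z * z).sum(1, keepdims=True) \
         - 2.0 * z @ codebook.T \
         + (codebook * codebook).sum(1)
    idx = d2.argmin(axis=1)              # nearest code per feature
    return codebook[idx], idx            # quantized features, indices

rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 64))   # K = 256 codes of dim 64
z = rng.standard_normal((1024, 64))         # features from an encoder
z_q, idx = vq_lookup(z, codebook)
```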

We study \textit{rescaled gradient dynamical systems} in a Hilbert space $\mathcal{H}$, where implicit discretization in a finite-dimensional Euclidean space leads to high-order methods for solving monotone equations (MEs). Our framework can be interpreted as a natural generalization of the celebrated dual extrapolation method~\citep{Nesterov-2007-Dual} from first order to high order via an appeal to the regularization toolbox of optimization theory~\citep{Nesterov-2021-Implementable, Nesterov-2021-Inexact}. More specifically, we establish the existence and uniqueness of a global solution and analyze the convergence properties of solution trajectories. We also present discrete-time counterparts of our high-order continuous-time methods, and we show that the $p^{th}$-order method achieves an ergodic rate of $O(k^{-(p+1)/2})$ in terms of a restricted merit function and a pointwise rate of $O(k^{-p/2})$ in terms of a residue function. Under regularity conditions, the restarted version of the $p^{th}$-order methods achieves local convergence of order $p \geq 2$. Notably, our methods are \textit{optimal}: they match the lower bound established for solving monotone equation problems under a standard linear span assumption~\citep{Lin-2022-Perseus}.
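
Written out, the stated rates take the following form, where $\bar{z}_k$ denotes the ergodic iterate; reading the "restricted merit function" as the standard restricted gap function over a ball $B(z_0, R)$, and the "residue function" as the operator norm $\|F(\cdot)\|$, is our interpretation rather than a quote from the paper:

\begin{align*}
  \text{ergodic:} \quad
    & \sup_{z \in B(z_0, R)} \langle F(z), \bar{z}_k - z \rangle
      = O\bigl(k^{-(p+1)/2}\bigr), \\
  \text{pointwise:} \quad
    & \|F(z_k)\| = O\bigl(k^{-p/2}\bigr).
\end{align*}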

It is a well-known fact that there is no complete and discrete invariant on the collection of all multiparameter persistence modules. Nonetheless, many invariants have been proposed in the literature to study multiparameter persistence modules, though each loses some amount of information. One such invariant is the generalized rank invariant. This invariant is known to be complete on the class of interval decomposable persistence modules, under mild assumptions on the indexing poset $P$. There is often a trade-off: the stronger an invariant is, the more expensive it is to compute in practice. The generalized rank invariant on its own is difficult to compute, whereas the standard rank invariant is readily computable through software implementations such as RIVET. We can interpolate between these two extremes by restricting the domain of the generalized rank invariant, and the resulting family of invariants exhibits the aforementioned trade-off. This work studies the tension between computational efficiency and retained strength when restricting the domain of the generalized rank invariant. We provide a characterization of when such restrictions are complete invariants in the setting where $P$ is finite, and furthermore show that such restricted generalized rank invariants are stable.
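
For reference, the two endpoints of this interpolation can be written as follows, in standard notation (not quoted from this work): the ordinary rank invariant records the ranks of the structure maps, while the generalized rank invariant records the rank of the canonical limit-to-colimit map over each interval $I$ in a chosen family:

\begin{align*}
  \text{standard rank invariant:} \quad
    & \mathrm{rk}(M)(p, q) = \operatorname{rank}\bigl(M(p \le q)\bigr),
      \qquad p \le q \in P, \\
  \text{generalized rank invariant:} \quad
    & \mathrm{rk}_I(M) = \operatorname{rank}\Bigl(
        \varprojlim_{I} M \longrightarrow \varinjlim_{I} M \Bigr).
\end{align*}

Restricting the family of intervals $I$ over which $\mathrm{rk}_I$ is recorded yields the interpolating invariants discussed above.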

Given a partial differential equation (PDE), goal-oriented error estimation allows us to understand how errors in a diagnostic quantity of interest (QoI), or goal, arise and accumulate in a numerical approximation, for example one obtained using the finite element method. By decomposing the error estimates into contributions from individual elements, it is possible to formulate adaptation methods which modify the mesh with the objective of minimising the resulting QoI error. However, the standard error estimate formulation involves the true adjoint solution, which is unknown in practice. It is therefore common to approximate it with an 'enriched' approximation (e.g. in a higher-order space or on a refined mesh). Doing so generally results in a significant increase in computational cost, which can be a bottleneck compromising the competitiveness of (goal-oriented) adaptive simulations. The central idea of this paper is to develop a 'data-driven' goal-oriented mesh adaptation approach by selectively replacing the expensive error estimation step with an appropriately configured and trained neural network. In doing so, the error estimator may be obtained without even constructing the enriched spaces. An element-by-element construction is employed, whereby local values of various parameters related to the mesh geometry and underlying problem physics are taken as inputs, and the corresponding contribution to the error estimator is taken as output. We demonstrate that this approach is able to obtain the same accuracy at a reduced computational cost for adaptive mesh test cases related to flow around tidal turbines, which interact via their downstream wakes, and where the overall power output of the farm is taken as the QoI. Moreover, we demonstrate that the element-by-element approach incurs reasonably low training costs.
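
A minimal sketch of the element-by-element surrogate follows: a small regression network maps per-element features to that element's contribution to the error estimator, replacing the enriched adjoint solve at prediction time. The feature set and the synthetic data below are hypothetical placeholders, not the paper's exact inputs.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_train = 5000
# Hypothetical per-element inputs: mesh geometry (size, aspect ratio,
# orientation, ...) and local physics (velocity components, depth, ...).
X = rng.standard_normal((n_train, 8))
# Targets: per-element error-estimator contributions, which would come
# from a conventional goal-oriented run done once, offline.
y = np.abs(X[:, 0] * X[:, 1]) + 0.1 * rng.standard_normal(n_train)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
net.fit(X, y)

# Online: predict indicators for every element of a new mesh at once,
# then hand them to the mesh adaptation step as usual.
X_new = rng.standard_normal((1200, 8))
indicators = net.predict(X_new)
```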

Inexpensive numerical methods are key to enabling simulations of systems with a large number of particles of different shapes in Stokes flow, and several approximate methods have been introduced for this purpose. We study the accuracy of the multiblob method for solving the Stokes mobility problem in free space, where the 3D geometry of a particle surface is discretised with spherical blobs and the pair-wise interaction between blobs is described by the RPY tensor. The paper aims to investigate and reduce the magnitude of the error in the solution velocities of the Stokes mobility problem using a combination of two techniques: an optimally chosen grid of blobs and a pair correction inspired by Stokesian dynamics. Optimisation strategies for determining a grid with a given number of blobs are presented, with the aim of matching the hydrodynamic response of a single accurately described ideal particle, alone in the fluid. Small errors in this self-interaction are essential, as they determine the basic error level in a system of well-separated particles. With a good match, reasonable accuracy can be obtained even with coarse blob resolutions of the particle surfaces. The error in the self-interaction is, however, sensitive to the exact choice of grid parameters, and simply hand-picking a suitable blob geometry can lead to errors several orders of magnitude larger. The pair correction is local and cheap to apply, and reduces the error for more closely interacting particles. Two types of geometries are considered: spheres and axisymmetric rods with smooth caps. The error in solutions to mobility problems is quantified for particles at varying inter-particle distances in systems containing a few particles, comparing to an accurate solution based on a second-kind boundary integral equation (BIE) formulation in which the quadrature error is controlled by employing quadrature by expansion (QBX).
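
For concreteness, the blob-blob coupling underlying the multiblob method can be sketched as follows: the free-space RPY tensor for equal blob radii $a$ in the non-overlapping case $r \ge 2a$, assembled into a dense mobility matrix relating forces to velocities. A regularized RPY expression exists for overlapping blobs and is omitted here; all parameter choices are illustrative.

```python
import numpy as np

# Rotne-Prager-Yamakawa (RPY) coupling between two blobs of radius a
# in a fluid of viscosity eta, for separation r >= 2a.

def rpy_pair(r_vec, a, eta):
    r = np.linalg.norm(r_vec)
    rr = np.outer(r_vec, r_vec) / r**2     # projector r_hat r_hat^T
    I = np.eye(3)
    c = 1.0 / (8.0 * np.pi * eta * r)
    return c * ((1.0 + 2.0 * a**2 / (3.0 * r**2)) * I
                + (1.0 - 2.0 * a**2 / r**2) * rr)

def rpy_self(a, eta):
    return np.eye(3) / (6.0 * np.pi * eta * a)   # Stokes drag of one blob

def mobility_matrix(pos, a, eta):
    """Dense mobility M for n blobs: velocities = M @ forces."""
    n = pos.shape[0]
    M = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(n):
            blk = rpy_self(a, eta) if i == j else \
                  rpy_pair(pos[i] - pos[j], a, eta)
            M[3*i:3*i+3, 3*j:3*j+3] = blk
    return M

# Usage: three well-separated blobs on a line.
pos = np.array([[0.0, 0, 0], [3.0, 0, 0], [6.0, 0, 0]])
M = mobility_matrix(pos, a=1.0, eta=1.0)
```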

Optimal execution is a sequential decision-making problem for cost saving in algorithmic trading. Studies have found that reinforcement learning (RL) can help decide the order-splitting sizes. However, one problem remains unsolved: how to place limit orders at appropriate limit prices? The key challenge lies in the "continuous-discrete duality" of the action space. On the one hand, a continuous action space based on percentage changes in prices is preferred for generalization. On the other hand, the trader eventually needs to choose limit prices discretely, due to the existence of the tick size, which requires specialization for every single stock with different characteristics (e.g., liquidity and price range). We therefore need continuous control for generalization and discrete control for specialization. To this end, we propose a hybrid RL method that combines the advantages of both. We first use a continuous-control agent to scope an action subset, then deploy a fine-grained agent to choose a specific limit price. Extensive experiments show that our method has higher sample efficiency and better training stability than existing RL algorithms and significantly outperforms previous learning-based methods for order execution.
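
The two-stage action selection can be sketched as below: a continuous agent proposes a relative price offset, which scopes a small window of candidate ticks, and a discrete agent then picks one tick from that window. Both "agents" are deliberately trivial stubs standing in for trained RL policies; the window size, offset bound, and observation encoding are our own illustrative choices.

```python
import numpy as np

def continuous_agent(obs):
    # Stub policy: a relative offset from the mid price in [-1%, +1%].
    return float(np.clip(obs.sum() * 1e-3, -0.01, 0.01))

def discrete_agent(obs, n_candidates):
    # Stub policy: score each candidate tick and take the argmax.
    scores = np.random.default_rng(0).standard_normal(n_candidates)
    return int(scores.argmax())

def choose_limit_price(obs, mid_price, tick_size, window=2):
    offset = continuous_agent(obs)              # continuous: generalizes
    center = round(mid_price * (1.0 + offset) / tick_size)
    candidates = [(center + k) * tick_size      # nearby discrete ticks
                  for k in range(-window, window + 1)]
    k = discrete_agent(obs, len(candidates))    # discrete: specializes
    return candidates[k]

price = choose_limit_price(np.ones(4), mid_price=10.00, tick_size=0.01)
```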

Learning representations of neural network weights given a model zoo is an emerging and challenging area, with many potential applications from model inspection to neural architecture search and knowledge distillation. Recently, an autoencoder trained on a model zoo was shown to learn a hyper-representation that captures intrinsic and extrinsic properties of the models in the zoo. In this work, we extend hyper-representations for generative use, sampling new model weights for pre-training. We propose layer-wise loss normalization, which we demonstrate is key to generating high-performing models, and a sampling method based on the empirical density of hyper-representations. The models generated using our methods are diverse and performant, and capable of outperforming conventional baselines for transfer learning. Our results indicate the potential of knowledge aggregation from model zoos to new models via hyper-representations, paving the way for novel research directions.
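
One plausible reading of layer-wise loss normalization is sketched below: the reconstruction error of each layer's weights is normalised by that layer's own scale, so that large or high-variance layers do not dominate the autoencoder objective. Dividing per-layer MSE by the layer's variance is our assumed normaliser; the paper's exact formulation may differ.

```python
import numpy as np

def layerwise_normalized_loss(weights, reconstructions, eps=1e-8):
    """weights, reconstructions: lists of per-layer weight arrays."""
    losses = []
    for w, w_hat in zip(weights, reconstructions):
        mse = np.mean((w - w_hat) ** 2)
        losses.append(mse / (np.var(w) + eps))   # scale-free per layer
    return float(np.mean(losses))

rng = np.random.default_rng(0)
# Two layers with deliberately unequal scales.
weights = [rng.standard_normal(s) * c
           for s, c in [((64, 32), 1.0), ((32, 10), 0.1)]]
recon = [w + 0.05 * rng.standard_normal(w.shape) for w in weights]
print(layerwise_normalized_loss(weights, recon))
```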

Implicit Processes (IPs) represent a flexible framework that can be used to describe a wide variety of models, from Bayesian neural networks, neural samplers and data generators to many others. IPs also allow for approximate inference in function space. This change of formulation sidesteps intrinsic degeneracy problems of parameter-space approximate inference, which stem from the high number of parameters and their strong dependencies in large models. To this end, previous works in the literature have attempted to employ IPs both to set up the prior and to approximate the resulting posterior. However, this has proven to be a challenging task. Existing methods that can tune the prior IP result in a Gaussian predictive distribution, which fails to capture important data patterns. By contrast, methods producing flexible predictive distributions by using another IP to approximate the posterior process cannot tune the prior IP to the observed data. We propose here the first method that can accomplish both goals. For this, we rely on an inducing-point representation of the prior IP, as often done in the context of sparse Gaussian processes. The result is a scalable method for approximate inference with IPs that can tune the prior IP parameters to the data and that provides accurate non-Gaussian predictive distributions.
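
The inducing-point idea borrowed from sparse Gaussian processes can be sketched with the classical subset-of-regressors predictive mean: $m$ inducing locations summarise the prior process, so prediction only requires solving an $m \times m$ system rather than an $n \times n$ one. This illustrates the scalability mechanism, not the paper's IP-specific construction; kernel, noise level, and data are toy choices.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel between row-wise point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def sor_predict(X, y, Z, Xs, noise=0.1):
    """Subset-of-regressors predictive mean via m inducing points Z."""
    Kuu = rbf(Z, Z) + 1e-8 * np.eye(len(Z))
    Kuf, Kus = rbf(Z, X), rbf(Z, Xs)
    A = Kuu + Kuf @ Kuf.T / noise**2          # only an m x m system
    mu_u = np.linalg.solve(A, Kuf @ y) / noise**2
    return Kus.T @ mu_u                       # predictive mean at Xs

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
Z = np.linspace(-3, 3, 15)[:, None]           # m = 15 inducing points
Xs = np.linspace(-3, 3, 50)[:, None]
mean = sor_predict(X, y, Z, Xs)
```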
