
The travel time functional measures the time taken for a particle trajectory to travel from a given initial position to the boundary of the domain. Such evaluation is paramount in the post-closure safety assessment of deep geological storage facilities for radioactive waste, where leaked, non-sorbing solutes can be transported to the surface of the site by the surrounding groundwater. The accurate simulation of this transport can be attained using standard dual-weighted-residual techniques to derive goal-oriented \textit{a posteriori} error bounds. This work addresses a key aspect in obtaining a suitable error estimate for the travel time functional: the evaluation of its G\^ateaux derivative. A mixed finite element method is implemented to approximate Darcy's equations, and numerical experiments are presented to test the performance of the proposed error estimator. In particular, we consider a test case inspired by the Sellafield site located in Cumbria, in the UK.
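
As a minimal sketch (not the paper's mixed finite element implementation), the travel time functional can be evaluated by tracing a particle through a given Darcy velocity field until it exits the domain. The velocity field `u` below is a hypothetical stand-in for a computed Darcy flux:

```python
import numpy as np

def travel_time(x0, velocity, domain=((0.0, 1.0), (0.0, 1.0)), dt=1e-3, t_max=100.0):
    """Approximate the travel time functional: integrate the particle
    trajectory x'(t) = u(x(t)) from x0 until it leaves the domain."""
    x = np.asarray(x0, dtype=float)
    t = 0.0
    (xlo, xhi), (ylo, yhi) = domain
    while t < t_max:
        x = x + dt * velocity(x)           # explicit Euler step along the flow
        t += dt
        if not (xlo <= x[0] <= xhi and ylo <= x[1] <= yhi):
            return t                       # particle has reached the boundary
    return np.inf                          # did not exit within t_max

# Hypothetical uniform rightward Darcy velocity: from x0 = (0.5, 0.5) the
# particle covers the remaining 0.5 at speed 2, so the exit time is ~0.25.
u = lambda x: np.array([2.0, 0.0])
```

In practice the velocity would come from the mixed finite element approximation of Darcy's equations, and the Gâteaux derivative of this functional is what drives the dual-weighted-residual estimate.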


The prediction of dynamical stability of power grids becomes more important and challenging with increasing shares of renewable energy sources due to their decentralized structure, reduced inertia and volatility. We investigate the feasibility of applying graph neural networks (GNN) to predict the dynamic stability of synchronisation in complex power grids, using single-node basin stability (SNBS) as a measure. To do so, we generate two synthetic datasets for grids with 20 and 100 nodes respectively and estimate SNBS using Monte-Carlo sampling. These datasets are used to train and evaluate the performance of eight different GNN models. All models use the full graph without simplifications as input and predict SNBS in a nodal-regression setup. We show that SNBS can be predicted in general and that performance varies significantly across GNN models. Furthermore, we observe interesting transfer capabilities of our approach: GNN models trained on smaller grids can be applied directly to larger grids without the need for retraining.
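
To illustrate the Monte-Carlo estimation of basin stability on a toy scale (a single machine with the swing equation, not the authors' networked datasets or GNN models), one can sample random perturbations and count the fraction that return to the synchronous fixed point. All parameter values below are illustrative assumptions:

```python
import numpy as np

def snbs_single_node(n_samples=100, t_end=50.0, dt=0.01, seed=0):
    """Monte-Carlo estimate of basin stability for the swing equation
    theta'' = P - alpha*theta' - K*sin(theta): perturb the fixed point at
    random and count the fraction of trajectories that resynchronize."""
    P, alpha, K = 0.5, 0.5, 1.0
    theta_star = np.arcsin(P / K)              # stable synchronous fixed point
    rng = np.random.default_rng(seed)
    stable = 0
    for _ in range(n_samples):
        # random perturbation of phase and frequency
        theta = theta_star + rng.uniform(-np.pi, np.pi)
        omega = rng.uniform(-5.0, 5.0)
        for _ in range(int(t_end / dt)):       # explicit Euler integration
            theta, omega = (theta + dt * omega,
                            omega + dt * (P - alpha * omega - K * np.sin(theta)))
        if abs(omega) < 1e-2:                  # frequency deviation has died out
            stable += 1
    return stable / n_samples
```

SNBS is then the per-node version of this estimate on a full grid, which is what the GNN models are trained to regress.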

Starting from first principles of wave propagation, we consider a multiple-input multiple-output (MIMO) representation of a communication system between two spatially-continuous volumes. This is the concept of holographic MIMO communications. The analysis takes into account the electromagnetic interference generated by external sources and the constraint on the physical radiated power. The electromagnetic MIMO model is particularized for a pair of parallel line segments in line-of-sight conditions. Inspired by orthogonal frequency-division multiplexing, we assume that the spatially-continuous transmit currents and received fields are represented using Fourier basis functions. In doing so, a wavenumber-division multiplexing (WDM) scheme is obtained, which is not optimal but can be efficiently implemented. The interplay among the different system parameters (e.g., transmission range, wavelength, and sizes of source and receiver) in terms of the number of communication modes and the level of interference among them is studied with conventional tools of linear systems theory. Due to the non-finite support (in the spatial domain) of the electromagnetic channel, WDM cannot provide non-interfering communication modes. The interference decreases as the receiver size grows, and goes to zero only asymptotically. Different digital processing architectures, operating in the wavenumber domain, are thus used to deal with the interference. The simplest implementation provides the same spectral efficiency as a singular-value decomposition architecture with water-filling when the receiver size is comparable to the transmission range. The developed framework is also used to represent a classical MIMO system and to make comparisons. It turns out that the latter achieves better performance only when a higher number of radio-frequency chains is used.
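
The singular-value decomposition benchmark mentioned above allocates power over parallel communication modes by water-filling. A generic sketch of that allocation (a standard bisection on the water level, not code from this paper; the gains are hypothetical mode gains) is:

```python
import numpy as np

def water_filling(gains, p_total, noise=1.0):
    """Allocate power over parallel channels with gains g_i to maximize
    sum log2(1 + g_i * p_i / noise) subject to sum p_i = p_total."""
    g = np.asarray(gains, dtype=float)
    # Bisect on the water level mu, where p_i = max(0, mu - noise/g_i).
    lo, hi = 0.0, p_total + noise / g.min()
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - noise / g)
        if p.sum() > p_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - noise / g)
```

Stronger modes receive more power, and weak modes below the water level are switched off entirely, which is why the number of useful communication modes matters in the comparison.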

While scalable cell-free massive MIMO (CF-mMIMO) shows advantages in static conditions, the impact of its changing serving access point (AP) set in a mobile network has not yet been addressed. In this paper, we first derive the CPU cluster and AP handover rates of scalable CF-mMIMO as exact numerical results and tight closed-form approximations. We then use our closed-form handover rate result to analyse the mobility-aware throughput. We compare the mobility-aware spectral efficiency (SE) of scalable CF-mMIMO against distributed MIMO with pure network- and UE-centric AP selection, for different AP densities and handover delays. Our results reveal an important trade-off for future dense networks with low control delay: under moderate to high mobility, scalable CF-mMIMO maintains its advantage for the 95th-percentile users, but at the cost of degraded median SE.

Sorting is the task of ordering $n$ elements using pairwise comparisons. It is well known that $m=\Theta(n\log n)$ comparisons are both necessary and sufficient when the outcomes of the comparisons are observed with no noise. In this paper, we study the sorting problem in the presence of noise. Unlike the common approach in the literature which aims to minimize the number of pairwise comparisons $m$ to achieve a given desired error probability, our goal is to characterize the maximal ratio $\frac{n\log n}{m}$ such that the ordering of the elements can be estimated with a vanishing error probability asymptotically. The maximal ratio is referred to as the noisy sorting capacity. In this work, we derive upper and lower bounds on the noisy sorting capacity. The algorithm that attains the lower bound is based on the well-known Burnashev--Zigangirov algorithm for coding over channels with feedback. By comparing with existing algorithms in the literature under the proposed framework, we show that our algorithm can achieve a strictly larger ratio asymptotically.
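
A much simpler baseline than the Burnashev--Zigangirov scheme attained in the paper is repetition with majority voting: ask each noisy comparison $k$ times and take the majority outcome. The sketch below (illustrative only, with a binary symmetric observation channel of crossover probability $p$) shows the setup the capacity question formalizes:

```python
import random
from functools import cmp_to_key

def noisy_compare(a, b, p, rng):
    """Pairwise comparison observed through a BSC with crossover probability p."""
    truth = -1 if a < b else 1
    return -truth if rng.random() < p else truth

def noisy_sort(items, p=0.1, k=25, rng=None):
    """Sort with repeated noisy comparisons: each query is asked k times
    (k odd, so votes never tie) and the majority outcome is used."""
    rng = rng or random.Random(0)
    def cmp(a, b):
        votes = sum(noisy_compare(a, b, p, rng) for _ in range(k))
        return -1 if votes < 0 else 1
    return sorted(items, key=cmp_to_key(cmp))
```

Repetition drives the per-query error down exponentially in $k$ but inflates $m$ by a factor of $k$, which is exactly the inefficiency that capacity-achieving schemes such as Burnashev--Zigangirov-based algorithms avoid.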

For many real-world applications, obtaining stable and robust statistical performance is more important than simply achieving state-of-the-art predictive test accuracy, and thus robustness of neural networks is an increasingly important topic. Relatedly, data augmentation schemes have been shown to improve robustness with respect to input perturbations and domain shifts. Motivated by this, we introduce NoisyMix, a training scheme that combines data augmentations with stability training and noise injections to improve both model robustness and in-domain accuracy. This combination promotes models that are consistently more robust and that provide well-calibrated estimates of class membership probabilities. We demonstrate the benefits of NoisyMix on a range of benchmark datasets, including ImageNet-C, ImageNet-R, and ImageNet-P. Moreover, we provide theory to understand implicit regularization and robustness of NoisyMix.
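
One core ingredient of such schemes, mixup-style convex combination of examples plus noise injection, can be sketched as a batch transform. This is a simplified illustration of the general idea, not the authors' full NoisyMix pipeline (which additionally uses stability training), and the parameter values are assumptions:

```python
import numpy as np

def noisy_mixup(x, y, alpha=0.2, noise_std=0.1, rng=None):
    """Mixup-style augmentation with additive noise injection: convexly
    combine random pairs of examples and their one-hot labels, then add
    Gaussian input noise."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)                   # mixing coefficient
    perm = rng.permutation(len(x))                 # random pairing of examples
    x_mix = lam * x + (1 - lam) * x[perm] + noise_std * rng.standard_normal(x.shape)
    y_mix = lam * y + (1 - lam) * y[perm]          # soft labels stay on the simplex
    return x_mix, y_mix
```

The soft labels produced this way are one mechanism behind the better-calibrated class-membership probabilities reported above.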

Much recent interest has focused on the design of optimization algorithms from the discretization of an associated optimization flow, i.e., a system of ordinary differential equations (ODEs) whose trajectories solve an associated optimization problem. Such a design approach poses an important problem: how to find a principled methodology to design and discretize appropriate ODEs. This paper aims to provide a solution to this problem through the use of contraction theory. We first introduce general mathematical results that explain how contraction theory guarantees the stability of the implicit and explicit Euler integration methods. Then, we propose a novel system of ODEs, namely the Accelerated-Contracting-Nesterov flow, and use contraction theory to establish that it is an optimization flow with exponential convergence rate, from which the linear convergence rate of its associated optimization algorithm is immediately established. Remarkably, a simple explicit Euler discretization of this flow corresponds to the Nesterov acceleration method. Finally, we show how our approach leads to performance guarantees in the design of optimization algorithms for time-varying optimization problems.
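
The flow-to-algorithm idea can be seen on the simplest case: for a strongly convex quadratic, the gradient flow $\dot{x} = -\nabla f(x)$ is contracting, and its explicit Euler discretization with a small enough step inherits the contraction (any two discrete trajectories approach each other geometrically). This is a toy illustration of the stability guarantee, not the paper's Accelerated-Contracting-Nesterov flow:

```python
import numpy as np

def euler_gradient_flow(x0, grad, h, steps):
    """Explicit Euler discretization of the gradient flow x' = -grad f(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - h * grad(x)
    return x

# f(x) = 0.5 x'Ax with A positive definite: eigenvalues of the Euler map
# are 1 - h*lambda, so any h < 2/lambda_max makes the map a contraction.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda x: A @ x
```

The contraction rate of the discrete map directly yields the linear convergence rate of the resulting algorithm, which is the mechanism the paper exploits for the Nesterov case.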

This paper studies the problem of computing a linear approximation of the quadratic Wasserstein distance $W_2$. In particular, we compute an approximation of the negative homogeneous weighted Sobolev norm whose connection to the Wasserstein distance follows from a classic linearization of a general Monge-Amp\`ere equation. Our contribution is threefold. First, we provide expository material on this classic linearization of the Wasserstein distance, including a quantitative error estimate. Second, we reduce the computational problem to solving an elliptic boundary value problem involving the Witten Laplacian, which is a Schr\"odinger operator of the form $H = -\Delta + V$, and describe an associated embedding. Third, for the case of probability distributions on the unit square $[0,1]^2$ represented by $n \times n$ arrays, we present a fast code demonstrating our approach. Several numerical examples are presented.
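
In the special case of a uniform reference density, the weighted Sobolev norm reduces to the homogeneous $\dot{H}^{-1}$ norm, which on a periodic grid is a single FFT away. The sketch below is this simplified periodic case only; the paper's actual setting ($[0,1]^2$ with the Witten Laplacian and boundary conditions) requires the elliptic solve described above:

```python
import numpy as np

def h_minus_one_norm(f, length=1.0):
    """Homogeneous H^{-1} norm of a mean-zero density difference f on a
    periodic n x n grid: ||f||^2 = sum_{k != 0} |f_hat(k)|^2 / |xi_k|^2,
    i.e. the classical linearization of W2 about the uniform density."""
    n = f.shape[0]
    fhat = np.fft.fft2(f) / n**2                 # Fourier series coefficients
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                               # avoid 0/0; f_hat(0) = 0 anyway
    return float(np.sqrt(length**2 * np.sum(np.abs(fhat)**2 / k2)))
```

For $f(x,y) = \sin(2\pi x)$ the only active modes are $k = (\pm 1, 0)$, giving the closed-form value $1/(2\sqrt{2}\,\pi)$, which makes a convenient correctness check.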

We solve the output-feedback stabilization problem for a tank with a liquid modeled by the viscous Saint-Venant PDE system. The control input is the acceleration of the tank, and a Control Lyapunov Functional methodology is used. The measurements are the tank position and the liquid level at the tank walls. The control scheme is a combination of a state feedback law with functional observers for the tank velocity and the liquid momentum. Four different types of output-feedback stabilizers are proposed. A full-order observer and a reduced-order observer are used to estimate the tank velocity, while the unmeasured liquid momentum is either estimated by an appropriate scalar filter or is ignored. The reduced-order observer differs from the full-order observer in that it omits the estimation of the measured tank position. Exponential convergence of the closed-loop system to the desired equilibrium point is achieved in each case. An algorithm is provided that guarantees that a robotic arm can move a glass of water to a pre-specified position no matter how full the glass is, without spilling water out of the glass, without residual endpoint sloshing, and without measuring the water momentum and the glass velocity. Finally, the efficiency of the proposed output-feedback laws is validated by numerical examples obtained using a simple finite-difference numerical scheme. The properties of the proposed explicit finite-difference scheme are determined.
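
To give a flavor of the kind of finite-difference simulation involved, a toy staggered-grid scheme for the linearized, inviscid 1-D shallow-water equations in a closed tank (a drastic simplification of the viscous Saint-Venant system, with illustrative parameters) looks like:

```python
import numpy as np

def slosh(n=100, steps=2000, dt=None, L=1.0, H=0.1, g=9.81):
    """Staggered-grid finite-difference scheme for the linearized 1-D
    shallow-water equations h_t = -H u_x, u_t = -g h_x with closed walls
    (u = 0 at both ends). Returns initial mass, final mass, final surface."""
    dx = L / n
    dt = dt or 0.5 * dx / np.sqrt(g * H)           # CFL-respecting time step
    x = (np.arange(n) + 0.5) * dx                  # cell centers carry h
    h = 0.01 * np.cos(np.pi * x / L)               # first sloshing mode
    u = np.zeros(n + 1)                            # cell faces carry u; walls at ends
    mass0 = h.sum() * dx
    for _ in range(steps):
        u[1:-1] -= dt * g * (h[1:] - h[:-1]) / dx  # momentum update, interior faces
        h -= dt * H * (u[1:] - u[:-1]) / dx        # conservative mass update
    return mass0, h.sum() * dx, h
```

Because the mass update is in conservative (flux-difference) form and the wall velocities are pinned to zero, the total liquid mass is conserved to machine precision, one of the structural properties one checks when "the properties of the proposed explicit finite-difference scheme are determined."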

Processing-in-memory (PIM) architectures have demonstrated great potential in accelerating numerous deep learning tasks. In particular, resistive random-access memory (RRAM) devices provide a promising hardware substrate for building PIM accelerators due to their ability to realize efficient in-situ vector-matrix multiplications (VMMs). However, existing PIM accelerators suffer from frequent and energy-intensive analog-to-digital (A/D) conversions, severely limiting their performance. This paper presents a new PIM architecture that efficiently accelerates deep learning tasks by minimizing the required A/D conversions with analog accumulation and neural-approximated peripheral circuits. We first characterize the different dataflows employed by existing PIM accelerators, based on which a new dataflow is proposed that remarkably reduces the required A/D conversions for VMMs by extending shift-and-add (S+A) operations into the analog domain before the final quantizations. We then leverage a neural approximation method to design both analog accumulation circuits (S+A) and quantization circuits (ADCs) with RRAM crossbar arrays in a highly efficient manner. Finally, we apply them to build an RRAM-based PIM accelerator (i.e., \textbf{Neural-PIM}) upon the proposed analog dataflow and evaluate its system-level performance. Evaluations on different benchmarks demonstrate that Neural-PIM improves energy efficiency by 5.36x (1.73x) and speeds up throughput by 3.43x (1.59x) without losing accuracy, compared to state-of-the-art RRAM-based PIM accelerators, i.e., ISAAC (CASCADE).
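
The bit-serial shift-and-add dataflow underlying such accelerators can be sketched numerically: the input vector is fed one bit-plane at a time, each bit-plane produces a partial VMM (the crossbar's analog job), and the partials are combined with shifts. This is a functional model of the arithmetic only, not a circuit or accuracy model of Neural-PIM:

```python
import numpy as np

def bit_serial_vmm(x, W, n_bits=8):
    """Bit-serial vector-matrix multiply: feed the input one bit-plane at a
    time (as an RRAM crossbar would), compute a per-bit partial VMM, and
    combine the partials with shift-and-add. Accumulating shifted partials
    before a single final quantization mirrors the analog S+A dataflow that
    cuts the number of A/D conversions."""
    x = np.asarray(x, dtype=np.int64)              # unsigned n_bits-bit inputs
    acc = np.zeros(W.shape[1], dtype=np.int64)
    for b in range(n_bits):
        bit_plane = (x >> b) & 1                   # one bit of every input element
        acc += (bit_plane @ W) << b                # partial sum, shifted into place
    return acc                                     # quantize once, at the end

# Hypothetical 8-bit inputs and small signed weights for illustration.
rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=16)
W = rng.integers(-8, 8, size=(16, 4))
```

Since $\sum_b 2^b\,\mathrm{bit}_b(x) = x$ and the VMM is linear, the shifted accumulation reproduces the exact product; the architectural question is only where the quantizations happen.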

The interpretation of Deep Neural Network (DNN) training as an optimal control problem with nonlinear dynamical systems has received considerable attention recently, yet the algorithmic development remains relatively limited. In this work, we make an attempt along this line by reformulating the training procedure from the trajectory optimization perspective. We first show that most widely-used algorithms for training DNNs can be linked to Differential Dynamic Programming (DDP), a celebrated second-order trajectory optimization algorithm rooted in approximate dynamic programming. In this vein, we propose a new variant of DDP that accepts batch optimization for training feedforward networks, while integrating naturally with recent progress in curvature approximation. The resulting algorithm features layer-wise feedback policies which improve convergence rate and reduce sensitivity to hyper-parameters compared with existing methods. We show that the algorithm is competitive against state-of-the-art first- and second-order methods. Our work opens up new avenues for principled algorithmic design built upon optimal control theory.
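
The "layer-wise feedback policies" have a clean special case: on a linear-quadratic problem, the DDP backward pass reduces exactly to the discrete Riccati recursion, producing time-wise feedback gains. The sketch below shows that special case only (it is not the paper's DNN training variant):

```python
import numpy as np

def lqr_backward_pass(A, B, Q, R, Qf, T):
    """DDP backward pass specialized to a linear-quadratic problem, where
    it reduces to the Riccati recursion and yields feedback gains K_t."""
    P = Qf
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain
        P = Q + A.T @ P @ (A - B @ K)                        # value curvature
        gains.append(K)
    return gains[::-1]                                       # time-ordered

def rollout_cost(A, B, Q, R, Qf, x0, controls_or_gains, feedback):
    """Simulate the dynamics and accumulate quadratic cost; controls are
    either open-loop inputs or feedback gains u_t = -K_t x_t."""
    x, cost = np.asarray(x0, dtype=float), 0.0
    for item in controls_or_gains:
        u = -item @ x if feedback else item
        cost += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u
    return cost + x @ Qf @ x
```

In DNN training the analogue replaces the linear dynamics with layer maps and the Riccati quantities with curvature approximations, but the backward-then-forward structure with per-stage feedback is the same.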
