
In this paper, we propose an efficient quadratic interpolation formula utilizing solution gradients computed and stored at nodes and demonstrate its application to a third-order cell-centered finite-volume discretization on tetrahedral grids. The proposed quadratic formula is constructed from an efficient formula for computing a projected derivative. It is efficient in that it completely eliminates the need to compute and store second derivatives of solution variables or any other quantities, which are typically required when upgrading a second-order cell-centered unstructured-grid finite-volume discretization to third-order accuracy. Moreover, the high-order flux quadrature formula required for third-order accuracy can also be simplified by utilizing the efficient projected-derivative formula, resulting in a numerical flux at a face centroid plus a curvature correction that does not involve second derivatives of the flux. Similarly, a source term can be integrated over a cell to high order in the form of a source term evaluated at the cell centroid plus a curvature correction, again without requiring second derivatives of the source term. The discretization is defined as an approximation to an integral form of a conservation law, but the numerical solution is defined as a point value at a cell center, leading to another advantage: there is no need to compute and store geometric moments for a quadratic polynomial to preserve a cell average. Third-order accuracy and improved second-order accuracy are demonstrated and investigated for simple but illustrative test cases in three dimensions.
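
As an illustration of the underlying idea (a generic sketch, not necessarily the exact construction used in the paper), suppose $u$ varies quadratically along the segment from a node $A$ to a node $B$. The projected second derivative along the segment can then be recovered from the two nodal gradients alone, and a quadratic interpolation at the parameter $t \in [0,1]$ can be written as
$$ u\big(\mathbf{x}_A + t(\mathbf{x}_B - \mathbf{x}_A)\big) \approx u_A + t\,\nabla u_A \cdot (\mathbf{x}_B - \mathbf{x}_A) + \frac{t^2}{2}\,(\nabla u_B - \nabla u_A) \cdot (\mathbf{x}_B - \mathbf{x}_A), $$
which is exact for quadratic $u$ and requires no stored second derivatives.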

Related content

The second derivative is the derivative of the derivative of the original function, i.e., the result of differentiating the original function twice. In general, the derivative y' = f'(x) of a function y = f(x) is itself a function of x, and the derivative of y' = f'(x) is called the second derivative of the function y = f(x).
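
For example, if $f(x) = x^3$, then the first derivative is $f'(x) = 3x^2$ and the second derivative is $f''(x) = (3x^2)' = 6x$.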

For ultra-reliable, low-latency communications (URLLC) applications such as mission-critical industrial control and extended reality (XR), it is important to ensure the communication quality of individual packets. Prior studies have considered Probabilistic Per-packet Real-time Communications (PPRC) guarantees for single-cell, single-channel networks, but they have not considered real-world complexities such as inter-cell interference in large-scale networks with multiple communication channels and heterogeneous real-time requirements. To fill the gap, we propose a real-time scheduling algorithm based on \emph{local-deadline-partition (LDP)}; the LDP algorithm ensures the PPRC guarantee for large-scale, multi-channel networks with heterogeneous real-time constraints. We also address the associated challenge of schedulability testing. In particular, we propose the concept of a \emph{feasible set}, identify a closed-form sufficient condition for the schedulability of PPRC traffic, and then propose an efficient distributed algorithm for the schedulability test. We numerically study the properties of the LDP algorithm and observe that it significantly improves the network capacity of URLLC, for instance, by a factor of 5-20 compared with a typical baseline. Furthermore, the PPRC traffic supportable by the LDP algorithm is significantly higher than that of state-of-the-art comparison schemes. This demonstrates the potential of fine-grained scheduling algorithms for URLLC wireless systems in the presence of interference.

We propose a novel collocated projection method for solving the incompressible Navier-Stokes equations with arbitrary boundaries. Our approach employs non-graded octree grids, where all variables are stored at the nodes. To discretize the viscosity and projection steps, we utilize supra-convergent finite difference approximations with sharp boundary treatments. We demonstrate the stability of our projection on uniform grids, identify a sufficient stability condition on adaptive grids, and validate these findings numerically. We further demonstrate the accuracy and capabilities of our solver with several canonical two- and three-dimensional simulations of incompressible fluid flows. Overall, our method is second-order accurate, allows for dynamic grid adaptivity with arbitrary geometries, and reduces the overhead in code development through data collocation.
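
For context (a standard pressure-projection splitting, stated here only as background and not the specific collocated octree discretization developed in the paper), one time step of a projection method advances a provisional velocity without the pressure gradient and then projects it onto the divergence-free space:
$$ \mathbf{u}^{*} = \mathbf{u}^{n} + \Delta t \left( -(\mathbf{u}^{n} \cdot \nabla)\mathbf{u}^{n} + \nu \nabla^{2} \mathbf{u}^{n} \right), \qquad \nabla^{2} p = \frac{\nabla \cdot \mathbf{u}^{*}}{\Delta t}, \qquad \mathbf{u}^{n+1} = \mathbf{u}^{*} - \Delta t\, \nabla p. $$
The viscosity and projection steps mentioned in the abstract correspond to the first and to the last two equations, respectively.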

The problem Power Dominating Set (PDS) is motivated by the placement of phasor measurement units to monitor electrical networks. It asks for a minimum set of vertices in a graph that observes all remaining vertices by exhaustively applying two observation rules. Our contribution is twofold. First, we determine the parameterized complexity of PDS by proving it is $W[P]$-complete when parameterized with respect to the solution size. We note that it was only known to be $W[2]$-hard before. Our second and main contribution is a new algorithm for PDS that efficiently solves practical instances. Our algorithm consists of two complementary parts. The first is a set of reduction rules for PDS that can also be used in conjunction with previously existing algorithms. The second is an algorithm for solving the remaining kernel based on the implicit hitting set approach. Our evaluation on a set of power grid instances from the literature shows that our solver outperforms previous state-of-the-art solvers for PDS by more than one order of magnitude on average. Furthermore, our algorithm can solve previously unsolved instances of continental scale within a few minutes.
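
To make the two observation rules concrete, the following is a minimal Python sketch of checking a candidate PMU placement, following the standard formulation of PDS in the literature (function and variable names are illustrative, not taken from the paper):

    def observed_set(graph, pmu_vertices):
        """Exhaustively apply the two PDS observation rules.

        graph: dict mapping each vertex to the set of its neighbors.
        pmu_vertices: vertices where a phasor measurement unit is placed.
        """
        # Rule 1 (domination): a PMU vertex observes itself and all its neighbors.
        observed = set(pmu_vertices)
        for v in pmu_vertices:
            observed |= graph[v]
        # Rule 2 (propagation): if an observed vertex has exactly one unobserved
        # neighbor, that neighbor becomes observed; repeat until a fixed point.
        changed = True
        while changed:
            changed = False
            for v in list(observed):
                unobserved = [w for w in graph[v] if w not in observed]
                if len(unobserved) == 1:
                    observed.add(unobserved[0])
                    changed = True
        return observed

    # pmu_vertices is a power dominating set iff
    # observed_set(graph, pmu_vertices) == set(graph)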

We present an efficient dimension-by-dimension finite-volume method that solves the adiabatic magnetohydrodynamics equations at high discretization order, using the constrained-transport approach on Cartesian grids. Results are presented up to tenth order of accuracy. This method requires only one reconstructed value per face for each computational cell. A passage through high-order point values leads to a modest growth of computational cost with increasing discretization order. At a given resolution, these high-order schemes present significantly less numerical dissipation than commonly employed lower-order approaches. Thus, results of comparable accuracy are achievable at a substantially coarser resolution, yielding overall performance gains. We also present a way to include physical dissipative terms (viscosity, magnetic diffusivity, and cooling functions) while respecting the finite-volume and constrained-transport frameworks.
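
As background on the constrained-transport ingredient (a standard property, stated here only for context): the induction equation $\partial_t \mathbf{B} + \nabla \times \mathbf{E} = 0$ is discretized so that face-averaged magnetic fields are updated from edge-averaged electric fields via a discrete Stokes theorem; the edge contributions cancel in the discrete divergence, so $\nabla_h \cdot \mathbf{B} = 0$ is maintained up to round-off throughout the evolution.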

In privacy under continual observation we study how to release differentially private estimates based on a dataset that evolves over time. The problem of releasing private prefix sums of $x_1,x_2,x_3,\dots \in\{0,1\}$ (where the value of each $x_i$ is to be private) is particularly well-studied, and a generalized form is used in state-of-the-art methods for private stochastic gradient descent (SGD). The seminal binary mechanism privately releases the first $t$ prefix sums with noise of variance polylogarithmic in $t$. Recently, Henzinger et al. and Denisov et al. showed that it is possible to improve on the binary mechanism in two ways: The variance of the noise can be reduced by a (large) constant factor, and also made more even across time steps. However, their algorithms for generating the noise distribution are not as efficient as one would like in terms of computation time and (in particular) space. We address the efficiency problem by presenting a simple alternative to the binary mechanism in which 1) generating the noise takes constant average time per value, 2) the variance is reduced by a factor about 4 compared to the binary mechanism, and 3) the noise distribution at each step is identical. Empirically, a simple Python implementation of our approach outperforms the running time of the approach of Henzinger et al., as well as an attempt to improve their algorithm using high-performance algorithms for multiplication with Toeplitz matrices.
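
For reference, a minimal Python sketch of the classical binary mechanism that the paper improves upon (not the proposed alternative); it assumes the stream length $T$ is known in advance and uses Laplace noise with the usual sensitivity-based scale:

    import numpy as np

    def binary_mechanism(stream, epsilon, T, rng=None):
        """Release epsilon-DP prefix sums of a 0/1 stream of known length T."""
        rng = rng or np.random.default_rng()
        levels = int(np.ceil(np.log2(T))) + 1   # number of dyadic levels
        scale = levels / epsilon                # each x_t enters at most `levels` node sums
        alpha = [0.0] * levels                  # exact dyadic partial sums
        noisy = [0.0] * levels                  # their noisy counterparts
        outputs = []
        for t, x in enumerate(stream, start=1):
            i = 0                               # index of the lowest set bit of t
            while not (t >> i) & 1:
                i += 1
            alpha[i] = sum(alpha[:i]) + x       # merge lower levels into level i
            for j in range(i):
                alpha[j] = noisy[j] = 0.0
            noisy[i] = alpha[i] + rng.laplace(scale=scale)
            # the prefix sum up to t is the sum of noisy sums at the set bits of t
            outputs.append(sum(noisy[j] for j in range(levels) if (t >> j) & 1))
        return outputs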

We present high-order, finite element-based Second Moment Methods (SMMs) for solving radiation transport problems in two spatial dimensions. We leverage the close connection between the Variable Eddington Factor (VEF) method and SMM to convert existing discretizations of the VEF moment system to discretizations of the SMM moment system. The moment discretizations are coupled to a high-order Discontinuous Galerkin discretization of the Discrete Ordinates transport equations. We show that the resulting methods achieve high-order accuracy on high-order (curved) meshes, preserve the thick diffusion limit, and are effective on a challenging multi-material problem both in outer fixed-point iterations and in inner preconditioned iterative solver iterations for the discrete moment systems. We also present parallel scaling results and provide direct comparisons to the VEF algorithms the SMM algorithms were derived from.

Position-based dynamics (PBD) is a powerful technique for simulating a variety of materials. Its primary strength is its robustness when run with a limited computational budget. We develop a novel approach to address problems with PBD for quasistatic hyperelastic materials. Even though PBD is based on the projection of static constraints, it is best suited for dynamic simulations. This is particularly relevant since the efficient creation of large data sets of plausible, but not necessarily accurate, elastic equilibria is of increasing importance with the emergence of quasistatic neural networks. Furthermore, PBD projects one constraint at a time. We show that ignoring the effects of neighboring constraints limits its convergence and stability properties. Recent works have shown that PBD can be related to the Gauss-Seidel approximation of a Lagrange multiplier formulation of backward Euler time stepping, where each constraint is solved/projected independently of the others in an iterative fashion. We show that a position-based, rather than constraint-based, nonlinear Gauss-Seidel approach solves these problems. Our approach retains the essential PBD feature of stable behavior under constrained computational budgets, but also allows for convergent behavior with expanded budgets. We demonstrate the efficacy of our method on a variety of representative hyperelastic problems and show that both successive over-relaxation (SOR) and Chebyshev acceleration can be easily applied.
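
For orientation, a minimal Python sketch of the classical constraint-based PBD iteration with distance constraints, i.e., the Gauss-Seidel, one-constraint-at-a-time baseline discussed above, not the position-based Gauss-Seidel method proposed in the paper; all names are illustrative:

    import numpy as np

    def project_distance_constraints(p, inv_mass, edges, rest_lengths, iterations=10):
        """Gauss-Seidel projection of distance constraints, one at a time.

        p: (n, 3) array of particle positions, modified in place.
        inv_mass: (n,) inverse masses (0 for pinned particles).
        edges: list of (i, j) index pairs; rest_lengths: target edge lengths.
        """
        for _ in range(iterations):
            for (i, j), d0 in zip(edges, rest_lengths):
                w = inv_mass[i] + inv_mass[j]
                if w == 0.0:
                    continue
                delta = p[i] - p[j]
                dist = np.linalg.norm(delta)
                if dist < 1e-12:
                    continue
                # Constraint C(p) = |p_i - p_j| - d0; move both particles along
                # the constraint gradient, weighted by their inverse masses.
                corr = (dist - d0) / (w * dist) * delta
                p[i] -= inv_mass[i] * corr
                p[j] += inv_mass[j] * corr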

Motivated by the virtual machine scheduling problem in today's computing systems, we propose a new setting of stochastic bin-packing in service systems that allows the item sizes (job resource requirements) to vary over time. In this setting, items (jobs) arrive to the system, vary their sizes, and depart from the system following certain Markovian assumptions. We focus on minimizing the expected number of non-empty bins (active servers) in steady state, where the expectation in steady state is equal to the long-run time-average with probability $1$ under the Markovian assumptions. Our main result is a policy that achieves an optimality gap of $O(\sqrt{r})$ in the objective, where the optimal objective value is $\Theta(r)$ and $r$ is a scaling factor such that the item arrival intensity scales linearly with it. When specialized to the setting where the item sizes do not vary over time, our result improves upon the state-of-the-art $o(r)$ optimality gap. Our technical approach highlights a novel policy conversion framework that reduces the policy design problem to that in a single-bin (single-server) system.

Neural network compression has been an increasingly important subject, due to its practical implications in terms of reducing the computational requirements and its theoretical implications, as there is an explicit connection between compressibility and the generalization error. Recent studies have shown that the choice of the hyperparameters of stochastic gradient descent (SGD) can have an effect on the compressibility of the learned parameter vector. Even though these results have shed some light on the role of the training dynamics over compressibility, they relied on unverifiable assumptions and the resulting theory does not provide a practical guideline due to its implicitness. In this study, we propose a simple modification for SGD, such that the outputs of the algorithm will be provably compressible without making any nontrivial assumptions. We consider a one-hidden-layer neural network trained with SGD and we inject additive heavy-tailed noise into the iterates at each iteration. We then show that, for any compression rate, there exists a level of overparametrization (i.e., the number of hidden units), such that the output of the algorithm will be compressible with high probability. To achieve this result, we make two main technical contributions: (i) we build on a recent study on stochastic analysis and prove a 'propagation of chaos' result with improved rates for a class of heavy-tailed stochastic differential equations, and (ii) we derive strong-error estimates for their Euler discretization. We finally illustrate our approach on experiments, where the results suggest that the proposed approach achieves compressibility with only a slight compromise in training and test error.
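
A minimal sketch of the kind of modification described above, assuming plain NumPy and a user-supplied gradient function; the specific heavy-tailed distribution and its scaling are illustrative choices, not the paper's:

    import numpy as np

    def noisy_sgd_step(w, grad_fn, batch, lr=0.01, noise_scale=1e-3, rng=None):
        """One SGD step with additive heavy-tailed noise injected into the iterate.

        The noise here is standard Cauchy (an alpha-stable law with alpha = 1),
        used only as one example of a heavy-tailed distribution.
        """
        rng = rng or np.random.default_rng()
        g = grad_fn(w, batch)                      # stochastic gradient on the mini-batch
        noise = rng.standard_cauchy(size=w.shape)  # heavy-tailed perturbation
        return w - lr * g + noise_scale * noise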

Graph convolutional networks (GCNs) are increasingly popular in many applications, yet remain notoriously hard to train over large graph datasets. They need to compute node representations recursively from their neighbors. Current GCN training algorithms suffer from either high computational costs that grow exponentially with the number of layers, or high memory usage for loading the entire graph and node embeddings. In this paper, we propose a novel, efficient layer-wise training framework for GCN (L-GCN) that disentangles feature aggregation and feature transformation during training, hence greatly reducing time and memory complexities. We present a theoretical analysis of L-GCN under the graph isomorphism framework, showing that, under mild conditions, L-GCN leads to GCNs as powerful as those produced by the more costly conventional training algorithm. We further propose L^2-GCN, which learns a controller for each layer that can automatically adjust the training epochs per layer in L-GCN. Experiments show that L-GCN is faster than state-of-the-art methods by at least an order of magnitude, with consistent memory usage that does not depend on dataset size, while maintaining comparable prediction performance. With the learned controller, L^2-GCN can further cut the training time in half. Our codes are available at //github.com/Shen-Lab/L2-GCN.
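
A rough sketch of the decoupling idea, under the assumption that each layer is trained on fixed, pre-aggregated features; `train_layer` is a hypothetical placeholder for the per-layer optimization (L-GCN's actual training procedure and L^2-GCN's learned controller are not reproduced here):

    import numpy as np

    def layerwise_training_sketch(A_hat, X, layer_dims, epochs_per_layer, train_layer):
        """Layer-wise training with aggregation and transformation decoupled.

        A_hat: normalized adjacency matrix (n x n); X: node features (n x d).
        train_layer: hypothetical routine that fits a weight matrix on the fixed,
        pre-aggregated features (e.g., with a temporary classifier head).
        """
        H = X
        weights = []
        for out_dim, epochs in zip(layer_dims, epochs_per_layer):
            Z = A_hat @ H                                        # feature aggregation, once per layer
            W = train_layer(Z, out_dim=out_dim, epochs=epochs)   # feature transformation
            H = np.maximum(Z @ W, 0.0)                           # ReLU output feeds the next layer
            weights.append(W)
        return H, weights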
