
Implicitly filtered large-eddy simulation (LES) is by nature numerically under-resolved. With the sole exception of Fourier-spectral methods, discrete numerical derivative operators cannot accurately represent the dynamics of all of the represented scales. Since the resolution scale in an LES usually lies in the inertial range, these poorly represented scales are dynamically significant, and errors in their dynamics can affect all resolved scales. This Letter focuses on characterizing the effects of numerical dispersion error by studying the energy cascade in LES of convecting homogeneous isotropic turbulence. Numerical energy and transfer spectra reveal that energy is not transferred at the appropriate rate to wavemodes where significant dispersion error is present. This leads to a deficiency of energy in the highly dispersive modes and an accompanying pile-up of energy in the well-resolved modes, since dissipation by the subgrid model is diminished. An asymptotic analysis indicates that dispersion error causes a phase decoherence between triad-interacting wavemodes, leading to a reduction in the mean energy transfer rate for these scales. These findings are relevant to a wide range of LES, since turbulence commonly convects through the grid in practical simulations. Further, these results indicate that the resolved scales should be defined so as to exclude the dispersive modes.
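As a concrete illustration of the dispersion error discussed above, the sketch below (not from the Letter; the second-order central-difference scheme and unit grid spacing are illustrative assumptions) compares the modified wavenumber of a finite-difference derivative against the exact wavenumber. The gap, i.e., the phase-speed error, grows toward the grid cutoff, which is where the energy-transfer deficit described above sets in.

```python
import numpy as np

# Modified wavenumber of the 2nd-order central difference (u_{j+1}-u_{j-1})/(2*dx):
# k_mod = sin(k*dx)/dx. The gap between k_mod and the exact k is the dispersion
# error, which grows toward the grid cutoff k*dx = pi. This is a standard
# illustration, not the specific schemes analyzed in the Letter.
dx = 1.0
k = np.linspace(0.01, np.pi, 100)          # resolved wavenumbers, k*dx in (0, pi]
k_mod = np.sin(k * dx) / dx                # modified wavenumber of the scheme
rel_err = np.abs(k_mod - k) / k            # relative phase-speed (dispersion) error

print(f"error at k*dx=pi/8 : {rel_err[np.argmin(np.abs(k - np.pi/8))]:.3f}")
print(f"error at k*dx=pi/2 : {rel_err[np.argmin(np.abs(k - np.pi/2))]:.3f}")
```

Under this scheme the highest represented modes travel at badly wrong phase speeds, which is the mechanism behind the phase decoherence of triad interactions noted in the abstract.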

Related content

Deploying federated learning (FL) over wireless networks with resource-constrained devices requires balancing accuracy, energy efficiency, and precision. Prior art on FL often requires devices to train deep neural networks (DNNs) using a 32-bit precision level for data representation to improve accuracy. However, such algorithms are impractical for resource-constrained devices, since DNNs could require the execution of millions of operations. Thus, training DNNs with a high precision level incurs a high energy cost for FL. In this paper, a quantized FL framework that represents data with a finite level of precision in both local training and uplink transmission is proposed. Here, the finite level of precision is captured through the use of quantized neural networks (QNNs) that quantize weights and activations in a fixed-precision format. In the considered FL model, each device trains its QNN and transmits a quantized training result to the base station. Energy models for the local training and for the transmission with quantization are rigorously derived. An energy minimization problem is formulated with respect to the level of precision while ensuring convergence. To solve the problem, we first analytically derive the FL convergence rate and then use a line search method. Simulation results show that our FL framework can reduce energy consumption by up to 53% compared to a standard FL model. The results also shed light on the tradeoff between precision, energy, and accuracy in FL over wireless networks.
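A minimal sketch of the fixed-precision idea behind QNNs, assuming a uniform quantizer on [-1, 1] (the function name, range, rounding rule, and bit widths are illustrative, not the paper's exact scheme): lower precision levels trade representation accuracy for energy.

```python
import numpy as np

# Uniform fixed-precision quantizer on [-1, 1] with 2^m levels, as a stand-in
# for QNN weight/activation quantization. The range and rounding rule are
# illustrative assumptions, not the paper's exact scheme.
def quantize(w, m_bits):
    levels = 2 ** m_bits
    step = 2.0 / (levels - 1)                      # uniform step on [-1, 1]
    return np.clip(np.round(w / step) * step, -1.0, 1.0)

rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, size=1000)
for m in (2, 4, 8):
    mse = float(np.mean((w - quantize(w, m)) ** 2))
    print(f"precision m={m} bits -> mean squared quantization error {mse:.2e}")
```

The precision level `m_bits` plays the role of the optimization variable in the paper's energy minimization: raising it shrinks the quantization error but increases the per-operation and per-transmission energy cost.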

We establish theoretical results about the low frequency contamination (i.e., long memory effects) induced by general nonstationarity for estimates such as the sample autocovariance and the periodogram, and deduce consequences for heteroskedasticity and autocorrelation robust (HAR) inference. We present explicit expressions for the asymptotic bias of these estimates. We distinguish cases where this contamination only occurs as a small-sample problem from cases where the contamination continues to hold asymptotically. We show theoretically that nonparametric smoothing over time is robust to low frequency contamination. Our results provide new insights on the debate between consistent and inconsistent long-run variance (LRV) estimation. Existing LRV estimators tend to be inflated when the data are nonstationary. This results in HAR tests that can be undersized and exhibit dramatic power losses. Our theory indicates that long-bandwidth or fixed-b HAR tests suffer more from low frequency contamination than HAR tests based on heteroskedasticity and autocorrelation consistent (HAC) estimators, whereas recently introduced double kernel HAC (DK-HAC) estimators do not suffer from this problem. Finally, we present second-order Edgeworth expansions under nonstationarity for the distribution of HAC and DK-HAC estimators and for the corresponding t-test in the linear regression model.
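To make the contamination mechanism concrete, here is a hedged sketch (the shift magnitude, sample size, and Bartlett bandwidth are illustrative choices, not the paper's setup) in which a one-time mean shift inflates a Newey-West-style LRV estimate computed on otherwise white noise.

```python
import numpy as np

# A deterministic mean shift (nonstationarity) inflates the sample
# autocovariances, and hence a Bartlett-kernel (Newey-West) LRV estimate,
# even though the stochastic component is white noise with LRV = 1.
def lrv_bartlett(x, bandwidth):
    x = x - x.mean()
    n = len(x)
    lrv = np.dot(x, x) / n                           # lag-0 sample autocovariance
    for j in range(1, bandwidth + 1):
        gj = np.dot(x[j:], x[:-j]) / n               # sample autocovariance at lag j
        lrv += 2.0 * (1 - j / (bandwidth + 1)) * gj  # Bartlett weights
    return lrv

rng = np.random.default_rng(1)
n = 2000
e = rng.standard_normal(n)                           # stationary white noise
shift = np.where(np.arange(n) < n // 2, -1.0, 1.0)   # one-time mean shift
print("LRV, stationary series:", round(lrv_bartlett(e, 20), 2))
print("LRV, with mean shift  :", round(lrv_bartlett(e + shift, 20), 2))
```

The shifted series' demeaned autocovariances stay near one at all short lags, so every Bartlett-weighted term adds to the estimate; this is the low frequency contamination that larger bandwidths (and fixed-b procedures) absorb more of.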

Kinetic theory provides a good basis for developing numerical methods for multiscale gas flows covering a wide range of flow regimes. A particular challenge for kinetic schemes is whether they can capture the correct hydrodynamic behaviors of the system in the continuum regime (i.e., as the Knudsen number $\epsilon\ll 1$) without enforcing kinetic-scale resolution. At the current stage, the main approach to analyzing such a property is the asymptotic preserving (AP) concept, which aims to show whether the kinetic scheme reduces to a solver for the hydrodynamic equations as $\epsilon \to 0$. However, the detailed asymptotic properties of a kinetic scheme are indistinguishable under the AP framework when $\epsilon$ is small but finite. In order to distinguish different characteristics of kinetic schemes, in this paper we introduce the concept of unified preserving (UP), which aims to assess the asymptotic orders (in terms of $\epsilon$) of a kinetic scheme by employing the modified equation approach and Chapman-Enskog analysis. It is shown that the UP properties of a kinetic scheme generally depend on the spatial/temporal accuracy and closely on the inter-connections among the three scales (kinetic, numerical, and hydrodynamic). Specifically, the numerical resolution and the specific discretization determine the numerical flow behaviors of the scheme in different regimes, especially in the near-continuum limit. As two examples, the UP analysis is applied to the discrete unified gas-kinetic scheme (DUGKS) and a second-order implicit-explicit Runge-Kutta (IMEX-RK) scheme to evaluate their asymptotic behaviors in the continuum limit.

We introduce a new regularization model for incompressible fluid flow, which is a regularization of the EMAC formulation of the Navier-Stokes equations (NSE) that we call EMAC-Reg. The EMAC (energy, momentum, and angular momentum conserving) formulation has proved useful because it conserves energy, momentum, and angular momentum even when the divergence constraint is only weakly enforced. However, it is still an NSE formulation and so cannot resolve higher Reynolds number flows without very fine meshes. By carefully introducing regularization into the EMAC formulation, we create a model more suitable for coarser-mesh computations that still conserves the same quantities as EMAC, i.e., energy, momentum, and angular momentum. We show that EMAC-Reg, when semi-discretized with a finite element spatial discretization, is well-posed and optimally accurate. Numerical results are provided that show EMAC-Reg is a robust coarse-mesh model.
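For reference, the EMAC form being regularized is the standard one from the literature; the abstract does not spell out EMAC-Reg's regularization, so only the base formulation is reproduced here.

```latex
% The EMAC form of the NSE: D(u) is the symmetric gradient and
% P = p - |u|^2/2 a modified pressure.
u_t + 2D(u)u + (\nabla\cdot u)\,u + \nabla P - \nu\Delta u = f, \qquad
\nabla\cdot u = 0, \qquad
D(u) = \tfrac{1}{2}\bigl(\nabla u + (\nabla u)^{T}\bigr), \qquad
P = p - \tfrac{1}{2}|u|^{2}.
```

The conservation properties cited above come from the identity $2D(u)u + (\nabla\cdot u)u = (u\cdot\nabla)u + \nabla\tfrac{1}{2}|u|^2$ when $\nabla\cdot u = 0$, which holds only weakly in discretizations; EMAC keeps the conserved quantities regardless.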

We investigate both theoretically and numerically the consistency between the nonlinear discretization in full order models (FOMs) and reduced order models (ROMs) for incompressible flows. To this end, we consider two cases: (i) FOM-ROM consistency, i.e., when we use the same nonlinearity discretization in the FOM and ROM; and (ii) FOM-ROM inconsistency, i.e., when we use different nonlinearity discretizations in the FOM and ROM. Analytically, we prove that while the FOM-ROM consistency yields optimal error bounds, FOM-ROM inconsistency yields additional terms dependent on the FOM divergence error, which prevent the ROM from recovering the FOM as the number of modes increases. Computationally, we consider channel flow around a cylinder and Kelvin-Helmholtz instability, and show that FOM-ROM consistency yields significantly more accurate results than the FOM-ROM inconsistency.
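The consistency notion can be sketched on a toy quadratic system (the matrices, nonlinearities, and forward-Euler integrator below are illustrative assumptions, not the paper's Navier-Stokes setting): with all modes retained, a ROM that Galerkin-projects the FOM's own nonlinearity recovers the FOM, while swapping in a different nonlinearity discretization does not.

```python
import numpy as np

# Toy illustration of FOM-ROM nonlinearity consistency on a small quadratic
# system du/dt = A u + n(u). The "consistent" ROM projects the same
# nonlinearity n used by the FOM; the "inconsistent" ROM swaps in n_alt.
rng = np.random.default_rng(2)
dim = 6
A = -np.eye(dim) + 0.1 * rng.standard_normal((dim, dim))

def nonlin(u):        # FOM nonlinearity discretization
    return -0.5 * u * np.roll(u, 1)

def nonlin_alt(u):    # a different (inconsistent) discretization
    return -0.5 * u * np.roll(u, -1)

def integrate(rhs, u0, dt=1e-3, steps=2000):
    u = u0.copy()
    for _ in range(steps):
        u = u + dt * rhs(u)                        # forward Euler, for simplicity
    return u

u0 = 0.5 * rng.standard_normal(dim)
Phi = np.eye(dim)     # all modes retained, so a consistent ROM can match the FOM
fom = integrate(lambda u: A @ u + nonlin(u), u0)
rom_c = Phi @ integrate(lambda a: Phi.T @ A @ Phi @ a + Phi.T @ nonlin(Phi @ a), Phi.T @ u0)
rom_i = Phi @ integrate(lambda a: Phi.T @ A @ Phi @ a + Phi.T @ nonlin_alt(Phi @ a), Phi.T @ u0)
err_c = float(np.linalg.norm(fom - rom_c))
err_i = float(np.linalg.norm(fom - rom_i))
print("consistent ROM error  :", err_c)
print("inconsistent ROM error:", err_i)
```

The inconsistent ROM's error does not vanish even with every mode retained, mirroring the paper's finding that inconsistency leaves residual terms that prevent the ROM from recovering the FOM as the number of modes increases.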

Balancing social utility and equity in distributing limited vaccines represents a critical policy concern for protecting against the prolonged COVID-19 pandemic. What is the nature of the trade-off between maximizing collective welfare and minimizing disparities between more and less privileged communities? To evaluate vaccination strategies, we propose a novel epidemic model that explicitly accounts for both demographic and mobility differences among communities and their association with heterogeneous COVID-19 risks, then calibrate it with large-scale data. Using this model, we find that social utility and equity can be simultaneously improved when vaccine access is prioritized for the most disadvantaged communities, which holds even when such communities manifest considerable vaccine reluctance. Nevertheless, equity across distinct demographic features involves inherent tensions, owing to the complex correlations among these features in society. We design two behavior-and-demography-aware indices, community risk and societal harm, which capture the risks communities face and those they impose on society by not being vaccinated, to inform the design of comprehensive vaccine distribution strategies. Our study provides a framework for uniting utility- and equity-based considerations in vaccine distribution, and sheds light on how to balance multiple ethical values in complex settings for epidemic control.

We consider the classical contention resolution problem, where nodes arrive over time, each with a message to send. In each synchronous slot, each node can send or remain idle. If exactly one node sends in a slot, it succeeds; otherwise, if multiple nodes send simultaneously, the messages collide and none succeeds. Nodes can differentiate collision from silence only if collision detection is available. Ideally, a contention resolution algorithm should satisfy three criteria: low time complexity (or, equivalently, high throughput); low energy complexity, meaning each node does not make too many broadcast attempts; and strong robustness, meaning the algorithm maintains good performance even if slots can be jammed. Previous work has shown that, with collision detection, there are "perfect" contention resolution algorithms satisfying all three criteria. On the other hand, without collision detection, it was not until 2020 that an algorithm was discovered that achieves optimal time complexity and low energy cost, assuming there is no jamming. More recently, the trade-off between throughput and robustness was studied. However, an intriguing and important question remains open: without collision detection, are there robust algorithms achieving both low total time complexity and low per-node energy cost? In this paper, we answer the above question affirmatively. Specifically, we develop a new randomized algorithm for robust contention resolution without collision detection. Lower bounds show that it has both optimal time and energy complexity. If all nodes start execution simultaneously, we design another algorithm that is even faster, with energy complexity similar to that of the first algorithm. The separation in time complexity suggests that, for robust contention resolution without collision detection, ``batch'' instances (nodes start simultaneously) are inherently easier than ``scattered'' ones (nodes arrive over time).
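As a toy illustration of the underlying contention primitive (a slotted ALOHA-style model with a known number of active nodes; this is not the paper's algorithm, which must work without such knowledge), each of $n$ active nodes sends with probability $1/n$, and a slot succeeds iff exactly one node sends.

```python
import random

# Slotted model of the contention primitive: n active nodes each send with
# probability 1/n; a slot succeeds iff exactly one node sends. For large n
# the per-slot success rate approaches 1/e, the constant that throughput
# arguments for contention resolution build on.
random.seed(0)

def success_rate(n, slots=20000):
    wins = 0
    for _ in range(slots):
        senders = sum(random.random() < 1.0 / n for _ in range(n))
        wins += (senders == 1)
    return wins / slots

print("n=2 :", round(success_rate(2), 3))    # analytically 0.5
print("n=50:", round(success_rate(50), 3))   # close to 1/e ~ 0.368
```

The hard part of the actual problem is that $n$ is unknown and changing, slots may be jammed, and (without collision detection) a sender cannot even tell a collision from silence; the paper's contribution is achieving this success rate robustly and with few per-node transmissions under those constraints.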

We consider the energy complexity of the leader election problem in the single-hop radio network model, where each device has a unique identifier in $\{1, 2, \ldots, N\}$. Energy is a scarce resource for small battery-powered devices. For such devices, most of the energy is often spent on communication, not on computation. To approximate the actual energy cost, the energy complexity of an algorithm is defined as the maximum over all devices of the number of time slots where the device transmits or listens. Much progress has been made in understanding the energy complexity of leader election in radio networks, but very little is known about the trade-off between time and energy. $\textbf{Time-energy trade-off:}$ For any $k \geq \log \log N$, we show that a leader among at most $n$ devices can be elected deterministically in $O(k \cdot n^{1+\epsilon}) + O(k \cdot N^{1/k})$ time and $O(k)$ energy if each device can simultaneously transmit and listen, where $\epsilon > 0$ is any small constant. This improves upon the previous $O(N)$-time $O(\log \log N)$-energy algorithm by Chang et al. [STOC 2017]. We provide lower bounds to show that the time-energy trade-off of our algorithm is near-optimal. $\textbf{Dense instances:}$ For the dense instances where the number of devices is $n = \Theta(N)$, we design a deterministic leader election algorithm using only $O(1)$ energy. This improves upon the $O(\log^* N)$-energy algorithm by Jurdzi\'{n}ski et al. [PODC 2002] and the $O(\alpha(N))$-energy algorithm by Chang et al. [STOC 2017]. More specifically, we show that the optimal deterministic energy complexity of leader election is $\Theta\left(\max\left\{1, \log \frac{N}{n}\right\}\right)$ if the devices cannot simultaneously transmit and listen, and it is $\Theta\left(\max\left\{1, \log \log \frac{N}{n}\right\}\right)$ if they can.

We study sparse linear regression over a network of agents, modeled as an undirected graph (with no centralized node). The estimation problem is formulated as the minimization of the sum of the local LASSO loss functions plus a quadratic penalty on the consensus constraint -- the latter being instrumental to obtaining distributed solution methods. While penalty-based consensus methods have been extensively studied in the optimization literature, their statistical and computational guarantees in the high-dimensional setting remain unclear. This work provides an answer to this open problem. Our contribution is two-fold. First, we establish statistical consistency of the estimator: under a suitable choice of the penalty parameter, the optimal solution of the penalized problem achieves the near-optimal minimax rate $\mathcal{O}(s \log d/N)$ in $\ell_2$-loss, where $s$ is the sparsity level, $d$ is the ambient dimension, and $N$ is the total sample size in the network -- this matches centralized sample rates. Second, we show that the proximal-gradient algorithm applied to the penalized problem, which naturally leads to distributed implementations, converges linearly up to a tolerance of the order of the centralized statistical error -- the rate scales as $\mathcal{O}(d)$, revealing an unavoidable speed-accuracy dilemma. Numerical results demonstrate the tightness of the derived sample rate and convergence rate scalings.
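A single-agent sketch of the proximal-gradient (ISTA) iteration analyzed in the paper, with the network consensus penalty omitted for brevity (problem sizes, regularization level, and iteration count are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Single-agent LASSO, min_b 1/(2N) ||y - X b||^2 + lam ||b||_1, solved with
# the proximal-gradient (ISTA) iteration; soft-thresholding is the proximal
# operator of the l1 norm.
def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(3)
N, d, s = 200, 50, 5
X = rng.standard_normal((N, d))
b_true = np.zeros(d)
b_true[:s] = 1.0                             # s-sparse ground truth
y = X @ b_true + 0.1 * rng.standard_normal(N)

lam = 0.05                                   # illustrative; theory suggests lam ~ sqrt(log d / N)
L = np.linalg.norm(X, 2) ** 2 / N            # Lipschitz constant of the smooth part
b = np.zeros(d)
for _ in range(500):
    grad = X.T @ (X @ b - y) / N
    b = soft_threshold(b - grad / L, lam / L)  # gradient step, then prox

err = float(np.linalg.norm(b - b_true))
print("estimation error ||b - b*||_2 =", round(err, 3))
```

In the distributed setting each agent runs this same gradient-plus-prox step on its local loss, with the consensus penalty's gradient coupling it to its graph neighbors; the paper shows this converges linearly up to a tolerance at the level of the centralized statistical error.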

Systems with a stochastic time delay between input and output present a number of unique challenges. Time-domain noise leads to irregular alignments, obfuscates relationships, and attenuates inferred coefficients. To handle these challenges, we introduce a maximum likelihood regression model that regards stochastic time delay as an "error" in the time domain. For a certain subset of problems, by modelling both prediction and time errors it is possible to outperform traditional models. Through a simulated experiment on a univariate problem, we demonstrate results that significantly improve upon Ordinary Least Squares (OLS) regression.
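A hedged simulation of the attenuation effect mentioned above (the delay distribution, slope, and noise level are illustrative assumptions, not the paper's experiment): OLS that ignores a stochastic delay recovers a shrunken coefficient.

```python
import numpy as np

# y_t = beta * x_{t - d_t} + noise, with a random per-sample delay d_t.
# Regressing y_t on x_t while ignoring the delay attenuates the estimated
# slope; since x is white here, only the d_t = 0 samples contribute signal,
# so the naive estimate shrinks roughly to beta * P(d_t = 0).
rng = np.random.default_rng(4)
n, beta = 5000, 2.0
x = rng.standard_normal(n + 3)
d = rng.integers(0, 3, size=n)               # stochastic delay in {0, 1, 2}
t = np.arange(n) + 3
y = beta * x[t - d] + 0.1 * rng.standard_normal(n)

x_naive = x[t]                               # naive alignment, delay ignored
beta_ols = float(x_naive @ y / (x_naive @ x_naive))
print("true beta :", beta)
print("naive OLS :", round(beta_ols, 3))     # attenuated toward beta * P(d=0) = 2/3
```

Modelling the time error explicitly, as the proposed maximum likelihood model does, is what allows the full slope to be recovered in such settings.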
