
This paper develops a fast and accurate linear finite element method and a fourth-order compact difference method, combined with the matrix transfer technique, to solve high-dimensional time-space fractional diffusion problems with the spectral fractional Laplacian in space. In addition, a fast time-stepping $L1$ scheme is used for the time discretization. The proposed schemes evaluate the fractional power of the matrix exactly and perform the matrix-vector multiplication directly via a discrete sine transform and its inverse, which avoids any iterative method and significantly reduces both the computational cost and the memory footprint. Furthermore, we present convergence analyses of the fully discrete schemes based on the two types of spatial discretization. Finally, ample numerical examples are provided to illustrate the theoretical analyses and the efficiency of the suggested schemes.
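
For readers unfamiliar with the matrix transfer technique, the following minimal sketch shows how a fractional power of a discrete Laplacian can be applied with a discrete sine transform and no iteration. It assumes a 1D uniform grid on $(0,1)$ with homogeneous Dirichlet conditions and uses the eigenvalues of the standard second-order difference Laplacian; the paper's finite element and fourth-order compact variants would substitute their own closed-form eigenvalues. The function name and parameters are illustrative.

```python
import numpy as np
from scipy.fft import dst, idst

def apply_fractional_laplacian(v, alpha, h):
    """Apply the spectral fractional Laplacian (-Delta)^{alpha/2} to the
    grid function v on a uniform 1D grid with mesh size h = 1/(n+1) and
    homogeneous Dirichlet conditions. The DST diagonalizes the discrete
    Laplacian, so its fractional power acts entrywise on the coefficients."""
    n = v.size
    k = np.arange(1, n + 1)
    lam = (4.0 / h**2) * np.sin(k * np.pi * h / 2.0) ** 2  # Laplacian eigenvalues
    v_hat = dst(v, type=1)              # coefficients in the sine eigenbasis
    v_hat *= lam ** (alpha / 2.0)       # fractional power applied diagonally
    return idst(v_hat, type=1)          # back to physical space; no iteration
```

On tensor-product grids in higher dimensions the same diagonalization applies dimension by dimension, which is what keeps the scheme iteration-free.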

Related content


We consider the setting of online convex optimization (OCO) with \textit{exp-concave} losses. The best known regret bound for this setting is $O(n\log{}T)$, where $n$ is the dimension and $T$ is the number of prediction rounds (treating all other quantities as constants and assuming $T$ is sufficiently large), and it is attainable via the well-known Online Newton Step algorithm (ONS). However, ONS requires computing, on each iteration, a projection (according to some matrix-induced norm) onto the feasible convex set, which is often computationally prohibitive in high-dimensional settings and when the feasible set admits a non-trivial structure. In this work we consider projection-free online algorithms for exp-concave and smooth losses, where by projection-free we refer to algorithms that rely only on the availability of a linear optimization oracle (LOO) for the feasible set, which in many applications of interest admits much more efficient implementations than a projection oracle. We present an LOO-based ONS-style algorithm which, using $O(T)$ calls to the LOO overall, guarantees a worst-case regret bound of $\widetilde{O}(n^{2/3}T^{2/3})$ (ignoring all quantities except for $n,T$). However, our algorithm is most interesting in an important and plausible low-dimensional data scenario: if the gradients (approximately) span a subspace of dimension at most $\rho$, $\rho \ll n$, the regret bound improves to $\widetilde{O}(\rho^{2/3}T^{2/3})$, and, by applying standard deterministic sketching techniques, both the space and the average additional per-iteration runtime requirements are only $O(\rho{}n)$ (instead of $O(n^2)$). This improves upon recently proposed LOO-based algorithms for OCO which, while having the same state-of-the-art dependence on the horizon $T$, suffer from regret/oracle complexity that scales with $\sqrt{n}$ or worse.
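
For context, the standard ONS update (in the form popularized by Hazan et al.), whose final projection step is the expensive operation this work replaces with LOO calls, reads:

```latex
A_t = A_{t-1} + g_t g_t^{\top}, \qquad
y_{t+1} = x_t - \tfrac{1}{\gamma} A_t^{-1} g_t, \qquad
x_{t+1} = \operatorname*{arg\,min}_{x \in \mathcal{K}} \,
          (x - y_{t+1})^{\top} A_t \, (x - y_{t+1}),
```

where $g_t$ is the loss gradient at $x_t$, $\gamma$ depends on the exp-concavity parameter, and $A_0 = \epsilon I$. The last step is the matrix-norm projection onto $\mathcal{K}$, which can cost a full quadratic program per round.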

In this paper we propose efficient first- and second-order time-stepping schemes for the time-dependent Navier-Stokes-Nernst-Planck-Poisson equations. The proposed schemes are constructed using an auxiliary variable reformulation and a careful treatment of the terms coupling the different equations. By introducing a dynamic equation for the auxiliary variable and reformulating the original equations into an equivalent system, we construct first- and second-order semi-implicit linearized schemes for the underlying problem. The main advantages of the proposed method are: (1) the schemes are unconditionally stable in the sense that a discrete energy decays during time stepping; (2) the concentration components of the discrete solution preserve positivity and mass conservation; (3) a careful implementation shows that the proposed schemes can be realized very efficiently, with a computational complexity close to that of a semi-implicit scheme. Numerical examples are presented to demonstrate the accuracy and performance of the proposed method. To the best of our knowledge, this is the first second-order method that satisfies all of the above properties for the Navier-Stokes-Nernst-Planck-Poisson equations.
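
The abstract does not spell out the reformulation; for orientation, auxiliary-variable schemes of this kind are typically built on a scalar auxiliary variable (SAV) construction of roughly the following form (illustrative only, not necessarily the authors' exact choice):

```latex
r(t) = \sqrt{E_1(u(t)) + C}, \qquad
\frac{\mathrm{d} r}{\mathrm{d} t}
  = \frac{1}{2\sqrt{E_1(u) + C}}
    \left( \frac{\delta E_1}{\delta u}, \frac{\partial u}{\partial t} \right),
```

where $E_1$ is (part of) the free energy, bounded below by $-C$. Treating $r$ semi-implicitly and the stiff linear terms implicitly yields linearized schemes whose discrete energy decays unconditionally.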

This work introduces a highly scalable spectral graph densification framework (SGL) for learning resistor networks with linear measurements, such as node voltages and currents. We show that the proposed graph learning approach is equivalent to solving the classical graphical Lasso problems with Laplacian-like precision matrices. We prove that, given $O(\log N)$ pairs of voltage and current measurements, it is possible to recover sparse $N$-node resistor networks that well preserve the effective resistance distances of the original graph. In addition, the learned graphs also preserve the structural (spectral) properties of the original graph, which can potentially be leveraged in many circuit design and optimization tasks. To achieve more scalable performance, we also introduce a solver-free method (SF-SGL) that exploits a multilevel spectral approximation of the graphs and allows for a scalable and flexible decomposition of the entire graph spectrum (to be learned) into multiple eigenvalue clusters (frequency bands). Such a solver-free approach allows us to more efficiently identify the most spectrally critical edges for reducing various ranges of spectral embedding distortions. Through extensive experiments on a variety of real-world test cases, we show that the proposed approach is highly scalable for learning sparse resistor networks without sacrificing solution quality. We also introduce a data-driven EDA algorithm for vectorless power/thermal integrity verification that allows estimating worst-case voltage/temperature (gradient) distributions across the entire chip from a few voltage/temperature measurements.
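
The recovery guarantee is stated in terms of effective resistance distances; as a reminder, for a resistor network with graph Laplacian $L$ these follow from the Laplacian pseudoinverse. A minimal sketch (dense pseudoinverse shown for clarity; scalable solvers would avoid forming it):

```python
import numpy as np

def effective_resistance(L, i, j):
    """Effective resistance between nodes i and j of a resistor network
    with (weighted) graph Laplacian L:
        R_eff(i, j) = (e_i - e_j)^T L^+ (e_i - e_j),
    where L^+ is the Moore-Penrose pseudoinverse."""
    e = np.zeros(L.shape[0])
    e[i], e[j] = 1.0, -1.0
    return float(e @ np.linalg.pinv(L) @ e)
```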

Stochastic human motion prediction (HMP) has generally been tackled with generative adversarial networks and variational autoencoders. Most prior works aim at predicting highly diverse movements in terms of the skeleton joints' dispersion. This has led to methods predicting fast and motion-divergent movements, which are often unrealistic and incoherent with past motion. Such methods also neglect contexts that need to anticipate diverse low-range behaviors, or actions, with subtle joint displacements. To address these issues, we present BeLFusion, a model that, for the first time, leverages latent diffusion models in HMP to sample from a latent space where behavior is disentangled from pose and motion. As a result, diversity is encouraged from a behavioral perspective. Thanks to our behavior coupler's ability to transfer sampled behavior to ongoing motion, BeLFusion's predictions display a variety of behaviors that are significantly more realistic than the state of the art. To support it, we introduce two metrics, the Area of the Cumulative Motion Distribution, and the Average Pairwise Distance Error, which are correlated to our definition of realism according to a qualitative study with 126 participants. Finally, we prove BeLFusion's generalization power in a new cross-dataset scenario for stochastic HMP.
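
The Average Pairwise Distance Error is the paper's own metric, but it builds on the Average Pairwise Distance commonly used to score diversity in stochastic HMP. A minimal sketch of that base quantity (the array shapes are illustrative assumptions):

```python
import numpy as np

def average_pairwise_distance(samples):
    """Diversity of a set of sampled future motions.
    samples: array of shape (S, T, J, 3): S samples, T frames, J joints.
    Returns the mean L2 distance over all unordered sample pairs."""
    S = samples.shape[0]
    flat = samples.reshape(S, -1)
    dists = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
    return float(dists[np.triu_indices(S, k=1)].mean())
```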

Inverse problems involve inferring unknown parameters of a physical process from observational data. This paper investigates an important class of inverse problems -- the estimation of the initial condition of a spatio-temporal advection-diffusion process using spatially sparse data streams. Three spatial sampling schemes are considered: irregular, non-uniform, and shifted uniform sampling. The irregular sampling scheme is the general scenario, while computationally efficient solutions are available in the spectral domain for non-uniform and shifted uniform sampling. For each sampling scheme, the inverse problem is formulated as a regularized convex optimization problem that minimizes the distance between forward model outputs and observations. The optimization problem is solved by the Alternating Direction Method of Multipliers algorithm, which also handles the situation when a linear inequality constraint (e.g., non-negativity) is imposed on the model output. Numerical examples are presented, code is made available on GitHub, and discussions are provided to offer useful insights into the proposed inverse modeling approaches.
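
As a concrete instance of such a formulation, here is a minimal ADMM sketch for a regularized least-squares inversion with a non-negativity constraint. The matrix A stands in for a discretized forward model, and the parameter values are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def admm_nonneg_ridge(A, b, lam=1.0, rho=1.0, iters=200):
    """ADMM for  min_x ||Ax - b||^2 + lam * ||x||^2  subject to  x >= 0.
    Splitting: f(x) = ||Ax - b||^2 + lam * ||x||^2,
               g(z) = indicator(z >= 0),  with consensus constraint x = z."""
    n = A.shape[1]
    M = 2.0 * A.T @ A + (2.0 * lam + rho) * np.eye(n)  # fixed x-update system
    Atb2 = 2.0 * A.T @ b
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(M, Atb2 + rho * (z - u))   # x-update: ridge solve
        z = np.maximum(0.0, x + u)                     # z-update: projection onto x >= 0
        u = u + x - z                                  # scaled dual update
    return z
```

The z-update is where the linear inequality constraint enters: it is a cheap projection, which is exactly why ADMM handles such constraints gracefully.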

The fractional differential equation $L^\beta u = f$ posed on a compact metric graph is considered, where $\beta>\frac14$ and $L = \kappa - \frac{\mathrm{d}}{\mathrm{d} x}(H\frac{\mathrm{d}}{\mathrm{d} x})$ is a second-order elliptic operator equipped with certain vertex conditions and sufficiently smooth and positive coefficients $\kappa,H$. We demonstrate the existence of a unique solution for a general class of vertex conditions and derive the regularity of the solution in the specific case of Kirchhoff vertex conditions. These results are extended to the stochastic setting when $f$ is replaced by Gaussian white noise. For the deterministic and stochastic settings under generalized Kirchhoff vertex conditions, we propose a numerical solution based on a finite element approximation combined with a rational approximation of the fractional power $L^{-\beta}$. For the resulting approximation, the strong error is analyzed in the deterministic case, and the strong mean squared error as well as the $L_2(\Gamma\times \Gamma)$-error of the covariance function of the solution are analyzed in the stochastic setting. Explicit rates of convergence are derived for all cases. Numerical experiments for the example ${L = \kappa^2 - \Delta, \kappa>0}$ are performed to illustrate the theoretical results.
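
The rational approximation step replaces the nonlocal fractional solve by a short sum of standard elliptic solves. Schematically (the coefficients $c_i$ and poles $p_i$ depend on the particular rational scheme, e.g. sinc quadrature or a best uniform rational approximant):

```latex
L^{-\beta} f \;\approx\; \sum_{i=1}^{m} c_i \,\bigl(L + p_i\, \mathrm{Id}\bigr)^{-1} f,
\qquad c_i > 0, \quad p_i \geq 0,
```

so each term requires only a non-fractional finite element solve on the metric graph.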

It is often useful to compactly summarize important properties of model parameters and training data so that they can be used later without storing and/or iterating over the entire dataset. As a specific case, we consider estimating the Function Space Distance (FSD) over a training set, i.e. the average discrepancy between the outputs of two neural networks. We propose a Linearized Activation Function TRick (LAFTR) and derive an efficient approximation to FSD for ReLU neural networks. The key idea is to approximate the architecture as a linear network with stochastic gating. Despite requiring only one parameter per unit of the network, our approach outcompetes other parametric approximations with larger memory requirements. Applied to continual learning, our parametric approximation is competitive with state-of-the-art nonparametric approximations, which require storing many training examples. Furthermore, we show its efficacy in estimating influence functions accurately and detecting mislabeled examples without expensive iterations over the entire dataset.
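
Using the abstract's definition of FSD as the average output discrepancy over the training set, a brute-force Monte Carlo estimate looks as follows. The squared-error discrepancy is one illustrative choice, and this dataset iteration is exactly the cost that LAFTR's parametric approximation is designed to avoid:

```python
import numpy as np

def empirical_fsd(f1, f2, inputs):
    """Brute-force Function Space Distance estimate between two networks:
    mean squared discrepancy of their outputs over a dataset.
    f1, f2: callables mapping a batch (N, d_in) to outputs (N, d_out)."""
    diff = f1(inputs) - f2(inputs)
    return float(np.mean(np.sum(diff ** 2, axis=-1)))
```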

This paper extends the inverse substructuring (IS) approach to the state-space domain and presents a novel state-space substructuring (SSS) technique that embeds the dynamics of connecting elements (CEs) in the Lagrange Multiplier State-Space Substructuring (LM-SSS) formulation via compatibility relaxation. This coupling approach makes it possible to incorporate into LM-SSS connecting elements that are suitable for characterization by inverse substructuring (e.g. rubber mounts) by simply using information from one of their off-diagonal apparent mass terms. Therefore, the information obtained from an in-situ experimental characterization of the CEs is enough to include them in the coupling formulation. Moreover, LM-SSS with compatibility relaxation makes it possible to couple an unlimited number of components and CEs, requiring only one matrix inversion to compute the coupled state-space model (SSM). Two post-processing procedures that enable the computation of minimal-order coupled models with this approach are also presented. Numerical and experimental substructuring applications are used to demonstrate the validity of the proposed methods. It is found that the IS approach can be accurately applied to state-space models representative of components linked by CEs to identify models representative of the diagonal apparent mass terms of the CEs, provided that the CEs conform to the underlying assumptions of IS. In this way, state-space models representative of experimentally characterized CEs can be found without performing decoupling operations; hence, these models are not contaminated with spurious states. Furthermore, the developed coupling approach is found to be reliable when the dynamics of the CEs can be accurately characterized by IS, making it possible to compute reliable coupled models that are free of spurious states.
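
The state-space details are beyond the scope of an abstract, but the frequency-domain analogue of compatibility relaxation in Lagrange-multiplier substructuring may help fix ideas: weakening the rigid interface compatibility by the flexibility of the connecting element inserts the CE's dynamics into the interface problem. This is an illustration in the usual LM-FBS notation, not the paper's state-space formulation:

```latex
\mathbf{Y}^{\mathrm{c}}(\omega)
  = \mathbf{Y} - \mathbf{Y}\mathbf{B}^{\top}
    \bigl(\mathbf{B}\mathbf{Y}\mathbf{B}^{\top} + \mathbf{Y}^{\mathrm{CE}}\bigr)^{-1}
    \mathbf{B}\mathbf{Y},
```

where $\mathbf{Y}$ collects the uncoupled admittances, $\mathbf{B}$ is the signed Boolean interface matrix, and $\mathbf{Y}^{\mathrm{CE}}$ is the connecting element's admittance; note the single matrix inversion, mirroring the LM-SSS property mentioned above.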

Diffusion models have shown incredible capabilities as generative models; indeed, they power the current state-of-the-art models on text-conditioned image generation such as Imagen and DALL-E 2. In this work we review, demystify, and unify the understanding of diffusion models across both variational and score-based perspectives. We first derive Variational Diffusion Models (VDM) as a special case of a Markovian Hierarchical Variational Autoencoder, where three key assumptions enable tractable computation and scalable optimization of the ELBO. We then prove that optimizing a VDM boils down to learning a neural network to predict one of three potential objectives: the original source input from any arbitrary noisification of it, the original source noise from any arbitrarily noisified input, or the score function of a noisified input at any arbitrary noise level. We then dive deeper into what it means to learn the score function, and connect the variational perspective of a diffusion model explicitly with the Score-based Generative Modeling perspective through Tweedie's Formula. Lastly, we cover how to learn a conditional distribution using diffusion models via guidance.
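
The Tweedie connection mentioned here is compact enough to state. For a Gaussian corruption, Tweedie's formula ties the posterior mean to the score, which is what links noise prediction and score matching:

```latex
% Tweedie's formula for z \sim \mathcal{N}(\mu_z, \sigma^2 I):
\mathbb{E}[\mu_z \mid z] = z + \sigma^2 \nabla_z \log p(z);
% with the forward corruption z_t = \alpha_t x_0 + \sigma_t \varepsilon this gives
\nabla_{z_t} \log q(z_t \mid x_0) = -\,\frac{\varepsilon}{\sigma_t},
```

so predicting the source noise $\varepsilon$ and predicting the score of the noisified input differ only by the factor $-1/\sigma_t$.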

As soon as abstract mathematical computations were adapted to computation on digital computers, the problem of efficient representation, manipulation, and communication of the numerical values in those computations arose. Strongly related to the problem of numerical representation is the problem of quantization: in what manner should a set of continuous real-valued numbers be distributed over a fixed discrete set of numbers to minimize the number of bits required and also to maximize the accuracy of the attendant computations? This perennial problem of quantization is particularly relevant whenever memory and/or computational resources are severely restricted, and it has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas. Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16; and, in fact, reductions of 4x to 8x are often realized in practice in these applications. Thus, it is not surprising that quantization has emerged recently as an important and very active sub-area of research in the efficient implementation of computations associated with Neural Networks. In this article, we survey approaches to the problem of quantizing the numerical values in deep Neural Network computations, covering the advantages/disadvantages of current methods. With this survey and its organization, we hope to have presented a useful snapshot of the current research in quantization for Neural Networks and to have given an intelligent organization to ease the evaluation of future research in this area.
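
As a concrete reference point for the uniform schemes such surveys cover, here is a minimal sketch of asymmetric (affine) quantization to a given bit width. It is illustrative only: per-tensor scale, no handling of per-channel scales or degenerate ranges:

```python
import numpy as np

def quantize_affine(x, num_bits=4):
    """Uniform affine (asymmetric) quantization of a float array to
    num_bits unsigned integers, plus the dequantized reconstruction."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)       # real units per integer step
    zero_point = int(round(qmin - x.min() / scale))   # integer that maps to 0.0
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.int32)
    x_hat = (q - zero_point) * scale                  # dequantize
    return q, x_hat
```

The gap between `x` and `x_hat` is the quantization error that the surveyed methods trade off against memory and latency savings.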
