
Given an intractable distribution $p$, the problem of variational inference (VI) is to compute the best approximation $q$ from some more tractable family $\mathcal{Q}$. Most commonly the approximation is found by minimizing a Kullback-Leibler (KL) divergence. However, there exist other valid choices of divergences, and when $\mathcal{Q}$ does not contain~$p$, each divergence champions a different solution. We analyze how the choice of divergence affects the outcome of VI when a Gaussian with a dense covariance matrix is approximated by a Gaussian with a diagonal covariance matrix. In this setting we show that different divergences can be \textit{ordered} by the amount that their variational approximations misestimate various measures of uncertainty, such as the variance, precision, and entropy. We also derive an impossibility theorem showing that no two of these measures can be simultaneously matched by a factorized approximation; hence, the choice of divergence informs which measure, if any, is correctly estimated. Our analysis covers the KL divergence, the R\'enyi divergences, and a score-based divergence that compares $\nabla\log p$ and $\nabla\log q$. We empirically evaluate whether these orderings hold when VI is used to approximate non-Gaussian distributions.
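
To make the variance/precision trade-off concrete, here is a minimal sketch (not the paper's code) using the known closed forms for a two-dimensional Gaussian: minimizing the reverse KL $\mathrm{KL}(q\,\|\,p)$ over diagonal Gaussians matches the diagonal of the precision matrix and hence underestimates the marginal variances, while minimizing the forward KL $\mathrm{KL}(p\,\|\,q)$ matches the marginal variances exactly. No diagonal $q$ can match both unless $p$ already factorizes, which is the flavor of the impossibility result above.

```python
# A minimal sketch illustrating how the choice of divergence changes which
# uncertainty measure a factorized Gaussian matches. Known closed forms:
# minimizing KL(q||p) over diagonal q matches the precision diagonal (so the
# variance is underestimated), while minimizing KL(p||q) matches the
# marginal variances exactly.
import numpy as np

Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])          # dense covariance of the target p
Lambda = np.linalg.inv(Sigma)           # precision matrix of p

# Reverse KL, min_q KL(q||p): optimal diagonal variances are 1/Lambda_ii.
var_reverse = 1.0 / np.diag(Lambda)

# Forward KL, min_q KL(p||q): optimal diagonal variances are Sigma_ii.
var_forward = np.diag(Sigma)

def entropy(variances):
    """Entropy of a diagonal Gaussian with the given marginal variances."""
    return 0.5 * np.sum(np.log(2 * np.pi * np.e * variances))

print("true marginal variances:", np.diag(Sigma))
print("reverse-KL variances:   ", var_reverse)   # underestimates
print("forward-KL variances:   ", var_forward)   # matches marginals
print("true entropy of p:      ",
      0.5 * np.log(np.linalg.det(2 * np.pi * np.e * Sigma)))
print("reverse-KL entropy:     ", entropy(var_reverse))
print("forward-KL entropy:     ", entropy(var_forward))
```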

Related Content

Deep learning surrogate models have shown promise in solving partial differential equations (PDEs). Among them, the Fourier neural operator (FNO) achieves good accuracy and is significantly faster than numerical solvers on a variety of PDEs, such as fluid flows. However, the FNO uses the fast Fourier transform (FFT), which is limited to rectangular domains with uniform grids. In this work, we propose a new framework, viz. geo-FNO, to solve PDEs on arbitrary geometries. Geo-FNO learns to deform the input (physical) domain, which may be irregular, into a latent space with a uniform grid, where the FNO model with the FFT is applied. The resulting geo-FNO model has both the computational efficiency of the FFT and the flexibility to handle arbitrary geometries. Geo-FNO is also flexible in its input formats: point clouds, meshes, and design parameters are all valid inputs. We consider a variety of PDEs, including the elasticity, plasticity, Euler, and Navier-Stokes equations, on both forward modeling and inverse design problems. Geo-FNO is $10^5$ times faster than standard numerical solvers and twice as accurate as direct interpolation with existing ML-based PDE solvers such as the standard FNO.
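
The FFT-based operation that geo-FNO reuses on the latent uniform grid can be sketched in a few lines. This is a simplified one-dimensional illustration with random, untrained weights, not the released geo-FNO code; the learned deformation from the irregular physical domain to the uniform latent grid is omitted.

```python
# A simplified 1D sketch of the FFT-based spectral convolution at the core
# of the FNO, which geo-FNO applies after deforming an irregular physical
# domain onto a uniform latent grid. Weights are random stand-ins for the
# learned per-mode complex weights of a trained model.
import numpy as np

def spectral_conv_1d(u, weights, n_modes):
    """Multiply the lowest Fourier modes of u by (learned) complex weights."""
    u_hat = np.fft.rfft(u)                         # forward FFT (uniform grid)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights  # truncate and mix modes
    return np.fft.irfft(out_hat, n=len(u))         # back to physical space

rng = np.random.default_rng(0)
n, n_modes = 64, 8
u = np.sin(2 * np.pi * np.linspace(0, 1, n, endpoint=False))
weights = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)
v = spectral_conv_1d(u, weights, n_modes)
print(v.shape)  # (64,)
```

The uniform-grid requirement is visible here: `np.fft.rfft` assumes equally spaced samples, which is exactly what the learned deformation restores for irregular geometries.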

For a graph $G$, a subset $S\subseteq V(G)$ is called a resolving set of $G$ if, for any two vertices $u,v\in V(G)$, there exists a vertex $w\in S$ such that $d(w,u)\neq d(w,v)$. The Metric Dimension problem takes as input a graph $G$ on $n$ vertices and a positive integer $k$, and asks whether there exists a resolving set of size at most $k$. In another metric-based graph problem, Geodetic Set, the input is a graph $G$ and an integer $k$, and the objective is to determine whether there exists a subset $S\subseteq V(G)$ of size at most $k$ such that, for any vertex $u \in V(G)$, there are two vertices $s_1, s_2 \in S$ such that $u$ lies on a shortest path from $s_1$ to $s_2$. These two classical problems turn out to be intractable with respect to the natural parameter, i.e., the solution size, as well as most structural parameters, including the feedback vertex set number and pathwidth. Among the very few existing tractability results, both problems are FPT with respect to the vertex cover number $vc$. More precisely, we observe that both problems admit an FPT algorithm running in time $2^{\mathcal{O}(vc^2)}\cdot n^{\mathcal{O}(1)}$, and a kernelization algorithm that outputs a kernel with $2^{\mathcal{O}(vc)}$ vertices. We prove that, unless the Exponential Time Hypothesis fails, Metric Dimension and Geodetic Set, even on graphs of bounded diameter, admit neither an FPT algorithm running in time $2^{o(vc^2)}\cdot n^{\mathcal{O}(1)}$, nor a kernelization algorithm that reduces the solution size and outputs a kernel with $2^{o(vc)}$ vertices. The versatility of our technique enables us to apply it to both problems. We know of only one other problem in the literature that admits such a tight lower bound. Similarly, the list of known problems with exponential lower bounds on the number of vertices in kernelized instances is very short.
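
For intuition on the Metric Dimension side, the following brute-force sketch (exponential time, for small graphs only, and not the FPT algorithm from the paper) checks resolving sets directly: each vertex is identified by its vector of BFS distances to $S$, and $S$ resolves $G$ iff these vectors are pairwise distinct.

```python
# Brute-force Metric Dimension on a small connected graph, straight from the
# definition of a resolving set.
from collections import deque
from itertools import combinations

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from source via BFS."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_resolving_set(adj, S):
    """S resolves the graph iff distance vectors to S are pairwise distinct."""
    dists = [bfs_distances(adj, s) for s in S]
    vectors = set()
    for v in adj:
        vec = tuple(d[v] for d in dists)
        if vec in vectors:
            return False
        vectors.add(vec)
    return True

def metric_dimension_brute(adj):
    """Exponential-time baseline: smallest resolving set by enumeration."""
    vertices = list(adj)
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            if is_resolving_set(adj, S):
                return set(S)

# Path on 4 vertices: one endpoint resolves it, so the metric dimension is 1.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(metric_dimension_brute(path))  # e.g. {0}
```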

An intelligent omni-surface (IOS) assisted holographic multiple-input multiple-output architecture is conceived for $360^\circ$ full-space coverage at low energy consumption. The theoretical ergodic-rate lower bound of our non-orthogonal multiple access (NOMA) scheme is derived based on the moment-matching approximation method, while accounting for the signal distortion imposed at the transceivers by hardware impairments (HWIs). Furthermore, the asymptotic ergodic-rate lower bound is derived both for an infinite number of IOS elements and for continuous-aperture surfaces. Both the theoretical analysis and the simulation results show that the achievable rate of the NOMA scheme is higher than that of its orthogonal multiple access counterpart. Furthermore, owing to the HWIs at the transceivers, the achievable rate saturates in the high signal-to-noise ratio region instead of reaching its theoretical maximum.
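
The moment-matching step behind such ergodic-rate bounds can be illustrated with a toy Monte-Carlo sketch. This is illustrative only: the element count, the exponential gain model, and the Jensen-type bound below are assumptions for the example, not the paper's derivation or system model.

```python
# Toy moment-matching sketch: fit a Gamma distribution to an intractable
# aggregate channel gain by matching its first two moments, then compare the
# Monte-Carlo ergodic rate with a Jensen-type lower bound
# (log2(1 + rho*exp(t)) is convex in t, so E[f(ln X)] >= f(E[ln X])).
import numpy as np

rng = np.random.default_rng(1)
N = 64                                       # number of IOS elements (toy value)
gains = rng.exponential(size=(50_000, N)).sum(axis=1)   # aggregate channel gain

mean, var = gains.mean(), gains.var()
k, theta = mean**2 / var, var / mean         # Gamma(k, theta): two-moment match

snr = 10.0
rate_mc = np.mean(np.log2(1 + snr * gains))                  # Monte-Carlo rate
rate_lb = np.log2(1 + snr * np.exp(np.mean(np.log(gains))))  # Jensen-type bound
print(f"moment-matched Gamma: shape k={k:.1f}, scale theta={theta:.2f}")
print(f"ergodic rate {rate_mc:.3f} vs lower bound {rate_lb:.3f}")
```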

Event-based sensors, distinguished by their high temporal resolution of $1~\mathrm{\mu s}$ and a dynamic range of $120~\text{dB}$, stand out as ideal tools for deployment in fast-paced settings like vehicles and drones. Traditional object detection techniques that utilize Artificial Neural Networks (ANNs) face challenges due to the sparse and asynchronous nature of the events these sensors capture. In contrast, Spiking Neural Networks (SNNs) offer a promising alternative, providing a temporal representation that is inherently aligned with event-based data. This paper explores the unique membrane potential dynamics of SNNs and their ability to modulate sparse events. We introduce an innovative spike-triggered adaptive threshold mechanism designed for stable training. Building on these insights, we present a specialized spiking feature pyramid network (SpikeFPN) optimized for automotive event-based object detection. Comprehensive evaluations demonstrate that SpikeFPN surpasses both traditional SNNs and advanced ANNs enhanced with attention mechanisms. Notably, SpikeFPN achieves a mean Average Precision (mAP) of 0.477 on the GEN1 Automotive Detection (GAD) benchmark dataset, a significant increase over the selected SNN baselines. Moreover, the efficient design of SpikeFPN ensures robust performance while optimizing computational resources, owing to its innate sparse computation capabilities.
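
A minimal sketch of the kind of spike-triggered adaptive threshold the paper builds on is shown below; the exact SpikeFPN update rule may differ, and the time constants here are illustrative. Each spike raises the threshold, which then decays back toward its resting value, discouraging runaway firing during training.

```python
# Leaky integrate-and-fire (LIF) neuron with a spike-triggered adaptive
# threshold: simple illustrative dynamics, not the paper's exact mechanism.
import numpy as np

def lif_adaptive_threshold(inputs, tau_mem=0.9, tau_th=0.95,
                           v_th0=1.0, th_jump=0.5):
    """Run a LIF neuron over an input sequence; return the spike train."""
    v, v_th = 0.0, v_th0
    spikes = []
    for x in inputs:
        v = tau_mem * v + x                # leaky membrane integration
        s = float(v >= v_th)               # spike if potential crosses threshold
        v = v * (1.0 - s)                  # hard reset after a spike
        # threshold decays toward v_th0 and jumps by th_jump on each spike
        v_th = tau_th * v_th + (1 - tau_th) * v_th0 + th_jump * s
        spikes.append(s)
    return np.array(spikes)

rng = np.random.default_rng(0)
events = rng.binomial(1, 0.3, size=100) * 0.6   # sparse event-like input
print("firing rate:", lif_adaptive_threshold(events).mean())
```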

Solving a linear system $Ax=b$ is a fundamental scientific computing primitive for which numerous solvers and preconditioners have been developed. These come with parameters whose optimal values depend on the system being solved and are often impossible or too expensive to identify; thus in practice sub-optimal heuristics are used. We consider the common setting in which many related linear systems need to be solved, e.g. during a single numerical simulation. In this scenario, can we sequentially choose parameters that attain a near-optimal overall number of iterations, without extra matrix computations? We answer in the affirmative for Successive Over-Relaxation (SOR), a standard solver whose parameter $\omega$ has a strong impact on its runtime. For this method, we prove that a bandit online learning algorithm--using only the number of iterations as feedback--can select parameters for a sequence of instances such that the overall cost approaches that of the best fixed $\omega$ as the sequence length increases. Furthermore, when given additional structural information, we show that a contextual bandit method asymptotically achieves the performance of the instance-optimal policy, which selects the best $\omega$ for each instance. Our work provides the first learning-theoretic treatment of high-precision linear system solvers and the first end-to-end guarantees for data-driven scientific computing, demonstrating theoretically the potential to speed up numerical methods using well-understood learning algorithms.
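
To see why the parameter matters, here is a plain SOR sketch (a standard textbook implementation, not the paper's code) on a 1D Poisson system. The iteration count, which is the only feedback the bandit learner uses, varies sharply with $\omega$.

```python
# Successive Over-Relaxation (SOR): the iteration count depends strongly on
# the relaxation parameter omega, which is what makes online parameter
# selection worthwhile.
import numpy as np

def sor(A, b, omega, tol=1e-8, max_iter=10_000):
    """Solve Ax=b with SOR; return the solution and the iterations used."""
    n = len(b)
    x = np.zeros(n)
    for it in range(1, max_iter + 1):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(A @ x - b) < tol * np.linalg.norm(b):
            return x, it
    return x, max_iter

# 1D Poisson system: tridiagonal, diagonally dominant, a classic SOR testbed.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
for omega in (1.0, 1.5, 1.9):
    _, iters = sor(A, b, omega)
    print(f"omega={omega}: {iters} iterations")
```

On this system $\omega$ near its optimum (about $1.88$ here) converges an order of magnitude faster than $\omega=1$ (Gauss-Seidel), which is exactly the gap a near-optimal online choice of $\omega$ can close.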

Federated Learning (FL) allows multiple privacy-sensitive applications to leverage their datasets to construct a global model without disclosing any information. One such domain is healthcare, where groups of silos collaborate to generate a global predictor with improved accuracy and generalization. However, the inherent challenge lies in the high heterogeneity of medical data, which necessitates sophisticated techniques for assessment and compensation. This paper presents a comprehensive exploration of the mathematical formalization and taxonomy of heterogeneity within FL environments, focusing on the intricacies of medical data. In particular, we address the evaluation and comparison of the most popular FL algorithms with respect to their ability to cope with quantity-based heterogeneity as well as feature and label distribution-based heterogeneity. The goal is to provide a quantitative evaluation of the impact of data heterogeneity in FL systems for healthcare networks, as well as a guideline for FL algorithm selection. Our research extends beyond existing studies by benchmarking seven of the most common FL algorithms against the unique challenges posed by medical data use cases. The paper targets the prediction of the risk of stroke recurrence from tabular clinical reports collected by different federated hospital silos; we discuss the data heterogeneity frequently encountered in this scenario and its impact on FL performance.
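
As one concrete example of the algorithm families being benchmarked, a FedAvg-style aggregation step can be sketched as follows. The silos, model weights, and record counts below are toy placeholders, not the paper's clinical setup.

```python
# FedAvg-style aggregation: the server averages client parameters weighted by
# local dataset size. Under quantity-based heterogeneity, large silos dominate
# the average, which is one of the effects the benchmark measures.
import numpy as np

def fedavg(client_params, client_sizes):
    """Aggregate client model parameters weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_params, client_sizes))

# Toy example: three hospital silos with very different record counts.
silo_models = [np.array([0.2, 1.0]), np.array([0.8, 0.4]), np.array([0.5, 0.7])]
silo_sizes = [1200, 150, 400]          # unequal record counts per hospital
print(fedavg(silo_models, silo_sizes))  # pulled toward the largest silo
```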

Posterior sampling, i.e., using the exponential mechanism to sample from the posterior distribution, provides $\varepsilon$-pure differential privacy (DP) guarantees and does not suffer from the potentially unbounded privacy breach introduced by $(\varepsilon,\delta)$-approximate DP. In practice, however, one needs to apply approximate sampling methods such as Markov chain Monte Carlo (MCMC), thus re-introducing the unappealing $\delta$-approximation error into the privacy guarantees. To bridge this gap, we propose the Approximate SAmple Perturbation (abbr. ASAP) algorithm, which perturbs an MCMC sample with noise proportional to its Wasserstein-infinity ($W_\infty$) distance from a reference distribution that satisfies pure DP or pure Gaussian DP (i.e., $\delta=0$). We then leverage a Metropolis-Hastings algorithm to generate the sample and prove that the algorithm converges in $W_\infty$ distance. We show that by combining our new techniques with a localization step, we obtain the first nearly linear-time algorithm that achieves the optimal rates for the DP-ERM problem with strongly convex and smooth losses.
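
A heavily simplified sketch of the ASAP idea follows. The noise calibration below is an assumed, illustrative rule (inflating the Gaussian-mechanism scale by the $W_\infty$ bound), and the values are toy placeholders; the actual algorithm couples a Metropolis-Hastings sampler with a proven $W_\infty$ convergence bound.

```python
# Illustrative ASAP-style release: perturb an approximate (MCMC) sample with
# Gaussian noise whose scale absorbs the W-infinity error, so the release can
# be analyzed as if it came from the pure-DP reference distribution.
import numpy as np

rng = np.random.default_rng(0)

def asap_release(mcmc_sample, w_inf_bound, sensitivity, mu=1.0):
    """Release an MCMC sample under (approximately) mu-Gaussian DP.

    ASSUMED calibration for this sketch: a sample within w_inf_bound (sup
    norm) of the reference distribution is treated as a reference draw with
    the effective sensitivity enlarged by 2 * w_inf_bound.
    """
    sigma = (sensitivity + 2 * w_inf_bound) / mu   # inflated noise scale
    return mcmc_sample + rng.normal(scale=sigma, size=np.shape(mcmc_sample))

theta_mcmc = np.array([0.31, -1.20])   # hypothetical approximate sample
print(asap_release(theta_mcmc, w_inf_bound=1e-3, sensitivity=0.05))
```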

We study the complexity of a fundamental algorithm for fairly allocating indivisible items, the round-robin algorithm. For $n$ agents and $m$ items, we show that the algorithm can be implemented in time $O(nm\log(m/n))$ in the worst case. If the agents' preferences are uniformly random, we establish an improved (expected) running time of $O(nm + m\log m)$. On the other hand, assuming comparison queries between items, we prove that $\Omega(nm + m\log m)$ queries are necessary to implement the algorithm, even when randomization is allowed. We also derive bounds in noise models where the answers to queries are incorrect with some probability. Our proofs involve novel applications of tools from multi-armed bandits and information theory, as well as posets and linear extensions.
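
The algorithm itself is simple to state: agents take turns, in a fixed order, picking their most-preferred remaining item. The naive implementation below runs in $O(m^2)$ time per the linear scan each turn; the paper's contribution is pinning down how much faster (and no faster) it can be implemented in the comparison-query model.

```python
# The round-robin allocation algorithm, implemented naively from the
# definition: one item is assigned per turn, in a fixed picking order.
def round_robin(valuations):
    """valuations[i][j] = value of item j to agent i; returns the bundles."""
    n, m = len(valuations), len(valuations[0])
    remaining = set(range(m))
    bundles = [[] for _ in range(n)]
    for turn in range(m):                      # one item leaves per turn
        agent = turn % n                       # fixed picking order
        best = max(remaining, key=lambda j: valuations[agent][j])
        bundles[agent].append(best)
        remaining.remove(best)
    return bundles

vals = [[9, 1, 5, 3],
        [2, 8, 6, 4]]
print(round_robin(vals))  # [[0, 2], [1, 3]]
```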

In this paper, the key objects of interest are the sequential covariance matrices $\mathbf{S}_{n,t}$ and their largest eigenvalues. Here, the matrix $\mathbf{S}_{n,t}$ is computed as the empirical covariance associated with observations $\{\mathbf{x}_1,\ldots,\mathbf{x}_{ \lfloor nt \rfloor } \}$, for $t\in [0,1]$. The observations $\mathbf{x}_1,\ldots,\mathbf{x}_n$ are assumed to be i.i.d. $p$-dimensional vectors with zero mean, and a covariance matrix that is a fixed-rank perturbation of the identity matrix. Treating $\{ \mathbf{S}_{n,t}\}_{t \in [0,1]}$ as a matrix-valued stochastic process indexed by $t$, we study the behavior of the largest eigenvalues of $\mathbf{S}_{n,t}$, as $t$ varies, with $n$ and $p$ increasing simultaneously, so that $p/n \to y \in (0,1)$. As a key contribution of this work, we establish the weak convergence of the stochastic process corresponding to the sample spiked eigenvalues, if their population counterparts exceed the critical phase-transition threshold. Our analysis of the limiting process is fully comprehensive, revealing, in general, non-Gaussian limiting processes. As an application, we consider a class of change-point problems, where the interest is in detecting structural breaks in the covariance caused by a change in magnitude of the spiked eigenvalues. For this purpose, we propose two different maximal statistics corresponding to centered spiked eigenvalues of the sequential covariances. We show the existence of limiting null distributions for these statistics, and prove consistency of the test under fixed alternatives. Moreover, we compare the behavior of the proposed tests through a simulation study.
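
A small simulation sketch of the object under study (toy dimensions, not the paper's experiments): for a supercritical spike $\lambda > 1+\sqrt{y}$, the top eigenvalue of $\mathbf{S}_{n,t}$ grows along $t$ and, at $t=1$, lands near the classical BBP-type limit $\lambda\,(1 + y/(\lambda-1))$.

```python
# Largest eigenvalue of the sequential covariance S_{n,t} = (1/n) * sum of
# x_i x_i^T over i <= nt, for data with one spiked population eigenvalue.
import numpy as np

rng = np.random.default_rng(0)
n, p, spike = 2000, 400, 5.0               # p/n = 0.2, spike > 1 + sqrt(p/n)
cov_sqrt = np.eye(p)
cov_sqrt[0, 0] = np.sqrt(spike)            # rank-one perturbation of identity
X = rng.normal(size=(n, p)) @ cov_sqrt     # rows are i.i.d. observations

for t in (0.25, 0.5, 0.75, 1.0):
    k = int(n * t)
    S_t = X[:k].T @ X[:k] / n              # sequential covariance S_{n,t}
    top = np.linalg.eigvalsh(S_t)[-1]
    print(f"t={t}: largest eigenvalue = {top:.3f}")

y = p / n
print("BBP limit at t=1:", spike * (1 + y / (spike - 1)))  # 5.25 here
```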

We study a fundamental problem in Computational Geometry, the planar two-center problem. In this problem, the input is a set $S$ of $n$ points in the plane and the goal is to find two smallest congruent disks whose union contains all points of $S$. A longstanding open problem has been to obtain an $O(n\log n)$-time algorithm for planar two-center, matching the $\Omega(n\log n)$ lower bound given by Eppstein [SODA'97]. Towards this, researchers have devoted considerable effort over the decades. The previous best algorithm, given by Wang [SoCG'20], solves the problem in $O(n\log^2 n)$ time. In this paper, we present an $O(n\log n)$-time (deterministic) algorithm for planar two-center, which completely resolves this open problem.
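
The $O(n\log n)$ algorithm itself is intricate, but the problem is easy to state in code. Below is a hedged brute-force baseline (exponential in $n$, for testing on tiny inputs only, and not the paper's algorithm): it enumerates bipartitions of the points and covers each side with a Welzl-style minimum enclosing disk, minimizing the larger of the two radii.

```python
# Brute-force planar two-center for tiny inputs, built on Welzl's expected
# O(n)-time minimum enclosing disk. Disks are (cx, cy, r) triples.
import math, random

def circle_two(a, b):
    """Disk with segment ab as diameter."""
    return ((a[0]+b[0])/2, (a[1]+b[1])/2, math.dist(a, b)/2)

def circle_three(a, b, c):
    """Circumscribed disk of three points (with a collinear fallback)."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2*(ax*(by-cy) + bx*(cy-ay) + cx*(ay-by))
    if abs(d) < 1e-12:   # (nearly) collinear: widest two-point disk covers all
        return max((circle_two(a, b), circle_two(a, c), circle_two(b, c)),
                   key=lambda t: t[2])
    ux = ((ax*ax+ay*ay)*(by-cy) + (bx*bx+by*by)*(cy-ay) + (cx*cx+cy*cy)*(ay-by))/d
    uy = ((ax*ax+ay*ay)*(cx-bx) + (bx*bx+by*by)*(ax-cx) + (cx*cx+cy*cy)*(bx-ax))/d
    return (ux, uy, math.dist((ux, uy), a))

def covers(disk, p, eps=1e-9):
    return math.dist((disk[0], disk[1]), p) <= disk[2] + eps

def min_enclosing_disk(points):
    """Welzl-style incremental minimum enclosing disk."""
    pts = list(points)
    random.shuffle(pts)
    disk = (0.0, 0.0, 0.0)
    for i, p in enumerate(pts):
        if not covers(disk, p):
            disk = (p[0], p[1], 0.0)
            for j, q in enumerate(pts[:i]):
                if not covers(disk, q):
                    disk = circle_two(p, q)
                    for r in pts[:j]:
                        if not covers(disk, r):
                            disk = circle_three(p, q, r)
    return disk

def two_center_brute(points):
    """Enumerate all bipartitions; minimize the larger of the two radii."""
    pts = list(points)
    n = len(pts)
    best_r, best_split = float('inf'), None
    for mask in range(1 << (n-1)):          # point 0 stays in the first disk
        A = [pts[0]] + [pts[i+1] for i in range(n-1) if not (mask >> i) & 1]
        B = [pts[i+1] for i in range(n-1) if (mask >> i) & 1]
        r = max(min_enclosing_disk(A)[2],
                min_enclosing_disk(B)[2] if B else 0.0)
        if r < best_r:
            best_r, best_split = r, (A, B)
    return best_r, best_split

random.seed(0)
cloud = [(random.random(), random.random()) for _ in range(8)]
print("optimal larger radius:", round(two_center_brute(cloud)[0], 4))
```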
