
We prove that the single-site Glauber dynamics for sampling proper $q$-colorings mixes in $O_\Delta(n\log n)$ time on line graphs with $n$ vertices and maximum degree $\Delta$ when $q>(1+o(1))\Delta$. The main tool in our proof is the matrix trickle-down theorem developed by Abdolazimi, Liu and Oveis Gharan (FOCS, 2021).
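For readers unfamiliar with the chain being analyzed, here is a minimal Python sketch of single-site Glauber dynamics for proper $q$-colorings: pick a uniformly random vertex and resample its color uniformly among the colors not used by its neighbors. The adjacency-dict representation and greedy initialization are illustrative choices, not details from the paper, and nothing here reproduces the line-graph structure or the mixing-time analysis.

```python
import random

def glauber_step(coloring, adj, q):
    """One step of single-site Glauber dynamics for proper q-colorings:
    pick a uniformly random vertex and resample its color uniformly
    from the colors not used by its neighbors."""
    v = random.choice(list(adj))
    blocked = {coloring[u] for u in adj[v]}
    allowed = [c for c in range(q) if c not in blocked]
    if allowed:                      # non-empty whenever q > max degree
        coloring[v] = random.choice(allowed)
    return coloring

def run_glauber(adj, q, steps, seed=0):
    """Run the chain from a greedy initial proper coloring
    (the greedy step assumes q exceeds the maximum degree)."""
    random.seed(seed)
    coloring = {}
    for v in adj:
        used = {coloring[u] for u in adj[v] if u in coloring}
        coloring[v] = next(c for c in range(q) if c not in used)
    for _ in range(steps):
        glauber_step(coloring, adj, q)
    return coloring
```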

Related content

We consider a novel algorithm for the completion of partially observed low-rank matrices in a structured setting where each entry can be chosen from a finite discrete alphabet, as in common recommender systems. The proposed low-rank matrix completion (MC) method is an improved variation of the state-of-the-art (SotA) discrete-aware matrix completion method we previously proposed, in which discreteness is enforced by an $\ell_0$-norm regularizer that is not relaxed to the $\ell_1$-norm but instead approximated by a continuous and differentiable function, normalized via fractional programming (FP) under a proximal gradient (PG) framework. Simulation results demonstrate the superior performance of the new method compared to SotA techniques as well as to the earlier $\ell_1$-norm-based discrete-aware matrix completion approach.
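To make the idea of a smooth discreteness penalty concrete, here is a toy gradient sketch, not the paper's FP/PG method: a factorized matrix is fit to the observed entries while a hypothetical smooth $\ell_0$-like surrogate, $\sum (1-\exp(-d^2/\gamma))$ with $d$ the distance of each entry to the nearest alphabet symbol, pulls entries toward the discrete alphabet. All names, the surrogate, and the plain gradient updates are illustrative assumptions.

```python
import numpy as np

def nearest_symbol(X, alphabet):
    """Nearest discrete alphabet symbol for every entry of X."""
    A = np.asarray(alphabet)
    return A[np.argmin(np.abs(X[..., None] - A), axis=-1)]

def discrete_aware_mc_toy(Y, mask, alphabet, rank, lam=0.5, gamma=0.1,
                          step=1e-2, iters=1000, seed=0):
    """Toy sketch: factorize X = U @ V.T, fit the observed entries of Y,
    and add a smooth l0-like penalty lam * sum(1 - exp(-(X - nearest)^2 / gamma))
    that pulls entries toward the discrete alphabet (nearest symbol treated as
    locally constant when differentiating)."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        X = U @ V.T
        D = X - nearest_symbol(X, alphabet)               # distance to alphabet
        grad_X = mask * (X - Y) + lam * (2 * D / gamma) * np.exp(-D**2 / gamma)
        U, V = U - step * grad_X @ V, V - step * grad_X.T @ U
    return nearest_symbol(U @ V.T, alphabet)              # final discrete rounding
```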

This paper presents a novel self-supervised two-frame multi-camera metric depth estimation network, termed M${^2}$Depth, which is designed to predict reliable scale-aware surrounding depth in autonomous driving. Unlike previous works that use multi-view images from a single time step or multiple time-step images from a single camera, M${^2}$Depth takes temporally adjacent two-frame images from multiple cameras as inputs and produces high-quality surrounding depth. We first construct cost volumes in the spatial and temporal domains individually and propose a spatial-temporal fusion module that integrates the spatial-temporal information to yield a strong volume representation. We additionally combine the neural prior from SAM features with internal features to reduce the ambiguity between foreground and background and to strengthen the depth edges. Extensive experimental results on the nuScenes and DDAD benchmarks show that M${^2}$Depth achieves state-of-the-art performance. More results can be found at //heiheishuang.xyz/M2Depth .

The number of low-weight codewords is critical to the performance of error-correcting codes. In 1970, Kasami and Tokura characterized the codewords of Reed-Muller (RM) codes whose weights are less than $2w_{\min}$, where $w_{\min}$ represents the minimum weight. In this paper, we extend their results to decreasing polar codes. We present the closed-form expressions for the number of codewords in decreasing polar codes with weights less than $2w_{\min}$. Moreover, the proposed enumeration algorithm runs in polynomial time with respect to the code length.

Coresets are arguably the most popular compression paradigm for center-based clustering objectives such as $k$-means. Given a point set $P$, a coreset $\Omega$ is a small, weighted summary that preserves the cost of all candidate solutions $S$ up to a $(1\pm \varepsilon)$ factor. For $k$-means in $d$-dimensional Euclidean space the cost for solution $S$ is $\sum_{p\in P}\min_{s\in S}\|p-s\|^2$. A very popular method for coreset construction, both in theory and practice, is Sensitivity Sampling, where points are sampled in proportion to their importance. We show that Sensitivity Sampling yields optimal coresets of size $\tilde{O}(k\varepsilon^{-2}\min(\sqrt{k},\varepsilon^{-2}))$ for worst-case instances. Uniquely among all known coreset algorithms, for well-clusterable data sets with $\Omega(1)$ cost stability, Sensitivity Sampling gives coresets of size $\tilde{O}(k/\varepsilon^2)$, improving over the worst-case lower bound. Notably, Sensitivity Sampling does not have to know the cost stability in order to exploit it: it is appropriately sensitive to the clusterability of the data set while being oblivious to it. We also show that any coreset for stable instances consisting of only input points must have size $\Omega(k/\varepsilon^2)$. Our results for Sensitivity Sampling also extend to the $k$-median problem, and more general metric spaces.
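As a point of reference, here is a minimal sketch of Sensitivity Sampling for $k$-means. It uses scikit-learn's KMeans for the rough initial solution and the commonly used sensitivity upper bound (a point's cost share plus the inverse size of its cluster); it is a generic textbook-style version under these assumptions, not the analysis or exact construction from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def sensitivity_sampling_coreset(P, k, m, seed=0):
    """Sample an m-point weighted coreset for k-means by Sensitivity Sampling.
    Sensitivities are the usual upper bounds computed from a rough k-means
    solution: the point's share of the clustering cost plus 1 / |its cluster|."""
    km = KMeans(n_clusters=k, n_init=5, random_state=seed).fit(P)
    labels, centers = km.labels_, km.cluster_centers_
    dist2 = np.sum((P - centers[labels]) ** 2, axis=1)       # d^2(p, C)
    cluster_size = np.bincount(labels, minlength=k)[labels]
    sens = dist2 / max(dist2.sum(), 1e-12) + 1.0 / cluster_size
    prob = sens / sens.sum()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(P), size=m, replace=True, p=prob)
    weights = 1.0 / (m * prob[idx])                           # unbiased reweighting
    return P[idx], weights
```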

AMD Instinct$^\text{TM}$ MI300A is the world's first data center accelerated processing unit (APU) with memory shared between the AMD "Zen 4" EPYC$^\text{TM}$ cores and third generation CDNA$^\text{TM}$ compute units. A single memory space offers several advantages: i) it eliminates the need for data replication and costly data transfers, ii) it substantially simplifies application development and allows an incremental acceleration of applications, iii) it is easy to maintain, and iv) its potential can be well realized via the abstractions in the OpenMP 5.2 standard, where the host and the device data environments can be unified in a more performant way. In this article, we provide a blueprint of the APU programming model leveraging unified memory and highlight key distinctions compared to the conventional approach with discrete GPUs. OpenFOAM, an open-source C++ library for computational fluid dynamics, is presented as a case study to emphasize the flexibility and ease of offloading a full-scale production-ready application on MI300 APUs using directive-based OpenMP programming.

For $\tilde{f}(t) = \exp(\frac{\alpha-1}{\alpha}t)$, this paper proposes an $\tilde{f}$-mean information gain measure. R\'{e}nyi divergence is shown to be the maximum $\tilde{f}$-mean information gain incurred at each elementary event $y$ of the channel output $Y$, and Sibson mutual information is the $\tilde{f}$-mean of this $Y$-elementary information gain. Both are proposed as $\alpha$-leakage measures, indicating the most information an adversary can obtain about sensitive data. It is shown that the existing $\alpha$-leakage based on Arimoto mutual information can be expressed as an $\tilde{f}$-mean measure with a scaled probability. Further, Sibson mutual information is interpreted as the maximum $\tilde{f}$-mean information gain over all estimation decisions applied to the channel output. This reveals that the existing generalized Blahut-Arimoto method for computing R\'{e}nyi capacity (or Gallager's error exponent) in fact maximizes an $\tilde{f}$-mean information gain iteratively over the estimation decision and the channel input. This paper also derives a decomposition of the $\tilde{f}$-mean information gain, analogous to the Sibson identity for R\'{e}nyi divergence.
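For reference, the standard discrete-case definitions of R\'{e}nyi divergence and Sibson mutual information that this abstract builds on are (this is standard notation, not notation taken from the paper itself):

$$
D_\alpha(P\|Q)=\frac{1}{\alpha-1}\log\sum_x P(x)^\alpha Q(x)^{1-\alpha},
\qquad
I_\alpha^{S}(X;Y)=\min_{Q_Y} D_\alpha\!\left(P_{XY}\,\middle\|\,P_X\times Q_Y\right)
=\frac{\alpha}{\alpha-1}\log\sum_y\Bigl(\sum_x P_X(x)\,P_{Y|X}(y\mid x)^\alpha\Bigr)^{1/\alpha}.
$$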

In this paper, for any fixed positive integers $t$ and $q>2$, we construct $q$-ary codes correcting a burst of at most $t$ deletions with redundancy $\log n+8\log\log n+o(\log\log n)+\gamma_{q,t}$ bits and near-linear encoding/decoding complexity, where $n$ is the message length and $\gamma_{q,t}$ is a constant that depends only on $q$ and $t$. Previous works constructed such codes with redundancy $\log n+O(\log q\log\log n)$ bits or $\log n+O(t^2\log\log n)+O(t\log q)$ bits. In our new construction, the second-order redundancy term is independent of $q$ and $t$.

In this short note, we give a novel algorithm for $O(1)$ round triangle counting in bounded arboricity graphs. Counting triangles exactly in $O(1)$ rounds is listed as one of the interesting remaining open problems in the recent survey of Im et al. [IKLMV23]. The previous paper of Biswas et al. [BELMR20], which achieved the best bounds under this setting, used $O(\log \log n)$ rounds in sublinear space per machine and $O(m\alpha)$ total space, where $\alpha$ is the arboricity of the graph and $n$ and $m$ are the number of vertices and edges in the graph, respectively. Our new algorithm is very simple, achieves the optimal $O(1)$ rounds without increasing the space per machine or the total space, and has the potential to be easily implemented in practice.
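The MPC algorithm itself is not reproduced here; as background, the following is a sequential Python sketch of the classical arboricity-friendly counting idea (orient each edge toward its higher-degree endpoint, ties broken by vertex id, then count directed wedges), which runs in $O(m\alpha)$ work on a simple undirected edge list with integer vertex ids.

```python
from collections import defaultdict

def count_triangles_degree_ordered(edges):
    """Count triangles by orienting each edge from its lower-degree endpoint
    to its higher-degree endpoint (ties broken by vertex id), then, for each
    directed edge (u, v), counting common out-neighbors of u and v.
    Each triangle is counted exactly once; total work is O(m * alpha)."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    rank = lambda v: (deg[v], v)                 # degree order with id tie-break
    out = defaultdict(set)
    for u, v in edges:
        a, b = (u, v) if rank(u) < rank(v) else (v, u)
        out[a].add(b)
    triangles = 0
    for u in out:
        for v in out[u]:
            triangles += len(out[u] & out[v])    # vertices w with u->w and v->w
    return triangles

# Toy example: a 4-cycle with one chord contains 2 triangles.
print(count_triangles_degree_ordered([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
```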

For nearly three decades, language models derived from the $n$-gram assumption held the state of the art in language modeling. The key to their success lay in the application of various smoothing techniques that served to combat overfitting. However, when neural language models toppled $n$-gram models as the best performers, $n$-gram smoothing techniques became less relevant. Indeed, it would hardly be an understatement to suggest that the line of inquiry into $n$-gram smoothing techniques became dormant. This paper revisits the role that classical $n$-gram smoothing techniques may play in the age of neural language models. First, we draw a formal equivalence between label smoothing, a popular regularization technique for neural language models, and add-$\lambda$ smoothing. Second, we derive a generalized framework for converting any $n$-gram smoothing technique into a regularizer compatible with neural language models. Our empirical results find that our novel regularizers are comparable to and, indeed, sometimes outperform label smoothing on language modeling and machine translation.
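To make the connection concrete, here is a small Python sketch contrasting add-$\lambda$ (Lidstone) smoothing of $n$-gram counts with the label-smoothed target distribution used when training neural language models. It only illustrates the two objects being related; the formal equivalence proved in the paper is not reproduced here.

```python
import numpy as np

def add_lambda_smoothing(counts, lam=1.0):
    """Add-lambda estimate: p(w | h) = (c(h, w) + lam) / (c(h) + lam * V),
    where counts holds c(h, w) for every word w in a vocabulary of size V."""
    counts = np.asarray(counts, dtype=float)
    return (counts + lam) / (counts.sum() + lam * len(counts))

def label_smoothed_target(vocab_size, gold_index, eps=0.1):
    """Label-smoothed training target: (1 - eps) mass on the gold token,
    eps spread uniformly over the whole vocabulary."""
    target = np.full(vocab_size, eps / vocab_size)
    target[gold_index] += 1.0 - eps
    return target

# Example: with a single observed token, both put most mass on the observed
# word and a small uniform floor on everything else.
print(add_lambda_smoothing([1, 0, 0, 0], lam=0.1))
print(label_smoothed_target(4, gold_index=0, eps=0.1))
```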

Click-through rate (CTR) prediction plays a critical role in recommender systems and online advertising. The data used in these applications are multi-field categorical data, where each feature belongs to one field. Field information has been shown to be important, and several works incorporate fields into their models. In this paper, we propose a novel approach to model the field information effectively and efficiently. The proposed approach is a direct improvement of FwFM and is named Field-matrixed Factorization Machines (FmFM, or $FM^2$). We also propose a new interpretation of FM and FwFM within the FmFM framework and compare it with FFM. Besides pruning the cross terms, our model supports field-specific variable dimensions of embedding vectors, which acts as soft pruning. We also propose an efficient way to minimize the dimensions while preserving model performance. The FmFM model can be optimized further by caching the intermediate vectors, so that it takes only thousands of floating-point operations (FLOPs) to make a prediction. Our experimental results show that it can outperform FFM, which is more complex. The FmFM model's performance is also comparable to DNN models, which require far more FLOPs at runtime.
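A small numpy sketch of the field-matrixed pairwise interaction as described, where the interaction between features $i$ and $j$ is $\langle v_i M_{F(i),F(j)}, v_j\rangle$ with a matrix specific to the field pair. This is an illustrative reading of the interaction term only; the linear terms, feature hashing, training, caching of intermediate vectors, and the field-specific embedding dimensions are omitted, and all names below are assumptions.

```python
import numpy as np

def fmfm_interaction(embeddings, fields, field_matrices):
    """Sum of pairwise FmFM interactions  <v_i @ M[F(i), F(j)], v_j>,
    where field_matrices[(f, g)] is a (d_f x d_g) matrix for field pair (f, g),
    embeddings[i] is the embedding of active feature i, and fields[i] its field."""
    score = 0.0
    n = len(embeddings)
    for i in range(n):
        for j in range(i + 1, n):
            M = field_matrices[(fields[i], fields[j])]
            score += float(embeddings[i] @ M @ embeddings[j])
    return score

# Toy example: three active features in fields 0, 1, 2, embedding dimension 4.
rng = np.random.default_rng(0)
emb = [rng.standard_normal(4) for _ in range(3)]
flds = [0, 1, 2]
mats = {(a, b): rng.standard_normal((4, 4)) for a in range(3) for b in range(3)}
print(fmfm_interaction(emb, flds, mats))
```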
