
We define the relative fractional independence number of two graphs, $G$ and $H$, as $$\alpha^*(G|H)=\max_{W}\frac{\alpha(G\boxtimes W)}{\alpha(H\boxtimes W)},$$ where the maximum is taken over all graphs $W$, $G\boxtimes W$ is the strong product of $G$ and $W$, and $\alpha$ denotes the independence number. We give a non-trivial linear program to compute $\alpha^*(G|H)$ and discuss some of its properties. We show that $$\alpha^*(G|H)\geq \frac{X(G)}{X(H)},$$ where $X(G)$ can be the independence number, the zero-error Shannon capacity, the fractional independence number, the Lovász number, or Schrijver's or Szegedy's variants of the Lovász number of a graph $G$. This inequality is the first explicit non-trivial upper bound on the ratio of these invariants for two arbitrary graphs, and it can also be used to obtain upper or lower bounds on the invariants themselves. As explicit applications, we present new upper bounds on the ratio of the zero-error Shannon capacities of two Cayley graphs and compute new lower bounds on the Shannon capacity of certain Johnson graphs (yielding the exact value of their Haemers number). Moreover, we show that the relative fractional independence number can be used to give a stronger version of the well-known No-Homomorphism Lemma. The No-Homomorphism Lemma is widely used to show the non-existence of a homomorphism between two graphs and also gives an upper bound on the independence number of a graph. Our extension of the No-Homomorphism Lemma is computationally more accessible than the original version.
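By definition, any single auxiliary graph $W$ certifies the lower bound $\alpha(G\boxtimes W)/\alpha(H\boxtimes W) \leq \alpha^*(G|H)$. A minimal brute-force sketch of this (illustrative only; it is not the paper's linear program, and the helper names are ours), taking $G = C_5$, $H = K_3$, and $W = C_5$:

```python
from itertools import combinations

def strong_product(g_adj, h_adj):
    """Strong product G x H as an adjacency dict over vertex pairs:
    (u,v) ~ (a,b) iff u=a or u~a in G, and v=b or v~b in H (not both equal)."""
    V = [(u, v) for u in g_adj for v in h_adj]
    adj = {x: set() for x in V}
    for (u, v) in V:
        for (a, b) in V:
            if (u, v) == (a, b):
                continue
            if (u == a or a in g_adj[u]) and (v == b or b in h_adj[v]):
                adj[(u, v)].add((a, b))
    return adj

def alpha(adj):
    """Independence number by brute force (fine only for tiny graphs)."""
    V = list(adj)
    best = 0
    for k in range(1, len(V) + 1):
        if not any(
            all(b not in adj[a] for a, b in combinations(S, 2))
            for S in combinations(V, k)
        ):
            return best
        best = k
    return best

# C5 (the 5-cycle) and K3 (the triangle) as adjacency dicts
C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
K3 = {i: {j for j in range(3) if j != i} for i in range(3)}

print(alpha(strong_product(C5, C5)))  # 5  (Shannon's classical value)
print(alpha(strong_product(K3, C5)))  # 2
```

With $W = C_5$ the certified ratio is $5/2 = 2.5$, already better than the trivial choice $W = K_1$, which only gives $\alpha(C_5)/\alpha(K_3) = 2$.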

Related content

The quantization problem aims to find the best possible approximation of probability measures on ${\mathbb{R}}^d$ using finite, discrete measures. The Wasserstein distance is a typical choice to measure the quality of the approximation. This contribution investigates the properties and robustness of the entropy-regularized quantization problem, which relaxes the standard quantization problem. The proposed approximation technique naturally adopts the softmin function, which is well known for its robustness from both theoretical and practical standpoints. Moreover, we use the entropy-regularized Wasserstein distance to evaluate the quality of the soft quantization problem's approximation, and we implement a stochastic gradient approach to compute the optimal solutions. The control parameter in our proposed method allows for the adjustment of the optimization problem's difficulty level, providing significant advantages when dealing with exceptionally challenging problems of interest. Finally, this contribution empirically illustrates the performance of the method in various settings.
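As a toy illustration of the softmin idea, here is a one-dimensional soft-$k$-means-style sketch with a single-sample stochastic gradient (this is not the paper's entropy-regularized Wasserstein formulation; all parameter names and values are assumptions):

```python
import math
import random

def softmin_weights(x, centers, eps):
    """Softmin assignment of point x to the centers (temperature eps).
    Subtracting the min distance keeps the exponentials numerically safe."""
    d = [(x - c) ** 2 for c in centers]
    m = min(d)
    w = [math.exp(-(di - m) / eps) for di in d]
    s = sum(w)
    return [wi / s for wi in w]

def soft_quantize(data, k, eps=0.05, lr=0.1, steps=2000, seed=0):
    """Approximate data by k centers via stochastic gradient on a
    softmin-relaxed quantization objective."""
    rng = random.Random(seed)
    srt = sorted(data)
    # deterministic spread-out initialization over the data range
    centers = [srt[i * (len(srt) - 1) // max(k - 1, 1)] for i in range(k)]
    for _ in range(steps):
        x = rng.choice(data)                  # one-sample stochastic gradient
        w = softmin_weights(x, centers, eps)
        for j in range(k):                    # each center moves softly toward x
            centers[j] += lr * w[j] * (x - centers[j])
    return sorted(centers)

data = [0.0, 0.1, -0.1, 10.0, 10.1, 9.9]      # two well-separated clusters
print(soft_quantize(data, k=2))               # centers near 0 and near 10
```

The temperature `eps` plays the role of the control parameter: large values blur the assignments (an easier, smoother problem), while `eps` near zero recovers hard nearest-center quantization.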

Event-based sensors, distinguished by their high temporal resolution of 1$\mathrm{\mu s}$ and a dynamic range of 120$\mathrm{dB}$, stand out as ideal tools for deployment in fast-paced settings like vehicles and drones. Traditional object detection techniques that utilize Artificial Neural Networks (ANNs) face challenges due to the sparse and asynchronous nature of the events these sensors capture. In contrast, Spiking Neural Networks (SNNs) offer a promising alternative, providing a temporal representation that is inherently aligned with event-based data. This paper explores the unique membrane potential dynamics of SNNs and their ability to modulate sparse events. We introduce an innovative spike-triggered adaptive threshold mechanism designed for stable training. Building on these insights, we present a specialized spiking feature pyramid network (SpikeFPN) optimized for automotive event-based object detection. Comprehensive evaluations demonstrate that SpikeFPN surpasses both traditional SNNs and advanced ANNs enhanced with attention mechanisms. Notably, SpikeFPN achieves a mean Average Precision (mAP) of 0.477 on the {GEN1 Automotive Detection (GAD)} benchmark dataset, marking a significant increase of 9.7\% over the previous best SNN. Moreover, the efficient design of SpikeFPN ensures robust performance while optimizing computational resources, attributed to its innate sparse computation capabilities.
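The membrane dynamics described above can be illustrated with a toy scalar leaky integrate-and-fire neuron whose threshold jumps after each spike and decays back (a minimal sketch of a spike-triggered adaptive threshold, not SpikeFPN's actual mechanism; all constants are assumptions):

```python
def lif_adaptive(inputs, tau=0.9, v_th0=1.0, beta=0.5, decay=0.95):
    """Leaky integrate-and-fire neuron with a spike-triggered adaptive
    threshold: firing raises the threshold, which then relaxes toward
    its resting value v_th0 between spikes."""
    v, v_th, spikes = 0.0, v_th0, []
    for x in inputs:
        v = tau * v + x                    # leaky membrane integration
        if v >= v_th:
            spikes.append(1)
            v = 0.0                        # hard reset after a spike
            v_th += beta                   # spike raises the threshold
        else:
            spikes.append(0)
            v_th = v_th0 + decay * (v_th - v_th0)   # threshold relaxes back
    return spikes

print(lif_adaptive([0.6] * 10))  # [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
```

Under a constant input the inter-spike intervals lengthen (spikes at steps 2, 5, 9): the rising threshold suppresses repeated firing, which is the stabilizing effect the adaptive mechanism is after.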

We introduce two new classes of measures of information for statistical experiments which generalise and subsume $\phi$-divergences, integral probability metrics, $\mathfrak{N}$-distances (MMD), and $(f,\Gamma)$ divergences between two or more distributions. This enables us to derive a simple geometrical relationship between measures of information and the Bayes risk of a statistical decision problem, thus extending the variational $\phi$-divergence representation to multiple distributions in an entirely symmetric manner. The new families of divergence are closed under the action of Markov operators which yields an information processing equality which is a refinement and generalisation of the classical data processing inequality. This equality gives insight into the significance of the choice of the hypothesis class in classical risk minimization.
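Of the measures subsumed by these families, MMD (an $\mathfrak{N}$-distance) is the easiest to estimate from samples. A standard biased V-statistic sketch with a Gaussian kernel (generic textbook estimator, not the paper's generalized construction):

```python
import math

def mmd2(xs, ys, k):
    """Squared MMD between samples xs and ys under kernel k
    (biased V-statistic: E k(x,x') + E k(y,y') - 2 E k(x,y))."""
    m, n = len(xs), len(ys)
    kxx = sum(k(a, b) for a in xs for b in xs) / (m * m)
    kyy = sum(k(a, b) for a in ys for b in ys) / (n * n)
    kxy = sum(k(a, b) for a in xs for b in ys) / (m * n)
    return kxx + kyy - 2 * kxy

rbf = lambda a, b: math.exp(-(a - b) ** 2)   # Gaussian (RBF) kernel

same = mmd2([0.0, 0.1, -0.1], [0.05, -0.05, 0.0], rbf)
far = mmd2([0.0, 0.1, -0.1], [5.0, 5.1, 4.9], rbf)
print(same < far)  # True: distant samples give a larger discrepancy
```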

Given $n$-vertex simple graphs $X$ and $Y$, the friends-and-strangers graph $\mathsf{FS}(X, Y)$ has as its vertices all $n!$ bijections from $V(X)$ to $V(Y)$, with bijections $\sigma, \tau$ adjacent if and only if they differ on two adjacent elements of $V(X)$ whose mappings are adjacent in $Y$. We consider the setting where $X$ and $Y$ are both edge-subgraphs of $K_{r,r}$: due to a parity obstruction, $\mathsf{FS}(X,Y)$ is always disconnected in this setting. Sharpening a result of Bangachev, we show that if $X$ and $Y$ respectively have minimum degrees $\delta(X)$ and $\delta(Y)$ and they satisfy $\delta(X) + \delta(Y) \geq \lfloor 3r/2 \rfloor + 1$, then $\mathsf{FS}(X,Y)$ has exactly two connected components. This proves that the cutoff for $\mathsf{FS}(X,Y)$ to avoid isolated vertices is equal to the cutoff for $\mathsf{FS}(X,Y)$ to have exactly two connected components. We also consider a probabilistic setup in which we fix $Y$ to be $K_{r,r}$, but randomly generate $X$ by including each edge in $K_{r,r}$ independently with probability $p$. Invoking a result of Zhu, we exhibit a phase transition phenomenon with threshold function $(\log r)/r$: below the threshold, $\mathsf{FS}(X,Y)$ has more than two connected components with high probability, while above the threshold, $\mathsf{FS}(X,Y)$ has exactly two connected components with high probability. Altogether, our results settle a conjecture and completely answer two problems of Alon, Defant, and Kravitz.
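The construction and the component count are easy to check at the smallest scale $r = 2$: there $\lfloor 3r/2 \rfloor + 1 = 4$, and $X = Y = K_{2,2}$ satisfies $\delta(X) + \delta(Y) = 4$, so the result above predicts exactly two components. A brute-force sketch (helper names are ours):

```python
from itertools import permutations

def fs_components(X_edges, Y_edges, n):
    """Number of connected components of FS(X, Y), both graphs on {0..n-1}.
    A move swaps sigma's values on an X-edge {a,b} when those two values
    are adjacent in Y."""
    X = {frozenset(e) for e in X_edges}
    Y = {frozenset(e) for e in Y_edges}

    def neighbors(s):
        for e in X:
            a, b = tuple(e)
            if frozenset((s[a], s[b])) in Y:
                t = list(s)
                t[a], t[b] = t[b], t[a]
                yield tuple(t)

    seen, comps = set(), 0
    for s in permutations(range(n)):
        if s in seen:
            continue
        comps += 1
        stack, _ = [s], seen.add(s)
        while stack:                       # DFS over one component
            for w in neighbors(stack.pop()):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
    return comps

# K_{2,2} with sides {0,1} and {2,3}
K22 = [(0, 2), (0, 3), (1, 2), (1, 3)]
print(fs_components(K22, K22, 4))  # 2
```

The two components match the parity obstruction: each move multiplies $\sigma$ by a transposition while flipping the number of side-crossings, so the product of the two parities is invariant.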

The combined universal probability $\mathbf{m}(D)$ of strings $x$ in sets $D$ is close to $\max_{x \in D} \mathbf{m}(x)$: their logarithms differ by at most $D$'s information $\mathbf{I}(D:\mathcal{H})$ about the halting sequence $\mathcal{H}$.

An adjacency sketching or implicit labeling scheme for a family $\cal F$ of graphs is a method that defines, for any $n$-vertex $G \in \cal F$, an assignment of labels to the vertices of $G$ so that the labels of two vertices determine whether or not they are adjacent. The goal is to come up with labeling schemes that use as few bits as possible to represent the labels. By using randomness when assigning labels, it is sometimes possible to produce adjacency sketches with much smaller label sizes, but this comes at the cost of introducing some probability of error. Both deterministic and randomized labeling schemes have been extensively studied, as they have applications for distributed data structures and deeper connections to universal graphs and communication complexity. The main question of interest is which graph families have schemes using short labels, usually $O(\log n)$ in the deterministic case or constant for randomized sketches. In this work we consider the resilience of probabilistic adjacency sketches against an adversary making adaptive queries to the labels. This differs from the previously analyzed probabilistic setting, which is ``one-shot''. We show that in the adaptive adversarial case the size of the labels is tightly related to the maximal degree of the graphs in $\cal F$. This results in a stronger characterization compared to what is known in the non-adversarial setting. In more detail, we construct sketches that fail with probability $\varepsilon$ for graphs with maximal degree $d$ using $2d\log (1/\varepsilon)$ bit labels and show that this is roughly the best that can be done for any specific graph of maximal degree $d$, e.g.\ a $d$-ary tree.
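A minimal illustration of a randomized adjacency labeling for bounded-degree graphs: each vertex stores a random ID plus the IDs of its neighbors, and the decoder tests membership. This is a simpler scheme than the $2d\log(1/\varepsilon)$-bit construction above, and its errors are one-sided (false positives only, through ID collisions):

```python
import random

def make_sketches(adj, bits, seed=1):
    """Assign each vertex a random `bits`-bit ID; its label is the pair
    (own ID, set of neighbours' IDs). Label size is (d + 1) * bits."""
    rng = random.Random(seed)
    ids = {v: rng.getrandbits(bits) for v in adj}
    return {v: (ids[v], {ids[u] for u in adj[v]}) for v in adj}

def probe(sk, u, v):
    """Adjacency test from labels alone: does one vertex's ID appear in
    the other's neighbour list? Wrong only if distinct IDs collide."""
    idu, nu = sk[u]
    idv, nv = sk[v]
    return idv in nu or idu in nv

# a 3-ary star: center 0 with leaves 1..3 (maximal degree d = 3)
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
sk = make_sketches(adj, bits=32)
print(probe(sk, 0, 1), probe(sk, 1, 2))  # adjacent vs. non-adjacent
```

Raising `bits` drives the collision (error) probability down exponentially, mirroring the $\log(1/\varepsilon)$ factor in the label-size bound.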

Given a continuous definable function $f: S \to \mathbb{R}$ on a definable set $S$, we study sublevel sets of the form $S^f_t = \{x \in S: f(x) \leq t\}$ for all $t \in \mathbb{R}$. Using o-minimal structures, we prove that the Euler characteristic of $S^f_t$ is right continuous with respect to $t$. Furthermore, when $S$ is compact, we show that $S^f_{t+\delta}$ deformation retracts to $S^f_t$ for all sufficiently small $\delta > 0$. Applying these results, we also characterize the relationship between the concepts of Euler characteristic transform and smooth Euler characteristic transform in topological data analysis.
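For a piecewise-linear toy case, the behavior of $t \mapsto \chi(S^f_t)$ can be computed directly on a triangulated domain. The sketch below works on a path (vertices and edges only), entering each simplex once all of its vertices are in the sublevel set; this discrete setting is an illustration, not the o-minimal one studied above:

```python
def euler_sublevel(vertex_vals, edges, t):
    """Euler characteristic (V - E) of the sublevel complex at level t:
    a vertex enters when f(v) <= t, an edge when both endpoints have."""
    V = sum(1 for f in vertex_vals if f <= t)
    E = sum(1 for (a, b) in edges if max(vertex_vals[a], vertex_vals[b]) <= t)
    return V - E

# a "W"-shaped function sampled on a path of 5 vertices
vals = [0.0, 2.0, 1.0, 2.0, 0.0]
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
for t in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(t, euler_sublevel(vals, edges, t))
```

The printed values (2, 2, 3, 3, 1) form a step function that jumps exactly at critical values and is constant just to the right of each jump, matching the right-continuity statement.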

We propose a general algorithm of constructing an extended formulation for any given set of linear constraints with integer coefficients. Our algorithm consists of two phases: first construct a decision diagram $(V,E)$ that somehow represents a given $m \times n$ constraint matrix, and then build an equivalent set of $|E|$ linear constraints over $n+|V|$ variables. That is, the size of the resultant extended formulation depends not explicitly on the number $m$ of the original constraints, but on its decision diagram representation. Therefore, we may significantly reduce the computation time for optimization problems with integer constraint matrices by solving them under the extended formulations, especially when we obtain concise decision diagram representations for the matrices. We can apply our method to $1$-norm regularized hard margin optimization over the binary instance space $\{0,1\}^n$, which can be formulated as a linear programming problem with $m$ constraints with $\{-1,0,1\}$-valued coefficients over $n$ variables, where $m$ is the size of the given sample. Furthermore, introducing slack variables over the edges of the decision diagram, we establish a variant formulation of soft margin optimization. We demonstrate the effectiveness of our extended formulations for integer programming and the $1$-norm regularized soft margin optimization tasks over synthetic and real datasets.

We propose and analyze exact and inexact regularized Newton-type methods for finding a global saddle point of \emph{convex-concave} unconstrained min-max optimization problems. Compared to first-order methods, our understanding of second-order methods for min-max optimization is relatively limited, as obtaining global rates of convergence with second-order information is much more involved. In this paper, we examine how second-order information can be used to speed up extra-gradient methods, even under inexactness. Specifically, we show that the proposed algorithms generate iterates that remain within a bounded set and the averaged iterates converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function. Our algorithms match the theoretically established lower bound in this context and our analysis provides a simple and intuitive convergence analysis for second-order methods without any boundedness requirements. Finally, we present a series of numerical experiments on synthetic and real data that demonstrate the efficiency of the proposed algorithms.
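For contrast with the second-order methods studied here, the classical first-order extra-gradient scheme already converges on the bilinear toy problem $\min_x \max_y \, xy$, where plain gradient descent-ascent spirals away from the saddle point at the origin (a sketch with an assumed step size):

```python
def extragradient(grad_x, grad_y, x, y, lr=0.5, steps=100):
    """Extra-gradient for min_x max_y f(x, y): take a predictor half-step
    to a midpoint, then update the real iterate using the gradient
    evaluated at that midpoint."""
    for _ in range(steps):
        xm = x - lr * grad_x(x, y)         # predictor (midpoint) step
        ym = y + lr * grad_y(x, y)
        x = x - lr * grad_x(xm, ym)        # corrector step, gradients at midpoint
        y = y + lr * grad_y(xm, ym)
    return x, y

# f(x, y) = x * y: convex in x, concave in y, saddle point at (0, 0)
x, y = extragradient(lambda x, y: y, lambda x, y: x, 1.0, 1.0)
print(abs(x) < 1e-3, abs(y) < 1e-3)  # True True
```

On this problem each extra-gradient step contracts the squared distance to the saddle point by the factor $(1-\gamma^2)^2 + \gamma^2 < 1$, whereas the plain simultaneous update expands it by $1 + \gamma^2$.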

Consider the normal linear regression setup when the number of covariates p is much larger than the sample size n, and the covariates form correlated groups. The response variable y is not related to an entire group of covariates on an all-or-none basis; rather, the sparsity assumption persists both within and between groups. We extend the traditional g-prior setup to this framework. Variable selection consistency of the proposed method is shown under fairly general conditions, assuming the covariates to be random and allowing the true model to grow with both n and p. For the implementation of the proposed g-prior method in the high-dimensional setup, we propose two procedures: first, a group screening procedure, termed group SIS (GSIS), and secondly, a novel stochastic search variable selection algorithm, termed the group informed variable selection algorithm (GiVSA), which uses the known group structure efficiently to explore the model space without discarding any covariate based on an initial screening. Screening consistency of GSIS and the theoretical mixing time of GiVSA are studied using the canonical path ensemble approach of Yang et al. (2016). Performance of the proposed prior with the implementation of GSIS as well as GiVSA is validated using various simulated examples and a real dataset related to residential buildings.
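A minimal sketch of a marginal-correlation group screen in the spirit of GSIS (the actual GSIS statistic and thresholds are in the paper; this toy version simply ranks groups by the average squared marginal correlation of their members with y):

```python
def group_screen(X, y, groups, keep):
    """Rank covariate groups by average squared marginal correlation
    with y and keep the top `keep` groups (a GSIS-style screen)."""
    n = len(y)
    ybar = sum(y) / n

    def corr2(j):
        xj = [row[j] for row in X]
        xbar = sum(xj) / n
        sxy = sum((a - xbar) * (b - ybar) for a, b in zip(xj, y))
        sxx = sum((a - xbar) ** 2 for a in xj)
        syy = sum((b - ybar) ** 2 for b in y)
        return sxy * sxy / (sxx * syy) if sxx * syy > 0 else 0.0

    scores = {g: sum(corr2(j) for j in js) / len(js)
              for g, js in groups.items()}
    return sorted(scores, key=scores.get, reverse=True)[:keep]

# toy data: y depends on group "a" (columns 0-1), not group "b" (columns 2-3)
X = [[1, 2, 0, 1], [2, 3, 1, 0], [3, 5, 0, 0], [4, 7, 1, 1], [5, 8, 0, 1]]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
print(group_screen(X, y, {"a": [0, 1], "b": [2, 3]}, keep=1))  # ['a']
```

Because the score is computed per group rather than per covariate, a group whose members are individually weak but jointly informative is retained or dropped as a unit, which is what allows the subsequent search (GiVSA in the paper) to exploit the group structure.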
