
The (bitwise) complement $\overline{x}$ of a binary word $x$ is obtained by changing each $0$ in $x$ to $1$ and vice versa. An $\textit{antisquare}$ is a nonempty word of the form $x\, \overline{x}$. In this paper, we study infinite binary words that do not contain arbitrarily large antisquares. For example, we show that the repetition threshold for the language of infinite binary words containing exactly two distinct antisquares is $(5+\sqrt{5})/2$. We also study repetition thresholds for related classes, where "two" in the previous sentence is replaced by a larger number. We say a binary word is $\textit{good}$ if the only antisquares it contains are $01$ and $10$. We characterize the minimal antisquares, that is, those antisquares all of whose proper factors are good. We determine the growth rate of the number of good words of length $n$, and we determine the repetition threshold separating polynomial from exponential growth of the number of good words.
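A minimal Python sketch of the definitions above (our illustration, not code from the paper; the helper names `complement`, `is_antisquare`, and `is_good` are ours):

```python
def complement(x: str) -> str:
    """Bitwise complement of a binary word: swap 0s and 1s."""
    return "".join("1" if c == "0" else "0" for c in x)

def is_antisquare(w: str) -> bool:
    """True iff w = x + complement(x) for some nonempty word x."""
    n = len(w)
    return n > 0 and n % 2 == 0 and w[n // 2:] == complement(w[:n // 2])

def is_good(w: str) -> bool:
    """True iff the only antisquare factors of w are 01 and 10."""
    for i in range(len(w)):
        for j in range(i + 4, len(w) + 1, 2):  # antisquare factors of even length >= 4
            if is_antisquare(w[i:j]):
                return False
    return True

print(is_antisquare("0110"))  # True: 01 followed by its complement 10
print(is_good("0101"))        # True: its only antisquare factors are 01 and 10
print(is_good("1001"))        # False: 1001 itself is a forbidden antisquare
```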

Related Content

An absent factor of a string $w$ is a string $u$ that does not occur as a contiguous substring (a.k.a. factor) of $w$. We extend this well-studied notion and define absent subsequences: a string $u$ is an absent subsequence of a string $w$ if $u$ does not occur as a subsequence (a.k.a. scattered factor) of $w$. Of particular interest to us are minimal absent subsequences, i.e., absent subsequences all of whose proper subsequences do occur in $w$, and shortest absent subsequences, i.e., absent subsequences of minimal length. We show a series of combinatorial and algorithmic results regarding these two notions. For instance: we give combinatorial characterisations of the sets of minimal and, respectively, shortest absent subsequences in a word, as well as compact representations of these sets; we show how to test efficiently whether a string is a shortest or minimal absent subsequence of a word, and we give efficient algorithms computing the lexicographically smallest absent subsequence of each kind; also, we show how a data structure for answering shortest-absent-subsequence queries for the factors of a given string can be computed efficiently.
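A brute-force Python sketch of these notions (our illustration; the function names are ours, and the exhaustive search is exponential, unlike the paper's efficient algorithms):

```python
from itertools import product

def is_subsequence(u: str, w: str) -> bool:
    """Does u occur as a (scattered) subsequence of w?"""
    it = iter(w)
    return all(c in it for c in u)  # the membership test advances the iterator

def shortest_absent_subsequences(w: str, alphabet: str = "ab") -> list[str]:
    """All shortest absent subsequences of w, found by increasing length."""
    k = 1
    while True:
        absent = ["".join(u) for u in product(alphabet, repeat=k)
                  if not is_subsequence("".join(u), w)]
        if absent:
            return absent
        k += 1

# "abba" contains every length-2 word over {a, b} as a subsequence,
# so its shortest absent subsequences have length 3.
print(shortest_absent_subsequences("abba"))
# ['aaa', 'aab', 'baa', 'bab', 'bbb']
```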

We introduce Clifford Group Equivariant Neural Networks: a novel approach for constructing $\mathrm{O}(n)$- and $\mathrm{E}(n)$-equivariant models. We identify and study the $\textit{Clifford group}$, a subgroup inside the Clifford algebra whose definition we adjust to achieve several favorable properties. Primarily, the group's action forms an orthogonal automorphism that extends beyond the typical vector space to the entire Clifford algebra while respecting the multivector grading. This leads to several non-equivalent subrepresentations corresponding to the multivector decomposition. Furthermore, we prove that the action respects not just the vector space structure of the Clifford algebra but also its multiplicative structure, i.e., the geometric product. These findings imply that every polynomial in multivectors, including their grade projections, constitutes an equivariant map with respect to the Clifford group, allowing us to parameterize equivariant neural network layers. An advantage worth mentioning is that we obtain expressive layers that can elegantly generalize to inner-product spaces of any dimension. We demonstrate, notably from a single core implementation, state-of-the-art performance on several distinct tasks, including a three-dimensional $n$-body experiment, a four-dimensional Lorentz-equivariant high-energy physics experiment, and a five-dimensional convex hull experiment.
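A toy numerical check of the key compatibility property (the action respects the geometric product), in the two-dimensional Clifford algebra $\mathrm{Cl}(2,0)$ with basis $[1, e_1, e_2, e_{12}]$; this is our own sketch, not the paper's implementation:

```python
import numpy as np

def gp(a, b):
    """Geometric product in Cl(2,0), where e1*e1 = e2*e2 = 1 and e12 = e1*e2."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([
        a0*b0 + a1*b1 + a2*b2 - a3*b3,  # scalar part
        a0*b1 + a1*b0 - a2*b3 + a3*b2,  # e1 part
        a0*b2 + a2*b0 + a1*b3 - a3*b1,  # e2 part
        a0*b3 + a3*b0 + a1*b2 - a2*b1,  # e12 part
    ])

def rotate(theta, m):
    """Orthogonal action: rotate the grade-1 part, fix scalar and pseudoscalar."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([m[0], c*m[1] - s*m[2], s*m[1] + c*m[2], m[3]])

rng = np.random.default_rng(0)
a, b, theta = rng.normal(size=4), rng.normal(size=4), 0.7
# The action distributes over the geometric product (an algebra automorphism):
print(np.allclose(rotate(theta, gp(a, b)),
                  gp(rotate(theta, a), rotate(theta, b))))  # True
```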

Subset Sum Ratio is the following optimization problem: given a set $I$ of $n$ positive numbers, find disjoint subsets $X,Y \subseteq I$ minimizing the ratio $\max\{\Sigma(X)/\Sigma(Y),\Sigma(Y)/\Sigma(X)\}$, where $\Sigma(Z)$ denotes the sum of all elements of $Z$. Subset Sum Ratio is an optimization variant of the Equal Subset Sum problem. It was introduced by Woeginger and Yu in '92 and is known to admit an FPTAS [Bazgan, Santha, Tuza '98]. The best approximation schemes before this work had running time $O(n^4/\varepsilon)$ [Melissinos, Pagourtzis '18], $\tilde O(n^{2.3}/\varepsilon^{2.6})$ and $\tilde O(n^2/\varepsilon^3)$ [Alonistiotis et al. '22]. In this work, we present an improved approximation scheme for Subset Sum Ratio running in time $O(n / \varepsilon^{0.9386})$. Here we assume that the items are given in sorted order; otherwise we need an additional $O(n \log n)$ running time for sorting. Our improved running time simultaneously improves the dependence on $n$ to linear and the dependence on $1/\varepsilon$ to sublinear. For comparison, the related Subset Sum problem admits an approximation scheme running in time $O(n/\varepsilon)$ [Gens, Levner '79], and an approximation scheme with running time $\tilde O(n / \varepsilon^{0.99})$ for Subset Sum would falsify the Strong Exponential Time Hypothesis [Abboud, Bringmann, Hermelin, Shabtay '19] as well as the Min-Plus-Convolution Hypothesis [Bringmann, Nakos '21]. We thus establish that Subset Sum Ratio admits faster approximation schemes than Subset Sum. This comes as a surprise, since at every point in time before this work the best known approximation scheme for Subset Sum Ratio was slower than the best known approximation scheme for Subset Sum.
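For concreteness, an exact exponential-time baseline for the objective in Python (our illustration; the paper's contribution is the near-linear approximation scheme, not this $O(3^n)$ search):

```python
from itertools import product

def subset_sum_ratio(items):
    """Exhaustive search: assign each item to X, to Y, or to neither."""
    best = float("inf")
    for assign in product((0, 1, 2), repeat=len(items)):
        sx = sum(v for v, a in zip(items, assign) if a == 1)
        sy = sum(v for v, a in zip(items, assign) if a == 2)
        if sx > 0 and sy > 0:
            best = min(best, max(sx / sy, sy / sx))
    return best

print(subset_sum_ratio([1, 3, 4, 7]))  # 1.0: X = {3, 4} and Y = {7} have equal sums
```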

Consider $\ell_{\alpha}$-regularized linear regression, also termed Bridge regression. For $\alpha\in (0,1)$, Bridge regression enjoys several statistical properties of interest, such as sparsity and near-unbiasedness of the estimates (Fan and Li, 2001). However, the main difficulty lies in the non-convex nature of the penalty for these values of $\alpha$, which makes optimization challenging; usually only a local optimum can be found. To address this issue, Polson et al. (2013) took a sampling-based, fully Bayesian approach to this problem, using the correspondence between the Bridge penalty and a power exponential prior on the regression coefficients. However, their sampling procedure relies on Markov chain Monte Carlo (MCMC) techniques, which are inherently sequential and do not scale to large problem dimensions. Cross-validation approaches are similarly computation-intensive. Our contribution is a novel \emph{non-iterative} method to fit a Bridge regression model. Its core is an explicit formula for Stein's unbiased risk estimate (SURE) for the out-of-sample prediction risk of Bridge regression, which can then be optimized to select the desired tuning parameters, allowing us to completely bypass MCMC as well as computation-intensive cross-validation. Our procedure yields results in a fraction of the computation time required by iterative schemes, without any appreciable loss in statistical performance. An R implementation is publicly available at https://github.com/loriaJ/Sure-tuned_BridgeRegression.
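To make the optimization difficulty concrete, here is a small Python sketch of the Bridge objective (ours; it does not implement the paper's SURE formula, and the grid search merely stands in for a local optimizer):

```python
import numpy as np

def bridge_objective(beta, X, y, tau, alpha):
    """(1/2) * ||y - X @ beta||^2 + tau * sum_j |beta_j|**alpha, 0 < alpha < 1."""
    resid = y - X @ beta
    return 0.5 * resid @ resid + tau * np.sum(np.abs(beta) ** alpha)

# Even in one dimension the penalty |b|**alpha is non-convex near 0,
# so we locate the global minimizer on a grid.
X, y = np.array([[1.0]]), np.array([1.0])
grid = np.linspace(-2.0, 2.0, 4001)
vals = [bridge_objective(np.array([b]), X, y, tau=0.5, alpha=0.5) for b in grid]
print(grid[int(np.argmin(vals))])  # the nonzero grid minimizer, roughly 0.7
```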

Optimal Transport is a useful metric for comparing probability distributions and for computing a pairing given a ground cost. Its entropically regularized variant (eOT) is crucial for fast algorithms and for reflecting fuzzy/noisy matchings. This work focuses on Inverse Optimal Transport (iOT), the problem of inferring the ground cost from samples drawn from a coupling that solves an eOT problem. It is a relevant problem that can be used to infer unobserved/missing links and to obtain meaningful information about the structure of the ground cost yielding the pairing. On one side, iOT benefits from convexity; on the other side, being ill-posed, it requires regularization to handle the sampling noise. This work presents an in-depth theoretical study of $\ell_1$ regularization, which models, for instance, Euclidean costs with sparse interactions between features. Specifically, we derive a sufficient condition for the robust recovery of the sparsity of the ground cost, which can be seen as a far-reaching generalization of the Lasso's celebrated Irrepresentability Condition. To provide additional insight into this condition, we work out the Gaussian case in detail. We show that as the entropic penalty varies, the iOT problem interpolates between a graphical Lasso and a classical Lasso, thereby establishing a connection between iOT and graph estimation, an important problem in ML.
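The forward problem that iOT inverts can be sketched in a few lines: given a ground cost, Sinkhorn iterations compute the eOT coupling. This is our own minimal sketch of standard entropic OT, not the paper's inverse procedure:

```python
import numpy as np

def sinkhorn(C, a, b, eps, iters=1000):
    """Entropic OT coupling for cost C, marginals a and b, entropic penalty eps."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)  # alternately match the column marginal ...
        u = a / (K @ v)    # ... and the row marginal
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
C = rng.random((5, 5))                   # a ground cost; iOT would try to recover it
a = np.full(5, 0.2); b = np.full(5, 0.2)
P = sinkhorn(C, a, b, eps=0.1)
print(P.sum(axis=1), P.sum(axis=0))      # both close to the prescribed marginals
```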

Quantum multiplication is a fundamental operation in quantum computing. Most existing quantum multipliers require $O(n)$ qubits to multiply two $n$-bit integers, limiting their applicability for multiplying large integers on near-term quantum computers. In this paper, we propose the Quantum Multiplier Based on Exponent Adder (QMbead), a new approach that addresses this limitation by requiring just $\log_2(n)$ qubits to multiply two $n$-bit integers. QMbead uses a so-called exponent encoding to represent the two multiplicands as two superposition states, employs a quantum adder to obtain the sum of these two superposition states, and then measures the outputs of the quantum adder to calculate the product of the multiplicands. This paper presents two types of quantum adders based on the quantum Fourier transform (QFT) for use in QMbead. The circuit depth of QMbead is determined by the chosen quantum adder and is $O(\log^2 n)$ when using the two QFT-based adders. If a logarithmic-depth quantum adder is used, the time complexity of QMbead is $O(n \log n)$, matching that of the fastest classical multiplication algorithm, the Harvey-van der Hoeven algorithm. Interestingly, QMbead maintains an advantage over the Harvey-van der Hoeven algorithm, since the latter is only suitable for extremely large numbers, whereas QMbead is valid for both small and large numbers. The multiplicands can be either integers or decimal numbers. QMbead has been successfully implemented on quantum simulators to compute products with a bit length of up to 273 bits using only 17 qubits. This establishes QMbead as an efficient solution for multiplying large integers or decimal numbers with many bits.
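The arithmetic identity behind the exponent encoding can be emulated classically: the product equals a sum of powers $2^{i+j}$ over set-bit positions, and the quantum adder's role is to produce the sums $i+j$, each of which fits in roughly $\lceil\log_2 n\rceil$ bits, which is the source of the qubit savings. A classical Python emulation (our illustration, not the quantum circuit):

```python
from collections import Counter

def qmbead_classical(A: int, B: int) -> int:
    """Emulate exponent encoding: A * B = sum of 2**(i + j) over
    set-bit positions i of A and j of B."""
    sa = [i for i in range(A.bit_length()) if (A >> i) & 1]
    sb = [j for j in range(B.bit_length()) if (B >> j) & 1]
    tally = Counter(i + j for i in sa for j in sb)  # the quantum adder's job
    return sum(count * 2**e for e, count in tally.items())

print(qmbead_classical(27, 14), 27 * 14)  # 378 378
```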

This paper studies the prediction of a target $\mathbf{z}$ from a pair of random variables $(\mathbf{x},\mathbf{y})$, where the ground-truth predictor is additive: $\mathbb{E}[\mathbf{z} \mid \mathbf{x},\mathbf{y}] = f_\star(\mathbf{x}) + g_{\star}(\mathbf{y})$. We study the performance of empirical risk minimization (ERM) over functions $f+g$, $f \in F$ and $g \in G$, fit on a given training distribution but evaluated on a test distribution that exhibits covariate shift. We show that, when the class $F$ is "simpler" than $G$ (measured, e.g., in terms of its metric entropy), our predictor is more resilient to $\textbf{heterogeneous covariate shifts}$ in which the shift in $\mathbf{x}$ is much greater than that in $\mathbf{y}$. Our analysis proceeds by demonstrating that ERM behaves $\textbf{qualitatively similarly to orthogonal machine learning}$: the rate at which ERM recovers the $f$-component of the predictor has only a lower-order dependence on the complexity of the class $G$, adjusted for the partial non-identifiability introduced by the additive structure. These results rely on a novel H\"older-style inequality for the Dudley integral which may be of independent interest. Moreover, we corroborate our theoretical findings with experiments demonstrating improved resilience to shifts in "simpler" features across numerous domains.
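A small simulation in the spirit of the result (our sketch with made-up classes: a well-specified linear $f$ and a cubic-polynomial $g$; not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
f_star = lambda x: 2.0 * x            # the "simple" component
g_star = lambda y: np.sin(3.0 * y)    # the "complex" component

def make_data(n, x_shift=0.0):
    x = rng.normal(loc=x_shift, size=n)
    y = rng.uniform(-1.0, 1.0, size=n)
    return x, y, f_star(x) + g_star(y) + 0.1 * rng.normal(size=n)

def features(x, y):
    # ERM over additive models: f linear in x, g a cubic polynomial in y.
    return np.column_stack([x, np.ones_like(x), y, y**2, y**3])

x, y, z = make_data(2000)
coef, *_ = np.linalg.lstsq(features(x, y), z, rcond=None)

# Shift x only, leaving y alone: a heterogeneous covariate shift.
xs, ys, zs = make_data(2000, x_shift=3.0)
mse = np.mean((features(xs, ys) @ coef - zs) ** 2)
print(mse)  # stays close to the in-distribution error: the shifted feature
            # enters only through the well-specified f-component
```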

Consider $k\le n$ agents in a simple, connected, and undirected graph $G=(V,E)$ with $n$ nodes and $m$ edges. The goal of the dispersion problem is to move these $k$ agents to distinct nodes. Agents can communicate only when they are at the same node, and no other means of communication, such as whiteboards, are available. We assume that the agents operate synchronously. We consider two scenarios: when all agents are initially located at a single node (the rooted setting) and when they are initially distributed over one or more nodes (the general setting). Kshemkalyani and Sharma presented a dispersion algorithm for the general setting that uses $O(m_k)$ time and $O(\log(k+\delta))$ bits of memory per agent [OPODIS 2021]. Here, $m_k$ is the maximum number of edges in any induced subgraph of $G$ with $k$ nodes, and $\delta$ is the maximum degree of $G$. This algorithm is the fastest in the literature, as no algorithm with $o(m_k)$ time has been discovered even for the rooted setting. In this paper, we present faster algorithms for both the rooted and general settings. First, we present an algorithm for the rooted setting that solves the dispersion problem in $O(k\log \min(k,\delta))=O(k\log k)$ time using $O(\log \delta)$ bits of memory per agent. Next, we propose an algorithm for the general setting that achieves dispersion in $O(k (\log k)\cdot (\log \min(k,\delta)))=O(k \log^2 k)$ time using $O(\log (k+\delta))$ bits.
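To fix ideas, a toy simulation of the rooted setting (our sketch: a centralized DFS that settles one agent per newly discovered node; the paper's agents act with only local communication and small memory):

```python
def disperse_rooted(adj, root, k):
    """Settle min(k, n) agents on distinct nodes by DFS from the root."""
    settled, stack, seen = [], [root], {root}
    while stack and len(settled) < k:
        u = stack.pop()
        settled.append(u)  # one agent settles at each newly visited node
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return settled

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(disperse_rooted(adj, root=0, k=3))  # [0, 2, 1]: three distinct nodes
```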

We consider an online binary prediction setting where a forecaster observes a sequence of $T$ bits one by one. Before each bit is revealed, the forecaster predicts the probability that the bit is $1$. The forecaster is called well-calibrated if, for each $p \in [0, 1]$, among the $n_p$ bits for which the forecaster predicts probability $p$, the actual number of ones, $m_p$, is indeed equal to $p \cdot n_p$. The calibration error, defined as $\sum_p |m_p - p n_p|$, quantifies the extent to which the forecaster deviates from being well-calibrated. It has long been known that an $O(T^{2/3})$ calibration error is achievable even when the bits are chosen adversarially, possibly based on the previous predictions. However, little is known on the lower bound side, except for an $\Omega(\sqrt{T})$ bound that follows from the trivial example of independent fair coin flips. In this paper, we prove an $\Omega(T^{0.528})$ bound on the calibration error, which, to the best of our knowledge, is the first super-$\sqrt{T}$ lower bound for this setting. The technical contributions of our work include two lower bound techniques, early stopping and sidestepping, which circumvent the obstacles that have previously hindered strong calibration lower bounds. We also propose an abstraction of the prediction setting, termed the Sign-Preservation game, which may be of independent interest. This game has a much smaller state space than the full prediction setting and allows simpler analyses. The $\Omega(T^{0.528})$ lower bound follows from a general reduction theorem that translates lower bounds on the game value of Sign-Preservation into lower bounds on the calibration error.
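The calibration error itself is straightforward to compute; a short Python sketch of the definition (ours):

```python
from collections import defaultdict

def calibration_error(preds, bits):
    """Sum over distinct predicted p of |m_p - p * n_p|."""
    n = defaultdict(int)  # n_p: rounds on which p was predicted
    m = defaultdict(int)  # m_p: ones among those rounds
    for p, bit in zip(preds, bits):
        n[p] += 1
        m[p] += bit
    return sum(abs(m[p] - p * n[p]) for p in n)

print(calibration_error([0.5, 0.5, 0.8, 0.8, 0.8], [1, 0, 1, 1, 0]))
# |1 - 0.5*2| + |2 - 0.8*3| = 0 + 0.4
```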

In this paper, we prove the following non-linear generalization of the classical Sylvester-Gallai theorem. Let $\mathbb{K}$ be an algebraically closed field of characteristic $0$, and let $\mathcal{F}=\{F_1,\ldots,F_m\} \subset \mathbb{K}[x_1,\ldots,x_N]$ be a set of irreducible homogeneous polynomials of degree at most $d$ such that $F_i$ is not a scalar multiple of $F_j$ for $i\neq j$. Suppose that for any two distinct $F_i,F_j\in \mathcal{F}$, there is some $k\neq i,j$ such that $F_k\in \mathrm{rad}(F_i,F_j)$. We prove that such radical SG configurations must be low-dimensional. More precisely, we show that there exists a function $\lambda : \mathbb{N} \to \mathbb{N}$, independent of $\mathbb{K}$, $N$ and $m$, such that any such configuration $\mathcal{F}$ must satisfy $$ \dim (\mathrm{span}_{\mathbb{K}}{\mathcal{F}}) \leq \lambda(d). $$ Our result confirms a conjecture of Gupta [Gup14, Conjecture 2] and generalizes the quadratic and cubic Sylvester-Gallai theorems of [S20, OS22]. Our result takes us one step closer to the first deterministic polynomial-time algorithm for the Polynomial Identity Testing (PIT) problem for depth-4 circuits of bounded top and bottom fan-in. Our result, when combined with the Stillman uniformity type results of [AH20a, DLL19, ESS21], yields uniform bounds for several algebraic invariants such as projective dimension, Betti numbers, and Castelnuovo-Mumford regularity of ideals generated by radical SG configurations.
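For intuition, the classical (linear) Sylvester-Gallai condition that this paper generalizes can be checked directly for planar point sets; a small Python sketch (ours):

```python
def collinear(p, q, r, tol=1e-9):
    """p, q, r lie on one line iff the determinant of differences vanishes."""
    return abs((q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])) < tol

def is_sg_configuration(pts):
    """Does every line through two of the points contain a third point?"""
    n = len(pts)
    return all(any(collinear(pts[i], pts[j], pts[k])
                   for k in range(n) if k not in (i, j))
               for i in range(n) for j in range(i + 1, n))

# By the classical theorem, a finite real SG configuration must be collinear.
print(is_sg_configuration([(0, 0), (1, 0), (2, 0), (3, 0)]))  # True
print(is_sg_configuration([(0, 0), (1, 0), (0, 1)]))          # False
```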
