
We investigate a structural generalisation of treewidth that we call $\mathcal{A}$-blind-treewidth, where $\mathcal{A}$ denotes an annotated graph class. This width parameter is defined by evaluating only the size of those bags $B$ of a tree-decomposition of a graph $G$ for which ${(G,B) \notin \mathcal{A}}$. We focus on two cases: (i) $\mathcal{A}$ is the class $\mathcal{B}$ of all pairs ${(G,X)}$ such that no odd cycle in $G$ contains more than one vertex of ${X \subseteq V(G)}$, and (ii) $\mathcal{A}$ is the class $\mathcal{B}$ together with the class $\mathcal{P}$ of all pairs ${(G,X)}$ such that the "torso" of $X$ in $G$ is planar. For both classes, $\mathcal{B}$ and ${\mathcal{B} \cup \mathcal{P}}$, we obtain analogues of the Grid Theorem of Robertson and Seymour, together with FPT-algorithms that either compute a decomposition of small width or correctly determine that the width of a given graph is large. Moreover, we present FPT-algorithms for Maximum Independent Set on graphs of bounded $\mathcal{B}$-blind-treewidth and for Maximum Cut on graphs of bounded ${(\mathcal{B}\cup\mathcal{P})}$-blind-treewidth.


We investigate shift-invariant vectorial Boolean functions on $n$ bits that are induced from Boolean functions on $k$ bits, for $k\leq n$. We consider such functions that are not necessarily permutations but are, in some sense, almost bijective, and we study their cryptographic properties. In this context, we define an almost lifting as a Boolean function for which there is an upper bound, independent of $n$, on the number of collisions of its induced functions. We show that if a Boolean function with diameter $k$ is an almost lifting, then the maximum number of collisions of its induced functions is $2^{k-1}$ for any $n$. Moreover, we search for functions in the class of almost liftings that have good cryptographic properties and for which the non-bijectivity does not cause major security weaknesses. These functions generalize the well-known map $\chi$ used in the Keccak hash function.
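As a concrete point of reference (not part of the abstract), the map $\chi$ lifts the 3-bit local rule $g(a,b,c) = a \oplus ((b \oplus 1) \wedge c)$ shift-invariantly to $n$ bits. The following is a minimal Python sketch, with function names of our own choosing, of the induced map and the collision count that the notion of almost lifting controls:

```python
def chi(x, n):
    # Keccak-style chi: the 3-bit local rule g(a, b, c) = a XOR ((NOT b) AND c),
    # applied shift-invariantly at every position i (indices mod n).
    bits = [(x >> i) & 1 for i in range(n)]
    out = [bits[i] ^ ((bits[(i + 1) % n] ^ 1) & bits[(i + 2) % n])
           for i in range(n)]
    return sum(b << i for i, b in enumerate(out))

def collision_count(f, n):
    # Collisions of the induced map on n bits:
    # number of inputs minus number of distinct outputs.
    image = {f(x, n) for x in range(2 ** n)}
    return 2 ** n - len(image)

for n in range(2, 8):
    print(n, collision_count(chi, n))  # 0 exactly when n is odd
```

This illustrates the well-known fact that $\chi$ is a permutation precisely for odd $n$; for even $n$ the induced map has collisions, which is the behaviour the almost-lifting bound quantifies.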

We explore the theoretical possibility of learning $d$-dimensional targets with $W$-parameter models by gradient flow (GF) when $W<d$. Our main result shows that if the targets are described by a particular $d$-dimensional probability distribution, then there exist models with as few as two parameters that can learn the targets with arbitrarily high success probability. On the other hand, we show that for $W<d$ there is necessarily a large subset of GF-non-learnable targets. In particular, the set of learnable targets is not dense in $\mathbb R^d$, and any subset of $\mathbb R^d$ homeomorphic to the $W$-dimensional sphere contains non-learnable targets. Finally, we observe that the model in our main theorem on almost guaranteed two-parameter learning is constructed using a hierarchical procedure and as a result is not expressible by a single elementary function. We show that this limitation is essential in the sense that most models written in terms of elementary functions cannot achieve the learnability demonstrated in this theorem.

We propose a new numerical method for $\alpha$-dissipative solutions of the Hunter-Saxton equation, where $\alpha$ belongs to $W^{1, \infty}(\mathbb{R}, [0, 1))$. The method combines a projection operator with a generalized method of characteristics and an iteration scheme, which is based on enforcing minimal time steps whenever breaking times cluster. Numerical examples illustrate that these minimal time steps increase the efficiency of the algorithm substantially. Moreover, convergence of the wave profile is shown in $C([0, T], L^{\infty}(\mathbb{R}))$ for any finite $T \geq 0$.

QAC$^0$ is the class of constant-depth quantum circuits with polynomially many ancillary qubits, where Toffoli gates on arbitrarily many qubits are allowed. In this work, we show that the parity function cannot be computed in QAC$^0$, resolving an open problem in quantum circuit complexity that has stood for more than twenty years. As a result, this proves ${\rm QAC}^0 \subsetneqq {\rm QAC}_{\rm wf}^0$. We also show that any QAC circuit of depth $d$ that approximately computes parity on $n$ bits requires $2^{\widetilde{\Omega}(n^{1/d})}$ ancillary qubits, which is close to tight. This implies a similar lower bound on approximately preparing cat states using QAC circuits. Finally, we prove a quantum analog of the Linial-Mansour-Nisan theorem for QAC$^0$. This implies that, for any depth-$d$ QAC$^0$ circuit $U$ with $a={\rm poly}(n)$ ancillary qubits and any $x\in\{0,1\}^n$, the correlation between $Q(x)$ and the parity function is bounded by ${1}/{2} + 2^{-\widetilde{\Omega}(n^{1/d})}$, where $Q(x)$ denotes the output of measuring the output qubit of $U|x,0^a\rangle$. All of the above consequences rely on the following technical result: if $U$ is a QAC$^0$ circuit with $a={\rm poly}(n)$ ancillary qubits, then there is a distribution $\mathcal{D}$ over bounded polynomials of degree ${\rm polylog}(n)$ such that, with high probability, a random polynomial from $\mathcal{D}$ approximates the function $\langle x,0^a| U^\dag Z_{n+1} U |x,0^a\rangle$ for a large fraction of $x\in \{0,1\}^n$. This result is analogous to the Razborov-Smolensky result on the approximation of AC$^0$ circuits by random low-degree polynomials.

Statistical learning under distribution shift is challenging when neither prior knowledge nor fully accessible data from the target distribution is available. Distributionally robust learning (DRL) aims to control the worst-case statistical performance within an uncertainty set of candidate distributions, but how to properly specify the set remains challenging. To enable distributional robustness without being overly conservative, in this paper, we propose a shape-constrained approach to DRL, which incorporates prior information about the way in which the unknown target distribution differs from its estimate. More specifically, we assume the unknown density ratio between the target distribution and its estimate is isotonic with respect to some partial order. At the population level, we provide a solution to the shape-constrained optimization problem that does not involve the isotonic constraint. At the sample level, we provide consistency results for an empirical estimator of the target in a range of different settings. Empirical studies on both synthetic and real data examples demonstrate the improved accuracy of the proposed shape-constrained approach.
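In the totally ordered (one-dimensional) special case, the isotonic constraint above reduces to classical monotone least-squares regression, which is computable by the pool-adjacent-violators algorithm (PAVA). The following self-contained sketch is our own illustration of that projection step, under the assumption of a total order; the paper itself treats general partial orders, which this does not cover:

```python
def pava(y, w=None):
    # Pool Adjacent Violators: least-squares projection of the sequence y
    # onto the cone of nondecreasing sequences, with optional weights w.
    if w is None:
        w = [1.0] * len(y)
    # Each block is [value, total weight, count of pooled points].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge adjacent blocks while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2, c2 = blocks.pop()
            v1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(v1 * w1 + v2 * w2) / wt, wt, c1 + c2])
    out = []
    for v, _, c in blocks:
        out.extend([v] * c)
    return out
```

For example, `pava([3, 1, 2])` pools all three points into the constant sequence `[2.0, 2.0, 2.0]`, the closest nondecreasing sequence in least squares.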

We consider the problem of counting the copies of a length-$k$ pattern $\sigma$ in a sequence $f \colon [n] \to \mathbb{R}$, where a copy is a tuple of indices $i_1 < \ldots < i_k$ in $[n]$ such that $f(i_j) < f(i_\ell)$ if and only if $\sigma(j) < \sigma(\ell)$. This problem is motivated by a range of connections and applications in ranking, nonparametric statistics, combinatorics, and fine-grained complexity, especially when $k$ is a small fixed constant. Recent advances have significantly improved our understanding of counting and detecting patterns. Guillemot and Marx [2014] demonstrated that the detection variant is solvable in $O(n)$ time for any fixed $k$. Their proof laid the foundations for the discovery of twin-width, a concept that has notably advanced parameterized complexity in recent years. Counting, in contrast, is harder: it has a conditional lower bound of $n^{\Omega(k / \log k)}$ [Berendsohn, Kozma, and Marx 2019] and is expected to be polynomially harder than detection as early as $k = 4$, given its equivalence to counting $4$-cycles in graphs [Dudek and Gawrychowski, 2020]. In this work, we design a deterministic near-linear time $(1+\varepsilon)$-approximation algorithm for counting $\sigma$-copies in $f$ for all $k \leq 5$. Combined with the conditional lower bound for $k=4$, this establishes the first known separation between approximate and exact algorithms for pattern counting. Interestingly, our algorithm leverages the Birg\'e decomposition -- a sublinear tool for monotone distributions widely used in distribution testing -- which, to our knowledge, has not been applied in a pattern counting context before.
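The definition of a copy can be made concrete with a brute-force counter. This is only a sketch of the problem statement, not the paper's near-linear algorithm, and it assumes the sequence has distinct values (ties match no pattern under the strict "iff" condition):

```python
from itertools import combinations

def count_pattern(sigma, f):
    # Count copies of the length-k pattern sigma (a permutation of 1..k,
    # given as a tuple) in the sequence f: index tuples i_1 < ... < i_k
    # with f(i_j) < f(i_l) iff sigma(j) < sigma(l).  Brute force, O(n^k).
    k = len(sigma)
    return sum(
        1
        for idx in combinations(range(len(f)), k)
        if all((f[idx[j]] < f[idx[l]]) == (sigma[j] < sigma[l])
               for j in range(k) for l in range(j + 1, k))
    )

print(count_pattern((2, 1), [3, 1, 2]))  # copies of the pattern 21, i.e. inversions: 2
```

For $k = 2$ this counts inversions or non-inversions; already for $k = 4$ the exact version is conditionally as hard as counting $4$-cycles, which is what motivates the approximation algorithm above.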

In decision-making, maxitive functions are used for worst-case and best-case evaluations. Maxitivity gives rise to a rich structure that is well-studied in the context of the pointwise order. In this article, we investigate maxitivity with respect to general preorders and provide a representation theorem for such functionals. The results are illustrated for different stochastic orders in the literature, including the usual stochastic order, the increasing convex/concave order, and the dispersive order.

We observe an unknown regression function of $d$ variables $f(\boldsymbol{t})$, $\boldsymbol{t} \in[0,1]^d$, in the Gaussian white noise model of intensity $\varepsilon>0$. We assume that the function $f$ is regular and that it is a sum of $k$-variate functions, where $k$ varies from $1$ to $s$ ($1\leq s\leq d$). These functions are unknown to us and only a few of them are nonzero. In this article, we address the problem of identifying the nonzero components of $f$ in the case when $d=d_\varepsilon\to \infty$ as $\varepsilon\to 0$ and $s$ is either fixed or $s=s_\varepsilon\to \infty$ with $s=o(d)$ as $\varepsilon\to 0$. This may be viewed as a variable selection problem. We derive conditions under which exact variable selection in the model at hand is possible and provide a selection procedure that achieves this type of selection. The procedure is adaptive to the degree of model sparsity described by the sparsity parameter $\beta\in(0,1)$. We also derive conditions under which exact variable selection is impossible. Our results augment previous work in this area.

We develop two novel couplings between general pure-jump L\'evy processes in $\mathbb{R}^d$ and apply them to obtain upper bounds on the rate of convergence in an appropriate Wasserstein distance on the path space for a wide class of L\'evy processes attracted to a multidimensional stable process in the small-time regime. We also establish general lower bounds based on certain universal properties of slowly varying functions and the relationship between the Wasserstein and Toscani--Fourier distances of the marginals. Our upper and lower bounds typically have matching rates. In particular, the rate of convergence is polynomial for the domain of normal attraction and slower than a slowly varying function for the domain of non-normal attraction.

In this article, we study the problem of recovering symmetric $m$-tensor fields (including vector fields) supported in a unit disk $\mathbb{D}$ from a set of generalized V-line transforms, namely longitudinal, transverse, and mixed V-line transforms, and their integral moments. We work in a circular geometric setup, where the V-lines have vertices on a circle, and the axis of symmetry is orthogonal to the circle. We present two approaches to recover a symmetric $m$-tensor field from the combination of longitudinal, transverse, and mixed V-line transforms. With the help of these inversion results, we are able to give an explicit kernel description for these transforms. We also derive inversion algorithms to reconstruct a symmetric $m$-tensor field from its first $(m+1)$ moment longitudinal/transverse V-line transforms.
