
The subset cover problem for $k \geq 1$ hash functions, which can be seen as an extension of the collision problem, was introduced in 2002 by Reyzin and Reyzin to analyze the security of their hash-function-based signature scheme HORS. The security of many hash-based signature schemes relies on this problem or a variant of it (e.g. HORS, SPHINCS, SPHINCS+, \dots). Recently, Yuan, Tibouchi and Abe (2022) introduced a variant of the subset cover problem, called the restricted subset cover problem, and proposed a quantum algorithm for it. In this work, we prove that any quantum algorithm needs to make $\Omega\left(k^{-\frac{2^{k-1}}{2^k-1}}\cdot N^{\frac{2^{k-1}-1}{2^k-1}}\right)$ queries to the underlying hash functions to solve the restricted subset cover problem, which essentially matches the query complexity of the algorithm proposed by Yuan, Tibouchi and Abe. We also analyze the security of the general $(r,k)$-subset cover problem, the underlying problem whose hardness implies the unforgeability of HORS under an $r$-chosen-message attack (for $r \geq 1$). We prove that a generic quantum algorithm needs to make $\Omega\left(N^{k/5}\right)$ queries to the underlying hash functions to find a $(1,k)$-subset cover. We also propose a quantum algorithm that finds an $(r,k)$-subset cover making $O\left(N^{k/(2+2r)}\right)$ queries to the $k$ hash functions.
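To make the search target concrete, below is a minimal classical brute-force sketch for the restricted subset cover problem, in the common formulation where one seeks messages $m_0, m_1, \dots, m_k$ with $m_i \neq m_0$ and $h_i(m_i) = h_i(m_0)$ for every $i$ (for $k=1$ this is exactly collision finding). The SHA-256-derived toy hash functions and parameters are illustrative assumptions, and the loop makes no attempt at the quantum query complexities discussed above.

```python
import hashlib
from itertools import count

def make_hashes(k, N):
    """k toy hash functions with range [0, N), obtained from SHA-256
    by domain separation (illustrative stand-ins only)."""
    def h(i):
        return lambda m: int.from_bytes(
            hashlib.sha256(bytes([i]) + m).digest(), "big") % N
    return [h(i) for i in range(k)]

def restricted_subset_cover(k, N):
    """Brute force: find m0 and m1..mk (all distinct from m0) such
    that h_i(m_i) == h_i(m0) for every i."""
    hs = make_hashes(k, N)
    seen = [{} for _ in range(k)]        # seen[i]: digest -> earlier message
    for n in count():
        m0 = n.to_bytes(8, "big")
        digests = [h(m0) for h in hs]
        if all(d in seen[i] for i, d in enumerate(digests)):
            return m0, [seen[i][d] for i, d in enumerate(digests)]
        for i, d in enumerate(digests):
            seen[i].setdefault(d, m0)    # keep the first preimage seen

m0, partners = restricted_subset_cover(2, 2**10)
print(m0.hex(), [m.hex() for m in partners])
```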

Related Content

In this paper, we study the design and analysis of a class of efficient algorithms for computing the Gromov-Wasserstein (GW) distance, tailored to large-scale graph learning tasks. Armed with the Luo-Tseng error bound condition~\citep{luo1992error}, we show that the two proposed algorithms, Bregman Alternating Projected Gradient (BAPG) and hybrid Bregman Proximal Gradient (hBPG), enjoy convergence guarantees. Building on task-specific properties, our analysis further provides novel theoretical insights to guide the selection of the best-fit method. As a result, we are able to provide comprehensive experiments validating the effectiveness of our methods on a host of tasks, including graph alignment, graph partition, and shape matching. In terms of both wall-clock time and modeling performance, the proposed methods achieve state-of-the-art results.
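For reference, the objective that such solvers drive down can be evaluated cheaply at any coupling via the standard quadratic decomposition of the squared-loss GW discrepancy; the NumPy sketch below is an illustration of that objective, not an implementation of BAPG or hBPG.

```python
import numpy as np

def gw_objective(C1, C2, P):
    """Squared-loss GW objective at a coupling P:
    sum_{i,j,k,l} (C1[i,k] - C2[j,l])**2 * P[i,j] * P[k,l],
    computed via the standard quadratic decomposition."""
    p, q = P.sum(axis=1), P.sum(axis=0)   # marginals of the coupling
    const = p @ (C1**2) @ p + q @ (C2**2) @ q
    cross = np.sum((C1 @ P @ C2.T) * P)
    return const - 2.0 * cross

# Sanity check: identical 3-point spaces matched by the identity coupling.
C = np.array([[0., 1., 2.], [1., 0., 1.], [2., 1., 0.]])
print(gw_objective(C, C, np.eye(3) / 3.0))  # 0.0 for a perfect match
```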

Finite dynamical systems (FDSs) are commonly used to model systems with a finite number of states that evolve deterministically and at discrete time steps. Considered up to isomorphism, these correspond to functional graphs. As such, FDSs have a sum and a product operation, which correspond to the direct sum and direct product of their respective graphs; the collection of FDSs endowed with these operations then forms a semiring. The algebraic structure of the product of FDSs is particularly interesting. For instance, an FDS can be factorised if and only if it is composed of two sub-systems running in parallel. In this work, we further the understanding of the factorisation, division, and root finding problems for FDSs. Firstly, an FDS $A$ is cancellative if one can divide by it unambiguously, i.e. $AX = AY$ implies $X = Y$. We prove that an FDS $A$ is cancellative if and only if it has a fixpoint. Secondly, we prove that if an FDS $A$ has a $k$-th root (i.e. $B$ such that $B^k = A$), then it is unique. Thirdly, unlike the integers, the monoid of FDSs under the product does not have unique factorisation into irreducibles. We instead exhibit a large class of monoids of FDSs with unique factorisation. To obtain our main results, we introduce the unrolling of an FDS, which can be viewed as a space-time expansion of the system. This allows us to work with (possibly infinite) trees, where the product is easier to handle than its counterpart for FDSs.
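The semiring operations and the fixpoint criterion are easy to make concrete; in the Python sketch below (the successor-list encoding is our choice), an FDS on states $\{0,\dots,n-1\}$ is a list $f$ with $f[x]$ the successor of $x$.

```python
from itertools import product

def fds_sum(f, g):
    """Direct sum: disjoint union of the two functional graphs,
    with the states of g shifted past those of f."""
    n = len(f)
    return f + [y + n for y in g]

def fds_product(f, g):
    """Direct product: componentwise action on pairs, with the pair
    (x, y) encoded as the single state x * len(g) + y."""
    m = len(g)
    return [f[x] * m + g[y] for x, y in product(range(len(f)), range(m))]

def is_cancellative(f):
    """Criterion proved above: an FDS is cancellative in the product
    monoid if and only if it has a fixpoint."""
    return any(f[x] == x for x in range(len(f)))

A = [0, 0, 1]                 # a path falling into the fixpoint 0
print(is_cancellative(A))     # True
print(fds_product(A, A))      # the 9-state system A * A
```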

Linear computation broadcast (LCBC) refers to a setting with $d$-dimensional data stored at a central server, where $K$ users, each with some prior linear side-information, wish to retrieve various linear combinations of the data. The goal is to determine the minimum amount of information that must be broadcast to satisfy all the users. The reciprocal of the optimal broadcast cost is the capacity of LCBC. The capacity is known for up to $K=3$ users. Since LCBC includes index coding as a special case, large-$K$ settings of LCBC are at least as hard as the index coding problem. Instead of the general setting (all instances), by focusing on the generic setting (almost all instances), this work shows that the generic capacity of the symmetric LCBC (where every user has $m'$ dimensions of side-information and $m$ dimensions of demand) for a large number of users ($K>d$ suffices) is $C_g=1/\Delta_g$, where $\Delta_g=\min\left\{\max\{0,d-m'\}, \frac{dm}{m+m'}\right\}$ is the broadcast cost that is both achievable and unbeatable asymptotically almost surely for large $n$, among all LCBC instances with the given parameters $p,K,d,m,m'$. Relative to baseline schemes of random coding or separate transmissions, $C_g$ shows an extremal gain by a factor of $K$ as a function of the number of users, and by a factor of $\approx d/4$ as a function of the data dimension, when optimized over the remaining parameters. For an arbitrary number of users, the generic capacity of the symmetric LCBC is characterized within a factor of $2$.
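To make the expression concrete, a short sketch evaluating $\Delta_g$ and $C_g = 1/\Delta_g$ for hypothetical parameter values:

```python
from fractions import Fraction

def generic_broadcast_cost(d, m, mp):
    """Delta_g = min( max(0, d - m'), d*m/(m + m') ), measured in data
    dimensions; mp stands for m'."""
    return min(Fraction(max(0, d - mp)), Fraction(d * m, m + mp))

# Example: d = 12 data dimensions, each user demands m = 2 and already
# holds m' = 4 dimensions of linear side-information.
delta = generic_broadcast_cost(12, 2, 4)
print(delta, 1 / delta)   # Delta_g = 4, so C_g = 1/4
```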

Offline reinforcement learning (RL) leverages previously collected data for policy optimization without any further active exploration. Despite the recent interest in this problem, its theoretical results in neural network function approximation settings remain elusive. In this paper, we study the statistical theory of offline RL with deep ReLU network function approximation. In particular, we establish the sample complexity of $n = \tilde{\mathcal{O}}( H^{4 + 4 \frac{d}{\alpha}} \kappa_{\mu}^{1 + \frac{d}{\alpha}} \epsilon^{-2 - 2\frac{d}{\alpha}} )$ for offline RL with deep ReLU networks, where $\kappa_{\mu}$ is a measure of distributional shift, $H = (1-\gamma)^{-1}$ is the effective horizon length, $d$ is the dimension of the state-action space, $\alpha$ is a (possibly fractional) smoothness parameter of the underlying Markov decision process (MDP), and $\epsilon$ is a user-specified error. Notably, our sample complexity holds under two novel considerations: the Besov dynamic closure and the correlated structure. While the Besov dynamic closure subsumes the dynamic conditions for offline RL in the prior works, the correlated structure renders the prior works of offline RL with general/neural network function approximation improper or inefficient in long (effective) horizon problems. To the best of our knowledge, this is the first theoretical characterization of the sample complexity of offline RL with deep neural network function approximation under the general Besov regularity condition that goes beyond the linearity regime of traditional reproducing kernel Hilbert spaces and Neural Tangent Kernels.
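To read off the scaling, the sketch below evaluates the bound with the polylog factors of $\tilde{\mathcal{O}}$ dropped; the parameter values are hypothetical.

```python
def offline_rl_sample_complexity(H, kappa, eps, d, alpha):
    """n = H^(4 + 4d/a) * kappa^(1 + d/a) * eps^(-2 - 2d/a), ignoring
    the polylog factors hidden in the O-tilde."""
    r = d / alpha
    return H ** (4 + 4 * r) * kappa ** (1 + r) * eps ** (-2 - 2 * r)

# A smoother MDP (larger alpha relative to d) sharply reduces the bound:
for alpha in (1.0, 2.0, 4.0):
    n = offline_rl_sample_complexity(H=10, kappa=2, eps=0.1, d=4, alpha=alpha)
    print(f"alpha = {alpha}: n ~ {n:.3g}")
```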

Over the past 27 years, quantum computing has seen a huge rise in interest from both academia and industry. At the current rate, quantum computers are growing rapidly in size, backed by an increase of research in the field. Significant efforts are being made to improve the reliability of quantum hardware and to develop suitable software to program quantum computers. In contrast, the verification of quantum programs has received comparatively little attention. Verifying programs is especially important in the quantum setting, owing to how difficult it is to program complex algorithms correctly on resource-constrained and error-prone quantum hardware. Research into creating verification frameworks for quantum programs has seen recent development, with a variety of tools implemented using a collection of theoretical ideas. This survey aims to be a short introduction to the area of formal verification of quantum programs, bringing together the theory and tools developed to date. Further, this survey examines some of the challenges that the field may face in the future, namely the development of complex quantum algorithms.

According to the public goods game (PGG) protocol, participants decide freely whether they want to contribute to a common pool or not, but the resulting benefit is distributed equally. A conceptually similar dilemma emerges when participants decide whether to claim a common resource whose associated cost is covered equally by all group members. The latter establishes a reversed form of the original public goods game (R-PGG). In this work, we show that R-PGG is equivalent to PGG in several circumstances, from the traditional analysis, via the evolutionary approach in unstructured populations, to Monte Carlo simulations in structured populations. However, there are also cases where the behavior of R-PGG can differ surprisingly from the outcome of PGG. When the key parameters are heterogeneous, for instance, the results of PGG and R-PGG can diverge even if we apply the same amplitude of heterogeneity. We find that heterogeneity in R-PGG generally impedes cooperation, while the opposite is observed for PGG. These diverse system reactions can be understood by following how the payoff functions change when heterogeneity is introduced into the parameter space. This analysis also reveals the distinct roles of the cooperator and defector strategies in the mentioned games. Our observations will hopefully stimulate further research checking the potential differences between PGG and R-PGG under alternative, more complex conditions.
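The two payoff structures can be sketched side by side; the R-PGG parameterization below (private benefit $b$ per claimant, total provisioning cost $r\,b\,n_{\mathrm{claim}}$ split equally) is our illustrative assumption, not necessarily the paper's exact normalization.

```python
def pgg_payoffs(n_coop, G, r, c=1.0):
    """Standard PGG: n_coop members contribute c each; the pool is
    multiplied by r and shared equally among all G members.
    Returns (cooperator payoff, defector payoff)."""
    share = r * c * n_coop / G
    return share - c, share

def rpgg_payoffs(n_claim, G, r, b=1.0):
    """Reversed PGG (illustrative parameterization): each of the
    n_claim claimants takes a private benefit b, and the total cost
    r * b * n_claim is covered equally by all G members.
    Returns (claimant payoff, non-claimant payoff)."""
    cost_share = r * b * n_claim / G
    return b - cost_share, -cost_share

print(pgg_payoffs(3, 5, 2.0))   # contributing in a half-cooperative group
print(rpgg_payoffs(2, 5, 2.0))  # claiming in a group with two claimants
```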

Software implements a significant proportion of the functionality in factory automation. Efficient development and the reuse of software parts, so-called units, therefore enhance competitiveness. However, complex control software units are difficult to understand, leading to increased development, testing, and maintenance costs. Measuring complexity is challenging due to the many different, subjective views on the topic. This paper compares different complexity definitions from the literature and examines the complexity perception of domain experts through a qualitative questionnaire study; the experts confirm the importance of objective measures for comparing complexity. The paper proposes a set of metrics that measure various classes of software complexity to identify the most complex software units as a prerequisite for refactoring. The metrics cover complexity caused by size, data structure, control flow, information flow, and lexical structure. Unlike most approaches in the literature, the metrics are compliant with both the graphical and textual languages of the IEC 61131-3 standard. Further, a concept for interpreting the metric results is presented. A comprehensive evaluation with industrial software from two German plant manufacturers validates the metrics' suitability for measuring complexity.
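As an illustration of two of those metric classes (size and control flow), here is a small sketch for IEC 61131-3 Structured Text; the keyword list and the cyclomatic-style counting are our assumptions, not the paper's exact metric definitions.

```python
import re

# Branching keywords of Structured Text used for the control-flow count.
ST_BRANCHES = re.compile(r"\b(IF|ELSIF|CASE|FOR|WHILE|REPEAT)\b", re.IGNORECASE)

def size_loc(st_source):
    """Size metric: non-empty lines, ignoring //-comments (block
    comments (* ... *) are not handled in this sketch)."""
    lines = (l.strip() for l in st_source.splitlines())
    return sum(1 for l in lines if l and not l.startswith("//"))

def control_flow_complexity(st_source):
    """Control-flow metric in the spirit of cyclomatic complexity:
    one plus the number of branching keywords."""
    return 1 + len(ST_BRANCHES.findall(st_source))

unit = """
FUNCTION_BLOCK Conveyor
  IF StartBtn AND NOT Fault THEN
    Motor := TRUE;
  ELSIF Fault THEN
    Motor := FALSE;
  END_IF;
END_FUNCTION_BLOCK
"""
print(size_loc(unit), control_flow_complexity(unit))  # 7 3
```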

This paper is concerned with a direct sampling method for imaging the support of a frequency-dependent source term embedded in a homogeneous and isotropic medium. The source term is given by the Fourier transform of a time-dependent source whose radiating period in the time domain is known. The time-dependent source is assumed to be stationary in the sense that its compact support does not vary along the time variable. Via a multi-frequency direct sampling method, we show that the smallest strip containing the source support and perpendicular to the observation direction can be recovered from far-field patterns at a fixed observation angle. With multiple but sparse observation directions, the shape of the convex hull of the source support can be recovered. The frequency-domain analysis performed here can be used to handle inverse time-dependent source problems. Our algorithm has low computational overhead and is robust against noise. Numerical experiments in both two and three dimensions confirm our theoretical findings.
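Direct sampling methods of this kind typically reduce to cheap correlations of the measured data with plane waves over a sampling grid; the sketch below shows a generic multi-frequency indicator of that common form (the paper's exact indicator functional may differ).

```python
import numpy as np

def sampling_indicator(far_field, wavenumbers, x_hat, grid):
    """Generic multi-frequency direct sampling indicator at a fixed
    observation direction x_hat:
        I(z) = | sum_k u_inf(x_hat, k) * exp(-1j * k * x_hat . z) |.
    far_field[j] is the far-field datum at wavenumbers[j]; grid is an
    (M, dim) array of sampling points z.  Large values of I flag the
    strip containing the source support."""
    phases = np.exp(-1j * np.outer(grid @ x_hat, wavenumbers))  # (M, K)
    return np.abs(phases @ far_field)                           # (M,)
```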

We study geometric variations of the discriminating code problem. In the \emph{discrete version} of the problem, a finite set of points $P$ and a finite set of objects $S$ are given in $\mathbb{R}^d$. The objective is to choose a subset $S^* \subseteq S$ of minimum cardinality such that for each point $p_i \in P$, the subset $S_i^* \subseteq S^*$ covering $p_i$ satisfies $S_i^*\neq \emptyset$, and for each pair $p_i,p_j \in P$, $i \neq j$, we have $S_i^* \neq S_j^*$. In the \emph{continuous version} of the problem, the solution set $S^*$ can be chosen freely among a (potentially infinite) class of allowed geometric objects. In the 1-dimensional case ($d=1$), the points in $P$ are placed on a horizontal line $L$, and the objects in $S$ are finite-length line segments aligned with $L$ (called intervals). We show that the discrete version of this problem is NP-complete. This is somewhat surprising, as the continuous version is known to be polynomial-time solvable. Still, for the 1-dimensional discrete version, we design a polynomial-time $2$-approximation algorithm. We also design a PTAS for both discrete and continuous versions in one dimension, for the restriction where the intervals are all required to have the same length. We then study the 2-dimensional case ($d=2$) for axis-parallel unit square objects. We show that both continuous and discrete versions are NP-complete, and design polynomial-time approximation algorithms that produce $(16\cdot OPT+1)$-approximate and $(64\cdot OPT+1)$-approximate solutions respectively, using rounding of suitably defined integer linear programming problems. We show that the identifying code problem for axis-parallel unit square intersection graphs (in $d=2$) can be solved in the same manner as for the discrete version of the discriminating code problem for unit square objects.
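For small instances the discrete version can be solved exactly by brute force, which is handy as a correctness reference for the approximation algorithms; in the sketch below an object is modeled simply as the set of points it covers.

```python
from itertools import combinations

def is_discriminating(points, objects, chosen):
    """chosen indexes into objects; each point's signature (the chosen
    objects covering it) must be nonempty and pairwise distinct."""
    sigs = [frozenset(i for i in chosen if p in objects[i]) for p in points]
    return all(sigs) and len(set(sigs)) == len(sigs)

def min_discriminating_code(points, objects):
    """Exact exponential-time search for a minimum discriminating code."""
    for size in range(1, len(objects) + 1):
        for chosen in combinations(range(len(objects)), size):
            if is_discriminating(points, objects, chosen):
                return chosen
    return None

# 1-D example: points on a line, intervals given by the points they cover.
points = [1, 2, 3]
objects = [{1, 2}, {2, 3}, {1}, {3}]
print(min_discriminating_code(points, objects))  # (0, 1)
```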

When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches and our results are extensively validated using simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters the optimal ratio of the speeds of the defenders remains nearly constant.
