In recent years, a new class of symmetric-key primitives over $\mathbb{F}_p$ has emerged that is essential to protocols based on Multi-Party Computation and Zero-Knowledge Proofs. To improve the efficiency of such primitives, a number of new block ciphers and hash functions over $\mathbb{F}_p$ have been proposed. These new primitives also showed that design strategies other than the classical Substitution-Permutation Network (SPN) and Feistel networks lead to more efficient cipher and hash function designs over $\mathbb{F}_p$, specifically for large odd primes $p$. In view of these efforts, in this work we build an \emph{algebraic framework} that allows the systematic exploration of viable and efficient design strategies for constructing symmetric-key (iterative) permutations over $\mathbb{F}_p$. We first identify iterative polynomial dynamical systems over finite fields as the central building block of almost all block cipher design strategies. We propose a generalized triangular polynomial dynamical system (GTDS), and based on the GTDS we give a generic definition of an iterative (keyed) permutation over $\mathbb{F}_p^n$. Our GTDS-based generic definition captures the three most well-known design strategies, namely SPNs, Feistel networks and Lai--Massey; consequently, block ciphers constructed following these design strategies can be instantiated from our generic definition. Moreover, we find that the recently proposed \texttt{Griffin} design, which follows neither the Feistel nor the SPN approach, can be described using the generic GTDS-based definition. We also show that a new generalized Lai--Massey construction can be instantiated from the GTDS-based definition. We further provide a generic analysis of the GTDS, including upper bounds on its differential uniformity and correlation.
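
To make the triangular structure concrete, here is a minimal Python sketch of one plausible GTDS round over $\mathbb{F}_p^3$. The prime $p = 2^{31}-1$, the power permutation $x \mapsto x^5$ (a permutation since $\gcd(5, p-1)=1$), and the choices $g(s)=s^2+1$ (never zero, as $p \equiv 3 \bmod 4$) and $h(s)=s^2$ are illustrative assumptions, not the paper's parameters.

```python
# Hypothetical toy instantiation of a GTDS-based keyed permutation over F_p^n.
p = 2**31 - 1               # prime with p % 4 == 3, so s^2 + 1 != 0 for all s
D = 5                       # gcd(5, p - 1) == 1, so x -> x^5 permutes F_p
D_INV = pow(D, -1, p - 1)   # inverse exponent for the power map

def gtds_round(state, round_key):
    """One triangular round: branch i mixes x_i with the tail x_{i+1..n}."""
    n = len(state)
    out = list(state)
    for i in range(n - 1):
        tail = sum(state[i + 1:]) % p       # tail value feeding g and h
        g = (tail * tail + 1) % p           # never 0: -1 is a non-residue mod p
        h = (tail * tail) % p
        out[i] = (pow(state[i], D, p) * g + h) % p
    out[-1] = pow(state[-1], D, p)          # last branch: plain power map
    return [(x + k) % p for x, k in zip(out, round_key)]

def gtds_round_inv(state, round_key):
    """Invert bottom-up: the tail is already known when x_i is recovered."""
    y = [(x - k) % p for x, k in zip(state, round_key)]
    n = len(y)
    out = [0] * n
    out[-1] = pow(y[-1], D_INV, p)
    for i in range(n - 2, -1, -1):
        tail = sum(out[i + 1:]) % p
        g = (tail * tail + 1) % p
        h = (tail * tail) % p
        out[i] = pow((y[i] - h) * pow(g, -1, p) % p, D_INV, p)
    return out

key = [12345, 67890, 13579]
x = [1, 2, 3]
assert gtds_round_inv(gtds_round(x, key), key) == x
```

Because each branch depends only on the branches below it, inversion proceeds bottom-up, which is exactly what makes triangular systems attractive as permutation building blocks.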

Related Content

We generalize signature Gr\"obner bases, previously studied in the free algebra over a field or in polynomial rings over a ring, to ideals in the mixed algebra $R[x_1,\dots,x_k]\langle y_1,\dots,y_n \rangle$ where $R$ is a principal ideal domain. We give an algorithm for computing them, combining elements from the theory of commutative and noncommutative (signature) Gr\"obner bases, and prove its correctness. Applications include extensions of the free algebra with commutative variables, e.g., for homogenization purposes or for performing ideal-theoretic operations such as intersections, and computations over $\mathbb{Z}$ as universal proofs over fields of arbitrary characteristic. By extending the signature cover criterion to our setting, our algorithm also lifts some technical restrictions from previous noncommutative signature-based algorithms, now allowing, e.g., elimination orderings. We provide a prototype implementation for the case when $R$ is a field, and show that our algorithm for the mixed algebra is more efficient than classical approaches using existing algorithms.
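
There is no off-the-shelf implementation of signature bases for the mixed algebra, but the commutative baseline that the paper generalizes is easy to demonstrate. The sketch below uses SymPy's Gröbner engine over $\mathbb{Q}$ with an ideal-membership test as a point of reference; it illustrates the classical computation, not the paper's signature-based algorithm.

```python
from sympy import symbols, groebner

x, y = symbols('x y')
# Classical commutative Groebner basis over a field: the baseline that
# signature-based algorithms refine, and that the paper generalizes to the
# mixed algebra R[x_1,...,x_k]<y_1,...,y_n> over a PID.
G = groebner([x**2 + y**2 - 1, x*y - 2], x, y, order='lex')
print(G)

# Ideal membership: f reduces to zero modulo G.
f = x * (x**2 + y**2 - 1) + y * (x * y - 2)
print(G.contains(f))   # True: f is in the ideal by construction
```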

We study the problem of maintaining a differentially private decaying sum under continual observation. We give a unifying framework and an efficient algorithm for this problem for \emph{any sufficiently smooth} function. Our algorithm is the first differentially private algorithm that does not have a multiplicative error for polynomially-decaying weights. Our algorithm improves on all prior works on differentially private decaying sums under continual observation and recovers exactly the additive error for the special case of continual counting from Henzinger et al. (SODA 2023) as a corollary. Our algorithm is a variant of the factorization mechanism, whose error depends on the $\gamma_2$ and $\gamma_F$ norms of the underlying matrix. We give a constructive proof of an almost exact upper bound on the $\gamma_2$ and $\gamma_F$ norms and an almost tight lower bound on the $\gamma_2$ norm for a large class of lower-triangular matrices. This is the first non-trivial lower bound for lower-triangular matrices whose non-zero entries are not all the same. The class includes the matrices of all continual decaying-sums problems, which yields an upper bound on the additive error of any differentially private decaying-sums algorithm under continual observation. We also explore some implications of our result for discrepancy theory and operator algebra. Given the importance of the $\gamma_2$ norm in computer science and the extensive work on it in mathematics, we believe our result will have further applications.
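
To illustrate the mechanism class, here is a sketch of a factorization mechanism for a decaying sum. The weight sequence, the Toeplitz square-root factorization, and the Gaussian calibration below are textbook choices assumed for illustration; the paper's factorization attaining the stated $\gamma_2$/$\gamma_F$ bounds may differ.

```python
import numpy as np

T = 64
w = (np.arange(T) + 1.0) ** -1.5      # polynomially decaying weights (assumed)
M = np.zeros((T, T))                   # M[t, j] = w[t - j]: lower triangular
for t in range(T):
    M[t, :t + 1] = w[:t + 1][::-1]

# Square root of a lower-triangular Toeplitz matrix via power series:
# find r with (sum_k r_k z^k)^2 = sum_k w_k z^k, so M = R @ R, R = Toeplitz(r).
r = np.zeros(T)
r[0] = np.sqrt(w[0])
for k in range(1, T):
    r[k] = (w[k] - r[1:k] @ r[1:k][::-1]) / (2 * r[0])
R = np.zeros((T, T))
for t in range(T):
    R[t, :t + 1] = r[:t + 1][::-1]
assert np.allclose(R @ R, M)

# Gaussian noise calibrated to the L2 sensitivity of R (max column norm),
# for event-level neighboring streams differing in one entry.
eps, delta = 1.0, 1e-6
sens = np.linalg.norm(R, axis=0).max()
sigma = sens * np.sqrt(2 * np.log(1.25 / delta)) / eps
rng = np.random.default_rng(0)
x = rng.binomial(1, 0.3, size=T).astype(float)     # input stream
noisy = R @ (R @ x + rng.normal(0, sigma, T))      # release M x + R z
```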

Grammar compression is a general compression framework in which a string $T$ of length $N$ is represented as a context-free grammar of size $n$ whose language contains only $T$. In this paper, we focus on the limitations of algorithms and data structures operating on strings in grammar-compressed form. Previous work focused on proving lower bounds for grammars constructed using algorithms that achieve the approximation ratio $\rho=\mathcal{O}(\text{polylog }N)$. Unfortunately, for the majority of grammar compressors, $\rho$ is either unknown or satisfies $\rho=\omega(\text{polylog }N)$. In their seminal paper, Charikar et al. [IEEE Trans. Inf. Theory 2005] studied seven popular grammar compression algorithms: RePair, Greedy, LongestMatch, Sequential, Bisection, LZ78, and $\alpha$-Balanced. Only one of them ($\alpha$-Balanced) is known to achieve $\rho=\mathcal{O}(\text{polylog }N)$. We develop the first technique for proving lower bounds for data structures and algorithms on grammars that is fully general and does not depend on the approximation ratio $\rho$ of the grammar compressor used. Using this technique, we first prove that $\Omega(\log N/\log \log N)$ time is required for random access on RePair, Greedy, LongestMatch, Sequential, and Bisection, while $\Omega(\log\log N)$ time is required for random access to LZ78. All these lower bounds hold within space $\mathcal{O}(n\text{ polylog }N)$ and match the existing upper bounds. We also generalize this technique to prove several conditional lower bounds for compressed computation. For example, we prove that unless the Combinatorial $k$-Clique Conjecture fails, there is no combinatorial algorithm for CFG parsing on Bisection (for which $\rho=\tilde{\Theta}(N^{1/2})$ holds) that runs in $\mathcal{O}(n^c\cdot N^{3-\epsilon})$ time for all constants $c>0$ and $\epsilon>0$. Previously, this was known only for $c<2\epsilon$.
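
As a reference point for one of the compressors studied, the sketch below implements LZ78 parsing; the number of phrases it produces governs, up to constants, the size of the grammar induced by LZ78.

```python
def lz78_parse(text):
    """LZ78: greedily extend the longest previously seen phrase by one char.
    Each phrase is encoded as (id of its longest proper prefix phrase, char)."""
    trie = {}                  # (phrase_id, char) -> phrase_id
    phrases = []
    cur, next_id = 0, 1        # phrase 0 is the empty phrase
    for ch in text:
        if (cur, ch) in trie:
            cur = trie[(cur, ch)]
        else:
            trie[(cur, ch)] = next_id
            phrases.append((cur, ch))
            next_id, cur = next_id + 1, 0
    if cur:                    # input ended inside an already-known phrase
        phrases.append((cur, None))
    return phrases

# A highly repetitive string compresses to O(sqrt(N)) phrases.
print(len(lz78_parse("ab" * 1000)), "phrases for a string of length", 2000)
```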

Peer production platforms like Wikipedia commonly suffer from content gaps. Prior research suggests that recommender systems can help solve this problem by guiding editors towards underrepresented topics. However, it remains unclear whether this approach would result in less relevant recommendations, leading to reduced overall engagement with recommended items. To answer this question, we first conducted offline analyses (Study 1) on SuggestBot, a task-routing recommender system for Wikipedia, and then ran a three-month controlled experiment (Study 2). Our results show that presenting users with articles from underrepresented topics increased the proportion of work done on those articles without significantly reducing overall recommendation uptake. We discuss the implications of our results, including how ignoring the article discovery process can artificially narrow recommendations. We draw parallels between this phenomenon and the common issue of ``filter bubbles'' to show how any platform that employs recommender systems is susceptible to it.

We consider hypergraph network design problems where the goal is to construct a hypergraph satisfying certain properties. In graph network design problems, the number of edges in an arbitrary solution is at most the square of the number of vertices. In contrast, in hypergraph network design problems, the number of hyperedges in an arbitrary solution could be exponential in the number of vertices, and hence additional care is necessary to design polynomial-time algorithms. The central theme of this work is to show that certain hypergraph network design problems admit solutions with a polynomial number of hyperedges and, moreover, can be solved in strongly polynomial time. Our work improves on the previously fastest pseudo-polynomial run-times for these problems. In addition, we develop algorithms that return (near-)uniform hypergraphs as solutions. The hypergraph network design problems we focus on are the splitting-off operation in hypergraphs, connectivity augmentation using hyperedges, and covering skew-supermodular functions using hyperedges. Our definition of the splitting-off operation in hypergraphs and our proof of its existence via a strongly polynomial-time algorithm to compute it are likely to be of independent graph-theoretic interest.
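
The covering condition at the heart of these problems is easy to state in code. The brute-force check below is our own illustration, exponential in the number of vertices (unlike the paper's strongly polynomial algorithms); it verifies that a hypergraph covers a requirement function $r$ on all cuts.

```python
from itertools import combinations

def cut_value(hyperedges, S):
    """d_H(S): hyperedges with at least one vertex in S and one outside."""
    S = set(S)
    return sum(1 for e in hyperedges if (set(e) & S) and (set(e) - S))

def covers(hyperedges, vertices, r):
    """Check d_H(S) >= r(S) for every nonempty proper subset S.
    Exponential enumeration, for illustration only."""
    for k in range(1, len(vertices)):
        for S in combinations(vertices, k):
            if cut_value(hyperedges, S) < r(set(S)):
                return False
    return True

V = [1, 2, 3, 4]
H = [(1, 2, 3), (2, 3, 4), (1, 4)]
# Example requirement: 2-hyperedge-connectivity, r(S) = 2 for all S.
print(covers(H, V, lambda S: 2))   # True for this hypergraph
```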

We study the validity of the Neumann or Born series approach for solving the Helmholtz equation and for coefficient identification in related inverse scattering problems. Precisely, we derive a necessary and sufficient condition under which the series converges strongly, and we investigate the rate of convergence. The condition obtained is optimal and can be much weaker than the traditional requirement for convergence of the series. Our approach makes use of reduction space techniques proposed by Suzuki \cite{Suzuki-1976}. Furthermore, we propose an interpolation method that allows the use of the Neumann series in all cases. Finally, we provide several numerical tests with different medium functions and frequency values to validate our theoretical results.
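
A finite-dimensional stand-in illustrates the convergence issue: the sketch below solves $(I - A)x = b$ by summing the Neumann series, which converges precisely when the spectral radius of $A$ is below one. The matrix, scaling, and tolerance are assumptions for illustration; the paper's condition concerns the Helmholtz operator itself.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
A = (A + A.T) / 2                                   # symmetric, hence normal
A *= 0.9 / np.max(np.abs(np.linalg.eigvalsh(A)))    # spectral radius = 0.9 < 1
b = rng.standard_normal(50)

x, term = np.zeros(50), b.copy()
for _ in range(500):
    x += term                    # partial sum of the Neumann series sum_k A^k b
    term = A @ term
    if np.linalg.norm(term) < 1e-12:
        break

assert np.allclose(x, np.linalg.solve(np.eye(50) - A, b))
```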

We present a novel Graph-based debiasing Algorithm for Underreported Data (GRAUD) aimed at efficient joint estimation of event counts and discovery probabilities across spatial or graphical structures. The method addresses problems arising in fields such as policing and COVID-19 data analysis, and it avoids the need for the strong priors typically associated with Bayesian frameworks. By leveraging the graph structure on the unknown variables $n$ and $p$, our method debiases the underreported data and estimates the discovery probabilities at the same time. We validate the effectiveness of our method through simulation experiments and illustrate its practicality in a real-world application: police 911 calls-for-service data.
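
The abstract does not spell out the estimator, so the following is only a hypothetical sketch of the joint $(n, p)$ idea: observed counts are modeled as $y_v \sim \mathrm{Binomial}(n_v, p_v)$, and naive neighborhood averaging stands in for whatever graph regularization GRAUD actually uses. This is not the paper's algorithm.

```python
import numpy as np

def debias(y, neighbors, iters=50):
    """Alternate between updating counts n given p and probabilities p given n,
    smoothing each over the graph (hypothetical sketch, not GRAUD itself)."""
    p = np.full(len(y), 0.5)                    # initial discovery probabilities
    for _ in range(iters):
        n = y / np.clip(p, 1e-3, 1.0)           # counts implied by current p
        n = np.array([np.mean([n[u] for u in neighbors[v]] + [n[v]])
                      for v in range(len(y))])  # smooth n over the graph
        p = y / np.clip(n, 1e-9, None)          # p implied by current n
        p = np.array([np.mean([p[u] for u in neighbors[v]] + [p[v]])
                      for v in range(len(y))])  # smooth p over the graph
        p = np.clip(p, 1e-3, 1.0)
    return n, p

# 4-node path graph; node 2 underreports (lower observed count).
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
y = np.array([80.0, 78.0, 40.0, 82.0])
n_hat, p_hat = debias(y, neighbors)
```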

In many stochastic service systems, decision-makers find themselves making a sequence of decisions, with the number of decisions being unpredictable. To enhance these decisions, it is crucial to uncover their causal impact through careful analysis of observational data from the system. However, these decisions are not made independently, as they are shaped by previous decisions and outcomes. This phenomenon is called sequential bias, and it violates a key assumption in causal inference: that one person's decision does not interfere with the potential outcomes of another. To address this issue, we establish a connection between sequential bias and the subfield of causal inference known as dynamic treatment regimes. We expand these frameworks to account for the random number of decisions by modeling the decision-making process as a marked point process. Consequently, we can define and identify causal effects that quantify sequential bias. Moreover, we propose estimators and explore their properties, including double robustness and semiparametric efficiency. In a case study of 27,831 encounters with a large academic emergency department, we use our approach to demonstrate that the decision to route a patient to an area for low-acuity patients has a significant impact on the care of future patients.
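
The double robustness claim is easiest to see in the static, single-decision analogue. The sketch below computes the standard AIPW estimator on synthetic data; the data-generating process and the scikit-learn nuisance models are our own illustrative choices, not the paper's sequential estimators.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(1)
n = 5000
x = rng.standard_normal((n, 2))                     # patient covariates
a = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))     # confounded routing decision
y = 1.5 * a + x @ np.array([1.0, -0.5]) + rng.standard_normal(n)

e = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]       # propensity
mu1 = LinearRegression().fit(x[a == 1], y[a == 1]).predict(x)   # outcome models
mu0 = LinearRegression().fit(x[a == 0], y[a == 0]).predict(x)

# AIPW: consistent if either the propensity or the outcome model is correct.
ate = np.mean(mu1 - mu0
              + a * (y - mu1) / e
              - (1 - a) * (y - mu0) / (1 - e))
print(f"AIPW ATE estimate: {ate:.3f}  (truth: 1.5)")
```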

The criticality problem in nuclear engineering asks for the principal eigenpair of a Boltzmann operator describing neutron transport in a reactor core. Reliably designing and controlling such reactors requires assessing these quantities within quantifiable accuracy tolerances. In this paper we propose a paradigm that deviates from the common practice of approximately solving the corresponding spectral problem with a fixed, presumably sufficiently fine discretization. Instead, the present approach first contrives iterative schemes, formulated in function space, that are shown to converge at a quantified rate without assuming any a priori excess regularity, exploiting only properties of the optical parameters in the underlying radiative transfer model. We develop the analytical and numerical tools for approximately realizing each iteration step within judiciously chosen accuracy tolerances, verified by a posteriori estimates, so as to still warrant quantifiable convergence to the exact eigenpair. This is carried out in full first for a Newton scheme. Since Newton's method is only locally convergent, we additionally analyze the convergence of a power iteration in function space to produce sufficiently accurate initial guesses. Here we have to deal with intrinsic difficulties posed by compact but non-symmetric operators, which prevent the standard arguments used in the finite-dimensional case. Our main point is that we can avoid any condition requiring an initial guess to already lie in a small neighborhood of the exact solution. We close with a discussion of the remaining intrinsic obstructions to a certifiable numerical implementation, mainly related to not knowing the gap between the principal eigenvalue and the next smaller one in modulus.
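
A finite-dimensional caricature of the globalization step: power iteration on a matrix with positive entries, whose Perron eigenvalue is real, simple, and dominant, mirroring the positive transport operator. The matrix, size, and iteration count are assumptions for illustration; in the paper, each application of the operator is itself realized only within a verified tolerance.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.random((100, 100))         # positive entries: Perron eigenvalue is
                                   # real, simple, and largest in modulus
v = np.ones(100)
for _ in range(500):
    w = B @ v                      # operator application (exact here; inexact
    lam = np.linalg.norm(w)        # but tolerance-controlled in the paper)
    v = w / lam                    # normalize the iterate

# A posteriori residual check on the computed eigenpair.
print(lam, np.linalg.norm(B @ v - lam * v))
```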

Learning with expert advice and the multi-armed bandit are two classic online decision problems that differ in how information is observed in each round of the game. We study a family of problems interpolating between the two. For a vector $\mathbf{m}=(m_1,\dots,m_K)\in \mathbb{N}^K$, an instance of $\mathbf{m}$-MAB indicates that the arms are partitioned into $K$ groups and the $i$-th group contains $m_i$ arms. Once an arm is pulled, the losses of all arms in the same group are observed. We prove tight minimax regret bounds for $\mathbf{m}$-MAB and design an optimal PAC algorithm for its pure exploration version, $\mathbf{m}$-BAI, where the goal is to identify the arm with minimum loss in as few rounds as possible. We show that the minimax regret of $\mathbf{m}$-MAB is $\Theta\left(\sqrt{T\sum_{k=1}^K\log (m_k+1)}\right)$ and that the minimum number of pulls for an $(\epsilon,0.05)$-PAC algorithm for $\mathbf{m}$-BAI is $\Theta\left(\frac{1}{\epsilon^2}\cdot \sum_{k=1}^K\log (m_k+1)\right)$. Both our upper and lower bounds for $\mathbf{m}$-MAB extend to a more general setting, namely bandits with graph feedback, in terms of the clique cover and related graph parameters. As consequences, we obtain tight minimax regret bounds for several families of feedback graphs.
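
The group-feedback structure suggests a simple elimination scheme: one pull per surviving group reveals the losses of all its arms. The sketch below is a textbook Hoeffding-based PAC procedure under this feedback model, with generic constants rather than the optimal ones from the paper.

```python
import numpy as np

def m_bai(groups, sample, eps=0.1, delta=0.05):
    """Elimination-style PAC sketch for m-BAI with group feedback."""
    arms = [(g, i) for g, m in enumerate(groups) for i in range(m)]
    means = {a: 0.0 for a in arms}
    t = 0
    while True:
        t += 1
        for g in {g for g, _ in arms}:           # one pull per surviving group
            losses = sample(g)                   # reveals the whole group
            for i in range(groups[g]):
                means[(g, i)] += (losses[i] - means[(g, i)]) / t
        # Hoeffding-style confidence radius (generic constants, not optimal).
        rad = np.sqrt(np.log(4 * len(means) * t * t / delta) / (2 * t))
        best = min(means[a] for a in arms)
        arms = [a for a in arms if means[a] - rad <= best + rad]  # eliminate
        if rad <= eps / 2 or len(arms) == 1:
            return min(arms, key=lambda a: means[a])

groups = [3, 2]                                  # m = (3, 2): two groups
mu = {(0, 0): 0.5, (0, 1): 0.6, (0, 2): 0.7, (1, 0): 0.3, (1, 1): 0.55}
rng = np.random.default_rng(0)
sample = lambda g: [float(rng.random() < mu[(g, i)]) for i in range(groups[g])]
print(m_bai(groups, sample))                     # expected output: (1, 0)
```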
