
A one-dimensional sequence $u_0, u_1, u_2, \ldots \in [0, 1)$ is said to be completely uniformly distributed (CUD) if overlapping $s$-blocks $(u_i, u_{i+1}, \ldots , u_{i+s-1})$, $i = 0, 1, 2, \ldots$, are uniformly distributed for every dimension $s \geq 1$. This concept naturally arises in Markov chain quasi-Monte Carlo (QMC). However, the definition of CUD sequences is not constructive, and thus there remains the problem of how to implement the Markov chain QMC algorithm in practice. Harase (2021) focused on the $t$-value, which is a measure of uniformity widely used in the study of QMC, and implemented short-period Tausworthe generators (i.e., linear feedback shift register generators) over the two-element field $\mathbb{F}_2$ that approximate CUD sequences by running for the entire period. In this paper, we generalize a search algorithm over $\mathbb{F}_2$ to that over arbitrary finite fields $\mathbb{F}_b$ with $b$ elements and conduct a search for Tausworthe generators over $\mathbb{F}_b$ with $t$-values zero (i.e., optimal) for dimension $s = 3$ and small for $s \geq 4$, especially in the case where $b = 3, 4$, and $5$. We provide a parameter table of Tausworthe generators over $\mathbb{F}_4$, and report a comparison between our new generators over $\mathbb{F}_4$ and existing generators over $\mathbb{F}_2$ in numerical examples using Markov chain QMC.
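As an illustration of the full-period construction, here is a minimal Python sketch of a Tausworthe-style generator over $\mathbb{F}_b$: a degree-$k$ linear recurrence is run over its entire period, and each point is formed from $w$ consecutive digits. The function name and the toy coefficients (from the primitive polynomial $x^3 + x + 1$ over $\mathbb{F}_2$) are illustrative assumptions, not the searched parameters of the paper.

```python
def tausworthe_points(coeffs, init, b, w):
    """Run the linear recurrence x_{i+k} = sum_j coeffs[j] * x_{i+j} (mod b)
    over its full period, then emit the points
    u_i = sum_{d=0}^{w-1} x_{i+d} / b^(d+1) from overlapping digit windows."""
    state = list(init)
    seq, seen = [], set()
    while tuple(state) not in seen:      # stop when the state cycle closes
        seen.add(tuple(state))
        nxt = sum(c * s for c, s in zip(coeffs, state)) % b
        seq.append(state[0])
        state = state[1:] + [nxt]
    n = len(seq)
    return [sum(seq[(i + d) % n] * b ** -(d + 1) for d in range(w))
            for i in range(n)]
```

With $b = 2$ and the maximal-length recurrence below, the overlapping 3-digit windows hit every nonzero triple exactly once over the period, which is exactly the kind of equidistribution the $t$-value measures.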

Related content

In this paper, we address the optimization problem of moments of Age of Information (AoI) for active and passive users in a random access network. In this network, active users broadcast sensing data while passive users only receive signals. Collisions occur when multiple active users transmit simultaneously, and passive users are unable to receive signals while any active user is transmitting. Each active user follows a Markov process for its transmissions. We aim to minimize the weighted sum of any moments of AoI for both active and passive users in this network. To achieve this, we employ a second-order analysis of the system. Specifically, we characterize an active user's transmission Markov process by its mean and temporal variance. We show that any moment of the AoI can be expressed as a function of the mean and temporal variance, which, in turn, enables us to derive the optimal transmission Markov process. Our simulation results demonstrate that the proposed strategy outperforms other baseline policies that use different active user transmission models.
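As a toy illustration of the setting, the following Monte-Carlo sketch estimates the time-average age (the first moment of AoI) of one active user when every active user transmits i.i.d. with probability $p$ per slot; this memoryless model and the function name are our simplifying assumptions, a special case of the Markov transmission processes considered above.

```python
import random

def avg_age_slotted(num_active, p, slots, seed=0):
    """Estimate the time-average age of active user 0 in a collision channel:
    a slot delivers user 0's update iff user 0 alone transmits, resetting its
    age to 1; otherwise the age grows by one."""
    rng = random.Random(seed)
    age, total = 0, 0
    for _ in range(slots):
        tx = [rng.random() < p for _ in range(num_active)]
        age = 1 if (tx[0] and sum(tx) == 1) else age + 1
        total += age
    return total / slots
```

A single always-transmitting user never collides, so its time-average age is exactly 1; with contention the average grows, which is the trade-off the optimal transmission process balances.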

We consider the following decision problems: given a finite, rational Markov chain, source and target states, and a rational threshold, does there exist an $n$ such that the probability of reaching the target from the source in $n$ steps is equal to the threshold (resp. crosses the threshold)? These problems are known to be equivalent to the Skolem (resp. Positivity) problems for Linear Recurrence Sequences (LRS). These are number-theoretic problems whose decidability has been open for decades. We present a short, self-contained, and elementary reduction from LRS to Markov Chains that improves the state of the art as follows: (a) We reduce to ergodic Markov Chains, a class that is widely used in Model Checking. (b) We reduce LRS to Markov Chains of significantly lower order than before. We thus get sharper hardness results for a more ubiquitous class of Markov Chains. Immediate applications include problems in modeling biological systems, and regular automata-based counting problems.
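The $n$-step probability $p_n = (P^n)_{s,t}$ satisfies a linear recurrence with the characteristic polynomial of $P$, which is the bridge to LRS. A minimal Python sketch of the decision question, searched up to a finite horizon (the function name and exact-rational setup are illustrative assumptions; the actual problems are open precisely because no horizon bound is known):

```python
from fractions import Fraction

def hits_threshold(P, s, t, theta, max_n):
    """Return the first n <= max_n with (P^n)[s][t] == theta, else None.
    P is a rational row-stochastic matrix given as nested Fractions."""
    k = len(P)
    row = [Fraction(int(j == s)) for j in range(k)]  # e_s as a row vector
    for n in range(1, max_n + 1):
        row = [sum(row[i] * P[i][j] for i in range(k)) for j in range(k)]
        if row[t] == theta:
            return n
    return None
```

For the two-state chain with $P = \begin{pmatrix} 1/2 & 1/2 \\ 0 & 1 \end{pmatrix}$ one has $p_n = 1 - (1/2)^n$, so the threshold $3/4$ is hit exactly at $n = 2$, while $1/3$ is never hit.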

Policy optimization methods with function approximation are widely used in multi-agent reinforcement learning. However, it remains elusive how to design such algorithms with statistical guarantees. Leveraging a multi-agent performance difference lemma that characterizes the landscape of multi-agent policy optimization, we find that the localized action value function serves as an ideal descent direction for each local policy. Motivated by this observation, we present a multi-agent PPO algorithm in which the local policy of each agent is updated similarly to vanilla PPO. We prove that, with standard regularity conditions on the Markov game and problem-dependent quantities, our algorithm converges to the globally optimal policy at a sublinear rate. We extend our algorithm to the off-policy setting and introduce pessimism to policy evaluation, which is consistent with our experimental findings. To our knowledge, this is the first provably convergent multi-agent PPO algorithm in cooperative Markov games.

Given graphs $H$ and $G$, possibly with vertex-colors, a homomorphism is a function $f:V(H)\to V(G)$ that preserves colors and edges. Many interesting counting problems (e.g., subgraph and induced subgraph counts) are finite linear combinations $p(\cdot)=\sum_{H}\alpha_{H}\hom(H,\cdot)$ of homomorphism counts, and such linear combinations are known to be hard to evaluate iff they contain a large-treewidth graph $S$. The hardness can be shown in two steps: First, the problems $\hom(S,\cdot)$ for colorful (i.e., bijectively colored) large-treewidth graphs $S$ are shown to be hard. In a second step, these problems are reduced to finite linear combinations of homomorphism counts that contain the uncolored version $S^{\circ}$ of $S$. This step can be performed via inclusion-exclusion in $2^{|E(S)|}\mathrm{poly}(n,s)$ time, where $n$ is the size of the input graph and $s$ is the maximum number of vertices among all graphs in the linear combination. We show that the second step can be performed even in time $4^{\Delta(S)}\mathrm{poly}(n,s)$, where $\Delta(S)$ is the maximum degree of $S$. Our reduction is based on graph products with Cai-F\"urer-Immerman graphs, a novel technique that is likely of independent interest. For colorful graphs $S$ of constant maximum degree, this technique yields a polynomial-time reduction from $\hom(S,\cdot)$ to linear combinations of homomorphism counts involving $S^{\circ}$. Under certain conditions, it actually suffices that a supergraph $T$ of $S^{\circ}$ is contained in the target linear combination. The new reduction yields $\mathsf{\#P}$-hardness results for several counting problems that could previously be studied only under parameterized complexity assumptions. This includes the problems of counting, on input a graph $H$ from a restricted graph class and a general graph $G$, the homomorphisms or (induced) subgraph copies of $H$ in $G$.
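For concreteness, $\hom(H, G)$ for small uncolored simple graphs can be computed by brute force over all vertex maps; this illustrative sketch (function name ours, exponential in $|V(H)|$) is not the reduction technique of the paper, just the quantity it manipulates.

```python
from itertools import product

def hom_count(H_edges, H_n, G_edges, G_n):
    """Count homomorphisms from H (vertices 0..H_n-1) to G (0..G_n-1),
    both simple undirected graphs, by enumerating all maps V(H) -> V(G)
    and keeping those that send every edge of H to an edge of G."""
    G_adj = {frozenset(e) for e in G_edges}
    return sum(all(frozenset((f[u], f[v])) in G_adj for u, v in H_edges)
               for f in product(range(G_n), repeat=H_n))
```

For example, the triangle admits exactly the 6 bijective homomorphisms into itself, and a single edge maps into the triangle in $3 \cdot 2 = 6$ ways.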

Let a polytope $P$ be defined by a system $A x \leq b$. We consider the problem of counting the number of integer points inside $P$, assuming that $P$ is $\Delta$-modular, where $P$ is called $\Delta$-modular if all rank sub-determinants of $A$ are bounded by $\Delta$ in absolute value. We present a new FPT-algorithm, parameterized by $\Delta$ and by the maximal number of vertices of $P$, where the maximum is taken over all r.h.s. vectors $b$. We show that our algorithm is more efficient for $\Delta$-modular problems than the approach of A. Barvinok et al. To achieve this, we do not directly compute the short rational generating function for $P \cap \mathbb{Z}^n$, which is commonly used for the considered problem. Instead, we use the dynamic programming principle to compute a particular representation of it in the form of an exponential series that depends on a single variable. We do not rely on Barvinok's unimodular sign decomposition technique at all. Using our new complexity bound, we consider different special cases that may be of independent interest. For example, we give FPT-algorithms for counting the integer points in $\Delta$-modular simplices and similar polytopes that have $n + O(1)$ facets. As a special case, for any fixed $m$, we give an FPT-algorithm to count solutions of the unbounded $m$-dimensional $\Delta$-modular subset-sum problem.
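For intuition about the counting problem itself, here is a naive brute-force enumerator for tiny instances (the function name and the explicit bounding box are our assumptions; this exponential sketch reflects none of the paper's FPT machinery):

```python
from itertools import product

def count_integer_points(A, b, box):
    """Count integer points x with A x <= b, enumerating x over the given
    bounding box; box is a list of (lo, hi) ranges, one per coordinate."""
    return sum(all(sum(a * x for a, x in zip(row, pt)) <= rhs
                   for row, rhs in zip(A, b))
               for pt in product(*[range(lo, hi + 1) for lo, hi in box]))
```

The unit square $0 \leq x, y \leq 1$ contains 4 integer points, and the triangle $x + y \leq 2$, $x, y \geq 0$ contains 6; FPT-algorithms like the one above in the abstract make such counts tractable in settings where enumeration is hopeless.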

We show that the sparsified block elimination algorithm for solving undirected Laplacian linear systems from [Kyng-Lee-Peng-Sachdeva-Spielman STOC'16] directly works for directed Laplacians. Given access to a sparsification algorithm that, on graphs with $n$ vertices and $m$ edges, takes time $\mathcal{T}_{\rm S}(m)$ to output a sparsifier with $\mathcal{N}_{\rm S}(n)$ edges, our algorithm solves a directed Eulerian system on $n$ vertices and $m$ edges to $\epsilon$ relative accuracy in time $$ O(\mathcal{T}_{\rm S}(m) + {\mathcal{N}_{\rm S}(n)\log {n}\log(n/\epsilon)}) + \tilde{O}(\mathcal{T}_{\rm S}(\mathcal{N}_{\rm S}(n)) \log n), $$ where the $\tilde{O}(\cdot)$ notation hides $\log\log(n)$ factors. By previous results, this implies improved runtimes for linear systems in strongly connected directed graphs, PageRank matrices, and asymmetric M-matrices. When combined with slower constructions of smaller Eulerian sparsifiers based on short cycle decompositions, it also gives a solver that runs in $O(n \log^{5}n \log(n / \epsilon))$ time after $O(n^2 \log^{O(1)} n)$ pre-processing. At the core of our analyses are constructions of augmented matrices whose Schur complements encode error matrices.
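The elimination step that the algorithm sparsifies is the Schur complement of the Laplacian onto the retained vertices. A dense NumPy sketch of that single step (function name ours; the paper's point is performing this with sparsifiers rather than exact dense algebra):

```python
import numpy as np

def schur_complement(L, keep):
    """Schur complement of the matrix L onto the index list `keep`,
    eliminating the remaining indices: A - B D^{-1} C for the block
    partition [[A, B], [C, D]] induced by keep vs. eliminated."""
    elim = [i for i in range(L.shape[0]) if i not in keep]
    A = L[np.ix_(keep, keep)]
    B = L[np.ix_(keep, elim)]
    C = L[np.ix_(elim, keep)]
    D = L[np.ix_(elim, elim)]
    return A - B @ np.linalg.solve(D, C)
```

Eliminating the middle vertex of a 3-vertex path contracts the two unit edges into one edge of weight $1/2$, i.e., the Schur complement of a Laplacian is again a Laplacian; the directed (Eulerian) case preserves an analogous structure, which is what the augmented-matrix analysis exploits.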

A novel algorithm is proposed for quantitative comparisons between compact surfaces embedded in three-dimensional Euclidean space. The key idea is to identify those objects with the associated surface measures and compute a weak distance between them using the Fourier transform on the ambient space. In particular, the inhomogeneous Sobolev norm of negative order of the difference between two surface measures is evaluated via the Plancherel theorem, which amounts to approximating a weighted integral norm of smooth data on the frequency space. This approach offers several advantages, including high accuracy due to fast-converging numerical quadrature rules, acceleration by the nonuniform fast Fourier transform, and parallelization on many-core processors. In numerical experiments, the 2-sphere, an example whose Fourier transform is explicitly known, is compared with its icosahedral discretization, and it is observed that the piecewise linear approximations converge to the smooth object at the quadratic rate, up to a small truncation error.
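Concretely, the weak distance sketched above can be written via the Plancherel theorem as follows (a minimal formulation; the normalization constant and the symbol $\sigma$ for the Sobolev order are our notational assumptions, as the abstract does not fix them). For surface measures $\mu_1, \mu_2$ on $\mathbb{R}^3$ and order $\sigma > 0$,

$$\|\mu_1 - \mu_2\|_{H^{-\sigma}(\mathbb{R}^3)}^2 = \frac{1}{(2\pi)^3} \int_{\mathbb{R}^3} \bigl(1 + |\xi|^2\bigr)^{-\sigma} \, \bigl|\widehat{\mu_1}(\xi) - \widehat{\mu_2}(\xi)\bigr|^2 \, d\xi,$$

where $\widehat{\mu}(\xi) = \int e^{-i x \cdot \xi} \, d\mu(x)$. The integrand is smooth and decaying on the frequency space, so the integral is amenable to fast-converging quadrature, and the measure transforms $\widehat{\mu}$ can be evaluated with the nonuniform fast Fourier transform.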

In Lipschitz two-dimensional domains, we study a Brinkman-Darcy-Forchheimer problem on the weighted spaces $\mathbf{H}_0^1(\omega,\Omega) \times L^2(\omega,\Omega)/\mathbb{R}$, where $\omega$ belongs to the Muckenhoupt class $A_2$. Under a suitable smallness assumption, we establish the existence and uniqueness of a solution. We propose a finite element scheme and obtain a quasi-best approximation result in energy norm \`a la C\'ea under the assumption that $\Omega$ is convex. We also devise an a posteriori error estimator and investigate its reliability and efficiency properties. Finally, we design a simple adaptive strategy that yields optimal experimental rates of convergence for the numerical examples that we perform.

We present CausalSim, a causal framework for unbiased trace-driven simulation. Current trace-driven simulators assume that the interventions being simulated (e.g., a new algorithm) would not affect the validity of the traces. However, real-world traces are often biased by the choices algorithms make during trace collection, and hence replaying traces under an intervention may lead to incorrect results. CausalSim addresses this challenge by learning a causal model of the system dynamics and latent factors capturing the underlying system conditions during trace collection. It learns these models using an initial randomized controlled trial (RCT) under a fixed set of algorithms, and then applies them to remove biases from trace data when simulating new algorithms. Key to CausalSim is mapping unbiased trace-driven simulation to a tensor completion problem with extremely sparse observations. By exploiting a basic distributional invariance property present in RCT data, CausalSim enables a novel tensor completion method despite the sparsity of observations. Our extensive evaluation of CausalSim on both real and synthetic datasets, including more than ten months of real data from the Puffer video streaming system, shows that it improves simulation accuracy, reducing errors by 53% and 61% on average compared to expert-designed and supervised learning baselines. Moreover, CausalSim provides markedly different insights about ABR algorithms compared to the biased baseline simulator, which we validate with a real deployment.

Given a matroid $M=(E,{\cal I})$ and a total ordering over the elements $E$, a broken circuit is a circuit with its smallest element removed, and an NBC independent set is an independent set in ${\cal I}$ containing no broken circuit. The NBC independent sets of any matroid $M$ define a simplicial complex, called the broken circuit complex, which has been the subject of intense study in combinatorics. Recently, Adiprasito, Huh, and Katz showed that the face numbers of any broken circuit complex form a log-concave sequence, proving a long-standing conjecture of Rota. We study counting and optimization problems on NBC bases of a generic matroid. We find several fundamental differences with the independent set complex: for example, we show that it is NP-hard to find the max-weight NBC base of a matroid, and that the convex hull of the NBC bases of a matroid can have edges of arbitrarily large length. We also give evidence that the natural down-up walk on the space of NBC bases of a matroid may not mix rapidly, by showing that for some family of matroids it is NP-hard to count the number of NBC bases after certain conditionings.
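For a graphic matroid, circuits are the edge sets of cycles and bases are spanning trees, so NBC bases can be enumerated by brute force on tiny graphs. A hedged sketch (function names ours, exponential time, for illustration only; elements are ordered by their index in the edge list):

```python
from itertools import combinations

def is_forest(idx_set, edges, n):
    """Check that the selected edges (by index) contain no cycle, via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in idx_set:
        ru, rv = find(edges[i][0]), find(edges[i][1])
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def nbc_bases(edges, n):
    """Enumerate NBC bases of the graphic matroid of a simple graph on
    vertices 0..n-1: spanning trees containing no broken circuit."""
    m = len(edges)
    # Circuits: minimal dependent edge sets (dependent, all proper subsets forests).
    circuits = [set(S) for k in range(2, m + 1)
                for S in combinations(range(m), k)
                if not is_forest(S, edges, n)
                and all(is_forest(set(S) - {i}, edges, n) for i in S)]
    # A broken circuit drops the smallest element (edge index) of a circuit.
    broken = [frozenset(sorted(C)[1:]) for C in circuits]
    bases = [set(S) for S in combinations(range(m), n - 1)
             if is_forest(S, edges, n)]
    return [B for B in bases if not any(bc <= B for bc in broken)]
```

On the triangle with edges ordered $e_0 < e_1 < e_2$, the single circuit yields the broken circuit $\{e_1, e_2\}$, so 2 of the 3 spanning trees are NBC bases, matching the coefficient of the chromatic polynomial $k^3 - 3k^2 + 2k$ of $K_3$.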
