
We provide a simple and natural solution to the problem of generating all $2^n \cdot n!$ signed permutations of $[n] = \{1,2,\ldots,n\}$. Our solution provides a pleasing generalization of the most famous ordering of permutations: plain changes (the Steinhaus-Johnson-Trotter algorithm). In plain changes, the $n!$ permutations of $[n]$ are ordered so that successive permutations differ by swapping a pair of adjacent symbols, and the order is often visualized as a weaving pattern involving $n$ ropes. Here we model a signed permutation using $n$ ribbons with two distinct sides, and each successive configuration is created by twisting (i.e., swapping and turning over) two neighboring ribbons or a single ribbon. By greedily prioritizing $2$-twists of the largest symbol before $1$-twists of the largest symbol, we create a signed version of the memorable zig-zag pattern of plain changes. We provide a loopless algorithm (i.e., worst-case $\mathcal{O}(1)$-time per object) by extending the well-known mixed-radix Gray code algorithm.
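As a quick illustration of the greedy rule described above, the brute-force sketch below enumerates signed permutations with a visited set, preferring 2-twists involving the largest symbol and falling back to 1-twists. The precise twist semantics (a 2-twist swaps two adjacent ribbons and flips both signs; a 1-twist flips one sign in place) and the tie-breaking among positions are assumptions here, so the sketch may not reproduce the paper's exact listing and is in no way loopless; it only shows the flavor of the greedy prioritization.

```python
import math

def moves(cfg):
    """All candidate moves from a signed permutation, ordered by the assumed
    greedy priority: largest symbol first, 2-twists before 1-twists on ties.
    Assumed semantics: a 2-twist swaps adjacent ribbons and flips both signs;
    a 1-twist flips the sign of a single ribbon in place."""
    n = len(cfg)
    cand = []
    for i in range(n - 1):                       # 2-twists
        nxt = list(cfg)
        nxt[i], nxt[i + 1] = -cfg[i + 1], -cfg[i]
        cand.append((max(abs(cfg[i]), abs(cfg[i + 1])), 2, tuple(nxt)))
    for i in range(n):                           # 1-twists
        nxt = list(cfg)
        nxt[i] = -cfg[i]
        cand.append((abs(cfg[i]), 1, tuple(nxt)))
    cand.sort(key=lambda m: (-m[0], -m[1]))
    return [m[2] for m in cand]

def greedy_signed_listing(n):
    """Greedy enumeration with a visited set; stops early if it gets stuck."""
    cur = tuple(range(1, n + 1))
    seen, order = {cur}, [cur]
    while True:
        for nxt in moves(cur):
            if nxt not in seen:
                seen.add(nxt)
                order.append(nxt)
                cur = nxt
                break
        else:
            return order

if __name__ == "__main__":
    n = 3
    listing = greedy_signed_listing(n)
    print(f"visited {len(listing)} of {2**n * math.factorial(n)} signed permutations")
```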

Related content

The $a$-number is an invariant of the isomorphism class of the $p$-torsion group scheme. We use the Cartier operator on $H^0(\mathcal{A}_2,\Omega^1)$ to find a closed formula for the $a$-number of the curve $\mathcal{A}_2 = v(Y^{\sqrt{q}}+Y-x^{\frac{\sqrt{q}+1}{2}})$, where $q=p^s$, over the finite field $\mathbb{F}_{q^2}$. We illustrate an application of the computed $a$-number in coding theory through the relationship between the algebraic properties of the curve and the parameters of the codes supported by it.
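For context, the way the Cartier operator enters is through the standard characterization of the $a$-number (stated here from the general theory, not taken from the paper): writing $\mathscr{C}$ for the Cartier operator and $k$ for an algebraically closed field containing $\mathbb{F}_{q^2}$,

```latex
\[
  a(\mathcal{A}_2)
  \;=\; \dim_{k} \operatorname{Hom}\!\bigl(\alpha_p,\ \operatorname{Jac}(\mathcal{A}_2)[p]\bigr)
  \;=\; \dim_{k} \ker\!\bigl(\mathscr{C} \colon H^0(\mathcal{A}_2,\Omega^1) \to H^0(\mathcal{A}_2,\Omega^1)\bigr),
\]
```

so a closed formula follows once the action of $\mathscr{C}$ on a basis of regular differentials is known.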

We establish the unique ergodicity of the Markov chain generated by the stochastic theta method (STM) with $\theta \in [1/2, 1]$ for monotone SODEs, without growth restriction on the coefficients, driven by nondegenerate multiplicative noise. The main ingredients of the argument are the construction of new Lyapunov functions, involving the coefficients, the stepsize, and $\theta$, together with the irreducibility and the strong Feller property of the STM. We also generalize the arguments to the STM and its Galerkin-based full discretizations for a class of monotone SPDEs driven by infinite-dimensional nondegenerate multiplicative trace-class noise. Applying these results to the stochastic Allen--Cahn equation shows that its drift-implicit Euler scheme is uniquely ergodic for any interface thickness, which gives an affirmative answer to a question posed in (J. Cui, J. Hong, and L. Sun, Stochastic Process. Appl. (2021): 55--93). Numerical experiments verify our theoretical results.
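To make the scheme concrete, here is a minimal sketch of the stochastic theta method for a scalar toy SODE with an Allen--Cahn-type monotone drift and nondegenerate noise. The particular drift, diffusion, step size, and the simple fixed-point solve for the implicit stage are illustrative choices, not the paper's setting (which also covers SPDEs and their Galerkin discretizations).

```python
import numpy as np

def stm_path(f, g, x0, h, theta, n_steps, rng, fp_iters=50, tol=1e-12):
    """Stochastic theta method for dX = f(X) dt + g(X) dW (scalar toy case):
    X_{k+1} = X_k + h*[(1-theta)*f(X_k) + theta*f(X_{k+1})] + g(X_k)*dW_k.
    The implicit stage is solved by a simple fixed-point iteration, which is
    adequate for the small step size used here."""
    xs = [x0]
    x = x0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(h))
        rhs = x + (1 - theta) * h * f(x) + g(x) * dw
        y = x  # initial guess for X_{k+1}
        for _ in range(fp_iters):
            y_new = rhs + theta * h * f(y)
            if abs(y_new - y) < tol:
                y = y_new
                break
            y = y_new
        x = y
        xs.append(x)
    return np.array(xs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy monotone (one-sided Lipschitz) drift with superlinear growth,
    # reminiscent of the Allen--Cahn nonlinearity, and nondegenerate noise.
    f = lambda x: x - x**3
    g = lambda x: 1.0 + 0.1 * np.sin(x)   # bounded away from zero
    path = stm_path(f, g, x0=2.0, h=0.02, theta=1.0, n_steps=5000, rng=rng)
    print("time-average of X:", path.mean())
```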

For an input graph $G=(V, E)$ and a source vertex $s \in V$, the \emph{$\alpha$-approximate vertex fault-tolerant distance sensitivity oracle} (\emph{$\alpha$-VSDO}) answers an $\alpha$-approximate distance from $s$ to $t$ in $G-x$ for any query $(x, t)$. It is a data-structure version of the so-called single-source replacement path problem (SSRP). In this paper, we present a new \emph{nearly linear time} algorithm for constructing the $(1 + \epsilon)$-VSDO for any weighted directed graph of $n$ vertices and $m$ edges with integer weights in the range $[1, W]$, and any constant $\epsilon \in (0, 1]$. More precisely, the presented oracle attains $\tilde{O}(m / \epsilon + n /\epsilon^2)$ construction time, $\tilde{O}(n/ \epsilon)$ size, and $\tilde{O}(1/\epsilon)$ query time for any polynomially-bounded $W$. To the best of our knowledge, this is the first non-trivial result for SSRP/VSDO beating the trivial $\tilde{O}(mn)$ computation time for directed graphs with polynomially-bounded edge weights. Such a result has been unknown so far even for the setting of $(1 + \epsilon)$-approximation. It also implies that the known barrier of $\Omega(m\sqrt{n})$ time for the exact SSRP by Chechik and Magen~[ICALP2020] does not apply to the case of approximation.
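For comparison, the trivial $\tilde{O}(mn)$ baseline mentioned above simply reruns a shortest-path computation on $G-x$ for every potential failed vertex $x$. The sketch below does exactly that with Dijkstra on a toy adjacency-list graph; it is the benchmark the near-linear construction improves upon, not the oracle itself.

```python
import heapq

def dijkstra(adj, src, skip=None):
    """Single-source shortest paths in a weighted digraph, ignoring vertex `skip`."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if v == skip:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def trivial_ssrp(adj, s):
    """Exact SSRP baseline: distances from s in G - x for every vertex x != s.
    Returns table[x][t] = dist_{G-x}(s, t); O(n) Dijkstra runs, i.e. ~O(mn) work."""
    vertices = set(adj) | {v for nbrs in adj.values() for v, _ in nbrs}
    return {x: dijkstra(adj, s, skip=x) for x in vertices if x != s}

if __name__ == "__main__":
    # Weighted directed graph as adjacency lists: u -> [(v, w), ...]
    adj = {"s": [("a", 1), ("b", 4)], "a": [("b", 1), ("t", 5)], "b": [("t", 1)]}
    table = trivial_ssrp(adj, "s")
    print(table["a"].get("t"))  # distance from s to t when vertex a fails -> 5
```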

Say we have a collection of independent random variables $X_0, \ldots, X_n$, where $X_0 \sim \mathcal{N}(\mu_0, \sigma_0^2)$, but $X_i \sim \mathcal{N}(\mu, \sigma^2)$ for $1 \leq i \leq n$. We characterize the distribution of $R_0 := 1 + \sum_{i=1}^{n} \mathbf{1}\{X_i \leq X_0\}$, the rank of the random variable whose distribution potentially differs from that of the others -- the odd normal out. We show that $R_0 - 1$ is approximately beta-binomial, an approximation that becomes an equality as $\sigma/\sigma_0$ or $(\mu-\mu_0)/\sigma_0$ becomes large or small. The intra-class correlation of the approximating beta-binomial depends on $\Pr(X_1 \leq X_0)$ and $\Pr(X_1 \leq X_0, X_2 \leq X_0)$. Our approach relies on the conjugacy of the beta distribution for the binomial: $\Phi((X_0-\mu)/\sigma)$ is approximately $\mathrm{Beta}(\alpha(\sigma/\sigma_0, (\mu-\mu_0)/\sigma_0), \beta(\sigma/\sigma_0, (\mu-\mu_0)/\sigma_0))$ for functions $\alpha, \beta > 0$. We also study the distributions of the in-normal ranks. Throughout, simulations corroborate the formulae we derive.
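A quick way to sanity-check the setup is to compare Monte Carlo draws of $R_0$ with the exact mean $\mathbb{E}[R_0] = 1 + n\,\Pr(X_1 \le X_0)$, where $\Pr(X_1 \le X_0) = \Phi\bigl((\mu_0-\mu)/\sqrt{\sigma_0^2+\sigma^2}\bigr)$ since $X_1 - X_0 \sim \mathcal{N}(\mu-\mu_0, \sigma^2+\sigma_0^2)$. The sketch below does this for arbitrary parameter choices; the paper's $\alpha$ and $\beta$ shape functions for the beta-binomial approximation are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def simulate_rank(mu0, sigma0, mu, sigma, n, reps=100_000, seed=0):
    """Monte Carlo draws of R_0 = 1 + #{i >= 1 : X_i <= X_0}."""
    rng = np.random.default_rng(seed)
    x0 = rng.normal(mu0, sigma0, size=reps)
    xi = rng.normal(mu, sigma, size=(reps, n))
    return 1 + (xi <= x0[:, None]).sum(axis=1)

if __name__ == "__main__":
    mu0, sigma0, mu, sigma, n = 0.5, 1.0, 0.0, 2.0, 10
    r0 = simulate_rank(mu0, sigma0, mu, sigma, n)
    # X_1 - X_0 ~ N(mu - mu0, sigma^2 + sigma0^2), hence:
    p1 = norm.cdf((mu0 - mu) / np.sqrt(sigma0**2 + sigma**2))
    print("simulated E[R_0]:", r0.mean())
    print("exact 1 + n*Pr(X_1 <= X_0):", 1 + n * p1)
```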

We consider the problem of sampling discrete field configurations $\phi$ from the Boltzmann distribution $[d\phi] Z^{-1} e^{-S[\phi]}$, where $S$ is the lattice-discretization of the continuous Euclidean action $\mathcal S$ of some quantum field theory. Since such densities arise as the approximation of the underlying functional density $[\mathcal D\phi(x)] \mathcal Z^{-1} e^{-\mathcal S[\phi(x)]}$, we frame the task as an instance of operator learning. In particular, we propose to approximate a time-dependent operator $\mathcal V_t$ whose time integral provides a mapping between the functional distributions of the free theory $[\mathcal D\phi(x)] \mathcal Z_0^{-1} e^{-\mathcal S_{0}[\phi(x)]}$ and of the target theory $[\mathcal D\phi(x)]\mathcal Z^{-1}e^{-\mathcal S[\phi(x)]}$. Whenever a particular lattice is chosen, the operator $\mathcal V_t$ can be discretized to a finite-dimensional, time-dependent vector field $V_t$ which in turn induces a continuous normalizing flow between finite-dimensional distributions over the chosen lattice. This flow can then be trained to be a diffeomorphism between the discretized free and target theories $[d\phi] Z_0^{-1} e^{-S_{0}[\phi]}$ and $[d\phi] Z^{-1}e^{-S[\phi]}$. We run experiments on the $\phi^4$-theory to explore to what extent such operator-based flow architectures generalize to lattice sizes they were not trained on, and we show that pretraining on smaller lattices can lead to a speedup over training only at the target lattice size.
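For reference, one common lattice discretization of the Euclidean $\phi^4$ action (with periodic boundary conditions) is sketched below; the specific parametrization and the parameter values are illustrative assumptions, not necessarily the convention used in the experiments. The point of the snippet is that the same discretized action applies unchanged across lattice sizes, which is what makes transfer between lattice sizes meaningful.

```python
import numpy as np

def phi4_action(phi, m2, lam):
    """Lattice phi^4 action with periodic boundaries (one common convention):
    S[phi] = sum_x [ 1/2 * sum_mu (phi_{x+mu} - phi_x)^2
                     + 1/2 * m2 * phi_x^2 + lam * phi_x^4 ]."""
    kinetic = 0.0
    for mu in range(phi.ndim):
        kinetic += 0.5 * np.sum((np.roll(phi, -1, axis=mu) - phi) ** 2)
    potential = np.sum(0.5 * m2 * phi**2 + lam * phi**4)
    return kinetic + potential

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # The same function applies unchanged to different lattice sizes,
    # which is the kind of size transfer the operator-learning view targets.
    for L in (8, 16):
        phi = rng.normal(size=(L, L))
        print(L, phi4_action(phi, m2=-4.0, lam=8.0))
```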

We propose a contour integral-based algorithm for computing a few singular values of a matrix or a few generalized singular values of a matrix pencil. Mathematically, the generalized singular values of a matrix pencil are the eigenvalues of an equivalent Hermitian-definite matrix pencil, known as the Jordan-Wielandt matrix pencil. However, direct application of the FEAST solver does not fully exploit the structure of this problem. We analyze several projection strategies on the Jordan-Wielandt matrix pencil, and propose an effective and robust scheme tailored to GSVD. Both theoretical analysis and numerical experiments demonstrate that our algorithm achieves rapid convergence and satisfactory accuracy.
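For the ordinary SVD, the Jordan--Wielandt construction is the augmented Hermitian matrix $\begin{pmatrix} 0 & A \\ A^{*} & 0 \end{pmatrix}$, whose nonzero eigenvalues are $\pm$ the singular values of $A$; the GSVD case replaces this with a Hermitian-definite matrix pencil. The short NumPy check below illustrates the SVD version only and is not the paper's contour-integral (FEAST-style) projection scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 4
A = rng.normal(size=(m, n))

# Jordan-Wielandt (augmented) matrix: its nonzero eigenvalues are +/- the
# singular values of A; the GSVD case uses a Hermitian-definite pencil instead.
J = np.block([[np.zeros((m, m)), A], [A.T, np.zeros((n, n))]])

eigs = np.linalg.eigvalsh(J)
svals = np.linalg.svd(A, compute_uv=False)
print(np.sort(eigs[eigs > 1e-10])[::-1])  # positive eigenvalues of J
print(svals)                              # match the singular values of A
```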

We study the influence minimization problem: given a graph $G$ and a seed set $S$, block at most $b$ nodes or $b$ edges so that the influence spread of the seed set is minimized. This is a pivotal yet underexplored aspect of network analytics, which can limit the spread of undesirable phenomena in networks, such as misinformation and epidemics. Given the inherent NP-hardness of the problem under the IC and LT models, previous studies have employed greedy algorithms and Monte Carlo simulations for its resolution. However, existing techniques become cost-prohibitive when applied to large networks, due to the necessity of enumerating all the candidate blockers and computing the decrease in expected spread from blocking each of them. This significantly restricts the practicality and effectiveness of existing methods, especially when prompt decision-making is crucial. In this paper, we propose the AdvancedGreedy algorithm, which utilizes a novel graph sampling technique that incorporates the dominator-tree structure. We find that AdvancedGreedy can achieve a $(1-1/e-\epsilon)$-approximation for the problem under the LT model. Experimental evaluations on real-life networks reveal that our proposed algorithms exhibit a significant enhancement in efficiency, surpassing the state-of-the-art algorithm by three orders of magnitude, while achieving high effectiveness.
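To see why enumerating candidate blockers with Monte Carlo estimation is expensive, here is the plain greedy baseline under the IC model in a few lines: each candidate node is scored by rerunning spread simulations with that node removed. Edge probabilities, the toy graph, and the simulation budget are illustrative; this is the costly baseline, not the paper's AdvancedGreedy with dominator-tree-based sampling.

```python
import random

def ic_spread(adj, seeds, blocked, reps=200, rng=random):
    """Average IC-model spread of `seeds` in the graph with `blocked` nodes removed.
    adj[u] = list of (v, p) edges with activation probability p."""
    total = 0
    for _ in range(reps):
        active = set(seeds) - blocked
        frontier = list(active)
        while frontier:
            nxt = []
            for u in frontier:
                for v, p in adj.get(u, []):
                    if v not in active and v not in blocked and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / reps

def greedy_block_nodes(adj, seeds, b, reps=200):
    """Naive greedy: repeatedly block the non-seed node whose removal most
    reduces the estimated spread. Each round rescans every candidate with
    fresh simulations, which is exactly the cost the paper avoids."""
    blocked = set()
    candidates = {v for u in adj for v, _ in adj[u]} | set(adj)
    candidates -= set(seeds)
    for _ in range(b):
        best = min(candidates - blocked,
                   key=lambda v: ic_spread(adj, seeds, blocked | {v}, reps))
        blocked.add(best)
    return blocked

if __name__ == "__main__":
    adj = {1: [(2, 0.5), (3, 0.5)], 2: [(4, 0.5)], 3: [(4, 0.5)], 4: [(5, 0.5)]}
    print(greedy_block_nodes(adj, seeds=[1], b=1))
```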

The existence of a {\it Nash equilibrium} (NE) can be stated as a formal theorem on a multilinear form, free of game-theory terminology. On the other hand, inspired by this formalism, we state and prove a {\it multilinear minimax theorem}, a generalization of von Neumann's bilinear minimax theorem. As in the bilinear case, the proof is based on relating the underlying optimizations to a primal-dual pair of linear programming problems, albeit more complicated LPs. The theorem together with its proof is of independent interest. Next, we use the theorem to associate to a multilinear form in NE a {\it multilinear minimax relaxation} (MMR), where the primal-dual pair of solutions induces an approximate equilibrium point that provides a nontrivial upper bound on a convex combination of {\it expected payoffs} in any NE solution. In fact, we show that any positive probability vector associated to the players induces a corresponding {\it diagonally-scaled} MMR approximate equilibrium with its associated upper bound. By virtue of the proof of the multilinear minimax theorem, the MMR solution can be computed in polynomial time. On the other hand, it is known that even in bimatrix games NE is {\it PPAD-complete}, a complexity class in NP not known to be in P. The quality of the MMR solution and the efficiency of solving the underlying LPs are the subject of further investigation. However, as shown in a separate article, for a large set of test problems in bimatrix games, not only are the MMR payoffs for both players better than any NE payoffs, but the computing time of MMR also compares favorably with that of the Lemke-Howson algorithm; in large problems the latter algorithm even fails to produce a Nash equilibrium. In summary, solving MMR provides a worthy approximation even if Nash equilibrium were shown to be computable in polynomial time.
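For orientation, the bilinear (von Neumann) special case already shows the LP connection: the value $\max_{x \in S_m} \min_{y \in S_n} x^T A y$ is the optimum of a small linear program, and its dual gives the column player's strategy. The sketch below solves that classical primal LP with SciPy; the paper's multilinear theorem and the MMR construction involve more complicated LPs that are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def game_value(A):
    """Value and maximin strategy of max_x min_y x^T A y over simplices,
    via the standard LP: maximize v s.t. A^T x >= v*1, sum(x) = 1, x >= 0."""
    m, n = A.shape
    # Variables z = (x_1..x_m, v); linprog minimizes, so use objective -v.
    c = np.concatenate([np.zeros(m), [-1.0]])
    # Row j encodes v - (A^T x)_j <= 0.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]])[None, :]
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:m]

if __name__ == "__main__":
    # Matching pennies: value 0, uniform maximin strategy.
    A = np.array([[1.0, -1.0], [-1.0, 1.0]])
    v, x = game_value(A)
    print(v, x)
```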

The bimatrix Nash equilibrium (NE) problem for $m \times n$ real matrices $R$ and $C$, the payoff matrices of the {\it Row} and {\it Column} players, is characterized as follows. Let $\Delta = S_m \times S_n$, where $S_k$ denotes the unit simplex in $\mathbb{R}^k$. For a given point $p=(x,y) \in \Delta$, define $R[p]=x^TRy$ and $C[p]=x^TCy$. Then there exists a nonempty subset $\Delta_* \subset \Delta$, the set of equilibria, such that for any $p_*=(x_*,y_*) \in \Delta_*$, $\max_{p \in \Delta, y=y_*}R[p]=R[p_*]$ and $\max_{p \in \Delta, x=x_* } C[p]=C[p_*]$. Computing a bimatrix NE is {\it PPAD-complete}. Although the von Neumann minimax theorem is a special case of bimatrix NE, we introduce a novel extension termed {\it Trilinear Minimax Relaxation} (TMR) with the following implications. Let $\lambda^*=\min_{\alpha \in S_{2}} \max_{p \in \Delta} (\alpha_1 R[p]+ \alpha_2C[p])$ and $\lambda_*=\max_{p \in \Delta} \min_{\alpha \in S_{2}} (\alpha_1 R[p]+ \alpha_2C[p])$. Then $\lambda^* \geq \lambda_*$. Moreover, $\lambda^*$ is computable as a linear program in $O(mn)$ time, and it satisfies $\max_{p_* \in \Delta_*}\min \{R[p_*], C[p_*]\} \leq \lambda^*$, meaning that in any Nash equilibrium it is not possible for both players' payoffs to exceed $\lambda^*$. Furthermore, $\lambda^*=\lambda_*$ if and only if there exists $p^* \in \Delta$ such that $\lambda^*= \min\{R[p^*], C[p^*]\}$; such a $p^*$ serves as an approximate Nash equilibrium. We analyze the cases where such a $p^*$ exists and is computable. Even when $\lambda^* > \lambda_*$, we derive approximate Nash equilibria. In summary, the aforementioned properties of TMR and its efficient computability underscore its significance and relevance for Nash equilibrium, irrespective of the computational complexity associated with bimatrix Nash equilibrium. Finally, we extend TMR to scenarios involving three or more players.
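Under my reading of the definition above, computing $\lambda^*$ reduces to a two-variable LP: for fixed $\alpha$, the maximum of the bilinear form over $\Delta = S_m \times S_n$ is attained at a vertex pair, so $\lambda^* = \min_{\alpha_1 \in [0,1]} \max_{i,j}\bigl(\alpha_1 R_{ij} + (1-\alpha_1) C_{ij}\bigr)$, which explains the $O(mn)$ bound. The sketch below sets up this LP with SciPy; the payoff matrices are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def tmr_upper_bound(R, C):
    """lambda^* = min_{a in [0,1]} max_{i,j} (a*R_ij + (1-a)*C_ij).
    (For fixed alpha, the max of the bilinear form over S_m x S_n is attained
    at a vertex pair, so the inner max runs over matrix entries.)
    LP in variables (a, t): minimize t s.t. a*R_ij + (1-a)*C_ij <= t for all i,j."""
    r = R.ravel()
    c_ = C.ravel()
    # Constraint rows: a*(R_ij - C_ij) - t <= -C_ij
    A_ub = np.column_stack([r - c_, -np.ones_like(r)])
    b_ub = -c_
    obj = np.array([0.0, 1.0])          # minimize t
    bounds = [(0.0, 1.0), (None, None)]
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[1], res.x[0]

if __name__ == "__main__":
    R = np.array([[3.0, 0.0], [5.0, 1.0]])   # prisoner's-dilemma-style payoffs
    C = np.array([[3.0, 5.0], [0.0, 1.0]])
    lam, alpha1 = tmr_upper_bound(R, C)
    print("lambda* =", lam, "alpha =", (alpha1, 1 - alpha1))
```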

In multi-turn dialog, utterances do not always take the full form of sentences \cite{Carbonell1983DiscoursePA}, which naturally makes understanding the dialog context more difficult. However, it is essential to fully grasp the dialog context to generate a reasonable response. Hence, in this paper, we propose to improve the response generation performance by examining the model's ability to answer a reading comprehension question, where the question is focused on the omitted information in the dialog. Inspired by the multi-task learning scheme, we propose a joint framework that unifies these two tasks, sharing the same encoder to extract the common and task-invariant features, with different decoders to learn task-specific features. To better fuse information from the question and the dialog history in the encoding part, we propose to augment the Transformer architecture with a memory updater, which is designed to selectively store and update the dialog history information so as to support downstream tasks. For the experiments, we employ human annotators to write and examine a large-scale dialog reading comprehension dataset. Extensive experiments are conducted on this dataset, and the results show that the proposed model brings substantial improvements over several strong baselines on both tasks. In this way, we demonstrate that reasoning can indeed help produce better responses, and vice versa. We release our large-scale dataset for further research.
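Since the exact design of the memory updater is not spelled out above, the following is only a plausible minimal sketch: memory slots attend over the encoded current turn with cross-attention, and a learned sigmoid gate decides how much of each slot to keep versus overwrite. Module names, slot counts, and dimensions are hypothetical.

```python
import torch
import torch.nn as nn

class MemoryUpdater(nn.Module):
    """Gated cross-attention memory update (a sketch, not the paper's exact module).
    Memory slots attend over the encoded current turn, and a sigmoid gate decides
    how much of each slot to overwrite with the attended update."""
    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, memory: torch.Tensor, turn_hidden: torch.Tensor) -> torch.Tensor:
        # memory: (batch, n_slots, d_model); turn_hidden: (batch, seq_len, d_model)
        update, _ = self.attn(query=memory, key=turn_hidden, value=turn_hidden)
        g = torch.sigmoid(self.gate(torch.cat([memory, update], dim=-1)))
        return self.norm(g * memory + (1.0 - g) * update)

if __name__ == "__main__":
    updater = MemoryUpdater(d_model=64)
    mem = torch.zeros(2, 8, 64)      # 8 memory slots
    turn = torch.randn(2, 20, 64)    # encoded utterance of length 20
    print(updater(mem, turn).shape)  # torch.Size([2, 8, 64])
```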
