For a given elliptic curve $E$ over a finite local ring, we denote by $E^{\infty}$ its subgroup at infinity. Every point $P \in E^{\infty}$ can be described solely in terms of its $x$-coordinate $P_x$, which can therefore be used to parameterize all of its multiples $nP$. We refer to the coefficient of $(P_x)^i$ in the parameterization of $(nP)_x$ as the $i$-th multiplication polynomial. We show that this coefficient is a degree-$i$ rational polynomial in $n$ without a constant term. We also prove that no primes greater than $i$ may appear in the denominators of its terms. As a consequence, for every finite field $\mathbb{F}_q$ and any $k\in\mathbb{N}^*$, we prescribe the group structure of a generic elliptic curve defined over $\mathbb{F}_q[X]/(X^k)$, and we show that the ECDLP on $E^{\infty}$ may be efficiently solved.
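
In our notation (introduced here only for illustration; the display paraphrases the statements above rather than quoting the paper), the results say that

$$(nP)_x \;=\; \sum_{i \ge 1} \psi_i(n)\,(P_x)^i, \qquad \psi_i \in \mathbb{Q}[n], \quad \deg \psi_i = i, \quad \psi_i(0) = 0,$$

with every prime occurring in the denominators of $\psi_i$ bounded by $i$. In particular, $\psi_1(n) = a_1 n$ for some $a_1 \in \mathbb{Z}$ (no primes at all may divide its denominators), so $(nP)_x \equiv a_1 n\, P_x \pmod{(P_x)^2}$: to first order, the map $n \mapsto (nP)_x$ is linear in $n$, which is the sort of structure that makes the ECDLP on $E^{\infty}$ amenable to efficient solution.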


We study online convex optimization where the possible actions are trace-one elements in a symmetric cone, generalizing the extensively studied experts setup and its quantum counterpart. Symmetric cones provide a unifying framework for some of the most important optimization models, including linear, second-order cone, and semidefinite optimization. Using tools from the field of Euclidean Jordan algebras, we introduce the Symmetric-Cone Multiplicative Weights Update (SCMWU), a projection-free algorithm for online optimization over the trace-one slice of an arbitrary symmetric cone. We show that SCMWU is equivalent to Follow-the-Regularized-Leader and Online Mirror Descent with the symmetric-cone negative entropy as regularizer. Using this structural result, we show that SCMWU is a no-regret algorithm, and we verify our theoretical results with extensive experiments. Our results unify and generalize the analysis of the Multiplicative Weights Update method over the probability simplex and of the Matrix Multiplicative Weights Update method over the set of density matrices.
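
Since SCMWU is presented as a common generalization of MWU over the simplex and Matrix MWU over density matrices, a minimal sketch of the simplex special case may help fix ideas. This is our own toy code (the learning rate `eta` and the loss format are our choices, not the paper's):

```python
import numpy as np

def scmwu_simplex(losses, eta):
    """Multiplicative Weights Update over the probability simplex: the
    special case of SCMWU where the symmetric cone is the nonnegative
    orthant, whose trace-one slice is the simplex.
    losses: array of shape (T, n) with one loss vector per round."""
    T, n = losses.shape
    w = np.ones(n) / n                    # uniform initial point
    total = 0.0
    for t in range(T):
        total += w @ losses[t]            # incur the linear loss <w, l_t>
        w = w * np.exp(-eta * losses[t])  # multiplicative (exponentiated) update
        w /= w.sum()                      # renormalize to trace one
    return total

# Example: scmwu_simplex(np.random.default_rng(0).uniform(size=(100, 5)), eta=0.1)
```

Roughly speaking, the symmetric-cone version replaces the entrywise exponential and the normalization by their Jordan-algebraic analogues (the cone's exponential map and trace).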

In this paper, we construct an efficient, linear, and fully decoupled finite difference scheme for wormhole propagation with a heat transmission process on staggered grids, which only requires solving a sequence of linear elliptic equations at each time step. We first derive positivity-preserving properties for the discrete porosity and its difference quotient in time, and then rigorously obtain optimal error estimates for the velocity, pressure, concentration, porosity, and temperature in different norms by establishing several auxiliary lemmas for the highly coupled nonlinear system. Numerical experiments in two and three dimensions are provided to verify our theoretical results and illustrate the capabilities of the constructed method.
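
The scheme itself is not reproduced here; as a generic illustration of its basic building block, namely one linear elliptic solve per time step, the following toy snippet (our stand-in, not the authors' staggered-grid discretization) solves a 1D Poisson problem by finite differences:

```python
import numpy as np

def poisson_1d(f, n):
    """Solve -u'' = f on (0, 1) with u(0) = u(1) = 0 using the standard
    second-order finite difference Laplacian on n interior grid points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2   # tridiagonal stiffness matrix
    return x, np.linalg.solve(A, f(x))

# Example with exact solution sin(pi*x):
# x, u = poisson_1d(lambda x: np.pi**2 * np.sin(np.pi * x), 99)
```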

A graph $H$ is a cluster graph if $H$ is a vertex-disjoint union of cliques. Abu-Khzam (2017) introduced the $(a,d)$-Cluster Editing problem, where for fixed natural numbers $a,d$, given a graph $G$ and vertex weights $a^*:\ V(G)\rightarrow \{0,1,\dots, a\}$ and $d^*:\ V(G)\rightarrow \{0,1,\dots, d\}$, we are to decide whether $G$ can be turned into a cluster graph by deleting at most $d^*(v)$ edges incident to every $v\in V(G)$ and adding at most $a^*(v)$ edges incident to every $v\in V(G)$. Results by Komusiewicz and Uhlmann (2012) and Abu-Khzam (2017) provided a dichotomy of complexity (in P or NP-complete) of $(a,d)$-Cluster Editing for all pairs $a,d$ apart from $a=d=1$. Abu-Khzam (2017) conjectured that $(1,1)$-Cluster Editing is in P. We resolve Abu-Khzam's conjecture in the affirmative by (i) providing a series of five polynomial-time reductions to $C_3$-free and $C_4$-free graphs of maximum degree at most 3, and (ii) designing a polynomial-time algorithm for solving $(1,1)$-Cluster Editing on $C_3$-free and $C_4$-free graphs of maximum degree at most 3.
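
For concreteness, here is a small check of the target property, namely that a graph is a cluster graph (a vertex-disjoint union of cliques); this is our own illustration of the definition, not part of the algorithm in the paper:

```python
from itertools import combinations

def is_cluster_graph(adj):
    """Return True iff every connected component is a clique.
    adj: dict mapping each vertex to the set of its neighbors."""
    seen = set()
    for v in adj:
        if v in seen:
            continue
        comp, stack = {v}, [v]          # collect the component of v by DFS
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in comp:
                    comp.add(w)
                    stack.append(w)
        seen |= comp
        # A component is a clique iff every pair inside it is adjacent.
        if any(b not in adj[a] for a, b in combinations(comp, 2)):
            return False
    return True
```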

Since Jacobson [FOCS89] initiated the investigation of succinct graph encodings 35 years ago, there has been a long list of results on balancing the generality of the class, the speed, the succinctness of the encoding, and the query support. Let Cn denote the set consisting of the graphs in a class C with at most n vertices. A class C is nontrivial if the information-theoretically minimum number log |Cn| of bits needed to distinguish the members of Cn is Omega(n). An encoding scheme based upon a single class C is C-opt if it takes a graph G of Cn and produces in deterministic O(n) time an encoded string of at most log |Cn| + o(log |Cn|) bits from which G can be recovered in O(n) time. Despite the extensive efforts in the literature, trees and general graphs were the only nontrivial classes C admitting C-opt encoding schemes that support the degree query in O(1) time. Basing an encoding scheme upon a single class ignores the possibility of a shorter encoded string that exploits additional properties of the input graph. To leverage the inherent structures of individual graphs, we propose to base an encoding scheme upon multiple classes: an encoding scheme based upon a family F of classes, accepting all graphs in the union UF of the classes in F, is F-opt if it is C-opt for each C in F. Having a C-opt encoding scheme for each C in F does not guarantee an F-opt encoding scheme. Under this more stringent criterion, we present an F-opt encoding scheme for a family F of an infinite number of classes such that UF comprises all graphs of bounded Hadwiger number. F consists of the nontrivial quasi-monotone classes of k-clique-minor-free graphs for each positive integer k. Our F-opt scheme supports queries of degree, adjacency, neighbor-listing, and bounded-distance shortest path in O(1) time per output. We thereby broaden the graph classes admitting opt encoding schemes that also efficiently support fundamental queries.
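
As a point of reference for the notion of a succinct encoding with query support (a textbook construction, not the paper's F-opt scheme): a rooted tree on n vertices can be stored in 2n bits as a balanced-parentheses string, and queries such as degree can be answered from the string; genuine succinct data structures add o(n)-bit auxiliary indexes to bring such queries down to O(1) time.

```python
def bp_encode(children, root=0):
    """2n-bit balanced-parentheses encoding of a rooted tree.
    children: dict mapping a vertex to the list of its children."""
    out = []
    def walk(v):
        out.append('(')
        for c in children.get(v, []):
            walk(c)
        out.append(')')
    walk(root)
    return ''.join(out)

def bp_degree(bp, i):
    """Number of children of the node whose opening '(' is at index i.
    Linear scan for clarity; succinct structures answer this in O(1)."""
    assert bp[i] == '('
    depth = count = 0
    for ch in bp[i + 1:]:
        if ch == '(':
            if depth == 0:
                count += 1       # a new child subtree begins
            depth += 1
        else:
            if depth == 0:
                break            # this ')' closes node i itself
            depth -= 1
    return count
```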

We consider the problem $(\mathrm{P})$ of fitting $n$ standard Gaussian random vectors in $\mathbb{R}^d$ to the boundary of a centered ellipsoid, as $n, d \to \infty$. This problem is conjectured to have a sharp feasibility transition: for any $\varepsilon > 0$, if $n \leq (1 - \varepsilon) d^2 / 4$ then $(\mathrm{P})$ has a solution with high probability, while $(\mathrm{P})$ has no solutions with high probability if $n \geq (1 + \varepsilon) d^2 /4$. So far, only a trivial bound $n \geq d^2 / 2$ is known on the negative side, while the best results on the positive side assume $n \leq d^2 / \mathrm{polylog}(d)$. In this work, we improve over previous approaches using a key result of Bartl & Mendelson on the concentration of Gram matrices of random vectors under mild assumptions on their tail behavior. This allows us to give a simple proof that $(\mathrm{P})$ is feasible with high probability when $n \leq d^2 / C$, for a (possibly large) constant $C > 0$.
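
To make the problem concrete (our own sketch, not necessarily the construction analyzed in the paper): each constraint x_i^T A x_i = 1 is linear in the entries of the symmetric matrix A, so a natural candidate witness is the least-squares solution of this linear system, checked afterwards for positive semidefiniteness.

```python
import numpy as np

def ellipsoid_fit(X):
    """Try to fit the rows of X (an n x d matrix of sample vectors) to the
    boundary of a centered ellipsoid {x : x^T A x = 1} with A symmetric PSD.
    Uses x^T A x = <A, x x^T>, so each constraint is linear in vec(A)."""
    n, d = X.shape
    M = np.stack([np.outer(x, x).ravel() for x in X])   # n x d^2 system
    a, *_ = np.linalg.lstsq(M, np.ones(n), rcond=None)  # min-norm solution
    A = a.reshape(d, d)
    A = (A + A.T) / 2.0                                 # symmetrize
    feasible = np.linalg.eigvalsh(A).min() >= -1e-9     # PSD check
    return A, feasible

# Example: A, ok = ellipsoid_fit(np.random.default_rng(0).normal(size=(50, 20)))
```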

This study focuses on (traditional and unsourced) multiple-access communication over a single transmit and multiple ($M$) receive antennas. We assume full or partial channel state information (CSI) at the receiver. It is known that to fully achieve the fundamental limits (even asymptotically) the decoder needs to jointly estimate all user codewords, which is computationally infeasible to do directly. We propose a low-complexity solution, termed coded orthogonal modulation multiple-access (COMMA), in which users first encode their messages via a long (multi-user-interference-aware) outer code operating over a $q$-ary alphabet. These symbols are then modulated onto $q$ orthogonal waveforms. At the decoder, a multiple-measurement-vector approximate message passing (MMV-AMP) algorithm estimates several candidates (out of $q$) for each user, with the remaining uncertainty resolved by the single-user outer decoders. Numerically, we show that COMMA outperforms a standard solution based on linear multiuser detection (MUD) with Gaussian signaling. Theoretically, we derive bounds and scaling laws for $M$, the number of users $K_a$, the SNR, and $q$, allowing us to quantify the trade-off between receive antennas and spectral efficiency. The orthogonal signaling scheme is applicable to unsourced random access and, with chirp sequences as the basis, allows for low-complexity fast Fourier transform (FFT) based receivers that are resilient to frequency and timing offsets.
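
As a toy illustration of the orthogonal-modulation front end only (a single-user, noncoherent caricature with made-up parameters; the actual multi-user receiver is the MMV-AMP algorithm described above):

```python
import numpy as np

rng = np.random.default_rng(0)
q, M, snr = 64, 8, 0.5            # alphabet size, receive antennas, per-antenna SNR

def transmit(symbol):
    """Map a q-ary symbol to one of q orthogonal waveforms; here the standard
    basis vectors stand in for the orthogonal (e.g. chirp) waveforms."""
    x = np.zeros(q)
    x[symbol] = 1.0
    return x

s = rng.integers(q)
h = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)   # flat fading
noise = (rng.normal(size=(q, M)) + 1j * rng.normal(size=(q, M))) / np.sqrt(2)
Y = np.sqrt(snr) * np.outer(transmit(s), h) + noise               # q x M observation

# Noncoherent detection: pick the waveform with the most energy across antennas.
s_hat = int(np.argmax(np.sum(np.abs(Y) ** 2, axis=1)))
print(s == s_hat)
```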

In this paper, we devise a scheme for kernelizing, in sublinear space and polynomial time, various problems on planar graphs. The scheme exploits planarity to ensure that the resulting algorithms run in polynomial time and use O((sqrt(n) + k) log n) bits of space, where n is the number of vertices in the input instance and k is the intended solution size. As examples, we apply the scheme to Dominating Set and Vertex Cover. For Dominating Set, we also show that a well-known kernelization algorithm due to Alber et al. (JACM 2004) can be carried out in polynomial time and space O(k log n). Along the way, we devise restricted-memory procedures for computing region decompositions and approximating the aforementioned problems, which might be of independent interest.
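
For readers unfamiliar with kernelization, the classic Buss kernel for Vertex Cover illustrates what such an algorithm computes; this textbook kernel is our own illustration and is not the authors' sublinear-space procedure:

```python
def buss_kernel(edges, k):
    """Classic Buss kernelization for Vertex Cover.
    Returns (reduced edge list, remaining budget), or None if no vertex
    cover of size at most k can exist."""
    while True:
        deg = {}
        for u, v in edges:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        high = {v for v, d in deg.items() if d > k}
        if not high:
            break
        # A vertex of degree > k must be in every cover of size <= k.
        edges = [(u, v) for u, v in edges if u not in high and v not in high]
        k -= len(high)
        if k < 0:
            return None
    # In a yes-instance, k cover vertices of degree <= k cover all edges.
    return (edges, k) if len(edges) <= k * k else None
```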

Time-dependent scheduling with linear deterioration involves determining when to execute jobs whose processing times grow as their start is delayed. Each job i is associated with a release time r_i and a processing time function p_i(s_i) = alpha_i + beta_i*s_i, where alpha_i, beta_i > 0 are constants and s_i is the job's start time. In this setting, the approximability of both the single-machine minimum makespan and the total completion time problems remains open. Here, we take a step forward by developing new bounds and approximation results for the interesting special case of the problems with uniform deterioration, i.e., beta_i = beta for each i. The key contribution is an O(1+1/beta)-approximation algorithm for the makespan problem and an O(1+1/beta^2)-approximation algorithm for the total completion time problem. Further, we propose greedy constant-factor approximation algorithms for instances with beta = O(1/n) and beta = Omega(n), where n is the number of jobs. Our analysis is based on a new approach for comparing computed and optimal schedules via bounding pseudomatchings.
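
The compounding effect of deterioration is easy to see in code: a job started at time s completes at alpha_i + (1 + beta)s, so every unit of delay is amplified by a factor of (1 + beta). A minimal makespan evaluator for a given job order (our own sketch, not an algorithm from the paper):

```python
def makespan(order, r, alpha, beta):
    """Makespan of a single-machine schedule under uniform linear
    deterioration: job i started at time s takes alpha[i] + beta * s,
    hence completes at alpha[i] + (1 + beta) * s."""
    t = 0.0
    for i in order:
        s = max(t, r[i])                  # wait for the job's release time
        t = alpha[i] + (1.0 + beta) * s   # = s + (alpha[i] + beta * s)
    return t

# Example: makespan([0, 1], r=[0.0, 1.0], alpha=[2.0, 1.0], beta=0.5)
```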

Let $G$ be an undirected graph, and $s,t$ distinguished vertices of $G$. A minimal $s,t$-separator is an inclusion-wise minimal vertex set whose removal places $s$ and $t$ in distinct connected components. We present an algorithm for listing the minimal $s,t$-separators of a graph in non-decreasing order of cardinality, with polynomial delay. This problem finds applications in various algorithms parameterized by treewidth, including query evaluation in relational databases, probabilistic inference, and many more. In the process, we prove several results that are of independent interest. We establish a new island of tractability for the intensively studied 2-disjoint connected subgraphs problem, which is NP-complete even for restricted graph classes that include planar graphs, and we prove new characterizations of minimal $s,t$-separators. Ours is the first ranked enumeration algorithm for minimal separators whose delay is polynomial in the size of the input graph.
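
A classical building block in this area (shown for intuition; it is not the ranked-enumeration algorithm itself) is the closure operation that shrinks an arbitrary $s,t$-separator to a minimal one by two sweeps, one from each terminal:

```python
def minimalize(adj, S, s, t):
    """Shrink an s,t-separator S to an inclusion-wise minimal one.
    adj: dict vertex -> set of neighbors; S: a set that separates s from t.
    Keep only the vertices of S with a neighbor in the component of s,
    then only those with a neighbor in the resulting component of t."""
    def component(source, removed):
        comp, stack = {source}, [source]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in removed and w not in comp:
                    comp.add(w)
                    stack.append(w)
        return comp

    for side in (s, t):
        comp = component(side, S)
        S = {v for v in S if adj[v] & comp}   # vertices of S seen from this side
    return S
```

After both sweeps, every remaining vertex has a neighbor in both the $s$-side and the $t$-side component, which is exactly the standard characterization of a minimal $s,t$-separator.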

The volume function $V(t)$ of a compact set $S \subset \mathbb{R}^d$ is just the Lebesgue measure of the set of points within distance at most $t$ of $S$. According to some classical results in geometric measure theory, the volume function turns out to be a polynomial, at least on a finite interval, under a quite intuitive, easy-to-interpret sufficient condition (called ``positive reach'') which can be seen as an extension of the notion of convexity. However, many other simple sets, not fulfilling the positive reach condition, also have a polynomial volume function. To our knowledge, there is no general, simple geometric description of such sets. Still, the polynomial character of $V(t)$ has some relevant consequences, since the polynomial coefficients carry useful geometric information. In particular, the constant term is the volume of $S$ and the first-order coefficient is the boundary measure (in Minkowski's sense). This paper focuses on sets whose volume function is polynomial on some interval starting at zero, whose length (which we call the ``polynomial reach'') might be unknown. Our main goal is to approximate this polynomial reach by statistical means, using only a large enough random sample of points inside $S$. The practical motivation is simple: when the value of the polynomial reach, or rather a lower bound for it, is approximately known, the polynomial coefficients can be estimated from the sample points by standard methods in polynomial approximation. As a result, we get a quite general method to estimate the volume and boundary measure of the set, relying only on an inner sample of points and not requiring the use of any smoothing parameter. This paper explores the theoretical and practical aspects of this idea.
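
A toy numerical illustration of the idea (ours; it uses an explicit distance oracle for the unit disk rather than the paper's inner-sample estimators): estimate $V(t)$ by Monte Carlo on a grid of $t$ values and fit a polynomial, recovering the volume as the constant term and the boundary measure as the first-order coefficient. For the unit disk, $V(t) = \pi(1+t)^2 = \pi + 2\pi t + \pi t^2$ exactly.

```python
import numpy as np

rng = np.random.default_rng(1)

def vol_enlargement(t, n_mc=200_000, box=3.0):
    """Monte Carlo estimate of V(t) = vol{x : dist(x, S) <= t} for S the
    unit disk in R^2, sampling uniformly from [-box, box]^2."""
    pts = rng.uniform(-box, box, size=(n_mc, 2))
    dist = np.maximum(np.linalg.norm(pts, axis=1) - 1.0, 0.0)
    return (2.0 * box) ** 2 * np.mean(dist <= t)

ts = np.linspace(0.0, 0.5, 11)
vols = np.array([vol_enlargement(t) for t in ts])
# Coefficients in increasing order: expect roughly [pi, 2*pi, pi], i.e.
# the volume of S, its boundary measure, and the curvature term.
print(np.polynomial.polynomial.polyfit(ts, vols, 2))
```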
