
Let $\mathbb{N}$ be the set of positive integers. A radio labeling of a graph $G$ is a mapping $\varphi : V(G) \rightarrow \mathbb{N} \cup \{0\}$ such that the inequality $|\varphi(u)-\varphi(v)| \geq diam(G) + 1 - d(u,v)$ holds for every pair of distinct vertices $u,v$ of $G$, where $diam(G)$ and $d(u,v)$ denote the diameter of $G$ and the distance between $u$ and $v$ in $G$, respectively. The radio number $rn(G)$ of $G$ is the smallest number $k$ such that $G$ has a radio labeling $\varphi$ with $\max\{\varphi(v) : v \in V(G)\} = k$. Das et al. [Discrete Math. $\mathbf{340}$ (2017) 855-861] gave a technique for finding a lower bound on the radio number of graphs. In [Algorithms and Discrete Applied Mathematics: CALDAM 2019, Lecture Notes in Computer Science $\mathbf{11394}$, Springer, Cham, 2019, 161-173], Bantva modified this technique to obtain an improved lower bound on the radio number of graphs and gave a necessary and sufficient condition for achieving it. In this paper, one more useful necessary and sufficient condition for achieving the improved lower bound on the radio number of graphs is given. Using this result, the radio number of the Cartesian product of a path and a wheel graph is determined.
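To make the definition concrete, here is a minimal sketch (not from the paper) that brute-force checks the radio condition $|\varphi(u)-\varphi(v)| \geq diam(G) + 1 - d(u,v)$ on a small graph; the use of networkx and the example labelings of the 4-cycle are our own illustrative choices.

```python
# A minimal sketch (not from the paper): brute-force check of the radio
# condition |phi(u) - phi(v)| >= diam(G) + 1 - d(u, v) for all distinct
# u, v. The graph and labelings below are our own illustrative choices.
import networkx as nx

def is_radio_labeling(G, phi):
    """Return True iff phi is a valid radio labeling of G."""
    diam = nx.diameter(G)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    nodes = list(G.nodes)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if abs(phi[u] - phi[v]) < diam + 1 - dist[u][v]:
                return False
    return True

# C_4 has diameter 2: adjacent vertices need labels at least 2 apart,
# antipodal vertices at least 1 apart.
G = nx.cycle_graph(4)
print(is_radio_labeling(G, {0: 0, 1: 2, 2: 5, 3: 7}))   # True
print(is_radio_labeling(G, {0: 0, 1: 1, 2: 2, 3: 3}))   # False
```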

Related Content

As set systems, hypergraphs are omnipresent and have various representations. In a geometric representation of a hypergraph $H=(V,E)$, each vertex $v\in V$ is associated with a point $p_v\in \mathbb{R}^d$ and each hyperedge $e\in E$ is associated with a connected set $s_e\subset \mathbb{R}^d$ such that $\{p_v\mid v\in V\}\cap s_e=\{p_v\mid v\in e\}$ for all $e\in E$. We say that a given hypergraph $H$ is representable by some (infinite) family $\mathcal{F}$ of sets in $\mathbb{R}^d$ if there exist $P\subset \mathbb{R}^d$ and $S \subseteq \mathcal{F}$ such that $(P,S)$ is a geometric representation of $H$. For a family $\mathcal{F}$, we define RECOGNITION($\mathcal{F}$) as the problem of deciding whether a given hypergraph is representable by $\mathcal{F}$. It is known that the RECOGNITION problem is ER-hard for halfspaces in $\mathbb{R}^d$. We study the families of balls and ellipsoids in $\mathbb{R}^d$, as well as other convex sets, and show that their RECOGNITION problems are also ER-complete. This means that these recognition problems are equivalent to deciding whether a multivariate system of polynomial equations with integer coefficients has a real solution.
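As an illustration of the definition, the following sketch (our own toy example, not from the paper) verifies the condition $\{p_v\mid v\in V\}\cap s_e=\{p_v\mid v\in e\}$ for a point set and a family of balls in the plane; all coordinates and radii are made up.

```python
# Verify that points P and balls S form a geometric representation of a
# hypergraph H: for each hyperedge e, the points inside the ball s_e must
# be exactly the points of the vertices in e. All data here are made up.
import numpy as np

def represents(points, balls, hyperedges):
    """points: {v: np.array}; balls: {e: (center, radius)}; hyperedges: {e: set}."""
    for e, (c, r) in balls.items():
        inside = {v for v, p in points.items() if np.linalg.norm(p - c) <= r}
        if inside != hyperedges[e]:
            return False
    return True

points = {0: np.array([0.0, 0.0]), 1: np.array([2.0, 0.0]), 2: np.array([5.0, 0.0])}
balls = {"e1": (np.array([1.0, 0.0]), 1.5), "e2": (np.array([5.0, 0.0]), 0.5)}
print(represents(points, balls, {"e1": {0, 1}, "e2": {2}}))   # True
```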

Given a boolean predicate $\Pi$ on labeled networks (e.g., proper coloring, leader election, etc.), a self-stabilizing algorithm for $\Pi$ is a distributed algorithm that can start from any initial configuration of the network (i.e., every node has an arbitrary value assigned to each of its variables), and eventually converge to a configuration satisfying $\Pi$. It is known that leader election does not have a deterministic self-stabilizing algorithm using a constant-size register at each node, i.e., for some networks, some of their nodes must have registers whose sizes grow with the size $n$ of the networks. On the other hand, it is also known that leader election can be solved by a deterministic self-stabilizing algorithm using registers of $O(\log \log n)$ bits per node in any $n$-node bounded-degree network. We show that this latter space complexity is optimal. Specifically, we prove that every deterministic self-stabilizing algorithm solving leader election must use registers of $\Omega(\log \log n)$ bits per node in some $n$-node networks. In addition, we show that our lower bounds go beyond leader election and apply to all problems that cannot be solved by anonymous algorithms.
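Self-stabilization itself is easy to demonstrate. The sketch below simulates Dijkstra's classic K-state token ring, a standard textbook example and not the leader-election algorithm studied here: started from an arbitrary configuration, the ring converges to a legitimate configuration with exactly one token.

```python
# Dijkstra's K-state self-stabilizing token ring (textbook illustration,
# not the paper's algorithm). With K >= n, any initial configuration
# converges to one in which exactly one node holds a token (privilege).
import random

def step(state, K):
    """One sweep of the ring under a sequential (central-daemon) schedule."""
    n = len(state)
    for i in range(n):
        if i == 0:
            if state[0] == state[n - 1]:      # node 0 is privileged
                state[0] = (state[0] + 1) % K
        elif state[i] != state[i - 1]:        # node i > 0 is privileged
            state[i] = state[i - 1]
    return state

def count_tokens(state):
    n = len(state)
    tokens = 1 if state[0] == state[n - 1] else 0
    tokens += sum(1 for i in range(1, n) if state[i] != state[i - 1])
    return tokens

n, K = 6, 7
state = [random.randrange(K) for _ in range(n)]   # arbitrary initial values
for _ in range(50):                               # more than enough rounds
    state = step(state, K)
print(count_tokens(state))                        # 1 after stabilization
```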

Neural networks that satisfy invariance with respect to input permutations have been widely studied in the machine learning literature. However, in many applications, only a subset of all input permutations is of interest. For heterogeneous graph data, one can focus on permutations that preserve node types. We fully characterize linear layers invariant to such permutations. We verify experimentally that implementing these layers in graph neural network architectures allows learning important node interactions more effectively than existing techniques. We show that the dimension of the space of these layers is given by a generalization of Bell numbers, extending the work of Maron et al. (2019). We further narrow the invariant network design space by addressing a question about the sizes of tensor layers necessary for function approximation on graph data. Our findings suggest that function approximation on a graph with $n$ nodes can be done with tensors of sizes $\leq n$, which is tighter than the best-known bound $\leq n(n-1)/2$. For $d \times d$ image data with translation symmetry, our methods give a tight upper bound of $2d - 1$ (instead of $d^{4}$) on the sizes of invariant tensor generators via a surprising connection to Davenport constants.
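To illustrate the kind of characterization involved, the following sketch (our own, with made-up node types) enumerates the weight-sharing orbits of a permutation-equivariant linear map $\mathbb{R}^n \to \mathbb{R}^n$ under type-preserving permutations: entries $W_{ij}$ share a weight exactly when $(\mathrm{type}(i), \mathrm{type}(j), [i=j])$ coincide, recovering the familiar two-parameter layer of Maron et al. (2019) when there is a single type.

```python
# Enumerate the weight-sharing orbits of a linear layer R^n -> R^n that is
# equivariant under type-preserving permutations (assuming each type has at
# least two nodes). Two entries W[i, j] share a weight iff the triple
# (type(i), type(j), i == j) coincides; the number of orbits is the
# dimension of the space of such layers (2 for a single type).
import numpy as np

def orbit_basis(types):
    """types: list assigning a node type to each of the n nodes."""
    n = len(types)
    orbit_of = {}
    for i in range(n):
        for j in range(n):
            key = (types[i], types[j], i == j)
            orbit_of.setdefault(key, []).append((i, j))
    basis = []
    for cells in orbit_of.values():
        B = np.zeros((n, n))
        for i, j in cells:
            B[i, j] = 1.0
        basis.append(B)
    return basis

basis = orbit_basis([0, 0, 0, 1, 1])   # 3 nodes of type 0, 2 of type 1
print(len(basis))                      # 6 = k^2 + k shared weights for k = 2 types
```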

We introduce two new classes of covering codes in graphs for every positive integer $r$. These new codes are called local $r$-identifying and local $r$-locating-dominating codes, and they are derived from $r$-identifying and $r$-locating-dominating codes, respectively. We study the sizes of optimal local 1-identifying codes in binary hypercubes. We obtain lower and upper bounds that are asymptotically tight. Together, the bounds show that the cost of changing covering codes into local 1-identifying codes is negligible. For some small $n$, optimal constructions are obtained. Moreover, the upper bound is obtained by a linear code construction. We also study the densities of optimal local 1-identifying codes and local 1-locating-dominating codes in the infinite square grid, the hexagonal grid, the triangular grid, and the king grid. We prove that seven out of eight of our constructions have optimal densities.
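For intuition, the sketch below brute-force checks identifying-code properties in small hypercubes $Q_n$. It uses the standard definition ($I(v) = N[v]\cap C$ must be nonempty and pairwise distinct); for the local variant we assume, as the name suggests, that distinctness is required only for adjacent pairs, so the paper's precise definitions should be consulted before relying on this reading.

```python
# Brute-force check of identifying-code properties in the hypercube Q_n.
# Standard definition: I(v) = N[v] ∩ C must be nonempty and pairwise
# distinct. For the *local* variant we assume (as the name suggests) that
# distinctness is required only for adjacent pairs -- an assumption, not
# a statement of the paper's definition.
from itertools import combinations

def closed_neighborhood(v, n):
    return {v} | {v ^ (1 << b) for b in range(n)}

def is_identifying(code, n, local=False):
    V = range(2 ** n)
    I = {v: frozenset(closed_neighborhood(v, n) & code) for v in V}
    if any(not I[v] for v in V):
        return False                       # some vertex is not covered
    if local:
        pairs = ((v, v ^ (1 << b)) for v in V for b in range(n))
    else:
        pairs = combinations(V, 2)
    return all(I[u] != I[v] for u, v in pairs)

# In Q_2 (the 4-cycle) the full vertex set is a 1-identifying code, and the
# smaller code {0, 3} is local 1-identifying but not 1-identifying:
print(is_identifying({0, 1, 2, 3}, 2))          # True
print(is_identifying({0, 3}, 2, local=True))    # True
print(is_identifying({0, 3}, 2))                # False
```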

It is well known that ordinary persistence on graphs can be computed more efficiently than general persistence. Recently, it has been shown that zigzag persistence on graphs exhibits similar behavior. Motivated by these results, we revisit graph persistence and propose efficient algorithms, especially for local updates on filtrations, similar to what is done in ordinary persistence for computing vineyards. We show that, for a filtration of length $m$: (i) switches (transpositions) in ordinary graph persistence can be performed in $O(\log^4 m)$ amortized time; (ii) zigzag persistence on graphs can be computed in $O(m\log m)$ time, which improves a recent $O(m\log^4 n)$ time algorithm, assuming $n$, the size of the union of all graphs in the filtration, satisfies $n\in\Omega(m^\varepsilon)$ for any fixed $0<\varepsilon<1$; (iii) open-closed, closed-open, and closed-closed bars in dimension $0$ for graph zigzag persistence can be updated in $O(\log^4 m)$ amortized time, whereas the open-open bars in dimension $0$ and closed-closed bars in dimension $1$ can be updated in $O(m)$ time.
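As background for why graph persistence admits shortcuts, the following is the standard union-find (Kruskal-style) computation of 0-dimensional persistence of a graph filtration, the baseline that update algorithms like those above accelerate; it is a textbook construction, not the paper's algorithm.

```python
# Textbook 0-dimensional persistence of a graph filtration via union-find
# (not the paper's update algorithms). Vertices carry birth times; each
# edge, in filtration order, either merges two components (the younger
# component dies, by the elder rule) or creates a 1-cycle.

def persistence_0d(vertex_birth, edges):
    """vertex_birth: {v: time}; edges: list of (time, u, v) tuples."""
    parent = {v: v for v in vertex_birth}
    birth = dict(vertex_birth)             # birth time of each component root
    bars = []

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for t, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                       # edge creates a 1-cycle instead
        if birth[ru] > birth[rv]:          # elder rule: younger dies at t
            ru, rv = rv, ru
        bars.append((birth[rv], t))
        parent[rv] = ru
    # surviving components give essential (infinite) bars
    bars += [(birth[r], float("inf")) for r in birth if find(r) == r]
    return bars

births = {0: 0.0, 1: 0.5, 2: 1.0}
print(persistence_0d(births, [(1.5, 0, 1), (2.0, 1, 2)]))
# [(0.5, 1.5), (1.0, 2.0), (0.0, inf)]
```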

In FEM-based EEG and MEG source analysis, the subtraction approach has been proposed to simulate sensor measurements generated by neural activity. While this approach possesses a rigorous foundation and produces accurate results, its major downside is its prohibitive computational cost in practical applications. To overcome this, we developed a new approach, called the localized subtraction approach. This approach is designed to preserve the mathematical foundation of the subtraction approach while also leading to sparse right-hand sides in the FEM formulation, making it efficiently computable. We achieve this by introducing a cut-off into the subtraction, restricting its influence to the immediate neighborhood of the source. In this work, we present and analyze this approach and compare it to other state-of-the-art FEM right-hand side approaches. Furthermore, we discuss how to arrive at an efficient and stable implementation. We perform validation in multi-layer sphere models where analytical solutions exist. There, we demonstrate that the localized subtraction approach is vastly more efficient than the subtraction approach. Moreover, we find that for the EEG forward problem, the localized subtraction approach is less dependent on the global structure of the FEM mesh than the subtraction approach. Additionally, we show that the localized subtraction approach rivals, and in many cases even surpasses, the other investigated approaches in terms of accuracy. For the MEG forward problem, we show that both the localized subtraction approach and the subtraction approach produce highly accurate approximations of the volume currents close to the source. The localized subtraction approach thus reduces the computational cost of the subtraction approach to an extent that makes it usable in practical applications without sacrificing rigor or accuracy.

Intelligent reflecting surfaces (IRSs) are envisioned as a low-cost solution to achieve high spectral and energy efficiency in future communication systems due to their ability to customize wireless propagation environments. Although resource allocation design for IRS-assisted multiuser wireless communication systems has been exhaustively investigated in the literature, the optimal design and performance of such systems are still not well understood. To fill this gap, in this paper, we study optimal resource allocation for IRS-assisted multiuser multiple-input single-output (MISO) systems. In particular, we jointly optimize the beamforming at the base station (BS) and the discrete IRS phase shifts to minimize the total transmit power. For attaining the globally optimal solution of the formulated non-convex combinatorial optimization problem, we develop a resource allocation algorithm with guaranteed convergence based on the Schur complement and generalized Benders decomposition. Our numerical results reveal that the proposed algorithm can significantly reduce the BS transmit power compared to the state-of-the-art suboptimal alternating optimization-based approach, especially for moderate-to-large numbers of IRS elements.
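For contrast with the globally optimal design, here is a minimal sketch of the alternating-optimization baseline for a toy single-user case with made-up channels: it alternates maximum-ratio transmission at the BS with an elementwise search over $B$-bit discrete IRS phase shifts. The system model and all parameters are our own simplifications, not the paper's setup.

```python
# Toy alternating-optimization baseline for a single-user IRS-assisted MISO
# link (our own simplification, not the paper's globally optimal algorithm):
# alternate MRT beamforming with elementwise B-bit discrete phase updates.
import numpy as np

rng = np.random.default_rng(0)
M, N, B = 4, 16, 2                      # BS antennas, IRS elements, bits/phase
levels = np.exp(2j * np.pi * np.arange(2 ** B) / 2 ** B)   # discrete phases

def cplx(*shape):
    """i.i.d. CN(0, 1) channel entries."""
    return (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)

h_d, G, h_r = cplx(M), cplx(N, M), cplx(N)   # direct, BS->IRS, IRS->user links

theta = np.ones(N, dtype=complex)            # initial IRS phase shifts
for _ in range(10):                          # alternating optimization
    h_eff = h_d + G.T @ (h_r * theta)        # effective BS->user channel
    w = h_eff.conj() / np.linalg.norm(h_eff) # MRT beamformer for fixed theta
    a = h_d @ w                              # direct-path gain
    c = h_r * (G @ w)                        # per-element cascaded gains
    for i in range(N):                       # elementwise discrete phase update
        rest = a + c @ theta - c[i] * theta[i]
        theta[i] = levels[np.argmax(np.abs(rest + c[i] * levels))]
print("effective channel gain:", np.abs(a + c @ theta))
```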

Likelihood-free inference methods typically make use of a distance between simulated and real data. A common example is the maximum mean discrepancy (MMD), which has previously been used for approximate Bayesian computation, minimum distance estimation, generalised Bayesian inference, and within the nonparametric learning framework. The MMD is commonly estimated at a root-$m$ rate, where $m$ is the number of simulated samples. This can lead to significant computational challenges since a large $m$ is required to obtain an accurate estimate, which is crucial for parameter estimation. In this paper, we propose a novel estimator for the MMD with significantly improved sample complexity. The estimator is particularly well suited for computationally expensive smooth simulators with low- to mid-dimensional inputs. This claim is supported through both theoretical results and an extensive simulation study on benchmark simulators.
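For reference, the following sketch implements the standard unbiased (U-statistic) estimator of MMD$^2$ with a Gaussian kernel, i.e., the root-$m$ baseline discussed above, not the improved estimator proposed in the paper.

```python
# Standard unbiased (U-statistic) estimator of squared MMD with a Gaussian
# kernel -- the root-m baseline, not the paper's improved estimator.
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    sq = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """X: (n, d) real data; Y: (m, d) simulated data."""
    n, m = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    return ((Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
            - 2 * Kxy.mean())

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(500, 1))     # "real" data
Y = rng.normal(0.5, 1.0, size=(500, 1))     # "simulated" data, shifted mean
print(mmd2_unbiased(X, Y))                  # positive: distributions differ
```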

We introduce a logistic regression model for data pairs consisting of a binary response and a covariate residing in a non-Euclidean metric space without vector structure. Based on the proposed model, we also develop a binary classifier for non-Euclidean objects. We propose a maximum likelihood estimator for the non-Euclidean regression coefficient in the model, and provide upper bounds on the estimation error under various metric entropy conditions that quantify the complexity of the underlying metric space. Matching lower bounds are derived for important metric spaces commonly seen in statistics, establishing the optimality of the proposed estimator in such spaces. Similarly, an upper bound on the excess risk of the developed classifier is provided for general metric spaces. A finer upper bound and a matching lower bound, and thus the optimality of the proposed classifier, are established for Riemannian manifolds. We investigate the numerical performance of the proposed estimator and classifier via simulation studies, and illustrate their practical merits via an application to task-related fMRI data.
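As a toy instantiation (not necessarily the authors' model), one can define a logistic model on the circle $S^1$ by letting the linear predictor depend on the geodesic distance to an unknown point, $P(Y=1\mid x) = \sigma(a - b\, d(x,m))$, and fit $(a, b, m)$ by maximum likelihood; the sketch below does exactly this with simulated data.

```python
# Toy logistic regression with a metric-space covariate on the circle S^1
# (an illustrative instantiation, not necessarily the paper's model):
# P(Y=1 | x) = sigmoid(a - b * d(x, m)), fitted by maximum likelihood.
import numpy as np
from scipy.optimize import minimize

def geodesic(x, m):
    """Geodesic distance on the circle; angles in [0, 2*pi)."""
    diff = np.abs(x - m) % (2 * np.pi)
    return np.minimum(diff, 2 * np.pi - diff)

def neg_log_lik(params, x, y):
    a, b, m = params
    eta = a - b * geodesic(x, m)
    return np.sum(np.logaddexp(0.0, eta)) - np.sum(y * eta)

rng = np.random.default_rng(2)
x = rng.uniform(0, 2 * np.pi, size=400)
true_eta = 2.0 - 3.0 * geodesic(x, np.pi)             # truth: a=2, b=3, m=pi
y = (rng.uniform(size=400) < 1 / (1 + np.exp(-true_eta))).astype(float)

fit = minimize(neg_log_lik, x0=[0.0, 1.0, 1.0], args=(x, y), method="Nelder-Mead")
print(fit.x)   # roughly recovers (2, 3, pi)
```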

We aim to investigate the solvability/insolvability of nondeterministic logarithmic-space (NL) decision, search, and optimization problems parameterized by size parameters, using simultaneously polynomial time and sub-linear space on multi-tape deterministic Turing machines. We are particularly focused on a special NL-complete problem, 2SAT, parameterized by the total number $m_{vbl}(\phi)$ of Boolean variables of each given Boolean formula $\phi$. It is shown that 2SAT with $n$ variables and $m$ clauses can be solved simultaneously in polynomial time and $(n/2^{c\sqrt{\log{n}}})\, \mathrm{polylog}(m+n)$ space for an absolute constant $c>0$. This fact inspires us to propose a new, practical working hypothesis, called the linear space hypothesis (LSH), which states that 2SAT$_3$ -- a restricted variant of 2SAT in which each variable of a given 2CNF formula appears at most 3 times in the form of literals -- cannot be solved simultaneously in polynomial time and strictly ``sub-linear'' (i.e., $m_{vbl}(x)^{\varepsilon}\, \mathrm{polylog}(|x|)$ for a certain fixed constant $\varepsilon\in[0,1)$) space on all instances $x$. Immediate consequences of this working hypothesis include $\mathrm{L}\neq\mathrm{NL}$, $\mathrm{LOGDCFL}\neq\mathrm{LOGCFL}$, and $\mathrm{SC}\neq \mathrm{NSC}$. For our investigation, since standard logarithmic-space reductions may no longer preserve polynomial-time sub-linear-space complexity, we introduce a new, restricted notion of ``short reduction.'' For a syntactically restricted version of NL, called Syntactic NL$_{\omega}$ or SNL$_{\omega}$, $(\mathrm{2SAT}_3,m_{vbl})$ is in fact hard for parameterized SNL$_{\omega}$ via such short reductions. This fact supports the legitimacy of our working hypothesis.
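Since 2SAT is central here, we recall that it is solvable in polynomial time via strongly connected components of the implication graph (Aspvall, Plass, and Tarjan). The sketch below is this textbook algorithm, not the sub-linear-space procedure analyzed in the paper.

```python
# The classic polynomial-time 2SAT algorithm via strongly connected
# components of the implication graph (Aspvall-Plass-Tarjan): a 2CNF
# formula is satisfiable iff no variable shares an SCC with its negation.
# Textbook algorithm, not the paper's sub-linear-space method.

def solve_2sat(n, clauses):
    """Variables are 1..n; a clause is a pair of nonzero ints, -v meaning NOT v."""
    idx = lambda lit: 2 * (abs(lit) - 1) + (lit < 0)   # literal -> graph node
    adj = [[] for _ in range(2 * n)]
    for a, b in clauses:                 # (a or b) gives ~a -> b and ~b -> a
        adj[idx(-a)].append(idx(b))
        adj[idx(-b)].append(idx(a))

    # Kosaraju's SCC algorithm, iterative: first pass records finish order.
    order, seen = [], [False] * (2 * n)
    for s in range(2 * n):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(adj[v]):
                stack.append((v, i + 1))
                w = adj[v][i]
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, 0))
            else:
                order.append(v)

    # Second pass on the reversed graph labels SCCs in topological order
    # of the original graph (source components receive smaller labels).
    radj = [[] for _ in range(2 * n)]
    for v in range(2 * n):
        for w in adj[v]:
            radj[w].append(v)
    comp, c = [-1] * (2 * n), 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            v = stack.pop()
            for w in radj[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1

    if any(comp[2 * v] == comp[2 * v + 1] for v in range(n)):
        return None                      # x and NOT x in one SCC: unsatisfiable
    # Set v true iff its positive literal comes later in topological order.
    return [comp[2 * v] > comp[2 * v + 1] for v in range(n)]

print(solve_2sat(2, [(1, 2), (-1, 2), (-2, 1)]))   # [True, True]
print(solve_2sat(1, [(1, 1), (-1, -1)]))           # None
```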
