
An $r$-role assignment of a simple graph $G$ is an assignment of $r$ distinct roles to the vertices of $G$ such that two vertices with the same role have the same set of roles assigned to their related vertices. Furthermore, a specific $r$-role assignment defines a role graph, whose vertices are the $r$ distinct roles and in which there is an edge between two roles whenever two related vertices of $G$ correspond to these roles. We consider complementary prisms, the graphs obtained from the disjoint union of a graph and its complement by adding the edges of a perfect matching between their corresponding vertices. In this work, we characterize the complementary prisms that do not admit a $3$-role assignment, and we highlight that all of them are complementary prisms of disconnected bipartite graphs. Moreover, using our findings, we show that deciding whether a complementary prism has a $3$-role assignment can be done in polynomial time.
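To make the definitions concrete, here is a small brute-force Python sketch (illustrative only, not the paper's algorithm; the helper names and the adjacency-dict input format are our own) that checks whether a candidate assignment is a valid $r$-role assignment, returns the corresponding role graph, and builds a complementary prism:

```python
def is_role_assignment(adj, role, r):
    """adj: dict vertex -> set of neighbours; role: dict vertex -> role.
    Returns the role-graph edge set if the assignment is valid, else None."""
    if len(set(role.values())) != r:
        return None          # all r roles must actually be used
    profile = {}             # role -> set of roles seen on neighbours
    for v, nbrs in adj.items():
        nbr_roles = frozenset(role[u] for u in nbrs)
        # Two vertices with the same role must see the same role set.
        if profile.setdefault(role[v], nbr_roles) != nbr_roles:
            return None
    # Edge between two roles whenever two related vertices carry them.
    return {frozenset({role[v], role[u]}) for v in adj for u in adj[v]}

def complementary_prism(adj):
    """Disjoint union of G and its complement plus a perfect matching."""
    V = list(adj)
    prism = {(v, 0): {(u, 0) for u in adj[v]} | {(v, 1)} for v in V}
    prism.update({(v, 1): {(u, 1) for u in V if u != v and u not in adj[v]}
                  | {(v, 0)} for v in V})
    return prism
```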

Related Content

In 2017, Aharoni proposed the following generalization of the Caccetta-H\"{a}ggkvist conjecture: if $G$ is a simple $n$-vertex edge-colored graph with $n$ color classes of size at least $r$, then $G$ contains a rainbow cycle of length at most $\lceil n/r \rceil$. In this paper, we prove that, for fixed $r$, Aharoni's conjecture holds up to an additive constant. Specifically, we show that for each fixed $r \geq 1$, there exists a constant $c_r$ such that if $G$ is a simple $n$-vertex edge-colored graph with $n$ color classes of size at least $r$, then $G$ contains a rainbow cycle of length at most $n/r + c_r$.
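Here "rainbow" means that all edge colors on the cycle are distinct. The following exponential-time Python sketch (for toy instances only; the `colored_edges` input format is our own) returns the length of a shortest rainbow cycle, the quantity the conjecture bounds by $\lceil n/r \rceil$:

```python
from itertools import combinations, permutations

def shortest_rainbow_cycle(n, colored_edges):
    """colored_edges: dict frozenset({u, v}) -> color, vertices 0..n-1.
    Returns the length of a shortest rainbow cycle, or None if none exists."""
    for k in range(3, n + 1):                      # try lengths in order
        for verts in combinations(range(n), k):
            for order in permutations(verts[1:]):  # fix verts[0] to cut symmetry
                cycle = (verts[0],) + order
                edges = [frozenset({cycle[i], cycle[(i + 1) % k]})
                         for i in range(k)]
                if all(e in colored_edges for e in edges):
                    colors = {colored_edges[e] for e in edges}
                    if len(colors) == k:           # all edge colors distinct
                        return k
    return None
```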

We consider finite element approximations of ill-posed elliptic problems with conditional stability. The notion of \emph{optimal error estimates} is defined, including both convergence with respect to the mesh parameter and perturbations in data. The rate of convergence is determined by the conditional stability of the underlying continuous problem and the polynomial order of the finite element approximation space. A proof is given that no finite element approximation can converge at a better rate than the one given by the definition, justifying the concept. A recently introduced class of finite element methods with weakly consistent regularisation is recalled, and the associated error estimates are shown to be quasi-optimal in the sense of our definition.

We give an almost complete characterization of the hardness of $c$-coloring $\chi$-chromatic graphs with distributed algorithms, for a wide range of models of distributed computing. In particular, we show that these problems do not admit any distributed quantum advantage. To do that: 1) We give a new distributed algorithm that finds a $c$-coloring in $\chi$-chromatic graphs in $\tilde{\mathcal{O}}(n^{\frac{1}{\alpha}})$ rounds, with $\alpha = \bigl\lfloor\frac{c-1}{\chi - 1}\bigr\rfloor$. 2) We prove that any distributed algorithm for this problem requires $\Omega(n^{\frac{1}{\alpha}})$ rounds. Our upper bound holds in the classical, deterministic LOCAL model, while the near-matching lower bound holds in the non-signaling model. This model, introduced by Arfaoui and Fraigniaud in 2014, captures all models of distributed graph algorithms that obey physical causality; this includes not only classical deterministic LOCAL and randomized LOCAL but also quantum-LOCAL, even with a pre-shared quantum state. We also show that similar arguments can be used to prove that, e.g., 3-coloring 2-dimensional grids or $c$-coloring trees remain hard problems even for the non-signaling model, and in particular do not admit any quantum advantage. Our lower-bound arguments are purely graph-theoretic at heart; no background on quantum information theory is needed to establish the proofs.
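For intuition on the exponent, a one-line computation of $\alpha$ (our own toy snippet, directly from the displayed formula): for $3$-coloring bipartite graphs ($\chi = 2$, $c = 3$) it gives $\alpha = 2$, i.e. complexity $\tilde{\Theta}(\sqrt{n})$.

```python
def alpha(c, chi):
    """Exponent in the paper's bound n^(1/alpha) for c-coloring
    chi-chromatic graphs: alpha = floor((c-1)/(chi-1))."""
    return (c - 1) // (chi - 1)

assert alpha(3, 2) == 2   # 3-coloring bipartite graphs: ~ n^(1/2) rounds
assert alpha(4, 2) == 3   # extra colors make the problem strictly easier
```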

Given a hypergraph $\mathcal{H}$, the dual hypergraph of $\mathcal{H}$ is the hypergraph of all minimal transversals of $\mathcal{H}$. The dual hypergraph is always Sperner, that is, no hyperedge contains another. A special case of Sperner hypergraphs are the conformal Sperner hypergraphs, which correspond to the families of maximal cliques of graphs. All these notions play an important role in many fields of mathematics and computer science, including combinatorics, algebra, and database theory. In this paper we study conformality of dual hypergraphs and prove several results related to the problem of recognizing this property. In particular, we show that the problem is in co-NP and can be solved in polynomial time for hypergraphs of bounded dimension. In the special case of dimension $3$, we reduce the problem to $2$-Satisfiability. Our approach has an implication in algorithmic graph theory: we obtain a polynomial-time algorithm for recognizing graphs in which all minimal transversals of maximal cliques have size at most $k$, for any fixed $k$.
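As a concrete illustration of dualization, here is a brute-force sketch of ours (exponential, for toy instances only, and not the paper's recognition algorithm) that computes the dual hypergraph and checks the Sperner property:

```python
from itertools import combinations

def minimal_transversals(vertices, hyperedges):
    """All inclusion-minimal vertex sets intersecting every hyperedge."""
    hits = lambda s: all(s & e for e in hyperedges)
    ts = [frozenset(c) for k in range(len(vertices) + 1)
          for c in combinations(vertices, k) if hits(frozenset(c))]
    return [t for t in ts if not any(o < t for o in ts)]

# Dual of a triangle's edge set: its three vertex pairs.
H = [frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})]
dual = minimal_transversals([1, 2, 3], H)
# The dual is Sperner by construction: no hyperedge contains another.
assert not any(a < b for a in dual for b in dual)
```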

We consider the problem of approximating a function from $L^2$ by an element of a given $m$-dimensional space $V_m$, associated with some feature map $\varphi$, using evaluations of the function at random points $x_1,\dots,x_n$. After recalling some results on optimal weighted least-squares using independent and identically distributed points, we consider weighted least-squares using projection determinantal point processes (DPPs) or volume sampling. These distributions introduce dependence between the points that promotes diversity in the selected features $\varphi(x_i)$. We first provide a generalized version of volume-rescaled sampling yielding quasi-optimality results in expectation with a number of samples $n = O(m\log m)$, meaning that the expected $L^2$ error is bounded by a constant times the best approximation error in $L^2$. Assuming further that the function belongs to some normed vector space $H$ continuously embedded in $L^2$, we prove that the approximation is almost surely bounded by the best approximation error measured in the $H$-norm. This includes the cases of functions from $L^\infty$ or from reproducing kernel Hilbert spaces. Finally, we present an alternative strategy consisting in using independent repetitions of projection DPPs (or volume sampling), yielding error bounds similar to those for i.i.d. or volume sampling, but in practice with a much lower number of samples. Numerical experiments illustrate the performance of the different strategies.
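For context, here is a minimal numpy sketch of the recalled optimal weighted least-squares baseline with i.i.d. points (not the paper's DPP or volume-sampling strategies): sampling from the inverse-Christoffel density $k_m(x)/m$ with $k_m(x)=\sum_j \varphi_j(x)^2$, approximated on a fine grid for simplicity, with weights $w(x)=m/k_m(x)$.

```python
import numpy as np

def weighted_ls_approx(f, m, n, rng=np.random.default_rng(0)):
    """Weighted least-squares in the span of the first m orthonormal
    (shifted) Legendre polynomials on [0, 1], with i.i.d. points drawn
    from the density k_m/m (discretized on a grid for simplicity)."""
    def basis(x):
        V = np.polynomial.legendre.legvander(2 * np.atleast_1d(x) - 1, m - 1)
        return V * np.sqrt(2 * np.arange(m) + 1)    # orthonormal on [0, 1]
    grid = np.linspace(0.0, 1.0, 20001)
    k = (basis(grid) ** 2).sum(axis=1)              # k_m(x), inverse Christoffel
    x = rng.choice(grid, size=n, p=k / k.sum())     # sample ~ k_m/m
    w = m / (basis(x) ** 2).sum(axis=1)             # weights w(x) = m / k_m(x)
    A = basis(x) * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A, f(x) * np.sqrt(w), rcond=None)
    return coef

coef = weighted_ls_approx(np.sin, m=8, n=64)        # e.g. approximate sin
```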

For any positive integer $q\geq 2$ and any real number $\delta\in(0,1)$, let $\alpha_q(n,\delta n)$ denote the maximum size of a subset of $\mathbb{Z}_q^n$ with minimum Hamming distance at least $\delta n$, where $\mathbb{Z}_q=\{0,1,\dotsc,q-1\}$ and $n\in\mathbb{N}$. The asymptotic rate function is defined by $ R_q(\delta) = \limsup_{n\rightarrow\infty}\frac{1}{n}\log_q\alpha_q(n,\delta n).$ The famous $q$-ary asymptotic Gilbert-Varshamov bound, obtained in the 1950s, states that \[ R_q(\delta) \geq 1 - \delta\log_q(q-1)-\delta\log_q\frac{1}{\delta}-(1-\delta)\log_q\frac{1}{1-\delta} \stackrel{\mathrm{def}}{=}R_\mathrm{GV}(\delta,q) \] for all positive integers $q\geq 2$ and $0<\delta<1-q^{-1}$. In the case that $q$ is an even power of a prime with $q\geq 49$, the $q$-ary Gilbert-Varshamov bound was first improved by using algebraic geometry codes in the works of Tsfasman, Vladut, and Zink and of Ihara in the 1980s. These algebraic geometry codes have been modified to improve the $q$-ary Gilbert-Varshamov bound $R_\mathrm{GV}(\delta,q)$ at a specific tangent point $\delta=\delta_0\in (0,1)$ of the curve $R_\mathrm{GV}(\delta,q)$ for each given integer $q\geq 46$. However, at $\delta=1/2$ the bound $R_\mathrm{GV}(1/2,q)$ remains the largest known lower bound on $R_q(1/2)$ for infinitely many positive integers $q$, including generic primes and generic non-prime-power integers. In this paper, using codes from the geometry of numbers introduced by Lenstra in the 1980s, we prove that the $q$-ary Gilbert-Varshamov bound $R_\mathrm{GV}(\delta,q)$ with $\delta\in(0,1)$ can be improved for all but finitely many positive integers $q$. It is shown that the growth rate defined by $\eta(\delta)= \liminf_{q\rightarrow\infty}\frac{1}{\log q}\log[1-\delta-R_q(\delta)]^{-1}$ for every $\delta\in(0,1)$ actually has a nontrivial lower bound.
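The bound itself is straightforward to evaluate; a small Python helper of ours implementing the displayed formula:

```python
import math

def R_GV(delta, q):
    """q-ary Gilbert-Varshamov bound, valid for q >= 2, 0 < delta < 1 - 1/q."""
    assert q >= 2 and 0 < delta < 1 - 1 / q
    lg = lambda x: math.log(x, q)
    return (1 - delta * lg(q - 1) - delta * lg(1 / delta)
              - (1 - delta) * lg(1 / (1 - delta)))

print(R_GV(0.5, 4))   # ~0.104: guaranteed rate at relative distance 1/2 over Z_4
```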

Although the asymptotic properties of the parameter estimator have been derived in the $p_{0}$ model for directed graphs with the differentially private bi-degree sequence, asymptotic theory for general models is still lacking. In this paper, we release the bi-degree sequence of directed graphs via the discrete Laplace mechanism, which satisfies differential privacy. We use the moment method to estimate the unknown model parameter. We establish a unified asymptotic result, in which consistency and asymptotic normality of the differentially private estimator hold. We apply the unified theoretical result to the Probit model. Simulations and a real data example demonstrate our theoretical findings.
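A minimal sketch of the discrete Laplace release step (our own illustrative calibration; the paper's exact noise scale for the bi-degree sequence may differ), using the standard fact that the difference of two i.i.d. geometric variables is discrete-Laplace distributed:

```python
import numpy as np

def discrete_laplace(size, epsilon, sens=1.0, rng=np.random.default_rng(0)):
    """Discrete Laplace noise, P(Z = z) proportional to exp(-epsilon|z|/sens),
    sampled as the difference of two i.i.d. geometric variables."""
    p = 1.0 - np.exp(-epsilon / sens)
    return rng.geometric(p, size) - rng.geometric(p, size)

def release_bidegree(out_deg, in_deg, epsilon):
    """Noisy bi-degree sequence of a directed graph (illustrative sketch)."""
    d = np.concatenate([np.asarray(out_deg), np.asarray(in_deg)])
    return d + discrete_laplace(d.size, epsilon)
```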

Lattices are architected metamaterials whose properties strongly depend on their geometrical design. The analogy between lattices and graphs enables the use of graph neural networks (GNNs) as a faster surrogate model compared to traditional methods such as finite element modelling. In this work, we generate a large dataset of structure-property relationships for strut-based lattices. The dataset is made available to the community and can fuel the development of methods anchored in physical principles for the fitting of fourth-order tensors. In addition, we present a higher-order GNN model trained on this dataset. The key features of the model are (i) SE(3) equivariance, and (ii) consistency with the thermodynamic law of conservation of energy. We compare the model to non-equivariant models based on a number of error metrics and demonstrate its benefits in terms of predictive performance and reduced training requirements. Finally, we demonstrate an example application of the model to an architected-material design task. The methods we developed are applicable to fourth-order tensors beyond elasticity, such as the piezo-optical tensor.
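To illustrate what consistency with energy conservation means for the target quantity, here is a generic numpy sketch of ours (unrelated to the paper's GNN) projecting a $3\times3\times3\times3$ array onto the symmetries of an elasticity tensor; the major symmetry $C_{ijkl}=C_{klij}$ is the one implied by the existence of a strain-energy potential.

```python
import numpy as np

def symmetrize_elasticity(C):
    """Project onto the minor symmetries (C_ijkl = C_jikl = C_ijlk) and the
    major symmetry (C_ijkl = C_klij) of a linear elasticity tensor."""
    C = 0.5 * (C + C.transpose(1, 0, 2, 3))   # minor, first index pair
    C = 0.5 * (C + C.transpose(0, 1, 3, 2))   # minor, second index pair
    C = 0.5 * (C + C.transpose(2, 3, 0, 1))   # major: strain-energy potential
    return C

C = symmetrize_elasticity(np.random.default_rng(0).random((3, 3, 3, 3)))
assert np.allclose(C, C.transpose(2, 3, 0, 1))
```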

This paper focuses on estimating the invariant density function $f_X$ of a strongly mixing stationary process $X_t$ in the multiplicative measurement errors model $Y_t = X_t U_t$, where $U_t$ is also a strongly mixing stationary process. We propose a novel approach to handle non-independent data, which are typical in real-world scenarios. For instance, data collected from various groups may exhibit interdependencies within each group, resembling data generated from $m$-dependent stationary processes, a subclass of stationary processes. This study extends the applicability of the model $Y_t = X_t U_t$ to diverse scientific domains dealing with complex dependent data. The paper outlines our estimation techniques, discusses convergence rates, establishes a lower bound on the minimax risk, and demonstrates the asymptotic normality of the estimator of $f_X$ under smooth error distributions. Through examples and simulations, we showcase the efficacy of our estimator. The paper concludes by providing proofs for the presented theoretical results.
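A toy generator for the observation model (our own illustration, not the paper's estimator): moving averages of i.i.d. noise give simple $m$-dependent, hence strongly mixing, stationary processes, and the exponential makes the error factor positive.

```python
import numpy as np

def simulate_Y(n, m=2, rng=np.random.default_rng(0)):
    """Draw Y_t = X_t * U_t with m-dependent stationary X_t and U_t,
    built as MA(m) moving averages of i.i.d. Gaussian noise."""
    kernel = np.ones(m + 1) / (m + 1)
    X = np.convolve(rng.standard_normal(n + m), kernel, mode="valid")
    U = np.exp(np.convolve(rng.standard_normal(n + m), kernel, mode="valid"))
    return X * U
```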

Summarizing book-length documents (>100K tokens) that exceed the context window size of large language models (LLMs) requires first breaking the input document into smaller chunks and then prompting an LLM to merge, update, and compress chunk-level summaries. Despite the complexity and importance of this task, it has yet to be meaningfully studied due to the challenges of evaluation: existing book-length summarization datasets (e.g., BookSum) are in the pretraining data of most public LLMs, and existing evaluation methods struggle to capture errors made by modern LLM summarizers. In this paper, we present the first study of the coherence of LLM-based book-length summarizers implemented via two prompting workflows: (1) hierarchically merging chunk-level summaries, and (2) incrementally updating a running summary. We obtain 1193 fine-grained human annotations on GPT-4 generated summaries of 100 recently-published books and identify eight common types of coherence errors made by LLMs. Because human evaluation is expensive and time-consuming, we develop an automatic metric, BooookScore, that measures the proportion of sentences in a summary that do not contain any of the identified error types. BooookScore has high agreement with human annotations and allows us to systematically evaluate the impact of many other critical parameters (e.g., chunk size, base LLM) while saving $15K USD and 500 hours in human evaluation costs. We find that closed-source LLMs such as GPT-4 and Claude 2 produce summaries with higher BooookScore than those generated by open-source models. While LLaMA 2 falls behind other models, Mixtral achieves performance on par with GPT-3.5-Turbo. Incremental updating yields a lower BooookScore but a higher level of detail than hierarchical merging, a trade-off sometimes preferred by annotators.
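From the stated definition, the metric itself is simple to compute once sentences have been labeled with error types (a sketch of ours; the input format is assumed):

```python
def booookscore(sentence_errors):
    """sentence_errors: one set of detected coherence-error types per
    summary sentence. Returns the fraction of error-free sentences."""
    return sum(not errs for errs in sentence_errors) / len(sentence_errors)

assert booookscore([set(), {"entity omission"}, set()]) == 2 / 3
```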
