
This paper has two objectives. The first is to give a linear-time algorithm that solves the stable roommates problem (i.e., obtains one stable matching) via the stable marriage problem. The idea is that a stable matching of a roommate instance $I$ is a stable matching, satisfying a certain additional condition, of some marriage instance $I'$, where $I'$ is obtained simply by making two copies of $I$, one for the men's table and the other for the women's table. The second objective is to investigate the possibility of reducing the roommates problem to the marriage problem (with a one-to-one correspondence between their stable matchings) in polynomial time. For a given $I$, we construct the rotation poset $P$ of $I'$ and then ``halve'' it to obtain $P'$, which lets us drop the above condition and use all the closed subsets of $P'$ to obtain all the stable matchings of $I$. Unfortunately, this approach works (i.e., runs in polynomial time) only for restricted instances.
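
To make the doubling construction concrete, here is a minimal Python sketch (not the paper's algorithm): it copies the roommate preference lists verbatim into a men's table and a women's table and runs the textbook Gale-Shapley procedure on the resulting marriage instance $I'$. Mapping the output back to a stable matching of $I$ additionally requires the consistency condition discussed above, which the sketch does not check; all identifiers are illustrative.

```python
# Minimal sketch: build the doubled marriage instance I' from a roommate
# instance I and find one stable matching of I' with man-proposing Gale-Shapley.

def double_instance(prefs):
    """prefs[i] = agent i's strict preference list over the other agents.
    The men's table and the women's table are both verbatim copies of prefs."""
    men_prefs = {i: list(p) for i, p in prefs.items()}
    women_prefs = {i: list(p) for i, p in prefs.items()}
    return men_prefs, women_prefs

def gale_shapley(men_prefs, women_prefs):
    """Standard man-proposing Gale-Shapley on the doubled instance."""
    rank = {w: {m: r for r, m in enumerate(p)} for w, p in women_prefs.items()}
    next_prop = {m: 0 for m in men_prefs}          # index of next woman to propose to
    fiance = {}                                    # woman -> current partner
    free = list(men_prefs)
    while free:
        m = free.pop()
        if next_prop[m] == len(men_prefs[m]):
            continue                               # m exhausted his list, stays unmatched
        w = men_prefs[m][next_prop[m]]
        next_prop[m] += 1
        if w not in fiance:
            fiance[w] = m
        elif rank[w][m] < rank[w][fiance[w]]:
            free.append(fiance[w])                 # w trades up, old partner is free again
            fiance[w] = m
        else:
            free.append(m)                         # w rejects m
    return {m: w for w, m in fiance.items()}

# Example roommate instance on four agents (complete preference lists).
roommate_prefs = {
    1: [3, 4, 2], 2: [4, 3, 1], 3: [2, 1, 4], 4: [1, 2, 3],
}
men, women = double_instance(roommate_prefs)
print(gale_shapley(men, women))   # here pairs 1-3 and 2-4, also stable for the roommate instance
```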

Related content

In this article we present new results about the expressivity of Graph Neural Networks (GNNs). We prove that for any GNN with piecewise polynomial activations whose architecture size does not grow with the graph input sizes, there exists a pair of non-isomorphic rooted trees of depth two such that the GNN cannot distinguish their root vertices even after an arbitrary number of iterations. The proof relies on tools from the algebra of symmetric polynomials. In contrast, it was already known that unbounded GNNs (those whose size is allowed to change with the graph sizes) with piecewise polynomial activations can distinguish these vertices in only two iterations. Our results imply a strict separation between bounded and unbounded size GNNs, answering an open question formulated by [Grohe, 2021]. We next prove that if one allows activations that are not piecewise polynomial, then in two iterations a single-neuron perceptron can distinguish the root vertices of any pair of non-isomorphic trees of depth two (our results hold for activations such as the sigmoid, the hyperbolic tangent, and others). This shows how drastically the power of graph neural networks can change if one changes the activation function. The proof of this result utilizes the Lindemann-Weierstrass theorem from transcendental number theory.
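
The following toy computation, which only illustrates the setting and is not the construction used in the proofs, runs a single-neuron message-passing layer with a sigmoid activation for two iterations on two non-isomorphic rooted trees of depth two (roots with two children having 1+3 and 2+2 leaves, respectively). With the arbitrary weights chosen here the two root embeddings coincide after one iteration and differ after two.

```python
import math

# Single-neuron layer h_v <- sigmoid(w_self*h_v + w_agg*sum_{u child of v} h_u + b),
# run bottom-up on a rooted tree of depth two; all initial features are 1.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def root_embedding(leaves_per_child, iters=2, w_self=0.5, w_agg=1.0, b=-1.0):
    """leaves_per_child[i] = number of leaves below the i-th child of the root."""
    h_root, h_children, h_leaf = 1.0, [1.0] * len(leaves_per_child), 1.0
    for _ in range(iters):
        new_leaf = sigmoid(w_self * h_leaf + b)                       # leaves have no children
        new_children = [sigmoid(w_self * hc + w_agg * k * h_leaf + b)
                        for hc, k in zip(h_children, leaves_per_child)]
        new_root = sigmoid(w_self * h_root + w_agg * sum(h_children) + b)
        h_root, h_children, h_leaf = new_root, new_children, new_leaf
    return h_root

print(root_embedding([1, 3]))  # tree A: children with 1 and 3 leaves, ~0.722
print(root_embedding([2, 2]))  # tree B: children with 2 and 2 leaves, ~0.740
# with iters=1 both calls return the same value (~0.818): one round cannot separate the roots
```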

We consider networks of processes that all execute the same finite-state protocol and communicate via a rendez-vous mechanism. When a process requests a rendez-vous, another process can respond to it and they both change their control states accordingly. We focus here on a specific semantics, called non-blocking, where the process requesting a rendez-vous can change its state even if no process can respond to it. In this context, we study the parameterised coverability problem of a configuration, which consists of determining whether there exist an initial number of processes and an execution allowing them to reach a configuration bigger than a given one. We show that this problem is EXPSPACE-complete and can be solved in polynomial time if the protocol is partitioned into two sets of states: the states from which a process can request a rendez-vous and the ones from which it can answer one. We also prove that the problem of the existence of an execution bringing all the processes to a final state is undecidable in our context. These two problems can be solved in polynomial time under the classical rendez-vous semantics.
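
As a small illustration of the non-blocking semantics (with made-up state and message names, not taken from the paper), the sketch below represents a configuration as a multiset of control states and performs one step: if another process can answer the requested rendez-vous, both move; otherwise the requester changes state alone.

```python
from collections import Counter

# Hedged sketch of one non-blocking rendez-vous step.
# config: Counter of control states; request = (p, m, p_next) means a process in
# state p requests message m and moves to p_next; answers maps each message m to
# the answer transitions [(q, q_next), ...] available in the protocol.

def nb_step(config, request, answers):
    p, m, p_next = request
    if config[p] == 0:
        return None                          # no process can make the request
    rest = config.copy()
    rest[p] -= 1                             # take the requester out
    for q, q_next in answers.get(m, []):
        if rest[q] > 0:                      # another process can answer
            rest[q] -= 1
            rest[q_next] += 1
            break                            # exactly one responder takes part
    # non-blocking: the requester changes state even if nobody answered
    rest[p_next] += 1
    return +rest                             # drop zero counts

answers = {'msg': [('waiting', 'done')]}
print(nb_step(Counter({'init': 3}), ('init', 'msg', 'waiting'), answers))
# -> init: 2, waiting: 1   (nobody could answer; the requester moved alone)
print(nb_step(Counter({'init': 2, 'waiting': 1}), ('init', 'msg', 'waiting'), answers))
# -> init: 1, waiting: 1, done: 1   (the waiting process answered the rendez-vous)
```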

We study deviations by a group of agents in the three main types of matching markets: the house allocation, the marriage, and the roommates models. For a given instance, we call a matching $k$-stable if no other matching exists that is more beneficial to at least $k$ out of the $n$ agents. The concept generalizes the recently studied majority stability. We prove that whereas verifying $k$-stability of a given matching is polynomial-time solvable in all three models, the complexity of deciding whether a $k$-stable matching exists depends on $\frac{k}{n}$ and is characteristic of each model.
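
The definition of $k$-stability can be checked by brute force on tiny instances; the sketch below does so for the house allocation model by enumerating all assignments. This is exponential and only meant to illustrate the definition (the verification algorithm referred to above is polynomial-time).

```python
from itertools import permutations

# Brute-force illustration of k-stability in the house allocation model.
# prefs[a] ranks the houses from best to worst for agent a.

def is_k_stable(matching, prefs, k):
    """matching: tuple where matching[a] is the house assigned to agent a."""
    n = len(prefs)
    rank = [{h: r for r, h in enumerate(p)} for p in prefs]
    for other in permutations(range(n)):
        better_off = sum(rank[a][other[a]] < rank[a][matching[a]] for a in range(n))
        if better_off >= k:
            return False          # 'other' is more beneficial to at least k agents
    return True

prefs = [
    [0, 1, 2],   # agent 0 prefers house 0, then 1, then 2
    [0, 2, 1],   # agent 1
    [1, 0, 2],   # agent 2
]
m = (0, 2, 1)    # agents 0 and 2 get their top choice, agent 1 gets its second choice
print(is_k_stable(m, prefs, k=2))   # True: no other assignment improves two agents at once
print(is_k_stable(m, prefs, k=1))   # False: agent 1 alone can be made better off
```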

We consider the problem of uncertainty quantification in change point regressions, where the signal can be piecewise polynomial of arbitrary but fixed degree. That is, we seek disjoint intervals which, uniformly at a given confidence level, must each contain a change point location. We propose a procedure based on performing local tests at a number of scales and locations on a sparse grid, which adapts to the choice of grid in the sense that by choosing a sparser grid one explicitly pays a lower price for multiple testing. The procedure is fast, as its computational complexity is always of the order $\mathcal{O}(n \log n)$ where $n$ is the length of the data, and optimal in the sense that under certain mild conditions every change point is detected with high probability and the widths of the intervals returned match the minimax localisation rates for the associated change point problem up to log factors. A detailed simulation study shows our procedure is competitive against state-of-the-art algorithms for similar problems. Our procedure is implemented in the R package ChangePointInference, which is available at https://github.com/gaviosha/ChangePointInference.
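
For intuition only, here is a much-simplified sketch of local tests on a sparse grid, specialised to piecewise-constant signals (degree $0$): a CUSUM statistic is evaluated on dyadic-length windows, and windows exceeding a placeholder threshold are flagged. This is not the procedure implemented in ChangePointInference and it makes no uniform coverage guarantee.

```python
import numpy as np

# Conceptual sketch: a local CUSUM test for a single change in mean, evaluated
# on a sparse dyadic grid of scales and locations.  Threshold and grid are placeholders.

def cusum(x):
    """Maximum absolute CUSUM statistic over all split points of x."""
    n = len(x)
    t = np.arange(1, n)
    left = np.cumsum(x)[:-1]                      # partial sums S_1, ..., S_{n-1}
    total = x.sum()
    stat = np.abs(np.sqrt((n - t) / (n * t)) * left
                  - np.sqrt(t / (n * (n - t))) * (total - left))
    return stat.max()

def flagged_intervals(x, threshold):
    """Return grid intervals whose local test exceeds the threshold."""
    n = len(x)
    out = []
    scale = 2
    while scale <= n:
        for s in range(0, n - scale + 1, max(scale // 2, 1)):
            if cusum(x[s:s + scale]) > threshold:
                out.append((s, s + scale))
        scale *= 2
    return out

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])  # change at index 100
print(flagged_intervals(x, threshold=3.0)[:5])
```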

Recent years have witnessed the widespread use of artificial intelligence (AI) algorithms and machine learning (ML) models. Despite their tremendous success, a number of vital problems like ML model brittleness, their fairness, and the lack of interpretability warrant the need for active developments in explainable artificial intelligence (XAI) and formal ML model verification. The two major lines of work in XAI include feature selection methods, e.g. Anchors, and feature attribution techniques, e.g. LIME and SHAP. Despite their promise, most of the existing feature selection and attribution approaches are susceptible to a range of critical issues, including explanation unsoundness and out-of-distribution sampling. A recent formal approach to XAI (FXAI), although serving as an alternative to the above and free of these issues, suffers from a few other limitations. For instance, besides its limited scalability, the formal approach is unable to tackle the feature attribution problem. Additionally, a formal explanation, despite being formally sound, is typically quite large, which hampers its applicability in practical settings. Motivated by the above, this paper proposes a way to apply the apparatus of formal XAI to the case of feature attribution based on formal explanation enumeration. Formal feature attribution (FFA) is argued to be advantageous over the existing methods, both formal and non-formal. Given the practical complexity of the problem, the paper then proposes an efficient technique for approximating exact FFA. Finally, it offers experimental evidence of the effectiveness of the proposed approximate FFA in comparison to the existing feature attribution algorithms, not only in terms of feature importance but also in terms of their relative order.
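
One natural reading of feature attribution via formal explanation enumeration, sketched below for a toy Boolean classifier, assigns each feature the fraction of subset-minimal sufficient feature sets (formal explanations) that contain it. This brute-force version is only meant to illustrate the idea; the paper's exact definition of FFA and its approximation scheme may differ.

```python
from itertools import combinations, product

# Hedged brute-force sketch of formal feature attribution on a tiny Boolean classifier.

def is_sufficient(f, instance, subset, n):
    """Fixing the features in `subset` to their values in `instance` forces f's output."""
    target = f(instance)
    free = [i for i in range(n) if i not in subset]
    for values in product([0, 1], repeat=len(free)):
        point = list(instance)
        for i, val in zip(free, values):
            point[i] = val
        if f(tuple(point)) != target:
            return False
    return True

def formal_explanations(f, instance, n):
    """All subset-minimal sufficient feature sets (enumerated by increasing size)."""
    minimal = []
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            if any(set(m) <= set(subset) for m in minimal):
                continue                      # strict superset of a known explanation
            if is_sufficient(f, instance, subset, n):
                minimal.append(subset)
    return minimal

def ffa(f, instance, n):
    """Attribution of feature i = fraction of formal explanations containing i."""
    expls = formal_explanations(f, instance, n)
    return {i: sum(i in e for e in expls) / len(expls) for i in range(n)}

f = lambda x: int(x[0] and (x[1] or x[2]))    # toy classifier over three Boolean features
print(ffa(f, (1, 1, 1), 3))                   # {0: 1.0, 1: 0.5, 2: 0.5}
```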

For any two point sets $A,B \subset \mathbb{R}^d$ of size up to $n$, the Chamfer distance from $A$ to $B$ is defined as $\text{CH}(A,B)=\sum_{a \in A} \min_{b \in B} d_X(a,b)$, where $d_X$ is the underlying distance measure (e.g., the Euclidean or Manhattan distance). The Chamfer distance is a popular measure of dissimilarity between point clouds, used in many machine learning, computer vision, and graphics applications, and admits a straightforward $O(d n^2)$-time brute force algorithm. Further, the Chamfer distance is often used as a proxy for the more computationally demanding Earth-Mover (Optimal Transport) Distance. However, the \emph{quadratic} dependence on $n$ in the running time makes the naive approach intractable for large datasets. We overcome this bottleneck and present the first $(1+\varepsilon)$-approximate algorithm for estimating the Chamfer distance with a near-linear running time. Specifically, our algorithm runs in time $O(nd \log (n)/\varepsilon^2)$ and is easy to implement. Our experiments demonstrate that it is both accurate and fast on large high-dimensional datasets. We believe that our algorithm will open new avenues for analyzing large high-dimensional point clouds. We also give evidence that if the goal is to \emph{report} a $(1+\varepsilon)$-approximate mapping from $A$ to $B$ (as opposed to just its value), then any sub-quadratic time algorithm is unlikely to exist.
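
The definition translates directly into the $O(d n^2)$ brute-force algorithm mentioned above; a short NumPy version is given below for reference (the near-linear $(1+\varepsilon)$-approximation is the paper's contribution and is not reproduced here).

```python
import numpy as np

# Brute-force Chamfer distance, straight from the definition with d_X the Euclidean norm.

def chamfer(A, B):
    """CH(A, B) = sum_{a in A} min_{b in B} ||a - b||_2."""
    # pairwise squared distances via broadcasting: shape |A| x |B|
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1)).sum()

rng = np.random.default_rng(1)
A = rng.normal(size=(500, 3))
B = rng.normal(size=(600, 3))
print(chamfer(A, B))            # note that CH(A, B) != CH(B, A) in general
```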

We provide two families of algorithms to compute characteristic polynomials of endomorphisms and norms of isogenies of Drinfeld modules. Our algorithms work for Drinfeld modules of any rank, defined over any base curve. When the base curve is $\mathbb P^1_{\mathbb F_q}$, we do a thorough study of the complexity, demonstrating that our algorithms are, in many cases, the most asymptotically performant. The first family of algorithms relies on the correspondence between Drinfeld modules and Anderson motives, reducing the computation to linear algebra over a polynomial ring. The second family, available only for the Frobenius endomorphism, is based on a new formula expressing the characteristic polynomial of the Frobenius as a reduced norm in a central simple algebra.

Gaussian variational inference and the Laplace approximation are popular alternatives to Markov chain Monte Carlo that formulate Bayesian posterior inference as an optimization problem, enabling the use of simple and scalable stochastic optimization algorithms. However, a key limitation of both methods is that the solution to the optimization problem is typically not tractable to compute; even in simple settings the problem is nonconvex. Thus, recently developed statistical guarantees -- which all involve the (data) asymptotic properties of the global optimum -- are not reliably obtained in practice. In this work, we provide two major contributions: a theoretical analysis of the asymptotic convexity properties of variational inference with a Gaussian family and the maximum a posteriori (MAP) problem required by the Laplace approximation; and two algorithms -- consistent Laplace approximation (CLA) and consistent stochastic variational inference (CSVI) -- that exploit these properties to find the optimal approximation in the asymptotic regime. Both CLA and CSVI involve a tractable initialization procedure that finds the local basin of the optimum, and CSVI further includes a scaled gradient descent algorithm that provably stays locally confined to that basin. Experiments on nonconvex synthetic and real-data examples show that compared with standard variational and Laplace approximations, both CSVI and CLA improve the likelihood of obtaining the global optimum of their respective optimization problems.
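
For readers unfamiliar with the Laplace approximation, the following sketch applies its generic form to a toy one-dimensional Bayesian logistic regression: find the MAP estimate and use the inverse curvature of the negative log posterior at that point as the approximate posterior variance. It does not include the initialization or scaled gradient descent machinery of CLA/CSVI.

```python
import numpy as np
from scipy.optimize import minimize

# Generic Laplace approximation on a toy 1D Bayesian logistic regression (illustrative only).

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-1.5 * x)))    # data generated with coefficient 1.5

def neg_log_post(theta):
    logits = theta * x
    loglik = np.sum(y * logits - np.log1p(np.exp(logits)))   # Bernoulli log-likelihood
    logprior = -0.5 * theta ** 2                              # N(0, 1) prior, up to a constant
    return -(loglik + logprior)

res = minimize(lambda t: neg_log_post(t[0]), x0=[0.0])        # MAP estimate
map_est = res.x[0]

# curvature of the negative log posterior at the MAP via a central finite difference
h = 1e-4
hess = (neg_log_post(map_est + h) - 2 * neg_log_post(map_est) + neg_log_post(map_est - h)) / h**2

print(map_est, 1.0 / hess)      # Laplace: posterior approximately N(map_est, 1/hess)
```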

The Chinese Remainder Theorem for the integers says that every system of congruence equations is solvable as long as the system satisfies an obvious necessary condition. This statement can be generalized in a natural way to arbitrary algebraic structures using the language of Universal Algebra. In this context, an algebra is a structure of a first-order language with no relation symbols, and a congruence on an algebra is an equivalence relation on its base set compatible with its fundamental operations. A tuple of congruences of an algebra is called a Chinese Remainder tuple if every system involving them is solvable. In this article we study the complexity of deciding whether a tuple of congruences of a finite algebra is a Chinese Remainder tuple. This problem, which we denote CRT, is easily seen to lie in coNP. We prove that it is actually coNP-complete and also show that it is tractable when restricted to several well-known classes of algebras, such as vector spaces and distributive lattices. The polynomial algorithms we exhibit are made possible by purely algebraic characterizations of Chinese Remainder tuples for algebras in these classes, which constitute interesting results in their own right. Among these, an elegant characterization of Chinese Remainder tuples of finite distributive lattices stands out. Finally, we address the restriction of CRT to an arbitrary equational class $\mathcal{V}$ generated by a two-element algebra. Here we establish an (almost) dichotomy by showing that, unless $\mathcal{V}$ is the class of semilattices, the problem is either coNP-complete or tractable.
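
For the integers, the obvious necessary condition is pairwise compatibility, which is also sufficient: $x \equiv a_i \pmod{n_i}$ for all $i$ is solvable exactly when $a_i \equiv a_j \pmod{\gcd(n_i, n_j)}$ for every pair. The snippet below checks this classical condition; it says nothing about the general finite-algebra problem studied in the paper, which is coNP-complete.

```python
from math import gcd

# Solvability test for a system of integer congruences x ≡ a_i (mod n_i):
# pairwise compatibility is both necessary and sufficient over the integers.

def solvable(system):
    """system: list of pairs (a_i, n_i) representing x ≡ a_i (mod n_i)."""
    return all(
        (a_i - a_j) % gcd(n_i, n_j) == 0
        for idx, (a_i, n_i) in enumerate(system)
        for a_j, n_j in system[idx + 1:]
    )

print(solvable([(2, 3), (3, 5), (2, 7)]))   # True: for example x = 23 satisfies all three
print(solvable([(1, 4), (2, 6)]))           # False: x would have to be both odd and even
```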

We study the measure of order-competitive ratio introduced by Ezra et al. [2023] for online algorithms in Bayesian combinatorial settings. In our setting, a decision-maker observes a sequence of elements that are associated with stochastic rewards that are drawn from known priors, but revealed one by one in an online fashion. The decision-maker needs to decide upon the arrival of each element whether to select it or discard it (according to some feasibility constraint), and receives the associated rewards of the selected elements. The order-competitive ratio is defined as the worst-case ratio (over all distribution sequences) between the performance of the best order-unaware and order-aware algorithms, and quantifies the loss incurred due to the lack of knowledge of the arrival order. Ezra et al. [2023] showed how to design algorithms that achieve better approximations with respect to the new benchmark (order-competitive ratio) in the single-choice setting, which raises the natural question of whether the same can be achieved in combinatorial settings. In particular, whether it is possible to achieve a constant approximation with respect to the best online algorithm for downward-closed feasibility constraints, whether $\omega(1/n)$-approximation is achievable for general (non-downward-closed) feasibility constraints, or whether a convergence rate to $1$ of $o(1/\sqrt{k})$ is achievable for the multi-unit setting. We show, by devising novel constructions that may be of independent interest, that for all three scenarios the asymptotic lower bounds with respect to the old benchmark also hold with respect to the new benchmark.
