
We consider deterministic algorithms for the well-known hidden subgroup problem ($\mathsf{HSP}$): for a finite group $G$ and a finite set $X$, given a function $f:G \to X$ and the promise that for any $g_1, g_2 \in G$, $f(g_1) = f(g_2)$ iff $g_1H=g_2H$ for a subgroup $H \le G$, the goal of the decision version is to determine whether $H$ is trivial, and the goal of the identification version is to identify $H$. An algorithm for the problem should query $f(g)$ for $g\in G$ as few times as possible. Nayak asked whether there exist deterministic algorithms with $O(\sqrt{\frac{|G|}{|H|}})$ query complexity for $\mathsf{HSP}$. We answer this question by proving the following results, which also extend the main results of Ref. [30], since the algorithms here do not rely on any prior knowledge of $H$. (i) When $G$ is a general finite Abelian group, there exist an algorithm with $O(\sqrt{\frac{|G|}{|H|}})$ queries that decides the triviality of $H$ and an algorithm that identifies $H$ with $O(\sqrt{\frac{|G|}{|H|}\log |H|}+\log |H|)$ queries. (ii) In general, there is no deterministic algorithm for the identification version of $\mathsf{HSP}$ with query complexity $O(\sqrt{\frac{|G|}{|H|}})$: there exists an instance of $\mathsf{HSP}$ that requires $\omega(\sqrt{\frac{|G|}{|H|}})$ queries to identify $H$. (Here $f(x)$ is said to be $\omega(g(x))$ if for every positive constant $C$ there exists a positive constant $N$ such that $f(x)\ge C\cdot g(x)$ for all $x>N$, i.e., $g$ is a strict asymptotic lower bound for $f$.) On the other hand, there exist instances of $\mathsf{HSP}$ whose query complexity is far smaller than $O(\sqrt{\frac{|G|}{|H|}})$, namely $O(\log \frac{|G|}{|H|})$ or even $O(1)$.
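As a concrete illustration of the query model (not the paper's algorithm), the cyclic case $G=\mathbb{Z}_N$ admits a simple deterministic baby-step giant-step collision search. It uses $O(\sqrt{|G|})$ queries rather than the sharper bounds above; the oracle `f` is a stand-in for the promised function.

```python
from math import isqrt

def find_period(f, N):
    """Deterministic collision search for G = Z_N: returns the smallest d > 0
    with f(x) = f(x + d) for all x, so the hidden subgroup is H = dZ_N with
    |H| = N // d (d = N means H is trivial).  Uses about 2*sqrt(N) queries."""
    m = isqrt(N - 1) + 1                      # m = ceil(sqrt(N)), so m*m >= N
    baby = {}
    for i in range(m):                        # baby steps: query f(0), ..., f(m-1)
        baby.setdefault(f(i), []).append(i)
    d = N
    for j in range(1, m + 1):                 # giant steps: query f(m), f(2m), ...
        for i in baby.get(f(j * m % N), []):
            # every collision difference j*m - i is a positive multiple of the
            # period, and the period itself occurs among them since m*m >= N
            d = min(d, j * m - i)
    return d
```

For instance, with the promise function $f(g) = g \bmod 15$ on $\mathbb{Z}_{360}$, the search recovers $d = 15$, i.e. $H = 15\mathbb{Z}_{360}$ of order $24$.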


We consider the dichotomy conjecture for consistent query answering under primary key constraints, which states that for every fixed Boolean conjunctive query $q$, testing whether $q$ is certain over all repairs of a given inconsistent database is either polynomial-time or coNP-complete. This conjecture has been verified for self-join-free and path queries. We propose a simple inflationary fixpoint algorithm for consistent query answering which, for a given database, naively computes a set $\Delta$ of sets of at most $k$ facts, where $k$ is the size of the query $q$. The algorithm runs in polynomial time and can be formally defined as follows: 1. Initialize $\Delta$ with all sets $S$ of at most $k$ facts such that $S$ satisfies $q$. 2. Add any set $S$ of at most $k$ facts to $\Delta$ if there exists a block $B$ (i.e., a maximal set of facts sharing the same key) such that for every fact $a$ of $B$ there is a set $S' \in \Delta$ contained in $S \cup \{a\}$. The algorithm answers "$q$ is certain" iff $\Delta$ eventually contains the empty set. The algorithm correctly computes certain answers when the query $q$ falls into the polynomial-time cases for self-join-free queries and path queries. For arbitrary queries, the algorithm is an under-approximation: the query is guaranteed to be certain if the algorithm claims so. However, there are polynomial-time certain queries (with self-joins) that are not identified as such by the algorithm.
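The two rules above can be sketched as a naive fixpoint over all subsets of at most $k$ facts; `q_holds` is an assumed black-box test for whether a set of facts satisfies $q$ (a sketch for illustration, not an optimized implementation).

```python
from itertools import combinations

def certain(facts, key, q_holds, k):
    """Inflationary fixpoint for certain answers under primary keys.
    `key(f)` gives the primary key of fact f; facts sharing a key form a block."""
    facts = list(facts)
    blocks = {}
    for f in facts:
        blocks.setdefault(key(f), []).append(f)
    subsets = [frozenset(c) for r in range(k + 1)
               for c in combinations(facts, r)]
    # Rule 1: seed Delta with all sets of at most k facts satisfying q
    delta = {S for S in subsets if q_holds(S)}
    # Rule 2: close Delta under the block rule until a fixpoint is reached
    changed = True
    while changed:
        changed = False
        for S in subsets:
            if S in delta:
                continue
            if any(all(any(Sp <= (S | {a}) for Sp in delta) for a in B)
                   for B in blocks.values()):
                delta.add(S)
                changed = True
    # q is certain iff the empty set is eventually derived
    return frozenset() in delta
```

For example, for the query $\exists x\, R(x, a)$ over blocks $\{R(1,a), R(1,b)\}$ and $\{R(2,a)\}$, every repair contains $R(2,a)$, and the fixpoint indeed derives the empty set.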

In this paper, we first introduce in detail some preliminaries on signal flow graphs, linear time-invariant systems over F(z), and computational complexity. To synthesize a necessary and sufficient condition on F(z) for the general 2-path problem, we then analyze a sufficient condition on F(z) (or R) and necessary conditions on F(z), respectively. Moreover, an equivalent necessary and sufficient condition on R for the existence of a general 2-path is derived in detail. Finally, we analyze the computational complexity of the algorithm that checks this equivalent condition, which shows that the general 2-path problem is in P.

Endoluminal reconstruction using flow diverters represents a novel paradigm for the minimally invasive treatment of intracranial aneurysms. The configuration assumed by these very dense braided stents once deployed within the parent vessel is not easily predictable and medical volumetric images alone may be insufficient to plan the treatment satisfactorily. Therefore, here we propose a fast and accurate machine learning and reduced order modelling framework, based on finite element simulations, to assist practitioners in the planning and interventional stages. It consists of a first classification step to determine a priori whether a simulation will be successful (good conformity between stent and vessel) or not from a clinical perspective, followed by a regression step that provides an approximated solution of the deployed stent configuration. The latter is achieved using a non-intrusive reduced order modelling scheme that combines the proper orthogonal decomposition algorithm and Gaussian process regression. The workflow was validated on an idealised intracranial artery with a saccular aneurysm and the effect of six geometrical and surgical parameters on the outcome of stent deployment was studied. The two-step workflow allows the classification of deployment conditions with up to 95% accuracy and real-time prediction of the stent deployed configuration with an average prediction error never greater than the spatial resolution of 3D rotational angiography (0.15 mm). These results are promising as they demonstrate the ability of these techniques to achieve simulations within a few milliseconds while retaining the mechanical realism and predictability of the stent deployed configuration.
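A minimal numpy-only sketch of the non-intrusive POD-plus-GPR idea on synthetic data (the snapshot field, RBF kernel length-scale, and mode count are illustrative assumptions, not the paper's finite element setup):

```python
import numpy as np

def rbf(A, B, ell=0.2):
    """Squared-exponential kernel between parameter sets A (n,1) and B (m,1)."""
    d2 = (A[:, None, 0] - B[None, :, 0]) ** 2
    return np.exp(-0.5 * d2 / ell ** 2)

def fit_pod_gpr(P, S, r):
    """Non-intrusive ROM: POD basis from snapshots S (n_dof x n_train),
    then noise-free GP regression of each modal coefficient vs parameter."""
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    Ur = U[:, :r]                          # r POD modes
    A = Ur.T @ S                           # modal coefficients per snapshot
    K = rbf(P, P) + 1e-8 * np.eye(len(P))  # jitter for numerical stability
    alpha = np.linalg.solve(K, A.T)        # GP weights, one column per mode
    return lambda p: Ur @ (rbf(np.array([[p]]), P) @ alpha).ravel()

# toy snapshot data: u(x; mu) = mu*sin(pi x) + mu^2*cos(pi x)
x = np.linspace(0.0, 1.0, 200)
mus = np.linspace(0.0, 1.0, 10)
S = np.stack([m * np.sin(np.pi * x) + m**2 * np.cos(np.pi * x) for m in mus],
             axis=1)
predict = fit_pod_gpr(mus[:, None], S, r=2)
u_hat = predict(0.55)                      # real-time prediction at a new parameter
```

Once `alpha` and `Ur` are precomputed offline, each online prediction costs only a kernel evaluation and two small matrix products, which is what makes millisecond-scale predictions feasible.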

Hypergraphs are a powerful abstraction for modeling high-order relations, which are ubiquitous in many fields. A hypergraph consists of nodes and hyperedges (i.e., subsets of nodes), and there have been a number of attempts to extend the notion of $k$-cores, which has proved useful in numerous applications on pairwise graphs, to hypergraphs. However, the previous extensions are based on the unrealistic assumption that hyperedges are fragile, i.e., that a high-order relation becomes obsolete as soon as a single member leaves it. In this work, we propose a new substructure model, called the ($k$, $t$)-hypercore, based on the assumption that high-order relations persist as long as at least a $t$ fraction of the members remain. Specifically, it is defined as the maximal subhypergraph in which (1) every node has degree at least $k$ and (2) at least a $t$ fraction of the nodes remain in every hyperedge. We first prove that, given $t$ (or $k$), the ($k$, $t$)-hypercore for every possible $k$ (or $t$) can be computed in time linear in the sum of the sizes of the hyperedges. Then, we demonstrate that real-world hypergraphs from the same domain share similar ($k$, $t$)-hypercore structures, which capture different perspectives depending on $t$. Lastly, we show successful applications of our model in identifying influential nodes, dense substructures, and vulnerability in hypergraphs.
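The definition can be turned into a naive peeling fixpoint (a quadratic-time sketch for a single $(k, t)$ pair; the paper gives a linear-time algorithm):

```python
def kt_hypercore(nodes, hyperedges, k, t):
    """Naive fixpoint computation of the (k, t)-hypercore: repeatedly drop
    hyperedges retaining fewer than a t fraction of their original members,
    and nodes whose degree (in surviving hyperedges) falls below k."""
    alive_nodes = set(nodes)
    orig_size = [len(e) for e in hyperedges]
    alive_edges = set(range(len(hyperedges)))
    changed = True
    while changed:
        changed = False
        for i in list(alive_edges):        # drop hyperedges below the t threshold
            if len(hyperedges[i] & alive_nodes) < t * orig_size[i]:
                alive_edges.discard(i)
                changed = True
        deg = {v: 0 for v in alive_nodes}  # degrees w.r.t. surviving hyperedges
        for i in alive_edges:
            for v in hyperedges[i] & alive_nodes:
                deg[v] += 1
        for v in list(alive_nodes):        # drop nodes of degree < k
            if deg[v] < k:
                alive_nodes.discard(v)
                changed = True
    return alive_nodes, alive_edges
```

On the toy hypergraph below, the fragile case $t = 1$ (the classical extension) peels everything away, while $t = 0.6$ retains a non-trivial core, illustrating why relaxing fragility matters.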

In this paper, we consider stochastic versions of three classical growth models given by ordinary differential equations (ODEs). Specifically, we use stochastic versions of the Von Bertalanffy, Gompertz, and logistic differential equations as models. We assume that each stochastic differential equation (SDE) has some crucial parameters in the drift to be estimated, and we use the Maximum Likelihood Estimator (MLE) to estimate them. For estimating the diffusion parameter, we use the MLE for two of the SDEs and the quadratic variation of the data for the third. We apply the Akaike information criterion (AIC) to choose the best model for the simulated data, treating the AIC as a function of the drift parameters. We present a simulation study to validate our selection method. The proposed methodology can be applied to datasets with continuous and discrete observations, and also to highly sparse data. Indeed, the method works even in the extreme case where only one point of each path is observed, provided that a sufficient number of trajectories is available. In the latter cases, the data can be viewed as incomplete observations of a model with a tractable likelihood function, and we propose a version of the Expectation Maximization (EM) algorithm to estimate these parameters. Such datasets typically arise in fisheries, for instance.
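A small sketch of the quadratic-variation estimate of the diffusion parameter, on a toy SDE simulated with Euler-Maruyama (the drift choice and parameter values are illustrative assumptions, not the paper's models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama simulation of a toy SDE  dX = mu*X dt + sigma dW
T, n = 1.0, 100_000
dt = T / n
mu, sigma = 0.5, 0.3
X = np.empty(n + 1)
X[0] = 1.0
dW = rng.normal(0.0, np.sqrt(dt), n)
for i in range(n):
    X[i + 1] = X[i] + mu * X[i] * dt + sigma * dW[i]

# realized quadratic variation: [X]_T ~ sigma^2 * T for constant diffusion,
# since the drift contributes only O(dt) to the sum of squared increments
qv = np.sum(np.diff(X) ** 2)
sigma_hat = np.sqrt(qv / T)
```

The estimator needs no knowledge of the drift, which is why the diffusion parameter can be handled separately from the MLE step for the drift parameters.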

We consider leader election in clique networks, where $n$ nodes are connected by point-to-point communication links. For the synchronous clique under simultaneous wake-up, i.e., where all nodes start executing the algorithm in round $1$, we show a tradeoff between the number of messages and the amount of time. More specifically, we show that any deterministic algorithm with a message complexity of $n f(n)$ requires $\Omega\left(\frac{\log n}{\log f(n)+1}\right)$ rounds, for $f(n) = \Omega(\log n)$. Our result holds even if the node IDs are chosen from a relatively small set of size $\Theta(n\log n)$, as we are able to avoid using Ramsey's theorem. We also give an upper bound that improves over the previously-best tradeoff. Our second contribution for the synchronous clique under simultaneous wake-up is to show that $\Omega(n\log n)$ is in fact a lower bound on the message complexity that holds for any deterministic algorithm with a termination time $T(n)$. We complement this result by giving a simple deterministic algorithm that achieves leader election in sublinear time while sending only $o(n\log n)$ messages, if the ID space is of at most linear size. We also show that Las Vegas algorithms (that never fail) require $\Theta(n)$ messages. For the synchronous clique under adversarial wake-up, we show that $\Omega(n^{3/2})$ is a tight lower bound for randomized $2$-round algorithms. Finally, we turn our attention to the asynchronous clique: Assuming adversarial wake-up, we give a randomized algorithm that achieves a message complexity of $O(n^{1 + 1/k})$ and an asynchronous time complexity of $k+8$. For simultaneous wake-up, we translate the deterministic tradeoff algorithm of Afek and Gafni to the asynchronous model, thus partially answering an open problem they pose.

In this paper, I try to tame "Basu's elephants" (data with extreme selection on observables). I propose new practical large-sample and finite-sample methods for estimating and inferring heterogeneous causal effects (under unconfoundedness) in the empirically relevant context of limited overlap. I develop a general principle called "Stable Probability Weighting" (SPW) that can be used as an alternative to the widely used Inverse Probability Weighting (IPW) technique, which relies on strong overlap. I show that IPW (or its augmented version), when valid, is a special case of the more general SPW (or its doubly robust version), which adjusts for the extremeness of the conditional probabilities of the treatment states. The SPW principle can be implemented using several existing large-sample parametric, semiparametric, and nonparametric procedures for conditional moment models. In addition, I provide new finite-sample results that apply when unconfoundedness is plausible within fine strata. Since IPW estimation relies on the problematic reciprocal of the estimated propensity score, I develop a "Finite-Sample Stable Probability Weighting" (FPW) set-estimator that is unbiased in a sense. I also propose new finite-sample inference methods for testing a general class of weak null hypotheses. The associated computationally convenient methods, which can be used to construct valid confidence sets and to bound the finite-sample confidence distribution, are of independent interest. My large-sample and finite-sample frameworks extend to the setting of multivalued treatments.
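A toy illustration (not the paper's SPW estimator) of why the reciprocal of the propensity score is problematic under limited overlap, with weight capping as a crude stand-in for adjusting extreme conditional probabilities; the simulation design is entirely assumed:

```python
import numpy as np

rng = np.random.default_rng(42)
n, tau = 2000, 1.0

# covariate and true propensity score; e(X) approaches 0 near X = 0,
# mimicking limited overlap
X = rng.uniform(size=n)
e = 0.01 + 0.98 * X
D = rng.uniform(size=n) < e
Y = tau * D + 0.1 * rng.normal(size=n)    # constant treatment effect tau

# classical IPW (Horvitz-Thompson): relies on the reciprocal 1/e,
# which explodes where overlap is poor
ipw = np.mean(D * Y / e - (1 - D) * Y / (1 - e))

# crude stabilisation: cap the inverse weights (NOT the paper's SPW,
# just an illustration that taming extreme weights trades variance for bias)
w1 = np.minimum(1 / e, 10.0)
w0 = np.minimum(1 / (1 - e), 10.0)
capped = np.mean(D * Y * w1 - (1 - D) * Y * w0)
```

The uncapped weights here reach values near $1/0.01 = 100$, so a handful of observations can dominate the IPW average; any principled alternative has to control this extremeness without simply discarding the affected strata.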

In order to apply canonical labelling of graphs and isomorphism checking in interactive theorem provers, these checking algorithms must either be mechanically verified or their results must be verifiable by independent checkers. We analyze a state-of-the-art algorithm for canonical labelling of graphs (described by McKay and Piperno) and formulate it in terms of a formal proof system. We provide an implementation that can export a proof that the obtained graph is the canonical form of a given graph. Such proofs are then verified by our independent checker and can be used to confirm that two given graphs are not isomorphic.
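The role of a canonical form can be illustrated with a brute-force version (exponential-time, nothing like the McKay-Piperno algorithm): two graphs are isomorphic iff their canonical forms coincide, so differing forms certify non-isomorphism.

```python
from itertools import permutations

def canonical_form(n, edges):
    """Brute-force canonical form of a graph on vertices 0..n-1: the
    lexicographically smallest relabelled edge list over all n! relabellings.
    Only to illustrate the invariant checked by the proof system."""
    es = [tuple(sorted(e)) for e in edges]
    best = None
    for p in permutations(range(n)):
        relabelled = tuple(sorted(tuple(sorted((p[u], p[v]))) for u, v in es))
        if best is None or relabelled < best:
            best = relabelled
    return best
```

An independent checker in this setting only needs to trust that the two canonical forms were computed correctly, which is exactly what the exported proofs certify for the real algorithm.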

Consider a connected graph $G$ and let $T$ be a spanning tree of $G$. Every edge $e \in G-T$ induces a cycle in $T \cup \{e\}$. The intersection of two distinct such cycles is the set of edges of $T$ that belong to both cycles. The MSTCI problem consists in finding a spanning tree with the least number of such non-empty intersections, and the intersection number is the number of non-empty intersections of a solution. In this article we consider three aspects of the problem in a general context (i.e., for arbitrary connected graphs). The first presents two lower bounds on the intersection number. The second compares the intersection numbers of graphs that differ in one edge. The last is an attempt to generalize a recent result for graphs with a universal vertex.
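The quantity being minimized can be computed directly for a given spanning tree; a short sketch (the function name and edge-list representation are illustrative):

```python
from collections import deque

def intersection_number(n, edges, tree_edges):
    """Count pairs of fundamental cycles (one per non-tree edge) whose sets
    of shared tree edges are non-empty, for a graph on vertices 0..n-1."""
    adj = {v: [] for v in range(n)}        # adjacency of the spanning tree
    for u, v in tree_edges:
        adj[u].append(v)
        adj[v].append(u)

    def tree_path_edges(s, t):
        """Tree edges on the unique s-t path, via BFS parent pointers."""
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in parent:
                    parent[w] = u
                    q.append(w)
        path = set()
        while t != s:
            path.add(frozenset((t, parent[t])))
            t = parent[t]
        return path

    tset = {frozenset(e) for e in tree_edges}
    cycles = [tree_path_edges(u, v) for u, v in edges
              if frozenset((u, v)) not in tset]
    return sum(1 for i in range(len(cycles)) for j in range(i + 1, len(cycles))
               if cycles[i] & cycles[j])
```

For example, in $K_4$ with a star spanning tree, each pair of the three fundamental cycles shares exactly one tree edge, giving intersection count 3.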

We consider a slotted-time system with a transmitter-receiver pair. In the system, a transmitter observes a dynamic source and sends updates to a remote receiver through a communication channel. We assume that the channel is error-free but suffers a random delay. Moreover, when an update has been transmitted for too long, the transmission will be terminated immediately, and the update will be discarded. We assume the maximum transmission time is predetermined and is not controlled by the transmitter. The receiver maintains estimates of the current state of the dynamic source using the received updates. In this paper, we adopt the Age of Incorrect Information (AoII) as the performance metric and investigate the problem of optimizing the transmitter's action in each time slot to minimize AoII. We first characterize the optimization problem using a Markov Decision Process and evaluate the performance of some canonical transmission policies. Then, by leveraging the policy improvement theorem, we prove that, under a simple and easy-to-verify condition, the optimal policy for the transmitter is the one that initiates a transmission whenever the channel is idle and AoII is not zero. Lastly, we take the case where the transmission time is geometrically distributed as an example. For this example, we verify the condition numerically and provide numerical results that highlight the performance of the optimal policy.
