In fixed budget bandit identification, an algorithm sequentially observes samples from several distributions up to a given final time. It then answers a query about the set of distributions. A good algorithm will have a small probability of error. While that probability decreases exponentially with the final time, the best attainable rate is not known precisely for most identification tasks. We show that if a fixed budget task admits a complexity, defined as a lower bound on the probability of error which is attained by a single algorithm on all bandit problems, then that complexity is determined by the best non-adaptive sampling procedure for that problem. We show that there is no such complexity for several fixed budget identification tasks including Bernoulli best arm identification with two arms: there is no single algorithm that attains everywhere the best possible rate.
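To make the fixed budget setting concrete, here is a minimal sketch (ours, not the paper's algorithm) of a non-adaptive sampling procedure for Bernoulli best arm identification with two arms: the budget is split evenly across the arms and the arm with the larger empirical mean is recommended. The arm means, budget, and number of repetitions are illustrative choices.

```python
import random

def uniform_fixed_budget_bai(means, budget, rng):
    """Non-adaptive fixed-budget best arm identification: split the budget
    evenly across arms, then recommend the arm with the largest empirical mean."""
    counts = [0] * len(means)
    sums = [0.0] * len(means)
    for t in range(budget):
        arm = t % len(means)                                 # round-robin (non-adaptive) sampling
        reward = 1.0 if rng.random() < means[arm] else 0.0   # Bernoulli sample
        counts[arm] += 1
        sums[arm] += reward
    empirical = [s / c for s, c in zip(sums, counts)]
    return max(range(len(means)), key=lambda a: empirical[a])

# Estimate the probability of error over repeated runs (illustrative parameters).
means, budget, runs = [0.6, 0.4], 200, 2000
errors = sum(uniform_fixed_budget_bai(means, budget, random.Random(i)) != 0 for i in range(runs))
print("estimated error probability:", errors / runs)
```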

Related content

The recently proposed Generalized Time-domain Velocity Vector (GTVV) is a generalization of the relative room impulse response in the spherical harmonic (aka Ambisonic) domain that allows for blind estimation of early-echo parameters: the directions and relative delays of individual reflections. However, the derived closed-form expression of GTVV requires a few assumptions to hold, the most important being that the impulse response of the reference signal needs to be a minimum-phase filter. In practice, the reference is obtained by spatial filtering towards the Direction-of-Arrival of the source, and the aforementioned condition is bounded by the performance of the applied beamformer (and thus, by the Ambisonic array order). In the present work, we propose to circumvent this problem by properly modelling the GTVV time series, which permits not only relaxing the initial assumptions, but also extracting the information therein in a more consistent and efficient manner, entering the realm of blind system identification. Experiments using measured room impulse responses confirm the effectiveness of the proposed approach.
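As a toy illustration of the quantities at stake (and not of the GTVV estimator itself), the sketch below builds a synthetic impulse response with a direct path and two reflections and reads off the relative delays of the early echoes as peak positions; every sample position and amplitude is invented for illustration.

```python
import numpy as np

# Simulated room impulse response: a direct path plus two early reflections
# (all tap positions and amplitudes are invented for illustration).
fs = 16000                       # sampling rate in Hz
h = np.zeros(1024)
h[40] = 1.0                      # direct path
h[40 + 37] = 0.45                # first reflection
h[40 + 92] = 0.30                # second reflection

direct = np.argmax(np.abs(h))                     # direct-path index
peaks = np.argsort(np.abs(h))[::-1][:3]           # three strongest taps
relative_delays = sorted((p - direct) / fs for p in peaks if p != direct)
print("relative echo delays (s):", relative_delays)
```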

Beame et al. [ITCS 2018 & TALG 2021] introduced and used Bipartite Independent Set (BIS) and Independent Set (IS) oracle access to an unknown, simple, unweighted and undirected graph, and solved the edge estimation problem. The introduction of these oracles set off a series of works in a short span of time that either solved open questions posed by Beame et al. or generalized their work, as in Dell and Lapinskas [STOC 2018], Dell, Lapinskas and Meeks [SODA 2020], Bhattacharya et al. [ISAAC 2019 & Theory Comput. Syst. 2021], and Chen et al. [SODA 2020]. Edge estimation using BIS can be done with polylogarithmically many queries, while IS queries require a sub-linear but super-polylogarithmic number of queries. Chen et al. improved Beame et al.'s upper bound for edge estimation using IS and also showed an almost matching lower bound. Beame et al., in their introductory work, asked a few open questions, one of which was on estimating structures of higher order than edges, such as triangles and cliques, using BIS queries. In this work, we completely resolve the query complexity of estimating triangles using the BIS oracle. While doing so, we prove a lower bound for an even stronger query oracle called the Edge Emptiness (EE) oracle, recently introduced by Assadi, Chakrabarty and Khanna [ESA 2021] to test graph connectivity.
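As a toy illustration of the query model (not of the sublinear estimators of Beame et al. or Chen et al.), the sketch below implements a BIS oracle on a known graph and counts the edges between two disjoint vertex sets exactly by recursively bisecting the larger side; all class and function names are ours.

```python
class BISOracle:
    """Bipartite Independent Set oracle: given disjoint vertex sets A and B,
    report whether there is at least one edge with one endpoint in each."""
    def __init__(self, edges):
        self.edges = {frozenset(e) for e in edges}
        self.queries = 0
    def __call__(self, A, B):
        self.queries += 1
        return any(frozenset((a, b)) in self.edges for a in A for b in B)

def count_edges(oracle, A, B):
    """Exactly count edges between disjoint sets A and B with BIS queries by
    recursively splitting the larger side (a toy exact counter, not a sublinear
    estimator)."""
    if not oracle(A, B):
        return 0
    if len(A) == 1 and len(B) == 1:
        return 1
    big, small = (A, B) if len(A) >= len(B) else (B, A)
    mid = len(big) // 2
    return count_edges(oracle, big[:mid], small) + count_edges(oracle, big[mid:], small)

edges = [(0, 3), (0, 4), (1, 4), (2, 5)]
oracle = BISOracle(edges)
print(count_edges(oracle, [0, 1, 2], [3, 4, 5]), "edges,", oracle.queries, "queries")
```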

We study the basic statistical problem of testing whether normally distributed $n$-dimensional data has been truncated, i.e. altered by only retaining points that lie in some unknown truncation set $S \subseteq \mathbb{R}^n$. As our main algorithmic results, (1) We give a computationally efficient $O(n)$-sample algorithm that can distinguish the standard normal distribution $N(0,I_n)$ from $N(0,I_n)$ conditioned on an unknown and arbitrary convex set $S$. (2) We give a different computationally efficient $O(n)$-sample algorithm that can distinguish $N(0,I_n)$ from $N(0,I_n)$ conditioned on an unknown and arbitrary mixture of symmetric convex sets. These results stand in sharp contrast with known results for learning or testing convex bodies with respect to the normal distribution or learning convex-truncated normal distributions, where state-of-the-art algorithms require essentially $n^{\sqrt{n}}$ samples. An easy argument shows that no finite number of samples suffices to distinguish $N(0,I_n)$ from an unknown and arbitrary mixture of general (not necessarily symmetric) convex sets, so no common generalization of results (1) and (2) above is possible. We also prove that any algorithm (computationally efficient or otherwise) that can distinguish $N(0,I_n)$ from $N(0,I_n)$ conditioned on an unknown symmetric convex set must use $\Omega(n)$ samples. This shows that the sample complexity of each of our algorithms is optimal up to a constant factor.
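The distinguishing task itself is easy to simulate. The sketch below (not the authors' tester) rejection-samples $N(0,I_n)$ conditioned on a symmetric convex set, here an $\ell_\infty$ ball, and compares mean squared norms with untruncated samples; the dimension, radius, and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, radius = 10, 2000, 2.0     # dimension, sample size, l_inf truncation radius (arbitrary)

def truncated_normal_linf(n, m, radius, rng):
    """Rejection-sample N(0, I_n) conditioned on the symmetric convex set
    {x : max_i |x_i| <= radius}."""
    batches = []
    while sum(len(b) for b in batches) < m:
        batch = rng.standard_normal((4 * m, n))
        batches.append(batch[np.max(np.abs(batch), axis=1) <= radius])
    return np.concatenate(batches)[:m]

plain = rng.standard_normal((m, n))
trunc = truncated_normal_linf(n, m, radius, rng)
# Truncating to a symmetric convex set around the origin shrinks the norm, so even
# a simple comparison of mean squared norms separates these two particular samples.
print("mean ||x||^2, untruncated:", (plain ** 2).sum(axis=1).mean())
print("mean ||x||^2, truncated:  ", (trunc ** 2).sum(axis=1).mean())
```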

Spanning two decades, the Encyclopaedia of DNA Elements (ENCODE) is a collaborative research project that aims to identify all the functional elements in the human and mouse genomes. To best serve the scientific community, all data generated by the consortium is shared through a web portal (https://www.encodeproject.org/) with no access restrictions. The fourth and final phase of the project added a diverse set of new samples (including those associated with human disease), and a wide range of new assays aimed at detection, characterization and validation of functional genomic elements. The ENCODE data portal hosts results from over 23,000 functional genomics experiments, over 800 functional element characterization experiments (including in vivo transgenic enhancer assays, reporter assays and CRISPR screens), along with over 60,000 results of computational and integrative analyses (including imputations, predictions and genome annotations). The ENCODE Data Coordination Center (DCC) is responsible for development and maintenance of the data portal, along with the implementation and utilisation of the ENCODE uniform processing pipelines to generate uniformly processed data. Here we report recent updates to the data portal. Specifically, we have completely redesigned the home page, improved the search interface, added several new pages to highlight collections of biologically related data (deeply profiled cell lines, immune cells, Alzheimer's Disease, RNA-protein interactions, the degron matrix and a matrix of experiments organised by human donors), added single-cell experiments, and enhanced the cart interface for visualisation and download of user-selected datasets.
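Data on the portal can also be retrieved programmatically. The sketch below queries the portal's JSON search interface; the exact query parameters (`type`, `assay_title`, `format=json`, `limit`) and response keys (`total`, `@graph`) reflect our understanding of that interface and should be checked against the portal's documentation.

```python
import json
import urllib.request

# Query the ENCODE portal's JSON search interface (parameters assumed, see lead-in).
url = ("https://www.encodeproject.org/search/"
       "?type=Experiment&assay_title=TF+ChIP-seq&format=json&limit=5")
req = urllib.request.Request(url, headers={"Accept": "application/json"})
with urllib.request.urlopen(req) as resp:
    results = json.load(resp)

print("total matching experiments:", results.get("total"))
for item in results.get("@graph", []):
    print(item.get("accession"), "-", item.get("assay_title"))
```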

The weight distribution of error correction codes is a critical determinant of their error-correcting performance, making its enumeration of utmost importance. In the case of polar codes, the minimum weight $w_{\min}$ (which is equal to the minimum distance $d$) is the only weight for which an explicit enumerator formula is currently available. Having closed-form weight enumerators for polar codewords with weights greater than the minimum weight not only simplifies the enumeration process but also provides valuable insights towards constructing better polar-like codes. In this paper, we contribute towards understanding the algebraic structure underlying higher weights by analyzing Minkowski sums of orbits. Our approach builds upon the lower triangular affine (LTA) group of decreasing monomial codes. Specifically, we propose a closed-form expression for the enumeration of codewords with weight $1.5w_{\min}$. Our simulations demonstrate the potential for extending this method to higher weights.
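For very short codes the weight distribution can still be obtained by brute force, which is a useful sanity check for any closed-form enumerator. The sketch below builds the Kronecker power of the polar kernel and enumerates all codewords for an illustrative information set of our own choosing (rows of weight at least 4 for $N=8$), not a claim about any particular polar design.

```python
import itertools
import numpy as np

def polar_transform(n):
    """Kronecker power of the polar kernel F = [[1,0],[1,1]] (entries over GF(2))."""
    F = np.array([[1, 0], [1, 1]], dtype=int)
    G = np.array([[1]], dtype=int)
    for _ in range(n):
        G = np.kron(G, F)
    return G

# Illustrative small example: N = 8 and an information set chosen by row weight.
G = polar_transform(3)
info_set = [i for i in range(8) if G[i].sum() >= 4]
rows = G[info_set]

# Brute-force weight enumerator over all 2^k codewords.
weights = {}
for bits in itertools.product([0, 1], repeat=len(info_set)):
    codeword = np.mod(np.array(bits) @ rows, 2)
    w = int(codeword.sum())
    weights[w] = weights.get(w, 0) + 1
print("weight distribution:", dict(sorted(weights.items())))
```

For this particular information set the span is the first-order Reed-Muller code RM(1,3), so the enumerator lists one codeword of weight 0, fourteen of weight 4, and one of weight 8.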

In this paper we discuss potentially practical ways to produce expander graphs with good spectral properties and a compact description. We focus on several classes of uniform and bipartite expander graphs defined as random Schreier graphs of the general linear group over the finite field of size two. We perform numerical experiments and show that such constructions produce spectral expanders that can be useful for practical applications. To find a theoretical explanation of the observed experimental results, we use the method of moments to prove upper bounds for the expected second largest eigenvalue of the random Schreier graphs used in our constructions. We focus on bounds whose asymptotic behaviour is difficult to analyse, but which yield non-trivial conclusions for relatively small graphs with parameters from our numerical experiments (e.g., with fewer than $2^{200}$ vertices and degree at least logarithmic in the number of vertices).
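The kind of experiment described above can be reproduced at a very small scale. The sketch below (our own reconstruction, not the authors' code) builds a random Schreier graph of GL(n, F2) acting on the nonzero vectors of F2^n, using d random generators together with their inverses, and diagonalises the adjacency matrix exactly; n = 8 and d = 4 are chosen small enough for dense eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(0)

def gf2_invertible(M):
    """Check invertibility over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    n = M.shape[0]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r, col]), None)
        if pivot is None:
            return False
        M[[col, pivot]] = M[[pivot, col]]
        for r in range(n):
            if r != col and M[r, col]:
                M[r] ^= M[col]
    return True

def random_gl2(n, rng):
    """Rejection-sample a uniformly random element of GL(n, F2)."""
    while True:
        M = rng.integers(0, 2, size=(n, n))
        if gf2_invertible(M):
            return M

# Schreier graph of GL(n, F2) acting on the nonzero vectors of F2^n,
# with d random generators used in both directions (degree 2d, with multiplicity).
n, d = 8, 4
vectors = [np.array(v) for v in np.ndindex(*([2] * n))][1:]   # skip the zero vector
index = {tuple(v): i for i, v in enumerate(vectors)}
N = len(vectors)
A = np.zeros((N, N))
for _ in range(d):
    g = random_gl2(n, rng)
    for i, v in enumerate(vectors):
        j = index[tuple(g.dot(v) % 2)]
        A[i, j] += 1
        A[j, i] += 1              # edge for the inverse generator as well

eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
print("degree:", 2 * d, " second largest eigenvalue:", round(eigs[1], 3))
```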

Quantum networks constitute a major part of quantum technologies. They will boost distributed quantum computing drastically by providing a scalable modular architecture of quantum chips, or by establishing an infrastructure for measurement-based quantum computing. Moreover, they will provide the backbone of the future quantum internet, allowing for high margins of security. Interestingly, the advantages that quantum networks would provide for communications rely on entanglement distribution, which suffers from high latency in protocols based on Bell pair distribution and bipartite entanglement swapping. Moreover, the algorithms designed for multipartite entanglement routing suffer from intractability issues, making them unsolvable exactly in polynomial time. In this paper, we investigate a new approach for graph state distribution in quantum networks relying inherently on local quantum coding -- LQC -- isometries and on multipartite state transfer. Additionally, single-shot bounds for stabilizer state distribution are provided. Analogously to network coding, these bounds are shown to be achievable if appropriate isometries/stabilizer codes are chosen in the relay nodes, which induces lower-latency entanglement distribution. Finally, the advantages of the protocol with respect to different figures of merit of the network are demonstrated.
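As a minimal illustration of the objects being distributed (and not of the routing or coding protocol itself), the sketch below constructs a small graph state as an explicit state vector by applying controlled-Z phases to the uniform superposition, and checks that every stabilizer K_v = X_v prod_{u~v} Z_u leaves it invariant.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def graph_state(n, edges):
    """|G> = prod_{(u,v) in E} CZ_uv |+>^n, built directly on the 2^n amplitudes."""
    dim = 2 ** n
    psi = np.full(dim, 1 / np.sqrt(dim))
    for idx in range(dim):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]      # qubit 0 = most significant
        psi[idx] *= (-1) ** sum(bits[u] * bits[v] for u, v in edges)
    return psi

# A 3-vertex path graph state, and a check of all its stabilizers.
n, edges = 3, [(0, 1), (1, 2)]
psi = graph_state(n, edges)
for v in range(n):
    nbrs = {u for e in edges if v in e for u in e if u != v}
    K = kron_all([X if q == v else (Z if q in nbrs else I2) for q in range(n)])
    assert np.allclose(K @ psi, psi)
print("all", n, "stabilizers K_v = X_v prod_{u~v} Z_u fix the graph state")
```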

Discrete event systems (DES) have been deeply developed and applied in practice, but state complexity in DES remains an important problem that calls for innovative methods. With the development of quantum computing and quantum control, a natural problem is to simulate DES by means of quantum computing models and to establish {\it quantum DES} (QDES). The motivation is twofold: on the one hand, QDES have potential applications when DES are simulated and processed by quantum computers, where quantum systems are employed to simulate the evolution of states driven by discrete events; on the other hand, QDES may have essential advantages over DES concerning state complexity when modelling some practical problems. The goal of this paper is therefore to establish a basic framework of QDES by using {\it quantum finite automata} (QFA) as the modelling formalism, and the supervisory control theorems of QDES are established and proved. We then present a polynomial-time algorithm to decide whether or not the controllability condition holds. In particular, we construct a number of new examples of QFA to illustrate the supervisory control of QDES and to verify the essential advantages of QDES over classical DES in state complexity.
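To fix ideas about the modelling formalism, the sketch below implements a measure-once QFA as unitaries acting on a state vector. This is a generic one-qubit example, not the paper's supervisory-control construction: a single-qubit rotation by pi/p accepts a word of length k with probability cos^2(k*pi/p), which equals 1 exactly when k is a multiple of p, whereas a deterministic automaton for the same language needs p states.

```python
import numpy as np

def mo_qfa_accept_prob(unitaries, accepting, word, start=0):
    """Measure-once quantum finite automaton: apply one unitary per input symbol,
    then measure; return the probability of landing in an accepting basis state."""
    dim = next(iter(unitaries.values())).shape[0]
    psi = np.zeros(dim, dtype=complex)
    psi[start] = 1.0
    for symbol in word:
        psi = unitaries[symbol] @ psi
    return float(sum(abs(psi[s]) ** 2 for s in accepting))

# One-qubit QFA over the alphabet {'a'}: each 'a' rotates by pi/p.
p = 5
theta = np.pi / p
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
for k in [0, 1, 4, 5, 10]:
    print(k, round(mo_qfa_accept_prob({'a': U}, {0}, 'a' * k), 3))
```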

We introduce a family of graph parameters, called induced multipartite graph parameters, and study their computational complexity. First, we consider the following decision problem: given an induced multipartite graph parameter $p$, a graph $G$, and natural numbers $k\geq2$ and $\ell$, decide whether the maximum value of $p$ over all induced $k$-partite subgraphs of $G$ is at most $\ell$. We prove that this problem is W[1]-hard. Next, we consider a variant of this problem, where we must decide whether the given graph $G$ contains a sufficiently large induced $k$-partite subgraph $H$ such that $p(H)\leq\ell$. We show that for certain parameters this problem is para-NP-hard, while for others it is fixed-parameter tractable.
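For tiny graphs the first quantity can be computed by brute force, which makes the problem statement concrete. The sketch below takes the number of edges as an illustrative choice of parameter $p$ and maximises it over all induced $k$-partite subgraphs; all function names are ours.

```python
import itertools

def is_k_partite(vertices, edges, k):
    """Brute-force k-colourability of the induced subgraph (fine for tiny graphs)."""
    vs = list(vertices)
    for colouring in itertools.product(range(k), repeat=len(vs)):
        colour = dict(zip(vs, colouring))
        if all(colour[u] != colour[v] for u, v in edges):
            return True
    return False

def max_parameter_over_induced_k_partite(V, E, k, p):
    """max p(H) over all induced k-partite subgraphs H of G = (V, E)."""
    best = None
    for r in range(len(V) + 1):
        for S in itertools.combinations(V, r):
            sub_edges = [(u, v) for u, v in E if u in S and v in S]
            if is_k_partite(S, sub_edges, k):
                value = p(S, sub_edges)
                best = value if best is None else max(best, value)
    return best

# Example: p(H) = number of edges, G = complete graph K_4, k = 2.
V = [0, 1, 2, 3]
E = list(itertools.combinations(V, 2))
num_edges = lambda S, sub_edges: len(sub_edges)
# K_4 has no induced bipartite subgraph on 3 or more vertices, so the maximum is 1.
print(max_parameter_over_induced_k_partite(V, E, 2, num_edges))
```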

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
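The re-weighting scheme is short enough to sketch. The code below computes the effective number $(1-\beta^{n})/(1-\beta)$ per class and turns the inverse effective numbers into per-class loss weights; normalising the weights to sum to the number of classes is a common convention rather than something stated in the abstract.

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Class-balanced weights from the effective number of samples
    E_n = (1 - beta**n) / (1 - beta); weights are the inverse effective numbers,
    normalised to sum to the number of classes (a common convention)."""
    samples_per_class = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - np.power(beta, samples_per_class)) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights * len(samples_per_class) / weights.sum()

# A long-tailed toy class distribution: the head class has 10,000 samples, the tail 10.
counts = [10000, 2000, 400, 80, 10]
print(np.round(class_balanced_weights(counts, beta=0.999), 3))
# The per-sample loss is then scaled by the weight of its class,
# e.g. loss = weights[y] * cross_entropy(logits, y).
```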
