
We determine the computational power of isometric tensor network states (isoTNS), a variational ansatz originally developed to numerically find and compute properties of gapped ground states and topological states in two dimensions. By mapping 2D isoTNS to 1+1D unitary quantum circuits, we find that computing local expectation values in isoTNS is $\textsf{BQP}$-complete. We then introduce injective isoTNS, which are those isoTNS that are the unique ground states of frustration-free Hamiltonians and which are characterized by an injectivity parameter $\delta\in(0,1/D]$, where $D$ is the bond dimension of the isoTNS. We show that injectivity necessarily adds depolarizing noise to the circuit at a rate $\eta=\delta^2D^2$. Weakly injective isoTNS (small $\delta$) remain $\textsf{BQP}$-complete, but there exists an efficient classical algorithm to compute local expectation values in strongly injective isoTNS ($\eta\geq0.41$). Sampling from isoTNS corresponds to monitored quantum dynamics, and we exhibit a family of isoTNS that undergo a phase transition from a hard phase to an easy phase in which the monitored circuit can be sampled efficiently. Our results can be used to design provable algorithms to contract isoTNS. Our mapping from ground states of certain frustration-free Hamiltonians to open circuit dynamics in one lower dimension may be of independent interest.
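As a quick numeric illustration of the noise-rate relation quoted in the abstract (not part of the paper), the sketch below computes $\eta=\delta^2D^2$ and applies the stated $\eta\geq0.41$ threshold; the function names are invented for the example.

```python
import math

def noise_rate(delta, D):
    """Depolarizing noise rate induced by injectivity: eta = delta^2 * D^2."""
    assert 0 < delta <= 1 / D, "injectivity parameter must lie in (0, 1/D]"
    return delta**2 * D**2

def regime(delta, D, threshold=0.41):
    """Classify by the abstract's threshold: eta >= 0.41 admits an
    efficient classical algorithm for local expectation values."""
    return "classically easy" if noise_rate(delta, D) >= threshold else "potentially BQP-hard"

D = 2
print(math.sqrt(0.41) / D)   # ~0.320: smallest 'strongly injective' delta for D = 2
print(regime(0.45, D))       # classically easy (eta = 0.81)
print(regime(0.10, D))       # potentially BQP-hard (eta = 0.04)
```

For $D=2$ the strongly injective window is thus roughly $\delta\in[0.32,\,0.5]$, since $\delta$ is capped at $1/D$.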

Related Content


Statistical analysis of bipartite networks frequently requires randomly sampling from the set of all bipartite networks with the same degree sequence as an observed network. Trade algorithms offer an efficient way to generate such samples by incrementally `trading' the positions of some of their edges. However, it is difficult to know how many trades are required to ensure that the sample is random. I propose a stopping rule that tracks the distance between sampled networks and the observed network, and stops performing trades once the distribution of these distances stabilizes. Analyses of over 300 different degree sequences demonstrate that this stopping rule yields a random sample with high probability, and that it is practical for use in empirical applications.
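A minimal sketch of the trade idea with a distance-based stopping heuristic, assuming a binary biadjacency matrix; the batch-wise stabilization check is an illustrative stand-in for the paper's exact rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def trade(M):
    """Attempt one degree-preserving 'trade' (checkerboard swap) on a
    binary biadjacency matrix M, in place."""
    r1, r2 = rng.choice(M.shape[0], size=2, replace=False)
    c1, c2 = rng.choice(M.shape[1], size=2, replace=False)
    # the swap is valid iff the 2x2 submatrix is a checkerboard
    if M[r1, c1] == M[r2, c2] == 1 and M[r1, c2] == M[r2, c1] == 0:
        M[r1, c1] = M[r2, c2] = 0
        M[r1, c2] = M[r2, c1] = 1
        return True
    return False

def sample(M_obs, batch=1000, tol=1e-3, max_batches=100):
    """Keep trading until the mean Hamming distance to the observed
    network stabilizes between successive batches."""
    M = M_obs.copy()
    prev = 0.0
    for _ in range(max_batches):
        dists = []
        for _ in range(batch):
            trade(M)
            dists.append(np.abs(M - M_obs).sum())
        cur = np.mean(dists)
        if prev > 0 and abs(cur - prev) / prev < tol:
            break
        prev = cur
    return M

M_obs = (rng.random((20, 30)) < 0.2).astype(int)
M_rand = sample(M_obs)
# trades preserve both degree sequences
assert (M_rand.sum(0) == M_obs.sum(0)).all() and (M_rand.sum(1) == M_obs.sum(1)).all()
```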

The multiscale hybrid-mixed (MHM) method is a multi-level strategy for approximating the solution of boundary value problems with heterogeneous coefficients. In this context, we propose a family of low-order finite elements for the linear elasticity problem that are free from Poisson locking. The finite elements rely on face degrees of freedom associated with multiscale bases obtained from local Neumann problems with piecewise polynomial interpolations on the faces. We establish sufficient refinement levels of the fine-scale mesh such that the MHM method is well-posed, optimally convergent under local regularity conditions, and locking-free. Two-dimensional numerical tests corroborate the theoretical results.
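For context, the target problem and the locking mechanism in standard textbook notation (not taken verbatim from the paper): low-order displacement elements degrade as the Lamé coefficient $\lambda\to\infty$ (the incompressible limit $\nu\to1/2$), which is the Poisson locking the proposed elements avoid.

```latex
\begin{aligned}
-\nabla\cdot\sigma(u) &= f \quad \text{in } \Omega, \qquad
\sigma(u) = 2\mu\,\varepsilon(u) + \lambda\,(\nabla\cdot u)\,I,\\
\varepsilon(u) &= \tfrac12\big(\nabla u + \nabla u^{\top}\big), \qquad
\lambda = \frac{E\nu}{(1+\nu)(1-2\nu)} \xrightarrow[\;\nu\to 1/2\;]{} \infty .
\end{aligned}
```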

Many classical inferential approaches fail to hold when interference exists among the population units, that is, when the treatment status of one unit affects the potential outcomes of other units. Testing for such spillover effects in this setting makes the null hypothesis non-sharp. An interesting approach to tackling the non-sharp nature of the null hypothesis is to construct conditional randomization tests such that the null is sharp on a restricted population. In randomized experiments, conditional randomization tests retain finite-sample validity. However, such approaches can pose computational challenges, as finding the appropriate sub-populations based on the experimental design can involve solving an NP-hard problem. In this paper, we view the network among the population units as a random variable instead of as fixed. We propose a new approach that builds a conditional quasi-randomization test. Our main idea is to build the (non-sharp) null distribution of no spillover effects using random graph null models. We show that our method is exactly valid in finite samples under mild assumptions. Our method displays enhanced power over other methods, with substantial improvement in complex experimental designs. We highlight that the method reduces to a simple permutation test, making it easy to implement in practice. We conduct a simulation study to verify the finite-sample validity of our approach and illustrate our methodology by testing for interference in a weather insurance adoption experiment run in rural China.
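To make the "random graph null model" idea concrete, here is a toy version that redraws the network from an Erdős-Rényi model matching the observed density; the test statistic and null model are illustrative stand-ins, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def spillover_stat(Y, Z, A):
    """Illustrative statistic: covariance between a unit's outcome and
    its number of treated neighbours."""
    exposure = A @ Z
    return np.cov(Y, exposure)[0, 1]

def quasi_randomization_test(Y, Z, A, n_draws=2000):
    """Build the null by redrawing the graph from a random-graph null
    model, mimicking the idea of treating the network as random."""
    n = len(Y)
    p_edge = A.sum() / (n * (n - 1))
    obs = spillover_stat(Y, Z, A)
    null = np.empty(n_draws)
    for i in range(n_draws):
        A0 = (rng.random((n, n)) < p_edge).astype(int)
        np.fill_diagonal(A0, 0)
        null[i] = spillover_stat(Y, Z, A0)
    return np.mean(np.abs(null) >= np.abs(obs))  # two-sided p-value

n = 100
A = (rng.random((n, n)) < 0.05).astype(int); np.fill_diagonal(A, 0)
Z = rng.integers(0, 2, n)
Y = rng.normal(size=n) + 0.5 * (A @ Z)          # planted spillover effect
print(quasi_randomization_test(Y, Z, A))        # small p-value expected
```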

Regularization of inverse problems is of paramount importance in computational imaging. The ability of neural networks to learn efficient image representations has recently been exploited to design powerful data-driven regularizers. While state-of-the-art plug-and-play methods rely on an implicit regularization provided by neural denoisers, alternative Bayesian approaches consider Maximum A Posteriori (MAP) estimation in the latent space of a generative model, thus with an explicit regularization. However, state-of-the-art deep generative models require a huge amount of training data compared to denoisers. Moreover, their complexity hampers the optimization involved in deriving the latent MAP. In this work, we first propose to use compressive autoencoders instead. These networks, which can be seen as variational autoencoders with a flexible latent prior, are smaller and easier to train than state-of-the-art generative models. As a second contribution, we introduce the Variational Bayes Latent Estimation (VBLE) algorithm, which performs latent estimation within the framework of variational inference. Thanks to a simple yet efficient parameterization of the variational posterior, VBLE allows for fast and easy (approximate) posterior sampling. Experimental results on the image datasets BSD and FFHQ demonstrate that VBLE reaches performance similar to state-of-the-art plug-and-play methods, while being able to quantify uncertainties faster than other existing posterior sampling techniques.
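A minimal sketch of the variational latent estimation idea, assuming a pretrained `decoder` and a known `forward_op` (both hypothetical callables here); the Gaussian posterior parameterization and hyperparameters are illustrative, not the paper's.

```python
import torch

def vble(y, forward_op, decoder, latent_dim, n_iter=500, lr=1e-2, sigma_noise=0.05):
    """Fit a Gaussian variational posterior q(z) = N(mu, diag(sigma^2))
    over the latent of a pretrained decoder, under a standard normal prior."""
    mu = torch.zeros(latent_dim, requires_grad=True)
    log_sigma = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([mu, log_sigma], lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        eps = torch.randn(latent_dim)
        z = mu + log_sigma.exp() * eps                 # reparameterization trick
        x = decoder(z)
        data_fit = ((forward_op(x) - y) ** 2).sum() / (2 * sigma_noise**2)
        # KL( N(mu, sigma^2) || N(0, I) ), closed form
        kl = 0.5 * (mu**2 + (2 * log_sigma).exp() - 2 * log_sigma - 1).sum()
        (data_fit + kl).backward()
        opt.step()
    return mu.detach(), log_sigma.exp().detach()       # sample z ~ N(mu, sigma^2)

# toy usage: 2x subsampling forward operator and a fixed linear 'decoder'
W = torch.randn(64, 16)
mu, sigma = vble(y=torch.randn(32), forward_op=lambda x: x[::2],
                 decoder=lambda z: W @ z, latent_dim=16)
```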

This paper investigates the problem of informative path planning for a mobile robotic sensor network in spatio-temporally distributed mapping. The robots gather noisy measurements from an area of interest during their movements to build a Gaussian Process (GP) model of a spatio-temporal field. The model is then utilized to predict the spatio-temporal phenomenon at different points of interest. To navigate the group of robots in space and time so that they acquire maximal information gain while preserving their connectivity, we propose a novel multistep-prediction informative path planning optimization strategy employing newly defined local cost functions. Using the dual decomposition method, the optimization problem can be solved effectively and practically in a distributed manner. The proposed method is validated through synthetic experiments utilizing real-world data sets.
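A single-robot toy version of the GP bookkeeping involved, assuming a unit-variance RBF kernel: greedily move to the reachable point whose GP posterior variance is largest (a common proxy for information gain). The paper's planner is multi-robot, multistep, and connectivity-constrained; this sketch only illustrates the underlying computation.

```python
import numpy as np

def rbf(X1, X2, ell=0.5):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def posterior_var(X_train, X_cand, noise=1e-2, ell=0.5):
    """GP posterior variance at candidate points given visited points."""
    K = rbf(X_train, X_train, ell) + noise * np.eye(len(X_train))
    k = rbf(X_train, X_cand, ell)
    return 1.0 - np.einsum('ij,ij->j', k, np.linalg.solve(K, k))

def greedy_path(start, candidates, horizon=5, max_step=0.3):
    """Greedily pick the reachable candidate with the largest posterior variance."""
    path = [start]
    for _ in range(horizon):
        reach = candidates[np.linalg.norm(candidates - path[-1], axis=1) <= max_step]
        v = posterior_var(np.array(path), reach)
        path.append(reach[np.argmax(v)])
    return np.array(path)

grid = np.stack(np.meshgrid(np.linspace(0, 1, 15), np.linspace(0, 1, 15)), -1).reshape(-1, 2)
print(greedy_path(np.array([0.1, 0.1]), grid))
```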

Discovering a suitable neural network architecture for modeling complex dynamical systems poses a formidable challenge, often involving extensive trial and error and navigation through a high-dimensional hyper-parameter space. In this paper, we discuss a systematic approach to constructing neural architectures for modeling a subclass of dynamical systems, namely Linear Time-Invariant (LTI) systems. We use a variant of continuous-time neural networks in which the output of each neuron evolves continuously as the solution of a first- or second-order Ordinary Differential Equation (ODE). Instead of deriving the network architecture and parameters from data, we propose a gradient-free algorithm that computes a sparse architecture and the network parameters directly from the given LTI system, leveraging its properties. We bring forth a novel neural architecture paradigm featuring horizontal hidden layers and provide insights into why employing conventional neural architectures with vertical hidden layers may not be favorable. We also provide an upper bound on the numerical errors of our neural networks. Finally, we demonstrate the high accuracy of our constructed networks on three numerical examples.
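A toy instance of the construction's flavor, assuming first-order ODE neurons whose weights are read directly off the LTI matrices $(A, B, C, D)$ and integrated with forward Euler; the paper's actual architecture (horizontal layers, second-order neurons, error bounds) is richer than this.

```python
import numpy as np

class LTINeuronLayer:
    """First-order ODE neurons, dx/dt = A x + B u, read out as y = C x + D u.
    Weights come directly from the system matrices rather than from training
    (a gradient-free construction)."""
    def __init__(self, A, B, C, D):
        self.A, self.B, self.C, self.D = map(np.asarray, (A, B, C, D))
        self.x = np.zeros(self.A.shape[0])

    def step(self, u, dt):
        # forward-Euler integration of the neuron ODEs
        self.x = self.x + dt * (self.A @ self.x + self.B @ u)
        return self.C @ self.x + self.D @ u

# damped oscillator realized as a two-neuron chain
layer = LTINeuronLayer(A=[[0, 1], [-4, -0.5]], B=[[0], [1]], C=[[1, 0]], D=[[0]])
dt, T = 1e-3, 10.0
ys = [layer.step(np.array([1.0]), dt) for _ in range(int(T / dt))]  # step response
print(ys[-1])  # settles near the DC gain, -C A^{-1} B = 0.25
```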

Lloyd Shapley's cooperative value allocation theory stands as a central concept in game theory, extensively utilized across various domains to distribute resources, evaluate individual contributions, and ensure fairness. The Shapley value formula and his four axioms that characterize it form the foundation of the theory. Traditionally, the Shapley value is assigned under the assumption that all players in a cooperative game will ultimately form the grand coalition. In this paper, we reinterpret the Shapley value as an expectation of a certain stochastic path integral, with each path representing a general coalition formation process. As a result, the value allocation is naturally extended to all partial coalition states. In addition, we provide a set of five properties that extend the Shapley axioms and characterize the stochastic path integral. Finally, by integrating Hodge calculus, stochastic processes, and path integration of edge flows on graphs, we expand the cooperative value allocation theory beyond the standard coalition game structure to encompass a broader range of cooperative network configurations.
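The classical "random order" representation that this path-integral view generalizes can be computed as a Monte Carlo average of marginal contributions along random coalition-formation paths; a minimal sketch (the glove game is a standard textbook example, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def shapley_mc(players, v, n_paths=5000):
    """Monte Carlo Shapley values: average each player's marginal
    contribution over random coalition-formation orders."""
    phi = {p: 0.0 for p in players}
    for _ in range(n_paths):
        order = rng.permutation(players)
        coalition, prev = set(), 0.0
        for p in order:
            coalition.add(p)
            val = v(coalition)
            phi[p] += val - prev        # marginal contribution on this path
            prev = val
    return {p: s / n_paths for p, s in phi.items()}

# three-player glove game: worth 1 iff the coalition holds a left and a right glove
v = lambda S: 1.0 if ('L' in S and ('R1' in S or 'R2' in S)) else 0.0
print(shapley_mc(np.array(['L', 'R1', 'R2']), v))  # ~{'L': 2/3, 'R1': 1/6, 'R2': 1/6}
```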

We propose a method for obtaining parsimonious decompositions of networks into higher-order interactions, which can take the form of arbitrary motifs. The method is based on a class of analytically solvable generative models in which vertices are connected via explicit copies of motifs, which, in combination with non-parametric priors, allow us to infer higher-order interactions from dyadic graph data without any prior knowledge of the types or frequencies of such interactions. Crucially, we also consider 'degree-corrected' models that correctly reflect the degree distribution of the network and consequently prove to be a better fit for many real-world networks than non-degree-corrected models. We test the presented approach on simulated data, for which we recover the set of underlying higher-order interactions with a high degree of accuracy. For empirical networks, the method identifies concise sets of atomic subgraphs from within thousands of candidates that cover a large fraction of edges and include higher-order interactions of known structural and functional significance. The method not only produces an explicit higher-order representation of the network but also a fit of the network to analytically tractable models, opening new avenues for the systematic study of higher-order network structures.
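To make "explaining edges by motif copies" concrete, here is a deliberately crude greedy cover by triangles; the paper instead infers motif copies with analytically solvable generative models and non-parametric priors, so this is only a toy for intuition (uses `networkx`).

```python
import itertools
import networkx as nx

def greedy_motif_cover(G):
    """Greedily explain edges by triangle copies, leaving the rest as
    plain dyadic edges."""
    uncovered = set(frozenset(e) for e in G.edges())
    triangles = []
    for a, b, c in itertools.combinations(G.nodes(), 3):
        tri = [frozenset((a, b)), frozenset((b, c)), frozenset((a, c))]
        if all(e in uncovered for e in tri):
            triangles.append((a, b, c))
            uncovered -= set(tri)
    return triangles, uncovered   # motif copies and residual edges

G = nx.karate_club_graph()
tris, rest = greedy_motif_cover(G)
print(len(tris), "triangles,", len(rest), "residual edges")
```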

We endeavour to estimate numerous multi-dimensional means of various probability distributions on a common space based on independent samples. Our approach involves forming estimators through convex combinations of empirical means derived from these samples. We introduce two strategies to find appropriate data-dependent convex combination weights: the first employs a testing procedure to identify neighbouring means with low variance, which results in a closed-form plug-in formula for the weights; the second determines the weights by minimizing an upper confidence bound on the quadratic risk. Through theoretical analysis, we evaluate the improvement in quadratic risk offered by our methods compared to the empirical means. Our analysis focuses on a dimensional asymptotics perspective, showing that our methods asymptotically approach an oracle (minimax) improvement as the effective dimension of the data increases. We demonstrate the efficacy of our methods in estimating multiple kernel mean embeddings through experiments on both simulated and real-world datasets.
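A two-distribution toy of the convex-combination idea: the weight below minimizes a plug-in estimate of the quadratic risk, which is simpler than (but in the spirit of) the paper's test-based and UCB-based weight choices.

```python
import numpy as np

def combined_mean(X1, X2):
    """Estimate the mean of distribution 1 as (1-w)*xbar1 + w*xbar2, with w
    minimizing a plug-in estimate of the quadratic risk
    (1-w)^2 v1 + w^2 (v2 + ||mu1-mu2||^2)."""
    xb1, xb2 = X1.mean(0), X2.mean(0)
    v1 = X1.var(0, ddof=1).sum() / len(X1)               # tr Cov(xbar1)
    v2 = X2.var(0, ddof=1).sum() / len(X2)
    gap2 = max(((xb1 - xb2) ** 2).sum() - v1 - v2, 0.0)  # debiased ||mu1-mu2||^2
    w = v1 / (v1 + v2 + gap2)                            # risk-minimizing weight
    return (1 - w) * xb1 + w * xb2, w

rng = np.random.default_rng(0)
d = 200                                    # high effective dimension
X1 = rng.normal(0.0, 1.0, (20, d))         # target distribution, mean 0
X2 = rng.normal(0.1, 1.0, (50, d))         # neighbouring distribution
est, w = combined_mean(X1, X2)
# combined estimator beats the plain empirical mean of X1
print(w, ((est - 0.0) ** 2).sum(), (X1.mean(0) ** 2).sum())
```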

Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I conclude that there are three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function to encourage similar images to be projected close to one another. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfactory performance of SRH.
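Since the "shadow" construction is only sketched in the abstract, the following is a generic pairwise deep-hashing objective of the kind such methods build on (contrastive pull/push plus a quantization penalty), not SRH itself; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def pairwise_hash_loss(codes, labels, margin=2.0, q_weight=0.1):
    """Pull codes of same-label images together, push different-label codes
    apart, and penalize deviation from +/-1 so the real-valued CNN outputs
    quantize cleanly to binary hash codes."""
    d = torch.cdist(codes, codes)                       # pairwise distances
    same = (labels[:, None] == labels[None, :]).float()
    pull = same * d.pow(2)
    push = (1 - same) * F.relu(margin - d).pow(2)
    quant = (codes.abs() - 1).pow(2).sum(1).mean()      # quantization penalty
    n = codes.shape[0]
    return (pull + push).sum() / (n * (n - 1)) + q_weight * quant

codes = torch.tanh(torch.randn(32, 48, requires_grad=True))  # stand-in for 48-bit CNN outputs
labels = torch.randint(0, 10, (32,))
loss = pairwise_hash_loss(codes, labels)
loss.backward()
# at retrieval time: binary_codes = codes.sign()
```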
