
The (extended) Binary Value Principle (eBVP: $\sum_{i=1}^n x_i 2^{i-1} = -k$ for $k>0$ and $x_i^2=x_i$) has received a lot of attention recently: several lower bounds have been proved for it (Alekseev et al. 2020, Alekseev 2021, Part and Tzameret 2021), and it has been shown (Alekseev et al. 2020) that the probabilistically verifiable Ideal Proof System (IPS) (Grochow and Pitassi 2018) together with eBVP polynomially simulates a similar semialgebraic proof system. In this paper we consider Polynomial Calculus with the algebraic version of Tseitin's extension rule (Ext-PC). Contrary to IPS, this is a Cook--Reckhow proof system. We show that in this context eBVP still allows one to simulate similar semialgebraic systems. We also prove that it allows one to simulate the Square Root Rule (Grigoriev and Hirsch 2003), which is far from obvious for ordinary Polynomial Calculus. On the other hand, we demonstrate that eBVP probably does not help in proving exponential lower bounds for Boolean tautologies: we show that an Ext-PC (even with the Square Root Rule) derivation of any such tautology from eBVP must have exponential size.
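
To make the principle concrete, here is a minimal Python sketch (our illustration, not part of the paper) that brute-forces Boolean assignments for small $n$ and confirms that $\sum_{i=1}^n x_i 2^{i-1} = -k$ has no solution when $k>0$: the left-hand side is a nonnegative integer for every Boolean assignment.

```python
from itertools import product

def ebvp_has_boolean_solution(n, k):
    """Check whether sum_i x_i * 2^(i-1) = -k has a 0/1 solution."""
    for xs in product((0, 1), repeat=n):
        if sum(x * 2 ** i for i, x in enumerate(xs)) == -k:
            return True
    return False

# The LHS is a nonnegative integer for any Boolean assignment,
# so the principle is unsatisfiable for every k > 0.
for n in range(1, 8):
    assert not ebvp_has_boolean_solution(n, k=1)
print("no Boolean solutions for n <= 7, k = 1")
```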

Related content


Location-aware networks will introduce new services and applications for modern convenience, surveillance, and public safety. In this paper, we consider the problem of cooperative localization in a wireless network in which the positions of certain anchor nodes can be controlled. We introduce an active planning method that moves the anchors so as to maximize the information gain of future measurements. In the control layer of the proposed method, control inputs are calculated by minimizing the traces of approximate inverse Bayesian Fisher information matrices (FIMs). The estimation layer computes estimates of the agent states and provides Gaussian representations of the marginal posteriors of agent positions to the control layer for approximate Bayesian FIM computations. Receding horizon (RH) control is performed based on a cost function that accumulates Bayesian FIM contributions over a sliding window of discrete future time steps. Approximations that make it possible to solve the resulting tree-search problem efficiently are also discussed. A numerical case study demonstrates the intelligent behavior of a single controlled anchor in a 3-D scenario and a resulting significant improvement in localization accuracy.
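
The following toy sketch illustrates the control-layer idea for a single anchor and a one-step horizon. The range-only measurement model, noise level, candidate moves, and Gaussian posterior are all illustrative assumptions; the paper's method handles multi-step horizons via tree search.

```python
import numpy as np

sigma2 = 0.1                                   # range-noise variance (assumed)
agent_mean = np.array([0.0, 0.0, 1.0])         # Gaussian posterior mean of agent
agent_cov = np.diag([4.0, 4.0, 1.0])           # ... and covariance

def range_fim(anchor):
    """FIM of one range measurement, evaluated at the posterior mean."""
    d = agent_mean - anchor
    u = d / np.linalg.norm(d)                  # unit direction anchor -> agent
    return np.outer(u, u) / sigma2

def cost(anchor):
    """Trace of the inverse approximate Bayesian FIM (lower is better)."""
    J = np.linalg.inv(agent_cov) + range_fim(anchor)
    return np.trace(np.linalg.inv(J))

anchor = np.array([5.0, 0.0, 0.0])
moves = [np.array(m, dtype=float) for m in
         [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]]
best = min(moves, key=lambda m: cost(anchor + m))
print("best one-step move:", best, "cost:", round(cost(anchor + best), 4))
```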

The paper considers the SUPPORTED model of distributed computing introduced by Schmid and Suomela [HotSDN'13], generalizing the LOCAL and CONGEST models. In this framework, multiple instances of the same problem, differing from each other by the subnetwork to which they apply, recur over time, and need to be solved efficiently online. To do that, one may rely on an initial preprocessing phase for computing some useful information. This preprocessing phase makes it possible, in some cases, to overcome locality-based time lower bounds. A first contribution of the current paper is expanding the spectrum of problem types to which the SUPPORTED model applies. In addition to subnetwork-defined recurrent problems, we also introduce recurrent problems of two additional types: (i) instances defined by partial client sets, and (ii) instances defined by partially fixed outputs. Our second contribution is illustrating the versatility of the SUPPORTED framework by examining recurrent variants of three classical graph problems. The first problem is Minimum Client Dominating Set (CDS), a recurrent version of the classical dominating set problem in which each recurrent instance requires us to dominate a partial client set. We provide a constant-time approximation scheme for CDS on trees and planar graphs. The second problem is Color Completion (CC), a recurrent version of the coloring problem in which each recurrent instance comes with a partially fixed coloring (of some of the vertices) that must be completed. We study the minimum number of new colors and the minimum total number of colors necessary for completing this task. The third problem we study is a recurrent version of Locally Checkable Labellings (LCL) on paths of length $n$. We show that such problems have complexities that are either $\Theta(1)$ or $\Theta(n)$, extending the results of Foerster et al. [INFOCOM'19].
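
As a concrete illustration of the Color Completion setting, the following toy Python sketch (our own, not the paper's algorithm) greedily completes a partial coloring and counts how many new colors it opens beyond the precolored palette.

```python
def complete_coloring(adj, precolored):
    """Greedily extend a partial proper coloring, counting new colors."""
    color = dict(precolored)
    palette = set(precolored.values())
    new_colors = 0
    for v in adj:
        if v in color:
            continue
        used = {color[u] for u in adj[v] if u in color}
        free = sorted(palette - used)
        if free:
            color[v] = free[0]                 # reuse an existing color
        else:                                  # no existing color fits
            c = max(palette, default=-1) + 1
            palette.add(c)
            color[v] = c
            new_colors += 1
    return color, new_colors

# Path 0-1-2-3 with both endpoints precolored 0: two new colors are needed.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
coloring, fresh = complete_coloring(adj, {0: 0, 3: 0})
print(coloring, "new colors:", fresh)
```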

We consider the problem of estimating a multivariate function $f_0$ of bounded variation (BV), from noisy observations $y_i = f_0(x_i) + z_i$ made at random design points $x_i \in \mathbb{R}^d$, $i=1,\ldots,n$. We study an estimator that forms the Voronoi diagram of the design points, and then solves an optimization problem that regularizes according to a certain discrete notion of total variation (TV): the sum of weighted absolute differences of parameters $\theta_i,\theta_j$ (which estimate the function values $f_0(x_i),f_0(x_j)$) at all neighboring cells $i,j$ in the Voronoi diagram. This is seen to be equivalent to a variational optimization problem that regularizes according to the usual continuum (measure-theoretic) notion of TV, once we restrict the domain to functions that are piecewise constant over the Voronoi diagram. The regression estimator under consideration hence performs (shrunken) local averaging over adaptively formed unions of Voronoi cells, and we refer to it as the Voronoigram, following the ideas in Koenker (2005), and drawing inspiration from Tukey's regressogram (Tukey, 1961). Our contributions in this paper span both the conceptual and theoretical frontiers: we discuss some of the unique properties of the Voronoigram in comparison to TV-regularized estimators that use other graph-based discretizations; we derive the asymptotic limit of the Voronoi TV functional; and we prove that the Voronoigram is minimax rate optimal (up to log factors) for estimating BV functions that are essentially bounded.
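
A rough numerical sketch of the estimator is given below. It builds the Voronoi adjacency with scipy, weights each neighboring pair by shared-edge length over point distance, and minimizes the TV-regularized least-squares objective by gradient descent on a smoothed absolute value; the smoothing is our stand-in for an exact graph fused-lasso solver, so treat this as illustrative only.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 1, size=(n, 2))                      # design points
f0 = (x[:, 0] > 0.5).astype(float)                      # piecewise-constant truth
y = f0 + 0.3 * rng.standard_normal(n)                   # noisy observations

# Voronoi adjacency: each finite ridge separates two neighboring cells.
vor = Voronoi(x)
edges, weights = [], []
for (i, j), rv in zip(vor.ridge_points, vor.ridge_vertices):
    if -1 in rv:                                        # skip unbounded ridges
        continue
    edge_len = np.linalg.norm(vor.vertices[rv[0]] - vor.vertices[rv[1]])
    edges.append((i, j))
    weights.append(edge_len / np.linalg.norm(x[i] - x[j]))
edges, weights = np.array(edges), np.array(weights)

# Gradient descent on 0.5*||y - theta||^2 + lam * sum w_ij |theta_i - theta_j|,
# with |t| smoothed to sqrt(t^2 + eps) so plain gradient descent applies.
lam, eps, step = 0.2, 1e-2, 0.01
theta = y.copy()
for _ in range(5000):
    diff = theta[edges[:, 0]] - theta[edges[:, 1]]
    g = weights * diff / np.sqrt(diff ** 2 + eps)
    grad = theta - y
    np.add.at(grad, edges[:, 0], lam * g)
    np.add.at(grad, edges[:, 1], -lam * g)
    theta -= step * grad
print("MSE to truth:", round(float(np.mean((theta - f0) ** 2)), 4))
```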

Graph clustering is a fundamental problem in unsupervised learning, with numerous applications in computer science and in analysing real-world data. In many real-world applications, we find that the clusters have a significant high-level structure. This is often overlooked in the design and analysis of graph clustering algorithms, which make strong simplifying assumptions about the structure of the graph. This thesis addresses the natural question of whether the structure of clusters can be learned efficiently and describes four new algorithmic results for learning such structure in graphs and hypergraphs. All of the presented theoretical results are extensively evaluated on both synthetic and real-world datasets from different domains, including image classification and segmentation, migration networks, co-authorship networks, and natural language processing. These experimental results demonstrate that the newly developed algorithms are practical, effective, and immediately applicable for learning the structure of clusters in real-world data.

The expressive and computationally inexpensive bipartite Graph Neural Networks (GNN) have been shown to be an important component of deep learning based Mixed-Integer Linear Program (MILP) solvers. Recent works have demonstrated the effectiveness of such GNNs in replacing the branching (variable selection) heuristic in branch-and-bound (B&B) solvers. These GNNs are trained, offline and on a collection of MILPs, to imitate a very good but computationally expensive branching heuristic, strong branching. Given that B&B results in a tree of sub-MILPs, we ask (a) whether there are strong dependencies exhibited by the target heuristic among the neighboring nodes of the B&B tree, and (b) if so, whether we can incorporate them in our training procedure. Specifically, we find that with the strong branching heuristic, a child node's best choice was often the parent's second-best choice. We call this the "lookback" phenomenon. Surprisingly, the typical branching GNN of Gasse et al. (2019) often misses this simple "answer". To imitate the target behavior more closely by incorporating the lookback phenomenon in GNNs, we propose two methods: (a) target smoothing for the standard cross-entropy loss function, and (b) adding a Parent-as-Target (PAT) Lookback regularizer term. Finally, we propose a model selection framework to incorporate harder-to-formulate objectives such as solving time in the final models. Through extensive experimentation on standard benchmark instances, we show that our proposal results in up to a 22% decrease in the size of the B&B tree and up to a 15% improvement in solving times.
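
To illustrate the first method, here is a hedged numpy sketch of a lookback-smoothed cross-entropy: a small mass alpha is moved from the strong-branching best variable onto the parent's second-best variable. The exact smoothing scheme used in the paper may differ from this construction.

```python
import numpy as np

def smoothed_cross_entropy(logits, best, parent_second_best, alpha=0.1):
    """Cross-entropy against a lookback-smoothed target distribution."""
    p = np.exp(logits - logits.max())
    p /= p.sum()                                # softmax over branching candidates
    target = np.zeros_like(p)
    target[best] = 1.0 - alpha                  # strong branching's best variable
    target[parent_second_best] += alpha         # "lookback" mass
    return float(-np.sum(target * np.log(p + 1e-12)))

logits = np.array([2.0, 1.5, 0.3, -1.0])        # GNN scores for 4 candidates
print(smoothed_cross_entropy(logits, best=0, parent_second_best=1))
```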

Sparse principal component analysis (SPCA) has been widely used for dimensionality reduction and feature extraction in high-dimensional data analysis. Despite many methodological and theoretical developments over the past two decades, the theoretical guarantees of the popular SPCA algorithm proposed by Zou, Hastie & Tibshirani (2006), based on the elastic net, remain unknown. We aim to close this important theoretical gap in this paper. We first revisit the SPCA algorithm of Zou et al. (2006) and present our implementation. We also study a computationally more efficient variant of the SPCA algorithm of Zou et al. (2006) that can be considered the limiting case of SPCA. We provide guarantees of convergence to a stationary point for both algorithms. We prove that, under a sparse spiked covariance model, both algorithms can recover the principal subspace consistently under mild regularity conditions. We show that their estimation error bounds match the best available bounds of existing works or the minimax rates up to some logarithmic factors. Moreover, we demonstrate the numerical performance of both algorithms in simulation studies.
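
For intuition, the sketch below implements the alternating structure behind such algorithms in numpy: a Procrustes (SVD) update for the orthogonal matrix and a soft-thresholding update for the sparse loadings, in the spirit of the cheaper "limiting case" variant. Penalty levels, initialization, and iteration counts are illustrative assumptions, not the paper's prescription.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def spca_limiting(X, k=2, lam1=20.0, n_iter=100):
    S = X.T @ X                                  # (unnormalized) covariance
    A = np.linalg.svd(S)[0][:, :k]               # init with ordinary PCA loadings
    for _ in range(n_iter):
        B = soft_threshold(S @ A, lam1)          # sparse loading update
        U, _, Vt = np.linalg.svd(S @ B, full_matrices=False)
        A = U @ Vt                               # Procrustes update of A
    norms = np.linalg.norm(B, axis=0)
    return B / np.where(norms > 0, norms, 1.0)   # normalized sparse loadings

X = np.random.default_rng(1).standard_normal((100, 10))
print(np.round(spca_limiting(X), 2))
```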

In 1982, Papadimitriou and Yannakakis introduced the Exact Matching (EM) problem where, given an edge-colored graph, with colors red and blue, and an integer $k$, the goal is to decide whether or not the graph contains a perfect matching with exactly $k$ red edges. Although they conjectured it to be $\textbf{NP}$-complete, it was soon after shown to be solvable in randomized polynomial time in the seminal work of Mulmuley et al., placing it in the complexity class $\textbf{RP}$. Since then, all attempts at finding a deterministic algorithm for EM have failed, thus leaving it as one of the few natural combinatorial problems in $\textbf{RP}$ but not known to be contained in $\textbf{P}$, and making it an interesting instance for testing the hypothesis $\textbf{RP}=\textbf{P}$. Progress has been lacking even on very restrictive classes of graphs, despite the problem being quite well known, as evidenced by the number of works citing it. In this paper we aim to gain more insight into EM by studying a new optimization problem we call Top-k Perfect Matching (TkPM), which we show to be polynomially equivalent to EM. By virtue of being an optimization problem, TkPM is more natural to approximate, so we provide approximation algorithms for it. Some of the approximation algorithms rely on a relaxation of EM on bipartite graphs where the output is required to be a perfect matching with a number of red edges differing from $k$ by at most $k/2$, which is of independent interest and generalizes to the Exact Weight Perfect Matching (EWPM) problem. We also consider parameterized algorithms and show that TkPM can be solved in FPT time parameterized by $k$ and the independence number of the graph. This result again relies on new tools developed for EM which are also of independent interest.
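
To make the EM problem statement concrete, here is a tiny brute-force checker (exponential time, illustration only): it enumerates candidate edge subsets of a red/blue-colored graph and reports whether some perfect matching uses exactly $k$ red edges.

```python
from itertools import combinations

def exact_matching(n_vertices, red, blue, k):
    """Is there a perfect matching with exactly k red edges? (brute force)"""
    edges = [(u, v, "R") for u, v in red] + [(u, v, "B") for u, v in blue]
    for cand in combinations(edges, n_vertices // 2):
        covered = [w for u, v, _ in cand for w in (u, v)]
        if len(set(covered)) == n_vertices:      # cand is a perfect matching
            if sum(c == "R" for _, _, c in cand) == k:
                return True
    return False

# 4-cycle 0-1-2-3: red edges {01, 23}, blue edges {12, 30}.
red, blue = [(0, 1), (2, 3)], [(1, 2), (3, 0)]
for k in range(3):
    print(k, exact_matching(4, red, blue, k))    # True, False, True
```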

The rapid recent progress in machine learning (ML) has raised a number of scientific questions that challenge the longstanding dogma of the field. One of the most important riddles is the good empirical generalization of overparameterized models. Overparameterized models are excessively complex relative to the size of the training dataset, and as a result they perfectly fit (i.e., interpolate) the training data, which is usually noisy. Such interpolation of noisy data is traditionally associated with detrimental overfitting, and yet a wide range of interpolating models -- from simple linear models to deep neural networks -- have recently been observed to generalize extremely well on fresh test data. Indeed, the recently discovered double descent phenomenon has revealed that highly overparameterized models often improve over the best underparameterized model in test performance. Understanding learning in this overparameterized regime requires new theory and foundational empirical studies, even for the simplest case of the linear model. The underpinnings of this understanding have been laid in very recent analyses of overparameterized linear regression and related statistical learning tasks, which resulted in precise analytic characterizations of double descent. This paper provides a succinct overview of this emerging theory of overparameterized ML (henceforth abbreviated as TOPML) that explains these recent findings through a statistical signal processing perspective. We emphasize the unique aspects that define the TOPML research area as a subfield of modern ML theory and outline interesting open questions that remain.
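
The double descent phenomenon is easy to reproduce for min-norm linear regression. The sketch below (dimensions and noise level are our assumptions) fits least squares via the pseudoinverse with a growing number of features and prints the test error, which typically spikes near the interpolation threshold $p = n$ and descends again for $p > n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_test, d = 40, 500, 200
beta = rng.standard_normal(d) / np.sqrt(d)       # true coefficients
X, Xt = rng.standard_normal((n, d)), rng.standard_normal((n_test, d))
y = X @ beta + 0.5 * rng.standard_normal(n)      # noisy training labels
yt = Xt @ beta                                   # noiseless test targets

for p in [5, 20, 35, 40, 45, 80, 200]:           # number of features used
    bhat = np.linalg.pinv(X[:, :p]) @ y          # min-norm solution when p >= n
    err = np.mean((Xt[:, :p] @ bhat - yt) ** 2)
    print(f"p = {p:3d}  test MSE = {err:.3f}")   # spike expected near p = n = 40
```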

This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way. To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.
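
The criticality discussion can be illustrated with a few lines of numpy: propagate a random input through a deep tanh network and track the preactivation variance for weight variances below, at, and above the critical value $C_W = 1$. Widths, depths, and the activation function are our illustrative choices, not the book's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth = 300, 50

for cw in [0.8, 1.0, 1.2]:                       # weight variance C_W
    z = rng.standard_normal(width)               # input preactivations
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * np.sqrt(cw / width)
        z = W @ np.tanh(z)
    # below criticality the signal decays exponentially with depth; at
    # C_W = 1 it decays only slowly; above it, tanh saturation pins the
    # variance at a nonzero fixed point.
    print(f"C_W = {cw}: variance after {depth} layers = {z.var():.2e}")
```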

Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks is typically represented in Euclidean domains. Nevertheless, there is an increasing number of applications in power systems where data are collected from non-Euclidean domains and represented as graph-structured data with high-dimensional features and interdependency among nodes. The complexity of graph-structured data has brought significant challenges to the existing deep neural networks defined in Euclidean domains. Recently, many studies on extending deep neural networks to graph-structured data in power systems have emerged. In this paper, a comprehensive overview of graph neural networks (GNNs) in power systems is presented. Specifically, several classical paradigms of GNN structures (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems such as fault diagnosis, power prediction, power flow calculation, and data generation are reviewed in detail. Furthermore, the main issues and research trends in the application of GNNs to power systems are discussed.
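
For readers unfamiliar with the basic building block, here is a minimal numpy sketch of a single graph convolution layer in the style of Kipf and Welling, applied to a toy 4-node network; the feature sizes and the example graph are illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph convolution: ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

A = np.array([[0, 1, 0, 1],                      # adjacency of a 4-node ring
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).standard_normal((4, 3))   # per-node features
W = np.random.default_rng(1).standard_normal((3, 8))   # layer weights
print(gcn_layer(A, X, W).shape)                  # (4, 8): new node embeddings
```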
