
Kelly's theorem states that a set of $n$ points affinely spanning $\mathbb{C}^3$ must determine at least one ordinary complex line (a line passing through exactly two of the points). Our main theorem shows that such sets determine at least $3n/2$ ordinary lines, unless the configuration has $n-1$ points in a plane and one point outside the plane (in which case there are at least $n-1$ ordinary lines). In addition, when at most $2n/3$ points are contained in any plane, we prove a theorem giving stronger bounds that take advantage of the existence of lines with 4 or more points (in the spirit of Melchior's and Hirzebruch's inequalities). Furthermore, when the points span 4 or more dimensions, with at most $2n/3$ points contained in any three-dimensional affine subspace, we show that there must be a quadratic number of ordinary lines.
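
For background on the "spirit of Melchior's inequality" invoked above (this is the classical statement over the reals, quoted for context rather than taken from the paper): for a finite set of points in the real projective plane, not all on one line, if $t_k$ denotes the number of lines containing exactly $k$ of the points, then
$$t_2 \;\ge\; 3 + \sum_{k\ge 4}(k-3)\,t_k,$$
so an abundance of lines with 4 or more points forces correspondingly many ordinary lines.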

Related content

Workflow nets are a popular variant of Petri nets that allow for algorithmic formal analysis of business processes. The central decision problems concerning workflow nets deal with soundness, where the initial and final configurations are specified. Intuitively, soundness states that from every reachable configuration one can reach the final configuration. We settle the long-open complexity of the three main variants of soundness: classical, structural and generalised soundness. The first two are EXPSPACE-complete, and, surprisingly, the last is PSPACE-complete, thus computationally simpler.

The numerical solution of a linear Schr\"odinger equation in the semiclassical regime is very well understood on the torus $\mathbb{T}^d$. A raft of modern computational methods are precise and affordable, while conserving energy and resolving high oscillations very well. This, however, is far from the case with regard to its solution in $\mathbb{R}^d$, a setting more suitable for many applications. In this paper we extend the theory of splitting methods to this end. The main idea is to derive the solution using a spectral method from a combination of solutions of the free Schr\"odinger equation and of linear scalar ordinary differential equations, in a symmetric Zassenhaus splitting method. This necessitates a detailed analysis of certain orthonormal spectral bases on the real line and their evolution under the free Schr\"odinger operator.
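
To make the splitting idea concrete, here is a minimal sketch of a standard Strang splitting step for the semiclassical equation on a periodic grid. This is not the paper's symmetric Zassenhaus splitting on $\mathbb{R}^d$ with its special spectral bases; the potential, $\varepsilon$, grid size and time step below are arbitrary illustrative choices.

```python
import numpy as np

# Minimal Strang splitting for i*eps*psi_t = -(eps^2/2)*psi_xx + V(x)*psi
# on a periodic grid (torus). Purely illustrative: the paper works on R^d
# with a symmetric Zassenhaus splitting and non-periodic spectral bases.
eps = 1e-2                                   # semiclassical parameter (assumed value)
N, L = 1024, 2 * np.pi                       # grid size and period (assumed)
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular Fourier wavenumbers
V = 1 - np.cos(x)                            # example potential (assumption)
dt = 1e-3                                    # time step (assumed)

# Gaussian wave packet as an illustrative initial condition
psi = np.exp(-((x - np.pi) ** 2) / (2 * eps)) * np.exp(1j * x / eps)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / N))

def strang_step(psi):
    # half-step with the potential: pointwise multiplication by a phase
    psi = np.exp(-0.5j * dt * V / eps) * psi
    # full step with the free Schroedinger operator, exact in Fourier space
    psi = np.fft.ifft(np.exp(-0.5j * dt * eps * k ** 2) * np.fft.fft(psi))
    # second half-step with the potential
    return np.exp(-0.5j * dt * V / eps) * psi

for _ in range(100):
    psi = strang_step(psi)
```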

In this paper, we study the \emph{type graph}, namely a bipartite graph induced by a joint type. We investigate the maximum edge density of induced bipartite subgraphs of this graph in which the number of vertices on each side grows exponentially in the length $n$ of the type. This can be seen as an isoperimetric problem. We provide asymptotically sharp bounds for the exponent of the maximum edge density as the length of the type goes to infinity. We also study the biclique rate region of the type graph, which is defined as the set of $\left(R_{1},R_{2}\right)$ such that there exists a biclique of the type graph with respectively $e^{nR_{1}}$ and $e^{nR_{2}}$ vertices on the two sides. We provide asymptotically sharp bounds for the biclique rate region as well. We then apply our results and proof ideas to noninteractive simulation problems. We completely characterize the exponents of maximum and minimum joint probabilities when the marginal probabilities vanish exponentially fast with given exponents. These results can be seen as strong small-set expansion theorems. We extend the noninteractive simulation problem by replacing Boolean functions with arbitrary nonnegative functions, and obtain new hypercontractivity inequalities which are stronger than the common hypercontractivity inequalities. Furthermore, as an application of our results, a new outer bound for the zero-error capacity region of the binary adder channel is provided, which improves the previously best known bound, due to Austrin, Kaski, Koivisto, and Nederlof. Our proofs in this paper are based on the method of types, linear algebra, and coupling techniques.
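
A brute-force toy construction for binary alphabets may help fix the notion of a type graph (the joint type and block length below are arbitrary choices, and this is only one natural reading of the definition; the paper's results concern the exponential-scale regime, which explicit enumeration cannot reach).

```python
from itertools import product
from collections import Counter

# Toy type graph: given a joint type P_XY with denominator n, the vertices on
# the two sides are the binary sequences whose empirical distributions match
# the marginal types, and (x, y) is an edge iff the pair sequence has joint
# type exactly P_XY. Illustrative only.
n = 6
joint_type = Counter({(0, 0): 2, (0, 1): 1, (1, 0): 1, (1, 1): 2})  # assumed toy type

type_x = Counter({a: sum(c for (a2, _), c in joint_type.items() if a2 == a) for a in (0, 1)})
type_y = Counter({b: sum(c for (_, b2), c in joint_type.items() if b2 == b) for b in (0, 1)})

left = [x for x in product((0, 1), repeat=n) if Counter(x) == type_x]
right = [y for y in product((0, 1), repeat=n) if Counter(y) == type_y]
edges = [(x, y) for x in left for y in right if Counter(zip(x, y)) == joint_type]

print(len(left), len(right), len(edges))
```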

This article grew out of my Master's thesis at the Faculty of Mathematics and Information Science at Ruprecht-Karls-Universit\"at Heidelberg under the supervision of PD Dr. Andreas Ott. Following the work of G. Carlsson and A. Zomorodian on the theory of multidimensional persistence in 2007 and 2009, the main goal of this article is to give a complete classification and parameterization for the algebraic objects corresponding to the homology of a multifiltered simplicial complex. As in the work of G. Carlsson and A. Zomorodian, this classification and parameterization result is then used to show that it is only possible to obtain a discrete and complete invariant for these algebraic objects in the case of one-dimensional persistence, and that it is impossible to obtain the same in dimensions greater than one.

In this paper, we investigate the question: Given a small number of datapoints, for example N = 30, how tight can PAC-Bayes and test set bounds be made? For such small datasets, test set bounds adversely affect generalisation performance by withholding data from the training procedure. In this setting, PAC-Bayes bounds are especially attractive, due to their ability to use all the data to simultaneously learn a posterior and bound its generalisation risk. We focus on the case of i.i.d. data with a bounded loss and consider the generic PAC-Bayes theorem of Germain et al. While their theorem is known to recover many existing PAC-Bayes bounds, it is unclear what the tightest bound derivable from their framework is. For a fixed learning algorithm and dataset, we show that the tightest possible bound coincides with a bound considered by Catoni; and, in the more natural case of distributions over datasets, we establish a lower bound on the best bound achievable in expectation. Interestingly, this lower bound recovers the Chernoff test set bound if the posterior is equal to the prior. Moreover, to illustrate how tight these bounds can be, we study synthetic one-dimensional classification tasks in which it is feasible to meta-learn both the prior and the form of the bound to numerically optimise for the tightest bounds possible. We find that in this simple, controlled scenario, PAC-Bayes bounds are competitive with comparable, commonly used Chernoff test set bounds. However, the sharpest test set bounds still lead to better guarantees on the generalisation error than the PAC-Bayes bounds we consider.
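
For context, here are textbook forms of the two kinds of bound compared above; they may differ in details from the exact variants optimised in the paper. The PAC-Bayes-kl bound states that, for losses in $[0,1]$ and with probability at least $1-\delta$ over an i.i.d. sample of size $N$, simultaneously for all posteriors $\rho$,
$$\mathrm{kl}\!\left(\mathbb{E}_{\rho}[\hat{R}(h)] \,\middle\|\, \mathbb{E}_{\rho}[R(h)]\right) \le \frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\!\left(2\sqrt{N}/\delta\right)}{N},$$
while the Chernoff-style test set bound for a single hypothesis $h$ evaluated on $m$ held-out points gives, with probability at least $1-\delta$,
$$\mathrm{kl}\!\left(\hat{R}_{\mathrm{test}}(h) \,\middle\|\, R(h)\right) \le \frac{\ln(1/\delta)}{m}.$$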

In this paper we look at the problem of adjacency labeling of graphs. Given a family of undirected graphs, the problem is to determine an encoding-decoding scheme for each member of the family such that we can decode the adjacency information of any pair of vertices only from their encoded labels. Further, we want the length of each label to be short (logarithmic in $n$, the number of vertices) and the encoding-decoding scheme to be computationally efficient. We propose a simple tree-decomposition-based encoding scheme and use it to give an adjacency labeling of size $O(k \log k \log n)$ bits. Here $k$ is the clique-width of the graph family. We also extend the result to a certain family of $k$-probe graphs.
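
As a concrete illustration of what an adjacency labeling scheme is, here is a classical toy scheme for interval graphs, where each label is just the pair of interval endpoints; it is unrelated to the paper's clique-width-based construction.

```python
# Toy adjacency labeling scheme, for interval graphs rather than the bounded
# clique-width families treated in the paper: each vertex is labeled by the
# two endpoints of its interval (O(log n) bits each), and adjacency is
# decoded from the two labels alone by an intersection test.
def encode(intervals):
    # intervals: dict vertex -> (left, right) with integer endpoints
    return {v: (l, r) for v, (l, r) in intervals.items()}

def adjacent(label_u, label_v):
    (lu, ru), (lv, rv) = label_u, label_v
    return lu <= rv and lv <= ru   # intervals intersect

labels = encode({0: (1, 4), 1: (3, 6), 2: (5, 8)})
print(adjacent(labels[0], labels[1]))  # True:  [1,4] meets [3,6]
print(adjacent(labels[0], labels[2]))  # False: [1,4] misses [5,8]
```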

This tutorial reviews the main steps of the principal component analysis of a multivariate data set and its subsequent dimensional reduction based on the identified dominant principal components. The underlying computations are demonstrated and performed by means of a script written in the statistical software package R.
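
The tutorial's computations are carried out in R; the following numpy sketch mirrors the same steps (centre, decompose, inspect explained variance, project onto dominant components), with a toy random data set standing in for a real one.

```python
import numpy as np

# Minimal PCA + dimensional reduction in numpy (the tutorial itself works
# through an R script; this is an analogous illustrative sketch).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # toy multivariate data (assumption)

Xc = X - X.mean(axis=0)                  # 1. centre the variables (optionally also scale)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)    # 2. SVD of the centred data
explained = s**2 / np.sum(s**2)          # 3. variance explained per component

k = np.searchsorted(np.cumsum(explained), 0.90) + 1  # keep ~90% of the variance
scores = Xc @ Vt[:k].T                   # 4. project onto the dominant components
print(k, explained.round(3))
```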

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
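
A minimal sketch of the re-weighting described above, with $\beta = 0.999$ and illustrative per-class counts; the normalisation of the weights is one common choice rather than something fixed by the formula.

```python
import numpy as np

# Class-balanced weights from the effective number of samples,
# E_n = (1 - beta**n) / (1 - beta), as in the formula quoted above.
# beta and the per-class counts below are illustrative choices.
def class_balanced_weights(samples_per_class, beta=0.999):
    samples_per_class = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - np.power(beta, samples_per_class)) / (1.0 - beta)
    weights = 1.0 / effective_num
    # normalise so the weights sum to the number of classes (one common choice)
    return weights * len(samples_per_class) / weights.sum()

print(class_balanced_weights([5000, 500, 50, 5]))
```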

Importance sampling is one of the most widely used variance reduction strategies in Monte Carlo rendering. In this paper, we propose a novel importance sampling technique that uses a neural network to learn how to sample from a desired density represented by a set of samples. Our approach considers an existing Monte Carlo rendering algorithm as a black box. During a scene-dependent training phase, we learn to generate samples with a desired density in the primary sample space of the rendering algorithm using maximum likelihood estimation. We leverage a recent neural network architecture that was designed to represent real-valued non-volume preserving ('Real NVP') transformations in high dimensional spaces. We use Real NVP to non-linearly warp primary sample space and obtain desired densities. In addition, Real NVP efficiently computes the determinant of the Jacobian of the warp, which is required to implement the change of integration variables implied by the warp. A main advantage of our approach is that it is agnostic of underlying light transport effects, and can be combined with many existing rendering techniques by treating them as a black box. We show that our approach leads to effective variance reduction in several practical scenarios.
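
To illustrate the mechanism, here is a minimal affine coupling step in the spirit of Real NVP, showing the non-linear warp and the cheap log-determinant of its Jacobian. The "networks" below are fixed random affine maps purely for illustration, whereas the paper trains such layers by maximum likelihood on samples in primary sample space.

```python
import numpy as np

# Minimal affine coupling layer in the spirit of Real NVP: half of the
# coordinates are warped by scale/translation maps conditioned on the other
# half, and the log-determinant of the Jacobian is the sum of the log-scales.
# The tiny "networks" are fixed random affine maps for illustration only.
rng = np.random.default_rng(1)
D = 4                                        # primary sample space dimension (assumed)
Ws, bs = 0.1 * rng.normal(size=(D // 2, D // 2)), np.zeros(D // 2)
Wt, bt = 0.1 * rng.normal(size=(D // 2, D // 2)), np.zeros(D // 2)

def coupling_forward(x):
    x1, x2 = x[: D // 2], x[D // 2 :]
    s = np.tanh(Ws @ x1 + bs)                # log-scale, conditioned on x1
    t = Wt @ x1 + bt                         # translation, conditioned on x1
    y2 = x2 * np.exp(s) + t
    log_det_jacobian = np.sum(s)             # Jacobian is triangular
    return np.concatenate([x1, y2]), log_det_jacobian

y, log_det = coupling_forward(rng.uniform(size=D))
```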

We consider the task of learning the parameters of a {\em single} component of a mixture model, for the case when we are given {\em side information} about that component; we call this the "search problem" in mixture models. We would like to solve this with computational and sample complexity lower than solving the overall original problem, where one learns parameters of all components. Our main contributions are the development of a simple but general model for the notion of side information, and a corresponding simple matrix-based algorithm for solving the search problem in this general setting. We then specialize this model and algorithm to four common scenarios: Gaussian mixture models, LDA topic models, subspace clustering, and mixed linear regression. For each of these we show that if (and only if) the side information is informative, we obtain parameter estimates with greater accuracy and improved computational complexity compared to existing moment-based mixture model algorithms (e.g. tensor methods). We also illustrate several natural ways one can obtain such side information, for specific problem instances. Our experiments on real data sets (NY Times, Yelp, BSDS500) further demonstrate the practicality of our algorithms, showing significant improvements in runtime and accuracy.
