
We study the query complexity of slices of Boolean functions. Among other results we show that there exists a Boolean function for which we need to query all but 7 input bits to compute its value, even if we know beforehand that the numbers of 0s and 1s in the input are equal, i.e. when our input is from the middle slice. This answers a question of Byramji. Our proof is non-constructive, but we also propose a concrete candidate function that might have the above property. Our results are related to certain natural discrepancy-type questions that -- somewhat surprisingly -- have not been studied before.
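As a rough illustration of the quantity being studied, the following sketch computes the deterministic query complexity of a small Boolean function when the input is promised to lie in the middle slice, versus over the whole cube; the function f and the parameters are toy choices, not the candidate function proposed in the paper.

from itertools import product

def query_complexity(f, n, inputs):
    # Depth of an optimal adaptive decision tree that determines f
    # on every input in `inputs` (a promise set of 0/1 tuples).
    def solve(candidates, queried):
        if len({f(x) for x in candidates}) <= 1:
            return 0  # f is already determined on the surviving candidates
        best = n
        for i in range(n):
            if i in queried:
                continue
            worst = 0
            for b in (0, 1):
                sub = [x for x in candidates if x[i] == b]
                if sub:
                    worst = max(worst, solve(sub, queried | {i}))
            best = min(best, 1 + worst)
        return best
    return solve(list(inputs), frozenset())

n = 6
middle_slice = [x for x in product((0, 1), repeat=n) if sum(x) == n // 2]
f = lambda x: x[0] ^ x[1] ^ (x[2] & x[3])   # toy function, for illustration only
print("queries needed on the middle slice:", query_complexity(f, n, middle_slice))
print("queries needed on the whole cube:  ",
      query_complexity(f, n, list(product((0, 1), repeat=n))))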

Related content

We study the connection between the concavity properties of a measure $\nu$ and the convexity properties of the associated relative entropy $D(\cdot \Vert \nu)$ on Wasserstein space. As a corollary we prove a new dimensional Brunn-Minkowski inequality for centered star-shaped bodies, when the measure $\nu$ is log-concave with a $p$-homogeneous potential (such as the Gaussian measure). Our method allows us to go beyond the usual convexity assumption on the sets that appears essential for the standard differential-geometric technique in this area. We then take a finer look at the convexity properties of the Gaussian relative entropy, which yields new functional inequalities. First we obtain curvature and dimensional reinforcements to Otto--Villani's ``HWI'' inequality in Gauss space, when restricted to even strongly log-concave measures. As corollaries, we obtain improved versions of Gross' logarithmic Sobolev inequality and Talagrand's transportation cost inequality in this setting.
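For context, the classical forms of the inequalities being refined (for the standard Gaussian measure $\gamma$, with $H(\cdot \Vert \gamma)$ the relative entropy, $I(\cdot \Vert \gamma)$ the relative Fisher information and $W_2$ the quadratic Wasserstein distance; the paper's statements add curvature and dimensional terms under the evenness and strong log-concavity restrictions) read
\begin{align*}
  \text{(HWI)} \qquad & H(\mu \Vert \gamma) \le W_2(\mu,\gamma)\,\sqrt{I(\mu \Vert \gamma)} - \tfrac{1}{2}\, W_2^2(\mu,\gamma),\\
  \text{(log-Sobolev)} \qquad & H(\mu \Vert \gamma) \le \tfrac{1}{2}\, I(\mu \Vert \gamma),\\
  \text{(Talagrand)} \qquad & W_2^2(\mu,\gamma) \le 2\, H(\mu \Vert \gamma).
\end{align*}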

Weights are geometrical degrees of freedom that allow one to generalise Lagrangian finite elements. They are defined through integrals over specific supports, are well understood in terms of differential forms and integration, and lie within the framework of finite element exterior calculus. In this work we exploit this formalism with the aim of identifying supports that are appealing for finite element approximation. To do so, we study the related parametric matrix-sequences, with the matrix order tending to infinity as the mesh size tends to zero. We describe the conditioning and the global spectral behavior in terms of the standard Toeplitz machinery and GLT theory, leading to the identification of the optimal choices of weights. Moreover, we propose and test ad hoc preconditioners, depending on the discretization parameters, in connection with the conjugate gradient method. The model problem we consider is a one-dimensional Laplacian, with both constant and non-constant coefficients. Numerical visualizations and experimental tests are reported and critically discussed, demonstrating the advantages of weight-induced bases over standard Lagrangian ones. Open problems and future steps are listed in the concluding section, especially regarding the multidimensional case.
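As a minimal sketch of the Toeplitz structure and mesh-dependent conditioning alluded to above (for the standard one-dimensional Laplacian with linear Lagrangian elements on a uniform mesh; the weight-induced bases and the ad hoc preconditioners of the paper are not reproduced, a plain Jacobi preconditioner stands in for them):

import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Stiffness matrix of the 1D Laplacian with linear elements on a uniform mesh:
# the Toeplitz matrix (1/h) * tridiag(-1, 2, -1).
for m in (16, 32, 64, 128):
    h = 1.0 / (m + 1)
    col = np.zeros(m)
    col[0], col[1] = 2.0, -1.0
    A = toeplitz(col) / h
    print(f"m = {m:4d}   cond(A) = {np.linalg.cond(A):12.1f}")   # grows like h^(-2)

# Conjugate gradient on the finest mesh with a simple diagonal (Jacobi)
# preconditioner, standing in for the preconditioners studied in the paper.
m = 128
h = 1.0 / (m + 1)
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m)) / h
b = h * np.ones(m)
M = diags(1.0 / A.diagonal())
x, info = cg(A, b, M=M)
print("CG converged" if info == 0 else f"CG returned info = {info}")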

We construct a Convolution Quadrature (CQ) scheme for the quasilinear subdiffusion equation and supply it with a fast and oblivious implementation. In particular, we find a condition for the CQ to be admissible and discretize the spatial part of the equation with the Finite Element Method. We prove the unconditional stability and convergence of the scheme and derive a bound on the error. As a side result, we also obtain a discrete Gronwall inequality for the CQ, which is a crucial ingredient of our convergence proof based on the energy method. The paper concludes with numerical examples verifying convergence and the reduction in computation time obtained by using the fast and oblivious quadrature.
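A minimal sketch of the simplest convolution quadrature (the backward Euler based one, for a fractional derivative of order alpha in (0,1) with zero initial data), showing how the weights arise from a generating function; the admissibility condition and the quasilinear scheme analysed in the paper are more involved and are not reproduced here.

import numpy as np
from math import gamma

def cq_weights(alpha, N):
    # Backward-Euler CQ weights for D^alpha: coefficients of (1 - z)^alpha,
    # i.e. w_j = (-1)^j * binom(alpha, j), computed by a stable recurrence.
    w = np.empty(N + 1)
    w[0] = 1.0
    for j in range(1, N + 1):
        w[j] = w[j - 1] * (j - 1 - alpha) / j
    return w

alpha, T, N = 0.5, 1.0, 200
tau = T / N
t = tau * np.arange(N + 1)
u = t.copy()                                   # test function u(t) = t, u(0) = 0
w = cq_weights(alpha, N)

# CQ approximation of the fractional derivative at t_n:
# (D^alpha u)(t_n) ~ tau^(-alpha) * sum_{j=0..n} w_j * u(t_{n-j})
D_num = np.array([tau**(-alpha) * np.dot(w[:n + 1], u[n::-1]) for n in range(N + 1)])
D_exact = t**(1 - alpha) / gamma(2 - alpha)    # exact fractional derivative of t

print("max error:", np.abs(D_num - D_exact).max())   # first order away from t = 0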

We propose three test criteria, each of which is appropriate for testing, respectively, the equivalence hypotheses of symmetry, of homogeneity, and of independence with multivariate data. All quantities share the common feature of involving weighted-type distances between characteristic functions and are convenient from the computational point of view if the weight function is chosen properly. The asymptotic behavior of the tests under the null hypothesis is investigated, and numerical studies are conducted in order to examine the performance of the criteria in finite samples.
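To illustrate the "weighted distance between characteristic functions" idea in the homogeneity case, here is a sketch with a Gaussian weight, for which the integral collapses to a closed-form kernel sum; the actual weights and statistics used in the paper may differ.

import numpy as np

def cf_homogeneity_statistic(X, Y, gam=1.0):
    # Weighted L2 distance between the empirical characteristic functions of two
    # samples, with weight w(t) = exp(-gam * ||t||^2); with this weight the
    # integral has the closed form below (a Gaussian-kernel V-statistic).
    d = X.shape[1]
    K = lambda A, B: np.exp(-np.sum((A[:, None, :] - B[None, :, :])**2, axis=2) / (4.0 * gam))
    const = (np.pi / gam) ** (d / 2)
    return const * (K(X, X).mean() + K(Y, Y).mean() - 2.0 * K(X, Y).mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
Y = rng.normal(size=(100, 2))               # same law: statistic close to zero
Z = rng.normal(loc=1.0, size=(100, 2))      # shifted law: noticeably larger value
print(cf_homogeneity_statistic(X, Y), cf_homogeneity_statistic(X, Z))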

We consider injective first-order interpretations that input and output trees of bounded height. The corresponding functions have polynomial output size, since a first-order interpretation can use a k-tuple of input nodes to represent a single output node. We prove that the equivalence problem for such functions is decidable, i.e. given two such interpretations, one can decide whether, for every input tree, the two output trees are isomorphic. We also give a calculus of typed functions and combinators which derives exactly injective first-order interpretations for unordered trees of bounded height. The calculus is based on a type system, where the type constructors are products, coproducts and a monad of multisets. Thanks to our results about tree-to-tree interpretations, the equivalence problem is decidable for this calculus. As an application, we show that the equivalence problem is decidable for first-order interpretations between classes of graphs that have bounded tree-depth. In all cases studied in this paper, first-order logic and MSO have the same expressive power, and hence all results apply also to MSO interpretations.
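As a side note on the notion of isomorphism used for unordered output trees, equality of canonical forms (the classical bottom-up encoding) decides it; a minimal sketch, independent of the interpretation machinery itself:

def canon(tree):
    # Canonical form of an unordered labelled tree given as (label, children):
    # two such trees are isomorphic iff their canonical forms are equal.
    label, children = tree
    return (label, tuple(sorted(canon(c) for c in children)))

t1 = ("a", [("b", []), ("c", [("d", [])])])
t2 = ("a", [("c", [("d", [])]), ("b", [])])   # same tree, children permuted
print(canon(t1) == canon(t2))                 # True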

In quantum mechanics, the Rosen-Zener model represents a two-level quantum system. Its generalization to multiple degenerate sets of states leads to a larger non-autonomous linear system of ordinary differential equations (ODEs). We propose a new method for computing the solution operator of this system of ODEs. The new method is based on a recently introduced expression of the solution in terms of an infinite matrix equation, which can be efficiently approximated by combining truncation, fixed-point iterations, and low-rank approximation. This expression is made possible by the so-called $\star$-product approach for linear ODEs. In the numerical experiments, the new method's computing time scales linearly with the model's size. We provide a first partial explanation of this linear behavior.
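A minimal sketch of the underlying two-level Rosen-Zener dynamics (constant detuning and a sech-shaped coupling; conventions vary), with the solution operator approximated by a plain exponential-midpoint time stepper; this is only a reference computation, not the $\star$-product-based matrix-equation method of the paper.

import numpy as np
from scipy.linalg import expm

# Two-level Rosen-Zener Hamiltonian: constant detuning Delta, sech-shaped coupling.
Delta, Omega, T = 0.5, 1.0, 1.0
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
H = lambda t: 0.5 * Delta * sz + 0.5 * Omega / np.cosh(t / T) * sx

# Solution operator U(t1, t0) via the exponential midpoint rule
# U <- exp(-i h H(t + h/2)) U, second-order accurate for smooth H.
t0, t1, steps = -20.0, 20.0, 4000
h = (t1 - t0) / steps
U = np.eye(2, dtype=complex)
for k in range(steps):
    tk = t0 + k * h
    U = expm(-1j * h * H(tk + 0.5 * h)) @ U

psi = U @ np.array([1.0, 0.0], dtype=complex)   # start in the lower state
print("transition probability:", abs(psi[1]) ** 2)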

Although the vectorization operation is well known and well defined, it applies only to (two-dimensional) matrices, and its inverse is not as widely popularized. This work generalizes vectorization to higher-dimensional arrays and gives a mathematical definition of its inverse operation.
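A small numpy sketch of the idea, assuming the usual column-major (column-stacking) convention for vec; the paper's precise definitions may differ. The point is that the inverse only requires remembering the original shape, and the same reshape works verbatim for arrays of any dimension.

import numpy as np

def vec(A):
    # Stack the entries of an array into a vector in column-major (Fortran)
    # order; for a matrix this is the classical vec(A) that stacks columns.
    return A.reshape(-1, order="F")

def unvec(v, shape):
    # Inverse of vec: reshape back to the original shape with the same ordering.
    return v.reshape(shape, order="F")

A = np.arange(6).reshape(2, 3)        # a matrix (the classical 2-D case)
T = np.arange(24).reshape(2, 3, 4)    # a 3-D array (the generalization)
assert np.array_equal(unvec(vec(A), A.shape), A)
assert np.array_equal(unvec(vec(T), T.shape), T)
print("vec / unvec round-trips succeed")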

We study the design of embeddings into Euclidean space with outliers. Given a metric space $(X,d)$ and an integer $k$, the goal is to embed all but $k$ points in $X$ (called the ``outliers'') into $\ell_2$ with the smallest possible distortion $c$. Finding the optimal distortion $c$ for a given outlier set size $k$, or, alternately, the smallest $k$ for a given target distortion $c$, are both NP-hard problems. In fact, it is UGC-hard to approximate $k$ to within a factor smaller than $2$ even when the metric sans outliers is isometrically embeddable into $\ell_2$. We consider bi-criteria approximations. Our main result is a polynomial-time algorithm that approximates the outlier set size to within an $O(\log^2 k)$ factor and the distortion to within a constant factor. The main technical component in our result is an approach for constructing Lipschitz extensions of embeddings into Banach spaces (such as $\ell_p$ spaces). We consider a stronger version of Lipschitz extension that we call a \textit{nested composition of embeddings}: given a low-distortion embedding of a subset $S$ of the metric space $X$, our goal is to extend this embedding to all of $X$ such that the distortion over $S$ is preserved, whereas the distortion over the remaining pairs of points in $X$ is bounded by a function of the size of $X\setminus S$. Prior work on Lipschitz extension considers settings where the size of $X$ is potentially much larger than that of $S$ and the expansion bounds depend on $|S|$. In our setting, the set $S$ is nearly all of $X$ and the remaining set $X\setminus S$, a.k.a. the outliers, is small. We achieve an expansion bound that is logarithmic in $|X\setminus S|$.
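For concreteness, the distortion being traded off against the outlier budget can be computed as follows for a finite metric and a candidate embedding into $\ell_2$, before and after removing an outlier; this is only a toy illustration, not the bi-criteria algorithm of the paper.

import numpy as np
from itertools import combinations

def distortion(D, f, keep):
    # Distortion of the map i -> f[i] (rows of f are points in l_2), restricted
    # to the index set `keep`, for the metric given by the distance matrix D:
    # (largest expansion) * (largest contraction), a scale-invariant quantity.
    expansion = contraction = 0.0
    for i, j in combinations(keep, 2):
        ratio = np.linalg.norm(f[i] - f[j]) / D[i, j]
        expansion = max(expansion, ratio)
        contraction = max(contraction, 1.0 / ratio)
    return expansion * contraction

# Toy example: a 4-point metric whose first three points form a unit triangle.
D = np.array([[0.0, 1.0, 1.0, 3.0],
              [1.0, 0.0, 1.0, 3.0],
              [1.0, 1.0, 0.0, 3.0],
              [3.0, 3.0, 3.0, 0.0]])
f = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2], [10.0, 10.0]])
print(distortion(D, f, keep=[0, 1, 2, 3]))   # with the outlier: distortion > 1
print(distortion(D, f, keep=[0, 1, 2]))      # outlier removed: distortion 1.0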

Sparse polynomial approximation has become indispensable for approximating smooth, high- or infinite-dimensional functions from limited samples. This is a key task in computational science and engineering, e.g., surrogate modelling in uncertainty quantification where the function is the solution map of a parametric or stochastic differential equation (DE). Yet, sparse polynomial approximation lacks a complete theory. On the one hand, there is a well-developed theory of best $s$-term polynomial approximation, which asserts exponential or algebraic rates of convergence for holomorphic functions. On the other, there are increasingly mature methods such as (weighted) $\ell^1$-minimization for computing such approximations. While the sample complexity of these methods has been analyzed with compressed sensing, whether they achieve best $s$-term approximation rates is not fully understood. Furthermore, these methods are not algorithms per se, as they involve exact minimizers of nonlinear optimization problems. This paper closes these gaps. Specifically, we consider the following question: are there robust, efficient algorithms for computing approximations to finite- or infinite-dimensional, holomorphic and Hilbert-valued functions from limited samples that achieve best $s$-term rates? We answer this affirmatively by introducing algorithms and theoretical guarantees that assert exponential or algebraic rates of convergence, along with robustness to sampling, algorithmic, and physical discretization errors. We tackle both scalar- and Hilbert-valued functions, this being key to parametric or stochastic DEs. Our results involve significant developments of existing techniques, including a novel restarted primal-dual iteration for solving weighted $\ell^1$-minimization problems in Hilbert spaces. Our theory is supplemented by numerical experiments demonstrating the efficacy of these algorithms.
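As a hedged sketch of the computational kernel involved, here is a plain proximal-gradient (ISTA-style) loop for a weighted $\ell^1$-regularized least-squares surrogate on a toy compressed-sensing problem; the restarted primal-dual iteration for Hilbert-valued problems developed in the paper is not reproduced here.

import numpy as np

def weighted_l1_ista(A, y, w, lam=1e-3, steps=2000):
    # Minimize 0.5 * ||A x - y||^2 + lam * sum_i w_i |x_i| by proximal gradient.
    # The prox of the weighted l1 term is coordinate-wise soft-thresholding
    # with threshold lam * w_i / L, where 1/L is the gradient step size.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x

rng = np.random.default_rng(1)
m, N, s = 60, 200, 5
A = rng.normal(size=(m, N)) / np.sqrt(m)
x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.normal(size=s)
y = A @ x_true + 1e-4 * rng.normal(size=m)
w = np.ones(N)                      # uniform weights for this toy example
x_hat = weighted_l1_ista(A, y, w)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))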

Most state-of-the-art machine learning techniques revolve around the optimisation of loss functions. Defining appropriate loss functions is therefore critical to successfully solving problems in this field. We present a survey of the most commonly used loss functions for a wide range of applications, divided into classification, regression, ranking, sample generation and energy-based modelling. Overall, we introduce 33 different loss functions and organise them into an intuitive taxonomy. Each loss function is given a theoretical backing, and we describe where it is best used. This survey aims to provide a reference on the most essential loss functions for both beginner and advanced machine learning practitioners.
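For instance, three of the most common losses from the classification and regression families look as follows in a minimal numpy sketch (the survey itself covers 33 such functions and when each is appropriate):

import numpy as np

def mse(y_true, y_pred):
    # Mean squared error: the canonical regression loss.
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    # Binary cross-entropy for predicted probabilities of the positive class.
    p = np.clip(p_pred, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))

def hinge(y_pm1, scores):
    # Hinge loss for labels in {-1, +1} and real-valued scores (margins).
    return np.mean(np.maximum(0.0, 1.0 - y_pm1 * scores))

y = np.array([1.0, 0.0, 1.0, 1.0])
p = np.array([0.9, 0.2, 0.6, 0.8])
print(mse(y, p), binary_cross_entropy(y, p), hinge(2 * y - 1, 2 * p - 1))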
