
Tensor parameters that are amortized or regularized over large tensor powers, often called "asymptotic" tensor parameters, play a central role in several areas including algebraic complexity theory (constructing fast matrix multiplication algorithms), quantum information (entanglement cost and distillable entanglement), and additive combinatorics (bounds on cap sets, sunflower-free sets, etc.). Examples are the asymptotic tensor rank, asymptotic slice rank and asymptotic subrank. Recent works (Costa-Dalai, Blatter-Draisma-Rupniewski, Christandl-Gesmundo-Zuiddam) have investigated notions of discreteness (no accumulation points) or "gaps" in the values of such tensor parameters. We prove a general discreteness theorem for asymptotic tensor parameters of order-three tensors and use this to prove that (1) over any finite field, the asymptotic subrank and the asymptotic slice rank have no accumulation points, and (2) over the complex numbers, the asymptotic slice rank has no accumulation points. Central to our approach are two new general lower bounds on the asymptotic subrank of tensors, which measures how much a tensor can be diagonalized. The first lower bound says that the asymptotic subrank of any concise three-tensor is at least the cube-root of the smallest dimension. The second lower bound says that any three-tensor that is "narrow enough" (has one dimension much smaller than the other two) has maximal asymptotic subrank. Our proofs rely on new lower bounds on the maximum rank in matrix subspaces that are obtained by slicing a three-tensor in the three different directions. We prove that for any concise tensor the product of any two such maximum ranks must be large, and as a consequence there are always two distinct directions with large max-rank.
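
To make the slicing construction concrete, here is a small numerical sketch (not from the paper) that lower-bounds the maximum rank in each of the three matrix subspaces spanned by the slices of a 3-tensor, by sampling random linear combinations of slices; the tensor, its dimensions, and the trial count are illustrative choices.

```python
import numpy as np

def max_slice_rank(T, axis, trials=200, seed=0):
    """Lower-bound the maximum rank over the matrix subspace spanned by the
    slices of T along `axis`, by sampling random linear combinations."""
    rng = np.random.default_rng(seed)
    slices = np.moveaxis(T, axis, 0)              # shape (n_axis, n1, n2)
    best = 0
    for _ in range(trials):
        coeffs = rng.standard_normal(slices.shape[0])
        M = np.tensordot(coeffs, slices, axes=1)  # random element of the subspace
        best = max(best, np.linalg.matrix_rank(M))
    return best

# A random 4 x 5 x 6 tensor is concise with probability one.
T = np.random.default_rng(1).standard_normal((4, 5, 6))
print([max_slice_rank(T, ax) for ax in range(3)])  # typically [5, 4, 4]
```

For a generic tensor all three max-ranks are maximal, so the pairwise products are trivially large; the content of the paper's bound is that the product of two such max-ranks stays large for every concise tensor.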

Related content

In this work, we propose a novel preconditioned Krylov subspace method for solving an optimal control problem of wave equations, after explicitly identifying the asymptotic spectral distribution of the involved sequence of linear coefficient matrices from the optimal control problem. Namely, we first show that the all-at-once system stemming from the wave control problem is associated with a structured coefficient matrix-sequence possessing an eigenvalue distribution. Then, based on this spectral distribution, whose symbol is explicitly identified, we develop an ideal preconditioner and two parallel-in-time preconditioners for the saddle point system composed of two block Toeplitz matrices. For the ideal preconditioner, we show that the eigenvalues of the preconditioned matrix-sequence all belong to the set $\left(-\frac{3}{2},-\frac{1}{2}\right)\cup \left(\frac{1}{2},\frac{3}{2}\right)$, well separated from zero, leading to mesh-independent convergence when the minimal residual method is employed. The proposed parallel-in-time preconditioners can be implemented efficiently using fast Fourier transforms or discrete sine transforms, and their effectiveness is shown theoretically in the sense that the eigenvalues of the preconditioned matrix-sequences cluster around $\pm 1$, which leads to rapid convergence. When these parallel-in-time preconditioners are not fast diagonalizable, we further propose modified versions that can be inverted efficiently. Several numerical examples are reported to verify the derived localization and spectral distribution results, and to demonstrate the effectiveness of the proposed preconditioners and their advantages over existing approaches in the literature.
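
The paper's preconditioners target a saddle-point system of block Toeplitz matrices; as a much simpler, generic illustration of the fast-transform machinery involved, the sketch below preconditions a symmetric positive definite Toeplitz system with a Strang circulant inverted by FFTs and solves it with MINRES. The matrix and its symbol are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, minres

n = 1024
t = np.zeros(n)
t[0], t[1] = 2.5, -1.0                 # SPD Toeplitz with symbol 2.5 - 2cos(theta)
T = toeplitz(t)

# Strang circulant: copy the central diagonals of T; diagonalized by the FFT.
c = np.concatenate(([t[0]], t[1:n//2 + 1], t[1:n//2][::-1]))
lam = np.fft.fft(c).real               # real eigenvalues (symmetric circulant)

M = LinearOperator((n, n), matvec=lambda v: np.fft.ifft(np.fft.fft(v) / lam).real)

b = np.ones(n)
x, info = minres(T, b, M=M)
print(info, np.linalg.norm(T @ x - b))  # info == 0: converged
```

Because the circulant eigenvalues are bounded away from zero here, the preconditioned spectrum is clustered and MINRES converges in a mesh-independent number of iterations, which is the behavior the paper establishes for its far more involved saddle-point setting.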

Representation learning plays a crucial role in automated feature selection, particularly in the context of high-dimensional data, where non-parametric methods often struggle. In this study, we focus on supervised learning scenarios where the pertinent information resides within a lower-dimensional linear subspace of the data, namely the multi-index model. If this subspace were known, it would greatly enhance prediction, computation, and interpretation. To address this challenge, we propose a novel method for linear feature learning with non-parametric prediction, which simultaneously estimates the prediction function and the linear subspace. Our approach employs empirical risk minimisation, augmented with a penalty on function derivatives, ensuring versatility. Leveraging the orthogonality and rotation invariance properties of Hermite polynomials, we introduce our estimator, named RegFeaL. By utilising alternating minimisation, we iteratively rotate the data to improve alignment with leading directions and accurately estimate the relevant dimension in practical settings. We establish that our method yields a consistent estimator of the prediction function with explicit rates. Additionally, we provide empirical results demonstrating the performance of RegFeaL in various experiments.
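
RegFeaL itself relies on Hermite expansions and iterative rotations; as a simpler stand-in for the underlying idea, the sketch below recovers the index space of a multi-index model from the averaged outer product of gradients of a kernel ridge fit. The kernel, bandwidth, and problem sizes are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 10, 2
X = rng.standard_normal((n, d))
W = np.linalg.qr(rng.standard_normal((d, k)))[0]          # unknown index space
y = np.sin(X @ W[:, 0]) + (X @ W[:, 1])**2 + 0.1 * rng.standard_normal(n)

def gauss_K(A, B, s=3.0):
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

alpha = np.linalg.solve(gauss_K(X, X) + 1e-2 * np.eye(n), y)  # kernel ridge fit

def grad_fhat(x, s=3.0):
    """Analytic gradient of the fitted function at x."""
    kv = gauss_K(x[None, :], X)[0]
    return ((X - x) * (alpha * kv)[:, None]).sum(0) / (s * s)

G = sum(np.outer(g, g) for g in (grad_fhat(x) for x in X[:200])) / 200
U = np.linalg.eigh(G)[1][:, ::-1][:, :k]                  # top-k directions

# Singular values near 1 indicate that estimated and true subspaces align.
print(np.linalg.svd(U.T @ W)[1])
```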

We study the kernelization complexity of structural parameterizations of the Vertex Cover problem. Here, the goal is to find a polynomial-time preprocessing algorithm that can reduce any instance $(G,k)$ of the Vertex Cover problem to an equivalent one, whose size is polynomial in the size of a pre-determined complexity parameter of $G$. A long line of previous research deals with parameterizations based on the number of vertex deletions needed to reduce $G$ to a member of a simple graph class $\mathcal{F}$, such as forests, graphs of bounded tree-depth, and graphs of maximum degree two. We set out to find the most general graph classes $\mathcal{F}$ for which Vertex Cover parameterized by the vertex-deletion distance of the input graph to $\mathcal{F}$ admits a polynomial kernelization. We give a complete characterization of the minor-closed graph families $\mathcal{F}$ for which such a kernelization exists. We introduce a new graph parameter called bridge-depth, and prove that a polynomial kernelization exists if and only if $\mathcal{F}$ has bounded bridge-depth. The proof is based on an interesting connection between bridge-depth and the size of minimal blocking sets in graphs, which are vertex sets whose removal decreases the independence number.
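
For readers less familiar with kernelization, here is the classical Buss kernel for Vertex Cover under the standard solution-size parameter $k$. It is much weaker than the structural parameterizations studied in the paper, but it shows the shape of a preprocessing routine that shrinks an instance to size polynomial in the parameter.

```python
import networkx as nx

def buss_kernel(G, k):
    """Buss kernelization for Vertex Cover(k). Vertices of degree > k must be
    in every cover of size <= k; isolated vertices never are. Returns a
    reduced equivalent instance (G', k'), or None for a NO-instance."""
    G = G.copy()
    changed = True
    while changed:
        changed = False
        for v in list(G.nodes):
            if G.degree(v) > k:
                G.remove_node(v)
                k -= 1
                changed = True
            elif G.degree(v) == 0:
                G.remove_node(v)
                changed = True
        if k < 0:
            return None
    if G.number_of_edges() > k * k:   # a YES-instance keeps at most k^2 edges
        return None
    return G, k

G = nx.star_graph(10)                 # one center of degree 10
print(buss_kernel(G, 1))              # center taken into the cover: k' = 0
```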

In this paper, we derive results about the limiting distribution of the empirical magnetization vector and the maximum likelihood (ML) estimates of the natural parameters in the tensor Curie-Weiss Potts model. Our results reveal surprising new phase transition phenomena, including the existence of a smooth curve in the interior of the parameter plane on which the magnetization vector and the ML estimates have mixture limiting distributions, the latter comprising both continuous and discrete components, and a surprising superefficiency phenomenon of the ML estimates, which stipulates an $N^{-3/4}$ rate of convergence of the estimates to some non-Gaussian distribution at certain special points of one type and an $N^{-5/6}$ rate of convergence to some other non-Gaussian distribution at another special point of a different type. The last case can arise only for one particular value of the tuple of the tensor interaction order and the number of colors. These results are then used to derive asymptotic confidence intervals for the natural parameters at all points where consistent estimation is possible.
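
As a hands-on illustration of the objects in play (not of the paper's asymptotics), the following sketch samples a mean-field Potts configuration under a $p$-tensor Curie-Weiss Hamiltonian via Metropolis and reports the empirical magnetization vector; the Hamiltonian normalization, parameter values, and chain length are assumptions made for the example.

```python
import numpy as np

def magnetization(sigma, q):
    """Empirical magnetization vector: the proportion of each colour."""
    return np.bincount(sigma, minlength=q) / len(sigma)

def metropolis(N=200, q=3, p=4, beta=0.6, h=0.1, steps=50_000, seed=0):
    """Metropolis sampler for H(sigma) = -N*(beta*sum_a m_a^p + h*m_1),
    one common convention for the p-tensor Curie-Weiss Potts model."""
    rng = np.random.default_rng(seed)
    sigma = rng.integers(q, size=N)

    def energy(s):
        m = magnetization(s, q)
        return -N * (beta * (m**p).sum() + h * m[0])

    E = energy(sigma)
    for _ in range(steps):
        i, new = rng.integers(N), rng.integers(q)
        old = sigma[i]
        sigma[i] = new
        E_new = energy(sigma)
        if E_new <= E or rng.random() < np.exp(E - E_new):
            E = E_new                  # accept the single-site recolouring
        else:
            sigma[i] = old             # reject
    return sigma

print(magnetization(metropolis(), q=3))
```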

The proximal Galerkin finite element method is a high-order, nonlinear numerical method that preserves the geometric and algebraic structure of bound constraints in infinite-dimensional function spaces. This paper introduces the proximal Galerkin method and applies it to solve free-boundary problems, enforce discrete maximum principles, and develop scalable, mesh-independent algorithms for optimal design. The paper begins with a derivation of the latent variable proximal point (LVPP) method: an unconditionally stable alternative to the interior point method. LVPP is an infinite-dimensional optimization algorithm that may be viewed as having an adaptive (Bayesian) barrier function that is updated with a new informative prior at each (outer loop) optimization iteration. One of the main benefits of this algorithm is witnessed when analyzing the classical obstacle problem. Therein, we find that the original variational inequality can be replaced by a sequence of semilinear partial differential equations (PDEs) that are readily discretized and solved with, e.g., high-order finite elements. Throughout this work, we arrive at several unexpected contributions that may be of independent interest. These include (1) a semilinear PDE we refer to as the entropic Poisson equation; (2) an algebraic/geometric connection between high-order positivity-preserving discretizations and infinite-dimensional Lie groups; and (3) a gradient-based, bound-preserving algorithm for two-field density-based topology optimization. The complete latent variable proximal Galerkin methodology combines ideas from nonlinear programming, functional analysis, tropical algebra, and differential geometry and can potentially lead to new synergies among these areas as well as within variational and numerical analysis.
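
To see the LVPP idea in its simplest form, here is a 1D finite-difference sketch (finite differences rather than the paper's finite elements, with illustrative data $f$ and obstacle $\varphi$): the substitution $u = \varphi + e^{\psi}$ enforces $u \ge \varphi$ exactly, and each outer iteration solves the semilinear system $\alpha(-u'' - f) + \psi - \psi_k = 0$ by a damped Newton method.

```python
import numpy as np

n, alpha = 199, 1.0
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = -10.0 * np.ones(n)                        # downward load
phi = -0.1 - 0.5 * (x - 0.5)**2               # obstacle below the boundary data

# Dirichlet Laplacian (-u'') as a dense matrix, for simplicity.
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

psi_k = np.zeros(n)
for outer in range(30):                       # LVPP outer loop
    psi = psi_k.copy()
    for _ in range(50):                       # Newton on the semilinear system
        u = phi + np.exp(psi)
        F = alpha * (A @ u - f) + psi - psi_k
        J = alpha * A * np.exp(psi)[None, :] + np.eye(n)
        step = np.linalg.solve(J, -F)
        psi += np.clip(step, -5.0, 5.0)       # damping guards exp() overflow
        if np.linalg.norm(step) < 1e-10:
            break
    psi_k = psi

u = phi + np.exp(psi_k)
print(np.min(u - phi))                        # positive: u stays above the obstacle
```

As the abstract notes, the variational inequality never appears: every step is a smooth PDE solve, which is what makes high-order discretizations straightforward.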

The Independent Cutset problem asks whether there is a set of vertices in a given graph that is both independent and a cutset. The problem is $\textsf{NP}$-complete even when the input graph is planar and has maximum degree five. In this paper, we first present an $\mathcal{O}^*(1.4423^{n})$-time algorithm for the problem. We also show how to compute a minimum independent cutset (if any) in the same running time. Since the property of having an independent cutset is MSO$_1$-expressible, our main results concern structural parameterizations of the problem by parameters that are not bounded by a function of the clique-width of the input. We present $\textsf{FPT}$-time algorithms for the problem under the following parameters: the dual of the maximum degree, the dual of the solution size, the size of a dominating set (where a dominating set is given as an additional input), the size of an odd cycle transversal, the distance to chordal graphs, and the distance to $P_5$-free graphs. We close by introducing the notion of $\alpha$-domination, which allows us to identify further fixed-parameter tractable and polynomial-time solvable cases.
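
For concreteness, here is a brute-force enumeration of independent cutsets (exponential time, so only for intuition; the paper's $\mathcal{O}^*(1.4423^{n})$ and FPT algorithms are the real contribution). Since subsets are enumerated by increasing size, the first yielded set is a minimum independent cutset.

```python
import networkx as nx
from itertools import combinations

def independent_cutsets(G):
    """Yield the independent cutsets of G in order of increasing size."""
    nodes = list(G.nodes)
    for r in range(1, len(nodes)):
        for S in combinations(nodes, r):
            if any(G.has_edge(u, v) for u, v in combinations(S, 2)):
                continue                       # S is not independent
            H = G.copy()
            H.remove_nodes_from(S)
            if H.number_of_nodes() > 1 and not nx.is_connected(H):
                yield set(S)                   # removing S disconnects G

print(next(independent_cutsets(nx.cycle_graph(6))))   # e.g. {0, 2}
```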

Low rank matrix approximations appear in a number of scientific computing applications. We consider the Nystr\"{o}m method for approximating a positive semidefinite matrix $A$. In the case that $A$ is very large or its entries can only be accessed once, a single-pass version may be necessary. In this work, we perform a complete rounding error analysis of the single-pass Nystr\"{o}m method in two precisions, where the computation of the expensive matrix product with $A$ is assumed to be performed in the lower of the two precisions. Our analysis gives insight into how the sketching matrix and shift should be chosen to ensure stability, implementation aspects which have been commented on in the literature but not yet rigorously justified. We further develop a heuristic to determine how to pick the lower precision, which confirms the general intuition that the lower the desired rank of the approximation, the lower the precision we can use without detriment. We also demonstrate that our mixed precision Nystr\"{o}m method can be used to inexpensively construct limited memory preconditioners for the conjugate gradient method and derive a bound on the condition number of the resulting preconditioned coefficient matrix. We present numerical experiments on a set of matrices with various spectral decays and demonstrate the utility of our mixed precision approach.
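
A minimal sketch of the setting analyzed (the shift choice below is a naive placeholder, not the paper's recommendation): the single pass over $A$, the product $A\Omega$, is done in float32 while the rest of the computation runs in float64.

```python
import numpy as np

def single_pass_nystrom(A, r, low=np.float32, seed=0):
    """Shifted single-pass Nystrom approximation of an SPD matrix A, with the
    expensive product A @ Omega performed in the lower precision `low`."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Omega = np.linalg.qr(rng.standard_normal((n, r)))[0]        # orthonormal sketch
    Y = (A.astype(low) @ Omega.astype(low)).astype(np.float64)  # one pass over A
    shift = np.finfo(low).eps * np.linalg.norm(Y)               # naive stabilizing shift
    Y_s = Y + shift * Omega
    C = np.linalg.cholesky(Omega.T @ Y_s)                       # SPD core after shifting
    B = np.linalg.solve(C, Y_s.T).T                             # B = Y_s C^{-T}
    return B @ B.T - shift * np.eye(n)

n = 300
Q = np.linalg.qr(np.random.default_rng(1).standard_normal((n, n)))[0]
A = Q @ np.diag(0.9**np.arange(n)) @ Q.T                        # fast spectral decay
A_hat = single_pass_nystrom(A, r=30)
print(np.linalg.norm(A - A_hat) / np.linalg.norm(A))            # small relative error
```

Note how the achievable accuracy is limited by both the neglected tail of the spectrum and the float32 rounding in $A\Omega$; for a low target rank the former dominates, which is the intuition the paper's precision-selection heuristic makes precise.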

In this paper, we introduce a type of tensor neural network. For the first time, we propose a numerical integration scheme for such networks and prove that its computational complexity scales polynomially in the dimension. Based on the tensor product structure, we develop an efficient numerical integration method that uses fixed quadrature points for the functions of the tensor neural network. The corresponding machine learning method for solving high-dimensional problems is also introduced. Numerical examples are provided to validate the theoretical results and the numerical algorithm.
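
The reason integration of a tensor neural network costs only polynomially in the dimension is the tensor-product structure: for $f(x)=\sum_{r}\prod_{d}\phi_{r,d}(x_d)$, the integral over $[0,1]^d$ factorizes into one-dimensional integrals. The sketch below demonstrates this with hypothetical sine factors standing in for trained one-dimensional subnetworks.

```python
import numpy as np

rng = np.random.default_rng(0)
R, d, q = 5, 10, 32                      # rank, dimension, 1D quadrature points

# Hypothetical 1D factors phi_{r,k}(t) = sin(w*t + b), in place of the
# trained one-dimensional subnetworks of a tensor neural network.
w = rng.uniform(1, 3, (R, d))
b = rng.uniform(0, 1, (R, d))

# Gauss-Legendre rule mapped to [0, 1].
t, wt = np.polynomial.legendre.leggauss(q)
t, wt = (t + 1) / 2, wt / 2

# R*d*q one-dimensional evaluations instead of q**d tensor-grid points:
I1 = (np.sin(w[..., None] * t + b[..., None]) * wt).sum(-1)   # (R, d) 1D integrals
integral = I1.prod(axis=1).sum()

exact = ((np.cos(b) - np.cos(w + b)) / w).prod(axis=1).sum()  # closed-form check
print(integral, exact)
```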

This work presents a stable POD-Galerkin based reduced-order model (ROM) for two-dimensional Rayleigh-B\'enard convection in a square geometry for three Rayleigh numbers: $10^4$ (steady state), $3\times 10^5$ (periodic), and $6\times 10^6$ (chaotic). Stability is obtained through a particular (staggered-grid) discretization of the full-order model (FOM) that leads to a ROM that is pressure-free and has skew-symmetric (energy-conserving) convective terms. This yields long-time stable solutions without requiring stabilizing mechanisms, even outside the training data range. The ROM's stability is validated for the different test cases by investigating the Nusselt and Reynolds number time series and the mean and variance of the vertical temperature profile. In general, these quantities converge to the FOM when increasing the number of modes, and turn out to be a good measure of accuracy for the non-chaotic cases. However, for the chaotic case, convergence with increasing numbers of modes is not evident, and additional measures are required to represent the effect of the smallest (neglected) scales.
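
As a generic illustration of the POD-Galerkin pipeline (on a 1D diffusion toy FOM, not the staggered-grid Rayleigh-Bénard solver of the paper): collect snapshots, extract POD modes from an SVD, Galerkin-project the operator, and time-step the reduced system.

```python
import numpy as np

n, nt, dt = 200, 1000, 1e-5
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2      # 1D diffusion operator

u = np.sin(np.pi * x) + 0.5 * np.sin(3 * np.pi * x)
snaps = [u.copy()]
for _ in range(nt):                              # explicit-Euler FOM
    u = u + dt * (A @ u)
    snaps.append(u.copy())
S = np.array(snaps).T                            # snapshot matrix, (n, nt+1)

Phi = np.linalg.svd(S, full_matrices=False)[0][:, :4]   # POD modes

A_r = Phi.T @ A @ Phi                            # Galerkin-projected operator
a = Phi.T @ S[:, 0]                              # reduced initial condition
for _ in range(nt):
    a = a + dt * (A_r @ a)
print(np.linalg.norm(Phi @ a - u) / np.linalg.norm(u))  # ROM vs FOM error
```

The energy-conserving treatment of the convective terms that gives the paper's ROM its long-time stability has no analogue in this linear toy; the sketch only shows the projection mechanics.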

In inverse problems, one attempts to infer spatially variable functions from indirect measurements of a system. To practitioners of inverse problems, the concept of "information" is familiar when discussing key questions such as which parts of the function can be inferred accurately and which cannot. For example, it is generally understood that we can identify system parameters accurately only close to detectors, or along ray paths between sources and detectors, because we have "the most information" for these places. Although referenced in many publications, the "information" that is invoked in such contexts is not a well understood and clearly defined quantity. Herein, we present a definition of information density that is based on the variance of coefficients as derived from a Bayesian reformulation of the inverse problem. We then discuss three areas in which this information density can be useful in practical algorithms for the solution of inverse problems, and illustrate the usefulness in one of these areas -- how to choose the discretization mesh for the function to be reconstructed -- using numerical experiments.
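
In the linear Gaussian special case the idea admits a short formula: for $y = Gm + \varepsilon$ with noise variance $\sigma^2$ and prior covariance $C_0$, the posterior covariance is $(G^\top G/\sigma^2 + C_0^{-1})^{-1}$, and the inverse posterior variance of each coefficient can serve as an information density. The sketch below (with a hypothetical Gaussian-footprint forward operator, an illustration in the spirit of the paper rather than its exact definition) shows that this density peaks near the detectors, matching the intuition recalled above.

```python
import numpy as np

rng = np.random.default_rng(0)
n_param, n_obs = 50, 20
xs = np.linspace(0, 1, n_param)               # coefficient locations

# Hypothetical forward operator: each detector averages the parameter field
# with a Gaussian footprint, so sensitivity decays away from the detectors.
detectors = rng.uniform(0, 1, n_obs)
G = np.exp(-0.5 * ((xs[None, :] - detectors[:, None]) / 0.05)**2)

sigma2, prior_var = 1e-4, 1.0
post_cov = np.linalg.inv(G.T @ G / sigma2 + np.eye(n_param) / prior_var)
info_density = 1.0 / np.diag(post_cov)        # high info = low posterior variance

# The most informative coefficient sits close to some detector; cells with
# high information density are natural candidates for mesh refinement.
best = xs[np.argmax(info_density)]
print(best, np.abs(detectors - best).min())   # distance to the nearest detector
```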
