
The class of Basic Feasible Functionals (BFF) is the second-order counterpart of the class of first-order functions computable in polynomial time. We present several implicit characterizations of BFF based on a typed programming language of terms. These terms may perform calls to non-recursive imperative procedures. The type discipline has two layers: the terms follow a standard simply-typed discipline and the procedures follow a standard tier-based type discipline. BFF consists exactly of the second-order functionals that are computed by typable and terminating programs. The completeness of this characterization surprisingly still holds in the absence of lambda-abstraction. Moreover, the termination requirement can be specified as a completeness-preserving instance that can be decided in time quadratic in the size of the program. As typing is decidable in polynomial time, we obtain the first tractable (i.e., decidable in polynomial time), sound, complete, and implicit characterization of BFF, thus solving a problem that has remained open for more than 20 years.

Related content

We present a manifold-based autoencoder method for learning nonlinear dynamics in time, notably partial differential equations (PDEs), in which the manifold latent space evolves according to Ricci flow. This is accomplished by simulating Ricci flow in a physics-informed setting and matching manifold quantities so that Ricci flow is empirically achieved. With our methodology, the manifold is learned as part of the training procedure, so ideal geometries may be discerned, while the evolution simultaneously induces a more accommodating latent representation than static methods. We demonstrate our method on a range of numerical experiments consisting of PDEs that encompass desirable characteristics such as periodicity and randomness, reporting errors in both in-distribution and extrapolation scenarios.
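
A minimal sketch of the physics-informed Ricci-flow penalty idea, under the simplifying assumption (ours, not the paper's) of a two-dimensional latent manifold with a conformal metric $g = e^{2\varphi}(dx^2 + dy^2)$, for which Ricci flow reduces to the scalar equation $\partial_t \varphi = e^{-2\varphi}\Delta\varphi$. The helper names below are illustrative; such a residual could be added to an autoencoder's reconstruction loss so that the learned latent metric empirically follows Ricci flow:

# Minimal sketch (not the paper's implementation): in 2D, a conformal latent
# metric g = exp(2*phi) * I evolves by Ricci flow iff  d(phi)/dt = exp(-2*phi) * Laplacian(phi).
# A physics-informed penalty on this residual can be added to an autoencoder loss.
import numpy as np

def ricci_flow_residual(phi, dt, dx):
    """phi: array of shape (T, N, N), the conformal factor on a space-time grid.
    Returns the discretized residual d(phi)/dt - exp(-2*phi) * Laplacian(phi)
    on interior grid points."""
    # forward difference in time, central differences in space
    dphi_dt = (phi[1:, 1:-1, 1:-1] - phi[:-1, 1:-1, 1:-1]) / dt
    lap = (
        phi[:-1, 2:, 1:-1] + phi[:-1, :-2, 1:-1]
        + phi[:-1, 1:-1, 2:] + phi[:-1, 1:-1, :-2]
        - 4.0 * phi[:-1, 1:-1, 1:-1]
    ) / dx**2
    return dphi_dt - np.exp(-2.0 * phi[:-1, 1:-1, 1:-1]) * lap

def ricci_penalty(phi, dt, dx):
    # Mean-square residual; a term like this would be weighted into the total loss.
    return np.mean(ricci_flow_residual(phi, dt, dx) ** 2)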

Neural operator architectures employ neural networks to approximate operators mapping between Banach spaces of functions; they may be used to accelerate model evaluations via emulation, or to discover models from data. Consequently, the methodology has received increasing attention over recent years, giving rise to the rapidly growing field of operator learning. The first contribution of this paper is to prove that for general classes of operators which are characterized only by their $C^r$- or Lipschitz-regularity, operator learning suffers from a curse of dimensionality, defined precisely here in terms of representations of the infinite-dimensional input and output function spaces. The result is applicable to a wide variety of existing neural operators, including PCA-Net, DeepONet and the FNO. The second contribution of the paper is to prove that the general curse of dimensionality can be overcome for solution operators defined by the Hamilton-Jacobi equation; this is achieved by leveraging additional structure in the underlying solution operator, going beyond regularity. To this end, a novel neural operator architecture is introduced, termed HJ-Net, which explicitly takes into account characteristic information of the underlying Hamiltonian system. Error and complexity estimates are derived for HJ-Net which show that this architecture can provably beat the curse of dimensionality related to the infinite-dimensional input and output function spaces.
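
The following sketch is not HJ-Net itself; it only illustrates the characteristic information that the abstract says the architecture exploits. For $u_t + H(x, \nabla u) = 0$, the classical characteristic system is $\dot x = \partial H/\partial p$, $\dot p = -\partial H/\partial x$, $\dot u = p\cdot\partial H/\partial p - H$; below it is integrated for a sample Hamiltonian $H(x,p) = p^2/2 + \cos(x)$ (our choice, purely illustrative):

# Minimal sketch (hypothetical, not the HJ-Net code): integrate the characteristic
# ODEs of u_t + H(x, grad u) = 0, mapping initial data (x0, u0(x0), u0'(x0)) to time T.
import numpy as np
from scipy.integrate import solve_ivp

def H(x, p):
    return 0.5 * p**2 + np.cos(x)

def characteristic_rhs(t, state):
    x, p, u = state
    dH_dp = p            # partial H / partial p
    dH_dx = -np.sin(x)   # partial H / partial x
    return [dH_dp, -dH_dx, p * dH_dp - H(x, p)]

def flow_map(x0, u0, du0, T=1.0):
    """Follow the characteristic starting at (x0, p0 = u0'(x0), u0(x0)) up to time T."""
    sol = solve_ivp(characteristic_rhs, (0.0, T), [x0, du0(x0), u0(x0)], rtol=1e-8)
    xT, pT, uT = sol.y[:, -1]
    return xT, pT, uT    # position, gradient, and value of u along the characteristic

# Example: quadratic initial condition u0(x) = x^2 / 2.
print(flow_map(0.3, lambda x: 0.5 * x**2, lambda x: x))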

We describe a `discretize-then-relax' strategy to globally minimize integral functionals over functions $u$ in a Sobolev space subject to Dirichlet boundary conditions. The strategy applies whenever the integral functional depends polynomially on $u$ and its derivatives, even if it is nonconvex. The `discretize' step uses a bounded finite element scheme to approximate the integral minimization problem with a convergent hierarchy of polynomial optimization problems over a compact feasible set, indexed by the decreasing size $h$ of the finite element mesh. The `relax' step employs sparse moment-sum-of-squares relaxations to approximate each polynomial optimization problem with a hierarchy of convex semidefinite programs, indexed by an increasing relaxation order $\omega$. We prove that, as $\omega\to\infty$ and $h\to 0$, solutions of such semidefinite programs provide approximate minimizers that converge in a suitable sense (including in certain $L^p$ norms) to the global minimizer of the original integral functional if it is unique. We also report computational experiments showing that our numerical strategy works well even when technical conditions required by our theoretical analysis are not satisfied.
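
As a hedged illustration of the 'discretize' step only (not the authors' code), consider the nonconvex Bolza-type functional $F(u)=\int_0^1 \big((u'(x)^2-1)^2 + u(x)^2\big)\,dx$ with $u(0)=u(1)=0$. With piecewise-linear (P1) elements and midpoint quadrature, the discretized objective is a polynomial in the interior nodal values; the 'relax' step would hand this polynomial optimization problem to a sparse moment-sum-of-squares hierarchy, which is not reproduced here (the local solver at the end is only a placeholder):

# Minimal sketch (illustrative only): P1 finite element discretization of a
# nonconvex integral functional, yielding a polynomial objective in the nodal values.
import numpy as np
from scipy.optimize import minimize

def discretized_functional(n):
    h = 1.0 / n
    def F(nodes_interior):
        u = np.concatenate(([0.0], nodes_interior, [0.0]))   # Dirichlet boundary conditions
        total = 0.0
        for k in range(n):
            du = (u[k + 1] - u[k]) / h                   # constant slope on element k
            umid = 0.5 * (u[k] + u[k + 1])               # midpoint value on element k
            total += h * ((du**2 - 1.0)**2 + umid**2)    # midpoint quadrature
        return total
    return F

# Local minimization only (placeholder); in the paper's strategy the moment-SOS
# relaxation is what provides approximate *global* minimizers.
n = 20
F = discretized_functional(n)
res = minimize(F, np.zeros(n - 1))
print(res.fun)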

Program synthesis with Genetic Programming searches for a correct program that satisfies the input specification, which is usually provided as input-output examples. One particular challenge is how to effectively handle loops and recursion while avoiding programs that never terminate. A helpful abstraction that can alleviate this problem is the use of Recursion Schemes, which generalize the combination of data production and consumption. Recursion Schemes are very powerful as they allow the construction of programs that can summarize data, create sequences, and perform advanced calculations. The main advantage of writing a program using Recursion Schemes is that the programs are composed of well-defined templates with only a few parts that need to be synthesized. In this paper we make an initial study of the benefits of using program synthesis with fold and unfold templates, and outline some preliminary experimental results. To highlight the advantages and disadvantages of this approach, we manually solved the entire GPSB benchmark using recursion schemes, highlighting the parts that should be evolved compared to alternative implementations. We noticed that, once the choice of recursion scheme is made, the synthesis process can be simplified, as each of the missing parts of the template is reduced to a simpler function, further constrained by its own input and output types.
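
A small illustration (ours, not the paper's code) of why such templates constrain the search: with generic fold and unfold combinators, only the typed "holes" need to be synthesized. The Python names below are hypothetical:

# Minimal sketch: fold/unfold templates where a synthesizer would only fill the
# small holes (emit, next_seed, done, combine, base), each constrained by its type.
from functools import reduce

def unfold(seed, emit, next_seed, done):
    """Anamorphism: produce a list from a seed."""
    out = []
    while not done(seed):
        out.append(emit(seed))
        seed = next_seed(seed)
    return out

def fold(xs, combine, base):
    """Catamorphism: consume a list into a summary value."""
    return reduce(combine, xs, base)

# Example instantiation of the template: sum of squares of 1..n.
# The lambdas below are the only parts a synthesizer would need to evolve.
def sum_of_squares(n):
    xs = unfold(1, emit=lambda k: k * k, next_seed=lambda k: k + 1, done=lambda k: k > n)
    return fold(xs, combine=lambda acc, x: acc + x, base=0)

print(sum_of_squares(5))  # 55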

Data depth functions have been intensively studied for normed vector spaces. However, a discussion of depth functions for data where one specific data structure cannot be presupposed is lacking. In this article, we introduce a notion of depth functions for data types that are not given in standard statistical data formats and therefore lack one specific data structure; we call such data non-standard data. To achieve this, we represent the data via formal concept analysis, which leads to a unified data representation. Besides introducing depth functions for non-standard data using formal concept analysis, we provide a systematic basis by introducing structural properties. Furthermore, we embed the generalised Tukey depth into our concept of data depth and analyse it using the introduced structural properties. Thus, this article provides a mathematical formalisation of centrality and outlyingness for non-standard data and thereby broadens the range of spaces in which centrality can be discussed. In particular, it gives a basis for defining further depth functions and statistical inference methods for non-standard data.
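
For context, the sketch below approximates the classical Tukey (halfspace) depth in $\mathbb{R}^d$, i.e. the notion of centrality that the article generalizes to non-standard data; the FCA-based depth itself is not reproduced here:

# Minimal sketch (classical setting only): approximate halfspace depth of a point,
# i.e. the smallest fraction of data points in any closed halfspace containing it,
# estimated by sampling random directions.
import numpy as np

def tukey_depth(x, data, n_directions=2000, rng=np.random.default_rng(0)):
    d = data.shape[1]
    u = rng.standard_normal((n_directions, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)     # random unit directions
    proj_data = data @ u.T                            # shape (n_points, n_directions)
    proj_x = u @ x                                    # shape (n_directions,)
    frac = np.mean(proj_data >= proj_x, axis=0)       # mass of the halfspace {y : u.y >= u.x}
    return float(frac.min())

rng = np.random.default_rng(1)
data = rng.standard_normal((500, 2))
# A central point has high depth; an outlying point has depth close to 0.
print(tukey_depth(np.zeros(2), data), tukey_depth(np.array([3.0, 3.0]), data))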

A family of symmetric matrices $A_1,\ldots, A_d$ is SDC (simultaneously diagonalizable by congruence) if there is an invertible matrix $X$ such that every $X^T A_k X$ is diagonal. In this work, a novel randomized SDC (RSDC) algorithm is proposed that reduces SDC to a generalized eigenvalue problem by considering two (random) linear combinations of the family. We establish exact recovery: RSDC achieves diagonalization with probability $1$ if the family is exactly SDC. Under a mild regularity assumption, robust recovery is also established: given a family that is $\epsilon$-close to SDC, RSDC diagonalizes the family, with high probability, up to an error of norm $\mathcal{O}(\epsilon)$. Under a positive definiteness assumption, which often holds in applications, stronger results are established, including a bound on the condition number of the transformation matrix. For practical use, we suggest combining RSDC with an optimization algorithm. The performance of the resulting method is verified on synthetic data, image separation, and EEG analysis tasks. It turns out that our newly developed method outperforms existing optimization-based methods in terms of efficiency while achieving a comparable level of accuracy.
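
A minimal sketch of the core RSDC step as we read it from the abstract (not the authors' implementation): form two random linear combinations of the family, solve the generalized eigenvalue problem between them, and use the eigenvector matrix as the congruence transformation:

# Minimal sketch: reduce SDC to a generalized eigenvalue problem via two random
# linear combinations of the family.
import numpy as np
from scipy.linalg import eig

def rsdc(As, rng=np.random.default_rng(0)):
    """As: list of symmetric n x n matrices. Returns X with X.T @ A @ X ~ diagonal."""
    alpha, beta = rng.standard_normal(len(As)), rng.standard_normal(len(As))
    B = sum(a * A for a, A in zip(alpha, As))
    C = sum(b * A for b, A in zip(beta, As))
    _, X = eig(B, C)              # generalized eigenproblem  B x = lambda C x
    return np.real(X)             # real up to round-off for an exactly SDC family

# Build an exactly SDC family: A_k = Y @ D_k @ Y.T with D_k diagonal, Y invertible.
rng = np.random.default_rng(1)
n, d = 5, 3
Y = rng.standard_normal((n, n))
As = [Y @ np.diag(rng.standard_normal(n)) @ Y.T for _ in range(d)]

X = rsdc(As)
off = max(np.abs(X.T @ A @ X - np.diag(np.diag(X.T @ A @ X))).max() for A in As)
print("max off-diagonal magnitude:", off)   # round-off level for an exactly SDC family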

We propose a high order numerical scheme for time-dependent first order Hamilton--Jacobi--Bellman equations. In particular we propose to combine a semi-Lagrangian scheme with a Central Weighted Non-Oscillatory reconstruction. We prove a convergence result in the case of state- and time-independent Hamiltonians. Numerical simulations are presented in space dimensions one and two, also for more general state- and time-dependent Hamiltonians, demonstrating superior performance in terms of CPU time gain compared with a semi-Lagrangian scheme coupled with Weighted Non-Oscillatory reconstructions.
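
A hedged illustration of one semi-Lagrangian step for the model equation $u_t + |u_x| = 0$, whose Hopf-Lax representation gives $u(x, t+\Delta t) = \min_{|a|\le 1} u(x - a\Delta t, t)$. The simple linear interpolation used below is a stand-in for the Central WENO reconstruction that the paper couples with the scheme to reach high order:

# Minimal sketch: one semi-Lagrangian step on a periodic grid, with linear
# interpolation at the foot of each characteristic (CWENO would replace np.interp).
import numpy as np

def semi_lagrangian_step(x, u, dt, n_controls=11):
    controls = np.linspace(-1.0, 1.0, n_controls)
    L = x[-1] - x[0] + (x[1] - x[0])                     # period length
    candidates = [
        np.interp((x - a * dt) % L, x, u, period=L)      # u^n at the foot of the characteristic
        for a in controls
    ]
    return np.min(candidates, axis=0)                    # minimize over the control set

# Example: propagate u0(x) = sin(x) on [0, 2*pi) for a few steps.
x = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
u = np.sin(x)
dt = 0.5 * (x[1] - x[0])     # semi-Lagrangian schemes need no CFL condition; dt kept small for accuracy
for _ in range(50):
    u = semi_lagrangian_step(x, u, dt)
print(u.min(), u.max())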

We introduce a fine-grained framework for uncertainty quantification of predictive models under distributional shifts. This framework distinguishes the shift in covariate distributions from that in the conditional relationship between the outcome ($Y$) and the covariates ($X$). We propose to reweight the training samples to adjust for an identifiable covariate shift while protecting against worst-case conditional distribution shift bounded in an $f$-divergence ball. Based on ideas from conformal inference and distributionally robust learning, we present an algorithm that outputs (approximately) valid and efficient prediction intervals in the presence of distributional shifts. As a use case, we apply the framework to sensitivity analysis of individual treatment effects with hidden confounding. The proposed methods are evaluated in simulation studies and three real data applications, demonstrating superior robustness and efficiency compared with existing benchmarks.
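
The sketch below covers only the covariate-shift part of such a framework, via weighted split-conformal prediction with a likelihood ratio $w(x) = dP_{\text{test}}(x)/dP_{\text{train}}(x)$; the worst-case correction for conditional distribution shift in an $f$-divergence ball is omitted. The model and weight functions in the demo are stand-ins, not from the paper:

# Minimal sketch: weighted split-conformal prediction under covariate shift.
import numpy as np

def weighted_conformal_interval(mu_hat, x_cal, y_cal, x_test, w, alpha=0.1):
    """Interval mu_hat(x_test) +/- q, with q the weighted (1 - alpha) quantile
    of the calibration residuals; mu_hat and w are vectorized functions."""
    scores = np.abs(y_cal - mu_hat(x_cal))                    # nonconformity scores
    weights = np.append(w(x_cal), w(x_test))
    p = weights / weights.sum()                               # normalized weights
    scores_ext = np.append(scores, np.inf)                    # test point carries a point mass at +infinity
    order = np.argsort(scores_ext)
    cum = np.cumsum(p[order])
    q = scores_ext[order][np.searchsorted(cum, 1.0 - alpha)]  # weighted quantile
    m = mu_hat(np.array([x_test]))[0]
    return m - q, m + q

# Tiny demo with a hypothetical fitted model and an assumed likelihood ratio.
rng = np.random.default_rng(0)
x_cal = rng.uniform(0, 1, 500)
y_cal = np.sin(2 * np.pi * x_cal) + 0.1 * rng.standard_normal(500)
mu_hat = lambda x: np.sin(2 * np.pi * x)     # stand-in for a fitted regressor
w = lambda x: np.exp(2.0 * x)                # stand-in for dP_test/dP_train
print(weighted_conformal_interval(mu_hat, x_cal, y_cal, 0.8, w))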

Regular resolution is a refinement of the resolution proof system requiring that no variable be resolved on more than once along any path in the proof. It is known that there exist sequences of formulas that require exponential-size proofs in regular resolution while admitting polynomial-size proofs in resolution. Thus, with respect to the usual notion of simulation, regular resolution is separated from resolution. An alternative, and weaker, notion for comparing proof systems is that of an "effective simulation," which allows the translation of the formula along with the proof when moving between proof systems. We prove that regular resolution is equivalent to resolution under effective simulations. As a corollary, we recover in a black-box fashion a recent result on the hardness of automating regular resolution.
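
For readers unfamiliar with the terminology, the toy code below (ours) spells out the resolution rule and the regularity restriction; it has no bearing on the simulation argument itself:

# Minimal sketch: the resolution rule on clauses of signed literals, plus the
# regularity check that no variable is resolved on more than once along a path.
from typing import FrozenSet, List, Optional

Literal = int           # positive int = variable, negative int = negated variable
Clause = FrozenSet[Literal]

def resolve(c1: Clause, c2: Clause, var: int) -> Optional[Clause]:
    """Resolve c1 and c2 on `var`; returns None if the rule does not apply."""
    if var in c1 and -var in c2:
        return (c1 - {var}) | (c2 - {-var})
    if -var in c1 and var in c2:
        return (c1 - {-var}) | (c2 - {var})
    return None

def path_is_regular(resolved_vars_on_path: List[int]) -> bool:
    """True if each variable appears at most once among the variables resolved
    on along this single path of the proof."""
    return len(resolved_vars_on_path) == len(set(resolved_vars_on_path))

# Example: resolving (x1 or x2) and (not x1 or x3) on x1 yields (x2 or x3).
print(resolve(frozenset({1, 2}), frozenset({-1, 3}), 1))   # frozenset({2, 3})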

Randomized matrix algorithms have become workhorse tools in scientific computing and machine learning. To use these algorithms safely in applications, they should be coupled with posterior error estimates to assess the quality of the output. To meet this need, this paper proposes two diagnostics: a leave-one-out error estimator for randomized low-rank approximations and a jackknife resampling method to estimate the variance of the output of a randomized matrix computation. Both of these diagnostics are rapid to compute for randomized low-rank approximation algorithms such as the randomized SVD and randomized Nystr\"om approximation, and they provide useful information that can be used to assess the quality of the computed output and guide algorithmic parameter choices.
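
A naive, hedged sketch of the jackknife idea: recompute the randomized low-rank approximation with one sketch column left out at a time and report the spread of the replicates. The paper derives much cheaper update formulas; the brute-force recomputation below is for illustration only:

# Minimal sketch: jackknife spread estimate for a randomized low-rank approximation.
import numpy as np

def randomized_approx(A, Omega):
    """Projection-based approximation Q Q^T A from the sketch Y = A Omega."""
    Q, _ = np.linalg.qr(A @ Omega)
    return Q @ (Q.T @ A)

def jackknife_spread(A, s, rng=np.random.default_rng(0)):
    Omega = rng.standard_normal((A.shape[1], s))
    replicates = [
        randomized_approx(A, np.delete(Omega, j, axis=1)) for j in range(s)
    ]
    mean_rep = sum(replicates) / s
    # Jackknife estimate of the (Frobenius-norm) variability of the output.
    return np.sqrt((s - 1) / s * sum(np.linalg.norm(R - mean_rep, "fro") ** 2 for R in replicates))

# Example: a matrix with rapid singular value decay.
rng = np.random.default_rng(1)
U = np.linalg.qr(rng.standard_normal((100, 20)))[0]
V = np.linalg.qr(rng.standard_normal((80, 20)))[0]
A = U @ np.diag(2.0 ** -np.arange(20)) @ V.T
print(jackknife_spread(A, s=10))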
