The quaternion offset linear canonical transform (QOLCT), a time-shifted and frequency-modulated version of the quaternion linear canonical transform (QLCT), provides a more general framework for most existing signal processing tools. For the generalized QOLCT, the classical Heisenberg and Lieb uncertainty principles have been studied recently. In this paper, we first define the short-time quaternion offset linear canonical transform (STQOLCT) and derive its relationship with the quaternion Fourier transform (QFT). The crux of the paper lies in the generalization of several well-known uncertainty principles for the STQOLCT, including the Donoho-Stark uncertainty principle, Hardy's uncertainty principle, Beurling's uncertainty principle, and the logarithmic uncertainty principle.
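For orientation, the classical Donoho-Stark principle that such results generalize can be stated for the ordinary Fourier transform on $\mathbb{R}$ as follows (the paper's version replaces the Fourier transform with the STQOLCT); this sketch uses the standard concentration formulation:

```latex
\|f - \chi_T f\|_2 \le \varepsilon_T \|f\|_2
\quad\text{and}\quad
\|\hat{f} - \chi_\Omega \hat{f}\|_2 \le \varepsilon_\Omega \|f\|_2
\;\Longrightarrow\;
|T|\,|\Omega| \;\ge\; \bigl(1 - \varepsilon_T - \varepsilon_\Omega\bigr)^2 .
```

Here $\chi_T$ denotes truncation to the set $T$, so $f$ is $\varepsilon_T$-concentrated on $T$ and $\hat f$ is $\varepsilon_\Omega$-concentrated on $\Omega$.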

In this article, we propose two numerical methods, the Gaussian process (GP) method and the Fourier features (FF) algorithm, to solve mean field games (MFGs). The GP algorithm approximates the solution of an MFG with maximum a posteriori probability estimators of GPs conditioned on the partial differential equation (PDE) system of the MFG at a finite number of sample points. The main bottleneck of the GP method is computing the inverse of a square Gram matrix, whose size is proportional to the number of sample points. To improve performance, we introduce the FF method, whose insight comes from the recent trend of approximating positive definite kernels with random Fourier features. The FF algorithm seeks approximate solutions in the space generated by sampled Fourier features. In the FF method, the size of the matrix to be inverted depends only on the number of Fourier features selected, which is much smaller than the number of sample points. Hence, the FF method reduces precomputation time, saves memory, and achieves accuracy comparable to the GP method. We give existence and convergence proofs for both algorithms. The convergence argument of the GP method does not depend on any monotonicity condition, which suggests potential applications of the GP method to MFGs with non-monotone couplings in future work. We show the efficacy of our algorithms through experiments on a stationary MFG with a non-local coupling and on a time-dependent planning problem. We believe that the FF method can also serve as an alternative algorithm for solving general PDEs.
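The kernel approximation behind the FF method can be sketched in a few lines; this is the standard random-Fourier-feature construction for an RBF kernel (the bandwidth and feature count below are illustrative, not the paper's MFG setup):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 2, 4000            # input dimension, number of Fourier features
sigma = 1.0               # RBF bandwidth: k(x, y) = exp(-||x-y||^2 / (2 sigma^2))

# Sample frequencies from the kernel's spectral density (Gaussian for the RBF kernel)
W = rng.normal(0.0, 1.0 / sigma, size=(m, d))
b = rng.uniform(0.0, 2 * np.pi, size=m)

def phi(x):
    """Random Fourier feature map: k(x, y) is approximated by phi(x) @ phi(y)."""
    return np.sqrt(2.0 / m) * np.cos(W @ x + b)

x = np.array([0.3, -0.1])
y = np.array([0.5, 0.4])
exact = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
approx = phi(x) @ phi(y)
```

Working in the span of `phi` replaces the $n \times n$ Gram matrix by an $m \times m$ system, which is the source of the FF method's savings when $m \ll n$.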

We present a new uncertainty principle for risk-aware statistical estimation, effectively quantifying the inherent trade-off between mean squared error ($\mathrm{MSE}$) and risk, the latter measured by the associated average predictive squared error variance ($\mathrm{SEV}$), for every admissible estimator of choice. Our uncertainty principle has a familiar form and resembles fundamental and classical results arising in several other areas, such as the Heisenberg principle in statistical and quantum mechanics, and the Gabor limit (time-scale trade-offs) in harmonic analysis. In particular, we prove that, given a joint generative model of states and observables, the product between $\mathrm{MSE}$ and $\mathrm{SEV}$ is bounded from below by a computable model-dependent constant, which is explicitly related to the Pareto frontier of a recently studied $\mathrm{SEV}$-constrained minimum $\mathrm{MSE}$ (MMSE) estimation problem. Further, we show that the aforementioned constant is inherently connected to a new, intuitive, and rigorously topologically grounded statistical measure of distribution skewness in multiple dimensions, consistent with Pearson's moment coefficient of skewness for variables on the line. Our results are also illustrated via numerical simulations.
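Schematically, the trade-off has the same product form as its classical analogues; the following is an illustrative sketch (the symbols and the constant $c_{\mathrm{model}}$ stand in for the paper's exact quantities):

```latex
\underbrace{\mathrm{MSE}(\hat{x})}_{\text{accuracy}} \cdot
\underbrace{\mathrm{SEV}(\hat{x})}_{\text{risk}}
\;\ge\; c_{\mathrm{model}},
\qquad\text{cf.}\qquad
\sigma_x\,\sigma_p \;\ge\; \frac{\hbar}{2},
\qquad
\sigma_t\,\sigma_\omega \;\ge\; \frac{1}{2}.
```

Driving either factor to zero forces the other to blow up, exactly as in the Heisenberg and Gabor bounds it resembles.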

Uncertainty estimation (UE) techniques -- such as Gaussian processes (GP), Bayesian neural networks (BNN), and Monte Carlo dropout (MCDropout) -- aim to improve the interpretability of machine learning models by assigning an estimated uncertainty value to each of their prediction outputs. However, since misleading uncertainty estimates can have fatal consequences in practice, this paper analyzes the above techniques. First, we show that GP methods always yield high uncertainty estimates on out-of-distribution (OOD) data. Second, we show on a 2D toy example that neither BNNs nor MCDropout give high uncertainty estimates on OOD samples. Finally, we show empirically that this pitfall of BNNs and MCDropout holds on real-world datasets as well. Our insights (i) raise awareness for the more cautious use of currently popular UE methods in deep learning, (ii) encourage the development of UE methods that approximate GP-based methods instead of BNNs and MCDropout, and (iii) provide empirical setups that can be used to verify the OOD performance of any other UE method. The source code is available at //github.com/epfml/uncertainity-estimation.
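The GP behavior described above can be reproduced with an exact GP regressor: its posterior variance reverts to the prior variance far from the training data. A minimal NumPy sketch (RBF kernel; the bandwidth, noise level, and test points are illustrative):

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """RBF kernel matrix between 1D point sets A and B."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell ** 2)

X = np.linspace(-1.0, 1.0, 10)     # training inputs (in-distribution range)
y = np.sin(3 * X)
noise = 1e-4

K_inv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))

def predict(xs):
    """GP posterior mean and variance at query points xs."""
    Ks = rbf(xs, X)
    mean = Ks @ K_inv @ y
    var = rbf(xs, xs).diagonal() - np.einsum('ij,jk,ik->i', Ks, K_inv, Ks)
    return mean, var

_, var_in = predict(np.array([0.0]))    # inside the training range: variance near 0
_, var_out = predict(np.array([10.0]))  # far outside (OOD): variance near the prior (1.0)
```

This "reversion to the prior" away from the data is exactly the property the paper finds missing in BNNs and MCDropout.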

We study a localized notion of uniform convergence known as an "optimistic rate" (Panchenko 2002; Srebro et al. 2010) for linear regression with Gaussian data. Our refined analysis avoids the hidden constant and logarithmic factor in existing results, which are known to be crucial in high-dimensional settings, especially for understanding interpolation learning. As a special case, our analysis recovers the guarantee from Koehler et al. (2021), which tightly characterizes the population risk of low-norm interpolators under the benign overfitting conditions. Our optimistic rate bound, however, also covers predictors with arbitrary training error. This allows us to recover some classical statistical guarantees for ridge and LASSO regression under random designs, and helps us obtain a precise understanding of the excess risk of near-interpolators in the over-parameterized regime.
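As a concrete point of reference, the ridge estimator analyzed under random design has a simple closed form; a minimal NumPy sketch (the dimensions, regularization strength, and noise level below are illustrative, not the paper's regime):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 200, 50, 1.0
X = rng.normal(size=(n, d))                 # Gaussian random design
w_true = rng.normal(size=d) / np.sqrt(d)    # ground-truth coefficients
y = X @ w_true + 0.1 * rng.normal(size=n)   # noisy responses

# Ridge estimator: argmin_w ||Xw - y||^2 + lam * ||w||^2
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

train_mse = np.mean((X @ w_ridge - y) ** 2)
param_err = np.linalg.norm(w_ridge - w_true)
```

An optimistic rate bound controls the population risk of such estimators uniformly, including the near-interpolating regime where `train_mse` is driven close to zero.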

We investigate the complexity of computing the Zariski closure of a finitely generated group of matrices. The Zariski closure was previously shown to be computable by Derksen, Jeandel, and Koiran, but the termination argument for their algorithm appears not to yield any complexity bound. In this paper we follow a different approach and obtain a bound on the degree of the polynomials that define the closure. Our bound shows that the closure can be computed in elementary time. We also obtain upper bounds on the length of chains of linear algebraic groups, where all the groups are generated over a fixed number field.

Suszko's Sentential Calculus with Identity SCI results from classical propositional calculus CPC by adding a new connective $\equiv$ and axioms for identity $\varphi\equiv\psi$ (which we interpret here as `propositional identity'). We reformulate the original semantics of SCI in terms of Boolean prealgebras, establishing a connection to `hyperintensional semantics'. Furthermore, we define a general framework of dualities between certain SCI-theories and Lewis-style modal systems in the vicinity of S3. Suszko's original approach to two SCI-theories corresponding to S4 and S5 can be formulated as a special case. All these dualities rely in particular on the fact that Lewis' `strict equivalence' is axiomatized by the SCI-principles of `propositional identity'.

An $r$-quasiplanar graph is a graph drawn in the plane with no $r$ pairwise crossing edges. We prove that there is a constant $C>0$ such that for any $s>2$, every $2^s$-quasiplanar graph with $n$ vertices has at most $n(\frac{C\log n}{s})^{2s-4}$ edges. A graph whose vertices are continuous curves in the plane, two being connected by an edge if and only if they intersect, is called a \emph{string graph}. We show that for every $\epsilon>0$, there exists $\delta>0$ such that every string graph with $n$ vertices, whose chromatic number is at least $n^{\epsilon}$ contains a clique of size at least $n^{\delta}$. A clique of this size or a coloring using fewer than $n^{\epsilon}$ colors can be found by a polynomial time algorithm in terms of the size of the geometric representation of the set of strings. For every $r\ge 3$, we construct families of $n$ segments in the plane without $r$ pairwise crossing members, which have the property that in any coloring of the segments with fewer than $c \log\log n $ colors, at least one of the color classes contains $r-1$ pairwise crossing segments. Here $c=c(r)>0$ is a suitable constant. In the process, we use, generalize, and strengthen previous results of Lee, Tomon, Walczak, and others. All of our theorems are related to geometric variants of the following classical graph-theoretic problem of Erd\H os, Gallai, and Rogers. Given a $K_r$-free graph on $n$ vertices and an integer $s<r$, at least how many vertices can we find such that the subgraph induced by them is $K_s$-free?
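The geometric primitive underlying all of these statements is a pairwise-crossing test for segments; a minimal sketch using the standard orientation test (assuming general position, i.e., no three collinear endpoints):

```python
def cross(o, a, b):
    """Signed area of the parallelogram spanned by (a - o) and (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_cross(s, t):
    """True if open segments s and t properly cross (general position assumed)."""
    p, q = s
    r, u = t
    # Each segment's endpoints must lie on opposite sides of the other's line.
    return (cross(p, q, r) * cross(p, q, u) < 0 and
            cross(r, u, p) * cross(r, u, q) < 0)
```

Running this test over all pairs gives the intersection graph (a string graph restricted to segments), whose cliques are exactly the families of pairwise crossing segments in the constructions above.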

We consider a least-squares variational kernel-based method for the numerical solution of second order elliptic partial differential equations on a multi-dimensional domain. In this setting it is not assumed that the differential operator is self-adjoint or positive definite, as it must be in the Rayleigh-Ritz setting. However, the new scheme leads to a symmetric and positive definite algebraic system of equations. Moreover, the resulting method does not rely on certain subspaces satisfying the boundary conditions. The trial space for discretization is provided via standard kernels that reproduce the Sobolev spaces as their native spaces. The error analysis of the method is given, although it partly depends on an inverse inequality on the boundary, which is still an open problem. The condition number of the final linear system is estimated in terms of the smoothness of the kernel and the discretization quality. Finally, the results of some computational experiments support the theoretical error bounds.
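A minimal sketch of the least-squares kernel idea on a 1D model problem, $-u'' = f$ on $(0,1)$ with homogeneous Dirichlet data and exact solution $u(x)=\sin(\pi x)$; the Gaussian kernel, shape parameter, and point counts below are illustrative, not the paper's setup:

```python
import numpy as np

# Model problem: -u'' = f on (0, 1), u(0) = u(1) = 0, exact u(x) = sin(pi x)
f = lambda x: np.pi ** 2 * np.sin(np.pi * x)

centers = np.linspace(0.0, 1.0, 25)   # kernel trial centers
eps = 0.1                             # Gaussian shape parameter

phi = lambda x, c: np.exp(-(x - c) ** 2 / (2 * eps ** 2))
# Second derivative of the Gaussian basis function in x
phi_xx = lambda x, c: ((x - c) ** 2 / eps ** 4 - 1 / eps ** 2) * phi(x, c)

x_int = np.linspace(0.0, 1.0, 60)[1:-1]   # interior collocation points
x_bnd = np.array([0.0, 1.0])              # boundary points

# Overdetermined system: PDE residual rows plus boundary-condition rows
A = np.vstack([
    -phi_xx(x_int[:, None], centers[None, :]),
    phi(x_bnd[:, None], centers[None, :]),
])
rhs = np.concatenate([f(x_int), np.zeros(2)])

coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)

x_test = np.linspace(0.0, 1.0, 201)
u_num = phi(x_test[:, None], centers[None, :]) @ coef
err = np.max(np.abs(u_num - np.sin(np.pi * x_test)))
```

The least-squares normal equations $A^{\mathsf T}A\,c = A^{\mathsf T}b$ are symmetric positive definite regardless of whether the underlying operator is, which mirrors the structural point of the abstract.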

The octonion offset linear canonical transform (OOLCT) can be defined as a time-shifted and frequency-modulated version of the octonion linear canonical transform, a more general framework for most existing signal processing tools. In this paper, we first define the OOLCT and provide its closed-form representation. Based on this fact, we study some fundamental properties of the proposed transform, including the inversion formula, norm split, and energy conservation. The crux of the paper lies in the generalization of several well-known uncertainty relations for the OOLCT, including Pitt's inequality, the logarithmic uncertainty inequality, the Hausdorff-Young inequality, and local uncertainty inequalities.
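For reference, the classical form of Pitt's inequality for the Euclidean Fourier transform, which results of this type extend to the octonion offset linear canonical transform, reads (with a constant $C_\lambda$ depending only on $\lambda$ and the normalization):

```latex
\int_{\mathbb{R}^n} |\xi|^{-\lambda}\,\bigl|\hat{f}(\xi)\bigr|^2\,d\xi
\;\le\; C_\lambda \int_{\mathbb{R}^n} |x|^{\lambda}\,|f(x)|^2\,dx,
\qquad 0 \le \lambda < n .
```

The logarithmic uncertainty inequality follows from differentiating this estimate in $\lambda$ at $\lambda = 0$, which is the standard route the generalized versions also take.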

In recent years, knowledge graph completion methods have been extensively studied; graph embedding approaches learn low-dimensional representations of entities and relations to predict missing facts. Those models usually view the relation vector as a translation (TransE) or rotation (RotatE and QuatE) between entity pairs, enjoying the advantages of simplicity and efficiency. However, QuatE has two main problems: 1) its ability to capture representations and feature interactions between entities and relations is relatively weak, because it relies only on the rigid calculation of three embedding vectors; 2) although the model can handle various relation patterns, including symmetry, anti-symmetry, inversion, and composition, mapping properties of relations, such as one-to-many, many-to-one, and many-to-many, are not considered. In this paper, we propose a novel model, QuatDE, with a dynamic mapping strategy to explicitly capture a variety of relational patterns, enhancing the feature interaction capability between elements of the triplet. Our model relies on three extra vectors, denoted the subject transfer vector, object transfer vector, and relation transfer vector. The mapping strategy dynamically selects the transition vectors associated with each triplet, which are used to adjust the positions of the entity embedding vectors in quaternion space via the Hamilton product. Experiment results show that QuatDE achieves state-of-the-art performance on three well-established knowledge graph completion benchmarks. In particular, the MR metric improves by 26% on WN18 and 15% on WN18RR relative to prior work, which demonstrates the generalization ability of QuatDE.
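The Hamilton product that drives the entity adjustment can be sketched directly; this is the standard quaternion product (not QuatDE's full scoring function), whose non-commutativity is what lets quaternion models encode non-symmetric relations:

```python
def hamilton(p, q):
    """Hamilton product of quaternions p = a + bi + cj + dk and q = e + fi + gj + hk."""
    a, b, c, d = p
    e, f, g, h = q
    return (a * e - b * f - c * g - d * h,   # real part
            a * f + b * e + c * h - d * g,   # i component
            a * g - b * h + c * e + d * f,   # j component
            a * h + b * g - c * f + d * e)   # k component
```

For example, `hamilton(i, j)` yields `k` while `hamilton(j, i)` yields `-k`, so composing a relation quaternion with an entity quaternion depends on order, unlike a plain inner product.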
