
We study existence and computability of finite bases for ideals of polynomials over infinitely many variables. In our setting, variables come from a countable logical structure A, and embeddings from A to A act on polynomials by renaming variables. First, we give a necessary and sufficient condition on A guaranteeing the following generalisation of Hilbert's Basis Theorem: every polynomial ideal which is equivariant, i.e. invariant under renaming of variables, is finitely generated. Second, we develop an extension of the classical Buchberger algorithm that computes a Gr\"obner basis of a given equivariant ideal. This implies decidability of the membership problem for equivariant ideals. Finally, we sketch various applications of these results to register automata, Petri nets with data, orbit-finitely generated vector spaces, and orbit-finite systems of linear equations.
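The membership test enabled by a Gröbner basis can be illustrated in the classical finite-variable setting that this result generalizes. A minimal sketch using sympy (the equivariant extension itself is not implemented here; the ideal and its generators are illustrative):

```python
# Classical (finite-variable) Groebner-basis membership test, i.e. the
# procedure that the equivariant Buchberger extension generalizes.
from sympy import symbols, groebner, reduced

x, y = symbols('x y')
gens = [x**2 + y**2 - 1, x - y]          # generators of the ideal
G = groebner(gens, x, y, order='lex')    # Groebner basis of <gens>

def in_ideal(p):
    # p lies in the ideal iff its remainder modulo G is zero
    _, r = reduced(p, list(G), x, y, order='lex')
    return r == 0
```

For instance, `in_ideal(2*x**2 - 1)` holds, since 2x² − 1 = (x² + y² − 1) + (x + y)(x − y), while `x + y` does not vanish on the whole variety and is rejected.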

Related content

Visualizing data and finding patterns in data are ubiquitous problems in the sciences. Increasingly, applications seek signal and structure in a contrastive setting: a foreground dataset relative to a background dataset. For this purpose, we propose contrastive independent component analysis (cICA). This generalizes independent component analysis to independent latent variables across a foreground and background. We propose a hierarchical tensor decomposition algorithm for cICA. We study the identifiability of cICA and demonstrate its performance in visualizing data and finding patterns, using synthetic and real-world datasets and comparing the approach to existing contrastive methods.
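The cICA algorithm itself (a hierarchical tensor decomposition) is not reproduced here; as a hedged point of reference for the contrastive setting, the sketch below implements contrastive PCA, one of the existing contrastive baselines, which seeks directions of high foreground variance and low background variance. All data and the contrast parameter `alpha` are synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# shared "background" structure plus a foreground-specific signal
noise = lambda m: rng.standard_normal((m, 3)) * np.array([1.0, 1.0, 4.0])
bg = noise(2000)
fg = noise(2000) + rng.standard_normal((2000, 1)) @ np.array([[2.0, -2.0, 0.0]])

def contrastive_direction(fg, bg, alpha=1.0):
    # top eigenvector of C_fg - alpha * C_bg: high foreground variance,
    # low background variance
    C = np.cov(fg.T) - alpha * np.cov(bg.T)
    _, vecs = np.linalg.eigh(C)
    return vecs[:, -1]

v = contrastive_direction(fg, bg)
target = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)   # true signal direction
alignment = abs(v @ target)
```

Plain PCA on the foreground alone would pick the high-variance background axis; subtracting the background covariance isolates the foreground-specific direction.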

Our goal is to highlight some of the deep links between numerical splitting methods and control theory. We consider evolution equations of the form $\dot{x} = f_0(x) + f_1(x)$, where $f_0$ encodes a non-reversible dynamic, so that one is interested in schemes only involving forward flows of $f_0$. In this context, a splitting method can be interpreted as a trajectory of the control-affine system $\dot{x}(t)=f_0(x(t))+u(t)f_1(x(t))$, associated with a control~$u$ which is a finite sum of Dirac masses. The general goal is then to find a control such that the flow of $f_0 + u(t) f_1$ is as close as possible to the flow of $f_0+f_1$. Using this interpretation and classical tools from control theory, we revisit well-known results concerning numerical splitting methods, and we prove a handful of new ones, with an emphasis on splittings with additional positivity conditions on the coefficients. First, we show that there exist numerical schemes of arbitrary order involving only forward flows of $f_0$ if one allows complex coefficients for the flows of $f_1$. In control-theoretic terms, we prove that, for complex-valued controls, the Lie algebra rank condition is equivalent to small-time local controllability of a system. Second, for real-valued coefficients, we show that the well-known order restrictions are linked with so-called "bad" Lie brackets from control theory, which are known to yield obstructions to small-time local controllability. We use our recent basis of the free Lie algebra to precisely identify the conditions under which high-order methods exist.
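The order notions at play can be checked numerically in the simplest linear, non-commuting setting. The sketch below (illustrative random matrices, not the paper's control-theoretic constructions) compares one step of Lie and Strang splitting against the exact flow $e^{h(A+B)}$ and estimates the local-error order: slopes of about 2 and 3, i.e. methods of order 1 and 2.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # plays the role of f_0 (linear)
B = rng.standard_normal((3, 3))   # plays the role of f_1 (linear)

def local_error(step, h):
    # distance between one step of the scheme and the exact flow
    return np.linalg.norm(step(h) - expm(h * (A + B)))

lie = lambda h: expm(h * B) @ expm(h * A)                           # order 1
strang = lambda h: expm(h * A / 2) @ expm(h * B) @ expm(h * A / 2)  # order 2

h1, h2 = 1e-2, 5e-3
slope = lambda step: (np.log(local_error(step, h1) / local_error(step, h2))
                      / np.log(h1 / h2))
p_lie, p_strang = slope(lie), slope(strang)   # local-error slopes ~2 and ~3
```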

We show that the limiting variance of a sequence of estimators for a structured covariance matrix takes a general form: it appears as the variance of a scaled projection of a random matrix of radial type. A similar result is obtained for the corresponding sequence of estimators for the vector of variance components. These results are illustrated by the limiting behavior of estimators for a linear covariance structure in a variety of multivariate statistical models. We also derive a characterization of the influence function of the corresponding functionals. Furthermore, we derive the limiting distribution and influence function of scale-invariant mappings of such estimators and their corresponding functionals. As a consequence, the asymptotic relative efficiency of different estimators for the shape component of a structured covariance matrix can be compared by means of a single scalar, and the gross error sensitivity of the corresponding influence functions can be compared by means of a single index. Similar results are obtained for estimators of the normalized vector of variance components. We apply our results to investigate how the efficiency, gross error sensitivity, and breakdown point of S-estimators for the normalized variance components are simultaneously affected by varying their cutoff value.

In this work, we study and extend a class of semi-Lagrangian exponential methods, which combine exponential time integration techniques, suitable for integrating stiff linear terms, with a semi-Lagrangian treatment of nonlinear advection terms. Partial differential equations involving both types of terms arise, for instance, in atmospheric circulation models. Through a truncation error analysis, we show that previously formulated semi-Lagrangian exponential schemes are limited to first-order accuracy due to the discretization of the linear term; we then formulate a new discretization leading to second-order accuracy. Also, a detailed stability study is conducted to compare several Eulerian and semi-Lagrangian exponential schemes, as well as a well-established semi-Lagrangian semi-implicit method, which is used in operational atmospheric models. Numerical simulations of the shallow-water equations on the rotating sphere are performed to assess the orders of convergence, stability properties, and computational cost of each method. The proposed second-order semi-Lagrangian exponential method is shown to be more stable and accurate than the previously formulated schemes of the same class, at the expense of larger wall-clock times. However, it is more stable than, and has a similar cost to, the well-established semi-Lagrangian semi-implicit method, making it a competitive candidate for potential operational applications in atmospheric circulation modeling.
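As a hedged, much-simplified illustration of the exponential-integration ingredient (not the paper's semi-Lagrangian scheme), the sketch below applies the exponential Euler method $u_{n+1} = e^{h\lambda}u_n + h\,\varphi_1(h\lambda)\,g(t_n)$ to a linear test problem whose exact solution is $\cos t$, and confirms first-order convergence; the problem and parameters are illustrative.

```python
import numpy as np

lam = -2.0                                   # stiff-ish linear coefficient
g = lambda t: -lam * np.cos(t) - np.sin(t)   # chosen so that u(t) = cos(t)
phi1 = lambda z: np.expm1(z) / z             # phi_1(z) = (e^z - 1) / z

def exp_euler(h, T=1.0):
    # exponential Euler: exact on the linear part, freezes g within a step
    u, t = 1.0, 0.0
    for _ in range(int(round(T / h))):
        u = np.exp(h * lam) * u + h * phi1(h * lam) * g(t)
        t += h
    return u

errs = [abs(exp_euler(h) - np.cos(1.0)) for h in (1e-2, 5e-3)]
order = np.log(errs[0] / errs[1]) / np.log(2.0)   # expect ~1
```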

In this article, we investigate both forward and backward problems for coupled systems of time-fractional diffusion equations, encompassing scenarios of strong coupling. For the forward problem, we establish the well-posedness of the system, leveraging the eigensystem of the corresponding elliptic system as the foundation. When considering the backward problem, specifically the determination of initial values through final time observations, we demonstrate a Lipschitz stability estimate, which is consistent with the stability observed in the case of a single equation. To numerically address this backward problem, we draw on the explicit formulation of Tikhonov regularization to devise a multi-channel neural network architecture. This architecture offers a versatile approach, exhibiting its efficacy in multidimensional settings through numerical examples and its robustness in handling initial values not seen during training.
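As a hedged reminder of the regularization the architecture is derived from (the multi-channel network itself is not reproduced), the sketch below applies classical Tikhonov regularization $x_\alpha = (A^{\mathsf T}A + \alpha I)^{-1}A^{\mathsf T}b$ to a synthetic ill-conditioned linear backward problem; the operator, spectrum, and $\alpha$ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
# synthetic ill-posed operator with rapidly decaying singular values
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)
A = U @ np.diag(s) @ V.T

x_true = V[:, :5] @ rng.standard_normal(5)   # lies in well-resolved directions
x_true /= np.linalg.norm(x_true)
b = A @ x_true + 1e-3 * rng.standard_normal(n)   # noisy observation

def tikhonov(A, b, alpha):
    # x_alpha = argmin ||A x - b||^2 + alpha ||x||^2
    return np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ b)

err_naive = np.linalg.norm(np.linalg.solve(A, b) - x_true)   # noise blows up
err_reg = np.linalg.norm(tikhonov(A, b, 1e-4) - x_true)      # stable
```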

We address the problem of the best uniform approximation of a continuous function on a convex domain. The approximation is by linear combinations of a finite system of functions (not necessarily Chebyshev) under arbitrary linear constraints. By modifying the concepts of alternance and of the Remez iterative procedure, we present a method that proves efficient in numerical problems. The linear rate of convergence is proved under some favourable assumptions. Special attention is paid to systems of complex exponentials, Gaussian functions, and lacunary algebraic and trigonometric polynomials. Applications to signal processing, linear ODEs, switching dynamical systems, and Markov-Bernstein type inequalities are considered.
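The alternance-based method of the paper is not reproduced here; as a hedged baseline, best uniform approximation on a finite grid is a linear program (minimize $t$ subject to $|f(x_i) - \sum_j c_j g_j(x_i)| \le t$), which the sketch below solves for cubic approximation of $|x|$ on $[-1,1]$. The classical answer, $x^2 + 1/8$ with error $1/8$, equioscillates at grid points, so the LP recovers its error.

```python
import numpy as np
from scipy.optimize import linprog

f = np.abs
xs = np.linspace(-1.0, 1.0, 201)   # discretization of the domain
G = np.vander(xs, 4)               # basis x^3, x^2, x, 1 (Vandermonde columns)

# minimize t  subject to  -t <= f(x_i) - (G c)_i <= t
n, m = G.shape
A_ub = np.block([[G, -np.ones((n, 1))], [-G, -np.ones((n, 1))]])
b_ub = np.concatenate([f(xs), -f(xs)])
cost = np.zeros(m + 1); cost[-1] = 1.0
res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * m + [(0.0, None)], method='highs')
coef, t_best = res.x[:m], res.x[-1]   # t_best ~ 1/8 for |x| and cubics
```

A Remez-style method iterates on a small set of alternance points instead of carrying the full grid, which is what makes it efficient at scale.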

Influence propagation in social networks is a central problem in modern social network analysis, with important societal applications in politics and advertising. A large body of work has focused on cascading models, viral marketing, and finite-horizon diffusion. There is, however, a need for more developed, mathematically principled \emph{adversarial models}, in which multiple, opposed actors strategically select nodes whose influence will maximally sway the crowd to their point of view. In the present work, we develop and analyze such a model based on harmonic functions and linear diffusion. We prove that our general problem is NP-hard and that the objective function is monotone and submodular; consequently, we can greedily approximate the solution within a constant factor. Introducing and analyzing a convex relaxation, we show that the problem can be approximately solved using smooth optimization methods. We illustrate the effectiveness of our approach on a variety of example networks.
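A hedged toy version of the harmonic-function model: opinions are clamped to +1 at our seeds and −1 at the opponent's, harmonic (neighbour-averaging) elsewhere, and our seeds are chosen greedily to maximize total opinion, as the monotonicity and submodularity result licenses. The ring graph, opponent placement, and budget are illustrative.

```python
import numpy as np

def harmonic_opinions(A, ours, theirs):
    # Dirichlet problem: x = +1 on our seeds, -1 on the opponent's,
    # and x_v equals the average of its neighbours at every free node
    n = A.shape[0]
    x = np.zeros(n)
    fixed = np.zeros(n, dtype=bool)
    x[list(ours)], fixed[list(ours)] = 1.0, True
    x[list(theirs)], fixed[list(theirs)] = -1.0, True
    free = ~fixed
    L = np.diag(A.sum(axis=1)) - A   # graph Laplacian
    x[free] = np.linalg.solve(L[np.ix_(free, free)],
                              -L[np.ix_(free, fixed)] @ x[fixed])
    return x

def greedy_seeds(A, theirs, k):
    # greedy maximization of the (monotone, submodular) total opinion
    ours = []
    for _ in range(k):
        best = max((v for v in range(A.shape[0])
                    if v not in theirs and v not in ours),
                   key=lambda v: harmonic_opinions(A, ours + [v], theirs).sum())
        ours.append(best)
    return ours

n = 10                        # ring graph; the opponent sits at node 0
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
theirs = {0}
seeds = greedy_seeds(A, theirs, k=2)
vals = [harmonic_opinions(A, seeds[:j], theirs).sum() for j in range(3)]
```

With no seeds of our own, every opinion relaxes to −1; each greedy pick then strictly raises the total, illustrating the monotone objective the approximation guarantee relies on.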

An essential problem in statistics and machine learning is the estimation of expectations involving PDFs with intractable normalizing constants. The self-normalized importance sampling (SNIS) estimator, which normalizes the IS weights, has become the standard approach due to its simplicity. However, SNIS has been shown to exhibit high variance in challenging estimation problems, e.g., those involving rare events or posterior predictive distributions in Bayesian statistics. Further, most state-of-the-art adaptive importance sampling (AIS) methods adapt the proposal as if the weights had not been normalized. In this paper, we propose a framework that considers the original task as estimation of a ratio of two integrals. In our new formulation, we obtain samples from a joint proposal distribution in an extended space, with two of its marginals playing the role of proposals used to estimate each integral. Importantly, the framework allows us to induce and control a dependency between both estimators. We propose a construction of the joint proposal that decomposes into two (multivariate) marginals and a coupling. This leads to a two-stage framework suitable for integration with existing or new AIS and/or variational inference (VI) algorithms. The marginals are adapted in the first stage, while the coupling can be chosen and adapted in the second stage. We show in several examples the benefits of the proposed methodology, including an application to Bayesian prediction with misspecified models.
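For concreteness, here is the baseline SNIS estimator the passage refers to, for a target known only up to its normalizing constant; the target, proposal, and sample size are illustrative, and the paper's coupled two-stage construction is not implemented.

```python
import numpy as np

rng = np.random.default_rng(42)
log_p_tilde = lambda x: -0.5 * (x - 3.0) ** 2   # unnormalized N(3, 1) target

n = 100_000
x = rng.normal(0.0, 3.0, size=n)                # proposal q = N(0, 3^2)
log_q = -0.5 * (x / 3.0) ** 2 - np.log(3.0)     # log q up to a shared constant

log_w = log_p_tilde(x) - log_q                  # unnormalized log-weights
w = np.exp(log_w - log_w.max())                 # shift for numerical stability
snis = np.sum(w * x) / np.sum(w)                # self-normalized estimate of E_p[X]
```

The normalizing constants of both the target and the proposal cancel in the ratio, which is exactly why SNIS applies to intractable targets; the price is a ratio of two integral estimators, the object whose variance the proposed framework is designed to control.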

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows precise prediction of explainee inferences conditioned on the explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency-map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
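The comparison step can be made concrete with Shepard's law, under which generalization decays exponentially with distance in a similarity space; the sketch below computes that similarity between a hypothetical human explanation embedding and two AI saliency-map embeddings (all vectors and the sensitivity `k` are illustrative assumptions, not the study's fitted model).

```python
import numpy as np

def shepard_similarity(a, b, k=1.0):
    # Shepard's universal law: similarity = exp(-k * distance)
    return np.exp(-k * np.linalg.norm(np.asarray(a) - np.asarray(b)))

human = np.array([0.9, 0.1, 0.0])      # explanation the human would give
ai_close = np.array([0.8, 0.2, 0.0])   # saliency map similar to the human's
ai_far = np.array([0.0, 0.1, 0.9])     # saliency map unlike the human's

s_close = shepard_similarity(human, ai_close)
s_far = shepard_similarity(human, ai_far)
```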

Most algorithms for representation learning and link prediction in relational data have been designed for static data. However, the data they are applied to usually evolves with time, such as friend graphs in social networks or user interactions with items in recommender systems. This is also the case for knowledge bases, which contain facts such as (US, has president, B. Obama, [2009-2017]) that are valid only at certain points in time. For the problem of link prediction under temporal constraints, i.e., answering queries such as (US, has president, ?, 2012), we propose a solution inspired by the canonical decomposition of tensors of order 4. We introduce new regularization schemes and present an extension of ComplEx (Trouillon et al., 2016) that achieves state-of-the-art performance. Additionally, we propose a new dataset for knowledge base completion constructed from Wikidata, larger than previous benchmarks by an order of magnitude, as a new reference for evaluating temporal and non-temporal link prediction methods.
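The order-4 decomposition idea can be sketched as a timestamp-modulated ComplEx score, $\mathrm{Re}\langle s, r \odot t, \bar{o}\rangle$; the dimensions and random embeddings below are illustrative stand-ins for trained parameters, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                    # complex embedding dimension
emb = lambda: rng.standard_normal(d) + 1j * rng.standard_normal(d)
subj, rel, obj, time_emb = emb(), emb(), emb(), emb()

def score(s, r, o, t):
    # one entry of the canonical decomposition of the order-4
    # (subject, relation, object, timestamp) tensor
    return np.real(np.sum(s * (r * t) * np.conj(o)))

val = score(subj, rel, obj, time_emb)   # plausibility of the timed fact
```

Dropping `t` (setting it to the all-ones vector) recovers the non-temporal ComplEx score, which is what makes the order-4 view a natural extension.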
