
This paper discusses the construction of local bounded commuting projections for discrete subcomplexes of the gradgrad complexes in two and three dimensions, which play an important role in the finite element theory of elasticity (2D) and general relativity (3D). The construction extends the known local bounded commuting projections for the discrete de Rham complexes to other discrete complexes. Moreover, the argument provides guidance for the design of new discrete gradgrad complexes.
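For orientation, one frequently cited form of the gradgrad complex in three dimensions (in the style of the Arnold-Hu complexes-from-complexes construction; quoted here as background, not as this paper's precise sequence) reads:

```latex
% Schematic 3D gradgrad complex (background form; S denotes symmetric
% 3x3 matrix fields and T traceless 3x3 matrix fields).
\[
0 \longrightarrow P_1(\Omega) \longrightarrow H^2(\Omega)
  \xrightarrow{\,\operatorname{grad}\operatorname{grad}\,}
  H(\operatorname{curl},\Omega;\mathbb{S})
  \xrightarrow{\,\operatorname{curl}\,}
  H(\operatorname{div},\Omega;\mathbb{T})
  \xrightarrow{\,\operatorname{div}\,}
  L^2(\Omega;\mathbb{R}^3) \longrightarrow 0.
\]
```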

Related content

Network data, commonly used throughout the physical, social, and biological sciences, consists of nodes (individuals) and the edges (interactions) between them. One way to represent network data's complex, high-dimensional structure is to embed the graph into a low-dimensional geometric space. The curvature of this space, in particular, provides insight into the structure of the graph, such as the propensity to form triangles or exhibit tree-like structure. We derive an estimating function for curvature based on triangle side lengths and the length from the midpoint of a side to the opposing corner. We construct an estimator whose only input is a distance matrix, and we establish its asymptotic normality. We next introduce a novel latent distance matrix estimator for networks and an efficient algorithm to compute the estimate by solving iterative quadratic programs. We apply this method to the Los Alamos National Laboratory Unified Network and Host dataset and show how curvature estimates can be used to detect a red-team attack faster than naive methods, as well as to discover non-constant latent curvature in co-authorship networks in physics. The code for this paper is available at //github.com/SteveJWR/netcurve, and the methods are implemented in the R package //github.com/SteveJWR/lolaR.
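The geometric idea behind the estimating function can be illustrated compactly: in a model space of constant curvature, the distance from a vertex to the midpoint of the opposite side is an explicit function of the three side lengths (the Euclidean case is Apollonius' median formula), so matching that prediction to an observed midpoint distance pins down the curvature. The Python sketch below is our own illustration of this comparison, not the estimator implemented in lolaR; all function names are hypothetical.

```python
import numpy as np

def midpoint_distance(a, b, c, kappa):
    """Model distance from vertex Z to the midpoint of side XY in
    constant curvature kappa, where c = d(X, Y), a = d(Y, Z),
    b = d(X, Z).  Assumes the triangle fits in the model space."""
    if abs(kappa) < 1e-12:
        # Euclidean median (Apollonius): m^2 = (2a^2 + 2b^2 - c^2)/4.
        return np.sqrt(max(0.0, (2*a**2 + 2*b**2 - c**2) / 4.0))
    s = np.sqrt(abs(kappa))
    if kappa > 0:   # spherical midpoint formula
        val = (np.cos(s*a) + np.cos(s*b)) / (2.0*np.cos(s*c/2.0))
        return float(np.arccos(np.clip(val, -1.0, 1.0))) / s
    # hyperbolic midpoint formula
    val = (np.cosh(s*a) + np.cosh(s*b)) / (2.0*np.cosh(s*c/2.0))
    return float(np.arccosh(max(1.0, val))) / s

def estimate_kappa(a, b, c, m_obs, lo=-8.0, hi=8.0, tol=1e-8):
    """Bisect on kappa: the model midpoint distance is increasing in
    kappa, so a bracketed root is found by bisection."""
    f = lambda k: midpoint_distance(a, b, c, k) - m_obs
    if f(lo) * f(hi) > 0:
        raise ValueError("root not bracketed; widen [lo, hi]")
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if f(mid) * f(hi) <= 0 else (lo, mid)
    return 0.5*(lo + hi)

print(estimate_kappa(1.0, 1.0, 1.0, np.sqrt(3)/2))  # approximately 0
```

As a sanity check, a unit equilateral triangle with observed midpoint distance $\sqrt{3}/2$ (its Euclidean median length) yields an estimate near zero curvature.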

We investigate the propagation of acoustic singular surfaces, specifically linear shock waves and nonlinear acceleration waves, in a class of inhomogeneous gases whose ambient mass density varies exponentially. Employing the mathematical tools of singular surface theory, we first determine the evolution of both the jump amplitudes and the locations/velocities of their associated wave-fronts, along with a variety of related analytical results. We then turn to what have become known as Krylov subspace spectral (KSS) methods to numerically simulate the evolution of the full waveforms under consideration. These simulations are not only quite efficient, since KSS allows the use of `large' CFL numbers, but also quite accurate, in the sense of capturing theoretically predicted features of the solution profiles more faithfully than other time-stepping methods, since KSS customizes the computation of the solution components corresponding to the different frequencies involved. The presentation concludes with a listing of possible acoustics-related follow-on studies.
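For readers unfamiliar with singular surface theory, the amplitude of an acceleration wave typically satisfies a Bernoulli-type transport equation along the wave-front; the following schematic form is standard background (the coefficients would depend on the exponential density profile, so this is not the paper's exact result):

```latex
% Schematic acceleration-wave amplitude equation: mu captures
% damping/geometric effects of the medium and beta the nonlinearity.
% With constant coefficients, an initial amplitude exceeding mu/beta
% leads to finite-time blow-up, i.e., shock formation.
\[
\frac{\delta a}{\delta t} + \mu\, a = \beta\, a^{2}
\]
```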

We study the problem of locally private mean estimation of high-dimensional vectors in the Euclidean ball. Existing algorithms for this problem either incur sub-optimal error or have high communication and/or run-time complexity. We propose a new algorithmic framework, ProjUnit, for private mean estimation that yields algorithms that are computationally efficient, have low communication complexity, and incur optimal error up to a $1+o(1)$-factor. Our framework is deceptively simple: each randomizer projects its input to a random low-dimensional subspace, normalizes the result, and then runs an optimal algorithm such as PrivUnitG in the lower-dimensional space. In addition, we show that, by appropriately correlating the random projection matrices across devices, we can achieve fast server run-time. We mathematically analyze the error of the algorithm in terms of properties of the random projections, and study two instantiations. Lastly, our experiments for private mean estimation and private federated learning demonstrate that our algorithms empirically obtain nearly the same utility as optimal ones while having significantly lower communication and computational cost.
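The description above translates almost line-for-line into code. The sketch below shows only the structure of a ProjUnit randomizer; `low_dim_randomizer` is a stand-in for an optimal mechanism such as PrivUnitG (whose details, along with the unbiasing constants, are omitted), so this should be read as an illustration rather than the paper's implementation.

```python
import numpy as np

def proj_unit(x, k, low_dim_randomizer, rng=None):
    """ProjUnit template: random projection -> renormalize -> run an
    optimal low-dimensional randomizer -> lift back to d dimensions.
    Unbiasing/scaling constants are omitted for brevity."""
    if rng is None:
        rng = np.random.default_rng()
    d = x.shape[0]
    W = rng.normal(size=(k, d)) / np.sqrt(k)   # random k-dim subspace
    y = W @ x
    y = y / np.linalg.norm(y)                  # back onto the unit sphere
    z = low_dim_randomizer(y)                  # private, k-dimensional
    return W.T @ z                             # server-side lift to R^d

# Toy usage with a trivial, non-private stand-in for PrivUnitG:
x = np.random.default_rng(1).normal(size=1000)
x /= np.linalg.norm(x)
estimate = proj_unit(x, k=64, low_dim_randomizer=lambda y: y)
```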

Recently there has been great interest in operator learning, in which networks learn operators between function spaces from an essentially infinite-dimensional perspective. In this work we present results on when the operators learned by these networks are injective and surjective. As a warmup, we combine prior work in both the finite-dimensional ReLU and operator learning settings by giving sharp conditions under which ReLU layers with linear neural operators are injective. We then consider the case when the activation function is pointwise bijective and obtain sufficient conditions for the layer to be injective. We remark that this question, while trivial in the finite-rank case, is subtler in the infinite-rank case and is proved using tools from Fredholm theory. Next, we prove that our supplied injective neural operators are universal approximators and that their implementation with finite-rank neural networks is still injective. This ensures that injectivity is not `lost' in the transcription from analytical operators to their finite-rank implementation with networks. Finally, we conclude with an increase in abstraction and consider general conditions under which subnetworks, which may be many layers deep, are injective and surjective, and we provide an exact inversion from a `linearization.' This section uses general arguments from Fredholm theory and Leray-Schauder degree theory for non-linear integral equations to analyze the mapping properties of neural operators in function spaces. These results apply to subnetworks formed from the layers considered in this work, under natural conditions. We believe that our work has applications in Bayesian uncertainty quantification, where injectivity enables likelihood estimation, and in inverse problems, where surjectivity and injectivity correspond to existence and uniqueness of solutions, respectively.

We study a sequential decision-making problem between a principal and an agent with incomplete information on both sides. In this model, the principal and the agent interact in a stochastic environment, and each is privy to observations about the state not available to the other. The principal has the power of commitment, both to elicit information from the agent and to provide signals about her own information. The principal and the agent communicate their signals to each other and select their actions independently based on this communication. Each player receives a payoff based on the state and their joint actions, and the environment then moves to a new state. The interaction continues over a finite time horizon, and both players act to optimize their own total payoffs over the horizon. Our model encompasses as special cases stochastic games of incomplete information and POMDPs, as well as sequential Bayesian persuasion and mechanism design problems. We study both computation of optimal policies and learning in our setting. While the general problems are computationally intractable, we study algorithmic solutions under a conditional independence assumption on the underlying state-observation distributions. We present a polynomial-time algorithm that computes the principal's optimal policy up to an additive approximation. Additionally, we give an efficient learning algorithm for the case where the transition probabilities are not known beforehand. The algorithm guarantees sublinear regret for both players.

Fixpoint operators are tools for reasoning about recursive programs and about data types obtained by induction (e.g. lists, trees) or coinduction (e.g. streams). They were given a categorical treatment with the notion of categories with fixpoints. A theorem by Plotkin and Simpson characterizes the existence and uniqueness of fixpoint operators for categories satisfying some conditions on bifree algebras, and it recovers the standard examples of the category Cppo ($\omega$-complete pointed partial orders and continuous functions) in domain theory and the relational model in linear logic. We present a categorification of this result and develop the theory of 2-categorical fixpoint operators, where the 2-dimensional framework makes it possible to model the execution steps of languages with (co)inductive principles. We recover the standard categorical constructions of initial algebras and final coalgebras for endofunctors, as well as fixpoints of generalized species and polynomial functors.
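As a down-to-earth reminder of what fixpoint operators compute (far below the 2-categorical level of the paper), the least fixed point of a monotone map on a finite powerset lattice can be found by Kleene iteration; the reachable-states computation below is a standard inductive example, not drawn from the paper.

```python
def least_fixpoint(f, bottom=frozenset()):
    """Kleene iteration: bottom, f(bottom), f(f(bottom)), ... until
    stable.  Terminates for monotone f on a finite lattice."""
    x = bottom
    while (y := f(x)) != x:
        x = y
    return x

# Inductive example: states reachable from 0 in a small transition system.
edges = {(0, 1), (1, 2), (2, 1)}
step = lambda s: s | {0} | {v for (u, v) in edges if u in s}
print(least_fixpoint(step))  # frozenset({0, 1, 2})
```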

In this paper, based on an optimization problem, a sketch-and-project method for solving the linear matrix equation AXB = C is proposed. We provide a thorough convergence analysis for the new method and derive a lower bound on the convergence rate, as well as convergence conditions that include the case of a rank-deficient coefficient matrix. By varying three parameters in the new method and its convergence theorems, the new method recovers an array of well-known algorithms and their convergence results. Moreover, using Gaussian sampling, we obtain the Gaussian global randomized Kaczmarz (GaussGRK) method, which shows some advantages in solving the matrix equation AXB = C. Finally, numerical experiments are given to illustrate the effectiveness of the recovered methods.
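To make the family concrete, one simple member of it (under row/column sampling proportional to squared norms) is a Kaczmarz-type update that projects the current iterate onto a single scalar constraint $A_{i,:} X B_{:,j} = C_{ij}$ at each step. The sketch below is a generic illustration of this idea in Python, not the paper's general sketch-and-project operator or its GaussGRK variant.

```python
import numpy as np

def kaczmarz_axb(A, B, C, iters=20000, seed=0):
    """Kaczmarz-type iteration for A X B = C: each step projects X
    onto the hyperplane {X : A[i,:] @ X @ B[:,j] = C[i,j]}."""
    rng = np.random.default_rng(seed)
    (m, n), (p, q) = A.shape, B.shape
    X = np.zeros((n, p))
    row_p = (A**2).sum(axis=1); row_p = row_p / row_p.sum()
    col_p = (B**2).sum(axis=0); col_p = col_p / col_p.sum()
    for _ in range(iters):
        i = rng.choice(m, p=row_p)
        j = rng.choice(q, p=col_p)
        a, b = A[i, :], B[:, j]
        r = C[i, j] - a @ X @ b                 # residual of one constraint
        X += r * np.outer(a, b) / ((a @ a) * (b @ b))
    return X

# Consistent toy problem: the iterate approaches X_true.
rng = np.random.default_rng(1)
A, B = rng.normal(size=(20, 5)), rng.normal(size=(4, 15))
X_true = rng.normal(size=(5, 4))
X_hat = kaczmarz_axb(A, B, A @ X_true @ B)
print(np.linalg.norm(X_hat - X_true))
```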

Orienting the edges of an undirected graph such that the resulting digraph satisfies some given constraints is a classical problem in graph theory, with multiple algorithmic applications. In particular, an $st$-orientation orients each edge of the input graph such that the resulting digraph is acyclic and contains a single source $s$ and a single sink $t$. Computing an $st$-orientation of a graph can be done efficiently, and it finds notable applications in graph algorithms, in particular in graph drawing. On the other hand, finding an $st$-orientation with at most $k$ transitive edges is more challenging; it was recently proven to be NP-hard already when $k=0$. We strengthen this result by showing that the problem remains NP-hard even for graphs of bounded diameter and for graphs of bounded vertex degree. These computational lower bounds naturally raise the question of which structural parameters can lead to tractable parameterizations of the problem. Our main result is a fixed-parameter tractable algorithm parameterized by treewidth.
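To make the quantity being minimized concrete: given an st-numbering, orienting every edge from its lower-numbered to its higher-numbered endpoint produces an st-orientation, and an oriented edge is transitive exactly when the digraph still contains a directed path between its endpoints after the edge is removed. The brute-force count below (using networkx) is purely illustrative; the hardness results above concern finding an orientation that makes this count small.

```python
import networkx as nx

def count_transitive_edges(G, numbering):
    """Orient each edge of G from lower to higher st-number, then count
    edges (u, v) for which a directed u -> v path survives the edge's
    removal (i.e., transitive edges)."""
    D = nx.DiGraph()
    D.add_nodes_from(G.nodes)
    for u, v in G.edges:
        if numbering[u] < numbering[v]:
            D.add_edge(u, v)
        else:
            D.add_edge(v, u)
    count = 0
    for u, v in list(D.edges):
        D.remove_edge(u, v)
        if nx.has_path(D, u, v):
            count += 1
        D.add_edge(u, v)
    return count

# Example: K4 with the identity numbering has 3 transitive edges.
G = nx.complete_graph(4)
print(count_transitive_edges(G, {v: v for v in G.nodes}))  # 3
```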

We study the complexity of isomorphism problems for d-way arrays, or tensors, under natural actions by classical groups such as orthogonal, unitary, and symplectic groups. Such problems arise naturally in statistical data analysis and quantum information. We study two types of complexity-theoretic questions. First, for a fixed action type (isomorphism, conjugacy, etc.), we relate the complexity of the isomorphism problem over a classical group to that over the general linear group. Second, for a fixed group type (orthogonal, unitary, or symplectic), we compare the complexity of the decision problems for different actions. Our main results are as follows. First, for orthogonal and symplectic groups acting on 3-way arrays, the isomorphism problems reduce to the corresponding problem over the general linear group. Second, for orthogonal and unitary groups, the isomorphism problems of five natural actions on 3-way arrays are polynomial-time equivalent, and the d-tensor isomorphism problem reduces to the 3-tensor isomorphism problem for any fixed d>3. For unitary groups, the preceding result implies that LOCC classification of tripartite quantum states is at least as difficult as LOCC classification of d-partite quantum states for any d. Lastly, we also show that the graph isomorphism problem reduces to the tensor isomorphism problem over orthogonal and unitary groups.

The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of differential equations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
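The "two sides of the same coin" observation can be made concrete in a few lines: an explicit Euler step of $dx/dt = f_\theta(x)$ with unit step size is exactly a residual block $x \mapsto x + f_\theta(x)$. The sketch below is our own minimal illustration, not code from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 4)) * 0.1   # weights of a small neural vector field
W2 = rng.normal(size=(4, 16)) * 0.1

def f(x):
    """f_theta(x): the learned right-hand side of the ODE."""
    return W2 @ np.tanh(W1 @ x)

def neural_ode_euler(x0, t1=1.0, steps=100):
    """Explicit Euler integration of dx/dt = f(x) from 0 to t1.
    With steps=1 (h=1) this is exactly one residual block."""
    x, h = x0.copy(), t1 / steps
    for _ in range(steps):
        x = x + h * f(x)
    return x

x0 = rng.normal(size=4)
print(neural_ode_euler(x0))        # fine discretisation of the flow
print(x0 + f(x0))                  # a single residual block (h = 1)
```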
