
Block majorization-minimization (BMM) is a simple iterative algorithm for nonconvex constrained optimization that sequentially minimizes majorizing surrogates of the objective function in each block coordinate while the other coordinates are held fixed. BMM entails a large class of optimization algorithms such as block coordinate descent and its proximal-point variant, expectation-maximization, and block projected gradient descent. We establish that for general constrained nonconvex optimization, BMM with strongly convex surrogates can produce an $\epsilon$-stationary point within $O(\epsilon^{-2}(\log \epsilon^{-1})^{2})$ iterations and asymptotically converges to the set of stationary points. Furthermore, we propose a trust-region variant of BMM that can handle surrogates that are only convex and still obtain the same iteration complexity and asymptotic stationarity. These results hold robustly even when the convex sub-problems are inexactly solved as long as the optimality gaps are summable. As an application, we show that a regularized version of the celebrated multiplicative update algorithm for nonnegative matrix factorization by Lee and Seung has iteration complexity of $O(\epsilon^{-2}(\log \epsilon^{-1})^{2})$. The same result holds for a wide class of regularized nonnegative tensor decomposition algorithms as well as the classical block projected gradient descent algorithm. These theoretical results are validated through various numerical experiments.
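
For concreteness, here is a minimal numpy sketch of the Lee-Seung multiplicative updates for NMF with an added Tikhonov (L2) penalty. This is only one plausible regularized variant; the exact regularizer analyzed in the paper may differ, and the objective, rank, and penalty weight `lam` below are illustrative choices.

```python
import numpy as np

def nmf_multiplicative(X, r, lam=1e-3, n_iter=500, eps=1e-12, seed=0):
    """Lee-Seung multiplicative updates for
    min ||X - WH||_F^2 + lam*(||W||_F^2 + ||H||_F^2),  W, H >= 0.
    The L2 penalty is one common choice of regularizer (an assumption here)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # Each update multiplies elementwise by the ratio of the negative and
        # positive parts of the gradient, which keeps the factors nonnegative.
        H *= (W.T @ X) / (W.T @ W @ H + lam * H + eps)
        W *= (X @ H.T) / (W @ (H @ H.T) + lam * W + eps)
    return W, H

if __name__ == "__main__":
    X = np.abs(np.random.default_rng(1).random((40, 30)))
    W, H = nmf_multiplicative(X, r=5)
    print("relative error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```

Each multiplicative step can be read as a block MM step with a separable surrogate, which is how it falls under the BMM framework discussed above.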

Related content

We propose a method for computing the Lyapunov exponents of renewal equations (delay equations of Volterra type) and of coupled systems of renewal and delay differential equations. The method consists in the reformulation of the delay equation as an abstract differential equation, the reduction of the latter to a system of ordinary differential equations via pseudospectral collocation, and the application of the standard discrete QR method. The effectiveness of the method is shown experimentally and a MATLAB implementation is provided.
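
To make the last ingredient concrete, below is a minimal numpy sketch of the standard discrete QR method for a finite-dimensional ODE system, illustrated on the Lorenz system rather than on a discretized renewal equation; the pseudospectral reduction of the delay equation, which is the core of the proposed method, is not reproduced here, and the step size and horizon are illustrative.

```python
import numpy as np

def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return np.array([sigma * (x[1] - x[0]), x[0] * (rho - x[2]) - x[1], x[0] * x[1] - beta * x[2]])

def lorenz_jac(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    return np.array([[-sigma, sigma, 0.0],
                     [rho - x[2], -1.0, -x[0]],
                     [x[1], x[0], -beta]])

def rk4_step(f, y, h):
    k1 = f(y); k2 = f(y + 0.5 * h * k1); k3 = f(y + 0.5 * h * k2); k4 = f(y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def lyapunov_qr(x0, h=0.01, n_steps=50_000):
    """Discrete QR method: evolve the state and a tangent basis, re-orthonormalize
    the basis by QR at every step, and average the logs of diag(R)."""
    n = x0.size
    x, Q = x0.copy(), np.eye(n)
    log_sum = np.zeros(n)
    for _ in range(n_steps):
        def extended(z):
            xs, Qs = z[:n], z[n:].reshape(n, n)
            return np.concatenate([lorenz(xs), (lorenz_jac(xs) @ Qs).ravel()])
        z = rk4_step(extended, np.concatenate([x, Q.ravel()]), h)
        x, M = z[:n], z[n:].reshape(n, n)
        Q, R = np.linalg.qr(M)
        s = np.sign(np.diag(R)); s[s == 0] = 1.0   # fix signs so diag(R) > 0
        Q, R = Q * s, (R.T * s).T
        log_sum += np.log(np.diag(R))
    return log_sum / (n_steps * h)

# The largest exponent should come out close to the commonly quoted value ~0.9.
print(lyapunov_qr(np.array([1.0, 1.0, 1.0])))
```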

It is often claimed that the theory of function levels proposed by Frege in Grundgesetze der Arithmetik anticipates the hierarchy of types that underlies Church's simple theory of types. This claim roughly states that Frege presupposes a type of functions, in the sense of simple type theory, in the expository language of Grundgesetze. However, this view makes it hard to accommodate function names of two arguments and to view functions as incomplete entities. I propose and defend an alternative interpretation that maps first-level function names in Grundgesetze to simple type-theoretic open terms rather than to closed terms of a function type. This interpretation offers a still unhistorical but more faithful type-theoretic approximation of Frege's theory of levels and can be naturally extended to accommodate second-level functions. It rests on two key observations: that Frege's Roman markers behave essentially like open terms, and that Frege lacks a clear criterion for distinguishing between Roman markers and function names.
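
The contrast at stake can be made vivid with a schematic, purely illustrative type-theoretic rendering (not taken from the paper) of a first-level function name such as "$\xi + 1$", with $\iota$ standing for the type of objects:

```latex
% Illustrative contrast only (requires amsmath).
\[
  \underbrace{\; x + 1 \,:\, \iota \quad (x \,:\, \iota \ \text{free}) \;}_{\text{open-term reading}}
  \qquad \text{vs.} \qquad
  \underbrace{\; \lambda x\,{:}\,\iota .\ x + 1 \,:\, \iota \to \iota \;}_{\text{closed-term reading}}
\]
```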

We consider the numerical approximation of variational problems with orthotropic growth, that is, problems whose integrand depends strongly on the coordinate directions, with possibly different growth in each direction. Under realistic regularity assumptions we derive optimal error estimates. These estimates depend on the existence of an orthotropically stable interpolation operator. Over certain meshes we construct an orthotropically stable interpolant that is also a projection. Numerical experiments illustrate and explore the limits of our theory.
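
As an illustrative model problem (assuming the usual meaning of orthotropic growth, namely a possibly different power of growth in each coordinate direction, which may not be the paper's exact setting), one may keep in mind a functional of the following form:

```latex
% Model functional with orthotropic growth: each partial derivative enters
% with its own exponent p_i (illustrative only).
\[
  J(u) = \int_{\Omega} \sum_{i=1}^{d} \frac{1}{p_i} \bigl|\partial_{x_i} u\bigr|^{p_i} \,\mathrm{d}x,
  \qquad 1 < p_1 \le p_2 \le \dots \le p_d < \infty .
\]
```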

We develop a theory for the representation of opaque solids as volumetric models. Starting from a stochastic representation of opaque solids as random indicator functions, we prove the conditions under which such solids can be modeled using exponential volumetric transport. We also derive expressions for the volumetric attenuation coefficient as a functional of the probability distributions of the underlying indicator functions. We generalize our theory to account for isotropic and anisotropic scattering at different parts of the solid, and for representations of opaque solids as implicit surfaces. We derive our volumetric representation from first principles, which ensures that it satisfies physical constraints such as reciprocity and reversibility. We use our theory to explain, compare, and correct previous volumetric representations, as well as propose meaningful extensions that lead to improved performance in 3D reconstruction tasks.
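
As background for the transport model referred to above, here is a minimal numerical sketch of exponential volumetric transport along a single ray, i.e. the transmittance $T(t)=\exp(-\int_0^{t}\sigma\,ds)$; the attenuation field `sigma_ball` is an arbitrary placeholder, not a representation derived in the paper.

```python
import numpy as np

def transmittance(sigma, origin, direction, t_max, n_samples=256):
    """Exponential volumetric transport along a ray:
    T(t_max) = exp(-∫_0^{t_max} sigma(origin + s*direction) ds),
    approximated here by a simple Riemann sum."""
    ts = np.linspace(0.0, t_max, n_samples)
    dt = ts[1] - ts[0]
    pts = origin[None, :] + ts[:, None] * direction[None, :]
    optical_depth = np.sum(sigma(pts) * dt)
    return np.exp(-optical_depth)

# Placeholder attenuation field: a dense ball of radius 0.5 at the origin.
sigma_ball = lambda p: 5.0 * (np.linalg.norm(p, axis=-1) < 0.5)
print(transmittance(sigma_ball, np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), 2.0))
```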

The Galerkin method is often employed for numerical integration of evolutionary equations, such as the Navier-Stokes equation or the magnetic induction equation. Application of the method requires solving an equation of the form $P(Av-f)=0$ at each time step, where $v$ is an element of a finite-dimensional space $V$ with a basis satisfying the boundary conditions, $P$ is the orthogonal projection onto this space, and $A$ is a linear operator. Usually the coefficients of $v$ expanded in the basis are found by calculating the matrix of $PA$ acting on $V$ and solving the resulting system of linear equations. For physically realistic boundary conditions (such as the no-slip condition for the velocity, or the conditions on the magnetic field imposed by a dielectric outside the fluid volume) the basis is often not orthogonal, and solving the problem can be computationally demanding. We propose an algorithm that reduces the computational cost of such a problem. Suppose there exists a space $W$ containing $V$ such that the difference between the dimensions of $W$ and $V$ is small relative to the dimension of $V$, and solving the problem $P(Aw-f)=0$, where $w$ is an element of $W$, requires fewer operations than solving the original problem. The equation $P(Av-f)=0$ is then solved in two steps: we solve the problem $P(Aw-f)=0$ in $W$, find a correction $h=v-w$ belonging to a complement of $V$ in $W$, and obtain the solution $w+h$. When the dimension of the complement is small, the proposed algorithm is more efficient than the traditional one.
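
For orientation, the following numpy sketch spells out the traditional Galerkin step that the abstract contrasts against, namely assembling the matrix of $PA$ on $V$ for a possibly non-orthogonal basis; the proposed two-step solve in the larger space $W$ followed by a small correction is not reproduced here, and the operator and basis below are toy placeholders.

```python
import numpy as np

def galerkin_solve(A, basis, f):
    """Traditional Galerkin step: solve P(Av - f) = 0 for v in V = span(basis),
    with P the orthogonal projection onto V. For a (possibly non-orthogonal)
    basis matrix B with full column rank this reduces to (B^T A B) c = B^T f."""
    B = basis                       # columns span V
    M = B.T @ A @ B                 # matrix of PA restricted to V
    c = np.linalg.solve(M, B.T @ f)
    return B @ c                    # v in the ambient coordinates

# Toy example: a random operator and a non-orthogonal 3-dimensional V in R^8.
rng = np.random.default_rng(0)
A = np.eye(8) + 0.1 * rng.standard_normal((8, 8))
B = rng.standard_normal((8, 3))
f = rng.standard_normal(8)
v = galerkin_solve(A, B, f)
print("residual component in V:", np.abs(B.T @ (A @ v - f)).max())
```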

Long quantum codes using projective Reed-Muller codes are constructed. We obtain asymmetric and symmetric quantum codes by using the CSS construction and the Hermitian construction, respectively. Quantum codes obtained from projective Reed-Muller codes usually require entanglement assistance, but we show that sometimes we can avoid this requirement by considering monomially equivalent codes. Moreover, we also provide some constructions of quantum codes from subfield subcodes of projective Reed-Muller codes.
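
As a reminder of the nestedness requirement behind the CSS construction, here is a small sketch using classical (affine) Reed-Muller codes rather than the projective codes studied in the paper; the codes, parameters, and helper functions are illustrative only. When $C_2 \subseteq C_1$, the CSS construction yields an $[[n, k_1 - k_2]]$ quantum code.

```python
import itertools
import numpy as np

def reed_muller_generator(r, m):
    """Generator matrix of RM(r, m): evaluate all monomials of degree <= r
    in m binary variables at all 2^m points (columns = evaluation points)."""
    points = np.array(list(itertools.product([0, 1], repeat=m)), dtype=int)
    rows = []
    for deg in range(r + 1):
        for subset in itertools.combinations(range(m), deg):
            rows.append(np.prod(points[:, subset], axis=1) if subset
                        else np.ones(2 ** m, dtype=int))
    return np.array(rows, dtype=int) % 2

def gf2_rank(M):
    """Rank over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((i for i in range(rank, M.shape[0]) if M[i, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for i in range(M.shape[0]):
            if i != rank and M[i, col]:
                M[i] = (M[i] + M[rank]) % 2
        rank += 1
    return rank

# Nested pair C2 = RM(1,3) ⊆ C1 = RM(2,3), giving CSS parameters [[8, 7-4]] = [[8, 3]].
G1, G2 = reed_muller_generator(2, 3), reed_muller_generator(1, 3)
k1, k2, n = gf2_rank(G1), gf2_rank(G2), G1.shape[1]
nested = gf2_rank(np.vstack([G1, G2])) == k1   # C2 ⊆ C1 iff stacking adds no rank
print(f"C2 subset of C1: {nested}, CSS parameters [[{n}, {k1 - k2}]]")
```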

We propose a simple multivariate normality test based on the Kac-Bernstein characterization, which can be conducted by utilising existing statistical independence tests for sums and differences of data samples. We also investigate it empirically and find that, for high-dimensional data, the proposed approach may be more efficient than existing alternatives. The accompanying code repository is provided at \url{//shorturl.at/rtuy5}.
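
A minimal sketch of one possible instantiation of such a test is given below: the sample is split into two independent halves, and the independence of their sum and difference is tested with distance covariance and permutations. The splitting scheme and the choice of independence test are assumptions of this sketch, not necessarily those of the paper.

```python
import numpy as np

def _centered_dists(X):
    """Double-centered Euclidean distance matrix (for distance covariance)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return D - D.mean(0, keepdims=True) - D.mean(1, keepdims=True) + D.mean()

def dcov2(A, B):
    return (A * B).mean()   # biased squared sample distance covariance

def kac_bernstein_normality_test(X, n_perm=500, seed=0):
    """Split the sample into independent halves U, V; by the Kac-Bernstein
    characterization, U+V and U-V are independent iff the data are Gaussian.
    Independence is tested here via distance covariance + permutations."""
    rng = np.random.default_rng(seed)
    X = rng.permutation(X)                      # random split into halves
    half = X.shape[0] // 2
    U, V = X[:half], X[half:2 * half]
    A, B = _centered_dists(U + V), _centered_dists(U - V)
    stat = dcov2(A, B)
    perms = (rng.permutation(half) for _ in range(n_perm))
    perm_stats = [dcov2(A, B[np.ix_(p, p)]) for p in perms]
    p_value = (1 + sum(s >= stat for s in perm_stats)) / (1 + n_perm)
    return stat, p_value

rng = np.random.default_rng(1)
# The p-value should typically be large for the Gaussian sample and small otherwise.
print("Gaussian  :", kac_bernstein_normality_test(rng.standard_normal((300, 4))))
print("Lognormal :", kac_bernstein_normality_test(np.exp(rng.standard_normal((300, 4)))))
```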

We present a simple unifying treatment of a large class of applications from statistical mechanics, econometrics, mathematical finance, and insurance mathematics, where stable (possibly subordinated) L\'evy noise arises as a scaling limit of some form of continuous-time random walk (CTRW). For each application, it is natural to rely on weak convergence results for stochastic integrals on Skorokhod space in Skorokhod's J1 or M1 topologies. As compared to earlier and entirely separate works, we are able to give a more streamlined account while also allowing for greater generality and providing important new insights. For each application, we first make clear how the fundamental conclusions for J1 convergent CTRWs emerge as special cases of the same general principles, and we then illustrate how the specific settings give rise to different results for strictly M1 convergent CTRWs.
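
As a toy illustration of the objects involved, the following sketch simulates a CTRW with heavy-tailed waiting times and jumps, the kind of process whose rescaled paths converge to a subordinated stable Lévy process; the parameters and scaling are illustrative, and the J1/M1 convergence itself is of course not established by simulation.

```python
import numpy as np

def ctrw_path(n, alpha=1.8, beta=0.9, seed=0):
    """Continuous-time random walk: i.i.d. Pareto(beta) waiting times
    (infinite mean for beta < 1) and symmetric Pareto(alpha)-tailed jumps
    (infinite variance for alpha < 2). Illustrative parameters only."""
    rng = np.random.default_rng(seed)
    waits = rng.pareto(beta, size=n) + 1.0
    jumps = rng.choice([-1, 1], size=n) * (rng.pareto(alpha, size=n) + 1.0)
    return np.cumsum(waits), np.cumsum(jumps)

def ctrw_at_times(ts, arrival_times, walk):
    """Value of the CTRW at clock times ts: the position after the last renewal."""
    idx = np.searchsorted(arrival_times, ts, side="right") - 1
    return np.where(idx >= 0, walk[np.clip(idx, 0, None)], 0.0)

arrival_times, walk = ctrw_path(100_000)
ts = np.linspace(0.0, arrival_times[-1], 1000)
print(ctrw_at_times(ts, arrival_times, walk)[:5])
```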

Many complex tasks and environments can be decomposed into simpler, independent parts. Discovering such underlying compositional structure has the potential to expedite adaptation and enable compositional generalization. Despite progress, our most powerful systems struggle to compose flexibly. While most of these systems are monolithic, modularity promises to allow capturing the compositional nature of many tasks. However, it is unclear under which circumstances modular systems discover this hidden compositional structure. To shed light on this question, we study a teacher-student setting with a modular teacher where we have full control over the composition of ground truth modules. This allows us to relate the problem of compositional generalization to that of identification of the underlying modules. We show theoretically that identification up to linear transformation purely from demonstrations is possible in hypernetworks without having to learn an exponential number of module combinations. While our theory assumes the infinite data limit, in an extensive empirical study we demonstrate how meta-learning from finite data can discover modular solutions that generalize compositionally in modular but not monolithic architectures. We further show that our insights translate beyond the teacher-student setting and demonstrate that, in tasks with compositional preferences and in tasks with compositional goals, hypernetworks can discover modular policies that generalize compositionally.
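
To fix ideas, here is a minimal, framework-free sketch of the hypernetwork setup described above: a linear map sends an embedding of the active module combination to the weights of a small student network. All dimensions, the module mask, and the linear embedding are hypothetical placeholders, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
in_dim, out_dim, n_modules, emb_dim = 4, 3, 5, 6

# Hypothetical linear embedding of module combinations (which modules are active).
module_embedding = rng.standard_normal((emb_dim, n_modules))

# Hypernetwork: a linear map from the combination embedding to the flattened
# weights of a linear student network (biases omitted for brevity).
W_hyper = 0.1 * rng.standard_normal((out_dim * in_dim, emb_dim))

def student_forward(x, module_mask):
    """Generate the student's weights from the active-module mask via the
    hypernetwork, then apply the student to the input x."""
    emb = module_embedding @ module_mask            # compositional task embedding
    W = (W_hyper @ emb).reshape(out_dim, in_dim)    # generated student weights
    return W @ x

x = rng.standard_normal(in_dim)
mask = np.array([1, 0, 1, 0, 0], dtype=float)       # modules 0 and 2 active
print(student_forward(x, mask))
```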

Dependence is undoubtedly a central concept in statistics. However, it proves difficult to locate in the literature a formal definition that goes beyond the self-evident 'dependence = non-independence'. This absence has allowed the term 'dependence' and its declinations to be used vaguely and indiscriminately for qualifying a variety of disparate notions, leading to numerous incongruities. For example, the classical Pearson's, Spearman's or Kendall's correlations are widely regarded as 'dependence measures' of major interest, in spite of returning 0 in some cases of deterministic relationships between the variables at play, and hence evidently not measuring dependence at all. Arguing that research on such a fundamental topic would benefit from a slightly more rigid framework, this paper suggests a general definition of the dependence between two random variables defined on the same probability space. Natural enough to align with intuition, the definition is still sufficiently precise to allow unequivocal identification of a 'universal' representation of the dependence structure of any bivariate distribution. Links between this representation and familiar concepts are highlighted, and ultimately the idea of a dependence measure based on that universal representation is explored and shown to satisfy Rényi's postulates.
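
The motivating point, that classical correlation coefficients can vanish under a deterministic relationship, is easy to reproduce numerically; the example below ($Y = X^2$ with $X$ standard normal) is a standard illustration, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.standard_normal(50_000)
y = x ** 2                     # y is a deterministic function of x

# All three classical coefficients are numerically close to zero,
# even though y is completely determined by x.
print("Pearson :", stats.pearsonr(x, y)[0])
print("Spearman:", stats.spearmanr(x, y)[0])
print("Kendall :", stats.kendalltau(x, y)[0])
```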
