
Let $\Omega = [0,1]^d$ be the unit cube in $\mathbb{R}^d$. We study the problem of how efficiently, in terms of the number of parameters, deep neural networks with the ReLU activation function can approximate functions in the Sobolev spaces $W^s(L_q(\Omega))$ and Besov spaces $B^s_r(L_q(\Omega))$, with error measured in the $L_p(\Omega)$ norm. This problem is important when studying the application of neural networks in a variety of fields, including scientific computing and signal processing, and has previously been completely solved only when $p=q=\infty$. Our contribution is to provide a complete solution for all $1\leq p,q\leq \infty$ and $s > 0$, including asymptotically matching upper and lower bounds. The key technical tool is a novel bit-extraction technique which gives an optimal encoding of sparse vectors. This enables us to obtain sharp upper bounds in the non-linear regime where $p > q$. We also provide a novel method for deriving $L_p$-approximation lower bounds based upon VC-dimension when $p < \infty$. Our results show that very deep ReLU networks significantly outperform classical methods of approximation in terms of the number of parameters, but that this comes at the cost of parameters which are not encodable.
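The piecewise-linear nature of ReLU approximation can be illustrated with a standard textbook construction (unrelated to the paper's bit-extraction technique): a one-hidden-layer network whose unit weights encode the slope changes of a linear spline reproduces that spline exactly. A minimal pure-Python sketch, with all names chosen for illustration:

```python
def relu(x):
    return x if x > 0 else 0.0

def relu_interpolant(f, knots):
    """One-hidden-layer ReLU network g(x) = c0 + sum_i w_i * relu(x - t_i)
    that interpolates f at the given sorted knots (a free-knot linear spline)."""
    ys = [f(t) for t in knots]
    slopes = [(ys[i + 1] - ys[i]) / (knots[i + 1] - knots[i])
              for i in range(len(knots) - 1)]
    # each hidden unit's outer weight is the change of slope at its knot
    w = [slopes[0]] + [slopes[i] - slopes[i - 1] for i in range(1, len(slopes))]
    c0 = ys[0]
    def g(x):
        return c0 + sum(wi * relu(x - ti) for wi, ti in zip(w, knots[:-1]))
    return g

# interpolate f(x) = x^2 on [0, 1] with 5 knots
g = relu_interpolant(lambda x: x * x, [0, 0.25, 0.5, 0.75, 1])
```

The network uses one parameter per breakpoint, which is the starting point for parameter-counting arguments of this kind.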

Related Content


Given a continuous definable function $f: S \to \mathbb{R}$ on a definable set $S$, we study sublevel sets of the form $S^f_t = \{x \in S: f(x) \leq t\}$ for all $t \in \mathbb{R}$. Using o-minimal structures, we prove that the Euler characteristic of $S^f_t$ is right continuous with respect to $t$. Furthermore, when $S$ is compact, we show that $S^f_{t+\delta}$ deformation retracts to $S^f_t$ for all sufficiently small $\delta > 0$. Applying these results, we also characterize the relationship between the concepts of Euler characteristic transform and smooth Euler characteristic transform in topological data analysis.
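In one dimension, a sublevel set of a tame function is a finite union of intervals, and its Euler characteristic is simply the number of connected components. The grid-based sketch below (an illustrative numerical approximation, not the o-minimal machinery of the paper) shows the characteristic jumping as $t$ crosses a critical value:

```python
def sublevel_components(f, t, grid):
    """Number of connected components of {x : f(x) <= t}, estimated on a
    sorted 1-D grid; for a finite union of intervals this equals the
    Euler characteristic of the sublevel set."""
    count, inside = 0, False
    for x in grid:
        if f(x) <= t:
            if not inside:
                count += 1
            inside = True
        else:
            inside = False
    return count

# a double-well function on [0, 1]: wells at 0.25 and 0.75, merging at t = 0.0625
f = lambda x: min((x - 0.25) ** 2, (x - 0.75) ** 2)
grid = [i / 10000 for i in range(10001)]
```

Below the merge level the sublevel set has two components; above it, one.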

We investigate the equational theory, with respect to languages, of Kleene algebra terms with variable complements, i.e., (language) complement applied only to variables. While for plain Kleene algebra terms the equational theory w.r.t. languages coincides with language equivalence (under the standard language valuation), this coincidence breaks once the terms are extended with complements. In this paper, we prove the decidability of some fragments of the equational theory: the universality problem is coNP-complete, and deciding an inequality t <= s is coNP-complete when t does not contain the Kleene star. To this end, we introduce words-to-letters valuations; they form a class of valuations sufficient for the equational theory and ease the investigation of the equational theory w.r.t. languages. Additionally, we prove that for words with variable complements, the equational theory coincides with word equivalence.

We propose and analyze exact and inexact regularized Newton-type methods for finding a global saddle point of \emph{convex-concave} unconstrained min-max optimization problems. Compared to first-order methods, our understanding of second-order methods for min-max optimization is relatively limited, as obtaining global rates of convergence with second-order information is much more involved. In this paper, we examine how second-order information can be used to speed up extra-gradient methods, even under inexactness. Specifically, we show that the proposed algorithms generate iterates that remain within a bounded set and the averaged iterates converge to an $\epsilon$-saddle point within $O(\epsilon^{-2/3})$ iterations in terms of a restricted gap function. Our algorithms match the theoretically established lower bound in this context and our analysis provides a simple and intuitive convergence analysis for second-order methods without any boundedness requirements. Finally, we present a series of numerical experiments on synthetic and real data that demonstrate the efficiency of the proposed algorithms.
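The first-order extra-gradient step that such second-order methods build on takes a look-ahead half step and then updates from the original point using the look-ahead gradients. A minimal sketch on the bilinear objective $f(x,y) = xy$ (an illustrative choice, not an example from the paper):

```python
def extragradient(grad_x, grad_y, x, y, step=0.5, iters=200):
    """Extra-gradient method for min_x max_y f(x, y): evaluate the
    gradients at a half step (xh, yh), then update from (x, y) using
    those gradients. Returns the last and the averaged iterate."""
    sx = sy = 0.0
    for _ in range(iters):
        xh = x - step * grad_x(x, y)
        yh = y + step * grad_y(x, y)
        x = x - step * grad_x(xh, yh)
        y = y + step * grad_y(xh, yh)
        sx += x
        sy += y
    return (x, y), (sx / iters, sy / iters)

# f(x, y) = x * y has its unique saddle point at (0, 0)
(xl, yl), (xa, ya) = extragradient(lambda x, y: y, lambda x, y: x, 1.0, 1.0)
```

On this bilinear problem plain gradient descent-ascent diverges, while the extra step contracts the iterates toward the saddle point.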

Let $\gamma$ be a generic closed curve in the plane. Samuel Blank, in his 1967 Ph.D. thesis, determined if $\gamma$ is self-overlapping by geometrically constructing a combinatorial word from $\gamma$. More recently, Zipei Nie, in an unpublished manuscript, computed the minimum homotopy area of $\gamma$ by constructing a combinatorial word algebraically. We provide a unified framework for working with both words and determine the settings under which Blank's word and Nie's word are equivalent. Using this equivalence, we give a new geometric proof for the correctness of Nie's algorithm. Unlike previous work, our proof is constructive which allows us to naturally compute the actual homotopy that realizes the minimum area. Furthermore, we contribute to the theory of self-overlapping curves by providing the first polynomial-time algorithm to compute a self-overlapping decomposition of any closed curve $\gamma$ with minimum area.

Orbifolds are a modern mathematical concept that arises in the study of hyperbolic geometry, with applications in computer graphics and visualization. In this paper, we use rooms with mirrors as a visual metaphor for orbifolds. Given an arbitrary two-dimensional kaleidoscopic orbifold, we provide an algorithm to construct a Euclidean, spherical, or hyperbolic polygon that matches the orbifold. This polygon is then used to create a room for which the polygon serves as the floor and the ceiling. With our system, which implements M\"obius transformations, the user can interactively edit the scene and see the reflections of the edited objects. To correctly visualize non-Euclidean orbifolds, we adapt the rendering algorithms to account for the geodesics that light rays follow in these spaces. Our interactive orbifold design system allows the user to create arbitrary two-dimensional kaleidoscopic orbifolds. In addition, our mirror-based orbifold visualization approach has the potential to help users gain insight into the orbifold, including its orbifold notation and its universal cover, which may also be spherical or hyperbolic space.
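A Möbius transformation acts on the (extended) complex plane as $z \mapsto (az+b)/(cz+d)$ with $ad - bc \neq 0$. The sketch below (illustrative only, not the paper's rendering code) builds the disk automorphism $\varphi_p(z) = (z-p)/(1-\bar{p}z)$, a hyperbolic isometry of the Poincaré disk that moves $p$ to the origin while preserving the unit circle:

```python
import cmath

def mobius(a, b, c, d):
    """The Möbius transformation z -> (a*z + b) / (c*z + d); it is
    invertible exactly when a*d - b*c != 0."""
    def apply(z):
        return (a * z + b) / (c * z + d)
    return apply

# phi_p(z) = (z - p) / (1 - conj(p) * z): sends p to 0, maps the unit
# circle to itself, so it is an isometry of the Poincaré disk model
p = 0.3 + 0.2j
phi = mobius(1, -p, -p.conjugate(), 1)
```

Composing such maps with rotations and reflections generates the mirror symmetries seen in the kaleidoscopic rooms.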

We consider the problem of computing the Maximal Exact Matches (MEMs) of a given pattern $P[1 .. m]$ on a large repetitive text collection $T[1 .. n]$, which is represented as a (hopefully much smaller) run-length context-free grammar of size $g_{rl}$. We show that the problem can be solved in time $O(m^2 \log^\epsilon n)$, for any constant $\epsilon > 0$, on a data structure of size $O(g_{rl})$. Further, on a locally consistent grammar of size $O(\delta\log\frac{n}{\delta})$, the time decreases to $O(m\log m(\log m + \log^\epsilon n))$. The value $\delta$ is a function of the substring complexity of $T$ and $\Omega(\delta\log\frac{n}{\delta})$ is a tight lower bound on the compressibility of repetitive texts $T$, so our structure has optimal size in terms of $n$ and $\delta$. We extend our results to several related problems, such as finding $k$-MEMs, MUMs, rare MEMs, and applications.
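For intuition, MEMs can be enumerated naively by scanning the plain text: a MEM is a triple $(i, j, t)$ with $P[i..j] = T[t..t+j-i]$ that cannot be extended left or right. The quadratic-time sketch below only serves as a reference definition; the point of the paper's structures is to achieve this without storing $T$ uncompressed:

```python
def maximal_exact_matches(P, T):
    """Naive O(|P| * |T|) enumeration of MEMs, returned as 0-based
    triples (i, j, t) meaning P[i..j] == T[t..t+j-i] and the match
    extends neither to the left nor to the right."""
    mems = set()
    for i in range(len(P)):
        for t in range(len(T)):
            if P[i] != T[t]:
                continue
            # skip if the match could be extended one position to the left
            if i > 0 and t > 0 and P[i - 1] == T[t - 1]:
                continue
            # otherwise extend to the right as far as possible
            j, u = i, t
            while j + 1 < len(P) and u + 1 < len(T) and P[j + 1] == T[u + 1]:
                j += 1
                u += 1
            mems.add((i, j, t))
    return mems

mems = maximal_exact_matches("aba", "ababa")
```

For $P = $ "aba" and $T = $ "ababa" this reports the two full occurrences of "aba" plus two single-character matches that are blocked at the text boundaries.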

Mesh degeneration is a bottleneck for fluid-structure interaction (FSI) simulations and for shape optimization via the method of mappings. In both cases, an appropriate mesh motion technique is required. The choice is typically based on heuristics, e.g., the solution operators of partial differential equations (PDEs), such as the Laplace or biharmonic equation. Especially the latter, which shows good numerical performance for large displacements, is expensive. Moreover, from a continuous perspective, choosing the mesh motion technique is to a certain extent arbitrary and has no influence on the physically relevant quantities. Therefore, we consider approaches inspired by machine learning. We present a hybrid PDE-NN approach, where the neural network (NN) serves as parameterization of a coefficient in a second order nonlinear PDE. We ensure existence of solutions for the nonlinear PDE by the choice of the neural network architecture. Moreover, we present an approach where a neural network corrects the harmonic extension such that the boundary displacement is not changed. In order to avoid technical difficulties in coupling finite element and machine learning software, we work with a splitting of the monolithic FSI system into three smaller subsystems. This allows us to solve the mesh motion equation in a separate step. We assess the quality of the learned mesh motion technique by applying it to an FSI benchmark problem.
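The harmonic-extension baseline mentioned above can be sketched with a Jacobi solver for the Laplace equation on a small grid (an illustrative toy, not the authors' finite-element setup): boundary displacements are prescribed and the interior values relax to the discrete harmonic extension.

```python
def harmonic_extension(boundary, n, iters=2000):
    """Extend prescribed boundary values into an n x n grid by solving the
    Laplace equation with Jacobi iteration: each interior node converges
    to the average of its four neighbours."""
    u = [[0.0] * n for _ in range(n)]
    for (i, j), val in boundary.items():
        u[i][j] = val
    for _ in range(iters):
        new = [row[:] for row in u]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                if (i, j) not in boundary:
                    new[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j]
                                        + u[i][j - 1] + u[i][j + 1])
        u = new
    return u

# displacement 0 on the left edge, 1 on the right, linear on top and bottom
n = 9
boundary = {}
for k in range(n):
    boundary[(k, 0)] = 0.0
    boundary[(k, n - 1)] = 1.0
    boundary[(0, k)] = k / (n - 1)
    boundary[(n - 1, k)] = k / (n - 1)
u = harmonic_extension(boundary, n)
```

With these boundary values the exact harmonic extension is the linear field $u(i, j) = j/(n-1)$, which the iteration recovers.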

We derive large-sample and other limiting distributions of the ``frequency of frequencies'' vector, ${\bf M_n}$, together with the number of species, $K_n$, in a Poisson-Dirichlet or generalised Poisson-Dirichlet gene or species sampling model. Models analysed include those constructed from gamma and $\alpha$-stable subordinators by Kingman, the two-parameter extension by Pitman and Yor, and another two-parameter version constructed by omitting large jumps from an $\alpha$-stable subordinator. In the Poisson-Dirichlet case ${\bf M_n}$ and $K_n$ turn out to be asymptotically independent; notably, and of particular relevance for statistical applications, in the other cases the conditional limiting distribution of ${\bf M_n}$ given $K_n$ is normal, after certain centering and norming.
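The quantities ${\bf M_n}$ and $K_n$ are easy to simulate via the two-parameter (Pitman-Yor) Chinese restaurant process, which realises the sampling model sequentially. The sketch below is illustrative, not from the paper; the parameter names `alpha` and `theta` follow the usual Pitman-Yor convention:

```python
import random

def sample_crp(n, alpha=0.5, theta=1.0, seed=1):
    """Two-parameter Chinese restaurant process: seat n customers and
    return the resulting table (species) sizes. A new table is opened
    with probability (theta + alpha * k) / (theta + i); an existing
    table of size s is joined with probability (s - alpha) / (theta + i)."""
    random.seed(seed)
    sizes = []
    for i in range(n):
        k = len(sizes)
        u = random.random() * (theta + i)
        if u < theta + alpha * k:
            sizes.append(1)                  # new species
        else:
            u -= theta + alpha * k
            for j in range(k):
                u -= sizes[j] - alpha
                if u < 0:
                    sizes[j] += 1
                    break
            else:                            # floating-point slack
                sizes[-1] += 1
    return sizes

def freq_of_freqs(sizes):
    """M_n as a dict: M[r] = number of species observed exactly r times."""
    M = {}
    for s in sizes:
        M[s] = M.get(s, 0) + 1
    return M

sizes = sample_crp(1000)
M = freq_of_freqs(sizes)
```

By construction $\sum_r M_n[r] = K_n$ and $\sum_r r\, M_n[r] = n$, which the simulation preserves exactly.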

Given a rectangle $R$ with area $A$ and a set of areas $L=\{A_1,...,A_n\}$ with $\sum_{i=1}^n A_i = A$, we consider the problem of partitioning $R$ into $n$ sub-regions $R_1,...,R_n$ with areas $A_1,...,A_n$ such that the total perimeter of all sub-regions is minimized. The goal is to create square-like sub-regions, which are often preferable. We propose an efficient $1.203$--approximation algorithm for this problem based on a divide and conquer scheme that runs in $\mathcal{O}(n^2)$ time. For the special case when the aspect ratios of all rectangles are bounded from above by 3, the approximation factor is $2/\sqrt{3} \approx 1.1547$. We also present a modified version of our algorithm as a heuristic that achieves better average and best run times.
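A simple guillotine heuristic illustrates the problem setup: split the area list in half, cut the rectangle perpendicular to its longer side in the corresponding ratio, and recurse. This is only a sketch of the general divide-and-conquer idea, not the paper's $1.203$-approximation algorithm:

```python
def slice_partition(x, y, w, h, areas):
    """Partition the axis-aligned rectangle (x, y, w, h) into sub-rectangles
    with exactly the requested areas (assumes sum(areas) == w * h), using
    recursive guillotine cuts perpendicular to the longer side."""
    if len(areas) == 1:
        return [(x, y, w, h)]
    mid = len(areas) // 2
    left, right = areas[:mid], areas[mid:]
    frac = sum(left) / sum(areas)
    if w >= h:                       # vertical cut across the longer side
        return (slice_partition(x, y, w * frac, h, left)
                + slice_partition(x + w * frac, y, w * (1 - frac), h, right))
    return (slice_partition(x, y, w, h * frac, left)
            + slice_partition(x, y + h * frac, w, h * (1 - frac), right))

def total_perimeter(rects):
    return sum(2 * (w + h) for _, _, w, h in rects)

rects = slice_partition(0.0, 0.0, 1.0, 1.0, [0.25, 0.25, 0.25, 0.25])
```

On four equal areas in the unit square this produces four $0.5 \times 0.5$ squares, which is optimal here; in general such a heuristic can produce elongated sub-rectangles, which is what the paper's algorithm guards against.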

Let $C$ be a linear code of length $n$ and dimension $k$ over the finite field $\mathbb{F}_{q^m}$. The trace code $\mathrm{Tr}(C)$ is a linear code of the same length $n$ over the subfield $\mathbb{F}_q$. The obvious upper bound for the dimension of the trace code over $\mathbb{F}_q$ is $mk$. If equality holds, then we say that $C$ has maximum trace dimension. The problem of finding the true dimension of trace codes and their duals is relevant for the size of the public key of various code-based cryptographic protocols. Let $C_{\mathbf{a}}$ denote the code obtained from $C$ and a multiplier vector $\mathbf{a}\in (\mathbb{F}_{q^m})^n$. In this paper, we give a lower bound for the probability that a random multiplier vector produces a code $C_{\mathbf{a}}$ of maximum trace dimension. We give an interpretation of the bound for the class of algebraic geometry codes in terms of the degree of the defining divisor. The bound explains the experimental fact that random alternant codes have minimal dimension. Our bound holds whenever $n\geq m(k+h)$, where $h\geq 0$ is the Singleton defect of $C$. For the extremal case $n=m(h+k)$, numerical experiments reveal a close connection between the probability of having maximum trace dimension and the probability that a random matrix has full rank.
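The definitions can be made concrete over $\mathbb{F}_4/\mathbb{F}_2$ (so $m = 2$): since the trace map is $\mathbb{F}_2$-linear, $\mathrm{Tr}(C)$ is spanned by the traces of $a \cdot g$ for $a$ ranging over a basis $\{1, \omega\}$ of $\mathbb{F}_4$ and $g$ over the generator rows of $C$, giving at most $mk$ spanning vectors. An illustrative sketch (not from the paper), with $\mathbb{F}_4$ elements encoded as $0, 1, 2 (= \omega), 3 (= \omega + 1)$:

```python
# GF(4) = GF(2)[w]/(w^2 + w + 1); addition is XOR, Tr(x) = x + x^2
GF4_TRACE = [0, 0, 1, 1]
LOG, EXP = {1: 0, 2: 1, 3: 2}, [1, 2, 3]     # discrete log tables for GF(4)*

def gf4_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 3]

def gf2_rank(rows):
    """Rank over GF(2) of 0/1 row vectors, via an XOR basis keyed by
    leading-bit position (Gaussian elimination)."""
    basis, rank = {}, 0
    for r in rows:
        v = 0
        for bit in r:
            v = (v << 1) | bit
        while v:
            top = v.bit_length() - 1
            if top in basis:
                v ^= basis[top]
            else:
                basis[top] = v
                rank += 1
                break
    return rank

def trace_dimension(gen_rows):
    """dim over GF(2) of Tr(C) for the GF(4)-code spanned by gen_rows:
    Tr(C) is spanned by componentwise traces of a * g, a in {1, w}."""
    spanning = [[GF4_TRACE[gf4_mul(a, c)] for c in g]
                for g in gen_rows for a in (1, 2)]
    return gf2_rank(spanning)
```

For the $k = 1$ code generated by $(1, \omega, \omega+1)$ the trace dimension is $mk = 2$ (maximum trace dimension), whereas the repetition-style generator $(1, 1, 1)$ only attains dimension $1$.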
