
We present a method of detecting bifurcations by locating zeros of a signed version of the smallest singular value of the Jacobian. This enables the use of quadratically convergent root-bracketing techniques or Chebyshev interpolation to locate bifurcation points. Only positive singular values have to be computed, though the method relies on the existence of an analytic or smooth singular value decomposition (SVD). The sign of the determinant of the Jacobian, computed as part of the bidiagonal reduction in the SVD algorithm, eliminates slope discontinuities at the zeros of the smallest singular value. We use the method to search for spatially quasi-periodic traveling water waves that bifurcate from large-amplitude periodic waves. The water wave equations are formulated in a conformal mapping framework to facilitate the computation of the quasi-periodic Dirichlet-Neumann operator. We find examples of pure gravity waves with zero surface tension and overhanging gravity-capillary waves. In both cases, the waves have two spatial quasi-periods whose ratio is irrational. We follow the secondary branches via numerical continuation beyond the realm of linearization about solutions on the primary branch to obtain traveling water waves that extend over the real line with no two crests or troughs of exactly the same shape. The pure gravity wave problem is of relevance to ocean waves, where capillary effects can be neglected. Such waves can only exist through secondary bifurcation as they do not persist to zero amplitude. The gravity-capillary wave problem demonstrates the effectiveness of using the signed smallest singular value as a test function for multi-parameter bifurcation problems. This test function becomes mesh independent once the mesh is fine enough.
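
The test function itself is simple to sketch. The following toy example (ours, not the authors' code; the paper obtains the determinant sign from the bidiagonal reduction inside the SVD, whereas we call a separate determinant routine for brevity) computes the signed smallest singular value of a parameter-dependent Jacobian and brackets its zero with Brent's method:

```python
# A minimal sketch (ours): the signed smallest singular value as a
# bifurcation test function, with Brent's method bracketing its zero
# along a one-parameter family of Jacobians.
import numpy as np
from scipy.optimize import brentq

def signed_sigma_min(J):
    """Smallest singular value of J, multiplied by the sign of det(J).

    Note: the paper extracts the sign from the bidiagonal reduction inside
    the SVD; the separate determinant call here is a simplification.
    """
    sigma_min = np.linalg.svd(J, compute_uv=False)[-1]  # values sorted descending
    return np.sign(np.linalg.det(J)) * sigma_min

# Toy family whose Jacobian becomes singular at t = 0.5.
def J_of_t(t):
    return np.array([[1.0, 0.0],
                     [0.0, t - 0.5]])

t_star = brentq(lambda t: signed_sigma_min(J_of_t(t)), 0.0, 1.0)
print(f"bifurcation parameter ~ {t_star:.6f}")  # ~ 0.500000
```

The sign factor removes the slope discontinuity that $|\sigma_{\min}|$ alone would have at its zero, which is what makes quadratically convergent bracketing possible.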

Related content

Modern machine learning classifiers often exhibit vanishing classification error on the training set. They achieve this by learning nonlinear representations of the inputs that map the data into linearly separable classes. Motivated by these phenomena, we revisit high-dimensional maximum margin classification for linearly separable data. We consider a stylized setting in which data $(y_i,{\boldsymbol x}_i)$, $i\le n$, are i.i.d. with ${\boldsymbol x}_i\sim\mathsf{N}({\boldsymbol 0},{\boldsymbol \Sigma})$ a $p$-dimensional Gaussian feature vector, and $y_i \in\{+1,-1\}$ a label whose distribution depends on a linear combination of the covariates $\langle {\boldsymbol \theta}_*,{\boldsymbol x}_i \rangle$. While the Gaussian model might appear extremely simplistic, universality arguments can be used to show that the results derived in this setting also apply to the output of certain nonlinear featurization maps. We consider the proportional asymptotics $n,p\to\infty$ with $p/n\to \psi$, and derive exact expressions for the limiting generalization error. We use this theory to derive two results of independent interest: $(i)$ sufficient conditions on $({\boldsymbol \Sigma},{\boldsymbol \theta}_*)$ for `benign overfitting' that parallel previously derived conditions in the case of linear regression; $(ii)$ an asymptotically exact expression for the generalization error when max-margin classification is used in conjunction with feature vectors produced by random one-layer neural networks.
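
The setting is easy to probe numerically. Below is a hedged Monte Carlo sketch (ours; the paper is theoretical) of the proportional regime, using a hard-margin approximation via scikit-learn's SVC with a large penalty parameter, identity covariance, and noiseless linear labels for simplicity:

```python
# A toy Monte Carlo probe (ours) of max-margin classification in the
# proportional regime p/n -> psi, with isotropic Gaussian features and
# noiseless linear labels (a special case of the label model above).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, psi = 400, 0.5
p = int(psi * n)

theta_star = rng.normal(size=p)
theta_star /= np.linalg.norm(theta_star)

X = rng.normal(size=(n, p))          # Sigma = identity, for simplicity
y = np.sign(X @ theta_star)          # linearly separable by construction

# Hard-margin (max-margin) classifier approximated by a very large C.
clf = SVC(kernel="linear", C=1e6).fit(X, y)

# Monte Carlo estimate of the generalization error.
X_test = rng.normal(size=(20000, p))
y_test = np.sign(X_test @ theta_star)
print(f"test error ~ {np.mean(clf.predict(X_test) != y_test):.3f}")
```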

Bayesian Optimization (BO) is used to find the global optima of black-box functions. In this work, we propose a practical BO method for function compositions where the form of the composition is known but the constituent functions are expensive to evaluate. By assuming an independent Gaussian process (GP) model for each of the constituent black-box functions, we propose EI- and UCB-based BO algorithms and demonstrate their ability to outperform vanilla BO and the current state-of-the-art algorithms. We demonstrate a novel application of the proposed methods to dynamic pricing in revenue management when the underlying demand function is expensive to evaluate.
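
As a rough illustration of the idea (not the paper's exact algorithm), the sketch below models only the expensive inner function $h$ with a GP and scores candidates for the known composition $g(h(\cdot))$ by pushing GP posterior samples through $g$ and taking an upper quantile as a UCB-style acquisition; all function choices here are ours:

```python
# An illustrative sketch (not the paper's exact algorithm): composite BO
# for f(x) = g(h(x)) with g known and cheap, h an expensive black box.
# Only h gets a GP; a UCB-style acquisition for the composition is built
# by pushing GP posterior samples through g.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

g = lambda h: -(h - 0.3) ** 2        # known outer function (ours)
h_true = lambda x: np.sin(3 * x)     # expensive inner black box (ours)

rng = np.random.default_rng(1)
X = rng.uniform(0, 2, size=(5, 1))   # initial design
H = h_true(X).ravel()

for _ in range(10):
    # Small alpha adds jitter so repeated query points stay well-conditioned.
    gp = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-6).fit(X, H)
    grid = np.linspace(0, 2, 200).reshape(-1, 1)
    samples = gp.sample_y(grid, n_samples=64, random_state=0)  # (200, 64)
    ucb = np.quantile(g(samples), 0.9, axis=1)  # optimistic value of g(h(x))
    x_next = grid[np.argmax(ucb)].reshape(1, -1)
    X = np.vstack([X, x_next])
    H = np.append(H, h_true(x_next).ravel())

print(f"best x found: {X[np.argmax(g(H))].item():.3f}")
```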

Satisfiability Modulo the Theory of Nonlinear Real Arithmetic, SMT(NRA) for short, concerns the satisfiability of polynomial formulas, which are quantifier-free Boolean combinations of polynomial equations and inequalities with integer coefficients and real variables. In this paper, we propose a local search algorithm for a special subclass of SMT(NRA), where all constraints are strict inequalities. An important fact is that, given a polynomial formula with $n$ variables, the zero level set of the polynomials in the formula decomposes the $n$-dimensional real space into finitely many components (cells) and every polynomial has constant sign in each cell. The key point of our algorithm is a new operation based on real root isolation, called cell-jump, which updates the current assignment along a given direction so that the assignment can `jump' from one cell to another. One cell-jump may adjust the values of several variables, while traditional local search operations, such as flip for SAT and critical move for SMT(LIA), change the value of only one variable. We also design a two-level operation selection to balance the success rate and efficiency. Furthermore, our algorithm can be easily generalized to a wider subclass of SMT(NRA) where polynomial equations linear with respect to some variable are allowed. Experiments show that the algorithm is competitive with state-of-the-art SMT solvers and performs particularly well on formulas with high-degree polynomials.
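
The cell-jump operation can be illustrated in one dimension, where real root isolation along the search direction gives the cell boundaries directly. The following sketch (our notation and simplifications, univariate for clarity) jumps past the nearest root into the adjacent sign-invariant cell:

```python
# A univariate illustration (ours) of cell-jump: the real roots of a
# polynomial mark the cell boundaries, and jumping just past the nearest
# root in a chosen direction lands in the adjacent sign-invariant cell.
import sympy as sp

x = sp.symbols('x')

def cell_jump(poly, current, direction=1, eps=sp.Rational(1, 100)):
    """Jump from `current` past the nearest real root of `poly` in the
    given direction (+1 or -1), landing inside the adjacent cell."""
    roots = sp.real_roots(poly)                       # exact root isolation
    ahead = [r for r in roots
             if float((r - current).evalf()) * direction > 0]
    if not ahead:
        return None                                   # no boundary this way
    key = lambda r: float(r.evalf())
    boundary = min(ahead, key=key) if direction > 0 else max(ahead, key=key)
    return boundary + direction * eps                 # point in the next cell

# The constraint x**2 - 2 < 0 is violated at x = 3; one cell-jump repairs it.
p = x**2 - 2
assignment = cell_jump(p, sp.Integer(3), direction=-1)
print(sp.N(assignment), sp.N(p.subs(x, assignment)))  # ~1.404, value < 0
```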

Parallel-in-time methods for partial differential equations (PDEs) have been the subject of intense development over recent decades, particularly for diffusion-dominated problems. It has been widely reported in the literature, however, that many of these methods perform quite poorly for advection-dominated problems. Here we analyze the particular iterative parallel-in-time algorithm of multigrid reduction-in-time (MGRIT) for discretizations of constant-wave-speed linear advection problems. We focus on common method-of-lines discretizations that employ upwind finite differences in space and Runge-Kutta methods in time. Using a convergence framework we developed in previous work, we prove for a subclass of these discretizations that, if using the standard approach of rediscretizing the fine-grid problem on the coarse grid, robust MGRIT convergence with respect to CFL number and coarsening factor is not possible. This poor convergence and non-robustness is caused, at least in part, by an inadequate coarse-grid correction for smooth Fourier modes known as characteristic components. We propose an alternative coarse-grid operator that provides a better correction of these modes. This coarse-grid operator is related to previous work and uses a semi-Lagrangian discretization combined with an implicitly treated truncation error correction. Theory and numerical experiments show that the coarse-grid operator yields fast MGRIT convergence for many of the method-of-lines discretizations considered, including both implicit and explicit discretizations of high order. Parallel results demonstrate substantial speedup over sequential time-stepping.
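
For reference, the fine-grid discretizations in question look like the following minimal sketch (ours): first-order upwind differences in space with forward Euler, the simplest Runge-Kutta method, in time, applied to constant-wave-speed advection on a periodic domain; MGRIT itself is not implemented here:

```python
# A minimal instance (ours) of the fine-grid discretizations analyzed:
# first-order upwind differences in space, forward Euler in time, applied
# to constant-speed advection u_t + a u_x = 0 on a periodic domain.
import numpy as np

a, nx, nt = 1.0, 128, 256
dx = 1.0 / nx
dt = 0.8 * dx / a                       # CFL number 0.8 (explicit limit is 1)

x = np.arange(nx) * dx
u0 = np.exp(-100 * (x - 0.5) ** 2)      # smooth initial pulse
u = u0.copy()

for _ in range(nt):
    # Upwind difference for a > 0: (u_i - u_{i-1}) / dx, wrapping periodically.
    u -= a * dt / dx * (u - np.roll(u, 1))

print(f"mass drift: {abs(u.sum() - u0.sum()):.2e}")  # conserved to round-off
```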

Although existing neural network approaches have achieved great success on Chinese spelling correction, there is still room for improvement. The model is required to avoid over-correction and to distinguish a correct token from tokens that are phonologically or visually similar to it. In this paper, we propose an error-guided correction model (EGCM) to improve Chinese spelling correction. By borrowing the powerful ability of BERT, we propose a novel zero-shot error detection method to perform a preliminary detection, which guides our model to attend more to the probably wrong tokens in encoding and to avoid modifying the correct tokens in generating. Furthermore, we introduce a new loss function to integrate the error confusion set, which enables our model to distinguish easily misused tokens. Moreover, our model supports highly parallel decoding to meet real application requirements. Experiments are conducted on widely used benchmarks. Our model achieves superior performance over state-of-the-art approaches by a remarkable margin, in both correction quality and computation speed.
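
The zero-shot detection signal can be sketched with an off-the-shelf masked language model: mask each token in turn and flag positions where the model assigns the original token low probability. The snippet below (ours; the model choice, threshold, and example sentence are illustrative assumptions, and the paper integrates detection into encoding rather than running it standalone) shows the idea:

```python
# A hedged sketch (ours) of zero-shot error detection with a masked LM:
# mask each token in turn and flag positions where BERT gives the original
# token low probability. Model, threshold, and sentence are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese").eval()

def suspect_positions(sentence, threshold=0.01):
    ids = tok(sentence, return_tensors="pt")["input_ids"]
    flagged = []
    for i in range(1, ids.shape[1] - 1):            # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[0, i] = tok.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked).logits
        prob = logits[0, i].softmax(-1)[ids[0, i]].item()
        if prob < threshold:                        # unlikely under BERT
            flagged.append((i, tok.convert_ids_to_tokens(ids[0, i].item()), prob))
    return flagged

print(suspect_positions("我今天吃了苹果"))
```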

Temporal memory safety bugs, especially use-after-free and double-free bugs, pose a major security threat to C programs. Real-world exploits utilizing these bugs enable attackers to read and write arbitrary memory locations, causing disastrous violations of confidentiality, integrity, and availability. Many previous solutions retrofit temporal memory safety onto C, but they all either incur high performance overhead or fail to detect certain types of temporal memory safety bugs (or both). In this paper, we propose a temporal memory safety solution that is both efficient and comprehensive. Specifically, we extend Checked C, a spatially safe extension to C, with temporally safe pointers. These are implemented by combining two techniques: fat pointers and dynamic key-lock checks. We show that the fat-pointer solution significantly improves running time and memory overhead compared to the disjoint-metadata approach that provides the same level of protection. With empirical program data and hands-on experience porting real-world applications, we also show that our solution is practical in terms of backward compatibility -- one of the major complaints about fat pointers.
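
The key-lock mechanism is easy to convey with a toy simulation (in Python here, purely for illustration; the actual system is a C extension): every allocation receives a fresh key stored in a lock table, each fat pointer carries an (address, key) pair, and dereferences and frees check that the key still matches the lock:

```python
# A toy simulation (ours) of dynamic key-lock checks behind temporally
# safe fat pointers: freeing changes the lock, so stale (dangling)
# pointers and double frees are caught at the check.
import itertools

_lock_table = {}                      # address -> current lock value
_heap = {}                            # address -> stored value
_next_key = itertools.count(1)

def safe_malloc(addr, value):
    key = next(_next_key)
    _lock_table[addr] = key
    _heap[addr] = value
    return (addr, key)                # "fat pointer": address plus key

def safe_free(ptr):
    addr, key = ptr
    if _lock_table.get(addr) != key:
        raise RuntimeError("double free or invalid free detected")
    _lock_table[addr] = 0             # invalidate the lock

def safe_deref(ptr):
    addr, key = ptr
    if _lock_table.get(addr) != key:
        raise RuntimeError("use-after-free detected")
    return _heap[addr]

p = safe_malloc(0x1000, "data")
print(safe_deref(p))                  # ok: "data"
safe_free(p)
try:
    safe_deref(p)
except RuntimeError as e:
    print(e)                          # use-after-free detected
```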

We consider the truncated multivariate normal distributions for which every component is one-sided truncated. We show that this family of distributions is an exponential family. We identify $\mathcal{D}$, the corresponding natural parameter space, and deduce that the family of distributions is not regular. We prove that the gradient of the cumulant-generating function of the family of distributions remains bounded near certain boundary points in $\mathcal{D}$, and therefore the family is also not steep. We also consider maximum likelihood estimation for $\boldsymbol{\mu}$, the location vector parameter, and $\boldsymbol{\Sigma}$, the positive definite (symmetric) dispersion matrix parameter, of a truncated non-singular multivariate normal distribution. We prove that each solution to the score equations for $(\boldsymbol{\mu},\boldsymbol{\Sigma})$ satisfies the method-of-moments equations, and we obtain a necessary condition for the existence of solutions to the score equations.
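
For concreteness (our notation: $\boldsymbol{a}$ denotes the fixed vector of componentwise truncation points), the density rearranges into canonical exponential-family form,
$$
f(\boldsymbol{x};\boldsymbol{\mu},\boldsymbol{\Sigma}) \;\propto\; \exp\!\big( (\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu})^{\top}\boldsymbol{x} \;-\; \tfrac{1}{2}\,\mathrm{tr}\big(\boldsymbol{\Sigma}^{-1}\boldsymbol{x}\boldsymbol{x}^{\top}\big) \big)\, \mathbf{1}\{\boldsymbol{x}\ge\boldsymbol{a}\},
$$
so the natural parameter is $(\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu},\,-\tfrac{1}{2}\boldsymbol{\Sigma}^{-1})$ with sufficient statistic $(\boldsymbol{x},\,\boldsymbol{x}\boldsymbol{x}^{\top})$; the natural parameter space $\mathcal{D}$ is expressed in these coordinates.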

Our work explores the hardness of $3$SUM instances without certain additive structures, and its applications. As our main technical result, we show that solving $3$SUM on a size-$n$ integer set that avoids solutions to $a+b=c+d$ for $\{a, b\} \ne \{c, d\}$ still requires $n^{2-o(1)}$ time, under the $3$SUM hypothesis. Such sets are called Sidon sets and are well-studied in the field of additive combinatorics.

- Combined with previous reductions, this implies that the All-Edges Sparse Triangle problem on $n$-vertex graphs with maximum degree $\sqrt{n}$ and at most $n^{k/2}$ $k$-cycles for every $k \ge 3$ requires $n^{2-o(1)}$ time, under the $3$SUM hypothesis. This can be used to strengthen the previous conditional lower bounds by Abboud, Bringmann, Khoury, and Zamir [STOC'22] for $4$-Cycle Enumeration, Offline Approximate Distance Oracle, and Approximate Dynamic Shortest Path. In particular, we show that no algorithm for the $4$-Cycle Enumeration problem on $n$-vertex $m$-edge graphs with $n^{o(1)}$ delays has $O(n^{2-\varepsilon})$ or $O(m^{4/3-\varepsilon})$ pre-processing time for $\varepsilon > 0$. We also present a matching upper bound via simple modifications of the known algorithms for $4$-Cycle Detection.

- A slight generalization of the main result also extends the result of Dudek, Gawrychowski, and Starikovskaya [STOC'20] on the $3$SUM hardness of nontrivial 3-Variate Linear Degeneracy Testing (3-LDTs): we show $3$SUM hardness for all nontrivial 4-LDTs.

The proof of our main technical result combines a wide range of tools: the Balog-Szemerédi-Gowers theorem, sparse convolution algorithms, and a new almost-linear hash function with an almost $3$-universal guarantee for integers that do not have small-coefficient linear relations.
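
To give a flavor of the hashing ingredient, the sketch below (ours) demonstrates the classic multiply-shift family, which is almost linear in the sense that $h(x)+h(y)-h(x+y)$ takes only $O(1)$ distinct values modulo $2^s$; the paper's new hash function strengthens this kind of construction with an almost $3$-universal guarantee:

```python
# A sketch (ours) of classic multiply-shift hashing used in 3SUM
# reductions: h(x) = ((a*x) mod 2^w) >> (w - s) for a random odd a.
# Empirically, h(x) + h(y) - h(x + y) mod 2^s takes only two values,
# which is the "almost linear" property such reductions exploit.
import random

random.seed(0)
w, s = 64, 16
a = random.randrange(1, 1 << w, 2)      # random odd multiplier

def h(x):
    return ((a * x) % (1 << w)) >> (w - s)

diffs = {(h(x) + h(y) - h(x + y)) % (1 << s)
         for x, y in ((random.randrange(1 << 40), random.randrange(1 << 40))
                      for _ in range(10000))}
print(sorted(diffs))                    # only two values: [0, 2**s - 1]
```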

Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial intelligence (AI), including computer vision, natural language processing, and speech recognition. However, their superior performance comes at the considerable cost of computational complexity, which greatly hinders their applications in many resource-constrained devices, such as mobile phones and Internet of Things (IoT) devices. Therefore, methods and techniques that are able to lift the efficiency bottleneck while preserving the high accuracy of DNNs are in great demand in order to enable numerous edge AI applications. This paper provides an overview of efficient deep learning methods, systems, and applications. We start by introducing popular model compression methods, including pruning, factorization, quantization, and compact model design. To reduce the large design cost of these manual solutions, we discuss the AutoML framework for each of them, such as neural architecture search (NAS) and automated pruning and quantization. We then cover efficient on-device training to enable user customization based on the local data on mobile devices. Apart from general acceleration techniques, we also showcase several task-specific accelerations for point cloud, video, and natural language processing by exploiting their spatial sparsity and temporal/token redundancy. Finally, to support all these algorithmic advancements, we introduce efficient deep learning system design from both software and hardware perspectives.
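
Two of the compression methods surveyed, pruning and quantization, can be conveyed in a few lines. The sketch below (ours, deliberately minimal) applies global magnitude pruning to a weight matrix and then symmetric uniform int8 post-training quantization to the survivors:

```python
# A minimal sketch (ours) of two surveyed compression methods: global
# magnitude pruning (zero out the smallest weights) followed by symmetric
# uniform post-training quantization of the survivors to int8.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)   # a dense layer's weights

# --- Pruning: keep only the largest 10% of weights by magnitude. ---
threshold = np.quantile(np.abs(W), 0.9)
mask = np.abs(W) >= threshold
W_pruned = W * mask

# --- Quantization: symmetric uniform int8 over the remaining range. ---
scale = np.abs(W_pruned).max() / 127.0
W_int8 = np.round(W_pruned / scale).astype(np.int8)
W_deq = W_int8.astype(np.float32) * scale            # dequantize for use

print(f"sparsity: {1 - mask.mean():.2f}, "
      f"max quantization error: {np.abs(W_pruned - W_deq).max():.4f}")
```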

In 1954, Alston S. Householder published Principles of Numerical Analysis, one of the first modern treatments of matrix decomposition, favoring the (block) LU decomposition: the factorization of a matrix into the product of a lower and an upper triangular matrix. Matrix decomposition has since become a core technology in machine learning, largely due to the development of the backpropagation algorithm for fitting neural networks. The sole aim of this survey is to give a self-contained introduction to concepts and mathematical tools in numerical linear algebra and matrix analysis in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections. However, we cannot cover all the useful and interesting results concerning matrix decomposition; given the limited scope of this discussion, we omit, for example, separate treatments of Euclidean spaces, Hermitian spaces, Hilbert spaces, and the complex domain. We refer the reader to the literature in the field of linear algebra for a more detailed introduction to the related fields.
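
As a reminder of the survey's starting point, here is a short, unblocked LU decomposition with partial pivoting (our illustrative implementation; production code should use, e.g., scipy.linalg.lu):

```python
# LU decomposition with partial pivoting (ours, for illustration):
# Gaussian elimination producing P, L, U with P @ A = L @ U.
import numpy as np

def lu_partial_pivot(A):
    A = A.astype(float).copy()
    n = A.shape[0]
    P, L = np.eye(n), np.eye(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))          # pivot row
        A[[k, p]], P[[k, p]] = A[[p, k]].copy(), P[[p, k]].copy()
        L[[k, p], :k] = L[[p, k], :k].copy()         # swap stored multipliers
        for i in range(k + 1, n):
            L[i, k] = A[i, k] / A[k, k]
            A[i, k:] -= L[i, k] * A[k, k:]
    return P, L, np.triu(A)

A = np.array([[2., 1., 1.], [4., 3., 3.], [8., 7., 9.]])
P, L, U = lu_partial_pivot(A)
print(np.allclose(P @ A, L @ U))                     # True
```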
