
We develop new tools to study landscapes in nonconvex optimization. Given one optimization problem, we pair it with another by smoothly parametrizing the domain. This is either for practical purposes (e.g., to use smooth optimization algorithms with good guarantees) or for theoretical purposes (e.g., to reveal that the landscape satisfies a strict saddle property). In both cases, the central question is: how do the landscapes of the two problems relate? More precisely: how do desirable points such as local minima and critical points in one problem relate to those in the other problem? A key finding in this paper is that these relations are often determined by the parametrization itself, and are almost entirely independent of the cost function. Accordingly, we introduce a general framework to study parametrizations by their effect on landscapes. The framework enables us to obtain new guarantees for an array of problems, some of which were previously treated on a case-by-case basis in the literature. Applications include: optimizing low-rank matrices and tensors through factorizations; solving semidefinite programs via the Burer-Monteiro approach; training neural networks by optimizing their weights and biases; and quotienting out symmetries.
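As a minimal illustration of such a parametrization (a toy sketch of our own, not the paper's framework): the map $\varphi(L,R) = LR^\top$ smoothly parametrizes the nonsmooth set of matrices of rank at most $r$, so running plain gradient descent on the factors optimizes over that set through a smooth domain.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 8, 6, 2
A = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))  # ground-truth rank-r matrix

# phi(L, R) = L R^T maps the smooth domain R^{m x r} x R^{n x r}
# onto the (nonsmooth) set of matrices of rank <= r
L = rng.normal(size=(m, r))
R = rng.normal(size=(n, r))
step = 0.01
for _ in range(3000):
    G = L @ R.T - A  # gradient of f(X) = 0.5 * ||X - A||_F^2 at X = L R^T
    # simultaneous gradient step on both factors (chain rule through phi)
    L, R = L - step * (G @ R), R - step * (G.T @ L)

err = np.linalg.norm(L @ R.T - A)
```

Here the cost $f(X) = \tfrac12\|X-A\|_F^2$ and the rank are illustrative choices; the point is only that critical points of $f\circ\varphi$ on the smooth domain relate to those of $f$ on the rank variety.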

Related content

Non-Hermitian topological phases can exhibit remarkable properties compared with their Hermitian counterparts, such as the breakdown of conventional bulk-boundary correspondence and non-Hermitian topological edge modes. Here, we introduce several deep learning algorithms, based on the multi-layer perceptron (MLP) and the convolutional neural network (CNN), to predict the winding of eigenvalues of non-Hermitian Hamiltonians. Subsequently, we use the smallest module of the periodic circuit as one unit to construct high-dimensional circuit data features. Further, we use the Dense Convolutional Network (DenseNet), a type of convolutional neural network that utilizes dense connections between layers, to design a non-Hermitian topolectrical Chern circuit, as the DenseNet algorithm is more suitable for processing high-dimensional data. Our results demonstrate the effectiveness of deep learning networks in capturing the global topological characteristics of a non-Hermitian system from training data.
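The eigenvalue winding that such networks are trained to predict can be computed directly for a simple model. The sketch below uses the standard Hatano-Nelson chain (our choice of example, not necessarily the model used in the paper) and extracts the winding number from the unwrapped phase of the complex spectrum $E(k)$ around a base energy.

```python
import numpy as np

def eigenvalue_winding(t_right, t_left, base=0.0, nk=400):
    """Winding of the complex spectrum E(k) of the Hatano-Nelson chain
    H(k) = t_right * e^{ik} + t_left * e^{-ik} around a base energy."""
    k = np.linspace(0.0, 2.0 * np.pi, nk)
    E = t_right * np.exp(1j * k) + t_left * np.exp(-1j * k)
    # accumulate the phase of E(k) - base continuously over one Brillouin zone
    phase = np.unwrap(np.angle(E - base))
    return int(round((phase[-1] - phase[0]) / (2.0 * np.pi)))
```

For example, `eigenvalue_winding(1.0, 0.5)` gives +1 (rightward-dominated hopping), while swapping the hoppings flips the sign; a base energy outside the spectral loop gives winding 0.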

In this work, we consider the notion of "criterion collapse," in which optimization of one metric implies optimality in another, with a particular focus on conditions for collapse into error probability minimizers under a wide variety of learning criteria, ranging from DRO and OCE risks (CVaR, tilted ERM) to the non-monotonic criteria underlying recent ascent-descent algorithms explored in the literature (Flooding, SoftAD). We show that collapse for losses with a Bernoulli distribution goes far beyond existing results for CVaR and DRO, and then expand our scope to surrogate losses, establishing conditions under which monotonic criteria such as tilted ERM cannot avoid collapse, whereas non-monotonic alternatives can.

In this study, we consider the application of the orthogonality sampling method (OSM) with single and multiple sources for fast identification of small objects in the limited-aperture inverse scattering problem. We first apply the OSM with a single source and show that the corresponding indicator function can be expressed in terms of the Bessel function of order zero of the first kind, an infinite series of Bessel functions of nonzero integer order of the first kind, the range of the signal receiver, and the location of the emitter. Based on this result, we explain that the objects can be identified through the OSM with a single source, but that the identification is significantly influenced by the source location and the applied frequency. To improve on this, we then consider the OSM with multiple sources. Guided by the structure identified in the single-source case, we design an indicator function for the OSM with multiple sources and show that it can be expressed in terms of the square of the Bessel function of order zero of the first kind and an infinite series of squares of Bessel functions of nonzero integer order of the first kind. Based on these theoretical results, we explain that the objects can be identified uniquely through the designed OSM. Several numerical experiments with experimental data provided by the Institut Fresnel demonstrate the pros and cons of the single-source OSM and how the designed multi-source OSM behaves.

Expander graphs, due to their good mixing properties, are useful in many algorithms and combinatorial constructions. One can produce an expander graph with high probability by taking a random graph. For example, for the case of bipartite graphs of degree $d$ and $n$ vertices in each part we may take independently $d$ permutations of an $n$-element set and use them for edges. This construction is much simpler than all known explicit constructions of expanders and gives graphs with good mixing properties (small second largest eigenvalue) with high probability. However, from the practical viewpoint, it uses too many random bits, so it is difficult to generate and store these bits for reasonably large graphs. The natural idea is to replace the group of all permutations by a small subgroup. Let $n$ be $q^k-1$ for some $k$ and some prime $q$. Then we may interpret the vertices as nonzero $k$-dimensional vectors over the field $\mathbb{F}_q$, and take random \emph{linear} permutations, i.e., random elements of $GL_k(\mathbb{F}_q)$. In this way the number of random bits used will be polynomial in $k$ (i.e., in the logarithm of the graph size rather than the graph size itself) and in the degree. In this paper we provide experimental data showing that this replacement indeed changes the mixing properties (the second eigenvalue) of the resulting random graph very little. These data are provided for several types of graphs (undirected regular and biregular bipartite graphs). We also prove some upper bounds for the second eigenvalue (though they are quite weak compared with the experimental results). Finally, we discuss the possibility of decreasing the number of random bits further by using Toeplitz matrices; our experiments show that this change makes the mixing properties of the graphs only marginally worse, while the number of random bits decreases significantly.
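A minimal sketch of the construction (illustrative, with $q = 2$ and small parameters of our choosing): rejection-sample invertible matrices over $\mathbb{F}_2$, use the induced linear permutations of the nonzero vectors as the $d$ edge sets, and measure mixing via the top two singular values of the bi-adjacency matrix.

```python
import numpy as np

def random_invertible_gf2(k, rng):
    # rejection-sample a k x k invertible matrix over GF(2)
    while True:
        M = rng.integers(0, 2, size=(k, k))
        A, ok = M.copy(), True
        for col in range(k):  # Gaussian elimination mod 2 tests invertibility
            piv = np.nonzero(A[col:, col])[0]
            if piv.size == 0:
                ok = False
                break
            A[[col, piv[0] + col]] = A[[piv[0] + col, col]]
            for i in range(k):
                if i != col and A[i, col]:
                    A[i] ^= A[col]
        if ok:
            return M

def linear_permutation(M):
    # permutation of the n = 2^k - 1 nonzero vectors of F_2^k induced by x -> Mx
    k, n = M.shape[0], 2 ** M.shape[0] - 1
    perm = np.empty(n, dtype=int)
    for x in range(1, n + 1):
        bits = np.array([(x >> j) & 1 for j in range(k)])
        y = (M @ bits) % 2
        perm[x - 1] = int(sum(int(b) << j for j, b in enumerate(y))) - 1
    return perm

def top_two_singular_values(perms, n):
    A = np.zeros((n, n))
    for p in perms:  # bi-adjacency matrix: union of d permutation matchings
        A[np.arange(n), p] += 1.0
    s = np.linalg.svd(A, compute_uv=False)
    return s[0], s[1]

rng = np.random.default_rng(0)
k, d = 5, 4
n = 2**k - 1
perms = [linear_permutation(random_invertible_gf2(k, rng)) for _ in range(d)]
s1, s2 = top_two_singular_values(perms, n)
```

The largest singular value equals $d$ for any union of $d$ permutations; a small second singular value is the mixing property in question.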

In this paper, we use the Bayesian inversion approach to study the data assimilation problem for a family of tumor growth models described by porous-medium type equations. The models contain uncertain parameters and are indexed by a physical parameter $m$, which characterizes the constitutive relation between density and pressure. Based on these models, we employ the Bayesian inversion framework to infer parametric and nonparametric unknowns that affect tumor growth from noisy observations of the tumor cell density. We establish the well-posedness and stability theory for the Bayesian inversion problem and further prove the convergence of the posterior distribution in the so-called incompressible limit, $m \rightarrow \infty$. Since the posterior distributions across the index regime $m\in[2,\infty)$ can thus be treated in a unified manner, these theoretical results also guide the design of the numerical inference for the unknowns. We propose a generic computational framework for such inverse problems, consisting of a standard sampling algorithm and an asymptotic-preserving solver for the forward problem. With extensive numerical tests, we demonstrate that the proposed method achieves satisfactory accuracy in the Bayesian inference of the tumor growth models, uniformly with respect to the constitutive relation.
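The generic pairing of a sampling algorithm with a forward solver can be sketched on a toy problem. The example below is entirely our own stand-in (logistic growth with one uncertain rate, not the paper's porous-medium solver), inferred by random-walk Metropolis from noisy density observations.

```python
import numpy as np

# toy forward model: logistic growth curve (a stand-in for a PDE solver)
def forward(r, t):
    return 1.0 / (1.0 + np.exp(-r * t))

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 20)
r_true, sigma = 1.3, 0.01
data = forward(r_true, t) + sigma * rng.normal(size=t.size)

def log_post(r):
    if r <= 0.0 or r > 10.0:          # uniform prior on (0, 10]
        return -np.inf
    resid = data - forward(r, t)      # Gaussian likelihood with known noise level
    return -0.5 * np.sum(resid**2) / sigma**2

# random-walk Metropolis sampler for the posterior over the growth rate r
r, lp = 1.0, log_post(1.0)
samples = []
for _ in range(5000):
    prop = r + 0.05 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        r, lp = prop, lp_prop
    samples.append(r)
post = np.array(samples[1000:])       # discard burn-in
```

In the paper's setting the forward map would instead be an asymptotic-preserving solver for the porous-medium equation, so that the same sampler behaves uniformly in $m$.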

In this paper, we consider various models of pattern recognition, each represented as a composition of two operators: a recognizing operator and a decision rule. Algebraic operations are introduced on recognizing operators, and by applying these operations a family of recognition algorithms is constructed. An upper estimate is obtained for the model, which guarantees the completeness of the resulting extension.

The goal of this paper is to investigate a family of optimization problems arising from list homomorphisms, and to understand what the best possible algorithms are if we restrict the problem to bounded-treewidth graphs. For a fixed $H$, the input of the optimization problem LHomVD($H$) is a graph $G$ with lists $L(v)$, and the task is to find a set $X$ of vertices having minimum size such that $(G-X,L)$ has a list homomorphism to $H$. We define analogously the edge-deletion variant LHomED($H$). This expressive family of problems includes members that are essentially equivalent to fundamental problems such as Vertex Cover, Max Cut, Odd Cycle Transversal, and Edge/Vertex Multiway Cut. For both variants, we first characterize those graphs $H$ that make the problem polynomial-time solvable and show that the problem is NP-hard for every other fixed $H$. Second, as our main result, we determine for every graph $H$ for which the problem is NP-hard, the smallest possible constant $c_H$ such that the problem can be solved in time $c_H^t\cdot n^{O(1)}$ if a tree decomposition of $G$ having width $t$ is given in the input. Let $i(H)$ be the maximum size of a set of vertices in $H$ that have pairwise incomparable neighborhoods. For the vertex-deletion variant LHomVD($H$), we show that the smallest possible constant is $i(H)+1$ for every $H$. The situation is more complex for the edge-deletion version. For every $H$, one can solve LHomED($H$) in time $i(H)^t\cdot n^{O(1)}$ if a tree decomposition of width $t$ is given. However, the existence of a specific type of decomposition of $H$ shows that there are graphs $H$ where LHomED($H$) can be solved significantly more efficiently and the best possible constant can be arbitrarily smaller than $i(H)$. Nevertheless, we determine this best possible constant and (assuming the SETH) prove tight bounds for every fixed $H$.

In many practical studies, learning the directionality between a pair of variables is of great interest but notoriously hard when their underlying relation is nonlinear. This paper presents a method that examines asymmetry in exposure-outcome pairs when a priori assumptions about their relative ordering are unavailable. Our approach utilizes a framework of generative exposure mapping (GEM) to study asymmetric relations in continuous exposure-outcome pairs, through which we can capture distributional asymmetries with no prespecified variable ordering. We propose a coefficient of asymmetry that quantifies relational asymmetry using Shannon's entropy analytics, together with statistical estimation and inference for this estimand of directionality. Large-sample theoretical guarantees are established for cross-fitting inference techniques. The proposed methodology is extended to allow both measured confounders and contamination in outcome measurements, and is evaluated through extensive simulation studies and real data applications.

We adopt the integral definition of the fractional Laplace operator and study an optimal control problem on Lipschitz domains that involves a fractional elliptic partial differential equation (PDE) as state equation and a control variable that enters the state equation as a coefficient; pointwise constraints on the control variable are considered as well. We establish the existence of optimal solutions and analyze first-order and necessary and sufficient second-order optimality conditions. Regularity estimates for the optimal variables are also derived. We develop two finite element discretization strategies: a semidiscrete scheme in which the control variable is not discretized, and a fully discrete scheme in which the control variable is discretized with piecewise constant functions. For both schemes, we analyze the convergence properties of the discretizations and derive error estimates.

This paper develops efficient preconditioned iterative solvers for incompressible flow problems discretised by an enriched Taylor-Hood mixed approximation, in which the usual pressure space is augmented by a piecewise constant pressure to ensure local mass conservation. This enrichment process causes over-specification of the pressure when the pressure space is defined by the union of standard Taylor-Hood basis functions and piecewise constant pressure basis functions, which complicates the design and implementation of efficient solvers for the resulting linear systems. We first describe the impact of this choice of pressure space specification on the matrices involved. Next, we show how to recover effective solvers for Stokes problems, with preconditioners based on the singular pressure mass matrix, and for Oseen systems arising from linearised Navier-Stokes equations, by using a two-stage pressure convection-diffusion strategy. The codes used to generate the numerical results are available online.
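Block-diagonal preconditioning for saddle-point systems of this kind can be sketched on a generic toy problem (our own construction, not the enriched Taylor-Hood discretisation): precondition MINRES with $\mathrm{diag}(A, S)$, where $S$ approximates the Schur complement, the role played by the pressure mass matrix in the Stokes case.

```python
import numpy as np
from scipy.sparse import diags, bmat, csr_matrix
from scipy.sparse.linalg import minres, LinearOperator, splu

rng = np.random.default_rng(0)
n, m = 60, 15
# A: SPD "velocity" block (1D Laplacian); B: full-rank "divergence" block
A = diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)],
          [0, -1, 1], format="csc")
B = csr_matrix(rng.normal(size=(m, n)) / np.sqrt(n))
K = bmat([[A, B.T], [B, None]], format="csc")   # symmetric saddle-point matrix
b = rng.normal(size=n + m)

# block-diagonal preconditioner diag(A, S); here S = B A^{-1} B^T is the exact
# Schur complement (for Stokes one would use a cheap pressure mass matrix)
Ainv = splu(A)
S = B @ Ainv.solve(B.T.toarray())
Sinv = np.linalg.inv(S)

def apply_prec(v):
    return np.concatenate([Ainv.solve(v[:n]), Sinv @ v[n:]])

P = LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = minres(K, b, M=P)
```

With the exact Schur complement the preconditioned system has few distinct eigenvalues and MINRES converges in a handful of iterations; replacing $S$ by a spectrally equivalent mass matrix trades exactness for cost, which is the design question the paper addresses for the enriched (singular) pressure space.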
