In this paper, a generalized finite element method (GFEM) with optimal local approximation spaces for solving high-frequency heterogeneous Helmholtz problems is systematically studied. The local spaces are built from selected eigenvectors of local eigenvalue problems defined on generalized harmonic spaces. At both continuous and discrete levels, $(i)$ wavenumber explicit and nearly exponential decay rates for the local approximation errors are obtained without any assumption on the size of subdomains; $(ii)$ a quasi-optimal and nearly exponential global convergence of the method is established by assuming that the size of subdomains is $O(1/k)$ ($k$ is the wavenumber). A novel resonance effect between the wavenumber and the dimension of local spaces on the decay of error with respect to the oversampling size is implied by the analysis. Furthermore, for fixed dimensions of local spaces, the discrete local errors are proved to converge as $h\rightarrow 0$ ($h$ denoting the mesh size) towards the continuous local errors. The method at the continuous level extends the plane wave partition of unity method [I. Babuska and J. M. Melenk, Int.\;J.\;Numer.\;Methods Eng., 40 (1997), pp.~727--758] to the heterogeneous-coefficients case, and at the discrete level, it delivers an efficient non-iterative domain decomposition method for solving discrete Helmholtz problems resulting from standard FE discretizations. Numerical results are provided to confirm the theoretical analysis and to validate the proposed method.
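
To make the partition-of-unity construction behind the method concrete, the following schematic uses standard GFEM/PUM notation (the symbols $\chi_i$, $\omega_i$, $V_i$, $\varphi_{i,j}$ are chosen here for illustration, not quoted from the paper):

$$ u \;\approx\; \sum_{i} \chi_i\, u_i, \qquad u_i \in V_i = \operatorname{span}\{\varphi_{i,1},\dots,\varphi_{i,n_i}\}, $$

where $\{\chi_i\}$ is a partition of unity subordinate to the overlapping subdomains $\{\omega_i\}$ and the $\varphi_{i,j}$ are the selected eigenvectors of the local eigenvalue problems. By the standard PUM estimate, the global error is controlled (up to wavenumber-dependent constants) by the local approximation errors $\inf_{v\in V_i}\|u - v\|_{\omega_i}$, which is why the nearly exponential local decay rates translate into nearly exponential global convergence.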

Related Content

This paper presents fault-tolerant asynchronous Stochastic Gradient Descent (SGD) algorithms. SGD is widely used for approximating the minimum of a cost function $Q$, as a core part of optimization and learning algorithms. Our algorithms are designed for the cluster-based model, which combines message-passing and shared-memory communication layers. Processes may fail by crashing, and the algorithm inside each cluster is wait-free, using only reads and writes. For a strongly convex function $Q$, our algorithm tolerates any number of failures and provides a convergence rate that yields the maximal distributed acceleration over the optimal convergence rate of sequential SGD. For arbitrary functions, the convergence rate has an additional term that depends on the maximal difference between the parameters at the same iteration (this holds under standard assumptions on $Q$). In this case, the algorithm obtains the same convergence rate as sequential SGD, up to a logarithmic factor. This is achieved by using, at each iteration, a multidimensional approximate agreement algorithm tailored for the cluster-based model. The algorithm for arbitrary functions requires that at least a majority of the clusters contain at least one nonfaulty process. We prove that this condition is necessary when optimizing some non-convex functions.
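
As a rough illustration of the algorithmic pattern (local SGD steps followed by an agreement step that bounds the parameter drift between processes), here is a minimal single-machine sketch; the function names and the use of plain coordinate-wise averaging in place of the paper's wait-free multidimensional approximate agreement are assumptions made only for illustration.

```python
import numpy as np

def sgd_step(x, grad_fn, lr, rng):
    """One local SGD step using a stochastic gradient."""
    return x - lr * grad_fn(x, rng)

def approx_agree(vectors):
    """Stand-in for multidimensional approximate agreement: here simply the
    coordinate-wise average of the parameter vectors.  The actual protocol
    is wait-free within each cluster and tolerates crash failures."""
    return np.mean(vectors, axis=0)

def simulate(grad_fn, x0, n_proc=4, n_iter=200, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    xs = [np.array(x0, dtype=float) for _ in range(n_proc)]
    for _ in range(n_iter):
        xs = [sgd_step(x, grad_fn, lr, rng) for x in xs]  # parallel local steps
        agreed = approx_agree(xs)                         # bound the drift between processes
        xs = [agreed.copy() for _ in range(n_proc)]
    return agreed

# Toy strongly convex objective Q(x) = ||x - 1||^2 with noisy gradients.
grad = lambda x, rng: 2.0 * (x - 1.0) + 0.1 * rng.standard_normal(x.shape)
print(simulate(grad, x0=np.zeros(3)))  # converges near the minimizer (1, 1, 1)
```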

We consider the reinforcement learning problem for partially observed Markov decision processes (POMDPs) with large or even countably infinite state spaces, where the controller has access to only noisy observations of the underlying controlled Markov chain. We consider a natural actor-critic method that employs a finite internal memory for policy parameterization, and a multi-step temporal difference learning algorithm for policy evaluation. We establish, to the best of our knowledge, the first non-asymptotic global convergence of actor-critic methods for partially observed systems under function approximation. In particular, in addition to the function approximation and statistical errors that also arise in MDPs, we explicitly characterize the error due to the use of finite-state controllers. This additional error is stated in terms of the total variation distance between the traditional belief state in POMDPs and the posterior distribution of the hidden state when using a finite-state controller. Further, we show that this error can be made small in the case of sliding-block controllers by using larger block sizes.
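
For intuition about the finite internal memory, a sliding-block controller keeps only the last $m$ observations as its state. A minimal sketch of such a memory, with names chosen here for illustration, is:

```python
from collections import deque

class SlidingBlockMemory:
    """Finite internal memory: the controller's state is the block of the
    last m observations, a stand-in for the finite-state controllers
    analyzed in the paper."""
    def __init__(self, m, pad=None):
        self.buf = deque([pad] * m, maxlen=m)

    def update(self, obs):
        self.buf.append(obs)      # drop the oldest observation
        return tuple(self.buf)    # memory state fed to the actor and the critic
```

Increasing the block size $m$ makes this memory state a better proxy for the belief state, which is the sense in which the additional error term can be made small, at the price of a larger state space for the actor and the critic.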

We introduce and discuss shape-based models for finding the best interpolation data in the compression of images with noise. The aim is to reconstruct missing regions by minimizing a data fitting term in the $L^2$-norm between the images and their reconstructed counterparts using time-dependent PDE inpainting. We analyze the proposed models in the framework of $\Gamma$-convergence from two different points of view. First, we consider a continuous stationary PDE model, obtained by focusing on the first iteration of the discretized time-dependent PDE, and get pointwise information on the "relevance" of each pixel by a topological asymptotic method. Second, we introduce a finite-dimensional setting of the continuous model based on "fat pixels" (balls with positive radius), and we study by $\Gamma$-convergence the asymptotics when the radius vanishes. Numerical computations are presented that confirm the usefulness of our theoretical findings for non-stationary PDE-based image compression.
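
Schematically, and in a simplified form chosen here for illustration rather than the exact functional of the paper, the data-selection problem reads

$$ \min_{\omega \subset \Omega}\; \|u_\omega - f\|_{L^2(\Omega)}^2 \quad \text{subject to a budget on the interpolation data } \omega, $$

where $f$ is the given (noisy) image and $u_\omega$ denotes its PDE-inpainted reconstruction from the data kept on $\omega$. The stationary model studied first corresponds to the first iteration of the discretized time-dependent inpainting PDE, and the "fat pixel" model replaces $\omega$ by a finite union of balls whose radius is then sent to zero in the $\Gamma$-limit.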

This paper presents a new parameter-free, partially penalized immersed finite element method and its convergence analysis for solving second-order elliptic interface problems. A lifting operator is introduced on interface edges to ensure the coercivity of the method without requiring an ad hoc stabilization parameter. The optimal approximation capabilities of the immersed finite element space are proved via a novel approach that is much simpler than those in the literature. A new trace inequality, which is necessary to prove the optimal convergence of immersed finite element methods, is established on interface elements. Optimal error estimates are derived rigorously, with the constant independent of the interface location relative to the mesh. The new method and analysis have also been extended to variable coefficients and three-dimensional problems. Numerical examples are also provided to confirm the theoretical analysis and the efficiency of the new method.
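
The general mechanism of such a lifting operator, written here in a generic interior-penalty style (not necessarily the exact form used in the paper; the discrete vector space $\boldsymbol{V}_h$ below is an assumption for illustration), is to trade the mesh-dependent penalty term for a computable volume term:

$$ r_e:\; L^2(e)\to \boldsymbol{V}_h, \qquad \int_{\Omega} \beta\, r_e(\phi)\cdot \boldsymbol{\tau}_h \,dx \;=\; \int_e \phi\, \{\beta\, \boldsymbol{\tau}_h\cdot \mathbf{n}_e\}\,ds \quad \forall\, \boldsymbol{\tau}_h \in \boldsymbol{V}_h, $$

and the stabilization term $\tfrac{\sigma}{h}\int_e[u_h][v_h]\,ds$ of classical partially penalized schemes is replaced by $\int_\Omega \beta\, r_e([u_h])\cdot r_e([v_h])\,dx$, which yields coercivity without tuning a penalty parameter $\sigma$.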

All-or-nothing transforms (AONTs) were proposed by Rivest as a message preprocessing technique for encrypting data to protect against brute-force attacks, and they have numerous applications in cryptography and information security. Later, unconditionally secure AONTs and their combinatorial characterization were introduced by Stinson. Informally, a combinatorial AONT is an array satisfying certain unbiasedness requirements, and its security properties in general depend on the prior probability distribution on the input $s$-tuples. Recently, it was shown by Esfahani and Stinson that a combinatorial AONT has perfect security provided that all the input $s$-tuples are equiprobable, and has weak security provided that all the input $s$-tuples occur with non-zero probability. This paper aims to explore the gap between perfect security and weak security for combinatorial $(t,s,v)$-AONTs. Concretely, we consider the typical scenario in which all $s$ inputs take values independently (but not necessarily identically) and quantify the amount of information $H(\mathcal{X}|\mathcal{Y})$ about any $t$ inputs $\mathcal{X}$ that is not revealed by any $s-t$ outputs $\mathcal{Y}$. In particular, we establish general lower and upper bounds on $H(\mathcal{X}|\mathcal{Y})$ for combinatorial AONTs using information-theoretic techniques, and we also show that the derived bounds can be attained in certain cases. Furthermore, the discussion is extended to the security properties of combinatorial asymmetric AONTs.
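
In this entropy language (standard information-theoretic notation, stated here for orientation rather than quoted from the paper), the two endpoints of the security spectrum are, roughly,

$$ \text{perfect security:}\;\; H(\mathcal{X}\mid\mathcal{Y}) = H(\mathcal{X}), \qquad \text{weak security:}\;\; H(\mathcal{X}\mid\mathcal{Y}) > 0, $$

so the derived bounds measure how far a combinatorial $(t,s,v)$-AONT can sit between these two extremes when the $s$ inputs are independent but not necessarily uniformly distributed.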

A mass-preserving two-step Lagrange-Galerkin scheme of second order in time for convection-diffusion problems is presented, and convergence with optimal error estimates is proved in the framework of $L^2$-theory. The scheme maintains the advantages of the Lagrange-Galerkin method, i.e., CFL-free robustness for convection-dominated problems and a symmetric, positive-definite coefficient matrix resulting from the discretization. In addition, the scheme conserves the mass on the discrete level if the involved integrals are computed exactly. Unconditional stability and error estimates of second order in time are proved by employing two new key lemmas: one on the truncation error of the material derivative in conservative form and one on a discrete Gronwall inequality for multistep methods. The mass-preserving property is achieved by the Jacobian multiplication technique introduced by Rui and Tabata in 2010, and the second-order accuracy in time is obtained based on the idea of the multistep Galerkin method along characteristics originally introduced by Ewing and Russell in 1981. For the first time step, the first-order mass-preserving scheme of Rui and Tabata (2010) is employed; it is efficient and does not cause any loss of convergence order in the $\ell^\infty(L^2)$- and $\ell^2(H^1_0)$-norms. For the time increment $\Delta t$, the mesh size $h$ and a conforming finite element space of polynomial degree $k$, the convergence order is $O(\Delta t^2 + h^k)$ in the $\ell^\infty(L^2)\cap \ell^2(H^1_0)$-norm and $O(\Delta t^2 + h^{k+1})$ in the $\ell^\infty(L^2)$-norm if the duality argument can be employed. Error estimates of $O(\Delta t^{3/2}+h^k)$ in discrete versions of the $L^\infty(H^1_0)$- and $H^1(L^2)$-norms are additionally proved. Numerical results confirm the theoretical convergence orders in one, two and three dimensions.
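
For orientation, the standard second-order two-step approximation of the material derivative along characteristics reads, in its generic non-conservative form (the scheme of the paper additionally carries the Jacobian factors of the Rui-Tabata technique to conserve mass),

$$ \frac{Du}{Dt}(x,t^n) \;\approx\; \frac{3\,u_h^n(x) \;-\; 4\,u_h^{n-1}\!\big(X_1^n(x)\big) \;+\; u_h^{n-2}\!\big(X_2^n(x)\big)}{2\Delta t}, $$

where $X_1^n(x)\approx x-\Delta t\,b(x,t^n)$ and $X_2^n(x)\approx x-2\Delta t\,b(x,t^n)$ are approximate feet of the characteristic curve of the convection velocity $b$, traced back over one and two time steps, respectively.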

We consider generalized Nash equilibrium problems (GNEPs) with non-convex strategy spaces and non-convex cost functions. This general class of games includes the important case of games with mixed-integer variables, for which only a few results are known in the literature. We present a new approach to characterizing equilibria via a convexification technique using the Nikaido-Isoda function. For any given instance of the GNEP, we construct a set of convexified instances and show that a feasible strategy profile is an equilibrium for the original instance if and only if it is an equilibrium for any convexified instance and the convexified cost functions coincide with the initial ones. We further develop this approach along three dimensions. We first show that for quasi-linear models, where a convexified instance exists in which, for fixed strategies of the opponent players, the cost function of every player is linear and the respective strategy space is polyhedral, the convexification reduces the GNEP to a standard (non-linear) optimization problem. Secondly, we derive two complete characterizations of those GNEPs for which the convexification leads to a jointly constrained or a jointly convex GNEP, respectively. These characterizations require new concepts related to the interplay of the convex hull operator applied to restricted subsets of feasible strategies, and they may be of interest in their own right. Finally, we demonstrate the applicability of our results by presenting a numerical study regarding the computation of equilibria for a class of integral network flow GNEPs.
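
Recall the Nikaido-Isoda function underlying the convexification, stated here in its standard form for cost-minimizing players ($c_i$ denoting player $i$'s cost and $x_{-i}$ the opponents' strategies):

$$ \Psi(x,y) \;=\; \sum_{i=1}^{N} \Big[\, c_i(x_i, x_{-i}) \;-\; c_i(y_i, x_{-i}) \,\Big], $$

so $\Psi(x,y)$ aggregates the cost savings each player $i$ could realize by unilaterally deviating from $x_i$ to $y_i$ while the opponents keep $x_{-i}$; a feasible profile $x$ is an equilibrium precisely when no admissible deviation $y$ makes this aggregate saving positive, i.e. $\sup_{y}\Psi(x,y)\le 0$ over the feasible deviations.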

We propose a framework to study the effect of local recovery requirements of codeword symbols on the dimension of linear codes, based on a combinatorial proxy that we call \emph{visible rank}. The locality constraints of a linear code are stipulated by a matrix $H$ of $\star$'s and $0$'s (which we call a "stencil"), whose rows correspond to the local parity checks (with the $\star$'s indicating the support of the check). The visible rank of $H$ is the largest $r$ for which there is an $r \times r$ submatrix of $H$ with a unique generalized diagonal of $\star$'s. The visible rank yields a field-independent combinatorial lower bound on the rank of $H$ and thus on the co-dimension of the code. We prove a rank-nullity-type theorem relating visible rank to the rank of an associated construct called the \emph{symmetric spanoid}, which was introduced by Dvir, Gopi, Gu, and Wigderson~\cite{DGGW20}. Using this connection and a construction of appropriate stencils, we answer a question posed in \cite{DGGW20} and demonstrate that symmetric spanoid rank cannot improve the currently best known $\widetilde{O}(n^{(q-2)/(q-1)})$ upper bound on the dimension of $q$-query locally correctable codes (LCCs) of length $n$. We also study the $t$-Disjoint Repair Group Property ($t$-DRGP) of codes, where each codeword symbol must belong to $t$ disjoint check equations. It is known that linear $2$-DRGP codes must have co-dimension $\Omega(\sqrt{n})$. We show that there are stencils corresponding to $2$-DRGP with visible rank as small as $O(\log n)$. However, we show that the second tensor power of any $2$-DRGP stencil has visible rank $\Omega(n)$, thus recovering the $\Omega(\sqrt{n})$ lower bound for $2$-DRGP. For $q$-LCCs, however, the $k$-th tensor power for $k\le n^{o(1)}$ is unable to improve the $\widetilde{O}(n^{(q-2)/(q-1)})$ upper bound on the dimension of $q$-LCCs by a polynomial factor.
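
A small worked example, constructed here purely for illustration: the $3\times 3$ stencil

$$ H \;=\; \begin{pmatrix} \star & \star & 0 \\ 0 & \star & \star \\ \star & 0 & 0 \end{pmatrix} $$

has exactly one generalized diagonal consisting entirely of $\star$'s, namely the positions $(1,2),(2,3),(3,1)$, so its visible rank is $3$. Indeed, for any filling of the $\star$'s with nonzero field elements, exactly one term in the determinant expansion survives, so the filled matrix is nonsingular; this is the mechanism behind the field-independent lower bound on $\operatorname{rank}(H)$ and thus on the co-dimension of the code.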

We design a new sparse projection method for a set of vectors that guarantees a desired average sparsity level measured using the popular Hoyer measure (an affine function of the ratio of the $\ell_1$ and $\ell_2$ norms). Existing approaches either project each vector individually or require the use of a regularization parameter which only implicitly maps to the average $\ell_0$-measure of sparsity. Instead, our approach sets the sparsity level for the whole set explicitly and simultaneously projects a group of vectors, with the sparsity level of each vector tuned automatically. We show that the computational complexity of our projection operator is linear in the size of the problem. Additionally, we propose a generalization of this projection by replacing the $\ell_1$ norm with its weighted version. We showcase the efficacy of our approach in both supervised and unsupervised learning tasks on image datasets including CIFAR10 and ImageNet. In deep neural network pruning, the sparse models produced by our method on ResNet50 have significantly higher accuracies at corresponding sparsity values compared to existing competitors. In nonnegative matrix factorization, our approach yields competitive reconstruction errors against state-of-the-art algorithms.
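
For reference, the Hoyer measure referred to above is, in its usual normalized form, the affine rescaling

$$ \operatorname{hoyer}(x) \;=\; \frac{\sqrt{n} \;-\; \|x\|_1/\|x\|_2}{\sqrt{n}-1} \;\in\; [0,1], \qquad x\in\mathbb{R}^n\setminus\{0\}, $$

which equals $0$ when all entries of $x$ have the same magnitude and $1$ when $x$ has a single nonzero entry. The proposed operator projects a whole set of vectors so that a prescribed average of these per-vector values is met, rather than fixing the value for each vector separately.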

Recent studies have shown the vulnerability of reinforcement learning (RL) models in noisy settings. The sources of noise differ across scenarios. For instance, in practice, the observed reward channel is often subject to noise (e.g., when observed rewards are collected through sensors), and thus the observed rewards may not be credible. Also, in applications such as robotics, a deep reinforcement learning (DRL) algorithm can be manipulated to produce arbitrary errors. In this paper, we consider noisy RL problems where the rewards observed by RL agents are generated through a reward confusion matrix. We call such observed rewards perturbed rewards. We develop a robust RL framework, aided by an unbiased reward estimator, that enables RL agents to learn in noisy environments while observing only perturbed rewards. Our framework draws upon approaches for supervised learning with noisy data. The core ideas of our solution are to estimate the reward confusion matrix and to define a set of unbiased surrogate rewards. We prove the convergence and sample complexity of our approach. Extensive experiments on different DRL platforms show that policies based on our estimated surrogate rewards achieve higher expected rewards and converge faster than existing baselines. For instance, the state-of-the-art PPO algorithm obtains 67.5% and 46.7% improvements on average over five Atari games when the error rates are 10% and 30%, respectively.
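
To see the shape of such an unbiased surrogate reward, consider the simplest two-level case with true rewards $r_+$ and $r_-$ and flip probabilities $e_+=\Pr(\tilde r = r_-\mid r=r_+)$ and $e_-=\Pr(\tilde r = r_+\mid r=r_-)$; the formula below is the standard noise-corrected construction from learning with noisy labels, shown here for intuition, while the general construction works from the full estimated confusion matrix:

$$ \hat r(\tilde r) \;=\; \begin{cases} \dfrac{(1-e_-)\,r_+ \;-\; e_+\,r_-}{1-e_+-e_-}, & \tilde r = r_+,\\[2mm] \dfrac{(1-e_+)\,r_- \;-\; e_-\,r_+}{1-e_+-e_-}, & \tilde r = r_-, \end{cases} \qquad\text{so that}\qquad \mathbb{E}\big[\hat r(\tilde r)\,\big|\, r\big] \;=\; r. $$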
