
This paper is concerned with the convergence of a series associated with a certain version of the convexification method, which has recently been developed by the research group of the first author for solving coefficient inverse problems. The convexification method aims to construct a globally convex Tikhonov-like functional with a Carleman Weight Function in it. In previous works, the construction of the strictly convex weighted Tikhonov-like functional assumed a truncated Fourier series (i.e., a finite series instead of an infinite one) for a function generated by the total wave field. In this paper we prove a convergence property for this truncated Fourier series approximation. More precisely, we show that the residual of the approximate PDE obtained by using the truncated Fourier series tends to zero in $L^{2}$ as the truncation index tends to infinity. The proof relies on a convergence result in the $H^{1}$-norm for a sequence of $L^{2}$-orthogonal projections onto finite-dimensional subspaces spanned by elements of a special Fourier basis. However, due to the ill-posed nature of coefficient inverse problems, we cannot prove that the solution of the approximate PDE, which results from the minimization of that Tikhonov-like functional, converges to the correct solution.
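The paper's basis is a special, non-trigonometric Fourier-type basis, but the flavor of the truncation statement can be illustrated with an ordinary trigonometric projection. The following numpy sketch (our generic illustration, not the paper's construction) shows the $L^2$ error of a truncated Fourier projection and of its derivative (an $H^{1}$-type quantity) decaying as the truncation index grows:

```python
import numpy as np

# Generic illustration: project a smooth periodic function onto the first N
# standard Fourier modes on [0, 2*pi] and watch the L2 errors of the
# projection and of its derivative decay as N grows.
x = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
f = np.exp(np.sin(x))                # smooth 2*pi-periodic test function
fp = np.cos(x) * np.exp(np.sin(x))   # its exact derivative

F = np.fft.fft(f)
k = np.fft.fftfreq(x.size, d=x[1] - x[0]) * 2.0 * np.pi  # integer wavenumbers

for N in (4, 8, 16, 32):
    mask = np.abs(k) <= N                        # keep modes |k| <= N
    f_N = np.fft.ifft(F * mask).real             # truncated series
    fp_N = np.fft.ifft(1j * k * F * mask).real   # its derivative
    errL2 = np.sqrt(np.mean((f - f_N) ** 2))
    errH1 = np.sqrt(np.mean((fp - fp_N) ** 2))
    print(f"N={N:3d}  L2 error={errL2:.2e}  derivative error={errH1:.2e}")
```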

Related content

Recent work has shown that viewing allocators as black-box 2DBP solvers is meaningful. For instance, there exists a 2DBP-based fragmentation metric which often correlates monotonically with maximum resident set size (RSS). Given the field's indeterminacy with respect to fragmentation definitions, as well as the immense value of physical memory savings, we are motivated to set allocator-generated placements against their 2DBP-devised, makespan-optimizing counterparts. Of course, allocators must operate online while 2DBP algorithms work on complete request traces; but since both sides optimize criteria related to minimizing memory wastage, the idea of studying their relationship retains its intellectual--and practical--interest. Unfortunately, no implementations of 2DBP algorithms for dynamic storage allocation (DSA) are available. This paper presents a first, though partial, implementation of the state-of-the-art. We validate its functionality by comparing its outputs' makespan to the theoretical upper bound provided by the original authors. Along the way, we identify and document key details to assist analogous future efforts. Our experiments comprise 4 modern allocators and 8 real application workloads. We make several notable observations on our empirical evidence: in terms of makespan, allocators outperform Robson's worst-case lower bound $93.75\%$ of the time. In $87.5\%$ of cases, GNU's \texttt{malloc} implementation demonstrates performance equivalent or superior to the 2DBP state-of-the-art, despite the latter operating offline. Most surprisingly, the 2DBP algorithm proves competent in terms of fragmentation, producing up to $2.46\times$ better solutions. Future research can leverage such insights towards memory-targeting optimizations.
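To make the 2DBP framing concrete, here is a minimal sketch (in a hypothetical trace format of our own, not the paper's tooling): each allocation is a rectangle spanning its lifetime on the time axis and its address range on the space axis; the makespan is the peak address, which can be compared against the trivial max-live-bytes lower bound:

```python
from dataclasses import dataclass

@dataclass
class Job:
    birth: int   # allocation time
    death: int   # free time (exclusive)
    size: int    # bytes requested
    offset: int  # address chosen by the allocator / 2DBP algorithm

def makespan(jobs):
    # peak address touched by the placement (packing "height")
    return max(j.offset + j.size for j in jobs)

def max_live(jobs):
    # lower bound: most bytes simultaneously live at any instant
    events = []
    for j in jobs:
        events += [(j.birth, j.size), (j.death, -j.size)]
    live = peak = 0
    for _, delta in sorted(events):
        live += delta
        peak = max(peak, live)
    return peak

trace = [Job(0, 4, 8, 0), Job(1, 3, 4, 8), Job(2, 5, 4, 12)]
print(makespan(trace), max_live(trace))  # peak address vs. lower bound
```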

Neural networks are the state of the art for many approximation tasks in high-dimensional spaces, as supported by an abundance of experimental evidence. However, we still lack a solid theoretical understanding of what they can approximate and, more importantly, at what cost and accuracy. One network architecture of practical use, especially for approximation tasks involving images, is the convolutional (residual) network. However, due to the locality of the linear operators involved in these networks, their analysis is more complicated than that of generic fully connected neural networks. This paper focuses on sequence approximation tasks, where each observation is represented by a matrix or a higher-order tensor. We show that when approximating sequences arising from space-time discretisations of PDEs we may use relatively small networks. We derive these results constructively by exploiting connections between discrete convolution and finite difference operators. Throughout, we design our network architectures to be similar to those typically adopted in practice for sequence approximation tasks while still admitting guarantees. Our theoretical results are supported by numerical experiments which simulate linear advection, the heat equation, and the Fisher equation. The implementation used is available in the repository associated with the paper.
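The guiding connection can be seen in a toy example (ours, not the paper's construction): a fixed 3-tap convolution kernel reproduces the second-order finite-difference Laplacian, so one explicit Euler step of the 1D heat equation is exactly a residual convolutional layer:

```python
import numpy as np

# One explicit Euler step of u_t = u_xx is u <- u + dt * conv(u, [1,-2,1]/h^2),
# i.e. a ResNet block whose convolution is the finite-difference Laplacian.
n, h = 128, 1.0 / 128
dt = 0.4 * h**2                        # stable explicit step (dt/h^2 < 1/2)
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2.0 * np.pi * x)

kernel = np.array([1.0, -2.0, 1.0]) / h**2
for _ in range(100):
    lap = np.convolve(np.pad(u, 1, mode="wrap"), kernel, mode="valid")
    u = u + dt * lap                   # residual update = ResNet block

exact = np.exp(-(2.0 * np.pi)**2 * dt * 100) * np.sin(2.0 * np.pi * x)
print("max error:", np.abs(u - exact).max())
```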

In this work, we focus on the inverse medium scattering problem (IMSP), which aims to recover unknown scatterers from measured scattered data. Motivated by the efficient direct sampling method (DSM) introduced in [23], we propose a novel direct sampling-based deep learning approach (DSM-DL) for reconstructing inhomogeneous scatterers. In particular, we use the U-Net neural network to learn the relation between the index functions and the true contrasts. Our proposed DSM-DL is computationally efficient, robust to noise, easy to implement, and able to naturally incorporate multiple measured data sets to achieve high-quality reconstructions. Some representative tests are carried out with varying numbers of incident waves and different noise levels to evaluate the performance of the proposed method. The results demonstrate the promising benefits of combining deep learning techniques with the DSM for the IMSP.
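For intuition, a DSM-style index function correlates the measured scattered field with the fundamental solution sampled at each probing point; the sketch below (our simplified rendering with Born-type synthetic data, not the paper's exact formulation) shows the index peaking near a hypothetical point scatterer:

```python
import numpy as np
from scipy.special import hankel1

k = 10.0                                     # wavenumber
rx = 2.0 * np.exp(1j * np.linspace(0, 2*np.pi, 64, endpoint=False))
receivers = np.c_[rx.real, rx.imag]          # measurement circle

def G(points, z):
    # fundamental solution of the 2D Helmholtz equation
    return 0.25j * hankel1(0, k * np.linalg.norm(points - z, axis=1))

scatterer = np.array([0.3, -0.2])            # hypothetical point scatterer
u_s = G(receivers, scatterer)                # Born-type synthetic data

def index(z):
    # normalized correlation of the data with the probing function at z
    g = G(receivers, z)
    return abs(np.vdot(g, u_s)) / (np.linalg.norm(g) * np.linalg.norm(u_s))

grid = [np.array([a, b]) for a in np.linspace(-1, 1, 41)
                         for b in np.linspace(-1, 1, 41)]
best = max(grid, key=index)
print("index peaks near", best)              # close to (0.3, -0.2)
```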

We present algorithms based on satisfiability problem (SAT) solving, as well as answer set programming (ASP), for solving the problem of determining inconsistency degrees in propositional knowledge bases. We consider six different inconsistency measures whose respective decision problems lie on the first level of the polynomial hierarchy. Namely, these are the contension inconsistency measure, the forgetting-based inconsistency measure, the hitting set inconsistency measure, the max-distance inconsistency measure, the sum-distance inconsistency measure, and the hit-distance inconsistency measure. In an extensive experimental analysis, we compare the SAT-based and ASP-based approaches with each other, as well as with a set of naive baseline algorithms. Our results demonstrate that overall, both the SAT-based and the ASP-based approaches clearly outperform the naive baseline methods in terms of runtime. The results further show that the proposed ASP-based approaches outperform the SAT-based ones with regard to all six inconsistency measures considered in this work. Moreover, we conduct additional experiments to explain the aforementioned results in greater detail.
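As a baseline-style illustration of one of the six measures, consider the contension measure: an interpretation maps atoms to T, F, or B ("both"), a formula is satisfied if it evaluates to a designated value (T or B) under the strong three-valued connectives, and the degree is the least number of atoms forced to B. The brute-force sketch below (our naive rendering, nothing like the SAT/ASP encodings in the paper) computes it for tiny knowledge bases:

```python
from itertools import product

# Truth values: T=1.0, F=0.0, B=0.5; not(v)=1-v, and=min, or=max;
# a formula is satisfied iff its value is designated, i.e. >= 0.5.
def val(f, i):
    op = f[0]
    if op == "atom": return i[f[1]]
    if op == "not":  return 1.0 - val(f[1], i)
    if op == "and":  return min(val(f[1], i), val(f[2], i))
    if op == "or":   return max(val(f[1], i), val(f[2], i))

def contension(kb, atoms):
    # least number of atoms assigned B over all satisfying interpretations
    best = len(atoms)
    for vals in product((0.0, 0.5, 1.0), repeat=len(atoms)):
        i = dict(zip(atoms, vals))
        if all(val(f, i) >= 0.5 for f in kb):
            best = min(best, sum(1 for v in vals if v == 0.5))
    return best

# KB = {p, ~p, q}: only p needs the value B, so the degree is 1.
kb = [("atom", "p"), ("not", ("atom", "p")), ("atom", "q")]
print(contension(kb, ["p", "q"]))  # -> 1
```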

The locally modified finite element method, introduced in [Frei, Richter: SINUM 52 (2014), pp. 2315-2334], is a simple fitted finite element method that is able to resolve weak discontinuities in interface problems. The method is based on a fixed structured coarse mesh, which is then refined into sub-elements to resolve an interior interface. In this work, we extend the locally modified finite element method in two space dimensions to second order using an isoparametric approach in the interface elements. In doing so, we must take care that the resulting curved edges do not lead to degenerate sub-elements. We prove optimal a priori error estimates in the $L^2$-norm and in a discrete energy norm. Finally, we present numerical examples to substantiate the theoretical findings.
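The degeneracy concern can be pictured with a minimal sketch (our illustration, not the paper's element definition): a curved interface edge is represented by a quadratic Lagrange map through its two endpoints and an interface midpoint, and the map degenerates when its tangent reverses direction, which we detect by sampling:

```python
import numpy as np

def quad_edge(p0, p1, pm):
    """Quadratic Lagrange map x(s) on [0,1] with x(0)=p0, x(1)=p1, x(1/2)=pm."""
    def x(s):
        return ((1-s)*(1-2*s))*p0 + (s*(2*s-1))*p1 + (4*s*(1-s))*pm
    return x

def tangent_ok(x, samples=50):
    # the edge is non-degenerate if the (chordal) tangent never reverses
    s = np.linspace(0.0, 1.0, samples)
    pts = np.array([x(si) for si in s])
    t = np.diff(pts, axis=0)
    return bool(np.all(t @ t[0] > 0))

p0, p1 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
print(tangent_ok(quad_edge(p0, p1, np.array([0.5, 0.1]))))  # mild curve: True
print(tangent_ok(quad_edge(p0, p1, np.array([1.4, 0.0]))))  # midpoint too far: False
```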

Chernoff approximations are a flexible and powerful tool of functional analysis, which can be used, in particular, to find numerical approximations to solutions of some differential equations with variable coefficients. For many classes of equations such approximations have already been constructed since the pioneering papers of Prof. O.G. Smolyanov in 2000; however, the speed of their convergence to the exact solution has not been properly studied. We select the heat equation (because its exact solutions are already known) as a simple yet informative model example for studying the rate of convergence of Chernoff approximations. Examples illustrating the rate of convergence of Chernoff approximations to the solution of the Cauchy problem for the heat equation are constructed in the paper. Numerically, we show that for sufficiently smooth initial conditions the order of approximation equals the order of Chernoff tangency of the Chernoff function used. We also consider initial conditions that are not smooth enough and show how the H\"older class of the initial condition relates to the rate of convergence. In the future, this method of study can be applied to general second-order parabolic equations with variable coefficients by a slight modification of our Python 3 code. This arXiv version of the text is supplementary material for our journal article. Here we include the full written text of the article and, additionally, all illustrations (Appendix A) and the full text of the Python 3 code (Appendix B).
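A self-contained toy in the same spirit (ours, not the paper's code): for $u_t = u_{xx}$ the shift operator $(S(t)f)(x) = \tfrac12\big(f(x+\sqrt{2t}) + f(x-\sqrt{2t})\big)$ is Chernoff-tangent of order 1 to the heat semigroup, so $(S(t/n))^n f \to e^{t\Delta}f$. With $f = \sin$, each application multiplies by $\cos\sqrt{2t/n}$, which lets us observe the $O(1/n)$ rate without any spatial grid:

```python
import numpy as np

# Exact: e^{t d^2/dx^2} sin = e^{-t} sin.  Chernoff: cos(sqrt(2t/n))^n sin.
t = 1.0
exact = np.exp(-t)
for n in (10, 100, 1000, 10000):
    approx = np.cos(np.sqrt(2.0 * t / n)) ** n
    print(f"n={n:6d}  error={abs(approx - exact):.3e}")
# the error shrinks ~10x per decade of n: first-order Chernoff convergence
```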

Next-generation Wi-Fi networks are looking forward to introducing new features like multi-link operation (MLO) to achieve both higher throughput and lower latency. However, given the limited number of available channels, the use of multiple links by a group of contending Basic Service Sets (BSSs) can result in higher interference and channel contention, thus potentially leading to lower performance and reliability. In such a situation, it could be better for all contending BSSs to use fewer links if that helps reduce channel access contention. Recently, reinforcement learning (RL) has proven its potential for optimizing resource allocation in wireless networks. However, the independent operation of each wireless network makes it difficult -- if not almost impossible -- for each individual network to learn a good configuration. To solve this issue, in this paper we propose the use of a Federated Reinforcement Learning (FRL) framework, i.e., a collaborative machine learning approach that trains models across multiple distributed agents without exchanging data, to collaboratively learn the best MLO-Link Allocation (LA) strategy for a group of neighboring BSSs. The simulation results show that the FRL-based decentralized MLO-LA strategy achieves better throughput fairness, and hence higher reliability, than fixed, random and RL-based MLO-LA schemes, because it allows the different BSSs to find a link allocation strategy which maximizes the minimum achieved data rate.
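The federated mechanism can be sketched in a few lines (a hypothetical toy with an invented contention model, not the paper's simulator): each BSS keeps a local Q-table over link-allocation actions, learns from its own rewards, and a federated-averaging round merges the tables without sharing raw data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bss, n_actions, alpha = 4, 3, 0.1      # actions: use 1, 2, or 3 links

q_tables = [np.zeros(n_actions) for _ in range(n_bss)]

def local_reward(action, others):
    # toy contention model: more links help until neighbours also pile on
    load = action + 1 + sum(a + 1 for a in others)
    return (action + 1) / load + 0.01 * rng.standard_normal()

# one round of local training, then federated aggregation
for _ in range(200):
    actions = [int(np.argmax(q + 0.1 * rng.standard_normal(n_actions)))
               for q in q_tables]
    for i, q in enumerate(q_tables):
        others = actions[:i] + actions[i+1:]
        r = local_reward(actions[i], others)
        q[actions[i]] += alpha * (r - q[actions[i]])

fed_avg = np.mean(q_tables, axis=0)       # FedAvg: only models are exchanged
q_tables = [fed_avg.copy() for _ in q_tables]
print("shared Q-values per #links:", np.round(fed_avg, 3))
```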

Classical mathematical techniques such as discrete integration, gradient descent optimization, and state estimation (exemplified by the Runge-Kutta method, Gauss-Newton minimization, and the extended Kalman filter (EKF), respectively) rely on linear algebra and hence, when implemented as described in the literature, are only applicable to state vectors belonging to Euclidean spaces. This document discusses how to modify these methods so they can be applied to non-Euclidean state vectors, such as those containing rotations and full motions of rigid bodies. To do so, it provides an in-depth review of the concept of manifolds or Lie groups, together with their tangent spaces or Lie algebras, their exponential and logarithmic maps, the analysis of perturbations, the treatment of uncertainty and covariance, and in particular the definitions of the Jacobians required to employ the previously mentioned calculus methods. These concepts are particularized to the specific cases of the SO(3) and SE(3) Lie groups, known as the special orthogonal and special Euclidean groups of $\mathbb{R}^3$, which represent rigid body rotations and motions; we describe their various possible parameterizations as well as their advantages and disadvantages.
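A minimal sketch of the SO(3) machinery reviewed here: the hat operator, the exponential map (Rodrigues' formula), and its logarithm, with a round-trip check (this simple log form is valid away from rotation angle $\pi$):

```python
import numpy as np

def hat(w):
    # skew-symmetric matrix such that hat(w) @ v == cross(w, v)
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def exp_so3(w):
    # Rodrigues' formula: so(3) -> SO(3)
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3) + hat(w)               # first-order fallback
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def log_so3(R):
    # inverse map SO(3) -> so(3), for rotation angle away from pi
    th = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if th < 1e-12:
        return np.zeros(3)
    w_hat = (R - R.T) * (th / (2 * np.sin(th)))
    return np.array([w_hat[2, 1], w_hat[0, 2], w_hat[1, 0]])

w = np.array([0.3, -0.2, 0.5])
R = exp_so3(w)
print(np.allclose(log_so3(R), w))               # True: exp and log invert
print(np.allclose(R @ R.T, np.eye(3)))          # True: R is orthogonal
```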

Discrete Differential Equations (DDEs) are functional equations that relate polynomially a power series $F(t,u)$ in $t$ with polynomial coefficients in a "catalytic" variable $u$ and the specializations, say at $u=1$, of $F(t,u)$ and of some of its partial derivatives in $u$. DDEs occur frequently in combinatorics, especially in map enumeration. If a DDE is of fixed-point type then its solution $F(t,u)$ is unique, and a general result by Popescu (1986) implies that $F(t,u)$ is an algebraic power series. Constructive proofs of algebraicity for solutions of fixed-point type DDEs were proposed by Bousquet-M\'elou and Jehanne (2006). Bostan et al. (2022) initiated a systematic algorithmic study of such DDEs of order 1. We generalize this study to DDEs of arbitrary order. First, we propose nontrivial extensions of algorithms based on polynomial elimination and on the guess-and-prove paradigm. Second, we design two brand-new algorithms that exploit the special structure of the underlying polynomial systems. Last, but not least, we report on implementations that are able to solve highly challenging DDEs with a combinatorial origin.
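A classical order-1 instance (the textbook example from map enumeration, not one of the challenging higher-order DDEs treated here) is Tutte's equation for rooted planar maps, counted by edges ($t$) and root-face degree ($u$):
\[
F(t,u) \;=\; 1 \;+\; t\,u^{2}\,F(t,u)^{2} \;+\; t\,u\,\frac{u\,F(t,u)-F(t,1)}{u-1},
\]
where the divided difference in $u$ involves precisely the specialization at $u=1$ that makes the variable catalytic; its solution is algebraic, as the general theory predicts.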

We investigate the approximation of weighted integrals over $\mathbb{R}^d$ for integrands from weighted Sobolev spaces of mixed smoothness. We prove upper and lower bounds on the convergence rate of optimal quadratures with respect to the number $n$ of integration nodes for functions from these spaces. In the one-dimensional case $(d=1)$, we obtain the sharp convergence rate of optimal quadratures. For $d \ge 2$, the upper bound is achieved by sparse-grid quadratures with integration nodes on step hyperbolic crosses in the function domain $\mathbb{R}^d$.
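For orientation in the $d=1$ case, the following sketch uses a standard Gauss-Hermite rule (a classical quadrature for Gaussian-weighted integrals over $\mathbb{R}$, not the paper's optimal construction) and shows the error decaying rapidly in the number of nodes $n$:

```python
import numpy as np

def gauss_hermite(f, n):
    # n-node rule for the integral of f(x) * exp(-x^2) over the real line
    x, w = np.polynomial.hermite.hermgauss(n)
    return float(w @ f(x))

f = lambda x: np.cos(x)
exact = np.sqrt(np.pi) * np.exp(-0.25)     # int cos(x) e^{-x^2} dx = sqrt(pi) e^{-1/4}
for n in (2, 4, 8, 16):
    print(f"n={n:2d}  error={abs(gauss_hermite(f, n) - exact):.2e}")
```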
