
In this work, we study an inverse problem of recovering a space-time dependent diffusion coefficient in the subdiffusion model from the distributed observation, where the mathematical model involves a Djrbashian-Caputo fractional derivative of order $\alpha\in(0,1)$ in time. The main technical challenges of both theoretical and numerical analysis lie in the limited smoothing properties due to the fractional differential operator and the high degree of nonlinearity of the forward map from the unknown diffusion coefficient to the distributed observation. Theoretically, we establish two conditional stability results using a novel test function, which leads to a stability bound in $L^2(0,T;L^2(\Omega))$ under a suitable positivity condition. The positivity condition is verified for a large class of problem data. Numerically, we develop a rigorous procedure for the recovery of the diffusion coefficient based on a regularized least-squares formulation, which is then discretized by the standard Galerkin method with continuous piecewise linear elements in space and backward Euler convolution quadrature in time. We provide a complete error analysis of the fully discrete formulation, by combining several new error estimates for the direct problem (optimal in terms of data regularity), a discrete version of fractional maximal $L^p$ regularity, and a nonstandard energy argument. Under the positivity condition, we obtain a standard $L^2(0,T; L^2(\Omega))$ error estimate consistent with the conditional stability. Further, we illustrate the analysis with some numerical examples.
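As a rough illustration of the time discretization named above, the backward Euler convolution quadrature weights for a fractional derivative of order $\alpha$ are the Grünwald-Letnikov coefficients of $(1-z)^\alpha$. A minimal numpy sketch (not the paper's full scheme; it assumes $f(0)=0$, so the Caputo and Riemann-Liouville derivatives coincide):

```python
import numpy as np
from math import sqrt, pi

def bdf1_cq_weights(alpha, n):
    """Convolution quadrature weights generated by backward Euler: the
    power-series coefficients of (1 - z)**alpha (Grunwald-Letnikov)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = (1.0 - (alpha + 1.0) / j) * w[j - 1]
    return w

def caputo_be(f_vals, alpha, tau):
    """Approximate the Caputo derivative of order alpha at the final grid
    point, assuming f(0) = 0 on a uniform grid with step tau."""
    n = len(f_vals) - 1
    w = bdf1_cq_weights(alpha, n)
    return tau ** (-alpha) * sum(w[j] * f_vals[n - j] for j in range(n + 1))

# Sanity check on f(t) = t, whose Caputo derivative of order 1/2 is
# 2*sqrt(t/pi); backward Euler CQ is first-order accurate in tau.
tau, N, alpha = 1e-3, 1000, 0.5
t = tau * np.arange(N + 1)
approx = caputo_be(t, alpha, tau)
exact = 2.0 * sqrt(1.0 / pi)   # value at t = 1
```

The grid size, order, and test function here are illustrative choices, not taken from the paper.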

Related content

We prove an optimal $O(n \log n)$ mixing time of the Glauber dynamics for Ising models with edge activity $\beta \in \left(\frac{\Delta-2}{\Delta}, \frac{\Delta}{\Delta-2}\right)$. This mixing time bound holds even if the maximum degree $\Delta$ is unbounded. We refine the boosting technique developed in [CFYZ21] and prove a new boosting theorem by utilizing the entropic independence defined in [AJK+21]. The theorem relates the modified log-Sobolev (MLS) constant of the Glauber dynamics for a near-critical Ising model to that for an Ising model in a sub-critical regime.
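For readers unfamiliar with the chain being analyzed, a minimal sketch of one Glauber update for the Ising model with edge activity $\beta$ (a configuration has weight $\beta^{\#\text{monochromatic edges}}$); the graph, initial state, and run length below are illustrative choices:

```python
import random

def glauber_step(adj, sigma, beta, rng):
    """One step of Glauber dynamics: pick a uniformly random vertex and
    resample its spin from the conditional distribution given its
    neighbors, which is proportional to beta**(#agreeing neighbors)."""
    v = rng.randrange(len(adj))
    agree_plus = sum(1 for u in adj[v] if sigma[u] == +1)
    agree_minus = len(adj[v]) - agree_plus
    w_plus, w_minus = beta ** agree_plus, beta ** agree_minus
    sigma[v] = +1 if rng.random() < w_plus / (w_plus + w_minus) else -1

# Toy run on a 4-cycle, starting from the all-plus configuration.
rng = random.Random(0)
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
sigma = {v: +1 for v in adj}
for _ in range(1000):
    glauber_step(adj, sigma, beta=1.5, rng=rng)
```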

We consider the Dynamical Low Rank (DLR) approximation of random parabolic equations and propose a class of fully discrete numerical schemes. Similarly to the continuous DLR approximation, our schemes are shown to satisfy a discrete variational formulation. By exploiting this property, we establish stability of our schemes: we show that our explicit and semi-implicit versions are conditionally stable under a parabolic type CFL condition which does not depend on the smallest singular value of the DLR solution; whereas our implicit scheme is unconditionally stable. Moreover, we show that, in certain cases, the semi-implicit scheme can be unconditionally stable if the randomness in the system is sufficiently small. Furthermore, we show that these schemes can be interpreted as projector-splitting integrators and are strongly related to the scheme proposed by Lubich et al. [BIT Num. Math., 54:171-188, 2014; SIAM J. on Num. Anal., 53:917-941, 2015], to which our stability analysis applies as well. The analysis is supported by numerical results showing the sharpness of the obtained stability conditions.
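The first-order projector-splitting (KSL) integrator of Lubich et al. mentioned above can be sketched with explicit Euler substeps. The test problem $F(Y)=Y$ with a rank-one initial value, for which the exact solution $e^t\,uv^\top$ stays rank one, is our own toy choice, not one from the paper:

```python
import numpy as np

def ksl_step(U, S, Vt, F, h):
    """One projector-splitting (KSL) step for the low-rank matrix ODE
    dY/dt = F(Y) with Y ~ U S Vt, using explicit Euler in each substep."""
    K = U @ S + h * F(U @ S @ Vt) @ Vt.T          # K-step: evolve U*S
    U, R = np.linalg.qr(K)
    S = R - h * U.T @ F(U @ R @ Vt) @ Vt.T        # S-step: runs backwards
    L = Vt.T @ S.T + h * F(U @ S @ Vt).T @ U      # L-step: evolve V*S^T
    V, R2 = np.linalg.qr(L)
    return U, R2.T, V.T

# Rank-1 test problem dY/dt = Y with Y(0) = u v^T; exact solution e^t u v^T.
rng = np.random.default_rng(0)
u, v = rng.standard_normal((4, 1)), rng.standard_normal((5, 1))
U, R = np.linalg.qr(u)
V, R2 = np.linalg.qr(v)
S, Vt = R @ R2.T, V.T
h, n = 1e-3, 1000
for _ in range(n):
    U, S, Vt = ksl_step(U, S, Vt, lambda Y: Y, h)
err = np.linalg.norm(U @ S @ Vt - np.e * (u @ v.T))
```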

We define the notion of a leftmost separator of size at most $k$: a minimal separator $S$ between two given vertex sets $X$ and $Y$ that cannot be moved further towards $X$ while keeping $|S|$ below the threshold $k$. One motivation is that leftmost separators can be used to improve the time complexity of treewidth approximation. Treewidth approximation is known to admit an FPT algorithm that is linear in the input size and single exponential in the parameter, the treewidth; it is not known whether this can be improved theoretically. However, the coefficient of the parameter $k$ (the treewidth) in the exponent is large. Our goal is therefore to decrease this coefficient in order to obtain a more practical algorithm; to this end, we trade a linear-time algorithm for an $\mathcal{O}(n \log n)$-time one. The previously known $\mathcal{O}(f(k)\, n \log n)$-time algorithms have dependencies of $2^{24k}k!$, $2^{8.766k}k^2$ (a sharper analysis gives $2^{7.671k}k^2$), and higher. In this paper, we present an algorithm for treewidth approximation that runs in time $\mathcal{O}(2^{6.755k}\, n \log n)$. Furthermore, we count the leftmost separators and give a tight upper bound: the number of leftmost separators of size at most $k$ is at most the Catalan number $C_{k-1}$. Finally, we present an algorithm that outputs all leftmost separators in time $\mathcal{O}(\frac{4^k}{\sqrt{k}}n)$.
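The Catalan-number bound can be made concrete with a small helper; the range of $k$ shown is purely illustrative:

```python
from math import comb

def catalan(n):
    """n-th Catalan number C_n = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

# The abstract bounds the number of leftmost separators of size <= k
# by C_{k-1}; these are the bounds for k = 1..5.
bounds = [catalan(k - 1) for k in range(1, 6)]
```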

We study the $c$-approximate near neighbor problem under the continuous Fr\'echet distance: Given a set of $n$ polygonal curves with $m$ vertices, a radius $\delta > 0$, and a parameter $k \leq m$, we want to preprocess the curves into a data structure that, given a query curve $q$ with $k$ vertices, either returns an input curve with Fr\'echet distance at most $c\cdot \delta$ to $q$, or returns that there exists no input curve with Fr\'echet distance at most $\delta$ to $q$. We focus on the case where the input and the queries are one-dimensional polygonal curves -- also called time series -- and we give a comprehensive analysis for this case. We obtain new upper bounds that provide different tradeoffs between approximation factor, preprocessing time, and query time. Our data structures improve upon the state of the art in several ways. We show that for any $0 < \varepsilon \leq 1$ an approximation factor of $(1+\varepsilon)$ can be achieved within the same asymptotic time bounds as the previously best result for $(2+\varepsilon)$. Moreover, we show that an approximation factor of $(2+\varepsilon)$ can be obtained by using preprocessing time and space $O(nm)$, which is linear in the input size, and query time in $O(\frac{1}{\varepsilon})^{k+2}$, where the previously best result used preprocessing time in $n \cdot O(\frac{m}{\varepsilon k})^k$ and query time in $O(1)^k$. We complement our upper bounds with matching conditional lower bounds based on the Orthogonal Vectors Hypothesis. Interestingly, some of our lower bounds already hold for any super-constant value of $k$. This is achieved by proving hardness of a one-sided sparse version of the Orthogonal Vectors problem as an intermediate problem, which we believe to be of independent interest.
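As background, the classic quadratic-time dynamic program computes the *discrete* Fréchet distance between two time series; the paper itself works with the continuous Fréchet distance, for which this is only a related sketch:

```python
from functools import lru_cache

def discrete_frechet(p, q):
    """Discrete Frechet distance between two 1-d polygonal curves (time
    series) p and q, via the standard O(|p|*|q|) dynamic program."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = abs(p[i] - q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(p) - 1, len(q) - 1)
```

For example, a curve with a spike of height 2 has discrete Fréchet distance 2 from a flat curve, since the spike vertex must be matched somewhere.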

Given a $k$-vertex-connected graph $G$ and a set $S$ of extra edges (links), the goal of the $k$-vertex-connectivity augmentation problem is to find a set $S' \subseteq S$ of minimum size such that adding $S'$ to $G$ makes it $(k+1)$-vertex-connected. Unlike the edge-connectivity augmentation problem, research for the vertex-connectivity version has been sparse. In this work we present the first polynomial time approximation algorithm that improves the known ratio of 2 for $2$-vertex-connectivity augmentation, for the case in which $G$ is a cycle. This is the first step for attacking the more general problem of augmenting a $2$-connected graph. Our algorithm is based on local search and attains an approximation ratio of $1.8704$. To derive it, we prove novel results on the structure of minimal solutions.

In this work, we investigate the recovery of a parameter in a diffusion process given by the order of derivation in time for a class of diffusion type equations, including both classical and time-fractional diffusion equations, from the flux measurement observed at one point on the boundary. The mathematical model for time-fractional diffusion equations involves a Djrbashian-Caputo fractional derivative in time. We prove a uniqueness result in an unknown medium (e.g., diffusion coefficients, obstacle, initial condition and source), i.e., the recovery of the order of derivation in a diffusion process having several pieces of unknown information. The proof relies on the analyticity of the solution at large time, asymptotic decay behavior, strong maximum principle of the elliptic problem and suitable application of the Hopf lemma. Further we provide an easy-to-implement reconstruction algorithm based on a nonlinear least-squares formulation, and several numerical experiments are presented to complement the theoretical analysis.
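A heavily simplified toy version of the least-squares idea: at large times, solutions of the time-fractional model decay polynomially like $t^{-\alpha}$, so on synthetic data of that form the order can be recovered by linear least squares in log-log coordinates. The data, constants, and noise level below are invented for illustration and are not the paper's reconstruction algorithm:

```python
import numpy as np

# Synthetic "large-time flux" observations obs(t) = C * t**(-alpha) with
# small multiplicative noise.
alpha_true, C = 0.7, 2.0
t = np.linspace(10.0, 100.0, 50)
rng = np.random.default_rng(1)
obs = C * t ** (-alpha_true) * (1.0 + 1e-3 * rng.standard_normal(t.size))

# log(obs) = log(C) - alpha * log(t): fit a line by linear least squares.
A = np.vstack([np.log(t), np.ones_like(t)]).T
slope, intercept = np.linalg.lstsq(A, np.log(obs), rcond=None)[0]
alpha_rec = -slope
```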

A finite element analysis of a Dirichlet boundary control problem governed by the linear parabolic equation is presented in this article. The Dirichlet control is considered in a closed and convex subset of the energy space $H^1(\Omega \times(0,T))$. We prove well-posedness and discuss some regularity results for the control problem, and we derive the optimality system for the optimal control problem. The first-order necessary optimality condition results in a simplified Signorini-type problem for the control variable. The space discretization of the state variable is done using conforming finite elements, whereas the time discretization is based on discontinuous Galerkin methods. To discretize the control we use conforming prismatic Lagrange finite elements. We derive an optimal order of convergence of the error in the control, state, and adjoint state. The theoretical results are corroborated by some numerical tests.

UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical scalable algorithm that applies to real world data. The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.
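One concrete ingredient of UMAP's construction is the per-point bandwidth $\sigma_i$, chosen by binary search so that the fuzzy membership strengths to the $k$ nearest neighbors sum to $\log_2 k$. A self-contained sketch of that search (the neighbor distances below are made up):

```python
import numpy as np

def smooth_knn_sigma(dists, k, n_iter=64):
    """Binary-search the bandwidth sigma so that
    sum_j exp(-(d_j - rho) / sigma) = log2(k), where rho is the distance
    to the nearest neighbor, as in UMAP's fuzzy simplicial set."""
    rho = dists.min()
    target = np.log2(k)
    lo, hi, sigma = 0.0, np.inf, 1.0
    for _ in range(n_iter):
        s = np.exp(-(dists - rho) / sigma).sum()
        if abs(s - target) < 1e-9:
            break
        if s > target:
            hi = sigma
            sigma = (lo + hi) / 2.0
        else:
            lo = sigma
            sigma = sigma * 2.0 if hi == np.inf else (lo + hi) / 2.0
    return sigma

dists = np.array([0.1, 0.4, 0.5, 0.9, 1.3])   # distances to 5 neighbors
sigma = smooth_knn_sigma(dists, k=5)
memberships = np.exp(-(dists - dists.min()) / sigma)
```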

Importance sampling is one of the most widely used variance reduction strategies in Monte Carlo rendering. In this paper, we propose a novel importance sampling technique that uses a neural network to learn how to sample from a desired density represented by a set of samples. Our approach considers an existing Monte Carlo rendering algorithm as a black box. During a scene-dependent training phase, we learn to generate samples with a desired density in the primary sample space of the rendering algorithm using maximum likelihood estimation. We leverage a recent neural network architecture that was designed to represent real-valued non-volume preserving ('Real NVP') transformations in high dimensional spaces. We use Real NVP to non-linearly warp primary sample space and obtain desired densities. In addition, Real NVP efficiently computes the determinant of the Jacobian of the warp, which is required to implement the change of integration variables implied by the warp. A main advantage of our approach is that it is agnostic of underlying light transport effects, and can be combined with many existing rendering techniques by treating them as a black box. We show that our approach leads to effective variance reduction in several practical scenarios.
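A minimal numpy sketch of the Real NVP affine coupling transform, with trivial stand-in functions in place of trained networks: half the coordinates pass through unchanged and parameterize an elementwise affine map of the rest, so the log-determinant of the Jacobian is simply the sum of the scale outputs:

```python
import numpy as np

def coupling_forward(x, s_net, t_net, mask):
    """Real NVP affine coupling layer: masked coordinates are copied,
    the rest are mapped x -> x * exp(s) + t with s, t functions of the
    masked half. Returns (y, log|det Jacobian|)."""
    x_fixed = x * mask
    s = s_net(x_fixed) * (1 - mask)
    t = t_net(x_fixed) * (1 - mask)
    y = x_fixed + (1 - mask) * (x * np.exp(s) + t)
    return y, s.sum(axis=-1)

def coupling_inverse(y, s_net, t_net, mask):
    """Exact inverse: recompute s, t from the unchanged half and undo
    the affine map."""
    y_fixed = y * mask
    s = s_net(y_fixed) * (1 - mask)
    t = t_net(y_fixed) * (1 - mask)
    return y_fixed + (1 - mask) * ((y - t) * np.exp(-s))

# Tiny stand-in "networks" (any functions of the masked half work).
s_net = lambda z: np.tanh(z @ np.full((4, 4), 0.1))
t_net = lambda z: z @ np.full((4, 4), 0.2)
mask = np.array([1.0, 1.0, 0.0, 0.0])

x = np.array([0.3, -1.2, 0.7, 2.0])
y, logdet = coupling_forward(x, s_net, t_net, mask)
x_back = coupling_inverse(y, s_net, t_net, mask)
```

The cheap log-determinant is exactly the property the abstract highlights for implementing the change of integration variables.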

Stochastic gradient Markov chain Monte Carlo (SGMCMC) has become a popular method for scalable Bayesian inference. These methods are based on sampling a discrete-time approximation to a continuous-time process, such as the Langevin diffusion. When applied to distributions defined on a constrained space, such as the simplex, the time-discretization error can dominate when we are near the boundary of the space. We demonstrate that while current SGMCMC methods for the simplex perform well in certain cases, they struggle with sparse simplex spaces, i.e., when many of the components are close to zero. However, most popular large-scale applications of Bayesian inference on simplex spaces, such as network or topic models, are sparse. We argue that this poor performance is due to the biases of SGMCMC caused by the discretization error. To get around this, we propose the stochastic CIR process, which removes all discretization error, and we prove that samples from the stochastic CIR process are asymptotically unbiased. Use of the stochastic CIR process within an SGMCMC algorithm is shown to give substantially better performance for a topic model and a Dirichlet process mixture model than existing SGMCMC approaches.
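The key property of the CIR process exploited here is that its transition law is known in closed form (a scaled noncentral chi-squared), so it can be sampled over any step size with no discretization error. A sketch with invented parameter values (this illustrates only the exact transition, not the full SGMCMC algorithm):

```python
import numpy as np

def cir_exact_step(theta, a, b, sigma, h, rng):
    """Exact (discretization-free) transition over time h of the CIR
    process d(theta) = (a - b*theta) dt + sigma*sqrt(theta) dW, sampled
    from its noncentral chi-squared transition law."""
    c = sigma ** 2 * (1.0 - np.exp(-b * h)) / (4.0 * b)
    df = 4.0 * a / sigma ** 2
    nc = theta * np.exp(-b * h) / c
    return c * rng.noncentral_chisquare(df, nc, size=np.shape(theta))

# Many independent one-step transitions from theta = 0.5; the sample mean
# should match the known conditional mean of the CIR transition.
rng = np.random.default_rng(0)
a, b, sigma, h = 2.0, 1.0, 1.0, 0.1
theta0 = np.full(100_000, 0.5)
theta1 = cir_exact_step(theta0, a, b, sigma, h, rng)
mean_exact = (a / b) * (1.0 - np.exp(-b * h)) + 0.5 * np.exp(-b * h)
```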
