
We present a numerical stability analysis of the immersed boundary (IB) method for a special case constructed so that Fourier analysis is applicable. We examine the stability of the IB method using discrete Fourier transforms defined separately on the fluid grid and the boundary grid. This approach yields accurate theoretical results for the stability boundary, since it takes into account the effect of the spreading kernel of the IB method on numerical stability. In this paper, the spreading kernel is the standard 4-point IB delta function. A three-dimensional incompressible viscous flow and a no-slip planar boundary are considered. The case of a planar elastic membrane is also analyzed within the same framework, serving as an example of the many possible generalizations of our theory. We present numerical results and show that the observed stability behavior is consistent with the predictions of our theory.
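For reference, the standard 4-point IB delta function mentioned above is Peskin's piecewise kernel; below is a minimal NumPy sketch of its one-dimensional profile $\phi$ (the function name `phi4` is ours; the three-dimensional kernel is the tensor product $\phi(x/h)\,\phi(y/h)\,\phi(z/h)/h^3$ on a grid of spacing $h$).

```python
import numpy as np

def phi4(r):
    """One-dimensional profile of the standard 4-point IB delta function
    (Peskin's kernel); phi4 is supported on |r| < 2 grid cells."""
    r = np.abs(np.atleast_1d(np.asarray(r, dtype=float)))
    # np.maximum(..., 0) guards the sqrt where the branch is not selected.
    inner = (3.0 - 2.0 * r + np.sqrt(np.maximum(1.0 + 4.0 * r - 4.0 * r**2, 0.0))) / 8.0
    outer = (5.0 - 2.0 * r - np.sqrt(np.maximum(-7.0 + 12.0 * r - 4.0 * r**2, 0.0))) / 8.0
    return np.where(r < 1.0, inner, np.where(r < 2.0, outer, 0.0))

# Zeroth-moment identity: the kernel values at the nearest grid points
# sum to 1 for any shift r.
r = 0.3
print(sum(phi4(r - j)[0] for j in range(-2, 3)))  # -> 1.0
```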

Related content

The present paper continues our investigation of an implementation of a least-squares collocation method for higher-index differential-algebraic equations. In earlier papers, we substantiated the choice of basis functions and collocation points for a robust implementation, as well as algorithms for the solution of the discrete system. The present paper is devoted to analytic estimates of the condition numbers of the different components of an implementation. We present error estimates that show the sources of the different errors.
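To illustrate the general idea of least-squares collocation (a toy sketch under our own assumptions, not the paper's higher-index DAE implementation): expand the unknown in a polynomial basis, enforce the residual at more collocation points than there are unknowns, and solve the resulting overdetermined system in the least-squares sense.

```python
import numpy as np

# Toy problem: x'(t) = -x(t), x(0) = 1 on [0, 1], exact solution exp(-t).
deg = 6                                    # polynomial degree (deg + 1 unknowns)
tc = np.linspace(0.0, 1.0, 3 * (deg + 1))  # oversampled collocation points

# Residual rows of x'(t) + x(t) = 0 at each collocation point.
V = np.vander(tc, deg + 1, increasing=True)             # basis values x(t_c)
dV = np.hstack([np.zeros((tc.size, 1)),
                V[:, :-1] * np.arange(1, deg + 1)])     # derivatives x'(t_c)
A = dV + V
b = np.zeros(tc.size)

# Append the initial condition x(0) = 1 as one more equation.
A = np.vstack([A, np.vander([0.0], deg + 1, increasing=True)])
b = np.append(b, 1.0)

coef, *_ = np.linalg.lstsq(A, b, rcond=None)
print("x(1) ~", np.polyval(coef[::-1], 1.0), " exact:", np.exp(-1.0))
```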

Let $G=(V,E)$ be an undirected unweighted planar graph. Consider a vector storing the distances from an arbitrary vertex $v$ to the vertices $S = \{ s_1 , s_2 , \ldots , s_k \}$ of a single face, in their cyclic order. The pattern of $v$ is obtained by taking the difference between every pair of consecutive values of this vector. In STOC'19, Li and Parter used a VC-dimension argument to show that in planar graphs, the number of distinct patterns, denoted $x$, is only $O(k^3)$. This resulted in a simple compression scheme requiring $\tilde O(\min \{ k^4+|T|, k\cdot |T|\})$ space to encode the distances between $S$ and a subset of terminal vertices $T \subseteq V$. This is known as the Okamura-Seymour metric compression problem. We give an alternative proof of the $x=O(k^3)$ bound that exploits planarity beyond the VC-dimension argument. Namely, our proof relies on cut-cycle duality, as well as on the fact that distances among vertices of $S$ are bounded by $k$. Our method implies the following: (1) An $\tilde{O}(x+k+|T|)$ space compression of the Okamura-Seymour metric, thus improving the compression of Li and Parter to $\tilde O(\min \{k^3+|T|,k \cdot |T| \})$. (2) An optimal $\tilde{O}(k+|T|)$ space compression of the Okamura-Seymour metric, in the case where the vertices of $T$ induce a connected component in $G$. (3) A tight bound of $x = \Theta(k^2)$ for the family of Halin graphs, whereas the VC-dimension argument is limited to showing $x=O(k^3)$.
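To make the pattern definition concrete, here is a minimal sketch (plain Python; the helper names and adjacency-list representation are our own) that computes the pattern of a vertex $v$ with respect to the face vertices via breadth-first search. Since consecutive face vertices are adjacent in $G$, every entry of the pattern lies in $\{-1, 0, 1\}$.

```python
from collections import deque

def bfs_distances(adj, v):
    """Single-source distances from v in an unweighted graph;
    adj maps each vertex to an iterable of its neighbors."""
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def pattern(adj, v, face):
    """Differences between consecutive distances from v to the
    face vertices, taken in their cyclic order along the face."""
    dist = bfs_distances(adj, v)
    d = [dist[s] for s in face]
    return tuple(d[i + 1] - d[i] for i in range(len(d) - 1))
```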

Since the celebrated works of Russo and Zou (2016, 2019) and Xu and Raginsky (2017), it has been well known that the generalization error of supervised learning algorithms can be bounded in terms of the mutual information between their input and the output, given that the loss of any fixed hypothesis has a subgaussian tail. In this work, we generalize this result beyond the standard choice of Shannon's mutual information to measure the dependence between the input and the output. Our main result shows that it is indeed possible to replace the mutual information by any strongly convex function of the joint input-output distribution, with the subgaussianity condition on the losses replaced by a bound on an appropriately chosen norm capturing the geometry of the dependence measure. This allows us to derive a range of generalization bounds that are either entirely new or strengthen previously known ones. Examples include bounds stated in terms of $p$-norm divergences and the Wasserstein-2 distance, which are respectively applicable for heavy-tailed loss distributions and highly smooth loss functions. Our analysis is based entirely on elementary tools from convex analysis, tracking the growth of a potential function associated with the dependence measure and the loss function.
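For context, the classical result being generalized can be stated as follows (notation ours): if the loss $\ell(w, Z)$ is $\sigma$-subgaussian under the data distribution for every fixed hypothesis $w$, then the bound of Xu and Raginsky (2017) reads

$$
\left| \mathbb{E}\left[ \mathcal{L}(W) - \widehat{\mathcal{L}}_n(W) \right] \right| \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(S;W)},
$$

where $S$ is the training sample of size $n$, $W$ is the output of the algorithm, $\mathcal{L}$ is the population risk, $\widehat{\mathcal{L}}_n$ is the empirical risk, and $I(S;W)$ is the Shannon mutual information between input and output.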

A stabilized finite element method is introduced for the simulation of time-periodic creeping flows, such as those found in the cardiorespiratory system. The new technique, which is formulated in the frequency domain rather than the time domain, uses strictly real arithmetic and permits the use of the same shape functions for pressure and velocity for ease of implementation. It involves the addition of the Laplacian of pressure to the continuity equation, with a complex-valued stabilization parameter that is derived systematically from the momentum equation. Numerical experiments show the excellent accuracy and robustness of the proposed method in simulating flows in canonical and complex geometries over a wide range of conditions. The present method significantly outperforms a traditional solver in terms of both computational cost and scalability, lowering the overall solution turnaround time by several orders of magnitude.
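Schematically, in our notation (inferred from the description above, not the paper's exact formulation), the stabilized continuity equation for a single frequency mode reads

$$
\nabla \cdot \hat{\mathbf{u}} - \tau\, \Delta \hat{p} = 0,
$$

where $\hat{\mathbf{u}}$ and $\hat{p}$ are the velocity and pressure amplitudes of the mode and $\tau \in \mathbb{C}$ is the stabilization parameter derived from the momentum equation; working with the real and imaginary parts of each mode separately keeps the computation in real arithmetic.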

Sparse Principal Component Analysis (PCA) is a prevalent tool across a plethora of subfields of applied statistics. While several results have characterized the recovery error of the principal eigenvectors, these are typically stated in the spectral or Frobenius norms. In this paper, we provide entrywise $\ell_{2,\infty}$ bounds for Sparse PCA under a general high-dimensional subgaussian design. In particular, our results hold for any algorithm that selects the correct support with high probability, i.e., any algorithm that is sparsistent. Our bound improves upon known results by providing a finer characterization of the estimation error, and our proof uses techniques recently developed for entrywise subspace perturbation theory.
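For reference, the $\ell_{2,\infty}$ norm of a matrix $M \in \mathbb{R}^{n \times r}$ is the largest Euclidean norm of its rows,

$$
\|M\|_{2,\infty} \;=\; \max_{1 \le i \le n} \|M_{i,\cdot}\|_2 ,
$$

and since $\|M\|_{2,\infty} \le \|M\|_2 \le \|M\|_F$, a bound in this norm gives uniform per-row (entrywise) control of the estimation error, which is finer than bounds in the spectral or Frobenius norms.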

A finite permutation group $G$ on $\Omega$ is called a rank 3 group if it has precisely three orbits in its induced action on $\Omega \times \Omega$. The largest permutation group on $\Omega$ having the same orbits as $G$ on $\Omega \times \Omega$ is called the 2-closure of $G$. We construct a polynomial-time algorithm which, given generators of a rank 3 group, computes generators of its 2-closure.

This paper presents a hybrid numerical method for linear collisional kinetic equations with diffusive scaling. The aim of the method is to reduce the computational cost of kinetic equations by taking advantage of the lower dimensionality of the asymptotic fluid model while reducing the error that the latter approach induces. It relies on two criteria, motivated by a perturbative approach, to obtain a dynamic domain decomposition. The first criterion quantifies how far the particle distribution function is from a local equilibrium in velocity. The second depends only on macroscopic quantities, which are available on the whole computational domain. Interface conditions are handled using a micro-macro decomposition, and the method is significantly more efficient than a standard full kinetic approach. Some properties of the hybrid method, such as conservation of mass, are also investigated.
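As an illustration of the first criterion (a schematic sketch under our own assumptions, not the paper's exact indicator): compare the velocity distribution on each spatial cell against the Maxwellian with the same moments, and flag the cell as "fluid" when the relative deviation falls below a threshold.

```python
import numpy as np

def equilibrium_indicator(f, v, tol=1e-3):
    """Schematic 'distance to local equilibrium' criterion for one spatial
    cell, in one velocity dimension for simplicity.

    f : distribution values on the uniform velocity grid v.
    Returns (is_fluid, deviation): the cell may be handled by the asymptotic
    fluid model when the deviation from the local Maxwellian is small.
    """
    dv = v[1] - v[0]
    rho = np.sum(f) * dv                      # density
    u = np.sum(f * v) * dv / rho              # bulk velocity
    T = np.sum(f * (v - u) ** 2) * dv / rho   # temperature (k_B = m = 1)
    maxwellian = rho / np.sqrt(2 * np.pi * T) * np.exp(-(v - u) ** 2 / (2 * T))
    deviation = np.sum(np.abs(f - maxwellian)) * dv / rho
    return deviation < tol, deviation
```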

We describe a prototype of a new experimental GeoGebra command and tool, Discover, that analyzes geometric figures for salient patterns, properties, and theorems. This tool is a basic implementation of automated discovery in elementary planar geometry. The paper focuses on the mathematical background of the implementation, as well as methods to avoid combinatorial explosion when storing the interesting properties of a geometric figure.

Deep learning (DL) has become an integral part of solutions to various important problems, which is why ensuring the quality of DL systems is essential. One of the challenges of achieving reliability and robustness of DL software is ensuring that algorithm implementations are numerically stable. DL algorithms require a large amount and a wide variety of numerical computations. A naive implementation of a numerical computation can introduce errors that result in incorrect or inaccurate learning and results. A numerical algorithm or a mathematical formula can have several implementations that are mathematically equivalent but have different numerical stability properties. Designing numerically stable algorithm implementations is challenging, because it requires interdisciplinary knowledge of software engineering, DL, and numerical analysis. In this paper, we study two mature DL libraries, PyTorch and TensorFlow, with the goal of identifying unstable numerical methods and their solutions. Specifically, we investigate which DL algorithms are numerically unstable and conduct an in-depth analysis of the root causes, manifestations, and patches of numerical instabilities. Based on these findings, we launch {\it DeepStability}, the first database of numerical stability issues and solutions in DL. Our findings and {\it DeepStability} provide a reference for developers and tool builders to prevent, detect, localize, and fix numerically unstable algorithm implementations. To demonstrate this, we used {\it DeepStability} to locate numerical stability issues in TensorFlow and submitted a fix, which has been accepted and merged.
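The phenomenon described above can be illustrated with a textbook example (a standalone sketch, not drawn from the paper's database): two mathematically equivalent softmax implementations with very different numerical behavior.

```python
import numpy as np

def softmax_naive(x):
    """Mathematically correct, but exp overflows for large inputs."""
    e = np.exp(x)
    return e / e.sum()

def softmax_stable(x):
    """Equivalent formula: shifting by max(x) leaves the result unchanged
    (the constant factor cancels) but keeps exp in a safe range."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

x = np.array([1000.0, 1001.0, 1002.0])
print(softmax_naive(x))   # [nan nan nan] -- overflow in exp
print(softmax_stable(x))  # [0.09003057 0.24472847 0.66524096]
```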

Knowledge distillation is a strategy for training a student network under the guidance of the soft output of a teacher network. It has been a successful method for model compression and knowledge transfer. However, knowledge distillation currently lacks a convincing theoretical understanding. On the other hand, recent findings on the neural tangent kernel enable us to approximate a wide neural network with a linear model of the network's random features. In this paper, we theoretically analyze knowledge distillation for a wide neural network. First, we provide a transfer risk bound for the linearized model of the network. Then we propose a metric of the training difficulty of the task, called data inefficiency. Based on this metric, we show that for a perfect teacher, a high ratio of the teacher's soft labels can be beneficial. Finally, for the case of an imperfect teacher, we find that hard labels can correct the teacher's wrong predictions, which explains the practice of mixing hard and soft labels.
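For concreteness, the common way of mixing hard and soft labels referenced above is Hinton-style distillation; in this sketch, the weight `alpha` and temperature `T` are hypothetical hyperparameters, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      alpha=0.5, T=4.0):
    """Convex mix of hard-label cross-entropy and soft-label KL divergence.

    alpha = 1 recovers pure hard-label training; alpha = 0 is pure
    distillation from the teacher's temperature-softened outputs.
    """
    hard = F.cross_entropy(student_logits, hard_labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # standard T^2 factor keeps the gradient scale comparable
    return alpha * hard + (1.0 - alpha) * soft
```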
