
In this article we analyze the error produced by the removal of an arbitrary knot from a spline function. When a knot has multiplicity greater than one, removal means reducing its multiplicity by one. In particular, we deduce a very simple formula to compute the error in terms of some neighboring knots and a few control points of the considered spline. Furthermore, we show precisely how this error is related to the jump of a derivative of the spline at the knot. We then use the developed theory to propose efficient and very low-cost local error indicators and adaptive coarsening algorithms. Finally, we present some numerical experiments to illustrate their performance and show some applications.
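
As a rough numerical companion to this setting, the sketch below (assuming SciPy's `BSpline` and `LSQUnivariateSpline`, with a dense discrete least-squares refit standing in for the exact best approximation on the reduced knot vector) measures the error caused by removing one interior knot of a cubic spline and, alongside it, the jump of the third derivative at that knot, which the abstract relates to this error; the paper's exact proportionality factor is not reproduced here.

```python
import numpy as np
from scipy.interpolate import BSpline, LSQUnivariateSpline

# Hypothetical cubic spline on [0, 1] with a few interior knots.
k = 3
interior = np.array([0.2, 0.4, 0.6, 0.8])
t = np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)]      # full knot vector
c = np.random.default_rng(0).normal(size=len(t) - k - 1)   # control points
s = BSpline(t, c, k)

# Removal error proxy: best discrete least-squares refit on the reduced
# knot vector (the knot 0.4 removed), measured on a dense grid.
x = np.linspace(0.0, 1.0, 2001)
reduced = np.array([0.2, 0.6, 0.8])
s_reduced = LSQUnivariateSpline(x, s(x), reduced, k=k)
removal_error = np.max(np.abs(s(x) - s_reduced(x)))

# Jump of the k-th derivative at the removed knot, the quantity the abstract
# relates to the removal error (exact factor involving neighbouring knots
# omitted here).
eps = 1e-9
d = s.derivative(k)
jump = d(0.4 + eps) - d(0.4 - eps)
print(removal_error, jump)
```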

Related Content

Symmetry is a cornerstone of much of mathematics, and many probability distributions possess symmetries characterized by their invariance to a collection of group actions. Thus, many mathematical and statistical methods rely on such symmetry holding and ostensibly fail if symmetry is broken. This work considers under what conditions a sequence of probability measures asymptotically gains such symmetry or invariance to a collection of group actions. Considering the many symmetries of the Gaussian distribution, this work effectively proposes a non-parametric type of central limit theorem. That is, a Lipschitz function of a high dimensional random vector will be asymptotically invariant to the actions of certain compact topological groups. Applications of this include a partial law of the iterated logarithm for uniformly random points in an $\ell_p^n$-ball and an asymptotic equivalence between classical parametric statistical tests and their randomization counterparts even when invariance assumptions are violated.
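
As a hedged illustration of the last application mentioned above, the following simulation (the setup and parameters are ours, not the paper's) compares a one-sample t-test with its sign-flipping randomization counterpart on skewed errors, where exact symmetry is violated but, per the abstract, the two tests should agree for large samples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 2000
x = rng.exponential(1.0, size=n) - 1.0          # mean zero, but not symmetric

# Classical parametric test.
t_p = stats.ttest_1samp(x, 0.0).pvalue

# Sign-flipping randomization test for the mean.
obs = abs(x.mean())
flips = rng.choice([-1.0, 1.0], size=(5000, n))
rand_stats = np.abs((flips * x).mean(axis=1))
rand_p = (1 + np.sum(rand_stats >= obs)) / (1 + len(rand_stats))

print(f"t-test p = {t_p:.4f}, randomization p = {rand_p:.4f}")
```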

In this paper, we study two graph convexity parameters: iteration time and general position number. The iteration time was defined in 1981 in the geodesic convexity, but its computational complexity has remained open until now. The general position number was defined in the geodesic convexity and proved $\NP$-hard in 2018. We extend these parameters to any graph convexity and prove that the iteration time is $\NP$-hard in the $P_3$ convexity. We use this result to prove that the iteration time is also $\NP$-hard in the geodesic convexity, even in graphs with diameter two, settling a long-standing open question. These results are also important because they are the last two missing $\NP$-hardness results among the ten most studied graph convexity parameters in the geodesic and $P_3$ convexities. We also prove that the general position number in the monophonic convexity is $W[1]$-hard (parameterized by the size of the solution) and $n^{1-\varepsilon}$-inapproximable in polynomial time for any $\varepsilon>0$ unless $\P=\NP$, even in graphs with diameter two. Finally, we obtain FPT results on the general position number in the $P_3$ convexity and prove that it is $W[1]$-hard (parameterized by the size of the solution).
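
For concreteness, the sketch below (using networkx, and reading "iteration time" as the maximum number of rounds of the $P_3$ interval operator over all seed sets, one common reading of the definition) brute-forces the parameter on a tiny graph; the abstract's point is that computing it is $\NP$-hard in general, so the brute force is purely illustrative.

```python
from itertools import combinations
import networkx as nx

def p3_hull_iterations(G, seed):
    """Iterate the P3 interval operator (a vertex joins when it has at
    least two neighbours already inside) and return (hull, #iterations)."""
    current = set(seed)
    rounds = 0
    while True:
        new = {v for v in G if v not in current
               and sum(1 for u in G[v] if u in current) >= 2}
        if not new:
            return current, rounds
        current |= new
        rounds += 1

# Illustrative brute force of the iteration time (maximum number of rounds
# over all seed sets) on a small graph.
G = nx.cycle_graph(6)
G.add_edges_from([(0, 3), (1, 4)])
t = max(p3_hull_iterations(G, S)[1]
        for r in range(len(G) + 1) for S in combinations(G.nodes, r))
print("iteration time:", t)
```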

In two and three dimensions, we design and analyze a posteriori error estimators for the mixed Stokes eigenvalue problem. The unknowns in this mixed formulation are the pseudostress, velocity and pressure. With a lowest order mixed finite element scheme, together with a postprocessing technique, we prove that the proposed estimator is reliable and efficient. We illustrate the results with several numerical tests in two and three dimensions in order to assess the performance of the estimator.

This survey is concerned with the power of random information for approximation in the (deterministic) worst-case setting, with special emphasis on information that is obtained independently and identically distributed (iid) from a given distribution on a class of admissible information. We present a general result based on a weighted least squares method and derive consequences for special cases. Improvements are available if the information is "Gaussian" or if we consider iid function values for Sobolev spaces. We include open questions to guide future research on the power of random information in the context of information-based complexity.
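
As a small, hedged illustration of recovery from iid information by least squares (with unit weights here; the survey's weighted variant and its sampling requirements are not reproduced), the sketch below fits Legendre coefficients to iid uniform samples of a smooth function.

```python
import numpy as np

def f(x):
    # A smooth target function on [-1, 1], chosen arbitrarily.
    return np.exp(np.sin(3 * x))

rng = np.random.default_rng(2)
m, n = 10, 100                       # basis dimension, number of iid samples
x = rng.uniform(-1.0, 1.0, size=n)   # iid information: function values at x

# Unweighted least squares onto the first m Legendre polynomials.
V = np.polynomial.legendre.legvander(x, m - 1)
coef, *_ = np.linalg.lstsq(V, f(x), rcond=None)

xx = np.linspace(-1.0, 1.0, 2001)
err = np.max(np.abs(f(xx) - np.polynomial.legendre.legval(xx, coef)))
print(f"sup-norm error with n={n} iid samples: {err:.2e}")
```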

Factor models are widely used for dimension reduction in the analysis of multivariate data. This is achieved by decomposing a $p \times p$ covariance matrix into the sum of two components. Through a latent factor representation, these components can be interpreted as a diagonal matrix of idiosyncratic variances and a shared variation matrix, that is, the product of a $p \times k$ factor loadings matrix and its transpose. If $k \ll p$, this defines a sparse factorisation of the covariance matrix. Historically, little attention has been paid to incorporating prior information in Bayesian analyses using factor models where, at best, the prior for the factor loadings is order invariant. In this work, a class of structured priors is developed that can encode ideas of dependence structure about the shared variation matrix. The construction allows data-informed shrinkage towards sensible parametric structures while also facilitating inference over the number of factors. Using an unconstrained reparameterisation of stationary vector autoregressions, the methodology is extended to stationary dynamic factor models. For computational inference, parameter-expanded Markov chain Monte Carlo samplers are proposed, including an efficient adaptive Gibbs sampler. Two substantive applications showcase the scope of the methodology and its inferential benefits.
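
A minimal sketch of the decomposition the abstract refers to, with hypothetical dimensions and simulated data (no prior structure or MCMC involved):

```python
import numpy as np

# Covariance decomposition Sigma = Lambda Lambda^T + diag(psi), with a
# p x k loadings matrix Lambda and idiosyncratic variances psi, k << p.
rng = np.random.default_rng(3)
p, k = 20, 3
Lambda = rng.normal(size=(p, k))          # factor loadings
psi = rng.uniform(0.5, 1.5, size=p)       # idiosyncratic variances
Sigma = Lambda @ Lambda.T + np.diag(psi)  # implied covariance

# Data generated from the latent factor representation y = Lambda f + e.
n = 1000
f = rng.normal(size=(n, k))
e = rng.normal(size=(n, p)) * np.sqrt(psi)
y = f @ Lambda.T + e

# Largest deviation between the sample covariance and the implied Sigma.
print(np.max(np.abs(np.cov(y, rowvar=False) - Sigma)))
```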

In this paper, we consider an inverse space-dependent source problem for a time-fractional diffusion equation. To deal with the ill-posedness of the problem, we transform the problem into an optimal control problem with total variation (TV) regularization. In contrast to the classical Tikhonov model incorporating $L^2$ penalty terms, the inclusion of a TV term proves advantageous in reconstructing solutions that exhibit discontinuities or piecewise constancy. The control problem is approximated by a fully discrete scheme, and convergence results are provided within this framework. Furthermore, a linearized primal-dual iterative algorithm is proposed to solve the discrete control model based on an equivalent saddle-point reformulation, and several numerical experiments are presented to demonstrate the efficiency of the algorithm.
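
To make the primal-dual ingredient concrete, the sketch below implements a standard Chambolle-Pock-type primal-dual iteration for a one-dimensional TV-$L^2$ denoising toy problem; it is not the paper's linearized algorithm for the inverse source problem, only an illustration of why TV favours piecewise-constant reconstructions.

```python
import numpy as np

def tv_denoise_1d(g, lam=0.5, n_iter=300):
    """Primal-dual iteration for min_u 0.5*||u - g||^2 + lam*||D u||_1,
    with D the forward difference operator."""
    D = lambda u: np.diff(u)                            # forward difference
    Dt = lambda p: np.r_[-p[0], -np.diff(p), p[-1]]     # its adjoint
    tau = sigma = 0.25                                  # tau*sigma*||D||^2 <= 1
    u = g.copy(); u_bar = g.copy(); p = np.zeros(len(g) - 1)
    for _ in range(n_iter):
        p = np.clip(p + sigma * D(u_bar), -lam, lam)        # dual step + projection
        u_new = (u - tau * Dt(p) + tau * g) / (1 + tau)     # primal step + prox
        u_bar = 2 * u_new - u
        u = u_new
    return u

# Piecewise-constant signal with noise, where TV outperforms L2 penalties.
rng = np.random.default_rng(4)
x = np.linspace(0, 1, 200)
truth = np.where(x < 0.5, 1.0, -1.0)
noisy = truth + 0.2 * rng.normal(size=x.size)
print(np.max(np.abs(tv_denoise_1d(noisy) - truth)))
```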

In this article, we focus on the error committed when computing the matrix logarithm using Gauss--Legendre quadrature rules. These formulas can be interpreted as Pad\'e approximants of a suitable Gauss hypergeometric function. Empirical observation tells us that the convergence of these quadratures becomes slow when the matrix is not close to the identity matrix, which suggests using an inverse scaling and squaring approach to obtain a matrix with this property. The novelty of this work is the introduction of error estimates that can be used to select a priori both the number of Legendre points needed to obtain a given accuracy and the number of inverse scaling and squaring steps to be performed. We include some numerical experiments to show the reliability of the estimates introduced.
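
A minimal sketch of the underlying construction, assuming the well-known integral representation $\log(A)=\int_0^1 (A-I)\,[t(A-I)+I]^{-1}\,dt$, Gauss--Legendre nodes mapped to $[0,1]$, and a fixed number of square roots for the inverse scaling and squaring step (the paper's contribution is precisely how to choose these two counts a priori, which is not reproduced here):

```python
import numpy as np
from scipy.linalg import sqrtm, solve, logm

def logm_gauss_legendre(A, n_nodes=8, n_sqrt=5):
    """Matrix logarithm via Gauss-Legendre quadrature of the integral
    representation, after n_sqrt inverse scaling and squaring steps."""
    A = np.asarray(A, dtype=float)
    I = np.eye(A.shape[0])
    for _ in range(n_sqrt):              # inverse scaling: A <- A^(1/2)
        A = sqrtm(A)
    x, w = np.polynomial.legendre.leggauss(n_nodes)
    t, w = (x + 1) / 2, w / 2            # map nodes/weights from [-1,1] to [0,1]
    B = A - I
    L = sum(wi * B @ solve(ti * B + I, I) for ti, wi in zip(t, w))
    return (2 ** n_sqrt) * L             # squaring back: log(A) = 2^s log(A^(1/2^s))

# Quick check on a symmetric positive definite matrix.
rng = np.random.default_rng(5)
M = rng.normal(size=(5, 5))
A = M @ M.T + 5 * np.eye(5)
print(np.linalg.norm(logm_gauss_legendre(A) - logm(A)))
```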

The Navier equation is the governing equation of elastic waves, and computing its solution accurately and rapidly has a wide range of applications in geophysical exploration, materials science, etc. In this paper, we focus on the efficient and high-precision numerical algorithm for the time harmonic elastic wave scattering problems from cornered domains via the boundary integral equations in two dimensions. The approach is based on the combination of Nystr\"om discretization, analytical singular integrals and kernel-splitting method, which results in a high-order solver for smooth boundaries. It is then combined with the recursively compressed inverse preconditioning (RCIP) method to solve elastic scattering problems from cornered domains. Numerical experiments demonstrate that the proposed approach achieves high accuracy, with stabilized errors close to machine precision in various geometric configurations. The algorithm is further applied to investigate the asymptotic behavior of density functions associated with boundary integral operators near corners, and the numerical results are highly consistent with the theoretical formulas.

We hypothesize that due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the other modalities. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate the model's dependence on each modality, we compute the gain in accuracy when the model has access to it in addition to another modality. We refer to this gain as the conditional utilization rate. In the experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy for it based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
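
In code, the conditional utilization rate described above is just an accuracy difference; the helper below is a hypothetical illustration (the masking protocol used to obtain the single-modality accuracy is assumed, not taken from the paper):

```python
def conditional_utilization_rate(acc_both, acc_single):
    """Gain in accuracy from adding a modality on top of another one:
    u(m1 | m2) = acc({m1, m2}) - acc({m2}). The accuracies would come from
    evaluating the trained multi-modal model with one modality masked out."""
    return acc_both - acc_single

# Hypothetical numbers: the model barely benefits from modality 2,
# an imbalance of the kind the abstract reports.
print(conditional_utilization_rate(acc_both=0.91, acc_single=0.90))  # u(m2 | m1)
print(conditional_utilization_rate(acc_both=0.91, acc_single=0.55))  # u(m1 | m2)
```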

Hashing has been widely used in approximate nearest neighbor search for large-scale database retrieval because of its computational and storage efficiency. Deep hashing, which devises convolutional neural network architectures to exploit and extract the semantic information or features of images, has received increasing attention recently. In this survey, several deep supervised hashing methods for image retrieval are evaluated, and I identify three main directions for deep supervised hashing methods. Several comments are made at the end. Moreover, to break through the bottleneck of existing hashing methods, I propose a Shadow Recurrent Hashing (SRH) method as an attempt. Specifically, I devise a CNN architecture to extract the semantic features of images and design a loss function that encourages similar images to be projected close to one another. To this end, I propose a concept: the shadow of the CNN output. During the optimization process, the CNN output and its shadow guide each other so as to approach the optimal solution as closely as possible. Several experiments on the CIFAR-10 dataset show the satisfying performance of SRH.
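
As a generic illustration of a similarity-preserving deep hashing loss (not the SRH loss or its shadow mechanism, which the text does not specify in detail), the following PyTorch sketch pulls codes of similar images together, pushes dissimilar ones apart, and adds a quantization term:

```python
import torch
import torch.nn.functional as F

def pairwise_hashing_loss(codes, labels, margin=2.0):
    """Contrastive-style hashing loss on real-valued codes: same-label pairs
    are pulled together, different-label pairs pushed at least `margin`
    apart, plus a quantization term nudging codes towards {-1, +1}."""
    dist = torch.cdist(codes, codes)                       # pairwise distances
    same = (labels[:, None] == labels[None, :]).float()
    contrastive = same * dist ** 2 + (1 - same) * F.relu(margin - dist) ** 2
    quantization = (codes.abs() - 1.0).pow(2).mean()       # push towards +/-1
    return contrastive.mean() + 0.1 * quantization

# Toy usage with random "CNN outputs" standing in for image features.
codes = torch.randn(8, 16, requires_grad=True)   # batch of 16-bit codes
labels = torch.randint(0, 2, (8,))
loss = pairwise_hashing_loss(codes, labels)
loss.backward()
print(loss.item())
```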
