
We construct admissible polynomial meshes on piecewise polynomial or trigonometric curves of the complex plane, by mapping univariate Chebyshev points. Such meshes can be used for polynomial least-squares, for the extraction of Fekete-like and Leja-like interpolation sets, and also for the evaluation of their Lebesgue constants.
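The basic construction, mapping univariate Chebyshev points through a curve parametrization, can be sketched as follows. This is an illustrative sketch only: the curve `arc` and the mesh degree are example choices, not the paper's specific constructions.

```python
import numpy as np

def chebyshev_points(n):
    """Chebyshev-Lobatto points on [-1, 1]."""
    return np.cos(np.arange(n + 1) * np.pi / n)

def curve_mesh(gamma, n):
    """Map Chebyshev points through a curve parametrization gamma: [-1, 1] -> C."""
    return gamma(chebyshev_points(n))

# example: a circular arc in the complex plane (hypothetical test curve)
arc = lambda t: np.exp(1j * np.pi * t / 2)
mesh = curve_mesh(arc, 16)
```

On a piecewise curve the same mapping would be applied segment by segment, with the union of the per-segment meshes forming the admissible mesh.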

相關內容

We introduce ensembles of stochastic neural networks to approximate the Bayesian posterior, combining stochastic methods such as dropout with deep ensembles. The stochastic ensembles are formulated as families of distributions and trained to approximate the Bayesian posterior with variational inference. We implement stochastic ensembles based on Monte Carlo dropout, DropConnect and a novel non-parametric version of dropout and evaluate them on a toy problem and CIFAR image classification. For both tasks, we test the quality of the posteriors directly against Hamiltonian Monte Carlo simulations. Our results show that stochastic ensembles provide more accurate posterior estimates than other popular baselines for Bayesian inference.
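One ingredient of such stochastic ensembles, Monte Carlo dropout, can be sketched in a few lines: dropout is kept active at prediction time, and repeated stochastic forward passes yield a predictive mean and an uncertainty estimate. This is a minimal numpy illustration of the general idea, not the paper's ensemble architecture or training procedure; the tiny two-layer network and its weights are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_forward(x, W1, W2, p=0.5):
    """One stochastic forward pass: dropout stays active at prediction time."""
    h = np.maximum(0.0, x @ W1)          # ReLU hidden layer
    mask = rng.random(h.shape) > p       # random dropout mask
    h = h * mask / (1.0 - p)             # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, W1, W2, T=100):
    """Predictive mean and uncertainty from T stochastic passes."""
    samples = np.stack([mc_dropout_forward(x, W1, W2) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

# hypothetical random weights for a 3 -> 16 -> 2 network
W1 = rng.standard_normal((3, 16))
W2 = rng.standard_normal((16, 2))
mean, std = mc_dropout_predict(np.ones((4, 3)), W1, W2)
```

A stochastic ensemble in the paper's sense would combine several independently trained networks of this kind, each itself a distribution over weights.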

We study the power of randomness in the Number-on-Forehead (NOF) model in communication complexity. We construct an explicit 3-player function $f:[N]^3 \to \{0,1\}$ such that: (i) there exists a randomized NOF protocol computing it that sends a constant number of bits; but (ii) any deterministic or nondeterministic NOF protocol computing it requires sending about $(\log N)^{1/3}$ bits. This exponentially improves upon the previously best-known separation of this kind. At the core of our proof is an extension of a recent result of the first and third authors on sets of integers without 3-term arithmetic progressions to a non-arithmetic setting.

A well-balanced second-order finite volume scheme is proposed and analyzed for a $2 \times 2$ system of nonlinear partial differential equations describing the dynamics of growing sandpiles created by a vertical source on a flat, bounded rectangular table in multiple dimensions. To derive a second-order scheme, we combine a MUSCL-type spatial reconstruction with a strong-stability-preserving Runge-Kutta time-stepping method. The resulting scheme is ensured to be well-balanced through a modified limiting approach that lets the scheme reduce to a well-balanced first-order scheme near the steady state while maintaining second-order accuracy away from it. The well-balanced property of the scheme is proven analytically in one dimension and demonstrated numerically in two dimensions. Additionally, numerical experiments reveal that the second-order scheme reduces finite-time oscillations, requires fewer time iterations to reach the steady state, and gives sharper resolution of the physical structure of the sandpile than the existing first-order schemes in the literature.
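The two generic building blocks named above, a slope limiter for the MUSCL reconstruction and an SSP Runge-Kutta step, can be sketched in standard form. These are the textbook versions (minmod limiter, two-stage SSP-RK2), not the paper's modified well-balanced limiting; `L` stands for an arbitrary spatial discretization operator.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter used in MUSCL-type reconstructions."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def ssp_rk2(u, L, dt):
    """Two-stage strong-stability-preserving Runge-Kutta step:
    u1 = u + dt*L(u);  u_{n+1} = (u + u1 + dt*L(u1)) / 2."""
    u1 = u + dt * L(u)
    return 0.5 * (u + u1 + dt * L(u1))
```

As a quick sanity check, one SSP-RK2 step on the linear ODE $u' = -u$ reproduces the exact decay $e^{-\Delta t}$ to second order.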

In this paper, we propose the application of shrinkage strategies to estimate coefficients in the Bell regression models when prior information about the coefficients is available. The Bell regression models are well-suited for modeling count data with multiple covariates. Furthermore, we provide a detailed explanation of the asymptotic properties of the proposed estimators, including asymptotic biases and mean squared errors. To assess the performance of the estimators, we conduct numerical studies using Monte Carlo simulations and evaluate their simulated relative efficiency. The results demonstrate that the suggested estimators outperform the unrestricted estimator when prior information is taken into account. Additionally, we present an empirical application to demonstrate the practical utility of the suggested estimators.
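The general shape of a Stein-type shrinkage strategy, pulling the unrestricted estimate toward a restricted estimate that encodes the prior information, can be illustrated as follows. This generic form is a sketch only; the paper's estimators for the Bell regression model and their exact shrinkage factors may differ.

```python
import numpy as np

def stein_shrinkage(theta_u, theta_r, dist, k):
    """Generic Stein-type shrinkage (illustrative, not the paper's exact form).

    theta_u : unrestricted estimate
    theta_r : restricted estimate encoding the prior information
    dist    : test statistic measuring the distance between the two estimates
    k       : number of restrictions (k > 2 for Stein shrinkage)
    """
    shrink = 1.0 - (k - 2) / dist
    return theta_r + shrink * (theta_u - theta_r)

def positive_part_shrinkage(theta_u, theta_r, dist, k):
    """Positive-part variant: never shrinks past the restricted estimate."""
    shrink = max(0.0, 1.0 - (k - 2) / dist)
    return theta_r + shrink * (theta_u - theta_r)
```

When the test statistic is large (prior information looks wrong), the estimator stays close to the unrestricted estimate; when it is small, the estimate is pulled toward the restriction.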

Lawson's iteration is a classical and effective method for solving the linear (polynomial) minimax approximation in the complex plane. Extending Lawson's iteration to the rational minimax approximation with both high computational efficiency and a theoretical guarantee is challenging. A recent work [L.-H. Zhang, L. Yang, W. H. Yang and Y.-N. Zhang, A convex dual programming for the rational minimax approximation and Lawson's iteration, 2023, arxiv.org/pdf/2308.06991v1] reveals that Lawson's iteration can be viewed as a method for solving the dual problem of the original rational minimax approximation, and a new type of Lawson's iteration was proposed. Such a dual problem is guaranteed to obtain the original minimax solution under Ruttan's sufficient condition, and numerically, the proposed Lawson's iteration was observed to converge monotonically with respect to the dual objective function. In this paper, we perform theoretical convergence analysis for Lawson's iteration for both the linear and rational minimax approximations. In particular, we show that (i) for the linear minimax approximation, the near-optimal Lawson exponent $\beta$ in Lawson's iteration is $\beta=1$, and (ii) for the rational minimax approximation, the proposed Lawson's iteration converges monotonically with respect to the dual objective function for any sufficiently small $\beta>0$, and the convergent solution fulfills the complementary slackness: all nodes associated with positive weights achieve the maximum error.
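For the linear case, classical Lawson iteration is simply iteratively reweighted least squares with weights updated by the residuals raised to the Lawson exponent $\beta$. The following sketch illustrates the $\beta=1$ iteration on a discrete minimax problem; the example (degree-2 approximation of $|x|$, whose best uniform error on $[-1,1]$ is $1/8$) is a standard test case chosen for illustration, not taken from the paper.

```python
import numpy as np

def lawson(A, f, beta=1.0, iters=500):
    """Lawson's iteration for the discrete linear minimax problem
    min_c max_i |(A c - f)_i|, via iteratively reweighted least squares."""
    m = A.shape[0]
    w = np.full(m, 1.0 / m)                 # initial uniform weights
    for _ in range(iters):
        sw = np.sqrt(w)
        # weighted least-squares solve with current weights
        c, *_ = np.linalg.lstsq(A * sw[:, None], f * sw, rcond=None)
        r = np.abs(A @ c - f)
        w = w * r ** beta                    # Lawson update with exponent beta
        w /= w.sum()
    return c, r.max()

# degree-2 minimax approximation of |x| on [-1, 1]; the best error is 1/8
x = np.linspace(-1.0, 1.0, 101)
c, err = lawson(np.vander(x, 3), np.abs(x))
```

At convergence the weights concentrate on the extremal (equioscillation) nodes, which is exactly the complementary-slackness behaviour described above.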

We present iDARR, a scalable iterative Data-Adaptive RKHS Regularization method, for solving ill-posed linear inverse problems. The method searches for solutions in subspaces where the true solution can be identified, with the data-adaptive RKHS penalizing the spaces of small singular values. At the core of the method is a new generalized Golub-Kahan bidiagonalization procedure that recursively constructs orthonormal bases for a sequence of RKHS-restricted Krylov subspaces. The method is scalable with a complexity of $O(kmn)$ for $m$-by-$n$ matrices, where $k$ denotes the number of iterations. Numerical tests on the Fredholm integral equation and 2D image deblurring show that it outperforms the widely used $L^2$ and $\ell^2$ norms, producing stable, accurate solutions that converge consistently as the noise level decays.
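The classical Golub-Kahan bidiagonalization that the paper generalizes can be sketched directly: starting from $b$, it builds orthonormal bases $U$ and $V$ with $A V = U B$ for a lower bidiagonal $B$. This is the standard Euclidean recurrence; the paper's RKHS-restricted version replaces the underlying inner products.

```python
import numpy as np

def golub_kahan_bidiag(A, b, k):
    """Classical Golub-Kahan bidiagonalization (Euclidean inner products)."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k + 1)
    beta[0] = np.linalg.norm(b)
    U[:, 0] = b / beta[0]
    for j in range(k):
        # alpha_j v_j = A^T u_j - beta_j v_{j-1}
        v = A.T @ U[:, j] - (beta[j] * V[:, j - 1] if j > 0 else 0.0)
        alpha[j] = np.linalg.norm(v)
        V[:, j] = v / alpha[j]
        # beta_{j+1} u_{j+1} = A v_j - alpha_j u_j
        u = A @ V[:, j] - alpha[j] * U[:, j]
        beta[j + 1] = np.linalg.norm(u)
        U[:, j + 1] = u / beta[j + 1]
    return U, V, alpha, beta
```

Each step costs one multiplication by $A$ and one by $A^T$, i.e. $O(mn)$, which is the source of the overall $O(kmn)$ complexity quoted above.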

Detecting early warning indicators for abrupt dynamical transitions in complex systems or high-dimensional observation data is essential in many real-world applications, such as brain diseases, natural disasters, financial crises, and engineering reliability. To this end, we develop a novel approach: the directed anisotropic diffusion map that captures the latent evolutionary dynamics in the low-dimensional manifold. Then three effective warning signals (Onsager-Machlup Indicator, Sample Entropy Indicator, and Transition Probability Indicator) are derived through the latent coordinates and the latent stochastic dynamical systems. To validate our framework, we apply this methodology to real electroencephalogram (EEG) data. We find that our early warning indicators are capable of detecting the tipping point during state transitions. This framework not only bridges the latent dynamics with real-world data but also shows potential for the automatic labeling of complex high-dimensional time series.
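The plain (isotropic) diffusion map underlying this construction can be sketched compactly: build a Gaussian kernel, row-normalize it into a Markov matrix, and embed the data with the leading non-trivial eigenvectors. The paper's directed anisotropic kernel refines this; the sketch below is only the classical baseline, with arbitrary example data.

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2):
    """Classical diffusion-map embedding of data rows of X."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K = np.exp(-D2 / eps)                                 # Gaussian kernel
    P = K / K.sum(axis=1, keepdims=True)                  # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    idx = order[1:n_coords + 1]                           # skip the trivial eigenvalue 1
    return vecs.real[:, idx] * vals.real[idx]             # diffusion coordinates

# hypothetical high-dimensional observations
X = np.random.default_rng(1).standard_normal((30, 5))
coords = diffusion_map(X, eps=4.0)
```

The warning indicators described above would then be computed from trajectories in these latent coordinates.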

This paper presents a regularized recursive identification algorithm with simultaneous online estimation of both the model parameters and the algorithm's hyperparameters. A new kernel is proposed to facilitate the algorithm's development. The performance of this novel scheme is compared with that of the recursive least-squares algorithm in simulation.
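The recursive least-squares baseline mentioned above can be sketched in standard form, with a forgetting factor and an initial regularization through the covariance scaling `delta`. This is the classical algorithm used for comparison, not the paper's kernel-based scheme, and the hyperparameter values are illustrative.

```python
import numpy as np

class RegularizedRLS:
    """Classical recursive least squares with forgetting factor lam;
    delta sets the initial (regularizing) covariance scale."""
    def __init__(self, n, lam=0.99, delta=100.0):
        self.theta = np.zeros(n)        # parameter estimate
        self.P = delta * np.eye(n)      # covariance-like matrix
        self.lam = lam                  # forgetting factor
    def update(self, x, y):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)    # gain vector
        e = y - self.theta @ x          # prediction error
        self.theta = self.theta + k * e
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e

# noise-free identification of a hypothetical system y = theta_true . x
rng = np.random.default_rng(2)
theta_true = np.array([2.0, -1.0])
rls = RegularizedRLS(2)
for _ in range(200):
    x = rng.standard_normal(2)
    rls.update(x, theta_true @ x)
```

With persistently exciting inputs the estimate converges to the true parameters; the forgetting factor trades tracking speed against noise sensitivity.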

In this paper we consider a nonlinear poroelasticity model that describes the quasi-static mechanical behaviour of a fluid-saturated porous medium whose permeability depends on the divergence of the displacement. Such nonlinear models are typically used to study biological structures such as tissues, organs, cartilage and bones, which are known for a nonlinear dependence of their permeability/hydraulic conductivity on solid dilation. We extend one of the most popular splitting schemes, namely the fixed-stress split method, to the present situation for the iterative solution of the coupled problem. The method is proven to converge linearly for sufficiently small time steps under standard assumptions. The error contraction factor is then strictly less than one, independent of the Lam\'{e} parameters, Biot and storage coefficients, provided the hydraulic conductivity is a strictly positive, bounded and Lipschitz-continuous function.

For convolutional neural networks (CNNs) used for pattern classification, the training loss function is usually applied only to the final output of the network, apart from regularization constraints on the network parameters. However, as the number of network layers increases, the influence of the loss function on the front layers of the network gradually weakens, and the network parameters tend to fall into local optima. At the same time, the trained network exhibits significant information redundancy in the features at all stages, which reduces the effectiveness of the feature mapping at each stage and hinders the subsequent network parameters from moving toward the optimum. Therefore, a more optimal network solution, and with it higher classification accuracy, can be obtained by designing a loss function that constrains the front-stage features and eliminates their information redundancy. This article proposes a multi-stage feature decorrelation loss (MFD Loss) for CNNs, which refines effective features and eliminates information redundancy by constraining the correlation of features at all stages. Since CNNs have many layers, guided by experimental comparison and analysis, MFD Loss is applied to multiple front layers of the CNN, constraining the output features of each layer and each channel, and is trained jointly with the classification loss function. Compared with supervised learning using Softmax Loss alone, experiments on several commonly used datasets and several typical CNNs show that the classification performance of Softmax Loss + MFD Loss is significantly better. Comparison experiments combining MFD Loss with several other typical loss functions further verify its good generality.
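The core quantity, a penalty on cross-channel feature correlation at one stage, can be sketched as follows. This is a simplified numpy stand-in for illustration (standardize each channel, form the correlation matrix, penalize the off-diagonal entries); the paper's MFD Loss applies such constraints jointly across multiple stages during training.

```python
import numpy as np

def decorrelation_penalty(F, eps=1e-8):
    """Mean squared off-diagonal correlation of features F of shape (batch, channels);
    a simplified stand-in for one stage of the paper's MFD Loss."""
    Fc = F - F.mean(axis=0)
    Z = Fc / (Fc.std(axis=0) + eps)           # standardize each channel
    C = Z.T @ Z / F.shape[0]                  # channel correlation matrix
    off = C - np.diag(np.diag(C))             # keep only cross-channel terms
    d = C.shape[0]
    return np.sum(off ** 2) / (d * (d - 1))
```

Perfectly redundant channels drive the penalty toward 1, while independent channels drive it toward 0, so minimizing it alongside the classification loss pushes each stage toward non-redundant features.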
