
Every sufficiently large matrix with small spectral norm has a nearby low-rank matrix when the distance is measured in the maximum norm (Udell \& Townsend, SIAM J. Math. Data Sci., 2019). We use the Hanson--Wright inequality to improve the estimate of this distance for matrices with incoherent column and row spaces. In numerical experiments with several classes of matrices, we study how well the theoretical upper bound describes the approximation errors achieved with the method of alternating projections.
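A minimal sketch of the alternating-projections method in this setting (the rank `k` and tolerance `eps` below are illustrative choices, not values from the paper): alternate between projecting onto the set of matrices within `eps` of the target in the maximum norm, which is an entrywise clipping, and projecting onto the set of rank-$k$ matrices via a truncated SVD.

```python
import numpy as np

def alternating_projections(A, k, eps, iters=200):
    """Alternate between the max-norm band around A (entrywise clipping) and
    the rank-<=k matrices (truncated SVD). A sketch, not the paper's code."""
    X = A.copy()
    for _ in range(iters):
        X = np.clip(X, A - eps, A + eps)      # project onto {X : |X - A|_max <= eps}
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :k] * s[:k]) @ Vt[:k]       # project onto rank-<=k matrices
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500)) / np.sqrt(500)  # small spectral norm (~2)
X = alternating_projections(A, k=50, eps=0.25)
print(np.abs(X - A).max())   # max-norm error achieved by the rank-50 iterate
```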


This article presents a new algorithm to compute all the roots of two families of polynomials that are of interest for the Mandelbrot set $\mathcal{M}$: the roots of these polynomials are, respectively, the parameters $c\in\mathcal{M}$ associated with periodic critical dynamics for $f_c(z)=z^2+c$ (hyperbolic centers) or with pre-periodic dynamics (Misiurewicz-Thurston parameters). The algorithm is based on the computation of discrete level lines that provide excellent starting points for Newton's method. In practice, we observe that these polynomials can be split in time linear in the degree. This article is paired with a code library \citelib{MLib} that implements the algorithm. Using this library and about 723 000 core-hours on the HPC center \textit{Rom\'eo} (Reims), we have successfully found all hyperbolic centers of period $\leq 41$ and all Misiurewicz-Thurston parameters whose period and pre-period sum to $\leq 35$. Concretely, this task involves splitting a tera-polynomial, i.e. a polynomial of degree $\sim10^{12}$, which is orders of magnitude beyond the previous state of the art. It also involves dealing with the certifiability of our numerical results, an issue that we address in detail, both mathematically and along the production chain. The certified database is available to the scientific community. For the smaller periods that can be represented using only hardware arithmetic (FP80 floating points), the implementation of our algorithm can split the corresponding polynomials of degree $\sim10^{9}$ in less than one core-day. We complement these benchmarks with a statistical analysis of the separation of the roots, which confirms that no other polynomial in these families can be split without using higher-precision arithmetic.
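The hyperbolic centers of period dividing $n$ are the roots of $p_n$, where $p_1(c)=c$ and $p_{k+1}(c)=p_k(c)^2+c$; once good starting points are available, Newton's method refines them rapidly. A minimal sketch of the Newton step (the discrete level-line construction of starting points, which is the heart of the paper, is not reproduced here):

```python
def newton_center(c0, n, steps=60):
    """Refine an approximate hyperbolic center of period n by Newton's method
    on p_n, where p_1(c) = c and p_{k+1}(c) = p_k(c)^2 + c.
    The derivative follows the recursion p'_{k+1} = 2 p_k p'_k + 1."""
    c = c0
    for _ in range(steps):
        p, dp = c, 1.0
        for _ in range(n - 1):
            p, dp = p * p + c, 2 * p * dp + 1
        step = p / dp
        c -= step
        if abs(step) < 1e-15:
            break
    return c

# The period-3 centers are among the roots of p_3(c) = (c^2 + c)^2 + c.
print(newton_center(-1.7 + 0.0j, 3))   # ~ -1.7549 (the real period-3 center)
print(newton_center(-0.1 + 0.8j, 3))   # ~ -0.1226 + 0.7449i
```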

We propose a non-commutative algorithm for multiplying $2\times 2$ matrices using 7 coefficient products. This algorithm simultaneously achieves better practical accuracy than previously known fast algorithms of this kind and a time complexity bound with the best currently known leading term (obtained via alternate basis sparsification). To build this algorithm, we consider matrix and tensor norm bounds governing the stability and accuracy of numerical matrix multiplication. First, we reduce those bounds by minimizing a growth factor along the unique orbit of Strassen's $2\times 2$ matrix multiplication tensor decomposition. Second, we develop heuristics for minimizing the number of operations required to realize a given bilinear formula, while further improving its accuracy. Third, we perform an alternate basis sparsification that improves on the time complexity constant and mostly preserves the overall accuracy.
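For reference, the classical Strassen scheme whose orbit the optimization explores uses these 7 products (this is the textbook decomposition, not the more accurate variant constructed in the paper):

```python
import numpy as np

def strassen_2x2(A, B):
    """Strassen's classical 2x2 scheme with 7 block products; the entries may
    themselves be matrices, so the same formulas drive the recursive algorithm."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) @ (b11 + b22)
    m2 = (a21 + a22) @ b11
    m3 = a11 @ (b12 - b22)
    m4 = a22 @ (b21 - b11)
    m5 = (a11 + a12) @ b22
    m6 = (a21 - a11) @ (b11 + b12)
    m7 = (a12 - a22) @ (b21 + b22)
    return np.block([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

rng = np.random.default_rng(1)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
blocks = lambda M: ((M[:2, :2], M[:2, 2:]), (M[2:, :2], M[2:, 2:]))
print(np.allclose(strassen_2x2(blocks(A), blocks(B)), A @ B))  # True
```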

LiNGAM determines the variable order from cause to effect using additive noise models, but it faces challenges with confounding. Previous methods maintained LiNGAM's fundamental structure while trying to identify and address variables affected by confounding. As a result, these methods required significant computational resources regardless of whether confounding was present, and they did not guarantee the detection of all confounding types. In contrast, this paper enhances LiNGAM by introducing LiNGAM-MMI, a method that quantifies the magnitude of confounding using KL divergence and arranges the variables so as to minimize its impact. LiNGAM-MMI efficiently achieves a globally optimal variable order through a shortest-path formulation, as sketched below. It processes data as efficiently as traditional LiNGAM in scenarios without confounding, while effectively addressing confounding situations. Our experimental results suggest that LiNGAM-MMI determines the correct variable order more accurately, both in the presence and absence of confounding.
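A sketch of the shortest-path idea over the subset lattice (the cost function below is a placeholder for the paper's KL-divergence-based confounding measure): the cheapest path from the empty set to the full variable set, adding one variable per step, yields a globally optimal order.

```python
def optimal_order(variables, cost):
    """Globally optimal variable order via shortest path / dynamic programming
    over subsets. cost(S, j) scores appending variable j after the set S has
    been placed -- a stand-in for the confounding measure of LiNGAM-MMI."""
    best = {frozenset(): (0.0, [])}   # best[S] = (cost, order) placing S first
    for _ in range(len(variables)):
        nxt = {}
        for S, (c, order) in best.items():
            for j in variables:
                if j in S:
                    continue
                T, cand = S | {j}, (c + cost(S, j), order + [j])
                if T not in nxt or cand[0] < nxt[T][0]:
                    nxt[T] = cand
        best = nxt
    return best[frozenset(variables)]

# Toy cost counting inversions: the optimal order is alphabetical.
print(optimal_order(['x', 'y', 'z'], lambda S, j: sum(v > j for v in S)))
# -> (0.0, ['x', 'y', 'z'])
```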

Understanding the mechanisms through which neural networks extract statistics from input-label pairs is one of the most important unsolved problems in supervised learning. Prior works have identified that the Gram matrices of the weights in trained neural networks of general architectures are proportional to the average gradient outer product of the model, a statement known as the Neural Feature Ansatz (NFA). However, the reason these quantities become correlated during training is poorly understood. In this work, we explain the emergence of this correlation. We identify that the NFA is equivalent to alignment between the left singular structure of the weight matrices and a significant component of the empirical neural tangent kernels associated with those weights. We establish that the NFA introduced in prior works is driven by a centered NFA that isolates this alignment. We show that the speed of NFA development can be predicted analytically at early training times in terms of simple statistics of the inputs and labels. Finally, we introduce a simple intervention to increase NFA correlation at any given layer, which dramatically improves the quality of the learned features.
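A minimal sketch of the quantities involved, using PyTorch and a toy untrained network (layer choice, architecture, and data are illustrative): for the first layer, the NFA asserts that the neural feature matrix $W^\top W$ correlates with the average gradient outer product (AGOP) of the model with respect to its input, with the correlation developing during training.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(10, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, 1))
X = torch.randn(256, 10)

def agop(model, X):
    """Average gradient outer product of the scalar model output w.r.t. the input."""
    G = torch.zeros(X.shape[1], X.shape[1])
    for x in X:
        x = x.clone().requires_grad_(True)
        (g,) = torch.autograd.grad(model(x).squeeze(), x)
        G += torch.outer(g, g)
    return G / len(X)

W = net[0].weight
nfm = (W.T @ W).detach()                         # first-layer Gram matrix W^T W
G = agop(net, X)
cos = (nfm * G).sum() / (nfm.norm() * G.norm())  # NFA correlation in [-1, 1]
print(f"NFA correlation: {cos.item():.3f}")      # expected to rise during training
```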

We derive a model for the optimization of the bending and torsional rigidities of non-homogeneous elastic rods. This is achieved by studying a sharp interface shape optimization problem with perimeter penalization, that treats both rigidities as objectives. We then formulate a phase field approximation of the optimization problem and show the convergence to the aforementioned sharp interface model via $\Gamma$-convergence. In the final part of this work we numerically approximate minimizers of the phase field problem by using a steepest descent approach and relate the resulting optimal shapes to the development of the morphology of plant stems.
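A standard ingredient of such phase-field approximations (the paper's exact functional may differ) is the Modica--Mortola approximation of the perimeter term,
\[
\mathcal{P}_\varepsilon(\varphi) = \int_\Omega \Big( \frac{\varepsilon}{2}\,|\nabla\varphi|^2 + \frac{1}{\varepsilon}\,\psi(\varphi) \Big)\,dx,
\qquad \psi(\varphi) = \tfrac12\,\varphi^2(1-\varphi)^2,
\]
which $\Gamma$-converges, as $\varepsilon \to 0$, to a constant multiple of the perimeter of the interface between the two phases.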

The detection of echolocation clicks is key to understanding the intricate behaviors of cetaceans and to monitoring their populations. Cetacean species that rely on clicks for navigation, foraging, and even communication include sperm whales (Physeter macrocephalus) and a variety of dolphin groups. Echolocation clicks are wideband signals of short duration that are often emitted in sequences with varying inter-click intervals. While datasets and models for clicks exist, their detection and classification present a significant challenge, mostly due to the diversity of click structures, overlapping signals from simultaneously emitting animals, and the abundance of noise transients from, for example, snapping shrimp and shipping cavitation noise. This paper surveys detection and classification methodologies for clicks published from 2002 to 2023. We divide the surveyed techniques into categories by methodology: feature analysis (e.g., phase, inter-click interval, and duration), frequency content, energy-based detection, supervised and unsupervised machine learning, template matching, and adaptive detection approaches. Also surveyed are open-access platforms for click detection and databases openly available for testing. For each paper, details of the method applied are given along with its advantages and limitations, and for each category we analyze the remaining challenges. The paper also includes a performance comparison of several schemes over a shared database. Finally, we provide tables summarizing the existing detection schemes in terms of challenges addressed, methods, detection and classification tools applied, features used, and applications.
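As an illustration of the energy-based detection category (a generic detector with illustrative band, window, and threshold parameters, not any specific surveyed scheme):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def energy_detector(x, fs, band=(20e3, 80e3), win=1e-3, k=5.0):
    """Generic energy detector for click-like transients: band-pass filter to
    the click band, square, smooth over `win` seconds, and threshold at `k`
    times the median energy. Requires fs > 2 * band[1]; all parameters here
    are illustrative, not taken from any surveyed paper."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, x) ** 2
    n = int(win * fs)
    env = np.convolve(y, np.ones(n) / n, mode="same")   # short-time energy
    return np.flatnonzero(env > k * np.median(env))     # detection sample indices
```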

We perform a quantitative assessment of different strategies to compute the contribution due to surface tension in incompressible two-phase flows using a conservative level set (CLS) method. More specifically, we compare classical approaches, such as the direct computation of the curvature from the level set or the use of the Laplace-Beltrami operator, with an evolution equation for the mean curvature recently proposed in the literature. We consider the test case of a static bubble, for which an exact solution for the pressure jump across the interface is available, and the test case of an oscillating bubble, showing the pros and cons of the different approaches.
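A minimal sketch of the first of these strategies, the direct computation of the curvature $\kappa = \nabla\cdot(\nabla\phi/|\nabla\phi|)$ from the level set by central finite differences (the grid, profile width, and test radius are illustrative):

```python
import numpy as np

def curvature(phi, h):
    """Curvature kappa = div(grad(phi)/|grad(phi)|) on a uniform 2D grid of
    spacing h, via central differences. A sketch of the 'direct' approach."""
    px, py = np.gradient(phi, h)
    norm = np.sqrt(px**2 + py**2) + 1e-12        # avoid division by zero
    nxx, _ = np.gradient(px / norm, h)
    _, nyy = np.gradient(py / norm, h)
    return nxx + nyy

# Static-bubble sanity check: a circle of radius R has curvature 1/R.
h, R = 1.0 / 256, 0.25
x = np.arange(0, 1, h)
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.hypot(X - 0.5, Y - 0.5)
phi = 1 / (1 + np.exp(-(r - R) / (2 * h)))       # CLS-style sigmoid profile
print(curvature(phi, h)[np.abs(r - R) < h].mean())  # ~ 1/R = 4 near the interface
```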

We introduce a method to construct a stochastic surrogate model from the results of dimensionality reduction in forward uncertainty quantification. The hypothesis is that the high-dimensional input, augmented by the output of a computational model, admits a low-dimensional representation. This assumption is met by numerous uncertainty quantification applications with physics-based computational models. The proposed approach differs from a sequential application of dimensionality reduction followed by surrogate modeling, as we "extract" a surrogate model from the results of dimensionality reduction in the input-output space. This feature becomes desirable when the input space is genuinely high-dimensional. The proposed method also diverges from Probabilistic Learning on Manifolds (PLoM), as a reconstruction mapping from the feature space back to the input-output space is circumvented. The final product of the proposed method is a stochastic simulator that propagates a deterministic input into a stochastic output, preserving the convenience of a sequential "dimensionality reduction + Gaussian process regression" approach while overcoming some of its limitations. The proposed method is demonstrated on two uncertainty quantification problems characterized by high-dimensional input uncertainties.
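For context, a minimal sketch of the sequential "dimensionality reduction + Gaussian process regression" baseline that the proposed method is contrasted with (synthetic data and dimensions; this is the baseline, not the proposed extraction approach):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor

# Sequential baseline: reduce the high-dimensional input with PCA, then fit a
# GP from the reduced coordinates to the model output.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 100))                   # high-dimensional input
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(300)  # scalar model output

pca = PCA(n_components=5).fit(X)
gp = GaussianProcessRegressor().fit(pca.transform(X), y)
y_new, y_std = gp.predict(pca.transform(rng.standard_normal((10, 100))),
                          return_std=True)            # prediction + uncertainty
```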

We consider nonparametric Bayesian inference in a multidimensional diffusion model with reflecting boundary conditions based on discrete high-frequency observations. We prove a general posterior contraction rate theorem in $L^2$-loss, which is applied to Gaussian priors. The resulting posteriors, as well as their posterior means, are shown to converge to the ground truth at the minimax optimal rate over H\"older smoothness classes in any dimension. Of independent interest and as part of our proofs, we show that certain frequentist penalized least squares estimators are also minimax optimal.
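Schematically, the penalized least squares estimators referred to take the following generic form (the paper's exact criterion and penalty are not reproduced here): for high-frequency observations $X_0, X_\Delta, \dots, X_{n\Delta}$ of the reflected diffusion with drift $b$,
\[
\hat b \in \operatorname*{arg\,min}_{b\in\mathcal{H}} \ \sum_{i=0}^{n-1} \bigl| X_{(i+1)\Delta} - X_{i\Delta} - b(X_{i\Delta})\,\Delta \bigr|^2 + \lambda\,\|b\|_{\mathcal{H}}^2,
\]
where $\mathcal{H}$ is a suitable smoothness class and $\lambda > 0$ a regularization parameter.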

Collecting large quantities of high-quality data is often prohibitively expensive or impractical, and is a crucial bottleneck in machine learning. One may instead augment a small set of $n$ data points from the target distribution with data from more accessible sources, such as public datasets, data collected under different circumstances, or data synthesized by generative models. Blurring these distinctions, we refer to all such data as `surrogate data'. We define a simple scheme for integrating surrogate data into training and use both theoretical models and empirical studies to explore its behavior. Our main findings are: $(i)$ integrating surrogate data can significantly reduce the test error on the original distribution; $(ii)$ in order to reap this benefit, it is crucial to use optimally weighted empirical risk minimization; $(iii)$ the test error of models trained on mixtures of real and surrogate data is well described by a scaling law, which can be used to predict the optimal weighting and the gain from surrogate data.
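A minimal sketch of weighted empirical risk minimization for ridge regression (the weight `alpha` on the real data is a free input here; the paper's point is that its optimal value can be predicted from the scaling law):

```python
import numpy as np

def weighted_erm_ridge(X, y, Xs, ys, alpha, lam=1e-3):
    """Weighted ERM for ridge regression: real examples (X, y) get total
    weight alpha, surrogate examples (Xs, ys) get total weight 1 - alpha.
    Returns the coefficient vector; `alpha` and `lam` are illustrative."""
    Xa = np.vstack([X, Xs])
    ya = np.concatenate([y, ys])
    w = np.concatenate([np.full(len(y), alpha / len(y)),
                        np.full(len(ys), (1 - alpha) / len(ys))])
    XtW = Xa.T * w   # scales each example's column by its weight
    # Closed-form weighted ridge solution: (X^T W X + lam I)^{-1} X^T W y
    return np.linalg.solve(XtW @ Xa + lam * np.eye(Xa.shape[1]), XtW @ ya)
```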
