
The sparse Kaczmarz method is a well-known and widely used iterative method for solving the regularized basis pursuit problem. A general scheme for surrogate hyperplane sparse Kaczmarz methods is proposed. In particular, a class of residual-based surrogate hyperplane sparse Kaczmarz methods is introduced and their implementation is discussed in detail. Convergence theory is established, and the linear convergence rates are studied and compared in detail. Numerical experiments verify the efficiency of the proposed methods.
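
To make the classical building block concrete, the following is a minimal sketch of the standard sparse (shrinkage) Kaczmarz iteration on which surrogate hyperplane variants build: a Kaczmarz projection step on an auxiliary variable z followed by soft shrinkage of the primal iterate. The surrogate hyperplane methods above replace the single row (a_i, b_i) with a residual-based combination of rows; all names and parameters below are illustrative.

    import numpy as np

    def soft_shrink(z, lam):
        # componentwise soft thresholding S_lam(z)
        return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

    def sparse_kaczmarz(A, b, lam=0.5, sweeps=500):
        m, n = A.shape
        z = np.zeros(n)          # auxiliary variable
        x = np.zeros(n)          # sparse primal iterate
        for _ in range(sweeps):
            for i in range(m):   # cyclic row selection for simplicity
                a = A[i]
                z -= (a @ x - b[i]) / (a @ a) * a   # Kaczmarz step on z
                x = soft_shrink(z, lam)             # shrinkage -> sparsity
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100)
    x_true[rng.choice(100, 5, replace=False)] = 1.0
    x_hat = sparse_kaczmarz(A, A @ x_true)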

Related content

It is well known that decision-making problems arising in stochastic control can be formulated by means of a forward-backward stochastic differential equation (FBSDE). Recently, the authors of Ji et al. 2022 proposed an efficient deep learning algorithm based on the stochastic maximum principle (SMP). In this paper, we provide a convergence result for this deep SMP-BSDE algorithm and compare its performance with other existing methods. In particular, by adopting a strategy as in Han and Long 2020, we derive an a-posteriori estimate and show that the total approximation error can be bounded by the value of the loss functional and the discretization error. We present numerical examples for high-dimensional stochastic control problems, covering both drift and diffusion control, which showcase superior performance compared to existing algorithms.
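
For orientation, here is a minimal sketch of the generic deep-BSDE setup that this line of work builds on, not the authors' deep SMP-BSDE algorithm: the forward SDE is discretized by Euler-Maruyama, a small network approximates the control process Z, and the loss functional penalizes the mismatch with the terminal condition. The dynamics, driver and network are illustrative stand-ins.

    import torch

    torch.manual_seed(0)
    d, N, T, batch = 10, 20, 1.0, 256                 # dimension, steps, horizon
    dt = T / N
    g = lambda x: x.pow(2).sum(dim=1, keepdim=True)   # toy terminal condition

    y0 = torch.zeros(1, requires_grad=True)           # trainable Y_0
    znet = torch.nn.Sequential(torch.nn.Linear(d + 1, 32), torch.nn.ReLU(),
                               torch.nn.Linear(32, d))
    opt = torch.optim.Adam([y0, *znet.parameters()], lr=1e-3)

    for step in range(200):
        x = torch.zeros(batch, d)
        y = y0.expand(batch, 1)
        for n in range(N):                            # Euler-Maruyama in time
            t = torch.full((batch, 1), n * dt)
            dw = dt ** 0.5 * torch.randn(batch, d)
            z = znet(torch.cat([t, x], dim=1))        # network approximates Z_t
            y = y + (z * dw).sum(dim=1, keepdim=True) # driver f = 0 for brevity
            x = x + dw                                # toy dynamics dX = dW
        loss = (y - g(x)).pow(2).mean()               # terminal mismatch
        opt.zero_grad(); loss.backward(); opt.step()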

We propose a parallel (distributed) version of the spectral proper orthogonal decomposition (SPOD) technique. The parallel SPOD algorithm distributes the spatial dimension of the dataset while preserving the time dimension. This choice keeps the fast Fourier transform of the data in time non-distributed, thereby avoiding the associated communication bottlenecks. The parallel SPOD algorithm is implemented in the PySPOD (//github.com/MathEXLab/PySPOD) library and makes use of the standard message passing interface (MPI), accessed in Python via mpi4py (//mpi4py.readthedocs.io/en/stable/). An extensive performance evaluation of the parallel package is provided, including strong and weak scalability analyses. The open-source library allows the analysis of large datasets of interest across the scientific community. Here, we present applications in fluid dynamics and geophysics that are extremely difficult (if not impossible) to achieve without a parallel algorithm. This work opens the path toward modal analyses of big quasi-stationary data, helping to uncover new, unexplored spatio-temporal patterns.
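
A minimal sketch of the distribution strategy described above (not the PySPOD implementation): each MPI rank owns a slab of spatial points for all time snapshots, so the FFT in time is purely local and needs no communication. Sizes and data are illustrative; run with, e.g., mpiexec -n 4 python spod_sketch.py.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_space, n_time = 10_000, 512                  # illustrative global sizes
    lo = rank * n_space // size                    # this rank's spatial slab
    hi = (rank + 1) * n_space // size
    local = np.random.default_rng(rank).standard_normal((hi - lo, n_time))

    # time is not distributed, so each rank transforms its own slab locally
    local_hat = np.fft.rfft(local, axis=1)

    # the cross-spectral density and SPOD modes per frequency would follow;
    # communication is only needed for global reductions such as inner products
    energy = comm.allreduce(float(np.sum(np.abs(local_hat) ** 2)), op=MPI.SUM)
    if rank == 0:
        print(f"total spectral energy: {energy:.3e}")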

Automatic drum transcription is a critical tool in Music Information Retrieval for extracting and analyzing the rhythm of a music track, but it is limited by the size of the datasets available for training. A popular way to increase the amount of data is to generate it synthetically from music scores rendered with virtual instruments. This method can produce a virtually infinite quantity of tracks, but empirical evidence shows that models trained on previously created synthetic datasets do not transfer well to real tracks. In this work, besides increasing the amount of data, we identify and evaluate three further strategies that practitioners can use to improve the realism of the generated data and thus narrow the synthetic-to-real transfer gap. To explore their efficacy, we used them to build a new synthetic dataset and then measured how the performance of a model scales with the number of training tracks and, specifically, at what value it stagnates for different datasets. In this way, we show that the proposed strategies make our dataset the one with the most realistic data distribution and the lowest synthetic-to-real transfer gap among the synthetic datasets we evaluated. We conclude by highlighting the limits of training with infinite data in drum transcription and show how they can be overcome.
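
As an illustration of the scaling measurement described above, one can fit a saturating curve to (training-set size, score) pairs and read off the plateau at which performance stagnates. The functional form and the data points below are illustrative assumptions, not the paper's results.

    import numpy as np
    from scipy.optimize import curve_fit

    def saturating(n, plateau, scale, rate):
        # power-law approach to a plateau: score -> plateau as n -> infinity
        return plateau - scale * n ** (-rate)

    n_tracks = np.array([50, 100, 200, 400, 800, 1600, 3200])        # made up
    f_measure = np.array([0.52, 0.58, 0.63, 0.66, 0.68, 0.69, 0.695])

    (plateau, scale, rate), _ = curve_fit(saturating, n_tracks, f_measure,
                                          p0=(0.7, 1.0, 0.5))
    print(f"estimated stagnation value: {plateau:.3f}")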

In forensic genetics, short tandem repeats (STRs) are used for human identification (HID). Degraded biological trace samples with low amounts of short DNA fragments (low-quality DNA samples) pose a challenge for STR typing. Predefined single nucleotide polymorphisms (SNPs) can be amplified on short PCR fragments and used to generate SNP profiles from low-quality DNA samples. However, the stochastic results from low-quality DNA samples may lead to frequent locus drop-outs and too few SNP genotypes for convincing identification of individuals. In contrast to targeted PCR-based sequencing methods, shotgun DNA sequencing potentially analyses all DNA fragments in a sample and may be applied to DNA samples of very low quality, such as heavily compromised crime-scene samples and ancient DNA samples. Here, we developed a statistical model for shotgun sequencing, sequence alignment, and genotype calling. Results from replicated shotgun sequencing of buccal swab samples (high-quality samples) and hair samples (low-quality samples) were arranged in a genotype-call confusion matrix to estimate the calling error probability by maximum likelihood and Bayesian inference. We developed formulas for calculating the evidential weight as a likelihood ratio (LR) based on data from dynamically selected SNPs from shotgun DNA sequencing. The method accounts for potential genotyping errors, and different genotype quality filters may be applied. An error probability of zero recovers the LR formula commonly used in forensics. When considering a single SNP marker's contribution to the LR, error probabilities larger than zero reduced the LR contribution of matching genotypes and increased the LR in the case of a mismatch. We developed an open-source R package, wgsLR, which implements the method, including estimation of the calling error probability and calculation of LR values.
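
The role of the error probability in the LR can be illustrated with a toy single-SNP model (a sketch in Python rather than the wgsLR R package, and not the paper's exact formulas): under a symmetric per-call error eps, one sums over the unobserved true genotype; with eps = 0 and matching calls this collapses to the familiar 1/Pr(genotype), while eps > 0 shrinks matching LRs and lifts mismatching ones above zero.

    genotypes = ["AA", "AB", "BB"]
    freq = {"AA": 0.49, "AB": 0.42, "BB": 0.09}   # illustrative HWE frequencies

    def p_call(observed, true, eps):
        # observed call equals the truth w.p. 1 - eps, else uniform over the rest
        return 1.0 - eps if observed == true else eps / 2.0

    def lr_single_snp(call_suspect, call_trace, eps):
        # H1: same (unobserved) true genotype; H2: two independent individuals
        num = sum(freq[g] * p_call(call_suspect, g, eps) * p_call(call_trace, g, eps)
                  for g in genotypes)
        den = (sum(freq[g] * p_call(call_suspect, g, eps) for g in genotypes)
               * sum(freq[g] * p_call(call_trace, g, eps) for g in genotypes))
        return num / den

    print(lr_single_snp("BB", "BB", eps=0.0))    # 1 / 0.09: the classical LR
    print(lr_single_snp("BB", "BB", eps=0.01))   # match: slightly reduced LR
    print(lr_single_snp("BB", "AB", eps=0.01))   # mismatch: LR > 0, not zero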

Establishing the frequentist properties of Bayesian approaches widens their appeal and offers new understanding. In hypothesis testing, Bayesian model averaging addresses the problem that conclusions are sensitive to variable selection. But Bayesian false discovery rate (FDR) guarantees are contingent on prior assumptions that may be disputed. Here we show that Bayesian model-averaged hypothesis testing is a closed testing procedure that controls the frequentist familywise error rate (FWER) in the strong sense. The rate converges pointwise as the sample size grows and, under some conditions, uniformly. The 'Doublethink' method computes simultaneous posterior odds and asymptotic p-values for model-averaged hypothesis testing. We explore its benefits, including post-hoc variable selection, and its limitations, including finite-sample inflation, through a Mendelian randomization study and simulations comparing approaches such as LASSO, stepwise regression, the Benjamini-Hochberg procedure and e-values.
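
For intuition, here is a minimal sketch of model-averaged inclusion testing in the spirit described above (not the Doublethink procedure itself): approximate each model's marginal likelihood with BIC, weight all subsets of regressors accordingly, and report the posterior probability that a variable is included, from which the posterior odds follow as p/(1-p). Data, priors and the BIC approximation are all illustrative.

    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 200, 4
    X = rng.standard_normal((n, p))
    y = 1.5 * X[:, 0] + rng.standard_normal(n)     # only variable 0 is active

    def bic(subset):
        Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ beta) ** 2)
        return n * np.log(rss / n) + Xs.shape[1] * np.log(n)

    subsets = [s for k in range(p + 1) for s in itertools.combinations(range(p), k)]
    b = np.array([bic(s) for s in subsets])
    weights = np.exp(-0.5 * (b - b.min()))         # ~ posterior model probabilities
    weights /= weights.sum()

    for j in range(p):
        pj = sum(w for s, w in zip(subsets, weights) if j in s)
        print(f"posterior inclusion probability of x{j}: {pj:.4f}")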

Numerically solving high-dimensional random parametric PDEs poses a challenging computational problem. It is well known that numerical methods can greatly benefit from adaptive refinement algorithms, in particular when functional approximations in polynomials are computed, as in stochastic Galerkin finite element methods. This work investigates a residual-based adaptive algorithm, akin to classical adaptive FEM, used to approximate the solution of the stationary diffusion equation with lognormal coefficients, i.e., with a non-affine parameter dependence of the data. It is known that the refinement procedure is reliable, but the theoretical convergence of the scheme for this class of unbounded coefficients remains a challenging open question. This paper advances the theoretical state of the art by providing a quasi-error reduction result for the adaptive solution of the lognormal stationary diffusion problem. The presented analysis generalizes previous results in that guaranteed convergence for uniformly bounded coefficients follows directly as a corollary. Moreover, it highlights the fundamental challenges with unbounded coefficients that cannot be overcome with common techniques. A computational benchmark example illustrates the main theoretical statement.
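
The adaptive algorithm instantiates the generic SOLVE -> ESTIMATE -> MARK -> REFINE loop; the sketch below shows that loop on a toy 1D interpolation problem with Dörfler-style bulk marking rather than on a stochastic Galerkin discretization. All quantities are illustrative.

    import numpy as np

    f = lambda x: np.sqrt(np.abs(x - 0.3))     # kink drives local refinement
    nodes = np.linspace(0.0, 1.0, 5)
    theta = 0.5                                # Dörfler bulk parameter

    for it in range(15):
        mids = 0.5 * (nodes[:-1] + nodes[1:])
        # ESTIMATE: local interpolation error at element midpoints
        eta = np.abs(f(mids) - 0.5 * (f(nodes[:-1]) + f(nodes[1:])))
        # MARK: smallest set of elements carrying a theta-fraction of the error
        order = np.argsort(eta)[::-1]
        cum = np.cumsum(eta[order])
        marked = order[: np.searchsorted(cum, theta * cum[-1]) + 1]
        # REFINE: bisect the marked elements ("SOLVE" is the interpolant itself)
        nodes = np.sort(np.concatenate([nodes, mids[marked]]))

    print(f"final mesh: {len(nodes) - 1} elements")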

Inverse problems, such as accelerated MRI reconstruction, are ill-posed: infinitely many plausible solutions exist. This leads to uncertainty not only in the reconstructed image but also in downstream tasks such as semantic segmentation. This uncertainty, however, is mostly not analyzed in the literature, even though probabilistic reconstruction models are commonly used. Such models can be prone to ignoring plausible but unlikely solutions, such as rare pathologies. Building on MRI reconstruction approaches based on diffusion models, we add guidance to the diffusion process during inference, generating two meaningfully diverse reconstructions corresponding to an upper and a lower bound segmentation. The reconstruction uncertainty can then be quantified by the difference between these bounds, which we coin the 'uncertainty boundary'. We analyzed the behavior of the upper and lower bound segmentations for a wide range of acceleration factors and found the uncertainty boundary to be both more reliable and more accurate than repeated sampling. Code is available at //github.com/NikolasMorshuis/SGR
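
A schematic sketch of the inference-time guidance idea (not the released code): at each reverse-diffusion step, the sample is nudged by the gradient of a soft segmentation-area objective, with the sign steering toward an upper- or lower-bound segmentation. The denoiser and segmenter below are untrained stand-ins, and every name is illustrative.

    import torch

    denoiser = torch.nn.Conv2d(1, 1, 3, padding=1)   # stand-in for a score model
    segmenter = torch.nn.Conv2d(1, 1, 3, padding=1)  # stand-in for a seg network

    def guided_reverse_step(x, sigma, direction, scale=0.1):
        x = x.detach().requires_grad_(True)
        area = torch.sigmoid(segmenter(x)).sum()     # soft segmented area
        grad = torch.autograd.grad(area, x)[0]       # guidance direction
        with torch.no_grad():
            x = denoiser(x) + direction * scale * grad  # +1: upper, -1: lower
            x = x + sigma * torch.randn_like(x)         # re-noise for next step
        return x

    x_upper = torch.randn(1, 1, 64, 64)
    x_lower = x_upper.clone()
    for sigma in torch.linspace(1.0, 0.0, 10):
        x_upper = guided_reverse_step(x_upper, sigma, direction=+1)
        x_lower = guided_reverse_step(x_lower, sigma, direction=-1)
    # the 'uncertainty boundary' is the disagreement between the two segmentations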

When objects are packed in a cluster, physical interactions are unavoidable. Such interactions emerge from the objects' geometric features; some of these features promote entanglement, while others create repulsion. When entanglement occurs, the cluster exhibits a global, complex behaviour that arises from the stochastic interactions between objects. We refer to such a cluster as an entangled granular metamaterial. We investigate the geometric features of the objects that make up the cluster, henceforth referred to as grains, that maximise entanglement. We hypothesise that a cluster composed of grains with a high propensity to tangle will also show a propensity to interact with a second cluster of tangled objects. To demonstrate this, we use the entangled granular metamaterials to perform complex robotic picking tasks where conventional grippers struggle. We employ an electromagnet to attract the metamaterial (ferromagnetic) and drop it onto a second cluster of objects (targets, non-ferromagnetic). When the electromagnet is re-activated, the entanglement ensures that both the metamaterial and the targets are picked, with varying degrees of physical engagement that strongly depend on geometric features. Interestingly, although the metamaterial's structural arrangement is random, it creates repeatable and consistent interactions with a second tangled medium, enabling robust picking of the latter.

Rational approximation has proven to be a powerful method for solving two-dimensional (2D) fluid problems. At small Reynolds numbers, 2D Stokes flows can be represented by two analytic functions, known as Goursat functions. Xue, Waters and Trefethen [SIAM J. Sci. Comput., 46 (2024), pp. A1214-A1234] recently introduced the LARS algorithm (Lightning-AAA Rational Stokes) for computing 2D Stokes flows in general domains by approximating the Goursat functions using rational functions. In this paper, we introduce a new algorithm for computing 2D Stokes flows in periodic channels using trigonometric rational functions, with poles placed via the AAA-LS algorithm [Costa and Trefethen, European Congr. Math., 2023] in a conformal map of the domain boundary. We apply the algorithm to Poiseuille and Couette problems between various periodic channel geometries, where solutions are computed to at least 6-digit accuracy in less than 1 second. The applicability of the algorithm is highlighted in the computation of the dynamics of fluid particles in unsteady Couette flows.
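
The pole-based least-squares idea underlying the LARS and AAA-LS line of work can be sketched as follows (this is not the new trigonometric algorithm): with poles held fixed outside the domain, fit boundary samples of f in the basis {1, 1/(z - p_j)} by linear least squares. The target function, pole locations and sample points are illustrative.

    import numpy as np

    z = np.exp(2j * np.pi * np.linspace(0, 1, 400, endpoint=False))  # unit circle
    f = np.tan(z)                          # analytic on the closed unit disk
    poles = 1.2 * np.exp(2j * np.pi * np.linspace(0, 1, 30, endpoint=False))

    # least-squares fit in the basis {1, 1/(z - p_j)} with the poles held fixed
    A = np.column_stack([np.ones_like(z)] + [1.0 / (z - p) for p in poles])
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)

    print(f"max boundary error: {np.max(np.abs(A @ coef - f)):.2e}")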

New differential-recurrence relations for B-spline basis functions are given. Using these relations, a recursive method for finding the Bernstein-Bézier coefficients of B-spline basis functions over a single knot span is proposed. The algorithm works for any knot sequence and has asymptotically optimal computational complexity. Numerical experiments show that the new method gives results that retain a higher number of correct digits than an approach based on the well-known de Boor-Cox formula.
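
For reference, the well-known de Boor-Cox recurrence against which the new method is compared reads N_{i,0}(x) = 1 on [t_i, t_{i+1}) and N_{i,p} = ((x - t_i)/(t_{i+p} - t_i)) N_{i,p-1} + ((t_{i+p+1} - x)/(t_{i+p+1} - t_{i+1})) N_{i+1,p-1}. A direct transcription (the textbook recursion, not the paper's differential-recurrence algorithm):

    def bspline_basis(i, p, x, t):
        # value of the degree-p B-spline N_{i,p} at x for knot sequence t
        if p == 0:
            return 1.0 if t[i] <= x < t[i + 1] else 0.0
        left = right = 0.0
        if t[i + p] > t[i]:
            left = (x - t[i]) / (t[i + p] - t[i]) * bspline_basis(i, p - 1, x, t)
        if t[i + p + 1] > t[i + 1]:
            right = ((t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1])
                     * bspline_basis(i + 1, p - 1, x, t))
        return left + right

    t = [0, 0, 0, 1, 2, 3, 3, 3]            # a clamped knot sequence
    print(bspline_basis(2, 2, 1.5, t))      # quadratic basis value, 0.75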
