
This article presents a new algorithm to compute all the roots of two families of polynomials that are of interest for the Mandelbrot set $\mathcal{M}$: the roots of those polynomials are respectively the parameters $c\in\mathcal{M}$ associated with periodic critical dynamics for $f_c(z)=z^2+c$ (hyperbolic centers) or with pre-periodic dynamics (Misiurewicz-Thurston parameters). The algorithm is based on the computation of discrete level lines that provide excellent starting points for the Newton method. In practice, we observe that these polynomials can be split in time linear in the degree. This article is paired with a code library [Mandel] that implements this algorithm. Using this library and about 723 000 core-hours on the HPC center Roméo (Reims), we have successfully found all hyperbolic centers of period $\leq 41$ and all Misiurewicz-Thurston parameters whose period and pre-period sum to $\leq 35$. Concretely, this task involves splitting a tera-polynomial, i.e. a polynomial of degree $\sim10^{12}$, which is orders of magnitude beyond the previous state of the art. It also involves dealing with the certifiability of our numerical results, an issue that we address in detail, both mathematically and along the production chain. The certified database is available to the scientific community. For the smaller periods that can be represented using only hardware arithmetic (FP80 floating point), our implementation can split the corresponding polynomials of degree $\sim10^{9}$ in less than one core-day. We complement these benchmarks with a statistical analysis of the separation of the roots, which confirms that no other polynomial in these families can be split without higher-precision arithmetic.
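For illustration, here is a minimal sketch of the Newton iteration at the core of this approach: a parameter $c$ is a hyperbolic center of period $n$ when $p_n(c)=0$, where $p_1(c)=c$ and $p_{k+1}(c)=p_k(c)^2+c$, with the derivative following the recursion $p_{k+1}'=2p_kp_k'+1$. The discrete level-line starting points are the paper's contribution and are not reproduced here; this toy version starts from a user-supplied initial guess.

```python
# A minimal sketch of the Newton step used to locate hyperbolic centers.
# The starting point here is naive, not a level-line point as in the paper.
def newton_center(c, n, steps=60, tol=1e-14):
    """Refine an approximation c of a period-n hyperbolic center."""
    for _ in range(steps):
        p, dp = c, 1.0 + 0j          # p_1 = c, p_1' = 1
        for _ in range(n - 1):
            dp = 2.0 * p * dp + 1.0  # p_{k+1}' = 2 p_k p_k' + 1
            p = p * p + c            # p_{k+1}  = p_k^2 + c
        if abs(dp) == 0.0:
            break
        step = p / dp
        c -= step
        if abs(step) < tol:
            break
    return c

# Example: starting near -1.75 converges to the period-3 center of the
# "airplane" component, c ≈ -1.7548776662...
print(newton_center(-1.75 + 0.01j, 3))
```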

Related content

Gradient descent is one of the most widely used iterative algorithms in modern statistical learning. However, its precise algorithmic dynamics in high-dimensional settings remain only partially understood, which has limited its broader potential for statistical inference applications. This paper provides a precise, non-asymptotic distributional characterization of gradient descent iterates in a broad class of empirical risk minimization problems, in the so-called mean-field regime where the sample size is proportional to the signal dimension. Our non-asymptotic state evolution theory holds for both general non-convex loss functions and non-Gaussian data, and reveals the central role of two Onsager correction matrices that precisely characterize the non-trivial dependence among all gradient descent iterates in the mean-field regime. Although the Onsager correction matrices are typically analytically intractable, our state evolution theory facilitates a generic gradient descent inference algorithm that consistently estimates these matrices across a broad class of models. Leveraging this algorithm, we show that the state evolution can be inverted to construct (i) data-driven estimators for the generalization error of gradient descent iterates and (ii) debiased gradient descent iterates for inference of the unknown signal. Detailed applications to two canonical models--linear regression and (generalized) logistic regression--are worked out to illustrate model-specific features of our general theory and inference methods.
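As a point of reference, the following sketch produces the basic objects that the state evolution describes: the trajectory of gradient descent iterates for least-squares empirical risk minimization with an i.i.d. Gaussian design in the proportional regime. The paper's Onsager correction matrices and debiasing procedure are not reimplemented; all model choices below are illustrative.

```python
# Gradient descent iterates for linear regression, with the sample size n
# proportional to the dimension d (the mean-field regime of the paper).
import numpy as np

rng = np.random.default_rng(0)
d, n, eta, T = 200, 400, 0.25, 50
theta_star = rng.normal(size=d) / np.sqrt(d)   # planted signal
X = rng.normal(size=(n, d))                    # i.i.d. design, n ∝ d
y = X @ theta_star + 0.1 * rng.normal(size=n)

theta = np.zeros(d)
iterates = [theta.copy()]
for t in range(T):
    grad = X.T @ (X @ theta - y) / n           # gradient of the empirical risk
    theta = theta - eta * grad
    iterates.append(theta.copy())

# The state evolution characterizes the joint distribution of the iterates
# (theta_1, ..., theta_T) as n, d -> infinity with n/d fixed.
print("final estimation error:", np.linalg.norm(theta - theta_star))
```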

We study the problem of testing whether the missing values of a potentially high-dimensional dataset are Missing Completely at Random (MCAR). We relax the problem of testing MCAR to the problem of testing the compatibility of a collection of covariance matrices, motivated by the fact that this procedure is feasible when the dimension grows with the sample size. Our first contributions are to define a natural measure of the incompatibility of a collection of correlation matrices, which can be characterised as the optimal value of a Semi-definite Programming (SDP) problem, and to establish a key duality result allowing its practical computation and interpretation. By analysing the concentration properties of the natural plug-in estimator for this measure, we propose a novel hypothesis test, which is calibrated via a bootstrap procedure and demonstrates power against any distribution with incompatible covariance matrices. By considering key examples of missingness structures, we demonstrate that our procedures are minimax rate optimal in certain cases. We further validate our methodology with numerical simulations that provide evidence of validity and power, even when the data are heavy-tailed. Furthermore, tests of compatibility can be used to test the feasibility of positive semi-definite matrix completion problems with noisy observations, and thus our results may be of independent interest.
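The starting point of the relaxation can be made concrete: group the rows of an incomplete data matrix by missingness pattern and form one correlation matrix per pattern; under MCAR these are all restrictions of a single correlation matrix. The sketch below (plain NumPy) computes the pattern-wise matrices; the paper's SDP-based incompatibility measure and bootstrap calibration are not reproduced.

```python
# Pattern-wise correlation matrices from an incomplete data matrix.
# Under MCAR, entries shared by two patterns should agree up to noise;
# incompatibility of these matrices is evidence against MCAR.
import numpy as np

def patternwise_correlations(X):
    """X: (n, d) array with np.nan for missing entries.
    Returns {pattern: (observed indices, correlation matrix)}."""
    patterns = {}
    for row in X:
        patterns.setdefault(tuple(np.isnan(row)), []).append(row)
    corrs = {}
    for key, rows in patterns.items():
        obs = [j for j, miss in enumerate(key) if not miss]
        if len(obs) >= 2 and len(rows) > len(obs):
            sub = np.array(rows)[:, obs]
            corrs[key] = (obs, np.corrcoef(sub, rowvar=False))
    return corrs

rng = np.random.default_rng(1)
cov = [[1, .5, .2], [.5, 1, .3], [.2, .3, 1]]
X = rng.multivariate_normal([0, 0, 0], cov, 500)
X[rng.random(X.shape) < 0.2] = np.nan   # MCAR masking
for key, (obs, C) in patternwise_correlations(X).items():
    print(key, np.round(C, 2))
```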

Background: The standard regulatory approach to assess replication success is the two-trials rule, requiring both the original and the replication study to be significant with effect estimates in the same direction. The sceptical p-value was recently presented as an alternative method for the statistical assessment of the replicability of study results. Methods: We compare the statistical properties of the sceptical p-value and the two-trials rule. We illustrate the performance of the different methods using real-world evidence emulations of randomized, controlled trials (RCTs) conducted within the RCT DUPLICATE initiative. Results: The sceptical p-value depends not only on the two p-values, but also on the sample sizes and effect sizes of the two studies. It can be calibrated to have the same Type-I error rate as the two-trials rule, but has larger power to detect an existing effect. In the application to the results from the RCT DUPLICATE initiative, the sceptical p-value leads to qualitatively similar results to the two-trials rule, but tends to show more evidence for treatment effects. Conclusion: The sceptical p-value represents a valid statistical measure to assess the replicability of study results and is especially useful in the context of real-world evidence emulations.
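For concreteness, the two-trials rule reduces to a few lines of code; the sceptical p-value, which additionally involves the relative sample sizes and effect sizes of the two studies, is not reimplemented here. The numbers in the example are made up.

```python
# The two-trials rule: replication success requires both studies to be
# significant at level alpha (two-sided) with effects in the same direction.
import numpy as np
from scipy import stats

def two_trials_rule(est_o, se_o, est_r, se_r, alpha=0.05):
    """Original (o) and replication (r) effect estimates with standard errors."""
    z_o, z_r = est_o / se_o, est_r / se_r
    p_o = 2 * stats.norm.sf(abs(z_o))
    p_r = 2 * stats.norm.sf(abs(z_r))
    same_sign = np.sign(est_o) == np.sign(est_r)
    return (p_o < alpha) and (p_r < alpha) and same_sign

# Example: a significant original study and a same-direction replication.
print(two_trials_rule(est_o=0.42, se_o=0.15, est_r=0.30, se_r=0.12))
```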

An extremely schematic model of the forces acting on a sailing yacht equipped with a system of foils is presented and discussed here. The role of the foils is to raise the hull from the water in order to reduce the total resistance and thereby increase the speed. CFD simulations provide the total resistance of the bare hull at several values of speed and displacement, as well as the characteristics (drag and lift coefficients) of the 2D foil sections used for the appendages. A parametric study has been performed to characterize a foil of finite dimensions. The equilibrium of the vertical forces and longitudinal moments, together with a reduced displacement, is obtained by controlling the pitch angle of the foils. The total resistance of the yacht with foils is then compared with the case without foils, identifying the speed regime, if any, in which an advantage is obtained.
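The equilibrium computation can be caricatured in a few lines: find the foil pitch angle at which foil lift carries a prescribed share of the yacht's weight, so the hull sails at reduced displacement. All coefficients below (foil area, lift-curve slope, displacement, target load share) are illustrative placeholders, not values from the CFD study.

```python
# Vertical-force equilibrium: solve for the pitch angle alpha such that
# foil lift equals a target fraction of the yacht weight.
from scipy.optimize import brentq

RHO = 1025.0             # sea-water density, kg/m^3
S_FOIL = 1.2             # total foil area, m^2 (placeholder)
CL_SLOPE = 5.0           # lift-curve slope dCL/dalpha, 1/rad (placeholder)
WEIGHT = 9.81 * 3500.0   # yacht weight, N (placeholder)

def residual(alpha, speed, lift_fraction=0.8):
    """Vertical-force residual: foil lift minus the target share of weight."""
    lift = 0.5 * RHO * speed**2 * S_FOIL * CL_SLOPE * alpha
    return lift - lift_fraction * WEIGHT

speed = 9.0  # m/s
alpha_eq = brentq(residual, 0.0, 0.3, args=(speed,))
print(f"equilibrium pitch angle: {alpha_eq * 57.3:.2f} deg")
```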

This manuscript describes the notions of blocker and interdiction applied to well-known optimization problems. The main interest of these two concepts is the ability to analyze whether a combinatorial structure survives some modifications of the underlying graph, such as removing vertices or edges in a network. In the interdiction version, we have a budget for modifications and seek to reduce as much as possible the size of a given combinatorial structure. In the blocker version, by contrast, we minimize the number of modifications needed so that the network no longer contains a given combinatorial structure. Blocker and interdiction problems have many similarities and can be applied to well-known optimization problems. We consider matching, connectivity, shortest path, max flow, and clique problems, and for each we analyze either the blocker version or the interdiction one. Applying the concept of blocker or interdiction to well-known optimization problems can change their complexity: some optimization problems become harder when one of these two notions is applied. For this reason, we provide complexity analyses showing when an optimization problem, or the associated decision problem, becomes harder. Another fundamental aspect developed in the manuscript is the use of exact methods to tackle these optimization problems, chiefly by modeling them as integer linear programs. An interesting aspect of integer linear programming is the possibility of analyzing the strength of these models theoretically, using cutting planes. For most of the problems studied in this manuscript, a polyhedral analysis is performed to prove the strength of inequalities or to describe new families of inequalities. The exact algorithms proposed are based on Branch-and-Cut or Branch-and-Price, with dedicated separation and pricing algorithms.
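As one concrete instance of the interdiction framework, the sketch below models max-flow interdiction with the classical dual-based integer program (in the style of Wood's formulation): minimize the capacity of an s-t cut in which an interdicted arc may cross at zero cost, subject to a budget on the number of removed arcs. The tiny network and budget are made up for illustration.

```python
# Max-flow interdiction as a MIP: pi marks the source side of the cut,
# beta marks arcs paying their capacity, x marks interdicted arcs.
import pulp

nodes = ["s", "a", "b", "t"]
arcs = {("s", "a"): 4, ("s", "b"): 3, ("a", "b"): 2, ("a", "t"): 3, ("b", "t"): 4}
B = 1  # interdiction budget

m = pulp.LpProblem("maxflow_interdiction", pulp.LpMinimize)
pi = pulp.LpVariable.dicts("pi", nodes, cat="Binary")
beta = pulp.LpVariable.dicts("beta", arcs.keys(), cat="Binary")
x = pulp.LpVariable.dicts("x", arcs.keys(), cat="Binary")

m += pulp.lpSum(arcs[e] * beta[e] for e in arcs)        # capacity of the cut
for (i, j) in arcs:
    m += beta[(i, j)] + x[(i, j)] >= pi[i] - pi[j]      # pay for the arc or interdict it
m += pi["s"] == 1
m += pi["t"] == 0
m += pulp.lpSum(x[e] for e in arcs) <= B                # budget constraint

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("post-interdiction max flow:", pulp.value(m.objective))
print("interdicted arcs:", [e for e in arcs if x[e].value() > 0.5])
```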

Automatic differentiation is everywhere, but there exists only minimal documentation of how it works in complex arithmetic beyond stating "derivatives in $\mathbb{C}^d$" $\cong$ "derivatives in $\mathbb{R}^{2d}$" and, at best, shallow references to Wirtinger calculus. Unfortunately, the equivalence $\mathbb{C}^d \cong \mathbb{R}^{2d}$ becomes insufficient as soon as we need to derive custom gradient rules, e.g., to avoid differentiating "through" expensive linear algebra functions or differential equation simulators. To address this lack of documentation, this article surveys forward- and reverse-mode automatic differentiation with complex numbers, covering topics such as Wirtinger derivatives, a modified chain rule, and different gradient conventions, while explicitly avoiding holomorphicity and the Cauchy--Riemann equations (which would be far too restrictive). To be precise, we will derive, explain, and implement a complex version of Jacobian-vector and vector-Jacobian products almost entirely with linear algebra, without relying on complex analysis or differential geometry. This tutorial is a call to action, for users and developers alike, to take complex values seriously when implementing custom gradient propagation rules -- the manuscript explains how.
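The non-holomorphic chain rule can be made concrete with a tiny forward-mode implementation: each intermediate carries the pair of Wirtinger derivatives $(\partial/\partial z, \partial/\partial \bar z)$, and conjugation swaps them. This is a toy construction for illustration, not the article's implementation.

```python
# Forward-mode Wirtinger calculus: for w = f(z) followed by g(w), the
# chain rule reads
#   dh/dz    = g_w f_z    + g_wbar conj(f_zbar),
#   dh/dzbar = g_w f_zbar + g_wbar conj(f_z),
# with no holomorphicity assumed anywhere.
class Wirtinger:
    def __init__(self, val, dz=1.0 + 0j, dzbar=0.0 + 0j):
        self.val, self.dz, self.dzbar = val, dz, dzbar

    def __mul__(self, other):
        return Wirtinger(
            self.val * other.val,
            self.dz * other.val + self.val * other.dz,        # product rule in z
            self.dzbar * other.val + self.val * other.dzbar,  # product rule in zbar
        )

    def conj(self):
        # conj swaps the two Wirtinger derivatives (and conjugates them)
        return Wirtinger(self.val.conjugate(),
                         self.dzbar.conjugate(), self.dz.conjugate())

z = Wirtinger(1.0 + 2.0j)     # seed: dz/dz = 1, dz/dzbar = 0
f = z * z.conj()              # f(z) = |z|^2, real-valued, non-holomorphic
print(f.val, f.dz, f.dzbar)   # expect |z|^2, conj(z), z
```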

The ongoing quest to discover new phenomena at the LHC necessitates the continuous development of algorithms and technologies. Established approaches like machine learning, along with emerging technologies such as quantum computing, show promise in the enhancement of experimental capabilities. In this work, we propose a strategy for anomaly detection tasks at the LHC based on unsupervised quantum machine learning, and demonstrate its effectiveness in identifying new phenomena. The designed quantum models, an unsupervised kernel machine and two clustering algorithms, are trained to detect new-physics events using a latent representation of LHC data, generated by an autoencoder designed to accommodate current quantum hardware limitations on problem size. For kernel-based anomaly detection, we implement an instance of the model on a quantum computer, and we identify a regime where it significantly outperforms its classical counterparts. We show that the observed performance enhancement is related to the quantum resources utilised by the model.
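A classical stand-in conveys the structure of the kernel-based detector: a one-class SVM trained on background events with a precomputed kernel matrix. In the paper the kernel entries come from a quantum circuit evaluated on the autoencoder's latent representation; here an RBF kernel on synthetic data plays that role.

```python
# One-class SVM anomaly detection with a precomputed kernel matrix,
# standing in for the quantum kernel machine described above.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(7)
background = rng.normal(0.0, 1.0, size=(500, 8))   # latent background events
signal = rng.normal(2.5, 1.0, size=(50, 8))        # latent "new physics"

K_train = rbf_kernel(background, background, gamma=0.1)
clf = OneClassSVM(kernel="precomputed", nu=0.05).fit(K_train)

test = np.vstack([background[:50], signal])
K_test = rbf_kernel(test, background, gamma=0.1)
scores = clf.decision_function(K_test)             # low score = anomalous
print("mean score, background:", scores[:50].mean())
print("mean score, signal:    ", scores[50:].mean())
```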

Characterizing anomalous diffusion is crucial for understanding the evolution of complex stochastic systems, from molecular interactions to cellular dynamics. In this work, we characterize the performance on this task of Bi-Mamba, a novel state-space deep-learning architecture equipped with a bidirectional scan mechanism. Our implementation is tested on the AnDi-2 challenge datasets, among others. Designed for regression tasks, the Bi-Mamba architecture efficiently infers the effective diffusion coefficient and anomalous exponent from single, short trajectories. As such, our results indicate the potential practical use of the Bi-Mamba architecture for anomalous-diffusion characterization.
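For context, the conventional baseline for this regression task fits the time-averaged mean squared displacement, $\mathrm{TA\text{-}MSD}(\Delta) \approx 2dK\Delta^{\alpha}$, on a log-log scale; this is the kind of estimator that learned architectures such as Bi-Mamba aim to outperform on short trajectories. The sketch below is this classical fit, not the Bi-Mamba model.

```python
# Estimate the anomalous exponent alpha and generalized diffusion
# coefficient K from a single trajectory via a log-log TA-MSD fit.
import numpy as np

def tamsd_fit(traj, max_lag=10):
    """traj: (T, d) trajectory.  Returns (alpha, K) from a log-log fit."""
    d = traj.shape[1]
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag])**2, axis=1))
                    for lag in lags])
    slope, intercept = np.polyfit(np.log(lags), np.log(msd), 1)
    return slope, np.exp(intercept) / (2 * d)

# Example on ordinary Brownian motion (alpha = 1, K = 0.5 here).
rng = np.random.default_rng(3)
traj = np.cumsum(rng.normal(0, 1, size=(200, 2)), axis=0)
print(tamsd_fit(traj))   # expect alpha close to 1
```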

Among all the deterministic CholeskyQR-type algorithms, Shifted CholeskyQR3 is specifically designed to address the QR factorization of ill-conditioned matrices. This algorithm introduces a shift parameter $s$ to prevent failure during the initial Cholesky factorization step, making the choice of this parameter critical for the algorithm's effectiveness. Our goal is to identify a smaller $s$ than the traditional choice based on $\|X\|_{2}$. In this research, we propose a new matrix norm, called the $g$-norm, which is based on the column properties of $X$. This norm allows us to obtain a reduced shift parameter $s$ for the Shifted CholeskyQR3 algorithm, thereby improving the sufficient condition on $\kappa_{2}(X)$ for this method. We provide rigorous proofs of orthogonality and residual bounds for the improved algorithm using our proposed $s$. Numerical experiments confirm the enhanced numerical stability of orthogonality and residuals with the reduced $s$. We find that Shifted CholeskyQR3 can effectively handle ill-conditioned $X$ with a larger $\kappa_{2}(X)$ when using our reduced $s$ compared to the original $s$. Furthermore, we compare CPU times with those of other algorithms to assess the performance improvements.
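The algorithm itself is compact: one Cholesky-QR step on the shifted Gram matrix $X^{T}X+sI$, followed by two unshifted refinement steps. The sketch below uses a simple norm-based shift standing in for the traditional choice; the paper's $g$-norm-based shift is not reproduced, and the shift constant is illustrative.

```python
# Shifted CholeskyQR3: a shifted Cholesky-QR step to survive
# ill-conditioning, then two plain Cholesky-QR refinement steps.
import numpy as np

def cholesky_qr(X):
    R = np.linalg.cholesky(X.T @ X).T     # upper-triangular Cholesky factor
    return X @ np.linalg.inv(R), R

def shifted_cholesky_qr3(X, s):
    G = X.T @ X + s * np.eye(X.shape[1])  # shifted Gram matrix
    R1 = np.linalg.cholesky(G).T
    Q = X @ np.linalg.inv(R1)
    Q, R2 = cholesky_qr(Q)                # first refinement
    Q, R3 = cholesky_qr(Q)                # second refinement
    return Q, R3 @ R2 @ R1

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 50)) @ np.diag(np.logspace(0, -8, 50))  # ill-conditioned
u = np.finfo(float).eps
s = 11 * u * np.linalg.norm(X, 2)**2      # norm-based shift (illustrative constant)
Q, R = shifted_cholesky_qr3(X, s)
print("orthogonality:", np.linalg.norm(Q.T @ Q - np.eye(50)))
print("residual:     ", np.linalg.norm(Q @ R - X) / np.linalg.norm(X, 2))
```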

Graph states are fundamental objects in the theory of quantum information due to their simple classical description and rich entanglement structure. They are also intimately related to IQP circuits, which have applications in quantum pseudorandomness and quantum advantage. For us, they are a toy model for understanding the relation between circuit connectivity, entanglement structure, and computational complexity. In the worst case, a strict dichotomy in the computational universality of such graph states appears as a function of the degree $d$ of a regular graph state [GDH+23]. In this paper, we initiate the study of the average-case complexity of simulating random graph states of varying degree when measured in random product bases, and give evidence that a similar complexity-theoretic dichotomy exists in the average case. Specifically, we consider random $d$-regular graph states and prove three distinct results: First, we exhibit two families of IQP circuits of depth $d$ and show that they anticoncentrate for any $2 < d = o(n)$ when measured in a random $X$-$Y$-plane product basis. This implies anticoncentration for random constant-regular graph states. Second, in the regime $d = \Theta(n^c)$ with $c \in (0,1)$, we prove that random $d$-regular graph states contain polynomially large grid graphs as induced subgraphs with high probability. This implies that they are universal resource states for measurement-based computation. Third, in the regime of high degree ($d\sim n/2$), we show that random graph states are not sufficiently entangled to be trivially classically simulable, unlike Haar-random states. Proving the three results requires different techniques--the analysis of a classical statistical-mechanics model using Krawtchouk polynomials, graph-theoretic analysis using the switching method, and analysis of the ranks of submatrices of random adjacency matrices, respectively.
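The objects in the third result can be probed directly: for a graph state $|G\rangle$, the entanglement entropy of a subset $A$ of qubits equals the rank over GF(2) of the adjacency submatrix between $A$ and its complement. The sketch below estimates this rank for a random $d$-regular graph in the high-degree regime; the sizes are illustrative.

```python
# Entanglement of a random d-regular graph state across a bipartition,
# computed as the GF(2) rank of the off-diagonal adjacency block.
import numpy as np
import networkx as nx

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # swap the pivot row into place
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]               # eliminate the column mod 2
        rank += 1
    return rank

n, d = 40, 20                                 # high-degree regime d ~ n/2
G = nx.random_regular_graph(d, n, seed=5)
A = nx.to_numpy_array(G, dtype=np.int64)
half = list(range(n // 2))
rest = list(range(n // 2, n))
print("entanglement across the cut:", gf2_rank(A[np.ix_(half, rest)]))
```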
