
We formalize a complete proof of the regular case of Fermat's Last Theorem in the Lean 4 theorem prover. Our formalization includes a proof of Kummer's lemma, which is the main obstruction to Fermat's Last Theorem for regular primes. Rather than following the modern proof of Kummer's lemma via class field theory, we prove it using Hilbert's Theorems 90-94, in a way that is more amenable to formalization.
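
As a rough illustration of what such a formalization states, the following Lean 4 sketch phrases the regular case of Fermat's Last Theorem; the predicate `IsRegular` and all hypothesis names are placeholders and need not match the actual development.

```lean
import Mathlib

/-- Illustrative placeholder: a prime `p` is regular when it does not divide
the class number of the `p`-th cyclotomic field. The real definition in the
formalization may be phrased differently. -/
def IsRegular (p : ℕ) : Prop := sorry

/-- Sketch of the kind of statement being formalized (hypotheses illustrative). -/
theorem flt_regular_sketch {p : ℕ} (hp : p.Prime) (hreg : IsRegular p) (h5 : 5 ≤ p)
    {a b c : ℤ} (ha : a ≠ 0) (hb : b ≠ 0) (hc : c ≠ 0) :
    a ^ p + b ^ p ≠ c ^ p :=
  sorry
```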

Related content

We consider the issue of balancing intensification and diversification in a memetic algorithm (MA) for the multiobjective optimization of investment portfolios with cardinality constraints. We approach this issue by applying knowledge-augmented operators (local search and a memory of elite solutions) selectively, based on the search epoch in which the algorithm finds itself, hence alternating between unbiased search (guided solely by the built-in search mechanics of the algorithm) and focused search (intensified by the use of the problem-aware operators). These operators exploit the Sharpe index (a measure of the relationship between return and risk) as a source of problem knowledge. We have conducted a sensitivity analysis to determine in which phases of the search the application of these operators leads to better results. Our findings indicate that the resulting algorithm is quite robust in terms of parameterization from the point of view of this problem-specific indicator. Furthermore, we show that not only can non-memetic counterparts be outperformed, but also that there is a range of parameters in which the MA is competitive, if not better, in terms of standard multiobjective performance indicators.
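
The following Python sketch illustrates the kind of selective intensification described above, under simplifying assumptions: a Sharpe-guided hill climb stands in for the knowledge-augmented local search, it is applied only inside a configurable epoch window, and cardinality constraints are omitted. Function names and parameters are illustrative, not the paper's implementation.

```python
import numpy as np

def sharpe(weights, mean_returns, cov, rf=0.0):
    """Sharpe index: (expected return - risk-free rate) / portfolio risk."""
    ret = weights @ mean_returns
    risk = np.sqrt(weights @ cov @ weights)
    return (ret - rf) / risk

def sharpe_local_search(weights, mean_returns, cov, step=0.02, iters=50, rng=None):
    """Hill-climb on the Sharpe index by shifting mass between two assets."""
    rng = np.random.default_rng(rng)
    best, best_s = weights.copy(), sharpe(weights, mean_returns, cov)
    for _ in range(iters):
        i, j = rng.choice(len(weights), size=2, replace=False)
        cand = best.copy()
        delta = min(step, cand[j])        # keep weights non-negative and summing to 1
        cand[i] += delta
        cand[j] -= delta
        s = sharpe(cand, mean_returns, cov)
        if s > best_s:
            best, best_s = cand, s
    return best

def maybe_intensify(generation, total_gens, window=(0.5, 0.8)):
    """Apply the problem-aware operator only inside a search-epoch window."""
    frac = generation / total_gens
    return window[0] <= frac <= window[1]
```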

Meta-analyses are commonly used to provide solid evidence across numerous studies. Traditional moment methods, such as the DerSimonian-Laird method, remain popular despite the availability of more accurate alternatives. While moment estimators are simple and intuitive, they are known to underestimate the variance of the overall treatment effect, particularly when the number of studies is small. This underestimation can lead to excessively narrow confidence intervals that do not attain the nominal confidence level, potentially resulting in misleading conclusions. In this study, we improve traditional moment-based meta-analysis methods by incorporating Huber's M-estimation to more accurately capture the distributional characteristics of the between-study variance. Our approach enables conservative parameter estimation even in settings with few studies, where almost all existing methods underestimate the between-study variance. Additionally, by deriving the joint distribution of the overall treatment effect and the between-study variance, we propose a way to visually explore the relationship between these two quantities. Our method provides more reliable estimators for the overall treatment effect and the between-study variance, particularly in situations with few studies. Through simulations and real data analysis, we demonstrate that our approach consistently yields more conservative results than traditional moment methods and ensures more accurate confidence intervals in meta-analyses.
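
For reference, the moment-based baseline the abstract refers to is the DerSimonian-Laird estimator; a minimal sketch is given below (the proposed Huber M-estimation refinement is not reproduced here). Here `y` holds the study-level effect estimates and `v` their within-study variances.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Classical DerSimonian-Laird random-effects meta-analysis (moment method)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                   # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)              # Cochran's Q statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)            # between-study variance (moment estimate)
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))              # tends to be too small when k is small
    return mu, tau2, se
```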

Human speech exhibits rich and flexible prosodic variation. To address the one-to-many mapping problem from text to prosody in a reasonable and flexible manner, we propose DiffStyleTTS, a multi-speaker acoustic model based on a conditional diffusion module and improved classifier-free guidance, which hierarchically models prosodic features of speech and controls different prosodic styles to guide prosody prediction. Experiments show that our method outperforms all baselines in naturalness and achieves superior synthesis speed compared to three diffusion-based baselines. Additionally, by adjusting the guidance scale, DiffStyleTTS effectively controls the intensity of the guidance applied to the synthesized prosody.
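
The guidance scale mentioned above plays the role of the weight in a standard classifier-free guidance step, sketched below; the `model` call and its arguments are placeholders and do not reflect DiffStyleTTS internals.

```python
import torch

def guided_prediction(model, x_t, t, cond, scale: float):
    """Standard classifier-free guidance: blend conditional and unconditional outputs."""
    eps_cond = model(x_t, t, cond)       # prediction conditioned on the prosody/style input
    eps_uncond = model(x_t, t, None)     # prediction with the condition dropped
    # Larger `scale` pushes the denoising step more strongly toward the condition.
    return eps_uncond + scale * (eps_cond - eps_uncond)
```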

We consider the application of the generalized Convolution Quadrature (gCQ) to approximate the solution of an important class of sectorial problems. The gCQ is a generalization of Lubich's Convolution Quadrature (CQ) that allows for variable time steps. The available stability and convergence theory for the gCQ requires unrealistic regularity assumptions on the data, which do not hold in many applications of interest, such as the approximation of subdiffusion equations. It is well known that, for insufficiently smooth data, the original CQ with uniform steps exhibits an order reduction close to the singularity. We generalize the analysis of the gCQ to data satisfying realistic regularity assumptions and provide sufficient conditions for stability and convergence on arbitrary sequences of time points. We consider the particular case of graded meshes and show how to choose them optimally, according to the behaviour of the data. An important advantage of the gCQ method is that it allows for a fast and memory-efficient implementation. We describe how the fast and oblivious gCQ can be implemented and illustrate our theoretical results with several numerical experiments.
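
As a small illustration of the graded meshes discussed above, the sketch below builds the commonly used grading $t_n = T\,(n/N)^{\gamma}$, which clusters points near $t = 0$ where subdiffusion-type solutions are least regular; the exponent $\gamma$ would be chosen according to the regularity of the data, and $\gamma = 1$ recovers uniform steps. This is an illustrative construction, not the paper's optimal choice.

```python
import numpy as np

def graded_mesh(T: float, N: int, gamma: float) -> np.ndarray:
    """Graded time mesh t_n = T * (n / N) ** gamma on [0, T]."""
    n = np.arange(N + 1)
    return T * (n / N) ** gamma

# Example: graded_mesh(1.0, 10, 3.0) places 6 of its 11 points in [0, 0.125],
# concentrating resolution near the initial-time singularity.
```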

We study linear preconditioning in Markov chain Monte Carlo. We consider the class of well-conditioned distributions, for which several mixing time bounds depend on the condition number $\kappa$. First, we show that well-conditioned distributions exist for which $\kappa$ can be arbitrarily large and yet no linear preconditioner can reduce it. We then impose two sets of extra assumptions under which a linear preconditioner can significantly reduce $\kappa$. For the random walk Metropolis we further provide upper and lower bounds on the spectral gap with tight $1/\kappa$ dependence. This allows us to give conditions under which linear preconditioning can provably increase the gap. We then study popular preconditioners such as the covariance, its diagonal approximation, the Hessian at the mode, and the QR decomposition. We show conditions under which each of these reduces $\kappa$ to near its minimum. We also show that the diagonal approach can in fact \textit{increase} the condition number. This is of interest as diagonal preconditioning is the default choice in well-known software packages. We conclude with a numerical study comparing preconditioners in different models, and showing how proper preconditioning can greatly reduce compute time in Hamiltonian Monte Carlo.
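
A minimal numerical illustration of linear preconditioning and the condition number $\kappa$ it targets, assuming a badly scaled Gaussian example and covariance (Cholesky) preconditioning; this sketch is not tied to the paper's experiments.

```python
import numpy as np

def condition_number(H):
    """kappa of a symmetric positive-definite matrix (e.g. a log-density Hessian)."""
    eig = np.linalg.eigvalsh(H)
    return eig[-1] / eig[0]

def precondition_logpdf(logpdf, L, m):
    """Log-density of z under the linear change of variables x = L z + m
    (up to an additive constant)."""
    def logpdf_z(z):
        return logpdf(L @ z + m)
    return logpdf_z

# Badly scaled Gaussian target
cov = np.diag([1.0, 1e4])
H = np.linalg.inv(cov)                              # Hessian of -log density
print(condition_number(H))                          # 1e4 before preconditioning
L = np.linalg.cholesky(cov)
print(condition_number(L.T @ H @ L))                # ~1 after covariance preconditioning

logpdf = lambda x: -0.5 * x @ H @ x                 # Gaussian log-density (unnormalized)
logpdf_z = precondition_logpdf(logpdf, L, np.zeros(2))  # well-conditioned target in z-space
```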

This paper proposes fast randomized algorithms for computing the Kronecker Tensor Decomposition (KTD). The proposed algorithms can decompose a given tensor into the KTD format much faster than existing state-of-the-art algorithms. Our principal idea is to use a randomization framework to significantly reduce the computational complexity. We provide extensive simulations to verify the effectiveness and performance of the proposed randomized algorithms, which achieve speedups of several orders of magnitude over their deterministic counterparts. Our simulations use synthetic and real-world datasets, with applications to tensor completion, video/image compression, image denoising, and image super-resolution.
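
To make the idea concrete in the simplest (matrix) case, the sketch below computes a nearest Kronecker product $A \approx B \otimes C$ by applying a randomized rank-1 SVD to the Van Loan-Pitsianis rearrangement of $A$. This is an illustrative sketch under those assumptions, not the paper's KTD algorithm.

```python
import numpy as np

def rearrange(A, m1, n1, m2, n2):
    """Van Loan-Pitsianis rearrangement: row (i, j) holds the flattened
    (m2 x n2) block A[i*m2:(i+1)*m2, j*n2:(j+1)*n2]."""
    R = np.empty((m1 * n1, m2 * n2))
    for i in range(m1):
        for j in range(n1):
            R[i * n1 + j] = A[i*m2:(i+1)*m2, j*n2:(j+1)*n2].reshape(-1)
    return R

def randomized_nkp(A, m1, n1, m2, n2, oversample=5, rng=None):
    """Nearest Kronecker product A ~ B kron C via a rank-1 randomized SVD
    of the rearranged matrix (matrix case only)."""
    rng = np.random.default_rng(rng)
    R = rearrange(A, m1, n1, m2, n2)
    Omega = rng.standard_normal((R.shape[1], 1 + oversample))   # random sketch
    Q, _ = np.linalg.qr(R @ Omega)                               # approximate range of R
    U, s, Vt = np.linalg.svd(Q.T @ R, full_matrices=False)       # small SVD
    u, v, sigma = Q @ U[:, 0], Vt[0], s[0]
    B = (np.sqrt(sigma) * u).reshape(m1, n1)
    C = (np.sqrt(sigma) * v).reshape(m2, n2)
    return B, C

# Quick check on a matrix that is exactly a Kronecker product
rng = np.random.default_rng(0)
B0, C0 = rng.standard_normal((3, 4)), rng.standard_normal((5, 6))
B, C = randomized_nkp(np.kron(B0, C0), 3, 4, 5, 6)
print(np.linalg.norm(np.kron(B, C) - np.kron(B0, C0)))   # ~0
```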

We apply Named Entity Recognition (NER) to a particular sub-genre of legal texts in German: legal norms regulating administrative processes in public service administration. The analysis of such texts involves identifying stretches of text that instantiate one of ten classes identified by public service administration professionals. We investigate and compare three methods for performing NER to detect these classes: a rule-based system, deep discriminative models, and a deep generative model. Our results show that the deep discriminative models outperform both the rule-based system and the deep generative model; the latter two perform roughly equally well overall, each outperforming the other on different classes. The main cause of this somewhat surprising result is arguably that the classes used in the analysis are semantically and syntactically heterogeneous, in contrast to the classes used in more standard NER tasks. Deep discriminative models appear to be better equipped to deal with this heterogeneity than both generic LLMs and human linguists designing rule-based NER systems.
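
For orientation, the snippet below shows the general shape of a deep discriminative token-classification baseline using Hugging Face Transformers; the checkpoint, the ten placeholder labels, and the example sentence are assumptions, and the classification head here is untrained, so the printed tags are meaningless until the model is fine-tuned on annotated norms.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL = "bert-base-german-cased"                          # assumed German encoder checkpoint
LABELS = ["O"] + [f"B-CLASS{i}" for i in range(1, 11)]    # hypothetical 10 norm classes

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForTokenClassification.from_pretrained(MODEL, num_labels=len(LABELS))

text = "Der Antrag ist schriftlich bei der zuständigen Behörde einzureichen."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits                       # (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for tok, idx in zip(tokens, pred_ids):
    print(tok, LABELS[idx])                               # random until fine-tuned
```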

Change point detection (CPD) methods aim to identify abrupt shifts in the distribution of input data streams. Accurate estimators for this task are crucial across various real-world scenarios. Yet traditional unsupervised CPD techniques face significant limitations, often relying on strong assumptions or suffering from low expressive power due to inherent model simplicity. In contrast, representation learning methods overcome these drawbacks by offering flexibility and the ability to capture the full complexity of the data without imposing restrictive assumptions. However, these approaches are still emerging in the CPD field and lack robust theoretical foundations to ensure their reliability. Our work addresses this gap by integrating the expressive power of representation learning with the groundedness of traditional CPD techniques. We adopt spectral normalization (SN) for deep representation learning in CPD tasks and prove that the embeddings after SN are highly informative for CPD. Our method significantly outperforms current state-of-the-art methods in a comprehensive evaluation on three standard CPD datasets.
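
A minimal sketch of the ingredient highlighted above: spectral normalization applied to a small window encoder, with a naive jump score between consecutive embeddings standing in for a change-point statistic. The architecture and the score are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SNEncoder(nn.Module):
    def __init__(self, window_dim: int, embed_dim: int = 32):
        super().__init__()
        # Spectral normalization constrains each layer's spectral norm,
        # keeping the encoder Lipschitz and the embedding geometry controlled.
        self.net = nn.Sequential(
            spectral_norm(nn.Linear(window_dim, 64)), nn.ReLU(),
            spectral_norm(nn.Linear(64, embed_dim)),
        )

    def forward(self, x):
        return self.net(x)

encoder = SNEncoder(window_dim=50)
stream = torch.randn(200, 50)                 # 200 sliding windows of length 50
with torch.no_grad():
    z = encoder(stream)
score = (z[1:] - z[:-1]).norm(dim=1)          # large jumps suggest a change point
print(int(score.argmax()) + 1)                # index of the largest embedding jump
```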

Particle-based methods include a variety of techniques, such as Markov Chain Monte Carlo (MCMC) and Sequential Monte Carlo (SMC), for approximating a probabilistic target distribution with a set of weighted particles. In this paper, we prove that for any set of particles, when the target distribution is discrete, there is a unique weighting mechanism that minimizes the Kullback-Leibler (KL) divergence of the (particle-based) approximation from the target distribution; any other weighting mechanism (e.g. MCMC weighting based on particles' repetitions in the Markov chain) is sub-optimal with respect to this divergence measure. Our proof does not require any restrictions on either the target distribution or the process by which the particles are generated, other than the discreteness of the target. We show that the optimal weights can be determined from values that any existing particle-based method already computes; as such, with minimal modifications and no extra computational cost, the performance of any particle-based method can be improved. Our empirical evaluations are carried out on important applications of discrete distributions, including Bayesian Variable Selection and Bayesian Structure Learning. The results illustrate that our proposed reweighting of the particles improves any particle-based approximation to the target distribution consistently and often substantially.
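
One way to read the reweighting described above (an illustrative reading of the abstract, not the authors' code): for a discrete target known up to a normalizing constant, the KL-optimal weights over the distinct particles are proportional to the unnormalized target probabilities, which samplers typically evaluate anyway.

```python
import numpy as np

def kl_optimal_weights(particles, log_target_unnorm):
    """particles: iterable of hashable states (possibly with repetitions);
    log_target_unnorm: callable returning the unnormalized log target probability."""
    distinct = list(dict.fromkeys(particles))          # deduplicate, preserve order
    logp = np.array([log_target_unnorm(x) for x in distinct])
    w = np.exp(logp - logp.max())                      # stable softmax over distinct states
    return distinct, w / w.sum()

# Usage: contrast with MCMC weighting by repetition counts in the chain
chain = ["a", "b", "a", "a", "c"]
logp = {"a": -1.0, "b": -0.5, "c": -2.0}
states, weights = kl_optimal_weights(chain, lambda x: logp[x])
print(states, weights)
```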

Combinatorial problems such as combinatorial optimization and constraint satisfaction problems arise in decision-making across various fields of science and technology. In real-world applications, when multiple optimal or constraint-satisfying solutions exist, enumerating all these solutions -- rather than finding just one -- is often desirable, as it provides flexibility in decision-making. However, combinatorial problems and their enumeration versions pose significant computational challenges due to combinatorial explosion. To address these challenges, we propose enumeration algorithms for combinatorial optimization and constraint satisfaction problems using Ising machines. Ising machines are specialized devices designed to efficiently solve combinatorial problems. Typically, they sample low-cost solutions in a stochastic manner. Our enumeration algorithms repeatedly sample solutions to collect all desirable solutions. The crux of the proposed algorithms is their stopping criteria for sampling, which are derived using probability theory. In particular, the proposed algorithms have theoretical guarantees that the failure probability of enumeration is bounded above by a user-specified value, provided that lower-cost solutions are sampled more frequently and equal-cost solutions are sampled with equal probability. Many physics-based Ising machines are expected to (approximately) satisfy these conditions. As a demonstration, we applied our algorithm, with simulated annealing as the sampler, to maximum clique enumeration on random graphs. We found that our algorithm enumerates all maximum cliques in large dense graphs faster than a conventional branch-and-bound algorithm specially designed for maximum clique enumeration. This demonstrates the promising potential of our proposed approach.
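
The sketch below shows the overall enumeration loop with one simple stopping rule derived under the abstract's assumption that equal-cost solutions are sampled with equal probability: if $k$ minimum-cost solutions have been found and at least one remains unseen, a streak of $r$ repeated minimum-cost draws has probability at most $(k/(k+1))^r$, so sampling stops once this falls below a user-specified $\delta$. The paper's exact criterion may differ; `sampler` is a placeholder for an Ising-machine or simulated-annealing call returning a (solution, cost) pair.

```python
import math

def enumerate_minima(sampler, delta=0.01, max_draws=100_000):
    """Collect distinct minimum-cost solutions by repeated sampling.
    Solutions must be hashable (e.g. a tuple of spins)."""
    found, best, streak = {}, math.inf, 0
    for _ in range(max_draws):
        x, cost = sampler()
        if cost < best:                       # strictly better cost: restart collection
            best, found, streak = cost, {x: cost}, 0
        elif cost == best:
            if x not in found:
                found[x] = cost
                streak = 0                    # new minimum-cost solution found
            else:
                streak += 1                   # repeated minimum-cost draw
            k = len(found)
            # If >=1 unseen minimum remains, a repeat streak of length r occurs
            # with probability at most (k/(k+1))^r; stop once that is below delta.
            if streak * math.log((k + 1) / k) >= math.log(1 / delta):
                break
    return found
```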
