
The correlated Erd\H{o}s-R\'enyi random graph ensemble is a probability law on pairs of graphs with $n$ vertices, parametrized by their average degree $\lambda$ and their correlation coefficient $s$. It can be used as a benchmark for the graph alignment problem, in which the vertex labels of one of the graphs are reshuffled by an unknown permutation; the goal is to infer this permutation and thus properly match the pairs of vertices in the two graphs. A series of recent works has unveiled the role of Otter's constant $\alpha$ (which controls the exponential growth rate of the number of unlabeled rooted trees as a function of their size) in this problem: for $s>\sqrt{\alpha}$ and $\lambda$ large enough it is possible to recover, in time polynomial in $n$, a positive fraction of the hidden permutation. The exponent of this polynomial growth is, however, quite large and depends on the other parameters, which limits the range of applications of the algorithm. In this work we present a family of faster algorithms for this task, show through numerical simulations that their accuracy is only slightly reduced with respect to the original one, and conjecture that in the large-$\lambda$ limit they undergo phase transitions at modified Otter's thresholds $\sqrt{\widehat{\alpha}}>\sqrt{\alpha}$, with $\widehat{\alpha}$ related to the enumeration of a restricted family of trees.
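A minimal sketch of one common way to sample from this ensemble (the parent-subsampling construction, so that each child graph has marginal edge probability $\lambda/n$ and edge correlation $\approx s$; the function name and parametrization details are ours):

```python
import random

def correlated_er_pair(n, lam, s, seed=0):
    """Sample a pair of s-correlated Erdos-Renyi graphs with mean degree lam.

    Each potential edge appears in a hidden parent graph with probability
    lam / (s * n); each parent edge is then kept independently in each child
    with probability s, so each child has marginal edge probability lam / n.
    The second graph's vertices are relabeled by a hidden uniformly random
    permutation pi, which a graph alignment algorithm would try to recover.
    """
    rng = random.Random(seed)
    p_parent = lam / (s * n)
    pi = list(range(n))
    rng.shuffle(pi)  # hidden permutation
    g1, g2 = set(), set()
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p_parent:
                if rng.random() < s:
                    g1.add((u, v))
                if rng.random() < s:
                    g2.add(tuple(sorted((pi[u], pi[v]))))
    return g1, g2, pi
```

An alignment algorithm is then scored by the fraction of vertices $v$ for which it outputs $\pi(v)$ correctly.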

Related content

In the study of extremes, the presence of asymptotic independence signifies that extreme events across multiple variables are unlikely to occur together. Although well understood in the bivariate context, the concept remains relatively unexplored when addressing the nuances of joint occurrence of extremes in higher dimensions. In this paper, we propose a notion of mutual asymptotic independence to capture the behavior of joint extremes in dimensions larger than two and contrast it with the classical notion of (pairwise) asymptotic independence. Furthermore, we define $k$-wise asymptotic independence, which lies in between pairwise and mutual asymptotic independence. The concepts are compared using examples of Archimedean, Gaussian and Marshall-Olkin copulas, among others. Notably, for the popular Gaussian copula, we provide explicit conditions on the correlation matrix for mutual asymptotic independence to hold; moreover, we are able to compute exact tail orders for various tail events.
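As a concrete illustration of the bivariate baseline, the following Monte-Carlo sketch (our own, not from the paper) estimates the conditional tail probability $P(V > u \mid U > u)$ for a Gaussian copula with correlation $\rho < 1$; pairwise asymptotic independence means this ratio vanishes as $u \to 1$:

```python
import numpy as np
from scipy.stats import norm

def cond_tail_prob(rho, u, n=200_000, seed=1):
    """Monte-Carlo estimate of P(V > u | U > u) under a Gaussian copula."""
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
    # transform to uniform margins via the normal CDF (the copula scale)
    u1, u2 = norm.cdf(z1), norm.cdf(z2)
    return ((u1 > u) & (u2 > u)).mean() / (u1 > u).mean()
```

For a fixed $\rho < 1$ the estimate decreases as the threshold $u$ approaches 1, consistent with asymptotic independence of the Gaussian copula.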

A non-linear complex system governed by physics on multiple spatial and temporal scales cannot be fully understood with a single diagnostic, as each provides only a partial view and much information is lost during data extraction. Combining multiple diagnostics also results in imperfect projections of the system's physics. By identifying hidden inter-correlations between diagnostics, we can leverage mutual support to fill in these gaps, but uncovering these inter-correlations analytically is too complex. We introduce a machine learning methodology to address this issue. Our multimodal approach generates super-resolution data encompassing multiple physics phenomena, capturing detailed structural evolution and responses to perturbations previously unobservable. This methodology addresses a critical problem in fusion plasmas: the Edge Localized Mode (ELM), a plasma instability that can severely damage reactor walls. One method of stabilizing ELMs is to use resonant magnetic perturbations to trigger magnetic islands. However, the low spatial and temporal resolution of measurements limits the analysis of these magnetic islands, owing to their small size, rapid dynamics, and complex interactions within the plasma. With super-resolution diagnostics, we can experimentally verify theoretical models of magnetic islands for the first time, providing unprecedented insights into their role in ELM stabilization. This advancement aids in developing effective ELM suppression strategies for future fusion reactors such as ITER and has broader applications, potentially revolutionizing diagnostics in fields such as astronomy, astrophysics, and medical imaging.

Geological parameterization entails the representation of a geomodel using a small set of latent variables and a mapping from these variables to grid-block properties such as porosity and permeability. Parameterization is useful for data assimilation (history matching), as it maintains geological realism while reducing the number of variables to be determined. Diffusion models are a new class of generative deep-learning procedures that have been shown to outperform previous methods, such as generative adversarial networks, for image generation tasks. Diffusion models are trained to "denoise", which enables them to generate new geological realizations from input fields characterized by random noise. Latent diffusion models, the specific variant considered in this study, provide dimension reduction through the use of a low-dimensional latent variable. The model developed in this work includes a variational autoencoder for dimension reduction and a U-net for the denoising process. Our application involves conditional 2D three-facies (channel-levee-mud) systems. The latent diffusion model is shown to provide realizations that are visually consistent with samples from geomodeling software. Quantitative metrics involving spatial and flow-response statistics are evaluated, and general agreement between the diffusion-generated models and reference realizations is observed. Stability tests are performed to assess the smoothness of the parameterization method. The latent diffusion model is then used for ensemble-based data assimilation. Two synthetic "true" models are considered. Significant uncertainty reduction, posterior P$_{10}$-P$_{90}$ forecasts that generally bracket the observed data, and consistent posterior geomodels are achieved in both cases.
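The training side of the denoising process can be sketched in a few lines: a generic DDPM-style forward-noising step, in which the network would be trained to recover the injected noise. This is a standard formulation, not the paper's full architecture, and the names are ours:

```python
import numpy as np

def ddpm_forward_noising(x0, t, alphas_cumprod, rng):
    """One step of the diffusion forward process used to train the denoiser.

    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, with eps ~ N(0, I).
    A denoising network is trained to predict eps from (x_t, t); reversing
    this process at sampling time generates new realizations from noise.
    """
    abar = alphas_cumprod[t]
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps
    return xt, eps
```

In the latent variant, `x0` would be the VAE-encoded latent field rather than the grid-block property field itself.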

This paper presents a multivariate normal integral expression for the joint survival function of the cumulated components of any multinomial random vector. This result can be viewed as a multivariate analog of Equation (7) from Carter & Pollard (2004), who improved Tusn\'ady's inequality. Our findings are based on a crucial relationship between the joint survival function of the cumulated components of any multinomial random vector and the joint cumulative distribution function of a corresponding Dirichlet distribution. We offer two distinct proofs: the first expands the logarithm of the Dirichlet density, while the second employs Laplace's method applied to the Dirichlet integral.
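In one dimension the multinomial-survival / Dirichlet-CDF relationship reduces to the classical binomial-Beta identity $P(X \ge k) = P(B \le p)$ with $B \sim \mathrm{Beta}(k, n-k+1)$, which is easy to verify numerically (a sketch of the special case, not the paper's multivariate result):

```python
from scipy import stats

def binom_sf_via_beta(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), computed via the Beta CDF.

    This is the d = 1 instance of the relationship between the survival
    function of cumulated multinomial components and the CDF of a
    corresponding Dirichlet distribution (here, a Beta distribution).
    """
    return stats.beta.cdf(p, k, n - k + 1)
```

The multivariate statement of the paper generalizes this to joint survival events of the cumulated components, with the Beta CDF replaced by a Dirichlet CDF.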

Confidence assessments of semantic segmentation algorithms in remote sensing are important: it is desirable for a model to know a priori whether it is likely to produce an incorrect output. Evaluations of the confidence assigned to the estimates of models for the task of classification in Earth Observation (EO) are crucial, as they can be used to achieve improved semantic segmentation performance and prevent high error rates during inference and deployment. The model we develop, the Confidence Assessments of classification algorithms for Semantic segmentation (CAS) model, performs confidence evaluations at both the segment and pixel levels, and outputs both labels and confidence scores. The outcome of this work has important applications. The main application is the evaluation of EO Foundation Models on semantic segmentation downstream tasks, in particular land cover classification using satellite Copernicus Sentinel-2 data. The evaluation shows that the proposed model is effective and outperforms alternative baseline models.
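A minimal per-pixel baseline of the kind such evaluations build on is the softmax probability of the predicted class. This sketch is ours, not the CAS implementation, which additionally aggregates confidence at the segment level:

```python
import numpy as np

def pixel_confidence(logits):
    """Per-pixel label map and confidence from class logits.

    logits: array of shape (C, H, W). Returns the argmax label map and
    the softmax probability of the chosen class as a simple per-pixel
    confidence score.
    """
    z = logits - logits.max(axis=0, keepdims=True)  # numerical stability
    p = np.exp(z)
    p /= p.sum(axis=0, keepdims=True)
    labels = p.argmax(axis=0)
    conf = p.max(axis=0)
    return labels, conf
```

A segment-level score can then be obtained, for instance, by averaging the pixel confidences over each segment.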

This paper is devoted to the study of the MaxMinDegree Arborescence (MMDA) problem in layered directed graphs of depth $\ell\le O(\log n/\log \log n)$, which is a special case of the Santa Claus problem. Obtaining a poly-logarithmic approximation for MMDA in polynomial time is of high interest, as it is the main obstacle towards the same guarantee for the general Santa Claus problem, which is itself a necessary condition for eventually improving the long-standing 2-approximation for makespan scheduling on unrelated machines by Lenstra, Shmoys, and Tardos [FOCS'87]. The only ways we have to solve the MMDA problem within an $O(\text{polylog}(n))$ factor are via a ``round-and-condition'' algorithm using the $(\ell-1)^{th}$ level of the Sherali-Adams hierarchy, or via a ``recursive greedy'' algorithm, both of which run in quasi-polynomial time. However, very little is known about the limitations of these techniques, and it is even plausible that the round-and-condition algorithm could obtain the same approximation guarantee with only $1$ round of Sherali-Adams, which would imply a polynomial-time algorithm. As a main result, we construct an MMDA instance of depth $3$ for which an integrality gap of $n^{\Omega(1)}$ survives $1$ round of the Sherali-Adams hierarchy. This result is best possible, since it is known that after only $2$ rounds the gap is at most poly-logarithmic on depth-3 graphs. Second, we show that our instance can be ``lifted'' via a simple trick to MMDA instances of any depth $\ell\in \Omega(1)\cap o(\log n/\log \log n)$, for which we conjecture that an integrality gap of $n^{\Omega(1/\ell)}$ survives $\Omega(\ell)$ rounds of Sherali-Adams. We show a number of intermediate results towards this conjecture, which also suggest that our construction is a significant challenge to the techniques used so far for Santa Claus.

The multiparameter matrix pencil problem (MPP) is a generalization of the one-parameter MPP: given a set of $m\times n$ complex matrices $A_0,\ldots, A_r$, with $m\ge n+r-1$, it is required to find all complex scalars $\lambda_0,\ldots,\lambda_r$, not all zero, such that the matrix pencil $A(\lambda)=\sum_{i=0}^r\lambda_iA_i$ loses column rank, together with a corresponding nonzero complex vector $x$ such that $A(\lambda)x=0$. This problem is related to the well-known multiparameter eigenvalue problem, except that there is only one pencil and, crucially, the matrices are not necessarily square. In this paper, we give a full solution to the two-parameter MPP. Firstly, an inflation process is implemented to show that the two-parameter MPP is equivalent to a set of three $m^2\times n^2$ simultaneous one-parameter MPPs. These problems are given in terms of Kronecker commutator operators (involving the original matrices) which exhibit several symmetries. These symmetries are analysed and are then used to deflate the dimensions of the one-parameter MPPs to $\frac{m(m-1)}{2}\times\frac{n(n+1)}{2}$, thus simplifying their numerical solution. In the case that $m=n+1$, it is shown that the two-parameter MPP has at least one solution and generically $\frac{n(n+1)}{2}$ solutions, and furthermore that, under a rank assumption, the Kronecker determinant operators satisfy a commutativity property. This is then used to show that the two-parameter MPP is equivalent to a set of three simultaneous eigenvalue problems. A general solution algorithm is presented, and numerical examples are given to illustrate the proposed algorithm.
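The square one-parameter building block can be handled directly with a generalized eigenvalue solver, since $(A_0 + \lambda A_1)x = 0$ rewrites as $A_0 x = (-\lambda) A_1 x$. This is a sketch for intuition only; the paper's non-square, two-parameter algorithm is substantially more involved:

```python
import numpy as np
from scipy.linalg import eig

def pencil_eigenvalues(A0, A1):
    """Scalars lam for which A0 + lam * A1 is singular (square case).

    These are the negated generalized eigenvalues of the pair (A0, A1),
    since (A0 + lam*A1) x = 0 is equivalent to A0 x = (-lam) A1 x.
    """
    w, _ = eig(A0, A1)
    return -w

# Example: A0 + lam*A1 is singular at lam = -1 and lam = -1/2.
A0 = np.eye(2)
A1 = np.diag([1.0, 2.0])
lams = np.sort(pencil_eigenvalues(A0, A1).real)
```

After the inflation and deflation steps described above, the two-parameter problem reduces to simultaneous problems of this basic type.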

We study the positive logic FO+ on finite words, and its fragments, pursuing and refining the work initiated in [Kuperberg 2023]. First, we transpose well-known logic equivalences into positive first-order logic: FO+ is equivalent to LTL+, and its two-variable fragment FO2+ with (resp. without) successor available is equivalent to UTL+ with (resp. without) the "next" operator X available. This shows that despite previous negative results, the class of FO+-definable languages exhibits some form of robustness. We then exhibit an example of an FO-definable monotone language on one predicate that is not FO+-definable, refining the example from [Kuperberg 2023] with 3 predicates. Moreover, we show that such a counter-example cannot be FO2-definable.

We determine the best $n$-term approximation of generalized Wiener model classes in a Hilbert space $H$. This theory is then applied to several special cases.

We consider the problem of function approximation by two-layer neural nets with random weights that are "nearly Gaussian" in the sense of Kullback-Leibler divergence. Our setting is the mean-field limit, where the finite population of neurons in the hidden layer is replaced by a continuous ensemble. We show that the problem can be phrased as global minimization of a free energy functional on the space of (finite-length) paths over probability measures on the weights. This functional trades off the $L^2$ approximation risk of the terminal measure against the KL divergence of the path with respect to an isotropic Brownian motion prior. We characterize the unique global minimizer and examine the dynamics in the space of probability measures over weights that can achieve it. In particular, we show that the optimal path-space measure corresponds to the F\"ollmer drift, the solution to a McKean-Vlasov optimal control problem closely related to the classic Schr\"odinger bridge problem. While the F\"ollmer drift cannot in general be obtained in closed form, thus limiting its potential algorithmic utility, we illustrate the viability of the mean-field Langevin diffusion as a finite-time approximation under various conditions on entropic regularization. Specifically, we show that it closely tracks the F\"ollmer drift when the regularization is such that the minimizing density is log-concave.
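The finite-particle counterpart of the mean-field Langevin diffusion amounts to noisy gradient descent on an ensemble of neuron weights. The following generic sketch uses our own naming; in the setting above, the drift would come from the gradient of the $L^2$ approximation risk of the ensemble:

```python
import numpy as np

def mean_field_langevin_step(W, grad_risk, beta, lr, rng):
    """One Euler step of the mean-field Langevin diffusion.

    W: (m, d) array of neuron weights, an empirical approximation of the
    measure over weights. Each particle follows the negative risk gradient
    plus Gaussian noise at temperature 1/beta, targeting an
    entropy-regularized minimizer of the free energy.
    """
    noise = rng.standard_normal(W.shape)
    return W - lr * grad_risk(W) + np.sqrt(2.0 * lr / beta) * noise
```

For strongly entropic regimes (large risk curvature, small temperature), iterating this step drives the particle ensemble toward the minimizing density.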
