
We consider the algorithmic problem of finding large \textit{balanced} independent sets in sparse random bipartite graphs, and more generally the problem of finding independent sets with specified proportions of vertices on each side of the bipartition. In a bipartite graph it is trivial to find an independent set of density at least half (take one of the partition classes). In contrast, in a random bipartite graph of average degree $d$, the largest balanced independent sets (containing an equal number of vertices from each class) are typically of density $(2+o_d(1)) \frac{\log d}{d}$. Can we find such large balanced independent sets in these graphs efficiently? By utilizing the overlap gap property and the low-degree algorithmic framework, we prove that local and low-degree algorithms (even those that know the bipartition) cannot find balanced independent sets of density greater than $(1+\epsilon) \frac{\log d}{d}$ for any fixed $\epsilon>0$ and $d$ large but constant. This factor-$2$ statistical--computational gap between what exists and what local algorithms can achieve is analogous to the gap for finding large independent sets in (non-bipartite) random graphs. Our results therefore suggest that this gap is pervasive across many models, and that hard computational problems can lurk inside otherwise tractable ones. A particularly striking aspect of the gap in bipartite graphs is that the algorithm achieving the lower bound is extremely simple and can be implemented as a $1$-local algorithm and a degree-$1$ polynomial (a linear function).
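To make the object concrete, the following is a minimal sketch of a naive local heuristic that builds a balanced independent set in a sparse random bipartite graph by greedily adding left/right vertex pairs. It is an illustrative stand-in only, not the paper's $1$-local / degree-$1$ algorithm, and the graph model and function names are assumptions.

```python
import math
import random
from collections import defaultdict

def random_bipartite(n, d, seed=0):
    """Random bipartite graph: n vertices per side, each edge present w.p. d/n."""
    rng = random.Random(seed)
    adj = defaultdict(set)
    for u in range(n):               # left side: 0 .. n-1
        for v in range(n, 2 * n):    # right side: n .. 2n-1
            if rng.random() < d / n:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def greedy_balanced_independent_set(adj, n, seed=0):
    """Greedily add (left, right) pairs so the set stays exactly balanced."""
    rng = random.Random(seed)
    left, right = list(range(n)), list(range(n, 2 * n))
    rng.shuffle(left)
    rng.shuffle(right)
    chosen = set()
    for u, v in zip(left, right):
        # accept the pair only if neither endpoint conflicts with the current set
        if v not in adj[u] and adj[u].isdisjoint(chosen) and adj[v].isdisjoint(chosen):
            chosen.update((u, v))
    return chosen

if __name__ == "__main__":
    n, d = 2000, 50
    adj = random_bipartite(n, d)
    S = greedy_balanced_independent_set(adj, n)
    print("density:", len(S) / (2 * n), " reference log(d)/d:", math.log(d) / d)
```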

Related Content

We propose a decoder-only language model, \textit{VoxtLM}, that can perform four tasks: speech recognition, speech synthesis, text generation, and speech continuation. VoxtLM integrates the text vocabulary with discrete speech tokens from self-supervised speech features and uses special tokens to enable multitask learning. Compared to a single-task model, VoxtLM exhibits a significant improvement in speech synthesis, with speech intelligibility improving from 28.9 to 5.6 and objective quality from 2.68 to 3.90. VoxtLM also improves speech generation and speech recognition performance over the single-task counterpart. VoxtLM is trained on publicly available data, and the training recipes and model checkpoints will be open-sourced to make the work fully reproducible.
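A hedged sketch of how a decoder-only multitask sequence might be assembled from text tokens and discrete speech tokens with task-marking special tokens. The token names and task layout below are illustrative assumptions, not VoxtLM's actual vocabulary or format.

```python
from typing import List

# Illustrative special tokens; the actual inventory is an assumption.
BOS, EOS = "<s>", "</s>"
TEXT, SPEECH = "<text>", "<speech>"
TASK_ASR, TASK_TTS = "<task:asr>", "<task:tts>"

def build_sequence(task: str, speech_tokens: List[int], text_tokens: List[int]) -> List[str]:
    """Interleave modalities for a decoder-only LM: condition first, target last."""
    speech = [f"s{t}" for t in speech_tokens]   # discrete units from a self-supervised model
    text = [f"t{t}" for t in text_tokens]
    if task == "asr":    # speech -> text
        return [BOS, TASK_ASR, SPEECH, *speech, TEXT, *text, EOS]
    if task == "tts":    # text -> speech
        return [BOS, TASK_TTS, TEXT, *text, SPEECH, *speech, EOS]
    raise ValueError(f"unknown task: {task}")

print(build_sequence("asr", speech_tokens=[12, 7, 99], text_tokens=[4, 5]))
```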

This paper presents a reduced version of the classical projection method for the solution of $d$-dimensional quasiperiodic problems, particularly Schr\"{o}dinger eigenvalue problems. Using the properties of the Schr\"{o}dinger operator in higher-dimensional space via a projection matrix of size $d\times n$, we rigorously prove that the generalized Fourier coefficients of the eigenfunctions decay exponentially along a fixed direction associated with the projection matrix. An efficient reduction strategy for the basis space is then proposed to reduce the degrees of freedom from $O(N^{n})$ to $O(N^{n-d}D^d)$, where $N$ is the number of Fourier grid points in one dimension and the truncation coefficient $D$ is much smaller than $N$. Correspondingly, the computational complexity of the proposed algorithm for solving the first $k$ eigenpairs using the Krylov subspace method decreases from $O(kN^{2n})$ to $O(kN^{2(n-d)}D^{2d})$. Rigorous error estimates of the proposed reduced projection method are provided, indicating that a small $D$ is sufficient to achieve the same level of accuracy as the classical projection method. We present numerical examples of quasiperiodic Schr\"{o}dinger eigenvalue problems in one and two dimensions to demonstrate the accuracy and efficiency of the proposed method.
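A rough sketch of the degrees-of-freedom reduction, under the assumption that the reduction amounts to keeping only the Fourier indices whose component along the direction singled out by the projection matrix is at most $D$; the paper's exact truncation criterion may differ, and the projection matrix below is a toy choice.

```python
import itertools
import numpy as np

# Toy setting: n = 2 dimensional Fourier space, d = 1 dimensional physical problem.
n, d, N, D = 2, 1, 32, 4
P = np.array([[1.0, (np.sqrt(5) - 1) / 2]])       # d x n projection matrix (assumed)

full = list(itertools.product(range(-N // 2, N // 2), repeat=n))

# Assumed truncation rule: keep indices close to the distinguished direction,
# exploiting the exponential decay of the generalized Fourier coefficients.
direction = P[0] / np.linalg.norm(P[0])
reduced = [k for k in full if abs(np.dot(direction, k)) <= D]

print(f"classical DOF ~ N^n         = {len(full)}")
print(f"reduced   DOF ~ N^(n-d) D^d = {len(reduced)}")
```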

We present a nonparametric graphical model. Our model uses an undirected graph that represents conditional independence for general random variables, defined via the conditional dependence coefficient of Azadkia and Chatterjee (2021). The edge set of the graph is defined as $E=\{(i,j):R_{i,j}\neq 0\}$, where $R_{i,j}$ is the conditional dependence coefficient of $X_i$ and $X_j$ given $(X_1,\ldots,X_p) \setminus \{X_{i},X_{j}\}$. We propose a two-step graph structure learning procedure: first, we compute the matrix of sample conditional dependence coefficients $\widehat{R}_{i,j}$; next, for a prespecified threshold $\lambda>0$ we include the edge $\{i,j\}$ if $|\widehat{R}_{i,j}| \geq \lambda$. The graph structure recovery is evaluated on artificial and real datasets. We also apply a slight modification of our graph recovery procedure to learning partial correlation graphs for elliptical distributions.
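A minimal sketch of the two-step procedure, assuming a helper `azadkia_chatterjee_T(x, y, z)` that estimates the conditional dependence coefficient; this helper is hypothetical (left unimplemented here), not a reference to any specific library.

```python
import numpy as np

def azadkia_chatterjee_T(x, y, z):
    """Placeholder for an estimator of the Azadkia-Chatterjee conditional dependence
    coefficient T(x, y | z); a real implementation would use their nearest-neighbour
    construction. Hypothetical helper, not an existing library call."""
    raise NotImplementedError

def learn_graph(X, lam, cdc=azadkia_chatterjee_T):
    """Two-step structure learning: estimate all pairwise coefficients, then threshold."""
    n, p = X.shape
    R_hat = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1, p):
            rest = np.delete(X, [i, j], axis=1)           # condition on all other variables
            R_hat[i, j] = R_hat[j, i] = cdc(X[:, i], X[:, j], rest)
    edges = {(i, j) for i in range(p) for j in range(i + 1, p)
             if abs(R_hat[i, j]) >= lam}
    return edges, R_hat
```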

The Johnson--Lindenstrauss (JL) lemma is a powerful tool for dimensionality reduction in modern algorithm design. The lemma states that any set of high-dimensional points in a Euclidean space can be flattened to lower dimensions while approximately preserving pairwise Euclidean distances. Random matrices satisfying this lemma are called JL transforms (JLTs). Inspired by existing $s$-hashing JLTs with exactly $s$ nonzero elements in each column, the present work introduces an ensemble of sparse matrices encompassing so-called $s$-hashing-like matrices whose expected number of nonzero elements in each column is $s$. The independence of the sub-Gaussian entries of these matrices and the knowledge of their exact distribution play an important role in their analysis. Using properties of independent sub-Gaussian random variables, these matrices are shown to be JLTs, and their smallest and largest singular values are estimated non-asymptotically using a technique from geometric functional analysis. As the dimensions of the matrix grow to infinity, these singular values are proved to converge almost surely to fixed quantities (using the universal Bai--Yin law), and in distribution to the Gaussian orthogonal ensemble (GOE) Tracy--Widom law after proper rescaling. Understanding the behavior of extreme singular values is important in general because they are often used to define a measure of stability of matrix algorithms. For example, JLTs were recently used in derivative-free optimization algorithmic frameworks to select random subspaces in which random models or poll directions are constructed to achieve scalability, so estimating their smallest singular value in particular helps determine the dimension of these subspaces.
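A minimal sketch of one way such an "$s$-hashing-like" matrix could be drawn, assuming the defining property is that each entry is independently nonzero with probability $s/m$ and scaled so columns have unit expected squared norm; the paper's exact ensemble may differ.

```python
import numpy as np

def s_hashing_like(m, n, s, rng=None):
    """m x n sparse embedding: each entry is +-1/sqrt(s) with probability s/m
    (independently), so every column has s nonzero entries in expectation and
    unit expected squared norm."""
    rng = np.random.default_rng(rng)
    mask = rng.random((m, n)) < s / m
    signs = rng.choice([-1.0, 1.0], size=(m, n))
    return (mask * signs) / np.sqrt(s)

# Quick sanity check: pairwise distances of high-dimensional points should be
# roughly preserved after projection.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))           # 5 points in R^1000 (as columns)
S = s_hashing_like(m=256, n=1000, s=8, rng=1)
print(np.linalg.norm(X[:, 0] - X[:, 1]), np.linalg.norm(S @ X[:, 0] - S @ X[:, 1]))
```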

Existing deepfake speech detection systems lack generalizability to unseen attacks (i.e., samples generated by generative algorithms not seen during training). Recent studies have explored the use of universal speech representations to tackle this issue and have obtained promising results. These works, however, have focused on innovating downstream classifiers while leaving the representation itself untouched. In this study, we argue that characterizing the long-term temporal dynamics of these representations is crucial for generalizability and propose a new method to assess representation dynamics. Indeed, we show that different generative models produce similar representation-dynamics patterns under our proposed method. Experiments on the ASVspoof 2019 and 2021 datasets validate the benefits of the proposed method in detecting deepfakes generated by methods unseen during training, significantly improving on several benchmark methods.
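As a rough, assumed illustration of what "representation dynamics" could look like in code (the paper's actual definition may differ): given frame-level embeddings from a self-supervised speech model, one can summarize how the representation evolves over time, e.g. via lagged frame-to-frame similarities.

```python
import numpy as np

def representation_dynamics(emb, max_lag=10):
    """emb: (T, D) frame-level embeddings from a self-supervised speech model.
    Returns the mean cosine similarity between frames t and t+lag for each lag,
    a simple summary of long-term temporal dynamics (illustrative only)."""
    emb = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-8)
    return np.array([
        np.mean(np.sum(emb[:-lag] * emb[lag:], axis=1)) for lag in range(1, max_lag + 1)
    ])

# Example with random stand-in features (a real pipeline would use e.g. WavLM frames).
feats = np.random.default_rng(0).standard_normal((300, 768))
print(representation_dynamics(feats, max_lag=5))
```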

Solving multiphysics-based inverse problems for geological carbon storage monitoring can be challenging when multimodal time-lapse data are expensive to collect and costly to simulate numerically. We overcome these challenges by combining computationally cheap learned surrogates with learned constraints. Not only does this combination lead to vastly improved inversions for the important fluid-flow property, permeability, it also provides a natural platform for inverting multimodal data, including well measurements and active-source time-lapse seismic data. By adding a learned constraint, we arrive at a computationally feasible inversion approach that remains accurate. This is accomplished by including a trained deep neural network, known as a normalizing flow, which forces the model iterates to remain in-distribution, thereby safeguarding the accuracy of trained Fourier neural operators that act as surrogates for the computationally expensive multiphase flow simulations involving partial differential equation solves. By means of carefully selected experiments, centered around the problem of geological carbon storage, we demonstrate the efficacy of the proposed constrained optimization method on two different data modalities, namely time-lapse well data and time-lapse seismic data. While permeability inversions from each of these two modalities have their pluses and minuses, their joint inversion benefits from both, yielding superior permeability inversions and CO2 plume predictions both near and far away from the monitoring wells.
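A minimal PyTorch-style sketch of the general pattern described above, with toy stand-ins for the trained networks: optimize a latent variable, map it through a (pretrained) normalizing flow so the iterate stays in-distribution, push the result through a (pretrained) surrogate, and match observed data. The module names, shapes, and losses below are placeholders, not the paper's actual networks or objective.

```python
import torch

# Toy stand-ins for pretrained networks (assumed interfaces).
flow = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.Tanh(), torch.nn.Linear(64, 64))        # z -> permeability
surrogate = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU(), torch.nn.Linear(32, 16))   # permeability -> data

d_obs = torch.randn(16)                      # observed time-lapse data (toy)
z = torch.zeros(64, requires_grad=True)      # latent iterate; Gaussian prior keeps it in-distribution
opt = torch.optim.Adam([z], lr=1e-2)

for it in range(200):
    opt.zero_grad()
    perm = flow(z)                           # in-distribution permeability model
    d_pred = surrogate(perm)                 # cheap surrogate for the multiphase flow simulation
    loss = 0.5 * torch.sum((d_pred - d_obs) ** 2) + 0.5 * torch.sum(z ** 2)   # data misfit + latent prior
    loss.backward()
    opt.step()

print("final misfit:", loss.item())
```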

Standard multiparameter eigenvalue problems (MEPs) are systems of $k\ge 2$ linear $k$-parameter square matrix pencils. Recently, a new form of multiparameter eigenvalue problem has emerged: a rectangular MEP (RMEP) with only one multivariate rectangular matrix pencil, where we look for combinations of the parameters for which the rank of the pencil is not full. Applications include finding the optimal least squares autoregressive moving average (ARMA) model and the optimal least squares realization of an autonomous linear time-invariant (LTI) dynamical system. For linear and polynomial RMEPs, we give the number of solutions and show how these problems can be solved numerically by a transformation into a standard MEP. For the transformation we provide new linearizations for quadratic multivariate matrix polynomials with a specific structure of monomials and consider mixed systems of rectangular and square multivariate matrix polynomials. This numerical approach appears computationally considerably more attractive than the block Macaulay method, the only other currently available numerical method for polynomial RMEPs.
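To make the object concrete, here is a small hedged sketch of a linear rectangular two-parameter pencil and a brute-force grid scan for parameter values where it (nearly) loses full column rank. This only illustrates the rank-deficiency condition being sought, not the paper's transformation to a standard MEP, and the matrix sizes are toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3                                   # rectangular: one more row than columns for k = 2 parameters
A0, A1, A2 = (rng.standard_normal((m, n)) for _ in range(3))

def pencil(l1, l2):
    """Linear rectangular two-parameter pencil A0 + l1*A1 + l2*A2."""
    return A0 + l1 * A1 + l2 * A2

# Brute-force scan: the pencil is (numerically) rank deficient where its smallest
# singular value (nearly) vanishes. Real RMEP solvers avoid such grids entirely.
grid = np.linspace(-3.0, 3.0, 121)
best = min(((np.linalg.svd(pencil(l1, l2), compute_uv=False)[-1], l1, l2)
            for l1 in grid for l2 in grid), key=lambda t: t[0])
print("smallest sigma_min found on the grid:", best)
```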

While text-conditional 3D object generation and manipulation have seen rapid progress, the evaluation of coherence between generated 3D shapes and input textual descriptions lacks a clear benchmark. The reason is twofold: a) the low quality of the textual descriptions in the only publicly available dataset of text-shape pairs; b) the limited effectiveness of the metrics used to quantitatively assess such coherence. In this paper, we propose a comprehensive solution that addresses both weaknesses. Firstly, we employ large language models to automatically refine the textual descriptions associated with shapes. Secondly, we propose a quantitative metric to assess text-to-shape coherence, based on cross-attention mechanisms. To validate our approach, we conduct a user study and quantitatively compare our metric with existing ones. The refined dataset, the new metric, and a set of text-shape pairs validated by the user study comprise a novel, fine-grained benchmark that we publicly release to foster research on text-to-shape coherence of text-conditioned 3D generative models. Benchmark available at //cvlab-unibo.github.io/CrossCoherence-Web/.

Although applications of non-homogeneous Poisson processes to model and study threshold exceedances in different time series of measurements have proven to give good results, they need to be complemented with an efficient and automatic diagnostic technique for locating the change-points; if these are ignored, the estimated model fits the information contained in the real data poorly. For this reason, we propose a new method to resolve the segmentation uncertainty of the time series of measurements, where the distribution of exceedances of a specific threshold is the focus of investigation. A key feature of the present algorithm is that every day on which an exceedance occurred is a candidate change-point, so every configuration of exceedance days is a possible chromosome, and these chromosomes combine to produce offspring. Under the heuristics of a genetic algorithm, the solution to the problem of finding such change-points is guaranteed to be non-local and the best one found, while reducing the machine time wasted evaluating chromosomes that are unlikely to solve the problem. The analytical evaluation uses the minimum description length (\textit{MDL}) as the objective function; it corresponds to the joint posterior distribution of the parameters of each regime and of the change-points that determine them, and it also accounts for the influence of the presence of those times.
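A hedged sketch of the search scheme described above: chromosomes are subsets of exceedance days, and a genetic algorithm evolves them against an MDL-style objective. The objective below is a generic placeholder (piecewise-constant Poisson rate per regime plus a complexity penalty), not the paper's exact MDL, and all names and parameter choices are illustrative.

```python
import math
import random

def mdl_score(exceedance_days, change_points, horizon):
    """Toy MDL: negative log-likelihood of a piecewise-constant Poisson rate per
    regime plus a description-length penalty per change-point (illustrative)."""
    bounds = [0] + sorted(set(change_points)) + [horizon]
    nll = 0.0
    for a, b in zip(bounds[:-1], bounds[1:]):
        k = sum(a <= t < b for t in exceedance_days)
        rate = max(k / max(b - a, 1), 1e-9)
        nll += rate * (b - a) - k * math.log(rate)
    return nll + 0.5 * (len(bounds) - 2) * math.log(horizon)

def genetic_search(exceedance_days, horizon, pop=40, gens=200, p_mut=0.05, seed=0):
    rng = random.Random(seed)
    days = sorted(exceedance_days)
    population = [[d for d in days if rng.random() < 0.2] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: mdl_score(days, c, horizon))
        survivors = population[: pop // 2]          # selection: keep the fittest half
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cross = [d for d in days
                     if (d in a if rng.random() < 0.5 else d in b)]        # uniform crossover
            child = [d for d in days if (d in cross) ^ (rng.random() < p_mut)]  # mutation: random flips
            children.append(child)
        population = survivors + children
    return min(population, key=lambda c: mdl_score(days, c, horizon))

# Toy usage: exceedance days observed over a 365-day horizon.
print(genetic_search([15, 16, 40, 200, 201, 202, 300], horizon=365))
```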

The sequential composition of propositional logic programs has recently been introduced. This paper studies the sequential {\em decomposition} of programs via Green's relations $\mathcal{L,R,J}$ -- well known in semigroup theory -- between programs. In a broader sense, this paper is a further step towards an algebraic theory of logic programming.
