
The solutions of the equation $f^{(p-1)} + f^p = h^p$ in the unknown function $f$ over an algebraic function field of characteristic $p$ are very closely linked to the structure and factorisations of linear differential operators with coefficients in function fields of characteristic $p$. However, while being able to solve this equation over general algebraic function fields is necessary even for operators with rational coefficients, no general resolution method has been developed. We present an algorithm for testing the existence of solutions in polynomial time in the ``size'' of $h$, and an algorithm, based on the computation of Riemann-Roch spaces and the selection of elements in the divisor class group, for computing solutions of size polynomial in the ``size'' of $h$ in time polynomial in the size of $h$ and linear in the characteristic $p$. We discuss the applications of these algorithms to the factorisation of linear differential operators in positive characteristic $p$.
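As a concrete illustration of the equation (a minimal sketch for the simplest, polynomial case; this is only a verifier for a candidate solution, not the paper's resolution algorithm), one can check whether a given $f$ satisfies $f^{(p-1)} + f^p = h^p$ over $\mathbb{F}_p[x]$ with a computer algebra system:

```python
from sympy import GF, Poly, symbols

x = symbols('x')

def is_solution(f, h, p):
    """Check f^{(p-1)} + f^p == h^p in F_p[x] for polynomial f and h."""
    lhs = f.diff(x, p - 1) + f**p - h**p
    return Poly(lhs, x, domain=GF(p)).is_zero

# For p = 2 the equation reads f' + f^2 = h^2. With h = x and f = x we get
# f' + f^2 = 1 + x^2, which differs from h^2 = x^2 over F_2, so the
# checker returns False.
print(is_solution(x, x, 2))
```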

Related content

Face recognition applications have grown in parallel with the size of datasets, the complexity of deep learning models, and computational power. However, while deep learning models evolve to become more capable and computational power keeps increasing, the datasets available are being retracted and removed from public access. Privacy and ethical concerns are relevant topics within these domains. Through generative artificial intelligence, researchers have put efforts into the development of completely synthetic datasets that can be used to train face recognition systems. Nonetheless, the recent advances have not been sufficient to achieve performance comparable to the state-of-the-art models trained on real data. To study the drift between the performance of models trained on real and synthetic datasets, we leverage a massive attribute classifier (MAC) to create annotations for four datasets: two real and two synthetic. From these annotations, we conduct studies on the distribution of each attribute within all four datasets. Additionally, we further inspect the differences between real and synthetic datasets on the attribute set. When comparing through the Kullback-Leibler divergence, we found differences between real and synthetic samples. Interestingly, we verified that while real samples suffice to explain the synthetic distribution, the converse does not hold.
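As a toy illustration of the kind of comparison described (the attribute frequencies below are hypothetical, not values from the paper), the asymmetry of the Kullback-Leibler divergence between two attribute distributions can be seen directly:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for discrete distributions given as arrays."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical per-attribute frequencies from a MAC-style classifier:
real_attr = [0.62, 0.38]       # attribute present / absent in real data
synthetic_attr = [0.80, 0.20]  # same attribute in synthetic data

# The divergence is asymmetric, mirroring the observation that one
# direction can "explain" the other far better than the reverse.
print(kl_divergence(real_attr, synthetic_attr))
print(kl_divergence(synthetic_attr, real_attr))
```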

Differential equations are pivotal in modeling and understanding the dynamics of various systems, offering insights into their future states through parameter estimation fitted to time series data. In fields such as economics, politics, and biology, the observation data points in the time series are often independently obtained (i.e., Repeated Cross-Sectional (RCS) data). With RCS data, we found that traditional methods for parameter estimation in differential equations, such as using mean values of time trajectories or Gaussian Process-based trajectory generation, have limitations in estimating the shape of parameter distributions, often leading to a significant loss of data information. To address this issue, we introduce a novel method, Estimation of Parameter Distribution (EPD), which provides an accurate distribution of parameters without loss of data information. EPD operates in three main steps: generating synthetic time trajectories by randomly selecting observed values at each time point, estimating parameters of a differential equation that minimize the discrepancy between these trajectories and the true solution of the equation, and selecting the parameters depending on the scale of discrepancy. We then evaluated the performance of EPD across several models, including exponential growth, logistic population models, and target cell-limited models with delayed virus production, demonstrating its superiority in capturing the shape of parameter distributions. Furthermore, we applied EPD to real-world datasets, capturing various shapes of parameter distributions rather than a normal distribution. These results effectively address the heterogeneity within systems, marking substantial progress in accurately modeling systems using RCS data.
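A minimal sketch of the three EPD steps on the exponential growth model $dx/dt = rx$ (the data layout and the selection rule below are assumptions for illustration, not the paper's implementation):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
times = np.array([0.0, 0.5, 1.0, 1.5, 2.0])

# RCS data: at each time point the observations come from different,
# independently sampled individuals with heterogeneous rates r ~ N(0.8, 0.1).
rcs_data = [np.exp(rng.normal(0.8, 0.1, size=50) * t) for t in times]

def model(t, r):
    return np.exp(r * t)          # exponential growth with x(0) = 1

fits = []
for _ in range(1000):
    # Step 1: synthetic trajectory, one randomly chosen observation per time.
    traj = np.array([rng.choice(obs) for obs in rcs_data])
    # Step 2: parameter minimizing the discrepancy to the ODE solution.
    (r_hat,), _ = curve_fit(model, times, traj, p0=[0.5])
    fits.append((np.mean((model(times, r_hat) - traj) ** 2), r_hat))

# Step 3: keep the parameters from the best-fitting half of the trajectories.
fits.sort(key=lambda pair: pair[0])
accepted = [r for _, r in fits[: len(fits) // 2]]
print(np.mean(accepted))          # close to the true mean rate 0.8
```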

We characterize the geometry and topology of the set of all weight vectors for which a linear neural network computes the same linear transformation $W$. This set of weight vectors is called the fiber of $W$ (under the matrix multiplication map), and it is embedded in the Euclidean weight space of all possible weight vectors. The fiber is an algebraic variety that is not necessarily a manifold. We describe a natural way to stratify the fiber--that is, to partition the algebraic variety into a finite set of manifolds of varying dimensions called strata. We call this set of strata the rank stratification. We derive the dimensions of these strata and the relationships by which they adjoin each other. Although the strata are disjoint, their closures are not. Our strata satisfy the frontier condition: if a stratum intersects the closure of another stratum, then the former stratum is a subset of the closure of the latter stratum. Each stratum is a manifold of class $C^\infty$ embedded in weight space, so it has a well-defined tangent space and normal space at every point (weight vector). We show how to determine the subspaces tangent to and normal to a specified stratum at a specified point on the stratum, and we construct elegant bases for those subspaces. To help achieve these goals, we first derive what we call a Fundamental Theorem of Linear Neural Networks, analogous to what Strang calls the Fundamental Theorem of Linear Algebra. We show how to decompose each layer of a linear neural network into a set of subspaces that show how information flows through the neural network. Each stratum of the fiber represents a different pattern by which information flows (or fails to flow) through the neural network. The topology of a stratum depends solely on this decomposition. So does its geometry, up to a linear transformation in weight space.
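A small numerical illustration of the fiber (not the paper's construction): two weight settings of a two-layer linear network lie on the same fiber exactly when they compose to the same matrix $W$, and inserting an invertible matrix between the layers moves along the fiber without changing the computed map:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((4, 3))   # first layer: R^3 -> R^4
W2 = rng.standard_normal((2, 4))   # second layer: R^4 -> R^2
W = W2 @ W1                        # the linear map the network computes

M = rng.standard_normal((4, 4))    # generically invertible
W1_new, W2_new = M @ W1, W2 @ np.linalg.inv(M)

# Same point of the fiber of W, different point of weight space:
print(np.allclose(W2_new @ W1_new, W))   # True
print(np.allclose(W1_new, W1))           # False (generically)
```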

A U(1)-connection graph $G$ is a graph in which each oriented edge is endowed with a unit complex number, the latter being conjugated under orientation flip. We consider cycle-rooted spanning forests (CRSFs), a particular kind of spanning subgraphs of $G$ that have recently found computational applications as randomized spectral sparsifiers. In this context, CRSFs are drawn from a determinantal measure. Under a condition on the connection, Kassel and Kenyon gave an elegant algorithm, named CyclePopping, to sample from this distribution. The algorithm is an extension of the celebrated algorithm of Wilson that uses a loop-erased random walk to sample uniform spanning trees. In this paper, we give an alternative, elementary proof of correctness of CyclePopping for CRSF sampling; we fill the gaps of a proof sketch by Kassel, who was himself inspired by Marchal's proof of the correctness of Wilson's original algorithm. One benefit of the full proof à la Marchal is that we obtain a concise expression for the law of the number of steps to complete the sampling procedure, shedding light on practical situations where the algorithm is expected to run fast. Furthermore, we show how to extend the proof to more general distributions over CRSFs, which are not determinantal. The correctness of CyclePopping is known even in the non-determinantal case from the work of Kassel and Kenyon, so our merit is only to provide an alternate proof. One interest of this alternate proof is again to provide the distribution of the time complexity of the algorithm, in terms of a Poisson point process on the graph loops, or equivalently as a Poisson process on pyramids of cycles, a combinatorial notion introduced by Viennot. Finally, we strive to make the connections to loop measures and combinatorial structures as explicit as possible, to provide a reference for future extensions of the algorithm and its analysis.
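For orientation, here is a sketch of the base case that CyclePopping extends: Wilson's algorithm sampling a uniform spanning tree by loop-erased random walks (a standard textbook implementation, not the CRSF version analysed in the paper):

```python
import random

def wilson_ust(adj, root):
    """Uniform spanning tree of an undirected graph via loop-erased walks."""
    in_tree = {root}
    parent = {}
    for start in adj:
        # Random walk from `start` until the current tree is hit, recording
        # the successor of each visited vertex; revisiting a vertex
        # overwrites its successor, which erases the loop.
        u = start
        succ = {}
        while u not in in_tree:
            succ[u] = random.choice(adj[u])
            u = succ[u]
        # Retrace the loop-erased path and attach it to the tree.
        u = start
        while u not in in_tree:
            parent[u] = succ[u]
            in_tree.add(u)
            u = succ[u]
    return parent  # maps each non-root vertex to its parent

# 4-cycle example:
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(wilson_ust(adj, root=0))
```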

The LCP array is an important tool in stringology, allowing one to speed up pattern matching algorithms and enabling compact representations of the suffix tree. Recently, Conte et al. [DCC 2023] and Cotumaccio et al. [SPIRE 2023] extended the definition of this array to Wheeler DFAs and, ultimately, to arbitrary labeled graphs, proving that it can be used to efficiently solve matching statistics queries on the graph's paths. In this paper, we provide the first efficient algorithm building the LCP array of a directed labeled graph with $n$ nodes and $m$ edges labeled over an alphabet of size $\sigma$. After arguing that the natural generalization of a compact-space LCP-construction algorithm by Beller et al. [J. Discrete Algorithms 2013] runs in time $\Omega(n\sigma)$, we present a new algorithm based on dynamic range stabbing building the LCP array in $O(n\log \sigma)$ time and $O(n\log\sigma)$ bits of working space.
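For context, the classical string LCP array that the graph version generalizes can be built from the suffix array in linear time with Kasai's algorithm (a textbook sketch; the naive suffix-array construction below is only for illustration):

```python
def lcp_array(s):
    """Suffix array plus LCP array of string s (Kasai's algorithm)."""
    n = len(s)
    sa = sorted(range(n), key=lambda i: s[i:])   # naive suffix array
    rank = [0] * n
    for r, i in enumerate(sa):
        rank[i] = r
    lcp = [0] * n        # lcp[r] = LCP of suffixes sa[r-1] and sa[r]
    h = 0
    for i in range(n):
        if rank[i] > 0:
            j = sa[rank[i] - 1]
            while i + h < n and j + h < n and s[i + h] == s[j + h]:
                h += 1
            lcp[rank[i]] = h
            if h:
                h -= 1   # LCP drops by at most 1 for the next suffix
        else:
            h = 0
    return sa, lcp

print(lcp_array("banana"))  # sa = [5,3,1,0,4,2], lcp = [0,1,3,0,0,2]
```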

We give a simple, greedy $O(n^{\omega+0.5})=O(n^{2.872})$-time algorithm to list-decode planted cliques in a semirandom model introduced in [CSV17] (following [FK01]) that succeeds whenever the size of the planted clique is $k\geq O(\sqrt{n} \log^2 n)$. In the model, the edges touching the vertices in the planted $k$-clique are drawn independently with probability $p=1/2$ while the edges not touching the planted clique are chosen by an adversary in response to the random choices. Our result shows that the computational threshold in the semirandom setting is within an $O(\log^2 n)$ factor of the information-theoretic one [Ste17], thus resolving an open question of Steinhardt. This threshold also essentially matches the conjectured computational threshold for the well-studied special case of fully random planted clique. All previous algorithms [CSV17, MMT20, BKS23] in this model are based on rather sophisticated rounding algorithms for entropy-constrained semidefinite programming relaxations and their sum-of-squares strengthenings, and the best known guarantee is an $n^{O(1/\epsilon)}$-time algorithm to list-decode planted cliques of size $k \geq \tilde{O}(n^{1/2+\epsilon})$. In particular, the guarantee trivializes to quasi-polynomial time if the planted clique is of size $O(\sqrt{n} \operatorname{polylog} n)$. Our algorithm achieves an almost optimal guarantee with a surprisingly simple greedy procedure. The prior state-of-the-art algorithmic result above is based on a reduction to certifying bounds on the size of unbalanced bicliques in random graphs -- closely related to certifying the restricted isometry property (RIP) of certain random matrices and known to be hard in the low-degree polynomial model. Our key idea is a new approach that relies on the truth of -- but not efficient certificates for -- RIP of a new class of matrices built from the input graphs.
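As a toy illustration of the setting (the fully random model rather than the semirandom one, and a classical greedy baseline rather than the paper's algorithm; the parameters are chosen comfortably large so the toy heuristic succeeds), one can plant a clique in $G(n,1/2)$ and try to recover it greedily:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 2000, 250                 # k well above sqrt(n) ~ 45 for this toy
A = rng.random((n, n)) < 0.5     # Erdos-Renyi G(n, 1/2) noise
A = np.triu(A, 1)
A = A | A.T                      # symmetrize, no self-loops
clique = rng.choice(n, size=k, replace=False)
A[np.ix_(clique, clique)] = True # plant the k-clique
np.fill_diagonal(A, False)

# Greedy baseline: pick the vertex with the most neighbors inside the
# current candidate set, then shrink the candidates to its neighbors.
cand = np.ones(n, dtype=bool)
found = []
while cand.any():
    scores = (A & cand).sum(axis=1)   # candidate neighbors of each vertex
    scores[~cand] = -1
    v = int(scores.argmax())
    found.append(v)
    cand &= A[v]
    cand[v] = False

print(len(set(found) & set(clique)), "of", k, "planted vertices found")
```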

We consider an expected-value ranking and selection (R&S) problem where all k solutions' simulation outputs depend on a common parameter whose uncertainty can be modeled by a distribution. We define the most probable best (MPB) to be the solution that has the largest probability of being optimal with respect to the distribution and design an efficient sequential sampling algorithm to learn the MPB when the parameter has a finite support. We derive the large deviations rate of the probability of falsely selecting the MPB and formulate an optimal computing budget allocation problem to find the rate-maximizing static sampling ratios. The problem is then relaxed to obtain a set of optimality conditions that are interpretable and computationally efficient to verify. We devise a series of algorithms that replace the unknown means in the optimality conditions with their estimates and prove the algorithms' sampling ratios achieve the conditions as the simulation budget increases. Furthermore, we show that the empirical performance of the algorithms can be significantly improved by adopting kernel ridge regression for mean estimation while achieving the same asymptotic convergence results. The algorithms are benchmarked against a state-of-the-art contextual R&S algorithm and demonstrated to have superior empirical performance.
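A minimal sketch of the MPB definition itself (the means and weights below are hypothetical; the paper's contribution is the sequential sampling algorithm, not this direct computation):

```python
import numpy as np

# Hypothetical true means mu[i, j] of solution i under parameter value
# theta_j, with weights w[j] = P(theta = theta_j); smaller mean is better.
mu = np.array([[1.0, 3.0, 2.0],
               [2.0, 1.0, 4.0],
               [3.0, 2.0, 1.0]])
w = np.array([0.5, 0.3, 0.2])

best_per_theta = mu.argmin(axis=0)        # optimal solution for each theta_j
p_best = np.array([w[best_per_theta == i].sum() for i in range(mu.shape[0])])
mpb = int(p_best.argmax())
print(p_best, "-> MPB is solution", mpb)  # [0.5 0.3 0.2] -> solution 0
```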

The conjugate function method is an algorithm for numerical computation of conformal mappings for simply and multiply connected domains. In this paper, the conjugate function method is extended to cover conformal mappings between Riemannian surfaces. The main challenge addressed here is the connection between Laplace--Beltrami equations on surfaces and the computation of the conformal modulus of a quadrilateral. We consider mappings of simply, doubly, and multiply connected domains. The numerical computation is based on an $hp$-adaptive finite element method. The key advantage of our approach is that it allows highly accurate computations of mappings on surfaces, including domains of complex boundary geometry involving strong singularities and cusps. The efficacy of the proposed method is illustrated via an extensive set of numerical experiments including error estimates.
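For background, the standard link between a quadrilateral's conformal modulus and a mixed Dirichlet-Neumann Laplace problem, which the paper carries over to surfaces via the Laplace-Beltrami operator, reads as follows (a textbook formulation, not a result of the paper):

```latex
% The conformal modulus of a quadrilateral Q with boundary arcs
% \gamma_1, \gamma_2, \gamma_3, \gamma_4 (in cyclic order) is obtained
% from the mixed Dirichlet--Neumann problem
\begin{align*}
  \Delta u &= 0 \quad \text{in } Q, \\
  u &= 0 \ \text{on } \gamma_1, \qquad u = 1 \ \text{on } \gamma_3, \\
  \partial u / \partial n &= 0 \ \text{on } \gamma_2 \cup \gamma_4,
\end{align*}
% and the modulus is the Dirichlet energy of the solution,
\[
  \operatorname{mod}(Q) = \int_Q |\nabla u|^2 \, \mathrm{d}A;
\]
% on a Riemannian surface, \Delta is replaced by the Laplace--Beltrami
% operator, which is the connection the paper exploits.
```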

We give a simpler analysis of the ascending auction of Bikhchandani, de Vries, Schummer, and Vohra to sell a welfare-maximizing base of a matroid at Vickrey prices. The new proofs of economic efficiency and of the charging of Vickrey prices require only a few matroid folklore theorems, thereby shortening the analysis of the design goals of the auction significantly.
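For reference, the welfare-maximizing base being sold is the one found by the classical matroid greedy algorithm over an independence oracle (a textbook sketch; the auction dynamics and the Vickrey pricing are not reproduced here):

```python
def greedy_max_weight_base(elements, weight, is_independent):
    """Max-weight base of a matroid given by an independence oracle."""
    base = []
    for e in sorted(elements, key=weight, reverse=True):
        if is_independent(base + [e]):
            base.append(e)
    return base

# Example: a graphic matroid on a triangle plus a pendant edge; independent
# sets are acyclic edge sets, so the greedy base is a max-weight spanning tree.
edges = {'ab': 3, 'bc': 2, 'ca': 1, 'cd': 5}

def acyclic(edge_names):
    # Union-find cycle check on the vertex labels of the chosen edges.
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for name in edge_names:
        u, v = find(name[0]), find(name[1])
        if u == v:
            return False                    # edge closes a cycle
        parent[u] = v
    return True

print(greedy_max_weight_base(edges, edges.get, acyclic))  # ['cd', 'ab', 'bc']
```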

Variational Physics-Informed Neural Networks (VPINNs) utilize a variational loss function to solve partial differential equations, mirroring Finite Element Analysis techniques. Traditional hp-VPINNs, while effective for high-frequency problems, are computationally intensive and scale poorly with increasing element counts, limiting their use in complex geometries. This work introduces FastVPINNs, a tensor-based advancement that significantly reduces computational overhead and improves scalability. Using optimized tensor operations, FastVPINNs achieve a 100-fold reduction in the median training time per epoch compared to traditional hp-VPINNs. With proper choice of hyperparameters, FastVPINNs surpass conventional PINNs in both speed and accuracy, especially in problems with high-frequency solutions. Demonstrated effectiveness in solving inverse problems on complex domains underscores FastVPINNs' potential for widespread application in scientific and engineering challenges, opening new avenues for practical implementations in scientific machine learning.
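A toy illustration of the tensorization idea (the array shapes are assumptions, not the FastVPINNs code): replacing a per-element Python loop in the variational-loss assembly with a single tensor contraction over all elements at once:

```python
import numpy as np

rng = np.random.default_rng(3)
E, Q, T = 64, 16, 8   # elements, quadrature points per element, test functions

test_vals = rng.standard_normal((E, Q, T))   # test functions at quad points
residual = rng.standard_normal((E, Q))       # PDE residual of the network
weights = rng.standard_normal((E, Q)) ** 2   # positive quadrature weights

# Loop version: integral of residual * test function on each element.
loop_out = np.empty((E, T))
for e in range(E):
    for t in range(T):
        loop_out[e, t] = np.sum(weights[e] * residual[e] * test_vals[e, :, t])

# Tensorized version: one einsum over all elements and test functions.
tensor_out = np.einsum('eq,eq,eqt->et', weights, residual, test_vals)

print(np.allclose(loop_out, tensor_out))     # True
loss = np.sum(tensor_out ** 2)               # variational loss contribution
```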
