
We obtain new upper and lower bounds on the number of unit perimeter triangles spanned by points in the plane. We also establish improved bounds in the special case where the point set is a section of the integer grid.
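To make the counting problem concrete, here is a minimal brute-force sketch (ours, purely illustrative; the paper's contribution is asymptotic bounds, not an algorithm) that counts unit-perimeter triangles in a finite point set:

    # Brute-force O(n^3) count of unit-perimeter triangles in a planar point set.
    from itertools import combinations
    import math

    def unit_perimeter_triangles(points, eps=1e-9):
        """Count triples of points whose triangle has perimeter 1 (up to eps)."""
        count = 0
        for p, q, r in combinations(points, 3):
            perimeter = math.dist(p, q) + math.dist(q, r) + math.dist(r, p)
            if abs(perimeter - 1.0) < eps:
                count += 1
        return count

    # An equilateral triangle with side 1/3 spans one unit-perimeter triangle.
    pts = [(0.0, 0.0), (1/3, 0.0), (1/6, math.sqrt(3)/6)]
    print(unit_perimeter_triangles(pts))  # -> 1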

Related content

The limitations of turbulence closure models in the context of Reynolds-averaged Navier-Stokes (RANS) simulations play a significant part in contributing to the uncertainty of Computational Fluid Dynamics (CFD). Perturbing the spectral representation of the Reynolds stress tensor within physical limits is common practice in several commercial and open-source CFD solvers, in order to obtain estimates for the epistemic uncertainties of RANS turbulence models. Recent research revealed that the amount of Reynolds stress tensor perturbation needs to be moderated to avoid solver stability issues. In this paper we point out that the common implementation that follows from this can lead to unintended states of the resulting perturbed Reynolds stress tensor. The combination of eigenvector perturbation and moderation factor may actually result in moderated eigenvalues that are no longer linearly dependent on the originally unperturbed and fully perturbed eigenvalues. Hence, the computational implementation is no longer in accordance with the conceptual idea of the Eigenspace Perturbation Framework. We verify the implementation of the conceptual description with respect to its self-consistency and, by adequately representing the basic concept, formulate a computational implementation that improves the self-consistency of the Reynolds stress tensor perturbation.
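For concreteness, a minimal sketch of the self-consistency requirement (our illustration with our own variable names, not code from any particular solver): the moderated eigenvalues should be a linear blend of the unperturbed and fully perturbed ones.

    import numpy as np

    def moderated_eigenvalues(lam_unperturbed, lam_target, delta):
        """Intended moderation: lam(delta) = (1 - delta)*lam0 + delta*lam1.

        lam_target is a limiting state of the barycentric triangle, e.g. the
        one-component corner diag(2/3, -1/3, -1/3) of the anisotropy tensor,
        and delta in [0, 1] is the moderation factor.
        """
        lam0 = np.asarray(lam_unperturbed, dtype=float)
        lam1 = np.asarray(lam_target, dtype=float)
        return (1.0 - delta) * lam0 + delta * lam1

    lam0 = np.array([0.2, 0.0, -0.2])        # trace-free anisotropy eigenvalues
    lam_1C = np.array([2/3, -1/3, -1/3])     # fully perturbed one-component state
    print(moderated_eigenvalues(lam0, lam_1C, 0.5))

Any implementation whose effective eigenvalues deviate from this line for intermediate delta is, in the above sense, not self-consistent.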

Coresets for $k$-means and $k$-median problems yield a small summary of the data, which preserves the clustering cost with respect to any set of $k$ centers. Recently, coresets have also been constructed for constrained $k$-means and $k$-median problems. However, coresets have the drawback that (i) they can only be applied in settings where the input points are allowed to have weights, and (ii) in general metric spaces, their size can depend logarithmically on the number of points. The notion of weak coresets, which have less stringent requirements than coresets, has been studied in the context of classical $k$-means and $k$-median problems. A weak coreset is a pair $(J,S)$ of subsets of points, where $S$ acts as a summary of the point set and $J$ as a set of potential centers. This pair satisfies two properties: (i) $S$ is a good summary of the data as long as the $k$ centers are chosen from $J$ only, and (ii) there is a good choice of $k$ centers in $J$ with cost close to the optimal cost. We develop this framework, which we call universal weak coresets, for constrained clustering settings. In conjunction with recent coreset constructions for constrained settings, our designs give greater data compression, are conceptually simpler, and apply to a wide range of constrained $k$-median and $k$-means problems.
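As a toy illustration of the summary idea (uniform sampling for simplicity; this is not the paper's construction), the weighted cost of any $k$ centers drawn from a candidate set $J$ can be estimated from a reweighted summary $S$:

    import numpy as np

    rng = np.random.default_rng(0)

    def kmeans_cost(points, centers, weights=None):
        """(Weighted) sum of squared distances to the nearest center."""
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2).min(axis=1)
        return d2.sum() if weights is None else (weights * d2).sum()

    X = rng.normal(size=(10_000, 2))                    # full data
    S_idx = rng.choice(len(X), size=200, replace=False)
    S, w = X[S_idx], np.full(200, len(X) / 200)         # uniform summary, reweighted
    J = X[rng.choice(len(X), size=50, replace=False)]   # candidate centers

    centers = J[:3]                                     # any k = 3 centers from J
    print(kmeans_cost(X, centers), kmeans_cost(S, centers, w))  # close in value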

Traditional inverse rendering techniques are based on textured meshes, which naturally adapt to modern graphics pipelines, but costly differentiable multi-bounce Monte Carlo (MC) ray tracing poses challenges for modeling global illumination. Recently, neural fields have demonstrated impressive reconstruction quality but fall short in modeling indirect illumination. In this paper, we introduce a simple yet efficient inverse rendering framework that combines the strengths of both methods. Specifically, given a pre-trained neural field representing the scene, we obtain an initial estimate of the signed distance field (SDF) and create a neural radiance cache (NRC), an enhancement over the traditional radiance cache used in real-time rendering. By using the former to initialize differentiable marching tetrahedra (DMTet) and the latter to model indirect illumination, we compute global illumination via single-bounce differentiable MC ray tracing and jointly optimize the geometry, material, and light through backpropagation. Experiments demonstrate that, compared to previous methods, our approach effectively prevents indirect illumination effects from being baked into materials, thus obtaining high-quality reconstructions of the triangle mesh, physically based rendering (PBR) materials, and a high dynamic range (HDR) light probe.
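The single-bounce idea can be sketched in a few lines (a heavily simplified toy of ours; the stub below merely stands in for a trained neural radiance cache, and we use a Lambertian BRDF with uniform hemisphere sampling):

    import numpy as np

    rng = np.random.default_rng(0)

    def cache_radiance(position, direction):
        """Stub for an NRC query; returns a made-up RGB radiance."""
        return (0.5 + 0.5 * np.cos(position @ direction)) * np.ones(3)

    def sample_hemisphere(normal, n):
        """Uniform directions on the hemisphere around `normal`."""
        v = rng.normal(size=(n, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        v[v @ normal < 0] *= -1.0          # flip into the upper hemisphere
        return v

    def shade(x, normal, albedo, n_samples=256):
        """One-bounce MC estimate: the cache replaces recursive ray tracing."""
        dirs = sample_hemisphere(normal, n_samples)
        cos = dirs @ normal
        L_in = np.stack([cache_radiance(x, d) for d in dirs])
        pdf = 1.0 / (2.0 * np.pi)          # uniform hemisphere pdf
        return (albedo / np.pi) * (L_in * cos[:, None]).mean(axis=0) / pdf

    print(shade(np.zeros(3), np.array([0.0, 0.0, 1.0]), np.array([0.8, 0.6, 0.4])))

In the actual framework this estimate is differentiable, so gradients flow through geometry, material, and lighting parameters; the toy above only shows the forward evaluation.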

In Bayesian inference, the approximation of integrals of the form $\psi = \mathbb{E}_{F}[l(X)] = \int_{\mathcal{X}} l(\mathbf{x}) \, dF(\mathbf{x})$ is a fundamental challenge. Such integrals are crucial for evidence estimation, which is important for various purposes, including model selection and numerical analysis. The existing strategies for evidence estimation are classified into four categories: deterministic approximation, density estimation, importance sampling, and vertical representation (Llorente et al., 2020). In this paper, we show that the Riemann sum estimator due to Yakowitz (1978) can be used in the context of nested sampling (Skilling, 2006) to achieve an $O(n^{-4})$ rate of convergence, faster than the rate implied by the usual ergodic central limit theorem. We provide a brief overview of the literature on Riemann sum estimators and on the nested sampling algorithm and its connections to vertical likelihood Monte Carlo. We give theoretical and numerical arguments showing how merging these two ideas may result in improved and more robust estimators for evidence estimation, especially in higher-dimensional spaces. We also briefly discuss the idea of simulating the Lorenz curve, which avoids the problem of intractable $\Lambda$ functions, essential for the vertical representation and nested sampling.
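A concrete toy (our sketch; one-dimensional uniform prior, likelihood $L(x) = e^{-x}$, so the evidence $Z = 1 - e^{-1}$ is known exactly) showing Riemann sum accumulation along the prior-volume axis of nested sampling:

    import numpy as np

    rng = np.random.default_rng(1)

    N, steps = 500, 5000
    live = rng.uniform(0.0, 1.0, size=N)   # L(x) = exp(-x) decreases in x
    Z, X_prev = 0.0, 1.0
    for i in range(1, steps + 1):
        worst = live.argmax()              # lowest likelihood = largest x
        L_i = np.exp(-live[worst])
        X_i = np.exp(-i / N)               # deterministic prior-volume schedule
        Z += L_i * (X_prev - X_i)          # Riemann increment on the X axis
        X_prev = X_i
        # replace the worst point with a prior draw inside the contour
        live[worst] = rng.uniform(0.0, live[worst])
    print(Z, 1 - np.exp(-1))               # estimate vs. exact evidence

The quadrature rule applied to these $(X_i, L_i)$ pairs is exactly where the improved convergence rate enters.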

This paper investigates the relationship between the universal approximation property of deep neural networks and topological characteristics of datasets. Our primary contribution is to introduce data topology-dependent upper bounds on the network width. Specifically, we first show that a three-layer neural network, using ReLU activations and max pooling, can be designed to approximate an indicator function over a compact set that is enclosed by a tight convex polytope. This is then extended to a simplicial complex, deriving width upper bounds based on its topological structure. Further, we compute upper bounds in relation to the Betti numbers of selected topological spaces. Finally, we prove the universal approximation property of three-layer ReLU networks using our topological approach. We also verify that gradient descent converges to the network structure proposed in our study.
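A minimal sketch of such a construction (ours; not necessarily the paper's exact architecture): for a polytope $P = \{x : Ax \le b\}$, the first layer applies ReLU to the scaled facet violations, the second max-pools over facets, and the third thresholds the result.

    import numpy as np

    def polytope_indicator_net(A, b, t=1e4):
        """Three-layer ReLU/max-pool net: ~1 on P, ~0 beyond distance ~1/t."""
        def f(x):
            h = np.maximum(0.0, t * (A @ x - b))   # layer 1: ReLU on violations
            m = h.max()                            # layer 2: max pooling
            return np.maximum(0.0, 1.0 - m)        # layer 3: output ReLU
        return f

    # The unit square [0,1]^2 as an intersection of four halfspaces.
    A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
    b = np.array([1.0, 0.0, 1.0, 0.0])
    f = polytope_indicator_net(A, b)
    print(f(np.array([0.5, 0.5])), f(np.array([2.0, 0.5])))  # -> 1.0 0.0

The number of first-layer units equals the number of facets, which is how a width bound can pick up the geometry of the polytope.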

It is proved that the Weisfeiler-Leman dimension of the class of permutation graphs is at most 18. Previously, it was only known that this dimension is finite (Gru{\ss}ien, 2017).

We consider the problem of estimating the optimal transport map between two probability distributions, $P$ and $Q$ in $\mathbb R^d$, on the basis of i.i.d. samples. All existing statistical analyses of this problem require the assumption that the transport map is Lipschitz, a strong requirement that, in particular, excludes any examples where the transport map is discontinuous. As a first step towards developing estimation procedures for discontinuous maps, we consider the important special case where the data distribution $Q$ is a discrete measure supported on a finite number of points in $\mathbb R^d$. We study a computationally efficient estimator initially proposed by Pooladian and Niles-Weed (2021), based on entropic optimal transport, and show in the semi-discrete setting that it converges at the minimax-optimal rate $n^{-1/2}$, independent of dimension. Other standard map estimation techniques both lack finite-sample guarantees in this setting and provably suffer from the curse of dimensionality. We confirm these results with numerical experiments, and report experiments in other settings not covered by our theory, which indicate that the entropic estimator is a promising methodology for other discontinuous transport map estimation problems.
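A small numpy sketch of the entropic estimator in the semi-discrete setting (our simplification; parameter values are arbitrary): log-domain Sinkhorn between samples from $P$ and the atoms of $Q$, followed by the conditional mean of $Y$ under the entropic plan.

    import numpy as np
    from scipy.special import logsumexp

    rng = np.random.default_rng(2)

    def sinkhorn_potential(X, Y, eps=0.05, iters=500):
        """Log-domain Sinkhorn; returns the dual potential g on the atoms Y."""
        n, m = len(X), len(Y)
        C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1) / 2.0
        g = np.zeros(m)
        log_mu, log_nu = np.full(n, -np.log(n)), np.full(m, -np.log(m))
        for _ in range(iters):
            f = -eps * logsumexp((g[None, :] - C) / eps + log_nu[None, :], axis=1)
            g = -eps * logsumexp((f[:, None] - C) / eps + log_mu[:, None], axis=0)
        return g

    def entropic_map(x, Y, g, eps=0.05):
        """T(x): softmax-weighted average of the atoms (barycentric projection)."""
        c = ((x[None, :] - Y) ** 2).sum(-1) / 2.0
        logits = (g - c) / eps
        return np.exp(logits - logsumexp(logits)) @ Y

    X = rng.normal(size=(400, 2))                    # samples from P
    Y = np.array([[-2.0, 0.0], [2.0, 0.0]])          # discrete support of Q
    g = sinkhorn_potential(X, Y)
    print(entropic_map(np.array([1.0, 0.0]), Y, g))  # lands near the atom (2, 0)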

For two graphs $G$ and $F$, the extremal number of $F$ in $G$, denoted by $\mathrm{ex}(G,F)$, is the maximum number of edges in a spanning subgraph of $G$ not containing $F$ as a subgraph. Determining $\mathrm{ex}(K_n,F)$ for a given graph $F$ is a classical extremal problem in graph theory. In 1962, Erd\H{o}s determined $\mathrm{ex}(K_n,kK_3)$, which generalized Mantel's Theorem. On the other hand, in 1974, Bollob\'{a}s, Erd\H{o}s, and Straus determined $\mathrm{ex}(K_{n_1,n_2,\dots,n_r},K_t)$, which extended Tur\'{a}n's Theorem to complete multipartite graphs. In this paper, we determine $\mathrm{ex}(K_{n_1,n_2,\dots,n_r},kK_3)$ for $r\ge 4$ and $10k-4\le n_1+4k\le n_2\le n_3\le \cdots \le n_r$.
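For intuition, a tiny brute-force check of the simplest special case, Mantel's Theorem $\mathrm{ex}(K_n, K_3) = \lfloor n^2/4 \rfloor$, at $n = 5$ (our toy; the paper's result concerns multipartite hosts and vertex-disjoint triangles):

    from itertools import combinations

    n = 5
    edges = list(combinations(range(n), 2))

    def has_triangle(es):
        return any({(a, b), (b, c), (a, c)} <= es
                   for a, b, c in combinations(range(n), 3))

    best = 0
    for mask in range(1 << len(edges)):          # all 2^10 spanning subgraphs
        es = {e for i, e in enumerate(edges) if mask >> i & 1}
        if not has_triangle(es):
            best = max(best, len(es))
    print(best, n * n // 4)                      # -> 6 6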

Let $C$ be an arbitrary simple-root cyclic code and let $G$ be the subgroup of $\mathrm{Aut}(C)$ (the automorphism group of $C$) generated by the multiplier, the cyclic shift, and the scalar multiplications. To the best of our knowledge, $G$ is the largest known subgroup of $\mathrm{Aut}(C)$. In this paper, an explicit formula, and in some cases an upper bound, for the number of orbits of $G$ on $C\setminus\{0\}$ is established. An explicit upper bound on the number of non-zero weights of $C$ is consequently derived, and a necessary and sufficient condition for the code $C$ to meet the bound is exhibited. Many examples are presented to show that our new upper bounds are tight and are strictly less than the upper bounds in [Chen and Zhang, IEEE-TIT, 2023]. In addition, for two special classes of cyclic codes, smaller upper bounds on the number of non-zero weights of such codes are obtained by replacing $G$ with larger subgroups of the automorphism groups of these codes. As a byproduct, our main results suggest a new way to find few-weight cyclic codes.
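The weight bound rests on the observation that codewords in one $G$-orbit share a weight, so the number of distinct non-zero weights is at most the number of orbits. A toy illustration on the $[7,4]$ binary Hamming code (our sketch; over GF(2) the scalar multiplications are trivial, and we only use cyclic shifts):

    from itertools import product

    n, g = 7, (1, 1, 0, 1)                       # generator g(x) = 1 + x + x^3

    def encode(msg):                             # msg(x) * g(x) over GF(2)
        c = [0] * n
        for i, bit in enumerate(msg):
            for j, gj in enumerate(g):
                c[i + j] ^= bit & gj
        return tuple(c)

    code = {encode(m) for m in product((0, 1), repeat=4)} - {(0,) * n}

    def orbit(c):                                # orbit under cyclic shifts
        return frozenset(c[s:] + c[:s] for s in range(n))

    orbits = {orbit(c) for c in code}
    weights = {sum(c) for c in code}
    print(len(orbits), len(weights))             # -> 3 3: the bound is met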

We consider the problem of explaining the predictions of graph neural networks (GNNs), which are otherwise treated as black boxes. Existing methods invariably focus on explaining the importance of graph nodes or edges but ignore the substructures of graphs, which are more intuitive and human-intelligible. In this work, we propose a novel method, known as SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained GNN model and an input graph, SubgraphX explains the model's predictions by efficiently exploring different subgraphs with Monte Carlo tree search. To make the tree search more effective, we propose to use Shapley values as a measure of subgraph importance, which can also capture interactions among different subgraphs. To expedite the computation, we propose efficient approximation schemes to compute Shapley values for graph data. Our work represents the first attempt to explain GNNs by explicitly and directly identifying subgraphs. Experimental results show that SubgraphX achieves significantly improved explanations while keeping computations at a reasonable level.
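A sketch of the Shapley ingredient (ours; the scoring function below is an arbitrary stand-in for a trained GNN, and we estimate one player's value rather than run the full tree search): Monte Carlo permutation sampling approximates a player's average marginal contribution.

    import random

    random.seed(0)

    def model_score(nodes):
        """Stub for a GNN prediction restricted to a frozenset of nodes."""
        return len([v for v in nodes if v % 2 == 0]) ** 1.5  # arbitrary toy value

    def shapley_mc(target, players, samples=2000):
        """Average marginal contribution of `target` over random orders."""
        total = 0.0
        for _ in range(samples):
            order = random.sample(players, len(players))
            before = frozenset(order[:order.index(target)])
            total += model_score(before | {target}) - model_score(before)
        return total / samples

    players = list(range(6))      # e.g. coalition of candidate graph components
    print(shapley_mc(0, players))

In SubgraphX, scores of this kind guide the Monte Carlo tree search over candidate subgraphs; the paper's approximation schemes reduce how many coalitions must be evaluated.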
