
In this paper we propose a definition of the distributional Riemann curvature tensor in dimension $N\geq 2$ when the underlying metric tensor $g$, defined on a triangulation $\mathcal{T}$, possesses only single-valued tangential-tangential components on codimension-1 simplices. We analyze the convergence of the curvature approximation in the $H^{-2}$-norm when a sequence of interpolants $g_h$ of polynomial order $k\geq 0$ of a smooth metric $g$ is given. We show that for dimension $N=2$ convergence rates of order $\mathcal{O}(h^{k+1})$ are obtained. For $N\geq 3$, convergence holds only in the case $k\geq 1$. Numerical examples demonstrate that our theoretical results are sharp. By choosing appropriate test functions we show that the distributional Gauss curvature in 2D and the distributional scalar curvature in any dimension are recovered. Further, a first definition of the distributional Ricci curvature tensor in arbitrary dimension is derived, to which our analysis applies.
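For orientation, the two-dimensional convergence claim can be written in display form (with $\widetilde{R}(g_h)$ our shorthand for the distributional curvature of the interpolated metric and $C$ a generic constant, not notation taken from the paper):

$$\big\| R(g) - \widetilde{R}(g_h) \big\|_{H^{-2}} \;\leq\; C\, h^{k+1} \qquad (N=2,\ k\geq 0),$$

while for $N\geq 3$ the result above asserts convergence only for interpolation orders $k\geq 1$, without a rate being quoted here.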

Related Content

In this paper, we formulate and analyse a symmetric low-regularity integrator for solving the nonlinear Klein-Gordon equation in $d$-dimensional space with $d=1,2,3$. The integrator is constructed from the two-step trigonometric method and has a simple form. Error estimates are rigorously presented to show that the integrator achieves second-order accuracy in time in the energy space under the regularity requirement $H^{1+\frac{d}{4}}\times H^{\frac{d}{4}}$. Moreover, the time symmetry of the scheme ensures good long-time energy conservation, which is rigorously proved by the technique of modulated Fourier expansions. A numerical test is presented, and the results demonstrate the superiority of the new integrator over some existing methods.
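For context, a classical Gautschi-type trigonometric two-step method for the Fourier-discretized Klein-Gordon equation is sketched below. It illustrates the general family of two-step trigonometric methods the integrator builds on, but it is not the low-regularity scheme analysed in the paper; the cubic nonlinearity, grid, and parameters are illustrative assumptions.

```python
# Gautschi-type two-step trigonometric method for the 1D Klein-Gordon equation
# u_tt = u_xx - u + u^3 on a periodic domain, after a Fourier spatial discretization.
# Illustrative sketch only; not the paper's low-regularity integrator.
import numpy as np

M = 256                               # Fourier modes
x = 2 * np.pi * np.arange(M) / M      # periodic grid on [0, 2*pi)
k = np.fft.fftfreq(M, d=1.0 / M)      # integer wave numbers
Omega = np.sqrt(k**2 + 1.0)           # frequencies of the linear part
tau = 1e-3                            # time step

def sinc(z):
    return np.sinc(z / np.pi)         # numpy's sinc is sin(pi z)/(pi z)

def nonlinearity(u):
    return u**3                       # assumed cubic nonlinearity

# illustrative initial data and one starting step (exact linear flow + frozen nonlinearity)
u0, v0 = np.cos(x), np.zeros_like(x)
U0, V0 = np.fft.fft(u0), np.fft.fft(v0)
U1 = np.cos(tau * Omega) * U0 + tau * sinc(tau * Omega) * V0 \
     + 0.5 * tau**2 * sinc(tau * Omega / 2)**2 * np.fft.fft(nonlinearity(u0))

# two-step recursion: U^{n+1} - 2 cos(tau*Omega) U^n + U^{n-1}
#                     = tau^2 sinc^2(tau*Omega/2) * FFT(f(u^n))
for _ in range(1000):
    un = np.real(np.fft.ifft(U1))
    U2 = 2 * np.cos(tau * Omega) * U1 - U0 \
         + tau**2 * sinc(tau * Omega / 2)**2 * np.fft.fft(nonlinearity(un))
    U0, U1 = U1, U2
```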

In this paper we revisit the classical problem of classification, but impose privacy constraints. Under such constraints, the raw data $(X_1,Y_1),\ldots,(X_n,Y_n)$ cannot be directly observed, and all classifiers are functions of the randomised outcome of a suitable local differential privacy mechanism. The statistician is free to choose the form of this privacy mechanism, and here we add Laplace-distributed noise to a discretisation of the location of each feature vector $X_i$ and to its label $Y_i$. The classification rule is the privatised version of the well-studied partitioning classification rule. In addition to the standard Lipschitz and margin conditions, a novel characteristic is introduced, by which the exact rate of convergence of the classification error probability is calculated, both for non-private and private data.
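A minimal sketch of this type of local privatisation is given below: each feature vector is discretised onto a regular grid, and independent Laplace noise is added to the resulting bin-indicator vector and to the label. The grid size, noise scales, and the simple per-cell voting rule are illustrative assumptions, not the paper's exact mechanism or classifier.

```python
# Local differential privacy sketch: discretise features, add Laplace noise to the
# bin indicator and to the (0/1) label, then apply a privatised partitioning rule.
import numpy as np

rng = np.random.default_rng(0)

def privatise(X, Y, n_bins=10, alpha=1.0):
    """X: (n, d) features in [0, 1]^d, Y: (n,) labels in {0, 1}, alpha: privacy budget."""
    n, d = X.shape
    cells = np.minimum((X * n_bins).astype(int), n_bins - 1)   # grid cell per sample
    cell_ids = np.ravel_multi_index(cells.T, (n_bins,) * d)
    Z = np.zeros((n, n_bins ** d))
    Z[np.arange(n), cell_ids] = 1.0                             # one-hot bin indicator
    Z += rng.laplace(scale=2.0 / alpha, size=Z.shape)           # noisy location
    Yp = Y + rng.laplace(scale=2.0 / alpha, size=n)             # noisy label
    return Z, Yp

def partition_classifier(Z, Yp):
    """Per-cell majority vote built only from the privatised data."""
    votes = Z.T @ (2 * Yp - 1)        # noisy vote for class 1 vs class 0 in each cell
    return (votes > 0).astype(int)    # predicted class for each cell
```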

Advances in large language models (LLMs) have driven an explosion of interest in their societal impacts. Much of the discourse around how they will affect social equity has been cautionary or negative, focusing on questions like "how might LLMs be biased and how would we mitigate those biases?" This is a vital discussion: the ways in which AI generally, and LLMs specifically, can entrench biases have been well documented. But equally vital, and much less discussed, is the more opportunity-focused counterpoint: "what promising applications do LLMs enable that could promote equity?" If LLMs are to enable a more equitable world, it is not enough just to play defense against their biases and failure modes. We must also go on offense, applying them positively to equity-enhancing use cases to increase opportunities for underserved groups and reduce societal discrimination. There are many choices that determine the impact of AI, and a fundamental choice made very early in the pipeline is which problems we choose to apply it to. If we focus only later in the pipeline -- making LLMs marginally more fair as they facilitate use cases that intrinsically entrench power -- we will miss an important opportunity to guide them toward equitable impacts. Here, we highlight the emerging potential of LLMs to promote equity by presenting four newly possible, promising research directions, while keeping risks and cautionary points in clear view.

Given an undirected graph $G=(V,E)$ (the conflict graph), where $V$ is a set of $n$ vertices (representing the jobs), processing times $p \colon V \to \mathbb{Z}_{>0}$, and $m\geq 2$ identical machines, the Parallel Machine Scheduling with Conflicts (PMC) problem consists in finding an assignment $c \colon V \to [m]:=\{1,\ldots, m\}$ with $c(u)\neq c(v)$ for all $\{u,v\} \in E$ that minimizes the makespan $\max_{k \in [m]} \sum_{v \in V \colon c(v)=k} p(v)$. First, we consider the natural assignment formulation for PMC, using binary variables indexed by the jobs and machines, and discuss how to reduce the symmetries in this model. Then we propose a compact mixed-integer linear programming formulation for PMC to tackle the issues of symmetry and the unbalanced enumeration tree associated with the assignment model. The proposed formulation uses a set of representative jobs (one on each machine) to express feasible solutions of the problem, and it is based on the representatives model for the vertex coloring problem. We present a polyhedral study of the associated polytope and show classes of valid inequalities inherited from the stable set polytope. We describe branch-and-cut algorithms for PMC and report on preliminary computational experiments with benchmark instances.
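A minimal sketch of the natural assignment formulation described above, written with the PuLP modelling library, is given below. The variable names and setup are our own choices; the symmetry-reduction techniques and the representatives formulation are not shown.

```python
# Assignment MILP for Parallel Machine Scheduling with Conflicts (PMC):
# binary x[v][k] = 1 iff job v runs on machine k; minimize the makespan Cmax.
import pulp

def pmc_assignment_model(p, edges, m):
    """p: dict job -> processing time, edges: iterable of conflicting job pairs, m: #machines."""
    jobs, machines = list(p), list(range(m))
    prob = pulp.LpProblem("PMC_assignment", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (jobs, machines), cat="Binary")
    cmax = pulp.LpVariable("Cmax", lowBound=0)
    prob += cmax                                             # objective: makespan
    for v in jobs:                                           # each job on exactly one machine
        prob += pulp.lpSum(x[v][k] for k in machines) == 1
    for (u, v) in edges:                                     # conflicting jobs on distinct machines
        for k in machines:
            prob += x[u][k] + x[v][k] <= 1
    for k in machines:                                       # machine load bounded by Cmax
        prob += pulp.lpSum(p[v] * x[v][k] for v in jobs) <= cmax
    return prob, x, cmax

# Example usage: prob, x, cmax = pmc_assignment_model({"a": 3, "b": 2, "c": 2}, [("a", "b")], 2)
```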

A posteriori reduced-order models, e.g. proper orthogonal decomposition, are essential to affordably tackle realistic parametric problems. They rely on a trustworthy training set, that is, a family of full-order solutions (snapshots) representative of all possible outcomes of the parametric problem. Having such a rich collection of snapshots is, in many cases, not computationally viable. A strategy for data augmentation, designed for parametric laminar incompressible flows, is proposed to enrich poorly populated training sets. The goal is to include in the new, artificial snapshots emerging features, not present in the original basis, that enhance the quality of the reduced-order solution. The methodologies devised exploit basic physical principles, such as mass and momentum conservation, to construct physically relevant artificial snapshots at a fraction of the cost of additional full-order solutions. Interestingly, the numerical results show that the ideas exploiting only mass conservation (i.e., incompressibility) do not produce significant added value with respect to standard linear combinations of snapshots. Conversely, accounting for the linearized momentum balance via the Oseen equation does improve the quality of the resulting approximation and is therefore an effective data-augmentation strategy in the framework of viscous incompressible laminar flows.
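The sketch below shows where such artificial snapshots would enter a standard POD pipeline: the snapshot matrix is enriched before the SVD is taken. The routine `make_artificial_snapshots` is a hypothetical placeholder for the physics-based strategies discussed above (mass conservation, Oseen-based momentum balance) and is not implemented here.

```python
# POD with snapshot augmentation: enrich the training set, then build the reduced basis.
import numpy as np

def pod_basis(snapshots, tol=1e-6):
    """snapshots: (n_dof, n_snap) matrix; returns an energy-truncated POD basis."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1          # modes capturing 1 - tol of the energy
    return U[:, :r]

def augmented_pod_basis(snapshots, make_artificial_snapshots, tol=1e-6):
    """Append artificial (physics-based) snapshots to the training set, then do POD."""
    extra = make_artificial_snapshots(snapshots)             # hypothetical augmentation routine
    enriched = np.hstack([snapshots, extra])
    return pod_basis(enriched, tol)
```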

Subspace codes are the $q$-analog of binary block codes in the Hamming metric. Here the codewords are vector spaces over a finite field. They have applications in, e.g., random linear network coding, distributed storage, and cryptography. In this chapter we survey known constructions and upper bounds for subspace codes.
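As a small illustration of the metric involved, the distance between two subspaces $U,V \subseteq \mathbb{F}_q^n$ is $d(U,V)=\dim(U+V)-\dim(U\cap V)=2\dim(U+V)-\dim U-\dim V$. The sketch below evaluates it over $\mathbb{F}_2$ for subspaces given by generator matrices; it is illustrative only and not taken from the chapter.

```python
# Subspace distance over GF(2) via ranks of generator matrices.
import numpy as np

def rank_gf2(A):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    A = np.array(A, dtype=np.uint8) % 2
    rank, rows, cols = 0, A.shape[0], A.shape[1]
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]     # move pivot row up
        for r in range(rows):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]                 # eliminate the column elsewhere
        rank += 1
    return rank

def subspace_distance(U, V):
    """Distance between the row spaces of U and V: 2*dim(U+V) - dim U - dim V."""
    return 2 * rank_gf2(np.vstack([U, V])) - rank_gf2(U) - rank_gf2(V)

# Two 2-dimensional subspaces of F_2^4 meeting in a line -> distance 2.
U = [[1, 0, 0, 0], [0, 1, 0, 0]]
V = [[1, 0, 0, 0], [0, 0, 1, 0]]
print(subspace_distance(U, V))  # 2
```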

A {\em packing coloring} of a graph $G$ is a mapping assigning a positive integer (a color) to every vertex of $G$ such that every two vertices of color $k$ are at distance at least $k+1$. The least number of colors needed for a packing coloring of $G$ is called the {\em packing chromatic number} of $G$. In this paper, we continue the study of the packing chromatic number of hypercubes and improve the upper bounds reported by Torres and Valencia-Pabon ({\em P. Torres, M. Valencia-Pabon, The packing chromatic number of hypercubes, Discrete Appl. Math. 190--191 (2015), 127--140}) by presenting recursive constructions of subsets of distant vertices making use of the properties of the extended Hamming codes. We also answer in the negative a question on packing coloring of Cartesian products raised by Bre\v{s}ar, Klav\v{z}ar, and Rall ({\em Problem 5, Bre\v{s}ar et al., On the packing chromatic number of Cartesian products, hexagonal lattice, and trees, Discrete Appl. Math. 155 (2007), 2303--2311}).
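A small checker for the definition above, specialised to hypercubes (where the graph distance between two vertices equals their Hamming distance), is sketched below; the 5-color packing coloring of $Q_3$ used as an example is illustrative and not claimed to be optimal.

```python
# Verify a packing coloring of a hypercube Q_n: vertices of color k must be pairwise
# at Hamming distance at least k + 1.
import itertools

def is_packing_coloring_hypercube(c):
    """c: dict mapping 0/1 tuples (vertices of Q_n) to positive integer colors."""
    for u, v in itertools.combinations(c, 2):
        if c[u] == c[v]:
            dist = sum(a != b for a, b in zip(u, v))   # graph distance in Q_n
            if dist < c[u] + 1:
                return False
    return True

# Example: a packing coloring of Q_3 with 5 colors.
colors = {(0, 0, 0): 1, (0, 1, 1): 1, (1, 0, 1): 1, (1, 1, 0): 1,
          (0, 0, 1): 2, (0, 1, 0): 3, (1, 0, 0): 4, (1, 1, 1): 5}
print(is_packing_coloring_hypercube(colors))  # True
```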

In this paper, we study the stability and convergence of a fully discrete finite difference scheme for the initial value problem associated with the Korteweg-de Vries (KdV) equation. We employ the Crank-Nicolson method for temporal discretization and establish that the scheme is $L^2$-conservative. The convergence analysis reveals that, by utilizing Kato's inherent local smoothing effect, the proposed scheme converges to a classical solution for sufficiently regular initial data $u_0 \in H^{3}(\mathbb{R})$ and to a weak solution in $L^2(0,T;L^2_{\text{loc}}(\mathbb{R}))$ for non-smooth initial data $u_0 \in L^2(\mathbb{R})$. Optimal convergence rates in both time and space are derived for the devised scheme. The theoretical results are justified through several numerical illustrations.
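For concreteness, one representative Crank-Nicolson finite-difference discretisation of the KdV equation $u_t + u u_x + u_{xxx} = 0$ reads (in our notation; the paper's $L^2$-conservative scheme may average the nonlinear term differently):

$$\frac{u_j^{n+1}-u_j^{n}}{\Delta t} \;+\; u_j^{n+\frac12}\,\big(D_0 u^{n+\frac12}\big)_j \;+\; \big(D_+ D_- D_0\, u^{n+\frac12}\big)_j \;=\; 0, \qquad u^{n+\frac12}:=\tfrac12\big(u^{n}+u^{n+1}\big),$$

where $D_0$, $D_+$, and $D_-$ denote the central, forward, and backward difference operators on a uniform grid of width $\Delta x$.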

Deep neural networks (DNNs) often fail silently with over-confident predictions on out-of-distribution (OOD) samples, posing risks in real-world deployments. Existing techniques predominantly emphasize either the feature representation space or the gradient norms computed with respect to DNN parameters, yet they overlook the intricate gradient distribution and the topology of classification regions. To address this gap, we introduce GRadient-aware Out-Of-Distribution detection in interpolated manifolds (GROOD), a novel framework that relies on the discriminative power of gradient space to distinguish between in-distribution (ID) and OOD samples. To build this space, GROOD relies on class prototypes together with a prototype that specifically captures OOD characteristics. Uniquely, our approach incorporates a targeted mix-up operation at an early intermediate layer of the DNN to refine the separation of gradient spaces between ID and OOD samples. We quantify OOD detection efficacy using the distance to the nearest-neighbor gradients derived from the training set, yielding a robust OOD score. Experimental evaluations substantiate that the targeted input mix-up amplifies the separation between ID and OOD in the gradient space, yielding impressive results across diverse datasets. Notably, when benchmarked against ImageNet-1k, GROOD surpasses the established robustness of state-of-the-art baselines. Through this work, we establish the utility of leveraging gradient spaces and class prototypes for enhanced OOD detection for DNNs in image classification.
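A generic sketch of gradient-space OOD scoring in this spirit is given below: the gradient of a prototype-driven loss with respect to the classification head serves as an embedding, and the OOD score is the distance to the nearest training gradient. The attribute names `model.backbone` and `model.head`, the pseudo-labelling rule, and the omission of the mix-up step and of the dedicated OOD prototype are all simplifying assumptions; this is not the GROOD implementation.

```python
# Gradient-space OOD scoring sketch (PyTorch): embed each input by the gradient of a
# prototype-based loss w.r.t. the classification head, then score by nearest-neighbor
# distance to training gradients.
import torch
import torch.nn.functional as F

def gradient_embedding(model, x, prototypes):
    """Gradient of a pseudo-labelled loss w.r.t. the head; x: (1, ...) input batch."""
    feats = model.backbone(x)                                  # (1, d) features (assumed API)
    logits = model.head(feats)                                 # (1, n_classes)
    target = torch.cdist(feats, prototypes).argmin(dim=1)      # nearest class prototype as pseudo-label
    loss = F.cross_entropy(logits, target)
    grads = torch.autograd.grad(loss, tuple(model.head.parameters()))
    return torch.cat([g.flatten() for g in grads])

def ood_score(model, x, prototypes, train_gradients):
    """Distance to the nearest training gradient; larger values suggest OOD."""
    g = gradient_embedding(model, x, prototypes)
    return torch.cdist(g[None], train_gradients).min().item()
```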

The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency-map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
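A minimal sketch of this comparison mechanism: the explainee's own explanation and the AI's saliency map are compared in a similarity space, and Shepard's exponential generalization gradient turns their distance into a predicted probability that the explainee expects the AI to decide as they would. The $L^1$ distance, the scale parameter, and the mapping from similarity to probability are illustrative assumptions, not the paper's fitted model.

```python
# Shepard-style comparison of saliency maps: similarity decays exponentially with
# distance in the similarity space, and drives the predicted agreement.
import numpy as np

def shepard_similarity(own_explanation, ai_explanation, scale=1.0):
    """Exponential generalization gradient between two (flattened) saliency maps."""
    d = np.linalg.norm(own_explanation.ravel() - ai_explanation.ravel(), ord=1)
    return np.exp(-d / scale)

def predicted_agreement(own_explanation, ai_explanation, baseline=0.5, scale=1.0):
    """Predicted probability that the explainee infers the AI decides as they would."""
    s = shepard_similarity(own_explanation, ai_explanation, scale)
    return baseline + (1.0 - baseline) * s    # assumed similarity-to-probability mapping
```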
