
We present $\mathcal{X}^3$ (pronounced XCube), a novel generative model for high-resolution sparse 3D voxel grids with arbitrary attributes. Our model can generate millions of voxels with a finest effective resolution of up to $1024^3$ in a feed-forward fashion without time-consuming test-time optimization. To achieve this, we employ a hierarchical voxel latent diffusion model which generates progressively higher resolution grids in a coarse-to-fine manner using a custom framework built on the highly efficient VDB data structure. Apart from generating high-resolution objects, we demonstrate the effectiveness of XCube on large outdoor scenes at scales of 100m$\times$100m with a voxel size as small as 10cm. We observe clear qualitative and quantitative improvements over past approaches. In addition to unconditional generation, we show that our model can be used to solve a variety of tasks such as user-guided editing, scene completion from a single scan, and text-to-3D. More results and details can be found at //research.nvidia.com/labs/toronto-ai/xcube/.
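To illustrate the coarse-to-fine hierarchy described above, here is a minimal, hedged NumPy sketch of refining a sparse occupancy grid level by level, where only voxels active at the coarse level are subdivided at the next resolution. The diffusion model, attribute decoding, and the VDB-backed sparse framework are not reproduced here; `refine_level` is a hypothetical stand-in for the learned per-level generator.

```python
import numpy as np

def refine_level(coarse_occ: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a learned refinement step.

    Subdivides each active coarse voxel into a 2x2x2 block and keeps a
    random subset of the children, mimicking how the finer occupancy grid
    is only populated inside the coarse grid's active region.
    """
    fine = np.zeros(tuple(2 * s for s in coarse_occ.shape), dtype=bool)
    for x, y, z in zip(*np.nonzero(coarse_occ)):
        block = np.random.rand(2, 2, 2) > 0.3   # placeholder for a network
        fine[2*x:2*x+2, 2*y:2*y+2, 2*z:2*z+2] = block
    return fine

# Start from a tiny 16^3 grid and refine three times (16 -> 32 -> 64 -> 128).
grid = np.random.rand(16, 16, 16) > 0.9
for _ in range(3):
    grid = refine_level(grid)
    print(grid.shape, int(grid.sum()), "active voxels")
```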

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its attendees come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum where participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will give the modeling community an opportunity to further advance the foundations of modeling and to propose innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
January 29, 2024

We present the Trust Region Adversarial Functional Subdifferential (TRAFS) algorithm for constrained optimization of nonsmooth convex Lipschitz functions. Unlike previous methods that assume a subgradient oracle model, we work with the functional subdifferential defined as a set of subgradients that simultaneously captures sufficient local information for effective minimization while being easy to compute for a wide range of functions. In each iteration, TRAFS finds the best step vector in an $\ell_2$-bounded trust region by considering the worst bound given by the functional subdifferential. TRAFS finds an approximate solution with an absolute error up to $\epsilon$ in $\mathcal{O}\left( \epsilon^{-1}\right)$ or $\mathcal{O}\left(\epsilon^{-0.5} \right)$ iterations depending on whether the objective function is strongly convex, compared to the previously best-known bounds of $\mathcal{O}\left(\epsilon^{-2}\right)$ and $\mathcal{O}\left(\epsilon^{-1}\right)$ in these settings. TRAFS makes faster progress if the functional subdifferential satisfies a locally quadratic property; as a corollary, TRAFS achieves linear convergence (i.e., $\mathcal{O}\left(\log \epsilon^{-1}\right)$) for strongly convex smooth functions. In the numerical experiments, TRAFS is on average 39.1x faster and solves twice as many problems compared to the second-best method.
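One plausible reading of the per-iteration step described above, written as a worked formula under the assumption that $G(x)$ denotes the functional subdifferential at the current iterate $x$ and $r$ the trust-region radius (the precise definition of $G(x)$ and the radius schedule are in the paper and not reproduced here):

$$d^{\star} \;=\; \operatorname*{arg\,min}_{\|d\|_2 \le r} \;\max_{g \in G(x)} \langle g, d \rangle, \qquad x^{+} \;=\; x + d^{\star}.$$

The inner maximum is the worst-case bound supplied by the functional subdifferential; minimizing it over the $\ell_2$ ball selects the best step vector within the trust region.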

Deep neural network approximation of nonlinear operators, commonly referred to as DeepONet, has proven capable of approximating PDE backstepping designs in which a single Goursat-form PDE governs a single feedback gain function. In boundary control of coupled PDEs, coupled Goursat-form PDEs govern two or more gain kernels -- a PDE structure unaddressed thus far with DeepONet. In this note, we open the subject of approximating systems of gain kernel PDEs for hyperbolic PDE plants by considering a simple counter-convecting $2\times 2$ coupled system in whose control a $2\times 2$ kernel PDE system in Goursat form arises. Applications include oil drilling, the Saint-Venant model of shallow water waves, and the Aw-Rascle-Zhang model of stop-and-go instability in congested traffic flow. In this paper we establish the continuity of the mapping from (a total of five) plant PDE functional coefficients to the kernel PDE solutions, prove the existence of an arbitrarily close DeepONet approximation to the kernel PDEs, and establish that the DeepONet-approximated gains guarantee stabilization when replacing the exact backstepping gain kernels. Taking into account anti-collocated boundary actuation and sensing, our $L^2$\emph{-globally-exponentially} stabilizing (GES) approximate gain kernel-based output feedback design implies the deep learning of both the controller's and the observer's gains. Moreover, the encoding of the output-feedback law into DeepONet ensures \emph{semi-global practical exponential stability (SG-PES)}. The DeepONet operator speeds up the computation of the controller gains by multiple orders of magnitude. Its theoretically proven stabilizing capability is demonstrated through simulations.
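As a hedged illustration of the operator-learning component, the sketch below is a minimal generic DeepONet forward pass in PyTorch: a branch network encodes a plant coefficient function sampled at fixed sensor points, a trunk network encodes a query coordinate, and their inner product approximates the gain kernel value at that coordinate. This is the standard DeepONet structure, not the authors' architecture; the network sizes, sensor count, and the two-dimensional query coordinate are assumptions.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Generic DeepONet: k(y) ~ sum_j branch_j(f) * trunk_j(y)."""

    def __init__(self, n_sensors: int = 64, n_basis: int = 32):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Linear(n_sensors, 128), nn.Tanh(), nn.Linear(128, n_basis))
        self.trunk = nn.Sequential(
            nn.Linear(2, 128), nn.Tanh(), nn.Linear(128, n_basis))

    def forward(self, f_samples: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        # f_samples: (batch, n_sensors) coefficient function at sensor points
        # coords:    (batch, n_points, 2) query points in the Goursat domain
        b = self.branch(f_samples)               # (batch, n_basis)
        t = self.trunk(coords)                   # (batch, n_points, n_basis)
        return torch.einsum('bk,bpk->bp', b, t)  # kernel values at each query

model = DeepONet()
f = torch.randn(8, 64)      # a batch of sampled plant coefficients
xy = torch.rand(8, 100, 2)  # query points
print(model(f, xy).shape)   # torch.Size([8, 100])
```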

Vision-language foundation models like CLIP have revolutionized the field of artificial intelligence. Nevertheless, VLMs supporting multiple languages, e.g., both Chinese and English, have lagged behind due to the relative scarcity of large-scale pretraining datasets. Toward this end, we introduce a comprehensive bilingual (Chinese-English) dataset BM-6B with over 6 billion image-text pairs, aimed at enhancing multimodal foundation models so that they understand images well in both languages. To handle a dataset of this scale, we propose a novel grouped aggregation approach for image-text contrastive loss computation, which significantly reduces the communication overhead and GPU memory demands, enabling a 60% increase in training speed. We pretrain a series of bilingual image-text foundation models with enhanced fine-grained understanding ability on BM-6B; the resulting models, dubbed $M^2$-Encoders (pronounced "M-Square"), set new benchmarks in both languages for multimodal retrieval and classification tasks. Notably, our largest $M^2$-Encoder-10B model has achieved top-1 accuracies of 88.5% on ImageNet and 80.7% on ImageNet-CN under a zero-shot classification setting, surpassing previously reported SoTA methods by 2.2% and 21.1%, respectively. The $M^2$-Encoder series represents one of the most comprehensive bilingual image-text foundation models to date, so we are making it available to the research community for further exploration and development.
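For context, a hedged sketch of the standard bidirectional image-text contrastive (CLIP-style) objective that such encoders are trained with; the grouped aggregation trick that reduces communication and memory at the 6-billion-pair scale is the paper's contribution and is not reproduced here.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb: torch.Tensor,
                          txt_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over matched image-text pairs in a batch."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature             # (batch, batch) similarities
    targets = torch.arange(img.size(0), device=img.device)
    loss_i2t = F.cross_entropy(logits, targets)      # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)  # text -> image direction
    return 0.5 * (loss_i2t + loss_t2i)

# Toy usage with random embeddings standing in for encoder outputs.
print(clip_contrastive_loss(torch.randn(16, 512), torch.randn(16, 512)))
```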

This paper extends the fixed-source capability of the analytical 1D multigroup $S_N$ equations to solve eigenvalue problems on a coarse mesh.

We propose an original approach to investigating the linearity of $\mathbb{Z}_{2^L}$-linear codes, i.e., codes obtained as the image of the generalized Gray map applied to $\mathbb{Z}_{2^L}$-additive codes. To accomplish this, we define two related binary codes, the associated code and the concatenated code, on which a straightforward analysis of the Schur product of codewords determines the linearity of the respective $\mathbb{Z}_{2^L}$-linear code. This work expands on previous contributions from the literature, where linearity was established with respect to the kernel of a code and/or operations on $\mathbb{Z}_{2^L}$. The $\mathbb{Z}_{2^L}$-additive codes to which we apply the Gray map and whose linearity we check are the well-known Hadamard, simplex, and MacDonald codes. We also present families of Reed-Muller and cyclic codes that yield linear $\mathbb{Z}_{2^L}$-linear codes and perform a computational verification of our proposed method applied to other $\mathbb{Z}_{2^L}$-additive codes.
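As a hedged illustration of the objects involved, the snippet below implements one standard form of the generalized (Carlet) Gray map from $\mathbb{Z}_{2^L}$ to $\mathbb{Z}_2^{2^{L-1}}$ and brute-force checks whether the Gray image of a tiny $\mathbb{Z}_{2^L}$-additive code is closed under addition, i.e., linear. The paper's actual criterion via the associated and concatenated codes and the Schur product is not reproduced; the toy code here is not one of the Hadamard, simplex, or MacDonald families.

```python
import itertools
import numpy as np

L = 2                       # working over Z_{2^L}; L = 2 gives the familiar Z_4 Gray map
M = 2 ** (L - 1)            # Gray image length per symbol
# Columns of Y: all binary (L-1)-tuples, in lexicographic order.
Y = np.array(list(itertools.product([0, 1], repeat=L - 1)), dtype=int).T  # (L-1, M)

def gray(u: int) -> np.ndarray:
    """One standard form of the generalized Gray map Z_{2^L} -> Z_2^{2^(L-1)}."""
    bits = [(u >> i) & 1 for i in range(L)]          # u = sum_i bits[i] * 2^i
    low, top = np.array(bits[:-1], dtype=int), bits[-1]
    return (low @ Y + top) % 2

def gray_word(word) -> tuple:
    return tuple(np.concatenate([gray(u) for u in word]))

# A tiny Z_{2^L}-additive code: the span of a single generator over Z_{2^L}.
gen = (1, 2, 3)
code = {tuple((a * np.array(gen)) % (2 ** L)) for a in range(2 ** L)}
image = {gray_word(w) for w in code}

# The Gray image is linear iff it is closed under componentwise XOR.
linear = all(tuple(np.array(x) ^ np.array(y)) in image
             for x in image for y in image)
print(sorted(image), "linear:", linear)
```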

This paper proposes an $\alpha$-leakage measure for $\alpha\in[0,\infty)$ based on a cross-entropy interpretation of R{\'{e}}nyi entropy. While R\'{e}nyi entropy was originally defined as an $f$-mean for $f(t) = \exp((1-\alpha)t)$, we reveal that it is also an $\tilde{f}$-mean cross-entropy measure for $\tilde{f}(t) = \exp(\frac{1-\alpha}{\alpha}t)$. Minimizing this R\'{e}nyi cross-entropy gives R\'{e}nyi entropy, by which prior and posterior uncertainty measures are defined corresponding to the adversary's knowledge gain about a sensitive attribute before and after data release, respectively. The $\alpha$-leakage is proposed as the difference between the $\tilde{f}$-mean prior and posterior uncertainty measures, which is exactly the Arimoto mutual information. This not only extends the existing $\alpha$-leakage from $\alpha \in [1,\infty)$ to the full R{\'{e}}nyi order range $\alpha \in [0,\infty)$ in a well-founded way, with $\alpha=0$ referring to nonstochastic leakage, but also reveals that the existing maximal leakage is an $\tilde{f}$-mean of an elementary $\alpha$-leakage for all $\alpha \in [0,\infty)$, which generalizes the existing pointwise maximal leakage.
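To make the $f$-mean claim above concrete, a short worked check for a discrete distribution $P$ and $\alpha \neq 1$ (the $\tilde{f}$-mean cross-entropy reading follows the same pattern and is not repeated here):

$$f(t) = e^{(1-\alpha)t}, \qquad f^{-1}\!\Big(\sum_x P(x)\, f\big(\log\tfrac{1}{P(x)}\big)\Big) = \frac{1}{1-\alpha}\log\sum_x P(x)\,P(x)^{\alpha-1} = \frac{1}{1-\alpha}\log\sum_x P(x)^{\alpha} = H_\alpha(P).$$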

We propose a novel approach to nonlinear functional regression, called the Mapping-to-Parameter function model, which addresses complex and nonlinear functional regression problems in parameter space by employing any supervised learning technique. Central to this model is the mapping of function data from an infinite-dimensional function space to a finite-dimensional parameter space. This is accomplished by concurrently approximating multiple functions with a common set of B-spline basis functions of any chosen order, with their knot distribution determined by the Iterative Local Placement Algorithm, a newly proposed free knot placement algorithm. In contrast to the conventional equidistant knot placement strategy, which uniformly distributes knot locations based on a predefined number of knots, our proposed algorithms determine knot locations according to the local complexity of the input or output functions. The performance of our knot placement algorithms is shown to be robust in both single-function and multiple-function approximation contexts. Furthermore, the effectiveness and advantages of the proposed prediction model in handling both function-on-scalar and function-on-function regression problems are demonstrated through several real data applications, in comparison with four groups of state-of-the-art methods.
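A hedged sketch of the function-to-parameter mapping itself: given a shared set of interior knots, each sampled function is least-squares-fit with the same cubic B-spline basis via SciPy, and its coefficient vector becomes the finite-dimensional representation handed to any supervised learner. The knot set here is fixed by hand; the paper's Iterative Local Placement Algorithm for choosing knots adaptively is not reproduced.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
knots = np.linspace(0.1, 0.9, 9)      # shared interior knots (hand-picked here)

def to_parameters(y: np.ndarray, k: int = 3) -> np.ndarray:
    """Map one sampled function to its B-spline coefficient vector."""
    spline = LSQUnivariateSpline(x, y, knots, k=k)
    return spline.get_coeffs()

# Three noisy example functions -> a small design matrix of spline coefficients.
functions = [np.sin(2 * np.pi * f * x) + 0.05 * rng.standard_normal(x.size)
             for f in (1.0, 2.0, 3.0)]
params = np.stack([to_parameters(y) for y in functions])
print(params.shape)   # (3, n_coefficients): rows are inputs to a supervised model
```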

The $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-additive codes are subgroups of $\mathbb{Z}_2^{\alpha_1} \times \mathbb{Z}_4^{\alpha_2} \times \mathbb{Z}_8^{\alpha_3}$. A $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-linear Hadamard code is a Hadamard code which is the Gray map image of a $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-additive code. A recursive construction of $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-additive Hadamard codes of type $(\alpha_1,\alpha_2, \alpha_3;t_1,t_2, t_3)$ with $\alpha_1 \neq 0$, $\alpha_2 \neq 0$, $\alpha_3 \neq 0$, $t_1\geq 1$, $t_2 \geq 0$, and $t_3\geq 1$ is known. In this paper, we generalize some known results for $\mathbb{Z}_2\mathbb{Z}_4$-linear Hadamard codes to $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-linear Hadamard codes with $\alpha_1 \neq 0$, $\alpha_2 \neq 0$, and $\alpha_3 \neq 0$. First, we show for which types the corresponding $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-linear Hadamard codes of length $2^t$ are nonlinear. For these codes, we compute the kernel and its dimension, which allows us to give a partial classification of these codes. Moreover, for $3 \leq t \leq 11$, we give a complete classification by providing the exact number of nonequivalent such codes. We also prove the existence of several infinite families of such nonlinear $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-linear Hadamard codes, which are not equivalent to any other constructed $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-linear Hadamard code, nor to any $\mathbb{Z}_2\mathbb{Z}_4$-linear Hadamard code, nor to any previously constructed $\mathbb{Z}_{2^s}$-linear Hadamard code with $s\geq 2$, with the same length $2^t$.
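A hedged illustration of the kernel computation mentioned above: for a binary code $C$ containing the zero word, the kernel $K(C)=\{x : x + C = C\}$ is contained in $C$, and $C$ is linear exactly when $K(C)=C$. The toy code below is not a Hadamard code; it only shows the brute-force check and the dimension read-off.

```python
def xor(a: tuple, b: tuple) -> tuple:
    return tuple(x ^ y for x, y in zip(a, b))

def kernel(code: set) -> set:
    """Kernel of a binary code containing 0: {x in C : x + C = C} (brute force)."""
    return {x for x in code if all(xor(x, c) in code for c in code)}

# A small nonlinear binary code containing the zero word (toy example).
C = {(0, 0, 0, 0), (1, 1, 0, 0), (0, 0, 1, 1), (1, 0, 1, 0)}
K = kernel(C)
print("kernel:", sorted(K))
print("linear:", K == C)   # dimension of the kernel = log2(|K|), since K is a subgroup
```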

In Linear Logic ($\mathsf{LL}$), the exponential modality $!$ brings forth a distinction between non-linear proofs and linear proofs, where linear means using an argument exactly once. Differential Linear Logic ($\mathsf{DiLL}$) is an extension of Linear Logic which includes additional rules for $!$ that encode differentiation and the ability to linearize proofs. On the other hand, Graded Linear Logic ($\mathsf{GLL}$) is a variation of Linear Logic in which $!$ is indexed over a semiring $R$. This $R$-grading allows for non-linear proofs of degree $r \in R$, such that the linear proofs are those of degree $1 \in R$. There has been recent interest in combining these two variations of $\mathsf{LL}$ and developing Graded Differential Linear Logic ($\mathsf{GDiLL}$). In this paper we present a sequent calculus for $\mathsf{GDiLL}$ and introduce its categorical semantics, which we call graded differential categories, using both coderelictions and deriving transformations. We prove that symmetric powers always give graded differential categories, and provide other examples of graded differential categories. We also discuss graded versions of (monoidal) coalgebra modalities, additive bialgebra modalities, and the Seely isomorphisms, as well as their implementations in the sequent calculus of $\mathsf{GDiLL}$.

Given a metric space $(V, d)$ along with an integer $k$, the $k$-Median problem asks to open $k$ centers $C \subseteq V$ to minimize $\sum_{v \in V} d(v, C)$, where $d(v, C) := \min_{c \in C} d(v, c)$. While the best-known approximation ratio of $2.613$ holds for the more general supplier version, where an additional set $F \subseteq V$ is given with the restriction $C \subseteq F$, the best known hardness results for these two versions are $1+1/e \approx 1.36$ and $1+2/e \approx 1.73$ respectively, obtained via the same reduction from Max $k$-Coverage. We prove the following two results separating them. First, we give a parameterized $1.546$-approximation algorithm that runs in time $f(k) n^{O(1)}$. Since $1+2/e$ is proved to be the optimal approximation ratio for the supplier version in the parameterized setting, this result separates the original $k$-Median from the supplier version. Next, we prove a $1.416$-hardness for polynomial-time algorithms assuming the Unique Games Conjecture. This is achieved via a new fine-grained hardness result for Max $k$-Coverage with small set sizes. Our upper bound and lower bound are derived from almost the same expression, with the only difference coming from the well-known separation between the powers of LP and SDP on (hypergraph) vertex cover.
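For concreteness, a hedged brute-force illustration of the objective itself on a tiny one-dimensional metric: it enumerates all $k$-subsets of candidate centers and reports the cost $\sum_{v} d(v, C)$. This is exponential in $k$ and is only meant to pin down the objective; it is unrelated to the parameterized approximation algorithm above.

```python
from itertools import combinations

def kmedian_cost(points, centers):
    """Sum over points of the distance to the nearest chosen center."""
    return sum(min(abs(p - c) for c in centers) for p in points)

def brute_force_kmedian(points, k):
    """Exact k-Median on the line by enumerating all k-subsets of points."""
    return min(combinations(points, k), key=lambda C: kmedian_cost(points, C))

V = [0.0, 1.0, 1.5, 6.0, 6.2, 9.0]
best = brute_force_kmedian(V, k=2)
print(best, kmedian_cost(V, best))
```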
