
The reachability problem for vector addition systems with states (VASS) has been shown to be \textsc{Ackermann}-complete. However, for every fixed $k\geq 3$, no completeness result is yet available for the $k$-dimensional VASS reachability problem. This paper shows that the $3$-dimensional VASS reachability problem is in \textsc{Tower}, improving upon the previous best upper bound of $\mathbf{F}_7$ established by Leroux and Schmitz in 2019.
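To fix intuitions about the model, here is a hedged Python sketch (the encoding, the function name, and the toy transition system are illustrative assumptions, not from the paper): a $d$-dimensional VASS moves from configuration $(q,\vec v)$ to $(q',\vec v+\vec\delta)$ along a transition $(q,\vec\delta,q')$ whenever $\vec v+\vec\delta\geq \vec 0$ componentwise, and breadth-first search over configurations yields a semi-decision procedure for reachability (it can confirm positive instances but cannot by itself refute them, since the configuration space is infinite).

\begin{verbatim}
from collections import deque

def vass_reach(transitions, source, target, max_steps=10**6):
    """Semi-decision procedure for VASS reachability via BFS.

    transitions: dict mapping a state to a list of (delta, next_state)
    pairs, where delta is a tuple of integer counter updates.
    source/target: (state, counters) with counters non-negative.
    Returns True if a witness run is found, None if inconclusive.
    """
    start = (source[0], tuple(source[1]))
    goal = (target[0], tuple(target[1]))
    seen, queue = {start}, deque([start])
    for _ in range(max_steps):
        if not queue:
            break
        state, counters = queue.popleft()
        if (state, counters) == goal:
            return True
        for delta, nxt in transitions.get(state, []):
            succ = tuple(c + d for c, d in zip(counters, delta))
            # A step is enabled only if no counter drops below zero.
            if min(succ) >= 0 and (nxt, succ) not in seen:
                seen.add((nxt, succ))
                queue.append((nxt, succ))
    return None

# Toy 2-dimensional VASS that transfers counter 1 into counter 2.
T = {"p": [((-1, 1), "p"), ((0, 0), "q")]}
print(vass_reach(T, ("p", (3, 0)), ("q", (0, 3))))  # True
\end{verbatim}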

Related content

Deep reinforcement learning (RL) is notoriously impractical to deploy due to sample inefficiency. Meta-RL directly addresses this sample inefficiency by learning to perform few-shot learning when a distribution of related tasks is available for meta-training. While many specialized meta-RL methods have been proposed, recent work suggests that end-to-end learning in conjunction with an off-the-shelf sequential model, such as a recurrent network, is a surprisingly strong baseline. However, such claims have been controversial due to limited supporting evidence, particularly in the face of prior work establishing precisely the opposite. In this paper, we conduct an empirical investigation. While we likewise find that recurrent networks can achieve strong performance, we demonstrate that hypernetworks are crucial to maximizing their potential. Surprisingly, when combined with hypernetworks, recurrent baselines that are far simpler than existing specialized methods actually achieve the strongest performance of all methods evaluated.
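As a concrete illustration of the pairing, here is a hedged PyTorch sketch (the layer sizes, the single generated linear head, and all names are my own assumptions, not the paper's architecture): a recurrent trunk summarizes the interaction history into a task embedding, and a hypernetwork maps that embedding to the weights of a small policy head.

\begin{verbatim}
import torch
import torch.nn as nn

class HyperRecurrentPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, hid=128, emb=64):
        super().__init__()
        self.act_dim, self.emb = act_dim, emb
        self.gru = nn.GRU(obs_dim, hid, batch_first=True)
        self.feat = nn.Linear(obs_dim, emb)          # current-obs features
        self.w_gen = nn.Linear(hid, act_dim * emb)   # hypernet: weights
        self.b_gen = nn.Linear(hid, act_dim)         # hypernet: biases

    def forward(self, obs_seq):
        # obs_seq: (batch, time, obs_dim); the GRU state is the
        # few-shot task context distilled from the history.
        _, h = self.gru(obs_seq)
        h = h[-1]                                    # (batch, hid)
        W = self.w_gen(h).view(-1, self.act_dim, self.emb)
        b = self.b_gen(h)
        phi = torch.tanh(self.feat(obs_seq[:, -1]))  # (batch, emb)
        return torch.einsum("bae,be->ba", W, phi) + b  # action logits

policy = HyperRecurrentPolicy(obs_dim=8, act_dim=4)
logits = policy(torch.randn(2, 10, 8))               # shape (2, 4)
\end{verbatim}

The design point is that the task context modulates the policy multiplicatively, through generated weights, rather than being concatenated to the observation.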

Discovering governing equations from data is important to many scientific and engineering applications. Despite promising successes, existing methods are still challenged by data sparsity and noise, both of which are ubiquitous in practice. Moreover, state-of-the-art methods lack uncertainty quantification and/or are costly to train. To overcome these limitations, we propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS). We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise. We combine it with a Bayesian spike-and-slab prior -- an ideal Bayesian sparse distribution -- for effective operator selection and uncertainty quantification. We develop an expectation propagation expectation-maximization (EP-EM) algorithm for efficient posterior inference and function estimation. To overcome the computational challenge of kernel regression, we place the function values on a mesh and induce a Kronecker product construction, and we use tensor algebra methods to enable efficient computation and optimization. We show the significant advantages of KBASS on a set of benchmark ODE and PDE discovery tasks.
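The Kronecker-product trick can be illustrated with a hedged NumPy sketch (the grid sizes, RBF kernel, and all names are illustrative assumptions; KBASS's actual kernel, spike-and-slab prior, and EP-EM updates are more involved). Because the kernel on a mesh factorizes as $K = K_1 \otimes K_2$, the solve $(K+\sigma^2 I)^{-1}\mathrm{vec}(Y)$ needs only the eigendecompositions of the small factors:

\begin{verbatim}
import numpy as np

def rbf(x, ls=0.3):
    return np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ls ** 2)

# Kernel matrices of the two mesh axes; the full mesh kernel is
# K = kron(K1, K2), which is never formed explicitly.
x1, x2 = np.linspace(0, 1, 30), np.linspace(0, 1, 40)
K1, K2 = rbf(x1), rbf(x2)

# kron(K1, K2) = (Q1 kron Q2) diag(outer(l1, l2)) (Q1 kron Q2)^T
l1, Q1 = np.linalg.eigh(K1)
l2, Q2 = np.linalg.eigh(K2)

Y = np.random.randn(30, 40)       # observations on the 30 x 40 mesh
sigma2 = 1e-2                     # noise variance

# alpha = (K + sigma2*I)^{-1} vec(Y), computed factor by factor.
T = Q1.T @ Y @ Q2                 # rotate into the joint eigenbasis
T /= np.outer(l1, l2) + sigma2    # divide by the joint eigenvalues
alpha = Q1 @ T @ Q2.T             # rotate back
post_mean = K1 @ alpha @ K2       # posterior mean on the mesh
\end{verbatim}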

Matrix-variate distributions are a recent addition to the model-based clustering field, making it possible to analyze data in matrix form with complex structure, such as images and time series. Because the field is so new, the literature on clustering matrix-variate data is limited, and even less of it deals with outliers in these models. An approach for clustering matrix-variate normal data with outliers is discussed. The approach, which uses the distribution of subset log-likelihoods, extends the OCLUST algorithm to matrix-variate normal data and detects and trims outliers iteratively.
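As a concrete ingredient, the hedged SciPy sketch below scores matrix observations under a fitted matrix-variate normal density and flags the least likely one as a trimming candidate (the parameters, planted outlier, and the simple argmin rule are illustrative assumptions; OCLUST's actual criterion is based on the distribution of subset log-likelihoods).

\begin{verbatim}
import numpy as np
from scipy.stats import matrix_normal

rng = np.random.default_rng(0)
n, p, N = 4, 3, 50                 # 4 x 3 matrices, 50 observations
M = np.zeros((n, p))               # fitted mean matrix
U, V = np.eye(n), np.eye(p)        # fitted row/column covariances

X = rng.standard_normal((N, n, p))
X[0] += 6.0                        # plant one gross outlier

# Log-likelihood of every observation under the fitted component.
ll = np.array([matrix_normal.logpdf(x, mean=M, rowcov=U, colcov=V)
               for x in X])
candidate = int(np.argmin(ll))     # least likely observation
print(candidate)                   # -> 0, the planted outlier
\end{verbatim}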

We propose a new distributed-computing model, inspired by permissionless distributed systems such as Bitcoin and Ethereum, that allows studying permissionless consensus in a mathematically regular setting. As in the sleepy model of Pass and Shi, we consider a synchronous, round-by-round message-passing system in which the set of online processors changes each round. Unlike in the sleepy model, the set of processors may be infinite. Moreover, processors never fail; instead, an adversary can temporarily or permanently impersonate some processors. Finally, processors have access to a strong form of message authentication that authenticates not only the sender of a message but also the round in which the message was sent. Assuming that, in each round, the adversary impersonates fewer than half of the online processors, we present two consensus algorithms. The first ensures deterministic safety and constant expected latency, assuming a probabilistic leader-election oracle. The second ensures deterministic safety and deterministic liveness, assuming irrevocable impersonation and eventually-stabilizing participation. The model is unrealistic in full generality. However, if we assume finitely many processes and a constant set of faulty processes, the model coincides with a practically-motivated model: the static version of the sleepy model.
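The round-bound authentication assumption can be made concrete with a hedged Python sketch (the message fields and filtering rule are my own illustration; the paper's two consensus algorithms are not reproduced here): because a message is bound to both its sender and its round, an impersonating adversary cannot replay a round-$r$ message in a later round.

\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class AuthMsg:
    sender: str      # authenticated sender identity
    round: int       # authenticated sending round
    payload: object

def deliver(inbox, current_round):
    """Keep only messages authenticated for the current round."""
    return [m for m in inbox if m.round == current_round]

inbox = [AuthMsg("p1", 3, "vote A"), AuthMsg("p2", 2, "vote B")]
print(deliver(inbox, 3))   # the stale round-2 message is dropped
\end{verbatim}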

Different notions of the consistency of obligations collapse in standard deontic logic. In justification logics, which feature explicit reasons for obligations, the situation is different: the strength of these notions depends on the constant specification and on the available operations for combining different reasons. We present different consistency principles in justification logic and compare their logical strength. We propose a novel semantics for which justification logics with the explicit version of axiom D, jd, are complete for arbitrary constant specifications. We then discuss the philosophical implications with regard to some deontic paradoxes.
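For orientation, the modal consistency axiom D and one common formulation of its explicit counterpart jd from the justification-logic literature are shown below (a standard textbook rendering, offered as an assumption; the paper's precise axioms and semantics may differ):

$$\text{(D)}\quad \Box A \to \neg\Box\neg A \ \ (\text{equivalently } \neg\Box\bot), \qquad\qquad \text{(jd)}\quad t:\bot \to \bot.$$

Read jd as saying that no reason $t$ justifies a contradiction; how much this actually forbids depends on which terms the constant specification and the operations on reasons make available, which is why the different consistency principles no longer collapse.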

In this paper, we prove the following non-linear generalization of the classical Sylvester-Gallai theorem. Let $\mathbb{K}$ be an algebraically closed field of characteristic $0$, and $\mathcal{F}=\{F_1,\ldots,F_m\} \subset \mathbb{K}[x_1,\ldots,x_N]$ be a set of irreducible homogeneous polynomials of degree at most $d$ such that $F_i$ is not a scalar multiple of $F_j$ for $i\neq j$. Suppose that for any two distinct $F_i,F_j\in \mathcal{F}$, there is $k\neq i,j$ such that $F_k\in \mathrm{rad}(F_i,F_j)$. We prove that such radical SG configurations must be low dimensional. More precisely, we show that there exists a function $\lambda : \mathbb{N} \to \mathbb{N}$, independent of $\mathbb{K},N$ and $m$, such that any such configuration $\mathcal{F}$ must satisfy $$ \dim (\mathrm{span}_{\mathbb{K}}{\mathcal{F}}) \leq \lambda(d). $$ Our result confirms a conjecture of Gupta [Gup14, Conjecture 2] and generalizes the quadratic and cubic Sylvester-Gallai theorems of [S20,OS22]. Our result takes us one step closer to the first deterministic polynomial-time algorithm for the Polynomial Identity Testing (PIT) problem for depth-4 circuits of bounded top and bottom fan-ins. Our result, when combined with the Stillman-uniformity-type results of [AH20a,DLL19,ESS21], yields uniform bounds for several algebraic invariants such as projective dimension, Betti numbers and Castelnuovo-Mumford regularity of ideals generated by radical SG configurations.
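In the degree-$1$ case, membership $F_k \in \mathrm{rad}(F_i,F_j)$ for linear forms reduces to linear-span membership, so the theorem specializes to the classical statement for pairwise non-proportional linear forms $\ell_1,\ldots,\ell_m$:

$$\text{if for all } i\neq j \text{ there is } k\neq i,j \text{ with } \ell_k \in \mathrm{span}_{\mathbb{K}}(\ell_i,\ell_j), \text{ then } \dim(\mathrm{span}_{\mathbb{K}}\{\ell_1,\ldots,\ell_m\}) \leq 3,$$

where the bound of $3$ over algebraically closed fields of characteristic $0$ is Kelly's theorem; in the notation above one may thus take $\lambda(1)=3$.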

Physics-informed neural networks (PINNs) are a promising emerging method for solving differential equations. As in many other deep learning approaches, the choice of PINN design and training protocol requires careful craftsmanship. Here, we suggest a comprehensive theoretical framework that sheds light on this important problem. Leveraging an equivalence between infinitely over-parameterized neural networks and Gaussian process regression (GPR), we derive an integro-differential equation that governs PINN prediction in the large-data-set limit -- the neurally-informed equation. This equation augments the original one by a kernel term reflecting architecture choices and allows one to quantify the implicit bias induced by the network via a spectral decomposition of the source term in the original differential equation.
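For readers unfamiliar with the setup, the following hedged PyTorch sketch shows a generic PINN training objective (the toy ODE $u' + u = 0$ with $u(0)=1$, the network size, and the collocation scheme are illustrative assumptions unrelated to the paper's analysis): the loss is the mean squared differential-equation residual at sampled collocation points plus the squared boundary mismatch.

\begin{verbatim}
import torch

# Small fully-connected network representing the unknown solution u(x).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

def pinn_loss(n_col=64):
    # Residual term: sample collocation points and penalize u' + u.
    x = torch.rand(n_col, 1, requires_grad=True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = (du + u).pow(2).mean()
    # Boundary term: enforce the initial condition u(0) = 1.
    boundary = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()
    return residual + boundary

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    pinn_loss().backward()
    opt.step()
\end{verbatim}

In the paper's infinite-width limit, the minimizer of this kind of objective is characterized by GPR with the network's kernel, which is what the neurally-informed equation describes.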

Quantization has emerged as a promising direction for model compression. Recently, data-free quantization, which synthesizes images as an alternative to real training data, has been widely studied as a way to avoid privacy concerns. Existing methods use a classification loss to ensure the reliability of the synthesized images. Unfortunately, even if these images are well-classified by the pre-trained model, they still suffer from low semantics and homogenization. Intuitively, such low-semantic images are sensitive to perturbations: the pre-trained model tends to produce inconsistent outputs when the generator synthesizes an image with poor semantics. To this end, we propose Robustness-Guided Image Synthesis (RIS), a simple but effective method to enrich the semantics of synthetic images and improve image diversity, further boosting the performance of downstream data-free compression tasks. Concretely, we first introduce perturbations on the input and the model weights, then define inconsistency metrics at the feature and prediction levels between the outputs before and after perturbation. Based on the inconsistency at these two levels, we design a robustness optimization objective that enhances the semantics of synthetic images. Moreover, we make our approach diversity-aware by forcing the generator to synthesize images with small correlations in the label space. With RIS, we achieve state-of-the-art performance in various data-free quantization settings, and the approach extends to other data-free compression tasks.
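The prediction-level term can be illustrated with a hedged PyTorch sketch (the Gaussian input perturbation, the KL-divergence choice, and all names are illustrative assumptions; RIS also perturbs the model weights and adds a feature-level term and a diversity objective, omitted here):

\begin{verbatim}
import torch
import torch.nn.functional as F

def prediction_inconsistency(model, x, eps=0.05):
    """KL divergence between predictions on clean and perturbed inputs.

    A large value signals a fragile, likely low-semantic synthetic
    image; the generator can be trained to reduce this quantity.
    """
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)          # reference
    x_pert = x + eps * torch.randn_like(x)            # input perturbation
    log_p_pert = F.log_softmax(model(x_pert), dim=1)
    return F.kl_div(log_p_pert, p_clean, reduction="batchmean")
\end{verbatim}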

Basis splines enable a time-continuous feasibility check with a finite number of constraints, so that the constraints hold along the entire trajectory, as required by motion-planning applications that demand collision-free and dynamically feasible trajectories. Existing motion planners that rely on gradient-based optimization apply time scaling to implement a shrinking planning horizon. They neither guarantee a recursively feasible trajectory nor enable reaching two terminal manifold parts at different time scales. This paper proposes a nonlinear optimization problem that addresses the drawbacks of existing approaches. To this end, the spline breakpoints are included in the optimization variables. Transformations between spline bases are implemented so that a sparse problem formulation is achieved. A strategy for breakpoint removal enables convergence into a terminal manifold. The evaluation in an overtaking scenario shows the influence of the number of breakpoints on solution quality and optimization time.
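The finite-constraint property rests on the B-spline convex-hull property: a spline lies in the convex hull of its control points, so bounding the finitely many control points bounds the trajectory for every $t$. The hedged SciPy sketch below illustrates this (knots, control points, and bounds are toy values; the paper's optimizer additionally treats the breakpoints as decision variables).

\begin{verbatim}
import numpy as np
from scipy.interpolate import BSpline

k = 3                                          # cubic B-spline
breakpoints = np.array([0.0, 1.0, 2.0, 3.0])   # interior time grid
t = np.r_[[0.0] * k, breakpoints, [3.0] * k]   # clamped knot vector
c = np.array([0.0, 0.4, 0.9, 0.8, 0.6, 0.2])   # control points

assert len(c) == len(t) - k - 1                # B-spline consistency
assert c.min() >= 0.0 and c.max() <= 1.0       # finitely many constraints

spl = BSpline(t, c, k)
xs = np.linspace(0.0, 3.0, 1000)
vals = spl(xs)
# Convex-hull property: the bound now holds for the whole trajectory.
assert vals.min() >= 0.0 and vals.max() <= 1.0
\end{verbatim}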

Generalization to out-of-distribution (OOD) data is a capability natural to humans yet challenging for machines to reproduce. This is because most learning algorithms strongly rely on the i.i.d.~assumption on source/target data, which is often violated in practice due to domain shift. Domain generalization (DG) aims to achieve OOD generalization by using only source data for model learning. Since its introduction in 2011, research in DG has made great progress. In particular, intensive research in this topic has led to a broad spectrum of methodologies, e.g., those based on domain alignment, meta-learning, data augmentation, or ensemble learning, to name a few, and has covered various vision applications such as object recognition, segmentation, action recognition, and person re-identification. In this paper, we provide, for the first time, a comprehensive literature review summarizing the developments in DG for computer vision over the past decade. Specifically, we first cover the background by formally defining DG and relating it to other research fields like domain adaptation and transfer learning. Second, we conduct a thorough review of existing methods and present a categorization based on their methodologies and motivations. Finally, we conclude this survey with insights and discussions on future research directions.
