
We study the expressive power of subrecursive probabilistic higher-order calculi. More specifically, we show that endowing a very expressive deterministic calculus like G\"odel's $\mathbb{T}$ with various forms of probabilistic choice operators may result in calculi which are not equivalent with respect to the class of distributions they give rise to, although they all guarantee almost-sure termination. Along the way, we introduce a probabilistic variation of the classic reducibility technique, and we prove that the simplest form of probabilistic choice leaves the expressive power of $\mathbb{T}$ essentially unaltered. The paper ends with some observations about functional expressive power: as expected, all the considered calculi capture the functions which $\mathbb{T}$ itself represents, at least under standard notions of observation.
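To make the distinction concrete, here is a minimal Python sketch (ours, not taken from the paper) of the simplest choice primitive: a binary fair coin over distributions, represented as dictionaries of exact probabilities. Iterating a fair choice can only ever produce dyadic probabilities, which illustrates how the available choice operators constrain the class of distributions a calculus denotes.

    from fractions import Fraction

    def point(v):
        """Dirac distribution on the value v."""
        return {v: Fraction(1)}

    def choice(d1, d2):
        """Fair binary choice: the mixture (1/2) d1 + (1/2) d2."""
        out = {}
        for d, w in ((d1, Fraction(1, 2)), (d2, Fraction(1, 2))):
            for v, p in d.items():
                out[v] = out.get(v, Fraction(0)) + w * p
        return out

    d = choice(point(0), choice(point(1), point(2)))
    print(d)  # {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 4)}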

Related content

In a prophet inequality problem, $n$ boxes arrive online, each containing some value that is drawn independently from a known distribution. Upon the arrival of a box, its value is realized, and an online algorithm decides, immediately and irrevocably, whether to accept it or proceed to the next box. Clearly, an online algorithm that knows the arrival order may be more powerful than one that is unaware of the order. Despite the growing interest in the role of the arrival order in the performance of online algorithms, the effect of knowing the order has been overlooked thus far. Our goal in this paper is to quantify the loss due to unknown order. We define the order competitive ratio as the worst-case ratio between the performance of the best order-unaware and the best order-aware algorithms. We study the order competitive ratio for two objective functions, namely (i) max-expectation: maximizing the expected accepted value, and (ii) max-probability: maximizing the probability of accepting the box with the largest value. For the max-expectation objective, we give a deterministic order-unaware algorithm that achieves an order competitive ratio equal to the inverse of the golden ratio (i.e., $1/\phi \approx 0.618$). For the max-probability objective, we give a deterministic order-unaware algorithm that achieves an order competitive ratio of $\ln \frac{1}{\lambda} \approx 0.806$ (where $\lambda$ is the unique solution to $\frac{x}{1-x}= \ln \frac{1}{x}$). Both results are tight. Our algorithms are necessarily adaptive and go beyond single-threshold algorithms.
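The following toy experiment (our illustration; the paper's algorithms are adaptive and go beyond a single threshold) makes the order competitive ratio tangible for the max-expectation objective: the order-aware benchmark is computed by backward induction, while an order-unaware single-threshold rule is evaluated against its worst arrival order.

    import itertools
    import random

    random.seed(0)
    bounds = [1.0, 2.0, 0.5]          # box i draws its value from U[0, bounds[i]]

    def opt_value(order):
        # Order-aware benchmark: accept X iff X >= value-to-go of the suffix.
        v = 0.0
        for i in reversed(order):
            b = bounds[i]
            v = (v * v + b * b) / (2 * b) if v < b else v  # E[max(X, v)], X ~ U[0, b]
        return v

    def threshold_value(order, tau, trials=20000):
        # Order-unaware rule: accept the first value >= tau (forced on the last box).
        total = 0.0
        for _ in range(trials):
            for j, i in enumerate(order):
                x = random.uniform(0, bounds[i])
                if x >= tau or j == len(order) - 1:
                    total += x
                    break
        return total / trials

    orders = list(itertools.permutations(range(len(bounds))))
    for tau in (0.4, 0.6, 0.8, 1.0):
        ratio = min(threshold_value(o, tau) / opt_value(o) for o in orders)
        print(tau, round(ratio, 3))   # worst-case ratio over the unknown order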

Diffusion probabilistic models (DPMs) represent a class of powerful generative models. Despite their success, inference in DPMs is expensive, since it generally needs to iterate over thousands of timesteps. A key problem in inference is to estimate the variance in each timestep of the reverse process. In this work, we present the surprising result that both the optimal reverse variance and the corresponding optimal KL divergence of a DPM have analytic forms with respect to its score function. Building on this, we propose Analytic-DPM, a training-free inference framework that estimates the analytic forms of the variance and KL divergence using the Monte Carlo method and a pretrained score-based model. Further, to correct the potential bias caused by the score-based model, we derive both lower and upper bounds of the optimal variance and clip the estimate for a better result. Empirically, Analytic-DPM improves the log-likelihood of various DPMs, produces high-quality samples, and meanwhile achieves a 20x to 80x speedup.
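The mechanics of the estimate-then-clip step can be sketched as follows (the stand-in variance formula below is ours for illustration; the paper derives the exact analytic expressions): a moment of the score is estimated by Monte Carlo from a pretrained score model, plugged into the analytic form, and then clipped into the derived bounds to limit bias from score error.

    import numpy as np

    rng = np.random.default_rng(0)

    def estimate_and_clip(score_fn, xs, lower, upper):
        s = score_fn(xs)                                   # (N, d) scores at this timestep
        g = np.mean(np.sum(s ** 2, axis=1)) / xs.shape[1]  # MC estimate of E||s||^2 / d
        sigma2 = max(0.0, 1.0 - g)                         # stand-in analytic form
        return float(np.clip(sigma2, lower, upper))        # clip into [lower, upper]

    toy_score = lambda xs: -xs          # exact score of a standard Gaussian
    xs = rng.normal(size=(10000, 4))
    print(estimate_and_clip(toy_score, xs, 1e-4, 0.9))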

We present a classical algorithm that, for any $D$-dimensional geometrically-local quantum circuit $C$ of polylogarithmic depth, and any bit string $x \in \{0,1\}^n$, can compute the quantity $|\langle x|C|0^{\otimes n}\rangle|^2$ to within any inverse-polynomial additive error in quasi-polynomial time, for any fixed dimension $D$. This extends the result of [CC21], which proved this for $D = 3$. To see why this is interesting, note that while the $D = 1$ case follows from standard use of Matrix Product States, known for decades, the $D = 2$ case required the novel techniques introduced in [BGM19], and extending to $D = 3$ was even more laborious, requiring the further new techniques of [CC21]. Our work shows that, while handling each new dimension has historically required a new insight and new algorithmic primitives, based on the known techniques for $D \leq 3$ we can now handle any fixed dimension $D > 3$. Our algorithm uses the Divide-and-Conquer framework of [CC21] to approximate the desired quantity via several instantiations of the same problem type, each involving $D$-dimensional circuits on about half the number of qubits as the original. This division step is applied recursively until the width of the recursively decomposed circuits in the $D$-th dimension is so small that they can effectively be regarded as $(D-1)$-dimensional problems, by absorbing the small width in the $D$-th dimension into the qudit structure at the cost of a moderate increase in runtime. The main technical challenge lies in ensuring that the more involved portions of the recursive circuit decomposition and error analysis from [CC21] still hold in higher dimensions, which requires small modifications to the analysis in some places.
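Schematically, the quasi-polynomial runtime already follows from the recursion structure alone: if each division step costs a polynomial factor and halves the instance (our simplification of the paper's bookkeeping), then
\[
T(n) \;\le\; n^{c}\, T(n/2), \qquad T(O(1)) = O(1),
\]
which unrolls over $\log_2 n$ levels of recursion to $T(n) \le n^{c \log_2 n} = n^{O(\log n)}$.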

We propose a new discretization method for PDEs on moving domains in the setting of unfitted finite element methods, which is provably higher-order accurate in space and time. In the considered setting, the physical domain, which evolves essentially arbitrarily through a time-independent computational background domain, is represented by a level set function. For the time discretization, standard time stepping schemes based on finite difference approximations of the time derivative cannot be applied directly, as degrees of freedom may become active or inactive across such a finite difference stencil in time. In [Lehrenfeld, Olshanskii. An Eulerian finite element method for PDEs in time-dependent domains. ESAIM: M2AN, 53:585--614, 2019] this problem is overcome by extending the discrete solution at every time step to a sufficiently large neighborhood, so that all the degrees of freedom that are relevant at the next time step stay active. That paper, however, focuses on low-order methods. We advance these results by introducing and analyzing realizable techniques for the extension to higher order. To obtain higher-order convergence in space and time, we combine BDF time stepping with the isoparametric unfitted FEM. The latter has been used and analyzed for several stationary problems before; however, for moving domains the key ingredient of the method, the transformation of the underlying mesh, becomes time-dependent, which gives rise to some technical issues. We treat these with special care, carry out an a priori error analysis, and present two numerical experiments.
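Schematically, and suppressing the spatial discretization and the time-dependent isoparametric mesh transformation, a BDF2 step of such a method has the form
\[
\frac{3 u_h^{n} - 4\, \mathcal{E} u_h^{n-1} + \mathcal{E} u_h^{n-2}}{2 \Delta t} + \mathcal{A}_h^{n} u_h^{n} = f_h^{n} \quad \text{on the active mesh covering } \Omega(t_n),
\]
where $\mathcal{E}$ denotes the extension of a discrete solution from its own active mesh to a strip of width $\mathcal{O}(\Delta t)$ around it, so that all degrees of freedom appearing in the finite difference stencil remain active (this display is our schematic rendering of the construction described above, not the paper's precise scheme).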

We initiate the study of the parameterized complexity of $\textsf{QMA}$ problems in terms of the number of non-Clifford gates in the problem description. We show that for the problem of parameterized quantum circuit satisfiability, there exists a classical algorithm solving the problem with a runtime scaling exponentially in the number of non-Clifford gates but only polynomially with the system size. This follows from our main result: for the satisfiability problem of any quantum circuit over Clifford gates plus $t$ $T$ gates, the search space of optimal witnesses can be reduced to a stabilizer subspace isomorphic to at most $t$ qubits (independent of the system size). Furthermore, we derive new lower bounds on the $T$-count of circuit satisfiability instances and the $T$-count of the $W$-state, assuming the classical exponential time hypothesis ($\textsf{ETH}$). Lastly, we explore the parameterized complexity of the quantum non-identity check problem.
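Read schematically, the witness-space reduction yields a runtime of the shape
\[
2^{O(t)} \cdot \mathrm{poly}(n),
\]
since the Clifford part of the computation can be handled classically in polynomial time in the spirit of Gottesman-Knill, while only the stabilizer subspace isomorphic to at most $t$ qubits needs to be searched exhaustively (this display is our paraphrase of the claimed scaling, not a bound stated verbatim above).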

Evolutionary algorithms have been successfully applied to attacks on Physically Unclonable Functions (PUFs). CMA-ES is recognized as the most powerful option for a type of attack called the reliability attack. While there is no reason to doubt the performance of CMA-ES, the lack of comparisons with other metaheuristics, and of results for the challenge-response pair-based attack, leaves open the question of whether there are better-suited metaheuristics for the problem. In this paper, we take a step back and systematically evaluate several metaheuristics for the challenge-response pair-based attack on strong PUFs. Our results confirm that CMA-ES has the best performance, but we also identify several other algorithms with similar performance at smaller computational cost. More precisely, if we provide a sufficient number of challenge-response pairs to train the algorithm, various configurations show good results. Consequently, we conclude that evolutionary algorithms represent a strong option for challenge-response pair-based attacks on PUFs.
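As a concrete, self-contained stand-in for such an attack (ours for illustration: a plain $(\mu,\lambda)$-style evolution strategy on the standard linear arbiter-PUF model, not the paper's exact CMA-ES setup), the following Python sketch fits a delay vector to challenge-response pairs:

    import numpy as np

    rng = np.random.default_rng(1)
    n, n_crps = 32, 2000

    def features(challenges):
        # Linear arbiter-PUF model: r = sign(w . phi(c)),
        # phi_i(c) = prod_{j >= i} (1 - 2 c_j), plus a bias feature.
        signs = 1 - 2 * challenges
        phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
        return np.hstack([phi, np.ones((len(challenges), 1))])

    w_true = rng.normal(size=n + 1)
    C = rng.integers(0, 2, size=(n_crps, n))
    Phi = features(C)
    r = np.sign(Phi @ w_true)                       # observed CRP responses

    def fitness(w):                                 # CRP prediction accuracy
        return np.mean(np.sign(Phi @ w) == r)

    pop, parents, sigma = 60, 10, 0.3
    ws = rng.normal(size=(pop, n + 1))
    for _ in range(200):
        scores = np.array([fitness(w) for w in ws])
        mean = ws[np.argsort(scores)[-parents:]].mean(axis=0)
        ws = mean + sigma * rng.normal(size=(pop, n + 1))
        sigma *= 0.98                               # simple step-size decay
    print("training accuracy:", fitness(mean))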

We introduce the problem of finding a set $B$ of $k$ points in $[0,1]^n$ such that the expected cost of the cheapest point in $B$ that dominates a random point from $[0,1]^n$ is minimized. We study the case where the coordinates of the random points are independently distributed and the cost function is linear. This problem arises naturally in various application areas where customers' requests are satisfied based on predefined products, each corresponding to a subset of features. We show that the problem is NP-hard already for $k=2$ when each coordinate is drawn from $\{0,1\}$, and obtain an FPTAS for general fixed $k$ under mild assumptions on the distributions.
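For intuition, a tiny instance of the $\{0,1\}$-coordinate case can be brute-forced directly (our illustration; the FPTAS is of course far more efficient than enumeration):

    from itertools import combinations, product

    n, k = 3, 2
    p = [0.5, 0.3, 0.7]      # P(coordinate i of the random request equals 1)
    c = [1.0, 2.0, 1.5]      # linear cost weights

    points = list(product([0, 1], repeat=n))
    cost = lambda b: sum(ci * bi for ci, bi in zip(c, b))
    dominates = lambda b, x: all(bi >= xi for bi, xi in zip(b, x))

    def expected_cost(B):
        total = 0.0
        for x in points:
            prob = 1.0
            for i in range(n):
                prob *= p[i] if x[i] else 1 - p[i]
            feasible = [cost(b) for b in B if dominates(b, x)]
            total += prob * (min(feasible) if feasible else float("inf"))
        return total

    best = min(combinations(points, k), key=expected_cost)
    print(best, expected_cost(best))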

We show $\textsf{EOPL}=\textsf{PLS}\cap\textsf{PPAD}$. Here the class $\textsf{EOPL}$ consists of all total search problems that reduce to the End-of-Potential-Line problem, which was introduced in the works by Hubacek and Yogev (SICOMP 2020) and Fearnley et al. (JCSS 2020). In particular, our result yields a new simpler proof of the breakthrough collapse $\textsf{CLS}=\textsf{PLS}\cap\textsf{PPAD}$ by Fearnley et al. (STOC 2021). We also prove a companion result $\textsf{SOPL}=\textsf{PLS}\cap\textsf{PPADS}$, where $\textsf{SOPL}$ is the class associated with the Sink-of-Potential-Line problem.
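For readers unfamiliar with it, here is a rough rendering of End-of-Potential-Line (our informal paraphrase; the precise solution conditions are in the cited works): an instance is given by successor and predecessor circuits $S, P : \{0,1\}^n \to \{0,1\}^n$ and a potential circuit $V : \{0,1\}^n \to \{0, \dots, 2^m - 1\}$, which implicitly describe a directed graph of paths ("lines") along which $V$ is required to strictly increase, together with a designated source; a solution is a point witnessing either an end of a line or a violation of monotonicity, e.g. a vertex $x$ with $S(x) \neq x$ yet $V(S(x)) \leq V(x)$.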

A fundamental computation for statistical inference and accurate decision-making is to compute the marginal probabilities or most probable states of task-relevant variables. Probabilistic graphical models can efficiently represent the structure of such complex data, but performing these inferences is generally difficult. Message-passing algorithms, such as belief propagation, are a natural way to disseminate evidence amongst correlated variables while exploiting the graph structure, but these algorithms can struggle when the conditional dependency graphs contain loops. Here we use Graph Neural Networks (GNNs) to learn a message-passing algorithm that solves these inference tasks. We first show that the architecture of GNNs is well-matched to inference tasks. We then demonstrate the efficacy of this inference approach by training GNNs on a collection of graphical models and showing that they substantially outperform belief propagation on loopy graphs. Our message-passing algorithms generalize out of the training set to larger graphs and graphs with different structure.
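A minimal numpy sketch of one learned message-passing step conveys the mechanics (hidden sizes, names, and the update rule below are our illustrative choices, not the paper's architecture): each node carries a hidden state, messages are a small network of the two endpoint states, and a readout maps states to approximate marginals.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 8                                    # hidden state size

    def mlp(x, W1, W2):
        return np.tanh(x @ W1) @ W2

    def mp_step(h, edges, Wm1, Wm2, Wu1, Wu2):
        # Messages along directed edges, summed at the receiver, then update.
        agg = np.zeros_like(h)
        for i, j in edges:
            agg[j] += mlp(np.concatenate([h[i], h[j]]), Wm1, Wm2)
        return mlp(np.concatenate([h, agg], axis=1), Wu1, Wu2)

    def readout(h, Wr):                      # approximate binary marginals
        logits = h @ Wr
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    N = 4                                    # a 4-cycle: a loopy graph
    edges = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2), (3, 0), (0, 3)]
    h = rng.normal(size=(N, d))
    Wm1, Wm2 = 0.3 * rng.normal(size=(2 * d, d)), 0.3 * rng.normal(size=(d, d))
    Wu1, Wu2 = 0.3 * rng.normal(size=(2 * d, d)), 0.3 * rng.normal(size=(d, d))
    Wr = 0.3 * rng.normal(size=(d, 2))
    for _ in range(5):                       # a few message-passing rounds
        h = mp_step(h, edges, Wm1, Wm2, Wu1, Wu2)
    print(readout(h, Wr))                    # untrained; training fits these to marginals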

Embedding methods which enforce a partial order or lattice structure over the concept space, such as Order Embeddings (OE) (Vendrov et al., 2016), are a natural way to model transitive relational data (e.g. entailment graphs). However, OE learns a deterministic knowledge base, limiting the expressiveness of queries and the ability to use uncertainty for both prediction and learning (e.g. learning from expectations). Probabilistic extensions of OE (Lai and Hockenmaier, 2017) have provided the ability to somewhat calibrate these denotational probabilities while retaining the consistency and inductive bias of ordered models, but they lack the ability to model the negative correlations found in real-world knowledge. In this work we show that a broad class of models that assign probability measures to OE can never capture negative correlation. This motivates our construction of a novel box lattice and accompanying probability measure that capture anticorrelation and even disjoint concepts, while still providing the benefits of probabilistic modeling, such as the ability to perform rich joint and conditional queries over arbitrary sets of concepts, and both learning from and predicting calibrated uncertainty. We show improvements over previous approaches in modeling the Flickr and WordNet entailment graphs, and investigate the power of the model.
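The negative-correlation point is easy to see with boxes (our toy illustration under a uniform base measure on $[0,1]^d$): a concept's probability is the volume of its box, the lattice meet is box intersection, and two disjoint boxes give $P(A \wedge B) = 0 < P(A)P(B)$, something a cone-based order embedding cannot express.

    import numpy as np

    def vol(lo, hi):
        # Volume of an axis-aligned box; zero if the box is empty.
        return float(np.prod(np.clip(hi - lo, 0.0, None)))

    def meet(a_lo, a_hi, b_lo, b_hi):
        # Lattice meet of two boxes = their intersection.
        return np.maximum(a_lo, b_lo), np.minimum(a_hi, b_hi)

    a_lo, a_hi = np.array([0.0, 0.0]), np.array([0.4, 1.0])
    b_lo, b_hi = np.array([0.6, 0.0]), np.array([1.0, 1.0])
    pa, pb = vol(a_lo, a_hi), vol(b_lo, b_hi)
    pab = vol(*meet(a_lo, a_hi, b_lo, b_hi))
    print(pa, pb, pab)            # 0.4 0.4 0.0  ->  P(A,B) < P(A) P(B)
    print("P(B|A) =", pab / pa)   # conditional query over disjoint concepts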
