
Session types enable the static verification of message-passing programs. A session type specifies a channel's protocol as a sequence of messages. Prior work established a minimality result: every process typable with standard session types can be compiled down to a process typable using minimal session types, i.e., session types without the sequencing construct. This result justifies session types in terms of themselves; it holds for a higher-order session \pi-calculus, in which values are abstractions (functions from names to processes). This paper establishes an analogous minimality result, now for the session \pi-calculus, the language in which values are names and for which session types have been more widely studied. This new minimality result for the session \pi-calculus can be obtained by composing existing results. We develop associated optimizations of this result and establish its static and dynamic correctness.
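
For intuition, consider a minimal sketch of what the decomposition does (the protocol and notation here are illustrative, not taken from the paper). A standard session type uses sequencing to order the exchanges on a single channel:

$$ S \;=\; ?\mathsf{int}.\,!\mathsf{bool}.\,\mathbf{end} $$

A minimality result compiles a process typed with $S$ into one whose channels each carry a minimal session type, i.e., a single exchange such as $?\mathsf{int}.\mathbf{end}$ or $!\mathsf{bool}.\mathbf{end}$, with the compilation inserting name passing so that the original order of the two exchanges is still enforced.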

Related content

Processing is the name of an open-source programming language and its accompanying integrated development environment (IDE). Processing is used in the electronic art and visual design communities to teach the fundamentals of programming, and it is employed in a large number of new media and interactive art works.

Our aim is to develop dynamic data structures that support $k$-nearest neighbors ($k$-NN) queries for a set of $n$ point sites in $O(f(n) + k)$ time, where $f(n)$ is some polylogarithmic function of $n$. The key component is a general query algorithm that allows us to find the $k$-NN spread over $t$ substructures simultaneously, thus reducing an $O(tk)$ term in the query time to $O(k)$. Combining this technique with the logarithmic method allows us to turn any static $k$-NN data structure into a data structure supporting both efficient insertions and queries. For the fully dynamic case, this technique allows us to recover the deterministic, worst-case, $O(\log^2 n/\log\log n + k)$ query time previously claimed for the Euclidean distance, while preserving the polylogarithmic update times. We adapt this data structure to also support fully dynamic \emph{geodesic} $k$-NN queries among a set of sites in a simple polygon. For this purpose, we design a shallow-cutting-based, deletion-only $k$-NN data structure. More generally, we obtain a dynamic $k$-NN data structure for any type of distance function for which we can build vertical shallow cuttings. We apply all of our methods in the plane for the Euclidean distance, the geodesic distance, and general, constant-complexity, algebraic distance functions.
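
To illustrate the flavor of the merging step, here is a minimal sketch in Python, assuming each substructure exposes a (hypothetical) iterator that reports its own neighbors in non-decreasing distance order. A plain heap merge of $t$ such streams stops after $k$ reported sites rather than collecting $k$ candidates per substructure; it still pays an $O(t + k\log t)$ merge cost, which the query algorithm in the paper improves upon.

```python
import heapq

def knn_over_substructures(iters, k):
    """Merge t sorted nearest-neighbor streams and return the k overall
    nearest sites. Each element of `iters` is an iterator yielding
    (distance, site) pairs in non-decreasing distance order
    (a hypothetical interface, for illustration only)."""
    heap = []
    for i, it in enumerate(iters):
        first = next(it, None)
        if first is not None:
            heap.append((first[0], i, first[1], it))
    heapq.heapify(heap)
    result = []
    while heap and len(result) < k:
        dist, i, site, it = heapq.heappop(heap)
        result.append((dist, site))
        nxt = next(it, None)
        if nxt is not None:
            heapq.heappush(heap, (nxt[0], i, nxt[1], it))
    return result
```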

By adapting Salomaa's complete proof system for equality of regular expressions under the language semantics, Milner (1984) formulated a sound proof system for bisimilarity of regular expressions under the process interpretation he introduced. He asked whether this system is complete. Proof-theoretic arguments attempting to show completeness of this equational system are complicated by the presence of a non-algebraic rule for solving fixed-point equations by using star iteration. We characterize the derivational power that the fixed-point rule adds to the purely equational part $\text{Mil}^{-}$ of Milner's system $\text{Mil}$: it corresponds to the power of coinductive proofs over $\text{Mil}^{-}$ that have the form of finite process graphs with the loop existence and elimination property $\text{LEE}$. We define a variant system $\text{cMil}$ by replacing the fixed-point rule in $\text{Mil}$ with a rule that permits $\text{LEE}$-shaped circular derivations in $\text{Mil}^{-}$ from previously derived equations as premises. With this rule alone we also define the variant system $\text{CLC}$ for merely combining $\text{LEE}$-shaped coinductive proofs over $\text{Mil}^{-}$. We show that both $\text{cMil}$ and $\text{CLC}$ have proof interpretations in $\text{Mil}$, and vice versa. As this correspondence links, in both directions, derivability in $\text{Mil}$ with derivation trees of process graphs, it widens the space for graph-based approaches to finding a completeness proof of Milner's system. This report is the extended version of a paper with the same title presented at CALCO 2021.
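
The non-algebraic rule in question is the fixed-point rule, which in one common formulation reads:

$$ \frac{x = e \cdot x + f \qquad e \ \text{does not have the empty-word property}}{x = e^{*} \cdot f} $$

The side condition is a guardedness requirement that makes the rule sound under the process interpretation; it is also what resists purely equational treatment, and its derivational power is exactly what the $\text{LEE}$-shaped coinductive proofs above are meant to capture.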

Approximation of the value functions in value-based deep reinforcement learning systems induces overestimation bias, resulting in suboptimal policies. We show that when the reinforcement signals received by the agents have high variance, deep actor-critic approaches that overcome the overestimation bias lead to a substantial underestimation bias. We introduce a parameter-free, novel deep Q-learning variant that reduces this underestimation bias in continuous control. Our Q-value update rule computes the critic objective as a linear combination of the approximate critic functions with fixed weights, thereby integrating the concepts of Clipped Double Q-learning and Maxmin Q-learning. We test the performance of our improvement on a set of MuJoCo and Box2D continuous control tasks and find that it improves on the state of the art and outperforms the baseline algorithms in the majority of the environments.
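
As a concrete illustration, the following sketch computes such a weighted target for two critics. The particular combination and the weight `beta` are illustrative choices, not necessarily the paper's exact update rule.

```python
import torch

def critic_target(reward, not_done, gamma, q1_next, q2_next, beta=0.75):
    """Hypothetical weighted critic target: a fixed linear combination of
    the pessimistic Clipped Double Q estimate (elementwise min) and the
    average of the two critics, which tempers the underestimation bias.
    A sketch of the idea in the abstract, not the authors' exact rule."""
    pessimistic = torch.min(q1_next, q2_next)   # Clipped Double Q-learning
    moderate = 0.5 * (q1_next + q2_next)        # less pessimistic estimate
    q_next = beta * pessimistic + (1.0 - beta) * moderate
    return reward + not_done * gamma * q_next
```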

We prove tight H\"olderian error bounds for all $p$-cones. Surprisingly, the exponents differ in several ways from those previously conjectured; moreover, they illuminate $p$-cones as a curious example of a class of objects that possess properties in 3 dimensions that they do not possess in 4 or more. Using our error bounds, we analyse least squares problems with $p$-norm regularization, where our results enable us to compute the corresponding KL exponents for previously inaccessible values of $p$. Another application is a (relatively) simple proof that most $p$-cones are neither self-dual nor homogeneous. Our error bounds are obtained under the framework of facial residual functions, which we expand by establishing, for general cones, an optimality criterion under which the resulting error bound must be tight.
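
For reference, a H\"olderian error bound with exponent $\gamma \in (0,1]$ for closed convex sets $K_1, K_2$, in one standard formulation, asserts that for every bounded set $B$ there exists $\kappa > 0$ such that

$$ \mathrm{dist}(x,\, K_1 \cap K_2) \;\le\; \kappa \big(\mathrm{dist}(x, K_1) + \mathrm{dist}(x, K_2)\big)^{\gamma} \qquad \text{for all } x \in B. $$

Tightness means that the exponent $\gamma$ cannot be improved, and the optimality criterion mentioned above gives a checkable condition certifying this.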

Modern high-dimensional point process data, especially those from neuroscience experiments, often involve observations from multiple conditions and/or experiments. Networks of interactions corresponding to these conditions are expected to share many edges, but also to exhibit unique, condition-specific ones. However, the degree of similarity among networks from different conditions is generally unknown. Existing approaches for multivariate point processes do not take these structures into account and do not provide inference for jointly estimated networks. To address these needs, we propose a joint estimation procedure for networks of high-dimensional point processes that incorporates easy-to-compute weights in order to data-adaptively encourage similarity between the estimated networks. We also propose a powerful hierarchical multiple testing procedure for the edges of all estimated networks, which takes into account the data-driven similarity structure of the multi-experiment networks. Compared to conventional multiple testing procedures, our proposed procedure greatly reduces the number of tests and results in improved power, while tightly controlling the family-wise error rate. Unlike existing procedures, our method is also free of assumptions on the dependency between tests, offers flexibility in how p-values are calculated along the hierarchy, and is robust to misspecification of the hierarchical structure. We verify our theoretical results via simulation studies and demonstrate the application of the proposed procedure using neuronal spike train data.
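
The following sketch shows one plausible form of such a weighted joint objective for two networks, written with NumPy. The fusion penalty and weights are illustrative; the paper's estimator may differ in detail.

```python
import numpy as np

def joint_objective(A1, A2, loss1, loss2, lam, w):
    """Hypothetical joint-estimation objective for two connectivity
    matrices A1 and A2: separate model-fit terms plus a weighted fusion
    penalty. Large data-adaptive weights w[j, k] shrink the difference
    of edge (j, k) across experiments; small weights leave
    condition-specific edges alone. Illustration only."""
    fusion = np.sum(w * np.abs(A1 - A2))                    # encourage shared edges
    sparsity = lam * (np.abs(A1).sum() + np.abs(A2).sum())  # sparse networks
    return loss1(A1) + loss2(A2) + fusion + sparsity
```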

We consider the control of McKean-Vlasov dynamics whose coefficients have mean field interactions in the state and control. We show that for a class of linear-convex mean field control problems, the unique optimal open-loop control admits the optimal 1/2-H\"{o}lder regularity in time. Consequently, we prove that the value function can be approximated by one with piecewise constant controls and discrete-time state processes arising from Euler-Maruyama time stepping, up to an order 1/2 error, and the optimal control can be approximated up to an order 1/4 error. These results are novel even for the case without mean field interaction.
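
To make the discretization concrete, here is a minimal particle-based Euler-Maruyama sketch with a piecewise constant control, where the mean field term is approximated by the empirical average of the particles. All names and signatures are illustrative assumptions.

```python
import numpy as np

def euler_maruyama_mv(x0, controls, drift, sigma, T, rng, n_particles=1000):
    """Simulate a particle approximation of a controlled McKean-Vlasov
    SDE dX_t = drift(X_t, mu_t, a_t) dt + sigma dW_t, where mu_t is
    replaced by the empirical mean of the particles and controls[i] is
    the constant action on time step i. Illustrative sketch only."""
    n_steps = len(controls)
    dt = T / n_steps
    x = np.full(n_particles, x0, dtype=float)
    for i in range(n_steps):
        mu = x.mean()                 # empirical mean-field interaction
        a = controls[i]               # piecewise constant control
        dw = rng.normal(0.0, np.sqrt(dt), size=n_particles)
        x = x + drift(x, mu, a) * dt + sigma * dw
    return x
```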

Replanners are efficient methods for solving non-deterministic planning problems. Despite showing good scalability, existing replanners often fail to solve problems involving a large number of misleading plans, i.e., weak plans that do not lead to strong solutions but, owing to their minimal length, are likely to be found at every replanning iteration. The poor performance of replanners on such problems is due to their all-outcome determinization: when compiling from a non-deterministic to a classical domain, they include all compiled classical operators in a single deterministic domain, which leads replanners to continually generate misleading plans. We introduce an offline replanner, called Safe-Planner (SP), that relies on a single-outcome determinization to compile a non-deterministic domain into a set of classical domains, and on ordering heuristics for ranking the obtained classical domains. The proposed single-outcome determinization and heuristics allow for alternating between different classical domains. We show experimentally that this approach allows SP to avoid generating misleading plans and instead generate weak plans that directly lead to strong solutions. The experiments show that SP outperforms state-of-the-art non-deterministic solvers by solving a broader range of problems. We also validate the practical utility of SP in real-world non-deterministic robotic tasks.
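
The following schematic loop conveys the idea: compile the non-deterministic domain into several classical domains via single-outcome determinization, rank them with an ordering heuristic, and replan from every outcome not yet covered by the policy. All helper functions are hypothetical placeholders, not SP's actual implementation.

```python
def safe_planner_sketch(nd_domain, problem, rank, plan_classical):
    """Offline replanning in the spirit of Safe-Planner (SP).
    `determinize_single_outcome`, `plan_classical`, and the `problem`
    interface are hypothetical placeholders for illustration."""
    domains = sorted(determinize_single_outcome(nd_domain), key=rank)
    policy, frontier = {}, [problem.initial_state]
    while frontier:
        s = frontier.pop()
        if s in policy or problem.is_goal(s):
            continue
        plan = next((p for d in domains            # best-ranked domain first
                     if (p := plan_classical(d, s, problem.goal)) is not None),
                    None)
        if plan is None:
            continue  # dead end; a full replanner would backtrack here
        for action in plan:
            policy[s] = action
            outcomes = problem.apply(s, action)    # all non-det. outcomes
            frontier.extend(outcomes[1:])          # replan from the rest
            s = outcomes[0]                        # follow expected outcome
    return policy
```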

We consider the task of estimating a conditional density using i.i.d. samples from a joint distribution, which is a fundamental problem with applications in both classification and uncertainty quantification for regression. For joint density estimation, minimax rates have been characterized for general density classes in terms of uniform (metric) entropy, a well-studied notion of statistical capacity. When applying these results to conditional density estimation, the use of uniform entropy -- which is infinite when the covariate space is unbounded and suffers from the curse of dimensionality -- can lead to suboptimal rates. Consequently, minimax rates for conditional density estimation cannot be characterized using these classical results. We resolve this problem for well-specified models, obtaining matching (within logarithmic factors) upper and lower bounds on the minimax Kullback--Leibler risk in terms of the empirical Hellinger entropy for the conditional density class. The use of empirical entropy allows us to appeal to concentration arguments based on local Rademacher complexity, which -- in contrast to uniform entropy -- leads to matching rates for large, potentially nonparametric classes and captures the correct dependence on the complexity of the covariate space. Our results require only that the conditional densities are bounded above, and do not require that they are bounded below or otherwise satisfy any tail conditions.
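
Concretely, in the well-specified setting the quantity being bounded is (up to logarithmic factors) the minimax risk

$$ \inf_{\hat f}\; \sup_{f^{*} \in \mathcal{F}}\; \mathbb{E}\Big[\mathrm{KL}\big(f^{*}(\cdot \mid X) \,\|\, \hat f(\cdot \mid X)\big)\Big], $$

where $\hat f$ ranges over estimators built from $n$ i.i.d. pairs $(X_i, Y_i)$ with $Y_i \mid X_i \sim f^{*}(\cdot \mid X_i)$, and the matching upper and lower bounds are expressed through the empirical Hellinger entropy of the class $\mathcal{F}$.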

In a conventional voice conversion (VC) framework, a VC model is often trained with a clean dataset consisting of speech data carefully recorded and selected by minimizing background interference. However, collecting such a high-quality dataset is expensive and time-consuming. Leveraging crowd-sourced speech data in training is more economical. Moreover, for some real-world VC scenarios such as VC in video and VC-based data augmentation for speech recognition systems, the background sounds themselves are also informative and need to be maintained. In this paper, to explore VC with the flexibility of handling background sounds, we propose a noisy-to-noisy (N2N) VC framework composed of a denoising module and a VC module. With the proposed framework, we can convert the speaker's identity while preserving the background sounds. Both objective and subjective evaluations are conducted, and the results reveal the effectiveness of the proposed framework.
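
The pipeline itself is simple to express; the sketch below shows the intended data flow, with `denoise` and `convert` as hypothetical module interfaces (and assuming the separated signals are time-aligned so they can be re-mixed).

```python
def n2n_voice_conversion(noisy_source, target_speaker, denoise, convert):
    """Noisy-to-noisy (N2N) VC data flow: separate speech from
    background, convert the speaker identity of the clean speech only,
    then mix the untouched background back in so it is preserved.
    `denoise` and `convert` are hypothetical module interfaces."""
    clean_speech, background = denoise(noisy_source)   # denoising module
    converted = convert(clean_speech, target_speaker)  # VC module
    return converted + background                      # background preserved
```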

To rapidly learn a new task, it is often essential for agents to explore efficiently -- especially when performance matters from the first timestep. One way to learn such behaviour is via meta-learning. Many existing methods, however, rely on dense rewards for meta-training and can fail catastrophically if the rewards are sparse. Without a suitable reward signal, the need for exploration during meta-training is exacerbated. To address this, we propose HyperX, which uses novel reward bonuses for meta-training to explore in approximate hyper-state space (where hyper-states represent the environment state and the agent's task belief). We show empirically that HyperX meta-learns better task exploration and adapts more successfully to new tasks than existing methods.
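
One way to realize such a bonus, shown below as an illustrative sketch, is to reward prediction error on the concatenated hyper-state, in the style of random network distillation. The networks and the exact bonus form are assumptions; the paper's bonuses differ in detail.

```python
import numpy as np

def hyper_state_bonus(state, belief, predictor, target, scale=1.0):
    """Illustrative exploration bonus on hyper-states (environment state
    concatenated with the agent's task belief): the squared error of a
    trained `predictor` network against a fixed random `target` network
    is high on rarely visited hyper-states. Hypothetical sketch."""
    hyper_state = np.concatenate([state, belief])
    error = predictor(hyper_state) - target(hyper_state)
    return scale * float(np.mean(error ** 2))
```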
