
In this chapter, we study the consistency of the value of information, a semantic metric that purports to identify the right piece of information in networked control tasks, in a lossy and delayed communication regime. Our analysis begins with state estimation and subsequently extends to feedback control. To that end, we formulate a causal tradeoff between the packet rate and the mean square error. Associated with this tradeoff, we demonstrate the existence of an optimal policy profile, comprising a symmetric threshold scheduling policy based on the value of information for the encoder and a non-Gaussian linear estimation policy for the decoder. Our structural results assert that the scheduling policy is expressible in terms of $3d-1$ variables related to the source and the channel, where $d$ is the time delay, and that the estimation policy incorporates no residual related to signaling. We then construct an optimal control policy by exploiting the separation principle.
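As a toy illustration of this threshold structure, the following sketch simulates a scalar Gauss-Markov source whose encoder transmits only when a quadratic value-of-information proxy exceeds a threshold, while the decoder runs a linear predictor with no signaling residual. The scalar source model, the quadratic proxy, and the threshold value are assumptions for illustration, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
a, w_std, T = 0.9, 1.0, 200      # scalar Gauss-Markov source: x[t+1] = a x[t] + w[t]
theta = 2.0                       # scheduling threshold (a tuning knob, not from the paper)

x, xhat = 0.0, 0.0                # true state and the decoder's estimate
sent = 0
for t in range(T):
    # Encoder: a quadratic proxy for the value of information -- the squared
    # innovation between the true state and the decoder-side prediction.
    voi = (x - a * xhat) ** 2
    if voi >= theta:              # symmetric threshold policy: transmit only when informative
        xhat = x                  # decoder receives the state (ideal channel, no delay here)
        sent += 1
    else:
        xhat = a * xhat           # decoder: linear prediction, no signaling residual
    x = a * x + w_std * rng.standard_normal()

print(f"packet rate ~ {sent / T:.2f}")
```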

Related Content

As the capabilities of large language models (LLMs) have expanded dramatically, aligning these models with human values presents a significant challenge, posing potential risks during deployment. Traditional alignment strategies rely heavily on human intervention, such as Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF), or on the self-alignment capacities of LLMs, which typically require a strong LLM's emergent ability to improve its own flawed initial answers. To address these challenges, we propose a novel self-alignment method that utilizes a Chain of Thought (CoT) approach, termed AlignCoT. This method encompasses stages of Question Analysis, Answer Guidance, and Safe Answer production. It is designed to enable LLMs to generate high-quality, safe responses throughout various stages of their development. Furthermore, we introduce the Mixture of insighTful Experts (MoTE) architecture, which applies the mixture of experts to enhance each component of the AlignCoT process, markedly increasing alignment efficiency. The MoTE approach not only outperforms existing methods in aligning LLMs with human values but also highlights the benefits of using self-generated data, revealing the dual benefits of improved alignment and training efficiency.
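A minimal sketch of the three stages named above, assuming only a generic `llm` callable (prompt in, completion out); the prompt wording is illustrative and not taken from the paper:

```python
def align_cot(question: str, llm) -> str:
    """Sketch of the three AlignCoT stages named in the abstract.
    `llm` is a placeholder callable (prompt -> completion)."""
    # Stage 1: Question Analysis -- surface the intent and potential risks.
    analysis = llm(f"Analyze the intent and safety risks of this question:\n{question}")
    # Stage 2: Answer Guidance -- derive constraints the answer must respect.
    guidance = llm(f"Given this analysis, outline how to answer safely:\n{analysis}")
    # Stage 3: Safe Answer -- produce the final response under that guidance.
    return llm(f"Question: {question}\nGuidance: {guidance}\nWrite a safe, helpful answer.")
```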

Posterior sampling, i.e., using the exponential mechanism to sample from the posterior distribution, provides $\varepsilon$-pure differential privacy (DP) guarantees and does not suffer from the potentially unbounded privacy breaches introduced by $(\varepsilon,\delta)$-approximate DP. In practice, however, one needs to apply approximate sampling methods such as Markov chain Monte Carlo (MCMC), thus re-introducing the unappealing $\delta$-approximation error into the privacy guarantees. To bridge this gap, we propose the Approximate SAample Perturbation (ASAP) algorithm, which perturbs an MCMC sample with noise proportional to its Wasserstein-infinity ($W_\infty$) distance from a reference distribution that satisfies pure DP or pure Gaussian DP (i.e., $\delta=0$). We then leverage a Metropolis-Hastings algorithm to generate the sample and prove that the algorithm converges in $W_\infty$ distance. We show that, by combining our new techniques with a localization step, we obtain the first nearly linear-time algorithm that achieves the optimal rates for the DP-ERM problem with strongly convex and smooth losses.
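The core perturbation step can be sketched as follows, where `w_inf` stands for the $W_\infty$ distance to the pure-DP reference and `sigma` for a base Gaussian noise scale; the exact noise calibration in the paper differs, so this is only a schematic:

```python
import numpy as np

def asap_release(mcmc_sample: np.ndarray, w_inf: float, sigma: float,
                 rng=np.random.default_rng()) -> np.ndarray:
    """Schematic of the ASAP idea: an approximate (MCMC) sample is perturbed
    with noise whose scale accounts for its W-infinity distance `w_inf` to a
    reference distribution that is pure (Gaussian) DP."""
    # W_inf-closeness means the sample can be coupled to a reference sample
    # within distance w_inf almost surely, so inflating the noise scale by
    # w_inf (a simplification of the paper's calibration) masks the
    # approximation error instead of paying for it with a delta term.
    return mcmc_sample + rng.normal(scale=sigma + w_inf, size=mcmc_sample.shape)
```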

Behavioural distances of transition systems modelled as coalgebras for endofunctors generalize the traditional notions of behavioural equivalence to a quantitative setting, in which states are equipped with a measure of how (dis)similar they are. Endowing transition systems with such distances essentially relies on the ability to lift functors describing the one-step behavior of the transition systems to the category of pseudometric spaces. We consider the Kantorovich lifting of a functor on quantale-valued relations, which subsumes equivalences, preorders and (directed) metrics. We use tools from fibred category theory, which allow one to see the Kantorovich lifting as arising from an appropriate fibred adjunction. Our main contributions are compositionality results for the Kantorovich lifting, where we show that the lifting of a composed functor coincides with the composition of the liftings. In addition, we describe how to lift distributive laws in the case where one of the two functors is polynomial. These results are essential ingredients for adopting up-to-techniques in the case of quantale-valued behavioural distances. Up-to techniques are a well-known coinductive technique for efficiently showing lower bounds for behavioural distances. We conclude by illustrating the results of our paper in two case studies.
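For the finite distribution functor, the Kantorovich lifting reduces to the familiar optimal-transport linear program; the following sketch computes the lifted distance between two distributions on a finite pseudometric space (a standard computation shown for concreteness, not the paper's fibred construction):

```python
import numpy as np
from scipy.optimize import linprog

def kantorovich(mu, nu, d):
    """Kantorovich (Wasserstein-1) distance between finite distributions mu, nu
    on an n-point space with pseudometric matrix d -- the canonical instance of
    the Kantorovich lifting for the finite distribution functor.
    Solves the primal transport LP: min <gamma, d> s.t. marginals are mu, nu."""
    n = len(mu)
    c = d.reshape(-1)                         # cost of moving mass i -> j
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1        # row marginals equal mu
        A_eq[n + i, i::n] = 1                 # column marginals equal nu
    b_eq = np.concatenate([mu, nu])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.fun

# two distributions on a 3-point space with the discrete metric
d = 1.0 - np.eye(3)
print(kantorovich(np.array([0.5, 0.5, 0.0]), np.array([0.0, 0.5, 0.5]), d))  # 0.5
```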

We consider the synthesis problem on timed automata with B\"uchi objectives, where the delay choices made by a controller are subject to small perturbations. Usually, the controller needs to avoid punctual guards, such as tests of the equality of a clock to a constant. In this work, we generalize to a robustness setting that allows punctual transitions in the automaton to be taken by the controller without perturbation. In order to characterize cycles that resist perturbations in our setting, we introduce a new structural requirement on the reachability relation along an accepting cycle of the automaton. This property is formulated on the region abstraction, and generalizes the existing characterization of winning cycles in the absence of punctual guards. We show that the problem remains within PSPACE despite the presence of punctual guards.
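The obstruction posed by punctual guards can be seen in one line: a delay chosen by the controller survives a perturbation of magnitude $\delta$ only if its entire $\delta$-neighborhood satisfies the guard, which is impossible when the guard is punctual. A toy check of this fact (not the paper's region-based construction):

```python
def robust_delay_exists(lo: float, hi: float, delta: float) -> bool:
    """Can the controller pick a delay t whose +/-delta perturbation still
    satisfies the guard lo <= x <= hi?  Requires [t - delta, t + delta] to fit
    inside [lo, hi], i.e. the guard must be at least 2*delta wide."""
    return hi - lo >= 2 * delta

print(robust_delay_exists(1.0, 1.0, 0.1))  # False: a punctual guard is never robust
print(robust_delay_exists(1.0, 2.0, 0.1))  # True: a wide guard absorbs the perturbation
```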

With the proliferation of large language models (LLMs), the comprehensive alignment of such models across multiple tasks has emerged as a critical area of research. Existing alignment methodologies primarily address a single task, such as multi-turn dialogue, coding, mathematical problem-solving, or tool usage. However, AI-driven products that leverage language models usually necessitate a fusion of these abilities to function effectively in real-world scenarios. Moreover, the considerable computational resources required for proper alignment of LLMs underscore the need for a more robust, efficient, and encompassing approach to multi-task alignment, ensuring improved generative performance. In response to these challenges, we introduce a novel technique termed Mixture-of-Instructions (MoI), which employs a strategy of instruction concatenation combined with diverse system prompts to boost the alignment efficiency of language models. We have also compiled a diverse set of seven benchmark datasets to rigorously evaluate the alignment efficacy of the MoI-enhanced language model. Our methodology was applied to the open-source Qwen-7B-chat model, culminating in the development of Qwen-SFT-MoI. This enhanced model demonstrates significant advancements in generative capabilities across coding, mathematics, and tool use tasks.
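A minimal sketch of the instruction-concatenation idea, packing instruction/response pairs from different tasks under a randomly drawn system prompt; the field names and packing format here are assumptions, not the paper's exact recipe:

```python
import random

def mixture_of_instructions(examples, system_prompts, k=3, rng=random.Random(0)):
    """Pack k instruction/response pairs from different tasks into one
    training sample under a randomly drawn system prompt."""
    batch = rng.sample(examples, k)
    system = rng.choice(system_prompts)
    turns = "\n\n".join(f"### Instruction:\n{e['instruction']}\n### Response:\n{e['response']}"
                        for e in batch)
    return f"<<SYS>>{system}<</SYS>>\n\n{turns}"

examples = [
    {"instruction": "Write a Python one-liner to reverse a list.", "response": "xs[::-1]"},
    {"instruction": "Compute 17 * 23.", "response": "391"},
    {"instruction": "Name the capital of France.", "response": "Paris"},
]
print(mixture_of_instructions(examples, ["You are a helpful assistant."], k=2))
```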

The Hayashi-Yoshida (\HY)-estimator exhibits an intrinsic, telescoping property that leads to an often overlooked computational bias, which we denote formulaic (or intrinsic) bias. This formulaic bias results in data loss by cancelling out potentially relevant data points, the nonextant data points. This paper attempts to formalize and quantify the data loss arising from this bias. In particular, we highlight the existence of nonextant data points via a concrete example, and prove necessary and sufficient conditions for the telescoping property to induce this type of formulaic bias. Since this type of bias is nonexistent when the inputs, i.e., observation times, $\Pi^{(1)} :=(t_i^{(1)})_{i=0,1,\ldots}$ and $\Pi^{(2)} :=(t_j^{(2)})_{j=0,1,\ldots}$, are synchronous, we introduce the $(a,b)$-asynchronous adversary. This adversary generates inputs $\Pi^{(1)}$ and $\Pi^{(2)}$ according to two independent homogeneous Poisson processes with rates $a>0$ and $b>0$, respectively. We address the foundational question of the minimal cumulative average data-point loss and determine the rates $a$ and $b$ that attain it. We prove that for equal rates $a=b$, the minimal average cumulative data loss over both inputs is attained and amounts to 25\%. We present an algorithm, based on our theorem, for computing the exact number of nonextant data points given inputs $\Pi^{(1)}$ and $\Pi^{(2)}$, and suggest alternative methods. Finally, we use simulated data to empirically compare the (cumulative) average data loss of the (\HY)-estimator.
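For reference, the plain (\HY)-estimator sums increment products over pairs of overlapping observation intervals; the telescoping inside this double sum is what cancels the nonextant data points discussed above. A direct implementation of the standard estimator (not the paper's counting algorithm):

```python
import numpy as np

def hayashi_yoshida(t1, x, t2, y):
    """Hayashi-Yoshida covariance estimator for two asynchronously observed
    paths: sum of increment products over all pairs of observation intervals
    (t1[i-1], t1[i]] and (t2[j-1], t2[j]] that overlap."""
    hy = 0.0
    for i in range(1, len(t1)):
        for j in range(1, len(t2)):
            overlap = min(t1[i], t2[j]) - max(t1[i - 1], t2[j - 1])
            if overlap > 0:
                hy += (x[i] - x[i - 1]) * (y[j] - y[j - 1])
    return hy

rng = np.random.default_rng(1)
t1 = np.sort(rng.uniform(0, 1, 20)); t2 = np.sort(rng.uniform(0, 1, 20))
x = np.cumsum(rng.standard_normal(20)); y = np.cumsum(rng.standard_normal(20))
print(hayashi_yoshida(t1, x, t2, y))
```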

Exponential histograms, with bins of the form $\left\{ \left(\rho^{k-1},\rho^{k}\right]\right\} _{k\in\mathbb{Z}}$, for $\rho>1$, straightforwardly summarize the quantiles of streaming data sets (Masson et al. 2019). While they guarantee the relative accuracy of their estimates, they appear to use only $\log n$ values to summarize $n$ inputs. We study four aspects of exponential histograms -- size, accuracy, occupancy, and largest gap size -- when inputs are i.i.d. $\mathrm{Exp}\left(\lambda\right)$ or i.i.d. $\mathrm{Pareto}\left(\nu,\beta\right)$, taking $\mathrm{Exp}\left(\lambda\right)$ (respectively, $\mathrm{Pareto}\left(\nu,\beta\right)$) to represent light- (respectively, heavy-) tailed distributions. We show that, in these settings, size grows like $\log n$ and takes on a Gumbel distribution as $n$ grows large. We bound the missing mass to the right of the histogram and the mass of its final bin and show that occupancy grows apace with size. Finally, we approximate the size of the largest number of consecutive, empty bins. Our study gives a deeper and broader view of this low-memory approach to quantile estimation.
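The bin structure makes the relative-accuracy guarantee transparent: a positive input $x$ lands in bin $k=\lceil\log_\rho x\rceil$, since $\rho^{k-1}<x\le\rho^{k}$, so reporting any value from that bin is off by at most a factor $\rho$. A minimal sketch:

```python
import math
from collections import Counter

def exp_histogram(xs, rho=1.1):
    """Exponential histogram with bins (rho^(k-1), rho^k]: each positive input
    x lands in bin k = ceil(log_rho x), so any estimate read back from a bin
    boundary is within a relative factor rho of the true value."""
    return Counter(math.ceil(math.log(x, rho)) for x in xs if x > 0)

hist = exp_histogram([0.5, 1.0, 2.0, 2.05, 100.0], rho=1.1)
print(sorted(hist.items()))   # occupied bins span only ~log(max/min) indices
```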

We consider the task of locally correcting, and locally list-correcting, multivariate linear functions over the domain $\{0,1\}^n$ over arbitrary fields and more generally Abelian groups. Such functions form error-correcting codes of relative distance $1/2$ and we give local-correction algorithms correcting up to nearly $1/4$-fraction errors making $\widetilde{\mathcal{O}}(\log n)$ queries. This query complexity is optimal up to $\mathrm{poly}(\log\log n)$ factors. We also give local list-correcting algorithms correcting $(1/2 - \varepsilon)$-fraction errors with $\widetilde{\mathcal{O}}_{\varepsilon}(\log n)$ queries. These results may be viewed as natural generalizations of the classical work of Goldreich and Levin, which addresses the special case where the underlying group is $\mathbb{Z}_2$. By extending to the case where the underlying group is, say, the reals, we give the first non-trivial locally correctable codes (LCCs) over the reals (with query complexity being sublinear in the dimension, also known as the message length). The central challenge in constructing the local corrector is constructing "nearly balanced vectors" over $\{-1,1\}^n$ that span $1^n$ -- we show how to construct $\mathcal{O}(\log n)$ vectors that do so, with entries in each vector summing to $\pm1$. The principal challenge in the local list-correction algorithms, given the local corrector, is combinatorial, i.e., proving that the number of linear functions within any Hamming ball of radius $(1/2-\varepsilon)$ is $\mathcal{O}_{\varepsilon}(1)$. Getting this general result covering every Abelian group requires integrating a variety of known methods with some new combinatorial ingredients analyzing the structural properties of codewords that lie within small Hamming balls.
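To fix ideas, here is the classical 2-query self-corrector for the $\mathbb{Z}_2$ special case that this work generalizes: for a linear (parity) function corrupted on fewer than a quarter of its inputs, each vote $f(x\oplus r)\oplus f(r)$ is correct with high probability, and a majority over random shifts recovers the true value. The noise model below is illustrative only.

```python
import random

def locally_correct(f, x, n, trials=25, rng=random.Random(0)):
    """Classical 2-query self-corrector for linear (parity) functions over
    Z_2 -- the Goldreich-Levin special case the abstract generalizes.  If f
    disagrees with some parity on a delta < 1/4 fraction of points, then
    f(x ^ r) ^ f(r) equals the true value with prob >= 1 - 2*delta, so a
    majority vote over random shifts r recovers it with high probability."""
    votes = 0
    for _ in range(trials):
        r = rng.getrandbits(n)
        votes += f(x ^ r) ^ f(r)
    return int(votes > trials // 2)

# example: a parity function corrupted on a small fraction of inputs
n, secret = 12, 0b101001010011
parity = lambda z: bin(z & secret).count("1") % 2
noisy = lambda z: parity(z) ^ (hash(z) % 100 < 5)   # ~5% corrupted points
x = 0b111000111000
print(locally_correct(noisy, x, n), parity(x))       # corrected value vs truth
```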

Knowledge graph reasoning (KGR), aiming to deduce new facts from existing facts based on mined logic rules underlying knowledge graphs (KGs), has become a fast-growing research direction. It has been proven to significantly benefit the usage of KGs in many AI applications, such as question answering and recommendation systems. According to the graph types, existing KGR models can be roughly divided into three categories, \textit{i.e.,} static models, temporal models, and multi-modal models. Early works in this domain mainly focus on static KGR and tend to directly apply general knowledge graph embedding models to the reasoning task. However, these models are not suitable for more complex but practical tasks, such as inductive static KGR, temporal KGR, and multi-modal KGR. To this end, multiple works have been developed recently, but no survey paper or open-source repository comprehensively summarizes and discusses models in this important direction. To fill the gap, we conduct a survey of knowledge graph reasoning, tracing from static to temporal and then to multi-modal KGs. Concretely, the preliminaries, summaries of KGR models, and typical datasets are introduced and discussed in turn. Moreover, we discuss the challenges and potential opportunities. The corresponding open-source repository is shared on GitHub: //github.com/LIANGKE23/Awesome-Knowledge-Graph-Reasoning.
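As a concrete instance of rule-based reasoning over a static KG, the following sketch applies a two-hop Horn rule to deduce a new triple; the entities, relations, and the rule itself are illustrative only:

```python
# A two-hop logic rule applied to a toy static KG:
#   (x, born_in, y) AND (y, located_in, z)  =>  (x, lives_in_region, z)
facts = {("Marie", "born_in", "Warsaw"), ("Warsaw", "located_in", "Poland")}

def apply_rule(facts):
    """Deduce new triples from existing ones -- the kind of mined Horn rule
    that static KGR models exploit."""
    new = set()
    for (h1, r1, t1) in facts:
        for (h2, r2, t2) in facts:
            if r1 == "born_in" and r2 == "located_in" and t1 == h2:
                new.add((h1, "lives_in_region", t2))
    return new

print(apply_rule(facts))  # {('Marie', 'lives_in_region', 'Poland')}
```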

Detection and recognition of text in natural images are two main problems in the field of computer vision with a wide variety of applications in analysis of sports videos, autonomous driving, and industrial automation, to name a few. Both face common challenges stemming from how text is represented and from environmental conditions. The current state-of-the-art scene text detection and/or recognition methods have exploited the witnessed advancement in deep learning architectures and reported superior accuracy on benchmark datasets when tackling multi-resolution and multi-oriented text. However, several challenges remain for text in the wild that cause existing methods to underperform, because their models fail to generalize to unseen data and labeled data is insufficient. Thus, unlike previous surveys in this field, the objectives of this survey are as follows: first, to offer the reader not only a review of recent advances in scene text detection and recognition, but also the results of extensive experiments using a unified evaluation framework that assesses pre-trained models of the selected methods on challenging cases and applies the same evaluation criteria to all of them. Second, to identify several existing challenges for detecting or recognizing text in the wild, namely, in-plane rotation, multi-oriented and multi-resolution text, perspective distortion, illumination reflection, partial occlusion, complex fonts, and special characters. Finally, the paper also presents insight into potential research directions in this field to address some of the mentioned challenges that still confront scene text detection and recognition techniques.
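As a sketch of what a unified evaluation criterion looks like on the detection side, the following applies a single IoU threshold and greedy matching to a method's predicted boxes; this is a minimal stand-in, not the survey's exact protocol:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2) --
    the standard matching criterion behind a shared detection evaluation."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / union if union > 0 else 0.0

def precision_recall(preds, gts, thresh=0.5):
    """Greedy one-to-one matching of predicted to ground-truth boxes at a
    fixed IoU threshold, returning (precision, recall)."""
    matched, used = 0, set()
    for p in preds:
        for i, g in enumerate(gts):
            if i not in used and iou(p, g) >= thresh:
                matched += 1; used.add(i); break
    return matched / len(preds), matched / len(gts)

print(precision_recall([(0, 0, 10, 10), (20, 20, 30, 30)], [(1, 1, 10, 10)]))  # (0.5, 1.0)
```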
