
In this paper we provide a first-ever epistemic formulation of stabilizing agreement, defined as the non-terminating variant of the well-established consensus problem. In stabilizing agreement, agents are given (possibly different) initial values, with the goal of eventually always deciding on the same value. While agents are allowed to change their decisions finitely often, they are required to eventually agree on the same value. We capture these properties in temporal epistemic logic, and we use the Runs and Systems framework to formally reason about stabilizing agreement problems. We then epistemically formalize the conditions for solving stabilizing agreement and identify the knowledge that the agents acquire during any execution in order to choose a particular value under our system assumptions. This first formalization of a sufficient condition for solving stabilizing agreement sets the stage for a planned necessary and sufficient epistemic characterization of stabilizing agreement.
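As an illustration of the target property (a schematic sketch, not the paper's exact formalization), the "eventually always decide on the same value" requirement can be rendered in linear-temporal style as follows; the symbols d_i (agent i's current decision) and Agents are assumptions, not the paper's notation.

```latex
% Illustrative sketch only: eventually always, some value v is held by every agent.
\Diamond \Box \, \bigl( \exists v \;\; \forall i \in \mathit{Agents} : \; d_i = v \bigr)
```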

Related content

Solving quantum impurity problems may advance our understanding of strongly correlated electron physics, but progress in multi-impurity systems has been greatly hindered by the presence of a shared bath. Here, we propose a general operational strategy to disentangle the shared bath into multiple auxiliary baths and relate the problem to the spectral decomposition of a function matrix, so that the numerical renormalization group (NRG) can be applied. We prove exactly that such a decomposition is possible for models satisfying (block) circulant symmetry, and show how to construct the auxiliary baths for an arbitrary impurity configuration by mapping its graph structure to a subgraph of a regular impurity configuration. We further propose an approximate decomposition algorithm to reduce the number of auxiliary baths and save computational workload. Our work reveals a deep connection between quantum impurity problems and graph theory, and provides a general scheme for extending NRG applications to realistic multi-impurity systems.
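A minimal numerical sketch of the circulant-symmetry case (illustrative only; the matrix, names, and setup are assumptions): a circulant coupling matrix among impurities is diagonalized by the unitary DFT matrix, which is the sense in which a shared bath splits into independent auxiliary channels.

```python
import numpy as np

# Hedged sketch: for a circulant hybridization matrix Gamma among N impurities,
# the DFT matrix diagonalizes it, so the shared bath decouples into N auxiliary baths.
N = 4
first_row = np.array([1.0, 0.3, 0.1, 0.3])            # Gamma[i, j] = first_row[(j - i) % N]
Gamma = np.array([[first_row[(j - i) % N] for j in range(N)] for i in range(N)])

F = np.fft.fft(np.eye(N)) / np.sqrt(N)                # unitary DFT matrix
diag = F.conj().T @ Gamma @ F                         # diagonal for any circulant Gamma

channel_couplings = np.real_if_close(np.diag(diag))   # couplings of the decoupled channels
print(np.round(diag, 10))
print(channel_couplings)
```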

In this paper, we propose two communication-efficient decentralized optimization algorithms over a general directed multi-agent network. The first algorithm, termed Compressed Push-Pull (CPP), combines the gradient tracking Push-Pull method with communication compression. We show that CPP is applicable to a general class of unbiased compression operators and achieves a linear convergence rate for strongly convex and smooth objective functions. The second algorithm is a broadcast-like version of CPP (B-CPP), which also achieves a linear convergence rate under the same conditions on the objective functions. B-CPP can be applied in an asynchronous broadcast setting and further reduces communication costs compared to CPP. Numerical experiments complement the theoretical analysis and confirm the effectiveness of the proposed methods.
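For concreteness, the following is a hedged sketch of one member of the class of unbiased compression operators that such methods admit: rand-k sparsification rescaled so that the compressed vector is unbiased. The function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def rand_k_compress(x, k, rng=np.random.default_rng()):
    """Keep k random coordinates of x and rescale by d/k so that E[C(x)] = x."""
    d = x.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(x)
    out[idx] = (d / k) * x[idx]
    return out

x = np.arange(10.0)
avg = np.mean([rand_k_compress(x, k=3) for _ in range(20000)], axis=0)
print(np.round(avg, 2))   # approaches x, illustrating unbiasedness
```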

In this paper, we consider the fundamental problem of testing for a monotone trend in a time series. While the term "trend" is commonly used and has an intuitive meaning, it is first crucial to specify its exact meaning in a hypothesis testing context. A well-known and commonly used test is the Mann-Kendall test, which we show does not offer Type 1 error control even in large samples. On the other hand, by an appropriate studentization of the Mann-Kendall statistic, we construct permutation tests that offer asymptotic error control quite generally, while retaining the exactness property of permutation tests for i.i.d. observations. We also introduce "local" Mann-Kendall statistics as a means of testing for local rather than global trend in a time series. Similar properties of permutation tests are obtained for these tests as well.
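The sketch below shows the raw Mann-Kendall statistic and a plain permutation p-value, as a point of reference only; the paper's key ingredient, the studentization that makes the permutation test asymptotically valid under dependence, is not implemented here, and all names are illustrative.

```python
import numpy as np

def mann_kendall_S(x):
    """Mann-Kendall statistic S = sum_{i<j} sign(x_j - x_i)."""
    x = np.asarray(x)
    diffs = np.sign(x[None, :] - x[:, None])
    return diffs[np.triu_indices(len(x), k=1)].sum()

def permutation_pvalue(x, n_perm=2000, rng=np.random.default_rng(0)):
    """Two-sided permutation p-value for |S| (no studentization)."""
    s_obs = abs(mann_kendall_S(x))
    s_perm = np.array([abs(mann_kendall_S(rng.permutation(x))) for _ in range(n_perm)])
    return (1 + np.sum(s_perm >= s_obs)) / (n_perm + 1)

x = np.arange(50) * 0.05 + np.random.default_rng(1).normal(size=50)
print(mann_kendall_S(x), permutation_pvalue(x))
```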

In this paper, we investigate the millimeter-wave (mmWave) near-field beam training problem of finding the correct beam direction. To address the high complexity and low identification accuracy of existing beam training techniques, we propose an efficient hashing multi-arm beam (HMB) training scheme for the near-field scenario. Specifically, we first design a set of sparse bases based on the polar-domain sparsity of the near-field channel. Then, random hash functions are chosen to construct the near-field multi-arm beam training codebook. Each multi-arm beam codeword is scanned in a time slot until all the predefined codewords are traversed. Finally, soft decision and voting methods are applied to distinguish the signals from different base stations and obtain correctly aligned beams. Simulation results show that our proposed near-field HMB training method reduces the beam training overhead to the logarithmic level and achieves 96.4% of the identification accuracy of exhaustive beam training. Moreover, we also verify its applicability in the far-field scenario.
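A highly simplified, idealized sketch of the hashing-and-voting idea (not the paper's scheme: the channel model, codebook design, soft decision, and multi-BS separation are all omitted, and every name is an assumption): each round a random hash groups candidate directions into multi-arm codewords, the observation reveals which group contains the true direction, and a vote over about log(N) rounds singles it out.

```python
import numpy as np

rng = np.random.default_rng(0)
N, B, rounds = 256, 16, 6                  # candidate directions, arms per codeword, time slots
true_dir = rng.integers(N)

votes = np.zeros(N)
for _ in range(rounds):
    h = rng.integers(B, size=N)            # random hash: direction -> arm index
    observed_arm = h[true_dir]             # idealized observation: arm containing the true direction
    votes[h == observed_arm] += 1          # every direction consistent with the observation gets a vote

print(int(np.argmax(votes)) == true_dir)   # with enough rounds, the true direction wins the vote
```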

In this paper, we study classification and regression error bounds for inhomogeneous data that are independent but not necessarily identically distributed. First, we consider classification of data in the presence of non-stationary noise and establish ergodic-type sufficient conditions that guarantee the achievability of the Bayes error bound using universal rules. We then perform a similar analysis for $k$-nearest neighbour regression and obtain optimal error bounds for it. Finally, we illustrate applications of our results in the context of wireless networks.
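For readers unfamiliar with the estimator analysed here, this is a toy sketch of the $k$-nearest-neighbour regression estimate (the local average of the responses of the $k$ closest training points); it makes no claim about the paper's error bounds, and the 1-D setup and names are assumptions.

```python
import numpy as np

def knn_regress(x_query, X_train, y_train, k=5):
    """k-NN regression estimate: average the responses of the k nearest training points."""
    d = np.abs(X_train - x_query)        # distances to all training points (1-D example)
    nearest = np.argsort(d)[:k]          # indices of the k nearest neighbours
    return y_train[nearest].mean()

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=200)
y = np.sin(2 * np.pi * X) + rng.normal(scale=0.2, size=200)
print(knn_regress(0.25, X, y, k=7))      # roughly sin(pi/2) = 1
```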

In this paper we define a class of polynomial functors suited for constructing coalgebras representing processes in which uncertainty plays an important role. Among these polynomial functors we include upper and lower probability measures, finitely additive probability measures, plausibility measures (and their duals, belief functions), and possibility measures. We give axioms and inference rules for the associated system of coalgebraic modal logic, and construct the canonical coalgebras to prove a completeness result.

In this paper, we investigate the problem of resource allocation for a fluid antenna relay (FAR) system with antenna location optimization. In the considered model, each user transmits information to a base station (BS) with the help of the FAR. The antenna location of the FAR is flexible and can be adapted to the dynamic location distribution of the users. We formulate a sum-rate maximization problem that jointly optimizes the antenna location and the bandwidth allocation while meeting the minimum rate requirements, the total bandwidth budget, and the feasible antenna region constraints. To solve this problem, we obtain the optimal bandwidth in closed form. Based on the optimal bandwidth, the original problem is reduced to the antenna location optimization problem, and an alternating algorithm is proposed. Simulation results verify the effectiveness of the proposed algorithm and show that the sum rate can be increased by up to 125% compared to conventional schemes.
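A schematic rendering of the joint optimization just described (the symbols are illustrative, not the paper's notation): p is the FAR antenna position constrained to a feasible region A, b_k is the bandwidth of user k, B is the total bandwidth, and R_k^min is user k's minimum rate requirement.

```latex
\max_{\mathbf{p},\,\{b_k\}} \;\; \sum_{k=1}^{K} R_k(\mathbf{p}, b_k)
\quad \text{s.t.} \quad R_k(\mathbf{p}, b_k) \ge R_k^{\min} \;\; \forall k, \qquad
\sum_{k=1}^{K} b_k \le B, \qquad \mathbf{p} \in \mathcal{A}.
```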

In this paper, we propose Latent Relation Language Models (LRLMs), a class of language models that parameterize the joint distribution over the words in a document and the entities that occur therein via knowledge graph relations. This model has a number of attractive properties: it not only improves language modeling performance, but is also able to annotate the posterior probability of entity spans for a given text through relations. Experiments demonstrate empirical improvements over both a word-based baseline language model and a previous approach that incorporates knowledge graph information. Qualitative analysis further demonstrates the proposed model's ability to learn to predict appropriate relations in context.
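Schematically, a latent-variable factorization of this kind marginalizes, at each step, over whether the next span is generated from the vocabulary or realized through a knowledge-graph relation; the notation below is an illustrative sketch, not the paper's exact model.

```latex
p(\mathbf{x}) \;=\; \sum_{\mathbf{z}} \prod_{t} p\bigl(z_t \mid x_{<t}\bigr)\,
p\bigl(\mathrm{span}_t \mid z_t, x_{<t}\bigr),
\qquad z_t \in \{\mathrm{word}\} \,\cup\, \{\text{KG relations of the topic entity}\}.
```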

In this paper, we introduce the Reinforced Mnemonic Reader for machine reading comprehension tasks, which enhances previous attentive readers in two aspects. First, a reattention mechanism is proposed to refine current attentions by directly accessing past attentions that are temporally memorized in a multi-round alignment architecture, so as to avoid the problems of attention redundancy and attention deficiency. Second, a new optimization approach, called dynamic-critical reinforcement learning, is introduced to extend the standard supervised method. It always encourages the model to predict a more acceptable answer, so as to address the convergence suppression problem that occurs in traditional reinforcement learning algorithms. Extensive experiments on the Stanford Question Answering Dataset (SQuAD) show that our model achieves state-of-the-art results. Meanwhile, our model outperforms previous systems by over 6% in terms of both the Exact Match and F1 metrics on two adversarial SQuAD datasets.
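The following is a hedged, minimal sketch of the idea of reattention only, not the paper's exact equations: attention from an earlier alignment round is memorized and mixed into the current similarity scores before the softmax, so that later rounds can correct redundant or missing attention. The mixing weight gamma and all names are assumptions.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def reattend(Q, K, past_attention=None, gamma=0.5):
    """Scaled dot-product attention, optionally refined by a memorized past attention map."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    if past_attention is not None:
        scores = scores + gamma * past_attention   # refine with temporally memorized attention
    return softmax(scores, axis=-1)

rng = np.random.default_rng(0)
Q, K = rng.normal(size=(4, 8)), rng.normal(size=(6, 8))
A1 = reattend(Q, K)                                # first alignment round
A2 = reattend(Q, K, past_attention=A1)             # second round refined by the first
print(A2.shape)                                    # (4, 6)
```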

In this paper, we propose a conceptually simple and geometrically interpretable objective function, i.e., the additive margin Softmax (AM-Softmax), for deep face verification. In general, the face verification task can be viewed as a metric learning problem, so learning large-margin face features whose intra-class variation is small and whose inter-class difference is large is of great importance for achieving good performance. Recently, Large-margin Softmax and Angular Softmax have been proposed to incorporate the angular margin in a multiplicative manner. In this work, we introduce a novel additive angular margin for the Softmax loss, which is intuitively appealing and more interpretable than existing works. We also emphasize and discuss the importance of feature normalization in the paper. Most importantly, our experiments on LFW BLUFR and MegaFace show that our additive margin Softmax loss consistently performs better than the current state-of-the-art methods using the same network architecture and training dataset. Our code has also been made available at //github.com/happynear/AMSoftmax
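A minimal NumPy sketch of the additive-margin idea (the official implementation is at the repository above; shapes and hyperparameter values here are illustrative): features and class weights are L2-normalized so the logits are cosine similarities, the margin m is subtracted from the target-class cosine, and the result is scaled by s before a standard cross-entropy.

```python
import numpy as np

def am_softmax_loss(features, weights, labels, s=30.0, m=0.35):
    """Additive margin softmax loss on L2-normalized features and class weights."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)   # (N, d)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)     # (d, C)
    cos = f @ w                                                      # cosine logits, (N, C)
    rows = np.arange(len(labels))
    logits = s * cos
    logits[rows, labels] = s * (cos[rows, labels] - m)               # subtract additive margin
    logits -= logits.max(axis=1, keepdims=True)                      # numerically stable cross-entropy
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[rows, labels].mean()

rng = np.random.default_rng(0)
feats, W, y = rng.normal(size=(8, 128)), rng.normal(size=(128, 10)), rng.integers(10, size=8)
print(am_softmax_loss(feats, W, y))
```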
