
In distributed ledger technologies (DLTs) with a directed acyclic graph (DAG) data structure, a block-issuing node can decide where to append new blocks and, consequently, how the DAG grows. This DAG data structure is typically decomposed into two pools of blocks, depending on whether another block already references them. The unreferenced blocks are called tips. Due to network delay, nodes can perceive the set of tips differently, giving rise to local tip pools. We present a new mathematical model to analyse the stability of the different local perceptions of the tip pools, allowing for heterogeneous and random network delays in the underlying peer-to-peer communication layer. Under natural assumptions, we prove that the number of tips is ergodic and converges to a stationary distribution, and we provide quantitative bounds on the tip pool sizes. We conclude our study with agent-based simulations that illustrate the convergence of the tip pool sizes and their dependence on the communication delay and the degree of centralization.
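The abstract does not spell out the simulation model, so the following is only a minimal sketch under simplifying assumptions (Poisson block issuance at rate LAMBDA, a uniform tip-selection rule referencing up to K tips, and a single fixed perception delay DELAY; all parameter names are illustrative): each new block approves tips as they were perceived DELAY time units ago, and the tip pool size stabilizes rather than growing without bound.

```python
import random

# Minimal agent-based tip-pool sketch (hypothetical parameters, not the
# paper's exact model): blocks arrive as a Poisson process; each new block
# approves up to K tips chosen uniformly from the tips as perceived DELAY
# time units ago, mimicking network delay.
LAMBDA, K, DELAY, HORIZON = 50.0, 2, 1.0, 30.0
random.seed(0)

t, next_id = 0.0, 0
issued_at, approved_at = {}, {}   # block id -> issuance / first-approval time
sizes = []

while t < HORIZON:
    t += random.expovariate(LAMBDA)                 # next issuance time
    # Delayed perception: a block is a visible tip if it was issued before
    # t - DELAY and had not yet been approved at time t - DELAY.
    visible = [b for b, ts in issued_at.items()
               if ts <= t - DELAY
               and approved_at.get(b, float("inf")) > t - DELAY]
    for b in random.sample(visible, min(K, len(visible))):
        approved_at.setdefault(b, t)
    issued_at[next_id] = t
    next_id += 1
    # current global tip pool: issued but never approved
    sizes.append(sum(1 for b in issued_at if b not in approved_at))

print("mean tip-pool size:", sum(sizes) / len(sizes))
```

Increasing DELAY or LAMBDA in this toy grows the stationary pool size, which is the qualitative dependence the paper's simulations examine.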

Related Content

IEEE Transactions on Image Processing covers novel theory, algorithms, and architectures for the formation, capture, processing, communication, analysis, and display of images, video, and multidimensional signals in a wide variety of applications. Topics of interest include, but are not limited to, mathematical, statistical, and perceptual modeling, representation, formation, coding, filtering, enhancement, restoration, rendering, halftoning, search, and analysis of images, video, and multidimensional signals. Applications of interest include image and video communications, electronic imaging, biomedical imaging, image and video systems, and remote sensing. Official website:

We study the recovery of amplitudes and nodes of a finite impulse train from noisy frequency samples. This problem is known as super-resolution under sparsity constraints and has numerous applications. An especially challenging scenario occurs when the separation between Dirac pulses is smaller than the Nyquist-Shannon-Rayleigh limit. Despite a large volume of research and well-established worst-case recovery bounds, there is currently no known computationally efficient method that achieves these bounds in practice. In this work, we combine the well-known Prony's method for exponential fitting with a recently established decimation technique for analyzing the super-resolution problem in the above-mentioned regime. We show that our approach attains optimal asymptotic stability in the presence of noise and has lower computational complexity than the current state-of-the-art methods.
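Prony's method itself is classical; a minimal numpy sketch is shown below for recovering nodes z_j and amplitudes a_j from samples s_k = sum_j a_j z_j^k. The `stride` parameter indicates where a decimation step would act (fitting the strided samples); the function and variable names are illustrative, not the paper's implementation.

```python
import numpy as np

def prony(samples, n, stride=1):
    """Recover n nodes and amplitudes from samples s_k = sum_j a_j z_j**k.

    With stride > 1 this fits the decimated samples s_{stride*k}, i.e. it
    returns the decimated nodes z_j**stride (taking stride-th roots is
    omitted for brevity).
    """
    s = np.asarray(samples)[::stride]
    assert len(s) >= 2 * n, "need at least 2n (decimated) samples"
    # Prony polynomial p(z) = z^n + c_{n-1} z^{n-1} + ... + c_0 from the
    # Hankel system H c = -[s_n, ..., s_{2n-1}]
    H = np.array([[s[i + j] for j in range(n)] for i in range(n)])
    c = np.linalg.solve(H, -s[n:2 * n])
    nodes = np.roots(np.concatenate(([1.0], c[::-1])))
    # Amplitudes from the Vandermonde system s_k = sum_j a_j nodes_j**k
    V = np.vander(nodes, N=len(s), increasing=True).T
    amps, *_ = np.linalg.lstsq(V, s, rcond=None)
    return nodes, amps

# Example: two closely spaced exponentials, noiseless
z = np.exp(2j * np.pi * np.array([0.10, 0.12]))
a = np.array([1.0, 0.5])
s = np.array([(a * z**k).sum() for k in range(20)])
print(prony(s, n=2))
```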

In distributed model predictive control (MPC), the control input at each sampling time is computed by solving a large-scale optimal control problem (OCP) over a finite horizon using distributed algorithms. Typically, such algorithms require several (in principle, infinitely many) communication rounds between the subsystems to converge, which is a major drawback both computationally and from an energy perspective (for wireless systems). Motivated by these challenges, we propose a suboptimal distributed MPC scheme in which the total communication burden is also distributed in time: a running solution estimate for the large-scale OCP is maintained and updated at each sampling time. We demonstrate that, under some regularity conditions, the resulting suboptimal MPC control law recovers the qualitative robust stability properties of optimal MPC, provided the communication budget at each sampling time is large enough.
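The scheme is only described at a high level in the abstract; the toy sketch below (scalar system, hypothetical parameters, plain gradient steps standing in for the distributed solver) illustrates the core idea: instead of iterating the OCP solver to convergence at every sampling instant, a warm-started solution estimate is refined with a fixed per-step iteration budget.

```python
import numpy as np

# Toy suboptimal MPC sketch: x+ = a*x + b*u, horizon-N quadratic cost.
# Only `budget` gradient steps are run per sampling time on a shifted,
# warm-started input sequence, mimicking a bounded communication budget.
a, b, N = 1.05, 1.0, 10
Q, R, step, budget = 1.0, 0.1, 0.02, 3

def cost_grad(u, x0):
    x = np.empty(N + 1); x[0] = x0
    for k in range(N):                    # forward rollout
        x[k + 1] = a * x[k] + b * u[k]
    lam, g = 0.0, np.empty(N)
    for k in reversed(range(N)):          # adjoint pass: grad of sum Qx^2+Ru^2
        lam = 2 * Q * x[k + 1] + a * lam
        g[k] = 2 * R * u[k] + b * lam
    return g

x, u = 5.0, np.zeros(N)
for t in range(30):
    for _ in range(budget):               # fixed iteration budget per step
        u -= step * cost_grad(u, x)
    x = a * x + b * u[0]                  # apply first input, advance plant
    u = np.append(u[1:], 0.0)             # shift: warm start for next step
print("final state:", x)
```

Despite never solving the OCP exactly, the warm-started closed loop drives the (open-loop unstable) state toward zero, which is the qualitative behaviour the paper's stability analysis formalizes.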

Automotive softwarization is progressing, and future cars are expected to operate a Service-Oriented Architecture on multipurpose compute units interconnected via a high-speed Ethernet backbone. The AUTOSAR architecture foresees a universal middleware called SOME/IP that provides the service primitives, interfaces, and application protocols on top of Ethernet and IP. SOME/IP lacks a robust security architecture, even though security is essential in future Internet-connected vehicles. In this paper, we augment the SOME/IP service discovery with an authentication and certificate management scheme based on DNSSEC and DANE. We argue that the deployment of well-proven, widely tested standard protocols is an appropriate basis for a robust and reliable security infrastructure in cars. Our solution enables on-demand service authentication in offline scenarios and easy online updates, and it remains free of attestation collisions. We evaluate our extension of the common vsomeip stack and find performance values that fully comply with car operations.
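The paper's concrete SOME/IP-SD message extension is not reproduced here; as a flavour of the DANE building block it relies on, the sketch below resolves a TLSA record with dnspython and checks a candidate certificate against it. The service name is hypothetical, DNSSEC validation is assumed to happen upstream, and the usage/selector handling is simplified.

```python
import hashlib
import dns.resolver   # dnspython; DNSSEC validation assumed done upstream

def matches_tlsa(der_cert: bytes, name: str) -> bool:
    """Check a DER-encoded certificate against DANE TLSA records.

    `name` is a hypothetical service owner name, e.g.
    '_someip._udp.ecu1.example.car'. Only matching type 1 (SHA-256 of the
    full certificate) is handled; usage/selector fields are simplified.
    """
    digest = hashlib.sha256(der_cert).digest()
    for rr in dns.resolver.resolve(name, "TLSA"):
        if rr.mtype == 1 and rr.cert == digest:   # matching type 1 = SHA-256
            return True
    return False
```

A discovered service's certificate would be verified with one such lookup, which is what enables authentication without contacting an online CA.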

We study the convergence of a family of numerical integration methods in which the numerical integral is formulated as a finite matrix approximation to a multiplication operator. For bounded functions, convergence has already been established using the theory of strong operator convergence. In this article, we consider unbounded functions and domains, which pose several difficulties compared to the bounded case. A natural tool for this study is the theory of strong resolvent convergence, which has previously been applied mostly to the convergence of approximations of differential operators. The existing theory already includes convergence theorems that apply as such to a limited class of functions; we extend them to wider classes of functions characterized in terms of their growth or discontinuity. The extended results apply to all self-adjoint operators, not just multiplication operators. We also show how Jensen's operator inequality can be used to analyse the convergence of an improper numerical integral of a function bounded by an operator convex function.
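For a bounded interval, the matrix-approximation view coincides with classical Gauss quadrature via the truncated Jacobi matrix of the orthogonal polynomials (the Golub-Welsch connection): the n-point integral is mu_0 times the (1,1) entry of f applied to the Jacobi matrix. A small numpy sketch, assuming the Legendre weight on [-1, 1]:

```python
import numpy as np

# n-point integral of f w.r.t. the Legendre weight on [-1, 1], computed as
# mu_0 * e_1^T f(J_n) e_1, where J_n is the truncated tridiagonal Jacobi
# matrix (the finite approximation of the multiplication operator).
def integrate(f, n=20):
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)          # Legendre recurrence coeffs
    J = np.diag(beta, 1) + np.diag(beta, -1)      # truncated Jacobi matrix
    lam, V = np.linalg.eigh(J)                    # J = V diag(lam) V^T
    return 2.0 * np.sum(V[0, :]**2 * f(lam))      # mu_0 = 2 for Legendre

print(integrate(np.exp), np.e - 1.0 / np.e)      # integral of e^x over [-1,1]
# An unbounded integrand on the open interval still converges, but slowly --
# the regime the paper's strong-resolvent analysis addresses:
print(integrate(lambda x: 1.0 / np.sqrt(1.0 - x**2), n=200), np.pi)
```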

The paper introduces the DIverse MultiPLEx Generalized Dot Product Graph (DIMPLE-GDPG) network model, where all layers of the network share the same set of nodes and follow the Generalized Dot Product Graph (GDPG) model. In addition, the layers can be partitioned into groups such that layers in the same group are embedded in the same ambient subspace, while otherwise all matrices of connection probabilities may differ. In the common special case where the layers follow the Stochastic Block Model (SBM), this setting implies that the groups of layers have common community structures but all matrices of block connection probabilities can be different; we refer to this version as the DIMPLE model. While the DIMPLE-GDPG model generalizes the COmmon Subspace Independent Edge (COSIE) random graph model developed in \cite{JMLR:v22:19-558}, the DIMPLE model includes a wide variety of SBM-equipped multilayer network models as particular cases. We introduce novel algorithms for recovering the groups of similar layers, for estimating the ambient subspaces within the groups of layers in the DIMPLE-GDPG setting, and for within-layer clustering in the case of the DIMPLE model. We study the accuracy of these algorithms both theoretically and via computer simulations, and we demonstrate the advantages of the new models on real data examples.
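The abstract does not specify the layer-grouping algorithm, so the snippet below is only one plausible spectral implementation consistent with the model description (not the paper's method): each layer is represented by the projector onto its leading eigenspace, which is invariant to the choice of basis, and layers sharing an ambient subspace are clustered together by k-means.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_layers(adjacencies, rank, n_groups):
    """Cluster layers whose leading invariant subspaces coincide.

    A plausible sketch of the layer-grouping step: each layer A_l is
    represented by the projector U_l U_l^T onto its top-`rank` eigenspace;
    layers embedded in the same ambient subspace yield (near-)identical
    projectors, so k-means on the vectorized projectors recovers groups.
    """
    feats = []
    for A in adjacencies:
        lam, V = np.linalg.eigh(A)
        idx = np.argsort(-np.abs(lam))[:rank]     # top-rank by magnitude
        U = V[:, idx]
        feats.append((U @ U.T).ravel())           # basis-invariant feature
    return KMeans(n_clusters=n_groups, n_init=10).fit_predict(np.array(feats))
```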

This paper contributes tail bounds on the age-of-information of a general class of parallel systems and explores their potential. Parallel systems arise in relevant settings such as multi-band mobile networks, multi-technology wireless access, and multi-path protocols, to name a few. Typically, control over each communication channel is limited, and random service outages and congestion cause buffering that impairs the age-of-information. The parallel use of independent channels promises a remedy, since outages on one channel may be compensated for by another. Surprisingly, for the well-known case of M$\mid$M$\mid$1 queues we find the opposite: pooling capacity in one channel performs better than a parallel system with the same total capacity. This finding cannot be generalized directly, since no solutions are at hand for other types of parallel queues. In this work, we prove a dual representation of age-of-information in min-plus algebra that connects to queueing models known from the theory of effective bandwidth/capacity and the stochastic network calculus. Exploiting these methods, we derive tail bounds on the age-of-information of parallel G$\mid$G$\mid$1 queues. In addition to parallel classical queues, we investigate Markov channels, where, depending on the memory of the channel, we show the true advantage of parallel systems. We build on this finding to provide insight into when capacity should be pooled in one channel and when independent parallel channels perform better. We complement our analysis with simulation results and evaluate different update policies, scheduling policies, and the use of heterogeneous channels, which is most relevant for the latest multi-band networks.
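The pooled-versus-parallel comparison for M$\mid$M$\mid$1 queues can be reproduced with a short simulation; the sketch below (simplified, with illustrative parameters, and assuming the monitor always keeps the freshest delivered update) compares one FCFS M/M/1 channel against two parallel channels of half the arrival and service rate each.

```python
import random

def mm1_departures(lam, mu, horizon, rng):
    """FCFS M/M/1: return (generation_time, delivery_time) pairs."""
    t, free_at, out = 0.0, 0.0, []
    while t < horizon:
        t += rng.expovariate(lam)              # next update generated
        start = max(t, free_at)                # waits if server is busy
        free_at = start + rng.expovariate(mu)  # service completion
        out.append((t, free_at))
    return out

def average_aoi(deliveries, horizon):
    """Time-average age; the monitor tracks the freshest delivered update."""
    area, last, g = 0.0, 0.0, 0.0              # g: freshest generation time
    for gen, dep in sorted(deliveries, key=lambda p: p[1]):
        if dep > horizon:
            break
        dt = dep - last
        area += dt * (last - g) + 0.5 * dt * dt   # age grows linearly
        g = max(g, gen)
        last = dep
    dt = horizon - last
    area += dt * (last - g) + 0.5 * dt * dt
    return area / horizon

rng = random.Random(1)
H, lam, mu = 200000.0, 0.5, 1.0
pooled = average_aoi(mm1_departures(lam, mu, H, rng), H)
parallel = average_aoi(mm1_departures(lam / 2, mu / 2, H, rng)
                       + mm1_departures(lam / 2, mu / 2, H, rng), H)
print(f"pooled: {pooled:.2f}  parallel: {parallel:.2f}")
```

With these memoryless channels the pooled system wins, matching the paper's observation; the advantage of parallelism only appears once channels have memory (outages), which the simulation can be extended to model.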

Attention is the crucial cognitive ability that limits and selects what information we observe. Previous work by Bolander et al. (2016) proposes a model of attention based on dynamic epistemic logic (DEL) in which agents are either fully attentive or not attentive at all. While introducing the realistic feature that inattentive agents believe nothing happens, that model does not represent the most essential aspect of attention: its selectivity. Here, we propose a generalization that allows for paying attention to subsets of atomic formulas. We introduce the corresponding logic for propositional attention and show that its axiomatization is sound and complete. We then extend the framework to account for inattentive agents that, instead of assuming that nothing happens, may default to a specific truth value for what they failed to attend to (a sort of prior concerning the unattended atoms). This feature allows for a more cognitively plausible representation of the inattentional blindness phenomenon, where agents end up with false beliefs due to their failure to attend to conspicuous but unexpected events. Both versions of the model define attention-based learning through appropriate DEL event models based on a few clear edge principles. While the size of such event models grows exponentially both in the number of agents and in the number of atoms, we introduce a new logical language for describing event models syntactically and show that, in this language, our event models can be represented linearly in the number of agents and atoms. Furthermore, representing our event models in this language amounts to a straightforward formalisation of the aforementioned edge principles.
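The DEL event models themselves are beyond a short snippet, but the defaulting behaviour of the extended framework can be illustrated with a deliberately simplified, non-DEL toy: an agent learns the true values of the atoms it attends to and defaults the unattended atoms to a prior truth value, producing exactly the false beliefs characteristic of inattentional blindness.

```python
# Toy illustration only (not the paper's DEL event models): an event sets
# atomic propositions; the agent learns attended atoms correctly and
# defaults unattended ones to a prior truth value.
def update_beliefs(beliefs, event, attended, default=False):
    new = dict(beliefs)
    for atom, value in event.items():
        new[atom] = value if atom in attended else default
    return new

event = {"gorilla_on_court": True, "score_changed": True}
beliefs = update_beliefs({}, event, attended={"score_changed"})
print(beliefs)   # {'gorilla_on_court': False, 'score_changed': True}
```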

The Euler characteristic transform (ECT) is a signature from topological data analysis (TDA) which summarises shapes embedded in Euclidean space. Compared with other TDA methods, the ECT is fast to compute and it is a sufficient statistic for a broad class of shapes. However, small perturbations of a shape can lead to large distortions in its ECT. In this paper, we propose a new metric on compact one-dimensional shapes and prove that the ECT is stable with respect to this metric. Crucially, our result uses curvature, rather than the size of a triangulation of an underlying shape, to control stability. We further construct a computationally tractable statistical estimator of the ECT based on the theory of Gaussian processes. We use our stability result to prove that our estimator is consistent on shapes perturbed by independent ambient noise; i.e., the estimator converges to the true ECT as the sample size increases.
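For a one-dimensional shape given as an embedded graph, the ECT is short to compute directly: for a direction v and threshold t, the Euler characteristic of the sublevel set is the number of vertices minus the number of edges that have entered the filtration. A minimal numpy sketch (illustrative names; the paper's estimator and metric are not reproduced here):

```python
import numpy as np

def ect_curve(vertices, edges, direction, thresholds):
    """ECT slice of an embedded graph: chi of {x : <x, v> <= t} for each t.

    A vertex enters at its height <x, v>; an edge enters at the max height
    of its two endpoints. chi = #vertices - #edges for a 1-dim complex.
    """
    h = vertices @ direction                      # vertex heights
    eh = np.maximum(h[[e[0] for e in edges]],     # edge entry heights
                    h[[e[1] for e in edges]])
    return np.array([(h <= t).sum() - (eh <= t).sum() for t in thresholds])

# Example: the boundary of a triangle (a cycle, so chi ends at 0)
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
E = [(0, 1), (1, 2), (2, 0)]
print(ect_curve(V, E, np.array([0.0, 1.0]), np.linspace(-0.1, 1.1, 7)))
```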

Spatio-temporal forecasting is challenging owing to the high nonlinearity of temporal dynamics and the complex, location-characterized patterns in the spatial domain, especially in fields like weather forecasting. Graph convolutions are usually used to model spatial dependencies in meteorology, since they handle the irregular spatial distribution of sensors. In this work, a novel graph-based convolution that imitates meteorological flows is proposed to capture local spatial patterns. Based on the assumption that location-characterized patterns vary smoothly, we propose a conditional local convolution whose shared kernel over each node's local space is approximated by a feedforward network, taking as input local coordinate representations obtained by horizon maps into the cylindrical tangent space. The unified standard for the local coordinate systems preserves the geographic orientation. We further propose distance and orientation scaling terms to reduce the impact of the irregular spatial distribution. The convolution is embedded in a recurrent neural network architecture to model the temporal dynamics, yielding the Conditional Local Convolution Recurrent Network (CLCRN). Our model is evaluated on real-world weather benchmark datasets, achieving state-of-the-art performance with clear improvements. We further analyse local pattern visualizations, the choice of model framework, the advantages of horizon maps, and more.
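The core mechanism can be sketched compactly: a small shared feedforward network maps each neighbor's local coordinates to a kernel weight, so a single network yields location-conditioned kernels across an irregular sensor graph. The snippet below is a simplified numpy sketch (toy random weights, no horizon maps, scaling terms, or recurrence; all names are illustrative, not the CLCRN code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MLP: local coordinates in R^2 -> scalar kernel weight
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def kernel(rel_coords):            # (k, 2) local coords -> (k,) weights
    h = np.tanh(rel_coords @ W1 + b1)
    return (h @ W2 + b2).ravel()

def conditional_local_conv(x, coords, neighbors):
    """x: (n,) node features; coords: (n, 2); neighbors: adjacency lists."""
    out = np.empty_like(x)
    for i, nbrs in enumerate(neighbors):
        rel = coords[nbrs] - coords[i]   # local coordinate system at node i
        out[i] = kernel(rel) @ x[nbrs]   # location-conditioned aggregation
    return out

coords, x = rng.normal(size=(4, 2)), rng.normal(size=4)
print(conditional_local_conv(x, coords, [[1, 2], [0, 3], [0], [1]]))
```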

This paper focuses on the expected difference in borrowers' repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing the estimators such that this error is greatly reduced. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the ability of the classical and the proposed estimators to estimate the causal quantities. The comparison is tested across a wide range of models, including linear regression models, tree-based models, and neural-network-based models, under different simulated datasets that exhibit different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approaches to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial once the causal effects are accounted for correctly.
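The abstract does not detail the estimator construction, so the snippet below instead shows a standard confounding-robust technique, inverse-propensity weighting (IPW), on synthetic lending data, purely to illustrate how a naive approved-versus-rejected mean difference is biased by confounding and how a causal estimator corrects it. All data and names are synthetic; the paper's estimators may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic confounded data: riskier borrowers (high x) are less likely to
# be approved (t) AND repay less (y), biasing the naive comparison.
n = 20000
x = rng.normal(size=n)                        # confounder (risk score)
t = rng.random(n) < 1 / (1 + np.exp(x))       # approval prob. falls with risk
y = 1.0 * t - 0.8 * x + rng.normal(size=n)    # repayment; true effect = 1.0

naive = y[t].mean() - y[~t].mean()            # confounded estimate

# IPW: reweight by the estimated propensity e(x) = P(t=1 | x)
e = LogisticRegression().fit(x[:, None], t).predict_proba(x[:, None])[:, 1]
ipw = np.mean(t * y / e - (~t) * y / (1 - e))

print(f"naive: {naive:.3f}  IPW: {ipw:.3f}  (true effect: 1.0)")
```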
