
Classical distributed estimation scenarios typically assume timely and reliable exchange of information over the sensor network. This paper, in contrast, considers single time-scale distributed estimation over a sensor network subject to transmission time-delays. The proposed discrete-time networked estimator consists of two steps: (i) consensus on (delayed) a-priori estimates, and (ii) measurement update. The sensors only share their a-priori estimates with their out-neighbors over (possibly) time-delayed transmission links. The delays are assumed to be fixed over time, heterogeneous, and known. We assume distributed observability instead of local observability, which significantly reduces the communication and sensing load on the sensors. Using augmented matrices and the Kronecker product, we prove convergence of the proposed estimator over strongly-connected networks for a specific upper bound on the time-delay.
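A minimal sketch of the two-step structure (consensus on delayed priors, then a local measurement update) may make the recursion concrete. All names here (W for consensus weights, K for local gains, tau for the fixed known link delays) are illustrative assumptions, not the paper's exact update:

```python
import numpy as np

def networked_estimator_step(k, xhat_hist, A, W, K, H, y, tau):
    """One iteration of a two-step delayed-consensus estimator (sketch).

    xhat_hist[j] : list of sensor j's past a-priori estimates (padded so
                   that index k - tau[i][j] is always valid)
    A : system matrix, W : consensus weights (row-stochastic),
    K[i] : local gain, H[i] : local observation matrix,
    y[i] : sensor i's measurement at time k, tau[i][j] : fixed known delay.
    """
    n = len(W)
    xhat_new = []
    for i in range(n):
        # Step (i): consensus on the (delayed) a-priori estimates
        # received from in-neighbors over delayed links.
        mixed = sum(W[i][j] * xhat_hist[j][k - tau[i][j]]
                    for j in range(n) if W[i][j] != 0.0)
        pred = A @ mixed                       # propagate through the dynamics
        # Step (ii): local measurement update (innovation correction).
        xhat_new.append(pred + K[i] @ (y[i] - H[i] @ pred))
    return xhat_new
```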

Related Content

We present distributed methods for jointly optimizing Intelligent Reflecting Surface (IRS) phase-shifts and beamformers in a cellular network. The proposed schemes require knowledge of only the intra-cell training sequences and the corresponding received signals, without explicit channel estimation. Instead, an SINR objective is estimated via sample means and maximized directly. This automatically includes and mitigates both intra- and inter-cell interference, provided that the uplink training is synchronized across cells. Different schemes are considered that limit the set of training sequences known from interferers. With MIMO links, an iterative synchronous bi-directional training scheme jointly optimizes the IRS parameters with the beamformers and combiners. Simulation results show that the proposed distributed methods incur only a modest performance degradation relative to centralized channel estimation schemes, which estimate and exchange all cross-channels between cells, and perform significantly better than channel estimation schemes that ignore inter-cell interference.
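The following sketch illustrates the core idea of estimating an SINR objective from sample means of received training signals, without explicit cross-channel estimation. The matched-filter correlation and the power proxies below are assumptions for illustration, not the paper's exact estimator:

```python
import numpy as np

def empirical_sinr(w, R, s):
    """Sample-mean SINR estimate for a receive combiner w (sketch).

    R : (n_rx, T) received samples during uplink training
    s : (T,) the cell's own known training sequence
    Only the intra-cell pilot is used; interference from other cells is
    captured implicitly through the residual after stripping the pilot.
    """
    z = R @ s.conj() / np.linalg.norm(s)**2   # correlate with own pilot
    sig = np.abs(w.conj() @ z)**2             # desired-signal power proxy
    resid = R - np.outer(z, s)                # remove the desired component
    out = w.conj() @ resid                    # residual at the combiner output
    intf = np.mean(np.abs(out)**2)            # interference-plus-noise power
    return sig / intf                         # sample-mean SINR objective
```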

In this paper we study the maximum degree of interaction that may emerge in distributed systems. A distributed system is represented by a graph whose nodes interact over edges, each node holding some amount of data. The intensity of interaction over an edge is proportional to the product of the amounts of data held at its two endpoints, and we seek the maximum possible sum of interactions over all edges. This model can be extended to other interacting entities. For bipartite graphs and odd-length cycles we prove that the greatest degree of interaction emerges when the whole data is concentrated in an arbitrary pair of neighbors. Equal partitioning of the load is shown to be optimal for complete graphs. Finally, we show that in general graphs the interaction is maximized when the data is distributed equally among the nodes of a largest clique in the graph. We also relate this to a 1965 result of Motzkin and Straus on the same maximal interaction objective.
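A small numerical illustration, under the normalization that the total data sums to one: the interaction objective is the quadratic form $\frac{1}{2}x^\top A x$ over the simplex, and the Motzkin-Straus value $(1 - 1/\omega(G))/2$ is attained by spreading the data uniformly over a largest clique. The graph below is a made-up example:

```python
import numpy as np

def interaction(adj, x):
    """Total interaction sum_{(i,j) in E} x_i * x_j = 0.5 * x^T A x."""
    return 0.5 * x @ adj @ x

# A 4-cycle plus the chord {0,2}; the largest cliques are the
# triangles {0,1,2} and {0,2,3}, so the clique number is 3.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

uniform = np.full(4, 0.25)                 # spread over all nodes
clique  = np.array([1/3, 1/3, 1/3, 0.0])   # spread over the clique {0,1,2}

print(interaction(A, uniform))  # 0.3125
print(interaction(A, clique))   # 1/3, the Motzkin-Straus value (1 - 1/3)/2
```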

We consider the problem of nonparametric estimation of the drift and diffusion coefficients of a Stochastic Differential Equation (SDE), based on $n$ independent replicates $\left\{X_i(t)\::\: t\in [0,1]\right\}_{1 \leq i \leq n}$, observed sparsely and irregularly on the unit interval, and subject to additive noise corruption. By \textit{sparse} we mean that the number of measurements per path can be arbitrarily small (as few as two) and remains constant with respect to $n$. We focus on time-inhomogeneous SDEs of the form $dX_t = \mu(t)X_t^{\alpha}dt + \sigma(t)X_t^{\beta}dW_t$, where $\alpha \in \{0,1\}$ and $\beta \in \{0,1/2,1\}$, which includes prominent examples such as Brownian motion, the Ornstein-Uhlenbeck process, geometric Brownian motion, and the Brownian bridge. Our estimators are constructed by relating the local (drift/diffusion) parameters of the diffusion to its global parameters (mean/covariance, and their derivatives) by means of an apparently novel PDE. This allows us to use methods inspired by functional data analysis and to pool information across the sparsely measured paths. The methodology we develop is fully nonparametric and avoids any functional-form specification of the time-dependency of either the drift function or the diffusion function. We establish almost sure uniform asymptotic convergence rates of the proposed estimators as the number of observed curves $n$ grows to infinity. Our rates are non-asymptotic in the number of measurements per path, explicitly reflecting how different sampling frequencies affect the speed of convergence. Our framework suggests possible further fruitful interactions between FDA and SDE methods in problems with replication.
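As a hedged worked instance of the kind of local-to-global link the abstract alludes to, consider the special case $\alpha = 1$, $\beta = 0$ (an Ornstein-Uhlenbeck-type equation); the paper's actual PDE relates the full mean/covariance surface, but the moment ODEs below show the mechanism:

```latex
% Moment equations for dX_t = \mu(t) X_t \, dt + \sigma(t) \, dW_t
% (the case \alpha = 1, \beta = 0). With m(t) = E[X_t] and V(t) = Var(X_t),
% Ito's formula yields
\begin{aligned}
  m'(t) &= \mu(t)\, m(t)
    &&\Longrightarrow\quad \mu(t) = \frac{m'(t)}{m(t)}, \\
  V'(t) &= 2\mu(t)\, V(t) + \sigma^2(t)
    &&\Longrightarrow\quad \sigma^2(t) = V'(t) - 2\mu(t)\, V(t).
\end{aligned}
% Estimating m, V and their derivatives by pooling the sparse paths
% (FDA-style smoothing) then gives plug-in estimators of drift and diffusion.
```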

The combination of neural network potentials (NNPs) with molecular simulations plays an important role in an efficient and thorough understanding of a molecular system's potential energy surface (PES). However, the interplay between input features and their local contributions to the NNP is increasingly hard to grasp due to heavy featurization. In this work, we propose an end-to-end model that directly predicts per-atom energies from the coordinates of particles, avoiding expert-guided featurization of the network input. Employing self-attention as the main workhorse, our model is intrinsically equivariant under permutation of the particles, which makes the total potential energy invariant. We tested our model against several challenges in molecular simulation, including periodic boundary conditions (PBC), $n$-body interactions, and binary composition. Our model yielded stable predictions in all tested systems, with errors significantly smaller than the potential energy fluctuations observed in molecular dynamics simulations. Our work thus provides a minimal baseline model that encodes complex interactions in condensed-phase systems and facilitates the data-driven analysis of physicochemical systems.
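A minimal PyTorch sketch of the architectural idea: self-attention over per-atom features, a per-atom energy head, and summation to a permutation-invariant total energy. The layer sizes and the crude distance featurization are assumptions, not the paper's model:

```python
import torch
import torch.nn as nn

class AttentionNNP(nn.Module):
    """Per-atom energies from coordinates via self-attention (sketch).

    Self-attention is permutation-equivariant, so summing the per-atom
    energies yields a permutation-invariant total potential energy.
    """
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.embed = nn.Linear(1, d_model)   # embed a per-atom scalar feature
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(d_model, d_model), nn.SiLU(),
                                  nn.Linear(d_model, 1))

    def forward(self, pos):                       # pos: (batch, n_atoms, 3)
        dist = torch.cdist(pos, pos)              # pairwise distances (b, n, n)
        feat = self.embed(dist.mean(-1, keepdim=True))  # crude invariant summary
        h, _ = self.attn(feat, feat, feat)        # permutation-equivariant mixing
        e_atom = self.head(h).squeeze(-1)         # per-atom energies (b, n)
        return e_atom.sum(-1)                     # invariant total energy (b,)

energy = AttentionNNP()(torch.randn(2, 5, 3))     # toy batch: 2 systems, 5 atoms
```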

We focus on the problem of manifold estimation: given a set of observations sampled close to some unknown submanifold $M$, one wants to recover information about the geometry of $M$. The minimax estimators proposed so far all depend crucially on a priori knowledge of parameters quantifying the underlying distribution generating the sample (such as bounds on its density), whereas these quantities are unknown in practice. Our contribution is twofold: first, we introduce a one-parameter family of manifold estimators $(\hat{M}_t)_{t\geq 0}$ based on a localized version of convex hulls, and show that for some choice of $t$, the corresponding estimator is minimax on the class of models of $C^2$ manifolds introduced in [Genovese et al., Manifold estimation and singular deconvolution under Hausdorff loss]. Second, we propose a completely data-driven selection procedure for the parameter $t$, leading to a minimax adaptive manifold estimator on this class of models. This selection procedure actually allows us to recover the Hausdorff distance between the set of observations and $M$, and can therefore be used as a scale parameter in other settings, such as tangent space estimation.
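A hedged sketch of the localized-convex-hull construction: for each observation, take the convex hull of the sample points within distance $t$ of it; the union of these local hulls plays the role of $\hat{M}_t$. The helper names are illustrative:

```python
import numpy as np
from scipy.spatial import ConvexHull, cKDTree

def localized_hulls(X, t):
    """For each observation x in X, the convex hull of the sample points
    within distance t of x (sketch). The union of these hulls over the
    sample plays the role of the estimator M_t-hat."""
    tree = cKDTree(X)
    hulls = []
    for x in X:
        idx = tree.query_ball_point(x, r=t)   # neighbors within radius t
        pts = X[idx]
        if len(pts) > X.shape[1]:             # need at least d+1 points in R^d
            hulls.append(ConvexHull(pts))
    return hulls

X = np.random.default_rng(1).normal(size=(200, 2))  # toy sample
hulls = localized_hulls(X, t=0.3)
```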

In this paper, several related estimation problems are addressed from a Bayesian point of view, and optimal estimators are obtained for each of them under natural loss functions. Specifically, we are interested in estimating a regression curve; simultaneously, we consider the estimation of a conditional distribution function, a conditional density, and even the conditional distribution itself. All these problems are posed in a framework general enough to cover continuous and discrete, univariate and multivariate, parametric and nonparametric cases, without the need for a specific prior distribution. The loss functions considered arise naturally from the quadratic error loss commonly used when estimating a real function of the unknown parameter. The cornerstone of the resulting Bayes estimators is the posterior predictive distribution. Some examples are provided to illustrate these results.
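As a toy instance of the role of the posterior predictive distribution: in a conjugate model, the Bayes estimate of the regression curve under quadratic loss is the posterior predictive mean. The specific model below (normal likelihood with a normal prior on a single slope) is an assumption chosen for brevity:

```python
import numpy as np

def posterior_predictive_mean(x_new, x, y, sigma2=1.0, tau2=10.0):
    """Bayes estimate of the regression curve under quadratic loss in the
    toy conjugate model y_i ~ N(theta * x_i, sigma2), theta ~ N(0, tau2):
    the posterior predictive mean E[Y_new | data] = E[theta | data] * x_new."""
    prec = 1.0 / tau2 + (x @ x) / sigma2      # posterior precision of theta
    theta_mean = (x @ y / sigma2) / prec      # posterior mean of theta
    return theta_mean * x_new

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 50)
y = 2.0 * x + rng.normal(0, 0.1, 50)
print(posterior_predictive_mean(0.5, x, y))   # close to the true value 1.0
```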

Nonlinear time-fractional partial differential equations are widely used in modeling and simulation. Many applications involve high-contrast changes in media properties, and one often uses a coarse spatial grid for spatial resolution. For temporal discretization, implicit methods are common: although the time step can be relatively large, the resulting equations are difficult to solve due to the nonlinearity and the large scale of the systems. Explicit methods, on the other hand, yield discrete systems that are easier to compute but require small time steps. In this work, we propose a partially explicit scheme, following our earlier works on partially explicit methods for nonlinear diffusion equations. In this scheme, the diffusion term is treated partially explicitly and the reaction term fully explicitly. With an appropriate construction of spaces and a stability analysis, we show that the required time step in our proposed scheme scales with the coarse mesh size, which yields substantial computational savings. The main novelty of this work is the extension of our earlier work on diffusion equations to time-fractional diffusion equations, for which the time-step constraints are more severe; the proposed method alleviates them since its time step scales with the coarse mesh size. We present stability results, along with numerical experiments comparing the proposed partially explicit methods with a fully implicit approach. We show that our approach provides similar results while treating many degrees of freedom in the nonlinear terms explicitly.
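A hedged sketch of one partially explicit time step for $D_t^\alpha u + Au = f(u)$, using the standard L1 discretization of the Caputo derivative. The two-component splitting, the omission of cross-coupling terms between the components, and all names are illustrative simplifications of the scheme described above:

```python
import numpy as np
from math import gamma

def partially_explicit_l1_step(u_hist1, u_hist2, A11, A22, f, dt, alpha):
    """One partially explicit L1/IMEX step for D_t^alpha u + A u = f(u),
    with u split as u = u1 + u2 (sketch). The coarse component u1 is
    advanced implicitly; u2 and the nonlinear reaction f are advanced
    explicitly, so stability constrains dt only through the coarse mesh.
    u_hist1, u_hist2 : lists of past iterates [u^0, ..., u^n] per component.
    """
    c = dt**(-alpha) / gamma(2 - alpha)          # leading L1 coefficient

    def memory(hist):                            # L1 history (memory) term
        n = len(hist) - 1
        k = np.arange(1, n + 1)
        b = (k + 1.0)**(1 - alpha) - k**(1.0 - alpha)   # L1 weights b_1..b_n
        return c * sum(b[j - 1] * (hist[n - j + 1] - hist[n - j])
                       for j in range(1, n + 1))

    u1, u2 = u_hist1[-1], u_hist2[-1]
    rhs1 = c * u1 - memory(u_hist1) + f(u1 + u2)            # reaction explicit
    u1_new = np.linalg.solve(c * np.eye(len(u1)) + A11, rhs1)  # implicit in u1
    u2_new = u2 + (f(u1 + u2) - A22 @ u2 - memory(u_hist2)) / c  # explicit in u2
    return u1_new, u2_new
```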

Alternating Direction Method of Multipliers (ADMM) is a widely used tool for machine learning in distributed settings, where a machine learning model is trained over distributed data sources through an interactive process of local computation and message passing. Such an iterative process can raise privacy concerns for data owners. The goal of this paper is to provide differential privacy for ADMM-based distributed machine learning. Prior approaches to differentially private ADMM exhibit low utility under strong privacy guarantees and often assume that the objective functions of the learning problems are smooth and strongly convex. To address these concerns, we propose a novel differentially private ADMM-based distributed learning algorithm called DP-ADMM, which combines an approximate augmented Lagrangian function with time-varying Gaussian noise addition in the iterative process to achieve higher utility for general objective functions under the same differential privacy guarantee. We also apply the moments accountant method to bound the end-to-end privacy loss. The theoretical analysis shows that DP-ADMM can be applied to a wider class of distributed learning problems, is provably convergent, and offers an explicit utility-privacy tradeoff. To our knowledge, this is the first paper to provide explicit convergence and utility properties for differentially private ADMM-based distributed learning algorithms. The evaluation results demonstrate that our approach achieves good convergence and model accuracy under strong end-to-end differential privacy guarantees.
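A minimal sketch of the kind of local update such a method performs: a linearized (first-order approximate) augmented-Lagrangian step with time-varying Gaussian noise injected for privacy. The parameter names and the exact form of the step are assumptions, not the algorithm's published update:

```python
import numpy as np

def dp_admm_local_update(x, lam, neighbors_avg, grad, rho, eta_t, sigma_t, rng):
    """One noisy local primal/dual update in a DP-ADMM-style iteration (sketch).

    grad : gradient of the local loss at x (no smoothness/strong convexity
           assumed beyond what the first-order approximation needs)
    rho  : augmented-Lagrangian penalty,  eta_t : step size at iteration t
    sigma_t : time-varying Gaussian noise scale (a decreasing schedule
              trades utility against the privacy budget)
    """
    noise = rng.normal(0.0, sigma_t, size=x.shape)   # Gaussian mechanism
    g = grad + lam + rho * (x - neighbors_avg)       # linearized aug. Lagrangian
    x_new = x - eta_t * (g + noise)                  # noisy first-order step
    lam_new = lam + rho * (x_new - neighbors_avg)    # dual ascent on consensus
    return x_new, lam_new
```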

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
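The DRS construction rests on Gaussian smoothing: $f_\gamma(x) = \mathbb{E}[f(x + \gamma Z)]$ with $Z \sim \mathcal{N}(0, I_d)$ is smooth even when $f$ is not, and its gradient admits an unbiased Monte Carlo estimate. A minimal sketch of that estimate (the sampling scheme and constants are illustrative, not the full distributed algorithm):

```python
import numpy as np

def smoothed_gradient(f, x, gamma=0.1, n_samples=100, rng=None):
    """Monte Carlo gradient of the Gaussian smoothing
    f_gamma(x) = E[f(x + gamma * Z)], Z ~ N(0, I)  (sketch).

    f need not be differentiable; the estimator below is unbiased for
    grad f_gamma(x) = E[Z * f(x + gamma * Z)] / gamma.
    """
    rng = rng or np.random.default_rng()
    z = rng.standard_normal((n_samples, x.size))
    vals = np.array([f(x + gamma * zi) for zi in z])
    return (z * vals[:, None]).mean(axis=0) / gamma

# Toy usage: a non-smooth objective (the l1 norm).
g = smoothed_gradient(lambda v: np.abs(v).sum(), np.array([1.0, -2.0]))
print(g)   # close to the subgradient sign vector [1, -1]
```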

In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely: the function $F(x) \triangleq \sum_{i=1}^{m} f_i(x)$ is strongly convex and smooth, either strongly convex or smooth, or just convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and attains the same optimal rates as the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
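A sketch of why accelerated gradient on the dual runs distributedly: with the consensus constraint encoded through a gossip matrix $W$, each dual gradient evaluation is one local conjugate-oracle call plus one multiplication by $W$, i.e., one round of neighbor communication. The oracle interface and step-size choice below are assumptions:

```python
import numpy as np

def accelerated_dual_decentralized(local_argmin, W, n, d, step, iters=300):
    """Nesterov-accelerated gradient ascent on the (concave) dual of
       min_x sum_i f_i(x_i)  s.t.  W x = 0   (consensus constraint, sketch).

    The dual gradient at lam is W @ x*(lam), where
    x*(lam)_i = argmin_{x_i} f_i(x_i) + <(W lam)_i, x_i>
    is a purely local computation; each multiplication by W is one
    gossip/communication round over the network.
    """
    lam = lam_prev = np.zeros((n, d))
    x = np.zeros((n, d))
    for t in range(1, iters + 1):
        mu = lam + (t - 1) / (t + 2) * (lam - lam_prev)   # Nesterov momentum
        x = local_argmin(W @ mu)                          # local oracle calls
        lam, lam_prev = mu + step * (W @ x), lam          # dual gradient ascent
    return x
```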
