
Data dissemination is a fundamental task in distributed computing. This paper studies broadcast problems in various innovative models where the communication network connecting $n$ processes is dynamic (e.g., due to mobility or failures) and controlled by an adversary. In the first model, the processes transitively communicate their ids in synchronous rounds along a rooted tree given in each round by the adversary, whose goal is to maximize the number of rounds until at least one id is known by all processes. Previous research has shown a $\lceil{\frac{3n-1}{2}}\rceil-2$ lower bound and an $O(n\log\log n)$ upper bound. We show the first linear upper bound for this problem, namely $\lceil{(1 + \sqrt 2) n-1}\rceil \approx 2.4n$. We extend these results to the setting where the adversary gives in each round $k$ disjoint forests and its goal is to maximize the number of rounds until there is a set of $k$ ids such that each process knows of at least one of them. We give a $\left\lceil{\frac{3(n-k)}{2}}\right\rceil-1$ lower bound and a $\frac{\pi^2+6}{6}n+1 \approx 2.6n$ upper bound for this problem. Finally, we study the setting where the adversary gives in each round a directed graph with $k$ roots and its goal is to maximize the number of rounds until there exist $k$ ids that are known by all processes. We give a $\left\lceil{\frac{3(n-3k)}{2}}\right\rceil+2$ lower bound and a $\lceil { (1+\sqrt{2})n}\rceil+k-1 \approx 2.4n+k$ upper bound for this problem. For the latter two problems no upper or lower bounds were previously known.
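
A minimal simulation sketch of the first model may help fix ideas. It is hypothetical in two respects: it assumes the natural reading that in each round every process forwards all ids it currently knows to its children along that round's tree (one hop per round), and it uses a random adversary rather than the adaptive worst-case adversary behind the bounds above.

    import random

    def simulate_broadcast(n, rounds_cap=10_000, seed=0):
        """Single-root dissemination with a *random* (not worst-case) adversary.

        Each round the adversary supplies a rooted spanning tree; every process
        forwards all ids it currently knows to its children, one hop per round.
        We count rounds until some single id is known by every process.
        """
        rng = random.Random(seed)
        knows = [{i} for i in range(n)]          # knows[v] = ids process v has heard of
        for t in range(1, rounds_cap + 1):
            order = list(range(n))
            rng.shuffle(order)                   # order[0] is this round's root
            parent = {order[i]: order[rng.randrange(i)] for i in range(1, n)}
            old = [s.copy() for s in knows]
            for v, p in parent.items():          # child v receives what its parent knew
                knows[v] |= old[p]
            if any(all(i in knows[v] for v in range(n)) for i in range(n)):
                return t                         # some id is now known by all processes
        return None

    print(simulate_broadcast(20))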

Related Content

Networking: IFIP International Conferences on Networking. Explanation: international conference on networking. Publisher: IFIP. SIT:

Approximating convex bodies is a fundamental question in geometry and has a wide variety of applications. Consider a convex body $K$ of diameter $\Delta$ in $\mathbb{R}^d$ for fixed $d$. The objective is to minimize the number of vertices (alternatively, the number of facets) of an approximating polytope for a given Hausdorff error $\varepsilon$. It is known from classical results of Dudley (1974) and Bronshteyn and Ivanov (1976) that $\Theta((\Delta/\varepsilon)^{(d-1)/2})$ vertices (alternatively, facets) are both necessary and sufficient. While this bound is tight in the worst case, that of Euclidean balls, it is far from optimal for skinny convex bodies. A natural way to characterize a convex object's skinniness is in terms of its relationship to the Euclidean ball. Given a convex body $K$, define its \emph{volume diameter} $\Delta_d$ to be the diameter of a Euclidean ball of the same volume as $K$, and define its \emph{surface diameter} $\Delta_{d-1}$ analogously for surface area. It follows from generalizations of the isoperimetric inequality that $\Delta \geq \Delta_{d-1} \geq \Delta_d$. Arya, da Fonseca, and Mount (SoCG 2012) demonstrated that the diameter-based bound could be made surface-area sensitive, improving the above bound to $O((\Delta_{d-1}/\varepsilon)^{(d-1)/2})$. In this paper, we strengthen this by proving the existence of an approximation with $O((\Delta_d/\varepsilon)^{(d-1)/2})$ facets.
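
To make the strengthening explicit: since $\Delta \geq \Delta_{d-1} \geq \Delta_d$ and $x \mapsto x^{(d-1)/2}$ is nondecreasing for $x \geq 0$,
$$\left(\frac{\Delta_d}{\varepsilon}\right)^{(d-1)/2} \;\leq\; \left(\frac{\Delta_{d-1}}{\varepsilon}\right)^{(d-1)/2} \;\leq\; \left(\frac{\Delta}{\varepsilon}\right)^{(d-1)/2},$$
so the new facet bound is never worse than the surface-area-sensitive or the classical diameter-based bound, and all three coincide when $K$ is a Euclidean ball, for which $\Delta = \Delta_{d-1} = \Delta_d$.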

In fixed budget bandit identification, an algorithm sequentially observes samples from several distributions up to a given final time. It then answers a query about the set of distributions. A good algorithm will have a small probability of error. While that probability decreases exponentially with the final time, the best attainable rate is not known precisely for most identification tasks. We show that if a fixed budget task admits a complexity, defined as a lower bound on the probability of error which is attained by a single algorithm on all bandit problems, then that complexity is determined by the best non-adaptive sampling procedure for that problem. We show that there is no such complexity for several fixed budget identification tasks including Bernoulli best arm identification with two arms: there is no single algorithm that attains everywhere the best possible rate.
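
As an illustration of what a non-adaptive sampling procedure looks like in this setting, here is a hedged Python sketch for Bernoulli best-arm identification with a fixed budget: the allocation (round-robin) is decided before any observation arrives, and the answer is the empirically best arm. This is only a stand-in example, not the characterization or the algorithms from the paper.

    import random

    def uniform_allocation_bai(arms, budget, seed=0):
        """Non-adaptive fixed-budget best-arm identification.

        `arms` is a list of Bernoulli means (unknown to the learner; used here
        only to draw samples). The schedule is fixed in advance: each arm gets
        an equal share of the budget. At the final time the algorithm answers
        the query "which arm has the largest mean?" with the empirical best.
        """
        rng = random.Random(seed)
        k = len(arms)
        pulls, sums = [0] * k, [0.0] * k
        for t in range(budget):
            a = t % k                                  # round-robin: ignores observations
            pulls[a] += 1
            sums[a] += 1.0 if rng.random() < arms[a] else 0.0
        means = [s / max(p, 1) for s, p in zip(sums, pulls)]
        return max(range(k), key=lambda a: means[a])   # recommended arm

    # Two Bernoulli arms, as in the negative result quoted above.
    errors = sum(uniform_allocation_bai([0.5, 0.6], budget=200, seed=s) != 1
                 for s in range(1000))
    print(f"empirical error rate: {errors / 1000:.3f}")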

We contribute to a better understanding of the class of functions that can be represented by a neural network with ReLU activations and a given architecture. Using techniques from mixed-integer optimization, polyhedral theory, and tropical geometry, we provide a mathematical counterbalance to the universal approximation theorems which suggest that a single hidden layer is sufficient for learning any function. In particular, we investigate whether the class of exactly representable functions strictly increases by adding more layers (with no restrictions on size). As a by-product of our investigations, we settle an old conjecture about piecewise linear functions by Wang and Sun (2005) in the affirmative. We also present upper bounds on the sizes of neural networks required to represent functions with logarithmic depth.
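
As a small worked example of exact representability (a standard identity, not a result of the paper): $\max(x,y) = \mathrm{ReLU}(x-y) + \mathrm{ReLU}(y) - \mathrm{ReLU}(-y)$, so the maximum of two inputs is computed exactly by a ReLU network with a single hidden layer of three units. Whether analogous functions of more arguments require additional hidden layers is the kind of depth question investigated here.

    import numpy as np

    def relu(z):
        return np.maximum(z, 0.0)

    def max_net(x, y):
        """One-hidden-layer ReLU network computing max(x, y) exactly,
        via max(x, y) = ReLU(x - y) + ReLU(y) - ReLU(-y)."""
        W1 = np.array([[1.0, -1.0],   # x - y
                       [0.0,  1.0],   #     y
                       [0.0, -1.0]])  #    -y
        w2 = np.array([1.0, 1.0, -1.0])
        h = relu(W1 @ np.array([x, y]))
        return float(w2 @ h)

    for x, y in [(3.0, -1.5), (-2.0, 7.0), (0.0, 0.0)]:
        assert max_net(x, y) == max(x, y)
    print("max(x, y) is exactly representable with one hidden layer")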

Many machine learning problems can be framed in the context of estimating functions, and often these are time-dependent functions that are estimated in real-time as observations arrive. Gaussian processes (GPs) are an attractive choice for modeling real-valued nonlinear functions due to their flexibility and uncertainty quantification. However, the typical GP regression model suffers from several drawbacks: 1) Conventional GP inference scales as $O(N^{3})$ in the number of observations; 2) Updating a GP model sequentially is not trivial; and 3) Covariance kernels typically enforce stationarity constraints on the function, while GPs with non-stationary covariance kernels are often intractable to use in practice. To overcome these issues, we propose a sequential Monte Carlo algorithm to fit infinite mixtures of GPs that capture non-stationary behavior while allowing for online, distributed inference. Our approach empirically improves performance over state-of-the-art methods for online GP estimation in the presence of non-stationarity in time-series data. To demonstrate the utility of our proposed online Gaussian process mixture-of-experts approach in applied settings, we show that we can successfully implement an optimization algorithm using online Gaussian process bandits.
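
Drawback 1) can be made concrete with a minimal sketch of exact GP regression (textbook formulas with a squared-exponential kernel; this is the baseline being improved upon, not the authors' sequential Monte Carlo mixture algorithm). The posterior mean requires solving an $N \times N$ linear system, which is the $O(N^{3})$ cost that motivates online approximations.

    import numpy as np

    def sq_exp_kernel(A, B, lengthscale=1.0, variance=1.0):
        """Squared-exponential (stationary) covariance kernel for 1-D inputs."""
        d2 = (A[:, None] - B[None, :]) ** 2
        return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

    def gp_posterior_mean(x_train, y_train, x_test, noise=0.1):
        """Exact GP regression posterior mean.

        Solving the N x N system below is the O(N^3) bottleneck; it must be
        redone whenever a new observation arrives, which is why sequential
        and online approximations are attractive.
        """
        K = sq_exp_kernel(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
        K_star = sq_exp_kernel(x_test, x_train)
        alpha = np.linalg.solve(K, y_train)       # O(N^3)
        return K_star @ alpha

    x = np.linspace(0, 10, 50)
    y = np.sin(x) + 0.1 * np.random.default_rng(0).standard_normal(50)
    print(gp_posterior_mean(x, y, np.array([2.5, 7.5])))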

We investigate the problem of bandits with expert advice when the experts are fixed and known distributions over the actions. Improving on previous analyses, we show that the regret in this setting is controlled by information-theoretic quantities that measure the similarity between experts. In some natural special cases, this allows us to obtain the first regret bound for EXP4 that can get arbitrarily close to zero if the experts are similar enough. For a different algorithm, we provide another bound that describes the similarity between the experts in terms of the KL-divergence, and we show that this bound can be smaller than the one for EXP4 in some cases. Additionally, we provide lower bounds for certain classes of experts showing that the algorithms we analyzed are nearly optimal in some cases.
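
For reference, a compact sketch of the standard EXP4 update (exponential weights over the fixed expert distributions, importance-weighted reward estimates, uniform exploration). The parameter choices below are generic placeholders, not the tuned values from the regret analysis.

    import math, random

    def exp4(experts, reward_fn, horizon, gamma=0.1, seed=0):
        """EXP4 with a fixed set of expert distributions over K actions.

        experts[i] is a probability vector over actions (the fixed, known
        distributions of the setting above); reward_fn(t, a) returns a reward
        in [0, 1]. gamma controls both exploration and the learning rate.
        """
        rng = random.Random(seed)
        n_exp, K = len(experts), len(experts[0])
        w = [1.0] * n_exp
        total = 0.0
        for t in range(horizon):
            W = sum(w)
            # Action distribution: weighted mixture of experts plus uniform exploration.
            p = [(1 - gamma) * sum(w[i] * experts[i][a] for i in range(n_exp)) / W
                 + gamma / K for a in range(K)]
            a = rng.choices(range(K), weights=p)[0]
            r = reward_fn(t, a)
            total += r
            x_hat = r / p[a]                           # importance-weighted reward estimate
            for i in range(n_exp):
                y_hat = experts[i][a] * x_hat          # expert i's estimated reward
                w[i] *= math.exp(gamma * y_hat / K)
            m = max(w)
            w = [wi / m for wi in w]                   # rescale to avoid overflow
        return total

    experts = [[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]     # three fixed distributions over 2 actions
    print(exp4(experts, lambda t, a: 1.0 if a == 0 else 0.2, horizon=2000))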

A graph is $2$-planar if it has local crossing number at most two, that is, it can be drawn in the plane such that every edge has at most two crossings. A graph is maximal $2$-planar if no edge can be added such that the resulting graph remains $2$-planar. A $2$-planar graph on $n$ vertices has at most $5n-10$ edges, and some (maximal) $2$-planar graphs -- referred to as optimal $2$-planar -- achieve this bound. However, in strong contrast to maximal planar graphs, a maximal $2$-planar graph may have fewer than the maximum possible number of edges. In this paper, we determine the minimum edge density of maximal $2$-planar graphs by proving that every maximal $2$-planar graph on $n\ge 5$ vertices has at least $2n$ edges. We also show that this bound is tight, up to an additive constant. The lower bound is based on an analysis of the degree distribution in specific classes of drawings of the graph. The upper bound construction is verified by carefully exploring the space of admissible drawings using computer support.

(Stochastic) bilevel optimization is a frequently encountered problem in machine learning with a wide range of applications such as meta-learning, hyper-parameter optimization, and reinforcement learning. Most of the existing studies on this problem focused only on analyzing the convergence or improving the convergence rate, while little effort has been devoted to understanding its generalization behavior. In this paper, we conduct a thorough analysis of the generalization of first-order (gradient-based) methods for the bilevel optimization problem. We first establish a fundamental connection between algorithmic stability and generalization error in different forms and give a high-probability generalization bound which improves the previous best one from $O(\sqrt{n})$ to $O(\log n)$, where $n$ is the sample size. We then provide the first stability bounds for the general case where both inner and outer level parameters are subject to continuous updates, whereas existing work allows only the outer level parameter to be updated. Our analysis can be applied in various standard settings such as strongly-convex-strongly-convex (SC-SC), convex-convex (C-C), and nonconvex-nonconvex (NC-NC). Our analysis for the NC-NC setting can also be extended to a particular nonconvex-strongly-convex (NC-SC) setting that is commonly encountered in practice. Finally, we corroborate our theoretical analysis and demonstrate how the number of iterations can affect the generalization error through experiments on meta-learning and hyper-parameter optimization.
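
A schematic of the kind of two-level first-order scheme covered by such an analysis, on a hypothetical toy quadratic problem where the implicit derivative $dy^*/dx$ is known in closed form: the inner parameter is updated by several gradient steps per outer step, and the outer parameter by a hypergradient step, so both levels are continuously updated as in the general case described above. This illustrates the algorithmic skeleton only, not any specific method from the paper.

    # Toy bilevel problem (hypothetical illustration):
    #   outer: min_x  F(x, y*(x)) = 0.5 * (y*(x) - 3.0)**2 + 0.05 * x**2
    #   inner: y*(x) = argmin_y 0.5 * (y - 2.0 * x)**2
    # For this quadratic inner problem dy*/dx = 2.0, so the exact hypergradient
    # is available in closed form and the iterates can be checked by hand.

    def inner_grad(x, y):      # dG/dy
        return y - 2.0 * x

    def hypergradient(x, y):   # dF/dy * dy*/dx + direct dF/dx
        return (y - 3.0) * 2.0 + 0.1 * x

    x, y = 0.0, 0.0
    alpha, beta, K = 0.05, 0.5, 10        # outer step, inner step, inner steps per outer step
    for t in range(200):
        for _ in range(K):                 # inner loop: both levels are updated repeatedly
            y -= beta * inner_grad(x, y)
        x -= alpha * hypergradient(x, y)   # outer step at the current inner iterate

    print(f"x = {x:.3f}, y = {y:.3f}")     # expect y to track 2x, with y close to 3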

Network connectivity exposes the network infrastructure and assets to vulnerabilities that attackers can exploit. Protecting network assets against attacks requires the application of security countermeasures. Nevertheless, employing countermeasures incurs costs, such as monetary costs, along with the time and energy needed to prepare and deploy them. Thus, an Intrusion Response System (IRS) should consider both security and QoS costs when dynamically selecting the countermeasures to address the detected attacks. This motivates us to formulate a joint security-vs-QoS optimization problem for selecting the best countermeasures in an IRS. The problem is then transformed into a matching game-theoretic model. Considering the monetary costs and attack coverage constraints, we first derive the theoretical upper bound for the problem and then propose stable-matching-based solutions to address the trade-off. The performance of the proposed solution is validated through a series of simulations under different settings.
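
Since the proposed solutions are based on stable matching, a minimal deferred-acceptance (Gale-Shapley) sketch is given below, matching detected attacks to countermeasures under hypothetical preference lists (e.g., attacks ranking countermeasures by coverage, countermeasures ranking attacks by cost-effectiveness). The actual preference construction, monetary budgets, and coverage constraints of the paper are not modeled here.

    def deferred_acceptance(attack_prefs, cm_prefs):
        """One-to-one Gale-Shapley: attacks propose, countermeasures accept/reject.

        attack_prefs[a] = list of countermeasures, most preferred first
        cm_prefs[c]     = list of attacks, most preferred first
        Returns a stable matching {attack: countermeasure}.
        """
        rank = {c: {a: i for i, a in enumerate(prefs)} for c, prefs in cm_prefs.items()}
        free = list(attack_prefs)                    # attacks not yet matched
        next_choice = {a: 0 for a in attack_prefs}
        engaged = {}                                 # countermeasure -> attack
        while free:
            a = free.pop()
            c = attack_prefs[a][next_choice[a]]
            next_choice[a] += 1
            if c not in engaged:
                engaged[c] = a
            elif rank[c][a] < rank[c][engaged[c]]:   # c prefers the new proposer
                free.append(engaged[c])
                engaged[c] = a
            else:
                free.append(a)
        return {a: c for c, a in engaged.items()}

    attack_prefs = {"scan": ["filter", "patch"], "dos": ["filter", "patch"]}
    cm_prefs = {"filter": ["dos", "scan"], "patch": ["scan", "dos"]}
    print(deferred_acceptance(attack_prefs, cm_prefs))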

We consider a binary supervised classification problem where, instead of having data in a finite-dimensional Euclidean space, we observe measures on a compact space $\mathcal{X}$. Formally, we observe data $D_N = (\mu_1, Y_1), \ldots, (\mu_N, Y_N)$ where $\mu_i$ is a measure on $\mathcal{X}$ and $Y_i$ is a label in $\{0, 1\}$. Given a set $\mathcal{F}$ of base classifiers on $\mathcal{X}$, we build corresponding classifiers in the space of measures. We provide upper and lower bounds on the Rademacher complexity of this new class of classifiers that can be expressed simply in terms of corresponding quantities for the class $\mathcal{F}$. If the measures $\mu_i$ are uniform over a finite set, this classification task boils down to a multi-instance learning problem. However, our approach allows more flexibility and diversity in the input data we can deal with. While such a framework has many possible applications, this work strongly emphasizes classifying data via topological descriptors called persistence diagrams. These objects are discrete measures on $\mathbb{R}^2$, where the coordinates of each point correspond to the range of scales at which a topological feature exists. We present several classifiers on measures and show how they can, both heuristically and theoretically, achieve good classification performance in various settings in the case of persistence diagrams.
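
One hypothetical way to make the lifting concrete (an illustration only; the paper's construction of classifiers on measures may differ): given a base function $f$ on $\mathcal{X}$, classify a discrete measure $\mu$ by integrating $f$ against $\mu$ and thresholding. For a persistence diagram, $\mu$ is a finite set of weighted points in $\mathbb{R}^2$.

    import numpy as np

    def measure_classifier(base_fn, threshold):
        """Lift a base function f on R^2 to a classifier on discrete measures.

        A discrete measure mu is given as an array of points with weights; the
        lifted classifier predicts 1 iff the integral of f against mu exceeds
        the threshold. (Hypothetical construction, for illustration only.)
        """
        def classify(points, weights):
            integral = float(np.dot(weights, base_fn(points)))
            return int(integral > threshold)
        return classify

    # A persistence diagram as a discrete measure on R^2: points (birth, death).
    diagram = np.array([[0.10, 0.20], [0.15, 0.90], [0.30, 0.35]])
    weights = np.ones(len(diagram))

    # Base function: persistence (death - birth), so the lifted classifier fires
    # when the diagram carries enough total persistence.
    persistence = lambda pts: pts[:, 1] - pts[:, 0]
    clf = measure_classifier(persistence, threshold=0.5)
    print(clf(diagram, weights))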

The determinant lower bound of Lovász, Spencer, and Vesztergombi [European Journal of Combinatorics, 1986] is a powerful general way to prove lower bounds on the hereditary discrepancy of a set system. In their paper, Lovász, Spencer, and Vesztergombi asked whether the hereditary discrepancy can also be bounded from above by a function of the determinant lower bound. This was answered in the negative by Hoffman, and the largest known multiplicative gap between the two quantities for a set system of $m$ subsets of a universe of size $n$ is on the order of $\max\{\log n, \sqrt{\log m}\}$. On the other hand, building on work of Matou\v{s}ek [Proceedings of the AMS, 2013], Jiang and Reis [SOSA, 2022] recently showed that this gap is always bounded up to constants by $\sqrt{\log(m)\log(n)}$. This is tight when $m$ is polynomial in $n$, but leaves open what happens for larger $m$. We show that the bound of Jiang and Reis is tight for nearly the entire range of $m$. Our proof relies on a technique of amplifying discrepancy via taking Kronecker products, and on discrepancy lower bounds for a set system derived from the discrete Haar basis.
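
On a toy scale, the amplification device can be illustrated as follows (a hypothetical example, not the set systems from the paper): taking the Kronecker product of two incidence matrices yields the incidence matrix of a product set system, and for tiny systems the discrepancy $\min_{x \in \{-1,1\}^n} \|Ax\|_\infty$ can be computed by brute force.

    import itertools
    import numpy as np

    def discrepancy(A):
        """Brute-force discrepancy: min over +/-1 colorings of the max row imbalance."""
        n = A.shape[1]
        best = np.inf
        for signs in itertools.product([-1, 1], repeat=n):
            best = min(best, np.max(np.abs(A @ np.array(signs))))
        return int(best)

    # A tiny set system (rows = sets, columns = elements) and its Kronecker square.
    A = np.array([[1, 1, 0],
                  [0, 1, 1],
                  [1, 0, 1]])
    A2 = np.kron(A, A)   # 9 sets over 9 elements

    print(discrepancy(A), discrepancy(A2))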
