
A consensus mechanism is proposed to facilitate radio spectrum sharing with accountability in a network of multiple operators, a subset of which may be adversarial. A distributed ledger is used to securely record and track the state of consensus on spectrum usage, including interference incidents and the corresponding responsible parties. A key challenge is that the operators generally do not start from agreement, because their analog measurements are noisy. To meet this challenge, two categories of spectrum-sharing solutions are studied in detail: the first employs an exact Byzantine fault tolerant (BFT) agreement model; the second utilizes an approximate BFT agreement model. This paper also delves into the application of consensus protocols to the specific context of low Earth orbit (LEO) non-geostationary satellite networks, also known as mega-constellations.
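To make the approximate-agreement idea concrete, the following sketch (not taken from the paper; the round structure and the standard fault bound n > 3f are assumptions borrowed from the approximate-agreement literature) shows one synchronous round in which each operator trims extreme reported measurements before averaging, so that a bounded number of adversarial reports cannot pull the result outside the range of honest measurements:

```python
def approximate_agreement_round(values, f):
    """One round of synchronous approximate agreement: discard the f
    lowest and f highest received values (any of which may come from
    Byzantine operators), then average the remainder.
    Standard analyses assume n > 3f participants."""
    s = sorted(values)
    trimmed = s[f:len(s) - f]
    return sum(trimmed) / len(trimmed)

# Honest operators measure roughly -90 dBm with analog noise;
# one adversarial operator reports a wildly wrong value.
measurements = [-90.2, -89.7, -90.5, -89.9, 40.0]
agreed = approximate_agreement_round(measurements, f=1)
# The adversarial value 40.0 is trimmed away before averaging.
```

Repeating such rounds shrinks the spread of honest values geometrically, which is why noisy analog measurements can still converge to an agreed spectrum-usage record without exact initial agreement.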

Related content

Distributed deep neural networks (DNNs) have been shown to reduce the computational burden of mobile devices and decrease end-to-end inference latency in edge computing scenarios. While distributed DNNs have been studied, to the best of our knowledge the resilience of distributed DNNs to adversarial action remains an open problem. In this paper, we fill this research gap by rigorously analyzing the robustness of distributed DNNs against adversarial action. We cast this problem in the context of information theory and introduce two new measures of distortion and robustness. Our theoretical findings indicate that (i) assuming the same level of information distortion, latent features are always more robust than input representations; and (ii) adversarial robustness is jointly determined by the feature dimension and the generalization capability of the DNN. To test our theoretical findings, we perform an extensive experimental analysis considering 6 different DNN architectures, 6 different approaches to distributed DNNs, and 10 different adversarial attacks on the ImageNet-1K dataset. Our experimental results support our theoretical findings by showing that the compressed latent representations can reduce the success rate of adversarial attacks by 88% in the best case and by 57% on average, compared to attacks on the input space.

Algorithmic recourse -- providing recommendations to those affected negatively by the outcome of an algorithmic system on how they can take action and change that outcome -- has gained attention as a means of giving persons agency in their interactions with artificial intelligence (AI) systems. Recent work has shown that even if an AI decision-making classifier is ``fair'' (according to some reasonable criteria), recourse itself may be unfair due to differences in the initial circumstances of individuals, compounding disparities for marginalized populations and requiring them to exert more effort than others. There is a need to define more methods and metrics for evaluating fairness in recourse that span a range of normative views of the world, and specifically those that take into account time. Time is a critical element in recourse because the longer it takes an individual to act, the more the setting may change due to model or data drift. This paper seeks to close this research gap by proposing two notions of fairness in recourse that are in normative alignment with substantive equality of opportunity, and that consider time. The first considers the (often repeated) effort individuals exert per successful recourse event, and the second considers time per successful recourse event. Building upon an agent-based framework for simulating recourse, this paper demonstrates how much effort is needed to overcome disparities in initial circumstances. We then propose an intervention to improve the fairness of recourse by rewarding effort, and compare it to existing strategies.
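The first proposed fairness notion, effort per successful recourse event, can be sketched as a simple per-group aggregate (the event-tuple format and function name below are a hypothetical interface for illustration, not the paper's):

```python
from collections import defaultdict

def effort_per_success(events):
    """events: iterable of (group, effort, success) tuples from a
    recourse simulation. Returns, per group, the total effort expended
    divided by the number of successful recourse events, i.e. the mean
    effort paid per success (failed attempts still cost effort)."""
    effort = defaultdict(float)
    wins = defaultdict(int)
    for group, e, success in events:
        effort[group] += e
        wins[group] += int(success)
    return {g: effort[g] / wins[g] for g in effort if wins[g]}
```

The second notion, time per successful recourse event, would be computed analogously with elapsed time in place of effort; comparing the per-group values then exposes disparities in initial circumstances.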

Glyphosate contamination in waters is becoming a major health problem that needs to be urgently addressed, as accidental spraying, drift or leakage of this highly water-soluble herbicide can impact aquatic ecosystems. Researchers are increasingly concerned about exposure to glyphosate and the risks it poses to human health, since it may cause substantial damage, even in small doses. The detection of glyphosate residues in waters is not a simple task, as it requires complex and expensive equipment and qualified personnel. New technological tools need to be designed and developed, based on proven, but also cost-efficient, agile and user-friendly, analytical techniques, which can be used in the field and in the lab, enabled by connectivity and multi-platform software applications. This paper presents the design, development and testing of an innovative low-cost VIS-NIR (Visible and Near-Infrared) spectrometer (called SpectroGLY), based on IoT (Internet of Things) technologies, which allows potential glyphosate contamination in waters to be detected. SpectroGLY combines the functional concept of a traditional lab spectrometer with the IoT technological concept, enabling the integration of several connectivity options for rural and urban settings and digital visualization and monitoring platforms (Mobile App and Dashboard Web). Thanks to its portability, it can be used in any context and provides results in 10 minutes. Additionally, it is unnecessary to transfer the sample to a laboratory (optimizing time, costs and the capacity for corrective actions by the authorities). In short, this paper proposes an innovative, low-cost, agile and highly promising solution to help prevent potential poisoning due to ingestion of water contaminated by this herbicide.

The frequency with which the letters of the English alphabet appear in writing has been applied to the field of cryptography, the development of keyboard mechanics, and the study of linguistics. We expanded on the statistical analysis of the English alphabet by examining the average frequency with which each letter appears in different categories of writing. We evaluated news articles, novels, plays, and scientific publications, and calculated the frequency of each letter of the alphabet, the information density of each letter, and the overall letter distribution. Furthermore, we developed a metric, termed the distance d, that can be used to algorithmically recognize different categories of writing. The results of our study can be applied to information transmission, large-scale data curation, and linguistics.
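The frequency computation can be written in a few lines; the Euclidean form of the distance d below is an assumption for illustration, as the abstract does not give the paper's exact definition:

```python
from collections import Counter

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def letter_frequencies(text):
    """Relative frequency of each alphabet letter in `text`,
    ignoring case and non-letter characters."""
    counts = Counter(c for c in text.lower() if c in ALPHABET)
    total = sum(counts.values())
    return [counts[c] / total for c in ALPHABET]

def distance(freq_a, freq_b):
    """Euclidean distance between two 26-dimensional frequency
    vectors (a hypothetical stand-in for the paper's metric d)."""
    return sum((a - b) ** 2 for a, b in zip(freq_a, freq_b)) ** 0.5
```

Classifying a new text would then amount to computing its frequency vector and picking the category whose reference vector minimizes the distance.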

Missing data is a pernicious problem in epidemiologic research. Research on the validity of complete case analysis for missing data has typically focused on estimating the average treatment effect (ATE) in the whole population. However, other target populations like the treated (ATT) or external targets can be of substantive interest. In such cases, whether missing covariate data occurs within or outside the target population may impact the validity of complete case analysis. We sought to assess bias in complete case analysis when covariate data is missing outside the target (e.g., missing covariate data among the untreated when estimating the ATT). We simulated a study of the effect of a binary treatment X on a binary outcome Y in the presence of 3 confounders C1-C3 that modified the risk difference (RD). We induced missingness in C1 only among the untreated under 4 scenarios: completely randomly (similar to MCAR); randomly based on C2 and C3 (similar to MAR); randomly based on C1 (similar to MNAR); or randomly based on Y (similar to MAR). We estimated the ATE and ATT using weighting and averaged results across the replicates. We conducted a parallel simulation transporting trial results to a target population in the presence of missing covariate data in the trial. In the complete case analysis, the estimated ATE was unbiased only when C1 was MCAR among the untreated. The estimated ATT, on the other hand, was unbiased in all scenarios except when Y caused missingness. The parallel simulation of generalizing and transporting trial results showed similar bias patterns. If missing covariate data is only present outside the target population, complete case analysis is unbiased except when missingness is associated with the outcome.
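A toy version of this simulation illustrates the ATT result. It is deliberately simplified relative to the paper's design: a single binary confounder C instead of C1-C3, a constant risk difference of 0.2 instead of effect modification, standardization instead of weighting, and illustrative parameter values throughout. C is made missing completely at random, but only among the untreated, and the complete-case ATT estimate remains close to the truth:

```python
import random

def simulate_att_complete_case(n=100_000, p_miss=0.5, seed=0):
    """Simulate (C, X, Y), induce MCAR missingness in C only among the
    untreated, and estimate the ATT from complete cases by
    standardizing the untreated risk to the confounder distribution
    of the treated. True ATT here is 0.2 by construction."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        c = rng.random() < 0.5
        x = rng.random() < (0.7 if c else 0.3)        # confounded treatment
        y = rng.random() < (0.1 + 0.3 * c + 0.2 * x)  # constant RD of 0.2
        miss = (not x) and rng.random() < p_miss      # MCAR, untreated only
        rows.append((c, x, y, miss))
    cc = [(c, x, y) for c, x, y, m in rows if not m]  # complete cases
    treated = [(c, y) for c, x, y in cc if x]
    p_c1 = sum(c for c, _ in treated) / len(treated)  # P(C=1 | X=1)

    def risk_untreated(cv):
        grp = [y for c, x, y in cc if (not x) and c == cv]
        return sum(grp) / len(grp)

    risk1 = sum(y for _, y in treated) / len(treated)
    risk0 = p_c1 * risk_untreated(True) + (1 - p_c1) * risk_untreated(False)
    return risk1 - risk0
```

Because the treated are fully observed and the missingness among the untreated is unrelated to C and Y, the complete-case untreated risks are unchanged in expectation, which is why the ATT estimate is unbiased here while the ATE would not be under outcome-dependent missingness.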

Nanopore sequencing is one of the primary sequencing methods gaining prominence in DNA storage, owing to a variety of factors. In this work, we consider a simplified model of the sequencer, characterized as a channel. This channel takes a sequence and processes it using a sliding window of length $\ell$, shifting the window by $\delta$ characters each time. The output of this channel, which we refer to as the read vector, is a vector containing the sums of the entries in each of the windows. The capacity of the channel is defined as the maximal information rate of the channel. Previous works have already revealed capacity values for certain parameters $\ell$ and $\delta$. In this work, we show that when $\delta < \ell < 2\delta$, the capacity value is given by $\frac{1}{\delta}\log_2 \frac{1}{2}(\ell+1+ \sqrt{(\ell+1)^2 - 4(\ell - \delta)(\ell-\delta +1)})$. Additionally, we construct an upper bound when $2\delta < \ell$. Finally, we extend the model to the two-dimensional case and present several results on its capacity.
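The channel model and the stated capacity formula can be made concrete in a few lines (a sketch of the model as described in the abstract, not code from the paper):

```python
import math

def read_vector(seq, ell, delta):
    """Output of the sliding-window channel: the sum of the entries in
    each length-ell window, with the window shifted by delta each time."""
    return [sum(seq[i:i + ell]) for i in range(0, len(seq) - ell + 1, delta)]

def capacity(ell, delta):
    """Capacity for delta < ell < 2*delta, per the formula in the text:
    (1/delta) * log2( (ell+1 + sqrt((ell+1)^2
                       - 4*(ell-delta)*(ell-delta+1))) / 2 )."""
    disc = (ell + 1) ** 2 - 4 * (ell - delta) * (ell - delta + 1)
    return math.log2((ell + 1 + math.sqrt(disc)) / 2) / delta

# Overlapping windows (delta < ell) over a binary sequence:
print(read_vector([1, 0, 1, 1, 0, 1], ell=3, delta=2))  # -> [2, 2]
```

Note that for binary inputs each window sum lies in {0, ..., ell}, and consecutive sums are correlated through the ell - delta shared positions, which is what makes the capacity analysis nontrivial.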

The adapted Wasserstein distance controls the calibration errors of optimal values in various stochastic optimization problems, pricing and hedging problems, optimal stopping problems, etc. Motivated by approximating the true underlying distribution by empirical data, we consider empirical measures of $\mathbb{R}^d$-valued stochastic processes in finite discrete time. It is known that the empirical measures do not converge under the adapted Wasserstein distance. To address this issue, we consider convolutions of Gaussian kernels and empirical measures as an alternative, which we refer to as the Gaussian-smoothed empirical measures. By setting the bandwidths of the Gaussian kernels depending on the number of samples, we prove the convergence of the Gaussian-smoothed empirical measures to the true underlying measure in terms of mean, deviation, and almost sure convergence. Although Gaussian-smoothed empirical measures converge to the true underlying measure and can potentially enlarge the data, they are not discrete measures and are therefore not applicable in practice. We therefore combine Gaussian-smoothed empirical measures with the adapted empirical measures of \cite{acciaio2022convergence} to introduce the adapted smoothed empirical measures, which are discrete substitutes for the smoothed empirical measures. We establish the polynomial mean convergence rate, the exponential deviation convergence rate, and the almost sure convergence of the adapted smoothed empirical measures.
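Sampling from a Gaussian-smoothed empirical measure is simple to state in code (a sketch for the one-dimensional, single-time-step case; the paper's substance lies in how the bandwidth must scale with the sample size, which is not modeled here):

```python
import random

def gaussian_smoothed_samples(data, bandwidth, k):
    """Draw k samples from the convolution of the empirical measure of
    `data` with a centered Gaussian of standard deviation `bandwidth`:
    pick a data point uniformly at random, then perturb it with
    Gaussian noise. With bandwidth 0 this reduces to resampling the
    empirical measure itself."""
    return [random.choice(data) + random.gauss(0.0, bandwidth)
            for _ in range(k)]
```

The convolution replaces each atom of the empirical measure with a small Gaussian bump, which is what restores convergence under the adapted Wasserstein distance at the cost of losing discreteness, the issue the adapted smoothed empirical measures then repair.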

We investigate the approximation of Monge--Kantorovich problems on general compact metric spaces, showing that optimal values, plans and maps can be effectively approximated via a fully discrete method. First we approximate optimal values and plans by solving finite dimensional discretizations of the corresponding Kantorovich problem. Then we approximate optimal maps by means of the usual barycentric projection or by an analogous procedure available in general spaces without a linear structure. We prove the convergence of all these approximants in full generality and show that our convergence results are sharp.
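In one dimension the fully discrete picture is simple enough to write out. The sketch below is an illustration under standard assumptions, not the paper's general construction: for convex costs on the line the monotone (north-west corner) coupling is an optimal plan, and the barycentric projection then turns the plan into a map:

```python
def discrete_ot_1d(xs, ps, ys, qs, cost=lambda x, y: abs(x - y)):
    """Optimal transport plan between two discrete probability measures
    with sorted supports xs, ys and weights ps, qs, via the monotone
    (north-west corner) coupling, which is optimal in 1D for convex
    costs. Returns (plan, value) with plan a list of (x, y, mass)."""
    plan, i, j = [], 0, 0
    ps, qs = list(ps), list(qs)
    while i < len(xs) and j < len(ys):
        m = min(ps[i], qs[j])
        plan.append((xs[i], ys[j], m))
        ps[i] -= m
        qs[j] -= m
        if ps[i] <= 1e-12:
            i += 1
        if qs[j] <= 1e-12:
            j += 1
    value = sum(m * cost(x, y) for x, y, m in plan)
    return plan, value

def barycentric_map(plan):
    """Barycentric projection of a plan: T(x) = E[Y | X = x]."""
    num, den = {}, {}
    for x, y, m in plan:
        num[x] = num.get(x, 0.0) + m * y
        den[x] = den.get(x, 0.0) + m
    return {x: num[x] / den[x] for x in num}
```

On general compact metric spaces the discretized Kantorovich problem is a finite linear program rather than this greedy coupling, but the two-step structure is the same: solve for an optimal plan on the discretization, then project it to an approximate map.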

Graph neural networks (GNNs) have been demonstrated to be a powerful algorithmic model in broad application fields for their effectiveness in learning over graphs. To scale GNN training up for large-scale and ever-growing graphs, the most promising solution is distributed training, which distributes the workload of training across multiple computing nodes. However, the workflows, computational patterns, communication patterns, and optimization techniques of distributed GNN training are still only preliminarily understood. In this paper, we provide a comprehensive survey of distributed GNN training by investigating the various optimization techniques used in distributed GNN training. First, distributed GNN training is classified into several categories according to their workflows; their computational patterns and communication patterns, as well as the optimization techniques proposed by recent work, are introduced. Second, the software frameworks and hardware platforms of distributed GNN training are also introduced for a deeper understanding. Third, distributed GNN training is compared with distributed training of deep neural networks, emphasizing the uniqueness of distributed GNN training. Finally, interesting issues and opportunities in this field are discussed.

Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks is typically represented in Euclidean domains. Nevertheless, there is an increasing number of applications in power systems where data are collected from non-Euclidean domains and represented as graph-structured data with high-dimensional features and interdependency among nodes. The complexity of graph-structured data has brought significant challenges to the existing deep neural networks defined in Euclidean domains. Recently, many studies on extending deep neural networks for graph-structured data in power systems have emerged. In this paper, a comprehensive overview of graph neural networks (GNNs) in power systems is presented. Specifically, several classical paradigms of GNN structures (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems such as fault diagnosis, power prediction, power flow calculation, and data generation are reviewed in detail. Furthermore, the main issues and some research trends regarding the applications of GNNs in power systems are discussed.
