
This paper puts forth a new metric, dubbed channel cycle time, to measure the short-term fairness of communication networks. Channel cycle time characterizes the average duration between two successful transmissions of a user, during which all other users have successfully accessed the channel at least once. Compared with existing short-term fairness measures, channel cycle time provides a comprehensive picture of the transient behavior of communication networks and is a single real value that is easy to compute. To demonstrate the effectiveness of our new approach, we analytically characterize the channel cycle time of slotted Aloha and CSMA/CA, showing that CSMA/CA achieves better short-term fairness than slotted Aloha. Channel cycle time can serve as a promising design principle for future communication networks, placing greater emphasis on optimizing short-term behaviors such as fairness, delay, and jitter.
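As a rough illustration of the idea (not the paper's analytical derivation), a cycle-time-like statistic for slotted Aloha can be estimated by simulation. The sketch below adopts one plausible reading, where a cycle ends once every user has succeeded at least once since the cycle started; the parameter values are illustrative assumptions.

```python
import random

def slotted_aloha_cycle_time(n_users=4, p=0.25, n_slots=200_000, seed=0):
    """Estimate a channel-cycle-time-like statistic for slotted Aloha:
    the mean number of slots for every user to complete at least one
    successful transmission (one 'cycle'), restarting after each cycle."""
    rng = random.Random(seed)
    succeeded = set()
    cycle_start = 0
    cycle_lengths = []
    for t in range(n_slots):
        # Each user transmits independently with probability p.
        tx = [u for u in range(n_users) if rng.random() < p]
        if len(tx) == 1:                   # success iff exactly one sender
            succeeded.add(tx[0])
            if len(succeeded) == n_users:  # every user succeeded: cycle done
                cycle_lengths.append(t + 1 - cycle_start)
                succeeded.clear()
                cycle_start = t + 1
    return sum(cycle_lengths) / len(cycle_lengths)
```

A larger value indicates worse short-term fairness, since some user had to wait longer between its successes while the others cycled through.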

Related Content

Wi-Fi Direct is a promising technology for the support of device-to-device (D2D) communications on commercial mobile devices. However, the standard as it is does not suffice for the real deployment of networking solutions entirely based on D2D, such as opportunistic networks. In fact, Wi-Fi Direct presents some characteristics that could limit the autonomous creation of D2D connections among users' personal devices. Specifically, the standard explicitly requires the user's authorization to establish a connection between two or more devices, and it provides limited support for inter-group communication. In some cases, this might lead to the creation of isolated groups of nodes that cannot communicate with each other. In this paper, we propose a novel middleware-layer protocol for the efficient configuration and management of Wi-Fi Direct groups (WiFi Direct Group Manager, WFD-GM) to enable autonomous connections and inter-group communication. This enables opportunistic networks in real conditions (e.g., variable mobility and network size). WFD-GM defines a context function that takes into account heterogeneous parameters, including an index of nodes' stability and power levels, to create the best group configuration in a specific time window. We evaluate the protocol's performance by simulating three reference scenarios with different mobility models, geographical areas, and numbers of nodes. Simulations are also supported by experimental results from the evaluation of the involved context parameters in a real testbed. We compare WFD-GM with state-of-the-art solutions and show that it performs significantly better than a baseline approach in scenarios with medium/low mobility and is comparable in the case of high mobility, without introducing additional overhead.

Due to the diffusion of IoT, modern software systems are often designed to control and coordinate smart devices in order to manage assets and resources and to guarantee efficient behaviours. For this class of systems, which interact extensively with humans and with their environment, it is crucial to guarantee correct behaviour in order to avoid unexpected and possibly dangerous situations. In this paper we present a framework that allows us to measure the robustness of systems, that is, the ability of a program to tolerate changes in the environmental conditions while preserving its original behaviour. In the proposed framework, the interaction of a program with its environment is represented as a sequence of random variables describing how both evolve in time. For this reason, the considered measures are defined over probability distributions of observed data. The proposed framework is then used to define the notions of adaptability and reliability. The former indicates the ability of a program to absorb perturbations of the environmental conditions after a given amount of time. The latter expresses the ability of a program to maintain its intended behaviour (up to some reasonable tolerance) despite perturbations in the environment. Moreover, an algorithm, based on statistical inference, is proposed to evaluate the proposed metrics and the aforementioned properties. We use two case studies to describe and evaluate the proposed approach.
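A minimal sketch of how such distribution-based measures might be computed, assuming one-dimensional observations, equal-size sample sets, and a Wasserstein-style distance between empirical distributions; the function names and the `horizon`/`tol` parameters are hypothetical illustrations, not the paper's API:

```python
def wasserstein_1d(xs, ys):
    """Empirical 1-D Wasserstein-1 distance between two equal-size samples:
    mean absolute difference between sorted order statistics."""
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

def is_adaptable(nominal_runs, perturbed_runs, horizon, tol):
    """Adaptability in the paper's spirit: after `horizon` steps, the
    distributions of observed data under nominal and perturbed conditions
    should be within `tol` of each other. Each run is a time-indexed list
    of observations; distances are taken across runs at a fixed time."""
    d = wasserstein_1d([r[horizon] for r in nominal_runs],
                       [r[horizon] for r in perturbed_runs])
    return d <= tol
```

In practice the runs would be sampled trajectories of the program-environment system, and the distance estimated by the statistical-inference algorithm the abstract mentions.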

Entity matching (EM) is a challenging problem studied by different communities for over half a century. Algorithmic fairness has also become a timely topic to address machine bias and its societal impacts. Despite extensive research on these two topics, little attention has been paid to the fairness of entity matching. Towards addressing this gap, we perform an extensive experimental evaluation of a variety of EM techniques in this paper. We generate two social datasets from publicly available datasets for the purpose of auditing EM through the lens of fairness. Our findings underscore potential unfairness under two conditions common in real-world societies: (i) when some demographic groups are overrepresented, and (ii) when names are more similar in some groups compared to others. Among our many findings, it is noteworthy that while various fairness definitions are valuable for different settings, due to the class-imbalanced nature of EM, measures such as positive predictive value parity and true positive rate parity are, in general, more capable of revealing EM unfairness.
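The two parity measures named above can be computed directly from per-group confusion matrices. A minimal sketch follows; the `parity_gaps` helper and its max-gap formulation are our illustrative choices, not the paper's exact definitions:

```python
def rates(tp, fp, fn, tn):
    """Positive predictive value (precision) and true positive rate (recall)
    from one confusion matrix; degenerate denominators yield 0.0."""
    ppv = tp / (tp + fp) if tp + fp else 0.0
    tpr = tp / (tp + fn) if tp + fn else 0.0
    return ppv, tpr

def parity_gaps(group_confusions):
    """group_confusions: {group: (tp, fp, fn, tn)}. Returns the largest
    pairwise gap in PPV and in TPR across groups; 0 means perfect parity."""
    ppvs, tprs = zip(*(rates(*c) for c in group_confusions.values()))
    return max(ppvs) - min(ppvs), max(tprs) - min(tprs)
```

For class-imbalanced tasks such as EM, where true non-matches vastly outnumber matches, gaps in these positive-class rates surface disparities that accuracy-style measures tend to wash out.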

This paper presents an end-to-end methodology for collecting datasets to recognize handwritten English alphabets by utilizing Inertial Measurement Units (IMUs) and leveraging the diversity present in the Indian writing style. The IMUs are utilized to capture the dynamic movement patterns associated with handwriting, enabling more accurate recognition of alphabets. The Indian context introduces various challenges due to the heterogeneity in writing styles across different regions and languages. By leveraging this diversity, the collected dataset and the collection system aim to achieve higher recognition accuracy. Preliminary experimental results demonstrate the effectiveness of the dataset in accurately recognizing handwritten English alphabets in the Indian context. This research contributes to the field of pattern recognition, can be extended further, and offers valuable insights for developing improved handwriting recognition systems, particularly in diverse linguistic and cultural contexts.

Centralized data silos are not only becoming prohibitively expensive but also raise issues of data ownership and data availability. These developments are affecting the industry, researchers, and ultimately society in general. Decentralized storage solutions present a promising alternative. Furthermore, such systems can become a crucial layer for new paradigms of edge-centric computing and web3 applications. Decentralized storage solutions based on p2p networks can enable scalable and self-sustaining open-source infrastructures. However, like other p2p systems, they require well-designed incentive mechanisms for participating peers. These mechanisms should be not only effective but also fair in regard to individual participants. Even though several such systems have been studied in deployment, there is still a lack of systematic understanding regarding these issues. We investigate the interplay between incentive mechanisms, network characteristics, and fairness of peer rewards. In particular, we identify and evaluate three core and up-to-date reward mechanisms for moving data in p2p networks: distance-based payments, reciprocity, and time-limited free service. Distance-based payments are relevant since libp2p Kademlia, which enables distance-based algorithms for content lookup and retrieval, is part of various modern p2p systems. We base our model on the Swarm network, which uses a combination of the three mechanisms and serves as inspiration for our Tit-for-Token model. We present this model and develop a tool to explore the behaviors of these payment mechanisms. Our evaluation provides novel insights into the functioning and interplay of these mechanisms. Based on these insights, we propose modifications to the mechanisms that better address fairness concerns, and we outline improvement proposals for the Swarm network.
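As a hedged sketch of what distance-based payments can look like, the toy pricing function below charges less when the forwarding peer is close to the chunk address under Kademlia's XOR metric (a peer is expected to store nearby chunks, so serving them is cheaper). The `chunk_price` formula is purely illustrative and is not Swarm's actual pricing scheme:

```python
def xor_distance(a: bytes, b: bytes) -> int:
    """Kademlia XOR distance between two equal-length identifiers."""
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

def chunk_price(forwarder_id: bytes, chunk_addr: bytes,
                base_price=10, id_bits=256):
    """Illustrative distance-based price: proximity is the number of
    leading bits the forwarder shares with the chunk address; closer
    forwarders charge less, with a floor of 1 unit."""
    d = xor_distance(forwarder_id, chunk_addr)
    proximity = id_bits - d.bit_length()   # leading matching bits
    return max(1, base_price - proximity)
```

Under such a rule, routing a request toward the chunk address monotonically decreases per-hop prices, which is one way distance-based payments and reciprocity can interact.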

One of the most important hyper-parameters in duration-dependent Markov-switching (DDMS) models is the duration of the hidden states. Because there is currently no procedure for estimating this duration or testing whether a given duration is appropriate for a given data set, an ad hoc duration choice must be heuristically justified. This is typically a difficult task and is likely the most delicate point of the modeling procedure, allowing for criticism and ultimately hindering the use of DDMS models. In this paper, we propose and examine a methodology that mitigates the choice of duration in DDMS models when forecasting is the goal. The idea is to use a parametric link instead of the usual fixed link when calculating transition probabilities. As a result, the model becomes more flexible and any potentially incorrect duration choice (i.e., misspecification) is compensated by the parameter in the link, yielding a likelihood and transition probabilities very close to the true ones while, at the same time, improving forecasting accuracy under misspecification. We evaluate the proposed approach in Monte Carlo simulations and using real data applications. Results indicate that the parametric link model outperforms the benchmark logit model, both in terms of in-sample estimation and out-of-sample forecasting, for both well-specified and misspecified duration values.
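To illustrate the idea of replacing the usual fixed link with a parametric one, the sketch below contrasts the logit link with the Aranda-Ordaz family, a common parametric link that recovers the logit exactly at lambda = 1; whether this is the family used in the paper is an assumption on our part, and the linear-in-duration predictor is likewise illustrative:

```python
import math

def logit_link(a, b, d):
    """Usual fixed logit link for a duration-dependent transition
    probability, with linear predictor a + b*d (d = assumed duration)."""
    return 1.0 / (1.0 + math.exp(-(a + b * d)))

def aranda_ordaz_link(a, b, d, lam):
    """Aranda-Ordaz parametric link, lam > 0:
    p = 1 - (lam * exp(eta) + 1)**(-1/lam).
    At lam = 1 this reduces to the logit link, so the extra parameter
    can absorb a misspecified duration choice rather than distort p."""
    eta = a + b * d
    return 1.0 - (lam * math.exp(eta) + 1.0) ** (-1.0 / lam)
```

In estimation, lambda would be fit jointly with the other parameters, letting the data bend the link when the assumed duration d is wrong.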

Semantic similarity measures are widely used in natural language processing to support a variety of language-related tasks. However, no single semantic similarity measure is the most appropriate for all tasks, and researchers often use ensemble strategies to ensure performance. This work proposes a method for automatically designing semantic similarity ensembles. Our method uses grammatical evolution, for the first time, to automatically select and aggregate measures from a pool of candidates to create an ensemble that maximizes correlation with human judgment. The method is evaluated on several benchmark datasets and compared to state-of-the-art ensembles, showing that it can significantly improve similarity assessment accuracy and outperform existing methods in some cases. As a result, our research demonstrates the potential of grammatical evolution for automatic text comparison and proves the benefits of using ensembles for semantic similarity tasks.
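A toy version of the selection-and-aggregation step, with exhaustive subset search standing in for grammatical evolution and the mean as the only aggregator (both simplifying assumptions; the real method evolves richer aggregation expressions):

```python
from itertools import combinations

def pearson(x, y):
    """Pearson correlation; returns 0.0 for zero-variance inputs."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return cov / (sx * sy)

def best_mean_ensemble(measure_scores, human):
    """measure_scores: {name: [score per sentence pair]}. Searches all
    subsets, aggregates by the mean, and returns the subset whose ensemble
    best correlates with human judgments, plus that correlation."""
    names = list(measure_scores)
    best_subset, best_corr = None, -2.0
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            ens = [sum(measure_scores[n][i] for n in subset) / len(subset)
                   for i in range(len(human))]
            c = pearson(ens, human)
            if c > best_corr:
                best_subset, best_corr = subset, c
    return best_subset, best_corr
```

Grammatical evolution replaces this exponential search with a grammar-guided evolutionary one, and can also evolve the aggregation function itself instead of fixing the mean.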

We consider a user-centric cell-free massive MIMO wireless network with $L$ remote radio units, each with $M$ antennas, serving $K_{\rm tot}$ user equipments (UEs). Most of the literature considers the regime $LM \gg K_{\rm tot}$, where all $K_{\rm tot}$ UEs are active on each time-frequency slot, and evaluates the system performance in terms of ergodic rates. In this paper, we take a quite different viewpoint. We observe that the regime $LM \gg K_{\rm tot}$ corresponds to a lightly loaded system with low sum spectral efficiency (SE). In contrast, in most relevant scenarios, the number of UEs is much larger than the total number of antennas, but users are not all active at the same time. To achieve high sum SE and handle $K_{\rm tot} \gg LM$, users must be scheduled over the time-frequency resource. The number of active users $K_{\rm act} \leq K_{\rm tot}$ must be chosen such that: 1) the network operates close to its maximum SE; and 2) the active user set is chosen dynamically over time in order to enforce fairness in terms of per-user time-averaged throughput rates. The fairness scheduling problem is formulated as the maximization of a concave componentwise non-decreasing network utility function of the per-user rates. Intermittent user activity imposes slot-by-slot coding/decoding, which prevents the achievability of ergodic rates. Hence, we model the per-slot service rates using information outage probability. To obtain a tractable problem, we make a decoupling assumption on the CDF of the instantaneous mutual information seen at each UE $k$ receiver. We approximately enforce this condition with a conflict graph that prevents the simultaneous scheduling of users with large pilot contamination and propose an adaptive scheme for instantaneous service rate scheduling. Overall, the proposed dynamic scheduling is robust to system model uncertainties and can be easily implemented in practice.
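The fairness-scheduling step can be sketched with the classic proportional-fair rule, which greedily maximizes the gradient of a log-utility (one instance of the concave network-utility formulation above; the paper's full outage-aware, conflict-graph scheme is richer than this):

```python
def pf_schedule(inst_rates, avg_thr, k_act):
    """Proportional-fair selection: pick the k_act users with the largest
    ratio of instantaneous rate to time-averaged throughput, i.e. the
    gradient direction of sum_k log(T_k). Returns sorted user indices."""
    order = sorted(range(len(inst_rates)),
                   key=lambda k: inst_rates[k] / max(avg_thr[k], 1e-9),
                   reverse=True)
    return sorted(order[:k_act])

def update_throughput(avg_thr, served, inst_rates, beta=0.01):
    """Exponential moving average of per-user throughput: served users
    accrue their instantaneous rate, the rest accrue zero this slot."""
    return [(1 - beta) * t + beta * (inst_rates[k] if k in served else 0.0)
            for k, t in enumerate(avg_thr)]
```

Starved users see their average throughput decay, which raises their PF metric until they are scheduled, enforcing long-run fairness without fixing the active set in advance.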

As artificial intelligence plays an increasingly substantial role in decisions affecting humans and society, the accountability of automated decision systems has been receiving increasing attention from researchers and practitioners. Fairness, which is concerned with eliminating unjust treatment and discrimination against individuals or sensitive groups, is a critical aspect of accountability. Yet, for evaluating fairness, there is a plethora of fairness metrics in the literature that employ different perspectives and assumptions that are often incompatible. This work focuses on group fairness. Most group fairness metrics desire a parity between selected statistics computed from confusion matrices belonging to different sensitive groups. Generalizing this intuition, this paper proposes a new equal confusion fairness test to check an automated decision system for fairness and a new confusion parity error to quantify the extent of any unfairness. To further analyze the source of potential unfairness, an appropriate post hoc analysis methodology is also presented. The usefulness of the test, metric, and post hoc analysis is demonstrated via a case study on the controversial case of COMPAS, an automated decision system employed in the US to assist judges with assessing recidivism risks. Overall, the methods and metrics provided here may assess automated decision systems' fairness as part of a more extensive accountability assessment, such as those based on the system accountability benchmark.
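The equal-confusion intuition, parity of confusion-matrix cells across sensitive groups, can be sketched as a Pearson chi-square statistic over a groups-by-cells contingency table; treating the four confusion cells as the categories is our simplification of the test described above:

```python
def chi2_stat(table):
    """Pearson chi-square statistic for an r x c contingency table,
    rows = sensitive groups, columns = (TP, FP, FN, TN) counts.
    Under the null of equal confusion, cell proportions are identical
    across groups and the statistic is near zero."""
    row_sums = [sum(r) for r in table]
    col_sums = [sum(c) for c in zip(*table)]
    total = sum(row_sums)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_sums[i] * col_sums[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat
```

In a full analysis the statistic would be compared against a chi-square distribution with (r-1)(c-1) degrees of freedom, and the post hoc step would inspect standardized residuals per cell to localize the source of unfairness.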

Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their significant power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes of the same class tend to connect to each other), while ignoring the heterophily that exists in many real-world networks (i.e., nodes of different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining immediate representations, which introduces noise and irrelevant information into the result. Moreover, these methods do not change the propagation mechanism itself, which works under the homophily assumption and is a fundamental part of GCNs. This makes it difficult to distinguish the representations of nodes from different classes. To address this problem, in this paper we design a novel propagation mechanism that can automatically change the propagation and aggregation process according to the homophily or heterophily between node pairs. To adaptively learn the propagation process, we introduce two measurements of the homophily degree between node pairs, learned from topological and attribute information, respectively. We then incorporate the learnable homophily degree into the graph convolution framework, which is trained in an end-to-end fashion, enabling it to go beyond the assumption of homophily. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms state-of-the-art methods under heterophily or low homophily, and achieves competitive performance under homophily.
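A minimal sketch of a homophily-aware propagation step, where a per-edge score in [-1, 1] decides whether a neighbor's features are added (homophilous edge) or subtracted (heterophilous edge, pushing the representations apart); this signed-weight scheme is an illustrative stand-in for the paper's learned mechanism, not its exact formulation:

```python
def homophily_propagate(features, edges, homophily):
    """One propagation step over an undirected graph.
    features: list of per-node feature vectors (lists of floats).
    edges: list of (u, v) index pairs.
    homophily: one score in [-1, 1] per edge; positive aggregates the
    neighbor, negative subtracts it. Output is degree-normalized and
    includes a self-loop so each node keeps its own features."""
    n, d = len(features), len(features[0])
    out = [row[:] for row in features]          # self-loop contribution
    deg = [1.0] * n                             # self-loop weight
    for (u, v), h in zip(edges, homophily):
        for j in range(d):
            out[u][j] += h * features[v][j]
            out[v][j] += h * features[u][j]
        deg[u] += abs(h)
        deg[v] += abs(h)
    return [[x / deg[i] for x in row] for i, row in enumerate(out)]
```

With scores near +1 this reduces to standard mean aggregation; with scores near -1 connected nodes are driven toward dissimilar representations, which is the behavior heterophilous edges require.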
