
Residential fixed broadband internet access in the United States (US) has long been distributed inequitably, drawing significant attention from researchers and policymakers. This paper evaluates the efficacy of the Connect America Fund (CAF), a key policy intervention aimed at addressing disparities in US internet access. CAF subsidizes the creation of new regulated broadband monopolies in underserved areas, aiming to provide internet access comparable, in terms of price and speed, to that available in urban regions. Oversight of CAF largely relies on data self-reported by internet service providers (ISPs), whose accuracy is often questionable. We use the broadband-plan querying tool (BQT) to curate a novel dataset that complements ISP-reported information with ISP-advertised broadband plan details (download speed and monthly cost) on publicly accessible websites. Specifically, we query advertised broadband plans for 687k residential addresses across 15 states that ISPs certified to regulators as served. Our analysis reveals significant discrepancies between ISP-reported data and actual broadband availability. We find that the serviceability rate (the fraction of addresses ISPs actively serve out of the total queried, weighted by the number of CAF addresses in a census block group) is only 55%, dropping to as low as 18% in some states. Additionally, the compliance rate (the weighted fraction of addresses that ISPs actively serve and for which they advertise download speeds above the FCC's 10 Mbps threshold) is only 33%. We also observe that in a subset of census blocks, CAF-funded addresses receive higher broadband speeds than their monopoly-served neighbors. These results indicate that while a few users have benefited from this multi-billion dollar program, it has largely failed to achieve its intended goal, leaving many targeted rural communities with inadequate or no broadband connectivity.
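
The two headline metrics can be made concrete with a small sketch. The function below is a simplified illustration (not the paper's actual pipeline) that weights each census block group by its number of queried addresses; the record schema (`cbg`, `served`, `speed_mbps`) is hypothetical.

```python
from collections import defaultdict

def weighted_rates(records, speed_threshold_mbps=10.0):
    """Compute serviceability and compliance rates, weighting each census
    block group (CBG) by its share of queried addresses.

    Each record is a dict with keys:
      cbg        - census block group id
      served     - True if the ISP actively serves the queried address
      speed_mbps - advertised download speed (None if not served)
    """
    by_cbg = defaultdict(list)
    for r in records:
        by_cbg[r["cbg"]].append(r)

    total = sum(len(v) for v in by_cbg.values())
    serviceability = compliance = 0.0
    for recs in by_cbg.values():
        w = len(recs) / total
        served = [r for r in recs if r["served"]]
        compliant = [r for r in served
                     if (r["speed_mbps"] or 0) >= speed_threshold_mbps]
        serviceability += w * len(served) / len(recs)
        compliance += w * len(compliant) / len(recs)
    return serviceability, compliance
```

Note the compliance rate is computed over all queried addresses, so an unserved address counts against compliance as well as serviceability.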

Related Content

Concentric Tube Robots (CTRs) have garnered significant interest within the surgical robotics community because of their flexibility, dexterity, and ease of miniaturization. However, mastering the unique kinematics and design principles of CTRs can be challenging for newcomers to the field. In this paper, we present an educational kit aimed at lowering the barriers to entry into concentric tube robot research. Our goal is to provide accessible learning resources for CTRs, bridging the knowledge gap between traditional robotic arms and these specialized devices. The proposed kit includes (1) an open-source design and assembly instructions for an economical (cost of materials $\approx$ 700 USD) modular CTR; (2) a set of self-study materials to learn the basics of CTR modeling and control, including automatically-graded assignments. To evaluate the effectiveness of our educational kit, we conducted a human subjects study involving first-year graduate students in engineering. Over a four-week period, participants -- none of whom had any prior knowledge of concentric tube robots -- successfully built their first CTR using the provided materials, implemented the robot's kinematics in MATLAB, and conducted a tip-tracking experiment with an optical tracking device. Our findings suggest that the proposed kit facilitates learning and hands-on experience with CTRs, and furthermore, it has the potential to help early-stage graduate students get rapidly started with CTR research. By disseminating these resources, we hope to broaden participation in concentric tube robot research to a wider and more diverse group of researchers.

Distantly-Supervised Named Entity Recognition (DS-NER) is widely used in real-world scenarios. It can effectively alleviate the burden of annotation by matching entities in existing knowledge bases with snippets in the text, but it suffers from label noise. Recent works attempt to adopt the teacher-student framework to gradually refine the training labels and improve the overall robustness. However, these teacher-student methods achieve limited performance because the poor calibration of the teacher network produces incorrectly pseudo-labeled samples, leading to error propagation. Therefore, we propose: (1) Uncertainty-Aware Teacher Learning, which leverages prediction uncertainty to reduce the number of incorrect pseudo labels in the self-training stage; (2) Student-Student Collaborative Learning, which allows the transfer of reliable labels between two student networks instead of indiscriminately relying on all pseudo labels from the teacher, and further enables a full exploration of mislabeled samples rather than simply filtering out unreliable pseudo-labeled samples. We evaluate our proposed method on five DS-NER datasets, demonstrating that our method is superior to the state-of-the-art DS-NER methods.
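
One common way to operationalize "leveraging prediction uncertainty" is to keep a pseudo label only when the teacher's predictive entropy is low. The sketch below is a generic illustration of that idea, not the paper's specific method; the function name and threshold are assumptions.

```python
import math

def filter_pseudo_labels(token_probs, entropy_threshold=0.5):
    """For each token, pick the argmax label and keep it as a pseudo label
    only when the predictive entropy (in nats) is below the threshold.

    token_probs: list of per-token probability distributions over labels.
    Returns a list of (label_index, kept) pairs.
    """
    out = []
    for probs in token_probs:
        entropy = -sum(p * math.log(p) for p in probs if p > 0)
        label = max(range(len(probs)), key=lambda i: probs[i])
        out.append((label, entropy <= entropy_threshold))
    return out
```

A confident prediction (one class dominating) passes the filter; a flat, poorly calibrated distribution is held back instead of being propagated as a training label.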

The Colored Bin Packing Problem (CBPP) is a generalization of the Bin Packing Problem (BPP). The CBPP consists of packing a set of items, each with a weight and a color, into bins of limited capacity, minimizing the number of used bins and satisfying the constraint that two items of the same color cannot be packed side by side in the same bin. In this article, we propose an adaptation of BPP heuristics and new heuristics for the CBPP. Moreover, we propose a set of fast neighborhood search algorithms for the CBPP. These neighborhoods are applied in a meta-heuristic approach based on Variable Neighborhood Search (VNS) and in a matheuristic approach that combines linear programming with the meta-heuristics VNS and Greedy Randomized Adaptive Search Procedure (GRASP). The results indicate that our matheuristic is superior to VNS and that both approaches can find near-optimal solutions for a large number of instances, even for those with many items.
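
As a concrete example of adapting a BPP heuristic to the color constraint, here is a First Fit variant: an item goes into the first open bin that has remaining capacity and whose most recently packed item has a different color. This is a generic textbook-style adaptation, not necessarily one of the paper's heuristics.

```python
def color_first_fit(items, capacity):
    """First Fit adapted to the CBPP: place each item in the first bin
    with enough remaining capacity whose last-packed item has a different
    color; open a new bin otherwise.

    items: list of (weight, color) pairs. Returns the list of bins.
    """
    bins, loads = [], []
    for weight, color in items:
        placed = False
        for i, b in enumerate(bins):
            if loads[i] + weight <= capacity and b[-1][1] != color:
                b.append((weight, color))
                loads[i] += weight
                placed = True
                break
        if not placed:
            bins.append([(weight, color)])
            loads.append(weight)
    return bins
```

Note that, unlike plain First Fit, two same-colored items can force a new bin even when capacity remains, which is exactly what makes CBPP neighborhoods (e.g., swapping items to break color clashes) useful.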

The initial 6G networks will likely operate in the upper mid-band (7-24 GHz), which has decent propagation conditions but underwhelming new spectrum availability. In this paper, we explore whether we can nevertheless reach the ambitious 6G performance goals by evolving multiple-input multiple-output (MIMO) technology from massive to gigantic. We describe how many antennas are needed and can realistically be deployed, and what the peak user rate and degrees of freedom (DOF) can become. We further suggest a new deployment strategy that enables the utilization of radiative near-field effects in these bands for precise beamfocusing, localization, and sensing from a single base station site. We also identify five open research challenges that must be overcome to efficiently use gigantic MIMO dimensions in 6G from hardware, cost, and algorithmic perspectives.
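
The radiative near-field region mentioned above is conventionally bounded by the Fraunhofer distance $2D^2/\lambda$; within it, a base station can focus a beam at a specific depth rather than only in a direction. The snippet below evaluates this standard textbook formula (it is background, not a result from the paper) for an aperture size plausible for gigantic MIMO.

```python
def fraunhofer_distance(aperture_m, freq_hz, c=3e8):
    """Classical Fraunhofer distance 2*D^2/lambda: beyond it the far-field
    (plane-wave) approximation holds; within it, radiative near-field
    effects enable beamfocusing, localization, and sensing in depth."""
    wavelength = c / freq_hz
    return 2 * aperture_m ** 2 / wavelength
```

For example, a 1 m aperture at 15 GHz (upper mid-band) yields a 100 m near-field region, so many urban users would sit inside it.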

Knowledge Graphs (KGs) are fundamental resources for knowledge-intensive tasks in NLP. Because manually creating KGs is limited in scale, KG Completion (KGC) plays an important role in automatically completing KGs by scoring their links with KG Embedding (KGE). To handle the many entities seen in training, KGE relies on the Negative Sampling (NS) loss, which reduces the computational cost through sampling. Since each link appears at most once in a KG, sparsity is an essential and inevitable problem, and the NS loss is no exception. As a solution, the NS loss in KGE relies on smoothing methods such as Self-Adversarial Negative Sampling (SANS) and subsampling. However, it is unclear which kind of smoothing method is suitable for this purpose due to the lack of theoretical understanding. This paper provides theoretical interpretations of the smoothing methods for the NS loss in KGE and derives a new NS loss, Triplet Adaptive Negative Sampling (TANS), that can cover the characteristics of the conventional smoothing methods. Experimental results of TransE, DistMult, ComplEx, RotatE, HAKE, and HousE on the FB15k-237, WN18RR, and YAGO3-10 datasets and their sparser subsets show the soundness of our interpretation and the performance improvement brought by TANS.
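
To make the SANS baseline concrete: it reweights each sampled negative by a softmax over negative scores, so harder (higher-scoring) negatives contribute more. The sketch below follows the common RotatE-style margin formulation with distance-based scores; the exact variable names and defaults are illustrative assumptions, and TANS itself is not reproduced here.

```python
import math

def sans_loss(pos_dist, neg_dists, gamma=6.0, alpha=1.0):
    """Self-Adversarial Negative Sampling (SANS) loss sketch.

    pos_dist:  distance of the positive triplet (smaller = more plausible)
    neg_dists: distances of sampled negative triplets
    gamma:     margin; alpha: softmax temperature over negative scores
    """
    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    # Softmax weights over negative scores (gamma - distance): harder
    # negatives (smaller distance) receive larger weights.
    scores = [gamma - d for d in neg_dists]
    m = max(scores)
    exps = [math.exp(alpha * (s - m)) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]

    loss = -math.log(sigmoid(gamma - pos_dist))
    loss -= sum(w * math.log(sigmoid(d - gamma))
                for w, d in zip(weights, neg_dists))
    return loss
```

With `alpha = 0` the weights become uniform and the loss reduces to the plain NS loss, which is the sense in which SANS is a smoothing of the negative distribution.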

Graph neural networks (GNNs) have demonstrated a significant boost in prediction performance on graph data. At the same time, the predictions made by these models are often hard to interpret. In that regard, many efforts have been made to explain the prediction mechanisms of these models from perspectives such as GNNExplainer, XGNN and PGExplainer. Although such works present systematic frameworks to interpret GNNs, a holistic review for explainable GNNs is unavailable. In this survey, we present a comprehensive review of explainability techniques developed for GNNs. We focus on explainable graph neural networks and categorize them based on the use of explainable methods. We further provide the common performance metrics for GNNs explanations and point out several future research directions.

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses the methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
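
The first category, parameter quantization, can be illustrated with a minimal post-training uniform quantizer: store weights as low-bit signed integers plus one float scale, trading a bounded reconstruction error for roughly a 4x memory reduction at 8 bits. This is a generic sketch of the technique, not any specific surveyed method.

```python
def quantize_uniform(weights, num_bits=8):
    """Symmetric uniform quantization sketch: map float weights to signed
    num_bits integers so that w ≈ q * scale. Returns (ints, scale)."""
    qmax = 2 ** (num_bits - 1) - 1
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / qmax
    q = [max(-qmax - 1, min(qmax, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float weights from integers and scale."""
    return [v * scale for v in q]
```

The per-weight reconstruction error is bounded by the step size `scale`, which is why accuracy typically survives 8-bit quantization but degrades at very low bit widths.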

Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. A natural approach, therefore, is to perform model compression and acceleration in deep networks without significantly decreasing model performance. During the past few years, tremendous progress has been made in this area. In this paper, we survey recently developed techniques for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing are described first, followed by the other techniques. For each scheme, we provide insightful analysis regarding the performance, related applications, advantages, and drawbacks. We then go through a few very recent additional successful methods, for example, dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics, the main datasets used for evaluating model performance, and recent benchmarking efforts. Finally, we conclude this paper and discuss remaining challenges and possible directions on this topic.
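
The parameter pruning scheme described first can be boiled down to magnitude-based pruning: zero out the smallest-magnitude weights, which are assumed to contribute least to the output. The sketch below is a generic one-shot illustration (real pipelines typically prune iteratively and fine-tune between rounds).

```python
def magnitude_prune(weights, sparsity=0.5):
    """Magnitude-based pruning sketch: zero out the fraction `sparsity`
    of weights with the smallest absolute values."""
    n_prune = int(len(weights) * sparsity)
    if n_prune == 0:
        return list(weights)
    # Threshold = n_prune-th smallest magnitude.
    threshold = sorted(abs(w) for w in weights)[n_prune - 1]
    pruned, removed = [], 0
    for w in weights:
        if abs(w) <= threshold and removed < n_prune:
            pruned.append(0.0)
            removed += 1
        else:
            pruned.append(w)
    return pruned
```

The resulting sparse tensor only saves memory and compute when paired with a sparse storage format or hardware support, which is one of the drawbacks the survey discusses for this scheme.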

Deep Convolutional Neural Networks (CNNs) are a special type of Neural Network that has shown state-of-the-art results on various competitive benchmarks. The powerful learning ability of deep CNNs is largely achieved through multiple non-linear feature extraction stages that can automatically learn hierarchical representations from data. The availability of large amounts of data and improvements in hardware processing units have accelerated research in CNNs, and very interesting deep CNN architectures have recently been reported. The recent race in deep CNN architectures for achieving high performance on challenging benchmarks has shown that innovative architectural ideas, as well as parameter optimization, can improve CNN performance on various vision-related tasks. In this regard, different ideas in CNN design have been explored, such as the use of different activation and loss functions, parameter optimization, regularization, and restructuring of processing units. However, the major improvement in representational capacity has been achieved by restructuring the processing units. In particular, the idea of using a block as a structural unit instead of a layer is gaining substantial appreciation. This survey thus focuses on the intrinsic taxonomy present in recently reported CNN architectures and, consequently, classifies the recent innovations in CNN architectures into seven categories, based on spatial exploitation, depth, multi-path, width, feature map exploitation, channel boosting, and attention. Additionally, it covers an elementary understanding of CNN components and sheds light on the current challenges and applications of CNNs.

Convolutional networks (ConvNets) have achieved great success in various challenging vision tasks. However, the performance of ConvNets degrades when encountering domain shift. Domain adaptation is especially important, yet challenging, in the field of biomedical image analysis, where cross-modality data have largely different distributions. Given that annotating medical data is especially expensive, supervised transfer learning approaches are suboptimal. In this paper, we propose an unsupervised domain adaptation framework with adversarial learning for cross-modality biomedical image segmentation. Specifically, our model is based on a dilated fully convolutional network for pixel-wise prediction. Moreover, we build a plug-and-play domain adaptation module (DAM) to map the target input to features aligned with the source-domain feature space. A domain critic module (DCM) is set up to discriminate between the feature spaces of the two domains. We optimize the DAM and DCM via an adversarial loss without using any target-domain labels. Our proposed method is validated by adapting a ConvNet trained on MRI images to unpaired CT data for cardiac structure segmentation, achieving very promising results.
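
The adversarial objective pairing the DAM and DCM follows the usual GAN pattern: the critic learns to tell source features from mapped target features, while the DAM learns to fool it. The toy sketch below states the two losses on plain feature values (a hedged illustration of the objective only; the paper's modules are convolutional networks, and `critic` here is any callable returning the probability of "source").

```python
import math

def critic_loss(src_feats, tgt_feats, critic):
    """DCM objective sketch: binary cross-entropy for classifying source
    features (label 1) vs. DAM-mapped target features (label 0).
    critic: callable feature -> probability of 'source'."""
    loss = 0.0
    for f in src_feats:
        loss += -math.log(critic(f))
    for f in tgt_feats:
        loss += -math.log(1.0 - critic(f))
    return loss / (len(src_feats) + len(tgt_feats))

def dam_loss(tgt_feats, critic):
    """DAM objective sketch: inverted labels, pushing mapped target
    features to be classified as source (fooling the critic)."""
    return -sum(math.log(critic(f)) for f in tgt_feats) / len(tgt_feats)
```

Training alternates between minimizing `critic_loss` over the DCM's parameters and minimizing `dam_loss` over the DAM's parameters, with no target-domain segmentation labels involved in either step.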
