This paper uses machine learning and mathematical modeling to predict future vaccine distribution and to solve the problem of allocating vaccines among different types of hospitals. The authors collect and analyze data, identifying factors that influence distribution, such as the surrounding resident population, transportation, and medical personnel. They use these results to build an allocation model and apply it to distribute vaccines to central hospitals, community hospitals, and health centers in Hangzhou's Gongshu District and Harbin's Daoli District. Finally, they explain the resulting vaccine distribution based on the model and its conclusions.
In this paper, we explore two fundamental first-order algorithms in convex optimization, namely gradient descent (GD) and the proximal gradient method (ProxGD). Our focus is on making these algorithms entirely adaptive by leveraging local curvature information of smooth functions. We propose adaptive versions of GD and ProxGD that are based on observed gradient differences and thus incur no added computational cost. Moreover, we prove convergence of our methods assuming only local Lipschitzness of the gradient. In addition, the proposed versions allow for even larger stepsizes than those initially suggested in [MM20].
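To make the idea concrete, the sketch below implements the gradient-difference stepsize rule of [MM20] for plain GD; it is a minimal illustration rather than the exact method proposed here (which admits even larger stepsizes), and the starting stepsize, iteration count, and zero-gradient-difference guard are placeholder choices.

```python
import numpy as np

def adaptive_gd(grad, x0, lam0=1e-6, iters=1000):
    """Gradient descent with stepsizes estimated from observed gradient
    differences (local curvature), in the spirit of [MM20]. `grad` returns
    the gradient of a locally smooth function at a point."""
    x_prev, g_prev = x0, grad(x0)
    lam_prev, theta_prev = lam0, np.inf
    x = x_prev - lam_prev * g_prev                 # one plain GD step to start
    for _ in range(iters):
        g = grad(x)
        diff_x = np.linalg.norm(x - x_prev)
        diff_g = np.linalg.norm(g - g_prev)
        # Local inverse-curvature estimate from the last two iterates/gradients.
        curv_step = diff_x / (2.0 * diff_g) if diff_g > 0 else lam_prev
        # Do not let the stepsize grow too quickly relative to the previous one.
        lam = min(np.sqrt(1.0 + theta_prev) * lam_prev, curv_step)
        x_prev, g_prev = x, g
        x = x - lam * g
        theta_prev, lam_prev = lam / lam_prev, lam
    return x
```

The same stepsize estimate can be combined with a proximal step on the nonsmooth part to obtain a ProxGD variant.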
We study non-monetary mechanisms for the fair and efficient allocation of reusable public resources, i.e., resources used for varying durations. We consider settings where a limited resource is repeatedly shared among a set of agents, each of whom may request to use the resource over multiple consecutive rounds, receiving utility only if they get to use the resource for the full duration of their request. Such settings are of particular significance in scientific research, where large-scale instruments such as electron microscopes, particle colliders, or telescopes are shared between multiple research groups; this model also subsumes and extends existing models of repeated non-monetary allocation in which resources are required for a single round only. We study a simple pseudo-market mechanism where, upfront, we endow each agent with a budget of artificial credits proportional to the fair share of the resource we want the agent to receive. The endowments thus define each agent's ideal utility as the utility she derives from her favorite allocation absent any competition, subject to receiving at most her fair share of the resource across rounds. Next, on each round, and for each available resource item, our mechanism runs a first-price auction with a selective reserve, wherein each agent submits a desired duration and a per-round bid, which must be at least the reserve price if the request spans multiple rounds; the bidder with the highest per-round bid wins and gets to use the item for the desired duration. We consider this problem in a Bayesian setting and show that, under a carefully chosen reserve price, each agent has a simple strategy that guarantees her a $1/2$ fraction of her ideal utility in expectation, irrespective of how others bid. We also show this result is tight, i.e., no mechanism can guarantee that all agents get more than half of their ideal utility.
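The per-round auction at the core of the mechanism is simple enough to sketch directly. The following is a minimal illustration, assuming that the winner is charged her full bid (per-round bid times duration) against her credit budget and that unaffordable bids are ignored; the data structures, tie-breaking rule, and choice of reserve price are placeholders, not part of the formal mechanism.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Bid:
    agent: str
    duration: int         # number of consecutive rounds requested
    per_round_bid: float  # artificial credits offered per round of use

def run_item_auction(bids: list[Bid], budgets: dict[str, float],
                     reserve: float) -> Optional[Bid]:
    """One first-price auction with a selective reserve for a single available
    item: the reserve applies only to multi-round requests, the highest
    per-round bid wins, and the winner uses the item for her requested duration."""
    eligible = []
    for b in bids:
        if b.per_round_bid * b.duration > budgets[b.agent]:
            continue                                  # cannot afford the full request
        if b.duration > 1 and b.per_round_bid < reserve:
            continue                                  # selective reserve on multi-round bids
        eligible.append(b)
    if not eligible:
        return None
    winner = max(eligible, key=lambda b: b.per_round_bid)
    budgets[winner.agent] -= winner.per_round_bid * winner.duration  # first-price payment
    return winner
```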
Programmable packet scheduling allows the deployment of scheduling algorithms into existing switches without the need for hardware redesign. Scheduling algorithms are programmed by tagging packets with ranks indicating their desired priority. Programmable schedulers then execute these algorithms by serving packets in the order indicated by their ranks. The ideal programmable scheduler is a Push-In First-Out (PIFO) queue, which achieves perfect packet sorting by pushing packets into arbitrary positions in the queue while only draining packets from the head. Unfortunately, implementing PIFO queues in hardware is challenging due to the need to sort packets into arbitrary positions at line rate based on their ranks. In recent years, various techniques have been proposed to approximate PIFO behavior using the available resources of existing data planes. While promising, approaches to date only approximate one of the characteristic behaviors of PIFO queues (i.e., either their scheduling behavior or their admission control). We propose PACKS, the first programmable scheduler that fully approximates PIFO queues across all of their behaviors. PACKS does so by smartly using a set of strict-priority queues. It uses packet-rank information and queue-occupancy levels at enqueue to decide whether to admit packets to the scheduler and how to map admitted packets to the different queues. We fully implement PACKS in P4 and evaluate it on real workloads. We show that PACKS approximates PIFO behavior more closely than state-of-the-art approaches and that it scales. We also show that PACKS runs at line rate on existing hardware (Intel Tofino).
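As a rough illustration of the two enqueue-time decisions, the sketch below admits a packet only if the strict-priority queue covering its rank still has room; the per-queue rank bounds are given as a fixed input here, whereas PACKS actually derives the rank-to-queue mapping and admission decisions adaptively from observed rank distributions and queue occupancy.

```python
def enqueue(rank: int, queues: list[list[int]], capacities: list[int],
            rank_bounds: list[int]) -> bool:
    """Simplified enqueue logic for a PIFO approximation over strict-priority
    queues: queue i (0 = highest priority) accepts ranks up to rank_bounds[i].
    Returns True if the packet is admitted, False if it is dropped."""
    for i, bound in enumerate(rank_bounds):
        if rank <= bound:                        # first queue whose rank range covers the packet
            if len(queues[i]) < capacities[i]:   # admission control: is there room?
                queues[i].append(rank)
                return True
            return False                         # covering queue is full: reject the packet
    return False                                 # rank beyond every queue's range: reject
```

At dequeue, the scheduler simply drains the non-empty queue of highest priority, which a bank of strict-priority queues provides natively.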
We present SEIF, a methodology that combines static analysis with symbolic execution to verify and explicate information flow paths in a hardware design. SEIF begins with a statically built model of the information flow through a design and uses guided symbolic execution to recognize and eliminate non-flows with high precision or to find corresponding paths through the design state for true flows. We evaluate SEIF on two open-source CPUs, an AES core, and the AKER access control module. SEIF can exhaustively explore paths 10-12 clock cycles deep in 4-6 seconds on average, and can automatically account for 86-90% of the paths in the statically built model. Additionally, SEIF can be used to find multiple violating paths for security properties, providing a new angle for security verification.
With the rapid development of deep learning, training Big Models (BMs) for multiple downstream tasks has become a popular paradigm. Researchers have achieved various outcomes in constructing BMs and applying them in many fields. At present, however, there is a lack of research that sorts out the overall progress of BMs and guides follow-up work. In this paper, we cover not only the BM technologies themselves but also the prerequisites for BM training and the applications built on BMs, dividing the review into four parts: Resource, Models, Key Technologies, and Application. We introduce 16 specific BM-related topics across these four parts: Data, Knowledge, Computing System, Parallel Training System, Language Model, Vision Model, Multi-modal Model, Theory&Interpretability, Commonsense Reasoning, Reliability&Security, Governance, Evaluation, Machine Translation, Text Generation, Dialogue, and Protein Research. For each topic, we clearly summarize the current studies and propose future research directions. At the end of the paper, we discuss the further development of BMs from a more general perspective.
Geometric deep learning (GDL), which is based on neural network architectures that incorporate and process symmetry information, has emerged as a recent paradigm in artificial intelligence. GDL bears particular promise in molecular modeling applications, in which various molecular representations with different symmetry properties and levels of abstraction exist. This review provides a structured and harmonized overview of molecular GDL, highlighting its applications in drug discovery, chemical synthesis prediction, and quantum chemistry. Emphasis is placed on the relevance of the learned molecular features and their complementarity to well-established molecular descriptors. This review provides an overview of current challenges and opportunities, and presents a forecast of the future of GDL for molecular sciences.
Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks; hence, late-stage fusion of final representations or predictions from each modality (`late-fusion') is still a dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer-based architecture that uses `fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and only share what is necessary. We find that such a strategy improves fusion performance while also reducing computational cost. We conduct thorough ablation studies, and achieve state-of-the-art results on multiple audio-visual classification benchmarks including Audioset, Epic-Kitchens and VGGSound. All code and models will be released.
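A minimal sketch of one such fusion layer is given below, assuming a shared set of bottleneck tokens whose per-modality updates are simply averaged; the layer sizes, the averaging step, and the two-modality setup are illustrative choices rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class BottleneckFusionLayer(nn.Module):
    """One fusion layer in the spirit of attention bottlenecks: each modality
    attends only within [its own tokens ; shared bottleneck tokens], so all
    cross-modal exchange must pass through the few bottleneck latents."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.audio_block = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.video_block = nn.TransformerEncoderLayer(dim, heads, batch_first=True)

    def forward(self, audio, video, bottleneck):
        # audio: (B, Na, D), video: (B, Nv, D), bottleneck: (B, Nb, D), Nb small
        na, nv = audio.shape[1], video.shape[1]
        a_out = self.audio_block(torch.cat([audio, bottleneck], dim=1))
        v_out = self.video_block(torch.cat([video, bottleneck], dim=1))
        audio, b_from_audio = a_out[:, :na], a_out[:, na:]
        video, b_from_video = v_out[:, :nv], v_out[:, nv:]
        bottleneck = 0.5 * (b_from_audio + b_from_video)  # the only shared pathway
        return audio, video, bottleneck
```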
Recent contrastive representation learning methods rely on estimating mutual information (MI) between multiple views of an underlying context. For example, we can derive multiple views of a given image by applying data augmentation, or we can split a sequence into views comprising the past and future of some step in the sequence. Contrastive lower bounds on MI are easy to optimize, but have a strong underestimation bias when estimating large amounts of MI. We propose decomposing the full MI estimation problem into a sum of smaller estimation problems by splitting one of the views into progressively more informed subviews and by applying the chain rule on MI between the decomposed views. This expression contains a sum of unconditional and conditional MI terms, each measuring modest chunks of the total MI, which facilitates approximation via contrastive bounds. To maximize the sum, we formulate a contrastive lower bound on the conditional MI which can be approximated efficiently. We refer to our general approach as Decomposed Estimation of Mutual Information (DEMI). We show that DEMI can capture a larger amount of MI than standard non-decomposed contrastive bounds in a synthetic setting, and learns better representations in a vision domain and for dialogue generation.
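The decomposition can be sketched with standard contrastive (InfoNCE-style) terms. Below, one view is split into two subviews a and b, and the total is estimated as I(x; a) + I(x; b | a); the score matrices and the way the conditional critic incorporates a are assumptions for illustration, since the paper's conditional bound also prescribes how negatives are drawn.

```python
import math
import torch
import torch.nn.functional as F

def info_nce(scores: torch.Tensor) -> torch.Tensor:
    """InfoNCE lower bound on MI: mean log-softmax of the positive (diagonal)
    scores plus log(number of candidates). The bound saturates at log(batch
    size), which is the source of the underestimation bias for large MI."""
    n = scores.shape[1]
    return torch.diagonal(F.log_softmax(scores, dim=1)).mean() + math.log(n)

def decomposed_bound(scores_x_a: torch.Tensor,
                     scores_x_b_given_a: torch.Tensor) -> torch.Tensor:
    """Two-term decomposition in the spirit of DEMI: an unconditional term for
    I(x; a) plus a conditional term for I(x; b | a), each bounded contrastively.
    scores_x_b_given_a[i, j] is assumed to come from a critic that also sees a_i."""
    return info_nce(scores_x_a) + info_nce(scores_x_b_given_a)
```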
This paper serves as a survey of recent advances in large margin training and its theoretical foundations, mostly for (nonlinear) deep neural networks (DNNs), which have arguably been the most prominent machine learning models for large-scale data over the past decade. We generalize the formulation of classification margins from classical research to the latest DNNs, summarize theoretical connections between the margin, network generalization, and robustness, and comprehensively introduce recent efforts to enlarge the margins of DNNs. Since different methods take differing viewpoints, we categorize them into groups for ease of comparison and discussion. We hope that our discussion and overview inspire new research in the community aimed at improving the performance of DNNs, and we also point to directions in which the large margin principle could be verified to provide theoretical evidence for why certain regularizations of DNNs work well in practice. We have kept the paper concise so that the crucial spirit of large margin learning and related methods is better emphasized.
Benefiting from the rapid development of deep learning techniques, salient object detection has recently achieved remarkable progress. However, two major challenges still hinder its application on embedded devices: low-resolution output and heavy model weight. To this end, this paper presents an accurate yet compact deep network for efficient salient object detection. More specifically, given a coarse saliency prediction in the deepest layer, we first employ residual learning to learn side-output residual features for saliency refinement, which can be achieved with very limited convolutional parameters while maintaining accuracy. Secondly, we further propose reverse attention to guide such side-output residual learning in a top-down manner. By erasing the currently predicted salient regions from side-output features, the network can eventually explore the missing object parts and details, which results in high resolution and accuracy. Experiments on six benchmark datasets demonstrate that the proposed approach compares favorably against state-of-the-art methods, with advantages in terms of simplicity, efficiency (45 FPS), and model size (81 MB).
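A minimal sketch of the reverse-attention refinement step is shown below, with placeholder tensor names and an unspecified residual head; the exact block structure and upsampling choices in the paper may differ.

```python
import torch
import torch.nn.functional as F

def reverse_attention_refine(side_feat, deeper_pred, residual_head):
    """Erase the currently predicted salient regions from a side-output feature
    map so the residual branch focuses on missing object parts and details.
    `residual_head` is any small conv block producing a 1-channel residual."""
    # Upsample the coarser prediction from the deeper layer to this resolution.
    pred = F.interpolate(deeper_pred, size=side_feat.shape[2:],
                         mode='bilinear', align_corners=False)
    reverse_att = 1.0 - torch.sigmoid(pred)   # weights close to 1 on non-salient regions
    residual = residual_head(side_feat * reverse_att)
    return pred + residual                    # refined, higher-resolution saliency prediction
```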