
Quantum network simulation is an essential step towards developing applications for quantum networks and determining minimal requirements for the network hardware. As with classical networking, a simulation ecosystem allows for application development, standardization, and overall community building. Currently, there is limited traction towards building a quantum networking community: open-source platforms are scarce, existing frameworks have steep learning curves, and strong software engineering skills are required. Our Quantum Network Development Kit (QNDK) project aims to solve these issues. It includes a graphical user interface to easily develop and run quantum network simulations with very little code. It integrates various quantum network simulation engines and provides a single interface to them, allowing users to use the features of any of them. Further, it deploys simulation execution in a cloud environment, offloading heavy computing requirements to a high-performance computing system. In this paper, we detail the core features of the QNDK and outline the development roadmap toward enabling virtual quantum testbeds.
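As a concrete illustration of the single-interface idea, the sketch below shows how a unified engine facade might look. The class and method names (SimulationEngine, NetSquidEngine, run) are illustrative assumptions, not the actual QNDK API.

```python
# Hypothetical sketch of a unified simulation-engine interface, as
# described in the abstract. All names here are illustrative assumptions.
from abc import ABC, abstractmethod


class SimulationEngine(ABC):
    """Common facade over heterogeneous quantum network simulators."""

    @abstractmethod
    def run(self, topology: dict, duration_ns: int) -> dict:
        """Execute a simulation and return collected metrics."""


class NetSquidEngine(SimulationEngine):
    def run(self, topology: dict, duration_ns: int) -> dict:
        # A real adapter would translate the engine-agnostic topology
        # into backend-specific calls and drive the simulator here.
        return {"engine": "netsquid", "duration_ns": duration_ns}


def simulate(engine: SimulationEngine, topology: dict) -> dict:
    # Applications program against the facade, so the backend can be
    # swapped (or offloaded to a cloud/HPC runner) without code changes.
    return engine.run(topology, duration_ns=1_000_000)
```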

Related content

Networking: IFIP International Conferences on Networking. Explanation: international networking conference. Publisher: IFIP.

In reinforcement learning, offline value function learning is the procedure of using an offline dataset to estimate the expected discounted return from each state when taking actions according to a fixed target policy. The stability of this procedure, i.e., whether it converges to its fixed point, critically depends on the representations of the state-action pairs. Poorly learned representations can make value function learning unstable, or even divergent. Therefore, it is critical to stabilize value function learning by explicitly shaping the state-action representations. Recently, the class of bisimulation-based algorithms has shown promise in shaping representations for control. However, it is still unclear whether this class of methods can stabilize value function learning. In this work, we investigate this question and answer it affirmatively. We introduce a bisimulation-based algorithm called kernel representations for offline policy evaluation (KROPE). KROPE uses a kernel to shape state-action representations such that state-action pairs that have similar immediate rewards and lead to similar next state-action pairs under the target policy also have similar representations. We show that KROPE: 1) learns stable representations and 2) leads to lower value error than baselines. Our analysis provides new theoretical insight into the stability properties of bisimulation-based methods and suggests that practitioners can use these methods for stable and accurate evaluation of offline reinforcement learning agents.
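To make the similarity notion concrete, here is a minimal tabular sketch of a bisimulation-style kernel in the spirit of the abstract: two state-action pairs are similar when their immediate rewards are close and their successors under the target policy are similar. The Gaussian reward kernel, deterministic successors, and fixed-point iteration are illustrative assumptions, not the exact KROPE algorithm.

```python
import numpy as np


def kernel_iteration(rewards, next_pair, gamma=0.9, n_iters=50):
    """rewards: (n,) reward of each state-action pair.
    next_pair: (n,) index of the successor pair under the target policy."""
    n = len(rewards)
    # Reward-similarity base kernel (Gaussian on reward differences).
    base = np.exp(-np.subtract.outer(rewards, rewards) ** 2)
    K = np.ones((n, n))
    for _ in range(n_iters):
        # Pairs are similar if rewards match AND successors are similar;
        # gamma < 1 makes this update a contraction, so K converges.
        K = (1 - gamma) * base + gamma * K[np.ix_(next_pair, next_pair)]
    return K


rewards = np.array([1.0, 1.0, 0.0])
next_pair = np.array([1, 1, 2])  # deterministic successors for brevity
print(kernel_iteration(rewards, next_pair).round(3))
```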

Self-supervised learning has developed rapidly over the last decade and has been applied in many areas of computer vision. Decorrelation-based self-supervised pretraining has shown great promise among non-contrastive algorithms, yielding performance on par with supervised and contrastive self-supervised baselines. In this work, we explore the decorrelation-based paradigm of self-supervised learning and apply it to learning disentangled stroke features for writer identification. We propose a modified formulation of SWIS, a decorrelation-based framework originally proposed for signature verification, obtained by standardizing the features along each dimension on top of the existing framework. We show that the proposed framework outperforms contemporary self-supervised learning frameworks on the writer identification benchmark and also outperforms several supervised methods. To the best of our knowledge, this work is the first of its kind to apply self-supervised learning to learning representations for writer verification tasks.
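For intuition, the following is a minimal sketch of a decorrelation objective with per-dimension feature standardization, in the spirit of the modification described above (a Barlow Twins-style off-diagonal penalty). The weighting constant and loss details are assumptions for illustration, not the exact SWIS formulation.

```python
import torch


def decorrelation_loss(z1: torch.Tensor, z2: torch.Tensor, lam: float = 5e-3):
    """z1, z2: (batch, dim) embeddings of two augmented views."""
    n, _ = z1.shape
    # Standardize each feature dimension (zero mean, unit variance).
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = (z1.T @ z2) / n                                  # d x d cross-correlation
    on_diag = ((torch.diagonal(c) - 1) ** 2).sum()       # invariance term
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # decorrelation term
    return on_diag + lam * off_diag


loss = decorrelation_loss(torch.randn(256, 128), torch.randn(256, 128))
```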

We consider a decentralized optimization problem for networks affected by communication delays. Examples of such networks include collaborative machine learning, sensor networks, and multi-agent systems. To mimic communication delays, we add virtual non-computing nodes to the network, resulting in directed graphs. This motivates investigating decentralized optimization solutions on directed graphs. Existing solutions assume nodes know their out-degrees, which limits their applicability. To overcome this limitation, we introduce a novel gossip-based algorithm, called DT-GO, that does not require knowledge of the out-degrees. The algorithm is applicable to general directed networks, for example, networks with delays or limited acknowledgment capabilities. We derive convergence rates for both convex and non-convex objectives, showing that our algorithm achieves the same complexity order as centralized Stochastic Gradient Descent. In other words, the effects of the graph topology and delays are confined to higher-order terms. Additionally, we extend our analysis to accommodate time-varying network topologies. Numerical simulations are provided to support our theoretical findings.
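The delay-modeling construction from the abstract is straightforward to sketch: a delay of d steps on edge (u, v) is mimicked by a chain of d virtual non-computing relay nodes, which turns the network into a directed graph. The node naming below is illustrative.

```python
def add_delay_nodes(edges_with_delay):
    """edges_with_delay: iterable of (u, v, delay) tuples.
    Returns a list of directed edges over original + virtual nodes."""
    directed_edges = []
    for u, v, delay in edges_with_delay:
        prev = u
        for step in range(delay):
            relay = f"relay_{u}_{v}_{step}"   # virtual non-computing node
            directed_edges.append((prev, relay))
            prev = relay
        directed_edges.append((prev, v))      # message arrives after `delay` hops
    return directed_edges


# A 2-step delay from a to b, and an instantaneous link back:
print(add_delay_nodes([("a", "b", 2), ("b", "a", 0)]))
```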

Aspect-based Sentiment Analysis (ABSA) is an important sentiment analysis task, which aims to determine the sentiment polarity towards an aspect in a sentence. Because labeled data are expensive and limited, data generation (DG) has become the standard for improving the performance of ABSA. However, current DG methods usually have some shortcomings: 1) poor fluency and coherence, 2) lack of diversity in the generated data, and 3) reliance on some existing labeled data, hindering their application in real-world scenarios. With the advancement of large language models (LLMs), LLM-based DG has the potential to solve the above issues. Unfortunately, directly prompting LLMs struggles to generate the desired pseudo-labeled ABSA data, as LLMs are prone to hallucinations that lead to undesired data generation. To this end, we propose a systematic Iterative Data Generation framework, namely IDG, to boost the performance of ABSA. The core of IDG is to make full use of the powerful abilities of LLMs (i.e., instruction-following, in-context learning, and self-reflection) to iteratively generate more fluent and diverse pseudo-labeled data, starting from an unsupervised sentence corpus. Specifically, IDG designs a novel iterative data generation mechanism and a self-reflection data filtering module to tackle the challenge of unexpected data generation caused by hallucinations. Extensive experiments on four widely used ABSA benchmarks show that IDG brings consistent and significant performance gains across five baseline ABSA models. More encouragingly, the synthetic data generated by IDG can achieve comparable or even better performance than the manually annotated data.
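A schematic sketch of the generate-then-filter loop described above is shown below. The `llm` argument is a hypothetical text-completion callable, and the prompts and filtering criterion are illustrative assumptions, not the exact IDG prompts.

```python
def iterative_data_generation(llm, seed_sentences, n_rounds=3):
    """llm: hypothetical callable mapping a prompt string to a completion string."""
    pool = []
    examples = list(seed_sentences)           # unsupervised corpus to start
    for _ in range(n_rounds):
        # 1) In-context generation of pseudo-labeled ABSA sentences.
        raw = llm(
            "Write new review sentences labeled as (aspect, polarity), "
            "following these examples:\n" + "\n".join(examples)
        )
        candidates = [line for line in raw.splitlines() if line.strip()]
        # 2) Self-reflection filter: the model verifies its own labels,
        #    discarding hallucinated aspect/polarity pairs.
        kept = [c for c in candidates
                if "yes" in llm(f"Is the label in '{c}' correct? "
                                "Answer yes or no.").lower()]
        pool.extend(kept)
        examples = kept or examples           # surviving data seeds the next round
    return pool
```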

Predictive modeling and temporal-pattern analysis are increasingly critical in today's swiftly shifting retail environment for improving operational efficiency and informed decision-making. This paper reports a comprehensive application of state-of-the-art machine learning to the retail domain, with a specific focus on association rule mining, sequential pattern mining, and time-series forecasting. Association rule mining uncovers key product relationships and customer buying patterns, which form the basis of individually tailored marketing campaigns. Sequential pattern mining, using the PrefixSpan algorithm, identifies frequent sequences of purchased products, yielding powerful insights into consumer behavior and enabling better inventory management. For sales trend forecasting, Prophet is applied to historical transaction data to model seasonality, holidays, and long-term growth. The forecasts predict demand variations, helping align inventory properly and avoid overstocking or understocking. Our results are validated with metrics such as MAE (Mean Absolute Error) and RMSE (Root Mean Squared Error) to ensure the predictions are robust and accurate. By combining these techniques, we show how predictive modeling and temporal pattern analysis can optimize inventory control, enhance marketing effectiveness, and position retail businesses for further growth. The overall methodology demonstrates the flexibility with which data-driven strategies can be leveraged to revitalize traditional retailing practices.
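A minimal sketch of the Prophet forecasting step might look as follows, assuming a daily sales table with `date` and `sales` columns (the file and column names are illustrative).

```python
import pandas as pd
from prophet import Prophet  # pip install prophet

# Hypothetical transaction log; Prophet expects columns named ds and y.
sales = pd.read_csv("sales.csv")
df = sales.rename(columns={"date": "ds", "sales": "y"})

model = Prophet(yearly_seasonality=True, weekly_seasonality=True)
model.fit(df)                                      # fits trend + seasonal components

future = model.make_future_dataframe(periods=30)   # forecast 30 days ahead
forecast = model.predict(future)                   # yhat, yhat_lower, yhat_upper

# Backtest-style accuracy check on the fitted range (MAE / RMSE).
merged = forecast.merge(df, on="ds")
mae = (merged["yhat"] - merged["y"]).abs().mean()
rmse = ((merged["yhat"] - merged["y"]) ** 2).mean() ** 0.5
print(f"MAE={mae:.2f}, RMSE={rmse:.2f}")
```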

Given a graph $G = (V, E)$ and a model of information flow on that network, a fundamental question is to understand whether all nodes have sufficient access to information generated at other nodes in the graph. If not, we can ask whether a small set of interventions in the form of edge additions improves information access. Formally, the broadcast value of a network is defined to be the minimum over pairs $u,v \in V$ of the probability that an information cascade starting at $u$ reaches $v$. Having a high broadcast value ensures that every node has sufficient access to information spreading in a network, thus quantifying fairness of access. In this paper, we formally study the Broadcast Improvement problem: given $G$ and a parameter $k$, the goal is to find the best set of $k$ edges to add to $G$ in order to maximize the broadcast value of the resulting graph. We develop efficient approximation algorithms for this problem. If the optimal solution adds $k$ edges and achieves a broadcast value of $\beta^*$, we develop algorithms that can (a) add $k$ edges and achieve a broadcast value of roughly $(\beta^*)^4/16^k$, or (b) add $O(k\log n)$ edges and achieve a broadcast value of roughly $\beta^*$. We also provide other trade-offs that can be better depending on the parameter values. Our algorithms rely on novel probabilistic tools to reason about the existence of paths in edge-sampled graphs, and extend to a single-source variant of the problem, where we obtain analogous algorithmic results. We complement our results by proving that unless P = NP, any algorithm that adds $O(k)$ edges must lose significantly in the approximation of $\beta^*$, resolving an open question from prior work.
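The broadcast value is easy to estimate by Monte Carlo directly from its definition: for each ordered pair (u, v), estimate the probability that a cascade from u reaches v, then take the minimum over pairs. The sketch below assumes a simple independent-cascade-style model in which each edge transmits independently with probability p; the paper's information-flow model may differ.

```python
import random
from itertools import permutations


def reach_probability(nodes, edges, u, v, p=0.5, trials=2000):
    hits = 0
    for _ in range(trials):
        # Sample a subgraph by keeping each edge independently with prob p,
        # then check reachability from u to v by graph search.
        kept = {n: [] for n in nodes}
        for a, b in edges:
            if random.random() < p:
                kept[a].append(b)
                kept[b].append(a)
        frontier, seen = [u], {u}
        while frontier:
            n = frontier.pop()
            for m in kept[n]:
                if m not in seen:
                    seen.add(m)
                    frontier.append(m)
        hits += v in seen
    return hits / trials


def broadcast_value(nodes, edges, p=0.5):
    # Minimum pairwise reach probability over all ordered pairs.
    return min(reach_probability(nodes, edges, u, v, p)
               for u, v in permutations(nodes, 2))


print(broadcast_value(["a", "b", "c"], [("a", "b"), ("b", "c")]))
```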

Agent-based modeling and simulation has evolved as a powerful tool for modeling complex systems, offering insights into emergent behaviors and interactions among diverse agents. Integrating large language models into agent-based modeling and simulation presents a promising avenue for enhancing simulation capabilities. This paper surveys the landscape of utilizing large language models in agent-based modeling and simulation, examining their challenges and promising future directions. In this survey, since this is an interdisciplinary field, we first introduce the background of agent-based modeling and simulation and large language model-empowered agents. We then discuss the motivation for applying large language models to agent-based simulation and systematically analyze the challenges in environment perception, human alignment, action generation, and evaluation. Most importantly, we provide a comprehensive overview of the recent works of large language model-empowered agent-based modeling and simulation in multiple scenarios, which can be divided into four domains: cyber, physical, social, and hybrid, covering simulation of both real-world and virtual environments. Finally, since this area is new and quickly evolving, we discuss the open problems and promising future directions.

Unsupervised domain adaptation has recently emerged as an effective paradigm for generalizing deep neural networks to new target domains. However, there is still enormous potential to be tapped before reaching fully supervised performance. In this paper, we present a novel active learning strategy to assist knowledge transfer in the target domain, dubbed active domain adaptation. We start from the observation that energy-based models exhibit free energy biases when training (source) and test (target) data come from different distributions. Inspired by this inherent mechanism, we empirically show that a simple yet efficient energy-based sampling strategy is better at selecting the most valuable target samples than existing approaches, which require particular architectures or computation of distances. Our algorithm, Energy-based Active Domain Adaptation (EADA), queries groups of target data that incorporate both domain characteristics and instance uncertainty into every selection round. Meanwhile, by aligning the free energy of target data compactly around that of the source domain via a regularization term, the domain gap can be implicitly diminished. Through extensive experiments, we show that EADA surpasses state-of-the-art methods on well-known challenging benchmarks with substantial improvements, making it a useful option in the open world. Code is available at //github.com/BIT-DA/EADA.
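For reference, the free energy of a classifier follows the standard energy-based-model convention $F(x) = -T \log \sum_y \exp(f_y(x)/T)$. The sketch below uses it to rank unlabeled target samples for annotation; combining it with instance uncertainty in each selection round, as EADA does, is not shown.

```python
import torch


def free_energy(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    # F(x) = -T * logsumexp(f(x) / T); lower energy ~ more source-like.
    return -T * torch.logsumexp(logits / T, dim=1)


def select_for_annotation(logits: torch.Tensor, budget: int) -> torch.Tensor:
    # Query the target samples with the highest free energy first,
    # i.e., those exhibiting the largest free energy bias from the source.
    scores = free_energy(logits)
    return torch.topk(scores, budget).indices


idx = select_for_annotation(torch.randn(1000, 10), budget=32)
```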

Federated Learning (FL) is a decentralized machine-learning paradigm in which a global server iteratively averages the model parameters of local users without accessing their data. User heterogeneity has imposed significant challenges on FL, as it can incur drifted global models that are slow to converge. Knowledge distillation has recently emerged to tackle this issue by refining the server model using aggregated knowledge from heterogeneous users, rather than directly averaging their model parameters. This approach, however, depends on a proxy dataset, making it impractical unless such a prerequisite is satisfied. Moreover, the ensemble knowledge is not fully utilized to guide local model learning, which may in turn affect the quality of the aggregated model. Inspired by the prior art, we propose a data-free knowledge distillation approach to address heterogeneous FL, where the server learns a lightweight generator to ensemble user information in a data-free manner, which is then broadcast to users, regulating local training using the learned knowledge as an inductive bias. Empirical studies supported by theoretical implications show that our approach facilitates FL with better generalization performance using fewer communication rounds, compared with the state of the art.
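A schematic sketch of the server-side step follows: the server trains a lightweight generator whose samples land in regions where the ensemble of user models is confident, then broadcasts it so clients can use the ensembled knowledge as an inductive bias. The entropy-based loss, shapes, and hyperparameters are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F


def train_generator(generator, user_models, steps=100, batch=64, z_dim=100):
    """generator: any nn.Module mapping (batch, z_dim) noise to inputs."""
    opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    for _ in range(steps):
        z = torch.randn(batch, z_dim)
        x = generator(z)  # synthetic inputs: no user data is accessed
        # Ensemble knowledge: average predictive distribution of user models.
        probs = torch.stack(
            [F.softmax(m(x), dim=1) for m in user_models]
        ).mean(0)
        # Minimize ensemble entropy so generated samples fall where the
        # user-model ensemble agrees confidently.
        loss = -(probs * probs.clamp_min(1e-8).log()).sum(1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator  # broadcast to users as an inductive bias
```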

Multiple instance learning (MIL) is a powerful tool for weakly supervised classification in whole slide image (WSI) based pathology diagnosis. However, current MIL methods are usually based on the independent and identically distributed (i.i.d.) hypothesis, thus neglecting the correlation among different instances. To address this problem, we propose a new framework, called correlated MIL, and provide a proof of convergence. Based on this framework, we devise a Transformer-based MIL (TransMIL), which exploits both morphological and spatial information. The proposed TransMIL can effectively handle unbalanced/balanced and binary/multiple classification with strong visualization and interpretability. We conducted experiments on three different computational pathology problems and achieved better performance and faster convergence compared with state-of-the-art methods. The test AUC for binary tumor classification reaches 93.09% on the CAMELYON16 dataset, and the AUCs for cancer subtype classification reach 96.03% and 98.82% on the TCGA-NSCLC and TCGA-RCC datasets, respectively.
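To illustrate the core aggregation idea, here is a compact sketch of transformer-based MIL: patch-level feature vectors from one slide form a bag, a learnable class token attends over them, and the bag-level logit is read off the class token. Layer sizes are illustrative, and the published TransMIL additionally uses positional encodings and efficient (Nystrom-style) attention not shown here.

```python
import torch
import torch.nn as nn


class TinyTransMIL(nn.Module):
    def __init__(self, dim=512, n_classes=2, heads=8):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))  # learnable class token
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, bag):                  # bag: (1, n_patches, dim)
        x = torch.cat([self.cls.expand(bag.size(0), -1, -1), bag], dim=1)
        x = self.encoder(x)                  # instances exchange information
        return self.head(x[:, 0])            # classify from the class token


# One slide as a bag of 1000 patch embeddings of dimension 512.
logits = TinyTransMIL()(torch.randn(1, 1000, 512))
```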
