
We consider the problem of allocating orders to multiple stations and sequencing the interlinked order and rack processing flows at each station in a robot-assisted KIVA warehouse. The decisions involved, which are closely coupled and must be made in real time, are often tackled separately for ease of treatment. However, exploiting the synergy between order assignment and picking station scheduling improves picking efficiency. We develop a comprehensive mathematical model that captures this synergy and minimizes the total number of rack visits. To solve this intractable problem, we propose an efficient algorithm based on simulated annealing and beam search. Computational studies show that our approach outperforms both a rule-based greedy policy and independent picking station scheduling in terms of solution quality, saving over one-third and one-fifth of rack visits, respectively.
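
The following is a minimal sketch of the simulated-annealing layer only, assuming each order maps to a known set of racks and using "distinct racks needed per station" as a simplified surrogate objective; the beam-search sequencing of rack visits within each station, which the paper combines with this layer, is omitted, and all names are illustrative rather than the paper's notation.

```python
import math
import random

def rack_visits(assignment, orders, num_stations):
    """Surrogate cost: one visit per distinct rack required at each station."""
    racks_per_station = [set() for _ in range(num_stations)]
    for order_id, station in assignment.items():
        racks_per_station[station] |= orders[order_id]  # racks covering this order
    return sum(len(racks) for racks in racks_per_station)

def anneal(orders, num_stations, t0=10.0, cooling=0.995, iters=20000):
    """orders: dict mapping order id -> set of rack ids holding its SKUs."""
    assignment = {o: random.randrange(num_stations) for o in orders}
    cost, temp = rack_visits(assignment, orders, num_stations), t0
    for _ in range(iters):
        o = random.choice(list(orders))            # move one order to another station
        old_station = assignment[o]
        assignment[o] = random.randrange(num_stations)
        new_cost = rack_visits(assignment, orders, num_stations)
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost                        # accept improving or lucky moves
        else:
            assignment[o] = old_station            # revert worsening moves
        temp *= cooling
    return assignment, cost

# Example: three orders, racks r1..r3, two stations.
orders = {"o1": {"r1", "r2"}, "o2": {"r2"}, "o3": {"r3"}}
print(anneal(orders, num_stations=2))
```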

Related content

The Evaluation and Assessment in Software Engineering (EASE) conference is a leading international venue where academics and practitioners present and discuss research on evidence-based software engineering and its impact on software practice. The 23rd EASE will be held in April 2019 in Copenhagen, Denmark, hosted by the IT University of Copenhagen. EASE 2019 welcomes high-quality submissions across several tracks: full research papers, short papers and artifacts, emerging results and vision papers, an industry track, a doctoral symposium, and posters.

Rate-Splitting Multiple Access (RSMA) is a robust multiple access scheme for multi-antenna wireless networks. In this work, we study the performance of RSMA in downlink overloaded networks, where the number of transmit antennas is smaller than the number of users. We provide analysis and closed-form solutions for the optimal power and rate allocations that maximize max-min fairness when low-complexity precoding schemes are employed. The derived closed-form solutions are used to propose a low-complexity RSMA system design for precoder selection and resource allocation for an arbitrary number of users and antennas under perfect Channel State Information at the Transmitter (CSIT). We compare the performance of the proposed design with benchmark designs based on Space Division Multiple Access (SDMA) and show that the proposed low-complexity RSMA design achieves a significantly higher performance gain in overloaded networks.
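
As a hedged numerical illustration of 1-layer RSMA resource allocation (not the paper's closed-form solutions), the sketch below splits the power budget between a common stream and per-user private streams for fixed, illustrative precoders, and grid-searches the split that maximizes the minimum user rate; the uniform sharing of the common rate is a simplifying convention.

```python
import numpy as np

def min_user_rate(H, p_common, p_private, w_c, W_p, noise=1.0):
    """H: K x Nt channels; w_c: common precoder (Nt,); W_p: Nt x K private precoders."""
    K = H.shape[0]
    sig_c = p_common * np.abs(H @ w_c) ** 2
    sig_p = p_private * np.abs(np.einsum('kn,nk->k', H, W_p)) ** 2
    interf = p_private * (np.abs(H @ W_p) ** 2).sum(axis=1) - sig_p
    # Common stream is decoded first by every user, treating all private streams
    # as interference, so its rate is limited by the weakest user.
    rate_c = np.log2(1 + sig_c / (interf + sig_p + noise)).min()
    # Private streams are decoded after SIC removes the common stream.
    rate_p = np.log2(1 + sig_p / (interf + noise))
    return (rate_c / K + rate_p).min()   # share the common rate uniformly (a simplification)

def best_power_split(H, w_c, W_p, P=10.0, grid=50):
    """Grid search over the fraction of power given to the common stream."""
    splits = np.linspace(0.0, 1.0, grid)
    rates = [min_user_rate(H, t * P, (1 - t) * P / H.shape[0], w_c, W_p) for t in splits]
    i = int(np.argmax(rates))
    return splits[i], rates[i]

# Overloaded toy example: 4 users, 2 antennas, random channels and precoders.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))
w_c = np.ones(2) / np.sqrt(2)
W_p = rng.normal(size=(2, 4)); W_p /= np.linalg.norm(W_p, axis=0)
print(best_power_split(H, w_c, W_p))
```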

The advent of AI-driven large language models (LLMs) has stirred discussions about their role in qualitative research. Some view them as tools to enrich human understanding, while others perceive them as threats to the core values of the discipline. This study aimed to compare and contrast the comprehension capabilities of humans and LLMs. We conducted an experiment with a small sample of Alexa app reviews, initially classified by a human analyst. LLMs were then asked to classify these reviews and provide the reasoning behind each classification. We compared the results with the human classification and reasoning. The research indicated a significant alignment between human and ChatGPT 3.5 classifications in one third of cases, and a slightly lower alignment with GPT-4 in over a quarter of cases. The two AI models showed a higher mutual alignment, observed in more than half of the instances. However, a consensus across all three methods was seen in only about one fifth of the classifications. In the comparison of human and LLM reasoning, human analysts appear to lean heavily on their individual experiences; LLMs, as expected, base their reasoning on the specific word choices found in the app reviews and on the functional components of the app itself. Our results highlight the potential for effective human-LLM collaboration, suggesting a synergistic rather than competitive relationship. Researchers must continuously evaluate the role of LLMs in their work, thereby fostering a future where AI and humans jointly enrich qualitative research.
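
A small sketch of the agreement analysis described above: given label lists produced by the human analyst and the two models over the same reviews, compute pairwise agreement rates and the three-way consensus rate. The labels and values here are illustrative, not the study's data.

```python
def agreement(a, b):
    """Fraction of items on which two annotators assign the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Hypothetical classifications of five app reviews.
human = ["bug", "feature", "praise", "bug", "feature"]
gpt35 = ["bug", "praise",  "praise", "bug", "bug"]
gpt4  = ["bug", "feature", "praise", "bug", "bug"]

print(f"human vs GPT-3.5: {agreement(human, gpt35):.2f}")
print(f"human vs GPT-4:   {agreement(human, gpt4):.2f}")
print(f"GPT-3.5 vs GPT-4: {agreement(gpt35, gpt4):.2f}")
consensus = sum(h == a == b for h, a, b in zip(human, gpt35, gpt4)) / len(human)
print(f"three-way consensus: {consensus:.2f}")
```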

The distributed computation of a Nash equilibrium in aggregative games has gained increasing traction in recent years. Of particular interest is the mediator-free scenario, where individual players only access or observe the decisions of their neighbors due to practical constraints. Given the competitive rivalry among participating players, protecting the privacy of individual players becomes imperative when sensitive information is involved. We propose a fully distributed equilibrium-computation approach for aggregative games that achieves both rigorous differential privacy and guaranteed computation accuracy of the Nash equilibrium. This is in sharp contrast to existing differential-privacy solutions for aggregative games, which either sacrifice the accuracy of equilibrium computation to gain rigorous privacy guarantees, or allow the cumulative privacy budget to grow unbounded as iterations proceed, hence losing privacy guarantees. Our approach uses independent noises across players, making it effective even when adversaries have access to all shared messages as well as the underlying algorithm structure. The encryption-free nature of the proposed approach also ensures efficiency in computation and communication. The approach is also applicable to stochastic aggregative games, ensuring both rigorous differential privacy and guaranteed computation accuracy of the Nash equilibrium when individual players only have stochastic estimates of their pseudo-gradient mappings. Numerical comparisons with existing counterparts confirm the effectiveness of the proposed approach.
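
The following is a stylized sketch, not the paper's exact algorithm: each player descends its pseudo-gradient using a local estimate of the aggregate, and shares only a noise-perturbed estimate with neighbors. Decaying stepsizes and decaying Laplace noise echo the typical ingredients of differentially private gradient schemes; the quadratic cost, the complete-graph mixing matrix, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 5, 2000
x = rng.uniform(0, 1, N)             # players' decisions
est = x.copy()                       # local estimates of the mean aggregate
A = np.full((N, N), 1.0 / N)         # doubly stochastic mixing (complete graph here)

for t in range(1, T + 1):
    step = 1.0 / t                   # diminishing stepsize
    noise_scale = 1.0 / t ** 0.8     # decaying noise: cumulative budget stays bounded
    shared = est + rng.laplace(0.0, noise_scale, N)   # DP-perturbed messages
    mixed = A @ shared               # consensus step on the (noisy) aggregate estimate
    grad = 2 * (x - 0.5) + mixed     # pseudo-gradient of a toy quadratic cost
    x_new = np.clip(x - step * grad, 0.0, 1.0)
    est = mixed + (x_new - x)        # dynamic-average tracking correction
    x = x_new

print("approximate equilibrium decisions:", np.round(x, 3))
```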

Two new numerical schemes to approximate the Cahn-Hilliard equation with degenerate mobility (between the stable values 0 and 1) are presented, using two different non-centered approximations of the mobility. We prove that both schemes are energy stable and preserve the maximum principle approximately, i.e., the amount of the solution lying outside the interval [0,1] goes to zero in terms of a truncation parameter. Additionally, we present several numerical results to show the accuracy and robust behavior of the proposed schemes, comparing both with the corresponding centered scheme.
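
A 1-D finite-difference sketch of the non-centered idea, assuming the double-well potential F(u) = u²(1-u)² and degenerate mobility M(u) = u(1-u): the mobility at each cell face is taken from the upstream cell with respect to the chemical-potential gradient, rather than centered, so an empty cell cannot lose mass. Explicit Euler in time keeps the sketch short; the paper's schemes are more elaborate.

```python
import numpy as np

n, eps, dt, dx = 128, 0.05, 1e-6, 1.0 / 128
u = 0.5 + 0.1 * np.cos(2 * np.pi * np.linspace(0, 1, n, endpoint=False))

def lap(v):
    """Periodic second-difference Laplacian."""
    return (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / dx**2

M = lambda v: v * (1.0 - v)          # degenerate mobility, zero at u = 0 and u = 1

for _ in range(2000):
    w = 2 * u * (1 - u) * (1 - 2 * u) - eps**2 * lap(u)   # chemical potential F'(u) - eps^2 u_xx
    dw = (np.roll(w, -1) - w) / dx                        # gradient at face i+1/2
    # Positive flux at face i+1/2 moves mass from cell i+1 into cell i,
    # so the upstream value is u_{i+1} when dw > 0 and u_i otherwise.
    upwind = np.where(dw > 0, np.roll(u, -1), u)
    flux = M(upwind) * dw                                 # non-centered mobility flux
    u = u + dt * (flux - np.roll(flux, 1)) / dx           # conservative update

print("mass:", u.mean(), "range:", (u.min(), u.max()))    # mass conserved, tiny overshoot
```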

Causal discovery from time series data is a typical problem setting across the sciences. Often, multiple datasets of the same system variables are available, for instance, time series of river runoff from different catchments. The local catchment systems then share certain causal parents, such as time-dependent large-scale weather over all catchments, but differ in other catchment-specific drivers, such as the altitude of the catchment. These drivers can be called temporal and spatial contexts, respectively, and are often partially unobserved. Pooling the datasets and considering the joint causal graph among system, context, and certain auxiliary variables enables us to overcome such latent confounding of system variables. In this work, we present a non-parametric time series causal discovery method, J(oint)-PCMCI+, that efficiently learns such joint causal time series graphs when both observed and latent contexts are present, including time lags. We present asymptotic consistency results and numerical experiments demonstrating the utility and limitations of the method.
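
The PCMCI+ family ships with the authors' tigramite package. As a hedged illustration of the pooling idea only (import paths vary across tigramite releases, plain PCMCI+ stands in for the J-variant, and naive stacking ignores the seam between datasets, which a real multi-dataset analysis would handle properly), one can stack two catchment datasets, append an observed context column, and run PCMCI+ on the pooled array:

```python
import numpy as np
from tigramite import data_processing as pp
from tigramite.pcmci import PCMCI
from tigramite.independence_tests.parcorr import ParCorr

rng = np.random.default_rng(1)
T = 300

def make_dataset(altitude):
    """Toy catchment: shared temporal driver plus a spatial context (altitude)."""
    weather = rng.normal(size=T)
    runoff = 0.7 * weather + 0.3 * altitude + rng.normal(scale=0.5, size=T)
    return np.column_stack([runoff, weather, np.full(T, altitude)])

pooled = np.vstack([make_dataset(0.0), make_dataset(1.0)])   # two catchments stacked
df = pp.DataFrame(pooled, var_names=["runoff", "weather", "altitude"])
results = PCMCI(dataframe=df, cond_ind_test=ParCorr()).run_pcmciplus(tau_max=2)
```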

Cascade systems comprise a two-model sequence, with a lightweight model processing all samples and a heavier, higher-accuracy model conditionally refining the harder samples to improve accuracy. By placing the light model on the device side and the heavy model on a server, model cascades constitute a widely used distributed inference approach. With the rapid expansion of intelligent indoor environments, such as smart homes, the new setting of the Multi-Device Cascade is emerging, in which multiple, diverse devices simultaneously use a shared heavy model on the same server, typically located within or close to the consumer environment. This work presents MultiTASC, a multi-tenancy-aware scheduler that adaptively controls the forwarding decision functions of the devices to maximize system throughput while sustaining high accuracy and low latency. By explicitly considering device heterogeneity, our scheduler improves the latency service-level objective (SLO) satisfaction rate by 20-25 percentage points (pp) over state-of-the-art cascade methods in highly heterogeneous setups while serving over 40 devices, showcasing its scalability.
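
A simplified sketch of the core control loop (the class name and control law are illustrative, not MultiTASC's exact policy): each device forwards a sample to the server model only when the light model's confidence falls below a per-device threshold, and the scheduler nudges the thresholds so the aggregate forwarding rate keeps server latency within the SLO.

```python
class ForwardingController:
    def __init__(self, devices, latency_slo_ms, step=0.02):
        self.thresholds = {d: 0.5 for d in devices}   # per-device confidence cutoffs
        self.slo, self.step = latency_slo_ms, step

    def should_forward(self, device, confidence):
        """Forward hard samples: those the light model is unsure about."""
        return confidence < self.thresholds[device]

    def update(self, observed_latency_ms):
        """Server overloaded: forward less (lower cutoffs). Headroom: forward more."""
        delta = -self.step if observed_latency_ms > self.slo else self.step
        for d in self.thresholds:
            self.thresholds[d] = min(0.95, max(0.05, self.thresholds[d] + delta))

# Example monitoring window.
ctrl = ForwardingController(devices=["cam0", "spk1"], latency_slo_ms=50)
if ctrl.should_forward("cam0", confidence=0.4):
    pass  # send this sample to the server-side heavy model
ctrl.update(observed_latency_ms=63.0)   # over SLO, so thresholds tighten
```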

TalkBank is an online database that facilitates the sharing of linguistics research data. However, TalkBank's existing API has limited data filtering and batch processing capabilities. To overcome these limitations, this paper introduces a pipeline framework that employs a hierarchical search approach, enabling efficient selection of complex data. The approach involves a quick preliminary screening of the corpora a researcher may need, followed by an in-depth search for target data based on specific criteria. The identified files are then indexed, providing easier access for future analysis. Furthermore, the paper demonstrates how data from different studies curated with the framework can be integrated by standardizing and cleaning metadata, allowing researchers to extract insights from a large, integrated dataset. Although designed for TalkBank, the framework can also be adapted to process data from other open-science platforms.
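
A minimal sketch of the hierarchical search just described, assuming corpus-level metadata (language, age range, and so on) is cheap to read while transcript files are expensive to scan; the field names and the flat JSON index are illustrative assumptions, though the ".cha" extension is TalkBank's CHAT transcript format.

```python
import json
from pathlib import Path

def preliminary_screen(corpora, language):
    """Stage 1: keep only corpora whose metadata matches the coarse criteria."""
    return [c for c in corpora if c["language"] == language]

def in_depth_search(corpus_dir, keyword):
    """Stage 2: scan transcripts in the surviving corpora for the target data."""
    return [p for p in Path(corpus_dir).rglob("*.cha")
            if keyword in p.read_text(encoding="utf-8", errors="ignore")]

def build_index(hits, index_path="index.json"):
    """Persist matched file paths so later analyses skip the search entirely."""
    Path(index_path).write_text(json.dumps([str(p) for p in hits], indent=2))

# Example: screen by language, then search only the shortlisted corpora.
corpora = [{"name": "Eng-NA", "language": "eng", "path": "data/Eng-NA"},
           {"name": "Spanish", "language": "spa", "path": "data/Spanish"}]
for corpus in preliminary_screen(corpora, language="eng"):
    build_index(in_depth_search(corpus["path"], keyword="Alexa"))
```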

This work aims to address the general order-manipulation issue in blockchain-based decentralized exchanges (DEX) by exploring the benefits of employing differentially order-fair atomic broadcast (of-ABC) for transaction ordering and frequent batch auctions (FBA) for execution. In the suggested of-ABC approach, transactions submitted to a sufficient number of blockchain validators are ordered before or along with later transactions. FBA then executes transactions with a uniform-price double auction that prioritizes price instead of transaction order within the same committed batch. To demonstrate the effectiveness of our order-but-not-execute-in-order design, we compare the welfare loss and liquidity provision in a DEX under FBA and its continuous counterpart, the Central Limit Order Book (CLOB). Assuming that the exchange is realized over an of-ABC protocol, we find that FBA achieves better social welfare than CLOB when (1) public information affecting the fundamental value of an asset is revealed more frequently than private information, (2) the block generation interval is sufficiently large, or (3) the priority fees attached to submitted transactions are small compared with the asset price changes. Our findings further indicate that liquidity provision is better under FBA (4) when the market is not thin, i.e., when investors and traders submit a larger number of transactions per block, or (5) when fewer privately informed traders are present. Overall, in these settings, the adoption of FBA and of-ABC in DEX improves social welfare and liquidity provision compared with the continuous CLOB model.
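
A compact sketch of the frequent-batch-auction step: all bids and asks committed in one block are cleared at a single price where supply meets demand, so intra-batch arrival order carries no priority. The order representation and the midpoint-of-marginal-pair pricing rule are one common convention, not necessarily the paper's.

```python
def uniform_price_auction(bids, asks):
    """bids/asks: lists of (price, qty). Returns (clearing_price, traded_volume)."""
    bids = sorted(bids, key=lambda o: -o[0])      # highest willingness to pay first
    asks = sorted(asks, key=lambda o: o[0])       # cheapest supply first
    volume, price = 0.0, None
    b, a = iter(bids), iter(asks)
    bid, ask = next(b, None), next(a, None)
    while bid and ask and bid[0] >= ask[0]:       # match while the book crosses
        traded = min(bid[1], ask[1])
        volume += traded
        price = (bid[0] + ask[0]) / 2             # uniform price from the marginal pair
        bid = (bid[0], bid[1] - traded) if bid[1] > traded else next(b, None)
        ask = (ask[0], ask[1] - traded) if ask[1] > traded else next(a, None)
    return price, volume

# One committed batch: every filled order trades at the same clearing price.
print(uniform_price_auction(bids=[(101, 5), (100, 3)], asks=[(99, 4), (100.5, 6)]))
```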

Since cyberspace consolidated as the fifth warfare domain, the various actors of the defense sector have engaged in an arms race toward cyber superiority, to which research, academic, and industrial stakeholders contribute from a dual-use perspective, largely built on a broad and heterogeneous heritage of civilian cybersecurity capabilities. In this context, increasing awareness of the operational environment and of the risks and impacts of cyber threats on kinetic actions has become a critical game-changer for military decision-makers. A major challenge in acquiring mission-centric Cyber Situational Awareness (CSA) is the dynamic inference and assessment of how situations occurring at the mission-supporting Information and Communications Technologies (ICT) propagate vertically up to their relevance at the tactical, operational, and strategic levels. To contribute to acquiring CSA, this paper addresses a major gap in the cyber defence state of the art: the dynamic identification of Key Cyber Terrain (KCT) in a mission-centric context. The proposed KCT identification approach explores the dependency degrees among tasks and assets defined by commanders as part of the assessment criteria, correlating them with discoveries on the operational network and with the asset vulnerabilities identified throughout the supported mission's development. The proposal is presented as a reference model that reveals the key aspects of mission-centric KCT analysis and supports its implementation, illustrated by an application case.
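
The paper defines its own reference model; the toy sketch below only illustrates the flavor of dependency-degree aggregation: tasks depend on assets with commander-assigned weights, assets carry vulnerability scores from network discovery, and an asset's terrain score aggregates its mission impact. All names, weights, and the scoring rule are hypothetical.

```python
task_asset_deps = {                              # task -> {asset: dependency degree}
    "recon":  {"uav_link": 0.9, "gcs": 0.6},
    "strike": {"gcs": 0.8, "targeting_db": 0.7},
}
task_priority = {"recon": 0.5, "strike": 1.0}    # set by the commander
asset_vuln = {"uav_link": 0.4, "gcs": 0.7, "targeting_db": 0.2}  # from vulnerability scans

# Propagate mission impact from tasks down to the assets that support them.
kct_score = {}
for task, deps in task_asset_deps.items():
    for asset, degree in deps.items():
        impact = task_priority[task] * degree * asset_vuln[asset]
        kct_score[asset] = kct_score.get(asset, 0.0) + impact

for asset, score in sorted(kct_score.items(), key=lambda kv: -kv[1]):
    print(f"{asset}: {score:.2f}")               # highest scores flag key cyber terrain
```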

Artificial Intelligence (AI) is rapidly becoming integrated into military Command and Control (C2) systems as a strategic priority for many defence forces. The successful implementation of AI promises to herald a significant leap in C2 agility through automation. However, realistic expectations must be set about what AI can achieve in the foreseeable future. This paper argues that AI could lead to a fragility trap, whereby delegating C2 functions to an AI increases the fragility of C2 and can result in catastrophic strategic failures. This calls for a new framework for AI in C2 that avoids this trap. We argue that antifragility, along with agility, should form the core design principles of AI-enabled C2 systems. This duality is termed Agile, Antifragile, AI-Enabled Command and Control (A3IC2). An A3IC2 system continuously improves its capacity to perform in the face of shocks and surprises through overcompensation from feedback during the C2 decision-making cycle. An A3IC2 system will not only survive within a complex operational environment, it will thrive, benefiting from the inevitable shocks and volatility of war.
