
Cluster-randomized trials often involve units that are irregularly distributed in space without well-separated communities. In these settings, cluster construction is a critical aspect of the design due to the potential for cross-cluster interference. The existing literature relies on partial interference models, which take clusters as given and assume no cross-cluster interference. We relax this assumption by allowing interference to decay with geographic distance between units. This induces a bias-variance trade-off: constructing fewer, larger clusters reduces bias due to interference but increases variance. We propose new estimators that exclude units most potentially impacted by cross-cluster interference and show that this substantially reduces asymptotic bias relative to conventional difference-in-means estimators. We then study the design of clusters to optimize the estimators' rates of convergence. We provide formal justification for a new design that chooses the number of clusters to balance the asymptotic bias and variance of our estimators and uses unsupervised learning to automate cluster construction.
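To make the design concrete, the following is a minimal sketch of the workflow the abstract describes: form clusters with an off-the-shelf unsupervised learner (k-means here), randomize treatment at the cluster level, and compare means after dropping the units closest to a foreign cluster, i.e., the units most exposed to cross-cluster interference. The function names, the buffer rule, and the use of k-means are illustrative assumptions, not the paper's exact estimator or design.

```python
# Hypothetical sketch: k-means cluster construction, cluster-level randomization,
# and a difference-in-means estimate that excludes units near a foreign cluster.
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist

def trimmed_cluster_estimate(coords, outcomes, n_clusters=20, buffer=0.1, seed=0):
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(coords)
    treated_clusters = rng.choice(n_clusters, size=n_clusters // 2, replace=False)
    treated = np.isin(labels, treated_clusters)

    # Distance from each unit to the nearest unit belonging to a *different* cluster.
    dist = cdist(coords, coords)
    foreign = labels[None, :] != labels[:, None]
    nearest_foreign = np.where(foreign, dist, np.inf).min(axis=1)
    keep = nearest_foreign > buffer  # drop units closest to a cluster boundary

    return outcomes[keep & treated].mean() - outcomes[keep & ~treated].mean()
```

Choosing the number of clusters and the buffer distance is exactly the bias-variance trade-off discussed above: fewer, larger clusters and a wider buffer reduce interference bias but leave fewer units and less variation for estimation.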

Related Content

Deep Material Network (DMN) has recently emerged as a data-driven surrogate model for heterogeneous materials. Given a particular microstructural morphology, the effective linear and nonlinear behaviors can be successfully approximated by such a physics-based, neural-network-like architecture. In this work, a novel micromechanics-informed parametric DMN (MIpDMN) architecture is proposed for multiscale materials with a varying microstructure characterized by several parameters. A single-layer feedforward neural network is used to account for the dependence of the DMN fitting parameters on the microstructural ones. Micromechanical constraints are prescribed both on the architecture and on the outputs of this new neural network. The proposed MIpDMN is also recast in a multiple-physics setting, where physical properties other than mechanical ones can also be predicted. In numerical simulations conducted on three parameterized microstructures, MIpDMN demonstrates satisfactory generalization capabilities as the morphology varies. The effective behaviors of such parametric multiscale materials can thus be predicted and encoded by MIpDMN with high accuracy and efficiency.
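As a rough illustration of the parametric idea (not the authors' implementation), the sketch below uses a single-hidden-layer feedforward network to map a few microstructural parameters to a vector of DMN fitting parameters, with a softmax output as one simple way to impose a micromechanical constraint such as non-negative weights that sum to one. The dimensions and parameter names are assumptions.

```python
# Minimal sketch: a small feedforward head that predicts constrained DMN fitting
# parameters from microstructural descriptors (e.g., volume fractions, aspect ratio).
import torch
import torch.nn as nn

class ParametricDMNHead(nn.Module):
    def __init__(self, n_micro_params: int, n_dmn_params: int, hidden: int = 32):
        super().__init__()
        self.hidden = nn.Linear(n_micro_params, hidden)
        self.out = nn.Linear(hidden, n_dmn_params)

    def forward(self, micro_params: torch.Tensor) -> torch.Tensor:
        h = torch.tanh(self.hidden(micro_params))
        # Softmax keeps the predicted weights non-negative and normalized to one.
        return torch.softmax(self.out(h), dim=-1)

head = ParametricDMNHead(n_micro_params=3, n_dmn_params=16)
weights = head(torch.tensor([[0.3, 0.5, 0.2]]))  # hypothetical microstructural parameters
```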

Hybrid Flying-Crawling Quadrotors (HyFCQs) are transformable robots capable of hybrid terrestrial and aerial motion. This article presents a motion planning and control framework designed for HyFCQs. A kinodynamic path-searching method that respects the crawling limitations of HyFCQs is proposed to guarantee the dynamical feasibility of trajectories. Subsequently, a hierarchical motion controller is designed to map the execution of the flight autopilot to both crawling and flying modes. Considering the distinct driving methods for crawling and flying, we introduce a motion state machine for autonomous locomotion regulation. Real-world experiments in diverse scenarios validate the performance of the proposed approach.
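The following is an illustrative sketch of a motion state machine that switches between crawling and flying modes; the states, triggers, and method names are assumptions made for illustration rather than the authors' exact design.

```python
# Illustrative two-state machine for locomotion regulation of a hybrid robot.
from enum import Enum, auto

class Mode(Enum):
    CRAWLING = auto()
    FLYING = auto()

class MotionStateMachine:
    def __init__(self):
        self.mode = Mode.CRAWLING

    def update(self, trajectory_requires_flight: bool, on_ground: bool) -> Mode:
        if self.mode is Mode.CRAWLING and trajectory_requires_flight:
            self.mode = Mode.FLYING        # take off to follow an aerial trajectory segment
        elif self.mode is Mode.FLYING and on_ground and not trajectory_requires_flight:
            self.mode = Mode.CRAWLING      # land and continue on the ground
        return self.mode
```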

The roll-out of 5G has been characterized chiefly by its distinct support for vertical industries, especially manufacturing. Leveraging synergies between these two worlds, namely production facilities and network systems, is fundamental to enabling flexibility and economic viability in future factories. This work highlights the potential of intelligent networking and advanced machine-learning-based solutions in 5G-and-beyond systems in the context of Industry 4.0 and flexible manufacturing. Their intersection allows the creation of versatile machines and dynamic communication networks that can adapt to changes in the manufacturing process, factory layout, and communication environment, supporting real-time interaction between humans, machines, and systems. We present a vision and a corresponding framework by introducing the network-aware and production-aware principles, outlining results achieved in this context, and summarizing them in three key use cases. Finally, we discuss a selection of remaining open challenges in private networks and give an outlook on future 6G research directions.

Representing a polygon using a set of simple shapes has numerous applications in different use-case scenarios. We consider the problem of covering the interior of a rectilinear polygon with holes by a set of area-weighted, axis-aligned rectangles such that the total weight of the rectangles in the cover is minimized. Even the unit-weight case is known to be NP-hard, and the general problem has, to the best of our knowledge, not been studied experimentally before. We show a new basic property of optimal solutions of the weighted problem. This allows us to speed up existing algorithms for the unit-weight case, obtain an improved ILP formulation for both the weighted and unweighted problems, and develop several approximation algorithms and heuristics for the weighted case. All our algorithms are evaluated in a large experimental study on 186,837 polygons combined with six cost functions, which provides evidence that our algorithms are both fast and yield close-to-optimal solutions in practice.
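For readers unfamiliar with the formulation, a generic ILP for weighted rectangle cover can be written with one binary variable per candidate rectangle and one covering constraint per interior grid cell, as in the sketch below (using PuLP). This illustrates the baseline kind of formulation, not the improved formulation contributed by the paper.

```python
# Generic weighted rectangle-cover ILP: select rectangles of minimum total weight
# so that every interior cell of the polygon is covered at least once.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

def weighted_rectangle_cover(cells, candidates, weight):
    # cells: iterable of interior grid cells; candidates: list of rectangles,
    # each represented as the set of cells it covers; weight: cost function.
    prob = LpProblem("rect_cover", LpMinimize)
    x = [LpVariable(f"x_{i}", cat=LpBinary) for i in range(len(candidates))]
    prob += lpSum(weight(r) * x[i] for i, r in enumerate(candidates))      # objective
    for c in cells:
        prob += lpSum(x[i] for i, r in enumerate(candidates) if c in r) >= 1
    prob.solve()
    return [r for i, r in enumerate(candidates) if x[i].value() == 1]
```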

Invisible units mainly refer to small-scale units that are not monitored by, and thus not visible to, utilities. The integration of these invisible units into power systems significantly affects the way a distribution grid is planned and operated. This paper, based on random matrix theory (RMT), proposes a statistical, data-driven framework to handle massive grid data, in contrast to its deterministic, model-based counterpart. Combining the RMT-based data-mining framework with conventional techniques, some heuristics are derived as the solution to the invisible unit detection and estimation task: linear eigenvalue statistic indicators (LESs) are suggested as the main ingredients of the solution, and, based on the statistical properties of LESs, a hypothesis test is formulated to conduct change-point detection in the high-dimensional space. The proposed method is promising for anomaly detection and pertinent to current distribution networks: it is capable of detecting invisible power usage and fraudulent behavior, and can even locate the suspect units. Case studies, using both simulated and actual data, validate the proposed method.
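A hedged sketch of the data-driven ingredient described above: form the sample covariance of a high-dimensional window of grid measurements, compute a linear eigenvalue statistic, and flag windows whose statistic deviates strongly from a baseline. The particular statistic, scaling, and threshold are illustrative assumptions, not the paper's exact indicators or test.

```python
# Illustrative linear eigenvalue statistic (LES) and a simple z-score change flag.
import numpy as np

def linear_eigenvalue_statistic(window: np.ndarray, phi=np.log) -> float:
    # window: (n_measurements, n_samples) standardized data matrix
    n = window.shape[1]
    cov = window @ window.T / n
    eigs = np.linalg.eigvalsh(cov)
    return float(np.sum(phi(np.clip(eigs, 1e-12, None))))

def detect_change(windows, baseline_mean, baseline_std, z_threshold=3.0):
    flags = []
    for w in windows:
        les = linear_eigenvalue_statistic(w)
        flags.append(abs(les - baseline_mean) / baseline_std > z_threshold)
    return flags
```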

An approach for encoding abstract dialectical frameworks and their semantics into classical higher-order logic is presented. Important properties and semantic relationships are formally encoded and proven using the proof assistant Isabelle/HOL. This approach allows for the computer-assisted analysis of abstract dialectical frameworks using automated and interactive reasoning tools within a uniform logic environment. Exemplary applications include the formal analysis and verification of meta-theoretical properties, and the generation of interpretations and extensions under specific semantic constraints.
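Separately from the Isabelle/HOL encoding, the following small Python sketch illustrates the underlying objects: an abstract dialectical framework assigns each statement a Boolean acceptance condition over its parents, and a two-valued model is an assignment under which every statement's value equals the value of its acceptance condition. The example framework at the bottom is made up for illustration.

```python
# Brute-force enumeration of two-valued models of a small abstract dialectical framework.
from itertools import product

def two_valued_models(statements, acceptance):
    # acceptance: maps each statement to a Boolean function of the full assignment (a dict)
    models = []
    for values in product([True, False], repeat=len(statements)):
        v = dict(zip(statements, values))
        if all(v[s] == acceptance[s](v) for s in statements):
            models.append(v)
    return models

# Example ADF: a and b attack each other; c is supported by a.
statements = ["a", "b", "c"]
acceptance = {"a": lambda v: not v["b"], "b": lambda v: not v["a"], "c": lambda v: v["a"]}
print(two_valued_models(statements, acceptance))
```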

Large Language Models (LLMs) have shown excellent generalization capabilities that have led to the development of numerous models. These models propose new architectures, tweak existing architectures with refined training strategies, increase context length, use higher-quality training data, and increase training time to outperform baselines. Analyzing new developments is crucial for identifying changes that enhance training stability and improve generalization in LLMs. This survey paper comprehensively analyzes LLM architectures and their categorization, training strategies, training datasets, and performance evaluations, and discusses future research directions. Moreover, the paper discusses the basic building blocks and concepts behind LLMs, followed by a complete overview of LLMs, including their important features and functions. Finally, the paper summarizes significant findings from LLM research and consolidates essential architectural and training strategies for developing advanced LLMs. Given the continuous advancements in LLMs, we intend to update this paper regularly by incorporating new sections and featuring the latest LLM models.

Recent artificial intelligence (AI) systems have reached milestones in "grand challenges" ranging from Go to protein-folding. The capability to retrieve medical knowledge, reason over it, and answer medical questions comparably to physicians has long been viewed as one such grand challenge. Large language models (LLMs) have catalyzed significant progress in medical question answering; Med-PaLM was the first model to exceed a "passing" score in US Medical Licensing Examination (USMLE) style questions with a score of 67.2% on the MedQA dataset. However, this and other prior work suggested significant room for improvement, especially when models' answers were compared to clinicians' answers. Here we present Med-PaLM 2, which bridges these gaps by leveraging a combination of base LLM improvements (PaLM 2), medical domain finetuning, and prompting strategies including a novel ensemble refinement approach. Med-PaLM 2 scored up to 86.5% on the MedQA dataset, improving upon Med-PaLM by over 19% and setting a new state-of-the-art. We also observed performance approaching or exceeding state-of-the-art across MedMCQA, PubMedQA, and MMLU clinical topics datasets. We performed detailed human evaluations on long-form questions along multiple axes relevant to clinical applications. In pairwise comparative ranking of 1066 consumer medical questions, physicians preferred Med-PaLM 2 answers to those produced by physicians on eight of nine axes pertaining to clinical utility (p < 0.001). We also observed significant improvements compared to Med-PaLM on every evaluation axis (p < 0.001) on newly introduced datasets of 240 long-form "adversarial" questions to probe LLM limitations. While further studies are necessary to validate the efficacy of these models in real-world settings, these results highlight rapid progress towards physician-level performance in medical question answering.
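As a rough illustration of an ensemble-refinement-style prompting loop (not Med-PaLM 2's actual implementation or API), the sketch below samples several reasoning paths at a nonzero temperature and then conditions a second, low-temperature pass on those samples to produce a refined answer; `generate` is a hypothetical text-generation callable.

```python
# Hypothetical ensemble-refinement-style prompting loop around a generic LLM call.
def ensemble_refinement(question, generate, n_samples=8, temperature=0.7):
    samples = [generate(f"Question: {question}\nLet's think step by step.",
                        temperature=temperature)
               for _ in range(n_samples)]
    context = "\n\n".join(f"Candidate reasoning {i + 1}:\n{s}"
                          for i, s in enumerate(samples))
    refine_prompt = (f"Question: {question}\n\n{context}\n\n"
                     "Considering the candidate reasonings above, "
                     "give a single refined final answer.")
    return generate(refine_prompt, temperature=0.0)
```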

Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their significant power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes of the same class are prone to connect to each other), while ignoring the heterophily that exists in many real-world networks (i.e., nodes of different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining intermediate representations, which introduces noise and irrelevant information into the result. These methods do not change the propagation mechanism itself, which works under the homophily assumption and is a fundamental part of GCNs. This makes it difficult to distinguish the representations of nodes from different classes. To address this problem, we design a novel propagation mechanism that can automatically change the propagation and aggregation process according to the homophily or heterophily between node pairs. To adaptively learn the propagation process, we introduce two measurements of the homophily degree between node pairs, which are learned from topological and attribute information, respectively. We then incorporate the learnable homophily degree into the graph convolution framework, which is trained in an end-to-end scheme, enabling it to go beyond the assumption of homophily. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms state-of-the-art methods under heterophily or low homophily, and achieves competitive performance under homophily.
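A simplified PyTorch sketch of the general idea (not the authors' exact model): a small edge scorer produces a homophily degree in [-1, 1] for each connected node pair, and this score scales, and can flip the sign of, the message aggregated from that neighbor, so that heterophilous neighbors are treated differently from homophilous ones.

```python
# Homophily-adaptive propagation: per-edge learned scores reweight neighbor messages.
import torch
import torch.nn as nn

class HomophilyAdaptiveConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.edge_scorer = nn.Sequential(nn.Linear(2 * in_dim, 16), nn.ReLU(),
                                         nn.Linear(16, 1), nn.Tanh())

    def forward(self, x, edge_index):
        src, dst = edge_index                      # edges as a (2, E) index tensor
        score = self.edge_scorer(torch.cat([x[src], x[dst]], dim=-1))  # (E, 1) in [-1, 1]
        msg = self.lin(x)[src] * score             # scale (or sign-flip) per-edge messages
        out = torch.zeros(x.size(0), msg.size(-1))
        out.index_add_(0, dst, msg)                # sum incoming messages at each node
        return out + self.lin(x)                   # keep a self-contribution

x = torch.randn(5, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
layer = HomophilyAdaptiveConv(8, 16)
print(layer(x, edge_index).shape)                  # torch.Size([5, 16])
```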

Human-in-the-loop aims to train an accurate prediction model at minimum cost by integrating human knowledge and experience. Humans can provide training data for machine learning applications and, with the help of machine-based approaches, directly accomplish tasks in the pipeline that are hard for computers. In this paper, we survey existing work on human-in-the-loop from a data perspective and classify it into three categories with a progressive relationship: (1) work that improves model performance through data processing, (2) work that improves model performance through interventional model training, and (3) the design of system-independent human-in-the-loop. Using this categorization, we summarize the major approaches in the field along with their technical strengths and weaknesses, and we briefly classify and discuss applications in natural language processing, computer vision, and other areas. In addition, we outline some open challenges and opportunities. This survey intends to provide a high-level summary of human-in-the-loop and to motivate interested readers to consider approaches for designing effective human-in-the-loop solutions.
