The non-dominated sorting genetic algorithm II (NSGA-II) is the most intensively used multi-objective evolutionary algorithm (MOEA) in real-world applications. However, in contrast to several simple MOEAs that have also been analyzed by mathematical means, no such study exists for the NSGA-II so far. In this work, we show that mathematical runtime analyses are feasible for the NSGA-II as well. As particular results, we prove that with a population size four times larger than the size of the Pareto front, the NSGA-II with two classic mutation operators and four different ways to select the parents satisfies the same asymptotic runtime guarantees as the SEMO and GSEMO algorithms on the basic OneMinMax and LeadingOnesTrailingZeros benchmarks. However, if the population size is only equal to the size of the Pareto front, then the NSGA-II cannot efficiently compute the full Pareto front: for an exponential number of iterations, the population will always miss a constant fraction of the Pareto front. Our experiments confirm the above findings.
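
A minimal sketch, in Python, of the OneMinMax benchmark named in the abstract and one simplified NSGA-II-style generation (uniform parent selection, standard bit-wise mutation, population size four times the Pareto front). The survival step keeps one individual per objective vector rather than computing full non-dominated sorting and crowding distance, so this illustrates the setting rather than the analyzed algorithm verbatim.

```python
import random

n = 8                                        # bit-string length
N = 4 * (n + 1)                              # population = 4x Pareto front size

def one_min_max(x):
    # Both objectives are maximized: (number of zeros, number of ones).
    # Every bit string is Pareto-optimal, so covering the front means
    # finding all n+1 objective vectors (i, n - i).
    return (x.count(0), x.count(1))

def mutate(x):
    # Standard bit-wise mutation: flip each bit independently with prob. 1/n.
    return [b ^ (random.random() < 1 / n) for b in x]

pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(N)]
for _ in range(200):
    offspring = [mutate(random.choice(pop)) for _ in range(N)]
    # Simplified survival selection: keep one individual per objective
    # vector first (front coverage), then fill up with the remainder.
    seen, kept, rest = set(), [], []
    for x in pop + offspring:
        f = one_min_max(x)
        (rest if f in seen else kept).append(x)
        seen.add(f)
    pop = (kept + rest)[:N]

print(sorted({one_min_max(x) for x in pop}))  # covered Pareto-front points
```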

Related Content

Despite the efficacy of graph-based algorithms for Approximate Nearest Neighbor (ANN) search, the optimal tuning of such systems remains unclear. This study introduces a method to tune the performance of off-the-shelf graph-based indexes, focusing on the dimension of vectors, database size, and entry points of graph traversal. We use a black-box optimization algorithm to perform integrated tuning that meets the required levels of recall and Queries Per Second (QPS). We applied our approach to Task A of the SISAP 2023 Indexing Challenge and placed second in the 10M and 30M tracks, improving performance substantially over brute-force baselines. This research offers a universally applicable tuning method for graph-based indexes, extending beyond the specific conditions of the competition to broader uses.
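
A hedged sketch of the black-box tuning idea, assuming an HNSW index via hnswlib and Optuna as the optimizer (the abstract does not name the specific index or optimizer; dataset sizes, parameter ranges, and the 0.9 recall target are illustrative).

```python
import time
import numpy as np
import hnswlib
import optuna

rng = np.random.default_rng(0)
dim, n_base, n_query, k = 64, 20_000, 500, 10
xb = rng.standard_normal((n_base, dim)).astype(np.float32)
xq = rng.standard_normal((n_query, dim)).astype(np.float32)

# Brute-force ground truth, used to measure recall of the tuned index.
d2 = (xq ** 2).sum(1)[:, None] - 2 * xq @ xb.T + (xb ** 2).sum(1)[None, :]
gt = np.argpartition(d2, k, axis=1)[:, :k]

def objective(trial):
    # Construction and search parameters are tuned together as one black box.
    M = trial.suggest_int("M", 8, 48)
    ef_c = trial.suggest_int("ef_construction", 64, 400)
    ef_s = trial.suggest_int("ef_search", 16, 400)
    index = hnswlib.Index(space="l2", dim=dim)
    index.init_index(max_elements=n_base, ef_construction=ef_c, M=M)
    index.add_items(xb)
    index.set_ef(ef_s)
    t0 = time.perf_counter()
    labels, _ = index.knn_query(xq, k=k)
    qps = n_query / (time.perf_counter() - t0)
    recall = np.mean([len(set(l.tolist()) & set(g.tolist())) / k
                      for l, g in zip(labels, gt)])
    return qps if recall >= 0.9 else 0.0   # reject configs below target recall

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params, study.best_value)
```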

We prove precise rates of convergence for monotone approximation schemes of fractional and nonlocal Hamilton-Jacobi-Bellman (HJB) equations. We consider diffusion-corrected difference-quadrature schemes from the literature and new approximations based on powers of discrete Laplacians, approximations which are (formally) fractional-order and second-order methods. It is well-known in numerical analysis that convergence rates depend on the regularity of solutions, and here we consider cases with varying solution regularity: (i) strongly degenerate problems with Lipschitz solutions, and (ii) weakly non-degenerate problems where we show that solutions have bounded fractional derivatives of order between 1 and 2. Our main results are optimal error estimates with convergence rates that capture precisely both the fractional order of the schemes and the fractional regularity of the solutions. For strongly degenerate equations, these rates improve earlier results. For weakly non-degenerate problems of order greater than one, the results are new. Here we show improved rates compared to the strongly degenerate case, rates that are always better than 1/2.
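
For illustration, one standard way to make sense of "powers of discrete Laplacians" (not necessarily the paper's exact construction) is the semigroup (Balakrishnan) representation of the fractional Laplacian for $s \in (0,1)$, with the continuous heat semigroup replaced by that of the usual finite-difference Laplacian:

```latex
\[
  (-\Delta)^{s} u(x) \;=\; \frac{1}{\Gamma(-s)} \int_0^\infty
  \bigl( e^{t\Delta} u(x) - u(x) \bigr)\, \frac{dt}{t^{1+s}},
\]
% and its discrete analogue, a (formal) power of the finite-difference Laplacian:
\[
  (-\Delta_h)^{s} u_h \;=\; \frac{1}{\Gamma(-s)} \int_0^\infty
  \bigl( e^{t\Delta_h} u_h - u_h \bigr)\, \frac{dt}{t^{1+s}},
  \qquad
  \Delta_h u_h(x) = \sum_{i=1}^{d}
  \frac{u_h(x + h e_i) - 2 u_h(x) + u_h(x - h e_i)}{h^{2}}.
\]
```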

Emergency communication is vital for search and rescue operations following natural disasters. Unmanned Aerial Vehicles (UAVs) can significantly assist emergency communication through agile positioning, maintaining connectivity during rapid motion, and relaying critical disaster-related information to Ground Control Stations (GCS). Designing effective routing protocols for relaying crucial data in UAV networks is challenging due to dynamic topology, rapid mobility, and limited UAV resources. This paper presents a novel energy-constrained routing mechanism that ensures connectivity, inter-UAV collision avoidance, and network restoration after UAV fragmentation, while adapting without a predefined UAV path. The proposed method employs improved Q-learning to optimize next-hop node selection; we call the resulting protocol Improved Q-learning-based Multi-hop Routing (IQMR). Simulation results validate IQMR's adaptability to changing system conditions and its superiority over QMR, QTAR, and Q-FANET in energy efficiency and data throughput. IQMR achieves energy consumption efficiency improvements of 32.27%, 36.35%, and 36.35% over QMR, Q-FANET, and QTAR, along with significantly higher data throughput enhancements of 53.3%, 80.35%, and 93.36% over Q-FANET, QMR, and QTAR.
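
A minimal sketch of the tabular Q-learning update that next-hop routing protocols of this family build on. The state/action/reward definitions here are illustrative stand-ins, not IQMR's exact formulation.

```python
import random
from collections import defaultdict

alpha, gamma, eps = 0.1, 0.9, 0.1      # learning rate, discount, exploration
Q = defaultdict(float)                  # Q[(node, next_hop)] -> estimated value

def choose_next_hop(node, neighbors):
    """Epsilon-greedy next-hop selection among the node's current neighbors."""
    if random.random() < eps:
        return random.choice(neighbors)
    return max(neighbors, key=lambda nb: Q[(node, nb)])

def update(node, next_hop, reward, next_neighbors):
    """One Q-learning step; in a routing protocol the reward would combine
    residual energy, link quality, and progress toward the GCS."""
    best_next = max((Q[(next_hop, nb)] for nb in next_neighbors), default=0.0)
    Q[(node, next_hop)] += alpha * (reward + gamma * best_next
                                    - Q[(node, next_hop)])
```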

Uncertainty quantification (UQ) is an active area of research and an essential technique used in all fields of science and engineering. The most common methods for UQ are Monte Carlo and surrogate modelling. The former is dimension-independent but converges slowly, while the latter has been shown to yield large computational speedups with respect to Monte Carlo. However, surrogate models suffer from the so-called curse of dimensionality and become costly to train for high-dimensional problems, where UQ might become computationally prohibitive. In this paper we present a new technique, Lasso Monte Carlo (LMC), which combines a Lasso surrogate model with the multifidelity Monte Carlo technique in order to perform UQ in high-dimensional settings at a reduced computational cost. We provide mathematical guarantees for the unbiasedness of the method, and show that LMC can be more accurate than simple Monte Carlo. The theory is numerically tested with benchmarks on toy problems, as well as on a real example of UQ from the field of nuclear engineering. In all presented examples LMC is more accurate than simple Monte Carlo and other multifidelity methods. In relevant cases, LMC reduces computational costs by more than a factor of 5 with respect to simple MC.
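
A minimal sketch of the control-variate idea behind multifidelity Monte Carlo with a cheap Lasso surrogate: many inexpensive surrogate evaluations are corrected by a small number of expensive model evaluations, and the two-level estimator remains unbiased. This is an illustration under a toy model, not the paper's exact LMC estimator.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
d, n_train, n_small, n_big = 100, 60, 40, 10_000

def f(x):                       # stand-in for an expensive high-dim model
    return x[..., 0] + 0.5 * x[..., 1] ** 2 + 0.01 * x.sum(-1)

X_train = rng.standard_normal((n_train, d))
surrogate = Lasso(alpha=0.05).fit(X_train, f(X_train))

X_small = rng.standard_normal((n_small, d))   # few expensive samples
X_big = rng.standard_normal((n_big, d))       # many cheap samples

# Unbiased two-level estimator:
#   E[f] ~ mean(surrogate on many samples) + mean(f - surrogate on few samples)
est = (surrogate.predict(X_big).mean()
       + (f(X_small) - surrogate.predict(X_small)).mean())
ref = f(rng.standard_normal((200_000, d))).mean()   # plain MC for comparison
print(est, ref)
```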

Ordinary differential equations (ODEs) are foundational in modeling intricate dynamics across a gamut of scientific disciplines. Yet the possibility of representing a single phenomenon through multiple ODE models, driven by different understandings of internal mechanisms or levels of abstraction, presents a model selection challenge. This study introduces a testing-based approach for ODE model selection amidst statistical noise. Rooted in the model misspecification framework, we adapt foundational insights from classical statistical paradigms (Vuong and Hotelling) to the ODE context, allowing for the comparison and ranking of diverse causal explanations without the constraints of nested models. Our simulation studies validate the theoretical robustness of our proposed test, revealing its consistent size and power. Real-world data examples further underscore the algorithm's applicability in practice. To foster accessibility and encourage real-world applications, we provide a user-friendly Python implementation of our model selection algorithm, bridging theoretical advancements with hands-on tools for the scientific community.
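
A hedged sketch of the classical Vuong-type statistic for non-nested model comparison, the ingredient the abstract adapts to the ODE setting (this is the textbook form, not the paper's ODE-specific construction).

```python
import numpy as np
from scipy import stats

def vuong_test(loglik1, loglik2):
    """loglik1, loglik2: per-observation log-likelihoods of two fitted models.
    Returns (z, p): z > 0 favors model 1, z < 0 favors model 2."""
    diff = np.asarray(loglik1) - np.asarray(loglik2)
    n = diff.size
    z = np.sqrt(n) * diff.mean() / diff.std(ddof=1)   # asymptotically N(0, 1)
    p = 2 * stats.norm.sf(abs(z))                     # two-sided p-value
    return z, p
```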

Recent artificial intelligence (AI) systems have reached milestones in "grand challenges" ranging from Go to protein-folding. The capability to retrieve medical knowledge, reason over it, and answer medical questions comparably to physicians has long been viewed as one such grand challenge. Large language models (LLMs) have catalyzed significant progress in medical question answering; Med-PaLM was the first model to exceed a "passing" score in US Medical Licensing Examination (USMLE) style questions with a score of 67.2% on the MedQA dataset. However, this and other prior work suggested significant room for improvement, especially when models' answers were compared to clinicians' answers. Here we present Med-PaLM 2, which bridges these gaps by leveraging a combination of base LLM improvements (PaLM 2), medical domain finetuning, and prompting strategies including a novel ensemble refinement approach. Med-PaLM 2 scored up to 86.5% on the MedQA dataset, improving upon Med-PaLM by over 19% and setting a new state-of-the-art. We also observed performance approaching or exceeding state-of-the-art across MedMCQA, PubMedQA, and MMLU clinical topics datasets. We performed detailed human evaluations on long-form questions along multiple axes relevant to clinical applications. In pairwise comparative ranking of 1066 consumer medical questions, physicians preferred Med-PaLM 2 answers to those produced by physicians on eight of nine axes pertaining to clinical utility (p < 0.001). We also observed significant improvements compared to Med-PaLM on every evaluation axis (p < 0.001) on newly introduced datasets of 240 long-form "adversarial" questions to probe LLM limitations. While further studies are necessary to validate the efficacy of these models in real-world settings, these results highlight rapid progress towards physician-level performance in medical question answering.
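
The "ensemble refinement" strategy is described in the abstract only at a high level; the following is a conceptual sketch of that two-stage idea (sample several reasoning chains, then condition on them to produce a refined answer). The `generate` callable and the prompt wording are hypothetical stand-ins, not Med-PaLM 2's actual interface or templates.

```python
def ensemble_refine(generate, question, k=5):
    """generate(prompt, temperature) -> str is any LLM completion function."""
    # Stage 1: sample k diverse chain-of-thought drafts at high temperature.
    drafts = [generate(f"Q: {question}\nThink step by step, then answer.",
                       temperature=0.7) for _ in range(k)]
    # Stage 2: condition on all drafts and ask for one refined final answer.
    context = "\n\n".join(f"Draft {i + 1}:\n{d}" for i, d in enumerate(drafts))
    return generate(f"Q: {question}\n\n{context}\n\n"
                    "Considering the drafts above, give the best final answer.",
                    temperature=0.0)

# Dummy usage with a stub model, just to show the call shape.
print(ensemble_refine(lambda prompt, temperature: "stub answer", "What is 6 x 7?"))
```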

Graph Neural Networks (GNNs) have recently become increasingly popular due to their ability to learn complex systems of relations or interactions arising in a broad spectrum of problems ranging from biology and particle physics to social networks and recommendation systems. Despite the plethora of different models for deep learning on graphs, few approaches have been proposed thus far for dealing with graphs that present some sort of dynamic nature (e.g. evolving features or connectivity over time). In this paper, we present Temporal Graph Networks (TGNs), a generic, efficient framework for deep learning on dynamic graphs represented as sequences of timed events. Thanks to a novel combination of memory modules and graph-based operators, TGNs are able to significantly outperform previous approaches while being more computationally efficient. We furthermore show that several previous models for learning on dynamic graphs can be cast as specific instances of our framework. We perform a detailed ablation study of different components of our framework and devise the best configuration, which achieves state-of-the-art performance on several transductive and inductive prediction tasks for dynamic graphs.
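
A minimal PyTorch sketch of the memory-module idea in TGN-style models: each node keeps a memory vector that a recurrent cell updates whenever the node participates in a timed interaction event. Dimensions and the message layout are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class NodeMemory(nn.Module):
    def __init__(self, num_nodes, mem_dim, msg_dim):
        super().__init__()
        self.register_buffer("memory", torch.zeros(num_nodes, mem_dim))
        self.updater = nn.GRUCell(msg_dim, mem_dim)  # message -> memory update

    def update(self, src, dst, edge_feat, t):
        # Message to each source node: destination memory, edge features, time.
        msg = torch.cat([self.memory[dst], edge_feat, t.unsqueeze(-1)], dim=-1)
        # Detach so the sketch does not grow an autograd graph across events
        # (a real training loop handles gradients through messages carefully).
        self.memory[src] = self.updater(msg, self.memory[src]).detach()

mem = NodeMemory(num_nodes=100, mem_dim=32, msg_dim=32 + 8 + 1)
src, dst = torch.tensor([0, 1]), torch.tensor([2, 3])
mem.update(src, dst, edge_feat=torch.randn(2, 8), t=torch.tensor([0.5, 0.9]))
```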

Named entity recognition (NER) is the task of identifying text spans that mention named entities and classifying them into predefined categories such as person, location, or organization. NER serves as the basis for a variety of natural language applications such as question answering, text summarization, and machine translation. Although early NER systems were successful in producing decent recognition accuracy, they often required much human effort in carefully designing rules or features. In recent years, deep learning, empowered by continuous real-valued vector representations and semantic composition through nonlinear processing, has been employed in NER systems, yielding state-of-the-art performance. In this paper, we provide a comprehensive review of existing deep learning techniques for NER. We first introduce NER resources, including tagged NER corpora and off-the-shelf NER tools. Then, we systematically categorize existing works based on a taxonomy along three axes: distributed representations for input, context encoder, and tag decoder. Next, we survey the most representative methods for applying deep learning in new NER problem settings and applications. Finally, we present readers with the challenges faced by NER systems and outline future directions in this area.
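
A minimal sketch of the survey's three-axis decomposition instantiated as a PyTorch tagger: an embedding layer as the distributed input representation, a BiLSTM as the context encoder, and a per-token linear/softmax layer as the tag decoder (CRF decoders are the common stronger alternative). Sizes are illustrative.

```python
import torch
import torch.nn as nn

class NERTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)        # axis 1: input representation
        self.encoder = nn.LSTM(emb_dim, hidden, bidirectional=True,
                               batch_first=True)              # axis 2: context encoder
        self.decoder = nn.Linear(2 * hidden, num_tags)        # axis 3: tag decoder

    def forward(self, token_ids):
        # (batch, seq_len) -> (batch, seq_len, num_tags) per-token tag scores.
        h, _ = self.encoder(self.embed(token_ids))
        return self.decoder(h)

tagger = NERTagger(vocab_size=10_000, num_tags=9)  # e.g. BIO over 4 entity types + O
scores = tagger(torch.randint(0, 10_000, (2, 15)))
print(scores.shape)  # torch.Size([2, 15, 9])
```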

Reasoning with knowledge expressed in natural language and Knowledge Bases (KBs) is a major challenge for Artificial Intelligence, with applications in machine reading, dialogue, and question answering. General neural architectures that jointly learn representations and transformations of text are very data-inefficient, and it is hard to analyse their reasoning process. These issues are addressed by end-to-end differentiable reasoning systems such as Neural Theorem Provers (NTPs), although they can only be used with small-scale symbolic KBs. In this paper we first propose Greedy NTPs (GNTPs), an extension to NTPs addressing their complexity and scalability limitations, thus making them applicable to real-world datasets. This result is achieved by dynamically constructing the computation graph of NTPs and including only the most promising proof paths during inference, thus obtaining orders of magnitude more efficient models. Then, we propose a novel approach for jointly reasoning over KBs and textual mentions, by embedding logic facts and natural language sentences in a shared embedding space. We show that GNTPs perform on par with NTPs at a fraction of their cost while achieving competitive link prediction results on large datasets, providing explanations for predictions, and inducing interpretable models. Source code, datasets, and supplementary material are available online at //github.com/uclnlp/gntp.
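
A hedged sketch of the greedy idea behind GNTPs: instead of unifying a goal against every fact, as an exhaustive NTP would, expand only the k facts whose embeddings are most similar to the goal's, so the proof tree branches over k candidates rather than the whole KB. The embeddings and the dot-product score are illustrative stand-ins, not the paper's trained representations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_facts, dim, k = 100_000, 64, 5
fact_emb = rng.standard_normal((n_facts, dim)).astype(np.float32)

def top_k_proof_candidates(goal_emb, k=k):
    """Keep only the k most promising unifications (highest similarity),
    pruning the rest of the computation graph during proving."""
    scores = fact_emb @ goal_emb
    idx = np.argpartition(-scores, k)[:k]
    return idx[np.argsort(-scores[idx])]      # candidate indices, best first

candidates = top_k_proof_candidates(rng.standard_normal(dim).astype(np.float32))
print(candidates)
```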
