
The algorithm of Tutte for constructing convex planar straight-line drawings and the algorithm of Floater and Gotsman for constructing planar straight-line morphs are among the most popular graph drawing algorithms. Quite surprisingly, little is known about the resolution of the drawings produced by these algorithms. In this paper, focusing on maximal plane graphs, we prove tight bounds on the resolution of the planar straight-line drawings produced by Floater's algorithm, which is a broad generalization of Tutte's algorithm. Further, we use this result to prove a lower bound on the resolution of the drawings of maximal plane graphs produced by Floater and Gotsman's morphing algorithm. Finally, we show that this morphing algorithm might produce drawings with exponentially-small resolution, even when transforming drawings with polynomial resolution.
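
For intuition, a minimal sketch of Tutte's barycentric method is given below; Floater's algorithm is the generalization in which the uniform averaging weights are replaced by arbitrary positive convex-combination weights. The outer face is pinned to a convex polygon and every interior vertex is constrained to be a convex combination of its neighbours, which reduces to a linear system. The graph, positions, and helper names are illustrative and not taken from the paper.

```python
import numpy as np

def tutte_embedding(adj, boundary_pos):
    """Tutte's barycentric embedding (a sketch): boundary vertices are fixed on a
    convex polygon and each interior vertex is placed at the average of its
    neighbours' positions.  Uniform weights give Tutte's drawing; replacing
    1/deg by arbitrary positive weights summing to 1 gives Floater's drawings."""
    interior = [v for v in adj if v not in boundary_pos]
    idx = {v: i for i, v in enumerate(interior)}
    n = len(interior)
    A = np.zeros((n, n))
    b = np.zeros((n, 2))
    for v in interior:
        i, deg = idx[v], len(adj[v])
        A[i, i] = 1.0
        for u in adj[v]:
            w = 1.0 / deg                      # uniform (Tutte) weight
            if u in boundary_pos:
                b[i] += w * np.asarray(boundary_pos[u], dtype=float)
            else:
                A[i, idx[u]] -= w
    pos = dict(boundary_pos)
    if n:
        sol = np.linalg.solve(A, b)            # unique solution for 3-connected plane graphs
        for v, i in idx.items():
            pos[v] = tuple(sol[i])
    return pos

# Example: K4 with the outer triangle fixed; the interior vertex lands at the centroid.
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
outer = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.5, 1.0)}
print(tutte_embedding(adj, outer))             # vertex 3 at roughly (0.5, 0.333)
```

Roughly speaking, Floater and Gotsman's morph interpolates the convex-combination weights of the two input drawings and recovers each intermediate frame from the corresponding linear system.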

Related content

We introduce the Density Formula for (topological) drawings of graphs in the plane or on the sphere, which relates the number of edges, vertices, crossings, and sizes of cells in the drawing. We demonstrate its capability by providing several applications: we prove tight upper bounds on the edge density of various beyond-planar graph classes, including so-called $k$-planar graphs with $k=1,2$, fan-crossing / fan-planar graphs, $k$-bend RAC-graphs with $k=0,1,2$, and quasiplanar graphs. In some cases ($1$-bend and $2$-bend RAC-graphs and fan-crossing / fan-planar graphs), we thereby obtain the first tight upper bounds on the edge density of the respective graph classes. In other cases, we give new streamlined and significantly shorter proofs for bounds that were already known in the literature. Thanks to the Density Formula, our proofs consist mostly of elementary counting and largely circumvent the intricate case analysis found in earlier proofs. Further, in some cases (simple and non-homotopic quasiplanar graphs), our alternative proofs using the Density Formula lead to the first tight lower bound examples.

A system of partial differential equations (PDE) for a heat-transferring copper rod and a magnetizable piezoelectric beam, describing the longitudinal vibrations and the total charge accumulation at the electrodes of the beam, is considered in the transmission line setting. For magnetizable piezoelectric beams, traveling electromagnetic and mechanical waves are able to interact strongly despite a huge difference in velocities. It is known that the heat and beam interactions in the open-loop setting do not yield exponential stability with thermal effects alone. Therefore, two types of boundary-type state feedback controllers are proposed: (i) both feedback controllers are chosen static; (ii) the electrical controller of the piezoelectric beam is chosen dynamic to accelerate the system dynamics. The PDE system for each case is shown to have exponentially stable solutions via cleverly constructed Lyapunov functions with various multipliers. The proposed proof technique is in line with proving the exponential stability of Finite-Difference-based robust model reductions as the discretization parameter tends to zero.

We propose a self-correction mechanism for Large Language Models (LLMs) to mitigate issues such as toxicity and fact hallucination. This method involves refining model outputs through an ensemble of critics and the model's own feedback. Drawing inspiration from human behavior, we explore whether LLMs can emulate the self-correction process observed in humans who often engage in self-reflection and seek input from others to refine their understanding of complex topics. Our approach is model-agnostic and can be applied across various domains to enhance trustworthiness by addressing fairness, bias, and robustness concerns. We consistently observe performance improvements in LLMs for reducing toxicity and correcting factual errors.
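
As a rough illustration of the critic-and-refine loop described above (a schematic sketch, not the authors' implementation), one can iterate generation, critique by an ensemble of critics, and revision until no critic objects. The `generate` and critic callables and the prompt wording below are hypothetical placeholders.

```python
from typing import Callable, List

def self_correct(prompt: str,
                 generate: Callable[[str], str],
                 critics: List[Callable[[str, str], str]],
                 max_rounds: int = 3) -> str:
    """Hypothetical critic-and-refine loop: draft an answer, collect critiques
    (empty string = no objection), and ask the model to revise until all
    critics are satisfied or the round budget is exhausted."""
    draft = generate(prompt)
    for _ in range(max_rounds):
        feedback = [c(prompt, draft) for c in critics]
        feedback = [f for f in feedback if f]      # keep only non-empty critiques
        if not feedback:                           # every critic is satisfied
            break
        revision_prompt = (
            f"Task: {prompt}\n"
            f"Previous answer: {draft}\n"
            "Critiques:\n- " + "\n- ".join(feedback) + "\n"
            "Revise the answer to address every critique."
        )
        draft = generate(revision_prompt)
    return draft
```

In this view the "ensemble of critics" can mix specialised detectors (e.g. a toxicity classifier or fact checker) with the model critiquing its own output, matching the model-agnostic framing of the abstract.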

The fundamental diagram has served as the foundation of traffic flow modeling for almost a century. With the increasing availability of road sensor data, deterministic parametric models have proved inadequate in describing the variability of real-world data, especially in the congested region of the density-flow diagram. In this paper, we estimate the stochastic density-flow relation by introducing a nonparametric method called convex quantile regression (CQR). The proposed method does not depend on any prior functional form assumptions, but thanks to the concavity constraints, the estimated function satisfies the theoretical properties of the density-flow curve. The second contribution is to develop the new convex quantile regression with bags (CQRb) approach to facilitate practical implementation of CQR on real-world data. We illustrate the CQRb estimation process using road sensor data from Finland for the years 2016-2018. Our third contribution is to demonstrate the excellent out-of-sample predictive power of the proposed CQRb method in comparison to the standard parametric deterministic approach.
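
For concreteness, convex quantile regression for the density-flow relation can be written as a linear program with Afriat-type concavity constraints. The sketch below uses `cvxpy` purely for illustration (the paper does not prescribe a solver); `x` is density, `y` is flow, and `tau` is the quantile of interest.

```python
import numpy as np
import cvxpy as cp

def convex_quantile_regression(x, y, tau=0.5):
    """Concave quantile fit y_i ~ f(x_i): each observation gets its own supporting
    hyperplane alpha_i + beta_i * x_i, and Afriat-type constraints force the fit
    to be the lower envelope of these hyperplanes, hence concave."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    alpha = cp.Variable(n)
    beta = cp.Variable(n)
    ep = cp.Variable(n, nonneg=True)       # residuals above the quantile curve
    em = cp.Variable(n, nonneg=True)       # residuals below the quantile curve

    fitted = alpha + cp.multiply(beta, x)
    constraints = [y == fitted + ep - em]
    for i in range(n):
        # Concavity: hyperplane i, evaluated at x_i, is the minimum over all hyperplanes.
        constraints.append(alpha[i] + beta[i] * x[i] <= alpha + beta * x[i])

    objective = cp.Minimize(tau * cp.sum(ep) + (1 - tau) * cp.sum(em))
    cp.Problem(objective, constraints).solve()
    return fitted.value
```

With `tau = 0.5` this is the median density-flow curve; other quantiles trace out the stochastic band. The CQRb variant described in the abstract builds on this basic formulation to make estimation practical on large sensor datasets; those details are beyond this sketch.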

Many constraint satisfaction and optimisation problems can be solved effectively by encoding them as instances of the Boolean Satisfiability problem (SAT). However, even the simplest types of constraints have many encodings in the literature with widely varying performance, and the problem of selecting suitable encodings for a given problem instance is not trivial. We explore the problem of selecting encodings for pseudo-Boolean and linear constraints using a supervised machine learning approach. We show that it is possible to select encodings effectively using a standard set of features for constraint problems; however, we obtain better performance with a new set of features specifically designed for pseudo-Boolean and linear constraints. In fact, we achieve good results when selecting encodings for unseen problem classes. Our results compare favourably to AutoFolio when using the same feature set. We discuss the relative importance of instance features to the task of selecting the best encodings, and compare several variations of the machine learning method.
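
As a schematic of the supervised selection step (not the paper's exact pipeline, feature set, or candidate encodings), one could train an off-the-shelf classifier to map instance features to the encoding with the best observed runtime. All names and numbers below are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: one row of instance features per problem instance,
# labelled with the encoding that performed best on it in prior runs.
feature_names = ["num_pb_constraints", "mean_coeff_magnitude",
                 "mean_constraint_arity", "num_variables"]          # illustrative only
X_train = np.array([[120, 3.5, 14,  900],
                    [ 40, 1.0,  5,  300],
                    [500, 8.2, 30, 4000]])
y_train = ["encoding_A", "encoding_B", "encoding_C"]                # placeholder labels

selector = RandomForestClassifier(n_estimators=100, random_state=0)
selector.fit(X_train, y_train)

# Pick an encoding for an unseen instance from its features alone.
X_new = np.array([[200, 4.1, 18, 1500]])
print(selector.predict(X_new))
```

Per-instance selection of this kind is what allows the approach to generalise to unseen problem classes, as reported in the abstract.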

The problem of correcting deletions and insertions has recently received significantly increased attention due to DNA-based data storage technology, which suffers from deletions and insertions with extremely high probability. In this work, we study the problem of constructing non-binary burst-deletion/insertion correcting codes. In particular, for the quaternary alphabet, our designed codes are suited for correcting a burst of deletions/insertions in DNA storage. Non-binary codes correcting a single deletion or insertion were introduced by Tenengolts [1984], and the results were extended to correct a fixed-length burst of deletions or insertions by Schoeny et al. [2017]. Recently, Wang et al. [2021] proposed constructions of non-binary codes of length n, correcting a burst of length at most two for q-ary alphabets with redundancy log n + O(log q log log n) bits, for arbitrary even q. The common idea in those constructions is to convert non-binary sequences into binary sequences, and the error decoding algorithms for the q-ary sequences rely mainly on successfully recovering the corresponding binary sequences. In this work, we look at a natural solution in which the error detection and correction algorithms are performed directly over q-ary sequences, and for certain cases, our codes provide a more efficient encoder with lower redundancy than the best-known encoder in the literature.
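
For background, the constructions above all trace back to syndrome-based single-deletion codes: the binary Varshamov-Tenengolts (VT) code underlying Tenengolts' non-binary construction fixes a weighted checksum, and a single deletion can be undone from the checksum deficiency. The sketch below shows this standard textbook decoder (not the new burst-correcting construction of the paper).

```python
def vt_decode_single_deletion(y, n, a):
    """Recover the codeword x in VT_a(n) = { x : sum_i i*x_i = a (mod n+1) }
    from y, obtained from x by deleting exactly one bit (1-indexed positions).
    Standard VT decoding, shown only to illustrate the syndrome idea."""
    assert len(y) == n - 1
    w = sum(y)                                       # weight of the received word
    s = sum((i + 1) * b for i, b in enumerate(y))    # checksum of the received word
    d = (a - s) % (n + 1)                            # checksum deficiency
    if d <= w:
        # A 0 was deleted: reinsert it so that exactly d ones lie to its right.
        ones_right, pos = 0, n - 1
        while ones_right < d:
            pos -= 1
            ones_right += y[pos]
        return y[:pos] + [0] + y[pos:]
    # A 1 was deleted: reinsert it so that exactly d - w - 1 zeros lie to its left.
    zeros_left, pos = 0, 0
    while zeros_left < d - w - 1:
        zeros_left += 1 - y[pos]
        pos += 1
    return y[:pos] + [1] + y[pos:]

# x = [1, 0, 1, 0] lies in VT_4(4) since 1*1 + 3*1 = 4 (mod 5); delete its third bit:
print(vt_decode_single_deletion([1, 0, 0], n=4, a=4))   # -> [1, 0, 1, 0]
```

Working directly over q-ary sequences, as the paper proposes, avoids the detour through such binary auxiliary sequences during decoding.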

This study focuses on the optimization of the Big-means algorithm for clustering large-scale datasets, exploring four distinct parallelization strategies. We conducted extensive experiments to assess the computational efficiency, scalability, and clustering performance of each approach, revealing their benefits and limitations. The paper also delves into the trade-offs between computational efficiency and clustering quality, examining the impacts of various factors. Our insights provide practical guidance on selecting the best parallelization strategy based on available resources and dataset characteristics, contributing to a deeper understanding of parallelization techniques for the Big-means algorithm.

Designing and analyzing model-based RL (MBRL) algorithms with guaranteed monotonic improvement has been challenging, mainly due to the interdependence between policy optimization and model learning. Existing discrepancy bounds generally ignore the impact of model shifts, and their corresponding algorithms are prone to degrade performance through drastic model updates. In this work, we first propose a novel and general theoretical scheme for a non-decreasing performance guarantee in MBRL. The bounds we subsequently derive reveal the relationship between model shifts and performance improvement. These discoveries encourage us to formulate a constrained lower-bound optimization problem to ensure the monotonicity of MBRL. A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns. Motivated by these analyses, we design a simple but effective algorithm, CMLO (Constrained Model-shift Lower-bound Optimization), by introducing an event-triggered mechanism that flexibly determines when to update the model. Experiments show that CMLO surpasses other state-of-the-art methods and produces a boost when various policy optimization methods are employed.
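
The event-triggered idea can be sketched as follows: keep optimizing the policy under the current learned model, and only retrain the model once a measured shift statistic crosses a threshold. This is a schematic sketch under assumed helper functions (`collect_rollouts`, `model_error`, `train_model`, `optimize_policy`), not the authors' CMLO implementation.

```python
def event_triggered_mbrl(env, policy, model, n_iterations,
                         collect_rollouts, model_error, train_model, optimize_policy,
                         shift_threshold=0.1):
    """Schematic event-triggered MBRL loop (hypothetical helpers): the model is
    retrained only when its error on freshly collected data exceeds a threshold,
    which limits drastic model shifts between consecutive policy updates."""
    replay_buffer = []
    for _ in range(n_iterations):
        # 1. Gather fresh environment data with the current policy.
        rollouts = collect_rollouts(env, policy)
        replay_buffer.extend(rollouts)

        # 2. Event trigger: update the model only if it has drifted too far from
        #    the data distribution induced by the current policy.
        if model_error(model, rollouts) > shift_threshold:
            model = train_model(model, replay_buffer)

        # 3. Improve the policy against the (possibly unchanged) model; any
        #    policy optimization method can be plugged in here.
        policy = optimize_policy(policy, model)
    return policy, model
```

The point of the trigger is that the constrained lower-bound analysis in the abstract ties performance degradation to model shifts, so postponing updates until they are warranted helps keep the improvement monotone.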

Knowledge graph reasoning (KGR), which aims to deduce new facts from existing facts based on mined logic rules underlying knowledge graphs (KGs), has become a fast-growing research direction. It has been proven to significantly benefit the usage of KGs in many AI applications, such as question answering and recommendation systems. According to the graph types, existing KGR models can be roughly divided into three categories, i.e., static models, temporal models, and multi-modal models. Early works in this domain mainly focus on static KGR and tend to directly apply general knowledge graph embedding models to the reasoning task. However, these models are not suitable for more complex but practical tasks, such as inductive static KGR, temporal KGR, and multi-modal KGR. To this end, multiple works have been developed recently, but no survey paper or open-source repository comprehensively summarizes and discusses models in this important direction. To fill the gap, we conduct a survey of knowledge graph reasoning, tracing the development from static to temporal and then to multi-modal KGs. Concretely, the preliminaries, summaries of KGR models, and typical datasets are introduced and discussed in turn. Moreover, we discuss the challenges and potential opportunities. The corresponding open-source repository is shared on GitHub: //github.com/LIANGKE23/Awesome-Knowledge-Graph-Reasoning.

Explainable Artificial Intelligence (XAI) is transforming the field of Artificial Intelligence (AI) by enhancing the trust of end-users in machines. As the number of connected devices keeps growing, the Internet of Things (IoT) market needs to be trustworthy for end-users. However, the existing literature still lacks a systematic and comprehensive survey of the use of XAI for IoT. To bridge this gap, in this paper we review XAI frameworks with a focus on their characteristics and support for IoT. We illustrate the widely used XAI services for IoT applications, such as security enhancement, the Internet of Medical Things (IoMT), Industrial IoT (IIoT), and the Internet of City Things (IoCT). We also suggest the implementation choice of XAI models over IoT systems in these applications with appropriate examples and summarize the key inferences for future works. Moreover, we present the cutting-edge developments in edge XAI structures and the support of sixth-generation (6G) communication services for IoT applications, along with key inferences. In a nutshell, this paper constitutes the first holistic compilation on the development of XAI-based frameworks tailored for the demands of future IoT use cases.
