We investigate autonomous mobile robots in the Euclidean plane. Each robot has a function, called a target function, that decides its destination from the robots' positions. Robots may have different target functions. If robots whose target functions are chosen from a set $\Phi$ of target functions always solve a problem $\Pi$, we say that $\Phi$ is compatible with respect to $\Pi$. If $\Phi$ is compatible with respect to $\Pi$, every target function $\phi \in \Phi$ is an algorithm for $\Pi$. However, even if both $\phi$ and $\phi'$ are algorithms for $\Pi$, $\{ \phi, \phi' \}$ may not be compatible with respect to $\Pi$. From the viewpoint of compatibility, we investigate the convergence, fault-tolerant ($n,f$)-convergence (FC($f$)), fault-tolerant ($n,f$)-convergence to $f$ points (FC($f$)-PO), fault-tolerant ($n,f$)-convergence to a convex $f$-gon (FC($f$)-CP), and gathering problems, assuming crash failures. The obtained results classify these problems into three groups. The convergence, FC(1), FC(1)-PO, and FC($f$)-CP problems compose the first group: every set of target functions that always shrink the convex hull of a configuration is compatible. The second group is composed of the gathering problem and FC($f$)-PO for $f \geq 2$: no set of target functions that always shrink the convex hull of a configuration is compatible. The third group, FC($f$) for $f \geq 2$, is placed in between. Thus, FC(1) and FC(2), FC(1)-PO and FC(2)-PO, and FC(2) and FC(2)-PO belong to different groups, even though FC(1) and FC(1)-PO are both in the first group.
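To make the notion of a target function concrete, the following minimal sketch (illustrative only; it ignores obliviousness, limited visibility, asynchrony, and crash failures) shows two target functions whose destinations always lie inside the current convex hull, together with a mixed team of exactly the kind whose compatibility is at stake here.

\begin{verbatim}
import numpy as np

def centroid_target(positions, i):
    # Destination = centroid of all observed positions; it always lies
    # inside the convex hull of the configuration.
    return positions.mean(axis=0)

def midpoint_target(positions, i):
    # Destination = midpoint between robot i and the robot farthest from it;
    # this point also never leaves the convex hull.
    d = np.linalg.norm(positions - positions[i], axis=1)
    j = int(np.argmax(d))
    return (positions[i] + positions[j]) / 2.0

# Hypothetical mixed team: robots 0-1 use one target function, robots 2-3 the other.
positions = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]])
targets = [centroid_target, centroid_target, midpoint_target, midpoint_target]
for _ in range(50):  # fully synchronous rounds, no crash failures
    positions = np.array([targets[i](positions, i) for i in range(len(positions))])
print(positions.round(3))  # the configuration contracts round after round
\end{verbatim}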
The past decade has witnessed the emergence of a new profession, media content creators, who rely on revenue streams from online content recommendation platforms. The reward mechanism employed by these platforms creates a competitive environment among creators, which affects their production choices and, consequently, content distribution and system welfare. It is thus crucial to design the platform's reward mechanism so as to steer the creators' competition towards a desirable welfare outcome in the long run. This work makes two major contributions in this regard. First, we uncover a fundamental limitation of a class of widely adopted mechanisms, coined Merit-based Monotone Mechanisms, by showing that they inevitably lead to a constant-fraction loss of welfare. To circumvent this limitation, we introduce Backward Rewarding Mechanisms (BRMs) and show that the competition games resulting from BRMs possess a potential game structure, which naturally induces the strategic creators' behavior dynamics to optimize any given welfare metric. In addition, the class of BRMs can be parameterized so that the platform can directly optimize welfare within the feasible mechanism space even when the welfare metric is not explicitly defined.
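For reference, the potential game structure invoked here is the standard notion of an (exact) potential game: the game admits a potential function $P$ such that every unilateral deviation changes a player's utility by exactly the change in $P$,
\[
u_i(s_i', s_{-i}) - u_i(s_i, s_{-i}) = P(s_i', s_{-i}) - P(s_i, s_{-i})
\quad \text{for all players } i \text{ and strategies } s_i,\ s_i',\ s_{-i},
\]
so that best-response dynamics monotonically increase $P$; the claim above is that under a BRM such a potential can be aligned with the chosen welfare metric.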
In this work we propose tailored model order reduction for varying boundary optimal control problems governed by parametric partial differential equations. By varying boundary control we mean that a specific parameter changes where the boundary control acts on the system. This peculiar formulation stands to benefit from model order reduction: fast and reliable simulations of this model are of great use in many applied fields, such as geophysics and energy engineering. However, varying boundary control leads to very complicated and diverse parametric behaviour of the state and adjoint variables. For example, as the boundary control parameter changes, the state solution may exhibit transport phenomena. Moreover, the problem loses its affine structure. It is well known that classical model order reduction techniques fail in this setting, both in accuracy and in efficiency. Thus, we propose reduced approaches inspired by those used for wave-like phenomena. Specifically, we compare standard proper orthogonal decomposition with two tailored strategies: geometric recasting and local proper orthogonal decomposition. Geometric recasting solves the optimization system in a reference domain, simplifying the problem at hand and avoiding hyper-reduction, while local proper orthogonal decomposition builds local bases to increase the accuracy of the reduced solution in very general settings (where geometric recasting is unfeasible). We compare the various approaches in two numerical experiments based on geometries of increasing complexity.
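For comparison, a generic (non-tailored) proper orthogonal decomposition basis can be built as in the sketch below; the snapshot data, tolerance, and dimensions are purely illustrative and are not taken from this work.

\begin{verbatim}
import numpy as np

def pod_basis(snapshots, tol=1e-4):
    # Columns of `snapshots` are high-fidelity solutions at sampled parameters.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1   # keep 1 - tol of the energy
    return U[:, :r]                                   # reduced basis of dimension r

# Synthetic snapshot matrix: 200 degrees of freedom, 30 parameter samples.
x = np.linspace(0.0, 1.0, 200)
mu = np.linspace(0.5, 2.0, 30)
S = np.array([np.sin(np.pi * m * x) for m in mu]).T
V = pod_basis(S)
print(V.shape)   # (200, r); transport-dominated snapshot families need a larger r
\end{verbatim}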
Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts, but there has been limited understanding of exactly how these explanations function or why they are effective. This work aims to better understand the mechanisms by which explanations are used for in-context learning. We first study the impact of two distinct factors on the performance of prompts with explanations: the computation trace (the way the solution is decomposed) and the natural language used to express the prompt. By perturbing explanations on three controlled tasks, we show that both factors contribute to the effectiveness of explanations. We further study how to form maximally effective sets of explanations for solving a given test query. We find that LLMs can benefit from the complementarity of the explanation set: diverse reasoning skills shown by different exemplars can lead to better performance. Therefore, we propose a maximal-marginal-relevance-based exemplar selection approach for constructing exemplar sets that are both relevant and complementary, which successfully improves in-context learning performance across three real-world tasks on multiple LLMs.
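A minimal sketch of maximal-marginal-relevance style exemplar selection follows; the weighting, the embeddings, and the function names are illustrative assumptions, not the exact procedure used in this work.

\begin{verbatim}
import numpy as np

def mmr_select(query_emb, cand_embs, k, lam=0.5):
    # Greedily pick k exemplars, trading off relevance to the test query
    # against redundancy with exemplars already selected.
    # Rows of cand_embs and query_emb are assumed to be L2-normalised.
    relevance = cand_embs @ query_emb
    selected, remaining = [], list(range(len(cand_embs)))
    while remaining and len(selected) < k:
        if selected:
            redundancy = (cand_embs[remaining] @ cand_embs[selected].T).max(axis=1)
        else:
            redundancy = np.zeros(len(remaining))
        scores = lam * relevance[remaining] - (1.0 - lam) * redundancy
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected   # indices of the chosen exemplars, in selection order
\end{verbatim}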
We introduce a transformation framework that can be utilized to develop online algorithms with low $\epsilon$-approximate regret in the random-order model from offline approximation algorithms. We first give a general reduction theorem that transforms an offline approximation algorithm with low average sensitivity into an online algorithm with low $\epsilon$-approximate regret. We then demonstrate that offline approximation algorithms can be transformed into low-sensitivity versions using a coreset construction method. To showcase the versatility of our approach, we apply it to various problems, including online $(k,z)$-clustering, online matrix approximation, and online regression, and achieve polylogarithmic $\epsilon$-approximate regret for each of them. Moreover, we show that in all three cases our algorithm also enjoys low inconsistency, which may be desirable in some online applications.
This thesis studies the domain of collective robotics, and more particularly optimization problems for multi-robot systems in the context of exploration, path planning, and coordination. It includes two contributions. The first is the use of the Butterfly Optimization Algorithm (BOA) to solve the Unknown Area Exploration problem with energy constraints in dynamic environments. To the best of our knowledge, this algorithm had not previously been used to solve robotics problems. We propose a new version of this algorithm, called xBOA, based on the crossover operator, to improve the diversity of the candidate solutions and speed up the convergence of the algorithm. The second contribution is the development of a new simulation framework for benchmarking dynamic incremental problems in robotics, such as exploration tasks. The framework is designed to be generic, so that different metaheuristics can be compared quickly with minimal modifications, and to adapt easily to single- and multi-robot scenarios. It also provides researchers with tools to automate their experiments and generate visualizations, allowing them to focus on more important tasks such as modeling new algorithms. We conducted a series of experiments that showed promising results and allowed us to validate our approach and model.
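A rough, self-contained sketch of a BOA-style update loop with an added crossover step conveys the general idea behind xBOA; all constants, the crossover form, and the toy objective are assumptions made for illustration and not the exact formulation of the thesis.

\begin{verbatim}
import numpy as np

def xboa_sketch(fitness, dim, n=20, iters=200, p=0.8, c=0.01, a=0.1, cx_rate=0.3):
    rng = np.random.default_rng(0)
    X = rng.uniform(-5.0, 5.0, size=(n, dim))           # butterfly positions
    fit = np.array([fitness(x) for x in X])
    best = X[np.argmin(fit)].copy()
    for _ in range(iters):
        frag = c * np.abs(fit) ** a                      # fragrance of each butterfly
        for i in range(n):
            r = rng.random()
            if rng.random() < p:                         # global move toward the best
                X[i] += (r**2 * best - X[i]) * frag[i]
            else:                                        # local move between two peers
                j, k = rng.integers(0, n, size=2)
                X[i] += (r**2 * X[j] - X[k]) * frag[i]
        for i in range(n):                               # crossover: blend random pairs
            if rng.random() < cx_rate:
                alpha = rng.random()
                X[i] = alpha * X[i] + (1.0 - alpha) * X[rng.integers(0, n)]
        fit = np.array([fitness(x) for x in X])
        if fit.min() < fitness(best):
            best = X[np.argmin(fit)].copy()
    return best

print(xboa_sketch(lambda x: float(np.sum(x**2)), dim=5).round(3))  # toy sphere function
\end{verbatim}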
Minimizing the weight of an edge set satisfying parity constraints is a challenging branch of combinatorial optimization, as witnessed by the binary hypergraph chapter of Alexander Schrijver's book ``Combinatorial Optimization'' (Chapter 80). This area contains relevant graph theory problems, including open cases of the Max Cut problem and some multiflow problems. We clarify the interconnections of some of these problems and establish three levels of difficulty. On the one hand, we prove that the Shortest Odd Path problem in an undirected graph without cycles of negative total weight, as well as several related problems, is NP-hard, settling a long-standing open question asked by Lov\'asz (Open Problem 27 in Schrijver's book ``Combinatorial Optimization''). On the other hand, we provide a polynomial-time algorithm for the closely related and well-studied Minimum-weight Odd $\{s,t\}$-Join problem with non-negative weights, whose complexity, however, was not previously known; more generally, we solve the Minimum-weight Odd $T$-Join problem in FPT time when parameterized by $|T|$. If negative weights are also allowed, then finding a minimum-weight odd $\{s,t\}$-join is equivalent to the Minimum-weight Odd $T$-Join problem with arbitrary weights, which is only conjectured to be polynomially solvable. The analogous problems for digraphs are also considered.
Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial intelligence (AI), including computer vision, natural language processing, and speech recognition. However, their superior performance comes at the considerable cost of computational complexity, which greatly hinders their applications in many resource-constrained devices, such as mobile phones and Internet of Things (IoT) devices. Therefore, methods and techniques that are able to lift the efficiency bottleneck while preserving the high accuracy of DNNs are in great demand in order to enable numerous edge AI applications. This paper provides an overview of efficient deep learning methods, systems, and applications. We start by introducing popular model compression methods, including pruning, factorization, and quantization, as well as compact model design. To reduce the large design cost of these manual solutions, we discuss the AutoML framework for each of them, such as neural architecture search (NAS) and automated pruning and quantization. We then cover efficient on-device training to enable user customization based on the local data on mobile devices. Apart from general acceleration techniques, we also showcase several task-specific acceleration techniques for point cloud, video, and natural language processing, exploiting their spatial sparsity and temporal/token redundancy. Finally, to support all these algorithmic advancements, we introduce efficient deep learning system design from both software and hardware perspectives.
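To give a concrete flavour of two of the surveyed compression methods (magnitude pruning and post-training symmetric int8 quantization), here is a minimal NumPy sketch; the layer size and sparsity target are arbitrary examples, not settings from the paper.

\begin{verbatim}
import numpy as np

def magnitude_prune(w, sparsity):
    # Zero out the smallest-magnitude weights until roughly the requested sparsity.
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k)[k]
    return np.where(np.abs(w) < thresh, 0.0, w)

def quantize_int8(w):
    # Symmetric linear quantization: int8 weights plus one float scale per tensor.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
wp = magnitude_prune(w, sparsity=0.9)
q, s = quantize_int8(wp)
print((wp == 0).mean(), np.abs(q.astype(np.float32) * s - wp).max())
\end{verbatim}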
When is heterogeneity in the composition of an autonomous robotic team beneficial and when is it detrimental? We investigate and answer this question in the context of a minimally viable model that examines the role of heterogeneous speeds in perimeter defense problems, where defenders share a total allocated speed budget. We consider two distinct problem settings and develop strategies based on dynamic programming and on local interaction rules. We present a theoretical analysis of both approaches, and our results are extensively validated through simulations. Interestingly, our results demonstrate that the viability of heterogeneous teams depends on the amount of information available to the defenders. Moreover, our results suggest a universality property: across a wide range of problem parameters, the optimal ratio of the defenders' speeds remains nearly constant.
Autonomous driving has achieved significant research and development milestones over the last decade. There is increasing interest in the field, as the deployment of self-operating vehicles on roads promises safer and more ecologically friendly transportation systems. With the rise of computationally powerful artificial intelligence (AI) techniques, autonomous vehicles can sense their environment with high precision, make safe real-time decisions, and operate more reliably without human intervention. However, intelligent decision-making in autonomous cars is not generally understandable by humans in the current state of the art, and this deficiency hinders the technology's social acceptance. Hence, aside from making safe real-time decisions, the AI systems of autonomous vehicles also need to explain how these decisions are reached in order to be regulatory compliant across many jurisdictions. Our study sheds comprehensive light on the development of explainable artificial intelligence (XAI) approaches for autonomous vehicles. In particular, we make the following contributions. First, we provide a thorough overview of the present gaps with respect to explanations in the state-of-the-art autonomous vehicle industry. Second, we present a taxonomy of explanations and explanation receivers in this field. Third, we propose a framework for the architecture of end-to-end autonomous driving systems and justify the role of XAI in both debugging and regulating such systems. Finally, as future research directions, we provide a field guide on XAI approaches for autonomous driving that can improve operational safety and transparency, towards achieving public approval from regulators, manufacturers, and all engaged stakeholders.
Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning provides more rational advice than humans are capable of in almost every aspect of daily life. However, despite this achievement, the design and training of neural networks are still challenging and unpredictable procedures. To lower the technical threshold for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academic and industrial areas. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods for defining their value ranges. Then, the review focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. The study next reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, their compatibility with major deep learning frameworks, and their extensibility to new modules designed by users. The paper concludes with the problems that arise when HPO is applied to deep learning, a comparison of optimization algorithms, and prominent approaches for model evaluation under limited computational resources.
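To fix ideas, a bare-bones random-search HPO loop, one of the simplest algorithms covered by such reviews, looks as follows; the search space, objective, and budget are placeholders to be replaced by real training code.

\begin{verbatim}
import random

def random_search(train_and_eval, space, budget=20, seed=0):
    # Sample `budget` configurations from `space` and keep the best one.
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(budget):
        cfg = {name: sample(rng) for name, sample in space.items()}
        score = train_and_eval(cfg)                 # higher is better here
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Placeholder search space and objective (replace with real training/validation).
space = {
    "lr":         lambda rng: 10 ** rng.uniform(-5, -1),        # log-uniform
    "batch_size": lambda rng: rng.choice([32, 64, 128, 256]),
    "dropout":    lambda rng: rng.uniform(0.0, 0.5),
}
dummy_objective = lambda cfg: -(cfg["lr"] - 1e-3) ** 2 - 0.01 * cfg["dropout"]
print(random_search(dummy_objective, space, budget=30))
\end{verbatim}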