
From both an educational and a research point of view, experiments on hardware are a key aspect of robotics and control. In the last decade, many open-source hardware and software frameworks for wheeled robots have been presented, mainly in the form of unicycles and car-like robots, with the goal of making robotics accessible to a wider audience and of supporting control systems development. Unicycles are usually small and inexpensive, and therefore facilitate experiments with larger fleets, but they are not suited for high-speed motion. Car-like robots are more agile, but they are usually larger and more expensive, and thus require more resources in terms of space and money. To bridge this gap, we present Chronos, a new car-like 1/28th-scale robot with customized open-source electronics, and CRS, an open-source software framework for control and robotics. The CRS software framework includes implementations of various state-of-the-art algorithms for control, estimation, and multi-agent coordination. With this work, we aim to provide easier access to hardware and to reduce the engineering time needed to start new educational and research projects.

Related Content

Stabbing Planes (also known as Branch and Cut) is a recently introduced proof system which, informally speaking, extends the DPLL method by branching on integer linear inequalities instead of single variables. The techniques known so far for proving size and depth lower bounds for Stabbing Planes are generalizations of those used for the Cutting Planes proof system. Size lower bounds are established by monotone circuit arguments, while depth lower bounds are found via communication complexity arguments; as such, these bounds apply to lifted versions of combinatorial statements. Rank lower bounds for Cutting Planes are also obtained by geometric arguments called protection lemmas. In this work we introduce two new geometric approaches to prove size/depth lower bounds in Stabbing Planes that work for any formula: (1) the antichain method, relying on Sperner's Theorem, and (2) the covering method, which uses results on essential coverings of the Boolean cube by linear polynomials, which in turn rely on Alon's Combinatorial Nullstellensatz. We demonstrate their use on classes of combinatorial principles such as the Pigeonhole Principle, the Tseitin contradictions and the Linear Ordering Principle. By the first method we prove almost linear size lower bounds and optimal logarithmic depth lower bounds for the Pigeonhole Principle, and analogous lower bounds for the Tseitin contradictions over the complete graph and for the Linear Ordering Principle. By the covering method we obtain a superlinear size lower bound and a logarithmic depth lower bound for Stabbing Planes proofs of Tseitin contradictions over a grid graph.
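For readers unfamiliar with the proof system, the display below sketches the Stabbing Planes branching step as it is commonly defined in the literature; this is standard background rather than a result of the paper.

```latex
% At a node carrying a polytope P, a Stabbing Planes query picks an integer
% vector a and an integer b and branches into the two children
%   P \cap {x : a.x <= b - 1}   and   P \cap {x : a.x >= b};
% since a and b are integral, no integer point of P is lost, and a branch can
% be closed once its polytope is empty.
\[
  P \;\longmapsto\;
  \Bigl( \, P \cap \{x : a \cdot x \le b - 1\}, \;\;
          P \cap \{x : a \cdot x \ge b\} \, \Bigr),
  \qquad a \in \mathbb{Z}^n,\; b \in \mathbb{Z}.
\]
```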

Monotone missing data is a common problem in data analysis. However, imputation combined with dimensionality reduction can be computationally expensive, especially with the increasing size of datasets. To address this issue, we propose a Blockwise Principal Component Analysis Imputation (BPI) framework for dimensionality reduction and imputation of monotone missing data. The framework conducts Principal Component Analysis (PCA) on the observed part of each monotone block of the data and then imputes the missing values after merging the obtained principal components, using a chosen imputation technique. BPI can work with various imputation techniques and can significantly reduce imputation time compared to conducting dimensionality reduction after imputation. This makes it a practical and efficient approach for large datasets with monotone missing data. Our experiments validate the speed improvement. In addition, they show that while applying MICE imputation directly on the missing data may not yield convergence, applying BPI with MICE on the same data may lead to convergence.
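To make the pipeline concrete, the following is a minimal sketch of the blockwise idea under simplifying assumptions: monotone blocks whose rows are either fully observed or fully missing, scikit-learn's PCA, and column-mean imputation as a stand-in for the chosen imputer (e.g. MICE). The function name and structure are ours, not the paper's.

```python
import numpy as np
from sklearn.decomposition import PCA

def blockwise_pca_impute(X, block_cols, n_components=2):
    """Sketch of Blockwise PCA Imputation for monotone missing data.

    `block_cols` lists the column indices of each monotone block; within a
    block, a row is assumed to be either fully observed or fully missing.
    """
    scores = []
    for cols in block_cols:
        block = X[:, cols]
        observed = ~np.isnan(block).any(axis=1)      # rows observed in this block
        k = min(n_components, len(cols))
        z = np.full((X.shape[0], k), np.nan)
        z[observed] = PCA(n_components=k).fit_transform(block[observed])
        scores.append(z)
    Z = np.hstack(scores)                            # merged principal components
    # Stand-in for the chosen imputation technique: fill missing scores with column means.
    rows, cols_idx = np.where(np.isnan(Z))
    Z[rows, cols_idx] = np.nanmean(Z, axis=0)[cols_idx]
    return Z

# Example: block 1 (columns 0-4) fully observed, block 2 (columns 5-9) missing for later rows.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
X[60:, 5:] = np.nan
Z = blockwise_pca_impute(X, [list(range(5)), list(range(5, 10))])
```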

This paper aims to develop a semi-formal design space for Human-AI (HAI) interactions by building a set of interaction primitives which specify the communication between users and AI systems during their interaction. We show how these primitives can be combined into a set of interaction patterns which provide an abstract specification for exchanging messages between humans and AI/ML models to carry out purposeful interactions. The motivation behind this is twofold: firstly, to provide a compact generalisation of existing practices that highlights the similarities and differences between systems in terms of their interaction behaviours; and secondly, to support the creation of new systems, in particular by opening up the space of possibilities for interactions with models. We present a short literature review on frameworks, guidelines and taxonomies related to the design and implementation of HAI interactions, including human-in-the-loop, explainable AI, as well as hybrid intelligence and collaborative learning approaches. From the literature review, we define a vocabulary for describing information exchanges in terms of providing and requesting particular model-specific data types. Based on this vocabulary, a message-passing model for interactions between humans and models is presented, which we demonstrate can account for existing systems and approaches. Finally, we build this into design patterns as mid-level constructs that capture common interactional structures. We discuss how this approach can be used towards a design space for Human-AI interactions that creates new possibilities for designs while also keeping track of implementation issues and concerns.
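As a rough illustration of what such interaction primitives could look like in code, the sketch below encodes messages as (sender, act, data type) triples; the type names and fields are our own illustrative choices rather than the paper's vocabulary.

```python
from dataclasses import dataclass
from typing import Any, Literal

Sender = Literal["human", "model"]                 # who emits the message
Act = Literal["request", "provide"]                # requesting or providing information
Payload = Literal["input", "prediction", "explanation", "feedback", "label"]  # illustrative data types

@dataclass
class Message:
    sender: Sender
    act: Act
    payload: Payload
    content: Any = None                            # the actual data, when providing

# An interaction pattern is then an ordered exchange of messages, e.g. a simple
# active-labelling loop in which the model asks for a label and the human supplies it.
active_labelling = [
    Message("model", "request", "label"),
    Message("human", "provide", "label", content={"item_id": 42, "class": "cat"}),
]
```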

In the context of survival analysis, data-driven neural network-based methods have been developed to model complex covariate effects. While these methods may provide better predictive performance than regression-based approaches, not all can model time-varying interactions and complex baseline hazards. To address this, we propose Case-Base Neural Networks (CBNNs) as a new approach that combines the case-base sampling framework with flexible neural network architectures. Using a novel sampling scheme and data augmentation to naturally account for censoring, we construct a feed-forward neural network that includes time as an input. CBNNs predict the probability of an event occurring at a given moment in order to estimate the full hazard function. We compare the performance of CBNNs to regression- and neural network-based survival methods in a simulation and three case studies using two time-dependent metrics. First, we examine performance on a simulation involving a complex baseline hazard and time-varying interactions to assess all methods, with CBNNs outperforming the competitors. Then, we apply all methods to three real data applications, with CBNNs outperforming the competing models in two studies and showing similar performance in the third. Our results highlight the benefit of combining case-base sampling with deep learning to provide a simple and flexible framework for data-driven modeling of single-event survival outcomes that estimates time-varying effects and a complex baseline hazard by design. An R package is available at //github.com/Jesse-Islam/cbnn.
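A minimal sketch of the case-base sampling idea is given below, assuming right-censored data with one row per subject and using NumPy only; the sampling ratio, names, and structure are illustrative and do not reproduce the package's API.

```python
import numpy as np

def case_base_sample(time, event, covariates, ratio=10, rng=None):
    """Sketch of case-base sampling for single-event survival data.

    The case series contains the observed events at their event times (label 1);
    the base series samples person-moments uniformly over each subject's
    follow-up (label 0). Time enters the design matrix as a feature, so a
    feed-forward network fit on (X, y) can model a flexible baseline hazard.
    """
    rng = rng or np.random.default_rng()
    case_idx = np.where(event == 1)[0]
    cases = np.column_stack([time[case_idx], covariates[case_idx]])
    n_base = ratio * len(case_idx)
    base_idx = rng.integers(0, len(time), size=n_base)
    base_times = rng.uniform(0.0, time[base_idx])           # moments within follow-up
    base = np.column_stack([base_times, covariates[base_idx]])
    X = np.vstack([cases, base])                             # features: [time, covariates...]
    y = np.concatenate([np.ones(len(cases)), np.zeros(n_base)])
    return X, y

# Example usage with synthetic data.
rng = np.random.default_rng(1)
t = rng.exponential(5.0, size=200)
e = rng.integers(0, 2, size=200)
Xcov = rng.normal(size=(200, 3))
X, y = case_base_sample(t, e, Xcov, rng=rng)
```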

To enable the electrification of transportation systems, it is important to understand how technologies such as grid storage, solar photovoltaic systems, and control strategies can aid the deployment of electric vehicle charging at scale. In this work, we present EV-EcoSim, a co-simulation platform that couples electric vehicle charging, battery systems, solar photovoltaic systems, grid transformers, control strategies, and power distribution systems to perform cost quantification and analyze the impacts of electric vehicle charging on the grid. This Python-based platform can run a receding horizon control scheme for real-time operation and a one-shot control scheme for planning problems, with multi-timescale dynamics for the different subsystems to simulate realistic scenarios. We demonstrate the utility of EV-EcoSim through a case study focused on the economic evaluation of battery size to reduce electricity costs while considering the impacts of fast charging on the power distribution grid. We present qualitative and quantitative evaluations of candidate battery sizes in tabulated results that delineate the trade-offs between the sizing solutions, providing comprehensive insights for decision-making under uncertainty. Additionally, we demonstrate the implications of battery controller model fidelity for system costs and show that the fidelity of the battery controller can completely change the decisions made when planning an electric vehicle charging site.
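The receding-horizon scheme mentioned above follows the usual pattern of re-optimizing over a moving window and applying only the first action; a generic sketch is shown below, where `solve_horizon` and `plant` are placeholders rather than EV-EcoSim's actual API.

```python
import numpy as np

def solve_horizon(state, forecast_window, horizon):
    """Placeholder: solve the horizon-length control problem (e.g. a battery
    charge/discharge schedule minimizing electricity cost) and return the plan."""
    return np.zeros(horizon)

def plant(state, action, disturbance):
    """Placeholder: advance the coupled simulation (EV load, battery, PV,
    transformer, feeder) by one time step."""
    return state

def receding_horizon(x0, forecasts, horizon=24):
    x, applied = x0, []
    for t in range(len(forecasts) - horizon):
        window = forecasts[t:t + horizon]          # most recent forecast window
        plan = solve_horizon(x, window, horizon)   # re-optimize over the horizon
        u = plan[0]                                # apply only the first action
        applied.append(u)
        x = plant(x, u, forecasts[t])              # roll the simulation forward
    return applied
```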

Two important problems in the field of Topological Data Analysis (TDA) are defining practical multifiltrations on objects and showing the ability of TDA to detect geometry. Motivated by these problems, we construct three multifiltrations, named multi-GENEO, multi-DGENEO and mix-GENEO, and prove the stability of both the interleaving distance and the multiparameter persistence landscape of multi-GENEO with respect to the pseudometric of the subspace of bounded functions. We also give upper-bound estimates for multi-DGENEO and mix-GENEO. Finally, we provide experimental results on the MNIST dataset to demonstrate that our bifiltrations are able to detect geometric and topological differences of digital images.
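Stability results of this kind typically bound a distance between persistence invariants by a distance between the input functions; a schematic form of such a statement (our paraphrase, not the paper's exact theorem) is:

```latex
% Schematic stability inequality: the interleaving distance between the
% persistence modules induced by two (operator-transformed) bounded functions
% f and g is controlled by their sup-norm distance, and an analogous
% Lipschitz-type bound holds for the multiparameter persistence landscapes.
\[
  d_{\mathrm{I}}\bigl(M_{f},\, M_{g}\bigr) \;\le\; \lVert f - g \rVert_{\infty}.
\]
```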

Human trajectory forecasting is a critical challenge in fields such as robotics and autonomous driving. Due to the inherent uncertainty of human actions and intentions in real-world scenarios, various unexpected occurrences may arise. To uncover latent motion patterns in human behavior, we introduce a novel memory-based method, named the Motion Pattern Priors Memory Network. Our method constructs a memory bank derived from clustered prior knowledge of the motion patterns observed in the training-set trajectories. We introduce an addressing mechanism that retrieves the matched pattern and the potential target distributions for each prediction from the memory bank, enabling the identification and retrieval of the natural motion patterns exhibited by agents; the retrieved target priors memory token then guides the diffusion model to generate predictions. Extensive experiments validate the effectiveness of our approach, achieving state-of-the-art trajectory prediction accuracy. The code will be made publicly available.
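A minimal sketch of this kind of addressing mechanism is given below: a bank of clustered motion-pattern prototypes is built from training features and queried by cosine similarity. The function names, the plain k-means clustering, and the choice of cosine similarity are our illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def build_memory_bank(train_features, n_clusters=32, iters=10, rng=None):
    """Cluster training trajectory features and keep the centroids as
    motion-pattern prototypes (a plain k-means stand-in for the clustering step)."""
    rng = rng or np.random.default_rng(0)
    centers = train_features[rng.choice(len(train_features), n_clusters, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(train_features[:, None] - centers[None], axis=-1)
        assign = d.argmin(axis=1)
        for k in range(n_clusters):
            members = train_features[assign == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return centers

def address(query, bank):
    """Retrieve the best-matching prototype for an observed-trajectory encoding
    via cosine similarity; the retrieved token would then condition the
    downstream generative (e.g. diffusion) predictor."""
    q = query / (np.linalg.norm(query) + 1e-8)
    b = bank / (np.linalg.norm(bank, axis=1, keepdims=True) + 1e-8)
    scores = b @ q
    return bank[scores.argmax()], scores
```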

Demands for pneumatic-driven soft robots are constantly rising for various applications. However, they are often designed manually due to the lack of systematic methods. Moreover, the design-dependent characteristics of pneumatic actuation pose distinctive challenges. This paper provides a compact MATLAB code, named SoRoTop, and its various extensions for designing pneumatic-driven soft robots using topology optimization. The code uses the method of moving asymptotes (MMA) as the optimizer and builds upon the approach initially presented in Kumar et al. (Struct Multidiscip Optim 61(4): 1637-1655, 2020). The pneumatic load is modeled using Darcy's law with a conceptualized drainage term. Consistent nodal loads are determined from the resultant pressure field using the conventional finite element approach. The robust formulation is employed, i.e., the eroded and blueprint design descriptions are used. A min-max optimization problem is formulated using the output displacements of the eroded and blueprint designs. A volume constraint is imposed on the blueprint design, while the eroded design is used to apply a conceptualized strain energy constraint. The latter constraint helps attain optimized designs that can endure the applied load without compromising their performance. The sensitivities required for optimization are computed using the adjoint-variable method. The code is explained in detail, and various extensions are also presented. It is structured into pre-optimization, MMA optimization, and post-optimization operations, each of which is comprehensively detailed. The paper also illustrates the impact of load sensitivities on the optimized designs. SoRoTop is provided in Appendix A and is available with extensions in the supplementary material and publicly at \url{//github.com/PrabhatIn/SoRoTop}.
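In schematic form, the robust min-max formulation described above can be written as follows; the notation and sign convention are ours, summarizing the text rather than reproducing the paper's exact statement.

```latex
% rho: design variables; u_out^e, u_out^b: output displacements of the eroded
% and blueprint designs; V^b: blueprint volume; SE^e: eroded strain energy.
\[
\begin{aligned}
  \min_{\boldsymbol{\rho}} \;\;
    & \max\bigl( -u_{\mathrm{out}}^{\,e}(\boldsymbol{\rho}),\; -u_{\mathrm{out}}^{\,b}(\boldsymbol{\rho}) \bigr) \\
  \text{s.t.}\;\;
    & V^{\,b}(\boldsymbol{\rho}) \le V^{*}
      && \text{(volume constraint on the blueprint design)} \\
    & SE^{\,e}(\boldsymbol{\rho}) \le SE^{*}
      && \text{(strain energy constraint on the eroded design)} \\
    & \mathbf{0} \le \boldsymbol{\rho} \le \mathbf{1}.
\end{aligned}
\]
```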

The proximal gradient method is a generic technique introduced to tackle non-smoothness in optimization problems in which the objective function is expressed as the sum of a differentiable convex part and a non-differentiable regularization term. Such problems in tensor format are of interest in many fields of applied mathematics, such as image and video processing. Our goal in this paper is to address the solution of such problems with a more general form of the regularization term. An adapted iterative proximal gradient method is introduced for this purpose. Because the proposed algorithm can converge slowly, we use new tensor extrapolation methods to enhance its convergence. Numerical experiments on color image deblurring are conducted to illustrate the efficiency of our approach.
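For context, the classical proximal gradient iteration for a composite objective F(x) = f(x) + g(x), with f differentiable and g the possibly non-smooth regularizer, reads as follows; extrapolated variants replace x_k in the gradient step with a combination of the last two iterates to speed up convergence.

```latex
\[
  x_{k+1} \;=\; \operatorname{prox}_{\lambda g}\!\bigl( x_{k} - \lambda \nabla f(x_{k}) \bigr),
  \qquad
  \operatorname{prox}_{\lambda g}(v) \;=\; \arg\min_{x}\Bigl\{ g(x) + \tfrac{1}{2\lambda}\,\lVert x - v \rVert^{2} \Bigr\}.
\]
```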

Agents centered around Large Language Models (LLMs) are now capable of automating mobile device operations for users. After fine-tuning to learn a user's mobile operations, these agents can adhere to high-level user instructions online. They execute tasks such as goal decomposition, sequencing of sub-goals, and interactive environmental exploration, until the final objective is achieved. However, privacy concerns related to personalized user data arise during mobile operations, requiring user confirmation. Moreover, users' real-world operations are exploratory, with action data that is complex and redundant, which poses challenges for agent learning. To address these issues, in our practical application we have designed interactive tasks between agents and humans to identify sensitive information and align with personalized user needs. Additionally, we integrate Standard Operating Procedure (SOP) information into the model's in-context learning to enhance the agent's comprehension of complex task execution. Our approach is evaluated on the new device-control benchmark AitW, which encompasses 30K unique instructions across multi-step tasks, including application operation, web searching, and web shopping. Experimental results show that the SOP-based agent achieves state-of-the-art performance without incurring additional inference costs, with an overall action success rate of 66.92%.
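As a rough sketch of how SOP information can be injected through in-context learning, the helper below composes a prompt that pairs the user instruction with the SOP steps, the current screen state, and the action history; the template is our illustration, not the paper's exact format.

```python
def build_sop_prompt(instruction, sop_steps, screen_description, history):
    """Compose an in-context prompt combining the high-level instruction with a
    Standard Operating Procedure, the current screen, and the actions so far."""
    sop_block = "\n".join(f"{i + 1}. {step}" for i, step in enumerate(sop_steps))
    history_block = "\n".join(history) if history else "(none)"
    return (
        f"Instruction: {instruction}\n"
        f"Standard Operating Procedure:\n{sop_block}\n"
        f"Current screen: {screen_description}\n"
        f"Actions so far:\n{history_block}\n"
        "Next action:"
    )

prompt = build_sop_prompt(
    instruction="Order a coffee for pickup",
    sop_steps=["Open the coffee app", "Select the drink", "Confirm the pickup store", "Pay"],
    screen_description="Home screen with app icons",
    history=[],
)
```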
