Impact assessments have emerged as a common way to identify the negative and positive implications of AI deployment, with the goal of avoiding the downsides of its use. It is undeniable that impact assessments are important - especially in the case of rapidly proliferating technologies such as generative AI. But it is also essential to critically interrogate the current literature and practice on impact assessment, to identify its shortcomings, and to develop new approaches that are responsive to these limitations. In this provocation, we do just that by first critiquing the current impact assessment literature and then proposing a novel approach that addresses our concerns: Scenario-Based Sociotechnical Envisioning.
This investigation examines the legal factors underlying inefficiencies in the development and use of sustainable energy, with a primary focus on China and selected representative nations. In an era in which the global community increasingly acknowledges the pressing need for environmentally friendly alternatives to traditional fossil fuels, renewable energy sources have rightly garnered substantial attention as promising solutions. Nevertheless, despite their potential to transform the energy sector and counteract climate change, numerous legal and regulatory barriers can impede their integration into the energy landscape. The research therefore explores and analyzes the legal frameworks, policies, and institutional arrangements in place within China and the chosen representative nations, with the ultimate objective of identifying the challenges and inefficiencies that could hinder the progress of renewable energy projects and initiatives.
Objective: Configuring a prosthetic leg is an integral part of the fitting process, but the personalization of a multi-modal powered knee-ankle prosthesis is often too complex to realize in a clinical environment. This paper develops both the technical means to individualize a hybrid kinematic-impedance controller for variable-incline walking and sit-stand transitions, and an intuitive Clinical Tuning Interface (CTI) that allows prosthetists to directly modify the controller behavior. Methods: Utilizing an established method for predicting kinematic gait individuality alongside a new parallel approach for kinetic individuality, we applied tuned characteristics exclusively from level-ground walking to personalize continuous-phase/task models of joint kinematics and impedance. To take advantage of this method, we developed a CTI that translates common clinical tuning parameters into model adjustments. We then conducted a case study with an above-knee amputee participant in which a prosthetist iteratively tuned the prosthesis in a simulated clinical session involving walking and sit-stand transitions. Results: The prosthetist fully tuned the multi-activity prosthesis controller in under 20 min. Each iteration of tuning (i.e., observation, parameter adjustment, and model reprocessing) took 2 min on average for walking and 1 min on average for sit-stand. The tuned behavior changes were appropriately manifested in the commanded prosthesis torques, both at the tuned tasks and across untuned tasks (inclines). Conclusion: The CTI leveraged able-bodied trends to efficiently personalize a wide array of walking tasks and sit-stand transitions. The case study validated the CTI tuning method and demonstrated the efficiency necessary for powered knee-ankle prostheses to become clinically viable.
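To make the controller structure concrete, the following is a minimal sketch, assuming a phase-varying impedance law of the form tau = -k(phi)(theta - theta_eq(phi)) - b(phi)*theta_dot and a tuning layer that scales stiffness and shifts the equilibrium angle. The function names, parameter grids, and all numbers are illustrative assumptions, not the paper's actual CTI or controller.

```python
import numpy as np

def impedance_torque(theta, theta_dot, phase, params):
    """Joint torque from a phase-varying impedance law:
    tau = -k(phi) * (theta - theta_eq(phi)) - b(phi) * theta_dot."""
    k  = np.interp(phase, params["phi_grid"], params["stiffness"])
    b  = np.interp(phase, params["phi_grid"], params["damping"])
    eq = np.interp(phase, params["phi_grid"], params["theta_eq"])
    return -k * (theta - eq) - b * theta_dot

def apply_clinical_adjustment(params, stiffness_scale=1.0, eq_shift_deg=0.0):
    """Translate intuitive clinical knobs (overall stiffness scaling, an
    equilibrium-angle offset) into changes of the underlying impedance model,
    mimicking the kind of mapping a tuning interface might perform."""
    tuned = dict(params)
    tuned["stiffness"] = params["stiffness"] * stiffness_scale
    tuned["theta_eq"] = params["theta_eq"] + np.deg2rad(eq_shift_deg)
    return tuned

# Example: a knee joint over normalized phase (all numbers made up).
phi = np.linspace(0.0, 1.0, 50)
params = {
    "phi_grid": phi,
    "stiffness": 3.0 + 2.0 * np.sin(np.pi * phi),   # Nm/rad, illustrative
    "damping": 0.05 * np.ones_like(phi),            # Nm*s/rad, illustrative
    "theta_eq": np.deg2rad(5.0 + 10.0 * phi),       # rad
}
tuned = apply_clinical_adjustment(params, stiffness_scale=1.2, eq_shift_deg=-2.0)
print(impedance_torque(np.deg2rad(12.0), 0.5, 0.3, tuned))
```

In this reading, a single clinician-level adjustment maps onto a global change of the per-phase model, which is the kind of translation the CTI is described as performing.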
In a polydiagonal subspace of the Euclidean space, certain components of the vectors are equal (synchrony) or opposite (anti-synchrony). Polydiagonal subspaces invariant under a matrix have many applications in graph theory and dynamical systems, especially coupled cell networks. We describe invariant polydiagonal subspaces in terms of coloring vectors. This approach gives an easy formulation of a constraint satisfaction problem for finding invariant polydiagonal subspaces. Solving the resulting problem with existing state-of-the-art constraint solvers greatly outperforms the currently known algorithms.
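As a concrete illustration of the objects involved, the sketch below checks whether a given coloring vector defines an A-invariant polydiagonal subspace via a direct rank test. This is a plain linear-algebra check, not the paper's constraint-satisfaction encoding, and the example matrix and colorings are assumptions chosen only for illustration.

```python
import numpy as np

def polydiagonal_basis(coloring):
    """Basis of the polydiagonal subspace defined by a coloring vector:
    coordinates with the same color c are equal (synchrony), coordinates
    with color -c are negated (anti-synchrony). Colors are nonzero integers."""
    n = len(coloring)
    basis = []
    for c in sorted({abs(ci) for ci in coloring}):
        v = np.zeros(n)
        for i, ci in enumerate(coloring):
            if ci == c:
                v[i] = 1.0
            elif ci == -c:
                v[i] = -1.0
        basis.append(v)
    return np.array(basis).T          # n x k, columns span the subspace

def is_invariant(A, coloring, tol=1e-9):
    """A-invariance of the subspace: appending the columns of A @ B to B
    must not increase the rank of the spanning set."""
    B = polydiagonal_basis(coloring)
    return (np.linalg.matrix_rank(np.hstack([B, A @ B]), tol=tol)
            == np.linalg.matrix_rank(B, tol=tol))

# Example: adjacency matrix of the 4-cycle 0-1-2-3-0.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(is_invariant(A, [1, 2, 1, 2]))    # True: synchrony of opposite vertices
print(is_invariant(A, [1, -1, 1, -1]))  # True: anti-synchrony pattern
print(is_invariant(A, [1, 2, 2, 2]))    # False: unbalanced coloring
```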
We introduce GaussianOcc, a systematic method that investigates two uses of Gaussian splatting for fully self-supervised and efficient 3D occupancy estimation in surround views. First, traditional methods for self-supervised 3D occupancy estimation still require ground-truth 6D poses from sensors during training. To address this limitation, we propose a Gaussian Splatting for Projection (GSP) module that provides accurate scale information for fully self-supervised training from adjacent view projection. Additionally, existing methods rely on volume rendering for final 3D voxel representation learning using 2D signals (depth maps, semantic maps), which is both time-consuming and less effective. We propose Gaussian Splatting from Voxel space (GSV) to leverage the fast rendering properties of Gaussian splatting. As a result, the proposed GaussianOcc method enables fully self-supervised (no ground-truth pose) 3D occupancy estimation with competitive performance and low computational cost (2.7 times faster in training and 5 times faster in rendering). The relevant code is available at https://github.com/GANWANSHUI/GaussianOcc.git.
We introduce a unified, flexible, and easy-to-implement framework of sufficient dimension reduction that can accommodate both linear and nonlinear dimension reduction, and both the conditional distribution and the conditional mean as the targets of estimation. This unified framework is achieved by a specially structured neural network -- the Belted and Ensembled Neural Network (BENN) -- that consists of a narrow latent layer, which we call the belt, and a family of transformations of the response, which we call the ensemble. By strategically placing the belt at different layers of the neural network, we can achieve linear or nonlinear sufficient dimension reduction, and by choosing the appropriate transformation families, we can achieve dimension reduction for the conditional distribution or the conditional mean. Moreover, thanks to the computational advantages of the neural network, the method is very fast to compute, overcoming a computation bottleneck of traditional sufficient dimension reduction estimators, which involves the inversion of a matrix of dimension either p or n. We develop the algorithm, establish the convergence rate of our method, compare it with existing sufficient dimension reduction methods, and apply it to two data examples.
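A minimal PyTorch sketch of how we read the belted-and-ensembled architecture: an encoder feeds a narrow belt layer of width d, and a head predicts an ensemble of transformations of the response (here indicator transforms 1{Y <= t_k}, which targets the conditional distribution). The class name, layer sizes, choice of ensemble, and training loop are illustrative assumptions rather than the authors' specification.

```python
import torch
import torch.nn as nn

class BENNSketch(nn.Module):
    """Belted, ensembled network: X -> encoder -> narrow belt of width d ->
    head predicting an ensemble of transformations of Y. A linear encoder
    gives linear sufficient dimension reduction; a deep encoder gives a
    nonlinear one. (Structure inferred from the abstract; details are
    illustrative.)"""
    def __init__(self, p, d, n_transforms, hidden=64, linear_belt=True):
        super().__init__()
        self.encoder = (nn.Linear(p, d, bias=False) if linear_belt else
                        nn.Sequential(nn.Linear(p, hidden), nn.ReLU(),
                                      nn.Linear(hidden, d)))
        self.head = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_transforms))
    def forward(self, x):
        belt = self.encoder(x)            # the learned sufficient predictors
        return self.head(belt), belt

def ensemble_targets(y, thresholds):
    """Indicator ensemble 1{Y <= t_k}: matching these conditional means
    targets the conditional distribution rather than only E[Y | X]."""
    return (y.unsqueeze(1) <= thresholds.unsqueeze(0)).float()

# Toy usage on synthetic data (dimensions and training loop are illustrative).
p, d, K, n = 10, 2, 8, 512
X = torch.randn(n, p)
Y = X[:, 0] + X[:, 1] ** 2 + 0.1 * torch.randn(n)
thresholds = torch.quantile(Y, torch.linspace(0.1, 0.9, K))
model = BENNSketch(p, d, K, linear_belt=True)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    pred, _ = model(X)
    loss = nn.functional.mse_loss(pred, ensemble_targets(Y, thresholds))
    opt.zero_grad(); loss.backward(); opt.step()
```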
We introduce a new erasure decoder that applies to arbitrary quantum LDPC codes. Dubbed the cluster decoder, it generalizes the decomposition idea of Vertical-Horizontal (VH) decoding introduced by Connelly et al. in 2022. Like the VH decoder, the idea is to first run the peeling decoder and then post-process the resulting stopping set. The cluster decoder breaks the stopping set into a tree of clusters which can be solved sequentially via Gaussian Elimination (GE). By allowing clusters of unconstrained size, this decoder achieves maximum-likelihood (ML) performance with reduced complexity compared with full GE. When GE is applied only to clusters whose sizes are less than a constant, the performance is degraded but the complexity becomes linear in the block length. Our simulation results show that, for hypergraph product codes, the cluster decoder with constant cluster size achieves near-ML performance similar to VH decoding in the low-erasure-rate regime. For the general quantum LDPC codes we studied, the cluster decoder can be used to estimate the ML performance curve with reduced complexity over a wide range of erasure rates.
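The peel-then-eliminate pipeline can be sketched for a classical binary parity-check matrix as follows. The cluster decomposition of the stopping set, the constant-size cluster restriction, and the quantum-specific details are omitted; the function names and the small repetition-code example are illustrative assumptions, not the paper's decoder.

```python
import numpy as np

def gf2_solve(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination (one solution,
    free variables set to zero; assumes the system is consistent)."""
    A = A.copy() % 2
    b = b.copy() % 2
    m, n = A.shape
    x = np.zeros(n, dtype=np.uint8)
    row, pivots = 0, []
    for col in range(n):
        piv = next((r for r in range(row, m) if A[r, col]), None)
        if piv is None:
            continue
        A[[row, piv]] = A[[piv, row]]
        b[[row, piv]] = b[[piv, row]]
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivots.append((row, col))
        row += 1
    for r, c in pivots:
        x[c] = b[r]
    return x

def peel_then_ge(H, s, erased):
    """Erasure decoding: peel checks that touch a single erased bit, then
    solve the residual stopping set by GF(2) elimination (no cluster
    decomposition in this sketch)."""
    H = H.astype(np.uint8)
    s = s.astype(np.uint8).copy()
    erased, values = set(erased), {}
    progress = True
    while progress and erased:
        progress = False
        for r in range(H.shape[0]):
            touched = [j for j in np.flatnonzero(H[r]) if j in erased]
            if len(touched) == 1:
                j = int(touched[0])
                values[j] = int(s[r])
                s ^= np.uint8(values[j]) * H[:, j]   # absorb the recovered bit
                erased.discard(j)
                progress = True
    if erased:                                       # residual stopping set
        cols = sorted(erased)
        sol = gf2_solve(H[:, cols], s)
        values.update({int(j): int(sol[k]) for k, j in enumerate(cols)})
    return values

# Example: [3,1] repetition code (checks x0+x1 and x1+x2); codeword 111 with
# bits 0 and 2 erased.
H = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=np.uint8)
c = np.array([1, 1, 1], dtype=np.uint8)
erased = [0, 2]
known = c.copy()
known[erased] = 0
s = (H @ known) % 2                                  # syndrome from known bits
print(peel_then_ge(H, s, erased))                    # {0: 1, 2: 1}
```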
The success of AI models relies on the availability of large, diverse, and high-quality datasets, which can be challenging to obtain due to data scarcity, privacy concerns, and high costs. Synthetic data has emerged as a promising solution by generating artificial data that mimics real-world patterns. This paper provides an overview of synthetic data research, discussing its applications, challenges, and future directions. We present empirical evidence from prior art to demonstrate its effectiveness and highlight the importance of ensuring its factuality, fidelity, and unbiasedness. We emphasize the need for responsible use of synthetic data to build more powerful, inclusive, and trustworthy language models.
This article presents the affordances that Generative Artificial Intelligence can have in the context of disinformation, one of the major threats to our digitalized society. We present a research framework for generating customized agent-based social networks for disinformation simulations that would enable understanding and evaluation of the phenomenon, and we discuss open challenges.
Chain-of-thought reasoning, a cognitive process fundamental to human intelligence, has garnered significant attention in the realm of artificial intelligence and natural language processing. However, the field still lacks a comprehensive survey. To this end, we take a first step and present a thorough, wide-ranging survey of this research field. We use X-of-Thought to refer to Chain-of-Thought in a broad sense. In detail, we systematically organize the current research according to a taxonomy of methods, including XoT construction, XoT structure variants, and enhanced XoT. Additionally, we describe frontier applications of XoT, covering planning, tool use, and distillation. Furthermore, we address challenges and discuss some future directions, including faithfulness, multi-modality, and theory. We hope this survey serves as a valuable resource for researchers seeking to innovate within the domain of chain-of-thought reasoning.
Many natural language processing tasks rely solely on sparse dependencies between a few tokens in a sentence. Soft attention mechanisms show promising performance in modeling local/global dependencies via soft probabilities between every pair of tokens, but they are neither effective nor efficient when applied to long sentences. By contrast, hard attention mechanisms directly select a subset of tokens but are difficult and inefficient to train due to their combinatorial nature. In this paper, we integrate both soft and hard attention into one context fusion model, "reinforced self-attention (ReSA)", so that the two mechanisms benefit each other. In ReSA, a hard attention trims a sequence for a soft self-attention to process, while the soft attention feeds reward signals back to facilitate the training of the hard one. For this purpose, we develop a novel hard attention mechanism called "reinforced sequence sampling (RSS)", which selects tokens in parallel and is trained via policy gradient. Using two RSS modules, ReSA efficiently extracts the sparse dependencies between each pair of selected tokens. We finally propose an RNN/CNN-free sentence-encoding model, "reinforced self-attention network (ReSAN)", based solely on ReSA. It achieves state-of-the-art performance on both the Stanford Natural Language Inference (SNLI) and Sentences Involving Compositional Knowledge (SICK) datasets.
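A minimal PyTorch sketch of the interplay described above, under our own simplifying assumptions: a hard-selection module samples a token subset in parallel and keeps its log-probability for a REINFORCE update, while a soft self-attention attends only over the selected tokens. The module names, dimensions, reward definition, and single training step are illustrative, not the ReSA/ReSAN architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HardSelector(nn.Module):
    """RSS-style hard attention: per-token Bernoulli selection sampled in
    parallel; the log-probability of the sampled mask is kept so a reward
    can be fed back through REINFORCE (illustrative parameterization)."""
    def __init__(self, d):
        super().__init__()
        self.score = nn.Linear(d, 1)
    def forward(self, x):                              # x: (batch, seq, d)
        probs = torch.sigmoid(self.score(x)).squeeze(-1)
        mask = torch.bernoulli(probs).detach()         # 1 = token kept
        mask[mask.sum(-1) == 0, 0] = 1.0               # keep at least one token
        logp = (mask * torch.log(probs + 1e-8)
                + (1 - mask) * torch.log(1 - probs + 1e-8)).sum(-1)
        return mask, logp

class SoftSelfAttention(nn.Module):
    """Scaled dot-product self-attention restricted to the selected tokens."""
    def __init__(self, d):
        super().__init__()
        self.qkv = nn.Linear(d, 3 * d)
    def forward(self, x, mask):
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)
        scores = scores.masked_fill(mask.unsqueeze(1) == 0, float("-inf"))
        return torch.softmax(scores, dim=-1) @ v

d, batch, seq = 32, 4, 20
selector, attn, clf = HardSelector(d), SoftSelfAttention(d), nn.Linear(d, 2)
opt = torch.optim.Adam([*selector.parameters(), *attn.parameters(),
                        *clf.parameters()], lr=1e-3)

x = torch.randn(batch, seq, d)                         # stand-in token embeddings
y = torch.randint(0, 2, (batch,))                      # stand-in labels
mask, logp = selector(x)
h = attn(x, mask)
pooled = (h * mask.unsqueeze(-1)).sum(1) / mask.sum(1, keepdim=True)
task_loss = F.cross_entropy(clf(pooled), y, reduction="none")
reward = -(task_loss.detach() - task_loss.detach().mean())  # baseline-subtracted
loss = task_loss.mean() - (reward * logp).mean()            # REINFORCE term for RSS
opt.zero_grad(); loss.backward(); opt.step()
```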