On June 24, 2021, a 12-story condominium building (Champlain Towers South) in Surfside, Florida partially collapsed, resulting in one of the deadliest building collapses in United States history, with 98 people confirmed deceased. In this work, we analyze the collapse event using a video clip that is publicly available from social media. In our analysis, we apply computer vision algorithms to extract information from the video clip that may not be readily interpreted by human eyes. By comparing differential features across video frames, our proposed method quantifies the motion of falling structural components, mapping the directions and magnitudes of their movements. We demonstrate the potential of this video processing methodology in investigations of catastrophic structural failures and hope our approach may serve as a basis for further investigations into structural collapse events.
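To make the frame-differencing idea concrete, below is a minimal sketch of how dense optical flow between consecutive frames can map the direction and magnitude of moving structural components. The file name, flow parameters, and the downward-motion threshold are illustrative assumptions; this is not necessarily the authors' exact pipeline.

```python
# Minimal sketch: dense optical flow between consecutive frames to map motion
# direction and magnitude. "collapse_clip.mp4" and all parameters are
# illustrative assumptions, not the authors' exact pipeline.
import cv2
import numpy as np

cap = cv2.VideoCapture("collapse_clip.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Farneback dense optical flow: one (dx, dy) vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Pixels with large downward motion (image y grows downward) are
    # candidate falling components.
    falling = (magnitude > 2.0) & (flow[..., 1] > 0)
    mean_mag = float(magnitude[falling].mean()) if falling.any() else 0.0
    print(f"moving pixels: {int(falling.sum())}, mean magnitude: {mean_mag:.2f}")
    prev_gray = gray
cap.release()
```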
Many real-world settings involve costs for performing actions; transaction costs in financial systems and fuel costs are common examples. In these settings, performing actions at each time step quickly accumulates costs, leading to vastly suboptimal outcomes. Additionally, repeated acting produces wear and tear and, ultimately, damage. Determining when to act is crucial for achieving successful outcomes, and yet the challenge of efficiently learning to behave optimally when actions incur minimally bounded costs remains unresolved. In this paper, we introduce a reinforcement learning (RL) framework named Learnable Impulse Control Reinforcement Algorithm (LICRA) for learning to optimally select both when to act and which actions to take when actions incur costs. At the core of LICRA is a nested structure that combines RL with a form of policy known as impulse control, which learns to maximise objectives when actions incur costs. We prove that LICRA, which seamlessly adopts any RL method, converges to policies that optimally select when to perform actions and their optimal magnitudes. We then augment LICRA to handle problems in which the agent can perform at most $k<\infty$ actions and, more generally, faces a budget constraint. We show that LICRA learns the optimal value function and ensures the budget constraint is satisfied almost surely. We demonstrate empirically LICRA's superior performance against benchmark RL methods in OpenAI Gym's Lunar Lander and Highway environments and in a variant of the Merton portfolio problem from finance.
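The nested "when to act / which action" structure can be sketched as a policy with two heads: a gate that decides whether to intervene (paying a per-action cost) and an action head that chooses the magnitude when intervention happens. Network sizes, the cost value, and the interface below are assumptions for illustration, not the LICRA implementation from the paper.

```python
# Sketch of an impulse-control-style nested policy: one head decides WHETHER
# to act (paying a per-action cost), a second head decides WHICH action to
# take. Sizes, cost, and usage are assumptions, not the paper's LICRA code.
import torch
import torch.nn as nn

class ImpulsePolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.gate = nn.Linear(hidden, 2)           # logits: {do nothing, intervene}
        self.action = nn.Linear(hidden, act_dim)   # action magnitude if intervening

    def forward(self, obs):
        h = self.trunk(obs)
        gate_dist = torch.distributions.Categorical(logits=self.gate(h))
        intervene = gate_dist.sample()
        act = torch.tanh(self.action(h))
        return intervene, act, gate_dist.log_prob(intervene)

policy = ImpulsePolicy(obs_dim=8, act_dim=2)
obs = torch.randn(1, 8)
intervene, act, logp = policy(obs)
action_cost = 0.1                                  # fixed cost charged only on intervention
reward_penalty = action_cost * intervene.float()
print(intervene.item(), act.detach().numpy(), reward_penalty.item())
```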
We introduce a new Krasnoselskii-type result. Using a simple differentiability condition, we relax the nonexpansiveness condition in Krasnoselskii's theorem. More precisely, we analyze the convergence of the sequence $x_{n+1}=\frac{x_n+g(x_n)}{2}$ under a differentiability condition on $g$ and present some fixed point results. We introduce iterative sequences that, for any real differentiable function $g$ and any starting point $x_0\in [a,b]$, converge monotonically to the nearest root of $g$ in $[a,b]$ lying to the right or left of $x_0$. Based on this approach, we present an efficient and novel method for finding the real roots of real functions, and we prove that no root is missed by our method. It is worth mentioning that our iterative method is free of derivative evaluations, which can be regarded as an advantage over many other methods. Finally, we illustrate our results with some numerical examples.
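A minimal sketch of the averaged iteration $x_{n+1}=\frac{x_n+g(x_n)}{2}$ follows, first applied to a fixed point of $g$ and then to a root of $f$ through the auxiliary map $g(x)=x-f(x)$. The tolerance, iteration cap, and example functions are illustrative and not taken from the paper.

```python
# Sketch of the averaged iteration x_{n+1} = (x_n + g(x_n)) / 2 described in
# the abstract. Tolerances and example functions are illustrative.
import math

def averaged_iteration(g, x0, tol=1e-10, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_next = (x + g(x)) / 2.0
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Fixed point of g(x) = cos(x) (approximately 0.739085).
print(averaged_iteration(math.cos, 1.0))

# Root of f(x) = x^2 - 2 on [1, 2] via the auxiliary map g(x) = x - f(x),
# so the averaged iteration becomes x_{n+1} = x_n - f(x_n) / 2.
f = lambda x: x * x - 2.0
print(averaged_iteration(lambda x: x - f(x), 1.0))  # approximately sqrt(2)
```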
In the past, redaction involved the use of black or white markers or paper cut-outs to obscure content on physical paper. Today, many redactions take place in digital PDF documents, and redaction is often performed by software tools. Typical redaction tools remove text from PDF documents and draw a black or white rectangle in its place, mimicking a physical redaction. This practice is thought to be secure when the redacted text is removed and cannot be "copy-pasted" from the PDF document. We find this common conception is false -- existing PDF redactions can be broken by precise measurements of non-redacted character positioning information. We develop a deredaction tool for automatically finding and breaking these vulnerable redactions. We evaluate 11 different redaction tools and find that the majority do not remove redaction-breaking information, including some Adobe Acrobat workflows. We empirically measure the information leaks, finding that some redactions leak upwards of 15 bits of information, a 32,768-fold reduction in the space of potential redacted texts. We demonstrate a lower bound on the impact of these leaks via a study of 22,120 documents, including 18,975 Office of the Inspector General (OIG) investigation reports, in which we find 769 vulnerable named-entity redactions. We find that the leaked information reduces 164 of these redacted names to fewer than 494 possibilities from a dictionary of 7 million names. We demonstrate the impact of these findings by breaking redactions from the Epstein/Maxwell case, the Manafort case, and a released Snowden document. Moreover, we develop an efficient algorithm for locating copy-pastable redactions and find over 100,000 poorly redacted words in US court documents. Current PDF text redaction methods are insufficient for named-entity protection.
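The "copy-pastable redaction" check can be illustrated with a purely geometric sketch: if extractable word boxes overlap a drawn redaction rectangle, the underlying text was never removed. The word boxes and rectangles below are hard-coded placeholders standing in for output of a PDF parser; this is an illustration, not the authors' deredaction tool.

```python
# Sketch: flag redactions where extractable text still lies under the drawn
# rectangle. Boxes are (x0, y0, x1, y1) tuples; in practice they would come
# from a PDF parser. Illustration only, not the paper's tool.
def overlaps(box, rect):
    bx0, by0, bx1, by1 = box
    rx0, ry0, rx1, ry1 = rect
    return bx0 < rx1 and rx0 < bx1 and by0 < ry1 and ry0 < by1

def copy_pastable_redactions(words, redaction_rects):
    """words: list of (text, box); redaction_rects: list of boxes."""
    leaks = []
    for rect in redaction_rects:
        hidden = [text for text, box in words if overlaps(box, rect)]
        if hidden:  # text still present beneath the black/white rectangle
            leaks.append((rect, " ".join(hidden)))
    return leaks

words = [("John", (100, 700, 130, 712)), ("Doe", (134, 700, 160, 712)),
         ("met", (164, 700, 185, 712))]
rects = [(98, 698, 162, 714)]  # a rectangle drawn over the name
print(copy_pastable_redactions(words, rects))
```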
In the race towards carbon neutrality, the building sector has fallen behind and risks endangering the progress made across other industries. This is because buildings have life spans of several decades, which creates substantial inertia in the face of climate change. This inertia is further exacerbated by the scale of the existing building stock. With several billion operational buildings around the globe, working towards a carbon-neutral building sector requires solutions which enable stakeholders to accurately identify and retrofit subpar buildings at scale. However, improving the energy efficiency of the existing building stock through retrofits in a targeted and efficient way remains challenging, because, as of today, the energy efficiency of buildings is generally determined through on-site visits by certified energy auditors, which makes the process slow, costly, and geographically incomplete. To accelerate the identification of promising retrofit targets, this work proposes a new method which estimates a building's energy efficiency using purely remotely sensed data such as street view and aerial imagery, OSM-derived footprint areas, and satellite-borne land surface temperature (LST) measurements. We find that, in the binary setting of distinguishing efficient from inefficient buildings, our end-to-end deep learning model achieves a macro-averaged F1-score of 62.06\%. As such, this work shows the potential and complementary nature of remotely sensed data for predicting building attributes such as energy efficiency and opens up new opportunities for future work to integrate additional data sources.
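One way to combine imagery with tabular remote-sensing features (footprint area, LST) for a binary efficient/inefficient label is a two-branch fusion network, sketched below. The layer sizes and fusion scheme are assumptions for illustration and do not describe the paper's end-to-end model.

```python
# Sketch of a multi-modal classifier fusing an image branch (street view or
# aerial imagery) with tabular features (footprint area, LST). Architecture
# sizes are assumptions, not the paper's model.
import torch
import torch.nn as nn

class EfficiencyClassifier(nn.Module):
    def __init__(self, tabular_dim=2, hidden=32):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.tabular_branch = nn.Sequential(nn.Linear(tabular_dim, hidden), nn.ReLU())
        self.head = nn.Linear(32 + hidden, 2)  # efficient vs. inefficient

    def forward(self, image, tabular):
        return self.head(torch.cat([self.image_branch(image),
                                    self.tabular_branch(tabular)], dim=1))

model = EfficiencyClassifier()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 2))
print(logits.shape)  # torch.Size([4, 2])
```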
We consider a system consisting of $n$ particles, moving forward in jumps on the real line. The system state is the empirical distribution of particle locations. Each particle ``jumps forward'' at some time points, with the instantaneous rate of jumps given by a decreasing function of the particle's location quantile within the current state (empirical distribution). Previous work on this model established, under certain conditions, the convergence, as $n\to\infty$, of the system's random dynamics to those of a deterministic mean-field model (MFM), which is a solution to an integro-differential equation. Another line of previous work established the existence of MFMs that are traveling waves, as well as the attraction of MFM trajectories to traveling waves. The main results of this paper are: (a) we prove that, as $n\to\infty$, the stationary distributions of (re-centered) states concentrate on a (re-centered) traveling wave; (b) we obtain a moment bound, uniform in $n$, on the stationary distributions of (re-centered) states; (c) we prove a convergence-to-MFM result that is substantially more general than that in previous work. Results (b) and (c) serve as ``ingredients'' of the proof of (a), but are also of independent interest.
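The particle dynamics can be simulated directly: each particle's jump rate is a decreasing function of its quantile in the current empirical distribution. The rate function, jump-size distribution, and time discretization below are illustrative assumptions, not the paper's specific model parameters.

```python
# Sketch of the particle system: n particles on the real line, each jumping
# forward at a rate that decreases in its empirical quantile. Rate function,
# jump sizes, and discretization are illustrative.
import numpy as np

def simulate(n=1000, horizon=5.0, dt=0.001, rng=np.random.default_rng(0)):
    x = np.zeros(n)                               # particle locations
    t = 0.0
    while t < horizon:
        ranks = np.argsort(np.argsort(x)) / n     # empirical quantile of each particle
        rate = 2.0 * (1.0 - ranks)                # decreasing in the quantile
        jumps = rng.random(n) < rate * dt         # Bernoulli approximation of Poisson clocks
        x[jumps] += rng.exponential(1.0, jumps.sum())  # forward jumps
        t += dt
    return x

x = simulate()
print(x.mean(), np.quantile(x, [0.1, 0.5, 0.9]))
```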
From a model-building perspective, in this paper we propose a paradigm shift for fitting over-parameterized models. Philosophically, the mindset is to fit models to future observations rather than to the observed sample. Technically, choosing an imputation model for generating future observations, we fit over-parameterized models to these future observations by optimizing an approximation to the desired expected loss function, based on its sample counterpart and an adaptive simplicity-preference function. This technique is discussed in detail both for creating bootstrap imputations and for final estimation with bootstrap imputation. The method is illustrated with the many-normal-means problem, $n < p$ linear regression, and deep convolutional neural networks for image classification of MNIST digits. The numerical results demonstrate superior performance across these three different types of applications. For example, for the many-normal-means problem, our method uniformly dominates James-Stein and Efron's $g$-modeling, and for MNIST image classification it performs better than all existing methods and reaches arguably the best possible result. While this paper is largely expository, given the ambitious task of examining over-parameterized models from this new perspective, fundamental theoretical properties are also investigated. We conclude the paper with a few remarks.
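For the many-normal-means case, the "fit to future observations" idea can be sketched as follows: impute future draws around the observed data, then choose the estimates minimizing predicted loss on those draws plus a simplicity penalty. The ridge-type penalty, its weight, and the imputation scheme below are assumptions made for illustration, not the paper's method.

```python
# Sketch, for the many-normal-means problem, of fitting to imputed "future"
# observations plus a simplicity (shrinkage) penalty. Penalty form and weight
# are assumptions, not the paper's procedure.
import numpy as np

rng = np.random.default_rng(0)
p = 50
theta_true = rng.normal(0, 2, p)
y = theta_true + rng.normal(0, 1, p)             # one observation per mean

B = 200
future = y[None, :] + rng.normal(0, 1, (B, p))   # imputed future observations

lam = 0.5                                        # simplicity-preference weight
# Quadratic loss on future draws + ridge-type penalty has a closed-form
# minimizer: shrink the future-draw average toward zero.
theta_hat = future.mean(axis=0) / (1.0 + lam)

print("MSE raw y:   ", np.mean((y - theta_true) ** 2))
print("MSE shrunken:", np.mean((theta_hat - theta_true) ** 2))
```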
Several deep neural networks have recently been shown to generate activations similar to those of the brain in response to the same input. These algorithms, however, remain largely implausible: they require (1) extraordinarily large amounts of data, (2) unobtainable supervised labels, (3) textual rather than raw sensory input, and/or (4) implausibly large memory (e.g. thousands of contextual words). These elements highlight the need to identify algorithms that, under these limitations, would suffice to account for both behavioral and brain responses. Focusing on speech processing, we hypothesize here that self-supervised algorithms trained on the raw waveform constitute a promising candidate. Specifically, we compare a recent self-supervised architecture, Wav2Vec 2.0, to the brain activity of 412 English, French, and Mandarin speakers recorded with functional magnetic resonance imaging (fMRI) while they listened to approximately one hour of audio books. Our results are four-fold. First, we show that this algorithm learns brain-like representations with as little as 600 hours of unlabelled speech -- a quantity comparable to what infants can be exposed to during language acquisition. Second, its functional hierarchy aligns with the cortical hierarchy of speech processing. Third, different training regimes reveal a functional specialization akin to that of the cortex: Wav2Vec 2.0 learns sound-generic, speech-specific, and language-specific representations similar to those of the prefrontal and temporal cortices. Fourth, we confirm the similarity of this specialization with the behavior of 386 additional participants. These elements, resulting from the largest neuroimaging benchmark to date, show how self-supervised learning can account for a rich organization of speech processing in the brain, and thus delineate a path to identify the laws of language acquisition which shape the human brain.
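Model-to-brain comparisons of this kind are commonly carried out with an encoding analysis: regress voxel responses onto model activations and score held-out correlation. The sketch below uses synthetic placeholder data and ridge regression; shapes and preprocessing are assumptions, not the paper's exact pipeline.

```python
# Sketch of a standard encoding analysis: ridge-regress fMRI voxel responses
# onto model activations and score held-out correlation ("brain score").
# Data are synthetic placeholders; not the paper's exact pipeline.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features, n_voxels = 500, 128, 20
activations = rng.normal(size=(n_samples, n_features))   # e.g. Wav2Vec 2.0 layer outputs
weights = rng.normal(size=(n_features, n_voxels))
bold = activations @ weights + rng.normal(scale=5.0, size=(n_samples, n_voxels))

X_tr, X_te, y_tr, y_te = train_test_split(activations, bold, random_state=0)
model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Per-voxel Pearson correlation between predicted and measured responses.
scores = [np.corrcoef(pred[:, v], y_te[:, v])[0, 1] for v in range(n_voxels)]
print(np.mean(scores))
```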
Radiogenomics is an emerging field in cancer research that combines medical imaging data with genomic data to predict patients' clinical outcomes. In this paper, we propose a multivariate sparse group lasso joint model to integrate imaging and genomic data for building prediction models. Specifically, we jointly consider two models: one regresses imaging features on genomic features, and the other regresses patients' clinical outcomes on genomic features. The regularization penalties through sparse group lasso allow the incorporation of intrinsic group information, e.g. biological pathway and imaging category, to select both important intrinsic groups and important features within a group. To integrate information from the two models, in each model we introduce a weight in the penalty term of each individual genomic feature, where the weight is inversely correlated with the coefficient of that feature in the other model. This weight gives a feature a higher chance of selection by one model if it is selected by the other model. Our model is applicable to both continuous and time-to-event outcomes. It also allows the use of two separate datasets to fit the two models, addressing the practical challenge that many genomic datasets do not have imaging data available. Simulations and real data analyses demonstrate that our method outperforms existing methods in the literature.
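The cross-weighting idea can be sketched with plain lasso standing in for the multivariate sparse group lasso: two regressions on the same genomic features alternate, and each feature's penalty in one model shrinks when that feature is selected by the other model (per-feature penalties implemented via column rescaling). Data, weight formula, and penalty levels are illustrative assumptions, not the paper's estimator.

```python
# Sketch of cross-weighted penalties between two regressions on shared genomic
# features. Plain lasso stands in for the sparse group lasso; data and weights
# are illustrative.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 50
G = rng.normal(size=(n, p))                       # genomic features
imaging = G[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n)
outcome = G[:, :5] @ np.array([1, 1, 0.5, 0.5, 0.5]) + rng.normal(scale=0.5, size=n)

def weighted_lasso(X, y, weights, alpha=0.05):
    # Per-feature penalty w_j * |beta_j| via column rescaling.
    Xs = X / weights
    fit = Lasso(alpha=alpha).fit(Xs, y)
    return fit.coef_ / weights

w_img = np.ones(p)
w_out = np.ones(p)
for _ in range(5):                                # alternate between the two models
    beta_img = weighted_lasso(G, imaging, w_img)
    beta_out = weighted_lasso(G, outcome, w_out)
    w_img = 1.0 / (1.0 + np.abs(beta_out))        # selected in outcome model => smaller penalty here
    w_out = 1.0 / (1.0 + np.abs(beta_img))

print(np.nonzero(beta_img)[0], np.nonzero(beta_out)[0])
```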
In cyber-physical convergence scenarios, information flows seamlessly between the physical and the cyber worlds. Here, users' mobile devices represent a natural bridge through which users process acquired information and perform actions. The sheer amount of data available in this context calls for novel, autonomous, and lightweight data-filtering solutions in which only relevant information is ultimately presented to users. Moreover, in many real-world scenarios data is not categorised into predefined topics but is generally accompanied by semantic descriptions, possibly describing users' interests. In these complex conditions, user devices should autonomously become aware not only of the existence of data in the network, but also of its semantic descriptions and of the correlations between them. To tackle these issues, we present a set of algorithms for knowledge and data dissemination in opportunistic networks, based on simple and very effective models (called cognitive heuristics) coming from the cognitive sciences. We show how to exploit them to disseminate both semantic data and the corresponding data items. We provide a thorough performance analysis under a variety of conditions, comparing our results against non-cognitive solutions. Simulation results demonstrate the superior performance of our solution in terms of more effective semantic knowledge acquisition and representation, and more tailored content acquisition.
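As an illustration of what a cognitive heuristic for filtering disseminated items might look like, the sketch below scores semantic items with an ACT-R-style memory activation: items encountered more often and more recently score higher and are kept first. Whether this matches the specific heuristics used in the work is not stated in the abstract; the decay parameter, item names, and cache size are made up.

```python
# Sketch of one common cognitive heuristic, an ACT-R-style activation score:
# items seen frequently and recently are kept/requested first. Parameters and
# item names are illustrative assumptions.
import math, time

def activation(encounter_times, now, decay=0.5):
    # Higher when an item has been encountered frequently and recently.
    return math.log(sum((now - t) ** (-decay) for t in encounter_times if now > t))

now = time.time()
encounters = {
    "semantic:traffic": [now - 10, now - 60, now - 300],
    "semantic:weather": [now - 3600],
    "semantic:events":  [now - 30, now - 45],
}
cache_size = 2
ranked = sorted(encounters, key=lambda k: activation(encounters[k], now), reverse=True)
print("keep:", ranked[:cache_size], "drop:", ranked[cache_size:])
```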
Recently, deep multiagent reinforcement learning (MARL) has become a highly active research area, as many real-world problems can be inherently viewed as multiagent systems. A particularly interesting and widely applicable class of problems is the partially observable cooperative multiagent setting, in which a team of agents learns to coordinate their behaviors conditioned on their private observations and a commonly shared global reward signal. One natural solution is to resort to the centralized training with decentralized execution paradigm. During centralized training, a key challenge is multiagent credit assignment: how to allocate the global reward to individual agent policies for better coordination towards maximizing system-level benefits. In this paper, we propose a new method called Q-value Path Decomposition (QPD) to decompose the system's global Q-values into individual agents' Q-values. Unlike previous works, which restrict the representational relation between the individual Q-values and the global one, we bring the integrated gradients attribution technique into deep MARL to directly decompose global Q-values along trajectory paths and assign credits to agents. We evaluate QPD on the challenging StarCraft II micromanagement tasks and show that QPD achieves state-of-the-art performance in both homogeneous and heterogeneous multiagent scenarios compared with existing cooperative MARL algorithms.
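The attribution step can be sketched with plain integrated gradients: attribute a global Q-value to each agent's slice of the concatenated observation by averaging gradients along a straight-line path from a baseline. The Q-network, baseline, and observation split below are illustrative assumptions, not the QPD architecture or its path construction.

```python
# Sketch of integrated gradients attributing a global Q-value to each agent's
# observation slice. Q-network, baseline, and split are assumptions, not QPD.
import torch
import torch.nn as nn

n_agents, obs_dim = 3, 4
q_net = nn.Sequential(nn.Linear(n_agents * obs_dim, 64), nn.ReLU(), nn.Linear(64, 1))

obs = torch.randn(1, n_agents * obs_dim)         # concatenated agent observations
baseline = torch.zeros_like(obs)

steps = 64
alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
path = baseline + alphas * (obs - baseline)      # straight-line path to the input
path.requires_grad_(True)
q = q_net(path).sum()
grads = torch.autograd.grad(q, path)[0]

# Integrated gradients: (x - baseline) * average gradient along the path.
ig = (obs - baseline) * grads.mean(dim=0, keepdim=True)
per_agent_credit = ig.view(n_agents, obs_dim).sum(dim=1)
print(per_agent_credit)                          # one credit per agent; sums to ~Q(obs) - Q(baseline)
```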