Sampling invariant distributions from an It\^o diffusion process presents a significant challenge in stochastic simulation. Traditional numerical solvers for stochastic differential equations require both a fine step size and a lengthy simulation period, resulting in samples that are both biased and correlated. Current deep learning-based methods solve the stationary Fokker--Planck equation to determine the invariant probability density function in the form of a deep neural network, but they generally do not directly address the problem of sampling from the computed density. In this work, we introduce a framework that employs a weak generative sampler (WGS) to directly generate independent and identically distributed (iid) samples induced by a transformation map derived from the stationary Fokker--Planck equation. Our proposed loss function is based on the weak form of the Fokker--Planck equation, integrating normalizing flows to characterize the invariant distribution and facilitate sample generation from the base distribution. Our randomized test functions circumvent the need for minimax optimization in the traditional weak formulation. Distinct from conventional generative models, our method requires neither the computationally intensive calculation of the Jacobian determinant nor the invertibility of the transformation map. A crucial component of our framework is the adaptively chosen family of test functions, in the form of Gaussian kernel functions with centres selected from the generated data samples. Experimental results on several benchmark examples demonstrate the effectiveness of our method, which offers both low computational cost and excellent capability in exploring multiple metastable states.
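For concreteness, the weak-form objective described above can be sketched as follows; the notation ($b$, $\sigma$, $T$, $\varphi_k$, $N$) is introduced here for illustration and is not taken from the paper. For the It\^o diffusion $dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t$ with generator $\mathcal{L}\varphi = b\cdot\nabla\varphi + \tfrac{1}{2}\operatorname{Tr}\!\big(\sigma\sigma^{\top}\nabla^{2}\varphi\big)$, the invariant density $p$ satisfies the stationary Fokker--Planck equation $\mathcal{L}^{*}p = 0$, whose weak form is
\[
\int (\mathcal{L}\varphi)(x)\, p(x)\, dx = 0 \quad \text{for all smooth test functions } \varphi .
\]
Replacing the integral by an average over generated samples $X_i = T(z_i)$, with $z_i$ drawn from the base distribution and $\varphi_k$ randomly centred Gaussian test functions, yields a loss of the form
\[
\mathcal{J}(T) = \sum_{k} \bigg( \frac{1}{N} \sum_{i=1}^{N} (\mathcal{L}\varphi_k)\big(T(z_i)\big) \bigg)^{2},
\]
which involves neither the Jacobian determinant of $T$ nor its inverse.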
We explore the use of deep learning to localise galactic structures in low surface brightness (LSB) images. LSB imaging reveals many interesting structures, though these are frequently confused with galactic dust contamination due to strong local visual similarity. We propose a novel unified approach to multi-class segmentation of galactic structures and extended amorphous image contaminants. Our panoptic segmentation model combines Mask R-CNN with a contaminant-specialised network and utilises an adaptive preprocessing layer to better capture the subtle features of LSB images. Further, a human-in-the-loop training scheme is employed to augment ground-truth labels. These approaches are evaluated in turn and together greatly improve the detection of both galactic structures and contaminants in LSB images.
As fifth-generation (5G) and upcoming sixth-generation (6G) communications place tremendous demands on high data throughput at relatively low latency, millimeter-wave (mmWave) technologies manifest themselves as the key enabling components to achieve the envisioned performance. In this context, mmWave integrated circuits (ICs) have attracted significant research interest over the past few decades, ranging from individual block design to complex system design. However, the highly nonlinear properties and intricate trade-offs involved render the design of analog or RF circuits a complicated process. The rapid evolution of fabrication technology also results in increasingly long design cycles due to more stringent requirements. In this thesis, 28-GHz transceiver circuits are first investigated with detailed schematics and associated performance metrics. Two target systems comprising heterogeneous individual blocks are selected and demonstrated on both the transmitter and receiver sides. Subsequently, conventional and large-scale machine learning (ML) approaches are integrated into the design pipeline of the chosen systems to predict circuit parameters from desired specifications, thereby circumventing the time-consuming iterations typical of traditional methods. Finally, potential research directions are discussed from the perspectives of circuit design and ML algorithms.
The vocabulary size in temporal action localization (TAL) is constrained by the scarcity of large-scale annotated datasets. To address this, recent works incorporate powerful pre-trained vision-language models (VLMs), such as CLIP, to perform open-vocabulary TAL (OV-TAL). However, unlike VLMs trained on extensive image/video-text pairs, existing OV-TAL methods still rely on small, fully labeled TAL datasets to train an action localizer. In this paper, we explore the scalability of self-training with unlabeled YouTube videos for OV-TAL. Our self-training approach consists of two stages. First, a class-agnostic action localizer is trained on a human-labeled TAL dataset and used to generate pseudo-labels for unlabeled videos. Second, the large-scale pseudo-labeled dataset is combined with the human-labeled dataset to train the localizer. Extensive experiments demonstrate that leveraging web-scale videos in self-training significantly enhances the generalizability of an action localizer. Additionally, we highlight issues with existing OV-TAL evaluation schemes and propose a new evaluation protocol. Code is released at //github.com/HYUNJS/STOV-TAL.
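As a rough illustration of the two-stage recipe, here is a minimal, self-contained Python sketch. Every name in it (train_localizer, localize, Segment, the confidence threshold) is a hypothetical placeholder, not the released STOV-TAL API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    start: float       # segment start time (s)
    end: float         # segment end time (s)
    confidence: float  # localizer confidence score

# Hypothetical stand-ins for the real training / inference routines.
def train_localizer(videos_with_labels) -> dict:
    """Train a class-agnostic action localizer; returns a model handle."""
    return {"num_train_videos": len(videos_with_labels)}  # placeholder

def localize(model, video) -> List[Segment]:
    """Run the localizer on one unlabeled video."""
    return [Segment(0.0, 1.0, 0.9)]  # placeholder output

def self_train(labeled, unlabeled, conf_thr=0.5):
    # Stage 1: train on human labels, then pseudo-label web videos,
    # keeping only confident segments.
    model = train_localizer(labeled)
    pseudo = []
    for video in unlabeled:
        kept = [s for s in localize(model, video) if s.confidence >= conf_thr]
        if kept:
            pseudo.append((video, kept))
    # Stage 2: retrain on human labels plus pseudo-labels.
    return train_localizer(labeled + pseudo)
```

The confidence threshold controls the precision of the pseudo-labels; the abstract does not specify how filtering is done, so this is only one plausible choice.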
The deployment of a spiking neural network (SNN) can involve partitioning the network and mapping the partitions onto processing units within neuromorphic hardware. Searching for an optimal deployment scheme is an NP-hard problem. Optimizing deployment schemes is further complicated by the difficulty of devising computationally effective cost functions for objectives such as communication time and energy efficiency. Such objectives require accounting for the network dynamics shaped by neuron activity patterns, demanding intricate mathematical analyses or simulations to integrate them into a cost model for SNN deployment. The network dynamics are hardware-independent and can be modeled separately from specific hardware configurations. Our approach employs a pairwise Ising-type maximum entropy model, which has proven effective at accurately reproducing pairwise correlations among the components of a system. We use this model to capture the network dynamics and build on it a cost function that incorporates hardware-specific parameters. We conducted a preliminary investigation using the SpiNNaker machine. We show that training the existing model can itself be computationally expensive. At present we lack sufficient evidence to substantiate the effectiveness of the proposed method, and further effort is needed to explore the integration of network dynamics into SNN deployment.
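For reference, a pairwise Ising-type maximum entropy model over binary neuron states $\sigma_i \in \{-1,+1\}$ takes the standard form (notation ours)
\[
P(\sigma) = \frac{1}{Z}\exp\bigg( \sum_{i} h_i \sigma_i + \sum_{i<j} J_{ij}\, \sigma_i \sigma_j \bigg),
\]
where the biases $h_i$ and couplings $J_{ij}$ are fitted so that the model reproduces the measured mean activities and pairwise correlations, and $Z$ is the normalising constant. Fitting $h_i$ and $J_{ij}$ generally requires estimating $Z$ or its gradients, which is one source of the training cost noted above.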
Linearizability is a commonly accepted correctness criterion for concurrent data structures. However, verifying linearizability of highly concurrent data structures remains a challenging task. In this paper, we present a simple and complete proof technique for verifying linearizability of concurrent stacks. Our technique reduces linearizability of concurrent stacks to establishing a set of conditions; these conditions are based on the happened-before order of operations, intuitively express the LIFO semantics, and can be proved by simple arguments. Designers of concurrent data structures can easily and quickly learn to use the proof technique. We have successfully applied the method to several challenging concurrent stacks: the TS stack, the HSY stack, and the FA stack, among others.
In distributed computing by mobile robots, robots are deployed over a region, continuous or discrete, and operate through a sequence of \textit{look-compute-move} cycles. An extensive body of work studies the computational power of different robot models. The models vary in the robots' ability to 1)~remember constant-size information and 2)~communicate a constant-size message. Depending on these abilities, the models are 1)~$\mathcal{OBLOT}$ (robots are oblivious and silent), 2)~$\mathcal{FSTA}$ (robots have finite states but are silent), 3)~$\mathcal{FCOM}$ (robots are oblivious but can communicate constant-size information) and 4)~$\mathcal{LUMI}$ (robots have finite states and can communicate constant-size information). Another factor affecting computational ability is the scheduler, which decides the activation times of the robots. The three main schedulers are \textit{fully-synchronous}, \textit{semi-synchronous} and \textit{asynchronous}. Combining the models ($M$) with the schedulers ($K$) yields twelve variants $M^K$. In the Euclidean domain, these twelve variants have been compared in different works for transparent robots, opaque robots, and robots with limited visibility. No comparable study exists for robots operating on discrete regions such as networks. The discrete setting demands separate research attention: a series of works considers robots operating on various networks, and robot movement differs fundamentally between continuous and discrete domains. This work contributes to filling this gap by giving a full comparison table for all models under the two synchronous schedulers: fully-synchronous and semi-synchronous.
Vision-based occupancy prediction, also known as 3D Semantic Scene Completion (SSC), presents a significant challenge in computer vision. Previous methods, confined to onboard processing, struggle with simultaneous geometric and semantic estimation, continuity across varying viewpoints, and single-view occlusion. Our paper introduces OccFiner, a novel offboard framework designed to enhance the accuracy of vision-based occupancy predictions. OccFiner operates in two hybrid phases: 1) a multi-to-multi local propagation network that implicitly aligns and processes multiple local frames, correcting onboard model errors and consistently enhancing occupancy accuracy at all distances; and 2) a region-centric global propagation that refines labels using explicit multi-view geometry and integrates sensor bias, especially to increase the accuracy of distant occupied voxels. Extensive experiments demonstrate that OccFiner improves both geometric and semantic accuracy across various types of coarse occupancy, setting a new state of the art on the SemanticKITTI dataset. Notably, OccFiner elevates vision-based SSC models to a level surpassing even that of LiDAR-based onboard SSC models. Furthermore, OccFiner is the first to achieve automatic SSC annotation in a purely vision-based approach. Quantitative experiments show that OccFiner successfully facilitates occupancy data loop-closure in autonomous driving. Additionally, we quantitatively and qualitatively validate the superiority of the offboard approach on city-level SSC static maps. The source code will be made publicly available at //github.com/MasterHow/OccFiner.
This paper studies the problem of forecasting general stochastic processes using a path-dependent extension (PD-NJ-ODE) of the Neural Jump ODE (NJ-ODE) framework \citep{herrera2021neural}. While NJ-ODE was the first framework to establish convergence guarantees for the prediction of irregularly observed time series, these results were limited to data stemming from It\^o-diffusions with complete observations, in particular Markov processes where all coordinates are observed simultaneously. In this work, we generalise these results to generic, possibly non-Markovian or discontinuous, stochastic processes with incomplete observations by utilising the reconstruction properties of the signature transform. These theoretical results are supported by empirical studies showing that the path-dependent NJ-ODE outperforms the original NJ-ODE framework on non-Markovian data. Moreover, we show that PD-NJ-ODE can be applied successfully to classical stochastic filtering problems and to limit order book (LOB) data.
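For orientation, the quantity that NJ-ODE-type models are trained to approximate is the $L^2$-optimal forecast, i.e. the conditional expectation (notation ours)
\[
\hat{X}_t = \mathbb{E}\big[ X_t \mid \mathcal{A}_t \big],
\]
where $\mathcal{A}_t$ denotes the information generated by the observations made up to time $t$. In the path-dependent setting, this information can be summarised by feeding a (truncated) signature of the observed path to the model, which is what the reconstruction properties of the signature transform make possible.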
To date, Visual Question Answering (VQA) models have struggled to count objects in natural images. We identify the soft attention used in these models as a fundamental cause. To circumvent this problem, we propose a neural network component that enables robust counting from object proposals. Experiments on a toy task show the effectiveness of this component, and we obtain state-of-the-art accuracy on the number category of the VQA v2 dataset without negatively affecting other categories, even outperforming ensemble models with our single model. On a difficult balanced-pair metric, the component improves counting over a strong baseline by 6.6%.
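To make the intuition concrete, below is a minimal, non-differentiable analogue of counting from object proposals: keep strongly attended proposals and merge near-duplicates by bounding-box overlap. This is a sketch of the idea only; the component proposed in the paper is a differentiable counterpart of this kind of deduplication, and all names and thresholds here are ours.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def hard_count(boxes, attention, attn_thr=0.5, iou_thr=0.5):
    """Count attended objects: greedily keep proposals in decreasing
    attention order, discarding near-duplicates of already-kept boxes."""
    kept = []
    for i in np.argsort(-np.asarray(attention)):
        if attention[i] < attn_thr:
            break  # remaining proposals are attended too weakly
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in kept):
            kept.append(i)
    return len(kept)

# Two proposals on the same object plus one on another object -> count 2.
boxes = [[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]]
print(hard_count(boxes, attention=[0.9, 0.8, 0.7]))  # 2
```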
High spectral dimensionality and the shortage of annotations make hyperspectral image (HSI) classification a challenging problem. Recent studies suggest that convolutional neural networks can learn discriminative spatial features, which play a paramount role in HSI interpretation. However, most of these methods ignore the distinctive spectral-spatial characteristics of hyperspectral data. In addition, the large amount of unlabeled data remains an unexploited gold mine. We therefore propose an integration of generative adversarial networks (GANs) and probabilistic graphical models for HSI classification. Specifically, we use a spectral-spatial generator and a discriminator to identify the land-cover categories of hyperspectral cubes, and, to take advantage of the large amount of unlabeled data, we adopt a conditional random field to refine the preliminary classification results produced by the GAN. Experimental results on two commonly studied datasets demonstrate that the proposed framework achieves encouraging classification accuracy with a small number of training samples.