We investigate the continuous non-monotone DR-submodular maximization problem subject to a down-closed and solvable convex constraint. Our first contribution is to construct an example demonstrating that (first-order) stationary points can have arbitrarily bad approximation ratios and that they usually lie on the boundary of the feasible domain. These findings contrast with the monotone case, where any stationary point yields a $1/2$-approximation (Hassani et al. (2017)). Moreover, this example offers insights into how to design improved algorithms that avoid bad stationary points, such as the restricted continuous local search algorithm (Chekuri et al. (2014)) and the aided measured continuous greedy (Buchbinder and Feldman (2019)). However, the analyses of the latter two algorithms work only for discrete domains, because both invoke the inequality that the multilinear extension of any submodular set function is bounded from below by its Lov\'{a}sz extension. Our second contribution, therefore, is to remove this restriction and show that both algorithms can be extended to the continuous domain while retaining the same approximation ratios, thereby improving on the approximation ratios of Bian et al. (2017a) for the same problem. Finally, we include numerical experiments that demonstrate our algorithms on problems arising from machine learning and artificial intelligence.
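As a rough illustration of the continuous greedy machinery underlying such algorithms, the following is a minimal sketch of a Frank-Wolfe-style update over the box $[0,1]^n$, assuming access to a first-order oracle; the toy quadratic objective and the coordinate-wise linear maximizer are illustrative assumptions, not the paper's restricted local search or aided measured continuous greedy.

```python
# Minimal sketch of a Frank-Wolfe-style continuous greedy step for
# DR-submodular maximization over the down-closed box [0, 1]^n.
import numpy as np

def continuous_greedy(grad_f, n, steps=100):
    x = np.zeros(n)
    for _ in range(steps):
        g = grad_f(x)
        # Linear maximization over the box: pick coordinates with
        # positive gradient (a hypothetical stand-in for an LP solver).
        v = (g > 0).astype(float)
        x += v / steps               # small step toward the maximizer
        x = np.minimum(x, 1.0)       # stay inside the feasible box
    return x

# Toy non-monotone DR-submodular objective: f(x) = 1'x - 0.5 x'Ax.
# The Hessian is -A, whose off-diagonal entries are nonpositive,
# so mixed second partials are <= 0 (the DR property).
A = np.array([[1.0, 0.5], [0.5, 1.0]])
grad = lambda x: 1.0 - A @ x
print(continuous_greedy(grad, n=2))  # approaches the interior optimum
```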
Uncertainty estimation (UE), as an effective means of quantifying predictive uncertainty, is crucial for safe and reliable decision-making, especially in high-risk scenarios. Existing UE schemes usually assume that completely labeled samples are available to support fully-supervised learning. In practice, however, many UE tasks lack sufficient labeled data, such as Multiple Instance Learning (MIL) with only weak instance annotations. To bridge this gap, this paper addresses, for the first time, the weakly-supervised problem of Multi-Instance UE (MIUE) and proposes a new baseline scheme, Multi-Instance Residual Evidential Learning (MIREL). In particular, for fine-grained instance-level UE with only weak supervision, we derive a multi-instance residual operator through the Fundamental Theorem of Symmetric Functions. Building on this operator, we further propose MIREL to jointly model the high-order predictive distribution at both the bag and instance levels for MIUE. Extensive experiments demonstrate that MIREL not only often makes existing MIL networks perform better in MIUE, but also surpasses representative UE methods by large margins, especially in instance-level UE tasks. Our source code is available at //github.com/liupei101/MIREL.
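For readers unfamiliar with evidential UE, the following is a minimal sketch of the generic Dirichlet-based mechanism such methods build on: logits are mapped to nonnegative evidence and the Dirichlet strength yields a closed-form uncertainty mass. The softplus link and shapes are illustrative assumptions, not MIREL's exact design.

```python
# Minimal sketch of evidential uncertainty at the instance level.
import torch
import torch.nn.functional as F

def dirichlet_uncertainty(logits):
    evidence = F.softplus(logits)        # nonnegative evidence per class
    alpha = evidence + 1.0               # Dirichlet concentration parameters
    S = alpha.sum(dim=-1, keepdim=True)  # Dirichlet strength
    prob = alpha / S                     # expected class probabilities
    K = logits.shape[-1]
    vacuity = K / S.squeeze(-1)          # uncertainty mass u = K / S
    return prob, vacuity

logits = torch.randn(4, 2)               # 4 instances, 2 classes
prob, u = dirichlet_uncertainty(logits)
print(prob, u)                           # low evidence -> u close to 1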
Efficient data utilization is crucial for advancing 3D scene understanding in autonomous driving, where reliance on heavily human-annotated LiDAR point clouds challenges fully supervised methods. Addressing this, our study extends semi-supervised learning to LiDAR semantic segmentation, leveraging the intrinsic spatial priors of driving scenes and multi-sensor complements to augment the efficacy of unlabeled data. We introduce LaserMix++, an evolved framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to further assist data-efficient learning. Our framework is tailored to enhance 3D scene consistency regularization by incorporating multi-modality, including 1) a multi-modal LaserMix operation for fine-grained cross-sensor interactions; 2) camera-to-LiDAR feature distillation that enhances LiDAR feature learning; and 3) language-driven knowledge guidance that generates auxiliary supervision using open-vocabulary models. The versatility of LaserMix++ enables applications across LiDAR representations, establishing it as a universally applicable solution. Our framework is rigorously validated through theoretical analysis and extensive experiments on popular driving perception datasets. Results demonstrate that LaserMix++ achieves accuracy comparable to fully supervised alternatives with five times fewer annotations and significantly improves the supervised-only baselines. This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
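To make the laser beam manipulation concrete, the following is a minimal sketch of a LaserMix-style operation: two scans are partitioned into inclination-angle bands and the bands are interleaved. The band count, the angle computation, and the even/odd interleaving rule are assumptions for illustration, not the exact LaserMix++ operator.

```python
# Minimal sketch of mixing two LiDAR scans by inclination-angle bands.
import numpy as np

def laser_mix(points_a, points_b, num_bands=6):
    def bands(points):
        # Inclination angle of each (x, y, z) point.
        incl = np.arctan2(points[:, 2], np.linalg.norm(points[:, :2], axis=1))
        lo, hi = incl.min(), incl.max() + 1e-6
        return ((incl - lo) / (hi - lo) * num_bands).astype(int)
    ba, bb = bands(points_a), bands(points_b)
    # Interleave: even bands from scan A, odd bands from scan B.
    return np.vstack([points_a[ba % 2 == 0], points_b[bb % 2 == 1]])

scan_a = np.random.randn(1000, 3)
scan_b = np.random.randn(1000, 3)
print(laser_mix(scan_a, scan_b).shape)
```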
Offline scheduling in Time-Sensitive Networking (TSN) utilizing the Time-Aware Shaper (TAS) facilitates the calculation of optimal deterministic latency and jitter bounds for Time-Triggered (TT) flows. However, the dynamic nature of traffic in industrial settings necessitates a strategy for adaptively scheduling flows without interrupting existing schedules. Our research identifies critical gaps in current dynamic scheduling methods for TSN and introduces the novel GCN-TD3 approach, which utilizes a Graph Convolutional Network (GCN) to represent the relations among the components of TSN and employs the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm to dynamically schedule any incoming flow. Additionally, an Integer Linear Programming (ILP) based offline scheduler is used both to initialize the simulation and to serve as a fallback mechanism, triggered to recalculate the entire schedule whenever the Gate Control List (GCL) length exceeds a predefined threshold. Comparative analyses demonstrate that GCN-TD3 outperforms existing methods such as Deep Double Q-Network (DDQN) and Deep Deterministic Policy Gradient (DDPG), converging within 4000 epochs with a 90\% dynamic TT flow admission rate while maintaining deadlines and reducing jitter to as low as 2 $\mu$s. Finally, two modules were developed for the OMNeT++ simulator to facilitate dynamic simulation for evaluating the methodology.
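A minimal sketch of the admission loop with the ILP fallback described above is given below; all interfaces (the policy callable, the flow record, the threshold value, and the truncating placeholder for the ILP) are hypothetical, meant only to show where the fallback fires.

```python
# Minimal sketch: admit flows via a learned policy until the GCL grows
# past a threshold, then recompute the schedule offline.
from dataclasses import dataclass, field

GCL_THRESHOLD = 128  # assumed maximum GCL length before fallback

@dataclass
class Schedule:
    gcl: list = field(default_factory=list)

def ilp_reschedule(schedule):
    """Placeholder for the offline ILP: re-solve over all admitted
    flows and return a recomputed (here: merely truncated) GCL."""
    return schedule.gcl[:GCL_THRESHOLD]

def admit_flow(schedule, flow, policy):
    """Try to place one incoming TT flow using the learned policy."""
    entries = policy(schedule, flow)   # e.g., a trained TD3 actor
    if entries is None:
        return False                   # flow rejected
    schedule.gcl.extend(entries)
    if len(schedule.gcl) > GCL_THRESHOLD:
        schedule.gcl = ilp_reschedule(schedule)  # fallback trigger
    return True

# Dummy policy that opens one gate entry per flow.
sched = Schedule()
print(admit_flow(sched, {"period_us": 500}, lambda s, f: [("gate", f["period_us"])]))
```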
Emotion recognition is an important part of affective computing. Extracting emotional cues from human gaits offers benefits such as natural interaction, nonintrusiveness, and remote detection. Recently, self-supervised learning techniques have offered a practical solution to the scarcity of labeled data in gait-based emotion recognition. However, due to the limited diversity of gaits and the incompleteness of skeleton feature representations, existing contrastive learning methods are usually inefficient at acquiring gait emotions. In this paper, we propose a contrastive learning framework utilizing selective strong augmentation (SSA) for self-supervised gait-based emotion representation, which aims to derive effective representations from limited labeled gait data. First, we propose an SSA method for the gait emotion recognition task, which includes upper-body jitter and random spatiotemporal masking; SSA generates more diverse and targeted positive samples and prompts the model to learn more distinctive and robust feature representations. Then, we design a complementary feature fusion network (CFFN) that integrates cross-domain information to acquire topological structural and globally adaptive features. Finally, we employ a distributional divergence minimization loss to supervise the representation learning of the generally and strongly augmented queries. Our approach is validated on the Emotion-Gait (E-Gait) and Emilya datasets and outperforms state-of-the-art methods under different evaluation protocols.
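The two augmentations named above can be sketched directly on a skeleton sequence of shape (frames, joints, 3); the upper-body joint indices, noise scale, and mask ratios below are illustrative assumptions, not the paper's exact settings.

```python
# Minimal sketch of upper-body jitter and random spatiotemporal masking.
import numpy as np

UPPER_BODY = [2, 3, 4, 5, 6, 7, 8]   # assumed upper-body joint indices

def upper_body_jitter(seq, sigma=0.05):
    out = seq.copy()
    out[:, UPPER_BODY, :] += np.random.normal(0, sigma, out[:, UPPER_BODY, :].shape)
    return out

def random_spatiotemporal_mask(seq, t_ratio=0.1, j_ratio=0.2):
    out = seq.copy()
    T, J, _ = out.shape
    dt = max(1, int(T * t_ratio))                 # masked time window
    t0 = np.random.randint(0, max(1, T - dt))
    joints = np.random.choice(J, size=max(1, int(J * j_ratio)), replace=False)
    out[t0:t0 + dt][:, joints, :] = 0.0           # zero out joints in window
    return out

seq = np.random.randn(60, 16, 3)                  # 60 frames, 16 joints
strong = random_spatiotemporal_mask(upper_body_jitter(seq))
print(strong.shape)
```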
We propose a new Bayesian heteroskedastic Markov-switching structural vector autoregression with data-driven time-varying identification. The model selects among alternative exclusion restrictions over time and, as a condition for the search, allows us to verify identification through heteroskedasticity within each regime. Based on four alternative monetary policy rules, we show that a monthly six-variable system supports time variation in the identification of US monetary policy shocks. In the sample-dominating first regime, systematic monetary policy follows a Taylor rule extended by the term spread, effectively curbing inflation. The second regime, which occurs after 2000 and gains persistence after the global financial and COVID crises, is characterized by a money-augmented Taylor rule. This regime's unconventional monetary policy provides economic stimulus, features the liquidity effect, and is complemented by a pure term spread shock. Absent the specific monetary policy of the second regime, inflation would have been over one percentage point higher on average after 2008.
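To fix notation, a generic heteroskedastic Markov-switching SVAR of the kind described here can be written as below; this is a textbook-style sketch, not the paper's exact specification.

```latex
% s_t is a hidden Markov state selecting the exclusion restrictions in
% B_0(s_t) and the regime-specific structural shock variances \lambda_{s_t}.
\begin{aligned}
  B_0(s_t)\, y_t &= b + B_1 y_{t-1} + \dots + B_p y_{t-p} + u_t, \\
  u_t \mid s_t &\sim \mathcal{N}\!\bigl(0,\ \operatorname{diag}(\lambda_{s_t})\bigr).
\end{aligned}
```

Identification through heteroskedasticity rests on the shock variance vectors $\lambda_{s_t}$ differing across volatility states, which is what the within-regime verification condition checks.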
We investigate the evidence/flexibility (i.e., "Occam") paradigm and demonstrate the theoretical and empirical consistency of Bayesian evidence for the task of determining an appropriate generative model for network data. This model selection framework involves specifying a collection of candidate models, equipping each model's parameters with prior distributions derived via the encompassing priors method, and computing or approximating each model's evidence. We demonstrate how such a criterion may be used to select the most suitable model among the Erd\"{o}s-R\'{e}nyi (ER) model, the independent edge (IE) model, and the rank-1 stochastic blockmodel (SBM). The ER model may be considered linearly nested within the IE model, a fact that permits exponential family results. The rank-1 SBM is not so amenable, so we propose a numerical method to approximate its evidence. We apply this paradigm to brain connectome data. Future work entails deriving and equipping additional candidate random graph models with appropriate priors so they may be included in the paradigm.
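For the simplest candidate, the evidence is available in closed form: with a Beta$(a, b)$ prior on the ER edge probability, the marginal likelihood of an undirected graph with $m$ edges out of $M$ possible pairs is $B(a+m,\, b+M-m)/B(a,b)$. The sketch below computes its logarithm; the hyperparameters are assumptions (the paper derives priors via the encompassing priors method).

```python
# Minimal sketch: closed-form log-evidence of the Erdos-Renyi model.
import numpy as np
from scipy.special import betaln

def log_evidence_er(adj, a=1.0, b=1.0):
    n = adj.shape[0]
    M = n * (n - 1) // 2                  # possible undirected edges
    m = int(np.triu(adj, k=1).sum())      # observed edges
    return betaln(a + m, b + M - m) - betaln(a, b)

adj = (np.random.rand(10, 10) < 0.3).astype(int)
adj = np.triu(adj, 1); adj = adj + adj.T  # symmetrize a random graph
print(log_evidence_er(adj))
```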
We consider self-stabilizing algorithms to compute a Maximal Independent Set (MIS) in the extremely weak beeping communication model. The model consists of an anonymous network with synchronous rounds. In each round, each vertex can optionally transmit a signal to all its neighbors (beep). After the transmission of a signal, each vertex can only differentiate between no signal received and at least one signal received. We assume that vertices have some knowledge about the topology of the network. We revisit the non-self-stabilizing algorithm proposed by Jeavons, Scott, and Xu (2013), which computes an MIS in the beeping model. We enhance this algorithm to be self-stabilizing and explore two variants, which differ in the knowledge about the topology available to the vertices. In the first variant, every vertex knows an upper bound on the maximum degree $\Delta$ of the graph. For this case, we prove that the proposed self-stabilizing version maintains the same run-time as the original algorithm, i.e., it stabilizes after $O(\log n)$ rounds w.h.p. on any $n$-vertex graph. In the second variant, each vertex only knows an upper bound on its own degree. For this case, we prove that the algorithm stabilizes after $O(\log n \cdot \log \log n)$ rounds on any $n$-vertex graph, w.h.p.
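The communication primitive itself is easy to simulate: in one synchronous round each vertex either beeps or listens, and a listener learns only whether at least one neighbor beeped. The sketch below implements exactly that round (the beeping probabilities are illustrative; it is not the full self-stabilizing algorithm).

```python
# Minimal sketch of one synchronous round in the beeping model.
import random

def beeping_round(adj, beep_prob):
    n = len(adj)
    beeps = [random.random() < beep_prob[v] for v in range(n)]
    # A vertex only hears whether >= 1 neighbor beeped, never how many.
    heard = [any(beeps[u] for u in adj[v]) for v in range(n)]
    return beeps, heard

# Path on 4 vertices; every vertex beeps with probability 1/2.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
beeps, heard = beeping_round(adj, beep_prob=[0.5] * 4)
print(beeps, heard)
```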
Conventional entity typing approaches are based on independent classification paradigms, which makes it difficult for them to recognize inter-dependent, long-tailed and fine-grained entity types. In this paper, we argue that the implicitly entailed extrinsic and intrinsic dependencies between labels can provide critical knowledge for tackling these challenges. To this end, we propose the \emph{Label Reasoning Network (LRN)}, which sequentially reasons over fine-grained entity labels by discovering and exploiting the label dependency knowledge entailed in the data. Specifically, LRN utilizes an auto-regressive network to conduct deductive reasoning and a bipartite attribute graph to conduct inductive reasoning between labels, which can effectively model, learn and reason about complex label dependencies in a sequence-to-set, end-to-end manner. Experiments show that LRN achieves state-of-the-art performance on standard ultra-fine-grained entity typing benchmarks and can also effectively resolve the long-tailed label problem.
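The sequence-to-set idea can be sketched as autoregressive decoding with repeat-masking and an end-of-set symbol: labels are emitted one at a time, each conditioned on those already predicted. The scoring function below is a hypothetical stand-in for LRN's deductive reasoning module.

```python
# Minimal sketch of sequence-to-set label decoding.
import torch

def decode_label_set(score_fn, eos, max_len=8):
    predicted = []
    for _ in range(max_len):
        scores = score_fn(predicted)        # logits over labels + EOS
        for p in predicted:
            scores[p] = float("-inf")       # a set: no repeated labels
        nxt = int(scores.argmax())
        if nxt == eos:
            break
        predicted.append(nxt)
    return predicted

# Toy scorer: prefer label 3, then label 1, then stop (EOS = index 5).
def toy_scores(prev):
    s = torch.full((6,), -1.0)              # 5 labels + EOS
    s[3], s[1], s[5] = 3.0, 2.0, 1.0
    return s

print(decode_label_set(toy_scores, eos=5))  # -> [3, 1]
```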
Accurate and interpretable prediction of future events in time-series data often requires capturing representative patterns (referred to as states) underpinning the observed data. To this end, most existing studies focus on the representation and recognition of states but ignore the changing transitional relations among them. In this paper, we present the evolutionary state graph, a dynamic graph structure designed to systematically represent the evolving relations (edges) among states (nodes) over time. Analyzing the dynamic graphs constructed from time-series data, we show that changes in the graph structure (e.g., edges connecting certain state nodes) can signal the occurrence of events (i.e., time-series fluctuations). Inspired by this, we propose a novel graph neural network model, the Evolutionary State Graph Network (EvoNet), which encodes the evolutionary state graph for accurate and interpretable time-series event prediction. Specifically, EvoNet models both node-level (state-to-state) and graph-level (segment-to-segment) propagation and captures node-graph (state-to-segment) interactions over time. Experimental results on five real-world datasets show that our approach not only achieves clear improvements over 11 baselines but also provides more insight into explaining the results of event predictions.
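A minimal sketch of the graph construction step follows: segments are quantized into discrete states (here by k-means) and transitions between consecutive segments become weighted edges. For simplicity it aggregates all transitions into one graph, whereas a truly dynamic version would build one graph per time window; the segment length and state count are illustrative assumptions.

```python
# Minimal sketch of building a state-transition graph from a series.
import numpy as np
from sklearn.cluster import KMeans

def state_graph(series, seg_len=10, n_states=4):
    segs = series[: len(series) // seg_len * seg_len].reshape(-1, seg_len)
    states = KMeans(n_clusters=n_states, n_init=10).fit_predict(segs)
    edges = np.zeros((n_states, n_states), dtype=int)
    for s, t in zip(states[:-1], states[1:]):
        edges[s, t] += 1          # transition counts between states
    return states, edges

series = np.sin(np.linspace(0, 20, 300)) + 0.1 * np.random.randn(300)
states, edges = state_graph(series)
print(edges)
```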
Multi-relation question answering is a challenging task, as it requires elaborate analysis of questions and reasoning over multiple fact triples in a knowledge base. In this paper, we present a novel model called the Interpretable Reasoning Network, which employs an interpretable, hop-by-hop reasoning process for question answering. The model dynamically decides which part of an input question should be analyzed at each hop; predicts a relation that corresponds to the current parsed results; utilizes the predicted relation to update the question representation and the state of the reasoning process; and then drives the next-hop reasoning. Experiments show that our model yields state-of-the-art results on two datasets. More interestingly, the model offers traceable and observable intermediate predictions for reasoning analysis and failure diagnosis, thereby allowing manual intervention in predicting the final answer.
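A minimal sketch of such hop-by-hop reasoning follows: at each hop, score relations against the current question state, follow the best relation in the KB, and update the state. The update rule (subtracting the consumed relation embedding) and the toy KB are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of interpretable multi-hop reasoning over a toy KB.
import numpy as np

def multi_hop(q, entity, rel_emb, kb, hops=2):
    trace = []
    for _ in range(hops):
        names = list(rel_emb)
        rel = names[int(np.argmax([q @ rel_emb[r] for r in names]))]
        entity = kb[(entity, rel)]   # follow the predicted fact triple
        q = q - rel_emb[rel]         # "consume" the parsed question part
        trace.append((rel, entity))  # observable intermediate prediction
    return entity, trace

rel_emb = {"birthplace": np.array([1.0, 0.0]), "country": np.array([0.0, 1.0])}
kb = {("Turing", "birthplace"): "London", ("London", "country"): "UK"}
q = np.array([1.0, 0.5])  # "in which country was Turing born?"
print(multi_hop(q, "Turing", rel_emb, kb))  # -> ('UK', [...trace...])
```

The returned trace is what makes the intermediate hops inspectable: a wrong first-hop relation can be diagnosed and, in principle, manually overridden before the final answer is produced.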