Coordinate-type subgradient methods for nonsmooth optimization remain relatively underexplored because of the set-valued nature of the subdifferential. In this work, we focus on nonsmooth composite optimization problems, which encompass a wide class of convex and weakly convex (nonconvex nonsmooth) problems. By properly exploiting the chain rule of the composite structure, we introduce the Randomized Coordinate Subgradient method (RCS) for tackling this problem class. To the best of our knowledge, this is the first coordinate subgradient method for solving general nonsmooth composite optimization problems. In theory, we adopt a linearly bounded subgradients assumption for the objective function, which is more general than the traditional Lipschitz continuity assumption and accounts for practical scenarios. We then conduct convergence analysis for RCS in both the convex and weakly convex cases under this generalized Lipschitz-type assumption. Specifically, we establish the $\widetilde{\mathcal{O}}(1/\sqrt{k})$ convergence rate in expectation and the $\widetilde{o}(1/\sqrt{k})$ almost sure asymptotic convergence rate in terms of the suboptimality gap when $f$ is convex. For the case when $f$ is weakly convex and its subdifferential satisfies the global metric subregularity property, we derive an $\mathcal{O}(\varepsilon^{-4})$ iteration complexity in expectation, and we also establish an asymptotic convergence result. To justify the global metric subregularity property used in the analysis, we establish this error bound condition for the concrete (real-valued) robust phase retrieval problem. We also provide a convergence lemma and clarify the relationship between the global metric subregularity properties of a weakly convex function and of its Moreau envelope. Finally, we conduct several experiments to demonstrate the possible superiority of RCS over the standard subgradient method.
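As a rough illustration of the idea (not the paper's algorithm or its assumptions), the sketch below applies a randomized coordinate subgradient step to the composite objective $f(x) = \|Ax - b\|_1$, where the chain rule gives the $i$-th component of a subgradient directly; the function names, problem instance, and diminishing step-size schedule are all illustrative.

```python
import random

def rcs_l1_regression(A, b, x0, iters=2000, gamma=0.5, seed=0):
    """Sketch of a randomized coordinate subgradient method for
    f(x) = ||A x - b||_1 (composite: h = l1-norm, c(x) = A x - b).
    All names and choices here are illustrative, not the paper's."""
    rng = random.Random(seed)
    m, n = len(A), len(x0)
    x = list(x0)
    for k in range(iters):
        i = rng.randrange(n)  # sample one coordinate uniformly
        r = [sum(A[j][t] * x[t] for t in range(n)) - b[j] for j in range(m)]
        # chain rule: i-th component of A^T * sign(A x - b)
        g_i = sum((1 if r[j] > 0 else -1 if r[j] < 0 else 0) * A[j][i]
                  for j in range(m))
        x[i] -= gamma / (k + 1) ** 0.5 * g_i  # diminishing step size
    return x

def obj(A, b, x):
    return sum(abs(sum(Aj[t] * x[t] for t in range(len(x))) - bj)
               for Aj, bj in zip(A, b))
```

With $A = [[1,0],[0,2],[1,1]]$, $b = [1,2,3]$, starting from the origin, the objective drops well below its initial value of $6$.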
We develop a class of interacting particle systems for implementing a maximum marginal likelihood estimation (MMLE) procedure to estimate the parameters of a latent variable model. We achieve this by formulating a continuous-time interacting particle system which can be seen as a Langevin diffusion over an extended state space of parameters and latent variables. In particular, we prove that the parameter marginal of the stationary measure of this diffusion has the form of a Gibbs measure in which the number of particles acts as the inverse temperature parameter of classical settings for global optimisation. Using a particular rescaling, we then prove geometric ergodicity of this system and bound the discretisation error in a manner that is uniform in time and does not increase with the number of particles. The discretisation results in an algorithm, termed the Interacting Particle Langevin Algorithm (IPLA), which can be used for MMLE. We further prove nonasymptotic bounds for the optimisation error of our estimator in terms of key parameters of the problem, and extend this result to the case of stochastic gradients, covering practical scenarios. We provide numerical experiments illustrating the empirical behaviour of our algorithm in the context of logistic regression with verifiable assumptions. Compared to more classical approaches such as the Expectation Maximisation (EM) algorithm, our setting provides a straightforward way to implement a diffusion-based optimisation routine and allows for especially explicit nonasymptotic bounds.
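A minimal sketch of such a discretised interacting particle system on a toy Gaussian latent-variable model ($x \sim \mathcal{N}(\theta,1)$, $y \sim \mathcal{N}(x,1)$, so the MMLE is $\theta^\star = y$). The model, step size, and function names are our own illustrations, not the paper's experiments; note the $\sqrt{2\gamma/N}$ noise scaling on the parameter, reflecting the number of particles acting as an inverse temperature.

```python
import math, random

def ipla_gaussian(y, N=100, gamma=0.05, steps=2000, seed=1):
    """Illustrative interacting-particle Langevin discretisation for
    U(theta, x) = (x - theta)^2/2 + (y - x)^2/2, whose MMLE is theta = y."""
    rng = random.Random(seed)
    theta = 0.0
    xs = [rng.gauss(0.0, 1.0) for _ in range(N)]
    for _ in range(steps):
        theta_old = theta
        # theta follows the particle-averaged gradient; noise is
        # scaled by sqrt(2*gamma/N) (N plays the inverse temperature)
        grad_theta = sum(theta_old - x for x in xs) / N
        theta = (theta_old - gamma * grad_theta
                 + math.sqrt(2 * gamma / N) * rng.gauss(0, 1))
        # each particle takes an independent Langevin step in x
        xs = [x - gamma * ((x - theta_old) + (x - y))
              + math.sqrt(2 * gamma) * rng.gauss(0, 1) for x in xs]
    return theta
```

For $y = 2$, the returned estimate concentrates near $2$, with fluctuations shrinking as $N$ grows.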
We propose a unified dynamic tracking algorithmic framework (PLAY-CS) to reconstruct signal sequences with their intrinsic structured dynamic sparsity. By capitalizing on specific statistical assumptions on the dynamic filter of the signal sequences, the proposed framework is versatile: it encompasses various existing dynamic compressive sensing (DCS) algorithms. This is achieved through the incorporation of a newly proposed Partial-Laplacian filtering sparsity model, tailored to capture more sophisticated dynamic sparsity. In practical scenarios such as dynamic channel tracking in wireless communications, the framework demonstrates enhanced performance compared to existing DCS algorithms.
Current multilingual semantic parsing (MSP) datasets are almost all collected by translating the utterances of existing datasets from a resource-rich language into the target language. However, manual translation is costly. To reduce the translation effort, this paper proposes the first active learning procedure for MSP (AL-MSP), which selects only a subset of the existing data to be translated. We also propose a novel selection method that prioritizes examples that diversify the logical form structures while offering richer lexical choices, and a novel hyperparameter tuning method that requires no extra annotation cost. Our experiments show that AL-MSP significantly reduces translation cost when paired with an effective selection method, and that our selection method with properly tuned hyperparameters yields better parsing performance than the other baselines on two multilingual datasets.
In the field of the energy Internet, blockchain-based distributed energy trading is a promising replacement for the traditional centralized trading mode. However, current power blockchain platforms based on public chains suffer from low consensus efficiency and wasted computing resources, while energy trading platforms based on consortium chains struggle to attract users and are vulnerable to collusion among alliance members. We propose a microgrid trading framework based on proof of contribution (PoC). In each round of consensus, we randomly select nodes with probability weighted by their contribution values, using verifiable random functions (VRF), to form the next round's consensus committee. The proposed electricity trading framework not only improves the consensus efficiency of the blockchain but also serves as an incentive mechanism that attracts users to participate in the power blockchain. Experiments show that our framework is effective and that the contribution value we design is reasonable. Through our framework, we can motivate users to participate and make energy transactions fairer.
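A contribution-weighted committee draw of this kind can be sketched as follows. Here SHA-256 over a public epoch seed stands in for a real VRF output, and the Efraimidis-Spirakis key $u^{1/w}$ is one standard way to do weighted sampling without replacement; both are our assumptions for illustration, not the paper's exact construction.

```python
import hashlib

def select_committee(nodes, contributions, epoch_seed, k=3):
    """Sketch of contribution-weighted committee selection.
    SHA-256 over (seed, node id) stands in for a VRF output here,
    so every party can recompute and verify the draw; a deployment
    would use a real VRF so nodes cannot grind their randomness."""
    scored = []
    for node, w in zip(nodes, contributions):
        digest = hashlib.sha256(f"{epoch_seed}:{node}".encode()).digest()
        u = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
        # Efraimidis-Spirakis key: larger contribution w => larger key
        scored.append((u ** (1.0 / w), node))
    scored.sort(reverse=True)
    return [node for _, node in scored[:k]]
```

Because the draw is a deterministic function of the public seed and the contribution weights, any participant can re-run it and verify the committee.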
Zernike radial polynomials (ZRP) play a significant role in application areas such as optics design, imaging systems, and image processing systems. Currently, there are two kinds of numerical schemes for computing the ZRP automatically with computer programs: one is based on the definition, in which the factorial operations may lead to overflow and high-order derivatives are troublesome, and the other is based on recursion, which is either unstable or computationally expensive. In this paper, we focus on exploring balanced binary tree (BBT) schemes for computing the ZRP: firstly, an elegant formula for the computation is established; secondly, recursive and iterative algorithms based on the BBT are proposed; thirdly, the computational complexity of the algorithms is analyzed rigorously; finally, the performance of the BBT schemes is verified and validated by running-time tests. Theoretical analysis shows that the computational complexities of the balanced binary tree recursive algorithm (BBTRA) and iterative algorithm are exponential and quadratic, respectively, which agrees well with the running-time tests. Experiments show that the time consumption of the balanced binary tree iterative algorithm (BBTIA) is about $1\sim 10$ microseconds across different computation platforms, which is stable and efficient enough for real-time applications. From the perspective of STEM education, the connection between the BBT and the ZRP exhibits the beauty and applicability of the discrete mathematical structure behind this engineering problem, and is worth introducing to college students, computer programmers, and optics engineers.
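For reference, the baseline definition-based scheme evaluates the direct sum $R_n^m(\rho) = \sum_{k=0}^{(n-m)/2} \frac{(-1)^k (n-k)!}{k!\,(\frac{n+m}{2}-k)!\,(\frac{n-m}{2}-k)!}\,\rho^{n-2k}$. The sketch below uses Python's exact big-integer factorials, which sidestep the overflow issue mentioned above; it is the straightforward baseline, not the paper's BBT algorithms.

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Direct-sum evaluation of the Zernike radial polynomial R_n^m(rho).
    Python's arbitrary-precision integers keep the factorials exact;
    this is the definition-based baseline, not the faster BBT schemes."""
    if (n - m) % 2 != 0:
        return 0.0  # R_n^m vanishes when n - m is odd
    total = 0.0
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k) * factorial((n + m) // 2 - k)
                * factorial((n - m) // 2 - k)))
        total += c * rho ** (n - 2 * k)
    return total
```

Sanity checks: $R_2^0(\rho) = 2\rho^2 - 1$ and $R_4^0(\rho) = 6\rho^4 - 6\rho^2 + 1$.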
Fog computing emerged as a promising paradigm to address the challenges of processing and managing data generated by the Internet of Things (IoT). Load balancing (LB) plays a crucial role in Fog computing environments to optimize the overall system performance. It requires efficient resource allocation to improve resource utilization, minimize latency, and enhance the quality of service for end-users. In this work, we improve the performance of privacy-aware Reinforcement Learning (RL) agents that optimize the execution delay of IoT applications by minimizing the waiting delay. To maintain privacy, these agents optimize the waiting delay by minimizing the change in the number of queued requests in the whole system, i.e., without explicitly observing the actual number of requests queued in each Fog node or the compute resource capabilities of those nodes. Besides improving the performance of these agents, we propose a lifelong learning framework for them, where lightweight inference models are used during deployment to minimize action delay and are only retrained in case of significant environmental changes. To improve performance, minimize the training cost, and adapt the agents to those changes, we explore the application of Transfer Learning (TL). TL transfers the knowledge acquired from a source domain and applies it to a target domain, enabling the reuse of learned policies and experiences. TL can also be used to pre-train the agent in simulation before fine-tuning it in the real environment; this significantly reduces failure probability compared to learning from scratch in the real environment. To our knowledge, no existing efforts in the literature use TL to address lifelong learning for RL-based Fog LB, and the lack of such support is one of the main obstacles to deploying RL LB solutions in Fog systems.
A novel unconstrained optimization model, termed weighted trace-penalty minimization (WTPM), is proposed to address the extreme eigenvalue problem arising from the Full Configuration Interaction (FCI) method. Theoretical analysis shows that the global minimizers of the WTPM objective function are the desired eigenvectors themselves, rather than merely the eigenspace they span. A detailed analysis of the condition number of the Hessian operator leads to the determination of a near-optimal weight matrix. With the sparsity of FCI matrices in mind, the coordinate descent (CD) method is adapted to WTPM, resulting in the WTPM-CD method. The reduced computational and storage cost of each iteration shows the efficiency of the proposed algorithm. Finally, numerical experiments demonstrate its capability to handle large-scale FCI matrices.
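To illustrate why coordinate descent pairs well with sparse matrices, the generic sketch below minimises a simple quadratic $f(x) = \tfrac12 x^\top A x - b^\top x$ (not the WTPM objective itself): each coordinate update is an exact one-dimensional minimisation that touches only one row of $A$, so per-iteration cost scales with the row's nonzeros.

```python
def coordinate_descent_quadratic(A, b, sweeps=50):
    """Generic coordinate descent on f(x) = 1/2 x^T A x - b^T x,
    with A symmetric positive definite. Each update solves the
    1-D problem in x_i exactly (Gauss-Seidel style) and reads only
    row i of A -- the property that makes CD attractive for sparse
    matrices. Purely illustrative; not the WTPM-CD method."""
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            # minimise f over x_i with the other coordinates fixed
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

For $A = [[4,1],[1,3]]$ and $b = [1,2]$, the iterates converge to the solution of $Ax=b$, namely $x = (1/11, 7/11)$.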
Humans perceive the world by concurrently processing and fusing high-dimensional inputs from multiple modalities such as vision and audio. Machine perception models, in stark contrast, are typically modality-specific and optimised for unimodal benchmarks; hence, late-stage fusion of final representations or predictions from each modality (`late fusion') is still the dominant paradigm for multimodal video classification. Instead, we introduce a novel transformer-based architecture that uses `fusion bottlenecks' for modality fusion at multiple layers. Compared to traditional pairwise self-attention, our model forces information between different modalities to pass through a small number of bottleneck latents, requiring the model to collate and condense the most relevant information in each modality and share only what is necessary. We find that such a strategy improves fusion performance while reducing computational cost. We conduct thorough ablation studies and achieve state-of-the-art results on multiple audio-visual classification benchmarks, including AudioSet, Epic-Kitchens and VGGSound. All code and models will be released.
It is widely believed that modeling the relationships between objects helps in representing and eventually describing an image. Nevertheless, there has been little evidence supporting this idea for image description generation. In this paper, we introduce a new design that explores the connections between objects for image captioning under the umbrella of the attention-based encoder-decoder framework. Specifically, we present a Graph Convolutional Networks plus Long Short-Term Memory (dubbed GCN-LSTM) architecture that integrates both semantic and spatial object relationships into the image encoder. Technically, we build graphs over the objects detected in an image based on their spatial and semantic connections, and the representation of each object region proposal is then refined by leveraging the graph structure through GCN. With the learnt region-level features, GCN-LSTM capitalizes on an LSTM-based captioning framework with an attention mechanism for sentence generation. Extensive experiments are conducted on the COCO image captioning dataset, and superior results are reported compared to state-of-the-art approaches. More remarkably, GCN-LSTM increases CIDEr-D performance from 120.1% to 128.7% on the COCO testing set.
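The refinement of region features over the relationship graph follows the standard graph-convolution pattern $H' = \mathrm{ReLU}(\hat{A} H W)$, where $\hat{A}$ is the adjacency matrix with self-loops, normalised. The pure-Python sketch below (shapes, normalisation choice, and weights are illustrative, not the paper's exact layer) shows the propagation step:

```python
def gcn_layer(adj, H, W):
    """One graph-convolution step H' = ReLU(A_hat @ H @ W), where
    A_hat adds self-loops to the adjacency matrix and row-normalises
    it. `adj` is an n x n 0/1 matrix over object regions, H the n x d
    region features, W a d x d_out weight matrix. Illustrative sketch."""
    n = len(adj)
    # add self-loops, then row-normalise so each row sums to one
    A = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
         for i in range(n)]
    A = [[a / sum(row) for a in row] for row in A]
    # aggregate neighbour features: AH = A_hat @ H
    AH = [[sum(A[i][k] * H[k][d] for k in range(n))
           for d in range(len(H[0]))] for i in range(n)]
    # linear transform + ReLU: H' = max(0, AH @ W)
    out_dim = len(W[0])
    return [[max(0.0, sum(AH[i][d] * W[d][o] for d in range(len(W))))
             for o in range(out_dim)] for i in range(n)]
```

Each region's new feature is thus a nonlinear mix of its own feature and those of the regions it is semantically or spatially connected to.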
Traditional methods for link prediction fall into three main types: graph-structure-feature-based, latent-feature-based, and explicit-feature-based. Graph structure feature methods leverage handcrafted node proximity scores, e.g., common neighbors, to estimate the likelihood of links. Latent feature methods factorize a network's matrix representations to learn an embedding for each node. Explicit feature methods train a machine learning model on two nodes' explicit attributes. Each of the three types has its unique merits. In this paper, we propose SEAL (learning from Subgraphs, Embeddings, and Attributes for Link prediction), a new link prediction framework that combines the power of all three types in a single graph neural network (GNN). A GNN is a type of neural network that directly accepts graphs as input and outputs their labels. In SEAL, the input to the GNN is a local subgraph around each target link. We prove theoretically that these local subgraphs preserve a great deal of the high-order graph structure features related to link existence. Another key feature is that our GNN can naturally incorporate latent and explicit features: node embeddings (latent features) and node attributes (explicit features) are concatenated in each subgraph's node information matrix, thus combining the three types of features to enhance GNN learning. Through extensive experiments, SEAL shows unprecedentedly strong performance against a wide range of baseline methods, including various link prediction heuristics and network embedding methods.
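The local subgraph fed to the GNN can be extracted with a breadth-first search around both endpoints of the target link. The sketch below (data structure and helper name are our own; SEAL additionally labels nodes by their distance to the link) collects the $h$-hop enclosing subgraph:

```python
from collections import deque

def enclosing_subgraph(adj, u, v, h=1):
    """Extract the h-hop enclosing subgraph around a target link
    (u, v) -- the kind of local subgraph SEAL feeds to its GNN.
    `adj` maps each node to its set of neighbours; details such as
    the representation are illustrative."""
    keep = set()
    for src in (u, v):
        # breadth-first search up to depth h from each endpoint
        dist = {src: 0}
        q = deque([src])
        while q:
            w = q.popleft()
            if dist[w] == h:
                continue
            for nb in adj[w]:
                if nb not in dist:
                    dist[nb] = dist[w] + 1
                    q.append(nb)
        keep |= set(dist)
    # induced edge set on the kept nodes (undirected, stored once)
    edges = {(a, b) for a in keep for b in adj[a] if b in keep and a < b}
    return keep, edges
```

The union of the two $h$-hop neighbourhoods, with its induced edges, is what gets encoded in the subgraph's node information matrix alongside embeddings and attributes.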