In this paper, we extend diagrammatic reasoning in monoidal categories with algebraic operations and equations. We achieve this by considering monoidal categories that are enriched in the category of Eilenberg-Moore algebras for a monad. Under the condition that this monad is monoidal and affine, we construct an adjunction between symmetric monoidal categories and symmetric monoidal categories enriched over algebras for the monad. This allows us to devise an extension of the ZX-calculus with probabilistic choices, together with its semantics, by freely enriching over convex algebras, which are the algebras of the finite distribution monad. We show how this construction can be used for diagrammatic reasoning about noise in quantum systems.
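To fix intuition for the algebraic structure involved, the following is a minimal Python sketch of the finite distribution monad and of a convex-algebra (barycentric) operation acting on formal labels that stand in for diagrams; the class names and labels are hypothetical illustrations, not the paper's enriched ZX construction.

from collections import defaultdict

class Dist:
    """A finitely supported probability distribution (finite distribution monad)."""
    def __init__(self, weights):
        # weights: dict mapping outcomes to probabilities summing to 1
        self.weights = dict(weights)

    @staticmethod
    def unit(x):
        # Monad unit: the point (Dirac) distribution on x.
        return Dist({x: 1.0})

    def bind(self, f):
        # Monad bind: push forward and flatten a distribution of distributions.
        out = defaultdict(float)
        for x, p in self.weights.items():
            for y, q in f(x).weights.items():
                out[y] += p * q
        return Dist(out)

def convex_sum(pairs):
    """A convex-algebra operation: the barycentre sum_i p_i . x_i, represented as a
    formal distribution over labels standing in for ZX diagrams (illustrative only)."""
    out = defaultdict(float)
    for p, x in pairs:
        out[x] += p
    return Dist(out)

# Example: a probabilistic choice between two formal diagrams, e.g. a noisy wire.
noisy = convex_sum([(0.9, "Z-spider"), (0.1, "X-error ; Z-spider")])
print(noisy.weights)                        # {'Z-spider': 0.9, 'X-error ; Z-spider': 0.1}
# One monad law in action: binding with the unit leaves the distribution unchanged.
assert noisy.bind(Dist.unit).weights == noisy.weights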
In this paper, we explore optimal treatment allocation policies that target distributional welfare. Most literature on treatment choice has considered utilitarian welfare based on the conditional average treatment effect (CATE). While average welfare is intuitive, it may yield undesirable allocations, especially when individuals are heterogeneous (e.g., with outliers), the very reason individualized treatments were introduced in the first place. This observation motivates us to propose an optimal policy that allocates the treatment based on the conditional quantile of individual treatment effects (QoTE). Depending on the choice of the quantile probability, this criterion can accommodate a policymaker who is either prudent or negligent. The challenge in identifying the QoTE is that it requires knowledge of the joint distribution of the counterfactual outcomes, which is generally hard to recover even with experimental data. Therefore, we introduce minimax policies that are robust to model uncertainty. A range of identifying assumptions can be used to yield more informative policies. For both stochastic and deterministic policies, we establish asymptotic bounds on the regret of implementing the proposed policies. In simulations and two empirical applications, we compare optimal decisions based on the QoTE with decisions based on other criteria. The framework can be generalized to any setting where welfare is defined as a functional of the joint distribution of the potential outcomes.
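To see concretely why the QoTE is only partially identified, recall the classical Makarov bounds on the distribution of the individual treatment effect $\Delta = Y(1) - Y(0)$ in terms of the marginal (conditional) distributions $F_1$ and $F_0$ alone; the notation here is ours, and the paper's identifying assumptions can tighten these bounds.
\[
\sup_{y}\,\max\{F_1(y) - F_0(y-\delta),\,0\} \;\le\; F_{\Delta}(\delta) \;\le\; 1 + \inf_{y}\,\min\{F_1(y) - F_0(y-\delta),\,0\}.
\]
Inverting the upper and lower bounds yields an identified interval for the $\tau$-quantile of $\Delta$, and a minimax policy must guard against every joint distribution of the counterfactual outcomes that is consistent with such bounds.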
In this paper, we focus on the design of binary constant-weight codes that admit low-complexity encoding and decoding algorithms, and whose size is a power of $2$. We construct a family of $(n=2^\ell, M=2^k, d=2)$ constant-weight codes ${\cal C}[\ell, r]$ parameterized by integers $\ell \geq 3$ and $1 \leq r \leq \lfloor \frac{\ell+3}{4} \rfloor$, by encoding information in the gaps between successive $1$'s of a vector. The code has weight $w = \ell$ and a combinatorial dimension $k$ that scales quadratically with $\ell$. The encoding time is linear in the input size $k$, and the decoding time is poly-logarithmic in the input size $n$, discounting the linear time spent on parsing the input. Encoding and decoding algorithms for similar codes known in the information-theoretic and combinatorial literature require the computation of a large number of binomial coefficients. Our algorithms fully eliminate the need to evaluate binomial coefficients. While the code pays a natural price in $k$, it performs fairly well against the information-theoretic upper bound $\lfloor \log_2 {n \choose w} \rfloor$. When $\ell = 3$, the code is optimal, achieving the upper bound; when $\ell = 4$, it is one bit away from the upper bound; and as $\ell$ grows, it is order-optimal in the sense that the ratio of $k$ to its upper bound approaches the constant $\frac{11}{16}$ when $r=\lfloor \frac{\ell+3}{4} \rfloor$. With the same or even lower complexity, we derive new codes permitting a wider range of parameters by modifying ${\cal C}[\ell, r]$ in two different ways. The code derived using the first approach has the same blocklength $n=2^\ell$, but the weight $w$ is allowed to vary from $\ell-1$ to $1$. In the second approach, the weight remains fixed at $w = \ell$, but the blocklength is reduced to $n=2^\ell - 2^r +1$. For certain selected parameter values, these modified codes achieve an optimal $k$.
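As a deliberately simple illustration of the gap-encoding idea (not the paper's construction ${\cal C}[\ell, r]$, and not tuned for rate), the Python sketch below writes each $b$-bit chunk of the message as the run of $0$'s placed before one of the $w$ ones, then pads with trailing zeros; it requires $n \geq w \cdot 2^b$ and, like the paper's codes, uses no binomial coefficients.

def encode(bits, w, b, n):
    """Map w*b message bits to a length-n, weight-w vector: each b-bit chunk is the
    gap (run of 0's) placed before the corresponding 1.  Toy scheme, requires n >= w*2**b."""
    assert len(bits) == w * b and n >= w * 2 ** b
    x = []
    for i in range(w):
        gap = int("".join(str(bit) for bit in bits[i * b:(i + 1) * b]), 2)
        x.extend([0] * gap)
        x.append(1)
    x.extend([0] * (n - len(x)))            # pad with trailing zeros to the fixed length n
    return x

def decode(x, w, b):
    """Recover the message by reading the gaps before each of the first w ones."""
    bits, gap, ones = [], 0, 0
    for bit in x:
        if bit == 1:
            bits.extend(int(c) for c in format(gap, "0{}b".format(b)))
            gap, ones = 0, ones + 1
            if ones == w:
                break
        else:
            gap += 1
    return bits

msg = [1, 0, 0, 1, 1, 1]                    # 6 message bits with w = 3, b = 2
cw = encode(msg, w=3, b=2, n=12)
assert sum(cw) == 3 and decode(cw, w=3, b=2) == msg
print(cw)                                   # [0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0]

Since any two distinct binary vectors of the same weight differ in at least two positions, such a constant-weight code automatically has minimum distance $d \geq 2$.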
In this paper, we concentrate on solving second-order singularly perturbed Fredholm integro-differential equations (SPFIDEs). It is well known that solving these equations analytically is a challenging endeavor because of the presence of boundary and interior layers within the domain. To overcome these challenges, we develop a fitted second-order difference scheme that captures the layer behavior of the solution accurately and efficiently. The scheme is based on integral identities with exponential basis functions, the composite trapezoidal rule, and appropriate interpolating quadrature rules with remainder terms in integral form, all on a piecewise uniform mesh. Hence, our numerical method serves as a strong alternative to the existing methods in the literature. Further, using appropriate error-analysis techniques, we establish the scheme's convergence and stability in the discrete maximum norm. We provide numerical experiments that corroborate the theoretical results with a high degree of accuracy.
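For concreteness, two of the ingredients named above typically take the following form on a unit interval with $N$ mesh points: a piecewise-uniform (Shishkin-type) mesh refined near the layers, and the composite trapezoidal rule for the Fredholm integral term; the transition parameter and constants shown here are generic illustrations and may differ from those used in the paper.
\[
\sigma = \min\Bigl\{\tfrac{1}{4},\, C\varepsilon \ln N\Bigr\}, \qquad \text{uniform with } N/4 \text{ subintervals on } [0,\sigma] \text{ and } [1-\sigma,1], \text{ and } N/2 \text{ on } [\sigma, 1-\sigma],
\]
\[
\int_0^1 K(x,t)\,u(t)\,dt \;\approx\; \sum_{i=1}^{N} \frac{h_i}{2}\bigl(K(x,t_{i-1})\,u_{i-1} + K(x,t_i)\,u_i\bigr), \qquad h_i = t_i - t_{i-1}.
\]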
In this paper, we provide a theoretical analysis of a preconditioned steepest descent (PSD) iterative solver that improves the computational efficiency of a finite difference numerical scheme for the Cahn-Hilliard equation with the Flory-Huggins energy potential. In the numerical design, a convex splitting approach is applied to the chemical potential, so that the logarithmic and surface diffusion terms are treated implicitly while the expansive concave term is treated with an explicit update. The nonlinear and singular nature of the logarithmic energy potential makes the numerical implementation very challenging. However, the positivity-preserving property for the logarithmic arguments, unconditional energy stability, and optimal-rate error estimates have been established in a recent work, and it has been shown that a successful solver must maintain a similar positivity-preserving property at each iteration stage. Therefore, in this work, we show that the PSD solver preserves positivity at each iteration stage. The PSD solver first computes a search direction (which involves solving a Poisson-like equation) and then takes a one-parameter optimization step along that direction, for which Newton's iteration is very effective. We analyze the PSD iteration and prove a geometric convergence rate. In particular, the strict separation property of the numerical solution, namely a uniform distance between the numerical solution and the singular limit values of $\pm 1$ for the phase variable, plays an essential role in the iteration convergence analysis. A few numerical results are presented to demonstrate the robustness and efficiency of the PSD solver.
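As a minimal illustration of the two PSD stages described above (a preconditioned search direction followed by a scalar Newton line search with a positivity safeguard), here is a hedged Python sketch on a toy convex objective with a logarithmic, Flory-Huggins-like term; the matrix, preconditioner, and safeguards are illustrative choices, not the paper's numerical scheme.

import numpy as np

# Toy objective J(u) = 0.5 u^T A u - b^T u + sum[(1+u)ln(1+u) + (1-u)ln(1-u)],
# convex on the open box u in (-1, 1)^n; A is a 1-D Laplacian-like SPD matrix.
n = 50
A = np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
rng = np.random.default_rng(0)
b = 0.1 * rng.standard_normal(n)
P = A + np.eye(n)                           # Poisson-like preconditioner (illustrative)

def J(u):
    return 0.5 * u @ A @ u - b @ u + np.sum((1 + u) * np.log(1 + u) + (1 - u) * np.log(1 - u))

def grad(u):
    return A @ u - b + np.log(1 + u) - np.log(1 - u)

def psd_step(u):
    # Stage 1: preconditioned search direction (solve the Poisson-like system).
    d = -np.linalg.solve(P, grad(u))
    # Largest step keeping u + alpha*d strictly inside (-1, 1).
    with np.errstate(divide="ignore", invalid="ignore"):
        amax = np.where(d > 0, (1 - u) / d, np.where(d < 0, (-1 - u) / d, np.inf)).min()
    # Stage 2: scalar Newton iteration for alpha minimizing J(u + alpha*d), safeguarded.
    alpha = 0.0
    for _ in range(20):
        v = u + alpha * d
        dq = d @ (A @ v - b + np.log(1 + v) - np.log(1 - v))          # q'(alpha)
        d2q = d @ A @ d + np.sum(d ** 2 * (1.0 / (1 + v) + 1.0 / (1 - v)))  # q''(alpha)
        alpha = min(max(alpha - dq / d2q, 0.0), 0.99 * amax)           # keeps positivity of 1 +/- u
    return u + alpha * d

u = np.zeros(n)
for _ in range(40):
    u = psd_step(u)
print("J(u) =", J(u), " max|u| =", np.abs(u).max())                    # iterate stays inside (-1, 1)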
Identifiability of a mathematical model plays a crucial role in the parameterization of the model. In this study, we establish the structural identifiability of a Susceptible-Exposed-Infected-Recovered (SEIR) model given different combinations of input data and investigate practical identifiability with respect to different observable data, data frequencies, and noise distributions. Practical identifiability is explored via both Monte Carlo simulations and a correlation matrix approach. Our results show that practical identifiability benefits from higher data frequency and from data collected around the peak of an outbreak. Incidence data give the best practical identifiability results, compared with prevalence and cumulative data. In addition, we compare and contrast the practical identifiability obtained by Monte Carlo simulations and by the correlation matrix approach, providing insight into when to use each method in other applications.
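Below is a minimal Python sketch of the Monte Carlo approach to practical identifiability for an SEIR model: simulate synthetic incidence data, perturb it with noise, refit, and inspect the spread of the recovered parameters. The parameter values, 5% noise model, and fitting choices are illustrative assumptions, not those of the study.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def seir(t, y, beta, sigma, gamma, N):
    S, E, I, R = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

def incidence(params, t_obs, y0, N):
    beta, sigma, gamma = params
    sol = solve_ivp(seir, (t_obs[0], t_obs[-1]), y0, t_eval=t_obs,
                    args=(beta, sigma, gamma, N), rtol=1e-6)
    return sigma * sol.y[1]                 # new infections per unit time ("incidence")

# "True" parameters and synthetic daily incidence data (illustrative values).
N, y0 = 1e5, [1e5 - 10, 0.0, 10.0, 0.0]
true = np.array([0.5, 0.2, 0.1])            # beta, sigma, gamma
t_obs = np.linspace(0, 120, 121)
clean = incidence(true, t_obs, y0, N)

# Monte Carlo practical identifiability: add noise, refit, look at the parameter spread.
rng = np.random.default_rng(1)
estimates = []
for _ in range(20):
    noisy = clean * (1 + 0.05 * rng.standard_normal(clean.size))      # 5% relative noise
    fit = least_squares(lambda p: incidence(p, t_obs, y0, N) - noisy,
                        x0=[0.4, 0.3, 0.15], bounds=(1e-6, 2.0))
    estimates.append(fit.x)
estimates = np.array(estimates)
print("relative std of (beta, sigma, gamma):", estimates.std(axis=0) / true)
# A small relative spread across replicates suggests practical identifiability.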
In this paper, we tackle two challenges in multimodal learning for visual recognition: 1) when modalities are missing either during training or testing in real-world situations; and 2) when the computational resources are not available to finetune heavy transformer models. To this end, we propose to leverage prompt learning to mitigate these two challenges together. Specifically, our modality-missing-aware prompts can be plugged into multimodal transformers to handle general missing-modality cases, while requiring less than 1% of the learnable parameters needed to train the entire model. We further explore the effect of different prompt configurations and analyze the robustness to missing modalities. Extensive experiments show the effectiveness of our prompt learning framework, which improves performance under various missing-modality cases while alleviating the requirement of heavy model re-training. Code is available.
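A minimal PyTorch sketch of the missing-modality-aware prompt idea: a small set of learnable prompt tokens, selected by the missing-modality case, is prepended to the (frozen) multimodal transformer's input sequence, and only the prompts and a light head are trained. The module names, dimensions, and case encoding below are illustrative, not the paper's implementation.

import torch
import torch.nn as nn

class MissingAwarePrompts(nn.Module):
    """Learnable prompts, one set per missing-modality case:
    0 = both modalities present, 1 = text missing, 2 = image missing (illustrative)."""
    def __init__(self, num_cases=3, prompt_len=16, dim=768):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_cases, prompt_len, dim) * 0.02)

    def forward(self, tokens, case):
        # tokens: (B, L, dim) fused multimodal token sequence; case: (B,) long tensor
        p = self.prompts[case]                          # (B, prompt_len, dim)
        return torch.cat([p, tokens], dim=1)            # prepend prompts to the sequence

# Usage sketch: freeze a (stand-in) transformer backbone; train only prompts and the head.
layer = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
backbone = nn.TransformerEncoder(layer, num_layers=2)
for prm in backbone.parameters():
    prm.requires_grad_(False)

prompts = MissingAwarePrompts()
head = nn.Linear(768, 10)

tokens = torch.randn(4, 40, 768)                        # dummy fused tokens for a batch of 4
case = torch.tensor([0, 1, 2, 1])                       # per-sample missing-modality case
feats = backbone(prompts(tokens, case))                 # (4, 56, 768)
logits = head(feats[:, 0])                              # classify from the first prompt token
print(logits.shape)                                     # torch.Size([4, 10])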
Recently, contrastive learning (CL) has emerged as a successful method for unsupervised graph representation learning. Most graph CL methods first perform stochastic augmentation on the input graph to obtain two graph views and then maximize the agreement of the representations in the two views. Despite the prosperous development of graph CL methods, the design of graph augmentation schemes, a crucial component in CL, remains rarely explored. We argue that data augmentation schemes should preserve the intrinsic structures and attributes of graphs, which will force the model to learn representations that are insensitive to perturbations on unimportant nodes and edges. However, most existing methods adopt uniform data augmentation schemes, like uniformly dropping edges and uniformly shuffling features, leading to suboptimal performance. In this paper, we propose a novel graph contrastive representation learning method with adaptive augmentation that incorporates various priors for the topological and semantic aspects of the graph. Specifically, on the topology level, we design augmentation schemes based on node centrality measures to highlight important connective structures. On the node attribute level, we corrupt node features by adding more noise to unimportant node features, to force the model to recognize the underlying semantic information. We perform extensive node classification experiments on a variety of real-world datasets. Experimental results demonstrate that our proposed method consistently outperforms existing state-of-the-art baselines and even surpasses some supervised counterparts, which validates the effectiveness of the proposed contrastive framework with adaptive augmentation.
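A small NumPy sketch of the adaptive augmentation idea: edges touching low-centrality nodes are dropped with higher probability, and feature dimensions with low importance weights are masked more often. Degree is used as the centrality measure and the importance weights are a simple stand-in here, purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

def adaptive_edge_drop(edges, num_nodes, p_base=0.3, p_cap=0.7):
    """edges: (E, 2) int array.  Centrality-aware edge dropping: higher-centrality
    edges get a lower drop probability."""
    deg = np.bincount(edges.ravel(), minlength=num_nodes).astype(float)
    s = np.log1p(deg[edges].mean(axis=1))               # edge centrality score
    p = (s.max() - s) / (s.max() - s.mean() + 1e-12) * p_base
    p = np.minimum(p, p_cap)                            # cap the drop probability
    keep = rng.random(len(edges)) >= p
    return edges[keep]

def adaptive_feature_mask(X, weights, p_base=0.3, p_cap=0.7):
    """Mask feature dimensions; dimensions with low importance weights are masked more."""
    w = np.log1p(np.abs(weights))
    p = (w.max() - w) / (w.max() - w.mean() + 1e-12) * p_base
    p = np.minimum(p, p_cap)
    mask = rng.random(X.shape[1]) >= p                  # keep each dimension with prob 1 - p
    return X * mask

# Toy usage: a 5-node path graph with random node features.
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])
X = rng.standard_normal((5, 8))
view_edges = adaptive_edge_drop(edges, num_nodes=5)
view_X = adaptive_feature_mask(X, weights=np.abs(X).mean(axis=0))
print(len(view_edges), view_X.shape)

In a full pipeline, two such views would be fed to a shared graph encoder and a contrastive objective would maximize agreement between the two sets of node representations.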
The key issue in few-shot learning is learning to generalize. In this paper, we propose a large margin principle to improve the generalization capacity of metric-based methods for few-shot learning. To realize this, we develop a unified framework that learns a more discriminative metric space by augmenting the softmax classification loss with a large-margin distance loss during training. Extensive experiments on two state-of-the-art few-shot learning models, graph neural networks and prototypical networks, show that our method can substantially improve the performance of existing models with very little computational overhead, demonstrating the effectiveness of the large margin principle and the potential of our method.
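A minimal PyTorch sketch of augmenting a prototype-based (metric) few-shot classifier's softmax cross-entropy with a large-margin distance term; the specific margin formulation below, a triplet-style hinge on distances to class prototypes, is one natural instantiation and not necessarily the paper's exact loss.

import torch
import torch.nn.functional as F

def large_margin_metric_loss(embeddings, prototypes, labels, margin=1.0, lam=0.1):
    """embeddings: (B, d) query embeddings; prototypes: (C, d) class prototypes;
    labels: (B,) ground-truth classes.  Hyperparameters are illustrative."""
    d2 = torch.cdist(embeddings, prototypes) ** 2       # squared distances, shape (B, C)
    ce = F.cross_entropy(-d2, labels)                   # prototypical-network softmax loss

    # Large-margin term: the distance to the true prototype should be smaller,
    # by at least `margin`, than the distance to the nearest wrong prototype.
    d_true = d2.gather(1, labels[:, None]).squeeze(1)
    d_wrong = d2.masked_fill(F.one_hot(labels, d2.size(1)).bool(), float("inf")).min(dim=1).values
    margin_loss = F.relu(d_true - d_wrong + margin).mean()
    return ce + lam * margin_loss

# Toy usage
B, C, d = 8, 5, 16
emb = torch.randn(B, d, requires_grad=True)
proto = torch.randn(C, d)
y = torch.randint(0, C, (B,))
loss = large_margin_metric_loss(emb, proto, y)
loss.backward()
print(float(loss))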
In this paper, we propose a novel multi-task learning architecture that incorporates recent advances in attention mechanisms. Our approach, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with task-specific soft-attention modules, which are trainable in an end-to-end manner. These attention modules allow task-specific features to be learned from the global pool, while simultaneously allowing features to be shared across different tasks. The architecture can be built upon any feed-forward neural network, is simple to implement, and is parameter efficient. Experiments on the Cityscapes dataset show that our method outperforms several baselines in both single-task and multi-task learning, and is also more robust to the various weighting schemes in the multi-task loss function. We further explore the effectiveness of our method through experiments over a range of task complexities, and show that our method scales well with task complexity compared to the baselines.
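A minimal PyTorch sketch of a task-specific soft-attention module that gates a shared ("global") feature map, in the spirit of the description above; the layer sizes, single backbone stage, and output channel counts are illustrative, whereas the actual architecture attaches such modules at several depths of the shared network.

import torch
import torch.nn as nn

class TaskAttention(nn.Module):
    """Task-specific soft attention over a shared feature map."""
    def __init__(self, channels):
        super().__init__()
        self.att = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.BatchNorm2d(channels // 4),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.Sigmoid(),                                # per-pixel, per-channel mask in [0, 1]
        )

    def forward(self, shared_feat):
        return self.att(shared_feat) * shared_feat       # element-wise gating of the shared pool

# Toy usage: one shared backbone stage, two tasks with their own attention modules and heads.
shared = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
attn = nn.ModuleList([TaskAttention(64) for _ in range(2)])
heads = nn.ModuleList([nn.Conv2d(64, c_out, 1) for c_out in (13, 1)])  # e.g. segmentation, depth

x = torch.randn(2, 3, 64, 64)
g = shared(x)                                            # global feature pool
outputs = [head(att(g)) for att, head in zip(attn, heads)]
print([o.shape for o in outputs])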
In this paper, we propose a conceptually simple and geometrically interpretable objective function, the additive margin Softmax (AM-Softmax), for deep face verification. In general, the face verification task can be viewed as a metric learning problem, so learning large-margin face features, whose intra-class variation is small and inter-class difference is large, is of great importance for achieving good performance. Recently, Large-margin Softmax and Angular Softmax have been proposed to incorporate the angular margin in a multiplicative manner. In this work, we introduce a novel additive angular margin for the Softmax loss, which is intuitively appealing and more interpretable than the existing works. We also emphasize and discuss the importance of feature normalization. Most importantly, our experiments on LFW BLUFR and MegaFace show that our additive margin Softmax loss consistently performs better than the current state-of-the-art methods using the same network architecture and training dataset. Our code has also been made available at //github.com/happynear/AMSoftmax
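The additive-margin softmax in one short PyTorch sketch: features and class weights are L2-normalized, a margin m is subtracted from the target-class cosine, and the result is scaled by s before the usual cross-entropy. This follows the standard AM-Softmax formulation; the hyperparameter values and dimensions are chosen only for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AMSoftmaxLoss(nn.Module):
    def __init__(self, in_dim, num_classes, s=30.0, m=0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, in_dim))
        self.s, self.m = s, m

    def forward(self, features, labels):
        # Cosine similarities between normalized features and normalized class weights.
        cos = F.linear(F.normalize(features), F.normalize(self.weight))   # (B, C)
        # Subtract the additive margin from the target-class cosine only, then scale by s.
        onehot = F.one_hot(labels, cos.size(1)).to(cos.dtype)
        logits = self.s * (cos - self.m * onehot)
        return F.cross_entropy(logits, labels)

# Toy usage
loss_fn = AMSoftmaxLoss(in_dim=128, num_classes=10)
feats = torch.randn(16, 128, requires_grad=True)
labels = torch.randint(0, 10, (16,))
loss = loss_fn(feats, labels)
loss.backward()
print(float(loss))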