In this work we introduce $\nu^2$-Flows, an extension of the $\nu$-Flows method to final states containing multiple neutrinos. The architecture natively scales to all combinations of object types and multiplicities in the final state, for any desired neutrino multiplicity. In $t\bar{t}$ dilepton events, the momenta of both neutrinos, and the correlations between them, are reconstructed more accurately than with the most popular standard analytical techniques, and solutions are found for all events. Inference time is significantly faster than for competing methods and can be reduced further by evaluating in parallel on graphics processing units. We apply $\nu^2$-Flows to $t\bar{t}$ dilepton events and show that the per-bin uncertainties in unfolded distributions are much closer to the limit of performance set by perfect neutrino reconstruction than those of standard techniques. For the chosen double-differential observables, $\nu^2$-Flows improves the statistical precision in each bin by a factor of 1.5 to 2 in comparison to the Neutrino Weighting method, and by up to a factor of four in comparison to the Ellipse approach.
Multiple-input multiple-output (MIMO) systems will play a crucial role in future wireless communication, but improving their signal detection performance to increase transmission efficiency remains a challenge. To address this issue, we propose extending the discrete signal detection problem in MIMO systems to a continuous one and applying the Hamiltonian Monte Carlo method, an efficient Markov chain Monte Carlo algorithm. In our previous studies, we used a mixture of normal distributions for the prior distribution. In this study, we propose using a mixture of t-distributions, which further improves detection performance. Theoretical analysis and computer simulations show that the proposed method can achieve near-optimal signal detection with polynomial computational complexity. This high-performance and practical MIMO signal detection method could contribute to the development of the 6th-generation mobile network.
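A minimal sketch of the approach described above, assuming BPSK symbols and a mixture-of-t prior; the channel size, the t-parameters ($\nu$, $s$) and all step sizes are illustrative choices, not the paper's settings:

```python
# Hedged sketch: HMC detection of BPSK symbols via a continuous relaxation
# with a mixture-of-t prior. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_rx, sigma = 4, 4, 0.3
nu, s = 3.0, 0.2                       # t-distribution dof and scale (assumed)
H = rng.normal(size=(n_rx, n_tx))
x_true = rng.choice([-1.0, 1.0], size=n_tx)
y = H @ x_true + sigma * rng.normal(size=n_rx)

def neg_log_post(x):
    # Gaussian likelihood plus a mixture-of-t prior centred at the
    # constellation points -1 and +1 (heavier tails than a normal mixture).
    lik = np.sum((y - H @ x) ** 2) / (2 * sigma ** 2)
    comp = lambda c: (1 + ((x - c) / s) ** 2 / nu) ** (-(nu + 1) / 2)
    return lik - np.sum(np.log(0.5 * comp(-1.0) + 0.5 * comp(1.0)))

def grad(x, eps=1e-5):                 # numerical gradient keeps the sketch short
    return np.array([(neg_log_post(x + r) - neg_log_post(x - r)) / (2 * eps)
                     for r in np.eye(len(x)) * eps])

def hmc_step(x, step=0.05, n_leap=20):
    p0 = rng.normal(size=x.shape)
    x_new, p = x.copy(), p0 - 0.5 * step * grad(x)
    for i in range(n_leap):            # leapfrog integration
        x_new = x_new + step * p
        p = p - (step if i < n_leap - 1 else 0.5 * step) * grad(x_new)
    h0 = neg_log_post(x) + 0.5 * p0 @ p0
    h1 = neg_log_post(x_new) + 0.5 * p @ p
    return x_new if np.log(rng.uniform()) < h0 - h1 else x

x = np.zeros(n_tx)
for _ in range(200):                   # short chain; hard decision at the end
    x = hmc_step(x)
print("detected:", np.sign(x), "true:", x_true)
```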
Due to their intrinsic capability for parallel signal processing, optical neural networks (ONNs) have recently attracted extensive interest as a potential alternative to electronic artificial neural networks (ANNs), offering reduced power consumption and low latency. The parallelism of optical computing has been widely demonstrated by applying wavelength division multiplexing (WDM) in the linear-transformation part of neural networks. However, inter-channel crosstalk has prevented WDM technologies from being deployed in the nonlinear activation of ONNs. Here, we propose a universal WDM structure called multiplexed neuron sets (MNS), which applies WDM technologies to optical neurons and enables ONNs to be further compressed. A corresponding back-propagation (BP) training algorithm is proposed to alleviate, or even cancel, the influence of inter-channel crosstalk on MNS-based WDM-ONNs. For simplicity, semiconductor optical amplifiers (SOAs) are employed as an example of MNS to construct a WDM-ONN trained with the new algorithm. The results show that the combination of MNS and the corresponding BP training algorithm significantly downsizes the system and improves the energy efficiency by up to tens of times, while delivering performance similar to that of traditional ONNs.
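A toy sketch of the general idea of training through a crosstalk model; the mixing matrix C, the tanh stand-in for the SOA nonlinearity, and the update rule are assumptions made here for illustration, not the paper's exact BP algorithm:

```python
# Hedged sketch: backpropagation through a fixed crosstalk matrix C, so the
# learned weights compensate for inter-channel crosstalk rather than being
# corrupted by it. All model details are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_ch = 8, 4                      # inputs and WDM channels (illustrative)
W = rng.normal(scale=0.5, size=(n_ch, n_in))
C = 0.9 * np.eye(n_ch) + 0.1 / (n_ch - 1) * (1 - np.eye(n_ch))  # ~10% crosstalk

act = np.tanh                          # stand-in for the SOA nonlinearity
dact = lambda z: 1 - np.tanh(z) ** 2

def forward(x):
    z = C @ (W @ x)                    # linear layer, then channel crosstalk
    return z, act(z)

# One gradient step for a squared-error loss: the chain rule passes
# through C, so its transpose appears in the weight update.
x, target = rng.normal(size=n_in), rng.normal(size=n_ch)
z, out = forward(x)
delta = (out - target) * dact(z)       # dLoss/dz
W -= 0.1 * np.outer(C.T @ delta, x)    # dLoss/dW includes C.T
```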
We study $L_2$-approximation problems in the worst case setting in the weighted Korobov spaces $H_{d,\boldsymbol{\alpha},\boldsymbol{\gamma}}$ with parameters $1\ge \gamma_1\ge \gamma_2\ge \cdots\ge 0$ and $\frac{1}{2}<\alpha_1\le \alpha_2\le \cdots$. We consider the worst case error of algorithms that use finitely many arbitrary continuous linear functionals. We discuss strong polynomial tractability (SPT), polynomial tractability (PT), and $(t_1,t_2)$-weak tractability ($(t_1,t_2)$-WT) for all $t_1>1$ and $t_2>0$, under the absolute or normalized error criterion. In particular, we obtain matching necessary and sufficient conditions for SPT and PT in terms of the parameters.
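For reference, the standard tractability notions involved, with $n(\varepsilon,d)$ denoting the information complexity, i.e. the minimal number of continuous linear functionals needed to achieve worst case error at most $\varepsilon$ in dimension $d$:

```latex
\begin{align*}
\text{PT:}\quad & n(\varepsilon,d) \le C\, d^{\,q}\, \varepsilon^{-p}
   \ \text{ for some } C,p,q>0 \text{ and all } \varepsilon\in(0,1),\ d\in\mathbb{N},\\
\text{SPT:}\quad & \text{PT holds with } q=0
   \ \text{(the infimum of such $p$ is the exponent of SPT)},\\
(t_1,t_2)\text{-WT:}\quad &
   \lim_{\varepsilon^{-1}+d\to\infty}
   \frac{\ln n(\varepsilon,d)}{\varepsilon^{-t_1}+d^{\,t_2}} = 0 .
\end{align*}
```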
Model degrees of freedom ($\mathrm{df}$) is a fundamental concept in statistics because it quantifies the flexibility of a fitting procedure and is indispensable in model selection. The $\mathrm{df}$ is often intuitively equated with the number of independent variables in the fitting procedure. But for adaptive regressions that perform variable selection (e.g., best subset regression), the model $\mathrm{df}$ is larger than the number of selected variables. The excess has been defined as the \emph{search degrees of freedom} ($\mathrm{sdf}$) to account for model selection. However, this definition is limited: it neither covers fitting procedures in augmented space, such as splines and regression trees, nor uses the same fitting procedure for $\mathrm{sdf}$ and $\mathrm{df}$. For example, the lasso's $\mathrm{sdf}$ is defined through the \emph{relaxed} lasso's $\mathrm{df}$ instead of the lasso's own $\mathrm{df}$. Here we propose a \emph{modified search degrees of freedom} ($\mathrm{msdf}$) to directly account for the cost of searching in the original or augmented space. Since many fitting procedures can be characterized by a linear operator, we define the search cost as the effort to determine such a linear operator. When we construct a linear operator for the lasso via iterative ridge regression, $\mathrm{msdf}$ offers a new perspective on its search cost. For some complex procedures, such as multivariate adaptive regression splines (MARS), the search cost needs to be pre-determined to serve as a tuning parameter for the procedure itself, but this pre-determined cost might be inaccurate. To investigate such inaccuracy, we develop two concepts, \emph{nominal} $\mathrm{df}$ and \emph{actual} $\mathrm{df}$, and formulate a property named \emph{self-consistency}, which holds when there is no gap between the nominal and the actual $\mathrm{df}$.
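For orientation, the standard covariance form of model degrees of freedom (Efron's definition), which the discussion above builds on:

```latex
% For data y_i with noise variance \sigma^2 and fitted values \hat y_i,
\[
\mathrm{df} \;=\; \frac{1}{\sigma^{2}} \sum_{i=1}^{n}
\operatorname{Cov}\bigl(\hat{y}_i,\, y_i\bigr),
\]
% so that for a linear smoother \hat{y} = S y one recovers
% \mathrm{df} = \operatorname{tr}(S).
```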
Machine learning techniques, in particular the so-called normalizing flows, are becoming increasingly popular in the context of Monte Carlo simulations, as they can effectively approximate target probability distributions. In the case of lattice field theories (LFT), the target distribution is given by the exponential of minus the action. The commonly used gradient estimator for the loss function, based on the "reparametrization trick", requires the derivative of the action with respect to the fields. This can incur a significant computational cost for complicated, non-local actions, e.g. the fermionic action in QCD. In this contribution, we propose a gradient estimator for normalizing flows, based on the REINFORCE algorithm, that avoids this issue. We apply it to the two-dimensional Schwinger model with Wilson fermions at criticality and show that it is up to ten times faster in terms of wall-clock time and requires up to $30\%$ less memory than the reparametrization-trick estimator. It is also more numerically stable, allowing for single-precision calculations and the use of half-float tensor cores. We present an in-depth analysis of the origins of those improvements. We believe that these benefits will also appear outside the realm of LFT, in any case where the target probability distribution is computationally intensive.
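A schematic comparison of the two estimators for the reverse Kullback-Leibler loss, not the paper's exact implementation; the flow interface (sample_base, forward, log_prob) is hypothetical, and S(x) denotes the action evaluated on field configurations:

```python
# Hedged sketch: reparametrization-trick vs REINFORCE gradient estimators.
import torch

def reparam_loss(flow, S, n):
    z = flow.sample_base(n)
    x, logq = flow.forward(z)            # gradients flow through x, so
    return (logq + S(x)).mean()          # backward() needs dS/dx

def reinforce_loss(flow, S, n):
    with torch.no_grad():                # samples and the signal are frozen
        z = flow.sample_base(n)
        x, logq = flow.forward(z)
        signal = logq + S(x)
        signal = signal - signal.mean()  # baseline reduces variance
    logq_x = flow.log_prob(x)            # density of *fixed* samples,
                                         # differentiable in the flow parameters
    return (signal * logq_x).mean()      # backward() never evaluates dS/dx
```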
Partite $3$-uniform hypergraphs are $3$-uniform hypergraphs in which each hyperedge contains exactly one vertex from each of $3$ disjoint vertex classes. We consider the degree sequence problem for partite $3$-uniform hypergraphs, that is, deciding whether such a hypergraph with prescribed degree sequences exists. We prove that this decision problem is NP-complete in general, and give a polynomial-time algorithm for third almost-regular degree sequences, that is, when each degree in one of the vertex classes is $k$ or $k-1$ for some fixed $k$, with no restriction on the other two vertex classes. We also consider the sampling problem, that is, uniformly sampling partite $3$-uniform hypergraphs with prescribed degree sequences. We propose a Parallel Tempering method in which the hypothetical energy of a hypergraph measures its deviation from the prescribed degree sequence. The method has been implemented and tested on synthetic and real data. It can also be applied to $\chi^2$ testing of contingency tables. We show that this hypergraph-based $\chi^2$ test is more sensitive than the standard $\chi^2$ test. The extra sensitivity is especially advantageous on small data sets, where the proposed Parallel Tempering method shows promising performance.
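A toy sketch of the sampling idea, where the energy is taken here to be the $L_1$ deviation of the realized degree sequences from the prescribed ones and each move toggles a single hyperedge; the paper's actual chain may differ in these details:

```python
# Hedged sketch: Parallel Tempering over partite 3-uniform hypergraphs.
# Ground states (energy 0) realize the prescribed degree sequences.
import numpy as np

rng = np.random.default_rng(0)
na, nb, nc = 4, 4, 4
target = [rng.integers(1, 4, n) for n in (na, nb, nc)]  # toy prescribed degrees

def energy(E):
    degs = [np.zeros(n, int) for n in (na, nb, nc)]
    for (a, b, c) in E:
        degs[0][a] += 1; degs[1][b] += 1; degs[2][c] += 1
    return sum(np.abs(d - t).sum() for d, t in zip(degs, target))

temps = [0.2, 0.5, 1.0, 2.0]
chains = [set() for _ in temps]        # a hypergraph is a set of triples
for step in range(5000):
    for E, T in zip(chains, temps):    # Metropolis move in each chain
        e = (rng.integers(na), rng.integers(nb), rng.integers(nc))
        E2 = E ^ {e}                   # toggle the hyperedge
        if np.log(rng.uniform()) < (energy(E) - energy(E2)) / T:
            E.symmetric_difference_update({e})
    i = rng.integers(len(temps) - 1)   # attempt a replica swap
    d = (1 / temps[i] - 1 / temps[i + 1]) * (energy(chains[i]) - energy(chains[i + 1]))
    if np.log(rng.uniform()) < d:
        chains[i], chains[i + 1] = chains[i + 1], chains[i]
```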
Block majorization-minimization (BMM) is a simple iterative algorithm for nonconvex constrained optimization that sequentially minimizes majorizing surrogates of the objective function in each block coordinate while the other coordinates are held fixed. BMM encompasses a large class of optimization algorithms, such as block coordinate descent and its proximal-point variant, expectation-maximization, and block projected gradient descent. We establish that, for general constrained nonconvex optimization, BMM with strongly convex surrogates can produce an $\epsilon$-stationary point within $O(\epsilon^{-2}(\log \epsilon^{-1})^{2})$ iterations and asymptotically converges to the set of stationary points. Furthermore, we propose a trust-region variant of BMM that can handle surrogates that are only convex and still obtains the same iteration complexity and asymptotic stationarity. These results hold robustly even when the convex sub-problems are solved inexactly, as long as the optimality gaps are summable. As an application, we show that a regularized version of the celebrated multiplicative update algorithm for nonnegative matrix factorization by Lee and Seung has an iteration complexity of $O(\epsilon^{-2}(\log \epsilon^{-1})^{2})$. The same result holds for a wide class of regularized nonnegative tensor decomposition algorithms as well as the classical block projected gradient descent algorithm. These theoretical results are validated through various numerical experiments.
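For illustration, the classical (unregularized) Lee-Seung multiplicative updates, each of which minimizes a majorizing surrogate of the Frobenius loss in one block while the other is held fixed, i.e. an instance of BMM; the regularized variant analysed in the paper is not reproduced here:

```python
# Classical multiplicative updates for NMF: X ~ W @ H with W, H >= 0.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((50, 30))               # nonnegative data matrix
r, eps = 5, 1e-12                      # rank and a small constant for stability
W, H = rng.random((50, r)), rng.random((r, 30))

for _ in range(200):
    # Each update is the exact minimizer of a majorizing surrogate of
    # ||X - WH||_F^2 in that block, so the loss is monotonically decreasing.
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

print("relative error:", np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```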
We obtain the dynamical equation of the boundary vorticity, which shows that the viscosity at a solid wall is effectively doubled, as if the fluid became more viscous at the boundary. For certain viscous flows, the boundary vorticity can be determined via the dynamical equation, up to bounded errors, for all time, without knowing the details of the main-stream flows. We then validate the dynamical equation by carrying out stochastic direct numerical simulations (i.e. the random vortex method for wall-bounded incompressible viscous flows) with two different means of updating the boundary vorticity: one using mollifiers of the Biot-Savart singular integral kernel, the other using the dynamical equation.
In many scientific applications the aim is to infer a function which is smooth in some areas but rough or even discontinuous in other areas of its domain. Such spatially inhomogeneous functions can be modelled in Besov spaces with suitable integrability parameters. In this work we study adaptive Bayesian inference over Besov spaces in the white noise model, from the point of view of rates of contraction, using $p$-exponential priors, which range between Laplace and Gaussian and possess regularity and scaling hyper-parameters. To achieve adaptation, we employ empirical and hierarchical Bayes approaches for tuning these hyper-parameters. Our results show that, while Gaussian priors are known to attain the minimax rate only in Besov spaces of spatially homogeneous functions, Laplace priors attain the minimax rate, or nearly the minimax rate, both in Besov spaces of spatially homogeneous functions and in Besov spaces permitting spatial inhomogeneities.
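For orientation, a standard formulation of the univariate density underlying $p$-exponential priors, which interpolates between the Laplace and Gaussian cases:

```latex
% Coefficients in a basis expansion are drawn independently from
\[
f_p(x) \;\propto\; \exp\!\left(-\frac{|x|^{p}}{p}\right),
\qquad 1 \le p \le 2,
\]
% recovering the Laplace prior at p=1 and the Gaussian prior at p=2.
```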
We hypothesize that, due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the others. Such behavior is counter-intuitive and, as we observe empirically, hurts the models' generalization. To estimate a model's dependence on each modality, we compute the gain in accuracy when the model has access to that modality in addition to another one. We refer to this gain as the conditional utilization rate. In our experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy for it based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm that balances the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
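A minimal sketch of the conditional utilization rate as defined above; `evaluate` is a hypothetical helper returning accuracy for a model restricted to a given set of modalities:

```python
# u(m1 | m2): the accuracy gained by adding modality m1 on top of m2 alone.
def conditional_utilization_rate(evaluate, m1, m2):
    acc_both = evaluate(modalities={m1, m2})
    acc_m2_only = evaluate(modalities={m2})
    return acc_both - acc_m2_only

# An imbalance such as u("audio", "video") >> u("video", "audio") would
# indicate that the model under-utilizes the video modality.
```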