In this paper, we derive results about the limiting distribution of the empirical magnetization vector and the maximum likelihood (ML) estimates of the natural parameters in the tensor Curie-Weiss Potts model. Our results reveal surprising new phase transition phenomena, including the existence of a smooth curve in the interior of the parameter plane on which the magnetization vector and the ML estimates have mixture limiting distributions, the latter comprising both continuous and discrete components, and a superefficiency phenomenon of the ML estimates, which stipulates an $N^{-3/4}$ rate of convergence of the estimates to some non-Gaussian distribution at certain special points of one type and an $N^{-5/6}$ rate of convergence to some other non-Gaussian distribution at another special point of a different type. The last case can arise only for one particular pair of values of the tensor interaction order and the number of colors. These results are then used to derive asymptotic confidence intervals for the natural parameters at all points where consistent estimation is possible.
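A minimal sketch of the central statistic, assuming the usual convention that for a $q$-color Potts model on $N$ sites the empirical magnetization vector collects the proportion of sites of each color; the sample below is uniform placeholder data, not a draw from the Curie-Weiss Potts measure.

```python
import numpy as np

def empirical_magnetization(colors, q):
    """Proportion of sites taking each of the q colors.

    `colors` is an array of length N with entries in {0, ..., q-1};
    the returned vector sums to 1.
    """
    counts = np.bincount(colors, minlength=q)
    return counts / len(colors)

# Example: N = 1000 sites, q = 3 colors drawn uniformly at random (placeholder data).
rng = np.random.default_rng(0)
sample = rng.integers(0, 3, size=1000)
print(empirical_magnetization(sample, q=3))
```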
In this work, we develop a novel efficient quadrature and sparse grid based polynomial interpolation method to price American options with multiple underlying assets. The approach is based on first formulating the pricing of American options using dynamic programming, and then employing static sparse grids to interpolate the continuation value function at each time step. To achieve high efficiency, we first transform the domain from $\mathbb{R}^d$ to $(-1,1)^d$ via a scaled tanh map, and then remove the boundary singularity of the resulting multivariate function over $(-1,1)^d$ by means of a bubble function, which simultaneously reduces the number of interpolation points significantly. We rigorously establish that with a proper choice of the bubble function, the resulting function has bounded mixed derivatives up to a certain order, which provides theoretical underpinnings for the use of sparse grids. Numerical experiments for American arithmetic and geometric basket put options with the number of underlying assets up to 16 are presented to validate the effectiveness of the approach.
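A hedged sketch of the two transformations mentioned, assuming a componentwise scaled tanh map and a simple polynomial bubble function; the scaling parameter and the bubble exponent are illustrative choices, not the values analyzed in the paper.

```python
import numpy as np

def to_unit_cube(x, scale=1.0):
    """Scaled tanh map from R^d to (-1, 1)^d (scale is an illustrative tuning parameter)."""
    return np.tanh(scale * x)

def from_unit_cube(y, scale=1.0):
    """Inverse map from (-1, 1)^d back to R^d."""
    return np.arctanh(y) / scale

def bubble(y, k=2):
    """A simple polynomial bubble function vanishing on the boundary of (-1, 1)^d."""
    return np.prod((1.0 - y**2) ** k, axis=-1)

# Idea: interpolate g(y) = bubble(y) * v(from_unit_cube(y)) on a sparse grid over (-1, 1)^d,
# then recover the continuation value as v ≈ g / bubble away from the boundary.
```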
As we embark on a new era of LLMs, it becomes increasingly crucial to understand their capabilities, limitations, and differences. Toward making further progress in this direction, we strive to build a deeper understanding of the gaps between massive LLMs (e.g., ChatGPT) and smaller yet effective open-source LLMs and their distilled counterparts. To this end, we specifically focus on long-form question answering (LFQA) because it has several practical and impactful applications (e.g., troubleshooting, customer service, etc.) yet is still understudied and challenging for LLMs. We propose a question-generation method from abstractive summaries and show that generating follow-up questions from summaries of long documents can create a challenging setting for LLMs to reason and infer from long contexts. Our experimental results confirm that: (1) our proposed method of generating questions from abstractive summaries poses a challenging setup for LLMs and reveals performance gaps between LLMs like ChatGPT and open-source LLMs (Alpaca, Llama); (2) open-source LLMs exhibit decreased reliance on context for questions generated from the original document, but their generation capabilities drop significantly on questions generated from summaries, especially for longer contexts (>1024 tokens).
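A rough sketch of the summary-then-question setup described above; `llm` is a hypothetical text-in/text-out callable standing in for ChatGPT or an open-source model, and the prompts are placeholders, not the authors' prompts.

```python
def generate_followup_questions(document, llm, n_questions=3):
    """Summarize a long document, then ask for follow-up questions from the summary."""
    summary = llm(f"Write an abstractive summary of the following document:\n{document}")
    questions = llm(
        f"Based on this summary, write {n_questions} follow-up questions "
        f"whose answers require information from the full document:\n{summary}"
    )
    return summary, questions

def answer_from_context(question, context, llm):
    """Answer a generated question using the long original document as context."""
    return llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
```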
Due to the imbalanced nature of networked observational data, the causal effect predictions for some individuals can severely violate the positivity/overlap assumption, rendering the estimations unreliable. Nevertheless, this potential risk of individual-level treatment effect estimation on networked data has been largely under-explored. To create a more trustworthy causal effect estimator, we propose the uncertainty-aware graph deep kernel learning (GraphDKL) framework with a Lipschitz constraint, which models the prediction uncertainty with a Gaussian process and identifies unreliable estimations. To the best of our knowledge, GraphDKL is the first framework to tackle the violation of the positivity assumption when performing causal effect estimation with graphs. With extensive experiments, we demonstrate the superiority of our proposed method in uncertainty-aware causal effect estimation on networked data.
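A generic sketch of how predictive uncertainty can be used to flag unreliable effect estimates; the Gaussian-process model, its `predict` interface, and the variance threshold are assumptions for illustration, not the GraphDKL implementation.

```python
import numpy as np

def flag_unreliable_estimates(mean, variance, threshold):
    """Keep effect estimates whose predictive variance is small; flag the rest as
    potential positivity violations. `threshold` is a user choice."""
    unreliable = variance > threshold
    trusted_effects = np.where(unreliable, np.nan, mean)
    return trusted_effects, unreliable

# Hypothetical usage: `gp_model` is assumed to return a predictive mean and variance
# per node of the graph (e.g., a graph deep kernel learning model).
# mu, var = gp_model.predict(graph, treatments)
# effects, mask = flag_unreliable_estimates(mu, var, threshold=0.5)
```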
In this paper, we study a sampling and transmission scheduling problem for multi-source remote estimation, where a scheduler determines when to take samples from multiple continuous-time Gauss-Markov processes and send the samples over multiple channels to remote estimators. The sample transmission times are i.i.d. across samples and channels. The objective of the scheduler is to minimize the weighted sum of the time-average expected estimation errors of these Gauss-Markov sources. This problem is a continuous-time Restless Multi-armed Bandit (RMAB) problem with a continuous state space. We prove that the arms are indexable and derive an exact expression of the Whittle index. To the best of our knowledge, this is the first Whittle index policy for multi-source signal-aware remote estimation. This result has two degenerate cases of interest: (i) In the single-source case, the Whittle index policy reproduces earlier threshold-based sampling policies for the remote estimation of Wiener and Ornstein-Uhlenbeck processes. When the instantaneous estimation error of the Gauss-Markov process crosses the optimal threshold, the Whittle index is precisely equal to 0. In addition, a new optimal sampling policy for the remote estimation of the unstable Ornstein-Uhlenbeck process is obtained. (ii) In the signal-agnostic case, we find an exact expression of the Whittle index for Age of Information (AoI)-based remote estimation, which complements earlier results by allowing for random transmission times. Our numerical results show that the proposed policy performs better than the signal-agnostic AoI-based Whittle index policy and the Maximum-Age-First, Zero-Wait (MAF-ZW) policy. The performance gain of the proposed policy is high when some of the Gauss-Markov processes are highly unstable or when the sample transmission times follow a heavy-tailed distribution.
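A minimal sketch of how a Whittle index policy is typically applied at each decision time; `whittle_index` is a placeholder for the paper's closed-form index expression, and activating only arms with non-negative index is one common convention, not necessarily the paper's exact rule.

```python
import numpy as np

def whittle_index_policy(errors, whittle_index, num_channels):
    """Pick the sources with the largest Whittle indices.

    `errors` holds the instantaneous estimation error of each Gauss-Markov source and
    `whittle_index` is a placeholder for a closed-form index function.
    """
    indices = np.array([whittle_index(e) for e in errors])
    # Schedule at most `num_channels` sources, and only those with a non-negative index.
    order = np.argsort(indices)[::-1][:num_channels]
    return [arm for arm in order if indices[arm] >= 0]
```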
In this paper, we propose a blind source separation method for linear mixtures of dependent sources, based on copula statistics that measure the non-linear dependence between source component signals through copula density functions. The source signals are assumed to be stationary. The method minimizes the Kullback-Leibler divergence between the copula density functions of the estimated sources and of the assumed dependency structure. The proposed method is applied to data obtained from the time-domain analysis of the classical 11-Bus 4-Machine system. Extensive simulation results demonstrate that the proposed method based on copula statistics converges faster and outperforms the state-of-the-art blind source separation method for dependent sources in terms of interference-to-signal ratio.
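A very high-level sketch of the separation idea under stated assumptions: a demixing matrix is sought that minimizes a copula-based divergence of the recovered sources. The random-perturbation search and the `copula_kl` objective are stand-ins for the paper's actual optimizer and divergence, included only to make the structure concrete.

```python
import numpy as np

def separate_sources(X, copula_kl, steps=500, lr=1e-2):
    """Find a demixing matrix W so that the estimated sources S = W X minimize a
    copula-based KL objective. `copula_kl` is a placeholder for the divergence
    between the copula density of S and the assumed dependency structure."""
    d = X.shape[0]
    W = np.eye(d)
    for _ in range(steps):
        # Random perturbation search; a stand-in for the actual optimization scheme.
        candidate = W + lr * np.random.randn(d, d)
        if copula_kl(candidate @ X) < copula_kl(W @ X):
            W = candidate
    return W, W @ X
```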
The aim of this work is to improve musculoskeletal-based models of the upper-limb Wrench Feasible Set, i.e., the set of achievable maximal wrenches at the hand, for applications in collaborative robotics and computer-aided ergonomics. In particular, a recent method performing wrench capacity evaluation, called the Iterative Convex Hull Method, is upgraded to integrate non-dislocation and compression-limitation constraints at the glenohumeral joint, which are not taken into account in the available models. Their effects on the amplitude of the force capacities at the hand, the glenohumeral joint reaction forces, and the upper-limb muscle coordination, in comparison to the original Iterative Convex Hull Method, are investigated in silico. The results highlight potential glenohumeral dislocation for the majority of elements of the wrench feasible set with the original Iterative Convex Hull Method, and show that the proposed modifications correctly satisfy the stability constraints at the glenohumeral joint. Also, the induced muscle coordination pattern favors the action of stabilizing muscles, in particular the rotator-cuff muscles, and lowers that of muscles known in the literature as potentially destabilizing.
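An illustrative check of the kind of joint-stability condition described, assuming a common formulation in which the glenohumeral reaction force must lie inside a cone around the glenoid normal (non-dislocation) while its compressive component stays bounded; the cone angle and compression bound are placeholder values, not the paper's constraints.

```python
import numpy as np

def glenohumeral_constraints_satisfied(reaction, glenoid_normal,
                                       cone_angle_deg=30.0, max_compression=800.0):
    """Illustrative stability check on a 3D joint reaction force (N)."""
    n = glenoid_normal / np.linalg.norm(glenoid_normal)
    compression = float(np.dot(reaction, n))
    if compression <= 0.0:
        return False  # force points out of the glenoid: dislocating
    cos_angle = np.clip(compression / np.linalg.norm(reaction), -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_angle))
    return angle <= cone_angle_deg and compression <= max_compression
```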
In this paper, we exploit the unique properties of a deterministic projected belief network (D-PBN) to take full advantage of trainable compound activation functions (TCAs). A D-PBN is a type of auto-encoder that operates by "backing up" through a feed-forward neural network. TCAs are activation functions with complex monotonic-increasing shapes that change the distribution of the data so that the linear transformation that follows is more effective. Because a D-PBN operates by "backing up", the TCAs are inverted in the reconstruction process, restoring the original distribution of the data, thus taking advantage of a given TCA in both analysis and reconstruction. In this paper, we show that a D-PBN auto-encoder with TCAs can significantly outperform standard auto-encoders, including variational auto-encoders.
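One simple way to build a trainable, strictly increasing compound activation is a positive combination of shifted softplus units; this is an illustrative stand-in for the TCAs described above, and the parametrization (exponentiated weights, a small linear term) is an assumption made here to guarantee invertibility, which is what the "backing up" step relies on.

```python
import numpy as np

def softplus(x):
    """Numerically stable softplus."""
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def compound_activation(x, weights, shifts):
    """Strictly increasing compound activation: a positive combination of shifted
    softplus units plus a small linear term, so the function is invertible."""
    w = np.exp(np.asarray(weights))  # keep the combination weights positive
    return sum(wi * softplus(x - si) for wi, si in zip(w, shifts)) + 1e-3 * x

# Example: a 3-component compound shape evaluated on a grid.
x = np.linspace(-5, 5, 11)
print(compound_activation(x, weights=[0.0, -1.0, 0.5], shifts=[-2.0, 0.0, 2.0]))
```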
Throughout the analytical revolution that has occurred in the NBA, the development of specific metrics and formulas has given teams, coaches, and players a new way to see the game. However, the question arises: how can we verify any of these metrics? One method would simply be eyeball approximation (trying out many different gameplans) and/or trial and error, an estimation-based and costly approach. Another approach is to model already existing metrics with a unique set of features using machine learning techniques. The key to this approach is that, with the selected features, we can gauge their combined effectiveness rather than analyzing them individually in simple metric evaluation. If we have an accurate model, it can particularly help us determine the specifics of gameplan execution. In this paper, the statistic ORTG (Offensive Rating, developed by Dean Oliver) was found to correlate with different NBA playtypes using both a linear regression model and a neural network regression model, with the neural network ultimately performing slightly better than linear regression. Using the accuracy of the models as justification, the next step was to optimize the model output over test examples, demonstrating the combination of features that best achieves a highly functioning offense.
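A minimal sketch of the two regressions mentioned, fitting ORTG from playtype features; the feature matrix and target here are synthetic placeholders, and the network architecture is an assumption, not the paper's configuration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# X: playtype features per team-season, y: ORTG (synthetic placeholder data).
rng = np.random.default_rng(0)
X = rng.random((300, 10))
y = X @ rng.random(10) * 50 + 60 + rng.normal(0, 2, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LinearRegression().fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("linear R^2:", linear.score(X_te, y_te))
print("mlp R^2:   ", mlp.score(X_te, y_te))
```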
Generalized metric spaces are obtained by weakening the requirements (e.g., symmetry) on the distance function and by allowing it to take values in structures (e.g., quantales) that are more general than the set of non-negative real numbers. Quantale-valued metric spaces have gained prominence due to their use in quantitative reasoning on programs/systems, and for defining various notions of behavioral metrics. We investigate imprecision and robustness in the framework of quantale-valued metric spaces, when the quantale is continuous. In particular, we study the relation between the robust topology, which captures robustness of analyses, and the Hausdorff-Smyth hemi-metric. To this end, we define a preorder-enriched monad $\mathsf{P}_S$, called the Hausdorff-Smyth monad, and when $Q$ is a continuous quantale and $X$ is a $Q$-metric space, we relate the topology induced by the metric on $\mathsf{P}_S(X)$ with the robust topology on the powerset $\mathsf{P}(X)$ defined in terms of the metric on $X$.
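For orientation, one standard presentation of a directed (one-sided) Hausdorff-type hemi-metric between subsets $A, B \subseteq X$ of an ordinary metric space is
\[
  \vec{d}(A, B) \;=\; \sup_{a \in A} \, \inf_{b \in B} \, d(a, b),
\]
which is in general asymmetric. The Hausdorff-Smyth hemi-metric studied here should be understood as a directed construction of this kind in the quantale-valued setting, with the suprema and infima replaced by the joins and meets of the continuous quantale $Q$ and the direction fixed by the paper's construction; this display is an orienting sketch, not the paper's definition.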
In this paper, we study the shape reconstruction problem, when the shape we wish to reconstruct is an orientable smooth d-dimensional submanifold of the Euclidean space. Assuming we have as input a simplicial complex K that approximates the submanifold (such as the Cech complex or the Rips complex), we recast the problem of reconstructing the submanifold from K as an L1-norm minimization problem in which the optimization variable is a d-chain of K. Provided that K satisfies certain reasonable conditions, we prove that the considered minimization problem has a unique solution which triangulates the submanifold and coincides with the flat Delaunay complex introduced and studied in a companion paper. Since the objective is a weighted L1-norm and the constraints are linear, the triangulation process can be implemented by linear programming.
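A hedged sketch of the standard linear-programming reformulation of a weighted L1-norm minimization with linear equality constraints, which is the generic template behind the last sentence; the equality constraints `A_eq @ x = b_eq` stand in for whatever linear conditions the paper imposes on the d-chain, and the weights for its simplex weights.

```python
import numpy as np
from scipy.optimize import linprog

def min_weighted_l1_chain(A_eq, b_eq, weights):
    """Minimize sum_i weights[i] * |x_i| subject to A_eq @ x = b_eq, via the usual
    LP reformulation with auxiliary variables t >= |x|."""
    m, n = A_eq.shape
    # Variables: [x, t]; the objective puts the weights on t only.
    c = np.concatenate([np.zeros(n), np.asarray(weights, dtype=float)])
    # x - t <= 0 and -x - t <= 0 encode |x_i| <= t_i.
    A_ub = np.block([[np.eye(n), -np.eye(n)], [-np.eye(n), -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    A = np.hstack([A_eq, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A, b_eq=b_eq,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n] if res.success else None
```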