Presenting high-level arguments is a crucial task for fostering participation in online societal discussions. Current argument summarization approaches miss an important facet of this task, capturing diversity, which is essential for accommodating multiple perspectives. We introduce three aspects of diversity: those of opinions, annotators, and sources. We evaluate approaches to a popular argument summarization task, Key Point Analysis (KPA), showing how these approaches struggle to (1) represent arguments shared by few people, (2) deal with data from various sources, and (3) align with subjectivity in human-provided annotations. We find that both general-purpose LLMs and dedicated KPA models exhibit this behavior, but have complementary strengths. Further, we observe that diversifying the training data may improve generalization. Addressing diversity in argument summarization requires a mix of strategies to deal with subjectivity.
In the absence of parallax cues, a learning-based single image depth estimation (SIDE) model relies heavily on shading and contextual cues in the image. While this simplicity is attractive, such models must be trained on large and varied datasets, which are difficult to capture. It has been shown that using embeddings from pre-trained foundational models, such as CLIP, improves zero-shot transfer in several applications. Taking inspiration from this, in this paper we explore the use of global image priors generated from a pre-trained ViT model to provide more detailed contextual information. We argue that the embedding vector from a ViT model, pre-trained on a large dataset, captures more relevant information for SIDE than the usual route of generating pseudo image captions followed by CLIP-based text embeddings. Based on this idea, we propose a new SIDE model using a diffusion backbone conditioned on ViT embeddings. Our proposed design establishes a new state-of-the-art (SOTA) for SIDE on the NYUv2 dataset, achieving an Abs Rel error of 0.059 (14% improvement) compared to 0.069 by the current SOTA (VPD). On the KITTI dataset, it achieves a Sq Rel error of 0.139 (2% improvement) compared to 0.142 by the current SOTA (GEDepth). For zero-shot transfer with a model trained on NYUv2, we report mean relative improvements of (20%, 23%, 81%, 25%) over NeWCRFs on the (Sun-RGBD, iBims1, DIODE, HyperSim) datasets, compared to (16%, 18%, 45%, 9%) by ZoeDepth. The code is available at //github.com/Aradhye2002/EcoDepth.
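To make the conditioning pathway concrete, here is a minimal PyTorch sketch of a depth head modulated by a global ViT embedding. The FiLM-style injection and all names are illustrative assumptions; the paper's actual diffusion backbone conditions on the ViT embedding in its own way.

```python
import torch
import torch.nn as nn

class ViTConditionedDepthHead(nn.Module):
    """Minimal sketch: condition a depth decoder on a global ViT embedding.

    Stand-in for the paper's diffusion backbone; the FiLM-style injection
    is an illustrative assumption, not the authors' exact design.
    """

    def __init__(self, feat_ch=256, vit_dim=768):
        super().__init__()
        # Map the ViT [CLS] embedding to per-channel scale and shift (FiLM).
        self.film = nn.Linear(vit_dim, 2 * feat_ch)
        self.decode = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, 1, 3, padding=1),  # one-channel depth map
        )

    def forward(self, feats, vit_embedding):
        # feats: (B, C, H, W) image features; vit_embedding: (B, vit_dim)
        scale, shift = self.film(vit_embedding).chunk(2, dim=1)
        feats = feats * (1 + scale[..., None, None]) + shift[..., None, None]
        return self.decode(feats)

head = ViTConditionedDepthHead()
depth = head(torch.randn(2, 256, 60, 80), torch.randn(2, 768))
print(depth.shape)  # torch.Size([2, 1, 60, 80])
```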
Epistemic modals have peculiar logical features that are challenging to account for in a broadly classical framework. For instance, while a sentence of the form $p\wedge\Diamond\neg p$ ('$p$, but it might be that not $p$') appears to be a contradiction, $\Diamond\neg p$ does not entail $\neg p$, which would follow in classical logic. Likewise, the classical laws of distributivity and disjunctive syllogism fail for epistemic modals. Existing attempts to account for these facts generally either under- or over-correct. Some predict that $p\wedge\Diamond\neg p$, a so-called epistemic contradiction, is a contradiction only in an etiolated sense, under a notion of entailment that does not always allow us to replace $p\wedge\Diamond\neg p$ with a contradiction; these theories underpredict the infelicity of embedded epistemic contradictions. Other theories savage classical logic, eliminating not just rules that intuitively fail but also rules like non-contradiction, excluded middle, De Morgan's laws, and disjunction introduction, which intuitively remain valid for epistemic modals. In this paper, we aim for a middle ground, developing a semantics and logic for epistemic modals that makes epistemic contradictions genuine contradictions and that invalidates distributivity and disjunctive syllogism but that otherwise preserves classical laws that intuitively remain valid. We start with an algebraic semantics, based on ortholattices instead of Boolean algebras, and then propose a more concrete possibility semantics, based on partial possibilities related by compatibility. Both semantics yield the same consequence relation, which we axiomatize. We then show how to lift an arbitrary possible worlds model for a non-modal language to a possibility model for a language with epistemic modals.
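As a concrete illustration of the algebraic point, the following sketch builds the standard six-element ortholattice O6 and checks that distributivity fails in it. O6 is a textbook example of a non-distributive ortholattice, used here only to show the kind of structure the proposed semantics rests on; it is not the paper's specific model.

```python
# The six-element ortholattice O6 ("benzene ring"):
# 0 < a < b < 1 and 0 < b' < a' < 1, with orthocomplement x -> x'.
ELEMENTS = ["0", "a", "b", "b'", "a'", "1"]
LEQ = {(x, x) for x in ELEMENTS}
LEQ |= {("0", x) for x in ELEMENTS} | {(x, "1") for x in ELEMENTS}
LEQ |= {("a", "b"), ("b'", "a'")}

def meet(x, y):
    # Greatest lower bound: the lower-bound set is a chain in O6.
    lower = [z for z in ELEMENTS if (z, x) in LEQ and (z, y) in LEQ]
    return max(lower, key=lambda z: sum((w, z) in LEQ for w in ELEMENTS))

def join(x, y):
    # Least upper bound: the upper-bound set is a chain in O6.
    upper = [z for z in ELEMENTS if (x, z) in LEQ and (y, z) in LEQ]
    return min(upper, key=lambda z: sum((w, z) in LEQ for w in ELEMENTS))

# Distributivity fails: b /\ (a \/ b') != (b /\ a) \/ (b /\ b')
lhs = meet("b", join("a", "b'"))             # b /\ 1 = b
rhs = join(meet("b", "a"), meet("b", "b'"))  # a \/ 0 = a
print(lhs, rhs)  # b a  -- the two sides differ
```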
Realtime shape estimation of continuum objects and manipulators is essential for developing accurate planning and control paradigms. Existing methods, which create dense point clouds from camera images and/or use distinguishable markers on a deformable body, have limitations in realtime tracking of large continuum objects/manipulators, and physical occlusion of markers can often compromise accurate shape estimation. We propose a robust method to estimate the shape of linear deformable objects in realtime using scattered and unordered key points. Using a robust probability-based labeling algorithm, our approach identifies the true order of the detected key points and then reconstructs the shape using piecewise spline interpolation. The approach relies only on knowing the number of key points and the interval between two neighboring points. We demonstrate the robustness of the method when key points are partially occluded. The proposed method is also integrated into a Unity simulation for tracking the shape of a cable with a length of 1 m and a radius of 5 mm. The simulation results show that our approach achieves an average length error of 1.07% over the continuum's centerline and an average cross-section error of 2.11 mm. Real-world experiments on tracking and estimating a heavy-load cable show that the proposed approach is robust under occlusion and in complex entanglement scenarios.
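Assuming the labeling step has already recovered the true order of the key points, the reconstruction step can be sketched with an off-the-shelf parametric spline, as below. The key-point coordinates and count are illustrative, not the paper's setup.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical ordered key points along a 1 m cable (the labeling step is
# assumed to have already recovered their true order).
t = np.linspace(0, 1, 12)
points = np.stack([t, 0.1 * np.sin(4 * np.pi * t), 0.05 * t])  # (3, 12)

# Fit a parametric cubic spline through the key points.
tck, u = splprep(points, s=0.0, k=3)

# Densely sample the spline to reconstruct the centerline.
u_dense = np.linspace(0, 1, 500)
x, y, z = splev(u_dense, tck)
centerline = np.stack([x, y, z], axis=1)

# Approximate the reconstructed length from the dense polyline.
length = np.sum(np.linalg.norm(np.diff(centerline, axis=0), axis=1))
print(f"reconstructed centerline length: {length:.3f} m")
```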
In fully Bayesian analyses, prior distributions are specified before observing data. Prior elicitation methods translate prior information into quantifiable prior distributions. Recently, methods that leverage copulas have been proposed to accommodate more flexible dependence structures when eliciting multivariate priors. We prove that, under broad conditions, the posterior cannot retain many of these flexible prior dependence structures in large-sample settings. We emphasize the impact of this result by reviewing several objectives for prior specification, helping practitioners select prior dependence structures that align with their objectives for posterior analysis. Because correctly specifying the dependence structure a priori can be difficult, we consider how the choice of prior copula impacts the posterior distribution in terms of asymptotic convergence of the posterior mode. Our resulting recommendations streamline the prior elicitation process.
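For readers unfamiliar with copula-based priors, the following sketch shows the basic construction the abstract refers to: a Gaussian copula coupling two Gamma marginals into a joint prior with a chosen dependence structure. The marginals and correlation are illustrative choices, not the paper's.

```python
import numpy as np
from scipy import stats

# Minimal sketch: a bivariate prior with Gamma marginals coupled through a
# Gaussian copula.
rng = np.random.default_rng(0)
rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1. Draw from the copula: Gaussian sample -> uniforms via the normal CDF.
z = rng.multivariate_normal(mean=np.zeros(2), cov=cov, size=10_000)
u = stats.norm.cdf(z)

# 2. Map the uniforms through the inverse CDFs of the desired marginals.
theta1 = stats.gamma.ppf(u[:, 0], a=2.0, scale=1.0)
theta2 = stats.gamma.ppf(u[:, 1], a=5.0, scale=0.5)

# The marginals are exactly Gamma; the dependence comes from the copula.
print(np.corrcoef(theta1, theta2)[0, 1])  # induced prior correlation
```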
Stress models are a promising approach for graph drawing. They minimize the weighted sum of the squared errors between the Euclidean and desired distances for each node pair. The desired distances are typically the graph-theoretic distances obtained from the all-pairs shortest path problem. When the stress function is minimized, the obtained coordinates are affected by the non-Euclidean property and high dimensionality of the graph-theoretic distance matrix. Therefore, the graph-theoretic distances used in stress models are not necessarily the best metric for determining node coordinates. In this study, we propose two methods of adjusting the graph-theoretic distance matrix into a distance matrix suitable for graph drawing while preserving its structure. The first method applies eigenvalue decomposition to the inner product matrix obtained from the distance matrix and derives a new distance matrix by setting eigenvalues with small absolute values to zero. The second method modifies the stress model by adding a term that minimizes the Frobenius norm between the adjusted and original distance matrices. We perform computational experiments on several benchmark graphs to demonstrate that the proposed methods improve quality metrics, including node resolution and the Gabriel graph property, when compared to conventional stress models.
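A minimal sketch of the first method is given below, assuming the usual classical-MDS-style double centering to derive the inner product matrix from the distance matrix (an assumption on our part; the number of retained eigenvalues is an illustrative parameter).

```python
import numpy as np

def adjust_distance_matrix(D, keep=10):
    """Eigendecompose the inner product matrix derived from D and zero out
    the eigenvalues with small absolute values, then recover distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # inner product (Gram) matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(np.abs(w))[::-1]            # sort by |eigenvalue|
    w_adj = np.zeros_like(w)
    w_adj[idx[:keep]] = w[idx[:keep]]            # keep the largest-|.| ones
    B_adj = (V * w_adj) @ V.T                    # reconstruct Gram matrix
    # Recover squared distances: d_ij^2 = b_ii + b_jj - 2 b_ij
    diag = np.diag(B_adj)
    D2 = np.clip(diag[:, None] + diag[None, :] - 2 * B_adj, 0, None)
    return np.sqrt(D2)

# Example: shortest-path distances of a 6-node path graph.
D = np.abs(np.subtract.outer(np.arange(6), np.arange(6))).astype(float)
print(adjust_distance_matrix(D, keep=2).round(2))
```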
Target class classification is a mixed classification and transition model whose integrated goal is to assign objects to a certain class, called the target or normal class. The classification process is iterative: in each step, an object in a certain class undergoes an action attached to that class, initiating the object's transition to one of the classes. The sequence of transitions, which we call class transitions, must be designed to provide the final assignment of objects to the target class. The transition process can be described by a directed graph, and the success of the final classification is mainly determined by the properties of this graph. In our previous research we showed that the desirable structure of the transition graph is a rooted tree oriented towards the root vertex, which corresponds to the normal class. Clearly, the transition graph of an arbitrary algorithm (policy) may not have this property. In this paper we study the structure of realistic transition graphs, which makes it possible to find classification inconsistencies and helps to transform the graph into the desired form. The medical interpretation as a dynamic treatment regime, considered in the article, further clarifies the investigated framework.
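As a small illustration of the desired structure, the following hypothetical helper checks whether a transition graph, given as a map from each class to its successor, is an oriented tree pointing to the normal class. It is a sanity check written for this sketch, not the paper's analysis method.

```python
def is_rooted_in_tree(transitions, root):
    """Check that a transition graph is an oriented tree pointing to `root`.

    `transitions` maps each class to the class it transitions to; the root
    (the target/normal class) has no outgoing transition.
    """
    if root in transitions:
        return False  # the normal class must absorb, not transition
    for start in transitions:
        seen, node = {start}, start
        while node != root:
            node = transitions.get(node)
            if node is None or node in seen:
                return False  # dead end or cycle: not a rooted tree
            seen.add(node)
    return True

# Classes A-D all eventually reach the normal class N.
print(is_rooted_in_tree({"A": "B", "B": "N", "C": "N", "D": "C"}, "N"))  # True
print(is_rooted_in_tree({"A": "B", "B": "A", "C": "N"}, "N"))            # False
```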
We develop a nonparametric Bayesian modeling approach to ordinal regression based on priors placed directly on the discrete distribution of the ordinal responses. The prior probability models are built from a structured mixture of multinomial distributions. We leverage a continuation-ratio logits representation to formulate the mixture kernel, with mixture weights defined through the logit stick-breaking process that incorporates the covariates through a linear function. The implied regression functions for the response probabilities can be expressed as weighted sums of parametric regression functions, with covariate-dependent weights. Thus, the modeling approach achieves flexible ordinal regression relationships, avoiding linearity or additivity assumptions in the covariate effects. Model flexibility is formally explored through the Kullback-Leibler support of the prior probability model. A key model feature is that the parameters for both the mixture kernel and the mixture weights can be associated with a continuation-ratio logits regression structure. Hence, an efficient and relatively easy-to-implement posterior simulation method can be designed, using Pólya-Gamma data augmentation. Moreover, the model is built from a conditional independence structure for category-specific parameters, which results in additional computational efficiency gains through partial parallel sampling. In addition to the general mixture structure, we study simplified model versions that incorporate covariate dependence only in the mixture kernel parameters or only in the mixture weights. For all proposed models, we discuss approaches to prior specification and develop Markov chain Monte Carlo methods for posterior simulation. The methodology is illustrated with several synthetic and real data examples.
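To unpack the continuation-ratio logits representation, here is a minimal sketch mapping covariate-dependent logits to ordinal category probabilities; the linear logit form and the parameter values are illustrative, not the paper's model.

```python
import numpy as np

def continuation_ratio_probs(eta):
    """Map continuation-ratio logits to ordinal category probabilities.

    eta[j] is the logit of Pr(Y = j | Y >= j) for j = 0, ..., C-2;
    the last category's probability is the leftover mass.
    """
    sigma = 1.0 / (1.0 + np.exp(-np.asarray(eta, dtype=float)))
    probs, survive = [], 1.0
    for s in sigma:
        probs.append(survive * s)   # stop at this category
        survive *= 1.0 - s          # continue past it
    probs.append(survive)           # final category
    return np.array(probs)

# Covariate-dependent logits eta_j = alpha_j + beta_j * x (illustrative).
x = 0.5
alpha, beta = np.array([-1.0, 0.0, 1.0]), np.array([0.8, 0.4, -0.2])
p = continuation_ratio_probs(alpha + beta * x)
print(p, p.sum())  # four category probabilities summing to 1
```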
Graph labeling is a technique that assigns unique labels or weights to the vertices or edges of a graph and is often used to analyze and solve various graph-related problems. Previous work on this topic offers only a few methods, each with certain limitations. This paper focuses on antimagic labeling of different types of graphs and trees: distinct prime values are assigned to edges in a manner that ensures the cumulative sum of edge labels at each vertex remains unique. We propose a conjecture on antimagic labeling of arbitrary graphs and prove two theorems. We first tried assigning weights to the edges randomly, but because this approach fails in particular cases, we developed a new method to mitigate the problem. The paper provides computational and mathematical verification that antimagic labeling of any perfect binary tree and any complete graph is possible.
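The verification step can be illustrated with a small script that checks whether a candidate prime edge labeling is antimagic, i.e., yields pairwise distinct vertex sums. The labeling of K4 below is an illustrative example, not the paper's construction.

```python
from itertools import combinations

def is_antimagic(n_vertices, edge_labels):
    """Check that the vertex sums of edge labels are pairwise distinct.

    edge_labels maps an edge (u, v) to its label. A hypothetical helper
    for verifying a candidate labeling.
    """
    sums = [0] * n_vertices
    for (u, v), label in edge_labels.items():
        sums[u] += label
        sums[v] += label
    return len(set(sums)) == n_vertices, sums

# K4 with the first six primes as edge labels.
primes = [2, 3, 5, 7, 11, 13]
edges = list(combinations(range(4), 2))  # all edges of the complete graph K4
labeling = dict(zip(edges, primes))
ok, vertex_sums = is_antimagic(4, labeling)
print(ok, vertex_sums)  # True [10, 20, 23, 29]
```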
Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
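The kind of diagnostic such experiments rely on can be sketched as follows: given the per-layer weights of a trained ResNet, measure how the norms of successive weight differences behave along depth. In the neural ODE regime these differences should vanish as depth grows; the synthetic profile below only illustrates the measurement, not the paper's findings.

```python
import numpy as np

def weight_increment_profile(weights):
    """Given a list of per-layer weight tensors (hypothetical input from a
    trained ResNet), return per-layer weight norms and the norms of
    successive differences along depth."""
    norms = np.array([np.linalg.norm(W) for W in weights])
    diffs = np.array([np.linalg.norm(weights[k + 1] - weights[k])
                      for k in range(len(weights) - 1)])
    return norms, diffs

# Synthetic check: weights sampled from a smooth depth profile scaled by 1/L,
# the scaling assumed in the neural ODE literature.
L = 64
smooth = [np.sin(np.pi * k / L) / L * np.ones((8, 8)) for k in range(L)]
norms, diffs = weight_increment_profile(smooth)
print(diffs.max())  # small for a smooth 1/L-scaled profile
```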
We introduce a multi-task setup for identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.
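A minimal sketch of the shared span representation idea is shown below, using simple endpoint concatenation; SciIE's actual representation (e.g., with attention over head words) may differ.

```python
import torch

def span_representations(token_states, spans):
    """Build a shared representation for each candidate span by
    concatenating the contextual states of its endpoints. A simplified
    illustration, not SciIE's exact formulation."""
    reps = [torch.cat([token_states[s], token_states[e]], dim=-1)
            for s, e in spans]
    return torch.stack(reps)

states = torch.randn(12, 256)      # 12 tokens, 256-dim encoder states
spans = [(0, 2), (5, 5), (7, 10)]  # candidate entity spans (start, end)
print(span_representations(states, spans).shape)  # torch.Size([3, 512])
```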