Recent years have witnessed an increasing number of artificial intelligence (AI) applications in transportation. Because AI is a new and emerging technology, its potential to advance transportation goals and the full extent of its impacts on the transportation sector are not yet well understood. As the transportation community explores these topics, it is critical to understand how transportation professionals, the driving force behind AI applications in transportation, perceive AI's potential efficiency and equity impacts. Toward this goal, we surveyed transportation professionals in the United States and collected a total of 354 responses. Based on the survey responses, we conducted both descriptive analysis and latent class cluster analysis (LCCA). The former provides an overview of prevalent attitudes among transportation professionals, while the latter identifies distinct segments based on their latent attitudes toward AI. We find widespread optimism regarding AI's potential to improve many aspects of transportation (e.g., efficiency, cost reduction, and traveler experience); however, responses are mixed regarding AI's potential to advance equity. Moreover, many respondents are concerned that AI ethics are not well understood in the transportation community and that AI use in transportation could exacerbate existing inequalities. Through LCCA, we identified four latent segments: AI Neutral, AI Optimist, AI Pessimist, and AI Skeptic. Latent class membership is significantly associated with respondents' age, education level, and AI knowledge level. Overall, the study results shed light on the extent to which the transportation community as a whole is ready to leverage AI systems to transform current practices, and they can inform targeted education to improve the understanding of AI among transportation professionals.
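As a rough illustration of the latent class idea behind LCCA, the following sketch fits a two-class Bernoulli mixture to synthetic binary "attitude" items with EM. The data, the class count, and the item set are invented for illustration, not those of the survey.

```python
import numpy as np

# Toy latent class model: binary attitude items, two latent segments.
rng = np.random.default_rng(0)
true_p = np.array([[0.9, 0.8, 0.9],      # item endorsement probabilities, class 0
                   [0.2, 0.1, 0.3]])     # and class 1 (invented numbers)
z = rng.integers(0, 2, 500)              # true (hidden) class of each respondent
X = (rng.random((500, 3)) < true_p[z]).astype(float)

pi = np.array([0.5, 0.5])                # class shares
p = rng.random((2, 3))                   # item probabilities, random start
for _ in range(100):                     # EM iterations
    like = (p[None] ** X[:, None] * (1 - p[None]) ** (1 - X[:, None])).prod(axis=2)
    resp = pi * like
    resp /= resp.sum(axis=1, keepdims=True)   # E-step: class responsibilities
    pi = resp.mean(axis=0)                    # M-step: update shares
    p = resp.T @ X / resp.sum(axis=0)[:, None]  # and item probabilities

pred = resp.argmax(axis=1)
acc = max((pred == z).mean(), (pred != z).mean())  # accuracy up to label swap
print(acc > 0.9)
```

With well-separated segments, EM recovers the latent classes; real LCCA software additionally handles model selection (e.g., the number of classes) via information criteria.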
The analysis of compositional data has been dominated by the use of logratio transformations, which ensure exact subcompositional coherence and, in some situations, exact isometry as well. A problem with this approach is that data zeros, found in most applications, have to be replaced before the logarithmic transformation can be applied. An alternative new approach, called the `chiPower' transformation, allows data zeros: it combines the standardization inherent in the chi-square distance of correspondence analysis with the essential elements of the Box-Cox power transformation. The chiPower transformation is justified by the fact that it defines between-sample distances that tend to logratio distances for strictly positive data as the power parameter tends to zero, in which case it is equivalent to transforming to logratios. For data with zeros, a value of the power can be identified that brings the chiPower transformation as close as possible to a logratio transformation, without having to substitute the zeros. Especially for high-dimensional data, this alternative approach can achieve a level of coherence and isometry high enough to make it a valid approach to the analysis of compositional data. Furthermore, in a supervised learning context, if the compositional variables serve as predictors of a response in a modelling framework, for example generalized linear models, the power can be used as a tuning parameter for optimizing the accuracy of prediction through cross-validation. The chiPower-transformed variables have a straightforward interpretation, since each is identified with a single compositional part, not a ratio.
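A plausible sketch of a chiPower-style transform, assuming the Box-Cox power is applied to the chi-square-standardized profiles of correspondence analysis (the authors' exact definition may differ in weighting details). It illustrates the stated limit: as the power tends to zero, the transform approaches a logratio-type transform.

```python
import numpy as np

def chipower(X, lam):
    """Sketch of a chiPower-style transform (assumed form, see lead-in)."""
    P = X / X.sum(axis=1, keepdims=True)   # compositional closure to proportions
    c = P.mean(axis=0)                     # column (part) averages
    R = P / c                              # correspondence-analysis-style profiles
    if lam == 0:
        Z = np.log(R)                      # logratio limit (needs strictly positive X)
    else:
        Z = (R**lam - 1.0) / lam           # Box-Cox power; data zeros are allowed
    return np.sqrt(c) * Z                  # chi-square standardization weights

X = np.array([[1.0, 2.0, 3.0], [2.0, 2.0, 4.0]])
# for strictly positive data, a small power is close to the log form
print(np.allclose(chipower(X, 1e-6), chipower(X, 0), atol=1e-4))
```

In a supervised setting, `lam` would then be tuned by cross-validating the downstream model's predictive accuracy, as the abstract describes.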
The categorical Gini covariance is a dependence measure between a numerical variable and a categorical variable. It measures dependence by quantifying the difference between the conditional and unconditional distribution functions; a value of zero implies independence of the numerical and the categorical variable. We propose a non-parametric test of independence between a numerical and a categorical variable based on the categorical Gini covariance. We use the theory of U-statistics to derive the test statistic and study its properties. The test statistic has an asymptotic normal distribution. As the implementation of a normal-based test is difficult, we develop a jackknife empirical likelihood (JEL) ratio test for testing independence. Extensive Monte Carlo simulation studies are carried out to validate the performance of the proposed JEL-based test. We illustrate the test procedure using a real data set.
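A plug-in sketch of a categorical Gini covariance, assuming the form "overall Gini mean difference minus the group-share-weighted within-group ones"; the paper's U-statistic version and its constants may differ. Under independence the conditional distributions match the marginal, so the statistic is near zero.

```python
import itertools, random

def gmd(xs):
    """Gini mean difference: average |x - x'| over distinct pairs."""
    n = len(xs)
    return sum(abs(a - b) for a, b in itertools.combinations(xs, 2)) / (n * (n - 1) / 2)

def gini_cov(x, g):
    """Plug-in sketch (assumed form): overall GMD minus weighted within-group GMDs."""
    n = len(x)
    groups = {}
    for xi, gi in zip(x, g):
        groups.setdefault(gi, []).append(xi)
    within = sum(len(v) / n * gmd(v) for v in groups.values())
    return gmd(x) - within

random.seed(0)
g = [i % 2 for i in range(400)]                       # categorical variable
x_ind = [random.gauss(0, 1) for _ in range(400)]      # independent of g
x_dep = [random.gauss(3 * gi, 1) for gi in g]         # mean shifts with g
print(abs(gini_cov(x_ind, g)) < 0.15, gini_cov(x_dep, g) > 0.5)
```

The JEL test of the paper would then calibrate such a statistic via jackknife pseudo-values rather than a normal approximation.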
A new approach is developed for the computational modelling of microstructure evolution problems. It combines the phase-field method with the recently developed laminated element technique (LET), a simple and efficient method for modelling weak discontinuities on nonconforming finite-element meshes. The essence of LET lies in treating the elements that are cut by an interface as simple laminates of the two phases; here this idea is extended to propagating interfaces, so that the volume fraction of the phases and the lamination orientation vary accordingly. In the proposed LET-PF approach, the phase-field variable (order parameter), which is governed by an evolution equation of the Ginzburg-Landau type, plays the role of a level-set function that implicitly defines the position of the (sharp) interface. The mechanical equilibrium subproblem is then solved using the semisharp LET technique. The performance of LET-PF is illustrated by numerical examples. In particular, it is shown that, for the problems studied, LET-PF exhibits higher accuracy than the conventional phase-field method, so that, for instance, qualitatively correct results can be obtained on a significantly coarser mesh and thus at a lower computational cost.
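A minimal one-dimensional Allen-Cahn (Ginzburg-Landau-type) evolution, sketched only to illustrate the kind of order-parameter equation referred to above; the LET coupling to the mechanical equilibrium subproblem is not reproduced. Interface width, mobility, and discretization are assumed values.

```python
import numpy as np

N, L = 200, 1.0
dx = L / N
x = np.linspace(0, L, N)
eps, M = 0.02, 1.0                 # interface width and mobility (assumed)
dt = 0.2 * dx**2 / M               # small explicit time step
phi = np.tanh((x - 0.3) / (np.sqrt(2) * eps))   # diffuse interface at x = 0.3

for _ in range(2000):
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
    # dphi/dt = -M dF/dphi with double-well F = ∫ (phi^2-1)^2/(4 eps) + (eps/2)|phi'|^2
    phi = phi + dt * M * (eps * lap - (phi**3 - phi) / eps)
    phi[0], phi[-1] = -1.0, 1.0    # pin the two bulk phases at the ends

# the zero level set of phi implicitly locates the sharp interface
interface = x[np.argmin(np.abs(phi))]
print(abs(interface - 0.3) < 0.05)
```

In LET-PF, this implicitly defined interface position would feed the laminate volume fractions and orientations of the cut elements at each step.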
Latitude in the choice of initialisation is a feature shared by one-step extended state-space methods and multi-step methods. This paper focuses on lattice Boltzmann schemes, which can be interpreted as examples of both categories of numerical schemes. We propose a modified equation analysis of the initialisation schemes for lattice Boltzmann methods, determined by the choice of initial data. These modified equations provide guidelines for devising and analysing the initialisation, both in terms of order of consistency with respect to the target Cauchy problem and in terms of the time smoothness of the numerical solution: the larger the number of matched terms between the modified equations for the initialisation and for the bulk method, the smoother the obtained numerical solution. This is particularly manifest for numerical dissipation. The constraints required to achieve time smoothness can quickly become prohibitive, since they must take the parasitic modes into consideration. Starting from these constraints, we explain how the distinct lack of observability of certain lattice Boltzmann schemes -- seen as dynamical systems on a commutative ring -- can yield rather simple conditions and make their initialisation easy to study, owing to the reduced number of initialisation schemes at the fully discrete level. These theoretical results are successfully assessed on several lattice Boltzmann methods.
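A minimal D1Q2 lattice Boltzmann sketch for the transport equation u_t + V u_x = 0, shown only to make the role of the initialisation concrete: here the populations are initialised at equilibrium from the initial datum, one of the simplest initialisation choices the analysis covers. All parameter values are assumed.

```python
import numpy as np

N = 128
dx = 1.0 / N
lam = 1.0                  # lattice velocity dx/dt
V = 0.5                    # transport velocity (assumed)
omega = 1.7                # BGK relaxation rate (assumed)
x = (np.arange(N) + 0.5) * dx
u0 = np.exp(-100 * (x - 0.3)**2)        # initial datum

def feq(u):
    """D1Q2 equilibria carrying the moments u and V*u."""
    return 0.5 * u * (1 + V / lam), 0.5 * u * (1 - V / lam)

fp, fm = feq(u0)           # equilibrium initialisation from the initial data

for _ in range(N):         # evolve up to time T = N*dt = 1
    u = fp + fm
    ep, em = feq(u)
    fp = fp + omega * (ep - fp)         # collide
    fm = fm + omega * (em - fm)
    fp = np.roll(fp, 1)                 # stream with velocity +lam
    fm = np.roll(fm, -1)                # stream with velocity -lam

u = fp + fm
# the bulk should have advected the profile by V*T = 0.5
print(abs(x[np.argmax(u)] - 0.8) < 0.05)
```

Non-equilibrium initialisations would populate `fp`, `fm` differently, and the paper's modified equations predict how that choice affects consistency order and the time smoothness of `u`.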
The demand-supply balance of electricity systems is fundamentally linked to climate conditions. In light of this, the present study aims to model the effect of climate change on the European electricity system, specifically on its long-term reliability. A resource-adequate power system -- a system in which electricity supply covers demand -- is sensitive to generation capacity, demand patterns, and the network structure and capacity. Climate change is foreseen to affect each of these components. In this analysis, we focus on two drivers of power system adequacy: the impact of temperature variations on electricity demand, and the impact of changes in water inflows on hydro generation. Using a post-processing approach based on results found in the literature, the inputs of a large-scale electricity market model covering the European region were modified. The results show that climate change may decrease total LOLE (Loss of Load Expectation) hours in Europe by more than 50%, as demand will largely decrease because of higher temperatures during winter. We found that the climate change impact on demand tends to decrease LOLE values, while the climate change effects on hydrological conditions tend to increase them. The study is built on a limited amount of open-source data and can flexibly incorporate various sets of assumptions. The outcomes also show the current difficulty of reliably modelling the effects of climate change on power system adequacy. Overall, the presented method demonstrates the relevance of climate change effects in electricity network studies.
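A toy Monte Carlo LOLE computation (all numbers invented, not taken from the study): LOLE is estimated as the expected number of hours per year in which demand exceeds available generation, and a milder-winter scenario lowers it, in line with the direction reported above.

```python
import numpy as np

rng = np.random.default_rng(1)

def lole(winter_demand_factor, n_years=100):
    """Expected yearly hours with demand above available capacity (toy model)."""
    h = np.arange(8760)
    winter = np.cos(2 * np.pi * h / 8760)          # +1 mid-winter, -1 mid-summer
    demand = (70 + 15 * winter_demand_factor * np.maximum(winter, 0)
              + rng.normal(0, 5, (n_years, 8760)))          # GW, assumed profile
    # 50 units of 2 GW, each independently unavailable 5% of the time
    capacity = 2 * rng.binomial(50, 0.95, (n_years, 8760))
    return (demand > capacity).sum() / n_years

base = lole(1.0)   # current winter demand
warm = lole(0.7)   # milder winters under climate change: lower peak demand
print(warm < base)
```

A fuller adequacy model would also perturb hydro inflows, which, as the study finds, pushes LOLE in the opposite direction.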
Principal component analysis (PCA) is a longstanding and well-studied approach for dimension reduction. It rests upon the assumption that the underlying signal in the data has low rank, and thus can be well-summarized using a small number of dimensions. The output of PCA is typically represented using a scree plot, which displays the proportion of variance explained (PVE) by each principal component. While the PVE is extensively reported in routine data analyses, to the best of our knowledge the notion of inference on the PVE remains unexplored. In this paper, we consider inference on the PVE. We first introduce a new population quantity for the PVE with respect to an unknown matrix mean. Critically, our interest lies in the PVE of the sample principal components (as opposed to unobserved population principal components); thus, the population PVE that we introduce is defined conditional on the sample singular vectors. We show that it is possible to conduct inference, in the sense of confidence intervals, p-values, and point estimates, on this population quantity. Furthermore, we can conduct valid inference on the PVE of a subset of the principal components, even when the subset is selected using a data-driven approach such as the elbow rule. We demonstrate the proposed approach in simulation and in an application to a gene expression dataset.
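The sample PVE itself is straightforward to compute from the singular values of the centred data matrix, as sketched below; the paper's contribution, inference on a population PVE defined conditionally on the sample singular vectors, is not reproduced in this small sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
# anisotropic toy data: column standard deviations roughly 5, 2, 1
X = rng.normal(size=(100, 3)) @ np.diag([5.0, 2.0, 1.0])

Xc = X - X.mean(axis=0)                       # centre the columns
s = np.linalg.svd(Xc, compute_uv=False)       # singular values, descending
pve = s**2 / (s**2).sum()                     # proportion of variance explained
print(bool(np.isclose(pve.sum(), 1.0)), bool(pve[0] > 0.5))
```

These are the quantities displayed in a scree plot; the elbow rule selects a subset of components from exactly this vector, which is why data-driven selection must be accounted for when constructing valid confidence intervals.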
The Laplace eigenvalue problem on circular sectors has eigenfunctions with corner singularities, for which standard methods may produce suboptimal approximation results. To address this issue, this paper proposes a novel numerical algorithm that enhances standard isogeometric analysis with a single-patch graded mesh refinement scheme. Numerical tests demonstrate optimal convergence rates for both the eigenvalues and the eigenfunctions. Furthermore, the results show that smooth splines possess a superior approximation constant compared to their $C^0$-continuous counterparts for the lower part of the Laplace spectrum, extending previous findings on the excellent spectral approximation properties of smooth splines from rectangular domains to circular sectors. In addition, graded meshes prove to be particularly advantageous for an accurate approximation of a limited number of eigenvalues. A drawback of the novel algorithm applied here is the singularity of the isogeometric parameterization, which results in some basis functions not belonging to the solution space of the corresponding weak problem -- a variational crime. However, the approach proves to be robust. Finally, a hierarchical mesh structure is presented that avoids anisotropic elements, omits redundant degrees of freedom, and keeps the number of basis functions contributing to the variational crime constant, independent of the mesh size. Numerical results validate the effectiveness of hierarchical mesh grading for the simulation of eigenfunctions with and without corner singularities.
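A one-dimensional sketch of graded refinement towards a corner at the parameter origin, using the common algebraic grading with exponent 1/mu, where mu in (0, 1] is the grading parameter and mu = 1 gives a uniform mesh (the paper's exact single-patch grading scheme may differ):

```python
import numpy as np

def graded_nodes(n, mu):
    """n+1 nodes on [0, 1], clustered at 0 for mu < 1 (algebraic grading)."""
    return np.linspace(0.0, 1.0, n + 1) ** (1.0 / mu)

uniform = graded_nodes(8, 1.0)
graded = graded_nodes(8, 0.5)    # quadratic clustering toward the corner
# the first graded element is much smaller than the first uniform one
print(graded[1] < uniform[1], bool(np.isclose(graded[-1], 1.0)))
```

Such clustering restores optimal convergence rates for singular eigenfunctions because the approximation error near the corner is balanced against the error in the smooth interior.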
Data depth functions have been intensively studied for normed vector spaces. However, a discussion of depth functions for data for which no specific data structure can be presupposed is lacking. In this article, we introduce a notion of depth functions for data types that are not given in standard statistical data formats and therefore lack one specific data structure; we call such data non-standard data. To achieve this, we represent the data via formal concept analysis, which leads to a unified data representation. Besides introducing depth functions for non-standard data using formal concept analysis, we provide a systematic basis for them by introducing structural properties. Furthermore, we embed the generalised Tukey depth into our concept of data depth and analyse it using the introduced structural properties. Thus, this article provides a mathematical formalisation of centrality and outlyingness for non-standard data and thereby broadens the range of spaces in which centrality can be discussed. In particular, it gives a basis for defining further depth functions and statistical inference methods for non-standard data.
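For intuition about centrality versus outlyingness, the classic one-dimensional Tukey (halfspace) depth, the notion the article generalises to non-standard data via formal concept analysis, can be sketched as follows; the generalisation itself is not reproduced here.

```python
def tukey_depth_1d(x, data):
    """Halfspace depth in 1-D: the smaller of the two tail fractions at x."""
    n = len(data)
    return min(sum(d <= x for d in data), sum(d >= x for d in data)) / n

data = [1, 2, 3, 4, 100]
# the median-like point 3 is deep; the outlier 100 is shallow
print(tukey_depth_1d(3, data), tukey_depth_1d(100, data))
```

The article's structural properties play the role that halfspaces play here: they specify which "sides" a point can be on, in spaces where no norm or order is available.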
In this paper I will develop a lambda-term calculus, lambda-2Int, for a bi-intuitionistic logic and discuss its implications for the notions of sense and denotation of derivations in a bilateralist setting. That is, I will take the Curry-Howard correspondence, well established between the simply typed lambda-calculus and natural deduction systems for intuitionistic logic, and apply it to a bilateralist proof system with two derivability relations, one for proving and one for refuting. The basis will be the natural deduction system of Wansing's bi-intuitionistic logic 2Int, which I will turn into a term-annotated form. To this end, a type theory is needed that extends to a two-sorted typed lambda-calculus. I will present such a term-annotated proof system for 2Int and prove a Dualization Theorem relating proofs and refutations in this system. On the basis of these formal results I will argue that they give us interesting insights into questions about sense and denotation as well as synonymy and identity of proofs from a bilateralist point of view.
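A very loose sketch of the two-sorted idea: terms carry a sort, '+' for proof terms and '-' for refutation terms, and dualisation swaps the sort while exchanging dual connectives. The representation and constructor names are invented for illustration and are not the paper's notation.

```python
def dualize(term):
    """Swap the sort of a term and exchange dual connectives, recursively."""
    sort, head, *args = term
    swap = {'and': 'or', 'or': 'and', 'var': 'var'}
    new_sort = '-' if sort == '+' else '+'
    return (new_sort, swap[head],
            *[dualize(a) if isinstance(a, tuple) else a for a in args])

t = ('+', 'and', ('+', 'var', 'x'), ('+', 'var', 'y'))
print(dualize(dualize(t)) == t)    # dualisation is an involution
```

A Dualization Theorem in this spirit would say that such a mapping sends well-typed proof terms to well-typed refutation terms and back.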
We propose a novel algorithm for the support estimation of partially known Gaussian graphical models that incorporates prior information about the underlying graph. In contrast to classical approaches that provide a point estimate based on a maximum likelihood or a maximum a posteriori criterion using (simple) priors on the precision matrix, we consider a prior on the graph and rely on annealed Langevin diffusion to generate samples from the posterior distribution. Since the Langevin sampler requires access to the score function of the underlying graph prior, we use graph neural networks to effectively estimate the score from a graph dataset (either available beforehand or generated from a known distribution). Numerical experiments demonstrate the benefits of our approach.
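A generic annealed Langevin dynamics sketch, with an analytic standard-normal score standing in for the GNN-estimated graph-prior score used in the paper, so that the sampler's mechanics can be checked against a known target: the iterates should end up distributed approximately as N(0, 1). Step sizes and noise schedule are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x):
    return -x                      # grad log density of N(0, 1)

def annealed_langevin(x, sigmas=(3.0, 1.0, 0.3, 0.1), steps=200, eps0=0.05):
    """Langevin updates with a step size annealed down the noise schedule."""
    for sigma in sigmas:
        eps = eps0 * sigma**2
        for _ in range(steps):
            x = x + 0.5 * eps * score(x) + np.sqrt(eps) * rng.normal(size=x.shape)
    return x

# start far from the target; the sampler should forget the initialisation
samples = annealed_langevin(rng.normal(5.0, 1.0, size=5000))
print(abs(samples.mean()) < 0.1, abs(samples.std() - 1.0) < 0.1)
```

In the paper's setting, `score` is replaced by a graph neural network trained on a graph dataset, and the state is the (relaxed) graph support rather than a scalar.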