When extreme weather events affect large areas, their regional to sub-continental spatial scale is important for their impacts. We propose a novel methodology that combines spatial extreme-value theory with a machine learning (ML) algorithm to model weather extremes and to quantify probabilities associated with the occurrence, intensity and spatial extent of these events. The model is applied here to Western European summertime heat extremes. Using new loss functions adapted to extreme values, we fit a theoretically motivated spatial model to extreme positive temperature anomaly fields from 1959 to 2022, using the daily 500-hPa geopotential height fields across the Euro-Atlantic region and the local soil moisture as predictors. Our generative model reveals the importance of individual circulation features in determining different facets of heat extremes, thereby enriching our process understanding of them from a data-driven perspective. The occurrence, intensity, and spatial extent of heat extremes are sensitive to the relative position of individual ridges and troughs that are part of a large-scale wave pattern. Heat extremes in Europe are thus the result of a complex interplay between local and remote physical processes. Our approach can extrapolate beyond the range of the data to make risk-related probabilistic statements, and it applies more generally to other weather extremes. It also offers an attractive alternative to physical model-based techniques, or to ML approaches that optimise scores focused on predicting the bulk, rather than the tail, of the data distribution well.
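As an illustration of a loss function adapted to extreme values, here is a minimal sketch assuming a generalized Pareto model for threshold exceedances with a model-predicted scale parameter; it is an illustrative stand-in, not the authors' actual loss, and all names and data below are hypothetical.

    import numpy as np

    def gpd_nll(y_exceed, sigma, xi, eps=1e-6):
        """Negative log-likelihood of the generalized Pareto distribution.

        y_exceed : positive exceedances over a high temperature threshold
        sigma    : predicted scale parameters (e.g. output of an ML model)
        xi       : shape parameter (xi > 0 gives heavy tails)
        """
        sigma = np.maximum(sigma, eps)
        z = 1.0 + xi * y_exceed / sigma
        z = np.maximum(z, eps)            # keep the support constraint numerically safe
        return np.mean(np.log(sigma) + (1.0 / xi + 1.0) * np.log(z))

    # hypothetical usage: exceedances and model-predicted scales
    rng = np.random.default_rng(0)
    y = rng.pareto(3.0, size=100)         # stand-in for temperature-anomaly exceedances
    sigma_hat = np.full_like(y, 1.2)      # stand-in for predicted scale
    print(gpd_nll(y, sigma_hat, xi=0.1))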
The exponential growth in scientific publications poses a severe challenge for human researchers. It forces attention to narrower sub-fields, making it difficult to discover new impactful research ideas and collaborations outside one's own field. While there are ways to predict a scientific paper's future citation counts, they require the research to be finished and the paper written, usually assessing impact long after the idea was conceived. Here we show how to predict the impact of research ideas at their onset, before they have been published. For that, we developed a large evolving knowledge graph built from more than 21 million scientific papers. It combines a semantic network created from the content of the papers and an impact network created from the historic citations of papers. Using machine learning, we can predict the dynamics of the evolving network into the future with high accuracy, and thereby the impact of new research directions. We envision that the ability to predict the impact of new ideas will be a crucial component of future artificial muses that can inspire new impactful and interesting scientific ideas.
With the expanding operational scale of supermarkets in China, the vegetable market has grown considerably. Decision-making related to procurement costs and allocation quantities of vegetables has become a pivotal factor in determining the profitability of supermarkets. This paper analyzes the relationship between pricing and allocation faced by supermarkets in vegetable operations, and employs optimization algorithms to determine replenishment and pricing strategies. Linear regression is used to model the historical data of various products, establishing the relationship between sale prices and sales volumes for 61 products. By integrating historical data on vegetable costs with time information based on the 24 solar terms, a cost prediction model is trained using TCN-Attention. A TOPSIS evaluation model identifies the 32 most market-demanded products. A genetic algorithm is then used to search for the globally optimal allocation-pricing decision for vegetable products.
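As an illustration of the price-volume step, here is a minimal sketch assuming a log-linear demand model fitted by ordinary least squares; the data and functional form are hypothetical, not the paper's exact specification.

    import numpy as np

    # hypothetical daily records for one product: unit sale price and sales volume
    price  = np.array([3.2, 3.5, 3.8, 4.0, 4.4, 4.9])
    volume = np.array([52., 47., 40., 38., 30., 24.])

    # log-linear demand model: log(volume) = a + b * price, fitted by least squares
    X = np.column_stack([np.ones_like(price), price])
    coef, *_ = np.linalg.lstsq(X, np.log(volume), rcond=None)
    a, b = coef

    def expected_volume(p):
        """Predicted sales volume at price p under the fitted demand curve."""
        return np.exp(a + b * p)

    print(expected_volume(4.2))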
The comparison of frequency distributions is a common statistical task with broad applications. However, existing measures do not explicitly quantify the magnitude and direction by which one distribution is shifted relative to another. In the present study, we define distributional shift (DS) as the concentration of frequencies towards the lowest discrete class, e.g., the left-most bin of a histogram. We measure DS via the sum of cumulative frequencies and define relative distributional shift (RDS) as the difference in DS between distributions. Using simulated random sampling, we show that RDS is strongly related to measures that are widely used to compare frequency distributions. Focusing on specific applications, we show that DS and RDS provide insights into healthcare billing distributions, ecological species-abundance distributions, and economic distributions of wealth. RDS has the unique advantage of being a signed (i.e., directional) measure based on a simple difference in an intuitive property that, in turn, serves as a measure of rarity, poverty, and scarcity.
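A minimal sketch of DS and RDS as described above, assuming DS is computed from relative (normalized) cumulative frequencies over ordered classes; the normalization is an assumption and the histograms are hypothetical.

    import numpy as np

    def ds(counts):
        """Distributional shift: sum of cumulative relative frequencies,
        larger when frequencies are concentrated in the lowest classes."""
        rel = np.asarray(counts, dtype=float)
        rel = rel / rel.sum()
        return np.cumsum(rel).sum()

    def rds(counts_a, counts_b):
        """Relative distributional shift: signed difference in DS between
        two distributions defined over the same ordered classes."""
        return ds(counts_a) - ds(counts_b)

    # hypothetical histograms over the same five ordered bins
    left_heavy  = [40, 25, 15, 12, 8]
    right_heavy = [8, 12, 15, 25, 40]
    print(rds(left_heavy, right_heavy))   # positive: the first distribution is shifted toward the lowest class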
Understanding a surgical scene is crucial for computer-assisted surgery systems to provide intelligent assistance functionality. One way of achieving this scene understanding is via scene segmentation, where every pixel of a frame is classified, thereby identifying the visible structures and tissues. Progress on fully segmenting surgical scenes has been made using machine learning. However, such models require large amounts of annotated training data, containing examples of all relevant object classes. Such fully annotated datasets are hard to create, as every pixel in a frame needs to be annotated by medical experts, and they are therefore rarely available. In this work, we propose a method to combine multiple partially annotated datasets, which provide complementary annotations, into one model, enabling better scene segmentation and the use of multiple readily available datasets. Our method aims to combine the available data with complementary labels by leveraging mutually exclusive properties to maximize information. Specifically, we propose to use positive annotations of other classes as negative samples and to exclude background pixels of binary annotations, as we cannot tell whether they contain a class that is not annotated but should be predicted by the model. We evaluate our method by training a DeepLabV3 model on the publicly available Dresden Surgical Anatomy Dataset, which provides multiple subsets of binary segmented anatomical structures. Our approach successfully combines 6 classes into one model, increasing the overall Dice Score by 4.4% compared to an ensemble of models trained on the classes individually. By including information on multiple classes, we were able to reduce confusion between stomach and colon by 24%. Our results demonstrate the feasibility of training a model on multiple datasets and pave the way for future work further alleviating the need for one large, fully segmented dataset.
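A minimal sketch of the labelling idea, assuming a PyTorch cross-entropy loss with an ignore index: background pixels of a binary annotation are excluded from the loss, while positive pixels of another class act as negatives by carrying that class's label; tensor shapes, class ids, and data are hypothetical.

    import torch
    import torch.nn.functional as F

    IGNORE = -100  # pixels excluded from the loss

    def build_target(binary_mask, class_id):
        """Turn a binary annotation of one structure into a sparse multi-class
        target: positives get the class id, background is ignored because it
        may contain other, unannotated classes."""
        target = torch.full(binary_mask.shape, IGNORE, dtype=torch.long)
        target[binary_mask > 0] = class_id
        return target

    # hypothetical: two complementary binary annotations of the same frame
    stomach = (torch.rand(1, 64, 64) > 0.8).long()
    colon   = (torch.rand(1, 64, 64) > 0.9).long() * (1 - stomach)  # disjoint by construction

    target = build_target(stomach, class_id=1)
    # mutual exclusivity: positives of another class double as negatives here,
    # i.e. they are labelled with that other class instead of being ignored
    target[colon > 0] = 2

    logits = torch.randn(1, 3, 64, 64)  # stand-in for model output (background, stomach, colon)
    loss = F.cross_entropy(logits, target, ignore_index=IGNORE)
    print(loss.item())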
In multivariate time series analysis, spectral coherence measures the linear dependency between two time series at different frequencies. However, real data applications often exhibit nonlinear dependency in the frequency domain, which conventional coherence analysis fails to capture. Quantile coherence, on the other hand, characterizes nonlinear dependency by defining the coherence at a set of quantile levels based on trigonometric quantile regression. This paper introduces a new estimation technique for quantile coherence. The proposed method is semi-parametric: it uses the parametric form of the spectrum of a vector autoregressive (VAR) model to approximate the quantile coherence, combined with nonparametric smoothing across quantiles. At a given quantile level, we compute the quantile autocovariance function (QACF) by taking the inverse Fourier transform of the quantile periodograms. Subsequently, we utilize the multivariate Durbin-Levinson algorithm to estimate the VAR parameters and derive the estimate of the quantile coherence. Finally, we smooth the preliminary estimate of quantile coherence across quantiles using a nonparametric smoother. Numerical results show that the proposed estimation method outperforms nonparametric methods. We also show that quantile coherence-based clustering of bivariate time series has advantages over clustering based on the ordinary VAR coherence. As an application, clusters of financial stocks identified by their quantile coherence with a market benchmark exhibit an intriguing and more informative structure of diversified investment portfolios, which investors may use to make better decisions.
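A minimal sketch of the first step, the quantile periodogram via trigonometric quantile regression (up to a scaling convention), from which the QACF would follow by an inverse Fourier transform; the statsmodels-based setup and the data are illustrative assumptions.

    import numpy as np
    from statsmodels.regression.quantile_regression import QuantReg

    def quantile_periodogram(y, tau):
        """Quantile periodogram via trigonometric quantile regression: at each
        Fourier frequency, regress y on cosine and sine terms at quantile
        level tau and record the fitted amplitude (scaling convention may differ)."""
        n = len(y)
        t = np.arange(n)
        freqs = np.arange(1, n // 2) / n       # Fourier frequencies in (0, 1/2)
        qper = np.empty(len(freqs))
        for j, f in enumerate(freqs):
            X = np.column_stack([np.ones(n),
                                 np.cos(2 * np.pi * f * t),
                                 np.sin(2 * np.pi * f * t)])
            beta = QuantReg(y, X).fit(q=tau).params
            qper[j] = (n / 4.0) * (beta[1] ** 2 + beta[2] ** 2)
        return freqs, qper

    # hypothetical series; the QACF would follow by an inverse FFT of such periodograms
    rng = np.random.default_rng(1)
    y = rng.standard_normal(128)
    freqs, qper = quantile_periodogram(y, tau=0.9)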
We consider a class of linear Vlasov partial differential equations driven by Wiener noise. Different types of stochastic perturbations are treated: additive noise, multiplicative It\^o and Stratonovich noise, and transport noise. We propose to employ splitting integrators for the temporal discretization of these stochastic partial differential equations. These integrators are designed to preserve qualitative properties of the exact solutions that depend on the stochastic perturbation, such as preservation of norms or positivity of the solutions. We provide numerical experiments to illustrate the properties of the proposed integrators and to investigate mean-square rates of convergence.
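A minimal sketch of the splitting idea on a toy scalar SDE with Stratonovich multiplicative noise, not the Vlasov setting of the paper: composing exact sub-flows preserves positivity of the numerical solution by construction; all parameters are hypothetical.

    import numpy as np

    def lie_trotter_step(x, h, lam, sigma, rng):
        """One Lie-Trotter splitting step for the toy Stratonovich SDE
            dX = -lam * X dt + sigma * X o dW,
        composing the exact deterministic flow with the exact noise flow.
        Both sub-flows map positive values to positive values, so positivity
        is preserved exactly."""
        x = x * np.exp(-lam * h)              # deterministic sub-flow
        dW = rng.normal(0.0, np.sqrt(h))
        x = x * np.exp(sigma * dW)            # Stratonovich noise sub-flow (exact)
        return x

    rng = np.random.default_rng(2)
    x, h = 1.0, 1e-2
    for _ in range(1000):
        x = lie_trotter_step(x, h, lam=0.5, sigma=0.3, rng=rng)
    print(x > 0)   # True: positivity preserved along the whole trajectory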
In the error estimation of finite element solutions to the Poisson equation, the shape-regularity assumption is usually imposed on the meshes to be used. In this paper, we show that even if the shape-regularity condition is violated, the standard error estimate can still be obtained, provided that "bad" elements (those violating the shape-regularity or maximum-angle condition) are covered virtually by "good" simplices. A numerical experiment confirms the theoretical result.
In large-scale, data-driven applications, parameters are often known only approximately due to noise and limited data samples. In this paper, we focus on high-dimensional optimization problems with linear constraints under uncertain conditions. To find high-quality solutions for which the violation of the true constraints is limited, we develop a linear shrinkage method that blends random matrix theory and robust optimization principles. It aims to minimize the Frobenius distance between the estimated and the true parameter matrix, especially when dealing with a large and comparable number of constraints and variables. This data-driven method excels in simulations, showing superior noise resilience and more stable performance, both in obtaining high-quality solutions and in adhering to the true constraints, compared with traditional robust optimization. Our findings highlight the effectiveness of the method in improving the robustness and reliability of optimization in high-dimensional, data-driven scenarios.
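A minimal sketch of linear shrinkage toward a structured target in a simulation where the true constraint matrix is known, so the Frobenius-optimal weight can be found by grid search; the zero target and the oracle selection are illustrative assumptions, not the paper's estimator.

    import numpy as np

    def shrink(A_hat, target, alpha):
        """Linear shrinkage of a noisy constraint matrix toward a structured target."""
        return (1.0 - alpha) * A_hat + alpha * target

    # hypothetical simulation: true constraint matrix, noisy estimate, shrinkage target
    rng = np.random.default_rng(3)
    m, n = 200, 250
    A_true = rng.standard_normal((m, n))
    A_hat  = A_true + 0.5 * rng.standard_normal((m, n))   # noisy observation
    target = np.zeros((m, n))                              # simple structured target

    # in a simulation the weight minimizing the Frobenius distance to A_true can be
    # found directly; in practice it must be estimated from the data
    alphas = np.linspace(0.0, 1.0, 51)
    errs = [np.linalg.norm(shrink(A_hat, target, a) - A_true, "fro") for a in alphas]
    best = alphas[int(np.argmin(errs))]
    print(best, min(errs) < np.linalg.norm(A_hat - A_true, "fro"))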
Although metaheuristics have been widely recognized as efficient techniques for solving real-world optimization problems, implementing them from scratch remains difficult for domain experts without programming skills. In this scenario, metaheuristic optimization frameworks are a practical alternative, as they provide a variety of algorithms composed of customizable elements, as well as experimental support. Recently, many engineering problems have required optimizing multiple or even many objectives, increasing the interest in appropriate metaheuristic algorithms and frameworks that can integrate new specific requirements while maintaining the generality and reusability principles they were conceived for. Based on this idea, this paper introduces JCLEC-MO, a Java framework for both multi- and many-objective optimization that enables engineers to apply, or adapt, a great number of multi-objective algorithms with little coding effort. A case study is developed and explained to show how JCLEC-MO can be used to address many-objective engineering problems, which often require the inclusion of domain-specific elements, and to analyze experimental outcomes by means of conveniently connected R utilities.
Knowledge graphs (KGs) of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge graphs are typically incomplete, it is useful to perform knowledge graph completion or link prediction, i.e., to predict whether a relationship not present in the knowledge graph is likely to be true. This paper serves as a comprehensive survey of embedding models of entities and relationships for knowledge graph completion, summarizing up-to-date experimental results on standard benchmark datasets and pointing out potential future research directions.
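As an illustration of the kind of embedding model such surveys cover, here is a minimal sketch of the well-known TransE scoring function, which models a true triple as head + relation being close to tail in embedding space; the embeddings and entities below are hypothetical.

    import numpy as np

    def transe_score(h, r, t):
        """TransE plausibility score: higher (less negative) means the triple
        (head, relation, tail) is more likely to hold."""
        return -np.linalg.norm(h + r - t)

    # hypothetical 50-dimensional entity and relation embeddings
    rng = np.random.default_rng(4)
    dim = 50
    berlin, germany = rng.standard_normal(dim), rng.standard_normal(dim)
    capital_of = germany - berlin + 0.01 * rng.standard_normal(dim)  # near-perfect relation

    print(transe_score(berlin, capital_of, germany))   # close to 0: plausible
    print(transe_score(germany, capital_of, berlin))   # much more negative: implausible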