With the expansion of supermarket operations in China, the vegetable market has grown considerably, and decisions on vegetable procurement costs and allocation quantities have become a pivotal factor in supermarket profitability. This paper analyzes the relationship between pricing and allocation that supermarkets face in vegetable operations and employs optimization algorithms to determine replenishment and pricing strategies. Linear regression is applied to historical product data to establish the relationship between sale price and sales volume for 61 products. By integrating historical vegetable cost data with time information based on the 24 solar terms, a cost prediction model is trained using TCN-Attention. A TOPSIS evaluation model identifies the 32 most market-demanded products. A genetic algorithm is then used to search for the globally optimal vegetable allocation-pricing decision.
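To make the first modeling step concrete, the following is a minimal sketch (not taken from the paper) of fitting a per-product linear relationship between sale price and sales volume; the file name and column names are hypothetical placeholders.

```python
# Minimal sketch: per-product linear regression of sales volume on sale price.
# "sales_history.csv" and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression

sales = pd.read_csv("sales_history.csv")  # columns: product_id, price, volume

models = {}
for pid, group in sales.groupby("product_id"):
    X = group[["price"]].values   # sale price as the single regressor
    y = group["volume"].values    # sales volume as the response
    models[pid] = LinearRegression().fit(X, y)

# Each fitted model gives volume ~ intercept + slope * price, which a downstream
# allocation-pricing optimizer (e.g. a genetic algorithm) can query for demand.
```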
Generalized singular values (GSVs) play an essential role in comparative analysis. In real-world data for comparative analysis, both data matrices are usually numerically low-rank. This paper proposes a randomized algorithm that first approximately extracts bases and then calculates GSVs efficiently. The accuracy of both the basis extraction and the comparative-analysis quantities, namely angular distances, generalized fractions of eigenexpression, and generalized normalized Shannon entropy, is rigorously analyzed. The proposed algorithm is applied to both synthetic data sets and genome-scale expression data sets. Compared to other GSV algorithms, the proposed algorithm achieves the fastest runtime while preserving sufficient accuracy in comparative analysis.
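As an illustration of the general idea (our own sketch, not the paper's algorithm), one can combine a Halko-style randomized range finder for basis extraction with GSVs computed from the reduced pair via the symmetric-definite pencil $(A^\top A, B^\top B)$:

```python
# Sketch only: randomized basis extraction for two numerically low-rank matrices
# A (m1 x n) and B (m2 x n), then generalized singular values from the reduced
# pair via the pencil A^T A x = sigma^2 B^T B x.
import numpy as np
from scipy.linalg import qr, eigh

def randomized_basis(M, k, oversample=10, rng=None):
    """Orthonormal basis approximating the range of M (randomized range finder)."""
    rng = np.random.default_rng(rng)
    Omega = rng.standard_normal((M.shape[1], k + oversample))
    Q, _ = qr(M @ Omega, mode="economic")
    return Q[:, :k]

def generalized_singular_values(A, B, k):
    # Compress the rows of A and B onto low-dimensional bases of their ranges.
    A_r = randomized_basis(A, k).T @ A
    B_r = randomized_basis(B, k).T @ B
    # GSVs sigma solve A^T A x = sigma^2 B^T B x; a tiny ridge keeps the
    # right-hand matrix positive definite in this sketch.
    lam = eigh(A_r.T @ A_r, B_r.T @ B_r + 1e-12 * np.eye(B.shape[1]),
               eigvals_only=True)
    return np.sqrt(np.clip(lam, 0.0, None))
```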
We construct new higher-order implicit-explicit (IMEX) schemes using the generalized scalar auxiliary variable (GSAV) approach for the Landau-Lifshitz equation. These schemes are linear, length-preserving, and only require solving one elliptic equation with constant coefficients at each time step. We show that numerical solutions of these schemes are uniformly bounded without any restriction on the time step size, and establish rigorous error estimates in $l^{\infty}(0,T;H^1(\Omega)) \bigcap l^{2}(0,T;H^2(\Omega))$ of orders 1 to 5 in a unified framework.
Dynamical low-rank (DLR) approximation has gained interest in recent years as a viable solution to the curse of dimensionality in the numerical solution of kinetic equations including the Boltzmann and Vlasov equations. These methods include the projector-splitting and Basis-update & Galerkin (BUG) DLR integrators, and have shown promise in greatly improving the computational efficiency of kinetic solutions. However, this often comes at the cost of conservation of charge, current, and energy. In this work we show how a novel macro-micro decomposition may be used to separate the distribution function into two components, one of which carries the conserved quantities, and the other of which is orthogonal to them. We apply DLR approximation to the latter, and thereby achieve a clean and extensible approach to a conservative DLR scheme which retains the computational advantages of the base scheme. Moreover, our approach requires no change to the mechanics of the DLR approximation, so it is compatible with both the BUG family of integrators and the projector-splitting integrator which we use here. We describe a first-order integrator which can exactly conserve charge and either current or energy, as well as an integrator which exactly conserves charge and energy and exhibits second-order accuracy on our test problems. To highlight the flexibility of the proposed macro-micro decomposition, we implement a pair of velocity space discretizations, and verify the claimed accuracy and conservation properties on a suite of plasma benchmark problems.
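As a toy illustration of such a macro-micro splitting (our sketch, not the paper's scheme), the following separates a discrete velocity distribution into a part spanned by the collision invariants $\{1, v, v^2\}$ and a remainder orthogonal to them in the discrete weighted $L^2$ inner product, so the remainder carries no charge, current, or energy:

```python
# Toy sketch: project f(v) onto span{1, v, v^2} (macro part) and keep the
# orthogonal remainder (micro part), which has zero conserved moments.
import numpy as np

v, w = np.polynomial.legendre.leggauss(64)   # velocity nodes and weights on [-1, 1]
v, w = 6.0 * v, 6.0 * w                      # stretch to [-6, 6]

f = np.exp(-0.5 * (v - 0.5) ** 2) * (1 + 0.1 * np.sin(3 * v))  # sample distribution

# Orthonormalize the collision invariants in the weighted inner product.
U, _ = np.linalg.qr(np.sqrt(w)[:, None] * np.vstack([np.ones_like(v), v, v**2]).T)

f_w = np.sqrt(w) * f
macro = U @ (U.T @ f_w) / np.sqrt(w)         # part carrying the conserved moments
micro = f - macro                            # remainder, orthogonal to the invariants

# Density, current, and energy moments of the micro part are numerically zero:
print([np.sum(w * micro * phi) for phi in (np.ones_like(v), v, v**2)])
```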
We give generators and relations for the hypergraph props of Gaussian relations and positive affine Lagrangian relations. The former extends Gaussian probabilistic processes by completely-uninformative priors, and the latter extends Gaussian quantum mechanics with infinitely-squeezed states. These presentations are given by adding a generator which freely codiscards effects, as well as certain rotations, to the presentation of real affine relations and of real affine Lagrangian relations. The presentation of positive affine Lagrangian relations provides a rigorous justification for many common yet informal calculations in the quantum physics literature involving infinite squeezing. Our presentation naturally extends Menicucci et al.'s graph-theoretic representation of Gaussian quantum states with a representation for Gaussian transformations. Using this graphical calculus, we also give a graphical proof of Braunstein and Kimble's continuous-variable quantum teleportation protocol. We also interpret the LOv-calculus, a diagrammatic calculus for reasoning about passive linear-optical quantum circuits, in our graphical calculus. Moreover, we show how our presentation allows for additional optical operations such as active squeezing.
We consider M-estimation problems in which the target value is determined by a minimizer of an expected functional of a Lévy process. From discrete observations of the Lévy process, we can produce a "quasi-path" by shuffling increments of the process; we call the result a quasi-process. Under a suitable sampling scheme, a quasi-process converges weakly to the true process by virtue of the stationary and independent increments. Using this resampling technique, we can estimate objective functionals in a manner similar to Monte Carlo simulation, and the resulting estimate is available as a contrast function. The M-estimator based on these quasi-processes can be consistent and asymptotically normal.
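A minimal sketch of the quasi-process idea (our illustration, with Brownian-motion increments standing in for a generic Lévy process) is:

```python
# Sketch: build "quasi-paths" by shuffling observed increments of a discretely
# sampled Levy process and use them to approximate an expected functional,
# here E[max_t X_t] purely as an example.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are the observed increments on a regular time grid
# (here: Brownian motion with drift, for illustration only).
n, dt = 500, 0.01
increments = 0.2 * dt + np.sqrt(dt) * rng.standard_normal(n)

def quasi_path(increments, rng):
    # Shuffling preserves the joint law of the increments (stationary and
    # independent), so the reassembled path is again an approximate sample path.
    shuffled = rng.permutation(increments)
    return np.concatenate(([0.0], np.cumsum(shuffled)))

# Monte-Carlo-style estimate of the functional from many quasi-paths.
estimates = [quasi_path(increments, rng).max() for _ in range(2000)]
print(np.mean(estimates))
```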
The high volatility of renewable energies calls for more energy efficiency. Thus, different physical systems need to be coupled efficiently even though they run on various time scales. Here, the port-Hamiltonian (pH) modeling framework comes into play, as it has several advantages: physical properties are encoded in the system structure, and systems running on different time scales can be coupled easily. Additionally, pH systems coupled by energy-preserving conditions are still pH. Furthermore, hydrogen becomes an important player in the energy transition, and unlike for natural gas, its temperature dependence is of importance. Thus, we introduce an infinite-dimensional pH formulation of the compressible non-isothermal Euler equations to model flow with temperature dependence. We set up the underlying Stokes-Dirac structure and deduce the boundary port variables. We introduce coupling conditions into our pH formulation such that the whole network system is itself pH. This is achieved by using energy-preserving coupling conditions, i.e., mass conservation and equality of total enthalpy, at the coupling nodes. Furthermore, a third coupling condition is needed to close the system. Here, equality of the outgoing entropy at coupling nodes is used and included in our systems in a structure-preserving way. Following that, we adapt the structure-preserving approximation methods from the isothermal to the non-isothermal case. Academic numerical examples support our analytical findings.
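For orientation, a standard frictionless 1-D form of the compressible non-isothermal Euler equations on a single pipe, together with node coupling conditions of the kind described above, can be written as (the notation here is ours, not necessarily that of the paper):
$$\partial_t \rho + \partial_x(\rho v) = 0, \qquad \partial_t(\rho v) + \partial_x\big(\rho v^2 + p\big) = 0, \qquad \partial_t E + \partial_x\big((E + p)\,v\big) = 0,$$
with total energy density $E = \rho e + \tfrac{1}{2}\rho v^2$. At a coupling node one then imposes mass conservation, $\sum_k A_k \rho_k v_k = 0$ over the incident pipes with cross-sections $A_k$, and equality of the total specific enthalpy $h_k + \tfrac{1}{2} v_k^2$ across those pipes; a third condition on the outgoing entropy closes the system.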
We explore the possibility of simulating the grade-two fluid model in a geometry related to a contraction rheometer, and we provide details on several key aspects of the computation. We show how the results can be used to determine the viscosity $\nu$ from experimental data. We also explore the identifiability of the grade-two parameters $\alpha_1$ and $\alpha_2$ from experimental data. In particular, as the flow rate varies, the force data appear to be nearly the same for certain distinct pairs of values of $\alpha_1$ and $\alpha_2$; however, we determine a regime for $\alpha_1$ and $\alpha_2$ in which the parameters may be identifiable with a contraction rheometer.
We extend three related results from the analysis of influences of Boolean functions to the quantum setting, namely the KKL Theorem, Friedgut's Junta Theorem, and Talagrand's variance inequality for geometric influences. Our results are derived by a joint use of recently studied hypercontractivity and gradient estimates. These generic tools also allow us to derive generalizations of these results in a general von Neumann algebraic setting beyond the case of the quantum hypercube, including examples in infinite dimensions relevant to quantum information theory, such as continuous-variable quantum systems. Finally, we comment on the implications of our results with regard to noncommutative extensions of isoperimetric-type inequalities, quantum circuit complexity lower bounds, and the learnability of quantum observables.
The problem of constructing optimal factoring automata arises in the context of unification factoring for the efficient execution of logic programs. Given an ordered set of $n$ strings of length $m$, the problem is to construct a trie-like tree structure of minimum size in which the leaves in left-to-right order represent the input strings in the given order. Contrary to standard tries, the order in which the characters of a string are encountered can be different on different root-to-leaf paths. Dawson et al. [ACM Trans. Program. Lang. Syst. 18(5):528--563, 1996] gave an algorithm that solves the problem in time $O(n^2 m (n+m))$. In this paper, we present an improved algorithm with running time $O(n^2m)$.
An interesting case of the well-known Dataset Shift Problem is the classification of Electroencephalogram (EEG) signals in the context of Brain-Computer Interfaces (BCI). The non-stationarity of EEG signals can lead to poor generalisation performance of BCI classification systems used across different sessions, even for the same subject. In this paper, we start from the hypothesis that the Dataset Shift problem can be alleviated by exploiting suitable eXplainable Artificial Intelligence (XAI) methods to locate and transform the characteristics of the input that are relevant for classification. In particular, we focus on an experimental analysis of the explanations produced by several XAI methods for an ML system trained on a typical EEG dataset for emotion recognition. Results show that many relevant components found by the XAI methods are shared across sessions and can be used to build a system able to generalise better. However, relevant components of the input signal also appear to be highly dependent on the input itself.
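As a minimal sketch of this kind of cross-session analysis (our illustration, not the paper's pipeline), one can measure how much the input components deemed relevant by an attribution method overlap between two sessions; here permutation importance stands in for the XAI methods, and the feature matrices are hypothetical EEG feature sets.

```python
# Sketch: overlap of the top-k "relevant" input components across two sessions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def top_k_components(X, y, k=20, seed=0):
    clf = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X, y)
    imp = permutation_importance(clf, X, y, n_repeats=10, random_state=seed)
    return set(np.argsort(imp.importances_mean)[-k:])   # indices of top-k features

def session_overlap(X_s1, y_s1, X_s2, y_s2, k=20):
    top1 = top_k_components(X_s1, y_s1, k)
    top2 = top_k_components(X_s2, y_s2, k)
    return len(top1 & top2) / k   # fraction of relevant components shared
```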