To address task allocation in national economic mobilization, a planning method based on Hierarchical Task Networks (HTN) is proposed. An HTN planning algorithm is designed to solve and optimize task allocation, and a method is explored for handling resource shortages. Finally, an experimental study based on a real task allocation case in national economic mobilization verifies the effectiveness of the proposed method.
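The HTN idea described above can be illustrated with a minimal sketch: compound tasks are decomposed by methods into primitive allocation actions, and allocation fails when no supplier can cover a demand (the resource-shortage case). All names and numbers here are hypothetical; this is not the paper's algorithm.

```python
def allocate(state, task, amount):
    """Primitive action: assign `amount` of capacity from some supplier to a task."""
    for supplier, capacity in state["capacity"].items():
        if capacity >= amount:
            state["capacity"][supplier] -= amount
            state["plan"].append((supplier, task, amount))
            return state
    return None  # resource shortage: no single supplier can cover the demand

def mobilize(state, demands):
    """Method: decompose a mobilization task into one allocation per demand."""
    return [("allocate", task, amount) for task, amount in demands]

def plan(state, tasks):
    """Depth-first decomposition of the task network."""
    for name, *args in tasks:
        if name == "allocate":
            state = allocate(state, *args)
        elif name == "mobilize":
            state = plan(state, mobilize(state, *args))
        if state is None:
            return None  # decomposition failed
    return state

# Hypothetical scenario: two suppliers, two demands.
state = {"capacity": {"plant_A": 100, "plant_B": 50}, "plan": []}
result = plan(state, [("mobilize", [("steel", 80), ("fuel", 40)])])
```

Running the sketch decomposes the single `mobilize` task into two `allocate` primitives, assigning steel to `plant_A` and fuel to `plant_B` once `plant_A`'s remaining capacity is exhausted.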
Designing cable harnesses can be time-consuming and complex due to many design and manufacturing aspects and rules. Automating the design process can help to fulfil these rules, speed up the process, and optimize the design. To accommodate this, we formulate a harness routing optimization problem to minimize cable lengths, maximize bundling by rewarding shared paths, and optimize the cables' spatial location with respect to case-specific information about the routing environment, e.g., zones to avoid. A deterministic and computationally efficient cable harness routing algorithm has been developed to solve the routing problem and is used to generate a set of cable harness topology candidates and approximate the Pareto front. Our approach was tested against a stochastic and an exact solver: our routing algorithm generated objective function values better than those of the stochastic approach and close to those of the exact solver. Our algorithm was able to find solutions, some of them proven to be near-optimal, for three industrial-sized 3D cases within reasonable time (on the order of seconds to minutes), and the computation times were comparable to those of the stochastic approach.
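One simple way to make "rewarding shared paths" concrete is a sequential shortest-path heuristic: route cables one at a time and discount edges already used by earlier cables, so later cables prefer to bundle. This is only an illustrative sketch on a toy graph, not the authors' routing algorithm; the graph and discount factor are made up.

```python
import heapq

def dijkstra(adj, src, dst):
    """Shortest path by edge weight on an adjacency-dict graph; returns the path."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def route_cables(adj, terminals, bundling_discount=0.5):
    """Route cables sequentially; edges used by earlier cables are discounted,
    which rewards shared paths (bundling)."""
    used, routes = set(), []
    for src, dst in terminals:
        eff = {u: {v: (w * bundling_discount if frozenset((u, v)) in used else w)
                   for v, w in nbrs.items()} for u, nbrs in adj.items()}
        path = dijkstra(eff, src, dst)
        used.update(frozenset((a, b)) for a, b in zip(path, path[1:]))
        routes.append(path)
    return routes

# Toy routing environment (undirected, weights = lengths).
adj = {
    "A": {"B": 1.0, "C": 1.1},
    "B": {"A": 1.0, "D": 1.0, "E": 0.3},
    "C": {"A": 1.1, "D": 1.1, "E": 0.1},
    "D": {"B": 1.0, "C": 1.1},
    "E": {"B": 0.3, "C": 0.1},
}
routes = route_cables(adj, [("A", "D"), ("E", "D")])
```

In this toy case the second cable takes the slightly longer corridor through B because the discounted, already-used edge B-D makes bundling cheaper than the geometrically shortest route through C.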
This work presents an abstract framework for the design, implementation, and analysis of the multiscale spectral generalized finite element method (MS-GFEM), a particular numerical multiscale method originally proposed in [I. Babuska and R. Lipton, Multiscale Model. Simul., 9 (2011), pp.~373--406]. MS-GFEM is a partition of unity method employing optimal local approximation spaces constructed from local spectral problems. We establish a general local approximation theory demonstrating exponential convergence with respect to local degrees of freedom under certain assumptions, with explicit dependence on key problem parameters. Our framework applies to a broad class of multiscale PDEs with $L^{\infty}$-coefficients in both continuous and discrete, finite element settings, including highly indefinite problems (convection-dominated diffusion, as well as the high-frequency Helmholtz, Maxwell and elastic wave equations with impedance boundary conditions), and higher-order problems. Notably, we prove a local convergence rate of $O(e^{-cn^{1/d}})$ for MS-GFEM for all these problems, improving upon the $O(e^{-cn^{1/(d+1)}})$ rate shown by Babuska and Lipton. Moreover, based on the abstract local approximation theory for MS-GFEM, we establish a unified framework for deriving low-rank approximations for multiscale PDEs. This framework applies to the aforementioned problems, proving that the associated Green's functions admit an $O(|\log\epsilon|^{d})$-term separable approximation on well-separated domains with error $\epsilon>0$. Our analysis improves and generalizes the result in [M. Bebendorf and W. Hackbusch, Numerische Mathematik, 95 (2003), pp.~1--28] where an $O(|\log\epsilon|^{d+1})$-term separable approximation was proved for Poisson-type problems.
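The low-rank claim in the abstract can be restated as follows (a paraphrase of the statement above; the precise norm and separation condition are as specified in the paper):

```latex
% Separable (low-rank) approximation of the Green's function:
% on well-separated domains $D_1$, $D_2$, there exist functions
% $u_k$, $v_k$ such that
\[
  \Big| \, G(x,y) - \sum_{k=1}^{K} u_k(x)\, v_k(y) \, \Big| \le \epsilon
  \quad \text{for } x \in D_1,\ y \in D_2,
  \qquad K = O\big(|\log\epsilon|^{d}\big),
\]
% improving on the $K = O(|\log\epsilon|^{d+1})$ bound of
% Bebendorf--Hackbusch for Poisson-type problems.
```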
The maximum likelihood estimator (MLE) is pivotal in statistical inference, yet its application is often hindered by the absence of closed-form solutions for many models. This poses challenges in real-time computation scenarios, particularly within embedded systems technology, where numerical methods are impractical. This study introduces a generalized form of the MLE that yields closed-form estimators under certain conditions. We derive the asymptotic properties of the proposed estimator and demonstrate that our approach retains key properties such as invariance under one-to-one transformations, strong consistency, and an asymptotic normal distribution. The effectiveness of the generalized MLE is exemplified through its application to the Gamma, Nakagami, and Beta distributions, showcasing improvements over the traditional MLE. Additionally, we extend this methodology to a bivariate gamma distribution, successfully deriving closed-form estimators. This advancement presents significant implications for real-time statistical analysis across various applications.
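As a concrete illustration of closed-form likelihood-type estimation for the gamma distribution, the sketch below implements the known closed-form shape/scale estimators of Ye and Chen (derived from generalized likelihood equations). Whether these coincide exactly with the estimators in the study above is an assumption; they are shown only to make the idea tangible.

```python
import math
import random

def gamma_closed_form(x):
    """Closed-form shape/scale estimators for the gamma distribution
    (Ye-Chen-type likelihood estimators; illustrative, not necessarily
    the exact estimator derived in the paper)."""
    n = len(x)
    s_x = sum(x)
    s_lx = sum(math.log(v) for v in x)
    s_xlx = sum(v * math.log(v) for v in x)
    denom = n * s_xlx - s_lx * s_x
    shape = n * s_x / denom      # alpha-hat
    scale = denom / n ** 2       # theta-hat
    return shape, scale

# Sanity check on simulated data from Gamma(shape=2, scale=3).
random.seed(0)
sample = [random.gammavariate(2.0, 3.0) for _ in range(100_000)]
shape_hat, scale_hat = gamma_closed_form(sample)
```

Note the algebraic identity `shape_hat * scale_hat == sample mean`, so the estimated mean is exact by construction; no iterative solver is needed, which is the point for real-time/embedded use.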
By establishing an interesting connection between ordinary Bell polynomials and rational convolution powers, some composition and inverse relations of Bell polynomials, as well as explicit expressions for convolution roots of sequences, are obtained. Building on these results, a new method based on prime factorization is proposed for the calculation of partial Bell polynomials. It is shown that this method is more efficient than the conventional recurrence procedure for computing Bell polynomials in most cases, requiring far fewer arithmetic operations. A detailed analysis of the computational complexity is provided, followed by some numerical evaluations.
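For reference, the conventional recurrence that the proposed method is compared against can be sketched directly (this is the standard textbook recurrence, not the paper's prime-factorization method):

```python
from functools import lru_cache
from math import comb

def bell_partial(n, k, x):
    """Partial (incomplete) Bell polynomial B_{n,k} evaluated at
    x = (x_1, ..., x_{n-k+1}), via the conventional recurrence
    B_{n,k} = sum_{i=1}^{n-k+1} C(n-1, i-1) * x_i * B_{n-i,k-1},
    with B_{0,0} = 1 and B_{n,0} = B_{0,k} = 0 otherwise."""
    @lru_cache(maxsize=None)
    def B(n, k):
        if n == 0 and k == 0:
            return 1
        if n == 0 or k == 0:
            return 0
        return sum(comb(n - 1, i - 1) * x[i - 1] * B(n - i, k - 1)
                   for i in range(1, n - k + 2))
    return B(n, k)
```

For example, `bell_partial(3, 2, (2, 5))` evaluates $B_{3,2}(x_1,x_2)=3x_1x_2$ at $x_1=2$, $x_2=5$, giving 30. Even with memoization, each $B_{n,k}$ costs a sum of up to $n-k+1$ products, which is the operation count the prime-factorization approach is claimed to beat.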
Feedforward neural networks (FNNs) are typically viewed as pure prediction algorithms, and their strong predictive performance has led to their use in many machine-learning applications. However, their flexibility comes with an interpretability trade-off; thus, FNNs have been historically less popular among statisticians. Nevertheless, classical statistical theory, such as significance testing and uncertainty quantification, is still relevant. Supplementing FNNs with methods of statistical inference, and covariate-effect visualisations, can shift the focus away from black-box prediction and make FNNs more akin to traditional statistical models. This can allow for more inferential analysis, and, hence, make FNNs more accessible within the statistical-modelling context.
Quantum entanglement is a fundamental property commonly used in various quantum information protocols and algorithms. Nonetheless, the problem of identifying entanglement has still not reached a general solution for systems larger than two qubits. In this study, we use deep convolutional neural networks, a type of supervised machine learning, to identify quantum entanglement for any bipartition in a 3-qubit system. We demonstrate that training the model on synthetically generated datasets of random density matrices, excluding the challenging positive-under-partial-transposition entangled states (PPTES), which cannot be identified (and correctly labeled) in general, leads to good model accuracy even for PPTES, which were outside the training data. Our aim is to enhance the model's generalization on PPTES. By applying entanglement-preserving symmetry operations through a triple Siamese network trained in a semi-supervised manner, we improve the model's accuracy and ability to recognize PPTES. Moreover, by constructing an ensemble of Siamese models, even better generalization is observed, in analogy with the idea of finding separate types of entanglement witnesses for different classes of states. The neural models' code and training schemes, as well as data generation procedures, are available at github.com/Maticraft/quantum_correlations.
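The PPT (Peres-Horodecki) criterion underlying the notion of PPTES can be sketched directly: a state whose partial transpose has a negative eigenvalue is certainly entangled, while PPTES are entangled states that nevertheless pass this test. This is a minimal two-qubit illustration of the criterion, not the paper's neural approach.

```python
import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    """Partial transpose over the second subsystem of a density matrix
    on C^{dA} (x) C^{dB}."""
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)          # indices (i, j; k, l)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)  # swap j <-> l

def is_ppt(rho, dims=(2, 2), tol=1e-10):
    """True if the partial transpose is positive semidefinite.
    For 2x2 (and 2x3) systems, non-PPT is equivalent to entangled."""
    eigvals = np.linalg.eigvalsh(partial_transpose(rho, dims))
    return bool(eigvals.min() >= -tol)

# Maximally entangled Bell state |Phi+> = (|00> + |11>)/sqrt(2): not PPT.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho_bell = np.outer(phi, phi)

# Maximally mixed state: separable, hence PPT.
rho_mixed = np.eye(4) / 4
```

For the Bell state the partial transpose has eigenvalue $-1/2$, so `is_ppt(rho_bell)` is false; for 3-qubit systems, however, PPT states can still be entangled, which is exactly what makes PPTES hard to label.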
Neural network representations contain structure beyond what was present in the training labels. For instance, representations of images that are visually or semantically similar tend to lie closer to each other than to dissimilar images, regardless of their labels. Clustering these representations can thus provide insights into dataset properties as well as the network internals. In this work, we study how the many design choices involved in neural network training affect the clusters formed in the hidden representations. To do so, we establish an evaluation setup based on the BREEDS hierarchy, for the task of subclass clustering after training models with only superclass information. We isolate the training dataset and architecture as important factors affecting clusterability. Datasets with labeled classes consisting of unrelated subclasses yield much better clusterability than those following a natural hierarchy. When using pretrained models to cluster representations on downstream datasets, models pretrained on subclass labels provide better clusterability than models pretrained on superclass labels, but only when there is a high degree of domain overlap between the pretraining and downstream data. Architecturally, we find that normalization strategies affect which layers yield the best clustering performance, and, surprisingly, Vision Transformers attain lower subclass clusterability than ResNets.
Ground settlement prediction during the process of mechanized tunneling is of paramount importance and remains a challenging research topic. Typically, two paradigms exist: a physics-driven approach utilizing process-oriented computational simulation models for the tunnel-soil interaction and the settlement prediction, and a data-driven approach employing machine learning techniques to establish mappings between influencing factors and the ground settlement. To integrate the advantages of both approaches and to assimilate the data from different sources, we propose a multi-fidelity deep operator network (DeepONet) framework, leveraging the recently developed operator learning methods. The presented framework comprises two components: a low-fidelity subnet that captures the fundamental ground settlement patterns obtained from finite element simulations, and a high-fidelity subnet that learns the nonlinear correlation between numerical models and real engineering monitoring data. A pre-processing strategy for causality is adopted to account for the spatio-temporal characteristics of the settlement during tunnel excavation. Transfer learning is utilized to reduce the training cost for the low-fidelity subnet. The results show that the proposed method can effectively capture the physical information provided by the numerical simulations and accurately fit measured data as well. Remarkably, even with very limited noisy monitoring data, the proposed model can achieve rapid, accurate, and robust predictions of the full-field ground settlement in real time during mechanized tunnel excavation.
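The DeepONet building block used here can be sketched in a few lines: a branch net encodes a sampled input function, a trunk net encodes a query coordinate, and their dot product gives the operator output. The sketch below uses untrained random weights and hypothetical layer sizes purely to show the architecture and shapes; the paper stacks two such subnets into a multi-fidelity model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight tanh MLP (untrained; for illustrating shapes only)."""
    params = [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
              for m, n in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for i, (W, b) in enumerate(params):
            x = x @ W + b
            if i < len(params) - 1:
                x = np.tanh(x)
        return x
    return forward

# DeepONet: G(u)(y) ~ branch(u) . trunk(y)
m, p = 50, 32                   # sensors for the input function, latent width
branch = mlp([m, 64, p])        # encodes e.g. excavation parameters over time
trunk = mlp([3, 64, p])         # encodes a spatio-temporal query point (x, y, t)

u = rng.standard_normal((1, m))       # one sampled input function
ys = rng.standard_normal((100, 3))    # 100 query points
settlement = branch(u) @ trunk(ys).T  # shape (1, 100): predicted field values
```

Because the trunk net takes arbitrary coordinates, a trained network of this form can evaluate the full settlement field at any queried location and time, which is what enables the real-time, full-field predictions described above.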
Optimal allocation of resources across sub-units in the context of centralized decision-making systems such as bank branches or supermarket chains is a classical application of operations research and management science. In this paper, we develop quantile allocation models to examine how much the output and productivity could potentially increase if the resources were efficiently allocated between units. We increase robustness to random noise and heteroscedasticity by utilizing the local estimation of multiple production functions using convex quantile regression. The quantile allocation models then rely on the estimated shadow prices instead of detailed data of units and allow the entry and exit of units. Our empirical results on Finland's business sector reveal a large potential for productivity gains through better allocation, keeping the current technology and resources fixed.
Self-citations are a key topic in evaluative bibliometrics because they can artificially inflate citation-related performance indicators. Recently, self-citations defined at the largest scale, i.e., country self-citations, have started to attract the attention of researchers and policymakers. In fact, according to recent research, the anomalous trends in the country self-citation rates of some countries, such as Italy, have been induced by the distorting effect of citation metrics-centered science policies. In the present study, we investigate the trends of country self-citations in 50 countries worldwide over the period 1996-2019 using Scopus data. Results show that country self-citations have decreased over time in most countries. Twelve countries (Colombia, Egypt, Indonesia, Iran, Italy, Malaysia, Pakistan, Romania, Russian Federation, Saudi Arabia, Thailand, and Ukraine), however, exhibit different behavior, with anomalous trends of self-citations. We argue that these anomalies should be attributed to the aggressive science policies adopted by these countries in recent years, which are all characterized by direct or indirect incentives for citations. Our analysis confirms that when bibliometric indicators are integrated into systems of incentives, they can rapidly and visibly affect the citation behavior of entire countries.
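One common way to operationalize the indicator studied above is the share of a country's received citations that originate from the same country. The toy sketch below uses this definition for illustration; the study's exact Scopus-based definition may differ.

```python
def country_self_citation_rate(citations, country):
    """Fraction of citations received by `country` whose citing paper is
    also from `country` (one common definition of the country
    self-citation rate)."""
    received = [(src, dst) for src, dst in citations if dst == country]
    if not received:
        return 0.0
    self_cites = sum(1 for src, dst in received if src == country)
    return self_cites / len(received)

# Toy citation links: (citing country, cited country).
links = [("IT", "IT"), ("IT", "IT"), ("US", "IT"), ("DE", "IT"), ("IT", "US")]
rate_it = country_self_citation_rate(links, "IT")  # 2 of 4 received citations
```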