As global attention to renewable and clean energy grows, research on and deployment of microgrids have become paramount. This paper explores the relationship between the operational and environmental costs of microgrids through multi-objective optimization models. By combining several optimization algorithms, namely the Genetic Algorithm, Simulated Annealing, Ant Colony Optimization, and Particle Swarm Optimization, we propose an integrated approach to microgrid optimization. Simulation results show that these algorithms produce different dispatch results under economic and environmental dispatch, revealing the distinct roles of diesel generators and micro gas turbines in microgrids. Overall, this study offers in-depth insights and practical guidance for microgrid design and operation.
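To make the economic-versus-environmental trade-off concrete, the following minimal sketch solves a weighted-sum dispatch for a hypothetical two-unit microgrid (one diesel generator, one micro gas turbine). All cost and emission coefficients, the demand, and the bounds are invented for illustration, and any of the metaheuristics named above could replace the gradient-based solver used here.

```python
# Weighted-sum economic/environmental dispatch for a hypothetical two-unit
# microgrid. All coefficients below are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

demand = 80.0  # kW load to be met (assumed)

def fuel_cost(p):
    diesel, turbine = p
    return 0.04 * diesel**2 + 2.0 * diesel + 0.03 * turbine**2 + 1.5 * turbine

def emission_cost(p):
    diesel, turbine = p
    return 0.8 * diesel + 0.3 * turbine  # diesel assumed dirtier per kW

def dispatch(weight):
    """weight=1 -> purely economic dispatch, weight=0 -> purely environmental."""
    objective = lambda p: weight * fuel_cost(p) + (1 - weight) * emission_cost(p)
    res = minimize(objective, x0=[demand / 2, demand / 2],
                   bounds=[(0, 60), (0, 60)],
                   constraints={"type": "eq", "fun": lambda p: p.sum() - demand})
    return res.x

for w in (1.0, 0.5, 0.0):
    print(w, dispatch(w))  # dispatch shifts toward the cleaner unit as w -> 0
```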
Big data is ubiquitous in practice, but it also imposes a heavy computational burden. To reduce this cost while preserving the effectiveness of parameter estimators, an optimal subset sampling method is proposed to estimate the parameters in marginal models with massive longitudinal data. The optimal subsampling probabilities are derived, and the corresponding asymptotic properties are established to ensure the consistency and asymptotic normality of the estimator. Extensive simulation studies are carried out to evaluate the performance of the proposed method for continuous, binary, and count data and with four different working correlation matrices. A depression dataset is used to illustrate the proposed method.
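The following is a minimal sketch of the idea behind optimal subsampling, simplified to a cross-sectional logistic marginal model with an independence working correlation (the paper's setting is longitudinal with general working correlations). The data, pilot size, and subsample size are invented, and the score-norm-based probabilities are only one common choice.

```python
# Pilot -> optimal probabilities -> weighted refit, on simulated data.
import numpy as np

def fit_logistic(Xs, ys, w=None, iters=25):
    """Newton-Raphson for (optionally weighted) logistic regression."""
    w = np.ones(len(ys)) if w is None else w
    b = np.zeros(Xs.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xs @ b))
        grad = Xs.T @ (w * (ys - p))
        hess = (Xs * (w * p * (1 - p))[:, None]).T @ Xs
        b += np.linalg.solve(hess, grad)
    return b

rng = np.random.default_rng(0)
n, d, r = 100_000, 5, 2_000                      # full size, dims, subsample size
X = rng.normal(size=(n, d))
beta_true = np.linspace(0.5, -0.5, d)
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

# Step 1: pilot estimate from a small uniform subsample.
pilot = rng.choice(n, 1_000, replace=False)
beta_pilot = fit_logistic(X[pilot], y[pilot])

# Step 2: subsampling probabilities proportional to the score norm.
p_hat = 1 / (1 + np.exp(-X @ beta_pilot))
scores = np.abs(y - p_hat) * np.linalg.norm(X, axis=1)
probs = scores / scores.sum()

# Step 3: draw the subsample and refit with inverse-probability weights.
idx = rng.choice(n, r, replace=True, p=probs)
beta_sub = fit_logistic(X[idx], y[idx], w=1 / probs[idx])
print(np.round(beta_sub, 2), beta_true)
```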
In high performance computing environments, we observe an ongoing increase in the available numbers of cores. This development calls for re-emphasizing performance (scalability) analysis and speedup laws as suggested in the literature (e.g., Amdahl's law and Gustafson's law), with a focus on asymptotic performance. Understanding speedup and efficiency issues of algorithmic parallelism is useful for several purposes, including the optimization of system operations, temporal predictions on the execution of a program, and the analysis of asymptotic properties and the determination of speedup bounds. However, the literature is fragmented and exhibits a large and heterogeneous collection of speedup models and laws. This fragmentation makes it challenging to obtain an overview of the models and their relationships, to identify the determinants of performance in a given algorithmic and computational context, and, finally, to determine the applicability of performance models and laws to a particular parallel computing setting. In this work, we provide a generic speedup (and thus also efficiency) model for homogeneous computing environments. Our approach generalizes many prominent models suggested in the literature and allows us to show that they can be considered special cases of a unifying approach. The genericity of the unifying speedup model is achieved through parameterization. Considering combinations of parameter ranges, we identify six different asymptotic speedup cases and eight different asymptotic efficiency cases. Jointly applying these speedup and efficiency cases, we derive eleven scalability cases, from which we build a scalability typology. Researchers can draw upon our typology to classify their speedup model and to determine the asymptotic behavior when the number of parallel processing units increases. In addition, our results may be used to address various extensions of our setting.
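As a concrete illustration, Amdahl's law (fixed problem size) and Gustafson's law (scaled problem size) are two prominent special cases that any unifying speedup model must recover. The snippet below evaluates both, with $f$ the parallelizable fraction of the work, and shows their contrasting asymptotic behavior.

```python
# Amdahl's and Gustafson's laws as functions of the number of processors p.
def amdahl(p, f):
    return 1.0 / ((1.0 - f) + f / p)

def gustafson(p, f):
    return (1.0 - f) + f * p

for p in (1, 16, 256, 4096):
    print(p, round(amdahl(p, 0.95), 2), round(gustafson(p, 0.95), 1))
# Amdahl saturates at 1/(1-f) = 20 as p -> infinity (a bounded asymptotic
# speedup case, with efficiency S/p -> 0); Gustafson grows linearly in p
# (an unbounded case with constant asymptotic efficiency f).
```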
We present a formulation for high-order generalized periodicity conditions in the context of a high-order electromechanical theory including flexoelectricity, strain gradient elasticity and gradient dielectricity, with the goal of studying periodic architected metamaterials. Such a theory results in fourth-order governing partial differential equations, and the periodicity conditions involve continuity across the periodic boundary of the primal fields (displacement and electric potential) and their normal derivatives, as well as continuity of the corresponding dual generalized forces (tractions, double tractions, surface charge density and double surface charge density). Rather than imposing these conditions numerically as explicit constraints, we develop an approximation space which fulfils generalized periodicity by construction. Our method naturally allows us to impose general macroscopic fields (strains/stresses and electric fields/electric displacements) along arbitrary directions, enabling the characterization of material anisotropy. We apply the proposed method to study periodic architected metamaterials with apparent piezoelectricity. We first verify the method by directly comparing its results with those of a large periodic structure, then apply it to evaluate the anisotropic apparent piezoelectricity of a geometrically polarized 2D lattice, and finally demonstrate the application of the method to a 3D architected metamaterial.
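As a loose 1D analogue of an approximation space that is periodic by construction, the sketch below writes the field as a prescribed affine macroscopic part plus a fluctuation expanded in a truncated Fourier basis, which is periodic together with all of its derivatives, so primal and normal-derivative continuity across the cell boundary hold automatically. This only illustrates the idea; it is not the paper's high-order discretization.

```python
# 1D illustration: total field = imposed macroscopic gradient + periodic
# fluctuation; periodicity holds by construction of the basis.
import numpy as np

L = 1.0            # unit cell size (assumed)
E_macro = 0.05     # imposed macroscopic gradient, e.g. an average strain
n_modes = 4

def field(x, coeffs):
    """u(x) = E_macro*x + Fourier fluctuation; coeffs has shape (n_modes, 2)."""
    u = E_macro * x
    for k, (a, b) in enumerate(coeffs, start=1):
        u += a * np.cos(2 * np.pi * k * x / L) + b * np.sin(2 * np.pi * k * x / L)
    return u

rng = np.random.default_rng(1)
coeffs = rng.normal(scale=0.01, size=(n_modes, 2))
u = field(np.array([0.0, L]), coeffs)
# The fluctuation (and every derivative of it) matches at the boundary, so
# the jump across the cell equals exactly the imposed macroscopic part:
print(u[1] - u[0], "==", E_macro * L)
```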
We propose Riemannian preconditioned algorithms for the tensor completion problem via tensor ring decomposition. A new Riemannian metric is developed on the product space of the mode-2 unfolding matrices of the core tensors in tensor ring decomposition. The construction of this metric aims to approximate the Hessian of the cost function by its diagonal blocks, paving the way for various Riemannian optimization methods. Specifically, we propose the Riemannian gradient descent and Riemannian conjugate gradient algorithms. We prove that both algorithms globally converge to a stationary point. In the implementation, we exploit the tensor structure and adopt an economical procedure that avoids forming and computing with large matrices in the gradients, which significantly reduces the computational cost. Numerical experiments on various synthetic and real-world datasets -- movie ratings, hyperspectral images, and high-dimensional functions -- suggest that the proposed algorithms are more efficient and have better reconstruction ability than existing alternatives.
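For readers unfamiliar with the format, the sketch below reconstructs a full tensor from tensor ring cores and extracts the mode-2 unfolding of a core, which is the object the proposed metric lives on. The sizes are illustrative, the unfolding convention may differ from the paper's, and the preconditioned algorithms themselves are not reproduced.

```python
# Tensor ring reconstruction: each core G_k has shape (r_k, n_k, r_{k+1})
# with r_d = r_0, and T(i_1,...,i_d) = trace(G_1[:,i_1,:] ... G_d[:,i_d,:]).
import numpy as np

def tr_full(cores):
    """Contract a list of TR cores into the full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.einsum('amb,bnc->amnc', out, core)     # join on shared rank
        out = out.reshape(out.shape[0], -1, out.shape[-1])
    return np.einsum('aia->i', out).reshape([c.shape[1] for c in cores])

rng = np.random.default_rng(0)
shape, rank = (4, 5, 6), 3
cores = [rng.normal(size=(rank, n, rank)) for n in shape]
T = tr_full(cores)
print(T.shape)  # (4, 5, 6)

# Mode-2 unfolding of the first core: rows indexed by the mode dimension.
U2 = cores[0].transpose(1, 0, 2).reshape(shape[0], -1)  # (n_1, r_0 * r_1)
```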
Randomness in the void distribution within a ductile metal complicates quantitative modeling of damage evolution through the void growth to coalescence failure process. Though the sequence of micro-mechanisms leading to ductile failure is known from unit cell models, often based on assumptions of a regular distribution of voids, the effect of randomness remains a challenge. In the present work, mesoscale unit cell models, each containing an ensemble of four voids of equal size that are randomly distributed, are used to find statistical effects on the yield surface of the homogenized material. A yield locus is found based on a mean yield surface and a standard deviation of yield points obtained from 15 realizations of the four-void unit cells. It is found that the classical GTN model very closely agrees with the mean of the yield points extracted from the unit cell calculations with random void distributions, while the standard deviation $\textbf{S}$ varies with the imposed stress state. It is shown that the standard deviation is nearly zero for stress triaxialities $T\leq1/3$, while it rapidly increases for triaxialities above $T\approx 1$, reaching maximum values of $\textbf{S}/\sigma_0\approx0.1$ at $T \approx 4$. At even higher triaxialities it decreases slightly. The results indicate that the dependence of the standard deviation on the stress state follows from variations in the deformation mechanism, since a well-correlated variation is found for the volume fraction of the unit cell that deforms plastically at yield. Thus, the random void distribution activates different complex localization mechanisms at high stress triaxialities that differ from the ligament thinning mechanism forming the basis for the classical GTN model. A method for introducing the effect of randomness into the GTN continuum model is presented, and an excellent comparison to the unit cell yield locus is achieved.
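For reference, the classical GTN yield condition that the mean unit cell response was found to follow can be evaluated as below; the parameter values are common textbook choices ($q_1=1.5$, $q_2=1$, $q_3=q_1^2$), not the paper's calibration.

```python
# GTN yield surface: equivalent yield stress vs. stress triaxiality T.
import numpy as np

def gtn_phi(sig_eq, sig_m, sig0, f, q1=1.5, q2=1.0, q3=2.25):
    """GTN yield function; the material yields when gtn_phi == 0."""
    return (sig_eq / sig0) ** 2 + 2 * q1 * f * np.cosh(1.5 * q2 * sig_m / sig0) \
           - 1 - q3 * f ** 2

sig0, f = 1.0, 0.04  # illustrative yield stress and porosity
for T in (1 / 3, 1.0, 2.0, 4.0):
    # Solve gtn_phi(s, T*s) = 0 for s = sig_eq by bisection.
    lo, hi = 1e-6, sig0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gtn_phi(mid, T * mid, sig0, f) < 0 else (lo, mid)
    print(f"T={T:.2f}  sig_eq/sig0={mid:.3f}")
# A randomness-aware locus would place a band of width ~S(T) around this
# mean surface, with S(T) the stress-state-dependent standard deviation.
```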
In harsh environments, organisms may self-organize into spatially patterned systems in various ways. So far, studies of ecosystem spatial self-organization have primarily focused on apparent order reflected in regular patterns. However, self-organized ecosystems may also possess cryptic order that can be unveiled only through certain quantitative analyses. Here we show that disordered hyperuniformity, a striking class of hidden order, can exist in spatially self-organized vegetation landscapes. By analyzing high-resolution remotely sensed images across the American drylands, we demonstrate that it is not uncommon to find disordered hyperuniform vegetation states characterized by suppressed density fluctuations at long range. Such long-range hyperuniformity has been documented in a wide range of microscopic systems; our finding expands this domain to natural landscape ecological systems. We use theoretical modeling to propose that disordered hyperuniform vegetation patterning can arise from three generalized mechanisms prevalent in dryland ecosystems: (1) critical absorbing states driven by an ecological legacy effect, (2) scale-dependent feedbacks driven by plant-plant facilitation and competition, and (3) density-dependent aggregation driven by plant-sediment feedbacks. Our modeling results also show that disordered hyperuniform patterns can help ecosystems cope with arid conditions through enhanced soil moisture acquisition. However, this advantage may come at the cost of slower recovery of ecosystem structure upon perturbation. Our work highlights that disordered hyperuniformity, as a distinguishable but underexplored state of ecosystem self-organization, merits systematic study to better understand its underlying mechanisms, functioning, and resilience.
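For context, hyperuniformity is commonly diagnosed through the structure factor $S(k)$, which must vanish as $k \to 0$. The sketch below computes $S(k)$ for a synthetic point pattern (a Poisson pattern, which is not hyperuniform) as a baseline; plant locations extracted from imagery would be analyzed the same way.

```python
# Structure factor S(k) of a 2D point pattern in a periodic unit box.
import numpy as np

def structure_factor(points, k_vecs):
    """S(k) = |sum_j exp(-i k.x_j)|^2 / N for each wave vector k."""
    phases = points @ k_vecs.T                # (N, K) array of k.x_j
    amps = np.exp(-1j * phases).sum(axis=0)
    return np.abs(amps) ** 2 / len(points)

rng = np.random.default_rng(0)
pts = rng.random((4000, 2))                   # Poisson pattern: S(k) ~ 1

# Smallest wave vectors compatible with the unit periodic box:
ks = 2 * np.pi * np.arange(1, 6)
k_vecs = np.column_stack([ks, np.zeros_like(ks)])
print(structure_factor(pts, k_vecs))          # ~1 everywhere: not hyperuniform
# A disordered hyperuniform pattern would show these small-k values -> 0.
```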
Electrodermal activity (EDA) is considered a standard marker of sympathetic activity. However, traditional EDA measurement requires electrodes in steady contact with the skin. Can sympathetic arousal be measured using only an optical sensor, such as an RGB camera? This paper presents a novel approach to infer sympathetic arousal by measuring the peripheral blood flow on the face or hand optically. We contribute a self-recorded dataset of 21 participants, comprising synchronized videos of participants' faces and palms and gold-standard EDA and photoplethysmography (PPG) signals. Our results show that we can measure peripheral sympathetic responses that closely correlate with the ground truth EDA. We obtain median correlations of 0.57 to 0.63 between our inferred signals and the ground truth EDA using only videos of the participants' palms or foreheads or PPG signals from the foreheads or fingers. We also show that sympathetic arousal is best inferred from the forehead, finger, or palm.
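To illustrate the underlying signal-processing idea (not the paper's exact pipeline): sympathetic arousal modulates peripheral blood flow, so a slowly varying component of the PPG waveform may track EDA. The sketch below amplitude-modulates a synthetic pulse wave with a slowly varying arousal state, recovers the slow envelope, and correlates it with a noisy EDA-like ground truth; all signals, rates, and cutoffs are assumptions.

```python
# Synthetic demonstration: PPG amplitude envelope vs. EDA-like ground truth.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(0)
fs = 30.0                                     # camera frame rate (assumed)
t = np.arange(0, 120, 1 / fs)
arousal = np.cumsum(rng.normal(size=t.size))  # slowly varying arousal state
arousal /= arousal.std()
ppg = (1 + 0.1 * arousal) * np.sin(2 * np.pi * 1.2 * t)  # amplitude-modulated pulse
eda = arousal + 0.3 * rng.normal(size=t.size)             # noisy ground truth

envelope = np.abs(hilbert(ppg))               # beat-to-beat pulse amplitude
b, a = butter(2, 0.1 / (fs / 2), btype="low") # keep only the slow band
inferred = filtfilt(b, a, envelope)

print(f"correlation with ground-truth EDA: {np.corrcoef(inferred, eda)[0, 1]:.2f}")
```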
This study explores the integration of the hyper-power sequence, a method commonly employed for approximating the Moore-Penrose inverse, to enhance the effectiveness of an existing preconditioner. The approach is closely related to polynomial preconditioning based on Neumann series. We commence with a state-of-the-art matrix-free preconditioner designed for the saddle point system derived from isogeometric structure-preserving discretization of the Stokes equations. Our results demonstrate that incorporating multiple iterations of the hyper-power method enhances the effectiveness of the preconditioner, leading to a substantial reduction in both iteration counts and overall solution time for simulating Stokes flow within a 3D lid-driven cavity. Through a comprehensive analysis, we assess the stability, accuracy, and numerical cost associated with the proposed scheme.
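For concreteness, the order-$p$ hyper-power iteration updates $X_{k+1} = X_k\,(I + R_k + R_k^2 + \cdots + R_k^{p-1})$ with residual $R_k = I - AX_k$, reducing to the Newton-Schulz iteration for $p=2$. The sketch below runs the plain, dense iteration on a small matrix; the paper instead applies the recurrence matrix-free inside a preconditioner.

```python
# Hyper-power iteration converging to the Moore-Penrose inverse of A.
import numpy as np

def hyper_power(A, order=3, iters=20):
    m = A.shape[0]
    X = A.T / np.linalg.norm(A, 2) ** 2   # X0 = A^T / sigma_max^2 ensures convergence
    I = np.eye(m)
    for _ in range(iters):
        R = I - A @ X
        S, P = I.copy(), I.copy()
        for _ in range(order - 1):        # S = I + R + ... + R^(order-1)
            P = P @ R
            S = S + P
        X = X @ S
    return X

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))
X = hyper_power(A)
print(np.linalg.norm(X - np.linalg.pinv(A)))  # ~ machine precision
```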
Consistency of the maximum likelihood estimator for mixtures of elliptically symmetric distributions, viewed as an estimator of its population version, is shown, where the underlying distribution $P$ is nonparametric and does not necessarily belong to the class of mixtures on which the estimator is based. In a situation where $P$ is a mixture of sufficiently well separated but nonparametric distributions, it is shown that the components of the population version of the estimator correspond to the well-separated components of $P$. This provides some theoretical justification for the use of such estimators for cluster analysis in case $P$ has well-separated subpopulations, even if these subpopulations differ from what the mixture model assumes.
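A small simulation conveys the flavor of the result: fit a Gaussian (hence elliptically symmetric) mixture MLE to data whose two subpopulations are well separated but skewed, so that $P$ is not itself a Gaussian mixture; the fitted components nevertheless align with the subpopulations. The data and settings are invented, and scikit-learn is assumed available.

```python
# Gaussian mixture MLE on non-Gaussian, well-separated subpopulations.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
g1 = rng.exponential(1.0, size=(500, 2))        # skewed cluster near the origin
g2 = 10 + rng.exponential(1.0, size=(500, 2))   # skewed cluster shifted far away
X = np.vstack([g1, g2])

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(np.round(gm.means_, 2))    # close to the subpopulation means (~1 and ~11)
print(np.round(gm.weights_, 2))  # ~ [0.5, 0.5]
```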
We investigate the use of multilevel Monte Carlo (MLMC) methods for estimating the expectation of discretized random fields. Specifically, we consider a setting in which the input and output vectors of the numerical simulators have inconsistent dimensions across the multilevel hierarchy. This requires the introduction of grid transfer operators borrowed from multigrid methods. Starting from a simple 1D illustration, we demonstrate numerically that the resulting MLMC estimator degrades the estimation of high-frequency components of the discretized expectation field compared to a Monte Carlo (MC) estimator. By adapting mathematical tools initially developed for multigrid methods, we perform a theoretical spectral analysis of the MLMC estimator of the expectation of discretized random fields, in the specific case of linear, symmetric and circulant simulators. This analysis provides a spectral decomposition of the variance into contributions associated with each scale component of the discretized field. We then propose improved MLMC estimators using a filtering mechanism similar to the smoothing process of multigrid methods. The filtering operators improve the estimation of both the small- and large-scale components of the variance, resulting in a reduction of the total variance of the estimator. These improvements are quantified for the specific class of simulators considered in our spectral analysis. The resulting filtered MLMC (F-MLMC) estimator is applied to the problem of estimating the discretized variance field of a diffusion-based covariance operator, which amounts to estimating the expectation of a discretized random field. The numerical experiments support the conclusions of the theoretical analysis even with non-linear simulators, and demonstrate the improvements brought by the proposed F-MLMC estimator compared to both a crude MC and an unfiltered MLMC estimator.
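The sketch below is a minimal two-level instance of this setting: the coarse and fine simulators produce outputs of different dimensions, so a prolongation (grid transfer) operator reconciles them in the telescoping sum. The simulator, noise model, and sample sizes are invented, and the smoothing filters of the F-MLMC variant are not included.

```python
# Two-level MLMC estimate of E[u] for a discretized 1D random field,
# with linear interpolation as the coarse-to-fine transfer operator.
import numpy as np

rng = np.random.default_rng(0)
n_fine, n_coarse = 64, 32
N0, N1 = 1000, 100                   # many cheap coarse samples, few fine ones

def simulator(n, xi):
    """Toy 'discretized random field': smooth mean plus correlated noise."""
    x = np.linspace(0, 1, n)
    return np.sin(2 * np.pi * x) + np.interp(x, np.linspace(0, 1, xi.size), xi)

def prolong(u_coarse, n_to):
    """Grid transfer operator: linear interpolation onto a finer grid."""
    return np.interp(np.linspace(0, 1, n_to),
                     np.linspace(0, 1, u_coarse.size), u_coarse)

def correction():
    xi = rng.normal(size=8)          # the *same* sample drives both levels
    return simulator(n_fine, xi) - prolong(simulator(n_coarse, xi), n_fine)

coarse_mean = np.mean([simulator(n_coarse, rng.normal(size=8))
                       for _ in range(N0)], axis=0)
mlmc = prolong(coarse_mean, n_fine) + np.mean([correction()
                                               for _ in range(N1)], axis=0)
print(mlmc.shape)                    # estimate of E[u] on the fine grid
```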