We apply Coloured Petri Nets (CPNs) and CPN Tools to develop a formal model of an embedded system consisting of a power converter and an associated controller. Matlab/Simulink is the de-facto tool for embedded control and system design, but it relies on informal semantics and has limited support for transparent and integrated specification and validation of the power converter electronics, the controller (hardware), and the control logic (software). The contribution of this paper is a timed hierarchical CPN model that mitigates these shortcomings of Simulink by relying on a Petri net formalisation. We demonstrate the application of our approach by developing a fully integrated model of a buck power converter with controller in CPN Tools. Furthermore, we perform time-domain simulation to verify that the controller meets its control objectives. To validate the developed CPN model, we compare the simulation results obtained in an open-loop configuration with those of a corresponding implementation in Simulink. The experimental results show correspondence between the CPN model and the Simulink model. As our CPN model reflects the fully integrated system, we are also able to compare CPN simulation results to measurements obtained from a corresponding implementation in real hardware/software, and to compare closed-loop with open-loop configurations. The results show alignment in steady state, while further refinement and validation of the control algorithm is required.
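As an illustration of the kind of open-loop time-domain simulation used for validation, a minimal Python sketch of an averaged buck converter model follows; the component values, duty cycle, and integration scheme are our own illustrative assumptions, not taken from the paper's CPN or Simulink models.

```python
# Minimal sketch (not the paper's CPN model): open-loop time-domain
# simulation of an averaged buck converter with forward-Euler integration.
# All component values and the duty cycle are illustrative assumptions.

V_IN = 12.0      # input voltage [V]
L = 100e-6       # inductance [H]
C = 220e-6       # capacitance [F]
R = 10.0         # load resistance [ohm]
DUTY = 0.5       # fixed duty cycle (open loop)
DT = 1e-7        # integration step [s]

i_L, v_C = 0.0, 0.0
trace = []                                  # kept for plotting/comparison
for step in range(200_000):                 # simulate 20 ms
    # Averaged buck dynamics: L di/dt = d*Vin - vC,  C dvC/dt = iL - vC/R
    di = (DUTY * V_IN - v_C) / L
    dv = (i_L - v_C / R) / C
    i_L += DT * di
    v_C += DT * dv
    trace.append((step * DT, i_L, v_C))

print(f"output voltage after 20 ms ~ {v_C:.3f} V (ideal: {DUTY * V_IN} V)")
```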
We study the numerical approximation by space-time finite element methods of a multi-physics system coupling hyperbolic elastodynamics with parabolic transport, modeling poro- and thermoelasticity. The equations are rewritten as a first-order system in time. Discretizations by continuous Galerkin methods in time and inf-sup stable pairs of finite element spaces for the spatial variables are investigated. Optimal-order error estimates are proved by an analysis in weighted norms that capture the energy of the system's unknowns. A further important ingredient and challenge of the analysis is the control of the coupling terms. The techniques developed here can be generalized to other families of Galerkin space discretizations and to more advanced models. The error estimates are confirmed by numerical experiments, also for higher-order piecewise polynomials in time and space. The latter lead to algebraic systems with complex block structure and pose a challenge for the design of iterative solvers. An efficient solution technique is referenced.
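To make the first-order-in-time rewrite concrete, a hedged sketch for a linear Biot-type poroelastic system (displacement $u$, pressure $p$) is given below: introducing the velocity $v = \partial_t u$ turns the second-order hyperbolic elastodynamics equation into a first-order system coupled to the parabolic pressure equation. The paper's precise coefficients and coupling terms may differ.

```latex
% Hedged sketch of the first-order-in-time rewrite for a linear
% Biot-type system; the paper's coefficients and couplings may differ.
\begin{align*}
  \partial_t u - v &= 0, \\
  \rho\,\partial_t v - \nabla\cdot\bigl(\boldsymbol{\sigma}(u) - \alpha\,p\,I\bigr) &= f, \\
  c_0\,\partial_t p + \alpha\,\nabla\cdot v - \nabla\cdot(K\,\nabla p) &= g.
\end{align*}
```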
When assessing the strength of sawn lumber for use in engineering applications, the sizes and locations of knots are an important consideration. Knots, which result from the growth of tree branches, are the most common visual characteristic of lumber. Large individual knots, as well as clusters of distinct knots, are known to have strength-reducing effects. However, the industry grading rules that govern knots are informed by subjective judgment to some extent, particularly regarding the spatial interaction of knots and their relationship with lumber strength. This case study reports the results of an experiment that investigated and modelled the strength-reducing effects of knots in a sample of Douglas Fir lumber. Experimental data were obtained by scanning lumber surfaces and performing tensile strength tests. The modelling approach presented incorporates all relevant knot information in a Bayesian framework, thereby contributing a more refined way of managing the quality of manufactured lumber.
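As a rough illustration of such a Bayesian treatment (not the case study's actual model), one could regress log strength on summary knot features; the features, priors, sampler, and synthetic data below are entirely our own illustrative choices.

```python
import numpy as np

# Hedged sketch (not the paper's model): Bayesian linear regression of
# log tensile strength on summary knot features, fit with a simple
# random-walk Metropolis sampler on synthetic stand-in data.
rng = np.random.default_rng(0)

n = 200
X = np.column_stack([np.ones(n),
                     rng.gamma(2.0, 1.0, n),    # total knot area (assumed feature)
                     rng.gamma(1.5, 1.0, n)])   # largest knot size (assumed feature)
beta_true = np.array([4.0, -0.15, -0.10])
y = X @ beta_true + rng.normal(0.0, 0.2, n)     # synthetic log strength

def log_post(beta, sigma=0.2, prior_sd=10.0):
    # Gaussian likelihood with known noise scale, weak Gaussian prior.
    resid = y - X @ beta
    return (-0.5 * np.sum(resid**2) / sigma**2
            - 0.5 * np.sum(beta**2) / prior_sd**2)

beta = np.zeros(3)
samples = []
for _ in range(20_000):
    prop = beta + rng.normal(0.0, 0.02, 3)      # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(beta):
        beta = prop
    samples.append(beta)

post = np.array(samples[5_000:])                # drop burn-in
print("posterior means:", post.mean(axis=0))
```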
Large pre-trained language models contain societal biases and propagate these biases to downstream tasks. Current in-processing bias mitigation approaches (like adversarial training) impose debiasing by updating a model's parameters, effectively transferring the model to a new, irreversibly debiased state. In this work, we propose a novel approach that develops stand-alone debiasing functionalities separate from the model, which can be integrated into the model on demand while keeping the core model untouched. Drawing from the concept of AdapterFusion in multi-task learning, we introduce DAM (Debiasing with Adapter Modules), a debiasing approach that first encapsulates arbitrary bias mitigation functionalities into separate adapters and then adds them to the model on demand to deliver fairness. We conduct a large set of experiments on three classification tasks with gender, race, and age as protected attributes. Our results show that DAM improves or maintains the effectiveness of bias mitigation, avoids catastrophic forgetting in a multi-attribute scenario, and maintains on-par task performance, while granting parameter efficiency and easy switching between the original and debiased models.
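To illustrate the adapter idea that DAM builds on, a minimal PyTorch sketch of a bottleneck adapter follows; the class name, dimensions, and wiring are our own illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hedged sketch of the adapter concept behind DAM (not the authors' code):
# a bottleneck adapter that can be attached to, or detached from, a frozen
# backbone layer on demand. Dimensions are illustrative.
class DebiasAdapter(nn.Module):
    def __init__(self, hidden=768, bottleneck=48):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.ReLU()

    def forward(self, h):
        # Residual bottleneck: the backbone's output passes through
        # unchanged when the adapter's contribution is zero.
        return h + self.up(self.act(self.down(h)))

hidden_states = torch.randn(2, 16, 768)   # (batch, seq, hidden)
adapter = DebiasAdapter()

debiased = adapter(hidden_states)          # adapter switched on
original = hidden_states                   # adapter off: core model untouched
```

Because the adapter lives outside the backbone's parameters, switching between the original and debiased behaviour is just a matter of routing through (or around) the module, which is the reversibility the abstract contrasts with in-processing debiasing.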
We study the single-site Glauber dynamics for the Hard-core model with fugacity $\lambda$ on the random graph $G(n, d/n)$. We show that for typical instances of $G(n,d/n)$ and for fugacity $\lambda < \frac{d^d}{(d-1)^{d+1}}$, the mixing time of the Glauber dynamics is $n^{1 + O(1/\log \log n)}$. Our result improves on the recent elegant algorithm of [Bezakova, Galanis, Goldberg, Stefankovic; ICALP'22]. The algorithm there is an MCMC-based sampling algorithm, but it is not the Glauber dynamics. Our algorithm is simpler, as we use the classic Glauber dynamics. Furthermore, the bounds on the mixing time we prove are smaller than those in the Bezakova et al. paper, hence our algorithm is also faster. The main challenge in our proof is handling vertices of unbounded degree. We provide stronger results with regard to spectral independence via branching values and show that our Gibbs distributions satisfy approximate tensorisation of entropy. We conjecture that the bounds we obtain are optimal for $G(n,d/n)$. As a corollary of our analysis for the Hard-core model, we also obtain bounds on the mixing time of the Glauber dynamics for the Monomer-dimer model on $G(n,d/n)$. The bounds we get for this model are slightly better than those for the Hard-core model.
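For concreteness, the classic single-site (heat-bath) Glauber dynamics for the hard-core model can be sketched as follows; the graph and fugacity in the example are illustrative, and the sketch reflects only the standard definition, not the paper's analysis.

```python
import random

# Heat-bath Glauber dynamics for the hard-core model with fugacity lam
# on a graph given as an adjacency list. Standard definition: pick a
# uniformly random vertex and resample its occupancy from the
# conditional hard-core measure given its neighbours.
def glauber_step(occupied, adj, lam, rng=random):
    v = rng.randrange(len(adj))
    occupied[v] = False                        # resample the site at v
    if all(not occupied[u] for u in adj[v]):   # v is unblocked
        # Conditional law at v: occupied with probability lam / (1 + lam).
        occupied[v] = rng.random() < lam / (1.0 + lam)
    return occupied

# Illustrative example: a 4-cycle, fugacity 0.5, a few thousand steps.
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
state = [False] * 4
for _ in range(5000):
    glauber_step(state, adj, 0.5)
print(state)  # an (approximate) sample from the hard-core measure
```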
The photogrammetric and reconstructive modeling of cultural heritage sites is mostly focused on visually perceivable aspects, but when such sites are intended for cultural acts with a sonic emphasis, it is important to also preserve their acoustical behaviour so that they remain audible in an authentic way. This applies in particular to sacral and concert environments, which are popular subjects for photogrammetric models and whose geometrical and textural information can be used to locate and classify acoustically relevant surface properties. With the ongoing conversion or destruction of historical acoustical spaces, preserving their unique sonic characters becomes ever more important, while three-dimensional auralizations are becoming widely applicable. This study presents the current state of a new methodological approach to acoustical modeling using photogrammetric data and introduces a parameterizable pipeline that will be made available as open-source software with a graphical user interface.
We prove a convergence theorem for U-statistics of degree two, where the data dimension $d$ is allowed to scale with the sample size $n$. We find that the limiting distribution of a U-statistic undergoes a phase transition from the non-degenerate Gaussian limit to the degenerate limit, regardless of its degeneracy and depending only on a moment ratio. A surprising consequence is that a non-degenerate U-statistic in high dimensions can have a non-Gaussian limit with a larger variance and an asymmetric distribution. Our bounds are valid for any finite $n$ and $d$, independent of individual eigenvalues of the underlying function, and dimension-independent under a mild assumption. As an application, we apply our theory to two popular kernel-based distribution tests, the Maximum Mean Discrepancy (MMD) and the Kernelized Stein Discrepancy (KSD), whose high-dimensional performance has been challenging to study. In a simple empirical setting, our results correctly predict how the test power at a fixed threshold scales with $d$ and the bandwidth.
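For orientation, the object of study can be written out as follows (the standard definition of a degree-two U-statistic with symmetric kernel $h$; the paper's exact normalization and moment ratio may differ): the phase transition is then governed by a ratio of moments of $h$ rather than by its spectral decomposition alone.

```latex
% Standard degree-two U-statistic with symmetric kernel h; the paper's
% normalization and the precise moment ratio may differ.
U_n \;=\; \binom{n}{2}^{-1} \sum_{1 \le i < j \le n} h(X_i, X_j),
\qquad X_1, \dots, X_n \in \mathbb{R}^d \ \text{i.i.d.}
```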
This paper characterizes the impact of covariate serial dependence on the non-asymptotic estimation error bound of penalized regressions (PRs). Focusing on the direct relationship between the degree of cross-correlation between covariates and the estimation error bound of PRs, we show that orthogonal or weakly cross-correlated stationary AR processes can exhibit high spurious correlations caused by serial dependence. We provide analytical results on the distribution of the sample cross-correlation of two orthogonal Gaussian AR(1) processes, and extend and validate them through an extensive simulation study. Furthermore, we introduce a new procedure to mitigate spurious correlations in a time series setting by applying PRs to pre-whitened (ARMA-filtered) time series. We show that, under mild assumptions, our procedure both reduces the estimation error and supports an effective forecasting strategy. The estimation accuracy of our proposal is validated through additional simulations, as well as an empirical application to a large set of monthly macroeconomic time series for the Euro Area.
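To illustrate the pre-whitening idea, a minimal sketch follows; the filter order, penalty level, and synthetic data are our own illustrative assumptions, not the paper's exact specification.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.linear_model import Lasso

# Hedged sketch of the pre-whitening procedure: fit a low-order ARMA
# filter to each series, then run the penalized regression on the
# filtered (whitened) series. All settings below are illustrative.
rng = np.random.default_rng(0)
n, p = 400, 5
X = np.empty((n, p))
for j in range(p):                        # independent AR(1) covariates
    e = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = 0.9 * x[t - 1] + e[t]
    X[:, j] = x
y = X[:, 0] + rng.normal(size=n)          # only the first covariate matters

# Pre-whiten every series with its own fitted AR(1) filter.
X_white = np.column_stack([
    ARIMA(X[:, j], order=(1, 0, 0)).fit().resid for j in range(p)
])
y_white = ARIMA(y, order=(1, 0, 0)).fit().resid

lasso = Lasso(alpha=0.1).fit(X_white, y_white)
print("coefficients on whitened covariates:", lasso.coef_)
```

Running the same Lasso on the raw series instead would face the spurious cross-correlations the abstract describes; whitening removes the serial dependence that inflates them.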
Mobile robots are ubiquitous. Such vehicles benefit from well-designed and calibrated control algorithms that ensure task execution under precise uncertainty bounds. Yet, in tasks involving humans in the loop, such as assisting elderly or mobility-impaired people, the problem takes on a new dimension. In such cases, the system needs not only to compensate for uncertainty and volatility in its operation, but at the same time to anticipate and offer responses that go beyond robustness. Such robots operate in cluttered, complex environments akin to human residences, must face sensor and even actuator faults during operation, and still need to function. This is where our thesis comes into the foreground. We propose a new control design framework based on the principles of antifragility. Such a design is meant to offer strong anticipation of uncertainty, given previous exposure to failures and faults, and to exploit this anticipation capacity to provide performance beyond robustness. In the current instantiation of antifragile control, applied to mobile robot trajectory tracking, we provide the controller design steps, an analysis of performance under parametrizable uncertainty and faults, and an extended comparative evaluation against state-of-the-art controllers. We believe in the potential of antifragile control to achieve closed-loop performance in the face of uncertainty and volatility by using its exposure to uncertainty to increase its capacity to anticipate and compensate for such events.
Autonomic computing investigates how systems can achieve (user-)specified control outcomes on their own, without the intervention of a human operator. Autonomic computing fundamentals have been substantially influenced by those of control theory for closed- and open-loop systems. In practice, complex systems may exhibit a number of concurrent and inter-dependent control loops. Despite research into autonomic models for managing computer resources, ranging from individual resources (e.g., web servers) to resource ensembles (e.g., multiple resources within a data center), integrating Artificial Intelligence (AI) and Machine Learning (ML) to improve resource autonomy and performance at scale remains a fundamental challenge. Such AI/ML-driven autonomic self-management of systems can be realized at different levels of granularity, from full automation to human-in-the-loop automation. In this article, leading academics, researchers, practitioners, engineers, and scientists in the fields of cloud computing, AI/ML, and quantum computing come together to discuss current research and potential future directions for these fields. Further, we discuss challenges and opportunities for leveraging AI and ML in next-generation computing for emerging computing paradigms, including cloud, fog, edge, serverless, and quantum computing environments.
Computer architecture and systems have long been optimized to enable the efficient execution of machine learning (ML) algorithms and models. Now it is time to reconsider the relationship between ML and systems, and to let ML transform the way that computer architecture and systems are designed. This carries a twofold meaning: improving designers' productivity, and completing the virtuous cycle. In this paper, we present a comprehensive review of work that applies ML to system design, which can be grouped into two major categories: ML-based modelling, which involves predicting performance metrics or other criteria of interest, and ML-based design methodology, which directly leverages ML as the design tool. For ML-based modelling, we discuss existing studies according to their target system level, ranging from the circuit level to the architecture/system level. For ML-based design methodology, we follow a bottom-up path to review current work, covering (micro-)architecture design (memory, branch prediction, NoC), coordination between architecture/system and workload (resource allocation and management, data center management, and security), compilers, and design automation. We further provide a future vision of opportunities and potential directions, and envision that applying ML to computer architecture and systems will thrive in the community.