The development of new manufacturing techniques such as 3D printing has enabled the creation of previously infeasible chemical reactor designs. Systematically optimizing the highly parameterized geometries of these new classes of reactor is vital to ensure enhanced mixing characteristics and feasible manufacturability. Here we present a framework to rapidly solve this nonlinear, computationally expensive, and derivative-free problem, enabling the fast prototyping of novel reactor parameterizations. We take advantage of Gaussian processes to adaptively learn a multi-fidelity model of reactor simulations across a range of continuous mesh fidelities. The search space of reactor geometries is explored through an amalgam of simulations at different, potentially lower, fidelities, chosen for evaluation by a weighted acquisition function that trades off information gain against simulation cost. Within our framework we derive a novel criterion for monitoring the progress of, and dictating the termination of, multi-fidelity Bayesian optimization, ensuring that a high-fidelity solution is returned before the experimental budget is exhausted. The class of reactors we investigate is helical-tube reactors under pulsed-flow conditions, which have demonstrated outstanding mixing characteristics, have the potential to be highly parameterized, and are easily manufactured using 3D printing. To validate our results, we 3D print the optimal reactor geometry and experimentally confirm its mixing performance. In doing so we demonstrate that our design framework is extensible to a broad variety of expensive simulation-based optimization problems, supporting the design of the next generation of highly parameterized chemical reactors.
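To make the selection step concrete, here is a minimal sketch, not the authors' code, of a cost-weighted acquisition over joint (geometry, fidelity) inputs: a Gaussian process surrogate scores candidates by an upper confidence bound divided by an assumed fidelity-dependent simulation cost. The geometry dimensionality, cost model, and simulator stub are all hypothetical.

```python
# Sketch: cost-weighted acquisition for multi-fidelity Bayesian optimization.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def cost(s):
    # Hypothetical simulation cost growing with continuous mesh fidelity s in [0, 1].
    return 1.0 + 20.0 * s**3

def simulate(x, s):
    # Placeholder for a CFD mixing simulation at geometry x and fidelity s;
    # lower fidelities are assumed noisier.
    return -np.sum((x - 0.5) ** 2) + 0.05 * (1 - s) * rng.standard_normal()

# Joint input: 3 geometry parameters plus one continuous fidelity coordinate.
X = rng.uniform(size=(8, 4))
y = np.array([simulate(r[:3], r[3]) for r in X])
gp = GaussianProcessRegressor(Matern(nu=2.5), normalize_y=True).fit(X, y)

# Acquisition: optimistic value per unit simulation cost.
cand = rng.uniform(size=(512, 4))
mu, sd = gp.predict(cand, return_std=True)
acq = (mu + 2.0 * sd) / cost(cand[:, 3])
x_next = cand[np.argmax(acq)]       # next (geometry, fidelity) to simulate
```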
Combinatorial pure exploration (CPE) in the stochastic multi-armed bandit (MAB) setting is a well-studied online decision-making problem: a player wants to find the optimal \emph{action} $\boldsymbol{\pi}^*$ from an \emph{action class} $\mathcal{A}$, which is a collection of subsets of arms with certain combinatorial structures. Though CPE can represent many combinatorial structures such as paths, matchings, and spanning trees, most existing works focus only on binary action classes $\mathcal{A}\subseteq\{0, 1\}^d$ for some positive integer $d$. This binary formulation excludes important problems such as optimal transport, knapsack, and production planning. To overcome this limitation, we extend the binary formulation to the real-valued setting, $\mathcal{A}\subseteq\mathbb{R}^d$, which we call R-CPE-MAB, and propose a new algorithm. The only assumption we make is that the number of actions in $\mathcal{A}$ is polynomial in $d$. We show an upper bound on the sample complexity of our algorithm and an action-class-dependent lower bound for R-CPE-MAB, by introducing a quantity that characterizes the problem's difficulty and generalizes the notion of \emph{width} introduced in Chen et al. [2014].
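As a toy illustration of the real-valued setting, the sketch below, under assumed arm means and a hypothetical action class, scores each suboptimal action by its gap and a width-like per-arm sensitivity; this is only a gesture at the hardness quantity, not the paper's exact definition.

```python
# Sketch: real-valued actions, value <pi, mu>, and a gap-based difficulty proxy.
import numpy as np

mu = np.array([0.9, 0.5, 0.2])                 # unknown arm means (d = 3)
A = np.array([[1.0, 0.0, 2.0],                 # polynomially many real actions
              [0.0, 1.5, 1.0],
              [2.0, 1.0, 0.0]])

best = A[np.argmax(A @ mu)]                    # optimal action under mu

for pi in A:
    gap = (best - pi) @ mu                     # value gap to the best action
    if gap > 0:
        # Per-arm sensitivity: how much uncertainty in arm e can flip the
        # comparison; larger sensitivity relative to the gap -> harder to verify.
        sensitivity = np.abs(best - pi)
        print(gap, sensitivity / gap)
```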
Magneto-static finite element (FE) simulations make numerical optimization of electrical machines very time-consuming and computationally intensive during the design stage. In this paper, we present the application of a hybrid data- and physics-driven model for the numerical optimization of permanent magnet synchronous machines (PMSMs). Following data-driven supervised training, a deep neural network (DNN) acts as a meta-model that characterizes the electromagnetic behavior of the PMSM by predicting intermediate FE measures. These intermediate measures are then post-processed with various physical models to compute the required key performance indicators (KPIs), e.g., torque, shaft power, and material cost. We perform multi-objective optimization with both classical FE simulation and the hybrid approach, using a nature-inspired evolutionary algorithm. We show quantitatively that the hybrid approach maintains Pareto-front quality close to, or better than, conventional FE simulation-based optimization while being computationally very cheap.
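The hybrid split can be sketched as follows: a small DNN (hypothetical architecture, not the paper's) maps geometry parameters and operating currents to intermediate d/q flux linkages, and a standard physical post-processing step converts them to a torque KPI via the dq-frame PMSM torque equation.

```python
# Sketch: DNN meta-model for intermediate FE measures + physics post-processing.
import torch
import torch.nn as nn

class FluxMetaModel(nn.Module):
    def __init__(self, n_geom_params=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_geom_params + 2, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 2),               # predicts (psi_d, psi_q)
        )

    def forward(self, geom, i_d, i_q):
        return self.net(torch.cat([geom, i_d, i_q], dim=-1))

def torque_kpi(psi, i_d, i_q, pole_pairs=4):
    # Standard dq-frame PMSM torque: T = 1.5 * p * (psi_d*i_q - psi_q*i_d)
    psi_d, psi_q = psi[..., 0], psi[..., 1]
    return 1.5 * pole_pairs * (psi_d * i_q.squeeze(-1) - psi_q * i_d.squeeze(-1))

model = FluxMetaModel()
geom, i_d, i_q = torch.rand(4, 10), torch.rand(4, 1), torch.rand(4, 1)
print(torque_kpi(model(geom, i_d, i_q), i_d, i_q))   # KPI from predicted fluxes
```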
In applications such as end-to-end encrypted instant messaging, secure email, and device pairing, users need to compare key fingerprints to detect impersonation and adversary-in-the-middle attacks. Key fingerprints are usually computed as truncated hashes of each party's view of the channel keys, encoded as an alphanumeric or numeric string, and compared out-of-band, e.g. manually, to detect any inconsistencies. Previous work has extensively studied the usability of various verification strategies and encoding formats; however, the exact effect of key fingerprint length on the security and usability of key fingerprint verification has not been rigorously investigated. We present a 162-participant study on the effect of numeric key fingerprint length on comparison time and error rate. While the results confirm some widely held intuitions, such as comparison time and error rate increasing significantly with length, a closer look reveals interesting nuances. The significant rise in comparison time only occurs when highly similar fingerprints are compared; otherwise, comparison time remains relatively constant. On errors, our results clearly distinguish between security-non-critical errors, which remain low irrespective of length, and security-critical errors, which rise significantly, especially at higher fingerprint lengths. A noteworthy implication of this latter result is that Signal/WhatsApp key fingerprints provide a considerably lower level of security than usually assumed.
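For illustration, a numeric fingerprint of a chosen length can be derived from a hash of the key material roughly as sketched below, in the spirit of Signal-style 60-digit safety numbers; the derivation actually deployed by Signal/WhatsApp differs in its details, so this is an assumption-laden stand-in.

```python
# Sketch: deriving a numeric key fingerprint of a given length from a hash.
import hashlib

def numeric_fingerprint(key_material: bytes, digits: int) -> str:
    h = hashlib.sha256(key_material).digest()
    # Interpret the hash as an integer and keep `digits` decimal digits,
    # zero-padded so all fingerprints have uniform length.
    return str(int.from_bytes(h, "big") % 10**digits).zfill(digits)

fp_a = numeric_fingerprint(b"alice-view-of-channel-key", 60)
fp_b = numeric_fingerprint(b"mallory-view-of-channel-key", 60)
print(fp_a)
print(fp_b)
print(fp_a == fp_b)   # users compare such strings out-of-band, e.g. manually
```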
Despite the prevalence of wireless connectivity in urban areas around the globe, there remain numerous and diverse situations where connectivity is insufficient or unavailable. To address this, we introduce mobile wireless infrastructure on demand: a system of UAVs that can be rapidly deployed to establish an ad-hoc wireless network. This network can dynamically reconfigure itself to satisfy and maintain the required quality of communication. The system optimizes the positions of the UAVs and the routing of data flows throughout the network to achieve this quality of service (QoS). By these means, task agents using the network simply request a desired QoS, and the system adapts accordingly while allowing them to move freely. We have validated this system both in simulation and in real-world experiments. The results demonstrate that our system effectively offers mobile wireless infrastructure on demand, extending the operational range of task agents and supporting complex mobility patterns, all while ensuring connectivity and remaining resilient to agent failures.
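As a toy version of the positioning subproblem, the sketch below (not the paper's system) places a few relay UAVs between two task agents so that the longest hop, a crude proxy for the worst link quality, is minimized; routing and realistic channel models are omitted.

```python
# Sketch: relay placement minimizing the longest hop between two task agents.
import numpy as np
from scipy.optimize import minimize

src, dst = np.array([0.0, 0.0]), np.array([100.0, 40.0])
n_relays = 3

def longest_hop(flat):
    # Chain: src -> relay_1 -> ... -> relay_n -> dst; return the longest link.
    pts = np.vstack([src, flat.reshape(n_relays, 2), dst])
    return np.max(np.linalg.norm(np.diff(pts, axis=0), axis=1))

# Initialize relays evenly on the straight segment, then optimize positions.
x0 = np.linspace(src, dst, n_relays + 2)[1:-1].ravel()
res = minimize(longest_hop, x0, method="Nelder-Mead")
print(res.x.reshape(n_relays, 2), longest_hop(res.x))
```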
In the dynamic mechanism design literature, one critical aspect has typically been ignored: the agents' periodic participation, which they can adapt and plan strategically. We propose a framework for dynamic principal-multiagent problems, augmenting the classic model by incorporating agents' periodic coupled decisions on participation and regular action selection. The principal faces adverse selection and designs a mechanism comprising a task policy profile (defining evolving agent action menus), a coupling policy profile (affecting agent utilities), and an off-switch function profile (assigning rewards or penalties upon agent withdrawal). First, we introduce payoff-flow conservation, a sufficient condition to ensure dynamic incentive compatibility for regular actions. Second, we formulate a distinctive process, the persistence transformation, which integrates the task policy's implicit functions, enabling a closed-form derivation of the off-switch function and hence securing sufficient conditions for the incentive compatibility of agents' coupled decisions, in alignment with the principal's preferences. Third, we go beyond the traditional envelope theorem by presenting a necessary condition for incentive compatibility, leveraging the coupled optimality of principal-desired actions. This approach helps explicitly formulate both the coupling and off-switch functions. Finally, we establish envelope-like conditions exclusively on the task policies, facilitating the application of the first-order approach.
Predictive simulation of human motion could provide insight into optimal movement techniques. In repetitive or long-duration tasks, these simulations must predict fatigue-induced adaptation. However, most studies minimize cost-function terms related to actuator activations, assuming that this minimizes fatigue. An additional modeling layer is needed to account for the previous use of muscles and reveal adaptive strategies to the decreased force-production capability. Here, we propose interfacing Xia's three-compartment fatigue dynamics model with rigid-body dynamics. A stabilization invariant was added to Xia's model. We simulated the maximum number of repetitions of dumbbell biceps curls as an optimal control problem (OCP) using direct multiple shooting. We explored three cost functions (minimizing torque, fatigue, or both) and two OCP formulations (full-horizon and sliding-horizon approaches). We found that Xia's model, modified with the stabilization invariant (set to 10 or 5), was well suited to direct multiple shooting. Sliding-horizon OCPs achieved 20 to 21 repetitions. The kinematic strategy slowly deviated from a plausible dumbbell lifting task to a swinging strategy as fatigue onset increasingly compromised the ability to keep the arm vertical. In full-horizon OCPs, the latter kinematic strategy was used over the whole motion, resulting in 32 repetitions. We showed that sliding-horizon OCPs revealed a reactive strategy to fatigue when only torque was included in the cost function, whereas an anticipatory strategy emerged when the fatigue term was included. Overall, the proposed approach has the potential to be a valuable tool for optimizing performance and helping reduce fatigue-related injuries in a variety of fields.
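Xia's three-compartment model tracks activated, resting, and fatigued muscle fractions. The sketch below implements its standard dynamics (Xia and Frey Law, 2008) plus a hypothetical Baumgarte-like stabilization term driving the compartment sum back to one; the paper's exact stabilization invariant may take a different form, and the rate constants are illustrative.

```python
# Sketch: Xia's three-compartment fatigue dynamics with a stabilization term.
import numpy as np

F, R = 0.01, 0.002        # fatigue and recovery rates (illustrative values)
LD = LR = 10.0            # activation / deactivation drive rates
S = 10.0                  # stabilization coefficient (10 or 5 in the paper)

def xia_dynamics(state, target_load):
    ma, mr, mf = state    # activated, resting, fatigued fractions
    if ma < target_load:
        c = LD * min(target_load - ma, mr)   # recruit from the resting pool
    else:
        c = LR * (target_load - ma)          # de-recruit excess activation
    leak = S * (1.0 - (ma + mr + mf))        # drives the sum back to 1
    return np.array([c - F * ma,
                     -c + R * mf + leak,
                     F * ma - R * mf])

# Forward-Euler rollout at a constant 30% target load for 10 minutes.
state, dt = np.array([0.0, 1.0, 0.0]), 0.1
for _ in range(6000):
    state += dt * xia_dynamics(state, 0.3)
print(state)
```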
During the concept design of complex networked systems, concept developers must ensure that the choice of hardware modules and the topology of the target platform will provide adequate resources to support the needs of the application. For example, future-generation aerospace systems need to consider multiple requirements, with many trade-offs, foreseeing rapid technological change and a long time span for realization and service. For that purpose, we introduce NetGAP, an automated three-phase approach to synthesize network topologies and support the exploration and concept design of networked systems with multiple requirements, including dependability, security, and performance. NetGAP represents the possible interconnections between hardware modules using a graph grammar and uses Monte Carlo Tree Search optimization to generate candidate topologies from the grammar while aiming to satisfy the requirements. We apply the proposed approach to a synthetic version of a realistic avionics application use case and show the merits of the solution for supporting the early-stage exploration of alternative candidate topologies. The method is shown to clearly characterize the topology-related trade-offs between requirements stemming from security, fault tolerance, and timeliness, and the "cost" of adding new modules or links. Finally, we discuss the flexibility of the approach when changes in the application and its requirements occur.
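A condensed, hypothetical skeleton of the search loop is sketched below: grammar rules expand a topology graph, and Monte Carlo Tree Search scores candidates against the requirements. The rule set and requirement evaluator are user-supplied placeholders, not NetGAP's actual components.

```python
# Sketch: MCTS over graph-grammar expansions of candidate topologies.
import math, random

class Node:
    def __init__(self, topology, parent=None):
        self.topology, self.parent = topology, parent
        self.children, self.visits, self.value = [], 0, 0.0

def ucb1(node, c=1.4):
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def mcts(root, expand_rules, evaluate, iters=1000):
    for _ in range(iters):
        node = root
        while node.children:                       # selection
            node = max(node.children, key=ucb1)
        for rule in expand_rules(node.topology):   # expansion via grammar rules
            node.children.append(Node(rule(node.topology), parent=node))
        leaf = random.choice(node.children) if node.children else node
        reward = evaluate(leaf.topology)           # dependability/security/perf score
        while leaf:                                # backpropagation
            leaf.visits += 1
            leaf.value += reward
            leaf = leaf.parent
    return max(root.children, key=lambda n: n.visits)
```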
The design of particle simulation methods for collisional plasma physics has always represented a challenge due to the unbounded total collisional cross section, which prevents a natural extension of the classical Direct Simulation Monte Carlo (DSMC) method devised for the Boltzmann equation. One way to overcome this problem is to design Monte Carlo algorithms that are robust in the so-called grazing collision limit. In the first part of this manuscript, we focus on the construction of collision algorithms for the Landau-Fokker-Planck equation that are based on the grazing collision asymptotics and avoid the use of iterative solvers. Subsequently, we discuss problems involving uncertainties and show how to develop a stochastic Galerkin projection of the particle dynamics that makes it possible to recover spectral accuracy for smooth solutions in the random space. Several classical numerical tests are reported to validate the present approach.
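To convey the flavor of the grazing-collision regime, the sketch below (not the paper's scheme) performs DSMC-style pair updates in which each pair's relative velocity is rotated by a small random angle of variance O(dt), the asymptotics under which Boltzmann dynamics approach the Landau-Fokker-Planck equation; the update conserves momentum and kinetic energy by construction.

```python
# Sketch: small-angle (grazing) pair collisions for a 2-D velocity field.
import numpy as np

rng = np.random.default_rng(2)
N, dt, nu = 10_000, 0.01, 1.0             # particles, time step, collision freq.
v = rng.standard_normal((N, 2))

for _ in range(100):
    perm = rng.permutation(N)             # random pairing of particles
    a, b = perm[: N // 2], perm[N // 2:]
    rel = v[a] - v[b]
    theta = np.sqrt(nu * dt) * rng.standard_normal(N // 2)   # grazing angles
    c, s = np.cos(theta), np.sin(theta)
    rot = np.stack([c * rel[:, 0] - s * rel[:, 1],
                    s * rel[:, 0] + c * rel[:, 1]], axis=1)
    mean = 0.5 * (v[a] + v[b])            # momentum- and energy-conserving update
    v[a], v[b] = mean + 0.5 * rot, mean - 0.5 * rot
```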
Poisson's equation plays an important role in modeling many physical systems. In electrostatic self-consistent low-temperature plasma (LTP) simulations, Poisson's equation is solved at each simulation time step, which can amount to a significant computational cost for the entire simulation. In this paper, we describe the development of a generic machine-learned Poisson solver specifically designed for the requirements of LTP simulations in complex 2D reactor geometries on structured Cartesian grids. Here, the reactor geometries can contain inner electrodes and dielectric materials, as often found in LTP simulations. The approach leverages a hybrid CNN-transformer network architecture in combination with a weighted multiterm loss function. We train the network on highly randomized synthetic data to ensure that the learned solver generalizes to unseen reactor geometries. The results demonstrate that the learned solver produces quantitatively and qualitatively accurate solutions. Furthermore, it generalizes well to new reactor geometries, such as reference geometries found in the literature. To reach the numerical accuracy required in LTP simulations, we employ a conventional iterative solver to refine the raw predictions, especially to recover the high-frequency features not resolved by the initial prediction. With this refinement, the proposed learned Poisson solver provides the required accuracy and is potentially faster than a purely GPU-based conventional iterative solver. This opens up new possibilities for developing a generic, high-performing learned Poisson solver for LTP systems in complex geometries.
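The hybrid refinement step admits a simple sketch: use the network's raw prediction as the initial guess for a conventional iterative solver, here plain Jacobi on a uniform grid, which restores the high-frequency content. The network output and charge density below are placeholders.

```python
# Sketch: refining a learned Poisson prediction with Jacobi iterations.
import numpy as np

def refine(phi0, rho, h, iters=200):
    # Jacobi sweeps for the 2-D Poisson equation with Dirichlet boundaries;
    # phi0 is the raw network prediction used as the initial guess.
    phi = phi0.copy()
    for _ in range(iters):
        phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                                  phi[1:-1, 2:] + phi[1:-1, :-2] +
                                  h**2 * rho[1:-1, 1:-1])
    return phi

n, h = 128, 1.0 / 127
rho = np.zeros((n, n)); rho[n // 2, n // 2] = 1.0 / h**2   # point charge
nn_prediction = np.zeros((n, n))      # stand-in for the CNN-transformer output
phi = refine(nn_prediction, rho, h)
```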
This paper presents a new distance metric for comparing two continuous probability density functions. The main advantage of this metric is that, unlike other statistical measures, it provides an analytic, closed-form expression for mixtures of Gaussian distributions while satisfying all metric properties. These characteristics enable fast, stable, and efficient calculations, which are highly desirable in real-world signal processing applications. The application we have in mind is Gaussian mixture reduction (GMR), which is widely used in density estimation, recursive tracking, and belief propagation. To address this problem, we developed a novel algorithm, dubbed Optimization-based Greedy GMR (OGGMR), which employs our metric as a criterion to approximate a high-order Gaussian mixture with a lower-order one. Experimental results show that the OGGMR algorithm is significantly faster and more efficient than state-of-the-art GMR algorithms while retaining the geometric shape of the original mixture.
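The paper's metric itself is not reproduced here; as a stand-in with the same flavor, the sketch below evaluates the closed-form L2 (integrated squared error) distance between two 1-D Gaussian mixtures, which likewise admits an analytic expression and is a common criterion in greedy mixture reduction.

```python
# Sketch: analytic L2 distance between two 1-D Gaussian mixtures.
import numpy as np

def gauss(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def cross_term(w1, mu1, var1, w2, mu2, var2):
    # Closed form of the integral of f*g for Gaussian mixtures:
    # sum_ij w_i v_j N(mu_i; m_j, var_i + s_j)
    total = 0.0
    for wi, mi, vi in zip(w1, mu1, var1):
        for wj, mj, vj in zip(w2, mu2, var2):
            total += wi * wj * gauss(mi, mj, vi + vj)
    return total

def l2_distance(w1, mu1, var1, w2, mu2, var2):
    return (cross_term(w1, mu1, var1, w1, mu1, var1)
            - 2 * cross_term(w1, mu1, var1, w2, mu2, var2)
            + cross_term(w2, mu2, var2, w2, mu2, var2))

# Distance between a 2-component mixture and a 1-component approximation.
print(l2_distance([0.5, 0.5], [0.0, 3.0], [1.0, 1.0], [1.0], [1.5], [2.5]))
```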