It is well known that classical optical cavities can exhibit localized phenomena associated with scattering resonances (studied via Black Box Scattering Theory), leading to numerical instabilities when approximating the solution. These localized phenomena concentrate at the inner boundary of the cavity and are called whispering gallery modes. In this paper we investigate scattering resonances for unbounded transmission problems with a sign-changing coefficient (corresponding to optical cavities with negative optical properties, for example those made of metamaterials). Due to the sign change of the optical properties, previous results cannot be applied directly, and interface phenomena at the metamaterial-dielectric interface (such as the so-called surface plasmons) emerge. We establish the existence of scattering resonances for arbitrary two-dimensional smooth metamaterial cavities. The proof relies on an asymptotic characterization of the resonances and on an extension of Black Box Scattering Theory to problems with a sign-changing coefficient. Our asymptotic analysis reveals that, depending on the metamaterial's properties, scattering resonances situated close to the real axis are associated with surface plasmons. Examples for several metamaterial cavities are provided.
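As a rough sketch of the setting (the notation and the precise form below are ours, not taken from the paper), one can picture a Helmholtz-type transmission problem whose coefficient changes sign across the cavity boundary:
\[
\operatorname{div}\big(\sigma \nabla u\big) + \omega^{2} u = 0 \quad \text{in } \mathbb{R}^{2},
\qquad
\sigma =
\begin{cases}
\sigma_{c} < 0 & \text{in the cavity } \Omega,\\
1 & \text{in } \mathbb{R}^{2} \setminus \overline{\Omega},
\end{cases}
\]
with scattering resonances understood as poles of the meromorphic continuation of the resolvent in $\omega$; the negative $\sigma_{c}$ models the metamaterial and is responsible for the surface-plasmon behaviour at the interface $\partial\Omega$.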
Event cameras are bio-inspired sensors that perform well in challenging illumination conditions and have high temporal resolution. However, their concept is fundamentally different from that of traditional frame-based cameras. The pixels of an event camera operate independently and asynchronously. They measure changes in logarithmic brightness and return them in the highly discretised form of time-stamped events, each indicating a relative change of a certain quantity since the last event. New models and algorithms are needed to process this kind of measurement. The present work looks at several motion estimation problems with event cameras. The flow of the events is modelled by a general homographic warping in a space-time volume, and the objective is formulated as a maximisation of contrast within the image of warped events. Our core contribution consists of deriving globally optimal solutions to these generally non-convex problems, which removes the dependency on a good initial guess that plagues existing methods. Our methods rely on branch-and-bound optimisation and employ novel and efficient, recursive upper and lower bounds derived for six different contrast estimation functions. The practical validity of our approach is demonstrated by a successful application to three different event camera motion estimation problems.
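As a minimal illustration of the objective described above (the warp model, image size, and the choice of variance as the contrast function are our own simplifications, not the paper's exact formulation; the paper optimises six contrast functions globally via branch-and-bound rather than by the single evaluation shown here):

```python
import numpy as np

def warp_events(xy, t, H_dot):
    """Warp event locations (x, y) at times t back to a reference time t=0
    using a linearised homographic motion model H_dot (3x3, units: 1/s)."""
    ones = np.ones((xy.shape[0], 1))
    pts = np.hstack([xy, ones])                    # homogeneous coordinates
    warped = pts - t[:, None] * (pts @ H_dot.T)    # first-order warp
    return warped[:, :2] / warped[:, 2:3]

def contrast(xy_warped, shape=(180, 240)):
    """Accumulate warped events into an image and return its variance,
    one possible contrast measure for contrast maximisation."""
    img, _, _ = np.histogram2d(
        xy_warped[:, 1], xy_warped[:, 0],
        bins=shape, range=[[0, shape[0]], [0, shape[1]]])
    return img.var()

# Toy usage: evaluate the objective for one candidate motion hypothesis.
rng = np.random.default_rng(0)
xy = rng.uniform([0, 0], [240, 180], size=(1000, 2))
t = rng.uniform(0, 0.03, size=1000)
H_dot = np.zeros((3, 3)); H_dot[0, 2] = 50.0       # pure x-translation, px/s
print(contrast(warp_events(xy, t, H_dot)))
```

In a branch-and-bound scheme, an evaluation of this kind would be wrapped by recursive upper and lower bounds on the contrast over boxes of motion parameters.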
Offline policy evaluation (OPE) is a fundamental and challenging problem in reinforcement learning (RL). This paper focuses on estimating the value of a target policy based on pre-collected data generated from a possibly different policy, under the framework of infinite-horizon Markov decision processes. Motivated by the recently developed marginal importance sampling method in RL and the covariate balancing idea in causal inference, we propose a novel estimator with approximately projected state-action balancing weights for policy value estimation. We obtain the convergence rate of these weights and show that the proposed value estimator is semiparametrically efficient under technical conditions. In terms of asymptotics, our results scale with both the number of trajectories and the number of decision points in each trajectory. As such, consistency can still be achieved with a limited number of subjects when the number of decision points diverges. In addition, we develop a necessary and sufficient condition for establishing the well-posedness of the Bellman operator in the off-policy setting, which characterizes the difficulty of OPE and may be of independent interest. Numerical experiments demonstrate the promising performance of our proposed estimator.
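Schematically, and in our own notation for the discounted infinite-horizon case, the marginal importance sampling idea underlying such estimators can be written as
\[
\widehat{V}(\pi) \;=\; \frac{1}{n(1-\gamma)} \sum_{i=1}^{n} \widehat{w}(S_i, A_i)\, R_i,
\qquad
w(s,a) \;=\; \frac{d^{\pi}(s,a)}{d^{b}(s,a)},
\]
where $d^{\pi}$ is the discounted state-action visitation distribution of the target policy, $d^{b}$ is the distribution generating the pre-collected transitions $(S_i, A_i, R_i)$, and the proposed weights $\widehat{w}$ are chosen to approximately balance a class of state-action functions rather than obtained from a plug-in density-ratio estimate.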
Astrophysical modeling of processes in environments that are not in local thermal equilibrium requires knowledge of the state-to-state rate coefficients of rovibrational transitions in molecular collisions. These rate coefficients can be obtained from coupled-channel (CC) quantum scattering calculations, which are, however, computationally very demanding. Here we present various approximate but more efficient methods based on the coupled-states approximation (CSA), which neglects the off-diagonal Coriolis coupling in the scattering Hamiltonian in body-fixed coordinates. In particular, we investigate a method called NNCC (nearest-neighbor Coriolis coupling) [D. Yang, X. Hu, D. H. Zhang, and D. Xie, J. Chem. Phys. 148, 084101 (2018)] that includes Coriolis coupling to first order. The NNCC method is more demanding than the common CSA method, but still much more efficient than full CC calculations, and it is substantially more accurate than CSA. All of this is illustrated by showing state-to-state cross sections and rate coefficients of rovibrational transitions induced in CO$_2$ by collisions with He atoms. It is also shown that a further reduction of CPU time, practically without loss of accuracy, can be obtained by combining the NNCC method with the multi-channel distorted-wave Born approximation (MC-DWBA) that we applied in full CC calculations in a previous paper.
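In schematic terms (our notation, not the paper's), the approximations differ in how they treat the body-fixed Coriolis coupling between helicity components:
\[
\big\langle \Omega \,\big|\, \hat{l}^{\,2} \,\big|\, \Omega' \big\rangle \neq 0
\quad \text{only for} \quad \Omega' = \Omega,\ \Omega \pm 1,
\]
so the coupling matrix is block-tridiagonal in the helicity quantum number $\Omega$. CSA discards the $\Omega' = \Omega \pm 1$ blocks entirely, full CC retains them all (coupling every $\Omega$), and NNCC solves, for each $\Omega$, a reduced problem that keeps only the nearest-neighbour couplings to $\Omega \pm 1$.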
Nowadays, the environments of smart systems for Industry 4.0 and the Internet of Things (IoT) are experiencing fast industrial upgrading. Big data technologies such as decision making, event detection, and classification are developed to help manufacturing organizations achieve smart systems. By applying data analysis, the potential value of rich data can be maximized, thus helping manufacturing organizations complete another round of upgrading. In this paper, we propose two new algorithms for big data analysis, namely UFC$_{gen}$ and UFC$_{fast}$. Both algorithms are designed to collect three types of patterns that help determine the market positions of different product combinations. We compare these algorithms on various types of datasets, both real and synthetic. The experimental results show that both algorithms can successfully achieve pattern classification by extracting three different types of interesting patterns from all candidate patterns based on user-specified thresholds of utility and frequency. Furthermore, the list-based UFC$_{fast}$ algorithm outperforms the level-wise UFC$_{gen}$ algorithm in terms of both execution time and memory consumption.
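A toy sketch of the pattern-classification idea (the concrete class names and the brute-force enumeration below are ours for illustration; UFC$_{gen}$ and UFC$_{fast}$ use level-wise and list-based strategies, respectively, and the paper's exact pattern types may be defined differently):

```python
from itertools import combinations

def classify_patterns(transactions, utilities, min_util, min_freq):
    """Split candidate itemsets into three classes by user-specified
    utility and frequency (support) thresholds."""
    items = sorted({i for t, _ in transactions for i in t})
    result = {"frequent_high_utility": [], "frequent_low_utility": [],
              "infrequent_high_utility": []}
    for r in range(1, len(items) + 1):
        for itemset in combinations(items, r):
            s = set(itemset)
            support, utility = 0, 0
            for t_items, t_counts in transactions:
                if s <= t_items:
                    support += 1
                    utility += sum(t_counts[i] * utilities[i] for i in s)
            frequent, high_util = support >= min_freq, utility >= min_util
            if frequent and high_util:
                result["frequent_high_utility"].append(itemset)
            elif frequent:
                result["frequent_low_utility"].append(itemset)
            elif high_util:
                result["infrequent_high_utility"].append(itemset)
    return result

# Tiny example: each transaction is (set of items, per-item purchase counts).
transactions = [({"a", "b"}, {"a": 2, "b": 1}),
                ({"a", "c"}, {"a": 1, "c": 4}),
                ({"a", "b", "c"}, {"a": 1, "b": 2, "c": 1})]
utilities = {"a": 5, "b": 10, "c": 1}
print(classify_patterns(transactions, utilities, min_util=25, min_freq=2))
```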
In experiments that study social phenomena, such as peer influence or herd immunity, the treatment of one unit may influence the outcomes of others. Such "interference between units" violates the assumptions of traditional approaches to causal inference, so additional assumptions are often imposed to model or limit the underlying social mechanism. For binary outcomes, we propose an approach that does not require such assumptions, allowing for interference that is both unmodeled and strong, with confidence intervals derived using only the randomization of treatment. However, the estimates will have wider confidence intervals and weaker causal implications than those attainable under stronger assumptions. The approach allows for the use of regression, matching, or weighting, as best fits the application at hand. Inference is done by bounding the distribution of the estimation error over all possible values of the unknown counterfactual, using an integer program. Examples are shown using a vaccination trial and two experiments investigating social influence.
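As a toy illustration of bounding estimation error over the unknown binary counterfactuals (the paper does this with an integer program and randomization-based confidence intervals; the tiny estimand and naive estimator below are ours for illustration only):

```python
from itertools import product

# Observed data: treatment assignment and binary outcomes for 6 units.
z = [1, 1, 1, 0, 0, 0]
y = [1, 1, 0, 1, 0, 0]
n = len(z)

# Naive difference-in-means estimate.
treated = [y[i] for i in range(n) if z[i] == 1]
control = [y[i] for i in range(n) if z[i] == 0]
estimate = sum(treated) / len(treated) - sum(control) / len(control)

# Toy estimand: mean observed outcome minus mean outcome had no one been
# treated. The latter depends on n unknown binary counterfactuals, which
# (with possible interference) are unknown even for the control units.
errors = []
for cf in product([0, 1], repeat=n):        # all 2^n counterfactual vectors
    estimand = sum(y) / n - sum(cf) / n
    errors.append(estimate - estimand)

print(f"estimate = {estimate:.2f}, "
      f"error range over counterfactuals: [{min(errors):.2f}, {max(errors):.2f}]")
```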
Restless Multi-Armed Bandits (RMAB) provide an apt model for decision-making problems in public health interventions (e.g., tuberculosis, maternal and child care), anti-poaching planning, sensor monitoring, personalized recommendations, and many more. Existing research on RMAB has contributed mechanisms and theoretical results for a wide variety of settings, where the focus is on maximizing expected value. In this paper, we are interested in ensuring that RMAB decision making is also fair to the different arms while maximizing expected value. In the context of public health, this ensures that different people and/or communities are fairly represented when making public health intervention decisions. To achieve this goal, we formally define fairness constraints in RMAB and provide planning and learning methods to solve RMAB in a fair manner. We demonstrate key theoretical properties of fair RMAB and show experimentally that our proposed methods handle fairness constraints without significantly sacrificing solution quality.
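One natural way to write such a fairness constraint (this formalisation is ours for illustration; the paper's exact definition may differ) is to require every arm to be pulled at a minimum rate while respecting the usual per-step budget:
\[
\max_{\pi} \; \mathbb{E}_{\pi}\!\Big[ \sum_{t=1}^{T} \sum_{i=1}^{N} r_i\big(s_i^{t}\big) \Big]
\quad \text{s.t.} \quad
\sum_{i=1}^{N} a_i^{t} \le k \ \ \forall t,
\qquad
\mathbb{E}_{\pi}\!\Big[ \sum_{t=1}^{T} a_i^{t} \Big] \ \ge\ \eta\, T \ \ \forall i,
\]
where $a_i^{t} \in \{0,1\}$ indicates whether arm $i$ is pulled at time $t$, $k$ is the per-step budget, and $\eta \in [0, k/N]$ is a user-chosen minimum pull fraction.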
Machine learning typically presupposes classical probability theory, which implies that aggregation is built upon expectation. There are now multiple reasons to look at richer alternatives to classical probability theory as a mathematical foundation for machine learning. We systematically examine a powerful and rich class of such alternatives, known variously as spectral risk measures, Choquet integrals, or Lorentz norms. We present a range of characterization results and demonstrate what makes this spectral family so special. In doing so, we uncover a natural stratification of all coherent risk measures in terms of the upper probabilities that they induce, by exploiting results from the theory of rearrangement-invariant Banach spaces. We empirically demonstrate how this new approach to uncertainty helps tackle practical machine learning problems.
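A minimal numerical sketch of the spectral family referred to above (function names and the discretisation are ours; the empirical spectral risk is a spectrum-weighted average of the sorted losses, with expectation and CVaR as special cases):

```python
import numpy as np

def spectral_risk(losses, spectrum):
    """Empirical spectral risk measure: a spectrum-weighted average of the
    sorted losses. `spectrum` maps a quantile level u in [0, 1] to a
    non-negative weight density; it should be non-decreasing and integrate
    to 1 for the resulting measure to be coherent."""
    losses = np.sort(np.asarray(losses))          # ascending order statistics
    n = losses.size
    u = (np.arange(n) + 0.5) / n                  # mid-point quantile levels
    w = spectrum(u)
    w = w / w.sum()                               # normalise the discretisation
    return float(np.dot(w, losses))

# Two familiar members of the family:
mean_spectrum = lambda u: np.ones_like(u)                  # plain expectation
cvar_spectrum = lambda u, a=0.9: (u >= a) / (1 - a)        # CVaR at level 0.9

losses = np.random.default_rng(0).exponential(size=10_000)
print(spectral_risk(losses, mean_spectrum))   # approximately E[loss]
print(spectral_risk(losses, cvar_spectrum))   # tail-weighted, larger value
```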
Strong physical unclonable functions (PUFs) provide a low-cost authentication primitive for resource-constrained devices. However, most strong PUF architectures can be modeled by learning algorithms using a limited number of challenge-response pairs (CRPs). In this paper, we introduce the concept of non-monotonic response quantization for strong PUFs. Responses depend not only on which path is faster, but also on the distance between the arriving signals. Our experiments show that the resulting PUF has increased security against learning attacks. As a demonstration, we designed and implemented a non-monotonically quantized ring-oscillator-based PUF in 65 nm technology. Measurement results show nearly ideal uniformity and uniqueness, with a bit error rate of 13.4% over the temperature range from 0 °C to 50 °C.
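A toy sketch of the quantization idea (the band width and the specific rule below are arbitrary choices of ours, not the paper's circuit-level design):

```python
import numpy as np

def monotonic_response(delta):
    """Classical quantisation: the bit only encodes which signal is faster."""
    return int(delta > 0)

def non_monotonic_response(delta, band=1.0):
    """Non-monotonic quantisation: the bit also depends on the distance
    between the arriving signals, alternating between 0 and 1 as the delay
    difference crosses successive bands of width `band`."""
    return int(np.floor(delta / band)) % 2

rng = np.random.default_rng(1)
deltas = rng.normal(0.0, 2.0, size=8)             # simulated delay differences
print([monotonic_response(d) for d in deltas])
print([non_monotonic_response(d) for d in deltas])
```

Because the non-monotonic rule is not a threshold (linearly separable) function of the delay difference, fitting it is intuitively harder for the simple learning models that break classical strong PUFs.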
Voting forms the most important tool for arriving at a decision in any institution. The changing needs of civilization currently demand a practical yet secure electronic voting system, but any flaw in the applied voting technology can lead to tampering with the results, with malicious outcomes. Blockchain technology, due to its transparent structure, currently forms an emerging area of investigation for the development of voting systems with far greater security. However, various apprehensions are yet to be conclusively resolved before blockchain can be used in high-stakes elections. In addition, blockchain-based voting systems are vulnerable to possible attacks by upcoming noisy intermediate-scale quantum (NISQ) computers. To circumvent most of these limitations, in this work we propose an anonymous voting scheme based on a quantum-assisted blockchain, enhancing the advantages offered by blockchain with quantum resources such as quantum random number generators and quantum key distribution. The proposed scheme is shown to satisfy the requirements of a good voting scheme. Further, the voting scheme is auditable and can be implemented using currently available technology.
Deep Learning has revolutionized the fields of computer vision, natural language understanding, speech recognition, information retrieval, and more. However, with the progressive improvements in deep learning models, their number of parameters, latency, resources required to train, etc. have all increased significantly. Consequently, it has become important to pay attention to these footprint metrics of a model as well, not just its quality. We present and motivate the problem of efficiency in deep learning, followed by a thorough survey of the five core areas of model efficiency (spanning modeling techniques, infrastructure, and hardware) and the seminal work in each. We also present an experiment-based guide, along with code, for practitioners to optimize their model training and deployment. We believe this is the first comprehensive survey in the efficient deep learning space that covers the landscape of model efficiency from modeling techniques to hardware support. Our hope is that this survey provides the reader with the mental model and the necessary understanding of the field to apply generic efficiency techniques and immediately obtain significant improvements, and also equips them with ideas for further research and experimentation to achieve additional gains.