The aim of the work presented in this paper is to develop and evaluate an integrated system that provides automated lecture style evaluation, allowing teachers to get instant feedback on the quality of their lecturing style. The proposed system aims to promote improvement of lecture quality, which could in turn enhance the overall student learning experience. The application utilizes specific measurable biometric characteristics, such as facial expressions, body activity, speech rate and intonation, hand movement, and facial pose, extracted from a video showing the lecturer from the audience's point of view. The biometric features extracted during a lecture are combined into a score reflecting lecture style quality, reported both at the frame level and as quality metrics for the lecture as a whole. The acceptance of the proposed lecture style evaluation system was assessed by chief education officers, teachers, and students with respect to its functionality, usefulness, and possible improvements. The results indicate that participants found the application novel and useful in providing automated feedback on lecture quality. Furthermore, the performance of the proposed system was compared with that of humans on the task of lecture style evaluation. The results indicate that the proposed system not only achieves performance comparable to human observers but in some cases outperforms them.
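To make the fusion step concrete, the following minimal Python sketch shows one plausible way to combine per-frame biometric sub-scores into a frame-level score and whole-lecture metrics; the cue names, the uniform weights, and the linear weighting itself are assumptions for illustration, not the system's published fusion rule.
\begin{verbatim}
from statistics import mean

# Hypothetical per-frame sub-scores in [0, 1] for each biometric cue; the cue
# names and the uniform weights are assumptions, not the system's actual rule.
FRAME_FEATURES = ["facial_expression", "body_activity", "speech_rate",
                  "intonation", "hand_movement", "facial_pose"]
WEIGHTS = {name: 1.0 / len(FRAME_FEATURES) for name in FRAME_FEATURES}

def frame_score(features):
    """Combine the per-frame sub-scores into a single lecture-style score."""
    return sum(WEIGHTS[name] * features[name] for name in FRAME_FEATURES)

def lecture_metrics(frames):
    """Aggregate frame-level scores into whole-lecture quality metrics."""
    scores = [frame_score(f) for f in frames]
    return {"mean": mean(scores), "min": min(scores), "max": max(scores)}
\end{verbatim}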
We propose a framework in which the Fer and Wilcox expansions for the solution of differential equations are derived from two particular choices of the initial transformation that seeds the product expansion. In this scheme, intermediate expansions can also be envisaged. Recurrence formulas are developed. A new lower bound for the convergence of the Wilcox expansion is provided, as well as some applications of the results. In particular, two examples are worked out up to high order of approximation to illustrate the behavior of the Wilcox expansion.
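For orientation (standard background assumed from the literature, not a result of the paper), both expansions address the linear initial-value problem $Y'(t) = A(t)\,Y(t)$, $Y(0) = I$, and write its solution as an infinite product of exponentials:
% illustrative template of the Fer-type product, with the usual first factor
\[
  Y(t) \;=\; \mathrm{e}^{F_1(t)}\,\mathrm{e}^{F_2(t)}\,\mathrm{e}^{F_3(t)}\cdots,
  \qquad F_1(t) \;=\; \int_0^t A(s)\,\mathrm{d}s,
\]
with the higher factors determined recursively from the remainder; in the Wilcox variant (for $A \mapsto \lambda A$) the $n$-th exponent instead collects the contributions of order $\lambda^n$. The two products thus differ in how the factors beyond the seed are organized, consistent with the paper's viewpoint that different choices of the initial transformation yield different expansions.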
In this paper, we characterize the class of {\em contraction perfect} graphs, which are the graphs that remain perfect after the contraction of any edge set. We prove that a graph is contraction perfect if and only if it is perfect and the contraction of any single edge preserves its perfection. This yields a characterization of contraction perfect graphs in terms of forbidden induced subgraphs, and a polynomial algorithm to recognize them. We also define the utter graph $u(G)$, which is the graph whose stable sets are in bijection with the co-2-plexes of $G$, and prove that $u(G)$ is perfect if and only if $G$ is contraction perfect.
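As a quick aid to the reader, the short Python sketch below checks the co-2-plex condition under the usual definition, namely a vertex set inducing maximum degree at most one; this definition is an assumption here, since the abstract does not spell it out.
\begin{verbatim}
# Assumed definition: a co-2-plex of G is a vertex set S whose induced subgraph
# has maximum degree at most 1 (stable sets are the special case of degree 0).
def is_co_2_plex(adj, S):
    """adj maps each vertex to its set of neighbours; S is a set of vertices."""
    return all(len(adj[v] & S) <= 1 for v in S)

# Path a-b-c: {a, c} is stable, {a, b} is a co-2-plex, {a, b, c} is not.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
assert is_co_2_plex(adj, {"a", "c"})
assert is_co_2_plex(adj, {"a", "b"})
assert not is_co_2_plex(adj, {"a", "b", "c"})
\end{verbatim}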
In this paper, we introduce a mixed-integer quadratic formulation for the congested variant of the partial set covering location problem, which involves determining a subset of facility locations to open and efficiently allocating customers to these facilities so as to minimize the combined costs of facility opening and congestion while ensuring target coverage. To enhance the resilience of the solution against demand fluctuations, we address the case of uncertain customer demand using $\Gamma$-robustness. We formulate the deterministic problem and its robust counterpart as mixed-integer quadratic problems. We investigate the effect of the protection level on adapted instances from the literature to provide critical insights into how sensitive the planning is to this parameter. Moreover, since the size of the robust counterpart grows with the number of customers, which can be significant in real-world contexts, we propose the use of Benders decomposition to reduce the number of variables by projecting out of the master problem all the variables that depend on the number of customers. We illustrate how to incorporate our Benders approach within a mixed-integer second-order cone programming (MISOCP) solver, explicitly addressing all the ingredients that are instrumental to its success. We discuss single-tree and multi-tree approaches and introduce a perturbation technique to deal efficiently with the degeneracy of the Benders subproblem. Our tailored Benders approaches outperform the perspective reformulation solved with the state-of-the-art MISOCP solver Gurobi on adapted instances from the literature.
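For orientation, a generic $\Gamma$-robust counterpart in the Bertsimas--Sim style (an illustrative template under standard assumptions, not the paper's exact model) replaces an uncertain demand constraint $\sum_j d_j y_j \le C$, with $d_j \in [\bar d_j,\, \bar d_j + \hat d_j]$, nonnegative allocation variables $y_j$, and at most $\Gamma$ demands deviating simultaneously, by its dualized form
% illustrative Gamma-robust template (Bertsimas-Sim), not the paper's exact formulation
\[
  \sum_{j} \bar d_j\, y_j \;+\; \Gamma z \;+\; \sum_{j} p_j \;\le\; C, \qquad
  z + p_j \;\ge\; \hat d_j\, y_j \quad \forall j, \qquad z \ge 0,\; p_j \ge 0 \quad \forall j,
\]
so that a larger protection level $\Gamma$ guards against more simultaneous deviations at the price of a more conservative plan, which is exactly the sensitivity the computational study examines.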
In this article, we propose an interval constraint programming method for globally solving catalog-based categorical optimization problems. It supports catalogs of arbitrary size and properties of arbitrary dimension, and does not require any modeling effort from the user. A novel catalog-based contractor (or filtering operator) guarantees consistency between the categorical properties and the existing catalog items. This results in an intuitive and generic approach that is exact, rigorous (robust to roundoff errors) and can be easily implemented in an off-the-shelf interval-based continuous solver that interleaves branching and constraint propagation. We demonstrate the validity of the approach on a numerical problem in which a categorical variable is described by a two-dimensional property space. A Julia prototype is available as open-source software under the MIT license at https://github.com/cvanaret/CateGOrical.jl
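The contraction step can be pictured with the following minimal Python sketch (the released prototype is a Julia package with rigorous interval arithmetic; the plain floating-point min/max used here is only illustrative and not roundoff-safe): property intervals are shrunk to the hull of the catalog items that fit all of them, and an empty result signals an inconsistent box.
\begin{verbatim}
# Illustrative only: the released prototype is a Julia package with rigorous
# interval arithmetic; plain float min/max here is not roundoff-safe.
def contract_catalog(domains, catalog):
    """Shrink each property interval [lo, hi] to the hull of the catalog items
    whose property values fit inside every current interval."""
    feasible = [item for item in catalog
                if all(lo <= v <= hi for v, (lo, hi) in zip(item, domains))]
    if not feasible:
        return None  # no catalog item is consistent: prune this box
    return [(min(vals), max(vals)) for vals in zip(*feasible)]

# Two-dimensional property space with three catalog items.
catalog = [(1.0, 10.0), (2.0, 14.0), (5.0, 3.0)]
print(contract_catalog([(0.0, 3.0), (0.0, 20.0)], catalog))  # [(1.0, 2.0), (10.0, 14.0)]
\end{verbatim}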
Compared with traditional design methods, generative design has attracted considerable interest from engineers in various disciplines. In this work, we investigate how to achieve the real-time generative design of optimized structures with rich diversity and controllable structural complexity. To this end, a modified Moving Morphable Component (MMC) method, together with novel strategies, is adopted to generate a high-quality dataset. The complexity level of the optimized structures is categorized by a topological invariant. With an improved cost function, the WGAN is trained to produce optimized designs in real time, taking the loading position and complexity level as inputs. It is found that the proposed model can generate diverse designs with clear load transmission paths and crisp boundaries that require no further optimization and differ from any reference in the dataset. This method holds great potential for future applications of machine learning-enhanced intelligent design.
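To fix ideas, the sketch below shows a generic conditional WGAN-style generator in PyTorch that maps a noise vector, a loading position, and a complexity level to a structural density image; the layer sizes, the embedding of the complexity level, and the 64x64 output resolution are assumptions for illustration and do not reproduce the paper's network.
\begin{verbatim}
# Illustrative conditional WGAN-style generator; sizes and architecture are
# assumptions and do not reproduce the network described in the paper.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=64, n_complexity_levels=4, out_pixels=64 * 64):
        super().__init__()
        self.complexity_embed = nn.Embedding(n_complexity_levels, 8)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + 2 + 8, 256), nn.ReLU(),  # 2 = (x, y) loading position
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, out_pixels), nn.Sigmoid(),      # material density in [0, 1]
        )

    def forward(self, noise, load_xy, complexity_level):
        cond = torch.cat([noise, load_xy, self.complexity_embed(complexity_level)], dim=1)
        return self.net(cond).view(-1, 1, 64, 64)

# One sample conditioned on loading position (0.5, 1.0) and complexity level 2.
gen = ConditionalGenerator()
design = gen(torch.randn(1, 64), torch.tensor([[0.5, 1.0]]), torch.tensor([2]))
print(design.shape)  # torch.Size([1, 1, 64, 64])
\end{verbatim}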
We study propositional proof systems with inference rules that formalize restricted versions of the ability to make assumptions that hold without loss of generality, commonly used informally to shorten proofs. Each system we study is built on resolution. They are called BC${}^-$, RAT${}^-$, SBC${}^-$, and GER${}^-$, denoting respectively blocked clauses, resolution asymmetric tautologies, set-blocked clauses, and generalized extended resolution - all "without new variables." They may be viewed as weak versions of extended resolution (ER) since they are defined by first generalizing the extension rule and then taking away the ability to introduce new variables. Except for SBC${}^-$, they are known to be strictly between resolution and extended resolution. Several separations between these systems were proved earlier by exploiting the fact that they effectively simulate ER. We answer the questions left open: We prove exponential lower bounds for SBC${}^-$ proofs of a binary encoding of the pigeonhole principle, which separates ER from SBC${}^-$. Using this new separation, we prove that both RAT${}^-$ and GER${}^-$ are exponentially separated from SBC${}^-$. This completes the picture of their relative strengths.
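As a reference point for the weakest of these rules, the Python sketch below checks the standard blocked-clause condition on which BC${}^-$ is built: a clause $C$ is blocked on a literal $l \in C$ with respect to a CNF formula $F$ if every resolvent of $C$ with a clause of $F$ containing $\lnot l$ is a tautology (the DIMACS-style integer encoding of literals is just a convenience here).
\begin{verbatim}
# Literals are nonzero integers (DIMACS style); clauses are frozensets of literals.
def resolvent_is_tautology(C, D, l):
    """Resolve C and D on l and check whether the resolvent contains p and -p."""
    res = (C - {l}) | (D - {-l})
    return any(-p in res for p in res)

def is_blocked(C, F, l):
    """C is blocked on l in the CNF F if every resolvent of C on l is a tautology."""
    return l in C and all(resolvent_is_tautology(C, D, l) for D in F if -l in D)

# (a or b) is blocked on a in {(-a or -b), (b or c)}: the only resolvent is {b, -b}.
F = [frozenset({-1, -2}), frozenset({2, 3})]
print(is_blocked(frozenset({1, 2}), F, 1))  # True
\end{verbatim}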
In this paper we provide a new linear sampling method, based on the same data but a different definition of the data operator, for two inverse problems: the multi-frequency inverse source problem for a fixed observation direction and the Born inverse scattering problem. We show that the associated regularized linear sampling indicator converges to the average of the unknown in a small neighborhood as the regularization parameter approaches zero. We develop both a shape identification theory and a parameter identification theory, which are motivated, analyzed, and implemented with the help of the prolate spheroidal wave functions and their generalizations. We further propose a prolate-based implementation of the linear sampling method and provide numerical experiments to demonstrate how this linear sampling method is capable of reconstructing both the shape and the parameter.
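For readers unfamiliar with the numerical side, the short NumPy sketch below implements a generic Tikhonov-regularized linear sampling indicator; it is a textbook-style template, not the paper's prolate-based implementation, and the discretized operator, test vector, and regularization parameter are placeholders.
\begin{verbatim}
# Generic Tikhonov-regularized linear sampling indicator (textbook template,
# not the paper's prolate-based implementation).
import numpy as np

def lsm_indicator(F, phi, alpha=1e-3):
    """F: discretized data operator (m x n); phi: test vector for one sampling
    point z; returns 1/||g_z||, which is large when z lies inside the support."""
    A = alpha * np.eye(F.shape[1]) + F.conj().T @ F
    g = np.linalg.solve(A, F.conj().T @ phi)
    return 1.0 / np.linalg.norm(g)

# Placeholder data just to show the call signature.
F = np.random.randn(20, 15)
print(lsm_indicator(F, np.random.randn(20)))
\end{verbatim}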
Deterministic communication is required for applications in several industry verticals, including manufacturing, automotive, finance, and health care. These applications rely on reliable and time-synchronized delivery of information among the communicating devices; therefore, large delay variations in packet delivery or inaccuracies in time synchronization cannot be tolerated. In particular, the industrial revolution in digitization, the connectivity of digital and physical systems, and flexible production design require deterministic and time-synchronized communication. A network supporting deterministic communication guarantees data delivery within a specified time and with high reliability. The IEEE 802.1 TSN task group is developing standards to provide deterministic communication through IEEE 802 networks. The IEEE 802.1AS standard defines a time synchronization mechanism for the accurate distribution of time among the communicating devices. The time synchronization accuracy depends on the accurate calculation of the residence time, i.e., the time the timing information spends between the ingress and egress ports of a bridge, which includes the processing, queuing, transmission, and link latency of the timing information. This paper discusses the time synchronization mechanisms supported in current integrated wired and wireless systems.
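The role of the residence time can be illustrated with the simplified (non-conformant) sketch below: an 802.1AS-style bridge adds the time a timing message spends between its ingress and egress ports, plus the measured upstream link delay, to the correction accumulated along the path; the function name and nanosecond fields are illustrative assumptions.
\begin{verbatim}
# Simplified, non-conformant sketch of how an 802.1AS-style bridge accumulates
# the correction passed downstream; field names and units are illustrative.
def accumulate_correction(correction_ns, ingress_ts_ns, egress_ts_ns, link_delay_ns):
    """Add this bridge's residence time and the measured upstream link delay."""
    residence_ns = egress_ts_ns - ingress_ts_ns  # processing + queuing + transmission
    return correction_ns + residence_ns + link_delay_ns

# 1.5 us residence time and 300 ns upstream link delay on top of 2 us so far.
print(accumulate_correction(2_000, ingress_ts_ns=10_000,
                            egress_ts_ns=11_500, link_delay_ns=300))  # 3800
\end{verbatim}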
In the context of content-based recommender systems, the aim of this paper is to determine how better profiles can be built and how they affect the recommendation process, based on the incorporation of temporality, i.e. the inclusion of time in the recommendation process, and topicality, i.e. the representation of the texts associated with users and items by means of topics, as well as their combination. The main contribution of the paper is to present two different ways of hybridising these two dimensions and to evaluate and compare them with other alternatives.
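One simple way to hybridise the two dimensions, shown purely as an illustrative assumption rather than the authors' actual schemes, is to build a user profile from the topic vectors of consumed items weighted by an exponential time decay and to rank candidates by similarity to that profile:
\begin{verbatim}
# Illustrative assumption, not the authors' hybridisation schemes: decay each
# consumed item's topic vector by its age and rank candidates by similarity.
import numpy as np

def temporal_topic_profile(item_topics, item_ages_days, decay=0.05):
    """item_topics: (n_items, n_topics) topic distributions; ages in days."""
    weights = np.exp(-decay * np.asarray(item_ages_days))
    profile = weights @ np.asarray(item_topics)
    return profile / profile.sum()

def rank_scores(profile, candidate_topics):
    """Cosine similarity between the user profile and each candidate item."""
    c = np.asarray(candidate_topics)
    return (c @ profile) / (np.linalg.norm(c, axis=1) * np.linalg.norm(profile))
\end{verbatim}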
The paper proposes a conceptual model of how different perceived levels of experiential AR application features affect customer experience and, in turn, customer satisfaction and purchase behavior. In addition, it puts forward the mediating role of immersion between the perceived levels of experiential AR application features and customer experience.