In today's world, many technologically advanced countries have realized that real power lies not in physical strength but in educated minds. As a result, every country has embarked on restructuring its education system to meet the demands of technology. As a country in the midst of these developments, we cannot remain indifferent to this transformation in education. In the Information Age of the 21st century, rapid access to information is crucial for the development of individuals and societies. To take our place among the knowledge societies of a rapidly globalizing world, we must closely follow technological innovations and keep pace with their requirements. This can be achieved by providing learning opportunities to anyone who wishes to pursue education in their area of interest. This study focuses on the advantages and disadvantages of internet-based learning compared with traditional teaching methods, the importance of computer use in internet-based learning, the factors that negatively affect internet-based learning, and recommendations for addressing these issues. In today's world, it is impossible to talk about education without technology, or about technology without education.
We consider the performance of the difference-in-means estimator in a two-arm randomized experiment under common experimental endpoints such as continuous (regression), incidence, proportion, and survival. We examine performance under both equal and unequal allocation to treatment groups, and we consider both the Neyman randomization model and the population model. We show that in the Neyman model, where the only source of randomness is the treatment manipulation, there is no free lunch: complete randomization is minimax for the estimator's mean squared error. In the population model, where each subject experiences response noise with zero mean, the optimal design is the deterministic perfect-balance allocation. However, this allocation is generally NP-hard to compute and, moreover, depends on unknown response parameters. When considering the tail criterion of Kapelner et al. (2021), we show that the optimal design is less random than complete randomization and more random than the deterministic perfect-balance allocation. We prove that Fisher's blocking design provides the asymptotically optimal degree of experimental randomness. Theoretical results are supported by simulations in all considered experimental settings.
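As an informal illustration of the comparison studied here (not the paper's own code), the following Python sketch estimates the mean squared error of the difference-in-means estimator under complete randomization and under a pairwise-blocked allocation on a single observed covariate, in a population-model-style simulation; the linear response model, covariate distribution, effect size, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau, n_reps = 100, 1.0, 2000   # sample size, true treatment effect, replications

def diff_in_means(y, w):
    return y[w == 1].mean() - y[w == 0].mean()

def simulate(allocation):
    errs = []
    for _ in range(n_reps):
        x = rng.normal(size=n)               # observed covariate
        eps = rng.normal(scale=0.2, size=n)  # zero-mean response noise
        w = allocation(x)                    # treatment assignment in {0, 1}
        y = 2.0 * x + tau * w + eps          # illustrative linear response
        errs.append(diff_in_means(y, w) - tau)
    return np.mean(np.square(errs))          # MSE of the estimator

def complete_randomization(x):
    w = np.zeros(len(x), dtype=int)
    w[rng.choice(len(x), size=len(x) // 2, replace=False)] = 1
    return w

def paired_blocking(x):
    # sort on the covariate, then randomize within consecutive pairs
    w = np.zeros(len(x), dtype=int)
    order = np.argsort(x)
    for i in range(0, len(x), 2):
        pair = order[i:i + 2]
        w[rng.permutation(pair)[0]] = 1
    return w

print("MSE, complete randomization:", simulate(complete_randomization))
print("MSE, paired blocking:       ", simulate(paired_blocking))
```

In this toy setting the blocked allocation balances the covariate within pairs and so typically yields a smaller MSE than complete randomization, consistent with the population-model discussion above.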
Past works have shown that the Bernstein-von Mises theorem, on the asymptotic normality of posterior distributions, holds if the parameter dimension $d$ grows slower than the cube root of the sample size $n$. Here, we prove the first Bernstein-von Mises theorem in the regime $d^2\ll n$. We establish this result for 1) exponential families and 2) logistic regression with Gaussian design. The proof builds on our recent work on the accuracy of the Laplace approximation to posterior distributions, in which we showed that the approximation error in TV distance scales as $d/\sqrt n$.
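As a toy, one-dimensional illustration of the Laplace approximation mentioned above (not the paper's high-dimensional analysis), the sketch below compares a Bernoulli posterior under a flat prior, which is a Beta distribution, with its Gaussian (Laplace) approximation and reports the total variation distance numerically; the sample size and success count are illustrative assumptions.

```python
import numpy as np
from scipy.stats import beta, norm

n, k = 200, 120                       # illustrative sample size and number of successes
a, b = k + 1, n - k + 1               # Beta(a, b) posterior under a flat prior

# Laplace approximation: Gaussian centred at the posterior mode, with variance
# equal to the inverse negative Hessian of the log posterior at the mode.
mode = (a - 1) / (a + b - 2)
neg_hess = (a - 1) / mode**2 + (b - 1) / (1 - mode)**2
laplace = norm(loc=mode, scale=neg_hess**-0.5)

# Numerical total variation distance on a grid: 0.5 * integral |p - q|.
grid = np.linspace(1e-6, 1 - 1e-6, 20001)
dx = grid[1] - grid[0]
tv = 0.5 * np.sum(np.abs(beta(a, b).pdf(grid) - laplace.pdf(grid))) * dx
print(f"TV(posterior, Laplace) ~ {tv:.4f}")   # shrinks as n grows, roughly like 1/sqrt(n) here
```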
Internet of Things (IoT) devices are rapidly increasing in popularity, with more individuals using Internet-connected devices that continuously monitor their activities. This work explores the privacy concerns and expectations of end users related to Trigger-Action Platforms (TAPs) in the context of the IoT. TAPs allow users to customize their smart environments by creating rules that trigger actions based on specific events or conditions. As personal data flows between different entities, there is a potential for privacy concerns. In this study, we aimed to identify the privacy factors that impact users' concerns and preferences for using IoT TAPs. To address this research objective, we conducted three focus groups with 15 participants and extracted nine themes related to privacy factors using thematic analysis. Our participants particularly want control over, and transparency into, the automation, and are concerned about unexpected data inferences, risks, and unforeseen consequences of the automation for themselves and for bystanders. The identified privacy factors can help researchers derive predefined, selectable profiles of privacy permission settings for IoT TAPs that reflect the privacy preferences of different types of users, as a basis for designing usable privacy controls for IoT TAPs.
Physics-informed neural networks (PINNs) form a class of numerical methods that use neural networks to approximate solutions of partial differential equations. The approach has received a lot of attention and is currently used in numerous physical and engineering problems. However, the mathematical understanding of these methods is limited, and in particular a consistent notion of stability seems to be missing. Towards addressing this issue, we consider model problems of partial differential equations, namely linear elliptic and parabolic PDEs. We consider problems with different stability properties, as well as problems with time-discrete training. Motivated by tools of nonlinear calculus of variations, we systematically show that coercivity of the energies and associated compactness provide the right framework for stability. For time-discrete training we show that if these properties fail to hold, then methods may become unstable. Furthermore, using tools of $\Gamma$-convergence, we provide new convergence results for weak solutions by only requiring that the neural network spaces are chosen to have suitable approximation properties.
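To make the setting concrete, here is a minimal PyTorch sketch of a physics-informed network for a model elliptic problem, the 1D Poisson equation $-u''(x)=\pi^2\sin(\pi x)$ on $(0,1)$ with zero boundary values (whose solution is $\sin(\pi x)$); the architecture, penalty weight, and optimizer settings are illustrative assumptions and not the paper's setup.

```python
import torch

torch.manual_seed(0)

# Small fully connected network u_theta: (0, 1) -> R
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(x):
    # residual of -u'' = pi^2 sin(pi x), computed via automatic differentiation
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return -d2u - torch.pi**2 * torch.sin(torch.pi * x)

for step in range(2000):
    x_in = torch.rand(128, 1)                    # interior collocation points
    x_bd = torch.tensor([[0.0], [1.0]])          # boundary points
    loss = pde_residual(x_in).pow(2).mean() \
         + 10.0 * net(x_bd).pow(2).mean()        # boundary penalty (illustrative weight)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    x_test = torch.linspace(0, 1, 101).reshape(-1, 1)
    err = (net(x_test) - torch.sin(torch.pi * x_test)).abs().max()
    print(f"max |u_theta - sin(pi x)| ~ {err.item():.3f}")
```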
The generation and verification of quantum states are fundamental tasks for quantum information processing that have recently been investigated by Irani, Natarajan, Nirkhe, Rao and Yuen [CCC 2022], Rosenthal and Yuen [ITCS 2022], and Metger and Yuen [FOCS 2023] under the term \emph{state synthesis}. This paper studies this concept from the viewpoint of quantum distributed computing, and especially distributed quantum Merlin-Arthur (dQMA) protocols. We first introduce a novel task on a line, called state generation with distributed inputs (SGDI). In this task, the goal is to generate the quantum state $U\ket{\psi}$ at the rightmost node of the line, where $\ket{\psi}$ is a quantum state given at the leftmost node and $U$ is a unitary matrix whose description is distributed over the nodes of the line. We give a dQMA protocol for SGDI and utilize this protocol to construct a dQMA protocol for the Set Equality problem studied by Naor, Parter and Yogev [SODA 2020], and complement our protocol by showing classical lower bounds for this problem. Our second contribution is a dQMA protocol, based on a recent work by Zhu and Hayashi [Physical Review A, 2019], to create EPR-pairs between adjacent nodes of a network without quantum communication. As an application of this dQMA protocol, we prove a general result showing how to convert any dQMA protocol on an arbitrary network into another dQMA protocol where the verification stage does not require any quantum communication.
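As a small, self-contained illustration of the kind of local checks that underlie verification of EPR-pairs (in the spirit of the stabilizer tests used in such verification schemes, not the dQMA protocol itself), the numpy sketch below builds the state $\ket{\Phi^+}=(\ket{00}+\ket{11})/\sqrt{2}$ and confirms that the two stabilizer observables $X\otimes X$ and $Z\otimes Z$ both have expectation $+1$; the EPR pair is the unique state with this property.

```python
import numpy as np

# EPR pair |Phi+> = (|00> + |11>) / sqrt(2), in the computational basis
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

XX = np.kron(X, X)   # observable X tensor X
ZZ = np.kron(Z, Z)   # observable Z tensor Z

# Both stabilizer expectations equal +1 exactly on the EPR pair.
print("<XX> =", phi_plus @ XX @ phi_plus)   # 1.0
print("<ZZ> =", phi_plus @ ZZ @ phi_plus)   # 1.0
```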
Environmental perception is a key element of autonomous driving because the information received from the perception module influences core driving decisions. An outstanding challenge in real-time perception for autonomous driving lies in finding the best trade-off between detection quality and latency. Major constraints on both computation and power have to be taken into account for real-time perception in autonomous vehicles. Larger object detection models tend to produce the best results, but are also slower at runtime. Since the most accurate detectors cannot run in real time locally, we investigate the possibility of offloading computation to edge and cloud platforms, which are less resource-constrained. We create a synthetic dataset to train object detection models and evaluate different offloading strategies. Using real hardware and network simulations, we compare different trade-offs between prediction quality and end-to-end delay. Since sending raw frames over the network implies additional transmission delays, we also explore the use of JPEG and H.265 compression at varying qualities and measure their impact on prediction metrics. We show that models with adequate compression can be run in real time in the cloud while outperforming local detection performance.
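As a minimal illustration of the compression trade-off discussed above (using a synthetic frame and JPEG only; the resolution and quality levels are illustrative assumptions), the following OpenCV sketch encodes a frame at several JPEG qualities and reports the payload size that would have to cross the network before server-side decoding and detection.

```python
import numpy as np
import cv2  # opencv-python

# Smooth synthetic 1080p frame standing in for a camera image.
col = np.linspace(0, 255, 1920, dtype=np.uint8)
frame = np.dstack([np.tile(col, (1080, 1))] * 3)

raw_bytes = frame.nbytes
print(f"raw frame: {raw_bytes / 1e6:.1f} MB")

for quality in (95, 75, 50, 25):
    ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, quality])
    assert ok
    print(f"JPEG q={quality:>2}: {buf.nbytes / 1e3:.0f} kB "
          f"({raw_bytes / buf.nbytes:.0f}x smaller)")

# Decoding on the server side recovers a lossy frame for the detector.
decoded = cv2.imdecode(buf, cv2.IMREAD_COLOR)
```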
Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks are typically represented in Euclidean domains. Nevertheless, an increasing number of applications in power systems collect data from non-Euclidean domains and represent them as graph-structured data with high-dimensional features and interdependency among nodes. The complexity of graph-structured data has brought significant challenges to existing deep neural networks defined in Euclidean domains. Recently, many studies on extending deep neural networks to graph-structured data in power systems have emerged. In this paper, a comprehensive overview of graph neural networks (GNNs) in power systems is presented. Specifically, several classical paradigms of GNN structures (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems, such as fault diagnosis, power prediction, power flow calculation, and data generation, are reviewed in detail. Furthermore, the main issues and research trends concerning the application of GNNs in power systems are discussed.
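For readers unfamiliar with the basic building block, here is a minimal numpy sketch of a single graph convolutional layer in the style of Kipf and Welling, applied to a toy graph; the adjacency matrix, feature dimensions, and weight dimensions are illustrative assumptions (nodes could, for instance, represent buses and features per-bus measurements).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph on 4 nodes (e.g., buses), given by its adjacency matrix.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))   # node features, e.g. per-bus measurements
W = rng.normal(size=(3, 2))   # learnable layer weights

# Graph convolution: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)
A_hat = A + np.eye(len(A))                        # add self-loops
D_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)   # normalized degree matrix
H_next = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

print(H_next.shape)   # (4, 2): a new embedding for every node
```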
Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing the accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
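As a concrete, deliberately simplified example of the first category, the numpy sketch below applies symmetric 8-bit post-training quantization and magnitude pruning to a single weight matrix; the layer size and sparsity level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(256, 256)).astype(np.float32)  # one dense layer

# --- Symmetric 8-bit post-training quantization ----------------------------
scale = np.abs(W).max() / 127.0                 # map [-max, max] to [-127, 127]
W_int8 = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
W_deq = W_int8.astype(np.float32) * scale       # dequantized copy for comparison
print("quantization MSE:", float(np.mean((W - W_deq) ** 2)))
print("memory: %.0f kB -> %.0f kB" % (W.nbytes / 1024, W_int8.nbytes / 1024))

# --- Magnitude pruning: zero out the 80% smallest-magnitude weights --------
threshold = np.quantile(np.abs(W), 0.80)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)
print("sparsity after pruning:", float(np.mean(W_pruned == 0.0)))
```

The quantized layer uses a quarter of the memory of its float32 counterpart, and the pruned layer can be stored and executed in sparse form, which is exactly the kind of saving the surveyed methods trade against accuracy.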
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
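As a small numerical sketch of the re-weighting scheme described above (class counts, $\beta$, and the normalization convention are chosen purely for illustration), the following Python snippet computes the effective number of samples per class and the resulting class-balanced weights for a softmax cross-entropy loss.

```python
import numpy as np

samples_per_class = np.array([5000, 500, 50, 5])   # illustrative long-tailed counts
beta = 0.999                                        # hyperparameter in [0, 1)

# Effective number of samples per class: E_n = (1 - beta^n) / (1 - beta)
effective_num = (1.0 - beta ** samples_per_class) / (1.0 - beta)

# Class-balanced weights are proportional to 1 / E_n; here they are normalized
# so that they sum to the number of classes (a common convention).
weights = 1.0 / effective_num
weights = weights / weights.sum() * len(samples_per_class)
print("effective numbers:", np.round(effective_num, 1))
print("class weights:    ", np.round(weights, 3))

def class_balanced_ce(logits, label):
    # weighted softmax cross-entropy for a single example
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    return -weights[label] * log_probs[label]

print("loss for a tail-class example:",
      class_balanced_ce(np.array([1.0, 0.2, -0.3, 0.1]), 3))
```

Examples from rare classes receive larger weights than under inverse-frequency weighting would be the case for head classes, while the weights saturate for well-represented classes, reflecting the diminishing benefit of additional samples.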
The era of big data provides researchers with convenient access to copious data. However, researchers often have little prior causal knowledge about these data. The increasing prevalence of big data is challenging the traditional methods of learning causality, because those methods were developed for settings with limited amounts of data and solid prior causal knowledge. This survey aims to close the gap between big data and learning causality through a comprehensive and structured review of traditional and frontier methods and a discussion of open problems in learning causality. We begin with preliminaries of learning causality. Then we categorize and revisit methods of learning causality for the typical problems and data types. After that, we discuss the connections between learning causality and machine learning. Finally, some open problems are presented to show the great potential of learning causality with data.