Depleting lake ice can serve as an indicator of climate change, just like sea level rise or glacial retreat. Several Lake Ice Phenological (LIP) events serve as sentinels for understanding regional and global climate change. Hence, it is useful to monitor long-term lake freezing and thawing patterns. In this paper we report a case study for the Oberengadin region of Switzerland, which contains several small- and medium-sized mountain lakes. We observe LIP events, such as freeze-up, break-up and ice cover duration, across two decades (2000-2020) from optical satellite images. We analyse the time series of MODIS imagery by estimating spatially resolved lake ice maps for these Alpine lakes with supervised machine learning (additionally cross-checking against VIIRS data when available). To train the classifier we rely on reference data annotated manually from webcam images. From the ice maps we derive long-term LIP trends. Since webcam data are available for only two winters, we also validate our results against the operational MODIS and VIIRS snow products. We find a change in complete freeze duration of -0.76 and -0.89 days per annum for lakes Sils and Silvaplana, respectively. Furthermore, we observe plausible correlations of the LIP trends with climate data measured at nearby meteorological stations: mean winter air temperature correlates negatively with freeze duration and break-up events and positively with freeze-up events, and winter sunshine correlates strongly and negatively with freeze duration and break-up events.
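To make the trend-estimation step concrete, here is a minimal sketch that fits a least-squares line to per-winter complete freeze durations to obtain a rate in days per annum; the duration values are synthetic placeholders, not the measurements derived from the MODIS ice maps.

```python
# Minimal sketch: estimate a LIP trend (days per annum) from per-winter
# complete freeze durations. The durations below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
winters = np.arange(2000, 2020)                       # winter start years
durations = 150 - 0.8 * (winters - 2000) + rng.normal(0, 5, winters.size)

# Least-squares slope = change in complete freeze duration per annum.
slope, _ = np.polyfit(winters, durations, deg=1)
print(f"trend: {slope:+.2f} days per annum")
```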
We investigate the capabilities of transfer learning in the area of structural health monitoring. In particular, we are interested in damage detection for concrete structures. Typical image datasets for such problems are relatively small, calling for the transfer of learned representations from a related large-scale dataset. Past image-based damage detection efforts have mainly considered cross-domain transfer learning approaches using pre-trained IMAGENET models that are subsequently fine-tuned for the target task. However, there are rising concerns about the generalizability of IMAGENET representations for specific target domains, such as visual inspection and medical imaging. We therefore evaluate a combination of in-domain and cross-domain transfer learning strategies for damage detection in bridges. We perform comprehensive comparisons to study the impact of cross-domain and in-domain transfer, with various initialization strategies, using six publicly available visual inspection datasets. The pre-trained models are also evaluated for their ability to cope with the extremely low-data regime. We show that the combination of cross-domain and in-domain transfer consistently yields superior performance, especially with tiny datasets. Finally, we provide visual explanations of the predictive models to enable algorithmic transparency and give experts insight into the intrinsic decision logic of typically black-box deep models.
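A minimal PyTorch sketch of the combined strategy described above, assuming the standard pre-train-then-fine-tune recipe: initialize from ImageNet weights, continue pre-training on a related in-domain inspection dataset, then fine-tune on the small target damage dataset. The loaders are synthetic placeholders; none of this is the paper's released code.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

def fine_tune(model, loader, epochs, lr):
    """Standard supervised fine-tuning of all model parameters."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

# Placeholder loaders standing in for a large in-domain inspection dataset
# and a tiny target damage dataset (real data loading is elided).
in_domain = DataLoader(TensorDataset(torch.randn(8, 3, 224, 224),
                                     torch.randint(0, 5, (8,))), batch_size=4)
target = DataLoader(TensorDataset(torch.randn(4, 3, 224, 224),
                                  torch.randint(0, 2, (4,))), batch_size=2)

# Step 1: cross-domain transfer -- initialize from ImageNet weights.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 5)   # in-domain classes (assumed)
# Step 2: in-domain transfer on the related inspection dataset.
model = fine_tune(model, in_domain, epochs=1, lr=1e-4)
# Step 3: fine-tune on the (tiny) target damage dataset.
model.fc = nn.Linear(model.fc.in_features, 2)   # damaged / undamaged
model = fine_tune(model, target, epochs=1, lr=1e-5)
```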
The distributed subgraph detection problem asks, for a fixed graph $H$, whether the $n$-node input graph contains $H$ as a subgraph or not. In the standard CONGEST model of distributed computing, the complexity of clique/cycle detection and listing has received a lot of attention recently. In this paper we consider the induced variant of subgraph detection, where the goal is to decide whether the $n$-node input graph contains $H$ as an \emph{induced} subgraph or not. We first show a $\tilde{\Omega}(n)$ lower bound for detecting the existence of an induced $k$-cycle for any $k\geq 4$ in the CONGEST model. This lower bound is tight for $k=4$, and shows that the induced variant of $k$-cycle detection is much harder than the non-induced version. This lower bound is proved via a reduction from two-party communication complexity. We complement this result by showing that for $5\leq k\leq 7$, this $\tilde{\Omega}(n)$ lower bound cannot be improved via the two-party communication framework. We then show how to prove stronger lower bounds for larger values of $k$. More precisely, we show that detecting an induced $k$-cycle for any $k\geq 8$ requires $\tilde{\Omega}(n^{2-\Theta(1/k)})$ rounds in the CONGEST model, nearly matching the known upper bound $\tilde{O}(n^{2-\Theta(1/k)})$ for general $k$-node subgraph detection (which also applies to the induced version) by Eden, Fiat, Fischer, Kuhn, and Oshman~[DISC 2019]. Finally, we investigate the case where $H$ is the diamond (the diamond is obtained by adding an edge to a 4-cycle, or equivalently by removing an edge from a 4-clique), and show non-trivial upper and lower bounds on the complexity of the induced versions of diamond detection and listing.
Much of the world's population experiences some form of disability during their lifetime. Caution must be exercised when designing natural language processing (NLP) systems to prevent them from inadvertently perpetuating ableist bias against people with disabilities, i.e., prejudice that favors those with typical abilities. We report on various analyses based on the word predictions of a large-scale BERT language model. Statistically significant results demonstrate that people with disabilities can be disadvantaged. The findings also explore overlapping forms of discrimination related to interconnected gender and race identities.
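To make the probing concrete, here is a hedged sketch of the kind of masked-word prediction analysis the abstract refers to, using the Hugging Face fill-mask pipeline; the template sentences are illustrative inventions, not the study's actual prompts.

```python
# Sketch: compare BERT's fill-in-the-blank predictions for templates that
# differ only in the mention of disability. Templates are illustrative.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
templates = [
    "The person is [MASK].",
    "The deaf person is [MASK].",
    "The blind person is [MASK].",
]
for t in templates:
    top = fill(t, top_k=5)
    print(t, "->", [p["token_str"] for p in top])
```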
COVID-19 has been a public health emergency of international concern since early 2020. Reliable forecasting is critical to diminish the impact of this disease. To date, a large number of forecasting models have been proposed, mainly including statistical models, compartmental models, and deep learning models. However, due to various uncertain factors across regions, such as economics and government policy, no forecasting model appears to be best for all scenarios. In this paper, we perform a quantitative analysis of COVID-19 forecasts of confirmed cases and deaths across different regions of the United States with different forecasting horizons, and evaluate the relative impact of the following three dimensions on predictive performance (improvement and variation) under different evaluation metrics: model selection, hyperparameter tuning, and the length of the time series used for training. We find that a dimension that brings larger performance gains can, if not well tuned, also incur harsher performance penalties. Furthermore, model selection is the dominant factor in determining predictive performance: it is responsible for both the largest improvement and the largest variation in performance across all prediction tasks and regions. While practitioners may perform more sophisticated time series analysis in practice, they should be able to achieve reasonable results if they have adequate insight into key decisions like model selection.
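A minimal sketch of the kind of evaluation grid this analysis implies: vary the model and the training-window length, then score each combination on a held-out horizon. The series, candidate models, and window lengths below are illustrative stand-ins, not the study's data or model pool.

```python
# Sketch of a backtesting grid over two of the three dimensions
# (model selection, training length) on a synthetic cumulative-count series.
import itertools
import numpy as np

rng = np.random.default_rng(1)
series = np.cumsum(rng.poisson(100, 300)).astype(float)  # synthetic counts
horizon = 14                                             # forecast horizon, days

def persistence(train, h):          # naive last-value model
    return np.full(h, train[-1])

def linear_trend(train, h):         # extrapolate a fitted line
    t = np.arange(len(train))
    slope, intercept = np.polyfit(t, train, 1)
    return intercept + slope * (len(train) + np.arange(h))

models = {"persistence": persistence, "linear": linear_trend}
windows = [30, 60, 120]             # training lengths, in days

truth = series[-horizon:]
for (name, model), w in itertools.product(models.items(), windows):
    pred = model(series[-horizon - w:-horizon], horizon)
    mae = np.abs(pred - truth).mean()
    print(f"{name:12s} window={w:3d}  MAE={mae:8.1f}")
```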
Deep learning has enabled a wide range of applications and has become increasingly popular in recent years. The goal of multimodal deep learning is to create models that can process and link information from various modalities. Despite the extensive development of unimodal learning, it still cannot cover all aspects of human learning. Multimodal learning helps models understand and analyze information better when multiple senses are engaged in its processing. This paper focuses on multiple types of modalities, i.e., image, video, text, audio, body gestures, facial expressions, and physiological signals. We provide a detailed analysis of past and current baseline approaches and an in-depth study of recent advances in multimodal deep learning applications. We propose a fine-grained taxonomy of multimodal deep learning applications, elaborating on different applications in depth. The architectures and datasets used in these applications are also discussed, along with their evaluation metrics. Finally, the main issues are highlighted separately for each domain, along with possible future research directions.
Adapting semantic segmentation models to new domains is an important but challenging problem. Recently, enlightening progress has been made, but the performance of existing methods is unsatisfactory on real datasets where the new target domain comprises heterogeneous sub-domains (e.g., diverse weather characteristics). We point out that carefully reasoning about the multiple modalities in the target domain can improve the robustness of adaptation models. To this end, we propose a condition-guided adaptation framework that is empowered by a special attentive progressive adversarial training (APAT) mechanism and a novel self-training policy. The APAT strategy progressively performs condition-specific alignment and attentive global feature matching. The new self-training scheme exploits the adversarial ambivalence of easy and hard adaptation regions and the correlations among target sub-domains effectively. We evaluate our method (DCAA) on various adaptation scenarios in which the target images vary in weather conditions. Comparisons against baselines and state-of-the-art approaches demonstrate the superiority of DCAA over its competitors.
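For flavor, here is a generic sketch of one ingredient named above, condition-specific adversarial alignment, implemented as one domain discriminator per target weather condition; this is an illustrative stand-in under our own assumptions (feature extractor, condition names), not the DCAA/APAT implementation.

```python
# Generic condition-specific adversarial alignment: a separate domain
# discriminator per target sub-domain (weather condition).
import torch
import torch.nn as nn

feat = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
discs = nn.ModuleDict({c: nn.Linear(16, 1) for c in ["fog", "rain", "night"]})
bce = nn.BCEWithLogitsLoss()

def alignment_loss(src_x, tgt_x, condition):
    """Discriminator loss for one target sub-domain (source=1, target=0)."""
    d = discs[condition]
    s, t = d(feat(src_x)), d(feat(tgt_x))
    return bce(s, torch.ones_like(s)) + bce(t, torch.zeros_like(t))

loss = alignment_loss(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64), "fog")
print(loss.item())
```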
Deep neural networks (DNNs) have achieved unprecedented success in numerous machine learning tasks across various domains. However, the existence of adversarial examples has raised concerns about applying deep learning to safety-critical applications. As a result, we have witnessed increasing interest in studying attack and defense mechanisms for DNN models on different data types, such as images, graphs and text. It is thus necessary to provide a systematic and comprehensive overview of the main threats posed by attacks and the success of the corresponding countermeasures. In this survey, we review the state-of-the-art algorithms for generating adversarial examples and the countermeasures against them, for three popular data types: images, graphs and text.
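As a concrete instance of the attacks such surveys cover, below is a hedged sketch of the fast gradient sign method (FGSM), one canonical image-domain attack; the toy classifier is our own stand-in, not a network from the survey.

```python
# FGSM: perturb the input in the direction that maximally increases the loss,
# with the perturbation bounded by eps in the L-infinity norm.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, eps):
    """Return an adversarial version of x within an eps-ball around it."""
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

x, y = torch.rand(1, 1, 28, 28), torch.tensor([3])
x_adv = fgsm(x, y, eps=0.1)
print((x_adv - x).abs().max())  # perturbation magnitude, at most eps
```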
Sufficient supervised information is crucial for any machine learning model to boost performance. However, labeling data is expensive and sometimes difficult to obtain. Active learning is an approach that acquires annotations from a human oracle by selecting informative samples that are likely to enhance performance. In recent studies, generative adversarial networks (GANs) have been integrated with active learning to generate good candidates to present to the oracle. In this paper, we propose a novel model that is able to obtain labels for data more cheaply, without the need to query an oracle. In the model, a novel reward for each sample is devised to measure its degree of uncertainty, obtained from a classifier trained on the existing labeled data. This reward is used to guide a conditional GAN to generate informative samples with a higher probability for a given label. Through extensive evaluations, we confirm the effectiveness of the model, showing that the generated samples are capable of improving classification performance on popular image classification tasks.
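A minimal sketch of the uncertainty reward idea, assuming predictive entropy as the uncertainty measure (the paper's exact reward may differ): score generated samples by the entropy of a classifier trained on the labeled pool, so the conditional GAN can be steered toward informative regions. The classifier and samples below are stand-ins.

```python
# Uncertainty reward: entropy of the classifier's class posteriors.
# Higher entropy = more uncertain = more informative generated sample.
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in

def uncertainty_reward(samples):
    """One reward per sample: entropy of the predicted class distribution."""
    probs = torch.softmax(classifier(samples), dim=1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

fake = torch.rand(8, 1, 28, 28)    # e.g., outputs of a conditional GAN
print(uncertainty_reward(fake))    # rewards used to guide the generator
```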
Can we detect common objects in a variety of image domains without instance-level annotations? In this paper, we present a framework for a novel task, cross-domain weakly supervised object detection, which addresses this question. In this setting, we have access to images with instance-level annotations in a source domain (e.g., natural images) and images with image-level annotations in a target domain (e.g., watercolor). In addition, the classes to be detected in the target domain are all or a subset of those in the source domain. Starting from a fully supervised object detector pre-trained on the source domain, we propose a two-step progressive domain adaptation technique that fine-tunes the detector on two types of artificially and automatically generated samples. We test our methods on newly collected datasets containing three image domains, and achieve an improvement of approximately 5 to 20 percentage points in mean average precision (mAP) over the best-performing baselines.
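The two-step recipe can be sketched at a high level as follows; every component here is a stub of our own devising (the abstract does not specify the generation procedures), so the block only illustrates the control flow, not the paper's implementation.

```python
# High-level, runnable sketch of two-step progressive adaptation.
# All components are stubs standing in for the real ones.
def style_transfer(image, target_images):   # stub: would impose target style
    return image

def fine_tune(detector, labeled_samples):   # stub: would update the detector
    return detector

class Detector:                             # stub detector
    def detect(self, image, min_score=0.8): # stub: would return box proposals
        return []

detector = Detector()
source_set = [(f"src_img_{i}", ["box"]) for i in range(3)]
target_images = [f"tgt_img_{i}" for i in range(3)]

# Step 1: fine-tune on artificially generated samples (source images rendered
# in the target style, keeping their instance-level annotations).
stylized = [(style_transfer(img, target_images), boxes)
            for img, boxes in source_set]
detector = fine_tune(detector, stylized)

# Step 2: fine-tune on automatically generated samples (target images
# pseudo-labeled by the detector's own high-confidence outputs).
pseudo = [(img, detector.detect(img)) for img in target_images]
detector = fine_tune(detector, pseudo)
```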
Discrete random structures are important tools in Bayesian nonparametrics, and the resulting models have proven effective in density estimation, clustering, topic modeling and prediction, among other tasks. In this paper, we consider nested processes and study the dependence structures they induce. Dependence ranges between homogeneity, corresponding to full exchangeability, and maximum heterogeneity, corresponding to (unconditional) independence across samples. The popular nested Dirichlet process is shown to degenerate to the fully exchangeable case when there are ties across samples at the observed or latent level. To overcome this drawback, which is inherent to nesting general discrete random measures, we introduce a novel class of latent nested processes. These are obtained by adding common and group-specific completely random measures and then normalising to yield dependent random probability measures. We provide results on the partition distributions induced by latent nested processes, and develop a Markov chain Monte Carlo sampler for Bayesian inference. A test for distributional homogeneity across groups is obtained as a by-product. The results and their inferential implications are showcased on synthetic and real data.
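In symbols, a hedged sketch of the construction just described (notation ours): given a common completely random measure $\mu_0$ and independent group-specific ones $\mu_1,\dots,\mu_d$ on a space $\mathbb{X}$, the dependent random probability measure for group $\ell$ is $\tilde p_\ell(\cdot) = \big(\mu_\ell(\cdot)+\mu_0(\cdot)\big)\big/\big(\mu_\ell(\mathbb{X})+\mu_0(\mathbb{X})\big)$ for $\ell = 1,\dots,d$; the shared component $\mu_0$ induces dependence across groups, while the idiosyncratic components $\mu_\ell$ capture group-specific heterogeneity.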