
According to the Complementary Learning Systems (CLS) theory~\cite{mcclelland1995there} in neuroscience, humans perform effective \emph{continual learning} through two complementary systems: a fast learning system centered on the hippocampus for rapid learning of specific, individual experiences, and a slow learning system located in the neocortex for the gradual acquisition of structured knowledge about the environment. Motivated by this theory, we propose \emph{DualNets} (for Dual Networks), a general continual learning framework comprising a fast learning system for supervised learning of pattern-separated representations from specific tasks and a slow learning system for learning task-agnostic, general representations via Self-Supervised Learning (SSL). DualNets can seamlessly incorporate both representation types into a holistic framework to facilitate better continual learning in deep neural networks. Via extensive experiments, we demonstrate the promising results of DualNets on a wide range of continual learning protocols, ranging from the standard offline, task-aware setting to the challenging online, task-free scenario. Notably, on the CTrL~\cite{veniat2020efficient} benchmark, which has unrelated tasks with vastly different visual images, DualNets achieves competitive performance with existing state-of-the-art dynamic-architecture strategies~\cite{ostapenko2021continual}. Furthermore, we conduct comprehensive ablation studies to validate DualNets' efficacy, robustness, and scalability. Code will be made available at \url{https://github.com/phquang/DualNet}.
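To make the two-system design concrete, here is a minimal PyTorch-style sketch pairing a slow, self-supervised representation learner with a fast, supervised learner. The module names, layer sizes, and the simple invariance-based SSL objective are illustrative assumptions, not DualNets' actual implementation.

```python
# Minimal dual-network sketch (illustrative only; not the paper's exact design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlowLearner(nn.Module):
    """Task-agnostic representation learner, trained with self-supervision."""
    def __init__(self, dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, dim))
    def forward(self, x):
        return self.encoder(x)

class FastLearner(nn.Module):
    """Supervised learner that adapts the slow features to the current task."""
    def __init__(self, dim=128, n_classes=10):
        super().__init__()
        self.adapter = nn.Linear(dim, dim)
        self.head = nn.Linear(dim, n_classes)
    def forward(self, z):
        return self.head(F.relu(self.adapter(z)))

def ssl_loss(z1, z2):
    # A simple invariance objective standing in for the paper's SSL criterion.
    return F.mse_loss(F.normalize(z1, dim=1), F.normalize(z2, dim=1))

slow, fast = SlowLearner(), FastLearner()
opt = torch.optim.SGD(list(slow.parameters()) + list(fast.parameters()), lr=0.01)

x = torch.rand(32, 1, 28, 28)           # a batch from the current task
x_aug = x + 0.05 * torch.randn_like(x)  # crude augmentation for the SSL view
y = torch.randint(0, 10, (32,))

# Slow learner trains on the task-agnostic SSL loss; the fast learner trains
# on the supervised loss computed on top of the slow features.
loss = ssl_loss(slow(x), slow(x_aug)) + F.cross_entropy(fast(slow(x)), y)
opt.zero_grad(); loss.backward(); opt.step()
```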

Related Content


While modern society benefits from a range of technological advancements, it is also exposed to an ever-increasing set of cybersecurity threats. These affect all areas of life, including business, government, and individuals. To complement technological solutions to this problem, it is crucial to understand more about cybercriminal perpetrators themselves: their use of technology, psychological aspects, and profiles. This topic has received little socio-technical research emphasis in the technology community, has few concrete research findings, and is thus a prime area for development. The aim of this article is to explore cybercriminal activities and behavior from a psychology and human-aspects perspective, through a series of notable case studies. We examine motivations as well as psychological and other interdisciplinary concepts as they may impact or influence cybercriminal activities. We expect this paper to be of value and particularly insightful for those studying technology, psychology, and criminology, with a focus on cybersecurity and cybercrime.

(Hyper)graph decomposition is a family of problems that aim to break down large (hyper)graphs into smaller sub(hyper)graphs for easier analysis. Its importance lies in enabling efficient computation on large and complex (hyper)graphs, such as social networks, chemical compounds, and computer networks. This dissertation explores several types of (hyper)graph decomposition problems, including graph partitioning, hypergraph partitioning, local graph clustering, process mapping, and signed graph clustering. Our main focus is on streaming algorithms, local algorithms, and multilevel algorithms. In terms of streaming algorithms, we contribute highly efficient and effective algorithms for (hyper)graph partitioning and process mapping. In terms of local algorithms, we propose sub-linear algorithms that effectively detect high-quality local communities around a given seed node in a graph, based on the distribution of a given motif. In terms of multilevel algorithms, we engineer high-quality multilevel algorithms for process mapping and signed graph clustering. We provide a thorough discussion of each algorithm, along with experimental results demonstrating their superiority over existing state-of-the-art techniques. The results show that the proposed algorithms achieve improved performance and better solutions across various metrics, making them highly promising for practical applications. Overall, this dissertation showcases the effectiveness of advanced combinatorial algorithmic techniques in solving challenging (hyper)graph decomposition problems.
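The streaming setting can be illustrated with a classic one-pass heuristic, the linear deterministic greedy (LDG) rule of Stanton and Kliot. This is a generic sketch of streaming graph partitioning, not one of the dissertation's algorithms.

```python
# Streaming graph partitioning via linear deterministic greedy (LDG):
# each vertex arrives once with its neighbor list and is assigned
# immediately to the block maximizing a capacity-damped neighbor count.

def ldg_partition(stream, k, n):
    """Assign each vertex to one of k blocks in a single pass.

    stream yields (vertex, neighbors) pairs; n is the number of vertices.
    """
    capacity = n / k
    assignment = {}
    load = [0] * k
    for v, neighbors in stream:
        placed = [0] * k
        for u in neighbors:            # count already-placed neighbors per block
            if u in assignment:
                placed[assignment[u]] += 1
        # LDG score: neighbors in the block, damped by remaining capacity.
        scores = [placed[b] * (1 - load[b] / capacity) for b in range(k)]
        best = max(range(k), key=lambda b: scores[b])
        assignment[v] = best
        load[best] += 1
    return assignment

# Toy usage: a 4-cycle split into 2 blocks.
edges = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(ldg_partition(edges.items(), k=2, n=4))
```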

We study the \textsc{$\alpha$-Fixed Cardinality Graph Partitioning ($\alpha$-FCGP)} problem, the generic local graph partitioning problem introduced by Bonnet et al. [Algorithmica 2015]. In this problem, we are given a graph $G$, integers $k$ and $p$, and a constant $0\leq\alpha\leq 1$; the question is whether there is a set $S\subseteq V$ of size $k$ whose coverage $cov_{\alpha}(S)$ is at least $p$ (or at most $p$ for the minimization version). The coverage function $cov_{\alpha}(\cdot)$ counts edges with exactly one endpoint in $S$ with weight $\alpha$ and edges with both endpoints in $S$ with weight $1 - \alpha$. $\alpha$-FCGP generalizes a number of fundamental graph problems, such as \textsc{Densest $k$-Subgraph}, \textsc{Max $k$-Vertex Cover}, and \textsc{Max $(k,n-k)$-Cut}. A natural question in the study of $\alpha$-FCGP is whether the algorithmic results known for its special cases, like \textsc{Max $k$-Vertex Cover}, can be extended to more general settings. One of the simple but powerful methods for obtaining parameterized approximation [Manurangsi, SOSA 2019] and subexponential algorithms [Fomin et al., IPL 2011] for \textsc{Max $k$-Vertex Cover} is based on greedy vertex degree orderings. The main insight of our work is that greedy vertex degree orderings can be used to design fixed-parameter approximation schemes (FPT-AS) for $\alpha > 0$, as well as subexponential-time algorithms on apex-minor-free graphs for maximization with $\alpha > 1/3$ and minimization with $\alpha < 1/3$.
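To make the role of $\alpha$ concrete, the coverage function and the special cases above can be written out as follows (a worked restatement of the definitions already given, with $E(S)$ the edges inside $S$ and $E(S, V\setminus S)$ the crossing edges):

\[ cov_{\alpha}(S) \;=\; \alpha\,|E(S, V\setminus S)| \;+\; (1-\alpha)\,|E(S)|. \]

Setting $\alpha = 0$ recovers \textsc{Densest $k$-Subgraph} (only internal edges count), $\alpha = 1$ recovers \textsc{Max $(k,n-k)$-Cut} (only crossing edges count), and $\alpha = \tfrac{1}{2}$ recovers \textsc{Max $k$-Vertex Cover} up to a factor of $\tfrac{1}{2}$, since every edge with at least one endpoint in $S$ contributes exactly $\tfrac{1}{2}$.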

For a complexity class $C$ and language $L$, a constructive separation of $L \notin C$ gives an efficient algorithm (also called a refuter) to find counterexamples (bad inputs) for every $C$-algorithm attempting to decide $L$. We study the questions: Which lower bounds can be made constructive? What are the consequences of constructive separations? We build a case that "constructiveness" serves as a dividing line between many weak lower bounds we know how to prove, and strong lower bounds against $P$, $ZPP$, and $BPP$. Put another way, constructiveness is the opposite of a complexity barrier: it is a property we want lower bounds to have. Our results fall into three broad categories.

1. Our first set of results shows that, for many well-known lower bounds against streaming algorithms, one-tape Turing machines, and query complexity, as well as lower bounds for the Minimum Circuit Size Problem, making these lower bounds constructive would imply breakthrough separations ranging from $EXP \neq BPP$ to even $P \neq NP$.

2. Our second set of results shows that for most major open problems in lower bounds against $P$, $ZPP$, and $BPP$, including $P \neq NP$, $P \neq PSPACE$, $P \neq PP$, $ZPP \neq EXP$, and $BPP \neq NEXP$, any proof of the separation would further imply a constructive separation. Our results generalize earlier results for $P \neq NP$ [Gutfreund, Shaltiel, and Ta-Shma, CCC 2005] and $BPP \neq NEXP$ [Dolev, Fandina, and Gutfreund, CIAC 2013].

3. Our third set of results shows that certain complexity separations cannot be made constructive. We observe that for all super-polynomially growing functions $t$, there are no constructive separations for detecting high $t$-time Kolmogorov complexity (a task which is known to be not in $P$) from any complexity class, unconditionally.
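For concreteness, the refuter notion above can be stated as follows (my paraphrase of the standard definition, not a quotation from the paper): a separation $L \notin C$ is \emph{constructive} if for every algorithm $A \in C$ there is a polynomial-time refuter $R_A$ that, given $1^n$, outputs an input $x$ of length $n$ with $A(x) \neq L(x)$, for infinitely many $n$.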

I study how the shadow prices of a linear program that allocates an endowment of $n\beta \in \mathbb{R}^{m}$ resources to $n$ customers behave as $n \rightarrow \infty$. I show that the shadow prices (i) adhere to a concentration of measure, (ii) converge to a multivariate normal under central-limit-theorem scaling, and (iii) have a variance that decreases like $\Theta(1/n)$. I use these results to prove that the expected regret in \citet{Li2019b}'s online linear program is $\Theta(\log n)$, both when the customer variable distribution is known upfront and when it must be learned on the fly. I thus tighten \citeauthor{Li2019b}'s upper bound from $O(\log n \log \log n)$ to $O(\log n)$, and extend \citet{Lueker1995}'s $\Omega(\log n)$ lower bound to the multi-dimensional setting. I illustrate my new techniques with a simple analysis of \citet{Arlotto2019}'s multisecretary problem.
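As a schematic of the setup (my notation, chosen only to match the $n\beta$ endowment above; the paper's exact formulation may differ), the allocation LP and its shadow prices are:

\[ \max_{x \in [0,1]^n} \; \sum_{j=1}^{n} c_j x_j \quad \text{subject to} \quad \sum_{j=1}^{n} a_j x_j \le n\beta, \]

where customer $j$ brings a reward $c_j \in \mathbb{R}$ and a resource-consumption vector $a_j \in \mathbb{R}^m$, drawn i.i.d. The shadow prices are the optimal dual vector $p_n \in \mathbb{R}^m_{\ge 0}$ of the resource constraints; results (i)-(iii) then say that $p_n$ concentrates around its mean, that $\sqrt{n}\,(p_n - p^{*})$ is asymptotically multivariate normal for the limiting price $p^{*}$, and that each component's variance scales as $\Theta(1/n)$.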

The continuous representation of spatiotemporal data commonly relies on abstract data types, such as \textit{moving regions}, to represent entities whose shape and position change continuously over time. Creating this representation from discrete snapshots of real-world entities requires interpolation methods to compute in-between data representations and estimate the position and shape of the object of interest at arbitrary temporal points. Existing region interpolation methods often fail to generate smooth and realistic representations of a region's evolution. However, recent advancements in deep learning have revealed the potential of deep models trained on discrete observations to capture spatiotemporal dependencies through implicit feature learning. In this work, we explore the capabilities of Conditional Variational Autoencoder (C-VAE) models to generate smooth and realistic representations of the spatiotemporal evolution of moving regions. We evaluate our proposed approach on a sparsely annotated dataset of the burnt area of a forest fire. We apply compression operations to sample from the dataset, and use the C-VAE model and other commonly used interpolation algorithms to generate in-between region representations. To evaluate the performance of the methods, we compare their interpolation results with manually annotated data and with regions generated by a U-Net model. We also assess the quality of the generated data using temporal consistency metrics. The proposed C-VAE-based approach achieves competitive results in geometric similarity metrics and exhibits superior temporal consistency, suggesting that C-VAE models may be a viable alternative for modelling the spatiotemporal evolution of 2D moving regions.
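A minimal sketch of the idea, assuming PyTorch: a VAE whose encoder and decoder are both conditioned on the observation time, so that decoding at an unobserved time yields an in-between region. The architecture sizes, the flattened-mask input, and the time-conditioning scheme are assumptions for illustration, not the paper's exact model.

```python
# Conditional VAE sketch for interpolating region masks over time.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    def __init__(self, mask_dim=64 * 64, cond_dim=1, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(mask_dim + cond_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, 256), nn.ReLU(),
            nn.Linear(256, mask_dim), nn.Sigmoid())

    def forward(self, x, t):
        h = self.enc(torch.cat([x, t], dim=1))       # encode mask + time
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(torch.cat([z, t], dim=1)), mu, logvar

model = CVAE()
x = torch.rand(8, 64 * 64)   # flattened region masks (toy data)
t = torch.rand(8, 1)         # normalized observation times
x_hat, mu, logvar = model(x, t)
recon = F.binary_cross_entropy(x_hat, x)
kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = recon + kld           # standard ELBO objective

# Interpolation: decode a latent sample at an unobserved time t_mid.
t_mid = torch.tensor([[0.5]])
z = torch.randn(1, 16)
mask_mid = model.dec(torch.cat([z, t_mid], dim=1)).reshape(64, 64)
```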

We study regret minimization in online episodic linear Markov Decision Processes, and obtain rate-optimal $\widetilde O (\sqrt K)$ regret, where $K$ denotes the number of episodes. Our work is the first to establish the optimal (w.r.t.~$K$) rate of convergence in the stochastic setting with bandit feedback using a policy-optimization-based approach, and the first to establish the optimal (w.r.t.~$K$) rate in the adversarial setup with full-information feedback, for which no algorithm with an optimal rate guarantee was previously known.

We study the task of performing hierarchical queries based on summary reports from the {\em Attribution Reporting API} for ad conversion measurement. We demonstrate that methods from optimization and differential privacy can help cope with the noise introduced by privacy guardrails in the API. In particular, we present algorithms for (i) denoising the API outputs and ensuring consistency across different levels of the tree, and (ii) optimizing the privacy budget across different levels of the tree. We provide an experimental evaluation of the proposed algorithms on public datasets.
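As one concrete illustration of the consistency step, here is a simple top-down pass that spreads each parent/child discrepancy evenly so children sum exactly to their parent. The node names and single-root assumption are hypothetical, and this generic technique stands in for, but is not, the paper's algorithm (which additionally optimizes the privacy budget across levels).

```python
# Top-down consistency pass for noisy hierarchical counts: shift each node's
# children by an equal share of the parent/child gap so levels agree exactly.

def make_consistent(tree, counts):
    """tree maps node -> list of children; counts maps node -> noisy count."""
    adjusted = dict(counts)
    def fix(node):
        children = tree.get(node, [])
        if not children:
            return
        gap = adjusted[node] - sum(adjusted[c] for c in children)
        for c in children:
            adjusted[c] += gap / len(children)   # spread the discrepancy evenly
            fix(c)
    fix("root")   # assumes a single root named "root" (sketch assumption)
    return adjusted

tree = {"root": ["US", "EU"], "US": ["CA", "NY"]}
noisy = {"root": 100.0, "US": 58.0, "EU": 45.0, "CA": 30.0, "NY": 25.0}
print(make_consistent(tree, noisy))   # children now sum to their parents
```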

Recent progress in deep learning and natural language processing has given rise to powerful models that are primarily trained on a cloze-like task and show some evidence of having access to substantial linguistic information, including some constructional knowledge. This groundbreaking discovery presents an exciting opportunity for a synergistic relationship between computational methods and Construction Grammar research. In this chapter, we explore three distinct approaches to the interplay between computational methods and Construction Grammar: (i) computational methods for text analysis, (ii) computational Construction Grammar, and (iii) deep learning models, with a particular focus on language models. We touch upon the first two approaches as a contextual foundation for the use of computational methods before providing an accessible, yet comprehensive overview of deep learning models, which also addresses reservations construction grammarians may have. Additionally, we delve into experiments that explore the emergence of constructionally relevant information within these models while also examining the aspects of Construction Grammar that may pose challenges for these models. This chapter aims to foster collaboration between researchers in the fields of natural language processing and Construction Grammar. By doing so, we hope to pave the way for new insights and advancements in both these fields.

For $1\le p \le \infty$, the Fr\'echet $p$-mean of a probability measure on a metric space is an important notion of central tendency that generalizes the usual notions in the real line of mean ($p=2$) and median ($p=1$). In this work we prove a collection of limit theorems for Fr\'echet means and related objects, which, in general, constitute a sequence of random closed sets. On the one hand, we show that many limit theorems (a strong law of large numbers, an ergodic theorem, and a large deviations principle) can be simply descended from analogous theorems on the space of probability measures via purely topological considerations. On the other hand, we provide the first sufficient conditions for the strong law of large numbers to hold in a $T_2$ topology (in particular, the Fell topology), and we show that this condition is necessary in some special cases. We also discuss statistical and computational implications of the results herein.
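For reference, the Fréchet $p$-mean for $1 \le p < \infty$ can be written explicitly (the standard formulation; the $p=\infty$ case instead minimizes the essential supremum of the distance):

\[ F_p(\mu) \;=\; \operatorname*{arg\,min}_{x \in X} \int_{X} d(x,y)^p \,\mu(\mathrm{d}y). \]

On $(\mathbb{R}, |\cdot|)$ this recovers the mean at $p=2$ and the median at $p=1$; because the minimizer need not be unique, Fréchet means are naturally treated as random closed sets, as above.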
