Despite its vast potential, cloud computing has not yet been adopted by consumers with the enthusiasm and pace it deserves; consumers still hesitate to entrust their sensitive data to the cloud, and a number of threats discourage them from shifting to cloud computing in general and cloud storage in particular. Cloud computing inherits the traditional security and privacy threats of conventional systems while adding issues of its own that stem from its unique structure. Threats associated with cloud computing include malicious insider attacks by employees, of which the provider itself is sometimes unaware; the lack of transparency in the agreement between consumer and provider; data loss; traffic hijacking; shared technology; and insecure application interfaces. Such threats require remedies so that consumers can use cloud features securely. In this review, we highlight the principal security and privacy issues, which can be viewed as gaps that consumers and even enterprises are often unaware of. We also identify the parties involved in a cloud computing scenario, any of which may attack the overall cloud system, and we describe the consequences of these threats.
The semantics of HPC storage systems are defined by the consistency models to which they adhere. Storage consistency models have been less studied than their counterparts in memory systems, with the exception of the POSIX standard and its strict consistency model. POSIX consistency imposes a performance penalty that becomes more significant as the scale of parallel file systems increases and the access time to storage devices, such as node-local solid-state storage devices, decreases. While some efforts have been made to adopt relaxed storage consistency models, these models are often defined informally and ambiguously as by-products of a particular implementation. In this work, we establish a connection between memory consistency models and storage consistency models and revisit the key design choices of storage consistency models from a high-level perspective. Further, we propose a formal and unified framework for defining storage consistency models and a layered implementation that can be used to easily evaluate their relative performance for different I/O workloads. Finally, we conduct a comprehensive performance comparison of two relaxed consistency models on a range of commonly seen parallel I/O workloads, such as checkpoint/restart of scientific applications and random reads of deep learning applications. We demonstrate that for certain I/O scenarios, a weaker consistency model can significantly improve I/O performance. For instance, for the small random reads typically found in deep learning applications, session consistency achieved a 5x improvement in I/O bandwidth over commit consistency, even at small scales.
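For concreteness, the behavioral difference between two such relaxed models, commit consistency and session consistency, can be sketched in a few lines of Python. The class and method names below are illustrative assumptions, not the interface of the framework proposed in this work: in the sketch, a client's buffered writes are published at every explicit commit under commit consistency, but only when the whole session is closed under session consistency.

class ToyFile:
    """Shared 'file'; `visible` holds the data that other clients can read."""
    def __init__(self):
        self.visible = {}

class CommitConsistentClient:
    """Sketch of commit consistency: writes are published at each commit()."""
    def __init__(self, f):
        self.f, self.pending = f, {}
    def write(self, offset, data):
        self.pending[offset] = data          # buffered locally, invisible to others
    def commit(self):
        self.f.visible.update(self.pending)  # publish buffered writes
        self.pending.clear()

class SessionConsistentClient:
    """Sketch of session consistency: writes are published only at close()."""
    def __init__(self, f):
        self.f, self.pending = f, {}
    def write(self, offset, data):
        self.pending[offset] = data
    def close(self):
        self.f.visible.update(self.pending)
        self.pending.clear()

shared = ToyFile()
writer = SessionConsistentClient(shared)
writer.write(0, b"checkpoint block")
print(shared.visible)   # {} -- concurrent readers see nothing yet
writer.close()
print(shared.visible)   # {0: b'checkpoint block'} -- visible after the session ends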
Networked control systems (NCSs), which are feedback control loops closed over a communication network, have been a popular research topic over the past decades. Numerous works in the literature propose novel algorithms and protocols with joint consideration of communication and control. However, the vast majority of the recent research results, which have shown remarkable performance improvements if a cross-layer methodology is followed, have not been widely adopted by the industry. In this work, we review the shortcomings of today's mobile networks that render cross-layer solutions, such as semantic and goal-oriented communications, very challenging in practice. To tackle this, we propose a new framework for 6G user plane design that simplifies the adoption of recent research results in networked control, thereby facilitating the joint communication and control design in next-generation mobile networks.
Despite the importance of having a measure of confidence in recommendation results, confidence has been surprisingly overlooked in the literature compared to recommendation accuracy. In this dissertation, I propose a model calibration framework for recommender systems that estimates accurate confidence in recommendation results based on the learned ranking scores. I then introduce two real-world applications of confidence in recommendations: (1) training a small student model by treating the confidence of a large teacher model as additional learning guidance, and (2) adjusting the number of presented items based on the expected user utility estimated from the calibrated probabilities.
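One common post-hoc calibration technique that fits this setting is Platt scaling of the learned ranking scores; the brief Python sketch below is a generic illustration under assumed held-out (score, feedback) data, not the calibration framework proposed in the dissertation.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical held-out data: raw ranking scores and binary user feedback.
scores = rng.normal(size=1000)
labels = (rng.random(1000) < 1.0 / (1.0 + np.exp(-2.0 * scores))).astype(int)

# Platt scaling: fit p(feedback | score) = sigmoid(a * score + b).
calibrator = LogisticRegression()
calibrator.fit(scores.reshape(-1, 1), labels)

# Calibrated confidence for new recommendation scores.
new_scores = np.array([[-1.0], [0.0], [2.5]])
confidence = calibrator.predict_proba(new_scores)[:, 1]
print(confidence)   # probabilities that can serve as confidence estimates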
Given a set of points and a set of halfplanes in the plane, we consider the problem of computing a smallest subset of halfplanes whose union covers all the points. In this paper, we present an $O(n^{4/3}\log^{5/3}n\log^{O(1)}\log n)$-time algorithm for the problem, where $n$ is the total number of points and halfplanes. This improves the previously best algorithm, which runs in $n^{10/3}2^{O(\log^*n)}$ time, by roughly a quadratic factor. For the special case where all halfplanes are lower halfplanes, our algorithm runs in $O(n\log n)$ time, which improves the previously best $n^{4/3}2^{O(\log^*n)}$-time algorithm and matches an $\Omega(n\log n)$ lower bound. Further, our techniques can be extended to solve a star-shaped polygon coverage problem in $O(n\log n)$ time, which in turn leads to an $O(n\log n)$-time algorithm for computing an instance-optimal $\epsilon$-kernel of a set of $n$ points in the plane. Agarwal and Har-Peled presented an $O(nk\log n)$-time algorithm for this problem at SoCG 2023, where $k$ is the size of the $\epsilon$-kernel; they also raised the open question of whether the problem can be solved in $O(n\log n)$ time. Our result thus answers the open question affirmatively.
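To make the problem statement concrete, the following naive greedy set-cover baseline in Python (not the algorithm of this paper) shows one possible input convention, where a halfplane $(a, b, c)$ covers a point $(x, y)$ iff $ax + by \le c$.

def greedy_halfplane_cover(points, halfplanes):
    """Repeatedly pick the halfplane covering the most uncovered points."""
    uncovered = set(range(len(points)))
    chosen = []
    while uncovered:
        best, best_covered = None, set()
        for idx, (a, b, c) in enumerate(halfplanes):
            covered = {i for i in uncovered
                       if a * points[i][0] + b * points[i][1] <= c}
            if len(covered) > len(best_covered):
                best, best_covered = idx, covered
        if best is None:
            raise ValueError("some point is covered by no halfplane")
        chosen.append(best)
        uncovered -= best_covered
    return chosen

# Two points and two lower halfplanes (y <= 0 and y <= 2); y <= 2 covers both.
print(greedy_halfplane_cover([(0, 0), (3, 2)], [(0, 1, 0), (0, 1, 2)]))   # [1]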
Developing robust and interpretable vision systems is a crucial step towards trustworthy artificial intelligence. In this regard, a promising paradigm embeds task-required invariant structures, e.g., geometric invariance, into the fundamental image representation. However, such invariant representations typically exhibit limited discriminability, which restricts their use in larger-scale trustworthy vision tasks. To address this open problem, we conduct a systematic investigation of hierarchical invariance, exploring the topic from theoretical, practical, and application perspectives. At the theoretical level, we show how to construct over-complete invariants with a Convolutional Neural Network (CNN)-like hierarchical architecture, yet in a fully interpretable manner; the general blueprint, specific definitions, invariant properties, and numerical implementations are provided. At the practical level, we discuss how to customize this theoretical framework for a given task. Thanks to the over-completeness, discriminative features w.r.t. the task can be formed adaptively in a Neural Architecture Search (NAS)-like manner. We support these arguments with accuracy, invariance, and efficiency results on texture, digit, and parasite classification experiments. Furthermore, at the application level, our representations are explored in real-world forensics tasks involving adversarial perturbations and Artificial Intelligence Generated Content (AIGC). These applications reveal that the proposed strategy not only realizes the theoretically promised invariance but also exhibits competitive discriminability, even in the era of deep learning. For robust and interpretable vision tasks at larger scales, hierarchical invariant representations can be considered an effective alternative to traditional CNNs and invariants.
We examine the possibility of approximating Maximum Vertex-Disjoint Shortest Paths. In this problem, the input is an edge-weighted (directed or undirected) $n$-vertex graph $G$ along with $k$ terminal pairs $(s_1,t_1),(s_2,t_2),\ldots,(s_k,t_k)$. The task is to connect as many terminal pairs as possible by pairwise vertex-disjoint paths such that each path is a shortest path between the respective terminals. Our work is anchored in the recent breakthrough by Lochet [SODA '21], which demonstrates the polynomial-time solvability of the problem for every fixed value of $k$. Lochet's result implies the existence of a polynomial-time $ck$-approximation for Maximum Vertex-Disjoint Shortest Paths, where $c \leq 1$ is a constant. Our first result suggests that this approximation algorithm is, in a sense, the best we can hope for. More precisely, assuming the gap-ETH, we exclude the existence of an $o(k)$-approximation computable in $f(k)\cdot\mathrm{poly}(n)$ time for any function $f$ that depends only on $k$. Our second result demonstrates the infeasibility of achieving an approximation ratio of $n^{\frac{1}{2}-\varepsilon}$ in polynomial time, unless P = NP. It is not difficult to show that a greedy algorithm that selects a path with the minimum number of arcs yields a $\lceil\sqrt{\ell}\rceil$-approximation, where $\ell$ is the number of edges in all the paths of an optimal solution. Since $\ell \leq n$, this underscores the tightness of the $n^{\frac{1}{2}-\varepsilon}$-inapproximability bound. Additionally, we establish that Maximum Vertex-Disjoint Shortest Paths is fixed-parameter tractable when parameterized by $\ell$ but does not admit a polynomial kernel. Our hardness results hold for undirected graphs with unit weights, while our positive results extend to scenarios where the input graph is directed and features arbitrary non-negative edge weights.
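To make the greedy baseline above concrete, the Python sketch below implements one natural reading of it: among the remaining terminal pairs, repeatedly commit to a pair that can still be connected, through unused vertices, by a path whose weighted length equals its original shortest-path distance and whose number of arcs is minimal, then reserve that path's vertices. The names, the adjacency-list format, and the tie-breaking are my assumptions, not the authors' implementation.

import heapq

def lex_dijkstra(adj, source, blocked=frozenset()):
    """Dijkstra with lexicographic (total weight, number of arcs) cost.
    `adj` maps a vertex to a list of (neighbor, weight) pairs.
    Returns {vertex: (weight, arcs, parent)} while avoiding `blocked`."""
    best = {source: (0, 0, None)}
    heap = [(0, 0, source)]
    while heap:
        w, h, u = heapq.heappop(heap)
        if (w, h) > best[u][:2]:
            continue                          # stale heap entry
        for v, wt in adj.get(u, ()):
            if v in blocked:
                continue
            cand = (w + wt, h + 1)
            if v not in best or cand < best[v][:2]:
                best[v] = (cand[0], cand[1], u)
                heapq.heappush(heap, (cand[0], cand[1], v))
    return best

def greedy_vdsp(adj, pairs):
    """Greedy selection of vertex-disjoint original shortest paths."""
    original = {(s, t): lex_dijkstra(adj, s).get(t) for s, t in pairs}
    remaining = [p for p in pairs if original[p] is not None]
    used, solution = set(), []
    while remaining:
        choice = None
        for s, t in remaining:
            res = lex_dijkstra(adj, s, blocked=used)
            # The path must keep the original shortest length; prefer fewest arcs.
            if t in res and res[t][0] == original[(s, t)][0]:
                if choice is None or res[t][1] < choice[2][t][1]:
                    choice = (s, t, res)
        if choice is None:
            break
        s, t, res = choice
        path, v = [], t
        while v is not None:                  # walk parent pointers back to s
            path.append(v)
            v = res[v][2]
        path.reverse()
        used.update(path)
        solution.append((s, t, path))
        remaining = [(a, b) for a, b in remaining
                     if (a, b) != (s, t) and a not in used and b not in used]
    return solution

# Tiny undirected example (edges listed in both directions).
adj = {1: [(2, 1), (5, 1)], 2: [(1, 1), (3, 1)], 3: [(2, 1), (4, 1)],
       4: [(3, 1), (5, 1)], 5: [(1, 1), (4, 1)]}
print(greedy_vdsp(adj, [(1, 4), (2, 3)]))
# [(2, 3, [2, 3]), (1, 4, [1, 5, 4])] -- both pairs connected disjointly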
3D stacked technology has emerged as an effective mechanism to overcome the physical limits and communication delays of 2D integration. However, 3D technology also presents several drawbacks that prevent its smooth adoption; two of the major concerns are heat dissipation and power density distribution. In our work, we propose a novel 3D thermal-aware floorplanner that includes: (1) an effective thermal-aware process with three different evolutionary algorithms that address the soft computing problem of optimizing the placement of functional units and through-silicon vias, as well as the smooth inclusion of active cooling systems and new design strategies, (2) an approximated thermal model inside the optimization loop, (3) an optimizer for active cooling (liquid channels), and (4) a novel technique based on air channel placement designed to isolate thermal domains. The experimental work is conducted on a realistic many-core single-chip architecture based on the Niagara design. Results show promising improvements in thermal and reliability metrics, as well as good scaling capabilities toward future-trend many-core systems.
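As a purely conceptual illustration of thermal-aware placement driven by an evolutionary loop, the Python sketch below swaps functional units on a 3D grid and keeps mutations that lower a crude hot-spot proxy. The grid dimensions, power values, cost proxy, and the simple (1+1) mutation scheme are all assumptions made for the sketch, not the three algorithms or the thermal model used in the paper.

import random

LAYERS, ROWS, COLS = 2, 4, 4          # hypothetical 3D grid of placement slots
POWER = [random.uniform(0.5, 3.0) for _ in range(LAYERS * ROWS * COLS)]

def cost(placement):
    """Hot-spot proxy: a slot's own power plus a fraction of the power of the
    units stacked directly above and below it (stacking traps heat)."""
    grid = {slot: POWER[unit] for unit, slot in enumerate(placement)}
    worst = 0.0
    for (l, r, c), p in grid.items():
        above = grid.get((l + 1, r, c), 0.0)
        below = grid.get((l - 1, r, c), 0.0)
        worst = max(worst, p + 0.5 * (above + below))
    return worst

slots = [(l, r, c) for l in range(LAYERS) for r in range(ROWS) for c in range(COLS)]
placement = slots[:]                   # unit i occupies placement[i]
random.shuffle(placement)

for _ in range(5000):                  # (1+1) evolutionary loop: mutate by swapping
    i, j = random.sample(range(len(placement)), 2)
    candidate = placement[:]
    candidate[i], candidate[j] = candidate[j], candidate[i]
    if cost(candidate) <= cost(placement):
        placement = candidate          # keep mutations that do not worsen the hot spot

print("estimated peak hot-spot proxy:", round(cost(placement), 2))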
Typical recommendation and ranking methods aim to optimize the satisfaction of users, but they are often oblivious to their impact on the items (e.g., products, jobs, news, videos) and their providers. However, there has been a growing understanding that the latter is crucial to consider for a wide range of applications, since it determines the utility of those being recommended. Prior approaches to fairness-aware recommendation optimize a regularized objective to balance user satisfaction and item fairness based on some notion such as exposure fairness. These existing methods have been shown to be effective in controlling fairness; however, most of them are computationally inefficient, limiting their applicability to unrealistically small-scale situations. This implies that the literature does not yet provide a solution for flexible exposure control in industry-scale recommender systems with millions of users and items. To enable computationally efficient exposure control even for such large-scale systems, this work develops a scalable, fast, and fair method called \emph{\textbf{ex}posure-aware \textbf{ADMM} (\textbf{exADMM})}. exADMM is based on implicit alternating least squares (iALS), a conventional scalable algorithm for collaborative filtering, but optimizes a regularized objective to achieve flexible control of the accuracy-fairness tradeoff. A particular technical challenge in developing exADMM is that the fairness regularizer destroys the separability of the optimization subproblems for users and items, a property that is essential to the scalability of iALS. We therefore develop a set of optimization tools that nonetheless enable scalable fairness control with provable convergence guarantees as the basis of our algorithm.
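For orientation, a schematic form of such a regularized objective (the notation below is my own and not necessarily the exact exADMM formulation) augments the iALS loss over user factors $U$ and item factors $V$ with an exposure-fairness penalty weighted by $\lambda$:
\[
\min_{U,V}\;
\underbrace{\sum_{(u,i)\in\mathcal{D}}\bigl(r_{ui}-\mathbf{u}_u^{\top}\mathbf{v}_i\bigr)^2
+\alpha\sum_{(u,i)\notin\mathcal{D}}\bigl(\mathbf{u}_u^{\top}\mathbf{v}_i\bigr)^2
+\mu\bigl(\lVert U\rVert_F^2+\lVert V\rVert_F^2\bigr)}_{\text{iALS loss}}
\;+\;\lambda\,\mathcal{R}_{\mathrm{fair}}\bigl(\mathrm{Exp}(U,V)\bigr),
\]
where $\mathrm{Exp}(U,V)$ denotes the item exposure induced by the learned scores and $\mathcal{R}_{\mathrm{fair}}$ penalizes its unfairness. Because this last term couples all users and items, the per-user and per-item subproblems no longer decouple, which is precisely the separability issue described above.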
Hyperproperties are commonly used in computer security to define information-flow policies and other requirements that reason about the relationship between multiple computations. In this paper, we study a novel class of hyperproperties where the individual computation paths are chosen by the strategic choices of a coalition of agents in a multi-agent system. We introduce HyperATL*, an extension of computation tree logic with path variables and strategy quantifiers. Our logic can express strategic hyperproperties, such as that the scheduler in a concurrent system has a strategy to avoid information leakage. HyperATL* is particularly useful for specifying asynchronous hyperproperties, i.e., hyperproperties where the speed of the execution on the different computation paths depends on the choices of the scheduler. Unlike other recent logics for the specification of asynchronous hyperproperties, our logic is the first to admit decidable model checking for the full logic. We present a model checking algorithm for HyperATL* based on alternating automata and show that our algorithm is asymptotically optimal by providing a matching lower bound. We have implemented a prototype model checker for a fragment of HyperATL* that is able to check various security properties on small programs.
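As a purely schematic illustration of the flavor of such specifications (the coalition name $\mathit{sched}$ and the proposition $o$ standing for the low-observable output are hypothetical, and the precise syntax and quantifier semantics are those defined in the paper), a noninterference-style requirement that the scheduler can enforce might be written along the lines of
\[
\langle\!\langle \{\mathit{sched}\} \rangle\!\rangle\, \pi.\;
\langle\!\langle \{\mathit{sched}\} \rangle\!\rangle\, \pi'.\;
\mathbf{G}\,\bigl(o_{\pi} \leftrightarrow o_{\pi'}\bigr),
\]
stating that the scheduler has strategies under which the observable outputs on the two quantified executions always agree, regardless of how the remaining agents, including the source of secret inputs, behave.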
The new era of technology has brought us to a point where it is convenient for people to share their opinions across an abundance of platforms. These platforms allow users to express themselves in multiple forms of representation, including text, images, videos, and audio. This, however, makes it difficult for users to obtain all the key information about a topic, making automatic multi-modal summarization (MMS) an essential task. In this paper, we present a comprehensive survey of existing research in the area of MMS.