
In this paper we propose a protocol that can be used to covertly send a distress signal through a seemingly normal webserver, even if the adversary is monitoring both the network and the user's device. This allows a user to call for help even when they are in the same physical space as their adversaries. We model such a scenario by introducing a strong adversary model that captures a high degree of access to the user's device and full control over the network. Our model fits scenarios where a user is under surveillance and wishes to inform a trusted party of the situation. To do this, our method uses existing websites as intermediaries between the user and a trusted backend; this enables the user to initiate the distress signal without arousing suspicion, even while being actively monitored. We accomplish this by utilising the TLS handshake to convey additional information; this means that any website wishing to participate can do so with minimal effort, and anyone monitoring the traffic will see only ordinary TLS connections. For websites to be willing to host such functionality, the protocol must coexist gracefully with users of normal TLS, and the computational overhead must be minimal. We provide a full security analysis of the architecture and prove that the adversary cannot distinguish a set of communications containing a distress call from normal communications.
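
As a rough illustration of the idea (a hedged sketch, not the paper's actual construction): one way to signal through a handshake field is to replace the 32-byte ClientHello random with nonce || truncated HMAC under a key pre-shared with the backend. Without the key, both cases look uniformly random to an observer; the key name and field layout below are assumptions for illustration.
```python
# Hedged sketch, not the paper's construction: embed a distress flag in the
# 32-byte ClientHello "random" field as nonce || truncated HMAC tag under a
# key pre-shared with the trusted backend.
import hashlib
import hmac
import os

KEY = b"pre-shared distress key"  # assumption: provisioned out of band
NONCE_LEN, TAG_LEN = 16, 16       # 16 + 16 = 32 bytes of client_random

def make_client_random(distress: bool) -> bytes:
    nonce = os.urandom(NONCE_LEN)
    if distress:
        tag = hmac.new(KEY, nonce, hashlib.sha256).digest()[:TAG_LEN]
    else:
        tag = os.urandom(TAG_LEN)  # without KEY, both branches look uniform
    return nonce + tag

def backend_check(client_random: bytes) -> bool:
    nonce, tag = client_random[:NONCE_LEN], client_random[NONCE_LEN:]
    expected = hmac.new(KEY, nonce, hashlib.sha256).digest()[:TAG_LEN]
    return hmac.compare_digest(tag, expected)

print(backend_check(make_client_random(True)))   # True
print(backend_check(make_client_random(False)))  # False (w.h.p.)
```
A passive observer sees 32 random-looking bytes either way; only the keyed backend can test for the embedded tag, which matches the indistinguishability goal stated above.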

Related content

The paper advocates for LLMs to enhance the accessibility, usability, and explainability of rule-based legal systems, contributing to a democratic and stakeholder-oriented view of legal technology. A methodology is developed to explore the potential of LLMs for translating the explanations produced by rule-based systems from high-level programming languages to natural language, allowing all users fast, clear, and accessible interaction with such technologies. The study then builds upon these explanations to empower laypeople to execute complex juridical tasks on their own, using a Chain of Prompts for the autonomous legal comparison of different rule-based inferences applied to the same factual case.
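
A minimal sketch of what such a Chain of Prompts might look like. `call_llm` is a hypothetical stand-in for any chat-completion client, and the prompt wording is illustrative rather than the study's actual prompts.
```python
# Hedged sketch of a Chain of Prompts for comparing two rule-based inferences.
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: substitute any chat-completion client here.
    return f"[LLM output for: {prompt[:40]}...]"

def compare_inferences(facts: str, inference_a: str, inference_b: str) -> str:
    # Step 1: translate each rule-based explanation into plain language.
    plain_a = call_llm(f"Explain this legal rule trace in plain language:\n{inference_a}")
    plain_b = call_llm(f"Explain this legal rule trace in plain language:\n{inference_b}")
    # Step 2: ground both explanations in the shared factual case.
    grounded = call_llm(
        f"Given the facts:\n{facts}\n\nSummarise how each conclusion follows.\n"
        f"A: {plain_a}\nB: {plain_b}")
    # Step 3: ask for an explicit comparison a layperson can act on.
    return call_llm(f"Compare the two outcomes for a layperson:\n{grounded}")

print(compare_inferences("Tenant X missed two rent payments.",
                         "rule R1 fired -> eviction permitted",
                         "rule R2 fired -> grace period applies"))
```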

Can governments build AI? In this paper, we describe an ongoing effort to develop "public AI" -- publicly accessible AI models funded, provisioned, and governed by governments or other public bodies. Public AI presents both an alternative and a complement to standard regulatory approaches to AI, but it also raises new technical and policy challenges. We present a roadmap for how the ML research community can help shape this initiative and support its implementation, and for how public AI can complement other responsible AI initiatives.

Online makespan minimization is a classic model in the field of scheduling. In this paper, we consider the over-time version, where each job is associated with a release time and a processing time. A job becomes known only at its release time and must then be scheduled on one machine. The Longest Processing Time First (LPT) algorithm, as proven by Chen and Vestjens in 1997, achieves a competitive ratio of 1.5. For the case of two machines, Noga and Seiden introduced the SLEEPY algorithm, which achieves a competitive ratio of 1.382. However, for the case of $m\geq 3$, there has been no convincing result that surpasses the performance of LPT. We propose a natural generalization that locks all the other machines for a certain period after starting a job, preventing them from initiating new jobs. We show that this simple approach can beat the $1.5$ barrier, achieving a competitive ratio of $1.482$ when $m=3$. However, when $m$ becomes large, this simple generalization fails to beat $1.5$. We therefore introduce a novel technique called dynamic locking to overcome this challenge. As a result, we achieve a competitive ratio of $1.5-\frac{1}{O(m^2)}$, which beats the LPT algorithm ($1.5$-competitive) for every constant $m$.
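
For reference, a small sketch of the baseline over-time LPT rule (start the longest released-but-unscheduled job whenever a machine falls idle); the locking mechanisms proposed in the paper are omitted.
```python
# Sketch of the over-time LPT rule: whenever a machine is free, start the
# longest job that has been released but not yet scheduled.
import heapq

def lpt_makespan(jobs, m):
    """jobs: list of (release_time, processing_time); m: machine count."""
    jobs = sorted(jobs)                      # by release time
    free_at = [0.0] * m                      # next idle time per machine
    heapq.heapify(free_at)
    available = []                           # max-heap on processing time
    i, makespan, t = 0, 0.0, 0.0
    while i < len(jobs) or available:
        t = max(t, heapq.heappop(free_at))   # earliest idle machine
        if not available and i < len(jobs):
            t = max(t, jobs[i][0])           # fast-forward to next release
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(available, -jobs[i][1])
            i += 1
        p = -heapq.heappop(available)        # longest available job
        makespan = max(makespan, t + p)
        heapq.heappush(free_at, t + p)
    return makespan

print(lpt_makespan([(0, 3), (0, 2), (1, 4)], m=2))  # 6.0
```
The paper's variants additionally delay ("lock") the other machines for a period after a job starts, which is precisely what this greedy baseline lacks.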

Many engineering applications rely on the evaluation of expensive, nonlinear, high-dimensional functions. In this paper, we propose the RONAALP algorithm (Reduced Order Nonlinear Approximation with Active Learning Procedure) to incrementally learn a fast and accurate reduced-order surrogate model of a target function on the fly as the application progresses. First, the combination of a nonlinear auto-encoder, community clustering, and radial basis function networks allows an efficient and compact surrogate model to be learned from limited training data. Second, the active learning procedure overcomes extrapolation issues when the surrogate model is evaluated outside its initial training range during the online stage. This results in generalizable, fast, and accurate reduced-order models of high-dimensional functions. The method is demonstrated on three direct numerical simulations of hypersonic flows in chemical nonequilibrium. Accurate simulations of these flows rely on detailed thermochemical gas models that dramatically increase the cost of such calculations. Using RONAALP to learn a reduced-order thermodynamic surrogate model on the fly, the cost of such simulations was reduced by up to 75% while maintaining an error of less than 10% on relevant quantities of interest.
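
A toy sketch of the active-learning loop at the heart of this idea (not the authors' implementation): a radial basis function surrogate with a distance-based extrapolation guard that falls back to the exact function and enriches the training set. The threshold and kernel width below are illustrative assumptions.
```python
# Minimal active-learning surrogate: cheap RBF evaluation inside the training
# range, exact evaluation plus retraining when a query lands too far away.
import numpy as np

class ActiveRBFSurrogate:
    def __init__(self, f_exact, eps=0.5, gamma=10.0):
        self.f, self.eps, self.gamma = f_exact, eps, gamma
        self.X = np.empty((0, 1)); self.y = np.empty(0); self.w = np.empty(0)

    def _phi(self, x):
        d2 = ((x[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def _fit(self):
        P = self._phi(self.X)
        self.w = np.linalg.solve(P + 1e-8 * np.eye(len(self.X)), self.y)

    def __call__(self, x):
        x = np.atleast_2d(x)
        if len(self.X) == 0 or np.min(np.linalg.norm(self.X - x, axis=1)) > self.eps:
            y = self.f(x.ravel())              # extrapolation: exact call
            self.X = np.vstack([self.X, x]); self.y = np.append(self.y, y)
            self._fit()                        # enrich and retrain
            return y
        return (self._phi(x) @ self.w).item()  # cheap surrogate evaluation

surr = ActiveRBFSurrogate(lambda x: np.sin(3 * x[0]))
for q in [0.0, 0.1, 1.0, 1.05]:
    print(q, surr(np.array([q])))
```
RONAALP additionally compresses the input with a nonlinear auto-encoder and clusters the latent space; the guard-and-retrain pattern above is only the active-learning skeleton.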

This paper investigates an emerging cache side-channel attack defense approach involving the use of hardware performance counters (HPCs). These counters monitor microarchitectural events, and statistical deviations in their readings are analyzed to differentiate between malicious and benign software. Given the numerous proposals and promising reported results, we investigate whether published HPC-based detection methods are evaluated in a proper setting and under the right assumptions, such that their quality can be ensured for real-world deployment against cache side-channel attacks. To achieve this goal, this paper presents a comprehensive evaluation and scrutiny of the existing literature on the subject in the form of a survey, accompanied by experimental evidence to support our evaluation.
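
The detection pattern under scrutiny typically reduces to fitting a benign baseline for some counter-derived statistic and flagging large deviations. A schematic example follows, with synthetic miss-ratio samples standing in for real counter reads (obtained in practice via perf or PAPI); the numbers are illustrative, not from the paper.
```python
# Schematic k-sigma detector over a counter-derived statistic, e.g. last-level
# cache misses per instruction. Samples here are synthetic placeholders.
import numpy as np

def fit_baseline(benign_samples):
    return np.mean(benign_samples), np.std(benign_samples)

def is_suspicious(sample, mu, sigma, k=3.0):
    return abs(sample - mu) > k * sigma     # flag large statistical deviation

benign = np.random.normal(0.02, 0.005, 1000)   # synthetic benign miss ratios
mu, sigma = fit_baseline(benign)
print(is_suspicious(0.021, mu, sigma))  # False: within the benign baseline
print(is_suspicious(0.20, mu, sigma))   # True: flush+reload-like spike
```
The survey's concern is precisely whether such thresholds survive realistic workloads, noise, and adaptive attackers, rather than the curated settings in which they are usually evaluated.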

This study presents an ensemble approach that addresses the challenges of identifying and analyzing research articles in rapidly evolving fields, using the field of Artificial Intelligence (AI) as a case study. Our approach uses a decision tree, SciBERT, and regular expression matching on different fields of the articles, and an SVM to merge the results from the different models. We evaluated the effectiveness of our method on a manually labeled dataset, finding that our combined approach captured around 97% of AI-related articles in the Web of Science (WoS) corpus with a precision of 0.92. This represents a 0.15 increase in F1 score compared with the existing search-term-based approach. Following this, we analyzed publication volume trends and common research themes. We found that, compared with existing methods, our ensemble approach revealed an increased degree of interdisciplinarity and was able to identify more articles in certain subfields such as feature extraction and optimization. This study demonstrates the potential of our approach as a tool for the accurate identification of scholarly articles that is also capable of providing insights into the volume and content of a research area.
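
The merging step can be pictured as a standard stacking setup. In this hedged sketch the three base-model scores are synthetic; in the study each column would come from the decision tree, SciBERT, or the regular-expression matcher run on an article's fields.
```python
# Stacking sketch: an SVM meta-classifier merges per-model scores.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 500
y = rng.integers(0, 2, n)                       # 1 = AI-related article
# Hypothetical base-model outputs: [tree_prob, sciBERT_prob, regex_hit]
X = np.column_stack([
    np.clip(y + rng.normal(0, 0.4, n), 0, 1),
    np.clip(y + rng.normal(0, 0.3, n), 0, 1),
    (rng.random(n) < 0.3 + 0.5 * y).astype(float),
])
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
meta = SVC().fit(Xtr, ytr)                      # SVM merges the three signals
print("meta-accuracy:", meta.score(Xte, yte))
```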

In this article we consider the filtering problem associated with partially observed diffusions, where the observations follow a marked point process. In the model, the data form a point process whose observation-time intensity is driven by a diffusion, with the associated marks also depending on the diffusion process. We assume that one must resort to time-discretizing the diffusion process, and we develop particle and multilevel particle filters to recursively approximate the filter. In particular, we prove that our multilevel particle filter can achieve a mean square error (MSE) of $\mathcal{O}(\epsilon^2)$ ($\epsilon>0$ and arbitrary) at a cost of $\mathcal{O}(\epsilon^{-2.5})$, versus a cost of $\mathcal{O}(\epsilon^{-3})$ for a particle filter to achieve the same MSE. We then show how this methodology can be extended to give unbiased estimators of the filter (that is, with no time-discretization error), which are proved to have finite variance and, with high probability, finite cost. Finally, we extend our methodology to the problem of online static-parameter estimation.
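
For intuition, here is a single-level bootstrap particle filter for an Euler-discretised diffusion with Gaussian mark likelihoods at fixed observation times; the multilevel coupling and the unbiased estimator are beyond this short sketch, and the drift, diffusion coefficient, and mark noise scale are illustrative assumptions.
```python
# Single-level bootstrap particle filter for dX = a(X) dt + b(X) dW with
# noisy marks observed at given times (multilevel coupling omitted).
import numpy as np

def particle_filter(obs_times, marks, a, b, x0, T, dt=0.01, N=1000, tau=0.5):
    rng = np.random.default_rng(1)
    x = np.full(N, x0); t = 0.0; k = 0
    while t < T:
        # Euler-Maruyama propagation of the particle cloud
        x = x + a(x) * dt + b(x) * np.sqrt(dt) * rng.standard_normal(N)
        t += dt
        if k < len(obs_times) and t >= obs_times[k]:
            # weight by the Gaussian mark likelihood, then resample
            w = np.exp(-0.5 * ((marks[k] - x) / tau) ** 2)
            w /= w.sum()
            x = rng.choice(x, size=N, p=w)
            k += 1
    return x.mean()   # filter-mean estimate at time T

a = lambda x: -x           # Ornstein-Uhlenbeck drift
b = lambda x: 1.0 + 0 * x  # constant diffusion coefficient
print(particle_filter([0.3, 0.7], [0.8, 0.5], a, b, x0=0.0, T=1.0))
```
The multilevel variant runs coupled filters at coarse and fine step sizes $dt$ and combines their telescoping differences, which is where the cost drops from $\mathcal{O}(\epsilon^{-3})$ to $\mathcal{O}(\epsilon^{-2.5})$.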

This paper proposes a recommender system that alleviates the cold-start problem by estimating user preferences from only a small number of items. To identify a user's preferences in the cold state, existing recommender systems, such as Netflix, initially provide items to a user; we call those items evidence candidates. Recommendations are then made based on the items selected by the user. Previous recommendation studies have two limitations: (1) users who have consumed only a few items receive poor recommendations, and (2) inadequate evidence candidates are used to identify user preferences. We propose a meta-learning-based recommender system called MeLU to overcome these two limitations. Through meta-learning, which can rapidly adapt to new tasks with a few examples, MeLU can estimate a new user's preferences from a few consumed items. In addition, we provide an evidence candidate selection strategy that determines distinguishing items for customized preference estimation. We validate MeLU on two benchmark datasets, where the proposed model reduces mean absolute error by at least 5.92% compared with two competing models. We also conduct a user study to verify the evidence selection strategy.
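
The meta-learning mechanic MeLU builds on can be illustrated with a MAML-style toy regression, where each "user" is a task, a few support items drive a fast inner-loop adaptation, and query items update the meta-parameters. MeLU itself adapts a neural network over user and item embeddings; this stand-in only shows the two-loop structure.
```python
# MAML-style two-loop sketch: inner-loop adaptation on a small support set,
# outer-loop meta-update on the query set (toy scalar regression tasks).
import torch

torch.manual_seed(0)
w = torch.zeros(1, requires_grad=True)           # meta-parameter
meta_opt = torch.optim.SGD([w], lr=0.1)

def loss(w, x, y):
    return ((w * x - y) ** 2).mean()

for step in range(200):                          # meta-training over "users"
    true_w = 2.0 + torch.randn(1)                # each user = one task
    xs, xq = torch.randn(5), torch.randn(10)     # 5 support, 10 query items
    ys, yq = true_w * xs, true_w * xq
    # inner loop: one gradient step away from the meta-parameter
    g, = torch.autograd.grad(loss(w, xs, ys), w, create_graph=True)
    w_adapted = w - 0.5 * g
    # outer loop: update the meta-parameter on post-adaptation query loss
    meta_opt.zero_grad()
    loss(w_adapted, xq, yq).backward()
    meta_opt.step()

print(w.item())  # drifts toward a good initialisation for one-step adaptation
```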

In this paper we address issues with image retrieval benchmarking on the standard and popular Oxford 5k and Paris 6k datasets. In particular, annotation errors, the size of the datasets, and the level of challenge are addressed: new annotation for both datasets is created with extra attention to the reliability of the ground truth. Three new protocols of varying difficulty are introduced. The protocols allow fair comparison between different methods, including those using a dataset pre-processing stage. For each dataset, 15 new challenging queries are introduced. Finally, a new set of 1M hard, semi-automatically cleaned distractors is selected. An extensive comparison of state-of-the-art methods is performed on the new benchmark. Different types of methods are evaluated, ranging from local-feature-based to modern CNN-based methods. The best results are achieved by combining the best of both worlds. Most importantly, image retrieval appears far from being solved.
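
Such protocols are scored with mean average precision over ranked lists. For reference, here is the standard per-query AP computation; the special handling of junk/unclear images that the revised annotation introduces is omitted from this sketch.
```python
# Average precision for one ranked retrieval list (junk handling omitted).
def average_precision(ranked_ids, positives):
    hits, precision_sum = 0, 0.0
    for i, img in enumerate(ranked_ids, start=1):
        if img in positives:
            hits += 1
            precision_sum += hits / i        # precision at this recall point
    return precision_sum / max(len(positives), 1)

print(average_precision(["a", "x", "b", "y", "c"], {"a", "b", "c"}))  # ~0.756
```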

Salient object detection is a problem that has been considered in detail, and many solutions have been proposed. In this paper, we argue that work to date has addressed a problem that is relatively ill-posed. Specifically, there is no universal agreement about what constitutes a salient object when multiple observers are queried. This implies that some objects are more likely to be judged salient than others, and that a relative rank exists over salient objects. The solution presented in this paper addresses this more general problem of relative rank, and we propose data and metrics suitable for measuring success in a relative object saliency landscape. A novel deep learning solution is proposed based on a hierarchical representation of relative saliency and stage-wise refinement. We also show that the problem of salient object subitizing can be addressed with the same network, and our approach exceeds the performance of all prior work across all metrics considered (both traditional and newly proposed).
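
One natural way to score relative rank, in the spirit of the rank-aware metrics the paper calls for (the paper's exact metrics may differ), is rank correlation between the predicted ordering of objects and the ordering implied by observer agreement:
```python
# Rank correlation between predicted and ground-truth salient-object orderings.
from scipy.stats import spearmanr

gt_rank   = [1, 2, 3, 4, 5]        # ordering implied by observer agreement
pred_rank = [1, 3, 2, 4, 5]        # model's predicted relative ordering
rho, _ = spearmanr(gt_rank, pred_rank)
print(f"Spearman rho: {rho:.2f}")  # 0.90
```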
