
Manufacturing industries are increasingly adopting additive manufacturing (AM) technologies to produce functional parts in critical systems. However, the inherent complexity of both AM designs and AM processes renders them attractive targets for cyber-attacks. Risk-based Information Technology (IT) and Operational Technology (OT) security guidance standards are useful resources for AM security practitioners, but the guidelines they provide are insufficient without additional AM-specific revisions. Therefore, a structured layering approach is needed to efficiently integrate these revisions with preexisting IT and OT security guidance standards. To implement such an approach, this paper proposes leveraging the National Institute of Standards and Technology's Cybersecurity Framework (CSF) to develop layered, risk-based guidance for fulfilling specific security outcomes. It begins with an in-depth literature review that reveals the importance of AM data and asset management to risk-based security. Next, this paper adopts the CSF asset identification and management security outcomes as an example for providing AM-specific guidance and identifies the AM geometry and process definitions to aid manufacturers in mapping data flows and documenting processes. Finally, this paper uses the Open Security Controls Assessment Language (OSCAL) to integrate the AM-specific guidance with existing IT and OT security guidance in a rigorous and traceable manner. This paper's contribution is to show how a risk-based layered approach enables the authoring, publishing, and management of the AM-specific security guidance that is currently lacking. The authors believe implementation of the layered approach would result in value-added, non-redundant security guidance for AM that is consistent with the preexisting guidance.
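As a rough illustration of the layering idea, the sketch below assembles a minimal OSCAL-style profile in Python that imports controls from an existing baseline catalog and layers an AM-specific addition on top. The field names follow the OSCAL profile model, but the catalog URL, control IDs, and the added guidance text are hypothetical, not taken from the paper.

```python
import json
import uuid

# A minimal, hypothetical OSCAL-style profile: it imports selected controls
# from a preexisting IT/OT catalog and layers an AM-specific modification on
# top, keeping the AM guidance traceable to the baseline it extends.
# The href, control IDs, and added guidance below are illustrative only.
profile = {
    "profile": {
        "uuid": str(uuid.uuid4()),
        "metadata": {
            "title": "AM-specific layer over an IT/OT security baseline",
            "version": "0.1",
            "oscal-version": "1.0.4",
        },
        "imports": [
            {
                # Hypothetical baseline catalog of asset-management controls.
                "href": "https://example.org/catalogs/it-ot-baseline.json",
                "include-controls": [{"with-ids": ["id-am-1", "id-am-2"]}],
            }
        ],
        "modify": {
            "alters": [
                {
                    # Layer AM-specific guidance onto an imported control.
                    "control-id": "id-am-1",
                    "adds": [
                        {
                            "position": "ending",
                            "props": [
                                {
                                    "name": "am-guidance",
                                    "value": "Inventory AM geometry and process "
                                             "definition files as managed assets.",
                                }
                            ],
                        }
                    ],
                }
            ]
        },
    }
}

print(json.dumps(profile, indent=2))
```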

Related Content

Advancements in large pretrained language models have significantly improved their performance on conditional language generation tasks, including summarization, albeit with a tendency to hallucinate. To reduce hallucinations, conventional methods have proposed improving beam search or using a fact checker as a post-processing step. In this paper, we investigate the use of the Natural Language Inference (NLI) entailment metric to detect and prevent hallucinations in summary generation. We propose an NLI-assisted beam re-ranking mechanism that computes entailment probability scores between the input context and summarization-model-generated beams during saliency-enhanced greedy decoding. Moreover, a diversity metric is introduced to compare its effectiveness against vanilla beam search. Our proposed algorithm significantly outperforms vanilla beam decoding on the XSum and CNN/DM datasets.
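A minimal sketch of the general idea, not the authors' exact algorithm: generate several beams with a summarization model, score each beam's entailment against the source with an off-the-shelf NLI model, and pick the most entailed one. The checkpoint names and the entailment label index are assumptions about typical Hugging Face models, not the paper's setup.

```python
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          AutoModelForSeq2SeqLM)

# Hypothetical checkpoints; the paper's exact models may differ.
SUM_MODEL = "facebook/bart-large-xsum"
NLI_MODEL = "roberta-large-mnli"  # labels: 0=contradiction, 1=neutral, 2=entailment

sum_tok = AutoTokenizer.from_pretrained(SUM_MODEL)
summarizer = AutoModelForSeq2SeqLM.from_pretrained(SUM_MODEL)
nli_tok = AutoTokenizer.from_pretrained(NLI_MODEL)
nli = AutoModelForSequenceClassification.from_pretrained(NLI_MODEL)

def nli_reranked_summary(document: str, num_beams: int = 8) -> str:
    """Generate beams, then return the one the NLI model judges most entailed."""
    inputs = sum_tok(document, return_tensors="pt", truncation=True)
    beams = summarizer.generate(
        **inputs, num_beams=num_beams, num_return_sequences=num_beams,
        max_length=64,
    )
    candidates = [sum_tok.decode(b, skip_special_tokens=True) for b in beams]

    scores = []
    for cand in candidates:
        pair = nli_tok(document, cand, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = nli(**pair).logits
        # Probability that the source document entails the candidate summary.
        scores.append(torch.softmax(logits, dim=-1)[0, 2].item())
    return candidates[max(range(len(candidates)), key=scores.__getitem__)]
```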

Compositional generalisation (CG), in NLP and in machine learning more generally, has been assessed mostly using artificial datasets. It is important to develop benchmarks that assess CG in real-world natural language tasks as well, in order to understand the abilities and limitations of systems deployed in the wild. To this end, our GenBench Collaborative Benchmarking Task submission utilises the distribution-based compositionality assessment (DBCA) framework to split the Europarl translation corpus into a training set and a test set in such a way that the test set requires compositional generalisation capacity. Specifically, the training and test sets have divergent distributions of dependency relations, testing NMT systems' capability of translating dependencies they have not been trained on. This is a fully automated procedure for creating natural language compositionality benchmarks, making it simple and inexpensive to apply to other datasets and languages. The code and data for the experiments are available at //github.com/aalto-speech/dbca.
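The splitting criterion can be pictured with a small sketch (ours, not the released code at the repository above): treat each sentence as a bag of dependency-relation labels and greedily assign sentences to the test set so that the train and test relation distributions diverge. The Chernoff-style divergence and the alpha value are illustrative assumptions about the DBCA-family measure.

```python
from collections import Counter

def chernoff_divergence(p: Counter, q: Counter, alpha: float = 0.5) -> float:
    """1 minus the Chernoff coefficient between two count distributions
    (normalized inside). A DBCA-style divergence; alpha=0.5 is an assumption."""
    sp, sq = sum(p.values()) or 1, sum(q.values()) or 1
    coeff = sum((p[k] / sp) ** alpha * (q[k] / sq) ** (1 - alpha)
                for k in set(p) & set(q))
    return 1.0 - coeff

def greedy_divergence_split(sentence_relations, test_size):
    """sentence_relations: one list of dependency-relation labels per sentence.
    Greedily move the sentence that most increases the train/test divergence
    of relation distributions into the test set; returns test indices."""
    train = Counter()
    for rels in sentence_relations:
        train.update(rels)
    test, test_idx = Counter(), []
    pool = set(range(len(sentence_relations)))
    while len(test_idx) < test_size and pool:
        best, best_gain = None, float("-inf")
        for i in pool:
            cand = Counter(sentence_relations[i])
            gain = chernoff_divergence(train - cand, test + cand)
            if gain > best_gain:
                best, best_gain = i, gain
        pool.remove(best)
        test_idx.append(best)
        cand = Counter(sentence_relations[best])
        train -= cand
        test += cand
    return test_idx
```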

Solving partially observable Markov decision processes (POMDPs) with high-dimensional and continuous observations, such as camera images, is required for many real-life robotics and planning problems. Recent research has suggested machine-learned probabilistic models as observation models, but their use is currently too computationally expensive for online deployment. We address the question of what the implications of using simplified observation models for planning are, while retaining formal guarantees on the quality of the solution. Our main contribution is a novel probabilistic bound based on the statistical total variation distance of the simplified model. We show that it bounds the theoretical POMDP value under the original model in terms of the empirical planned value under the simplified model, generalizing recent results on particle-belief MDP concentration bounds. Our calculations can be separated into offline and online parts, and we arrive at formal guarantees without having to access the costly model at all during planning, which is also a novel result. Finally, we demonstrate in simulation how to integrate the bound into the routine of an existing continuous online POMDP solver.
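To make the role of the total variation distance concrete, here is a toy sketch (not the paper's actual bound) of the quantity that drives such guarantees: the TV distance between the original and the simplified discrete observation model, and a generic way a term of this kind enters a value bound. All numbers and the O(T * R_max * delta) form are illustrative.

```python
import numpy as np

def total_variation(p: np.ndarray, q: np.ndarray) -> float:
    """TV distance between two discrete observation distributions."""
    return 0.5 * float(np.abs(p - q).sum())

# Toy example: a costly observation model vs. a cheap simplified one,
# conditioned on the same state. The distributions here are made up.
p_original = np.array([0.70, 0.20, 0.10])    # P(o | s), costly model
q_simplified = np.array([0.60, 0.25, 0.15])  # P(o | s), simplified model

delta = total_variation(p_original, q_simplified)

# A generic (illustrative, not the paper's) way such a term appears: over a
# horizon of T steps with rewards in [0, R_max], a model-mismatch term of
# the form O(T * R_max * delta) separates the two value estimates.
T, R_max = 10, 1.0
print(f"TV distance: {delta:.3f}; crude mismatch term: {T * R_max * delta:.3f}")
```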

We investigate the dimension-parametric complexity of the reachability problem in vector addition systems with states (VASS) and its extension with a pushdown stack (pushdown VASS). Up to now, the problem has been known to be $\mathcal{F}_k$-hard for VASS of dimension $3k+2$ (the complexity class $\mathcal{F}_k$ corresponds to the $k$th level of the fast-growing hierarchy), and no essentially better bound is known for pushdown VASS. We provide a new construction that improves the lower bound for VASS: $\mathcal{F}_k$-hardness in dimension $2k+3$. Furthermore, building on our new insights, we show a new lower bound for pushdown VASS: $\mathcal{F}_k$-hardness in dimension $\frac{k}{2}+4$. This dimension-parametric lower bound is strictly stronger than the upper bound for VASS, which suggests that the (still unknown) complexity of the reachability problem in pushdown VASS is higher than in plain VASS (where it is Ackermann-complete).
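For readers unfamiliar with the model, the following small sketch makes the object of study concrete: a VASS is a finite automaton whose transitions add integer vectors to counters that must stay nonnegative, and reachability asks whether a target configuration is reachable. The bounded search below only illustrates the semantics; it does not terminate in general, consistent with the problem's Ackermann-hardness.

```python
from collections import deque

def vass_reachable(transitions, start, target, max_steps=10_000):
    """Bounded breadth-first search over VASS configurations.
    transitions: dict mapping a state to a list of (vector, next_state)
    pairs; a configuration (state, counters) may fire a transition only
    if every counter stays nonnegative. Purely illustrative."""
    queue = deque([(start[0], tuple(start[1]))])
    seen = {(start[0], tuple(start[1]))}
    goal = (target[0], tuple(target[1]))
    for _ in range(max_steps):
        if not queue:
            return False
        state, vec = queue.popleft()
        if (state, vec) == goal:
            return True
        for delta, nxt in transitions.get(state, []):
            new = tuple(v + d for v, d in zip(vec, delta))
            if all(v >= 0 for v in new) and (nxt, new) not in seen:
                seen.add((nxt, new))
                queue.append((nxt, new))
    return None  # inconclusive within the step budget

# A 2-dimensional example: move tokens from counter 0 to counter 1,
# then switch from state p to state q.
ts = {"p": [((-1, +1), "p"), ((0, 0), "q")]}
print(vass_reachable(ts, ("p", (2, 0)), ("q", (0, 2))))  # True
```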

Live migration of an application or VM is a well-known technique for load balancing, performance optimization, and resource management. To minimize the total downtime during migration, two popular methods -- pre-copy and post-copy -- are used in practice. These methods scale to large VMs and applications, since the downtime is independent of the memory footprint of an application. However, in a secure, trusted execution environment (TEE) like Intel's Scalable SGX, the state of the art still uses the decade-old stop-and-copy method, where the total downtime is proportional to the application's memory footprint. This is primarily because TEEs like Intel SGX do not expose memory and page-table accesses to the OS, quite unlike ordinary, non-secure applications. However, with modern TEE solutions that efficiently support large applications, such as Intel's Scalable SGX and AMD's SEV (on EPYC processors), it is high time that TEE migration methods also evolve to enable live migration of large TEE applications with minimal downtime (stop-and-copy cannot be used any more). We present OptMig, an end-to-end solution for live migrating large memory footprints in TEE-enabled applications. Our approach does not require a developer to modify the application; however, we need a short, separate compilation pass and specialized software library support. Our optimizations reduce the total downtime by 98% for a representative microbenchmark that uses 20 GB of secure memory and by 90 -- 96% for a suite of Intel SGX applications that have multi-GB memory footprints.
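For intuition about why pre-copy keeps downtime small and independent of footprint, here is a schematic Python sketch of the classic pre-copy loop. This is generic VM-migration logic, not OptMig's TEE-specific mechanism (which must additionally work without OS visibility into enclave page accesses); every name in it is illustrative.

```python
def pre_copy_migrate(memory, get_dirty_pages, send, max_rounds=30,
                     stop_threshold=64):
    """Schematic pre-copy loop: copy all pages while the application keeps
    running, then repeatedly re-send only the pages dirtied in the meantime,
    and pause only for the final, small residual set. `memory` maps page
    number -> bytes; `get_dirty_pages()` returns pages written since the
    last call; `send(pages)` transmits them. All names are illustrative."""
    # Round 0: bulk copy of every page, application still running.
    send(dict(memory))
    for _ in range(max_rounds):
        dirty = get_dirty_pages()
        if len(dirty) <= stop_threshold:
            break  # residual set small enough to copy during a brief pause
        send({p: memory[p] for p in dirty})  # iterative re-copy, still live
    # Stop-and-copy only for the residual pages: downtime is proportional
    # to the final dirty-set size, not to the total memory footprint.
    pause_application()
    send({p: memory[p] for p in get_dirty_pages()})
    resume_on_destination()

def pause_application():      # placeholder for the migration runtime
    pass

def resume_on_destination():  # placeholder for the migration runtime
    pass
```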

It is unclear how to restructure ownership when an asset is privately held and there is uncertainty about the owners' subjective valuations. When ownership is divided equally between two owners, a commonly used mechanism is the BMBY (Buy-Me-Buy-You) mechanism. It works as follows: either owner can initiate a BMBY by naming her price. Once an owner declares a price, the other chooses to sell his holdings or to buy the shares of the initiator at that price. This mechanism is simple and tractable; however, it does not elicit actual owner valuations, does not guarantee an efficient allocation, and, most importantly, is limited to an equal partnership of two owners. In this paper, we extend this rationale to a multi-owner setting. Our proposed mechanism elicits owner valuations truthfully. Additionally, it exhibits several desirable traits: it is easy to implement, budget balanced, robust to collusion (weakly group strategyproof), individually rational, and ex-post efficient.
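The two-owner BMBY game described above is easy to state in code. A minimal sketch follows; the responder strategy shown (a simple threshold on her own valuation) is our assumption about rational play, not part of the mechanism itself.

```python
def bmby(initiator_price, responder_valuation_of_whole):
    """Two-owner, equal-partnership BMBY: the initiator names a price p for
    a half share; the responder either sells her half for p or buys the
    initiator's half for p. A rational responder buys iff she values half
    of the asset above p."""
    if responder_valuation_of_whole / 2 > initiator_price:
        return "responder buys the initiator's half"
    return "responder sells her half to the initiator"

# The initiator's price signals her valuation: naming p = (own valuation)/2
# makes her indifferent between being bought out and buying out the other.
print(bmby(initiator_price=50, responder_valuation_of_whole=120))  # buys
print(bmby(initiator_price=50, responder_valuation_of_whole=80))   # sells
```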

Autonomous driving systems are typically built on motion-related modules such as the planner and the controller. An accurate and robust trajectory tracking method is indispensable to these modules as a primitive routine. Current methods often make strong assumptions about the model, such as the context and the dynamics, and are therefore not robust enough to deal with the changing scenarios of a real-world system. In this paper, we propose a Deep Reinforcement Learning (DRL)-based trajectory tracking method for the motion-related modules in autonomous driving systems. The representation-learning ability of DL and the exploratory nature of RL bring strong robustness and improved accuracy. Meanwhile, versatility is enhanced by running trajectory tracking in a model-free, data-driven manner. Through extensive experiments, we demonstrate both the efficiency and the effectiveness of our method compared to current methods.
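As a concrete illustration of the framing (ours, not the paper's architecture), trajectory tracking can be posed as a model-free RL problem by rewarding the agent for staying close to the reference path. The weights and the exponential shaping below are illustrative choices.

```python
import numpy as np

def tracking_reward(vehicle_xy, vehicle_heading, ref_xy, ref_heading,
                    w_pos=1.0, w_head=0.3):
    """A simple shaped reward for trajectory tracking: penalize positional
    and heading deviation from the current reference waypoint. Weights and
    shaping are illustrative, not taken from the paper."""
    pos_err = np.linalg.norm(np.asarray(vehicle_xy) - np.asarray(ref_xy))
    # Wrap the heading error into [-pi, pi) before penalizing it.
    head_err = abs((vehicle_heading - ref_heading + np.pi) % (2 * np.pi) - np.pi)
    return np.exp(-(w_pos * pos_err + w_head * head_err))

# A DRL agent would observe the tracking errors and output steering and
# throttle actions, maximizing the discounted sum of this reward without
# any explicit vehicle dynamics model.
print(tracking_reward((1.0, 0.2), 0.05, (1.0, 0.0), 0.0))
```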

A parameterized convex minorant (PCM) method is proposed for approximating the objective function in amortized optimization. In the proposed method, the objective function approximator is expressed as the sum of a PCM and a nonnegative gap function, so that the approximator is bounded from below by the PCM, which is convex in the optimization variable. The proposed objective function approximator is a universal approximator for continuous functions, and the global minimizer of the PCM attains the global minimum of the objective function approximator. Therefore, the global minimizer of the objective function approximator can be obtained by a single convex optimization. As a realization of the proposed method, an extended parameterized log-sum-exp (LSE) network is proposed, using a parameterized LSE network as the PCM. Numerical simulations are performed for parameterized non-convex objective function approximation and for learning-based nonlinear model predictive control, demonstrating the performance and characteristics of the proposed method. The simulation results support that the proposed method can be used to learn objective functions and to find a global minimizer reliably and quickly using convex optimization algorithms.
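To make the construction concrete, here is a small sketch of a parameterized log-sum-exp network: for each fixed problem parameter p, the output is convex in the optimization variable x (a smooth maximum of affine functions of x), so its global minimum over x can be found by convex optimization. The layer sizes and the hypernetwork producing the affine coefficients are illustrative, not the paper's exact extended network.

```python
import torch
import torch.nn as nn

class ParameterizedLSE(nn.Module):
    """f(x; p) = (1/alpha) * logsumexp_i( alpha * (a_i(p)^T x + b_i(p)) ).
    For fixed p this is convex in x, since it is a log-sum-exp of affine
    functions of x. Sizes and the coefficient network are illustrative."""
    def __init__(self, x_dim, p_dim, n_terms=16, alpha=5.0):
        super().__init__()
        self.alpha, self.n_terms, self.x_dim = alpha, n_terms, x_dim
        # Hypernetwork mapping the parameter p to affine coefficients (a_i, b_i).
        self.hyper = nn.Sequential(
            nn.Linear(p_dim, 64), nn.ReLU(),
            nn.Linear(64, n_terms * (x_dim + 1)),
        )

    def forward(self, x, p):
        coeffs = self.hyper(p).view(-1, self.n_terms, self.x_dim + 1)
        a, b = coeffs[..., :-1], coeffs[..., -1]
        affine = torch.einsum("bki,bi->bk", a, x) + b   # a_i(p)^T x + b_i(p)
        return torch.logsumexp(self.alpha * affine, dim=-1) / self.alpha

pcm = ParameterizedLSE(x_dim=2, p_dim=3)
x = torch.randn(4, 2, requires_grad=True)  # optimization variable
p = torch.randn(4, 3)                      # problem parameter
value = pcm(x, p)  # convex in x for each fixed p; minimize with a convex solver
```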

The rapid changes in the finance industry due to the increasing amount of data have revolutionized techniques for data processing and data analysis and have brought new theoretical and computational challenges. In contrast to classical stochastic control theory and other analytical approaches for solving financial decision-making problems, which rely heavily on model assumptions, recent developments in reinforcement learning (RL) can make full use of the large amount of financial data with fewer model assumptions and can improve decisions in complex financial environments. This survey paper reviews the recent developments and use of RL approaches in finance. We give an introduction to Markov decision processes, which provide the setting for many of the commonly used RL approaches. Various algorithms are then introduced, with a focus on value- and policy-based methods that do not require any model assumptions. Connections are made with neural networks to extend the framework to encompass deep RL algorithms. Our survey concludes by discussing the application of these RL algorithms to a variety of decision-making problems in finance, including optimal execution, portfolio optimization, option pricing and hedging, market making, smart order routing, and robo-advising.
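For reference, the MDP setting the survey starts from can be summarized by the standard Bellman optimality equations, which the value-based methods discussed here approximate (standard textbook notation, not equations reproduced from the survey):

```latex
% Discounted MDP: states s, actions a, reward r(s,a),
% transition kernel P, discount factor \gamma \in [0,1).
V^{*}(s) = \max_{a}\Big[ r(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \Big],
\qquad
Q^{*}(s,a) = r(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, \max_{a'} Q^{*}(s',a').
```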

Incorporating knowledge graphs into recommender systems has attracted increasing attention in recent years. By exploring the interlinks within a knowledge graph, the connectivity between users and items can be discovered as paths, which provide rich and complementary information to user-item interactions. Such connectivity not only reveals the semantics of entities and relations, but also helps to comprehend a user's interests. However, existing efforts have not fully explored this connectivity to infer user preferences, especially in terms of modeling the sequential dependencies within a path and the holistic semantics of a path. In this paper, we contribute a new model named Knowledge-aware Path Recurrent Network (KPRN) to exploit knowledge graphs for recommendation. KPRN can generate path representations by composing the semantics of both entities and relations. By leveraging the sequential dependencies within a path, we allow effective reasoning on paths to infer the underlying rationale of a user-item interaction. Furthermore, we design a new weighted pooling operation to discriminate the strengths of different paths in connecting a user with an item, endowing our model with a certain level of explainability. We conduct extensive experiments on two datasets, on movies and music, demonstrating significant improvements over the state-of-the-art solutions Collaborative Knowledge Base Embedding and Neural Factorization Machine.
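A minimal sketch of the KPRN idea (our simplification: the embedding sizes, the LSTM encoder details, and the pooling temperature are illustrative): encode each user-to-item path as a sequence of entity and relation embeddings, score each path, then fuse the path scores with a weighted pooling whose log-sum-exp form discriminates strong paths while keeping weaker ones visible.

```python
import torch
import torch.nn as nn

class PathScorer(nn.Module):
    """Score one knowledge-graph path by composing entity and relation
    embeddings with an LSTM, in the spirit of KPRN; sizes are illustrative."""
    def __init__(self, n_entities, n_relations, dim=64):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.lstm = nn.LSTM(2 * dim, dim, batch_first=True)
        self.out = nn.Linear(dim, 1)

    def forward(self, entity_ids, relation_ids):
        # Each path step concatenates an entity embedding with the embedding
        # of the relation leading out of it: shape (batch, path_len, 2*dim).
        steps = torch.cat([self.ent(entity_ids), self.rel(relation_ids)], dim=-1)
        _, (h, _) = self.lstm(steps)
        return self.out(h[-1]).squeeze(-1)   # one score per path

def weighted_pooling(path_scores, gamma=1.0):
    """Fuse the scores of all paths connecting a user-item pair. The
    log-sum-exp form emphasizes strong paths without discarding weak
    ones, yielding per-path importance usable for explanations."""
    return gamma * torch.logsumexp(path_scores / gamma, dim=-1)
```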
