
Integrated Sensing and Communication (ISAC) is recognized as a promising technology for next-generation wireless networks. In this paper, we provide a general framework that reveals the fundamental tradeoff between sensing and communications (S&C), where a unified ISAC waveform is exploited to perform dual-functional tasks. In particular, we define the Cramér-Rao bound (CRB)-rate region to characterize the S&C tradeoff and propose a pentagon inner bound of this region. We show that the two corner points of the CRB-rate region can be achieved by the conventional Gaussian waveform and by a novel strategy referred to as successive hypersphere coding, respectively. Moreover, we offer insights into transmission approaches that achieve the boundary of the CRB-rate region, namely the Shannon-Fisher information flow.
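
For concreteness, a common way to formalize such a region (a hedged sketch consistent with the description above; the paper's exact constraint set and CRB expression may differ) is as the set of CRB-rate pairs jointly achievable under a power-limited waveform covariance:

```latex
% Sketch only: \epsilon is a sensing CRB target, R a communication rate,
% \mathbf{R}_X the ISAC waveform covariance, P the power budget. The precise
% CRB functional and channel model here are placeholders.
\mathcal{C}_{\mathrm{CRB\text{-}rate}} \triangleq
\Bigl\{ (\epsilon, R) \;:\; \exists\, \mathbf{R}_X \succeq 0,\;
\operatorname{tr}(\mathbf{R}_X) \le P,\;
\mathrm{CRB}(\mathbf{R}_X) \le \epsilon,\;
\log\det\bigl(\mathbf{I} + \sigma^{-2}\,\mathbf{H}\mathbf{R}_X\mathbf{H}^{\mathsf H}\bigr) \ge R \Bigr\}
```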

Related Content

The INFORMS Journal on Computing publishes high-quality papers that expand the scope of operations research and computing, seeking original research papers on theory, methods, experiments, systems, and applications, as well as novel survey and tutorial papers and papers describing new and useful software tools.
June 5, 2022

The ability to accurately predict human behavior is central to the safety and efficiency of robot autonomy in interactive settings. Unfortunately, robots often lack access to key information on which these predictions may hinge, such as people's goals, attention, and willingness to cooperate. Dual control theory addresses this challenge by treating unknown parameters of a predictive model as stochastic hidden states and inferring their values at runtime using information gathered during system operation. While able to optimally and automatically trade off exploration and exploitation, dual control is computationally intractable for general interactive motion planning, mainly due to the fundamental coupling between robot trajectory optimization and human intent inference. In this paper, we present a novel algorithmic approach to enable active uncertainty reduction for interactive motion planning based on the implicit dual control paradigm. Our approach relies on sampling-based approximation of stochastic dynamic programming, leading to a model predictive control problem that can be readily solved by real-time gradient-based optimization methods. The resulting policy is shown to preserve the dual control effect for a broad class of predictive human models with both continuous and categorical uncertainty. The efficacy of our approach is demonstrated with simulated driving examples.
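
As a rough illustration of the sampling-based approximation described above, the sketch below evaluates a Monte Carlo estimate of the planning objective by rolling out one shared control sequence against intents sampled from the current belief. The names `dynamics` and `human_model` are hypothetical stand-ins, and the full method additionally propagates belief updates along each rollout:

```python
# Minimal sketch of a sampling-based objective in the spirit of implicit
# dual control MPC (illustrative only; not the paper's exact algorithm).
import numpy as np

def mc_objective(u_seq, x0, thetas, probs, dynamics, human_model, n_samples=32):
    """u_seq: (T, du) robot controls shared across sampled human intents.

    thetas/probs: the current belief over the hidden intent parameter.
    dynamics(x, u, h) and human_model(x, theta) are caller-supplied stand-ins.
    """
    rng = np.random.default_rng(0)
    idx = rng.choice(len(thetas), size=n_samples, p=probs)
    total = 0.0
    for i in idx:
        x = x0.copy()
        for u in u_seq:
            h = human_model(x, thetas[i])   # predicted human action under theta_i
            x = dynamics(x, u, h)
            # illustrative stage cost: tracking on the first two states + effort
            total += np.sum(x[:2] ** 2) + 0.1 * np.sum(u ** 2)
    return total / n_samples

# Gradients of this Monte Carlo objective with respect to u_seq can be taken
# with automatic differentiation and fed to a real-time MPC solver.
```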

Given a family of squares in the plane, their $packing\ problem$ asks for the maximum number, $\nu$, of pairwise disjoint squares among them, while their $hitting\ problem$ asks for the minimum number, $\tau$, of points hitting all of them; clearly $\tau \ge \nu$. Both problems are NP-hard even if all the squares are axis-parallel unit squares. The main results of this work provide the first bounds on the $\tau / \nu$ ratio for squares that are not necessarily axis-parallel. We establish an upper bound of $6$ for unit squares and $10$ for squares of varying sizes. The worst ratios we can exhibit with examples are $3$ and $4$, respectively. For comparison, in the axis-parallel case, the supremum of the considered ratio is in the interval $[\frac{3}{2},2]$ for unit squares and $[\frac{3}{2},4]$ for arbitrary squares. The new bounds require a mixture of novel and classical techniques that may well extend to other settings. Furthermore, we study rectangles with a bounded aspect ratio, where the $aspect\ ratio$ of a rectangle is its larger side divided by its smaller side. We improve on the well-known best $\tau/\nu$ bound, which is quadratic in terms of the aspect ratio: we reduce it from quadratic to linear for rectangles, even if they are not axis-parallel, and from linear to logarithmic for axis-parallel rectangles. Finally, we prove similar bounds for the chromatic numbers of squares and rectangles with a bounded aspect ratio.
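
To make the two quantities concrete, here is a toy brute-force computation of $\nu$ and $\tau$ for a small family of axis-parallel unit squares (an illustration only; the paper's main results concern squares that need not be axis-parallel):

```python
# Toy brute force for the packing number nu and hitting number tau of
# axis-parallel unit squares given by their lower-left corners.
from itertools import combinations

def intersects(a, b):
    return abs(a[0] - b[0]) < 1 and abs(a[1] - b[1]) < 1

def nu(squares):
    # size of the largest pairwise-disjoint subfamily
    for k in range(len(squares), 0, -1):
        for sub in combinations(squares, k):
            if all(not intersects(p, q) for p, q in combinations(sub, 2)):
                return k
    return 0

def tau(squares):
    # minimum point set hitting every square; by a standard shifting argument
    # it suffices to try candidates with x among left edges, y among bottoms
    xs = [s[0] for s in squares]
    ys = [s[1] for s in squares]
    cands = [(x, y) for x in xs for y in ys]
    def hits(p, s):
        return s[0] <= p[0] <= s[0] + 1 and s[1] <= p[1] <= s[1] + 1
    for k in range(1, len(squares) + 1):
        for pts in combinations(cands, k):
            if all(any(hits(p, s) for p in pts) for s in squares):
                return k

squares = [(0, 0), (0.5, 0.5), (1.2, 0.2)]
print(nu(squares), tau(squares))  # -> 2 2
```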

From a model-building perspective, in this paper we propose a paradigm shift for fitting over-parameterized models. Philosophically, the mindset is to fit models to future observations rather than to the observed sample. Technically, after choosing an imputation model for generating future observations, we fit over-parameterized models to those future observations by optimizing an approximation to the desired expected loss function, based on its sample counterpart and an adaptive simplicity-preference function. This technique is discussed in detail for both creating the bootstrap imputation and carrying out the final estimation with it. The method is illustrated with the many-normal-means problem, $n < p$ linear regression, and deep convolutional neural networks for image classification of MNIST digits. The numerical results demonstrate superior performance across these three different types of applications. For example, for the many-normal-means problem, our method uniformly dominates James-Stein estimation and Efron's $g$-modeling, and for MNIST image classification, it performs better than all existing methods and reaches arguably the best possible result. While this paper is largely expository because of the ambitious task of examining over-parameterized models from this new perspective, fundamental theoretical properties are also investigated. We conclude the paper with a few remarks.
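
A minimal sketch of this pipeline for the many-normal-means problem, with an illustrative imputation model (a James-Stein-type shrinkage center) and a one-parameter shrinkage family standing in for the simplicity preference; neither choice is claimed to be the paper's exact construction:

```python
# Hedged sketch of "fitting to future observations" for many-normal-means.
import numpy as np

rng = np.random.default_rng(0)
n = 200
theta = rng.normal(0, 2, n)            # true means (unknown in practice)
z = theta + rng.normal(0, 1, n)        # observed sample, z_i ~ N(theta_i, 1)

# Step 1: an imputation model for future observations (illustrative choice:
# James-Stein-type shrinkage toward the grand mean as the imputation center).
grand = z.mean()
c = max(0.0, 1 - (n - 2) / np.sum((z - grand) ** 2))
center = grand + c * (z - grand)

# Step 2: fit a candidate family to simulated future observations.
B, lam_grid = 200, np.linspace(0, 1, 51)
risk = np.zeros_like(lam_grid)
for _ in range(B):
    z_future = center + rng.normal(0, 1, n)    # bootstrap-imputed future data
    for j, lam in enumerate(lam_grid):
        est = (1 - lam) * z + lam * grand      # simplicity-preferring family
        risk[j] += np.mean((est - z_future) ** 2)

lam_best = lam_grid[risk.argmin()]
theta_hat = (1 - lam_best) * z + lam_best * grand
print(np.mean((theta_hat - theta) ** 2))       # compare with MSE of z itself
```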

We study the problem of learning, from observational data, fair and interpretable policies that effectively match heterogeneous individuals to scarce resources of different types. We model this problem as a multi-class multi-server queuing system where both individuals and resources arrive stochastically over time. Each individual, upon arrival, is assigned to a queue where they wait to be matched to a resource. The resources are assigned in a first come first served (FCFS) fashion according to an eligibility structure that encodes the resource types that serve each queue. We propose a methodology based on techniques in modern causal inference to construct the individual queues as well as learn the matching outcomes, and provide a mixed-integer optimization (MIO) formulation to optimize the eligibility structure. The MIO problem maximizes policy outcome subject to wait time and fairness constraints. It is very flexible, allowing for additional linear domain constraints. We conduct extensive analyses using synthetic and real-world data. In particular, we evaluate our framework using data from the U.S. Homeless Management Information System (HMIS). We obtain wait times as low as those of an FCFS policy while improving the rate of exit from homelessness for underserved or vulnerable groups (7% higher for Black individuals and 15% higher for those under 17 years old) and overall.
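
The sketch below simulates the FCFS assignment rule under a given eligibility structure, which is the object the MIO formulation optimizes; the queue names and eligibility sets are invented for illustration:

```python
# Sketch: FCFS matching under an eligibility structure (illustrative only).
# eligibility[q] is the set of resource types allowed to serve queue q.
from collections import deque

def simulate(arrivals, eligibility):
    """arrivals: time-ordered list of ('person', queue) or ('resource', rtype)."""
    queues = {q: deque() for q in eligibility}
    matches, t = [], 0
    for kind, label in arrivals:
        t += 1
        if kind == 'person':
            queues[label].append(t)                 # record arrival time
        else:  # a resource of type `label` serves the longest-waiting eligible person
            eligible = [q for q in eligibility
                        if label in eligibility[q] and queues[q]]
            if eligible:
                q = min(eligible, key=lambda q: queues[q][0])  # FCFS across queues
                matches.append((q, label, t - queues[q].popleft()))
    return matches  # (queue, resource type, wait time)

elig = {'low_risk': {'rapid'}, 'high_risk': {'rapid', 'housing'}}
arr = [('person', 'high_risk'), ('person', 'low_risk'),
       ('resource', 'rapid'), ('resource', 'housing')]
print(simulate(arr, elig))
```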

In recent years, there has been increasing research interest in exploiting application-specific hardware for solving optimisation problems. Examples of solvers that use specialised hardware are IBM's Quantum System One, D-Wave's Quantum Annealer (QA), and Fujitsu's Digital Annealer (DA). These solvers have been developed to optimise problems faster than traditional meta-heuristics implemented on general-purpose machines. Previous research has shown that these solvers can optimise many problems much more quickly than exact solvers such as GUROBI and CPLEX. Such conclusions have not been reached when comparing hardware solvers with classical evolutionary algorithms. Making a fair comparison between traditional evolutionary algorithms, such as the Genetic Algorithm (GA), and the DA (or other similar solvers) is challenging because the latter benefits from application-specific hardware while evolutionary algorithms are often implemented on general-purpose machines. Moreover, quantum or quantum-inspired solvers are limited to solving problems in a specific format. A common formulation is Quadratic Unconstrained Binary Optimisation (QUBO). Many optimisation problems are, however, constrained and have natural representations that are non-binary. Converting such problems to QUBO can increase problem difficulty and/or enlarge the search space. The question addressed in this paper is whether quantum or quantum-inspired solvers can optimise QUBO transformations of combinatorial optimisation problems faster than classical evolutionary algorithms applied to the same problems in their natural representations. We show that the DA often presents better average objective function values than the GA on Travelling Salesman, Quadratic Assignment and Multi-dimensional Knapsack Problem instances.
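
As an example of the kind of QUBO transformation discussed above, the sketch below encodes a tiny knapsack instance using the textbook slack-bit penalty construction and evaluates the resulting energy by exhaustion; the instance and penalty weight are illustrative:

```python
# Sketch: encoding a small knapsack instance as a QUBO via a penalty term.
import itertools
import numpy as np

values  = np.array([6, 5, 8, 9])
weights = np.array([2, 3, 4, 5])
cap, P  = 7, 20.0                            # capacity and penalty strength

n_slack = int(np.ceil(np.log2(cap + 1)))     # binary slack bits s
n = len(values) + n_slack

def energy(bits):
    x, s = bits[:len(values)], bits[len(values):]
    slack = sum(b << i for i, b in enumerate(s))
    load = int(weights @ x)
    # maximize value  <=>  minimize -value; the squared penalty enforces
    # load + slack == cap, i.e. the capacity constraint with slack >= 0
    return -float(values @ x) + P * (load + slack - cap) ** 2

best = min(itertools.product([0, 1], repeat=n), key=energy)
print(best[:len(values)], -energy(best))     # -> (1, 0, 0, 1) 15.0
```

Note the cost of the transformation visible even at this scale: the natural representation has 4 decision bits, while the QUBO needs 7 and a hand-tuned penalty weight.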

Nowadays, the shipbuilding industry is facing a radical change towards solutions with a smaller environmental impact. This can be achieved with low-emission engines, optimized shape designs with lower wave resistance and noise generation, and by reducing the metal raw materials used during manufacturing. This work focuses on the last aspect by presenting a complete structural optimization pipeline for modern passenger ship hulls which exploits advanced model order reduction techniques to reduce the dimensionality of both the input parameters and the outputs of interest. We introduce a novel approach which incorporates parameter space reduction through active subspaces into the proper orthogonal decomposition with interpolation method. This is done in a multi-fidelity setting. We test the whole framework on a simplified model of a midship section and on the full model of a passenger ship, controlled by 20 and 16 parameters, respectively. We present a comprehensive error analysis and show the capabilities and usefulness of the methods, especially during the preliminary design phase, finding new, previously unconsidered designs while handling high-dimensional parameterizations.
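
A minimal sketch of proper orthogonal decomposition with interpolation (PODI) in which the interpolation is performed over active-subspace coordinates; the shapes, rank, and the assumption of a precomputed projection W1 are illustrative:

```python
# Sketch: PODI with parameters first reduced by a (precomputed) active
# subspace projection W1. Illustrative shapes and names throughout.
import numpy as np
from scipy.interpolate import RBFInterpolator

def build_podi(params, snapshots, W1, rank=10):
    """params: (N, p) inputs; snapshots: (N, m) outputs; W1: (p, k) projection."""
    y = params @ W1                               # active-subspace coordinates
    U, _, _ = np.linalg.svd(snapshots.T, full_matrices=False)
    modes = U[:, :rank]                           # POD basis, shape (m, rank)
    coeffs = snapshots @ modes                    # (N, rank) modal coefficients
    interp = RBFInterpolator(y, coeffs)           # regress coefficients over y
    return lambda mu: modes @ interp((mu @ W1).reshape(1, -1))[0]

# usage: predictor = build_podi(P, X, W1); x_new = predictor(mu_new)
```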

Major Depressive Disorder (MDD) is a common worldwide mental health issue with high associated socioeconomic costs. The prediction and automatic detection of MDD can, therefore, make a huge impact on society. Speech, as a non-invasive, easy-to-collect signal, is a promising marker to aid the diagnosis and assessment of MDD. In this regard, speech samples were collected as part of the Remote Assessment of Disease and Relapse in Major Depressive Disorder (RADAR-MDD) research programme. RADAR-MDD was an observational cohort study in which speech and other digital biomarkers were collected from a cohort of individuals with a history of MDD in Spain, the United Kingdom and the Netherlands. In this paper, the RADAR-MDD speech corpus was taken as an experimental framework to test the efficacy of a Sequence-to-Sequence model with a local attention mechanism in a two-class depression severity classification paradigm. Additionally, a novel training method, HARD-Training, is proposed. It is a methodology based on the selection of more ambiguous samples for model training, inspired by the curriculum learning paradigm. HARD-Training was found to consistently improve the performance of our classifiers, with an average increment of 8.6%, for both speech elicitation tasks used and for each collection site of the RADAR-MDD speech corpus. With this novel methodology, our Sequence-to-Sequence model was able to effectively detect MDD severity regardless of language. Finally, recognising the need for greater awareness of potential algorithmic bias, we conduct an additional analysis of our results separately for each gender.
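
A sketch of the core selection step suggested by the HARD-Training description: bias each training round toward the most ambiguous samples under the current model. The scoring rule and retention fraction are illustrative, not the paper's exact recipe:

```python
# Sketch of ambiguity-based sample selection in the spirit of HARD-Training.
import numpy as np

def select_hard(model_scores, frac=0.5):
    """model_scores: predicted P(class=1) per sample.

    Returns indices of the `frac` most ambiguous samples, i.e. those whose
    scores lie closest to the 0.5 decision boundary.
    """
    ambiguity = -np.abs(model_scores - 0.5)       # closer to 0.5 = harder
    k = int(len(model_scores) * frac)
    return np.argsort(ambiguity)[-k:]

# per epoch: scores = model.predict(X); idx = select_hard(scores); train on X[idx]
```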

Compositionality is a key property for dealing with complexity, and it has been studied from many points of view in diverse fields. In particular, the composition of individual computations (or programs) has been widely studied almost since the inception of computer science. Unlike existing composition theories, this paper presents an algebraic model not for composing individual programs but for inductively composing spaces of sequential and/or parallel constructs. We describe the semantics of the proposed model and present an abstract example to demonstrate its application.
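
One loose, concrete reading of composing spaces rather than individual programs, with sets of program fragments standing in for spaces; the operators below are illustrative toys, not the paper's semantics:

```python
# Toy illustration: composition operators act on whole spaces (sets) of
# constructs, producing new spaces inductively.
def seq(A, B):
    """Sequential composition of two spaces of constructs."""
    return {a + "; " + b for a in A for b in B}

def par(A, B):
    """Parallel composition of two spaces of constructs."""
    return {f"({a} || {b})" for a in A for b in B}

A, B = {"read", "skip"}, {"write"}
print(seq(A, par(A, B)))   # an inductively composed space, not one program
```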

Federated learning (FL) is an effective mechanism for data privacy in recommender systems that runs machine learning model training on-device. While prior FL optimizations tackled the data and system heterogeneity challenges faced by FL, they assume the two are independent of each other. This fundamental assumption does not reflect real-world, large-scale recommender systems, in which data and system heterogeneity are tightly intertwined. This paper takes a data-driven approach to show the inter-dependence of data and system heterogeneity in real-world data and quantifies its impact on the overall model quality and fairness. We design a framework, RF^2, to model the inter-dependence and evaluate its impact on state-of-the-art model optimization techniques for federated recommendation tasks. We demonstrate that the impact on fairness can be severe under realistic heterogeneity scenarios, by up to 15.8--41x compared to the simple setup assumed in most (if not all) prior work. This means that when realistic system-induced data heterogeneity is not properly modeled, the fairness impact of an optimization can be understated by up to 41x. The result shows that modeling realistic system-induced data heterogeneity is essential to achieving fair federated recommendation learning. We plan to open-source RF^2 to enable future design and evaluation of FL innovations.
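
The sketch below illustrates the phenomenon RF^2 is designed to model: device speed and local data characteristics drawn jointly from a shared device-tier variable, so that a deadline-based participant selector induces data bias. All distributions are invented for illustration:

```python
# Sketch: system-induced data heterogeneity (illustrative distributions only).
import numpy as np

rng = np.random.default_rng(0)
n_devices = 1000
tier = rng.choice([0, 1, 2], size=n_devices, p=[0.5, 0.3, 0.2])  # low/mid/high end
speed = np.array([1.0, 2.5, 6.0])[tier] * rng.lognormal(0, 0.2, n_devices)

# Data volume and label skew are correlated with the SAME tier variable,
# so data and system heterogeneity are intertwined rather than independent.
n_samples = rng.poisson(lam=20 * (tier + 1))
pos_rate = np.clip(0.2 + 0.15 * tier + rng.normal(0, 0.05, n_devices), 0, 1)

# A deadline-based participant selector now biases the training data:
deadline = np.quantile(speed, 0.5)
selected = speed >= deadline
print(pos_rate[selected].mean(), pos_rate[~selected].mean())   # label skew differs
print(n_samples[selected].mean(), n_samples[~selected].mean()) # volume differs
```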

Visual information extraction (VIE) has attracted considerable attention recently owing to its various advanced applications such as document understanding, automatic marking and intelligent education. Most existing works decouple this problem into several independent sub-tasks of text spotting (text detection and recognition) and information extraction, which completely ignores the strong correlation among them during optimization. In this paper, we propose a robust visual information extraction system (VIES) towards real-world scenarios, which is a unified end-to-end trainable framework for simultaneous text detection, recognition and information extraction, taking a single document image as input and outputting the structured information. Specifically, the information extraction branch collects abundant visual and semantic representations from text spotting for multimodal feature fusion and, conversely, provides higher-level semantic clues to contribute to the optimization of text spotting. Moreover, regarding the shortage of public benchmarks, we construct a fully-annotated dataset called EPHOIE (https://github.com/HCIILAB/EPHOIE), which is the first Chinese benchmark for both text spotting and visual information extraction. EPHOIE consists of 1,494 images of examination paper heads with complex layouts and backgrounds, including a total of 15,771 Chinese handwritten or printed text instances. Compared with the state-of-the-art methods, our VIES shows significantly superior performance on the EPHOIE dataset and achieves a 9.01% F-score gain on the widely used SROIE dataset under the end-to-end scenario.
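
A sketch of the fusion idea in the information extraction branch: visual and textual features from the spotter are concatenated and classified, so the IE loss also back-propagates into spotting. Module names and dimensions are illustrative:

```python
# Sketch of a VIES-style multimodal fusion head (illustrative, not the
# paper's exact architecture).
import torch
import torch.nn as nn

class FusionIEHead(nn.Module):
    def __init__(self, d_vis=256, d_txt=256, n_labels=10):
        super().__init__()
        self.proj = nn.Linear(d_vis + d_txt, 256)
        self.classifier = nn.Linear(256, n_labels)  # entity label per text instance

    def forward(self, vis_feats, txt_feats):
        # vis_feats/txt_feats: (num_instances, d) features from the spotter;
        # because both come from the shared spotting backbone, the IE loss
        # computed on these logits also optimizes detection and recognition.
        fused = torch.relu(self.proj(torch.cat([vis_feats, txt_feats], dim=-1)))
        return self.classifier(fused)
```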
