
Vertical decomposition is a widely used general technique for decomposing the cells of arrangements of semi-algebraic sets in $d$-space into constant-complexity subcells. In this paper, we settle in the affirmative a few long-standing open problems involving the vertical decomposition of substructures of arrangements for $d=3,4$: (i) Let $\mathcal{S}$ be a collection of $n$ semi-algebraic sets of constant complexity in 3D, and let $U(m)$ be an upper bound on the complexity of the union $\mathcal{U}(\mathcal{S}')$ of any subset $\mathcal{S}'\subseteq \mathcal{S}$ of size at most $m$. We prove that the complexity of the vertical decomposition of the complement of $\mathcal{U}(\mathcal{S})$ is $O^*(n^2+U(n))$ (where the $O^*(\cdot)$ notation hides subpolynomial factors). We also show that the complexity of the vertical decomposition of the entire arrangement $\mathcal{A}(\mathcal{S})$ is $O^*(n^2+X)$, where $X$ is the number of vertices in $\mathcal{A}(\mathcal{S})$. (ii) Let $\mathcal{F}$ be a collection of $n$ trivariate functions whose graphs are semi-algebraic sets of constant complexity. We show that the complexity of the vertical decomposition of the portion of the arrangement $\mathcal{A}(\mathcal{F})$ in 4D lying below the lower envelope of $\mathcal{F}$ is $O^*(n^3)$. These results lead to efficient algorithms for a variety of problems involving these decompositions, including algorithms for constructing the decompositions themselves, and for constructing $(1/r)$-cuttings of substructures of arrangements of the kinds considered above. One additional algorithm of interest is for output-sensitive point enclosure queries amid semi-algebraic sets in three or four dimensions. In addition, as a main domain of applications, we study various proximity problems involving points and lines in 3D.
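The 3D/4D constructions analyzed in the paper are involved; as a minimal illustration of the underlying idea only, the sketch below computes the planar analogue: the vertical (trapezoidal) decomposition of an arrangement of line segments, where a vertical wall is extended up and down from every segment endpoint until it first hits another segment. This is a naive O(n^2) per-endpoint scan over non-crossing segments (walls from intersection points would be handled similarly), not the paper's algorithm.

```python
# Sketch of the planar analogue of vertical decomposition: from every
# segment endpoint, extend a vertical wall up and down until it first
# hits another segment. Naive O(n^2); assumes non-crossing segments.

def y_on_segment(seg, x):
    """y-coordinate of segment at abscissa x, or None if x is outside."""
    (x1, y1), (x2, y2) = seg
    if x1 > x2:
        (x1, y1), (x2, y2) = (x2, y2), (x1, y1)
    if not (x1 <= x <= x2) or x1 == x2:
        return None
    t = (x - x1) / (x2 - x1)
    return y1 + t * (y2 - y1)

def vertical_walls(segments):
    """For each endpoint, return the wall (x, y_endpoint, y_below, y_above)."""
    walls = []
    for i, seg in enumerate(segments):
        for (x, y) in seg:
            below, above = float("-inf"), float("inf")
            for j, other in enumerate(segments):
                if j == i:
                    continue
                yo = y_on_segment(other, x)
                if yo is None:
                    continue
                if yo < y:
                    below = max(below, yo)   # closest segment below
                elif yo > y:
                    above = min(above, yo)   # closest segment above
            walls.append((x, y, below, above))
    return walls

segs = [((0, 0), (4, 2)), ((1, 3), (5, 1)), ((2, -1), (6, 0))]
for w in vertical_walls(segs):
    print(w)
```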

Related Content

3D is short for the English "Three Dimensions", referring to three dimensions, or three coordinates: length, width, and height. In other words, it describes something solid (stereoscopic), in contrast to a plane (2D), which has only length and width.

Resource allocation is a fundamental task in cell-free (CF) massive multiple-input multiple-output (MIMO) systems and can effectively improve network performance. In this paper, we study the downlink of CF MIMO networks with network clustering and linear precoding, and develop a sequential multiuser scheduling and power allocation scheme. In particular, we present a multiuser scheduling algorithm based on greedy techniques and a gradient ascent (GA) power allocation algorithm for sum-rate maximization when imperfect channel state information (CSI) is considered. Numerical results show the superiority of the proposed sequential scheduling and power allocation scheme and algorithms over existing approaches while reducing the computational complexity and the signaling load.
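To illustrate the gradient-ascent step, here is a toy projected gradient ascent for sum-rate maximization under a total power budget. The scalar-gain interference model, the gains G, and the budget p_max are illustrative stand-ins, not the paper's CF MIMO model with clustering and precoding.

```python
import numpy as np

# Toy projected gradient-ascent power allocation for sum-rate maximization
# (a sketch of the GA idea; the scalar gains G and interference model are
# assumptions, not the paper's CF massive MIMO setup).

def sum_rate(p, G, noise=1.0):
    """G[k, j]: gain from user j's power into user k's receiver."""
    signal = np.diag(G) * p
    interf = G @ p - signal
    return np.sum(np.log2(1.0 + signal / (interf + noise)))

def ga_power_allocation(G, p_max, steps=500, lr=0.05, eps=1e-6):
    K = G.shape[0]
    p = np.full(K, p_max / K)           # start from equal power
    for _ in range(steps):
        grad = np.zeros(K)
        for k in range(K):              # numerical gradient (simple, slow)
            d = np.zeros(K); d[k] = eps
            grad[k] = (sum_rate(p + d, G) - sum_rate(p - d, G)) / (2 * eps)
        p = np.clip(p + lr * grad, 0.0, None)
        if p.sum() > p_max:             # project back onto the power budget
            p *= p_max / p.sum()
    return p

rng = np.random.default_rng(0)
G = rng.exponential(1.0, size=(4, 4)) + np.eye(4) * 3.0
p = ga_power_allocation(G, p_max=10.0)
print("powers:", np.round(p, 3), "sum-rate:", round(sum_rate(p, G), 3))
```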

State space models (SSMs) are widely used to describe dynamic systems. However, when the likelihood of the observations is intractable, parameter inference for SSMs cannot be easily carried out using standard Markov chain Monte Carlo or sequential Monte Carlo methods. In this paper, we propose a particle Gibbs sampler as a general strategy to handle SSMs with intractable likelihoods in the approximate Bayesian computation (ABC) setting. The proposed sampler incorporates a conditional auxiliary particle filter, which can help mitigate the weight degeneracy often encountered in ABC. To illustrate the methodology, we focus on a classic stochastic volatility model (SVM) used in finance and econometrics for analyzing and interpreting volatility. Simulation studies demonstrate the accuracy of our sampler for SVM parameter inference, compared to existing particle Gibbs samplers based on the conditional bootstrap filter. As a real data application, we apply the proposed sampler for fitting an SVM to S&P 500 Index time-series data during the 2008 financial crisis.
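As a minimal sketch of the ABC weighting idea in a particle filter, the code below runs an ABC bootstrap filter on the standard SV model x_t = mu + phi(x_{t-1} - mu) + sigma*eta_t, y_t = exp(x_t/2)*eps_t, replacing the intractable likelihood weight with a Gaussian kernel on the distance between simulated and observed data. The paper's sampler instead wraps a conditional auxiliary particle filter inside particle Gibbs; the bandwidth h and kernel here are assumptions.

```python
import numpy as np

# Minimal ABC bootstrap particle filter for a stochastic volatility model
# (a sketch of the ABC weighting only; the proposed method uses a
# *conditional auxiliary* particle filter inside a particle Gibbs sampler).
#   x_t = mu + phi*(x_{t-1} - mu) + sigma*eta_t,   y_t = exp(x_t/2)*eps_t

def abc_particle_filter(y, mu, phi, sigma, n_part=1000, h=0.2, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, sigma / np.sqrt(1 - phi**2), n_part)  # stationary init
    loglik = 0.0
    for t in range(len(y)):
        x = mu + phi * (x - mu) + sigma * rng.normal(size=n_part)
        y_sim = np.exp(x / 2) * rng.normal(size=n_part)  # simulate pseudo-obs
        # ABC weight: Gaussian kernel on the distance to the observed y_t
        w = np.exp(-0.5 * ((y_sim - y[t]) / h) ** 2)
        if w.sum() == 0:       # all particles far from the data: degenerate
            return -np.inf
        loglik += np.log(w.mean())
        w /= w.sum()
        x = x[rng.choice(n_part, n_part, p=w)]   # multinomial resampling
    return loglik

# Simulate a short series from the model and estimate its ABC log-likelihood.
rng = np.random.default_rng(0)
mu, phi, sigma, T = -1.0, 0.95, 0.3, 200
x = np.zeros(T); y = np.zeros(T)
for t in range(T):
    prev = x[t - 1] if t else mu
    x[t] = mu + phi * (prev - mu) + sigma * rng.normal()
    y[t] = np.exp(x[t] / 2) * rng.normal()
print("ABC log-likelihood estimate:", abc_particle_filter(y, mu, phi, sigma))
```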

The use of vibrotactile feedback is of growing interest in the field of prosthetics, but few devices fully integrate this technology in the prosthesis to transmit high-frequency contact information (such as surface roughness and first contact) arising from the interaction of the prosthetic device with external objects. This study describes a wearable vibrotactile system, embedded in the prosthetic socket, that conveys high-frequency tactile information. The device consists of two compact planar vibrotactile actuators in direct contact with the user's skin to transmit tactile cues. These stimuli are directly related to the acceleration profiles recorded with two IMUs placed on the distal phalanx of a soft under-actuated robotic prosthesis (SoftHand Pro). We characterized the system psychophysically with fifteen able-bodied participants by computing each participant's Just Noticeable Difference (JND) for the discrimination of vibrotactile cues delivered to the index finger, which are associated with the exploration of different sandpapers. Moreover, we performed a pilot experiment with one SoftHand Pro prosthesis user, designing an Active Texture Identification task to investigate whether our feedback could enhance users' roughness discrimination. Results indicate that the device can effectively convey contact and texture cues, which users can readily detect and distinguish.
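A sketch of how a JND can be extracted from such discrimination data: fit a cumulative-Gaussian psychometric function to the proportion of "perceived rougher" responses over stimulus differences and read off the 75% point relative to the point of subjective equality. The responses and stimulus levels below are synthetic placeholders, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Sketch of a JND estimate from two-alternative discrimination responses
# (synthetic data; the real stimuli were vibrotactile cues derived from
# sandpaper acceleration profiles).

def psychometric(delta, mu, sd):
    """Cumulative Gaussian: P(respond 'rougher') vs. stimulus difference."""
    return norm.cdf(delta, loc=mu, scale=sd)

deltas = np.array([-0.4, -0.2, -0.1, 0.0, 0.1, 0.2, 0.4])  # stimulus diff
n_trials = 40
rng = np.random.default_rng(2)
p_true = psychometric(deltas, mu=0.0, sd=0.15)
p_hat = rng.binomial(n_trials, p_true) / n_trials  # proportion "rougher"

(mu_fit, sd_fit), _ = curve_fit(psychometric, deltas, p_hat,
                                p0=[0.0, 0.1],
                                bounds=([-1.0, 1e-3], [1.0, 1.0]))
jnd = norm.ppf(0.75, loc=mu_fit, scale=sd_fit) - mu_fit  # 75% point - PSE
print(f"PSE = {mu_fit:.3f}, JND = {jnd:.3f}")
```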

Industrial process tomography (IPT) is a specialized imaging technique widely used in industrial scenarios for process supervision and control. Augmented/mixed reality (AR/MR) is increasingly being adopted in many industrial settings, yet there remains an obvious gap when it comes to IPT. To bridge this gap, we propose the first systematic AR approach using optical see-through (OST) head-mounted displays (HMDs), with a comparative evaluation for domain users, for IPT visualization analysis. The proof of concept was demonstrated in a within-subject user study (n=20) with a counterbalanced design. Both qualitative and quantitative measurements were investigated. The results showed that our AR approach outperformed conventional settings for IPT data visualization analysis: higher understandability, reduced task completion time, lower error rates on domain tasks, increased usability with enhanced user experience, and a higher recommendation level. We summarize the findings and suggest future research directions for benefiting IPT users with AR/MR.
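For readers unfamiliar with the design, here is a minimal sketch of the counterbalancing and the paired comparison behind such a within-subject study: 20 participants, two conditions (AR vs. conventional) in alternating order, and a paired t-test on task completion time. All numbers are synthetic, not the study's data.

```python
import numpy as np
from scipy import stats

# Sketch of a within-subject, counterbalanced comparison (synthetic data):
# half the participants see the AR condition first, half see it second,
# then completion times are compared with a paired t-test.

n = 20
orders = ["AR-first" if i % 2 == 0 else "Conv-first" for i in range(n)]

rng = np.random.default_rng(3)
t_conv = rng.normal(120, 15, n)        # completion time, conventional (s)
t_ar = t_conv - rng.normal(20, 8, n)   # AR assumed faster, for illustration

t_stat, p_val = stats.ttest_rel(t_ar, t_conv)
print(f"order balance: {orders.count('AR-first')} / {n} AR-first")
print(f"paired t-test: t = {t_stat:.2f}, p = {p_val:.4f}")
```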

To effectively process high volumes of data across a fleet of dynamic and distributed vehicles, it is crucial to implement resource provisioning techniques that provide reliable, cost-effective, and timely computing services. This article explores computation-intensive task scheduling over mobile vehicular clouds (MVCs). We use undirected weighted graphs (UWGs) to model both the execution of tasks and the communication patterns among vehicles in an MVC. We then study reliable and timely scheduling of UWG tasks through a novel mechanism operating in two complementary decision-making stages: Plan A and Plan B. Plan A is a proactive decision-making approach that leverages historical statistical data to preemptively create an optimal mapping ($\alpha$) between tasks and the MVC prior to actual task scheduling. In contrast, Plan B is a real-time decision-making paradigm that functions as a reliable contingency plan: it seeks a viable mapping ($\beta$) if $\alpha$ fails during task scheduling due to the unpredictable nature of the network. Furthermore, we explore in depth the procedural details and key factors that underpin the success of our mechanism, and we present a case study showcasing its superior time efficiency and computation overhead. We conclude by discussing a series of open directions for future research.
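A minimal sketch of the two-stage idea: Plan A precomputes a task-to-vehicle mapping alpha offline from (here, assumed) cost estimates; at run time, if a vehicle assigned by alpha has left the cloud, Plan B greedily remaps the affected tasks to the available vehicle with the most capacity. The cost model, capacities, and availability flags are illustrative assumptions, not the paper's UWG formulation.

```python
# Minimal sketch of the Plan A / Plan B mechanism (cost model, capacities,
# and availability are illustrative assumptions).

def plan_a(tasks, vehicles):
    """Offline: map each task to its cheapest vehicle (mapping alpha)."""
    return {t: min(vehicles, key=lambda v: vehicles[v]["cost"][t])
            for t in tasks}

def plan_b(alpha, tasks, vehicles, available):
    """Online fallback: remap tasks whose Plan-A vehicle became unavailable
    to the available vehicle with the largest capacity (mapping beta)."""
    beta = dict(alpha)
    for t in tasks:
        if alpha[t] not in available:
            beta[t] = max(available, key=lambda v: vehicles[v]["capacity"])
    return beta

tasks = ["t1", "t2", "t3"]
vehicles = {
    "v1": {"capacity": 4, "cost": {"t1": 1, "t2": 3, "t3": 2}},
    "v2": {"capacity": 2, "cost": {"t1": 2, "t2": 1, "t3": 3}},
    "v3": {"capacity": 3, "cost": {"t1": 3, "t2": 2, "t3": 1}},
}
alpha = plan_a(tasks, vehicles)
beta = plan_b(alpha, tasks, vehicles, available={"v1", "v3"})  # v2 left
print("alpha:", alpha)
print("beta: ", beta)
```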

The fusion of causal models with deep learning, which introduces increasingly intricate data sets such as causal associations within images or between textual components, has surfaced as a focal research area. Nonetheless, extending original causal concepts and theories to such complex, non-statistical data has met with serious challenges. In response, our study proposes a redefinition of causal data into three distinct categories from the standpoint of causal structure and representation: definite data, semi-definite data, and indefinite data. Definite data chiefly pertains to the statistical data used in conventional causal scenarios, while semi-definite data refers to a spectrum of data formats germane to deep learning, including time series, images, text, and others. Indefinite data is an emergent research area that we infer from the progression of data forms. To comprehensively present these three data paradigms, we elaborate on their formal definitions, the differences manifested in their datasets, resolution pathways, and the development of research on each. We summarize key tasks and achievements pertaining to definite and semi-definite data from a myriad of research undertakings, and we present a roadmap for indefinite data, beginning with its current research conundrums. Lastly, we classify and scrutinize the key datasets presently utilized within these three paradigms.

Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks are typically represented in Euclidean domains. Nevertheless, in a growing number of power system applications, data are collected from non-Euclidean domains and represented as graph-structured data with high-dimensional features and interdependencies among nodes. The complexity of graph-structured data poses significant challenges for existing deep neural networks defined in Euclidean domains. Recently, many studies on extending deep neural networks to graph-structured data in power systems have emerged. In this paper, we present a comprehensive overview of graph neural networks (GNNs) in power systems. Specifically, several classical GNN paradigms (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems, such as fault diagnosis, power prediction, power flow calculation, and data generation, are reviewed in detail. Furthermore, the main issues and some research trends concerning applications of GNNs in power systems are discussed.
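As a concrete instance of the first paradigm listed above, here is a minimal graph convolutional layer in the Kipf-Welling style, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W). The 4-node toy graph and random features are assumptions; one could read the nodes as buses and the features as per-node electrical measurements.

```python
import numpy as np

# Minimal graph convolutional (GCN) layer:
#   H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)
# Toy 4-node graph; in a power-system setting the nodes could be buses
# and H per-node measurements (an illustrative assumption).

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)           # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
rng = np.random.default_rng(4)
H = rng.normal(size=(4, 3))   # 3 input features per node
W = rng.normal(size=(3, 2))   # project to 2 hidden features per node
print(gcn_layer(A, H, W))
```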

The notion of uncertainty is of major importance in machine learning and constitutes a key element of machine learning methodology. In line with the statistical tradition, uncertainty has long been perceived as almost synonymous with standard probability and probabilistic predictions. Yet, owing to the steadily increasing relevance of machine learning for practical applications and related issues such as safety requirements, new problems and challenges have recently been identified by machine learning scholars, and these problems may call for new methodological developments. In particular, this includes the importance of distinguishing between (at least) two different types of uncertainty, often referred to as aleatoric and epistemic. In this paper, we provide an introduction to the topic of uncertainty in machine learning as well as an overview of attempts so far at handling uncertainty in general and formalizing this distinction in particular.
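One common way to operationalize the distinction, sketched below under a deep-ensemble assumption (the survey covers many other formalizations): when each ensemble member predicts a Gaussian mean and variance, the aleatoric part is the average predicted variance and the epistemic part is the variance of the predicted means, which together give the total predictive variance by the law of total variance.

```python
import numpy as np

# Sketch of the aleatoric/epistemic split for a deep ensemble (one of many
# formalizations). Each member m predicts a Gaussian N(mu_m(x), var_m(x)):
#   aleatoric(x) = mean_m var_m(x)   (noise the data itself carries)
#   epistemic(x) = var_m mu_m(x)     (disagreement between members)

rng = np.random.default_rng(5)
M = 5                                   # ensemble size (assumed)
mus = rng.normal(2.0, 0.3, size=M)      # members' predicted means at x
vars_ = rng.uniform(0.1, 0.2, size=M)   # members' predicted variances at x

aleatoric = vars_.mean()
epistemic = mus.var()
total = aleatoric + epistemic           # law of total variance
print(f"aleatoric={aleatoric:.3f} epistemic={epistemic:.3f} total={total:.3f}")
```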

We propose a novel attention gate (AG) model for medical imaging that automatically learns to focus on target structures of varying shapes and sizes. Models trained with AGs implicitly learn to suppress irrelevant regions in an input image while highlighting salient features useful for a specific task. This enables us to eliminate the necessity of using explicit external tissue/organ localisation modules of cascaded convolutional neural networks (CNNs). AGs can be easily integrated into standard CNN architectures such as the U-Net model with minimal computational overhead while increasing the model sensitivity and prediction accuracy. The proposed Attention U-Net architecture is evaluated on two large CT abdominal datasets for multi-class image segmentation. Experimental results show that AGs consistently improve the prediction performance of U-Net across different datasets and training sizes while preserving computational efficiency. The code for the proposed architecture is publicly available.
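A simplified sketch of an additive attention gate consistent with the description above: 1x1 convolutions map the gating signal g and the skip features x to a common space, their sum passes through ReLU and a sigmoid-activated 1x1 convolution to produce per-pixel coefficients that gate x. It assumes g and x already share spatial size; the authors' released code also handles resampling and other details.

```python
import torch
import torch.nn as nn

# Simplified additive attention gate (a sketch; assumes the gating signal
# g and the skip features x already share spatial dimensions).

class AttentionGate(nn.Module):
    def __init__(self, ch_g, ch_x, ch_inter):
        super().__init__()
        self.w_g = nn.Conv2d(ch_g, ch_inter, kernel_size=1)
        self.w_x = nn.Conv2d(ch_x, ch_inter, kernel_size=1)
        self.psi = nn.Conv2d(ch_inter, 1, kernel_size=1)

    def forward(self, g, x):
        a = torch.relu(self.w_g(g) + self.w_x(x))   # additive attention
        alpha = torch.sigmoid(self.psi(a))          # per-pixel coefficients
        return x * alpha                            # gate the skip features

g = torch.randn(1, 64, 32, 32)   # coarse gating signal (decoder side)
x = torch.randn(1, 32, 32, 32)   # skip-connection features (encoder side)
print(AttentionGate(64, 32, 16)(g, x).shape)   # torch.Size([1, 32, 32, 32])
```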

Object detection typically assumes that training and test data are drawn from an identical distribution, which, however, does not always hold in practice. Such a distribution mismatch leads to a significant performance drop. In this work, we aim to improve the cross-domain robustness of object detection. We tackle the domain shift on two levels: 1) the image-level shift, such as image style, illumination, etc., and 2) the instance-level shift, such as object appearance, size, etc. We build our approach on the recent state-of-the-art Faster R-CNN model and design two domain adaptation components, one at the image level and one at the instance level, to reduce the domain discrepancy. The two domain adaptation components are based on H-divergence theory and are implemented by learning a domain classifier in an adversarial training manner. The domain classifiers on different levels are further reinforced with a consistency regularization to learn a domain-invariant region proposal network (RPN) in the Faster R-CNN model. We evaluate our newly proposed approach on multiple datasets, including Cityscapes, KITTI, and SIM10K. The results demonstrate the effectiveness of our approach for robust object detection in various domain shift scenarios.
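A sketch of the adversarial domain classifier idea, using a gradient reversal layer, which is one standard way to implement the minimax training: the classifier learns to tell source from target features, while the reversed gradient pushes the backbone toward domain-invariant features. The layer sizes and pooled-feature input are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Sketch of an adversarially trained domain classifier via gradient
# reversal (one standard implementation of the minimax; sizes assumed).

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None   # flip the gradient for the backbone

class DomainClassifier(nn.Module):
    """Predicts source vs. target from (image- or instance-level) features;
    the reversed gradient makes the features harder to tell apart."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, feat, lam=1.0):
        return self.net(GradReverse.apply(feat, lam))

feats = torch.randn(8, 256, requires_grad=True)      # pooled features
labels = torch.cat([torch.zeros(4), torch.ones(4)])  # 0=source, 1=target
logits = DomainClassifier(256)(feats).squeeze(1)
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss.backward()   # backbone receives the reversed (adversarial) gradient
```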
