
Many worldwide initiatives are promoting the transformation from machine-dominant manufacturing to digital manufacturing. To achieve a successful transformation to the Industry 4.0 standard, manufacturing enterprises need to follow a clear roadmap. However, Small and Medium Manufacturing Enterprises (SMEs) encounter many barriers and difficulties (economic, technical, cultural, etc.) when implementing Industry 4.0. Although several works, which SMEs could use as references, deal with the incorporation of Industry 4.0 technologies into the product and supply chain life cycles, this is not the case for the customer life cycle. We therefore present two contributions that can help the software engineers of those SMEs incorporate Industry 4.0 technologies in the context of the customer life cycle. The first contribution is a methodology that supports those software engineers in creating new software services, aligned with Industry 4.0, that change how customers interact with enterprises and the experiences they have while doing so. The methodology details a set of stages that are divided into phases, which in turn are made up of activities. It places special emphasis on incorporating semantic descriptions and 3D visualization into the implementation of those new services. The second contribution is a system developed for a real manufacturing scenario using the proposed methodology, which illustrates the possibilities that such systems can offer to SMEs in two phases of the customer life cycle: Discover & Shop, and Use & Service.
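
The stage/phase/activity hierarchy described above can be captured in a small data model. The sketch below is a minimal illustration under assumed names (Stage, Phase, Activity, and the example entries); none of it is taken from the paper itself.

```python
from dataclasses import dataclass, field
from typing import List

# Minimal data model of the methodology's hierarchy:
# a stage is divided into phases, and each phase is made up of activities.
@dataclass
class Activity:
    name: str
    description: str = ""

@dataclass
class Phase:
    name: str
    activities: List[Activity] = field(default_factory=list)

@dataclass
class Stage:
    name: str
    phases: List[Phase] = field(default_factory=list)

# Hypothetical example built around one customer-life-cycle stage from the abstract.
discover_and_shop = Stage(
    name="Discover & Shop",
    phases=[
        Phase(
            name="Service design",
            activities=[
                Activity("Add semantic descriptions of products"),
                Activity("Build a 3D visualization of the catalogue"),
            ],
        )
    ],
)
```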

Related content

The IFIP TC13 Conference on Human-Computer Interaction is an important platform for researchers and practitioners in the field of human-computer interaction to present their work. Over the years, these conferences have attracted researchers from many countries and cultures. Official website: INTERACT
March 12, 2024

Signed graphs are an emergent way of representing data in a variety of contexts where conflicting interactions exist, including data from biological, ecological, and social systems. Here we propose the concept of communicability geometry for signed graphs, proving that metrics in this space, such as the communicability distance and communicability angles, are Euclidean and spherical. We then apply these metrics to solve several data analysis problems for signed graphs in a unified way, including partitioning signed graphs, dimensionality reduction, finding hierarchies of alliances in signed networks, and quantifying the degree of polarization between the existing factions in systems represented by this type of graph.
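
Communicability distances and angles have standard matrix-function definitions built on G = e^A. The sketch below computes them for a toy signed adjacency matrix; it is a minimal illustration of the kind of metric referred to above, not the paper's code, and the signed-graph-specific results (Euclidean/spherical embeddings, partitioning, polarization) are not reproduced here.

```python
import numpy as np
from scipy.linalg import expm

def communicability_metrics(A):
    """Communicability distance and angle matrices for a (possibly signed)
    adjacency matrix A, using the standard definitions:
        G = exp(A),
        d(p, q)^2 = G_pp + G_qq - 2 * G_pq,
        cos(theta_pq) = G_pq / sqrt(G_pp * G_qq).
    """
    G = expm(np.asarray(A, dtype=float))
    diag = np.diag(G)
    dist2 = diag[:, None] + diag[None, :] - 2.0 * G
    dist = np.sqrt(np.clip(dist2, 0.0, None))          # communicability distance
    cos_angle = G / np.sqrt(np.outer(diag, diag))
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))   # communicability angle
    return dist, angle

# Toy signed triangle: two allies (+1) both in conflict (-1) with the third node.
A = np.array([[ 0,  1, -1],
              [ 1,  0, -1],
              [-1, -1,  0]])
dist, angle = communicability_metrics(A)
print(dist)
print(angle)
```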

Periodic autoregressive (PAR) time series with finite variance are among the most common models of second-order cyclostationary processes. In real applications, however, signals with periodic characteristics may be disturbed by additional noise related to measurement-device disturbances or other external sources. Thus, the known estimation techniques dedicated to PAR models may be inefficient in such cases. When the variance of the additive noise is relatively small, it can be ignored and the classical estimation techniques can be applied. In extreme cases, however, the additive noise can have a significant influence on the estimation results. In this paper, we propose four estimation techniques for noise-corrupted PAR models with finite-variance distributions. The methodology is based on Yule-Walker equations utilizing the autocovariance function and can be used for any type of finite-variance additive noise. The presented simulation study clearly indicates the efficiency of the proposed techniques, also in the extreme case where the additive noise is a sum of Gaussian additive noise and additive outliers. The proposed estimation techniques are also applied to test whether the data correspond to a noise-corrupted PAR model. This issue is strongly related to identifying the informative component in the data when the model is disturbed by additive non-informative noise. The power of the test is studied for simulated data. Finally, the testing procedure is applied to two real time series describing particulate matter concentration in the air.
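
The paper's four estimators are built on Yule-Walker equations for periodic AR models. As a rough, non-periodic illustration of the underlying idea, the sketch below fits a plain AR(p) from sample autocovariances and shows one common way to avoid the lag-0 autocovariance, which is the term inflated by additive white noise. The function names and the particular lag choice are illustrative assumptions, not the paper's techniques.

```python
import numpy as np

def sample_autocovariance(x, max_lag):
    """Biased sample autocovariance gamma(0..max_lag)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

def yule_walker_ar(x, p, noise_robust=False):
    """Yule-Walker AR(p) estimate from sample autocovariances.

    With noise_robust=True, the equations use lags k = p+1..2p only, so the
    noise-inflated gamma(0) never enters; this mimics, very loosely, the idea
    of adapting Yule-Walker equations to additive observation noise.
    """
    max_lag = 2 * p if noise_robust else p
    gamma = sample_autocovariance(x, max_lag)
    ks = range(p + 1, 2 * p + 1) if noise_robust else range(1, p + 1)
    R = np.array([[gamma[abs(k - j)] for j in range(1, p + 1)] for k in ks])
    r = np.array([gamma[k] for k in ks])
    return np.linalg.solve(R, r)

# Example: AR(2) observed through additive Gaussian noise.
rng = np.random.default_rng(1)
clean = np.zeros(5000)
for t in range(2, 5000):
    clean[t] = 0.6 * clean[t - 1] - 0.3 * clean[t - 2] + rng.normal()
noisy = clean + rng.normal(scale=1.0, size=5000)
print(yule_walker_ar(noisy, 2), yule_walker_ar(noisy, 2, noise_robust=True))
```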

Purpose: To investigate whether Fractal Dimension (FD)-based oculomics could be used for individual risk prediction by evaluating repeatability and robustness. Methods: We used two datasets: Caledonia, healthy adults imaged multiple times in quick succession for research (26 subjects, 39 eyes, 377 colour fundus images), and GRAPE, glaucoma patients with baseline and follow-up visits (106 subjects, 196 eyes, 392 images). Mean follow-up time in GRAPE was 18.3 months, so it provides a pessimistic lower bound, as the vasculature could change. FD was computed with DART and AutoMorph. Image quality was assessed with QuickQual, but no images were initially excluded. Pearson, Spearman, and Intraclass Correlation (ICC) were used for population-level repeatability. For individual-level repeatability, we introduce the measurement noise parameter λ, defined as the within-eye Standard Deviation (SD) of FD measurements in units of the between-eyes SD. Results: In Caledonia, ICC was 0.8153 for DART and 0.5779 for AutoMorph, and Pearson/Spearman correlation (first and last image) was 0.7857/0.7824 for DART and 0.3933/0.6253 for AutoMorph. In GRAPE, Pearson/Spearman correlation (first and next visit) was 0.7479/0.7474 for DART and 0.7109/0.7208 for AutoMorph (all p<0.0001). Median λ in Caledonia without exclusions was 3.55% for DART and 12.65% for AutoMorph, improving to as little as 1.67% and 6.64%, respectively, with quality-based exclusions. Quality exclusions primarily mitigated large outliers. Worst quality in an eye correlated strongly with λ (Pearson 0.5350-0.7550, depending on dataset and method, all p<0.0001). Conclusions: Repeatability was sufficient for individual-level predictions in heterogeneous populations. DART performed better on all metrics and might be able to detect small, longitudinal changes, highlighting the potential of robust methods.
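
The measurement noise parameter λ defined above (within-eye SD of repeated FD measurements in units of the between-eyes SD) is simple to compute from repeated measurements. The sketch below is a minimal illustration; the column names are placeholders, and taking the between-eyes SD over per-eye means is an assumption about the exact definition.

```python
import numpy as np
import pandas as pd

def measurement_noise_lambda(df, eye_col="eye_id", fd_col="fd"):
    """Per-eye lambda: within-eye SD of repeated FD measurements,
    expressed in units of the between-eyes SD (here computed over
    per-eye mean FD values, which is an assumption)."""
    per_eye_mean = df.groupby(eye_col)[fd_col].mean()
    between_eyes_sd = per_eye_mean.std(ddof=1)
    within_eye_sd = df.groupby(eye_col)[fd_col].std(ddof=1)
    return within_eye_sd / between_eyes_sd  # one lambda value per eye

# Toy example with repeated measurements for three eyes.
df = pd.DataFrame({
    "eye_id": ["a", "a", "a", "b", "b", "c", "c"],
    "fd":     [1.41, 1.42, 1.40, 1.55, 1.54, 1.47, 1.49],
})
print(measurement_noise_lambda(df).median())  # median lambda, as reported above
```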

To simplify the analysis of Boolean networks, a reduction in the number of components is often considered. A popular reduction method consists of eliminating components that are not autoregulated, using variable substitution. In this work, we show how this method can be extended, for asynchronous dynamics of Boolean networks, to the elimination of vertices that have a negative autoregulation, and we study the effects on the dynamics and interaction structure. For the elimination of non-autoregulated variables, the preservation of attractors is in general guaranteed only for fixed points. Here we give sufficient conditions for the preservation of complex attractors. The removal of so-called mediator nodes (i.e., vertices with indegree and outdegree one) is often considered and frequently does not affect the attractor landscape. We clarify that this is not always the case: in some situations, even subtle changes in the interaction structure can lead to a different asymptotic behaviour. Finally, we use properties of the more general elimination method introduced here to give an alternative proof for a bound on the number of attractors of asynchronous Boolean networks in terms of the cardinality of positive feedback vertex sets of the interaction graph.
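
Elimination by variable substitution has a compact computational form: every other update function evaluates the eliminated component "on the fly" through its update function instead of reading it from the state. The sketch below handles only the classical non-autoregulated case; the extension to negatively autoregulated components studied in the work above requires a more careful construction and is not shown.

```python
# Boolean update functions are maps from a state dict {variable: bool} to a bool.

def eliminate(functions, k):
    """Return the update functions of the reduced network without component k.

    Assumes functions[k] does not depend on k itself (no autoregulation)."""
    f_k = functions[k]
    reduced = {}
    for v, f_v in functions.items():
        if v == k:
            continue
        def substituted(state, f_v=f_v, f_k=f_k, k=k):
            extended = dict(state)
            extended[k] = f_k(state)   # substitute f_k(x) for x_k
            return f_v(extended)
        reduced[v] = substituted
    return reduced

# Example: x3 is a mediator node (indegree and outdegree one) and is eliminated.
fns = {
    "x1": lambda s: not s["x2"],
    "x2": lambda s: s["x3"],
    "x3": lambda s: s["x1"],
}
red = eliminate(fns, "x3")
print(red["x2"]({"x1": True, "x2": False}))  # True: x2 now reads x1 directly
```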

A higher-order change-of-measure multilevel Monte Carlo (MLMC) method is developed for computing weak approximations of the invariant measures of SDEs whose drift coefficients do not satisfy the contractivity condition. This is achieved by introducing a spring term in the pairwise coupling of the MLMC trajectories employing the order-1.5 strong Itô-Taylor method. Through this, we can recover the contractivity property of the drift coefficient while still retaining the telescoping-sum property needed for implementing the MLMC method. We show that the variance of the change-of-measure MLMC method grows linearly in time $T$ for all $T > 0$ and all sufficiently small timestep sizes $h > 0$. For a given error tolerance $\epsilon > 0$, we prove that the method achieves a mean-square-error accuracy of $O(\epsilon^2)$ with a computational cost of $O(\epsilon^{-2} |\log \epsilon|^{3/2} (\log |\log \epsilon|)^{1/2})$ for uniformly Lipschitz continuous payoff functions and $O(\epsilon^{-2} |\log \epsilon|^{5/3 + \xi})$ for discontinuous payoffs, where $\xi > 0$. We also observe an improvement in the constant associated with the computational cost of the higher-order change-of-measure MLMC method, marking an improvement over the Milstein change-of-measure method in the seminal work by M. Giles and W. Fang. Several numerical tests were performed to verify the theoretical results and assess the robustness of the method.
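
The change-of-measure coupling with a spring term and the order-1.5 Itô-Taylor discretization are beyond a short snippet, but the telescoping-sum structure of MLMC itself is easy to sketch. Below is a generic MLMC estimator with a pairwise-coupled Euler-Maruyama sampler for a toy geometric Brownian motion; the scheme, model, and sample counts are illustrative assumptions, not the method of the paper.

```python
import numpy as np

def mlmc_estimate(sampler, L, n_samples):
    """Generic MLMC telescoping-sum estimator:
        E[P_L] ~= E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}].
    `sampler(l, n)` returns n samples of P_0 for l = 0 and n coupled samples
    of (P_l - P_{l-1}) for l >= 1."""
    return sum(np.mean(sampler(l, n_samples[l])) for l in range(L + 1))

def gbm_sampler(l, n, T=1.0, mu=0.05, sigma=0.2, x0=1.0,
                rng=np.random.default_rng(0)):
    """Euler-Maruyama for dX = mu X dt + sigma X dW, payoff = X_T,
    with the coarse path driven by summed fine Brownian increments."""
    nf = 2 ** (l + 2)                      # fine steps at level l
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), size=(n, nf))
    xf = np.full(n, x0)
    for i in range(nf):                    # fine path
        xf = xf + mu * xf * dt + sigma * xf * dW[:, i]
    if l == 0:
        return xf
    xc = np.full(n, x0)
    dWc = dW[:, 0::2] + dW[:, 1::2]        # coarse increments from fine ones
    for i in range(nf // 2):               # coarse path, step 2*dt
        xc = xc + mu * xc * (2 * dt) + sigma * xc * dWc[:, i]
    return xf - xc

print(mlmc_estimate(gbm_sampler, L=4, n_samples=[4000, 2000, 1000, 500, 250]))
```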

Nowadays, human interactions largely take place on social networks, with online users' behavior often falling into a few general typologies or "social roles". Among these, opinion leaders are of crucial importance, as they have the ability to spread an idea or opinion on a large scale across the network, with possible tangible consequences in the real world. In this work we extract and characterize the different social roles of users within the Reddit WallStreetBets community around the time of the GameStop short squeeze of January 2021, when a handful of committed users led the whole community to engage in a large and risky financial operation. We identify the profiles of both average users and relevant outliers, including opinion leaders, using an iterative, semi-supervised classification algorithm, which allows us to discern the characteristics needed to play a particular social role. The key features of opinion leaders are large risky investments and constant updates on a single stock, which allowed them to attract a large following and, in the case of GameStop, ignite the interest of the community. Finally, we observe a substantial change in the behavior and attitude of users after the short squeeze event: no new opinion leaders are found and the community becomes less focused on investments. Overall, this work sheds light on the user roles and dynamics that led to the GameStop short squeeze, while also suggesting why WallStreetBets no longer wielded such a large influence on financial markets in the aftermath of this event.
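
The iterative, semi-supervised classification algorithm referred to above is the paper's own construction. As a generic illustration of the idea (a small hand-labelled seed set of roles propagated to unlabelled users), the sketch below uses scikit-learn's self-training wrapper on hypothetical per-user features; the feature names and role labels are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(42)
# Hypothetical per-user features: [posts_per_day, distinct_tickers, avg_position_size]
X = rng.random((500, 3))
y = np.full(500, -1)              # -1 marks unlabelled users
y[:30] = rng.integers(0, 3, 30)   # small labelled seed: e.g. 0=average, 1=outlier, 2=opinion leader

# Self-training: confidently predicted users are added to the labelled set iteratively.
model = SelfTrainingClassifier(RandomForestClassifier(n_estimators=100), threshold=0.8)
model.fit(X, y)
print(model.predict(X[:5]))       # inferred roles for the first five users
```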

This article is concerned with the multilevel Monte Carlo (MLMC) methods for approximating expectations of some functions of the solution to the Heston 3/2-model from mathematical finance, which takes values in $(0, \infty)$ and possesses superlinearly growing drift and diffusion coefficients. To discretize the SDE model, a new Milstein-type scheme is proposed to produce independent sample paths. The proposed scheme can be explicitly solved and is positivity-preserving unconditionally, i.e., for any time step-size $h>0$. This positivity-preserving property for large discretization time steps is particularly desirable in the MLMC setting. Furthermore, a mean-square convergence rate of order one is proved in the non-globally Lipschitz regime, which is not trivial, as the diffusion coefficient grows super-linearly. The obtained order-one convergence in turn promises the desired relevant variance of the multilevel estimator and justifies the optimal complexity $\mathcal{O}(\epsilon^{-2})$ for the MLMC approach, where $\epsilon > 0$ is the required target accuracy. Numerical experiments are finally reported to confirm the theoretical findings.
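
The paper's explicit, unconditionally positivity-preserving Milstein-type scheme is not reproduced here. As a simpler illustration of unconditional positivity preservation for a 3/2-type SDE, the sketch below applies Euler-Maruyama to the log-transformed process, assuming the common parameterization dX = X(mu - alpha X) dt + beta X^{3/2} dW; this scheme does not attain the order-one mean-square rate established for the paper's method.

```python
import numpy as np

def log_euler_32_model(x0, mu, alpha, beta, T, n_steps, n_paths, seed=0):
    """Positivity-preserving simulation of
        dX = X (mu - alpha X) dt + beta X^{3/2} dW,  X_0 = x0 > 0,
    via Euler-Maruyama on Y = log X. Ito's lemma gives
        dY = (mu - (alpha + beta^2/2) e^Y) dt + beta e^{Y/2} dW,
    and X = exp(Y) stays positive for every step size h.
    NOTE: illustrative only; not the paper's Milstein-type scheme."""
    rng = np.random.default_rng(seed)
    h = T / n_steps
    y = np.full(n_paths, np.log(x0))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h), n_paths)
        x = np.exp(y)
        y = y + (mu - (alpha + 0.5 * beta**2) * x) * h + beta * np.sqrt(x) * dW
    return np.exp(y)

paths = log_euler_32_model(x0=1.0, mu=2.0, alpha=2.0, beta=0.5,
                           T=1.0, n_steps=200, n_paths=10_000)
print(paths.mean(), paths.min() > 0)  # all sample paths remain strictly positive
```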

Language models (LMs) show promise as tools for communicating science to the general public by simplifying and summarizing complex language. Because models can be prompted to generate text for a specific audience (e.g., college-educated adults), LMs might be used to create multiple versions of plain language summaries for people with different levels of familiarity with scientific topics. However, it is not clear what the benefits and pitfalls of adaptive plain language are. When is simplifying necessary, what are the costs of doing so, and do these costs differ for readers with different background knowledge? Through three within-subjects studies in which we surfaced summaries written for different envisioned audiences to participants of different backgrounds, we found that while simpler text led to the best reading experience for readers with little to no familiarity with a topic, high-familiarity readers tended to ignore certain details in overly plain summaries (e.g., study limitations). Our work provides methods and guidance for adapting plain language summaries beyond the single "general" audience.
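
Prompting an LM for audience-specific summaries can be as simple as parameterizing the prompt by the envisioned audience. The sketch below uses the OpenAI Python SDK as one possible backend; the model name, audience descriptions, and prompt wording are assumptions and do not come from the study above.

```python
from openai import OpenAI

AUDIENCES = {
    "low_familiarity": "an adult with no background in this field",
    "high_familiarity": "a college-educated adult familiar with this field",
}

def plain_language_summary(abstract: str, audience_key: str) -> str:
    """Generate a plain language summary targeted at a given audience."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = (
        f"Summarize the following abstract for {AUDIENCES[audience_key]}. "
        "Keep important caveats such as study limitations.\n\n" + abstract
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```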

Time Series Classification (TSC) is an important and challenging problem in data mining. With the increasing availability of time series data, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) for this task. This is surprising, as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision, especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open-source deep learning framework to the TSC community in which we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we present the most exhaustive study of DNNs for TSC to date.
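
A typical architecture in this line of work is a fully convolutional network: a few 1D convolution blocks followed by global average pooling and a softmax layer. The Keras sketch below is a minimal example of that pattern; the exact hyperparameters are illustrative assumptions, not the benchmark's configurations.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fcn(series_length: int, n_channels: int, n_classes: int) -> tf.keras.Model:
    """Minimal 1D fully convolutional network for time series classification."""
    inputs = layers.Input(shape=(series_length, n_channels))
    x = inputs
    for filters, kernel in [(128, 8), (256, 5), (128, 3)]:
        x = layers.Conv1D(filters, kernel, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fcn(series_length=500, n_channels=1, n_classes=10)
model.summary()
# model.fit(X_train, y_train, epochs=100, validation_data=(X_val, y_val))
```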

Recent advances in 3D fully convolutional networks (FCNs) have made it feasible to produce dense voxel-wise predictions of volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from large organs to thin vessels) can achieve competitive segmentation results, while avoiding the need for handcrafting features or training class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to ~10% and allows it to focus on a more detailed segmentation of the organs and vessels. We use training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection acquired at a different hospital, comprising 150 CT scans and targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve significantly higher performance in small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN-based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download: https://github.com/holgerroth/3Dunet_abdomen_cascade.
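
The coarse-to-fine cascade described above can be summarized as: run a first model on the whole volume, take the bounding box of its candidate region, and run a second model only inside that crop. The sketch below illustrates that inference flow; `coarse_fcn` and `fine_fcn` are placeholders for trained models mapping a volume to per-voxel foreground probabilities, and the threshold and margin are assumptions.

```python
import numpy as np

def bounding_box(mask, margin=8):
    """Axis-aligned bounding box of a binary mask, padded by a voxel margin."""
    idx = np.argwhere(mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, mask.shape)
    return tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))

def cascade_segment(volume, coarse_fcn, fine_fcn, threshold=0.5):
    """Two-stage, coarse-to-fine segmentation of a 3D volume."""
    coarse_prob = coarse_fcn(volume)                 # stage 1: whole volume
    candidate = coarse_prob > threshold
    if not candidate.any():
        return np.zeros(volume.shape, dtype=bool)
    roi = bounding_box(candidate)                    # only a fraction of voxels remain
    fine_prob = fine_fcn(volume[roi])                # stage 2: cropped candidate region
    out = np.zeros(volume.shape, dtype=bool)
    out[roi] = fine_prob > threshold
    return out
```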
