
Context: Software engineering is becoming increasingly distributed. Developers and other stakeholders are often located in different locations, departments, and countries and operate in different time zones. Most online software design and modeling tools are not adequate for distributed collaboration, since they do not support awareness and lack features for effective communication. Objective: The aim of our research is to support distributed software design activities in Virtual Reality (VR). Method: Using the design science research methodology, we design and evaluate a tool for collaborative design in VR. We compare collaboration efficiency and recall of design information between the VR software design environment and a non-VR software design environment. Moreover, we collect the perceptions and preferences of users to explore the opportunities and challenges that arise when using the VR software design environment. Results: We find no significant difference in efficiency or recall of design information between the VR and non-VR environments. Furthermore, we find that developers are more satisfied with collaboration in VR. Conclusion: The results of our research and similar studies show that working in VR is not yet faster or more efficient than working on standard desktops. Improving the VR interface (gestures with haptics, keyboard and voice input) is essential, as confirmed by the difference in results between the first and second evaluation.

Related content

Reliability analysis aims to estimate the failure probability of an engineering system. It often requires multiple runs of a limit-state function, which usually relies on computationally intensive simulations. Traditionally, these simulations have been considered deterministic, i.e., running them multiple times for a given set of input parameters always produces the same output. However, this assumption does not always hold, as many studies in the literature report non-deterministic computational simulations (also known as noisy models). In such cases, running the simulations multiple times with the same input results in different outputs. Similarly, data-driven models that rely on real-world data may also be affected by noise. This characteristic poses a challenge for reliability analysis, as many classical methods, such as FORM and SORM, are tailored to deterministic models. To bridge this gap, this paper provides a novel methodology for performing reliability analysis on models contaminated by noise. The noise introduces latent uncertainty into the reliability estimator, leading to an incorrect estimate of the true underlying reliability index, even when using Monte Carlo simulation. To overcome this challenge, we propose the use of denoising regression-based surrogate models within an active-learning reliability analysis framework. Specifically, we combine Gaussian process regression with a noise-aware learning function to efficiently estimate the probability of failure of the underlying noise-free model. We showcase the effectiveness of this methodology on standard benchmark functions and a finite element model of a realistic structural frame.
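
As a concrete illustration of this kind of workflow, the sketch below runs an active-learning reliability loop with a Gaussian process whose WhiteKernel term absorbs the simulator noise. The paper's noise-aware learning function is not reproduced here; the classical U-function from AK-MCS stands in for it, and `noisy_limit_state` is a hypothetical toy model.

```python
# Sketch: active-learning reliability analysis with a denoising GP surrogate.
# Assumptions: toy limit state, U-function as the learning criterion.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def noisy_limit_state(x, noise_std=0.1):
    """Toy noisy limit state: failure when g(x) < 0."""
    g = 3.0 - np.sum(x**2, axis=1)              # underlying noise-free model
    return g + noise_std * rng.standard_normal(len(x))

X_mc = rng.standard_normal((10_000, 2))          # Monte Carlo population
idx = rng.choice(len(X_mc), size=12, replace=False)
X_doe, y_doe = X_mc[idx], noisy_limit_state(X_mc[idx])

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)   # WhiteKernel absorbs the noise
for _ in range(30):                                              # active-learning loop
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_doe, y_doe)
    mu, sigma = gp.predict(X_mc, return_std=True)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)    # low U = uncertain sign of g
    if U.min() >= 2.0:                           # common stopping criterion
        break
    x_new = X_mc[[np.argmin(U)]]
    X_doe = np.vstack([X_doe, x_new])
    y_doe = np.append(y_doe, noisy_limit_state(x_new))

pf_hat = np.mean(mu < 0)                         # failure probability of the denoised mean
print(f"Estimated Pf = {pf_hat:.4f} with {len(X_doe)} model runs")
```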

The interpretability of models has become a crucial issue in Machine Learning because of the growing impact of algorithmic decisions on real-world applications. Tree-ensemble methods, such as Random Forests or XGBoost, are powerful learning tools for classification tasks. However, while combining multiple trees may provide higher prediction quality than a single one, it sacrifices interpretability, resulting in "black-box" models. In light of this, we aim to develop an interpretable representation of a tree-ensemble model that can provide valuable insights into its behavior. First, given a target tree-ensemble model, we develop a hierarchical visualization tool based on a heatmap representation of the forest's feature use, taking the frequency of a feature and the level at which it is selected as indicators of importance. Next, we propose a mixed-integer linear programming (MILP) formulation for constructing a single optimal multivariate tree that accurately mimics the target model's predictions. The goal is to provide an interpretable surrogate model based on oblique hyperplane splits, which uses only the most relevant features according to the defined forest importance indicators. The MILP model includes a penalty on feature selection based on feature frequency in the forest, to further induce sparsity of the splits. The natural formulation has been strengthened to improve the computational performance of mixed-integer software. Computational experiments are carried out on benchmark datasets from the UCI repository using a state-of-the-art off-the-shelf solver. Results show that the proposed model is effective in yielding a shallow interpretable tree that approximates the tree-ensemble decision function.
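
The feature-use statistics behind the heatmap can be gathered directly from scikit-learn tree internals. The sketch below is a minimal illustration of that first step only (the MILP surrogate tree is not reproduced), and the per-level normalization is an assumption.

```python
# Sketch: count how often each feature is used for splitting at each tree level
# of a random forest; this matrix is what a heatmap would display.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100, max_depth=5, random_state=0).fit(X, y)

max_depth = 5
counts = np.zeros((max_depth + 1, X.shape[1]))   # rows: tree level, cols: feature index

for est in forest.estimators_:
    tree = est.tree_
    stack = [(0, 0)]                              # (node id, depth)
    while stack:
        node, depth = stack.pop()
        if tree.children_left[node] != tree.children_right[node]:   # internal split node
            counts[depth, tree.feature[node]] += 1
            stack.append((tree.children_left[node], depth + 1))
            stack.append((tree.children_right[node], depth + 1))

# Normalize per level (assumed choice); plot e.g. with matplotlib's imshow.
freq = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
print("Most used feature at the root level:", freq[0].argmax())
```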

Multivariate spatio-temporal models are widely applicable, but specifying their structure is complicated and may inhibit wider use. We introduce the R package tinyVAST from two viewpoints: that of the software user and that of the statistician. From the user viewpoint, tinyVAST adapts a widely used formula interface for specifying generalized additive models and combines it with arguments that specify spatial and spatio-temporal interactions among variables. These interactions are specified using arrow notation (from structural equation models), or an extended arrow-and-lag notation that allows simultaneous, lagged, and recursive dependencies among variables over time. The user also specifies a spatial domain for areal (gridded), continuous (point-count), or stream-network data. From the statistician viewpoint, tinyVAST constructs sparse precision matrices representing multivariate spatio-temporal variation, and parameters are estimated by specifying a generalized linear mixed model (GLMM). This expressive interface encompasses vector autoregressive, empirical orthogonal function, spatial factor analysis, and ARIMA models. To demonstrate, we fit the model to data from two survey platforms sampling corals, sponges, rockfishes, and flatfishes in the Gulf of Alaska and the Aleutian Islands. We then compare eight alternative model structures using different assumptions about habitat drivers and survey detectability. Model selection suggests that towed-camera and bottom-trawl gears have spatial variation in detectability but sample the same underlying density of flatfishes and rockfishes, and that rockfishes are positively associated with sponges while flatfishes are negatively associated with corals. We conclude that tinyVAST can be used to test complicated dependencies representing alternative structural assumptions for research and real-world policy evaluation.
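
Schematically, and without reproducing tinyVAST's exact parameterization or formula syntax, the kind of GLMM being estimated can be written as:

```latex
% Illustrative form of a multivariate spatio-temporal GLMM with sparse-precision
% random effects (not tinyVAST's exact parameterization).
g(\mu_{s,t,c}) = \mathbf{x}_{s,t,c}^{\top}\boldsymbol{\beta} + \omega_{s,c} + \varepsilon_{s,t,c},
\qquad
\boldsymbol{\omega} \sim \mathrm{MVN}\!\left(\mathbf{0}, \mathbf{Q}_{\omega}^{-1}\right),
\quad
\boldsymbol{\varepsilon} \sim \mathrm{MVN}\!\left(\mathbf{0}, \mathbf{Q}_{\varepsilon}^{-1}\right)
```

where $s$ indexes space, $t$ time, and $c$ the variable (category); $g$ is the link function; and the sparse precision matrices $\mathbf{Q}$ encode the spatial domain and the user-specified interactions and lags among variables.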

While deep neural networks have achieved remarkable performance, data augmentation has emerged as a crucial strategy to mitigate overfitting and enhance network performance. These techniques hold particular significance in industrial manufacturing contexts. Recently, image mixing-based methods have been introduced, exhibiting improved performance on public benchmark datasets. However, their application to industrial tasks remains challenging. The manufacturing environment generates massive amounts of unlabeled data on a daily basis, with only a few instances of abnormal data occurrences. This leads to severe data imbalance. Thus, creating well-balanced datasets is not straightforward due to the high costs associated with labeling. Nonetheless, this is a crucial step for enhancing productivity. For this reason, we introduce ContextMix, a method tailored for industrial applications and benchmark datasets. ContextMix generates novel data by resizing entire images and integrating them into other images within the batch. This approach enables our method to learn discriminative features based on varying sizes from resized images and train informative secondary features for object recognition using occluded images. With the minimal additional computation cost of image resizing, ContextMix enhances performance compared to existing augmentation techniques. We evaluate its effectiveness across classification, detection, and segmentation tasks using various network architectures on public benchmark datasets. Our proposed method demonstrates improved results across a range of robustness tasks. Its efficacy in real industrial environments is particularly noteworthy, as demonstrated using the passive component dataset.
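
A hedged PyTorch sketch of the core batch operation described above follows; the placement strategy and the area-based label mixing are assumptions patterned on CutMix-style training, not the paper's exact rules.

```python
# Sketch of a ContextMix-style augmentation: a whole image from the batch is
# resized and pasted into another image, and labels are mixed by pasted area.
import torch
import torch.nn.functional as F

def contextmix_batch(images: torch.Tensor, labels: torch.Tensor, scale: float = 0.4):
    """images: (B, C, H, W); labels: (B,) integer class labels."""
    B, C, H, W = images.shape
    perm = torch.randperm(B)                       # pair each image with another one
    new_h, new_w = int(H * scale), int(W * scale)

    # Resize the *entire* paired image (not a crop) to the paste size.
    resized = F.interpolate(images[perm], size=(new_h, new_w),
                            mode="bilinear", align_corners=False)

    # Random top-left corner of the paste region (shared across the batch here).
    top = torch.randint(0, H - new_h + 1, (1,)).item()
    left = torch.randint(0, W - new_w + 1, (1,)).item()

    mixed = images.clone()
    mixed[:, :, top:top + new_h, left:left + new_w] = resized

    lam = 1.0 - (new_h * new_w) / (H * W)          # fraction of the original image kept
    return mixed, labels, labels[perm], lam

# Training would then use: loss = lam * CE(out, y_a) + (1 - lam) * CE(out, y_b)
```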

User interaction is one of the most effective ways to improve ontology alignment quality. However, this approach faces the challenge of how users can participate effectively in the matching process. To address this challenge, this paper proposes an interactive ontology alignment approach using a compact differential evolution algorithm with adaptive parameter control (IOACDE). In this method, the ontology alignment process is modeled as an interactive optimization problem and users are allowed to intervene in the matching in two ways. First, the mapping suggestions generated by IOACDE, taken as a complete candidate alignment, are evaluated by the user during the optimization process. Second, the user refines the alignment results by evaluating single mappings after the automatic matching process. To demonstrate the effectiveness of the proposed algorithm, a neural embedding model and a K-nearest-neighbor (KNN) classifier are employed to simulate the user on real-world ontologies. The experimental results show that the proposed interactive approach improves alignment quality compared to the non-interactive one. Compared with state-of-the-art methods from OAEI, the results show that the proposed algorithm performs better under the same error rate.
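
For illustration only, the sketch below shows a generic compact differential evolution loop in which a simulated user acts as the fitness oracle; it omits the adaptive parameter control and the two interaction modes of IOACDE, and `simulated_user_score` is a hypothetical stand-in for the embedding/KNN user model.

```python
# Sketch: compact DE (cDE) with a simulated user evaluating candidate solutions.
import numpy as np

rng = np.random.default_rng(1)
DIM = 5                                      # e.g. weights of similarity measures

def simulated_user_score(weights):
    """Hypothetical user feedback: higher means a better candidate alignment."""
    target = np.linspace(0.1, 0.9, DIM)      # pretend the user prefers these weights
    return -np.sum((weights - target) ** 2)

mu, sigma = np.full(DIM, 0.5), np.full(DIM, 0.3)   # probability vector of the virtual population
elite = np.clip(rng.normal(mu, sigma), 0, 1)
elite_score = simulated_user_score(elite)
F_scale, CR, n_virtual = 0.5, 0.9, 100

for _ in range(500):
    # Sample three virtual individuals and build a DE/rand/1 mutant.
    a, b, c = (np.clip(rng.normal(mu, sigma), 0, 1) for _ in range(3))
    mutant = np.clip(a + F_scale * (b - c), 0, 1)
    trial = np.where(rng.random(DIM) < CR, mutant, elite)

    trial_score = simulated_user_score(trial)        # "user" evaluates the candidate
    winner, loser = (trial, elite) if trial_score > elite_score else (elite, trial)
    if trial_score > elite_score:
        elite, elite_score = trial, trial_score

    # Standard cDE update of the probability vector toward the winner.
    new_mu = mu + (winner - loser) / n_virtual
    sigma = np.sqrt(np.abs(sigma**2 + mu**2 - new_mu**2 + (winner**2 - loser**2) / n_virtual))
    mu = new_mu

print("Learned weights:", np.round(elite, 2), "score:", round(elite_score, 4))
```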

Anderson acceleration (AA) is a technique for accelerating the convergence of an underlying fixed-point iteration. AA is widely used within computational science, with applications ranging from electronic structure calculation to the training of neural networks. Despite AA's widespread use, relatively little is understood about it theoretically. An important and unanswered question in this context is: to what extent can AA actually accelerate convergence of the underlying fixed-point iteration? While simple enough to state, this question appears rather difficult to answer. For example, it is unanswered even in the simplest (non-trivial) case where the underlying fixed-point iteration consists of applying a two-dimensional affine function. In this note we consider a restarted variant of AA applied to solve symmetric linear systems with a restart window of size one. Several results are derived from the analytical solution of a nonlinear eigenvalue problem characterizing residual propagation of the AA iteration. This includes a complete characterization of the method for solving $2 \times 2$ linear systems, rigorously quantifying how the asymptotic convergence factor depends on the initial iterate, and quantifying by how much AA accelerates the underlying fixed-point iteration. We also prove that even if the underlying fixed-point iteration diverges, the associated AA iteration may still converge.
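
For readers unfamiliar with the method, the sketch below applies restarted AA with a window of size one to the damped Richardson iteration for a small symmetric system; it is a toy illustration of the iteration being analyzed, not the paper's analysis.

```python
# Sketch: restarted Anderson acceleration, window size one (AA(1)), applied to
# the fixed-point map x <- x + omega*(b - A x), whose fixed point solves A x = b.
import numpy as np

A = np.array([[2.0, 0.5], [0.5, 1.0]])       # symmetric positive definite
b = np.array([1.0, -1.0])
omega = 0.5                                   # damping of the underlying iteration

def q(x):
    """Underlying fixed-point map."""
    return x + omega * (b - A @ x)

x = np.array([5.0, -3.0])                     # initial iterate
for k in range(100):
    # One restart cycle: a plain step, an AA(1) correction, then restart.
    x0, x1 = x, q(x)
    f0, f1 = q(x0) - x0, q(x1) - x1           # fixed-point residuals
    df = f1 - f0
    if df @ df < 1e-30 or np.linalg.norm(b - A @ x1) < 1e-12:
        x = x1
        break
    gamma = (f1 @ df) / (df @ df)             # one-dimensional least-squares coefficient
    x = q(x1) - gamma * (q(x1) - q(x0))       # AA(1) update

print(f"{k + 1} restart cycles, error = {np.linalg.norm(x - np.linalg.solve(A, b)):.2e}")
```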

Full-reference image quality metrics (FR-IQMs) aim to measure the visual differences between a pair of reference and distorted images, with the goal of accurately predicting human judgments. However, existing FR-IQMs, including traditional ones like PSNR and SSIM and even perceptual ones such as HDR-VDP, LPIPS, and DISTS, still fall short of capturing the complexities and nuances of human perception. In this work, rather than devising a novel IQM model, we seek to improve the perceptual quality of existing FR-IQM methods. We achieve this by considering visual masking, an important characteristic of the human visual system that changes its sensitivity to distortions as a function of local image content. Specifically, for a given FR-IQM metric, we propose to learn a visual masking model that modulates the reference and distorted images so that visual errors are penalized based on their visibility. Since ground-truth visual masks are difficult to obtain, we demonstrate how they can be derived in a self-supervised manner solely from mean opinion scores (MOS) collected for an FR-IQM dataset. Our approach results in enhanced FR-IQM metrics that are more in line with human judgments, both visually and quantitatively.
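
The "mask-then-measure" idea can be sketched as follows; the network architecture, the squared-error base metric, and the regression onto MOS are all illustrative assumptions rather than the paper's actual model or training procedure.

```python
# Sketch: a small network predicts a per-pixel mask that modulates reference and
# distorted images before a base full-reference error is computed.
import torch
import torch.nn as nn

class MaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),   # mask values in [0, 1]
        )
    def forward(self, x):
        return self.net(x)

def masked_metric(ref, dist, mask_net):
    m = mask_net(ref)                           # visibility mask from local content
    err = (m * ref - m * dist) ** 2             # errors weighted by their visibility
    return err.flatten(1).mean(dim=1)           # per-image score (lower = better)

# Training sketch: regress the masked score onto (inverted) MOS from an IQA dataset.
mask_net = MaskNet()
opt = torch.optim.Adam(mask_net.parameters(), lr=1e-4)
ref, dist = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)   # placeholder batch
mos = torch.rand(4)                                              # placeholder opinion scores
opt.zero_grad()
loss = torch.nn.functional.mse_loss(masked_metric(ref, dist, mask_net), 1.0 - mos)
loss.backward(); opt.step()
```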

The rise of AI in human contexts places new demands on automated systems to be transparent and explainable. We examine some anthropomorphic ideas and principles relevant to such accountability in order to develop a theoretical framework for thinking about digital systems in complex human contexts and the problem of explaining their behaviour. Structurally, systems are made of modular and hierarchical components, which we abstract in a new system model using notions of modes and mode transitions. A mode is an independent component of the system with its own objectives, monitoring data, and algorithms. The behaviour of a mode, including its transitions to other modes, is determined by belief functions that interpret each mode's monitoring data in the light of its objectives and algorithms. We show how these belief functions can help explain system behaviour by visualising their evaluation as trajectories in higher-dimensional geometric spaces. These ideas are formalised mathematically by abstract and concrete simplicial complexes. We offer three techniques: a framework for design heuristics, a general system theory based on modes, and a geometric visualisation, and we apply them to three types of human-centred systems.
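
A schematic rendering of the mode abstraction follows; the names, data, and belief-function signature are illustrative, not the paper's formalism.

```python
# Sketch: modes with belief functions that decide mode transitions from
# monitoring data, producing a trajectory that can be inspected or visualised.
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class Mode:
    name: str
    objectives: Dict[str, float]
    # Belief function: interprets monitoring data in the light of the mode's
    # objectives and returns the name of the next mode (possibly itself).
    belief: Callable[[Dict[str, Any], Dict[str, float]], str]

def cruise_belief(data, objectives):
    return "braking" if data["distance_m"] < objectives["safe_distance_m"] else "cruise"

def braking_belief(data, objectives):
    return "cruise" if data["distance_m"] > objectives["safe_distance_m"] else "braking"

modes = {
    "cruise": Mode("cruise", {"safe_distance_m": 30.0}, cruise_belief),
    "braking": Mode("braking", {"safe_distance_m": 30.0}, braking_belief),
}

# A run of the system is a trajectory of mode transitions driven by monitoring data.
current, trajectory = "cruise", []
for reading in [{"distance_m": d} for d in (50, 40, 25, 20, 35)]:
    mode = modes[current]
    current = mode.belief(reading, mode.objectives)
    trajectory.append(current)
print(trajectory)   # ['cruise', 'cruise', 'braking', 'braking', 'cruise']
```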

We initiate the study of Boolean function analysis on high-dimensional expanders. We give a random-walk based definition of high-dimensional expansion, which coincides with the earlier definition in terms of two-sided link expanders. Using this definition, we describe an analog of the Fourier expansion and the Fourier levels of the Boolean hypercube for simplicial complexes. Our analog is a decomposition into approximate eigenspaces of random walks associated with the simplicial complexes. Our random-walk definition and the decomposition have the additional advantage that they extend to the more general setting of posets, encompassing both high-dimensional expanders and the Grassmann poset, which appears in recent work on the unique games conjecture. We then use this decomposition to extend the Friedgut-Kalai-Naor theorem to high-dimensional expanders. Our results demonstrate that a constant-degree high-dimensional expander can sometimes serve as a sparse model for the Boolean slice or hypercube, and quite possibly additional results from Boolean function analysis can be carried over to this sparse model. Therefore, this model can be viewed as a derandomization of the Boolean slice, containing only $|X(k-1)|=O(n)$ points in contrast to $\binom{n}{k}$ points in the $k$-slice (which consists of all $n$-bit strings with exactly $k$ ones).
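
For reference, the classical statement on the hypercube that is being extended here is the Friedgut-Kalai-Naor theorem, which (up to the constant hidden in the $O(\cdot)$) reads:

```latex
% Classical FKN theorem on the Boolean hypercube.
\text{If } f\colon\{-1,1\}^n \to \{-1,1\} \text{ satisfies }
\sum_{|S|\le 1} \hat{f}(S)^2 \;\ge\; 1-\varepsilon,
\text{ then } f \text{ is } O(\varepsilon)\text{-close (in normalized Hamming distance)}
\text{ to a constant } \pm 1 \text{ or a dictator } \pm x_i.
```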

Open sets are central to mathematics, especially analysis and topology, in ways few notions are. In most, if not all, computational approaches to mathematics, open sets are only studied indirectly via their 'codes' or 'representations'. In this paper, we study how hard it is to compute, given an arbitrary open set of reals, the most common representation, i.e. a countable set of open intervals. We work in Kleene's higher-order computability theory, which was historically based on the S1-S9 schemes and which now has an intuitive lambda calculus formulation due to the authors. We establish many computational equivalences between, on one hand, the 'structure' functional that converts open sets to the aforementioned representation, and, on the other hand, functionals arising from mainstream mathematics, like basic properties of semi-continuous functions, the Urysohn lemma, and the Tietze extension theorem. We also compare these functionals to known operations on regulated and bounded-variation functions, and to the Lebesgue measure restricted to closed sets. We obtain a number of natural computational equivalences for the latter, involving theorems from mainstream mathematics.
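
Written schematically, the 'structure' functional studied here maps an open set to its interval representation; the notation below is illustrative, and the exact way the input set is given follows the paper's framework.

```latex
% Schematic type of the ``structure'' functional: from an arbitrary open set to
% the standard countable-interval representation.
\Theta : \big(U \subseteq \mathbb{R}\ \text{open}\big) \;\longmapsto\; \big((a_n, b_n)\big)_{n \in \mathbb{N}}
\quad\text{such that}\quad U \;=\; \bigcup_{n \in \mathbb{N}} (a_n, b_n).
```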
