
Crowdsourcing is the most frequently used method of collecting research data online, and its use continues to grow rapidly. This report investigates for the first time whether researchers should also expect significantly different hardware performance when deploying studies to Amazon Mechanical Turk (MTurk). We assess this by collecting basic hardware parameters (operating system, GPU, and browser used) from participants recruited via MTurk and via a traditional recruitment method (i.e., snowballing). We report the significant hardware differences between crowdsourced (MTurk) and snowball-recruited participants, together with descriptive statistics relevant for assessing the hardware performance of 3D web applications. The report suggests that hardware differences must be considered to obtain valid results whenever the experimental application requires graphically intensive computation and relies on a coherent user experience across MTurk and more established recruitment strategies (i.e., snowballing).
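
As an illustration of the kind of analysis such data enables, here is a minimal sketch that classifies logged browser and WebGL renderer strings per recruitment group; the record format and field names are hypothetical and do not reflect the study's actual pipeline.

```python
# Minimal sketch: classifying logged browser/GPU strings per recruitment group.
# Field names ("user_agent", "gl_renderer", "group") are hypothetical.
import re
from collections import Counter

def classify_os(user_agent: str) -> str:
    for pattern, label in [(r"Windows", "Windows"), (r"Mac OS X", "macOS"),
                           (r"Android", "Android"), (r"Linux", "Linux")]:
        if re.search(pattern, user_agent):
            return label
    return "Other"

def classify_gpu(gl_renderer: str) -> str:
    for vendor in ("NVIDIA", "AMD", "Intel", "Apple", "Adreno", "Mali"):
        if vendor.lower() in gl_renderer.lower():
            return vendor
    return "Other/Software"

def summarize(records):
    """Count (group, OS, GPU vendor) combinations for descriptive statistics."""
    stats = Counter()
    for r in records:
        stats[(r["group"], classify_os(r["user_agent"]), classify_gpu(r["gl_renderer"]))] += 1
    return stats

records = [
    {"group": "mturk", "user_agent": "Mozilla/5.0 (Windows NT 10.0)",
     "gl_renderer": "ANGLE (Intel(R) UHD Graphics 620)"},
    {"group": "snowball", "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 13_2)",
     "gl_renderer": "Apple M1"},
]
print(summarize(records))
```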

Related content

Stochastic inversion problems are typically encountered when one wants to quantify the uncertainty affecting the inputs of computer models. They consist in estimating input distributions from noisy, observable outputs, and such problems are increasingly examined in Bayesian contexts where the targeted inputs are affected by stochastic uncertainties. In this regard, a stochastic input can be qualified as meaningful if it explains most of the output uncertainty. While such inverse problems are characterized by identifiability conditions, "signal-to-noise" constraints that formalize this meaningfulness should be accounted for within the definition of the model, prior to inference. This article investigates the possibility of forcing a solution to be meaningful in the context of parametric uncertainty quantification, using the tools of global sensitivity analysis and information theory (variance, entropy, Fisher information). Such forcings mainly take the form of constraints on the input covariance and can be made explicit by considering linear or linearizable models. Simulated experiments indicate that, when injected into the modeling process, these constraints can limit the influence of measurement or process noise on the estimation of the input distribution, and suggest promising extensions to a fully non-linear framework, for example through the use of linear Gaussian mixtures.
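
As a minimal illustration of what such a covariance constraint could look like for a linear model Y = A X + eps, the sketch below checks and enforces a variance-based "signal to noise" requirement by rescaling the input covariance; the threshold and the rescaling rule are illustrative assumptions, not the article's procedure.

```python
# Sketch of a variance-based "signal to noise" constraint for a linear model
# Y = A X + eps, with X ~ N(m, Sigma_X) and eps ~ N(0, Sigma_eps).
# The threshold rho and enforcement-by-rescaling are illustrative choices.
import numpy as np

def explained_variance_share(A, Sigma_X, Sigma_eps):
    """Fraction of total output variance attributable to the stochastic input X."""
    signal = np.trace(A @ Sigma_X @ A.T)
    noise = np.trace(Sigma_eps)
    return signal / (signal + noise)

def enforce_meaningfulness(A, Sigma_X, Sigma_eps, rho=0.8):
    """Rescale Sigma_X so that X explains at least a share rho of the output variance."""
    if explained_variance_share(A, Sigma_X, Sigma_eps) >= rho:
        return Sigma_X
    signal = np.trace(A @ Sigma_X @ A.T)
    noise = np.trace(Sigma_eps)
    scale = rho * noise / ((1.0 - rho) * signal)   # solves the constraint with equality
    return scale * Sigma_X

A = np.array([[1.0, 0.5], [0.2, 1.5]])
Sigma_X = np.diag([0.1, 0.2])
Sigma_eps = np.diag([1.0, 1.0])
Sigma_X_constrained = enforce_meaningfulness(A, Sigma_X, Sigma_eps, rho=0.8)
print(explained_variance_share(A, Sigma_X_constrained, Sigma_eps))   # ~0.8
```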

Hardware implementations of Spiking Neural Networks (SNNs) represent a promising approach to edge computing for applications that require low power and low latency, and which cannot resort to external cloud-based computing services. However, most solutions proposed so far either support only relatively small networks, or require significant hardware resources to implement large networks. To realize large-scale and scalable SNNs it is necessary to develop an efficient asynchronous communication and routing fabric that enables the design of multi-core architectures. In particular, the core interface that manages inter-core spike communication is a crucial component, as it represents the Power-Performance-Area (PPA) bottleneck, especially its arbitration architecture and routing memory. In this paper we present an arbitration mechanism, with the corresponding asynchronous encoding pipeline circuits, based on hierarchical arbiter trees. The proposed scheme reduces latency by more than 70% in sparse-event mode compared to state-of-the-art arbitration architectures, at lower area cost. The routing memory makes use of an asynchronous Content Addressable Memory (CAM) with Current Sensing Completion Detection (CSCD), which saves approximately 46% energy and achieves a 40% increase in throughput compared with conventional asynchronous CAM using configurable delay lines, at the cost of only a slight increase in area. In addition, because they radically reduce the core interface resources in multi-core neuromorphic processors, the proposed arbitration and CAM architectures can also be applied to a wide range of general asynchronous circuits and systems.
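
For intuition, the following is a purely behavioral sketch of a hierarchical arbiter tree that resolves requests pairwise up the tree; it models grant selection only and does not capture the asynchronous handshaking, encoding pipeline, or CSCD circuits described above.

```python
# Behavioral sketch of a hierarchical (tree) arbiter: requests are resolved
# pairwise up the tree. Grant selection only; no asynchronous handshake modeled.
class ArbiterNode:
    def __init__(self, left, right):
        self.left, self.right = left, right
        self.prefer_left = True          # toggled at each node for fairness

    def grant(self, requests):
        """Return the index of the granted requester, or None if no requests."""
        l = self.left.grant(requests) if isinstance(self.left, ArbiterNode) \
            else (self.left if requests[self.left] else None)
        r = self.right.grant(requests) if isinstance(self.right, ArbiterNode) \
            else (self.right if requests[self.right] else None)
        if l is None:
            return r
        if r is None:
            return l
        winner = l if self.prefer_left else r
        self.prefer_left = not self.prefer_left
        return winner

def build_tree(indices):
    if len(indices) == 1:
        return indices[0]
    mid = len(indices) // 2
    return ArbiterNode(build_tree(indices[:mid]), build_tree(indices[mid:]))

root = build_tree(list(range(8)))            # 8 requesters -> 3-level arbiter tree
print(root.grant([0, 1, 0, 0, 1, 0, 0, 0]))  # grants one of the pending requesters (1 or 4)
```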

Strong-form meshless methods have received much attention in recent years and are being extensively researched and applied to a wide range of problems in science and engineering. However, the solution of elasto-plastic problems has proven to be elusive because of the often non-smooth constitutive relations between stress and strain. The novelty in tackling them is the introduction of virtual finite difference stencils to formulate a hybrid radial basis function generated finite difference (RBF-FD) method, which is used to solve small-strain von Mises elasto-plasticity for the first time with this original approach. The paper further contrasts the new method with two alternative legacy RBF-FD approaches, which fail when applied to this class of problems. The three approaches differ in the discretization of the divergence operator found in the balance equation, which acts on the non-smooth stress field. Additionally, an innovative stabilization technique is employed to stabilize the boundary conditions and is shown to be essential for any of the approaches to converge successfully. The approaches are assessed on elastic and elasto-plastic benchmarks, where admissible ranges of the newly introduced free parameters are studied with respect to stability, accuracy, and convergence rate.
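
As background for the RBF-FD building block, the sketch below computes local weights for the Laplacian at a node using polyharmonic splines with polynomial augmentation; the stencil and node layout are illustrative, and the paper's hybrid virtual-stencil construction for the divergence of the stress is not reproduced here.

```python
# Minimal sketch of RBF-FD weights for the Laplacian at a node, using the
# polyharmonic spline phi(r) = r^3 with degree-2 polynomial augmentation.
import numpy as np

def rbf_fd_laplacian_weights(center, nodes):
    """Weights w such that sum_i w_i u(x_i) approximates the Laplacian of u at `center`."""
    x, y = (nodes - center).T
    d = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    A = d ** 3                                                        # phi(r) = r^3
    P = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])     # degree-2 augmentation
    n, m = len(nodes), P.shape[1]
    M = np.block([[A, P], [P.T, np.zeros((m, m))]])
    r = np.linalg.norm(nodes - center, axis=-1)
    rhs = np.concatenate([9.0 * r,                                    # Laplacian of r^3 in 2D is 9 r
                          [0, 0, 0, 2, 0, 2]])                        # Laplacians of the monomials
    return np.linalg.solve(M, rhs)[:n]

center = np.array([0.0, 0.0])
nodes = np.array([[0.0, 0.0], [0.1, 0.0], [-0.1, 0.0], [0.0, 0.1],
                  [0.0, -0.1], [0.1, 0.1], [-0.1, -0.1]])
w = rbf_fd_laplacian_weights(center, nodes)
u = (nodes ** 2).sum(axis=1)            # u(x, y) = x^2 + y^2, so the true Laplacian is 4
print(w @ u)                            # ~4, exact for quadratics by construction
```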

Nonparametric two-sample tests such as the Maximum Mean Discrepancy (MMD) are often used to detect differences between two distributions in machine learning applications. However, the majority of the existing literature assumes that error-free samples from the two distributions of interest are available. We relax this assumption and study the estimation of the MMD under $\epsilon$-contamination, where a possibly non-random $\epsilon$ proportion of one distribution is erroneously grouped with the other. We show that under $\epsilon$-contamination the typical estimate of the MMD is unreliable. Instead, we study partial identification of the MMD and characterize sharp upper and lower bounds that contain the true, unknown MMD. We propose a method to estimate these bounds and show that it yields estimates converging to the sharpest possible bounds on the MMD as the sample size increases, at a rate faster than alternative approaches. Using three datasets, we empirically validate that our approach is superior to the alternatives: it gives tight bounds with a low false coverage rate.
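
For reference, the sketch below implements the standard unbiased MMD² estimator with a Gaussian kernel and shows how $\epsilon$-contamination distorts the naive estimate; the partial-identification bounds proposed in the paper are not implemented here.

```python
# Sketch of the standard (unbiased) MMD^2 estimator with a Gaussian kernel, and of
# eps-contamination: a fraction eps of the P sample actually comes from Q.
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    Kxx = gaussian_kernel(X, X, bandwidth); np.fill_diagonal(Kxx, 0.0)
    Kyy = gaussian_kernel(Y, Y, bandwidth); np.fill_diagonal(Kyy, 0.0)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    n, m = len(X), len(Y)
    return Kxx.sum() / (n * (n - 1)) + Kyy.sum() / (m * (m - 1)) - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
P = rng.normal(0.0, 1.0, size=(500, 1))
Q = rng.normal(1.0, 1.0, size=(500, 1))
eps = 0.2
contaminated_P = np.vstack([P[: int((1 - eps) * len(P))],          # eps-contamination of P
                            rng.normal(1.0, 1.0, size=(int(eps * len(P)), 1))])
print(mmd2_unbiased(P, Q), mmd2_unbiased(contaminated_P, Q))       # naive estimate shrinks under contamination
```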

We study the convergence and error estimates of a finite volume method for the compressible Navier-Stokes-Fourier system with Dirichlet boundary conditions. The physical fluid domain is typically smooth and needs to be approximated by a polygonal computational domain. This leads to domain-related discretization errors, the so-called variational crimes. To treat them efficiently we embed the fluid domain into a sufficiently large cubic domain and propose a finite volume scheme for the corresponding domain-penalized problem. Under the assumption that the numerical density and temperature are uniformly bounded, we derive the ballistic energy inequality, yielding a priori estimates and the consistency of the penalized finite volume approximations. Further, we show that the numerical solutions converge weakly to a generalized, so-called dissipative measure-valued, solution of the corresponding Dirichlet problem. If a strong solution exists, we prove that our numerical approximations converge strongly with rate 1/4. Additionally, assuming uniform boundedness of the approximate velocities, we obtain global existence of the strong solution. In this case we prove that the numerical solutions converge strongly to the strong solution with the optimal rate 1/2.
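
The embedding-and-penalization idea can be illustrated on a toy 1D heat equation with an explicit finite volume scheme, where cells outside the physical domain are penalized toward the Dirichlet datum; this is only a sketch of the penalization mechanism, not the Navier-Stokes-Fourier scheme analyzed above.

```python
# Toy illustration of domain penalization with a finite volume scheme: a 1D heat
# equation on the physical domain (0.25, 0.75), embedded in the larger box (0, 1).
# Outside the physical domain the solution is driven toward the Dirichlet value.
import numpy as np

N, L = 200, 1.0
h = L / N
x = (np.arange(N) + 0.5) * h                 # cell centers of the embedding box
inside = (x > 0.25) & (x < 0.75)             # indicator of the physical domain
u = np.where(inside, np.sin(np.pi * (x - 0.25) / 0.5), 0.0)
u_boundary, eta = 0.0, 1e-4                  # Dirichlet datum and penalization parameter
dt = 0.4 * h ** 2                            # stable explicit time step

for _ in range(2000):
    flux = np.diff(u) / h                    # numerical fluxes on interior faces
    div = np.zeros_like(u)
    div[1:-1] = (flux[1:] - flux[:-1]) / h
    # Penalization: outside the physical domain, drive u toward the boundary value.
    u = u + dt * (div - (~inside) * (u - u_boundary) / eta)

print(u[inside].max())                       # decays toward the boundary value over time
```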

This paper derives a discrete dual problem for a prototypical hybrid high-order method for convex minimization problems. The discrete primal and dual problems satisfy a weak convex duality that leads to a priori error estimates with convergence rates under additional smoothness assumptions. This duality holds for general polytopal meshes and arbitrary polynomial degree of the discretization. A novel postprocessing is proposed that allows for a posteriori error estimates on simplicial meshes using primal-dual techniques. This motivates an adaptive mesh-refining algorithm, which performs superiorly compared to uniform mesh refinement.
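
Adaptive algorithms of this kind typically iterate solve–estimate–mark–refine steps; the sketch below shows only a generic Dörfler (bulk) marking step applied to hypothetical cell-wise error estimators, not the HHO solver or the primal-dual estimator proposed in the paper.

```python
# Sketch of Dörfler (bulk) marking, the "mark" step of a solve-estimate-mark-refine loop.
import numpy as np

def doerfler_marking(local_estimators, theta=0.5):
    """Return indices of cells whose (squared) estimators carry at least a
    fraction theta of the total estimated error."""
    order = np.argsort(local_estimators)[::-1]           # largest contributions first
    cumulative = np.cumsum(local_estimators[order] ** 2)
    k = int(np.searchsorted(cumulative, theta * cumulative[-1])) + 1
    return order[:k]

eta = np.array([0.05, 0.30, 0.10, 0.02, 0.25, 0.01])     # hypothetical cell-wise estimators
print(doerfler_marking(eta, theta=0.5))                  # marks the cells with the largest eta
```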

We introduce a new tensor integration method for time-dependent PDEs that controls the tensor rank of the PDE solution via time-dependent diffeomorphic coordinate transformations. Such coordinate transformations are generated by minimizing the normal component of the PDE operator relative to the tensor manifold that approximates the PDE solution via a convex functional. The proposed method significantly improves upon and may be used in conjunction with the coordinate-adaptive algorithm we recently proposed in JCP (2023) Vol. 491, 112378, which is based on non-convex relaxations of the rank minimization problem and Riemannian optimization. Numerical applications demonstrating the effectiveness of the proposed coordinate-adaptive tensor integration method are presented and discussed for prototype Liouville and Fokker-Planck equations.
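
To illustrate the general notion of rank-controlled time integration (though not the coordinate transformations themselves), the sketch below performs explicit Euler steps for a matrix-valued heat equation followed by truncation to a fixed rank via the SVD; all discretization choices are illustrative.

```python
# Sketch of a rank-controlled time step for a 2D (matrix-valued) PDE solution:
# explicit Euler followed by truncation to rank r via the SVD. The paper's
# coordinate-adaptive transformations are not implemented; this only illustrates
# keeping the solution on a low-rank tensor manifold.
import numpy as np

def truncate(U, r):
    P, s, Qt = np.linalg.svd(U, full_matrices=False)
    return (P[:, :r] * s[:r]) @ Qt[:r, :]

def step(U, rhs, dt, r):
    """One explicit Euler step followed by rank-r truncation."""
    return truncate(U + dt * rhs(U), r)

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
U = np.outer(np.sin(x), np.cos(x))                          # rank-1 initial condition
lap1d = (np.roll(np.eye(n), 1, 0) - 2 * np.eye(n)
         + np.roll(np.eye(n), -1, 0)) / (x[1] - x[0]) ** 2  # periodic 1D Laplacian
rhs = lambda U: lap1d @ U + U @ lap1d.T                     # 2D heat equation
for _ in range(100):
    U = step(U, rhs, dt=1e-4, r=4)
print(np.linalg.matrix_rank(U))                             # stays <= 4 by construction
```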

Customising AI technologies to each user's preferences is fundamental to their functioning well. Unfortunately, current methods require too much user involvement and fail to capture users' true preferences. In fact, to avoid the nuisance of manually setting preferences, users usually accept the default settings even when these do not match their true preferences. Norms can be useful for regulating behaviour and ensuring that it adheres to user preferences, but, while the literature has studied norms thoroughly, most proposals take a formal perspective. Indeed, although there has been some research on constructing norms to capture a user's privacy preferences, these methods rely on domain knowledge which, in the case of AI technologies, is difficult to obtain and maintain. We argue that a new perspective is required when constructing norms: to exploit the large amount of preference information readily available from whole systems of users. Inspired by recommender systems, we believe that collaborative filtering offers a suitable approach to identifying a user's norm preferences without excessive user involvement.
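
As a minimal sketch of the collaborative-filtering idea applied to preference settings, the code below predicts a user's unset preference from similar users' choices; the user-preference matrix, the similarity measure, and the settings themselves are illustrative assumptions, not the proposal's actual mechanism.

```python
# Sketch of user-based collaborative filtering for predicting an unset preference.
# NaN marks a preference the user has not explicitly set.
import numpy as np

def predict(prefs, user, item):
    """Similarity-weighted average of other users' values for `item`."""
    known = ~np.isnan(prefs)
    scores, weights = 0.0, 0.0
    for other in range(prefs.shape[0]):
        if other == user or np.isnan(prefs[other, item]):
            continue
        shared = known[user] & known[other]
        if not shared.any():
            continue
        a, b = prefs[user, shared], prefs[other, shared]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)   # cosine similarity
        scores += sim * prefs[other, item]
        weights += abs(sim)
    return scores / weights if weights > 0 else np.nan

nan = np.nan          # rows: users, columns: privacy settings (1 = share, 0 = keep private)
prefs = np.array([[1.0, 0.0, 1.0, nan],
                  [1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 1.0]])
print(predict(prefs, user=0, item=3))   # 0.0: user 0 resembles user 1, who keeps item 3 private
```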

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation: the number of interactions is limited by resource constraints on computation, storage, and network communication. Scalability can be increased by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and is itself difficult to scale. In this paper we present four algorithms to address these problems. In combination, these algorithms enable each agent to improve its task-allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviour is constrained by resource-usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as network instability. Our solution is shown to solve the task-allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impaired, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
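
A minimal sketch of the adaptive-exploration idea is given below: a Q-learning allocator whose exploration rate grows when its recent temporal-difference errors are large, i.e., when its current strategy still seems far from optimal. It illustrates the principle only and does not reproduce the four distributed algorithms or the resource constraints described above.

```python
# Sketch of a Q-learning allocator whose exploration rate adapts to how settled
# its value estimates appear (here: the recent average absolute TD error).
import random
from collections import defaultdict, deque

class AdaptiveAllocator:
    def __init__(self, actions, alpha=0.1, gamma=0.9):
        self.q = defaultdict(float)             # Q[(state, action)]
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma
        self.recent_errors = deque(maxlen=50)   # window of recent TD errors

    def epsilon(self):
        # Explore more when recent TD errors are large (estimates still changing).
        if not self.recent_errors:
            return 1.0
        avg = sum(self.recent_errors) / len(self.recent_errors)
        return max(0.05, min(1.0, avg))

    def choose(self, state):
        if random.random() < self.epsilon():
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.recent_errors.append(abs(td_error))
        self.q[(state, action)] += self.alpha * td_error

# Toy usage: allocate a subtask to one of three agents with hidden success rates.
allocator = AdaptiveAllocator(actions=["agent_a", "agent_b", "agent_c"])
success = {"agent_a": 0.3, "agent_b": 0.9, "agent_c": 0.5}
for _ in range(2000):
    a = allocator.choose("subtask")
    allocator.update("subtask", a, reward=float(random.random() < success[a]),
                     next_state="subtask")
print(max(allocator.actions, key=lambda a: allocator.q[("subtask", a)]))   # usually "agent_b"
```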

Recent advances in 3D fully convolutional networks (FCNs) have made it feasible to produce dense voxel-wise predictions for volumetric images. In this work, we show that a multi-class 3D FCN trained on manually labeled CT scans of several anatomical structures (ranging from large organs to thin vessels) can achieve competitive segmentation results while avoiding the need for handcrafted features or class-specific models. To this end, we propose a two-stage, coarse-to-fine approach that first uses a 3D FCN to roughly define a candidate region, which is then used as input to a second 3D FCN. This reduces the number of voxels the second FCN has to classify to roughly 10% and allows it to focus on a more detailed segmentation of the organs and vessels. We use training and validation sets consisting of 331 clinical CT images and test our models on a completely unseen data collection of 150 CT scans acquired at a different hospital, targeting three anatomical organs (liver, spleen, and pancreas). In challenging organs such as the pancreas, our cascaded approach improves the mean Dice score from 68.5% to 82.2%, achieving the highest reported average score on this dataset. We compare with a 2D FCN method on a separate dataset of 240 CT scans with 18 classes and achieve significantly higher performance on small organs and vessels. Furthermore, we explore fine-tuning our models to different datasets. Our experiments illustrate the promise and robustness of current 3D FCN based semantic segmentation of medical images, achieving state-of-the-art results. Our code and trained models are available for download: //github.com/holgerroth/3Dunet_abdomen_cascade.
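
The coarse-to-fine cascade can be summarized as: run the first FCN, threshold its probability map to obtain a candidate region, crop a padded bounding box around it, and run the second FCN on the crop. The sketch below shows this flow with placeholder callables standing in for the trained 3D FCNs.

```python
# Sketch of the coarse-to-fine cascade: stage 1 proposes a candidate region whose
# padded bounding box is cropped and passed to stage 2. The two `model` callables
# are placeholders, not the trained 3D FCNs from the paper.
import numpy as np

def bounding_box(mask, margin=8):
    """Axis-aligned bounding box of the foreground voxels, padded by a margin."""
    coords = np.argwhere(mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, mask.shape)
    return tuple(slice(l, h) for l, h in zip(lo, hi))

def cascade_segment(volume, coarse_model, fine_model, threshold=0.5):
    coarse_prob = coarse_model(volume)            # stage 1: rough candidate region
    candidate = coarse_prob > threshold
    if not candidate.any():
        return np.zeros_like(volume, dtype=bool)
    box = bounding_box(candidate)
    fine_prob = fine_model(volume[box])           # stage 2: detailed segmentation of the crop
    segmentation = np.zeros_like(volume, dtype=bool)
    segmentation[box] = fine_prob > threshold
    return segmentation

# Toy usage with placeholder "models" that simply pass intensities through.
volume = np.zeros((64, 64, 64)); volume[20:40, 20:40, 20:40] = 1.0
seg = cascade_segment(volume, coarse_model=lambda v: v, fine_model=lambda v: v)
print(seg.sum())                                  # 8000 voxels, matching the synthetic organ
```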
