
Touchscreens equipped with friction modulation can provide rich tactile feedback to their users. To date, there are no standard metrics to properly quantify the benefit brought by haptic feedback. The definition of such metrics is not straightforward, since friction modulation can be achieved either with ultrasonic waves or with electroadhesion. In addition, the output depends strongly on the user, both because of the mechanical behavior of the fingertip and because of personal tactile somatosensory capabilities. This paper proposes a method to evaluate and compare the performance of haptic tablets on an objective scale. The method first defines multiple metrics using physical measurements of friction and latency. The comparison is completed with metrics derived from information theory and based on pointing tasks performed by users. We evaluated the comparison method with two haptic devices, one based on ultrasonic friction modulation and the other on electroadhesion. This work paves the way toward the definition of standard specifications for haptic tablets, to establish benchmarks and guidelines for improving surface haptic devices.
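
The abstract does not specify which information-theoretic pointing-task metric is used; a common choice in HCI is Fitts'-law throughput. The sketch below is only an illustration of that kind of metric, assuming recorded target distances, target widths, and movement times; the function name and data are hypothetical, not taken from the paper.

```python
import math

def fitts_throughput(distances, widths, movement_times):
    """Illustrative Fitts'-law throughput (bits/s) from pointing-task logs.

    Uses the Shannon formulation of the index of difficulty,
    ID = log2(D / W + 1), and averages ID / MT over trials.
    """
    rates = []
    for d, w, mt in zip(distances, widths, movement_times):
        index_of_difficulty = math.log2(d / w + 1.0)  # bits
        rates.append(index_of_difficulty / mt)        # bits per second
    return sum(rates) / len(rates)

# Hypothetical trials: (distance mm, width mm, movement time s)
print(fitts_throughput([80, 120, 60], [10, 10, 5], [0.45, 0.60, 0.50]))
```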

Related content

In Bayesian theory, the role of information is central. The influence exerted by prior information on posterior outcomes often jeopardizes Bayesian studies, due to the potentially subjective nature of the prior choice. When the studied model is not enriched with sufficient a priori information, reference prior theory emerges as a useful tool. Based on the mutual information criterion, the theory handles the construction of a non-informative prior whose choice can be called objective. We propose a generalization of the mutual information definition, justifying our choice through an interpretation based on an analogy with Global Sensitivity Analysis. A class of our generalized metrics is studied, and our results reinforce the choice of Jeffreys' prior, which satisfies our extended definition of a reference prior.
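
For reference, the standard mutual information criterion of reference prior theory, and the Jeffreys prior it recovers in regular one-dimensional models, can be written as follows. This is the classical definition, not the generalized one proposed in the paper.

```latex
% Mutual information between the parameter \theta (with prior \pi)
% and the data X (with likelihood p(x \mid \theta)):
I(\pi) \;=\; \int \pi(\theta) \int p(x \mid \theta)\,
        \log \frac{p(\theta \mid x)}{\pi(\theta)} \, dx \, d\theta .

% The reference prior maximizes I(\pi) (in an asymptotic sense).
% In regular one-dimensional models this recovers the Jeffreys prior,
\pi_J(\theta) \;\propto\; \sqrt{\mathcal{I}(\theta)},
\qquad
\mathcal{I}(\theta) = \mathbb{E}_{x \mid \theta}\!\left[
  -\frac{\partial^2}{\partial \theta^2} \log p(x \mid \theta) \right].
```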

Image codecs are typically optimized to trade off bitrate against distortion metrics. At low bitrates, this leads to compression artefacts which are easily perceptible, even when training with perceptual or adversarial losses. To improve image quality, and to make it less dependent on the bitrate, we propose to decode with iterative diffusion models, instead of the feed-forward decoders trained with MSE or LPIPS distortions that are used in most neural codecs. In addition to conditioning the model on a vector-quantized image representation, we also condition on a global textual image description to provide additional context. We dub our model PerCo for 'perceptual compression', and compare it to state-of-the-art codecs at rates from 0.1 down to 0.003 bits per pixel. The latter rate is an order of magnitude smaller than those considered in most prior work. At this bitrate, a 512x768 Kodak image is encoded in less than 153 bytes. Despite this ultra-low bitrate, our approach maintains the ability to reconstruct realistic images. We find that our model leads to reconstructions with state-of-the-art visual quality as measured by FID and KID, and that the visual quality is less dependent on the bitrate than with previous methods.
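
The byte count quoted above follows directly from the bitrate; a quick arithmetic check (not part of the codec itself):

```python
# Bytes needed to encode a 512x768 image at 0.003 bits per pixel.
width, height, bits_per_pixel = 512, 768, 0.003
total_bits = width * height * bits_per_pixel
print(total_bits / 8)  # ~147.5 bytes, consistent with "less than 153 bytes"
```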

The original Hotelling-Solomons inequality states that |mean - median|/(standard deviation) is bounded above by 1. In this note, we find a new bound that depends on the sample size and is strictly smaller than 1.
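
The classical bound is easy to check numerically. The snippet below applies the inequality to the empirical distribution of random samples; it is illustrative only and does not reproduce the note's sharper sample-size-dependent bound.

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (5, 20, 100):
    x = rng.exponential(size=n)                       # a skewed sample
    ratio = abs(x.mean() - np.median(x)) / x.std()    # population formula on the empirical distribution
    print(n, ratio)                                   # never exceeds 1 (Hotelling-Solomons)
```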

As models grow larger and more complex, achieving better off-sample generalization with minimal trial-and-error is critical to the reliability and economy of machine learning workflows. As a proxy for the well-studied heuristic of seeking "flat" local minima, gradient regularization is a natural avenue, and first-order approximations such as Flooding and sharpness-aware minimization (SAM) have received significant attention, but their performance depends critically on hyperparameters (flood threshold and neighborhood radius, respectively) that are non-trivial to specify in advance. In order to develop a procedure which is more resilient to misspecified hyperparameters, with the hard-threshold "ascent-descent" switching device used in Flooding as motivation, we propose a softened, pointwise mechanism called SoftAD that downweights points on the borderline, limits the effects of outliers, and retains the ascent-descent effect. We contrast formal stationarity guarantees with those for Flooding, and empirically demonstrate how SoftAD can realize classification accuracy competitive with SAM and Flooding while maintaining a much smaller loss generalization gap and model norm. Our empirical tests range from simple binary classification on the plane to image classification using neural networks with millions of parameters; the key trends are observed across all datasets and models studied, and suggest a potential new approach to implicit regularization.
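
For context, the hard-threshold Flooding objective referenced above has the simple closed form |L - b| + b with flood level b (Ishida et al., 2020). The softened per-point variant below is only a sketch of the "downweight borderline points, bound the effect of outliers, keep the ascent-descent switch" idea; it is not the paper's exact SoftAD objective, and the smoothing function and sigma parameter are assumptions.

```python
import numpy as np

def flooding(losses, b):
    """Flooding (Ishida et al., 2020): |mean loss - b| + b, so the update
    ascends when the mean loss is below the flood level b and descends
    when it is above."""
    return np.abs(np.mean(losses) - b) + b

def soft_ascent_descent(losses, b, sigma=0.5):
    """Illustrative softened, pointwise ascent-descent (a sketch only, NOT
    the paper's exact SoftAD formula): a smooth, bounded penalty of each
    point's deviation from b. Its gradient is small near the threshold
    (downweights borderline points), vanishes for extreme deviations
    (limits outliers), and changes sign at b (ascent-descent)."""
    u = losses - b
    return np.mean(sigma**2 * u**2 / (sigma**2 + u**2)) + b

per_example_losses = np.array([0.02, 0.10, 0.45, 2.30])
print(flooding(per_example_losses, b=0.30))
print(soft_ascent_descent(per_example_losses, b=0.30))
```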

Spectral independence is a recently-developed framework for obtaining sharp bounds on the convergence time of the classical Glauber dynamics. This new framework has yielded optimal $O(n \log n)$ sampling algorithms on bounded-degree graphs for a large class of problems throughout the so-called uniqueness regime, including, for example, the problems of sampling independent sets, matchings, and Ising-model configurations. Our main contribution is to relax the bounded-degree assumption that has so far been important in establishing and applying spectral independence. Previous methods for avoiding degree bounds rely on using $L^p$-norms to analyse contraction on graphs with bounded connective constant (Sinclair, Srivastava, Yin; FOCS'13). The non-linearity of $L^p$-norms is an obstacle to applying these results to bound spectral independence. Our solution is to capture the $L^p$-analysis recursively by amortising over the subtrees of the recurrence used to analyse contraction. Our method generalises previous analyses that applied only to bounded-degree graphs. As a main application of our techniques, we consider the random graph $G(n,d/n)$, where previously known algorithms run in time $n^{O(\log d)}$ or apply only to large $d$. We refine these algorithmic bounds significantly, and develop fast $n^{1+o(1)}$ algorithms based on Glauber dynamics that apply to all $d$, throughout the uniqueness regime.
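
Glauber dynamics itself is simple to state. The sketch below runs single-site updates for the hardcore (independent set) model on an arbitrary graph, purely as an illustration of the chain whose mixing time is analysed; the example graph and fugacity value are hypothetical.

```python
import random

def glauber_hardcore(adj, fugacity, steps, seed=0):
    """Single-site Glauber dynamics for the hardcore model: at each step,
    pick a vertex uniformly at random; if no neighbour is occupied, occupy
    it with probability fugacity / (1 + fugacity), otherwise leave it empty."""
    rng = random.Random(seed)
    occupied = set()
    vertices = list(adj)
    for _ in range(steps):
        v = rng.choice(vertices)
        occupied.discard(v)
        if not any(u in occupied for u in adj[v]):
            if rng.random() < fugacity / (1.0 + fugacity):
                occupied.add(v)
    return occupied

# Hypothetical example: a 4-cycle with fugacity 1.0
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(glauber_hardcore(adj, fugacity=1.0, steps=10_000))
```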

A key challenge when trying to understand innovation is that it is a dynamic, ongoing process, which can be highly contingent on ephemeral factors such as culture, economics, or luck. This means that any analysis of the real-world process must necessarily be historical - and thus probably too late to be most useful - but also cannot be sure what the properties of the web of connections between innovations are or were. Here I try to address this by designing and generating a set of synthetic innovation web "dictionaries" that can be used to host sampled innovation timelines, probe the overall statistics and behaviours of these processes, and determine the degree of their reliance on the structure or generating algorithm. Thus, inspired by the work of Fink, Reeves, Palma and Farr (2017) on innovation in language, gastronomy, and technology, I study how new symbol discovery manifests itself in terms of the additional "word" vocabulary that becomes available from dictionaries generated from a finite number of symbols. Several distinct dictionary generation models are investigated using numerical simulation, with emphasis on the scaling of knowledge as dictionary generators and parameters are varied, and on the role of the order in which symbols are discovered.
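
The abstract does not specify the dictionary generation models, so the toy simulation below is only a sketch of the general setup it describes: words are built from a finite symbol set, and discovering symbols unlocks the words composed entirely of known symbols. All parameters and the word-construction rule are assumptions for illustration.

```python
import random

def vocabulary_growth(n_symbols=8, word_length=3, n_words=200, seed=0):
    """Toy version of the dictionary experiments described above (all
    parameters hypothetical): build a random dictionary of fixed-length
    words over a finite symbol set, then discover symbols one at a time in
    random order and count how many words are fully unlocked at each step."""
    rng = random.Random(seed)
    symbols = list(range(n_symbols))
    dictionary = [tuple(rng.choices(symbols, k=word_length)) for _ in range(n_words)]
    discovered, growth = set(), []
    for s in rng.sample(symbols, n_symbols):       # a random discovery order
        discovered.add(s)
        growth.append(sum(all(c in discovered for c in w) for w in dictionary))
    return growth

print(vocabulary_growth())  # cumulative vocabulary after each new symbol
```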

Hierarchical time series are common in several applied fields. The forecasts for these time series are required to be coherent, that is, to satisfy the constraints given by the hierarchy. The most popular technique to enforce coherence is called reconciliation, which adjusts the base forecasts computed for each time series. However, recent works on probabilistic reconciliation present several limitations. In this paper, we propose a new approach based on conditioning to reconcile any type of forecast distribution. We then introduce a new algorithm, called Bottom-Up Importance Sampling, to efficiently sample from the reconciled distribution. It can be used for any base forecast distribution: discrete, continuous, or in the form of samples, providing a major speedup compared to the current methods. Experiments on several temporal hierarchies show a significant improvement over base probabilistic forecasts.
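
The conditioning idea behind the reconciliation can be illustrated on a minimal hierarchy with two bottom series and one total. The sketch below uses made-up Gaussian base forecasts and plain importance resampling; it is not the paper's full Bottom-Up Importance Sampling algorithm, only a simplified illustration of conditioning the bottom forecasts on the base forecast of the total.

```python
import numpy as np

def reconcile_via_importance_sampling(n=100_000, seed=0):
    """Minimal sketch of probabilistic reconciliation by conditioning
    (illustrative only). Hierarchy: total = bottom1 + bottom2."""
    rng = np.random.default_rng(seed)
    # Incoherent base forecasts: bottoms and total are forecast separately.
    b1 = rng.normal(10.0, 2.0, n)
    b2 = rng.normal(5.0, 1.0, n)
    total_mean, total_sd = 18.0, 3.0
    # Weight each bottom sample by the base density of the total
    # evaluated at the aggregated value (the conditioning step).
    w = np.exp(-0.5 * ((b1 + b2 - total_mean) / total_sd) ** 2)
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)      # resample -> reconciled draws
    b1r, b2r = b1[idx], b2[idx]
    return b1r.mean(), b2r.mean(), (b1r + b2r).mean()

print(reconcile_via_importance_sampling())
```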

In this paper, we propose nonlocal diffusion models with Dirichlet boundary conditions. These nonlocal diffusion models preserve the maximum principle and also have a corresponding variational form. With these good properties, it is relatively easy to prove well-posedness and vanishing nonlocality convergence. Furthermore, with a specifically designed weight function, we obtain a nonlocal diffusion model with second-order convergence, which is optimal for nonlocal diffusion models.
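
For orientation, a generic nonlocal diffusion operator with horizon $\delta$ can be written as below; this is only the standard template, and the paper's specific weight function and Dirichlet boundary treatment may differ.

```latex
% Generic nonlocal diffusion operator with horizon \delta and kernel
% R_\delta (illustrative form only):
\mathcal{L}_\delta u(\mathbf{x})
  \;=\; \frac{1}{\delta^{2}} \int_{\Omega}
    R_\delta(\mathbf{x}, \mathbf{y}) \,
    \bigl( u(\mathbf{y}) - u(\mathbf{x}) \bigr) \, d\mathbf{y},
  \qquad \mathbf{x} \in \Omega .

% Vanishing nonlocality: the nonlocal model is expected to recover the
% local diffusion operator as the horizon shrinks,
\mathcal{L}_\delta u \;\longrightarrow\; \Delta u
\quad \text{as } \delta \to 0 .
```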

We introduce a general differentiable solver for time-dependent deformation problems with contact and friction. Our approach uses a finite element discretization with a high-order time integrator, coupled with the recently proposed incremental potential contact method for handling contact and friction forces, to solve PDE- and ODE-constrained optimization problems on scenes with complex geometry. It supports static and dynamic problems and differentiation with respect to all physical parameters involved in the problem description, including shape, material parameters, friction parameters, and initial conditions. Our analytically derived adjoint formulation is efficient, with a small overhead (typically less than 10% for nonlinear problems) over the forward simulation, and shares many similarities with the forward problem, allowing the reuse of large parts of existing forward simulator code. We implement our approach on top of the open-source PolyFEM library, and demonstrate the applicability of our solver to shape design, initial condition optimization, and material estimation, both on simulated results and in physical validations.
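
The adjoint formulation mentioned above follows the standard pattern for PDE/ODE-constrained optimization; written generically (not in the paper's notation), one forward solve plus one linear adjoint solve yields the gradient with respect to all parameters, which is why the overhead over the forward simulation is small.

```latex
% Generic adjoint formulation for an objective J(u, p) with state u,
% parameters p, and residual (discretized PDE/ODE) R(u, p) = 0:
\min_{p} \; J(u(p), p)
\quad \text{s.t.} \quad R(u(p), p) = 0 .

% Adjoint equation and parameter gradient:
\left( \frac{\partial R}{\partial u} \right)^{\!\top} \lambda
   \;=\; \left( \frac{\partial J}{\partial u} \right)^{\!\top},
\qquad
\frac{dJ}{dp}
   \;=\; \frac{\partial J}{\partial p}
   \;-\; \lambda^{\top} \frac{\partial R}{\partial p}.
```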

In human-AI collaboration systems for critical applications, users should set an operating point based on model confidence to determine when a decision should be delegated to human experts, in order to ensure minimal error. Samples for which model confidence is lower than the operating point would be manually analysed by experts to avoid mistakes. Such systems become truly useful only if they consider two aspects: models should be confident only on samples for which they are accurate, and the number of samples delegated to experts should be minimized. The latter aspect is especially crucial for applications where available expert time is limited and expensive, such as healthcare. The trade-off between model accuracy and the number of samples delegated to experts can be represented by a curve similar to an ROC curve, which we refer to as the confidence operating characteristic (COC) curve. In this paper, we argue that deep neural networks should be trained by taking into account both accuracy and expert load and, to that end, propose a new complementary loss function for classification that maximizes the area under this COC curve. This simultaneously promotes higher network accuracy and a reduction in the number of samples delegated to humans. We perform experiments on multiple computer vision and medical image classification datasets. Our results demonstrate that the proposed loss improves classification accuracy, delegates fewer decisions to experts, achieves better out-of-distribution sample detection, and attains calibration performance on par with existing loss functions.
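
As an evaluation-time illustration of the curve described above (not the paper's training loss), the sketch below sweeps the confidence operating point over held-out predictions, delegates the least-confident samples, and records the model's accuracy on the samples it keeps; the data and variable names are hypothetical.

```python
import numpy as np

def coc_curve(confidences, correct):
    """Confidence operating characteristic (COC) curve: for each operating
    point, delegate the k least-confident samples to the expert and measure
    accuracy on the kept samples. Returns (fraction delegated, accuracy,
    area under the curve)."""
    confidences = np.asarray(confidences, float)
    correct = np.asarray(correct, float)
    c = correct[np.argsort(confidences)]       # least confident first
    n = len(c)
    frac = np.array([k / n for k in range(n)])
    acc = np.array([c[k:].mean() for k in range(n)])
    area = float(np.sum(np.diff(frac) * (acc[1:] + acc[:-1]) / 2))  # trapezoidal AUC
    return frac, acc, area

# Hypothetical held-out predictions: confidence and correctness per sample.
conf = [0.55, 0.60, 0.72, 0.80, 0.90, 0.95, 0.97, 0.99]
corr = [0,    1,    0,    1,    1,    1,    1,    1   ]
frac, acc, auc = coc_curve(conf, corr)
print(auc)
```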
