
Our interest was to evaluate changes in fixation duration as a function of time-on-task (TOT) during a random saccade task. We employed a large, publicly available dataset. The frequency histogram of fixation durations was multimodal and was modelled as a Gaussian mixture, yielding five fixation types. The "ideal" response would be a single accurate saccade after each target movement, with a typical saccade latency of 200-250 msec, followed by a long fixation (> 800 msec) until the next target jump. We found fixations like this, but they comprised only 10% of all fixations and were the first fixation after target movement only 23.4% of the time. More frequently (57.4% of the time), the first fixation after target movement was short (117.7 msec mean) and was commonly followed by a corrective saccade. Across the entire 100 sec of the task, median total fixation duration decreased. This decrease was approximated with a power law fit with R^2 = 0.94. A detailed examination of the frequency of each of our five fixation types over TOT revealed that the three shortest-duration fixation types became increasingly frequent with TOT, whereas the two longest became less frequent. In all cases, the changes over TOT followed power law relationships, with R^2 values between 0.73 and 0.93. We conclude that, over the 100-second duration of our task, long fixations are common in the first 15 to 22 seconds but become less common after that, whereas short fixations are relatively uncommon in the first 15 to 22 seconds but become increasingly common as the task progresses. Apparently, the ability to produce an ideal response, although somewhat likely in the first 22 seconds, rapidly declines. This might be related to a noted decline in saccade accuracy over time.
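
As a concrete illustration of the reported trend, a power law of the form duration = a * TOT^b can be fitted by linear regression in log-log space. The sketch below uses synthetic stand-in data; the bin centers, noise level, and parameter values are illustrative assumptions, not the study's measurements:

```python
import numpy as np

# Illustrative power-law fit: median fixation duration vs. time-on-task (TOT).
# The data here are synthetic stand-ins, not the study's measurements.
rng = np.random.default_rng(42)
tot = np.arange(5.0, 100.0, 5.0)                      # TOT bin centers (sec)
median_dur = 400.0 * tot ** -0.25 * rng.lognormal(0.0, 0.03, tot.size)

# Fit log(duration) = log(a) + b*log(TOT), i.e. duration = a * TOT^b
b, log_a = np.polyfit(np.log(tot), np.log(median_dur), 1)
resid = np.log(median_dur) - (log_a + b * np.log(tot))
r2 = 1.0 - resid.var() / np.log(median_dur).var()
print(f"duration ≈ {np.exp(log_a):.1f} * TOT^{b:.2f}, R^2 = {r2:.2f}")
```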

Related content

Total variation regularization has proven to be a valuable tool in the context of optimal control of differential equations. This is particularly attributed to the observation that TV-penalties often favor piecewise constant minimizers with well-behaved jumpsets. On the downside, their intricate properties significantly complicate every aspect of their analysis, from the derivation of first-order optimality conditions to their discrete approximation and the choice of a suitable solution algorithm. In this paper, we investigate a general class of minimization problems with TV-regularization, comprising both continuous and discretized control spaces, from a convex geometry perspective. This leads to a variety of novel theoretical insights on minimization problems with total variation regularization as well as tools for their practical realization. First, by studying the extremal points of the respective total variation unit balls, we enable their efficient solution by geometry-exploiting algorithms, e.g., fully-corrective generalized conditional gradient methods. We give a detailed account of the practical realization of such a method for piecewise constant finite element approximations of the control on triangulations of the spatial domain. Second, in the same setting and for suitable sequences of uniformly refined meshes, it is shown that minimizers to discretized PDE-constrained optimal control problems approximate solutions to a continuous limit problem involving an anisotropic total variation reflecting the fine-scale geometry of the mesh.
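
For orientation, a generic prototype of this problem class (our notation, introduced here for illustration: $S$ denotes the control-to-state map of the differential equation, $y_d$ a target state, and $\alpha > 0$ the regularization weight) reads

\[
\min_{u \in \mathrm{BV}(\Omega)} \; \frac{1}{2}\,\| S u - y_d \|_{L^2(\Omega)}^2 \;+\; \alpha\, \mathrm{TV}(u),
\]

where minimizers are sought among functions of bounded variation and the TV term is what favors piecewise constant controls.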

We theoretically explore boundary conditions for lattice Boltzmann methods, focusing on a toy two-velocity scheme. By mapping lattice Boltzmann schemes to Finite Difference schemes, we facilitate rigorous consistency and stability analyses. We develop kinetic boundary conditions for inflows and outflows, highlighting the trade-off between accuracy and stability, which we successfully overcome. Consistency analysis relies on modified equations, whereas stability is assessed using GKS (Gustafsson, Kreiss, and Sundström) theory and -- when this approach fails on coarse meshes -- spectral and pseudo-spectral analyses of the scheme's matrix that explain effects germane to low resolutions.
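
For intuition, a minimal two-velocity (D1Q2) lattice Boltzmann scheme for the linear advection equation, with a prescribed kinetic inflow on the left and a crude extrapolation outflow on the right, might look as follows. All parameters are illustrative; this is a sketch of the general scheme type, not the paper's exact construction:

```python
import numpy as np

# D1Q2 lattice Boltzmann sketch for u_t + c u_x = 0 (illustrative parameters).
L, N, T = 1.0, 200, 0.5
c, lam = 0.7, 1.0                         # advection speed, lattice velocity (|c| < lam)
dx = L / N; dt = dx / lam
omega = 1.7                               # BGK relaxation rate in (0, 2)

x = np.linspace(0.0, L, N, endpoint=False)
u = np.exp(-100.0 * (x - 0.3) ** 2)       # initial datum
fp = 0.5 * u * (1 + c / lam)              # right-moving population, at equilibrium
fm = 0.5 * u * (1 - c / lam)              # left-moving population

for _ in range(int(T / dt)):
    u = fp + fm                           # conserved moment
    fp += omega * (0.5 * u * (1 + c / lam) - fp)   # collide
    fm += omega * (0.5 * u * (1 - c / lam) - fm)
    fp = np.roll(fp, 1)                   # stream right-movers
    fm = np.roll(fm, -1)                  # stream left-movers
    fp[0] = 0.0                           # kinetic inflow: prescribe incoming f+
    fm[-1] = fm[-2]                       # crude outflow: extrapolate incoming f-
```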

The Gaussian Process (GP) is a highly flexible non-linear regression method that provides a principled approach to handling our uncertainty over predicted (counterfactual) values. It does so by computing a posterior distribution over predicted points as a function of a chosen model space and the observed data, in contrast to conventional approaches that effectively compute uncertainty estimates conditionally on placing full faith in a fitted model. This is especially valuable under conditions of extrapolation or weak overlap, where model dependency poses a severe threat. We first offer an accessible explanation of GPs and provide an implementation suitable to social science inference problems. In doing so we reduce the number of user-chosen hyperparameters from three to zero. We then illustrate the settings in which GPs can be most valuable: those where conventional approaches have poor properties due to model dependency and extrapolation in data-sparse regions. Specifically, we apply it to (i) comparisons in which treated and control groups have poor covariate overlap; (ii) interrupted time-series designs, where models are fitted prior to an event and extrapolated after it; and (iii) regression discontinuity, which depends on model estimates taken at or just beyond the edge of their supporting data.
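
A minimal sketch of GP regression whose predictive uncertainty widens away from the data, using scikit-learn (the kernel choice and data are illustrative assumptions; the paper's zero-hyperparameter implementation is not reproduced here):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(30, 1))            # observations only on [0, 5]
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 30)

# Kernel hyperparameters are tuned by marginal likelihood inside fit().
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

X_new = np.linspace(0, 8, 9).reshape(-1, 1)    # beyond x = 5: extrapolation
mean, sd = gp.predict(X_new, return_std=True)
for xi, m, s in zip(X_new[:, 0], mean, sd):
    print(f"x={xi:.1f}  pred={m:+.2f} ± {1.96 * s:.2f}")  # intervals widen past the data
```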

This work aims to improve texture inpainting after clutter removal in scanned indoor meshes. This is achieved with a new UV mapping pre-processing step which leverages semantic information of indoor scenes to more accurately match the UV islands with the 3D representation of distinct structural elements like walls and floors. Semantic UV Mapping enriches classic UV unwrapping algorithms by not only relying on geometric features but also visual features originating from the present texture. The segmentation improves the UV mapping and simultaneously simplifies the 3D geometric reconstruction of the scene after the removal of loose objects. Each segmented element can be reconstructed separately using the boundary conditions of the adjacent elements. Because this is performed as a pre-processing step, other specialized methods for geometric and texture reconstruction can be used in the future to improve the results even further.

Machine translation often suffers from biased data and algorithms that can lead to unacceptable errors in system output. While bias in gender norms has been investigated, less is known about whether MT systems encode bias about social relationships, e.g., "the lawyer kissed her wife." We investigate the degree of bias against same-gender relationships in MT systems, using generated template sentences drawn from several noun-gender languages (e.g., Spanish) and comprised of popular occupation nouns. We find that three popular MT services consistently fail to accurately translate sentences concerning relationships between entities of the same gender. The error rate varies considerably based on the context, and same-gender sentences referencing high female-representation occupations are translated with lower accuracy. We provide this work as a case study in the evaluation of intrinsic bias in NLP systems with respect to social relationships.
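
A sketch of how such template sentences might be generated for evaluation; the occupations, relationship nouns, and template below are illustrative placeholders, not the paper's actual materials:

```python
from itertools import product

# Illustrative templates; the paper's actual template set is not reproduced here.
occupations = ["lawyer", "doctor", "nurse", "teacher"]
partners = [("her", "wife"), ("his", "husband"),      # same-gender pairs
            ("her", "husband"), ("his", "wife")]      # different-gender pairs

sentences = [f"The {occ} kissed {poss} {spouse}."
             for occ, (poss, spouse) in product(occupations, partners)]
# Each English source sentence would then be machine-translated into a
# noun-gender target language (e.g., Spanish) and checked for agreement errors.
print(sentences[0])   # "The lawyer kissed her wife."
```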

We present exact non-Gaussian joint likelihoods for auto- and cross-correlation functions on arbitrarily masked spherical Gaussian random fields. Our considerations apply to spin-0 as well as spin-2 fields but are demonstrated here for the spin-2 weak-lensing correlation function. We motivate that this likelihood cannot be Gaussian and show how it can nevertheless be calculated exactly for any mask geometry and on a curved sky, as well as jointly for different angular-separation bins and redshift-bin combinations. Splitting our calculation into a large- and small-scale part, we apply a computationally efficient approximation for the small scales that does not alter the overall non-Gaussian likelihood shape. To compare our exact likelihoods to correlation-function sampling distributions, we simulate a large number of weak-lensing maps, including shape noise, and find excellent agreement for one-dimensional as well as two-dimensional distributions. Furthermore, we compare the exact likelihood to the widely employed Gaussian likelihood and find significant levels of skewness at angular separations $\gtrsim 1^{\circ}$ such that the mode of the exact distributions is shifted away from the mean towards lower values of the correlation function. We find that the assumption of a Gaussian random field for the weak-lensing field holds well at these angular separations. Considering the skewness of the non-Gaussian likelihood, we evaluate its impact on the posterior constraints on $S_8$. On a simplified weak-lensing-survey setup with an area of $10 \ 000 \ \mathrm{deg}^2$, we find that the posterior mean of $S_8$ is up to $2\%$ higher when using the non-Gaussian likelihood, a shift comparable to the precision of current stage-III surveys.
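
The intuition for the skewness can be reproduced with a toy Monte Carlo: an estimator that averages the squares of a small number of Gaussian modes follows a scaled chi-squared distribution, whose mode lies below its mean. This is an illustrative analogue under simplified assumptions, not the paper's exact likelihood:

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
n_modes, n_real = 50, 100_000      # few contributing modes per angular bin
a = rng.normal(size=(n_real, n_modes))
xi_hat = (a ** 2).mean(axis=1)     # variance-like two-point estimator

print(f"mean = {xi_hat.mean():.3f}, skewness = {skew(xi_hat):.3f}")
# Positive skew: the mode sits below the mean, qualitatively as for the
# exact correlation-function likelihood at large angular separations.
```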

Electromagnetic stimulation probes and modulates the neural systems that control movement. Key to understanding their effects is the muscle recruitment curve, which maps evoked potential size against stimulation intensity. Current methods to estimate curve parameters require large samples; however, obtaining these is often impractical due to experimental constraints. Here, we present a hierarchical Bayesian framework that accounts for small samples, handles outliers, simulates high-fidelity data, and returns a posterior distribution over curve parameters that quantify estimation uncertainty. It uses a rectified-logistic function that estimates motor threshold and outperforms conventionally used sigmoidal alternatives in predictive performance, as demonstrated through cross-validation. In simulations, our method outperforms non-hierarchical models by reducing threshold estimation error on sparse data and requires fewer participants to detect shifts in threshold compared to frequentist testing. We present two common use cases involving electrical and electromagnetic stimulation data and provide an open-source library for Python, called hbMEP, for diverse applications.
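
One plausible parameterization of a rectified-logistic recruitment curve, with a flat baseline below the motor threshold and logistic growth above it, is sketched below. This is an illustrative form and not necessarily hbMEP's exact parameterization:

```python
import numpy as np

def rectified_logistic(x, base, amp, slope, thresh):
    """Illustrative rectified-logistic: baseline `base` below the motor
    threshold `thresh`, smooth growth toward `base + amp/2` above it.
    (A sketch; not necessarily the exact form used in hbMEP.)"""
    s = amp / (1.0 + np.exp(-slope * (x - thresh)))
    return base + np.maximum(0.0, s - amp / 2.0)   # rectify at the threshold

intensity = np.linspace(0, 100, 5)                 # stimulation intensities (%)
print(rectified_logistic(intensity, base=0.05, amp=2.0, slope=0.2, thresh=40.0))
# Flat at 0.05 up to ~40% intensity, then a rising evoked-potential size.
```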

Time sharing between activities remains an indispensable part of everyday activity patterns. However, the issue has not yet been fully acknowledged within existing time allocation models, potentially resulting in inaccuracies in valuing travel time savings. Therefore, this study aims to address this gap by investigating the potential impact of introducing time sharing within such a framework, as well as the factors that determine it, as represented by travel activities. In doing so, the time constraint in Small's time allocation model was modified to enable sharing the same time interval between different activities. The resulting expression indicated that such an augmentation could lead to lower estimates of the value of time as a resource. On the other hand, empirical research based on data from the National Passenger Survey 2004, used for calibrating a cross-nested logit model, indicated a number of factors affecting the choice of travel activities. Significant factors were found to include possession of equipment enabling particular activities (e.g., a newspaper, paperwork, or ICT devices), companionship, gender, journey length, frequency of using the service, the possibility of working on the train, journey planning in advance, first-class travel, termination of the trip in central London, peak-time travel, and availability of seating.

We prove the convergence of a damped Newton's method for the nonlinear system resulting from a discretization of the second boundary value problem for the Monge-Ampère equation. The boundary condition is enforced through the use of the notion of asymptotic cone. The differential operator is discretized based on a discrete analogue of the subdifferential.
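
A generic damped Newton iteration with backtracking on the residual norm is sketched below; this illustrates the general technique only, not the paper's discretization of the Monge-Ampère operator or its asymptotic-cone boundary treatment:

```python
import numpy as np

def damped_newton(F, J, x0, tol=1e-10, max_iter=100):
    """Generic damped Newton for F(x) = 0 with residual backtracking;
    an illustrative sketch, not the paper's specific scheme."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        d = np.linalg.solve(J(x), -r)          # Newton direction
        t = 1.0
        while t > 1e-12 and np.linalg.norm(F(x + t * d)) >= np.linalg.norm(r):
            t *= 0.5                           # damp until the residual decreases
        x = x + t * d
    return x

# Tiny example: intersect the circle x^2 + y^2 = 4 with the line x = y.
F = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 4.0, v[0] - v[1]])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
print(damped_newton(F, J, [3.0, 1.0]))         # ≈ [sqrt(2), sqrt(2)]
```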

Human-in-the-loop aims to train an accurate prediction model with minimum cost by integrating human knowledge and experience. Humans can provide training data for machine learning applications and, with the help of machine-based approaches, directly accomplish tasks in the pipeline that are hard for computers. In this paper, we survey existing works on human-in-the-loop from a data perspective and classify them into three categories with a progressive relationship: (1) work that improves model performance through data processing, (2) work that improves model performance through interventional model training, and (3) the design of human-in-the-loop systems themselves. Using the above categorization, we summarize the major approaches in the field along with their technical strengths and weaknesses, and we briefly classify and discuss applications in natural language processing, computer vision, and other areas. In addition, we discuss some open challenges and opportunities. This survey intends to provide a high-level summary of human-in-the-loop research and to motivate interested readers to consider approaches for designing effective human-in-the-loop solutions.
