
We apply a mean-field model of interactions between migrating barchan dunes, the CAFE model, which includes calving, aggregation, fragmentation, and mass-exchange, yielding a steady-state size distribution that can be resolved for different choices of interaction parameters. The CAFE model is applied to empirically measured distributions of dune sizes in two barchan swarms on Mars, three swarms in Morocco, and one in Mauritania, each containing ~1000 bedforms, comparing the observed size distributions to the steady states of the CAFE model. We find that the distributions in the Martian swarms are very similar to that of the swarm measured in Mauritania, suggesting that, despite their very different planetary environments, the two share similar dune interaction dynamics. Optimisation of the model parameters for three specific configurations of the CAFE model shows that the fit of the theoretical steady state is often superior to that of the typically assumed log-normal distribution. In all cases, the optimised parameters indicate that mass-exchange is the most frequent type of interaction. Calving is found to occur rarely in most of the swarms, with a maximum rate of only 9\% of events, showing that interactions between multiple dunes, rather than spontaneous calving, drive barchan size distributions. Finally, the implementation of interaction parameters derived from 3D simulations of dune-pair collisions indicates that sand flux between dunes is more important in producing the size distributions of the Moroccan swarms than of those in Mauritania and on Mars.
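As a rough illustration of the model's ingredients, the following sketch simulates a swarm under the four CAFE event types (calving, aggregation, fragmentation, mass-exchange). The event probabilities and redistribution rules are illustrative placeholders, not fitted parameters from the study:

```python
import random

def cafe_step(sizes, p_calve=0.1, p_agg=0.1, p_frag=0.2, rng=random):
    """One interaction event in a toy CAFE-style swarm.

    Probabilities and redistribution rules here are illustrative, not
    fitted values; the remaining probability mass (0.6) goes to
    mass-exchange events. Every event type conserves total sand volume.
    """
    u = rng.random()
    if u < p_calve:
        # Calving: a single dune spontaneously sheds a small dune.
        i = rng.randrange(len(sizes))
        shed = 0.1 * sizes[i]
        sizes[i] -= shed
        sizes.append(shed)
        return sizes
    i, j = rng.sample(range(len(sizes)), 2)
    total = sizes[i] + sizes[j]
    if u < p_calve + p_agg:
        # Aggregation: the pair merges into a single larger dune.
        sizes[i] = total
        sizes.pop(j)
    elif u < p_calve + p_agg + p_frag:
        # Fragmentation: the colliding pair splits at a random point.
        f = rng.random()
        sizes[i], sizes[j] = f * total, (1.0 - f) * total
    else:
        # Mass-exchange: the pair redistributes volume unevenly.
        f = 0.25 + 0.5 * rng.random()
        sizes[i], sizes[j] = f * total, (1.0 - f) * total
    return sizes
```

Iterating `cafe_step` over many events and histogramming `sizes` gives the kind of steady-state size distribution that the mean-field analysis resolves analytically.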


Blades manufactured through flank and point milling will likely exhibit geometric variability. Gauging the aerodynamic repercussions of such variability, prior to manufacturing a component, is challenging enough, let alone trying to predict what the amplified impact of any in-service degradation will be. While rules of thumb that govern the tolerance band can be devised based on expected boundary layer characteristics at known regions and levels of degradation, it remains a challenge to translate these insights into quantitative bounds for manufacturing. In this work, we tackle this challenge by leveraging ideas from dimension reduction to construct low-dimensional representations of aerodynamic performance metrics. These low-dimensional models can identify a subspace which contains designs that are invariant in performance -- the inactive subspace. By sampling within this subspace, we design techniques for drafting manufacturing tolerances and for quantifying whether a scanned component should be used or scrapped. We introduce the blade envelope as a computational manufacturing guide for a blade that is also amenable to qualitative visualizations. In this paper, the first of two parts, we discuss its underlying concept and detail its computational methodology, assuming one is interested only in the single objective of ensuring that the loss of all manufactured blades remains constant. To demonstrate the utility of our ideas we devise a series of computational experiments with the Von Karman Institute's LS89 turbine blade.
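To illustrate the dimension-reduction idea, the sketch below estimates active and inactive subspaces from sampled gradients of a performance metric (the standard gradient-covariance construction) and perturbs a design within the inactive subspace, leaving the metric unchanged. The toy metric, dimensions, and sample counts are illustrative assumptions, not the LS89 setup:

```python
import numpy as np

def inactive_subspace(grads, n_active=1):
    """Split design space using the empirical gradient covariance.

    grads: (N, d) array of sampled gradients of the performance metric.
    Returns (W_active, W_inactive) as orthonormal column bases.
    """
    C = grads.T @ grads / len(grads)   # empirical gradient covariance
    eigval, eigvec = np.linalg.eigh(C) # eigenvalues in ascending order
    eigvec = eigvec[:, ::-1]           # reorder to descending
    return eigvec[:, :n_active], eigvec[:, n_active:]

# Toy metric f(x) = x0 + x1: its gradient is constant, so everything
# orthogonal to (1, 1, 0, ...) is inactive and leaves f invariant.
rng = np.random.default_rng(0)
d = 5
X = rng.standard_normal((200, d))          # sampled "designs"
grad = np.zeros((200, d))
grad[:, 0] = 1.0
grad[:, 1] = 1.0                           # gradient of x0 + x1
W1, W2 = inactive_subspace(grad, n_active=1)

# Sampling within the inactive subspace: x + W2 @ z keeps f(x) fixed,
# which is the mechanism behind performance-invariant tolerance bands.
z = rng.standard_normal(d - 1)
x = X[0]
x_pert = x + W2 @ z
```

In the blade-envelope setting, sampling `z` repeatedly traces out geometries whose loss stays constant, which is what defines the manufacturing guide.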

Pearson's chi-squared test is widely used to test the goodness of fit between categorical data and a given discrete distribution function. When the number of categories, say $k$, is a fixed integer, Pearson's chi-squared test statistic converges in distribution to a chi-squared distribution with $k-1$ degrees of freedom as the sample size $n$ goes to infinity. In real applications, however, $k$ often changes with $n$ and may even be much larger than $n$. Using martingale techniques, we prove that Pearson's chi-squared test statistic converges in distribution to the normal under quite general conditions. We also propose a new test statistic that, according to our simulation study, is more powerful than the chi-squared test statistic. A real application to lottery data is provided to illustrate our methodology.
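The classical fixed-$k$ limit suggests a simple large-$k$ normalisation using the chi-squared mean $k-1$ and variance $2(k-1)$. The sketch below computes this normalised statistic; the general conditions and martingale proof in the text are, of course, far more delicate than this toy centring:

```python
import numpy as np

def pearson_chi2(counts, probs):
    """Pearson's chi-squared statistic for observed counts vs. model probs."""
    n = counts.sum()
    expected = n * probs
    return ((counts - expected) ** 2 / expected).sum()

def normalized_chi2(counts, probs):
    """Large-k normalisation (X^2 - (k-1)) / sqrt(2(k-1)).

    Under the null this is approximately N(0, 1) when k is large,
    matching the normal limit discussed in the text.
    """
    k = len(probs)
    x2 = pearson_chi2(counts, probs)
    return (x2 - (k - 1)) / np.sqrt(2.0 * (k - 1))

# Many cells relative to the sample size: k = 500 cells, n = 2000 draws.
rng = np.random.default_rng(1)
k, n = 500, 2000
probs = np.full(k, 1.0 / k)
counts = rng.multinomial(n, probs)
z = normalized_chi2(counts, probs)  # roughly standard normal under H0
```

Comparing `z` against standard normal quantiles gives a usable test even when the chi-squared quantiles with $k-1$ degrees of freedom would be inappropriate.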

String vibration represents an active field of research in acoustics. Small-amplitude vibration is often assumed, leading to simplified physical models that can be simulated efficiently. However, the inclusion of nonlinear phenomena due to larger string stretchings is necessary to capture important features, and efficient numerical algorithms are currently lacking in this context. Of the available techniques, many lead to schemes that can only be solved iteratively, resulting in high computational cost and additional concerns about the existence and uniqueness of solutions. Slow and fast waves are present concurrently in the transverse and longitudinal directions of motion, adding further complications concerning numerical dispersion. This work presents a linearly-implicit scheme for the simulation of the geometrically exact nonlinear string model. The scheme conserves a numerical energy, expressed as a sum of quadratic terms only, and includes an auxiliary state variable yielding the nonlinear effects. The scheme allows the transverse and longitudinal waves to be treated separately, using a mixed finite difference/modal scheme for the two directions of motion, thus accurately resolving the wave speeds at reference sample rates. Numerical experiments are presented throughout.
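A scalar caricature of the linearly-implicit idea (not the string scheme itself, which couples finite-difference and modal discretisations with an auxiliary energy variable): treating part of the nonlinearity explicitly turns each time step into a linear solve, so no Newton iteration is needed:

```python
def linearly_implicit_step(u, k):
    """One step of a linearly-implicit discretisation of the toy ODE u' = -u^3.

    Treating one factor of the nonlinearity explicitly,
        u_{n+1} = u_n - k * u_n^2 * u_{n+1},
    makes the update a linear equation in u_{n+1} (here a scalar
    division). A fully implicit scheme would instead require an
    iterative nonlinear solve at every step. This is only a scalar
    caricature of the idea, not the string scheme in the text.
    """
    return u / (1.0 + k * u * u)

u, k = 2.0, 0.1
traj = [u]
for _ in range(50):
    u = linearly_implicit_step(u, k)
    traj.append(u)
# the discrete "energy" u^2/2 decays monotonically, as for the exact flow
```

The cost per step is fixed and known in advance, which is what makes linearly-implicit schemes attractive for real-time audio rates.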

This study considers a new multi-term urn process that exhibits both correlation within the same term and temporal correlation. The objective is to clarify the relationship between the urn model and the Hawkes process. Correlation within the same term is represented by the P\'{o}lya urn model, and the temporal correlation is incorporated by introducing a conditional initial condition. In the double-scaling limit of this urn process, the self-exciting negative binomial distribution (SE-NBD) process, which is a marked point process, is obtained. In the standard continuous limit, this process becomes the Hawkes process, which has no correlation within the same term. The difference lies in the variance of the intensity function, through which the phase transition from the steady to the non-steady state can be observed. The critical point, at which a power-law distribution is obtained, is the same for the Hawkes and the urn processes. These two processes are used to analyze empirical data on financial defaults and to estimate the parameters of the model. For the default portfolio, the results produced by the urn process are superior to those obtained with the Hawkes process and confirm self-excitation.
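For intuition about the continuous-limit end of this construction, the sketch below simulates a Hawkes process with an exponential kernel via Ogata thinning. Parameter values are illustrative, and the same-term correlation of the urn process is deliberately not captured by this plain Hawkes sketch:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, T, rng=random):
    """Simulate a Hawkes process with exponential kernel by Ogata thinning.

    Intensity: lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    The steady (subcritical) regime requires alpha / beta < 1; the
    critical point alpha / beta = 1 is where power-law behaviour emerges.
    """
    events = []
    t, excess = 0.0, 0.0       # excess = self-excited part of the intensity
    while True:
        lam_bar = mu + excess  # valid bound: intensity only decays between events
        w = rng.expovariate(lam_bar)
        excess *= math.exp(-beta * w)  # decay excitation over the waiting time
        t += w
        if t > T:
            return events
        if rng.random() <= (mu + excess) / lam_bar:  # thinning acceptance
            events.append(t)
            excess += alpha    # intensity jumps at the new event
```

Pushing `alpha / beta` towards 1 makes the event clusters grow without bound, the non-steady phase described in the text.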

We introduce a nonparametric graphical model for discrete node variables based on additive conditional independence. Additive conditional independence is a three-way statistical relation that shares similar properties with conditional independence by satisfying the semi-graphoid axioms. Based on this relation we build an additive graphical model for discrete variables that does not suffer from the restriction of a parametric model such as the Ising model. We develop an estimator of the new graphical model via the penalized estimation of the discrete version of the additive precision operator and establish the consistency of the estimator under the ultrahigh-dimensional setting. Along with these methodological developments, we also exploit the properties of discrete random variables to uncover a deeper relation between additive conditional independence and conditional independence than previously known. The new graphical model reduces to a conditional independence graphical model under certain sparsity conditions. We conduct simulation experiments and analysis of an HIV antiretroviral therapy data set to compare the new method with existing ones.

Data processing and analysis pipelines in cosmological survey experiments introduce data perturbations that can significantly degrade the performance of deep learning-based models. Given the increased adoption of supervised deep learning methods for processing and analysis of cosmological survey data, the assessment of data perturbation effects and the development of methods that increase model robustness are increasingly important. In the context of morphological classification of galaxies, we study the effects of perturbations in imaging data. In particular, we examine the consequences of using neural networks when training on baseline data and testing on perturbed data. We consider perturbations associated with two primary sources: 1) increased observational noise, as represented by higher levels of Poisson noise, and 2) data processing noise incurred by steps such as image compression or telescope errors, as represented by one-pixel adversarial attacks. We also test the efficacy of domain adaptation techniques in mitigating the perturbation-driven errors. We use classification accuracy, latent space visualizations, and latent space distance to assess model robustness. Without domain adaptation, we find that pixel-level processing errors easily flip the classification into an incorrect class and that higher observational noise makes a model trained on low-noise data unable to classify galaxy morphologies. On the other hand, we show that training with domain adaptation improves model robustness and mitigates the effects of these perturbations, improving the classification accuracy by 23% on data with higher observational noise. Domain adaptation also increases the latent-space distance between the baseline image and the incorrectly classified one-pixel-perturbed image by a factor of ~2.3, making the model more robust to inadvertent perturbations.
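The two perturbation sources can be mimicked on a raw image array as follows. The photon-count `scale` is an illustrative stand-in for survey depth, and a real one-pixel attack would search for the pixel and value that flip the classifier rather than take the coordinates as given:

```python
import numpy as np

def add_poisson_noise(img, scale=50.0, rng=None):
    """Resample each pixel from a Poisson law; smaller `scale` = noisier.

    `img` holds floats in [0, 1]; `scale` plays the role of the photon-count
    level (an illustrative proxy for observational depth, not calibrated).
    """
    rng = rng or np.random.default_rng()
    return rng.poisson(img * scale) / scale

def one_pixel_attack(img, row, col, value):
    """Minimal stand-in for a one-pixel adversarial perturbation.

    A real attack optimises (row, col, value) to flip the classifier's
    output; here the coordinates are simply given for illustration.
    """
    out = img.copy()
    out[row, col] = value
    return out
```

Evaluating a classifier on `add_poisson_noise(img, scale=...)` for decreasing `scale`, or on `one_pixel_attack(img, ...)`, reproduces the train-on-baseline/test-on-perturbed protocol described above.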

The Banach-Picard iteration is widely used to find fixed points of locally contractive (LC) maps. This paper extends the Banach-Picard iteration to distributed settings; specifically, we assume that the map whose fixed point is sought is the average of individual (not necessarily LC) maps held by a set of agents linked by a communication network. An additional difficulty is that the LC map is not assumed to come from an underlying optimization problem, which prevents exploiting strong global properties such as convexity or Lipschitzianity. Nevertheless, we propose a distributed algorithm and prove its convergence, in fact showing that it maintains the linear rate of the standard Banach-Picard iteration for the average LC map. As another contribution, our proof imports tools from the perturbation theory of linear operators, which, to the best of our knowledge, had not previously been used in the theory of distributed computation.
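For reference, the sketch below runs the centralized Banach-Picard baseline on the average of three agents' maps, the iteration whose linear rate the distributed algorithm is shown to match; the distributed version itself needs the network machinery described above, which this sketch omits. The maps are illustrative, chosen so that one agent's map is not a contraction while the average is:

```python
def picard(F, x0, n_iter=60):
    """Plain Banach-Picard iteration: repeatedly apply x <- F(x)."""
    x = x0
    for _ in range(n_iter):
        x = F(x)
    return x

# Three agents hold affine maps; agent 3's slope exceeds 1, so its map
# is not a contraction -- only the average map needs to be.
# Average map: F(x) = 0.5 * x + 1.0, with fixed point x* = 2.
maps = [lambda x: 0.1 * x + 0.5,
        lambda x: 0.2 * x + 1.0,
        lambda x: 1.2 * x + 1.5]
F_avg = lambda x: sum(F(x) for F in maps) / len(maps)
x_star = picard(F_avg, 0.0)  # converges linearly with rate 0.5
```

In the distributed setting no agent can evaluate `F_avg` directly, which is exactly the gap the paper's algorithm closes while preserving this linear rate.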

Control architectures and autonomy stacks for complex engineering systems are often divided into layers to decompose a complex problem and solution into distinct, manageable sub-problems. To simplify designs, uncertainties are often ignored across layers, an approach with deep roots in classical notions of separation and certainty equivalence. But to develop robust architectures, especially as interactions between data-driven learning layers and model-based decision-making layers grow more intricate, more sophisticated interfaces between layers are required. We propose a basic architecture that couples a statistical parameter estimation layer with a constrained optimization layer. We show how the layers can be tightly integrated by combining bootstrap resampling with distributionally robust optimization. The approach allows a finite-data out-of-sample safety guarantee and an exact reformulation as a tractable finite-dimensional convex optimization problem.
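A toy sketch of the bootstrap-to-ambiguity-set interface: resample the data to approximate the sampling distribution of an unknown mean parameter, then evaluate a constraint at a pessimistic quantile of that distribution. The exact convex reformulation and out-of-sample guarantee in the text are replaced here by a simple quantile rule, so this is illustrative only:

```python
import numpy as np

def bootstrap_estimates(data, n_boot=500, rng=None):
    """Bootstrap resampling of the mean-parameter estimate."""
    rng = rng or np.random.default_rng()
    n = len(data)
    return np.array([data[rng.integers(0, n, n)].mean()
                     for _ in range(n_boot)])

def robust_threshold(data, alpha=0.05, rng=None):
    """Pessimistic constraint parameter over a bootstrap ambiguity set.

    The optimization layer would enforce its constraint against this
    worst-case value rather than the point estimate; a distributionally
    robust formulation makes that coupling exact and tractable, which
    this quantile rule only gestures at.
    """
    est = bootstrap_estimates(data, rng=rng)
    return np.quantile(est, 1.0 - alpha)
```

Feeding `robust_threshold(data)` into a downstream constrained optimizer, instead of `data.mean()`, is the simplest version of the estimation-to-optimization interface described above.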

The dominating NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications (e.g., sentiment classification, span-prediction-based question answering, or machine translation). However, it builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time. This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information. Moreover, it is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime. The first goal of this thesis is to characterize the different forms this shift can take in the context of natural language processing and to propose benchmarks and evaluation metrics to measure its effect on current deep learning architectures. We then proceed to take steps to mitigate the effect of distributional shift on NLP models. To this end, we develop methods based on parametric reformulations of the distributionally robust optimization framework. Empirically, we demonstrate that these approaches yield more robust models, as shown on a selection of realistic problems. In the third and final part of this thesis, we explore ways of efficiently adapting existing models to new domains or tasks. Our contribution to this topic takes inspiration from information geometry to derive a new gradient update rule that alleviates catastrophic forgetting issues during adaptation.

Current training objectives of existing person Re-IDentification (ReID) models only ensure that the loss decreases on the selected training batch, with no regard to performance on samples outside the batch. This inevitably causes the model to over-fit data in the dominant position (e.g., head data in imbalanced classes, easy samples, or noisy samples). We call a sample that updates the model towards generalizing on more data a generalizable sample. The latest resampling methods address the issue by designing specific criteria to select samples that train the model to generalize better on certain types of data (e.g., hard samples, tail data), which is not adaptive to the inconsistent distributions of real-world ReID data. Therefore, instead of simply presuming which samples are generalizable, this paper proposes a one-for-more training objective that directly takes the generalization ability of selected samples as a loss function and learns a sampler to automatically select generalizable samples. More importantly, the proposed one-for-more sampler can be seamlessly integrated into the ReID training framework, making it possible to train ReID models and the sampler simultaneously in an end-to-end fashion. Experimental results show that our method effectively improves ReID model training and boosts the performance of ReID models.
