
Interacting particle or agent systems that display a rich variety of collective motions are ubiquitous in science and engineering. A fundamental and challenging goal is to understand the link between individual interaction rules and collective behaviors. In this paper, we study the data-driven discovery of distance-based interaction laws in second-order interacting particle systems. We propose a learning approach that models the latent interaction kernel functions as Gaussian processes, which can simultaneously fulfill two inference goals: nonparametric inference of the interaction kernel function with pointwise uncertainty quantification, and inference of unknown parameters in the non-collective forces of the system. We formulate learning interaction kernel functions as a statistical inverse problem and provide a detailed analysis of recoverability conditions, establishing that a coercivity condition is sufficient for recoverability. We provide a finite-sample analysis, showing that our posterior mean estimator converges at an optimal rate equal to that of classical one-dimensional kernel ridge regression. Numerical results on systems that exhibit different collective behaviors demonstrate that our approach learns efficiently from scarce, noisy trajectory data.
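A minimal sketch of the Gaussian-process backbone of such an approach, assuming noisy evaluations $y_k \approx \varphi(r_k)$ of the interaction kernel at pairwise distances have already been extracted from trajectories; the toy kernel `phi_true`, the RBF hyperparameters, and the noise level are all illustrative, not the paper's choices:

```python
import numpy as np

def rbf(a, b, ell=0.5, sig=1.0):
    """Squared-exponential covariance k(a, b) between 1-d inputs."""
    d = a[:, None] - b[None, :]
    return sig**2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)
phi_true = lambda r: np.exp(-r) - 0.5 * np.exp(-2.0 * r)   # toy interaction kernel
r_train = rng.uniform(0.1, 4.0, 40)                        # observed pairwise distances
y_train = phi_true(r_train) + 0.05 * rng.normal(size=40)   # noisy kernel evaluations

K = rbf(r_train, r_train) + 0.05**2 * np.eye(40)           # prior + noise covariance
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

r_test = np.linspace(0.1, 4.0, 200)
Ks = rbf(r_test, r_train)
mean = Ks @ alpha                                          # posterior mean estimator
v = np.linalg.solve(L, Ks.T)
var = np.diag(rbf(r_test, r_test)) - np.sum(v**2, axis=0)  # pointwise uncertainty
```

The posterior mean plays the role of the estimator whose convergence rate the paper analyzes, and the posterior variance supplies the pointwise uncertainty quantification mentioned in the abstract.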

Related Content

The IFIP TC13 Conference on Human-Computer Interaction is an important platform for researchers and practitioners in the field of human-computer interaction to present their work. Over the years, these conferences have attracted researchers from several countries and cultures.
July 30, 2021

The increased demand for online prediction and the growing availability of large data sets drive the need for computationally efficient models. While exact Gaussian process regression has various favorable theoretical properties (uncertainty estimates, unlimited expressive power), its poor scaling with respect to the training set size prohibits its real-time application in big-data regimes. Therefore, this paper proposes dividing local Gaussian processes, a novel, computationally efficient modeling approach based on Gaussian process regression. Due to an iterative, data-driven division of the input space, they achieve sublinear computational complexity in the total number of training points in practice, while providing excellent predictive distributions. A numerical evaluation on real-world data sets shows their advantages over other state-of-the-art methods in terms of accuracy as well as prediction and update speed.
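A hedged sketch of the general idea (not the paper's algorithm): split the input space when a region outgrows a point budget, keep one exact GP per region, and answer queries with the GP that owns the query point. The median-cut splitting rule below is a simplification of the paper's data-driven division:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

class LocalGPNode:
    """A binary tree of exact GPs over a recursively divided input space."""
    def __init__(self, budget=50):
        self.budget, self.X, self.y, self.gp = budget, [], [], None
        self.dim = self.thresh = self.left = self.right = None

    def add(self, x, y):
        if self.left is not None:                 # route to the owning region
            (self.left if x[self.dim] <= self.thresh else self.right).add(x, y)
            return
        self.X.append(x); self.y.append(y); self.gp = None   # model is stale
        if len(self.X) > self.budget:
            self._split()

    def _split(self):
        X = np.asarray(self.X)
        self.dim = int(np.argmax(X.max(0) - X.min(0)))       # widest dimension
        self.thresh = float(np.median(X[:, self.dim]))       # median cut
        self.left, self.right = LocalGPNode(self.budget), LocalGPNode(self.budget)
        for x, y in zip(self.X, self.y):
            (self.left if x[self.dim] <= self.thresh else self.right).add(x, y)
        self.X = self.y = None

    def predict(self, x):
        if self.left is not None:
            return (self.left if x[self.dim] <= self.thresh else self.right).predict(x)
        if self.gp is None:                        # lazily refit the local GP
            self.gp = GaussianProcessRegressor().fit(np.asarray(self.X),
                                                     np.asarray(self.y))
        return self.gp.predict(np.asarray([x]), return_std=True)

rng = np.random.default_rng(0)
root = LocalGPNode()
for _ in range(500):
    x = rng.uniform(-3.0, 3.0, size=2)
    root.add(x, np.sin(x[0]) * np.cos(x[1]) + 0.01 * rng.normal())
mean, std = root.predict(np.array([0.5, -1.0]))
```

Because each leaf GP sees only a bounded number of points, per-query cost stays roughly constant as the total data set grows, which is the source of the sublinear complexity the abstract claims.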

While many approaches have been proposed for discovering abrupt changes in piecewise constant signals, few methods are available to capture these changes in piecewise polynomial signals. In this paper, we propose a change point detection method, PRUTF, based on trend filtering. By providing a comprehensive dual solution path for trend filtering, PRUTF allows us to discover change points of the underlying signal for either a given value of the regularization parameter or a specific number of steps of the algorithm. We demonstrate that the dual solution path constitutes a Gaussian bridge process, which enables us to derive an exact and efficient stopping rule for terminating the search algorithm. We also prove that the estimates produced by this algorithm are asymptotically consistent in pattern recovery. This result holds even in the case of staircases (consecutive change points of the same sign) in the signal. Finally, we investigate the performance of our proposed method on various signals and compare it against some state-of-the-art change point detection methods. We apply our method to three real-world datasets: the UK House Price Index (HPI), the GISS Surface Temperature Analysis (GISTEMP), and the Coronavirus disease (COVID-19) pandemic.
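For concreteness, here is the trend-filtering model that PRUTF builds on, solved with the generic convex solver cvxpy rather than the paper's dual solution path; change points are read off where the $(k+1)$-st discrete difference of the fitted signal is numerically nonzero (the signal and $\lambda$ are toy choices):

```python
import numpy as np
import cvxpy as cp

def diff_matrix(n, order):
    """(k+1)-st order discrete difference operator D."""
    D = np.eye(n)
    for _ in range(order):
        D = np.diff(D, axis=0)
    return D

rng = np.random.default_rng(1)
t = np.arange(100)
# piecewise-linear signal with a slope change at t = 50
signal = np.where(t < 50, 0.5 * t, 25 + 2.0 * (t - 50))
y = signal + rng.normal(scale=2.0, size=100)

D = diff_matrix(100, order=2)          # k = 1: piecewise-linear trend filtering
theta = cp.Variable(100)
lam = 20.0                             # regularization parameter
cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - theta)
                       + lam * cp.norm1(D @ theta))).solve()

change_points = np.where(np.abs(D @ theta.value) > 1e-3)[0] + 1
print(change_points)                   # indices near the true change point 50
```

PRUTF's contribution is to track the dual variables of this problem along the full path of $\lambda$ values, rather than solving at a single $\lambda$ as done here.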

Injection molding is one of the most popular manufacturing methods for producing complex plastic objects. Faster numerical simulation of the technological process would allow for faster and cheaper design cycles of new products. In this work, we propose a baseline data-processing pipeline that includes the extraction of data from Moldflow simulation projects and the prediction of fill-time and deflection distributions over 3-dimensional surfaces using machine learning models. We propose feature-engineering algorithms, including features that encode the injector gate parameters which most affect the time for plastic to reach a particular point of the mold (for fill-time prediction), and geometric features (for deflection prediction). We propose and evaluate baseline machine learning models for fill-time and deflection distribution prediction and provide baseline values of the MSE and RMSE metrics. Finally, we measure the execution time of our solution and show that it is significantly faster than simulation with the Moldflow software: approximately 17 times and 14 times faster for the mean and median total times, respectively, comparing the times of all analysis stages for deflection prediction. Our solution has been implemented in a prototype web application that was approved by the management board of Fiat Chrysler Automobiles and Illogic SRL. As one of the promising applications of this surrogate-modeling approach, we envision the use of trained models as a fast objective function in the optimization of technological parameters of the injection molding process (namely, optimal placement of gates), which could significantly aid engineers in this task, or even automate it.
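A hedged baseline in the spirit of this pipeline, with purely synthetic per-node features; the feature names `dist_to_gate`, `wall_thickness`, and `curvature` are illustrative stand-ins, not the paper's Moldflow-extracted features:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 5000
dist_to_gate = rng.uniform(0.0, 1.0, n)     # path length to the nearest gate
wall_thickness = rng.uniform(0.5, 3.0, n)   # local wall thickness, mm
curvature = rng.normal(0.0, 1.0, n)         # local surface curvature
X = np.column_stack([dist_to_gate, wall_thickness, curvature])
# synthetic target: fill time grows with distance, shrinks with thickness
fill_time = dist_to_gate / wall_thickness + 0.05 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, fill_time, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
mse = mean_squared_error(y_te, model.predict(X_te))
print(f"MSE={mse:.4f}  RMSE={np.sqrt(mse):.4f}")
```

Once trained, such a surrogate can be evaluated in milliseconds per candidate gate placement, which is what makes it usable as an objective function inside an optimization loop.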

We study data-driven assistants that provide congestion forecasts to users of shared facilities (roads, cafeterias, etc.) to support coordination between them and increase the efficiency of such collective systems. Key questions are: (1) when and how much can (accurate) predictions help coordination, and (2) which assistant algorithms reach optimal predictions? First, we lay the conceptual groundwork for this setting, in which user preferences are a priori unknown and predictions influence outcomes. Addressing (1), we establish conditions under which self-fulfilling prophecies, i.e., "perfect" (probabilistic) predictions of what will happen, solve the coordination problem in the game-theoretic sense of selecting a Bayesian Nash equilibrium (BNE). Next, we prove that such prophecies exist even in large-scale settings where only aggregated statistics about users are available. This entails a new (nonatomic) BNE existence result. Addressing (2), we propose two assistant algorithms that sequentially learn from users' reactions, together with optimality/convergence guarantees. We validate one of them in a large real-world experiment.
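A toy illustration of the self-fulfilling-prophecy idea: the assistant announces an occupancy forecast q, users best-respond to it, and a fixed point of this map is a forecast that exactly matches the realized outcome. All utilities and constants below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_users = 10_000
base_utility = rng.uniform(0.0, 1.0, n_users)  # private, a priori unknown
congestion_cost = 0.7

def realized_occupancy(q):
    """Fraction of users who attend when occupancy q is forecast."""
    return np.mean(base_utility - congestion_cost * q > 0)

q = 0.5                       # initial forecast
for _ in range(100):          # damped fixed-point iteration
    q = 0.5 * q + 0.5 * realized_occupancy(q)

print(f"self-fulfilling forecast q* ~ {q:.3f}")
print(f"realized occupancy        ~ {realized_occupancy(q):.3f}")
```

At the fixed point the forecast and the outcome coincide, which is the "perfect prediction" property the abstract formalizes via BNE selection.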

Experiments in predator-prey systems show the emergence of long-term cycles. Deterministic models typically fail to capture these behaviors, which emerge from the microscopic interplay of individual-based dynamics and stochastic effects. However, simulating stochastic individual-based models can be extremely demanding, especially when the sample size is large. Hence, we propose an alternative simulation approach whose computational cost is lower than that of classic stochastic algorithms. First, we describe how, starting from the individual description of predator-prey dynamics, it is possible to derive the mean-field equations for the spatially homogeneous and heterogeneous cases. Then, we show that the new approach preserves the order and converges to the mean-field solutions as the sample size increases. We show how to simulate the dynamics with the new approach, performing different numerical experiments to test its efficiency. Finally, we analyze the different nature of the oscillations in mean-field versus stochastic simulations, highlighting how the new algorithm can also be useful for studying collective behaviors at the population level.
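A sketch contrasting the two regimes the abstract compares: an exact Gillespie run of an individual-based predator-prey model against its mean-field Lotka-Volterra limit. The rates are toy values, and this is not the paper's new algorithm:

```python
import numpy as np
from scipy.integrate import solve_ivp

b, d, p = 1.0, 0.5, 0.005     # prey birth, predator death, predation rates

def gillespie(x0, y0, t_end, rng):
    """Exact stochastic simulation of the individual-based model."""
    t, x, y, traj = 0.0, x0, y0, [(0.0, x0, y0)]
    while t < t_end:
        rates = np.array([b * x, p * x * y, d * y])  # birth, predation, death
        total = rates.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)
        event = rng.choice(3, p=rates / total)
        if event == 0:
            x += 1                 # prey birth
        elif event == 1:
            x -= 1; y += 1         # predation converts prey into predator
        else:
            y -= 1                 # predator death
        traj.append((t, x, y))
    return np.array(traj)

stoch = gillespie(200, 50, 50.0, np.random.default_rng(4))
mf = solve_ivp(lambda t, z: [b*z[0] - p*z[0]*z[1], p*z[0]*z[1] - d*z[1]],
               (0.0, 50.0), [200.0, 50.0])
print("stochastic final (prey, pred):", stoch[-1, 1:],
      " mean-field final:", mf.y[:, -1].round(1))
```

The stochastic run exhibits noisy, sometimes sustained oscillations, while the mean-field ODE relaxes onto its deterministic cycle, which is precisely the discrepancy in oscillation character the abstract analyzes.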

Identifying the instances of jumps in a discrete time series sample of a jump diffusion model is a challenging task. We have developed a novel statistical technique for jump detection and volatility estimation in return time series data using a threshold method. Since we derive the threshold and the volatility estimator simultaneously by solving an implicit equation, we obtain unprecedented accuracy across a wide range of parameter values. Using this method, the increments attributed to jumps have been removed from a large collection of historical data of Indian sectoral indices. Subsequently, we test for the presence of regime-switching dynamics in the volatility coefficient using a new discriminating statistic. The statistic is shown to be sensitive to the transition kernel of the regime-switching model. We perform the test using a bootstrap method and find a clear indication of the presence of multiple volatility regimes in the data.
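A hedged sketch of the simultaneous threshold/volatility idea: increments below the threshold estimate the volatility, the threshold is in turn a multiple of $\sigma\sqrt{\Delta t}$, and the implicit relation is iterated to a fixed point. The multiplier c and the data-generating process are illustrative, not the paper's exact statistic:

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n, sigma_true = 1 / 252, 2000, 0.2
returns = sigma_true * np.sqrt(dt) * rng.normal(size=n)   # diffusion part
jumps = rng.random(n) < 0.01                              # rare jump times
returns[jumps] += rng.choice([-1.0, 1.0], jumps.sum()) * 0.05

c = 3.0                                          # threshold multiplier
sigma = returns.std() / np.sqrt(dt)              # initial guess (jump-biased)
for _ in range(50):                              # iterate the implicit equation
    thresh = c * sigma * np.sqrt(dt)
    small = returns[np.abs(returns) < thresh]    # diffusion-only increments
    sigma = small.std() / np.sqrt(dt)

print(f"estimated sigma ~ {sigma:.3f} (true {sigma_true})")
detected_jumps = np.abs(returns) >= thresh       # increments attributed to jumps
```

Solving the threshold and volatility jointly avoids the circularity of fixing one before the other, which is the source of the accuracy gain the abstract claims.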

We present a priori and a posteriori error analyses of a high-order hybridizable discontinuous Galerkin (HDG) method applied to a semilinear elliptic problem posed on a piecewise curved, non-polygonal domain. We approximate $\Omega$ by a polygonal subdomain $\Omega_h$ and propose an HDG discretization, which is shown to be optimal under mild assumptions related to the nonlinear source term and the distance between the boundaries of the polygonal subdomain $\Omega_h$ and the true domain $\Omega$. Moreover, a local nonlinear post-processing of the scalar unknown is proposed and shown to provide an additional order of convergence. A reliable and locally efficient a posteriori error estimator that takes into account the error in the approximation of the boundary data of $\Omega_h$ is also provided.

In this paper, the performance of a dual-hop relaying terahertz (THz) wireless communication system is investigated. In particular, the behaviors of the two THz hops are determined by three factors: the deterministic path loss, the fading effects, and pointing errors. Assuming that both THz links are subject to $\alpha$-$\mu$ fading with pointing errors, we derive exact expressions for the cumulative distribution function (CDF) and probability density function (PDF) of the end-to-end signal-to-noise ratio (SNR). Relying on the CDF and PDF, important performance metrics are evaluated, such as the outage probability, average bit error rate, and average channel capacity. Moreover, asymptotic analyses are presented to obtain more insights. Results show that the dual-hop relaying scheme outperforms the single THz link. The system's diversity order is $\min\left\{\frac{\phi_1}{2},\frac{\alpha_1\mu_1}{2},\phi_2,\alpha_2\mu_2\right\}$, where $\alpha_i$ and $\mu_i$ represent the fading parameters of the $i$-th THz link for $i\in\{1,2\}$, and $\phi_i$ denotes the pointing error parameter. In addition, we extend the analysis to a multi-relay cooperative system and derive asymptotic symbol error rate expressions. Results demonstrate that the diversity order of the multi-relay system is $K\min\left\{\frac{\phi_1}{2},\frac{\alpha_1\mu_1}{2},\phi_2,\alpha_2\mu_2\right\}$, where $K$ is the number of relays. Finally, the derived analytical expressions are verified by Monte Carlo simulations.
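A Monte Carlo sketch of the dual-hop outage probability under $\alpha$-$\mu$ fading, assuming decode-and-forward relaying so that the end-to-end SNR is the minimum of the hop SNRs; pointing errors are omitted for brevity, and all parameter values are illustrative. With $W \sim \mathrm{Gamma}(\mu, 1)$, the $\alpha$-$\mu$ envelope satisfies $R^{\alpha} \propto W$, which is how we sample below:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
alpha = (2.0, 2.5)               # alpha_i of the i-th THz hop
mu = (1.5, 2.0)                  # mu_i of the i-th THz hop
snr_bar = 10 ** (20.0 / 10)      # average SNR per hop, 20 dB
gamma_th = 10 ** (5.0 / 10)      # outage threshold, 5 dB

def hop_snr(a, m):
    w = rng.gamma(shape=m, size=n)   # W ~ Gamma(mu, 1)
    r2 = (w / m) ** (2.0 / a)        # squared alpha-mu envelope
    return snr_bar * r2 / r2.mean()  # normalize to the average SNR

g1, g2 = hop_snr(alpha[0], mu[0]), hop_snr(alpha[1], mu[1])
g_e2e = np.minimum(g1, g2)           # decode-and-forward end-to-end SNR
print(f"outage probability ~ {np.mean(g_e2e < gamma_th):.4f}")
```

Sweeping `snr_bar` and reading the slope of the outage curve on a log-log plot recovers the diversity order numerically, mirroring the Monte Carlo verification described in the abstract.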

Seamlessly interacting with humans or robots is hard because these agents are non-stationary. They update their policy in response to the ego agent's behavior, and the ego agent must anticipate these changes to co-adapt. Inspired by humans, we recognize that robots do not need to explicitly model every low-level action another agent will make; instead, we can capture the latent strategy of other agents through high-level representations. We propose a reinforcement learning-based framework for learning latent representations of an agent's policy, where the ego agent identifies the relationship between its behavior and the other agent's future strategy. The ego agent then leverages these latent dynamics to influence the other agent, purposely guiding them towards policies suitable for co-adaptation. Across several simulated domains and a real-world air hockey game, our approach outperforms the alternatives and learns to influence the other agent.
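A hedged sketch of the representation-learning idea: an encoder summarizes the other agent's recent trajectory into a latent strategy z, and the ego policy conditions on (state, z). Module names and sizes are illustrative, and the paper's training objectives (modeling the latent strategy dynamics) are not shown:

```python
import torch
import torch.nn as nn

class StrategyEncoder(nn.Module):
    """Summarize the other agent's trajectory tau into a latent strategy z."""
    def __init__(self, obs_dim, z_dim=8):
        super().__init__()
        self.rnn = nn.GRU(obs_dim, 32, batch_first=True)
        self.head = nn.Linear(32, z_dim)

    def forward(self, tau):                 # tau: (batch, T, obs_dim)
        _, h = self.rnn(tau)
        return self.head(h[-1])             # z: (batch, z_dim)

class EgoPolicy(nn.Module):
    """Ego policy conditioned on its own state and the latent strategy."""
    def __init__(self, state_dim, z_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + z_dim, 64),
                                 nn.Tanh(), nn.Linear(64, act_dim))

    def forward(self, state, z):
        return self.net(torch.cat([state, z], dim=-1))

enc = StrategyEncoder(obs_dim=4)
pi = EgoPolicy(state_dim=6, z_dim=8, act_dim=2)
z = enc(torch.randn(1, 10, 4))   # infer latent strategy from past interaction
action = pi(torch.randn(1, 6), z)
```

Because the policy sees only the low-dimensional z rather than the other agent's raw actions, it can learn how its own behavior shifts z over time, which is the lever used to influence the other agent.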

User behavior data in recommender systems are driven by the complex interactions of many latent factors behind the users' decision making processes. The factors are highly entangled, and may range from high-level ones that govern user intentions, to low-level ones that characterize a user's preference when executing an intention. Learning representations that uncover and disentangle these latent factors can bring enhanced robustness, interpretability, and controllability. However, learning such disentangled representations from user behavior is challenging, and remains largely neglected by the existing literature. In this paper, we present the MACRo-mIcro Disentangled Variational Auto-Encoder (MacridVAE) for learning disentangled representations from user behavior. Our approach achieves macro disentanglement by inferring the high-level concepts associated with user intentions (e.g., to buy a shirt or a cellphone), while capturing the preference of a user regarding the different concepts separately. A micro-disentanglement regularizer, stemming from an information-theoretic interpretation of VAEs, then forces each dimension of the representations to independently reflect an isolated low-level factor (e.g., the size or the color of a shirt). Empirical results show that our approach can achieve substantial improvement over the state-of-the-art baselines. We further demonstrate that the learned representations are interpretable and controllable, which can potentially lead to a new paradigm for recommendation where users are given fine-grained control over targeted aspects of the recommendation lists.
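A simplified sketch of the macro-disentanglement step only: items are softly assigned to K high-level concepts via similarity to learnable prototypes, and one user vector is pooled per concept from the user's interactions. Dimensions are illustrative, and the micro (per-dimension) regularizer is omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MacroEncoder(nn.Module):
    def __init__(self, n_items, k_concepts=4, dim=16):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, dim)
        self.prototypes = nn.Parameter(torch.randn(k_concepts, dim))

    def forward(self, item_ids):            # item_ids: (batch, n_interactions)
        e = F.normalize(self.item_emb(item_ids), dim=-1)
        p = F.normalize(self.prototypes, dim=-1)
        assign = F.softmax(e @ p.T / 0.1, dim=-1)   # soft concept assignment
        # one pooled user vector per concept: (batch, K, dim)
        user = torch.einsum('bik,bid->bkd', assign, e)
        return F.normalize(user, dim=-1), assign

enc = MacroEncoder(n_items=1000)
user_repr, assign = enc(torch.randint(0, 1000, (2, 20)))
```

Keeping one user vector per concept is what allows a recommender to adjust, say, the "clothing" facet of a user's profile without disturbing the "electronics" facet, the kind of fine-grained control the abstract envisions.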
