
In this work we introduce the lag irreversibility function as a method to assess time irreversibility in discrete time series. It quantifies the degree of time asymmetry of the joint probability function of the state variable under study and its time-lagged counterpart. We test its performance on a time-irreversible Markov chain model for which theoretical results are known. Moreover, we use our approach to analyze electrocardiographic recordings of four groups of subjects: healthy young individuals, healthy elderly individuals, and persons with one of two disease conditions, namely congestive heart failure or atrial fibrillation. We find that by jointly studying the variability of the amplitudes of the different waves in the electrocardiographic signals, one obtains an efficient method to discriminate between these groups. Finally, we assess the accuracy of our method using ROC analysis.
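
As a concrete illustration, the sketch below computes a simple lag-dependent asymmetry statistic for a discretized series: it compares the empirical joint distribution of (x_t, x_{t+tau}) with that of the time-reversed pair (x_{t+tau}, x_t). The function name and the L1 asymmetry measure are illustrative choices under stated assumptions, not the paper's exact definition of the lag irreversibility function.

```python
# Minimal sketch: an L1 asymmetry measure between the joint law of
# (x_t, x_{t+tau}) and its time reversal; for a reversible process the
# joint law is symmetric, p(a, b) = p(b, a), and the statistic vanishes.
import numpy as np

def lag_irreversibility(x, tau, n_states):
    """x: 1-D integer array with values in {0, ..., n_states - 1}."""
    joint = np.zeros((n_states, n_states))
    for a, b in zip(x[:-tau], x[tau:]):
        joint[a, b] += 1.0
    joint /= joint.sum()
    return 0.5 * np.abs(joint - joint.T).sum()

# Example: a cyclic (hence time-irreversible) 3-state Markov chain.
rng = np.random.default_rng(0)
P = np.array([[0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])
x = np.zeros(20000, dtype=int)
for t in range(1, len(x)):
    x[t] = rng.choice(3, p=P[x[t - 1]])
print([round(lag_irreversibility(x, tau, 3), 3) for tau in (1, 2, 3)])
```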

Related content

We make two contributions to the Isolation Forest method for anomaly and outlier detection. The first is an information-theoretically motivated generalisation of the score function used to aggregate scores across the random tree estimators; it takes into account not just the ensemble average across trees but the whole distribution of per-tree scores. The second is an alternative scoring function at the level of the individual tree estimator, in which we replace the depth-based scoring of the Isolation Forest with one based on the hyper-volumes associated with an isolation tree's leaf nodes. We motivate both methods on generated data and evaluate them on 34 datasets from the recent and exhaustive "ADBench" benchmark, finding significant improvement over the standard Isolation Forest on some datasets for both variants, and improvement on average across all datasets for one of the two. The code to reproduce our results is made available as part of the submission.
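
The sketch below illustrates the first idea in simplified form: extracting the full per-tree depth distribution from a fitted scikit-learn IsolationForest and aggregating it with a quantile instead of the mean. The quantile functional, the synthetic data, and all parameters are illustrative stand-ins for the paper's information-theoretic generalisation.

```python
# Minimal sketch: aggregate Isolation Forest scores over the whole
# per-tree depth distribution rather than only its ensemble mean.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (500, 2)),      # inliers
               rng.uniform(-6, 6, (10, 2))])    # scattered outliers

forest = IsolationForest(n_estimators=100, random_state=0).fit(X)

# Per-tree isolation depth of every sample: number of edges on the
# decision path of each tree estimator (decision_path includes the root).
depths = np.column_stack([
    np.asarray(est.decision_path(X[:, feat]).sum(axis=1)).ravel() - 1
    for est, feat in zip(forest.estimators_, forest.estimators_features_)
])

mean_depth = depths.mean(axis=1)               # standard aggregation
q10_depth = np.quantile(depths, 0.10, axis=1)  # distribution-aware variant

# Small depths indicate easy isolation, i.e. likely anomalies.
print("most anomalous (mean):", np.argsort(mean_depth)[:5])
print("most anomalous (q10) :", np.argsort(q10_depth)[:5])
```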

Delays are inherent to most dynamical systems. Besides shifting the process in time, they can significantly affect its performance, so it is usually valuable to study the delay and account for it. Because they are dynamical systems, it is no surprise that sequential decision-making problems such as Markov decision processes (MDPs) can also be affected by delays. These processes are the foundational framework of reinforcement learning (RL), a paradigm whose goal is to create artificial agents capable of learning to maximise their utility by interacting with their environment. RL has achieved strong, sometimes astonishing, empirical results, yet delays are seldom explicitly accounted for, and the understanding of their impact on the MDP is limited. In this dissertation, we propose to study the delay in the agent's observation of the state of the environment or in the execution of the agent's actions. We will repeatedly change our point of view on the problem to reveal some of its structure and peculiarities. A wide spectrum of delays will be considered, and potential solutions will be presented. This dissertation also aims to draw links between celebrated frameworks of the RL literature and that of delays.
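
To make the observation-delay setting concrete, here is a minimal sketch of a constant-delay wrapper: the agent acts at time t while only seeing the state from time t - k. The Gym-style interface, the toy environment, and the convention of repeating the initial state during warm-up are all illustrative assumptions; the dissertation treats delays far more generally.

```python
# Minimal sketch: a constant observation delay implemented as a FIFO
# buffer around an arbitrary step/reset environment.
from collections import deque

class DelayedObservationEnv:
    def __init__(self, env, delay):
        self.env = env
        self.delay = delay
        self.buffer = deque()

    def reset(self):
        obs = self.env.reset()
        # Warm-up convention: repeat the initial state until k true
        # observations have accumulated (one simple choice among several).
        self.buffer = deque([obs] * (self.delay + 1))
        return self.buffer.popleft()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.buffer.append(obs)
        return self.buffer.popleft(), reward, done, info

class RandomWalkEnv:
    """Toy 1-D random walk used only to exercise the wrapper."""
    def reset(self):
        self.s = 0
        return self.s
    def step(self, action):            # action in {-1, +1}
        self.s += action
        return self.s, -abs(self.s), abs(self.s) >= 5, {}

env = DelayedObservationEnv(RandomWalkEnv(), delay=2)
obs = env.reset()
for a in (+1, +1, -1, +1):
    obs, r, done, _ = env.step(a)
    print("delayed obs:", obs)        # lags the true state by 2 steps
```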

In this paper, we develop a general theory for adaptive nonparametric estimation of the mean function of a non-stationary and nonlinear time series model using deep neural networks (DNNs). We first consider two types of DNN estimators, non-penalized and sparse-penalized, and establish their generalization error bounds for general non-stationary time series. We then derive minimax lower bounds for estimating mean functions belonging to a wide class of nonlinear autoregressive (AR) models, including nonlinear generalized additive AR, single-index, and threshold AR models. Building upon these results, we show that the sparse-penalized DNN estimator is adaptive and attains the minimax optimal rates up to a poly-logarithmic factor for many nonlinear AR models. Through numerical simulations, we demonstrate the usefulness of the DNN methods for estimating nonlinear AR models with intrinsic low-dimensional structures and discontinuous or rough mean functions, which is consistent with our theory.
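
The following sketch shows the flavour of a sparse-penalized DNN estimator on a threshold AR(1) model: an MLP fit to predict X_t from X_{t-1} under an L1 weight penalty. The architecture, penalty weight, and data-generating process are illustrative choices, not the paper's estimator or theory.

```python
# Minimal sketch: L1-penalized MLP regression of X_t on X_{t-1}
# for a threshold AR(1) process with a rough (kinked) mean function.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
n = 2000
x = np.zeros(n)
for t in range(1, n):                      # threshold AR(1)
    m = 0.7 * x[t-1] if x[t-1] > 0 else -0.4 * x[t-1]
    x[t] = m + rng.normal(0, 0.3)

X = torch.tensor(x[:-1, None], dtype=torch.float32)
Y = torch.tensor(x[1:, None], dtype=torch.float32)

net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                    nn.Linear(32, 32), nn.ReLU(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
lam = 1e-4                                 # sparsity (L1) penalty weight

for epoch in range(300):
    opt.zero_grad()
    mse = nn.functional.mse_loss(net(X), Y)
    l1 = sum(p.abs().sum() for p in net.parameters())
    (mse + lam * l1).backward()
    opt.step()

print("final MSE:", float(nn.functional.mse_loss(net(X), Y)))
```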

In this work, we study the multi-agent assortment optimization problem in the two-sided sequential matching model introduced by Ashlagi et al. (2022). The setting is the following: we (the platform) offer a menu of suppliers to each customer. Every customer then selects, simultaneously and independently, either to match with a supplier or to remain unmatched. Each supplier observes the subset of customers that selected them and chooses either to match with one of these customers or to leave the system. A match therefore takes place when a customer and a supplier sequentially select each other. Each agent's behavior is probabilistic and determined by a discrete choice model. Our goal is to choose an assortment family that maximizes the expected revenue of the matching. Given the hardness of the problem, we show a $1-1/e$ approximation factor for the heterogeneous setting in which customers follow general choice models and suppliers follow a general choice model whose demand function is monotone and submodular. Our approach is flexible enough to allow for different assortment constraints and for a revenue objective function. Furthermore, we design an algorithm that beats the $1-1/e$ barrier and is, in fact, asymptotically optimal when suppliers follow the classic multinomial-logit choice model and are sufficiently selective. Finally, we provide further results and insights. Notably, in the unconstrained setting where customers and suppliers follow multinomial-logit models, we design a simple and efficient approximation algorithm that appropriately randomizes over a family of nested assortments. We also analyze various aspects of the matching market model that lead to several operational insights, such as the fact that matching platforms can benefit from allowing the more selective agents to initiate the matchmaking process.
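
To illustrate the two-stage dynamic being optimized, the sketch below simulates the expected revenue of a fixed assortment when both sides follow multinomial-logit (MNL) choice models. The utilities, revenues, and menus are fabricated for the example; the paper's algorithms search over such assortments rather than evaluating a fixed one.

```python
# Minimal sketch: Monte Carlo evaluation of a menu family under the
# two-sided sequential matching dynamic with MNL choices on both sides.
import numpy as np

rng = np.random.default_rng(0)
nC, nS = 5, 3
u_cust = rng.uniform(0, 2, (nC, nS))   # customer c's utility for supplier s
u_supp = rng.uniform(0, 2, (nS, nC))   # supplier s's utility for customer c
revenue = rng.uniform(1, 5, (nC, nS))  # platform revenue of match (c, s)

def mnl_pick(utilities, rng):
    """MNL choice among offered options; -1 means the outside option."""
    w = np.append(np.exp(utilities), 1.0)   # outside option has weight 1
    k = rng.choice(len(w), p=w / w.sum())
    return -1 if k == len(w) - 1 else k

def simulate(menus, n_runs=20000):
    total = 0.0
    for _ in range(n_runs):
        interested = {s: [] for s in range(nS)}
        for c in range(nC):                  # stage 1: customers choose
            k = mnl_pick(u_cust[c, menus[c]], rng)
            if k >= 0:
                interested[menus[c][k]].append(c)
        for s, cands in interested.items():  # stage 2: suppliers choose
            if cands:
                k = mnl_pick(u_supp[s, cands], rng)
                if k >= 0:
                    total += revenue[cands[k], s]
    return total / n_runs

menus = [list(range(nS))] * nC               # offer everyone everything
print("expected revenue:", round(simulate(menus), 3))
```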

Smoothing splines are twice differentiable by construction, so they cannot capture potential discontinuities in the underlying signal. In this work, we consider a special case of the weak rod model of Blake and Zisserman (1987) that allows for discontinuities, penalizing their number by a linear term. The corresponding estimates are cubic smoothing splines with discontinuities (CSSD), which serve as representations of piecewise smooth signals and facilitate exploratory data analysis. However, computing the estimates requires solving a non-convex optimization problem, and so far efficient and exact solvers exist only for a discrete approximation based on equidistantly sampled data. In this work, we propose an efficient solver for the continuous minimization problem with non-equidistantly sampled data. Its worst-case complexity is quadratic in the number of data points, and if the number of detected discontinuities scales linearly with the signal length, we observe linear growth in runtime. This efficiency makes it possible to use cross-validation for automatic selection of the hyperparameters within a reasonable time frame on standard hardware. We provide a reference implementation and supplementary material, and demonstrate the applicability of the approach for the aforementioned tasks using both simulated and real data.
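
The dynamic-programming structure behind such solvers can be sketched as follows: each candidate segment is fit by a cubic smoothing spline and every additional segment boundary costs gamma. This naive O(n^2) version uses SciPy's make_smoothing_spline as the per-segment fit and omits the paper's exact CSSD functional and its acceleration techniques; all parameters are illustrative.

```python
# Minimal sketch: segmentation by dynamic programming, with a cubic
# smoothing spline fit per segment and a penalty gamma per discontinuity.
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1, 120))              # non-equidistant samples
y = np.sin(6 * t) + (t > 0.5) * 2.0 + rng.normal(0, 0.1, t.size)

lam, gamma, min_len = 1e-4, 1.0, 5               # min_len: spline needs >= 5 pts

def seg_cost(i, j):
    """Smoothing-spline residual on samples i..j-1."""
    if j - i < min_len:
        return np.inf
    spl = make_smoothing_spline(t[i:j], y[i:j], lam=lam)
    return float(np.sum((spl(t[i:j]) - y[i:j]) ** 2))

n = t.size
F = np.full(n + 1, np.inf)       # F[j] = best cost for the first j samples
F[0] = -gamma                    # offsets the first segment's +gamma, so
prev = np.zeros(n + 1, dtype=int)  # gamma counts discontinuities only
for j in range(1, n + 1):
    for i in range(j):
        c = F[i] + gamma + seg_cost(i, j)
        if c < F[j]:
            F[j], prev[j] = c, i

cuts, j = [], n                  # backtrack the optimal segment starts
while j > 0:
    cuts.append(prev[j]); j = prev[j]
print("segment starts:", sorted(cuts))
```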

Re-randomization has gained popularity as a tool for experiment-based causal inference because of its superior covariate balance and statistical efficiency compared to classic randomized experiments. However, the basic re-randomization method, known as ReM, and many of its extensions have been deemed sub-optimal because they fail to prioritize covariates that are more strongly associated with the potential outcomes. To address this limitation and design more efficient re-randomization procedures, a more precise quantification of covariate heterogeneity and its impact on the causal effect estimator is greatly needed. This work fills this gap with a Bayesian criterion for re-randomization and a series of novel re-randomization procedures derived under that criterion. Both theoretical analyses and numerical studies show that the proposed procedures significantly outperform existing ReM-based procedures in effectively balancing covariates and precisely estimating the unknown causal effect.
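
The basic re-randomization loop is easy to state: redraw the treatment assignment until a covariate imbalance criterion falls below a threshold. In the sketch below, the diagonal weight matrix standing in for covariate importance is only an illustrative proxy for the paper's Bayesian criterion; data and thresholds are fabricated.

```python
# Minimal sketch: re-randomization with a weighted imbalance criterion
# that prioritizes some covariates over others.
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 4
X = rng.normal(size=(n, p))

def imbalance(X, z, W):
    """Weighted Mahalanobis-style distance between group covariate means."""
    d = X[z == 1].mean(axis=0) - X[z == 0].mean(axis=0)
    return float(d @ W @ d)

W = np.diag([4.0, 2.0, 1.0, 0.5])   # hypothetical covariate priorities
threshold, draws = 0.05, 0

while True:                          # redraw until balance is acceptable
    z = rng.permutation(np.repeat([0, 1], n // 2))
    draws += 1
    if imbalance(X, z, W) <= threshold:
        break
print(f"accepted after {draws} draws, imbalance = {imbalance(X, z, W):.4f}")
```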

In this article we consider the estimation of static parameters for a partially observed diffusion process with discrete-time observations over a fixed time interval. In particular, one must typically time-discretize the partially observed diffusion process, work with the resulting biased model, and maximize the corresponding log-likelihood. Using a novel double randomization scheme based upon Markovian stochastic approximation, we develop a new method to estimate the static parameters unbiasedly, that is, to obtain the maximum likelihood estimator with no time-discretization bias. Under assumptions we prove that our estimator is unbiased, and we investigate the method in several numerical examples, showing that it can empirically outperform existing unbiased methodology.
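
The core debiasing idea can be illustrated on a toy problem: a single-term randomized estimator draws a random discretization level L and returns (Y_L - Y_{L-1}) / P(L), whose expectation telescopes to the limit of the Euler estimates. The Ornstein-Uhlenbeck example and the geometric level distribution below are illustrative; the paper embeds this kind of randomization inside a double randomization scheme with Markovian stochastic approximation for the MLE, which this toy does not reproduce.

```python
# Minimal sketch: single-term randomized debiasing of Euler time
# discretization for E[X_T] under an Ornstein-Uhlenbeck SDE.
import numpy as np

rng = np.random.default_rng(0)
T, theta, mu, sigma, x0 = 1.0, 1.0, 0.5, 0.3, 0.0

def euler_pair(level):
    """Coupled Euler estimates at levels `level` and `level - 1`,
    driven by the same Brownian increments (step 2**-level)."""
    nf = 2 ** level
    dt = T / nf
    dW = rng.normal(0, np.sqrt(dt), nf)
    xf = x0
    for w in dW:
        xf += theta * (mu - xf) * dt + sigma * w
    if level == 0:
        return xf, 0.0                        # convention: Y_{-1} = 0
    xc = x0
    for w in dW.reshape(-1, 2).sum(axis=1):   # pairwise-summed increments
        xc += theta * (mu - xc) * (2 * dt) + sigma * w
    return xf, xc

p = 0.6 * 0.4 ** np.arange(20)                # P(L = l), geometric
p /= p.sum()

def unbiased_sample():
    L = rng.choice(len(p), p=p)
    yf, yc = euler_pair(L)
    return (yf - yc) / p[L]                   # E = lim_l Y_l by telescoping

est = np.mean([unbiased_sample() for _ in range(20000)])
print("randomized estimate of E[X_T]:", round(est, 4))
print("exact value                  :",
      round(mu + (x0 - mu) * np.exp(-theta * T), 4))
```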

Although in theory we can decide whether a given D-finite function is transcendental, transcendence proofs remain a challenge in practice. Typically, transcendence is certified by checking certain incomplete sufficient conditions. In this paper we propose an additional such condition which catches some cases on which other tests fail.

In this work, we propose to apply the framework of graph neural networks (GNNs) to predict the dynamics of a rolling element bearing. This approach offers generalizability and interpretability and has the potential for scalable use in real-time operational digital-twin systems that monitor the health state of rotating machines. By representing the bearing's components as nodes in a graph, the GNN can effectively model the complex relationships and interactions among them. We generate the training data for the GNN with a dynamic spring-mass-damper model of a bearing, in which discrete masses represent components such as the rolling elements and the inner and outer raceways, while a Hertzian contact model is employed to calculate the forces between these components. We evaluate the learning and generalization capabilities of the proposed GNN framework by testing bearing configurations that deviate from the training configurations. Through this approach, we demonstrate the effectiveness of the GNN-based method in accurately predicting the dynamics of rolling element bearings, highlighting its potential for real-time health monitoring of rotating machinery.
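
The sketch below shows one message-passing step over a bearing-like graph, with the races and rolling elements as nodes and contacts as edges. The random features, tiny weight matrices, and mean aggregation are illustrative; the paper trains a full GNN on simulated spring-mass-damper trajectories.

```python
# Minimal sketch: one round of graph message passing over a bearing graph
# (node 0: inner race, node 1: outer race, nodes 2-5: rolling elements).
import numpy as np

rng = np.random.default_rng(0)
edges = [(0, k) for k in range(2, 6)] + [(1, k) for k in range(2, 6)]
edges += [(j, i) for i, j in edges]          # make the graph undirected

n_nodes, d = 6, 8
h = rng.normal(size=(n_nodes, d))            # node states (e.g. pos/vel)
W_self = rng.normal(size=(d, d)) / d
W_msg = rng.normal(size=(d, d)) / d

def message_pass(h):
    """Each node averages its neighbours' messages, then updates."""
    agg = np.zeros_like(h)
    deg = np.zeros(n_nodes)
    for i, j in edges:
        agg[j] += h[i] @ W_msg
        deg[j] += 1
    return np.tanh(h @ W_self + agg / deg[:, None])

for _ in range(3):                            # a 3-layer (unrolled) GNN
    h = message_pass(h)
print("updated node states shape:", h.shape)
```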

Graphical causal inference as pioneered by Judea Pearl arose from research on artificial intelligence (AI), and for a long time had little connection to the field of machine learning. This article discusses where links have been and should be established, introducing key concepts along the way. It argues that the hard open problems of machine learning and AI are intrinsically related to causality, and explains how the field is beginning to understand them.
