One of the main features of interest in analysing the light curves of stars is their underlying periodic behaviour. The corresponding observations are a complex type of time series, with unequally spaced time points that are sometimes accompanied by varying measures of accuracy. The main tools for analysing this type of data rely on periodogram-like functions, constructed so that peaks indicate the presence of a potential period. In this paper, we explore a particular periodogram for irregularly observed time series data, similar to Thieler et al. (2013). We identify potential periods at the appropriate peaks and, more importantly, with a quantifiable uncertainty. Our approach is shown to generalise easily to non-parametric methods, including a weighted Gaussian process regression periodogram, and we also extend it to correlated background noise. The proposed method for period detection relies on a test based on quadratic forms with normally distributed components. We implement the saddlepoint approximation as a faster and more accurate alternative to the simulation-based methods currently in use. A power analysis of the testing methodology is reported, together with applications to light curves from the Hunting Outbursting Young Stars citizen science project.
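As a rough illustration of the periodogram idea for unequally spaced data (not the exact construction of the paper), the sketch below fits a sinusoid by weighted least squares at each trial frequency and records the drop in the weighted residual sum of squares; the peak locates the period. The sampling pattern, weights, and frequency grid are illustrative assumptions.

```python
import numpy as np

def wls_periodogram(t, y, w, freqs):
    """Weighted least-squares periodogram for unequally spaced data.

    At each trial frequency f, fit a + b*cos(2*pi*f*t) + c*sin(2*pi*f*t)
    by weighted least squares; the periodogram value is the fractional
    reduction in the weighted RSS relative to a constant-only fit.
    """
    W = np.asarray(w, dtype=float)
    y = np.asarray(y, dtype=float)
    ybar = np.sum(W * y) / np.sum(W)
    rss0 = np.sum(W * (y - ybar) ** 2)            # constant-only fit
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        X = np.column_stack([np.ones_like(t),
                             np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        # Solve the weighted normal equations (X^T W X) beta = X^T W y.
        A = X.T @ (W[:, None] * X)
        b = X.T @ (W * y)
        beta = np.linalg.solve(A, b)
        rss = np.sum(W * (y - X @ beta) ** 2)
        power[i] = (rss0 - rss) / rss0
    return power

# Irregular sampling of a sinusoid with a known period of 2.5 time units.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 50, 200))
y = np.sin(2 * np.pi * t / 2.5) + 0.2 * rng.standard_normal(200)
w = np.full(200, 1.0)                              # equal measurement accuracy
freqs = np.linspace(0.05, 1.0, 500)
pgram = wls_periodogram(t, y, w, freqs)
best = freqs[np.argmax(pgram)]                     # should sit near 1/2.5 = 0.4
```

The paper's contribution is the uncertainty quantification around such peaks via a quadratic-form test with a saddlepoint approximation; the sketch only shows the peak-finding step.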

We consider the sample complexity of learning with adversarial robustness. Most prior theoretical results for this problem have considered a setting where different classes in the data are close together or overlapping. Motivated by real applications, we consider, in contrast, the well-separated case, where there exists a classifier with perfect accuracy and robustness, and show that the sample complexity tells an entirely different story. Specifically, for linear classifiers, we exhibit a large class of well-separated distributions where the expected robust loss of any algorithm is at least $\Omega(\frac{d}{n})$, whereas the max margin algorithm has expected standard loss $O(\frac{1}{n})$. This shows a gap between the standard and robust losses that cannot be obtained via prior techniques. Additionally, we present an algorithm that, given an instance where the robustness radius is much smaller than the gap between the classes, gives a solution with expected robust loss $O(\frac{1}{n})$. This shows that for very well-separated data, convergence rates of $O(\frac{1}{n})$ are achievable, which is not the case otherwise. Our results apply to robustness measured in any $\ell_p$ norm with $p > 1$ (including $p = \infty$).

The ubiquitous availability of mobile devices capable of location tracking has led to a significant rise in the collection of GPS data. Several compression methods have been developed to reduce the amount of storage needed while retaining the important information. In this paper, we present an LSTM-autoencoder-based approach to compress and reconstruct GPS trajectories, evaluated on both a gaming and a real-world dataset. We consider various compression ratios and trajectory lengths. The performance is compared to other trajectory compression algorithms, e.g., Douglas-Peucker. Overall, the results indicate that our approach significantly outperforms Douglas-Peucker in terms of the discrete Fr\'echet distance and dynamic time warping. Furthermore, by reconstructing every point, albeit lossily, the proposed methodology offers multiple advantages over traditional methods.
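For reference, a minimal sketch of the Douglas-Peucker baseline mentioned above (the track and tolerance are illustrative):

```python
def douglas_peucker(points, epsilon):
    """Recursive Douglas-Peucker polyline simplification.

    Keeps the endpoints and any point whose perpendicular distance to the
    chord exceeds epsilon, recursing on the two sub-segments.
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    # Find the interior point farthest from the chord.
    dmax, imax = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > dmax:
            dmax, imax = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]
    left = douglas_peucker(points[: imax + 1], epsilon)
    right = douglas_peucker(points[imax:], epsilon)
    return left[:-1] + right   # drop the duplicated split point

track = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
simplified = douglas_peucker(track, epsilon=0.5)
```

Unlike the learned autoencoder, this baseline drops points entirely rather than reconstructing each one, which is the contrast the abstract draws.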

Electroencephalographic (EEG) signals provide highly informative data on brain activity and function. However, their heterogeneity and high dimensionality can be an obstacle to interpretation. Introducing a priori knowledge seems the best option for mitigating high-dimensionality problems, but it may discard some information and patterns present in the data, while data heterogeneity remains an open issue that often makes generalization difficult. In this study, we propose a genetic algorithm (GA) for feature selection that can be used with a supervised or unsupervised approach. Our proposal considers three different fitness functions without relying on expert knowledge. Starting from two publicly available datasets on cognitive workload and motor movement/imagery, the EEG signals are processed, normalized, and their features computed in the time, frequency, and time-frequency domains. Feature selection is performed by applying our GA and is compared with two benchmark techniques. The results show that different configurations of our proposal achieve better results than the benchmarks in terms of overall performance and feature reduction. Moreover, the proposed GA, based on a novel fitness function presented here, outperforms the benchmarks when the two datasets are merged, showing the effectiveness of our proposal on heterogeneous data.
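A minimal sketch of a GA wrapper for feature selection, on synthetic data with a simple class-separability fitness (an assumption for illustration, not one of the paper's three fitness functions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: features 0 and 1 are informative, the rest are noise.
n, d = 200, 10
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, d))
X[:, 0] += 3 * y
X[:, 1] -= 3 * y

def fitness(mask):
    """Class-mean separability on the selected features, minus a size penalty."""
    if not mask.any():
        return -np.inf
    Xs = X[:, mask]
    mu0, mu1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    return np.linalg.norm(mu0 - mu1) - 0.1 * mask.sum()

pop = rng.random((30, d)) < 0.5             # random initial population of masks
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]   # truncation selection (elitist)
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, d)            # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        children.append(child ^ (rng.random(d) < 0.05))   # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
```

The GA reliably selects the two informative features here; the paper's contribution lies in the fitness functions and in handling heterogeneous EEG data, which this toy example does not capture.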

In the context of missing data, the identifiability or "recoverability" of the average causal effect (ACE) depends on causal and missingness assumptions. The latter can be depicted by adding variable-specific missingness indicators to causal diagrams, creating "missingness-directed acyclic graphs" (m-DAGs). Previous research described ten canonical m-DAGs, representing typical multivariable missingness mechanisms in epidemiological studies, and determined the recoverability of the ACE in the absence of effect modification. We extend this research by determining the recoverability of the ACE in settings with effect modification and by conducting a simulation study evaluating the performance of widely used missing data methods when estimating the ACE using correctly specified g-computation, which has not been studied previously. The methods assessed were complete case analysis (CCA) and various multiple imputation (MI) implementations differing in their degree of compatibility with the outcome model used in g-computation. Simulations were based on an example from the Victorian Adolescent Health Cohort Study (VAHCS), where interest was in estimating the ACE of adolescent cannabis use on mental health in young adulthood. In the canonical m-DAGs that exclude unmeasured common causes of missingness indicators, we found the ACE to be recoverable if no incomplete variable causes its own missingness, and non-recoverable otherwise. In addition, the simulation showed that compatible MI approaches may enable approximately unbiased ACE estimation, unless the outcome causes its own missingness or causes the missingness of a variable that in turn causes the outcome's missingness. Researchers must consider sensitivity analysis methods incorporating external information in the latter settings. The VAHCS case study illustrates the practical implications of these findings.
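To make the g-computation step concrete, here is a minimal sketch on simulated complete data (no missingness, which is the complication the paper studies): fit an outcome model, predict under both exposure levels for everyone, and average the difference. The data-generating values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: confounder L affects both exposure A and outcome Y.
n = 5000
L = rng.standard_normal(n)
A = (rng.random(n) < 1 / (1 + np.exp(-L))).astype(float)    # P(A=1) depends on L
Y = 2.0 * A + 1.5 * L + rng.standard_normal(n)               # true ACE = 2.0

# G-computation: fit E[Y | A, L], then standardise over the distribution of L.
X = np.column_stack([np.ones(n), A, L])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
X1 = np.column_stack([np.ones(n), np.ones(n), L])    # set everyone to exposed
X0 = np.column_stack([np.ones(n), np.zeros(n), L])   # set everyone to unexposed
ace_gcomp = np.mean(X1 @ beta - X0 @ beta)

# The naive unadjusted contrast is confounded by L and overestimates the ACE.
ace_naive = Y[A == 1].mean() - Y[A == 0].mean()
```

The g-computation estimate recovers the true effect of 2.0 while the unadjusted contrast does not; the paper's question is how this estimator behaves when CCA or MI is layered on top under different m-DAGs.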

To comprehensively evaluate a public policy intervention, researchers must consider the effects of the policy not just on the implementing region, but also nearby, indirectly-affected regions. For example, an excise tax on sweetened beverages in Philadelphia was shown to not only be associated with a decrease in volume sales of taxed beverages in Philadelphia, but also an increase in sales in bordering counties not subject to the tax. The latter association may be explained by cross-border shopping behaviors of Philadelphia residents and indicate a causal effect of the tax on nearby regions, which may offset the total effect of the intervention. To estimate causal effects in this setting, we extend difference-in-differences methodology to account for such interference between regions and adjust for potential confounding present in non-experimental evaluations. Our doubly robust estimators for the average treatment effect on the treated and neighboring control relax standard assumptions on interference and model specification. We apply these methods to the Philadelphia beverage tax study and find more pronounced effects of the tax on Philadelphia and neighboring county pharmacies than previously estimated. We also use our methods to explore the heterogeneity of effects across spatial and demographic features.
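For orientation, the canonical 2x2 difference-in-differences contrast (without the interference adjustment or doubly robust machinery described above) can be sketched on simulated panel data with an assumed treatment effect of -3:

```python
import numpy as np

rng = np.random.default_rng(3)

# Units observed pre- and post-policy; roughly half are in the treated region.
n = 1000
treated = rng.random(n) < 0.5
pre = 10 + 2 * treated + rng.standard_normal(n)                 # baseline levels may differ
post = pre + 1.0 + (-3.0) * treated + rng.standard_normal(n)    # common trend +1, effect -3

# Canonical 2x2 DiD: (pre-to-post change in treated) minus (change in controls).
# Under parallel trends, the common +1 drift cancels, isolating the -3 effect.
did = (post[treated] - pre[treated]).mean() - (post[~treated] - pre[~treated]).mean()
```

This cancellation of the common trend is what fails under interference: if controls bordering the treated region are themselves affected (as with cross-border shopping), the control change is contaminated, which is the problem the paper's estimators address.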

In this letter, we propose a Gaussian mixture model (GMM)-based channel estimator that is learned on imperfect training data, i.e., training data comprised solely of noisy and sparsely allocated pilot observations. In a practical application, recent pilot observations at the base station (BS) can be utilized for training. This is in sharp contrast to state-of-the-art machine learning (ML) techniques, where a reference dataset consisting of perfect channel state information (CSI) labels is a prerequisite, which is generally unaffordable. In particular, we propose an adapted training procedure for fitting the GMM, a generative model that represents the distribution of all potential channels associated with a specific BS cell. To this end, the necessary modifications of the underlying expectation-maximization (EM) algorithm are derived. Numerical results show that the proposed estimator performs close to the case where perfect CSI is available for training, and exhibits higher robustness against imperfections in the training data than state-of-the-art ML techniques.
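A minimal sketch of the standard EM algorithm for a one-dimensional two-component GMM (the baseline that the letter's adapted procedure modifies for noisy pilot observations; data and initialisation are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-component 1-D mixture: means -2 and +2, unit variances, equal weights.
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])

# Standard EM for a two-component Gaussian mixture.
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])
for _ in range(100):
    # E-step: posterior responsibility of each component for each sample.
    dens = pi / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted maximum-likelihood updates.
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    pi = nk / len(x)
```

The letter's variant changes the E- and M-steps so that the mixture is fitted from noisy, sparsely observed pilots rather than clean samples like `x` here.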

The 2D Euler equations are a simple but rich set of non-linear PDEs describing the evolution of an ideal inviscid fluid in which one dimension is negligible. Solving these equations numerically can be extremely demanding, and several techniques for obtaining fast and accurate simulations have been developed over the last decades. In this paper, we present a novel approach that combines recent developments in stochastic model reduction and conservative semi-discretization of the Euler equations. In particular, starting from the Zeitlin model on the 2-sphere, we derive reduced dynamics for the large scales and close the equations either deterministically or with a suitable stochastic term. Numerical experiments show that, after an initial turbulent regime, the influence of the small scales on the large scales is negligible, even though a non-zero transfer of energy among different modes is present.

Tight estimation of the Lipschitz constant for deep neural networks (DNNs) is useful in many applications ranging from robustness certification of classifiers to stability analysis of closed-loop systems with reinforcement learning controllers. Existing methods in the literature for estimating the Lipschitz constant suffer from either lack of accuracy or poor scalability. In this paper, we present a convex optimization framework to compute guaranteed upper bounds on the Lipschitz constant of DNNs both accurately and efficiently. Our main idea is to interpret activation functions as gradients of convex potential functions. Hence, they satisfy certain properties that can be described by quadratic constraints. This particular description allows us to pose the Lipschitz constant estimation problem as a semidefinite program (SDP). The resulting SDP can be adapted to increase either the estimation accuracy (by capturing the interaction between activation functions of different layers) or scalability (by decomposition and parallel implementation). We illustrate the utility of our approach with a variety of experiments on randomly generated networks and on classifiers trained on the MNIST and Iris datasets. In particular, we experimentally demonstrate that our Lipschitz bounds are the most accurate compared to those in the literature. We also study the impact of adversarial training methods on the Lipschitz bounds of the resulting classifiers and show that our bounds can be used to efficiently provide robustness guarantees.
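As a point of comparison for the SDP-based bound, the naive Lipschitz upper bound, the product of the layers' spectral norms, can be sketched as follows (valid because ReLU is 1-Lipschitz; network sizes are illustrative). The SDP bound in the paper is tighter precisely because it exploits the quadratic constraints on the activations instead of this worst-case product.

```python
import numpy as np

rng = np.random.default_rng(5)

# Random weights for a 3-layer fully connected ReLU network (8 -> 16 -> 16 -> 1).
weights = [rng.standard_normal((16, 8)),
           rng.standard_normal((16, 16)),
           rng.standard_normal((1, 16))]

# Naive Lipschitz upper bound: product of spectral norms of the weight
# matrices, since each ReLU layer is 1-Lipschitz.
naive_bound = 1.0
for W in weights:
    naive_bound *= np.linalg.norm(W, 2)   # largest singular value

def net(x):
    """Forward pass of the ReLU network."""
    for W in weights[:-1]:
        x = np.maximum(W @ x, 0.0)
    return weights[-1] @ x

# Empirical sanity check: finite-difference slopes never exceed the bound.
slopes = []
for _ in range(200):
    a, b = rng.standard_normal(8), rng.standard_normal(8)
    slopes.append(np.abs(net(a) - net(b)).item() / np.linalg.norm(a - b))
```

The gap between `max(slopes)` and `naive_bound` illustrates how loose the product bound can be, which is the slack the SDP formulation recovers.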

In this paper, we consider a Block-Basu type bivariate Pareto distribution. In the standard manner, a Marshall-Olkin type singular bivariate distribution is first constructed; then, by removing the singular component as in the Block and Basu model, an absolutely continuous BB-BVPA model is obtained. Location and scale parameters are also introduced, so the model has seven parameters in total. Different properties of this absolutely continuous distribution are derived. Since the maximum likelihood estimators of the parameters cannot be expressed in closed form, we propose an EM algorithm to compute the estimators of the model parameters. Some simulation experiments are performed for illustrative purposes, and the model is fitted to rainfall data in the context of landslide risk estimation.
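The singular component that the Block-Basu construction removes can be illustrated with the exponential Marshall-Olkin mechanism (the Pareto version in the paper arises by transformation; the rates here are illustrative): a common shock hitting both components gives exact ties positive probability.

```python
import numpy as np

rng = np.random.default_rng(6)

# Marshall-Olkin construction via independent exponential shocks: a common
# shock U0 affects both components, so P(X = Y) > 0 and the joint law has a
# singular component concentrated on the diagonal.
n = 100_000
lam0, lam1, lam2 = 1.0, 1.5, 2.0
u0 = rng.exponential(1 / lam0, n)
u1 = rng.exponential(1 / lam1, n)
u2 = rng.exponential(1 / lam2, n)
x = np.minimum(u0, u1)
y = np.minimum(u0, u2)

# The fraction of exact ties estimates the singular mass lam0/(lam0+lam1+lam2).
tie_frac = np.mean(x == y)
```

The Block-Basu step conditions this joint law on the absolutely continuous part (X != Y), which is what yields the BB-BVPA model after the Pareto transformation and the addition of location and scale parameters.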

High spectral dimensionality and the shortage of annotations make hyperspectral image (HSI) classification a challenging problem. Recent studies suggest that convolutional neural networks can learn discriminative spatial features, which play a paramount role in HSI interpretation. However, most of these methods ignore the distinctive spectral-spatial characteristics of hyperspectral data. In addition, a large amount of unlabeled data remains an unexploited gold mine for efficient data use. Therefore, we propose an integration of generative adversarial networks (GANs) and probabilistic graphical models for HSI classification. Specifically, we use a spectral-spatial generator and a discriminator to identify the land cover categories of hyperspectral cubes. Moreover, to take advantage of the large amount of unlabeled data, we adopt a conditional random field to refine the preliminary classification results generated by the GAN. Experimental results obtained on two commonly studied datasets demonstrate that the proposed framework achieves encouraging classification accuracy using a small amount of training data.