
We present two algorithms designed to learn a pattern of correspondence between two data sets in situations where it is desirable to match elements that exhibit a relationship belonging to a known parametric model. In the motivating case study, the challenge is to better understand micro-RNA (miRNA) regulation in the striatum of Huntington's disease (HD) model mice. The two data sets contain miRNA and messenger-RNA (mRNA) data, respectively, each data point consisting of a multi-dimensional profile. The biological hypothesis is that if a miRNA induces the degradation of a target mRNA, blocks its translation into proteins, or both, then the profile of the former should be similar to minus the profile of the latter (a particular form of affine relationship). The algorithms unfold in two stages. During the first stage, an optimal transport plan P and an optimal affine transformation are learned, using the Sinkhorn-Knopp algorithm and a mini-batch gradient descent. During the second stage, P is exploited to derive either several co-clusters or several sets of matched elements. We share code that implements our algorithms. A simulation study illustrates how they work and perform. A brief summary of the real data application in the motivating case study further illustrates the applicability and interest of the algorithms.
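As a rough illustration of the first stage, here is a minimal Sinkhorn-Knopp sketch in Python for computing an entropy-regularized transport plan between two profile sets. The cost matrix, regularization parameter `eps`, and the toy data are hypothetical stand-ins; this is not the authors' released code, and it omits the learned affine transformation (the cost below simply encodes the "minus profile" hypothesis directly).

```python
import numpy as np

def sinkhorn_plan(cost, a, b, eps=0.05, n_iter=200):
    """Entropy-regularized optimal transport via Sinkhorn-Knopp.

    cost : (n, m) pairwise cost matrix, e.g. squared distances between
           miRNA profiles and (transformed) mRNA profiles.
    a, b : marginal weights of the two data sets (each summing to 1).
    Returns an (n, m) transport plan P with row sums ~a and column sums ~b.
    """
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):          # alternating marginal scalings
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

# Toy usage: match 5 miRNA profiles to 7 mRNA profiles (hypothetical data).
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(5, 3)), rng.normal(size=(7, 3))
C = ((X[:, None, :] + Y[None, :, :]) ** 2).sum(-1)   # cost under the "minus profile" hypothesis
P = sinkhorn_plan(C, np.full(5, 1 / 5), np.full(7, 1 / 7))
```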

Related content

The Exact Circular Pattern Matching (ECPM) problem consists of reporting every occurrence of a rotation of a pattern $P$ in a text $T$. In many real-world applications, specifically in computational biology, circular rotations are of interest because of their prominence in viral DNA. Thus, given no restrictions on pre-processing time, how quickly all such circular rotation occurrences can be reported is of interest to many areas of study. We highlight, to the best of our knowledge, a novel approach to the ECPM problem and present four data structures that accompany this approach, each with its own time-space trade-offs, together with experimental results to determine the most computationally feasible data structure.
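For orientation, the sketch below shows the classic baseline observation behind circular matching: a string of length $|P|$ is a rotation of $P$ if and only if it is a substring of $P + P$. This naive scan is only an illustration of the problem statement, not the index-based data structures proposed in the paper.

```python
def circular_occurrences(pattern: str, text: str):
    """Report every position in `text` where some rotation of `pattern` occurs.

    Baseline: a length-|P| window of the text matches a rotation of P exactly
    when it is a substring of P + P.  This is O(|T| * |P|) with no preprocessing.
    """
    m = len(pattern)
    doubled = pattern + pattern
    hits = []
    for i in range(len(text) - m + 1):
        if text[i:i + m] in doubled:
            hits.append(i)
    return hits

print(circular_occurrences("abc", "xcabybca"))   # [1, 5]: matches "cab" and "bca"
```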

We consider the problem of extracting joint and individual signals from multi-view data, that is, data collected from different sources on matched samples. While existing methods for multi-view data decomposition handle data matched only by samples, we focus on double-matched multi-view data (matched by both samples and source features). Our motivating example is miRNA data collected from both primary tumor and normal tissues of the same subjects; the measurements from the two tissues are thus matched both by subjects and by miRNAs. Our proposed double-matched matrix decomposition allows us to simultaneously extract joint and individual signals across subjects, as well as joint and individual signals across miRNAs. Our estimation approach takes advantage of double-matching by formulating a new type of optimization problem with explicit row space and column space constraints, for which we develop an efficient iterative algorithm. Numerical studies indicate that taking advantage of double-matching leads to superior signal estimation performance compared to existing multi-view decomposition methods based on single matching. We apply our method to the miRNA data as well as to data from English Premier League soccer matches, and find joint and individual multi-view signals that align with domain-specific knowledge.
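To make the double-matched layout concrete, the toy sketch below builds two views that share both rows (subjects) and columns (miRNAs) and extracts a joint subject-level signal by a naive SVD of the column-wise concatenation. All names and sizes are hypothetical, and this is emphatically not the authors' constrained decomposition, which also recovers joint column spaces and individual parts.

```python
import numpy as np

# Two views matched by both rows (subjects) and columns (miRNAs):
# X1 = tumor tissue, X2 = normal tissue, both n_subjects x n_miRNAs (toy sizes).
rng = np.random.default_rng(1)
n, p, r = 20, 30, 2
shared_scores = rng.normal(size=(n, r))           # joint signal across subjects
X1 = shared_scores @ rng.normal(size=(r, p)) + 0.1 * rng.normal(size=(n, p))
X2 = shared_scores @ rng.normal(size=(r, p)) + 0.1 * rng.normal(size=(n, p))

# Naive estimate of the joint row space: leading left singular vectors of the
# concatenation.  (The paper instead solves an optimization problem with explicit
# row and column space constraints via an iterative algorithm.)
U, s, _ = np.linalg.svd(np.hstack([X1, X2]), full_matrices=False)
joint_subject_scores = U[:, :r]
```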

This paper describes an energy-preserving and globally time-reversible code for weakly compressible smoothed particle hydrodynamics (SPH). We do not add any dynamics to Monaghan's original SPH scheme at the level of the ordinary differential equations; instead, we show how to discretize the equations by using a corrected expression for the density and by invoking a symplectic integrator. Moreover, to achieve global-in-time reversibility, we have to correct the initial state, implement a conservative fluid-wall interaction, and use fixed-point arithmetic. Although the numerical scheme is reversible globally in time (solvable backwards in time while recovering the initial conditions), we observe thermalization of the particle velocities and growth of the Boltzmann entropy. In other words, when we do not see all the possible details, as with the Boltzmann entropy, which depends only on the one-particle distribution function, we observe the emergence of the second law of thermodynamics (irreversible behavior) from purely reversible dynamics.
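A minimal sketch of the symplectic ingredient is shown below: one velocity-Verlet (leapfrog) step for a generic particle system, with a harmonic toy force as a stand-in for SPH pressure forces. The paper's actual scheme additionally involves the corrected density, the conservative wall interaction, and fixed-point arithmetic, none of which appear here.

```python
import numpy as np

def velocity_verlet_step(x, v, force, m, dt):
    """One symplectic velocity-Verlet step for particles with positions x,
    velocities v, masses m, and a force function force(x) -> same shape as x."""
    a = force(x) / m
    v_half = v + 0.5 * dt * a                       # half kick
    x_new = x + dt * v_half                         # drift
    v_new = v_half + 0.5 * dt * force(x_new) / m    # half kick
    return x_new, v_new

# Toy usage: two 1D harmonic oscillators (hypothetical stand-in for SPH forces).
x = np.array([1.0, -0.5]); v = np.zeros(2); m = np.ones(2)
for _ in range(1000):
    x, v = velocity_verlet_step(x, v, lambda q: -q, m, dt=0.01)
```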

Data collection and research methodology represent a critical part of the research pipeline. On the one hand, it is important that we collect data in a way that maximises the validity of what we are measuring, which may involve the use of long scales with many items. On the other hand, collecting a large number of items across multiple scales results in participant fatigue and expensive, time-consuming data collection. It is therefore important that we use the available resources optimally. In this work, we consider how attention to theory and the associated causal/structural model can help us streamline data collection procedures by not wasting time collecting data for variables that are not causally critical for the subsequent analysis. This not only saves time and enables us to redirect resources to other variables that are more important, but also increases research transparency and the reliability of theory testing. To achieve this streamlined data collection, we leverage structural models and the Markov conditional independence structures implicit in these models to identify the substructures that are critical for answering a particular research question. We review the relevant concepts and present a number of didactic examples, with the hope that psychologists can use these techniques to streamline their data collection process without invalidating the subsequent analysis. We provide a number of simulation results to demonstrate the limited analytical impact of this streamlining.
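The toy simulation below illustrates the underlying logic with a hypothetical causal chain X → M → Y: once the mediator M is measured, the upstream variable X is conditionally independent of the outcome and adds essentially no explanatory power, so collecting it can be skipped for a research question about M → Y. The variable names and coefficients are invented for illustration; this is not one of the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=n)                 # upstream cause (candidate to drop)
M = 2.0 * X + rng.normal(size=n)       # mediator (collected)
Y = -1.5 * M + rng.normal(size=n)      # outcome of interest

def r2(columns, y):
    """R^2 of an OLS fit of y on the given predictor columns (with intercept)."""
    A = np.column_stack([np.ones(len(y))] + list(columns))
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return 1 - resid.var() / y.var()

print(r2([M], Y))      # ~0.92: the mediator alone explains Y
print(r2([M, X], Y))   # essentially unchanged: X is redundant given M,
                       # i.e. X is d-separated from Y once M is conditioned on
```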

We consider networks of small, autonomous devices that communicate with each other wirelessly. Minimizing energy usage is an important consideration in designing algorithms for such networks, as battery life is a crucial and limited resource. Working in a model where both sending and listening for messages deplete energy, we consider the problem of finding a maximal matching of the nodes in a radio network of arbitrary and unknown topology. We present a distributed randomized algorithm that produces, with high probability, a maximal matching. The maximum energy cost per node is $O(\log^2 n)$, where $n$ is the size of the network. The total latency of our algorithm is $O(n \log n)$ time steps. We observe that there exist families of network topologies for which both of these bounds are simultaneously optimal up to polylog factors, so any significant improvement will require additional assumptions about the network topology. We also consider the related problem of assigning, for each node in the network, a neighbor to back up its data in case of node failure. Here, a key goal is to minimize the maximum load, defined as the number of nodes assigned to a single node. We present a decentralized low-energy algorithm that finds a neighbor assignment whose maximum load is at most a polylog($n$) factor larger than the optimum.
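For readers unfamiliar with the target object, the sketch below is a plain centralized greedy construction of a maximal matching (an edge set to which no further edge can be added). It is only a reference point for the definition; the paper's contribution is a distributed, randomized, low-energy algorithm for the radio model, which this sketch does not implement.

```python
def greedy_maximal_matching(edges):
    """Centralized greedy maximal matching: scan the edges and keep an edge
    whenever both endpoints are still unmatched.  The result is maximal
    (no edge can be added), though not necessarily of maximum size."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

print(greedy_maximal_matching([(0, 1), (1, 2), (2, 3), (3, 0)]))
# [(0, 1), (2, 3)] -- a maximal matching of the 4-cycle
```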

In recent decades, the emergence of numerous novel algorithms has made it a gimmick to propose intelligent optimization systems based on metaphors, and has hindered researchers from exploring the essence of search behavior in algorithms. However, it is difficult to discuss the search behavior of an intelligent optimization algorithm directly, since there are so many kinds of intelligent schemes. To address this problem, an intelligent optimization system is regarded in this paper as a simulated physical optimization system. The dynamic search behavior of such a simplified physical optimization system is investigated with quantum theory. To achieve this goal, the Schrödinger equation is employed as the dynamics equation of the optimization algorithm and is used to describe the dynamic search behavior of the evolution process. Moreover, to explore the basic behavior of the optimization system, the optimization problem is assumed to be decomposed and approximated. Correspondingly, the basic search behavior is derived, which constitutes the basic iterative process of a simple optimization system. This basic iterative process is compared with some classical bare-bones schemes to verify the similarity of search behavior under different metaphors. The search strategies of these bare-bones algorithms are analyzed through experiments.
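As a concrete example of the "bare-bones" family mentioned above, here is a minimal sketch of the classical bare-bones particle swarm update, in which each particle resamples its position from a Gaussian determined by its personal best and the global best, with no velocity term. The parameter values and test function are arbitrary, and this is a generic textbook scheme rather than the paper's derived iterative process.

```python
import numpy as np

def bare_bones_pso(f, dim, n_particles=30, n_iter=200, seed=0):
    """Bare-bones PSO: positions are resampled from a Gaussian centred halfway
    between the personal best and the global best, with a standard deviation
    equal to their absolute difference (no velocity update)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_particles, dim))
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    for _ in range(n_iter):
        gbest = pbest[np.argmin(pbest_val)]
        mu = 0.5 * (pbest + gbest)            # midpoint of personal and global best
        sigma = np.abs(pbest - gbest)         # spread shrinks as the swarm converges
        x = rng.normal(mu, sigma)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
    return pbest[np.argmin(pbest_val)], pbest_val.min()

best_x, best_f = bare_bones_pso(lambda z: np.sum(z ** 2), dim=5)   # sphere test function
```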

Present-day atomistic simulations generate long trajectories of ever more complex systems. Analyzing these data, discovering metastable states, and uncovering their nature is becoming increasingly challenging. In this paper, we first use the variational approach to conformational dynamics to discover the slowest dynamical modes of the simulations. This allows the different metastable states of the system to be located and organized hierarchically. The physical descriptors that characterize metastable states are discovered by means of a machine learning method. We show, in the cases of two proteins, Chignolin and Bovine Pancreatic Trypsin Inhibitor, how such an analysis can be effortlessly performed in a matter of seconds. Another strength of our approach is that it can be applied to the analysis of both unbiased and biased simulations.
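The core linear-algebra step behind such variational estimates of slow modes can be sketched as a generalized eigenproblem on instantaneous and time-lagged covariance matrices, as below. The trajectory, lag time, and feature count are placeholders, and the sketch omits the hierarchical state organization and the machine-learning characterization of descriptors described in the paper.

```python
import numpy as np
from scipy.linalg import eigh

def slow_modes(traj, lag, n_modes=2):
    """Estimate the slowest dynamical modes of a trajectory (frames x features)
    by solving C(lag) v = lambda C(0) v, the generalized eigenproblem at the
    heart of the variational approach to conformational dynamics / TICA."""
    X = traj - traj.mean(axis=0)
    X0, Xt = X[:-lag], X[lag:]
    C0 = (X0.T @ X0 + Xt.T @ Xt) / (2 * len(X0))   # symmetrized instantaneous covariance
    Ct = (X0.T @ Xt + Xt.T @ X0) / (2 * len(X0))   # symmetrized time-lagged covariance
    evals, evecs = eigh(Ct, C0)                    # generalized symmetric eigenproblem
    order = np.argsort(evals)[::-1]                # largest eigenvalues = slowest modes
    return evals[order][:n_modes], evecs[:, order][:, :n_modes]

# Toy usage on a random-walk trajectory (stand-in for atomistic features).
traj = np.cumsum(np.random.default_rng(0).normal(size=(2000, 4)), axis=0)
evals, modes = slow_modes(traj, lag=10)
```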

Sufficient dimension reduction (SDR) is a successful tool in regression modeling, providing a feasible way to analyze and address the nonlinear nature of regression problems. This paper introduces the \textbf{itdr} R package, which provides several functions based on integral transformation methods to estimate SDR subspaces in a comprehensive and user-friendly manner. In particular, the \textbf{itdr} package includes the Fourier method (FM) and the convolution method (CM) for estimating SDR subspaces such as the central mean subspace (CMS) and the central subspace (CS). In addition, the \textbf{itdr} package facilitates the recovery of the CMS and the CS by using the iterative Hessian transformation (IHT) method and the Fourier transformation approach for inverse dimension reduction (invFM), respectively. The use of the package is illustrated on three datasets. Furthermore, this is the first package that implements integral transformation methods to estimate SDR subspaces; hence, the \textbf{itdr} package may be a valuable contribution to research in the SDR field.

In this paper, we propose a novel Feature Decomposition and Reconstruction Learning (FDRL) method for effective facial expression recognition. We view the expression information as the combination of the shared information (expression similarities) across different expressions and the unique information (expression-specific variations) for each expression. More specifically, FDRL mainly consists of two crucial networks: a Feature Decomposition Network (FDN) and a Feature Reconstruction Network (FRN). FDN first decomposes the basic features extracted from a backbone network into a set of facial action-aware latent features to model expression similarities. Then, FRN captures the intra-feature and inter-feature relationships of the latent features to characterize expression-specific variations, and reconstructs the expression feature. To this end, two modules, an intra-feature relation modeling module and an inter-feature relation modeling module, are developed in FRN. Experimental results on both in-the-lab databases (including CK+, MMI, and Oulu-CASIA) and in-the-wild databases (including RAF-DB and SFEW) show that the proposed FDRL method consistently achieves higher recognition accuracy than several state-of-the-art methods. This clearly highlights the benefit of feature decomposition and reconstruction for classifying expressions.
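The schematic PyTorch sketch below illustrates the general decompose-then-reconstruct idea: project a backbone feature into several latent features, weight them within and across features, and sum them back into a single expression feature. The module names, dimensions, and weighting mechanisms are invented for illustration; this is a loose sketch of the concept, not the published FDRL architecture.

```python
import torch
import torch.nn as nn

class DecomposeReconstruct(nn.Module):
    """Schematic: decompose a backbone feature into K latent features, weight
    them per-latent (intra) and across latents (inter), then sum them back
    into a reconstructed expression feature."""
    def __init__(self, feat_dim=512, n_latent=8, latent_dim=64):
        super().__init__()
        self.decompose = nn.ModuleList(
            [nn.Linear(feat_dim, latent_dim) for _ in range(n_latent)])
        self.intra = nn.Sequential(nn.Linear(latent_dim, 1), nn.Sigmoid())
        self.inter = nn.Sequential(nn.Linear(n_latent * latent_dim, n_latent),
                                   nn.Softmax(dim=-1))

    def forward(self, x):                        # x: (batch, feat_dim)
        latents = torch.stack([layer(x) for layer in self.decompose], dim=1)
        intra_w = self.intra(latents)                            # (batch, K, 1)
        inter_w = self.inter(latents.flatten(1)).unsqueeze(-1)   # (batch, K, 1)
        return (latents * intra_w * inter_w).sum(dim=1)          # expression feature

feat = torch.randn(4, 512)                       # hypothetical backbone features
expr_feature = DecomposeReconstruct()(feat)      # (4, 64)
```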

Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. Therefore, a natural thought is to perform model compression and acceleration in deep networks without significantly decreasing the model performance. During the past few years, tremendous progress has been made in this area. In this paper, we survey recent advanced techniques for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods of parameter pruning and sharing are described first, after which the other techniques are introduced. For each scheme, we provide insightful analysis regarding the performance, related applications, advantages, and drawbacks. We then go through a few very recent additional successful methods, for example, dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics, the main datasets used for evaluating model performance, and recent benchmarking efforts. Finally, we conclude this paper and discuss remaining challenges and possible directions on this topic.
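To ground the first scheme, here is a minimal sketch of unstructured magnitude pruning in PyTorch: the smallest-magnitude weights of each layer are zeroed out. It is the simplest instance of the parameter pruning and sharing category, not tied to any particular method from the survey; the model and sparsity level are arbitrary examples.

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.5):
    """Zero out the smallest-magnitude weights of every Linear/Conv2d layer.
    This is unstructured magnitude pruning, the simplest form of the
    'parameter pruning and sharing' scheme."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, (nn.Linear, nn.Conv2d)):
                w = module.weight
                k = int(sparsity * w.numel())
                if k == 0:
                    continue
                threshold = w.abs().flatten().kthvalue(k).values
                w.mul_((w.abs() > threshold).float())   # keep only large weights

# Toy usage on a small hypothetical model.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Flatten(),
                      nn.Linear(16 * 30 * 30, 10))
magnitude_prune(model, sparsity=0.7)
```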
