
One of the biggest challenges in characterizing 2-D topographies is succinctly communicating the dominant nature of local configurations. In a 2-D grid composed of bistate units, this could be expressed as finding the characteristic configuration variables such as nearest-neighbor pairs and triplet combinations. The 2-D cluster variation method (CVM) provides a theoretical framework for associating a set of configuration variables with only two parameters, for a system that is at free energy equilibrium. This work presents a method for determining which of many possible two-parameter sets provides the ``most suitable'' match for a given 2-D topography, drawing from methods used for variational inference. This particular work focuses exclusively on topographies for which the activation enthalpy parameter (epsilon_0) is zero, so that the distribution between the two states is equiprobable. This condition is used because, when the two states are equiprobable, there is an analytic solution giving the configuration variable values as functions of the h-value, where we define h in terms of the interaction enthalpy parameter (epsilon_1) as h = exp(2*epsilon_1). This allows the computationally obtained configuration variable values to be compared with the analytically predicted values for a given h-value. The method is illustrated using four patterns derived from three different naturally-occurring black-and-white topographies, where each pattern meets the equiprobability criterion. We obtain the expected results: as the patterns progress from relatively few like-near-like nodes to increasingly large like-near-like masses, the h-value of each corresponding free energy-minimized model also increases. Further, the corresponding configuration variable values for the (free energy-minimized) model patterns are in approximate alignment with the analytically predicted values.
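
As a rough illustration of the comparison described in this abstract, the following minimal sketch computes the h-value from an assumed interaction enthalpy parameter and measures the discrepancy between computationally obtained and analytically predicted configuration variable values. The names `config_vars_computed`, `config_vars_analytic`, and the numeric values are hypothetical placeholders, not the authors' code or results.

```python
import numpy as np

def h_value(epsilon1: float) -> float:
    """Interaction parameter h, defined as h = exp(2 * epsilon_1)."""
    return np.exp(2.0 * epsilon1)

def compare_config_vars(config_vars_computed: dict, config_vars_analytic: dict) -> dict:
    """Absolute discrepancy, per configuration variable (e.g. nearest-neighbor
    pairs, triplets), between the free-energy-minimized model and the analytic
    prediction at a given h-value."""
    return {name: abs(config_vars_computed[name] - config_vars_analytic[name])
            for name in config_vars_computed}

# Hypothetical usage with placeholder values:
epsilon1 = 0.35
print(f"h = {h_value(epsilon1):.4f}")
print(compare_config_vars({"pair": 0.21, "triplet": 0.09},    # from the minimized model
                          {"pair": 0.20, "triplet": 0.10}))   # analytic prediction
```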

Related Content

Digital sensors can produce noisy results under many circumstances. To remove the undesired noise from images, proper noise modeling and accurate noise parameter estimation are crucial. In this project, we use a Poisson-Gaussian noise model for the raw images captured by the sensor, as it closely fits the physical characteristics of the sensor. Moreover, we limit ourselves to the case where observed (noisy) and ground-truth (noise-free) image pairs are available. Using such pairs is beneficial for noise estimation and has not been widely studied in the literature. Based on this model, we derive the theoretical maximum likelihood solution and discuss its practical implementation and optimization. Further, we propose two algorithms based on variance and cumulant statistics. Finally, we compare our methods with two alternative approaches: a CNN we trained ourselves and another taken from the literature. The comparison between all these methods shows that our algorithms outperform the others in terms of MSE and have good additional properties.
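
A minimal sketch of the variance-statistics idea, assuming the standard signal-dependent model in which the noisy observation y, given the clean intensity x, has variance a*x + b (a from the Poisson component, b from the Gaussian component). With noisy/clean pairs available, (a, b) can be estimated by regressing the squared residuals on the clean intensities; this is an illustrative simplification, not the authors' exact algorithm.

```python
import numpy as np

def estimate_poisson_gaussian_params(clean: np.ndarray, noisy: np.ndarray):
    """Estimate (a, b) in Var(noisy | clean) = a * clean + b by a least-squares
    fit of squared residuals against clean intensities."""
    x = clean.ravel().astype(np.float64)
    r2 = (noisy.ravel().astype(np.float64) - x) ** 2   # unbiased estimate of the per-pixel variance
    A = np.stack([x, np.ones_like(x)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, r2, rcond=None)
    return a, b

# Synthetic sanity check: simulate signal-dependent noise and recover the parameters.
rng = np.random.default_rng(0)
clean = rng.uniform(0.05, 1.0, size=(512, 512))
a_true, b_true = 0.01, 1e-4
noisy = clean + rng.normal(scale=np.sqrt(a_true * clean + b_true))
print(estimate_poisson_gaussian_params(clean, noisy))   # roughly (0.01, 1e-4)
```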

Successful coordination in Dec-POMDPs requires agents to adopt robust strategies and styles of play that are interpretable to their partners. A common failure mode is symmetry breaking, where agents arbitrarily converge on one out of many equivalent but mutually incompatible policies. Such cases commonly arise under partial observability, e.g. waving your right hand vs. your left hand to convey a covert message. In this paper, we present a novel equivariant network architecture for use in Dec-POMDPs that prevents the agent from learning policies which break symmetries, doing so more effectively than prior methods. Our method also acts as a "coordination-improvement operator" for generic, pre-trained policies, and thus may be applied at test-time in conjunction with any self-play algorithm. We provide theoretical guarantees for our approach and test it on the AI benchmark task of Hanabi, where we demonstrate that our method outperforms other symmetry-aware baselines in zero-shot coordination and improves the coordination ability of a variety of pre-trained policies. In particular, we show our method can be used to improve on the state of the art for zero-shot coordination on the Hanabi benchmark.
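
One way to picture the "coordination-improvement operator" view is test-time symmetrization of a pre-trained policy: averaging the policy's output over a group of matching observation/action symmetries, so that equivalent observations cannot be mapped to arbitrarily different, incompatible behaviours. The sketch below is a hypothetical illustration of that idea under assumed conventions; `policy`, `obs_perms`, and `act_perms` are placeholder names, not the paper's API.

```python
import numpy as np

def symmetrize_policy(policy, obs_perms, act_perms, obs):
    """Average a pre-trained policy over a symmetry group.

    policy    : callable mapping an observation vector to action probabilities
    obs_perms : list of index permutations acting on observations
    act_perms : list of matching index permutations acting on actions
    """
    probs = np.zeros(len(act_perms[0]))
    for g_obs, g_act in zip(obs_perms, act_perms):
        transformed = obs[g_obs]              # g . observation
        out = policy(transformed)             # pi(g . obs)
        probs += out[np.argsort(g_act)]       # map back through the inverse action permutation
    return probs / len(obs_perms)

# Hypothetical usage: a toy 2-action policy symmetrized over swapping "left"/"right".
toy_policy = lambda o: np.array([0.9, 0.1]) if o[0] > o[1] else np.array([0.2, 0.8])
identity, swap = np.array([0, 1]), np.array([1, 0])
print(symmetrize_policy(toy_policy, [identity, swap], [identity, swap],
                        obs=np.array([1.0, 0.0])))
```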

Supplier selection and order allocation (SSOA) are key strategic decisions in supply chain management which greatly impact the performance of the supply chain. The SSOA problem has been studied extensively, but the lack of attention paid to scalability presents a significant gap preventing adoption of SSOA algorithms by industrial practitioners. This paper presents a novel real-time, large-scale industrial SSOA problem, which involves a multi-item, multi-supplier environment with dual-sourcing and penalty constraints across two tiers of the supply chain of a manufacturing company. The problem allows suppliers to express, through bidding, preferences for working with other suppliers. This is the largest scale studied so far in the literature, and the problem must be solved in a real-time auction environment, making computational complexity a key issue. Furthermore, order allocation needs to be undertaken on both supply tiers, with dynamically presented constraints under which non-preferred allocations may result in penalties from the suppliers. We subsequently propose Mixed Integer Programming models for the individual tiers as well as for the integrated problem, which are complex due to their NP-hard nature. The use case allows us to highlight how problem formulation, modelling, and the choice of solution approach can help reduce complexity, using Mathematical Programming (MP) and Genetic Algorithm (GA) approaches. Interestingly, the results show that MP outperforms GA on both the individual-tier problems and the integrated problem. Sensitivity analysis is presented for sourcing strategy, penalty threshold, and penalty factor. The developed model was successfully deployed at a supplier conference, helping the manufacturing company achieve significant procurement cost reductions.
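
To make the modelling choice concrete, here is a heavily simplified, hypothetical single-tier order-allocation sketch using PuLP (not the paper's actual formulation): it minimizes bid cost subject to demand coverage, supplier capacity, and a dual-sourcing requirement that each item be allocated to at least two suppliers, each receiving a minimum share. All data values are invented for illustration.

```python
import pulp

items = ["item1", "item2"]
suppliers = ["s1", "s2", "s3"]
demand = {"item1": 100, "item2": 80}                         # units required per item
capacity = {"s1": 120, "s2": 90, "s3": 100}                  # supplier capacities
price = {("item1", "s1"): 4.0, ("item1", "s2"): 4.5, ("item1", "s3"): 5.0,
         ("item2", "s1"): 6.0, ("item2", "s2"): 5.5, ("item2", "s3"): 5.8}

prob = pulp.LpProblem("ssoa_single_tier", pulp.LpMinimize)
x = pulp.LpVariable.dicts("qty", (items, suppliers), lowBound=0)     # allocated quantity
y = pulp.LpVariable.dicts("use", (items, suppliers), cat="Binary")   # supplier used for item

prob += pulp.lpSum(price[i, s] * x[i][s] for i in items for s in suppliers)
for i in items:
    prob += pulp.lpSum(x[i][s] for s in suppliers) == demand[i]      # meet demand exactly
    prob += pulp.lpSum(y[i][s] for s in suppliers) >= 2              # dual sourcing
    for s in suppliers:
        prob += x[i][s] <= demand[i] * y[i][s]                       # no quantity without usage
        prob += x[i][s] >= 0.1 * demand[i] * y[i][s]                 # minimum share if used
for s in suppliers:
    prob += pulp.lpSum(x[i][s] for i in items) <= capacity[s]        # capacity limit

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({(i, s): x[i][s].value() for i in items for s in suppliers})
```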

This work proposes a fast iterative method for local steric Poisson--Boltzmann (PB) theories, in which the electrostatic potential is governed by Poisson's equation and ionic concentrations satisfy equilibrium conditions. To present the method, we focus on a local steric PB theory derived from a lattice-gas model as an example. The efficiency of the proposed method comes from treating ionic concentrations as scalar implicit functions of the electrostatic potential, even though such functions are available only numerically. The existence, uniqueness, boundedness, and smoothness of such functions are rigorously established. A Newton iteration method with truncation is proposed to solve a nonlinear system discretized from the generalized PB equations. The existence and uniqueness of the solution to the discretized nonlinear system are established by showing that it is the unique minimizer of a constructed convex energy. Thanks to the boundedness of ionic concentrations, truncation bounds for the potential are obtained using the extremum principle. The truncation step in the iterations is shown to be energy and error decreasing. To further speed up computations, we propose a novel precomputing-interpolation strategy, which is applicable to other local steric PB theories and makes the proposed methods for solving steric PB theories as efficient as solving the classical PB theory. Analysis of the Newton iteration method with truncation shows local quadratic convergence for the proposed numerical methods. Applications to realistic biomolecular solvation systems reveal that counterions with steric hindrance stratify in an order prescribed by the parameter of ionic valence-to-volume ratio. Finally, we remark that the proposed iterative methods for local steric PB theories can be readily incorporated into well-known classical PB solvers.
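
A schematic illustration of a Newton iteration with truncation, using the classical (non-steric) PB nonlinearity sinh(u) on a 1-D grid purely as a stand-in: bounds of the kind given by the extremum principle are imposed by clipping the iterate after each Newton update. This is a hedged sketch of the general loop structure, not the paper's method, discretization, or bounds.

```python
import numpy as np

def newton_with_truncation(n=200, u_min=-5.0, u_max=5.0, tol=1e-10, max_iter=50):
    """Solve -u'' + sinh(u) = f on (0, 1) with u(0) = u(1) = 0 by Newton's method,
    clipping (truncating) the iterate to [u_min, u_max] after every update."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    f = 10.0 * np.sin(np.pi * x)                    # arbitrary source term
    # Second-difference matrix for -u'' with homogeneous Dirichlet conditions.
    L = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.zeros(n)
    for _ in range(max_iter):
        F = L @ u + np.sinh(u) - f                  # nonlinear residual
        J = L + np.diag(np.cosh(u))                 # Jacobian of the residual
        du = np.linalg.solve(J, -F)
        u = np.clip(u + du, u_min, u_max)           # truncation step
        if np.linalg.norm(du, np.inf) < tol:
            break
    return x, u

x, u = newton_with_truncation()
print(u.max())
```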

High-dimensional matrix-variate time series data are becoming widely available in many scientific fields, such as economics, biology, and meteorology. To achieve significant dimension reduction while preserving the intrinsic matrix structure and temporal dynamics of such data, Wang et al. (2017) proposed a matrix factor model that has been shown to provide effective analysis. In this paper, we establish a general framework for incorporating domain or prior knowledge into the matrix factor model through linear constraints. The proposed framework is shown to be useful in achieving parsimonious parameterization, facilitating interpretation of the latent matrix factors, and identifying specific factors of interest. Fully utilizing the prior-knowledge-induced constraints results in more efficient and accurate modeling, inference, and dimension reduction, as well as a clearer interpretation of the results. We develop constrained, multi-term, and partially constrained factor models for matrix-variate time series, together with efficient estimation procedures and their asymptotic properties. We show that the convergence rates of the constrained factor loading matrices are much faster than those of conventional matrix factor analysis in many situations. Simulation studies are carried out to demonstrate the finite-sample performance of the proposed method and its associated asymptotic properties. We illustrate the proposed model with three applications, in which the constrained matrix-factor models outperform their unconstrained counterparts in explained variance under an out-of-sample 10-fold cross-validation setting.
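
For reference, the matrix factor model underlying this framework has the form X_t = R F_t C' + E_t, with row and column loading matrices R and C. The sketch below simply simulates data of this form to fix notation (the dimensions and the AR(1) factor dynamics are illustrative assumptions, and this is not the paper's estimation procedure); linear constraints would enter as restrictions on R or C.

```python
import numpy as np

rng = np.random.default_rng(1)
p1, p2, k1, k2, T = 10, 8, 3, 2, 500          # observed dims, factor dims, series length

R = rng.normal(size=(p1, k1))                 # row loading matrix (p1 x k1)
C = rng.normal(size=(p2, k2))                 # column loading matrix (p2 x k2)

X = np.empty((T, p1, p2))
F = rng.normal(size=(k1, k2))
for t in range(T):
    F = 0.6 * F + rng.normal(size=(k1, k2))   # latent matrix factor with assumed AR(1) dynamics
    E = rng.normal(size=(p1, p2))             # idiosyncratic noise
    X[t] = R @ F @ C.T + E                    # X_t = R F_t C' + E_t

print(X.shape)                                # (T, p1, p2): a matrix-variate time series
```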

Tie-breaker designs trade off a statistical design objective with short-term gain from preferentially assigning a binary treatment to those with high values of a running variable $x$. The design objective is any continuous function of the expected information matrix in a two-line regression model, and short-term gain is expressed as the covariance between the running variable and the treatment indicator. We investigate how to specify design functions indicating treatment probabilities as a function of $x$ to optimize these competing objectives, under external constraints on the number of subjects receiving treatment. Our results include sharp existence and uniqueness guarantees, while accommodating the ethically appealing requirement that treatment probabilities are non-decreasing in $x$. Under such a constraint, there always exists an optimal design function that is constant below and above a single discontinuity. When the running variable distribution is not symmetric or the fraction of subjects receiving the treatment is not $1/2$, our optimal designs improve upon a $D$-optimality objective without sacrificing short-term gain, compared to the three level tie-breaker designs of Owen and Varian (2020) that fix treatment probabilities at $0$, $1/2$, and $1$. We illustrate our optimal designs with data from Head Start, an early childhood government intervention program.
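
To make the two competing objectives concrete, the sketch below evaluates a hypothetical step design function p(x), constant below and above a single cutoff, under the two-line regression model y = b0 + b1*x + b2*z + b3*x*z: it Monte-Carlo estimates the expected information matrix and its D-optimality criterion (log-determinant), along with the short-term gain Cov(x, z). The distributions, cutoffs, and probabilities here are illustrative assumptions, not the paper's optimal designs.

```python
import numpy as np

def evaluate_design(p_of_x, n=200_000, rng=np.random.default_rng(2)):
    """Monte-Carlo estimate of the D-criterion and short-term gain for a
    treatment-probability design function p_of_x under the two-line model
    y = b0 + b1*x + b2*z + b3*x*z, with running variable x ~ N(0, 1)."""
    x = rng.normal(size=n)
    z = (rng.uniform(size=n) < p_of_x(x)).astype(float)   # treatment indicator
    f = np.stack([np.ones(n), x, z, x * z], axis=1)       # regression features
    info = f.T @ f / n                                    # expected information matrix
    d_criterion = np.linalg.slogdet(info)[1]              # log det (D-optimality)
    gain = np.cov(x, z)[0, 1]                             # short-term gain Cov(x, z)
    return d_criterion, gain

# A non-decreasing step design: treat with probability 0.2 below the cutoff, 0.8 above.
step_design = lambda x: np.where(x < 0.0, 0.2, 0.8)
# A three-level tie-breaker with probabilities 0, 1/2, 1 (cutoffs chosen for illustration).
three_level = lambda x: np.select([x < -0.67, x > 0.67], [0.0, 1.0], default=0.5)

print("step design       :", evaluate_design(step_design))
print("three-level design:", evaluate_design(three_level))
```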

With computer vision advancing day by day, considerable attention has recently turned to activity recognition. As real-world applications of this field spread across a multitude of industries such as security and healthcare, it becomes crucial for businesses to distinguish which machine learning methods perform better than others in this area. This paper aims to help with that decision: building upon previous related work, it employs both classical and ensemble approaches on rich pose estimation (OpenPose) and HAR datasets. Using appropriate metrics to evaluate the performance of each model, the results show that, overall, random forest yields the highest accuracy in classifying ADLs. Nearly all models perform well across both datasets, except for logistic regression and AdaBoost, which perform poorly on the HAR dataset. The limitations of this paper are discussed at the end; the scope for further research is vast, and this work can serve as a base for producing better results.
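
A minimal sketch of the kind of classical/ensemble comparison described, using scikit-learn with cross-validated accuracy. The feature array and labels here are synthetic stand-ins, not OpenPose output or the HAR dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 1000 samples of 36 pose features (e.g. 18 keypoints x 2 coords),
# labelled with 5 hypothetical activities of daily living.
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 36))
y = rng.integers(0, 5, size=1000)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "adaboost": AdaBoostClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```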

Sequential recommendation is an emerging topic that has attracted increasing attention due to its practical significance. Models based on deep learning and attention mechanisms have achieved good performance in sequential recommendation. Recently, generative models based on the Variational Autoencoder (VAE) have shown unique advantages in collaborative filtering. In particular, the sequential VAE model, as a recurrent version of the VAE, can effectively capture temporal dependencies among items in a user's sequence and perform sequential recommendation. However, VAE-based models suffer from a common limitation: the representational ability of the obtained approximate posterior distribution is limited, resulting in lower quality of generated samples. This is especially true for generating sequences. To address this problem, we propose a novel method called Adversarial and Contrastive Variational Autoencoder (ACVAE) for sequential recommendation. Specifically, we first introduce adversarial training for sequence generation under the Adversarial Variational Bayes (AVB) framework, which enables our model to generate high-quality latent variables. Then, we employ a contrastive loss; by minimizing it, the latent variables learn more personalized and salient characteristics. Besides, when encoding the sequence, we apply a recurrent and convolutional structure to capture global and local relationships in the sequence. Finally, we conduct extensive experiments on four real-world datasets. The experimental results show that our proposed ACVAE model outperforms other state-of-the-art methods.
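
As an illustration of the contrastive component, the sketch below computes a generic InfoNCE-style contrastive loss over a batch of latent variables and their positive counterparts. This is a standard formulation used here as a stand-in, not necessarily the exact loss of ACVAE; the placeholder latents are random.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z: torch.Tensor, z_pos: torch.Tensor, temperature: float = 0.1):
    """InfoNCE-style loss: each latent z[i] should be closest to its positive
    counterpart z_pos[i] and far from the other latents in the batch."""
    z = F.normalize(z, dim=-1)
    z_pos = F.normalize(z_pos, dim=-1)
    logits = z @ z_pos.t() / temperature            # pairwise similarities
    targets = torch.arange(z.size(0))               # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Hypothetical usage with placeholder latent variables.
z = torch.randn(32, 64)                  # latents from the encoder
z_pos = z + 0.05 * torch.randn_like(z)   # e.g. latents of a perturbed/augmented sequence
print(contrastive_loss(z, z_pos))
```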

Modeling multivariate time series has long been a subject that has attracted researchers from a diverse range of fields, including economics, finance, and traffic. A basic assumption behind multivariate time series forecasting is that its variables depend on one another, yet existing methods fail to fully exploit the latent spatial dependencies between pairs of variables. In recent years, graph neural networks (GNNs) have shown high capability in handling relational dependencies. However, GNNs require well-defined graph structures for information propagation, which means they cannot be applied directly to multivariate time series where the dependencies are not known in advance. In this paper, we propose a general graph neural network framework designed specifically for multivariate time series data. Our approach automatically extracts the uni-directed relations among variables through a graph learning module, into which external knowledge like variable attributes can be easily integrated. A novel mix-hop propagation layer and a dilated inception layer are further proposed to capture the spatial and temporal dependencies within the time series. The graph learning, graph convolution, and temporal convolution modules are jointly learned in an end-to-end framework. Experimental results show that our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets and achieves on-par performance with other approaches on two traffic datasets which provide extra structural information.
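
To illustrate what a graph learning module of this kind can look like, the sketch below builds a uni-directed adjacency matrix from learnable node embeddings and keeps only the top-k neighbours per node. The particular functional form (an anti-symmetric combination of embedding products, followed by top-k sparsification) follows a common formulation of such modules and is a hedged approximation rather than the paper's exact layer; all sizes are placeholders.

```python
import torch
import torch.nn as nn

class GraphLearner(nn.Module):
    """Learn a sparse, uni-directed adjacency matrix among the series/variables."""
    def __init__(self, num_nodes: int, dim: int = 40, k: int = 5, alpha: float = 3.0):
        super().__init__()
        self.emb1 = nn.Embedding(num_nodes, dim)
        self.emb2 = nn.Embedding(num_nodes, dim)
        self.lin1 = nn.Linear(dim, dim)
        self.lin2 = nn.Linear(dim, dim)
        self.k, self.alpha = k, alpha

    def forward(self, idx: torch.Tensor) -> torch.Tensor:
        m1 = torch.tanh(self.alpha * self.lin1(self.emb1(idx)))
        m2 = torch.tanh(self.alpha * self.lin2(self.emb2(idx)))
        # Anti-symmetric score matrix, so at most one direction per node pair survives the ReLU.
        a = torch.relu(torch.tanh(self.alpha * (m1 @ m2.t() - m2 @ m1.t())))
        # Keep only the k strongest outgoing edges per node.
        mask = torch.zeros_like(a)
        topk = a.topk(self.k, dim=1).indices
        mask.scatter_(1, topk, 1.0)
        return a * mask

adj = GraphLearner(num_nodes=20)(torch.arange(20))
print(adj.shape, (adj > 0).sum(dim=1))   # at most k outgoing edges per node
```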

The prevalence of networked sensors and actuators in many real-world systems, such as smart buildings, factories, power plants, and data centers, generates substantial amounts of multivariate time series data for these systems. The rich sensor data can be continuously monitored for intrusion events through anomaly detection. However, conventional threshold-based anomaly detection methods are inadequate due to the dynamic complexities of these systems, while supervised machine learning methods are unable to exploit the large amounts of data due to the lack of labeled data. On the other hand, current unsupervised machine learning approaches have not fully exploited the spatial-temporal correlations and other dependencies amongst the multiple variables (sensors/actuators) in the system for detecting anomalies. In this work, we propose an unsupervised multivariate anomaly detection method based on Generative Adversarial Networks (GANs). Instead of treating each data stream independently, our proposed MAD-GAN framework considers the entire variable set concurrently to capture the latent interactions amongst the variables. We also fully exploit both the generator and discriminator produced by the GAN, using a novel anomaly score called DR-score to detect anomalies through discrimination and reconstruction. We have tested MAD-GAN on two recent datasets collected from real-world CPS: the Secure Water Treatment (SWaT) and the Water Distribution (WADI) datasets. Our experimental results show that the proposed MAD-GAN is effective in detecting anomalies caused by various cyber-intrusions in these complex real-world systems.
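
A sketch of the discrimination-plus-reconstruction idea behind such a score: the score for a window combines the trained discriminator's assessment with the reconstruction error obtained by searching the generator's latent space for the best match. The weighting, the latent-space search, and the untrained placeholder networks below are generic illustrative choices, not necessarily the exact DR-score of MAD-GAN.

```python
import torch
import torch.nn as nn

seq_len, num_vars, latent_dim = 30, 5, 32

# Placeholder (untrained) generator and discriminator standing in for a trained GAN.
generator = nn.Sequential(nn.Linear(latent_dim, seq_len * num_vars),
                          nn.Unflatten(0, (seq_len, num_vars)))
discriminator = nn.Sequential(nn.Flatten(start_dim=0),
                              nn.Linear(seq_len * num_vars, 1), nn.Sigmoid())

def dr_score(window, lam=0.5, steps=200, lr=0.05):
    """Anomaly score for one window of shape [seq_len, num_vars], combining a
    discrimination term with a reconstruction term."""
    # Reconstruction: find the latent code whose generated sample best matches the window.
    z = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((generator(z) - window) ** 2)
        loss.backward()
        opt.step()
    recon_error = torch.mean((generator(z) - window) ** 2).item()
    # Discrimination: a low discriminator output suggests the window looks anomalous.
    disc_score = 1.0 - discriminator(window).item()
    return lam * disc_score + (1.0 - lam) * recon_error

print(dr_score(torch.randn(seq_len, num_vars)))
```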
