
In the course of this work, we examine the process of plastic profile extrusion, where a polymer melt is shaped inside the so-called extrusion die and fixed in its shape by solidification in the downstream calibration unit. More precisely, we focus on the development of a data-driven reduced order model (ROM) for predicting temperature distributions within the extruded profiles inside the calibration unit. The ROM serves as a first step toward our overall goal of prediction-based process control, which aims to avoid undesired warpage and damage of the final product.
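
The abstract does not specify the ROM technique, but a common data-driven choice is proper orthogonal decomposition (POD) with truncation. A minimal sketch, assuming a snapshot matrix of simulated temperature fields (all names and sizes below are hypothetical):

```python
import numpy as np

# Hypothetical snapshot matrix: each column is one temperature field
# sampled inside the calibration unit (n_dof spatial points, n_snap snapshots).
n_dof, n_snap = 5000, 200
S = np.random.rand(n_dof, n_snap)  # placeholder for simulation data

# POD: truncated SVD of the snapshot matrix.
U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
r = 10           # number of retained modes (r << n_dof)
Phi = U[:, :r]   # reduced basis

# Project a full-order field onto the reduced space and reconstruct.
T_full = S[:, 0]
a = Phi.T @ T_full   # reduced coordinates
T_rom = Phi @ a      # ROM approximation
print("relative error:", np.linalg.norm(T_full - T_rom) / np.linalg.norm(T_full))
```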

Related content

We systematically study the spectrum of a kernel-based graph Laplacian (GL) constructed from a high-dimensional and noisy random point cloud in the nonnull setup. The problem is motivated by studying the model when the clean signal is sampled from a manifold embedded in a low-dimensional Euclidean subspace and corrupted by high-dimensional noise. We quantify how the signal and noise interact over different regions of the signal-to-noise ratio (SNR), and report the resulting peculiar spectral behavior of the GL. In addition, we explore the impact of the chosen kernel bandwidth on the spectrum of the GL over different regions of the SNR, which leads to an adaptive choice of kernel bandwidth that coincides with common practice on real data. This result paves the way to a theoretical understanding of how practitioners apply the GL when the dataset is noisy.
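
As a hedged illustration of the object under study, the sketch below builds a kernel-based GL from a noisy point cloud sampled from a circle (a simple 1-d manifold) embedded in high dimension and inspects its spectrum; the SNR, kernel, and bandwidth choice are illustrative, not the paper's exact setup:

```python
import numpy as np
from scipy.spatial.distance import cdist

# Clean signal on a circle (1-d manifold) embedded in R^p,
# corrupted by high-dimensional Gaussian noise.
rng = np.random.default_rng(0)
n, p = 500, 100
theta = rng.uniform(0, 2 * np.pi, n)
X = np.zeros((n, p))
X[:, 0], X[:, 1] = np.cos(theta), np.sin(theta)
X += 0.1 * rng.standard_normal((n, p))  # noise level is illustrative

# Kernel-based graph Laplacian with a Gaussian kernel and bandwidth h.
h = np.median(cdist(X, X)) ** 2    # a common adaptive bandwidth choice
W = np.exp(-cdist(X, X) ** 2 / h)
D = W.sum(axis=1)
L = np.eye(n) - W / D[:, None]     # random-walk normalized GL

eigvals = np.sort(np.linalg.eigvals(L).real)
print("smallest eigenvalues:", eigvals[:5])
```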

Learning policies via preference-based reward learning is an increasingly popular method for customizing agent behavior, but has been shown anecdotally to be prone to spurious correlations and reward hacking behaviors. While much prior work focuses on causal confusion in reinforcement learning and behavioral cloning, we aim to study it in the context of reward learning. To study causal confusion, we perform a series of sensitivity and ablation analyses on three benchmark domains where rewards learned from preferences achieve minimal test error but fail to generalize to out-of-distribution states -- resulting in poor policy performance when optimized. We find that the presence of non-causal distractor features, noise in the stated preferences, partial state observability, and larger model capacity can all exacerbate causal confusion. We also identify a set of methods with which to interpret causally confused learned rewards: we observe that optimizing causally confused rewards drives the policy off the reward's training distribution, resulting in high predicted (learned) rewards but low true rewards. These findings illuminate the susceptibility of reward learning to causal confusion, especially in high-dimensional environments -- failure to consider even one of many factors (data coverage, state definition, etc.) can quickly result in unexpected, undesirable behavior.
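
For context, preference-based reward learning is commonly instantiated with a Bradley-Terry model over trajectory returns; the sketch below shows that standard formulation (the architecture and shapes are hypothetical, not the benchmarks' actual setup):

```python
import torch
import torch.nn as nn

# Minimal sketch of preference-based reward learning with the Bradley-Terry
# model (a standard choice; the paper's exact architecture is not specified).
class RewardNet(nn.Module):
    def __init__(self, obs_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, traj):  # traj: (T, obs_dim) -> scalar return estimate
        return self.net(traj).sum()

def preference_loss(reward_net, traj_a, traj_b, pref):
    # pref = 1.0 if trajectory A is preferred, 0.0 if B is preferred.
    logits = reward_net(traj_a) - reward_net(traj_b)
    return nn.functional.binary_cross_entropy_with_logits(logits, pref)

# Usage with hypothetical shapes:
net = RewardNet(obs_dim=8)
ta, tb = torch.randn(50, 8), torch.randn(50, 8)
loss = preference_loss(net, ta, tb, torch.tensor(1.0))
loss.backward()
```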

In this document, we present some elements of the theory and algorithmics concerning the existence and computability of approximate joint eigenpairs for finite collections of matrices, with applications to model order reduction. More specifically, given a finite collection $X_1,\ldots,X_d$ of Hermitian matrices in $\mathbb{C}^{n\times n}$, a positive integer $r\ll n$, and a collection of complex numbers $\hat{x}_{j,k}\in \mathbb{C}$ for $1\leq j\leq d$, $1\leq k\leq r$, we first study the computability of a set of $r$ vectors $w_1,\ldots,w_r\in \mathbb{C}^{n}$ such that $w_k=\arg\min_{w\in \mathbb{C}^n}\sum_{j=1}^d\|X_jw-\hat{x}_{j,k} w\|^2$ for each $1\leq k \leq r$; we then present a model order reduction procedure based on the truncated joint approximate eigenbases computed with the aforementioned techniques. Some prototypical algorithms, together with numerical examples, are presented as well.
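
Under the assumption that the minimization is taken over unit-norm vectors (otherwise $w=0$ is trivially optimal), each $w_k$ can be computed from an ordinary Hermitian eigenproblem, as the following sketch illustrates:

```python
import numpy as np

# Minimizing sum_j ||X_j w - xhat_j w||^2 over unit-norm w (the norm
# constraint is an assumption of this sketch) amounts to taking the
# eigenvector of A = sum_j (X_j - xhat_j I)^H (X_j - xhat_j I) with the
# smallest eigenvalue.
def approx_joint_eigvec(Xs, xhats):
    n = Xs[0].shape[0]
    A = sum((X - x * np.eye(n)).conj().T @ (X - x * np.eye(n))
            for X, x in zip(Xs, xhats))
    vals, vecs = np.linalg.eigh(A)
    return vecs[:, 0]  # eigenvector for the smallest eigenvalue

# Hypothetical example: two commuting Hermitian matrices share exact eigenpairs.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
X1 = Q @ np.diag([1.0, 2.0, 3.0, 4.0]) @ Q.T
X2 = Q @ np.diag([5.0, 6.0, 7.0, 8.0]) @ Q.T
w = approx_joint_eigvec([X1, X2], [2.0, 6.0])
print(np.allclose(X1 @ w, 2.0 * w, atol=1e-8))  # True: exact joint eigenvector
```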

The steadily high demand for cash contributes to the expansion of networks of bank payment terminals. To optimize the amount of cash in payment terminals, it is necessary to minimize the cost of servicing them while ensuring that there are no excess funds in the network. The purpose of this work is to create a cash management system for a network of payment terminals. The article discusses the solution to the problem of determining the optimal amount of funds to be loaded into the terminals, together with an effective collection frequency, which makes it possible to earn additional income by investing the released funds. The paper presents the results of predicting daily cash withdrawals at ATMs using a triple exponential smoothing model, a recurrent neural network with long short-term memory, and a singular spectrum analysis model. These forecasting models yielded a sufficient share of correct forecasts with good accuracy and completeness. The forecasts of cash withdrawals were used to build a discrete optimal control model, which in turn was used to develop an optimal schedule for adding funds to the payment terminals. The efficiency and reliability of the proposed model are shown to be higher than those of the classical Baumol-Tobin inventory management model: when tested on the time series of three ATMs, the discrete optimal control model never exhausted the available funds and earned on average 30% more than the classical model.
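
As a hedged illustration of one of the three forecasting models named above, the sketch below fits a triple exponential smoothing (Holt-Winters) model with weekly seasonality to a synthetic daily withdrawal series:

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# The data below is synthetic; the real input would be an ATM's daily
# withdrawal series with a pronounced weekly pattern.
rng = np.random.default_rng(0)
days = 365
weekly = np.tile([1.0, 0.9, 0.8, 0.9, 1.3, 1.6, 1.2], days // 7 + 1)[:days]
withdrawals = 100_000 * weekly + rng.normal(0, 5_000, days)

# Triple exponential smoothing: level + additive trend + additive seasonality.
model = ExponentialSmoothing(
    withdrawals, trend="add", seasonal="add", seasonal_periods=7
).fit()
forecast = model.forecast(14)  # two-week horizon for the collection schedule
print(forecast.round(0))
```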

While the difficulty of reinforcement learning problems is typically related to the complexity of their state spaces, abstraction holds that solutions often lie in simpler underlying latent spaces. Prior works have focused on learning either continuous or dense abstractions, or have required a human to provide one. Information-dense representations capture features that are irrelevant for solving tasks, and continuous spaces can struggle to represent discrete objects. In this work, we automatically learn a sparse, discrete abstraction of the underlying environment. We do so using a simple, end-to-end trainable model based on the successor representation and max-entropy regularization. We describe an algorithm to apply our model, named Discrete State-Action Abstraction (DSAA), which computes an action abstraction in the form of temporally extended actions, i.e., Options, to transition between discrete abstract states. Empirically, we demonstrate the effects of different exploration schemes on the resulting abstraction and show that it is efficient for solving downstream tasks.
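
As background for the model's main building block, the sketch below shows a tabular successor representation learned by TD(0) updates; DSAA's full end-to-end model and max-entropy regularization are not reproduced here:

```python
import numpy as np

# Tabular successor representation (SR):
# M[s, s'] ~ expected discounted future occupancy of s' starting from s.
n_states, gamma, alpha = 6, 0.95, 0.1
M = np.zeros((n_states, n_states))

def sr_td_update(s, s_next):
    # TD(0): M(s, .) <- M(s, .) + alpha * (1[s] + gamma * M(s', .) - M(s, .))
    one_hot = np.eye(n_states)[s]
    M[s] += alpha * (one_hot + gamma * M[s_next] - M[s])

# Random-walk transitions on a chain, as a stand-in for environment experience.
rng = np.random.default_rng(0)
s = 0
for _ in range(10_000):
    s_next = int(np.clip(s + rng.choice([-1, 1]), 0, n_states - 1))
    sr_td_update(s, s_next)
    s = s_next
print(M.round(2))  # rows of nearby states look similar -> basis for clustering
```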

The problem of classifying turbulent environments from partial observation is key for several theoretical and applied fields, from engineering to earth observation and astrophysics, e.g., to precondition the search for optimal control policies in different turbulent backgrounds, to predict the probability of rare events, and/or to infer physical parameters labelling different turbulent set-ups. To achieve this goal, one can use different tools depending on one's knowledge of the system and on the quality and quantity of the accessible data. In this context, we assume a model-free setup, completely blind to all dynamical laws, but with a large quantity of (good quality) data for training. As a prototype of complex flows with different attractors and different multi-scale statistical properties, we select 10 turbulent 'ensembles' by changing the rotation frequency of the frame of reference of the 3d domain, and we assume access to a set of partial observations limited to the instantaneous kinetic energy distribution in a 2d plane, as is often the case in geophysics and astrophysics. We compare results obtained by a Machine Learning (ML) approach consisting of a state-of-the-art Deep Convolutional Neural Network (DCNN) against Bayesian inference, which exploits information on velocity and enstrophy moments. First, we discuss the superiority of the ML approach, also presenting results for varying amounts of training data and different hyper-parameters. Second, we present an ablation study on the input data aimed at ranking the importance of the flow features used by the DCNN, which helps to identify the main physical content used by the classifier. Finally, we discuss the main limitations of such data-driven methods and some potentially interesting applications.
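
As a hedged illustration of the ML side of the comparison, the sketch below defines a small convolutional classifier mapping 2d kinetic-energy snapshots to the 10 rotation-frequency classes; the layer sizes are illustrative and not the paper's actual DCNN:

```python
import torch
import torch.nn as nn

# Convolutional classifier: 2d kinetic-energy snapshot -> one of 10 classes.
class TurbulenceClassifier(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):  # x: (batch, 1, H, W) energy field in a 2d plane
        return self.head(self.features(x).flatten(1))

model = TurbulenceClassifier()
snapshots = torch.randn(8, 1, 64, 64)   # placeholder for 2d energy planes
logits = model(snapshots)
loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (8,)))
loss.backward()
```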

In this article, we investigate the robust optimal design problem for the prediction of the response when the fitted regression models are only approximately specified and observations might be missing completely at random. The intuitive idea is as follows: we assume that data are missing completely at random and that complete-case analysis is applied. To account for the occurrence of missing data, the design criterion we choose is the mean, over the missing indicator, of the averaged (over the design space) mean squared errors of the predictions. To describe the uncertainty in the specification of the true underlying model, we impose a neighborhood structure on the deterministic part of the regression response and maximize, analytically, the \textbf{M}ean of the averaged \textbf{M}ean squared \textbf{P}rediction \textbf{E}rrors (MMPE) over the entire neighborhood. The maximized MMPE is the ``worst'' loss in the neighborhood of the fitted regression model. Minimizing the maximum MMPE over the class of designs, we obtain robust ``minimax'' designs. The robust designs thus constructed afford protection from increases in prediction errors resulting from model misspecification.
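
A schematic of the minimax criterion as described above, with illustrative notation (the paper's precise neighborhood $\mathcal{F}$, missing indicator $\delta$, and loss definitions are not reproduced here):

```latex
% Schematic reconstruction from the abstract, notation illustrative:
% minimize over designs \xi the worst-case MMPE over the misspecification
% neighborhood \mathcal{F}, averaging over the missing indicator \delta
% and over the design space \mathcal{X}.
\[
  \xi^{*} \;=\; \arg\min_{\xi}\; \max_{f \in \mathcal{F}}\;
  \mathbb{E}_{\delta}\!\left[ \frac{1}{|\mathcal{X}|}
  \int_{\mathcal{X}} \mathrm{MSPE}\big(\hat{y}(x;\xi,f,\delta)\big)\, dx \right]
\]
```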

We present a contact-implicit planning approach that can generate contact-interaction trajectories for non-prehensile manipulation problems without tuning or a tailored initial guess and with high success rates. This is achieved by leveraging the concept of state-triggered constraints (STCs) to capture the hybrid dynamics induced by discrete contact modes without explicitly reasoning about the combinatorics. STCs enable triggering arbitrary constraints by a strict inequality condition in a continuous way. We first use STCs to develop an automatic contact constraint activation method to minimize the effective constraint space based on the utility of contact candidates for a given task. Then, we introduce a re-formulation of the Coulomb friction model based on STCs that is more efficient for the discovery of tangential forces than the well-studied complementarity constraints-based approach. Last, we include the proposed friction model in the planning and control of quasi-static planar pushing. The performance of the STC-based contact activation and friction methods is evaluated by extensive simulation experiments in a dynamic pushing scenario. The results demonstrate that our methods outperform the baselines based on complementarity constraints with a significant decrease in the planning time and a higher success rate. We then compare the proposed quasi-static pushing controller against a mixed-integer programming-based approach in simulation and find that our method is computationally more efficient and provides a better tracking accuracy, with the added benefit of not requiring an initial control trajectory. Finally, we present hardware experiments demonstrating the usability of our framework in executing complex trajectories in real-time even with a low-accuracy tracking system.
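
As a hedged illustration of the STC mechanism, the sketch below uses a common continuous encoding from the successive convexification literature, where a strict trigger $g(x) < 0$ implies a constraint $c(x) \leq 0$; the paper's exact formulation may differ:

```python
# State-triggered constraint, continuous encoding:
#   trigger g(x) < 0  =>  constraint c(x) <= 0,
# expressed as the single smooth residual h(x) = -min(g(x), 0) * c(x) <= 0.
# If g >= 0 the residual is identically 0 (constraint inactive); if g < 0,
# feasibility of h <= 0 forces c <= 0 (constraint active).
def stc_residual(g_val: float, c_val: float) -> float:
    return -min(g_val, 0.0) * c_val  # feasible iff residual <= 0

# Illustrative contact use case (names hypothetical): "if the pusher is
# within contact distance (g < 0), the contact-force constraint c must hold."
for g, c in [(-0.2, 0.5), (-0.2, -0.5), (0.3, 0.5)]:
    h = stc_residual(g, c)
    print(f"g={g:+.1f}, c={c:+.1f} -> h={h:+.2f}",
          "(violated)" if h > 0 else "(satisfied)")
```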

Understanding which student support strategies mitigate dropout and improve student retention is an important part of modern higher education research. One of the largest challenges institutions of higher learning currently face is the scalability of student support. Part of this is due to the shortage of staff addressing the needs of students, and to the referral pathways required to deliver timely student support. This is further complicated by the difficulty of these referrals, especially as students are often faced with a combination of administrative, academic, social, and socio-economic challenges. A possible solution to this problem is a combination of student outcome prediction and algorithmic recommender systems within the context of higher education. While much effort and detail has gone into explaining algorithmic decision making in this context, there is still a need to develop data collection strategies. Therefore, the purpose of this paper is to outline a data collection framework specific to recommender systems within this context, in order to reduce collection biases, understand student characteristics, and find an ideal way to infer optimal influences on the student journey. If confirmation biases, challenges in data sparsity, and the type of information to collect from students are not addressed, attempts to assess and evaluate the effects of these systems within higher education will suffer.

Deep neural networks have achieved remarkable success in computer vision tasks. Existing neural networks mainly operate in the spatial domain with fixed input sizes. For practical applications, images are usually large and have to be downsampled to the predetermined input size of the neural network. Even though downsampling reduces computation and the required communication bandwidth, it removes both redundant and salient information indiscriminately, which results in accuracy degradation. Inspired by digital signal processing theories, we analyze the spectral bias from the frequency perspective and propose a learning-based frequency selection method to identify the trivial frequency components that can be removed without accuracy loss. The proposed method of learning in the frequency domain leverages identical structures of well-known neural networks, such as ResNet-50, MobileNetV2, and Mask R-CNN, while accepting frequency-domain information as the input. Experimental results show that learning in the frequency domain with static channel selection can achieve higher accuracy than the conventional spatial downsampling approach while further reducing the input data size. Specifically, for ImageNet classification with the same input size, the proposed method achieves 1.41% and 0.66% top-1 accuracy improvements on ResNet-50 and MobileNetV2, respectively. Even with half the input size, the proposed method still improves top-1 accuracy on ResNet-50 by 1%. In addition, we observe a 0.8% average precision improvement on Mask R-CNN for instance segmentation on the COCO dataset.
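
As a hedged illustration of the frequency-domain input implied above, the sketch below applies an 8x8 block DCT (as in JPEG) to a grayscale image and regroups the 64 frequencies into channels, then keeps a fixed subset; in the paper the channel selection is learned, whereas here it is hard-coded:

```python
import numpy as np
from scipy.fftpack import dct

# Split the image into 8x8 blocks, apply a 2d DCT per block, and regroup
# each of the 64 frequencies into its own channel.
def block_dct_channels(img: np.ndarray, block: int = 8) -> np.ndarray:
    h, w = img.shape
    gh, gw = h // block, w // block
    blocks = img[:gh * block, :gw * block].reshape(gh, block, gw, block)
    blocks = blocks.transpose(0, 2, 1, 3)                    # (gh, gw, 8, 8)
    freq = dct(dct(blocks, axis=2, norm="ortho"), axis=3, norm="ortho")
    return freq.reshape(gh, gw, block * block).transpose(2, 0, 1)  # (64, gh, gw)

img = np.random.rand(224, 224).astype(np.float32)  # placeholder grayscale image
channels = block_dct_channels(img)                 # (64, 28, 28)
keep = channels[:16]   # fixed subset for illustration; selection is learned
print(channels.shape, "->", keep.shape)
```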
