
In pursuit of enhancing the overall efficiency of production systems, our study focused on the joint optimization of scheduling and machine maintenance in scenarios where product rework occurs. The primary challenge lies in the interdependence between product \underline{q}uality, machine \underline{r}eliability, and \underline{p}roduction scheduling, compounded by the uncertainties arising from machine degradation and product quality, which are prevalent in sophisticated manufacturing systems. To address this issue, we investigated the dynamic relationship among these three aspects, which we term the QRP co-effect. On this basis, we constructed an optimization model that integrates production scheduling, machine maintenance, and product rework decisions, capturing stochastic degradation and product-quality uncertainty within a mixed-integer programming formulation. To solve this problem effectively, we proposed a dual-module solution framework that integrates planning and evaluation, improving solutions via dynamic communication between the two modules. By analyzing the structural properties of this joint optimization problem, we devised an efficient algorithm with an interactive mechanism that leverages \emph{in-situ} condition information on the production system's state and the available computational resources. The proposed methodology was validated through comparative and ablation experiments. The results demonstrated a significant improvement in production system efficiency, along with a reduction in machine maintenance costs, in scenarios involving rework.
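
As an illustration of the kind of trade-off such a model captures, here is a minimal, self-contained sketch in Python. Everything in it (the jobs, costs, and the linear degradation/rework model) is an invented toy, not the paper's mixed-integer formulation; it simply enumerates joint schedule-and-maintenance decisions and scores them by expected cost including rework.

```python
# Toy joint scheduling + maintenance decision under rework risk (illustrative).
import itertools

JOBS = {"A": 4.0, "B": 2.0, "C": 3.0}   # job -> processing time (assumed)
MAINT_COST = 5.0                         # cost of one maintenance action
REWORK_COST = 6.0                        # cost of reworking a defective job
WEAR_RATE = 0.05                         # defect-probability growth per unit of machine age

def expected_cost(order, maint_after):
    """Makespan plus maintenance plus expected rework cost.

    `maint_after` is the position (0..len(order)) before which the machine
    is restored to as-good-as-new; machine age drives defect probability.
    """
    age, cost, time = 0.0, 0.0, 0.0
    for i, job in enumerate(order):
        if i == maint_after:
            cost += MAINT_COST
            age = 0.0
        dur = JOBS[job]
        time += dur
        age += dur
        p_defect = min(1.0, WEAR_RATE * age)  # assumed linear degradation
        cost += p_defect * REWORK_COST        # expected rework cost
    return cost + time                         # trade off throughput vs. quality

best = min(
    ((order, m) for order in itertools.permutations(JOBS)
     for m in range(len(JOBS) + 1)),
    key=lambda x: expected_cost(*x),
)
print("best schedule:", best, "cost:", round(expected_cost(*best), 3))
```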

Related content

The proposed two-dimensional geometrically exact beam element extends our previous work by including the effects of shear distortion as well as distributed forces and moments acting along the beam. The general flexibility-based formulation exploits the kinematic equations combined with the inverted sectional equations and the integrated form of equilibrium equations. The resulting set of three first-order differential equations is discretized by finite differences, and the boundary value problem is converted into an initial value problem using the shooting method. Due to the special structure of the governing equations, the scheme remains explicit even though the first derivatives are approximated by central differences, leading to high accuracy. The main advantage of the adopted approach is that the error can be efficiently reduced by refining the computational grid used for finite differences at the element level while keeping the number of global degrees of freedom low. The efficiency is also increased by dealing directly with the global centerline coordinates and sectional inclination with respect to global axes as the primary unknowns at the element level, thereby avoiding transformations between local and global coordinates. Two formulations of the sectional equations, referred to as the Reissner and Ziegler models, are presented and compared. In particular, the stability of an axially loaded beam/column is investigated, and connections to the Haringx and Engesser stability theories are discussed. Both approaches are tested in a series of numerical examples, which illustrate (i) high accuracy with quadratic convergence when the spatial discretization is refined, (ii) easy modeling of variable stiffness along the element (such as rigid joint offsets), (iii) efficient and accurate characterization of the buckling and post-buckling behavior.
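
The element formulation itself is not reproduced here, but the shooting-method step, converting a boundary value problem into an initial value problem in an unknown initial condition, can be illustrated on a generic linear BVP. The equation, root bracket, and tolerances below are illustrative assumptions, not the paper's beam equations.

```python
# Generic shooting method: solve y'' = -k^2 y with y(0) = 0, y(1) = 1 by
# treating the unknown initial slope s as a root-finding variable.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

k = 2.0

def end_value(s):
    """March the IVP y(0) = 0, y'(0) = s and return y(1)."""
    sol = solve_ivp(lambda x, y: [y[1], -k**2 * y[0]],
                    (0.0, 1.0), [0.0, s], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Root of the far-end boundary residual as a function of the slope.
s_star = brentq(lambda s: end_value(s) - 1.0, 0.1, 10.0)
print("shooting slope:", s_star, "exact:", k / np.sin(k))
```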

Uncertainty quantification is not yet widely adopted in the design process of engineering components, despite its importance for achieving sustainable and resource-efficient structures. This is mainly due to two reasons: 1) Tracing the effect of uncertainty in engineering simulations is a computationally challenging task. This is especially true for inelastic simulations, as the whole loading history influences the results. 2) Implementations of efficient schemes in standard finite element software are lacking. In this paper, we tackle both problems. We propose a weakly intrusive implementation of the time-separated stochastic mechanics in the finite element software Abaqus. The time-separated stochastic mechanics is an efficient and accurate method for the uncertainty quantification of structures with inelastic material behavior. The method effectively separates the stochastic but time-independent from the deterministic but time-dependent behavior. The resulting scheme consists of only two deterministic finite element simulations for homogeneous material fluctuations in order to approximate the stochastic behavior. This reduces the computational cost compared to standard Monte Carlo simulations by at least two orders of magnitude while ensuring accurate solutions. In this paper, the implementation details in Abaqus and numerical comparisons are presented for the example of damage simulations.
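
The core idea, separating the stochastic but time-independent part from the deterministic but time-dependent part so that two deterministic runs replace a Monte Carlo sweep, can be sketched on a toy one-dimensional material law. The Maxwell-type model, load history, and parameter values below are invented for illustration and are not the paper's Abaqus implementation.

```python
# Toy time-separation sketch: response to a random stiffness E = E0*(1 + xi)
# is approximated as sigma(t, xi) ~ sigma0(t) + xi * sigma1(t), so only two
# deterministic simulations are needed.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
dt = t[1] - t[0]
eps_rate = 1.0      # constant strain rate (assumed load history)
tau = 0.3           # relaxation time
E0 = 100.0

def simulate(E):
    """Explicit Euler for the Maxwell-type law d(sigma)/dt = E*eps_rate - sigma/tau."""
    sigma = np.zeros_like(t)
    for i in range(len(t) - 1):
        sigma[i + 1] = sigma[i] + dt * (E * eps_rate - sigma[i] / tau)
    return sigma

# Two deterministic runs: nominal response and stiffness sensitivity.
sigma0 = simulate(E0)
sigma1 = (simulate(E0 * 1.01) - sigma0) / 0.01   # d(sigma)/d(xi) at xi = 0

xi = rng.normal(0.0, 0.1, size=2000)             # stochastic, time-independent
mc = np.array([simulate(E0 * (1 + x)) for x in xi])

# Separated approximation: mean ~ sigma0, std ~ |sigma1| * std(xi).
print("max mean error:", np.max(np.abs(mc.mean(axis=0) - sigma0)))
print("max std  error:", np.max(np.abs(mc.std(axis=0) - np.abs(sigma1) * xi.std())))
```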

This paper presents an analysis of the properties of two hybrid discretization methods for Gaussian derivatives, based on convolution with either the normalized sampled Gaussian kernel or the integrated Gaussian kernel, followed by central differences. The motivation for studying these discretization methods is that, when multiple spatial derivatives of different orders are needed at the same scale level, they can be computed significantly more efficiently than with more direct derivative approximations based on explicit convolutions with either sampled or integrated Gaussian derivative kernels. These computational benefits also hold for the genuinely discrete approach, which computes discrete analogues of Gaussian derivatives by convolution with the discrete analogue of the Gaussian kernel followed by central differences. However, the underlying mathematical primitives for the discrete analogue of the Gaussian kernel, in terms of modified Bessel functions of integer order, may not be available in certain frameworks for image processing, such as when performing deep learning based on scale-parameterized filters in terms of Gaussian derivatives, with learning of the scale levels. In this paper, we characterize the properties of these hybrid discretization methods in terms of quantitative performance measures, concerning both the amount of spatial smoothing that they imply and the relative consistency of scale estimates obtained from scale-invariant feature detectors with automatic scale selection. We place particular emphasis on the behaviour for very small values of the scale parameter, where the results may differ significantly from the fully continuous scale-space theory, as well as between the different types of discretization methods.
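
The following sketch contrasts the two ingredients on a concrete signal: a hybrid kernel built from the normalized sampled Gaussian followed by central differences, versus the directly sampled Gaussian derivative kernel. The test signal and scale value are arbitrary choices; the small extra attenuation of the hybrid estimate reflects the additional spatial smoothing that the central-difference operator implies.

```python
# Hybrid vs. direct discretization of a first-order Gaussian derivative.
import numpy as np

sigma = 1.0
u = np.arange(-8, 9).astype(float)                # truncated kernel support

# Hybrid scheme: normalized sampled Gaussian composed with central differences.
g = np.exp(-u**2 / (2 * sigma**2))
g /= g.sum()
hybrid = np.convolve(g, [0.5, 0.0, -0.5])         # composed derivative kernel

# Direct scheme: sampled first-order Gaussian derivative kernel.
direct = -u / sigma**2 * np.exp(-u**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

# Test signal sin(w k): its true scale-space derivative at x = 0 and scale
# sigma is w * exp(-sigma^2 w^2 / 2).
w = 0.5
k = np.arange(-40, 41).astype(float)
f = np.sin(w * k)
mid = len(k) // 2
true = w * np.exp(-sigma**2 * w**2 / 2)
est_hybrid = np.convolve(f, hybrid, mode="same")[mid]
est_direct = np.convolve(f, direct, mode="same")[mid]
# The hybrid estimate is slightly more attenuated: the central-difference
# operator adds a small amount of spatial smoothing of its own.
print(f"true {true:.4f}  hybrid {est_hybrid:.4f}  direct {est_direct:.4f}")
```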

This work presents an abstract framework for the design, implementation, and analysis of the multiscale spectral generalized finite element method (MS-GFEM), a particular numerical multiscale method originally proposed in [I. Babuska and R. Lipton, Multiscale Model. Simul., 9 (2011), pp.~373--406]. MS-GFEM is a partition of unity method employing optimal local approximation spaces constructed from local spectral problems. We establish a general local approximation theory demonstrating exponential convergence with respect to local degrees of freedom under certain assumptions, with explicit dependence on key problem parameters. Our framework applies to a broad class of multiscale PDEs with $L^{\infty}$-coefficients in both continuous and discrete, finite element settings, including highly indefinite problems (convection-dominated diffusion, as well as the high-frequency Helmholtz, Maxwell and elastic wave equations with impedance boundary conditions), and higher-order problems. Notably, we prove a local convergence rate of $O(e^{-cn^{1/d}})$ for MS-GFEM for all these problems, improving upon the $O(e^{-cn^{1/(d+1)}})$ rate shown by Babuska and Lipton. Moreover, based on the abstract local approximation theory for MS-GFEM, we establish a unified framework for showing low-rank approximations to multiscale PDEs. This framework applies to the aforementioned problems, proving that the associated Green's functions admit an $O(|\log\epsilon|^{d})$-term separable approximation on well-separated domains with error $\epsilon>0$. Our analysis improves and generalizes the result in [M. Bebendorf and W. Hackbusch, Numerische Mathematik, 95 (2003), pp.~1--28] where an $O(|\log\epsilon|^{d+1})$-term separable approximation was proved for Poisson-type problems.
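
The low-rank separability claim can be checked numerically on a simple kernel. The sketch below uses the logarithmic kernel on two well-separated intervals (an illustrative stand-in, not one of the paper's PDE settings) and counts the singular values needed for a given accuracy, which grows only logarithmically in $1/\epsilon$.

```python
# Separable rank of -log|x - y| on well-separated intervals grows like log(1/eps).
import numpy as np

x = np.linspace(0.0, 0.4, 200)
y = np.linspace(0.6, 1.0, 200)
G = -np.log(np.abs(x[:, None] - y[None, :]))

s = np.linalg.svd(G, compute_uv=False)
for eps in (1e-2, 1e-4, 1e-6, 1e-8):
    rank = int(np.sum(s / s[0] > eps))   # number of terms to reach accuracy eps
    print(f"eps = {eps:.0e}: separable rank ~ {rank}")
```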

As the spatial features of multivariate data are increasingly central in researchers' applied problems, there is a growing demand for novel spatially-aware methods that are flexible, easily interpretable, and scalable to large data. We develop inside-out cross-covariance (IOX) models for multivariate spatial likelihood-based inference. IOX leads to valid cross-covariance matrix functions which we interpret as inducing spatial dependence on independent replicates of a correlated random vector. The resulting sample cross-covariance matrices are "inside-out" relative to the ubiquitous linear model of coregionalization (LMC). However, unlike LMCs, our methods offer direct marginal inference, easy prior elicitation of covariance parameters, the ability to model outcomes with unequal smoothness, and flexible dimension reduction. As a covariance model for a $q$-variate Gaussian process, IOX leads to scalable models for noisy vector data as well as flexible latent models. For large-$n$ cases, IOX complements Vecchia approximations and related process-based methods based on sparse graphical models. We demonstrate superior performance of IOX on synthetic datasets as well as on colorectal cancer proteomics data. An R package implementing the proposed methods is available at github.com/mkln/spiox.
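
To convey the construction's flavor in code, the sketch below induces spatial dependence on independent replicates of a correlated random vector in the simplest separable way. This is a simplified stand-in, not the actual IOX model (which, for instance, accommodates unequal smoothness across outcomes; see the authors' R package at github.com/mkln/spiox); all sizes and parameters are illustrative.

```python
# Separable toy: spatially correlate independent replicates of a correlated vector.
import numpy as np

rng = np.random.default_rng(1)
n, q = 400, 3
coords = rng.uniform(size=(n, 2))

# Exponential spatial correlation and an arbitrary q x q cross-covariance.
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
R = np.exp(-dist / 0.2) + 1e-8 * np.eye(n)
Afac = rng.normal(size=(q, q))
Sigma = Afac @ Afac.T

# Rows of Z @ chol(Sigma)^T are independent replicates of a correlated
# q-vector; premultiplying by chol(R) induces the spatial dependence.
Z = rng.normal(size=(n, q))
Y = np.linalg.cholesky(R) @ Z @ np.linalg.cholesky(Sigma).T

# The empirical covariance across locations roughly recovers Sigma.
print(np.round(Sigma, 2))
print(np.round(np.cov(Y.T), 2))
```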

We present a novel, model-free, and data-driven methodology for controlling complex dynamical systems into previously unseen target states, including those with significantly different and complex dynamics. Leveraging a parameter-aware realization of next-generation reservoir computing, our approach accurately predicts system behavior in unobserved parameter regimes, enabling control over transitions to arbitrary target states. Crucially, this includes states with dynamics that differ fundamentally from known regimes, such as shifts from periodic to intermittent or chaotic behavior. The method's parameter-awareness facilitates non-stationary control, ensuring smooth transitions between states. By extending the applicability of machine learning-based control mechanisms to previously inaccessible target dynamics, this methodology opens the door to transformative new applications while maintaining exceptional efficiency. Our results highlight reservoir computing as a powerful alternative to traditional methods for dynamic system control.
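
A minimal parameter-aware next-generation reservoir computing loop can be sketched as polynomial features of delayed states augmented with the control parameter, fitted by ridge regression and then queried at a parameter value never seen in training. The logistic-map example and feature library below are illustrative assumptions, not the paper's setup or target systems.

```python
# Parameter-aware NGRC sketch on the logistic map.
import numpy as np

def logistic_series(r, n=2000, x0=0.4):
    """Iterate the logistic map x_{k+1} = r x_k (1 - x_k)."""
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = r * x[i] * (1.0 - x[i])
    return x

def features(x, r):
    """NGRC-style library: constant, delayed states, quadratics, and
    parameter-state cross terms (so the map's r-dependence is expressible)."""
    x0, x1 = x[1:-1], x[:-2]                       # x_k and x_{k-1}
    return np.column_stack([np.ones_like(x0), x0, x1, x0**2, x0 * x1, x1**2,
                            np.full_like(x0, r), r * x0, r * x0**2])

train_r = [3.5, 3.6, 3.8]
X = np.vstack([features(logistic_series(r), r) for r in train_r])
Y = np.concatenate([logistic_series(r)[2:] for r in train_r])

lam = 1e-6                                          # ridge regularization
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

r_test = 3.7                                        # parameter never seen in training
x = logistic_series(r_test)
err = features(x, r_test) @ W - x[2:]
print("max one-step error at unseen r:", np.abs(err).max())
```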

Multi-modal perception methods are thriving in the autonomous driving field because they make better use of complementary data from different sensors. Such methods depend on calibration and synchronization between sensors to obtain accurate environmental information. The robustness of spatial alignment in autonomous-driving object detection has already been studied; research on temporal alignment, however, remains scarce. Since LiDAR point clouds are more challenging to transfer in real time in practical experiments, our study uses historical LiDAR frames to better align features when LiDAR data lags occur. We designed a Timealign module, built on the SOTA GraphBEV framework, that predicts and combines LiDAR features with the lagged observation to tackle such time misalignment.
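
Neither the Timealign module nor GraphBEV is reproduced here, but a hypothetical sketch of the idea, predicting current-frame LiDAR BEV features from historical frames and gating them against the lagged observation, might look as follows; all layer choices and shapes are assumptions.

```python
# Hypothetical temporal-alignment module for lagged LiDAR BEV features.
import torch
import torch.nn as nn

class TimeAlignSketch(nn.Module):
    def __init__(self, channels: int, history: int):
        super().__init__()
        # Predict current BEV features from stacked historical frames.
        self.predict = nn.Sequential(
            nn.Conv2d(channels * history, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        # Learn a per-pixel gate to blend the prediction with the lagged frame.
        self.gate = nn.Conv2d(channels * 2, channels, kernel_size=1)

    def forward(self, history_feats, lagged_feat):
        # history_feats: (B, T, C, H, W); lagged_feat: (B, C, H, W)
        b, t, c, h, w = history_feats.shape
        pred = self.predict(history_feats.reshape(b, t * c, h, w))
        alpha = torch.sigmoid(self.gate(torch.cat([pred, lagged_feat], dim=1)))
        return alpha * pred + (1.0 - alpha) * lagged_feat

feats = torch.randn(2, 3, 64, 64, 64)        # three historical BEV frames
lagged = torch.randn(2, 64, 64, 64)          # delayed current observation
print(TimeAlignSketch(64, 3)(feats, lagged).shape)   # -> (2, 64, 64, 64)
```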

We present a computational design method that optimally places reinforcement in dentures to increase their stiffness, a design that modern multi-material three-dimensional printers could fabricate. The study focuses on reducing denture displacement by identifying the regions that require reinforcement (E-glass material) with the help of topology optimization. Our method is applied to a three-dimensional complete lower-jaw denture. We compare the displacement results of a non-reinforced denture with those of a reinforced, two-material denture. The comparison indicates a decrease in displacement in the reinforced denture: considering the node-based displacement distribution, the reinforcement reduces the displacement magnitudes relative to the non-reinforced denture. The study guides dental technicians on where to place reinforcement automatically during fabrication, helping them save time and reduce material usage.
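
A one-dimensional toy analogue conveys the topology-optimization step: distribute a limited volume of stiff reinforcement over a bar of serial segments so that tip displacement is minimized, using a SIMP-style material interpolation and an optimality-criteria update. All geometry and material numbers are illustrative, not the paper's 3D denture model.

```python
# SIMP-style reinforcement placement on a 1D bar of serial segments.
import numpy as np

n, p, frac = 50, 3.0, 0.3            # segments, SIMP penalty, reinforcement volume fraction
F, L = 1.0, 1.0                      # axial load and segment length
E_base, E_reinf = 2.0, 70.0          # base material vs. reinforcement stiffness (assumed)
A = 1.0 + 0.8 * np.sin(np.linspace(0.0, 3.0 * np.pi, n))**2   # varying cross-section

rho = np.full(n, frac)
for _ in range(100):
    E = E_base + rho**p * (E_reinf - E_base)          # SIMP interpolation
    # Sensitivity of tip displacement u = sum(F L / (A_i E_i)) w.r.t. rho_i.
    dudr = -F * L / (A * E**2) * p * rho**(p - 1) * (E_reinf - E_base)
    lo, hi = 1e-12, 1e12                              # bisection on OC multiplier
    for _ in range(60):
        lam = np.sqrt(lo * hi)
        rho_new = np.clip(rho * np.sqrt(-dudr / lam),
                          np.maximum(rho - 0.2, 1e-3),
                          np.minimum(rho + 0.2, 1.0))
        if rho_new.sum() > frac * n:                  # too much material: raise lam
            lo = lam
        else:
            hi = lam
    rho = rho_new

E = E_base + rho**p * (E_reinf - E_base)
print(f"tip displacement: {np.sum(F * L / (A * E)):.4f}; "
      f"segments with rho > 0.5: {int(np.sum(rho > 0.5))}")
```

As expected, the reinforcement concentrates in the thin segments, where the displacement sensitivity is largest.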

Leveraging the large body of work devoted in recent years to describing redundancy and synergy in multivariate interactions among random variables, we propose a novel approach to quantify cooperative effects in feature importance, one of the most widely used techniques for explainable artificial intelligence. In particular, we propose an adaptive version of a well-known metric of feature importance, Leave One Covariate Out (LOCO), to disentangle high-order effects involving a given input feature in regression problems. LOCO is the reduction in prediction error when the feature under consideration is added to the set of all the other features used for regression. Instead of computing the LOCO using all the features at hand, as in its standard version, our method searches for the multiplet of features that maximizes the LOCO and for the one that minimizes it. This provides a decomposition of the LOCO as the sum of a two-body component and higher-order (redundant and synergistic) components, also highlighting the features that contribute to these high-order effects alongside the driving feature. We report an application to proton/pion discrimination from detector measurements simulated with GEANT.
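
In code, the adaptive LOCO search reduces to comparing regression errors with and without the driving feature over candidate conditioning subsets. The synthetic data below (with one feature made redundant with the target feature) is an invented illustration; the metric follows the abstract's definition, but the linear model class and data are assumptions.

```python
# Adaptive LOCO: search conditioning subsets that maximize / minimize it.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, d = 2000, 5
X = rng.normal(size=(n, d))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)   # feature 1 redundant with feature 0
y = X[:, 0] + X[:, 2] * X[:, 3] + 0.1 * rng.normal(size=n)

def mse(cols):
    """Least-squares prediction error using the given feature columns."""
    A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return np.mean(resid**2)

def loco(j, subset):
    """Error reduction when feature j is added to the conditioning subset."""
    return mse(list(subset)) - mse(list(subset) + [j])

j = 0
others = [c for c in range(d) if c != j]
standard = loco(j, others)                      # standard LOCO: all other features
subsets = [s for r in range(len(others) + 1)
           for s in itertools.combinations(others, r)]
s_max = max(subsets, key=lambda s: loco(j, s))
s_min = min(subsets, key=lambda s: loco(j, s))
print(f"standard LOCO({j}) = {standard:.4f}")
print(f"max over subsets: {loco(j, s_max):.4f} at {s_max}; "
      f"min: {loco(j, s_min):.4f} at {s_min}")
```

With the redundant copy in the conditioning set, the LOCO of feature 0 collapses toward zero, while conditioning on nothing recovers its full main effect, which is the kind of high-order structure the decomposition targets.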

To study unified model averaging estimation for situations with complicated data structures, we propose a novel model averaging method based on cross-validation (MACV). MACV unifies a large class of new and existing model averaging estimators and covers a very general class of loss functions. Furthermore, to reduce the computational burden of conventional leave-subject-out/leave-one-out cross-validation, we propose a SEcond-order-Approximated Leave-one/subject-out (SEAL) cross-validation, which greatly improves computational efficiency. In the context of non-independent and non-identically distributed random variables, we establish a unified theory for analyzing the asymptotic behavior of the proposed MACV and SEAL methods, where the number of candidate models is allowed to diverge with the sample size. To demonstrate the breadth of the proposed methodology, we derive optimal model averaging estimators in four important situations: longitudinal data with discrete responses, within-cluster correlation structure modeling, conditional prediction in spatial data, and quantile regression with a potential correlation structure. We conduct extensive simulation studies and analyze real-data examples to illustrate the advantages of the proposed methods.
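
The cross-validation-based weighting can be sketched for linear candidate models, where leave-one-out residuals come cheaply from the hat-matrix identity $e_i/(1-h_{ii})$; note this classical shortcut stands in for, and is not, the paper's SEAL approximation. The models, data, and squared-error loss below are illustrative choices.

```python
# Cross-validation-based model averaging with closed-form LOO residuals.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 300
x = rng.uniform(-2, 2, size=n)
y = np.sin(2 * x) + 0.3 * rng.normal(size=n)

def loo_predictions(deg):
    """LOO fitted values for a polynomial model of the given degree."""
    A = np.vander(x, deg + 1)
    H = A @ np.linalg.pinv(A)                 # hat matrix
    fit = H @ y
    return y - (y - fit) / (1.0 - np.diag(H))  # LOO prediction y_i^(-i)

# Candidate models: polynomials of increasing degree.
P = np.column_stack([loo_predictions(k) for k in (1, 3, 5, 9)])

def cv_risk(w):
    """LOO prediction error of the weighted model average."""
    return np.mean((y - P @ w)**2)

w0 = np.full(P.shape[1], 1.0 / P.shape[1])
res = minimize(cv_risk, w0, bounds=[(0, 1)] * P.shape[1],
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
print("model-averaging weights:", np.round(res.x, 3))
```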
