
Rescheduling problems arise in a variety of situations where a previously planned schedule needs to be adjusted to deal with unforeseen events. A common problem is the arrival of new orders, i.e., jobs, which have to be integrated into the schedule of the so-called old jobs. The maximum and total absolute time deviations of the old jobs' completion times are modeled as a disruption constraint to limit the change to the original schedule. Disruption constraints affect the shape of an optimal schedule, particularly with respect to the sequencing of old jobs and the insertion of idle time. We therefore give a classification into idle and no-idle problems for a set of single-machine rescheduling problems with different objective functions. We then prove the complexity of five rescheduling problems that have been left open in the literature.
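
As a minimal, self-contained illustration of the two disruption measures (not taken from the paper; the job data are hypothetical), the following Python sketch computes the maximum and total absolute completion-time deviations of the old jobs between the original and the adjusted schedule.

    # Hypothetical illustration of the two disruption measures for old jobs:
    # compare each old job's completion time in the original schedule with its
    # completion time in the adjusted schedule that also contains the new jobs.
    def disruption_measures(original_completion, new_completion):
        """Both arguments map old-job id -> completion time."""
        deviations = [abs(new_completion[j] - original_completion[j])
                      for j in original_completion]
        return max(deviations), sum(deviations)

    # Example: three old jobs whose completion times shift after rescheduling.
    original = {"J1": 4, "J2": 7, "J3": 12}
    adjusted = {"J1": 4, "J2": 9, "J3": 15}
    d_max, d_total = disruption_measures(original, adjusted)
    print(d_max, d_total)  # 3 5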

Related content

In PDE-constrained optimization, one aims to find design parameters that minimize some objective, subject to the satisfaction of a partial differential equation. A major challenge is computing gradients of the objective with respect to the design parameters, as applying the chain rule requires the Jacobian of the PDE's state with respect to the design parameters. The adjoint method avoids this Jacobian by computing partial derivatives of a Lagrangian. Evaluating these derivatives requires the solution of a second PDE, governed by the adjoint of the constraint's differential operator, resulting in a backwards-in-time simulation. Particle-based Monte Carlo solvers are often used to compute the solution to high-dimensional PDEs. However, such solvers have the drawback of introducing noise into the computed results, thus requiring stochastic optimization methods. To guarantee convergence in this setting, both the constraint and adjoint Monte Carlo simulations should simulate the same particle trajectories. For large simulations, storing full paths from the constraint equation for re-use in the adjoint equation becomes infeasible due to memory limitations. In this paper, we provide a reversible extension to the family of permuted congruential pseudorandom number generators (PCG). We then use such a generator to recompute these time-reversed paths for the heat equation, avoiding the memory issue.
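
The abstract does not spell out the construction, but the state transition of a PCG-family generator is a linear congruential step, which is invertible modulo $2^{64}$ because the multiplier is odd. The sketch below is only a hedged illustration of that core ingredient (the constants and interface are illustrative, not the paper's): a state walk that can be run forward and then undone exactly; a real PCG additionally applies an output permutation on top of the state.

    # Hypothetical sketch: an invertible LCG state walk, the core ingredient a
    # reversible PCG-style generator can build on. Constants are illustrative.
    MASK = (1 << 64) - 1
    MULT = 6364136223846793005          # widely used 64-bit LCG multiplier
    INC = 1442695040888963407           # odd increment
    MULT_INV = pow(MULT, -1, 1 << 64)   # modular inverse, so each step can be undone

    def step_forward(state):
        return (state * MULT + INC) & MASK

    def step_backward(state):
        return ((state - INC) * MULT_INV) & MASK

    start = 0x123456789ABCDEF
    state = start
    for _ in range(5):
        state = step_forward(state)      # forward pass, as in the constraint simulation
    for _ in range(5):
        state = step_backward(state)     # backward pass, as in the adjoint simulation
    assert state == start                # the random stream is exactly reversible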

The problem of optimally recovering high-order mixed derivatives of bivariate functions of finite smoothness is studied. Based on the truncation method, an algorithm for numerical differentiation is constructed that is order-optimal both in accuracy and in the amount of Galerkin information involved.
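
As a rough, hypothetical illustration of the truncation idea (not the paper's algorithm): for a periodic bivariate function, one differentiates the available Fourier (Galerkin) coefficients term by term and keeps only frequencies up to a truncation level $N$, which is chosen to balance accuracy against the perturbation in the data.

    import numpy as np

    # Hypothetical sketch of the truncation method on a periodic function:
    # recover d^(r+s) f / dx^r dy^s from (possibly noisy) Fourier coefficients
    # by termwise differentiation, keeping only frequencies |k|, |l| <= N.
    def truncated_mixed_derivative(coeffs, r, s, N):
        """coeffs: dict {(k, l): c_kl}; returns a callable approximating the derivative."""
        kept = {(k, l): c * (1j * k) ** r * (1j * l) ** s
                for (k, l), c in coeffs.items() if abs(k) <= N and abs(l) <= N}

        def approx(x, y):
            return sum(c * np.exp(1j * (k * x + l * y)) for (k, l), c in kept.items()).real

        return approx

    # Example: f(x, y) = sin(x) cos(2y), whose mixed derivative is -2 cos(x) sin(2y).
    coeffs = {(1, 2): -0.25j, (1, -2): -0.25j, (-1, 2): 0.25j, (-1, -2): 0.25j}
    d = truncated_mixed_derivative(coeffs, r=1, s=1, N=4)
    print(d(0.3, 0.7), -2 * np.cos(0.3) * np.sin(1.4))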

We present a robust deep incremental learning framework for regression tasks on financial temporal tabular datasets, built upon the incremental use of commonly available tabular and time-series prediction models to adapt to the distributional shifts typical of financial data. The framework uses a simple basic building block (decision trees) to build self-similar models of any required complexity that deliver robust performance under adverse conditions such as regime changes, fat-tailed distributions, and low signal-to-noise ratios. As a detailed study, we demonstrate the scheme with XGBoost models trained on the Numerai dataset and show that a two-layer deep ensemble of XGBoost models over different model snapshots delivers high-quality predictions under different market regimes. We also show that the performance of XGBoost models with different numbers of boosting rounds in three scenarios (small, standard and large) increases monotonically with model size and converges towards the generalisation upper bound. We further evaluate the robustness of the model under variability of hyperparameters such as model complexity and data sampling settings. The model has low hardware requirements, as no specialised neural architectures are used and each base model can be trained independently in parallel.
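
A minimal sketch of the two-layer idea on synthetic data (hypothetical hyperparameters and data slicing, not the authors' exact configuration): several XGBoost base models are trained on different snapshots of the data, and a second layer combines their predictions; a simple average stands in for the second layer here.

    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 20))
    y = 0.1 * X[:, 0] + rng.normal(size=2000)        # low signal-to-noise target

    # Layer 1: base XGBoost models trained on different "snapshots" (here: data slices).
    base_models = []
    for idx in np.array_split(np.arange(1500), 3):
        model = xgb.XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.05)
        model.fit(X[idx], y[idx])
        base_models.append(model)

    # Layer 2: combine base predictions (a simple average stands in for the second layer).
    X_test, y_test = X[1500:], y[1500:]
    base_preds = np.column_stack([m.predict(X_test) for m in base_models])
    ensemble_pred = base_preds.mean(axis=1)
    print("ensemble/target correlation:", np.corrcoef(ensemble_pred, y_test)[0, 1])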

We consider the problem of sequential change detection, where the goal is to design a scheme for detecting any change in a parameter or functional $\theta$ of the data-stream distribution, with small detection delay but guaranteed control of the frequency of false alarms in the absence of changes. In this paper, we describe a simple reduction from sequential change detection to sequential estimation using confidence sequences: we begin a new $(1-\alpha)$-confidence sequence at each time step, and proclaim a change when the intersection of all active confidence sequences becomes empty. We prove that the average run length is at least $1/\alpha$, resulting in a change detection scheme with minimal structural assumptions (thus allowing for possibly dependent observations and nonparametric distribution classes), but strong guarantees. Our approach bears an interesting parallel with the reduction from change detection to sequential testing of Lorden (1971) and the e-detector of Shin et al. (2022).
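
A minimal sketch of the reduction on synthetic data, with a crude Hoeffding-style interval for a bounded mean standing in for a genuine $(1-\alpha)$-confidence sequence (the control flow, not the interval, is the point): a new sequence is started at every step, every active sequence is updated with the data observed since its start, and a change is declared once the intersection of all active intervals is empty.

    import numpy as np

    # Hypothetical sketch of the reduction from change detection to estimation.
    alpha = 0.01

    def interval(samples):
        # Crude Hoeffding-style interval for a [0, 1]-bounded mean (stand-in for a CS).
        n = len(samples)
        radius = np.sqrt(np.log(2.0 / alpha) / (2 * n))
        return samples.mean() - radius, samples.mean() + radius

    rng = np.random.default_rng(1)
    stream = np.concatenate([rng.uniform(0.0, 0.4, 300),    # pre-change mean 0.2
                             rng.uniform(0.6, 1.0, 300)])   # post-change mean 0.8

    starts = []
    for t in range(1, len(stream) + 1):
        starts.append(t)                                     # start a new sequence at time t
        intervals = [interval(stream[s - 1:t]) for s in starts]
        lo = max(l for l, _ in intervals)
        hi = min(u for _, u in intervals)
        if lo > hi:                                          # intersection is empty: declare a change
            print("change declared at time", t)
            break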

Hierarchical matrices approximate a given matrix by a decomposition into low-rank submatrices that can be handled efficiently in factorized form. $\mathcal{H}^2$-matrices refine this representation following the ideas of fast multipole methods in order to achieve linear, i.e., optimal complexity for a variety of important algorithms. The matrix multiplication, a key component of many more advanced numerical algorithms, has so far proven tricky: the only linear-time algorithms known to date either require the very special structure of HSS-matrices or need to know a suitable basis for all submatrices in advance. In this article, a new and fairly general algorithm for multiplying $\mathcal{H}^2$-matrices in linear complexity with adaptively constructed bases is presented. The algorithm consists of two phases: first an intermediate representation with a generalized block structure is constructed, then this representation is re-compressed to match the structure prescribed by the application. The complexity and accuracy are analysed, and numerical experiments indicate that the new algorithm can indeed be significantly faster than previous attempts.
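
The full algorithm is well beyond a snippet, but two of the ingredients it relies on can be shown in isolation (a hypothetical numpy illustration, not the paper's implementation): multiplying two blocks stored in factorized low-rank form without ever forming dense matrices, and re-compressing the product to a prescribed rank.

    import numpy as np

    def lowrank_multiply(UA, VA, UB, VB):
        """(UA VA^T)(UB VB^T) = UA (VA^T UB) VB^T, returned again as a factorized pair."""
        core = VA.T @ UB                  # small coupling matrix of size rank_A x rank_B
        return UA @ core, VB

    def recompress(U, V, k):
        """Truncate U V^T to rank k via QR factors and an SVD of a small matrix."""
        QU, RU = np.linalg.qr(U)
        QV, RV = np.linalg.qr(V)
        W, s, Zt = np.linalg.svd(RU @ RV.T)
        return QU @ W[:, :k] * s[:k], QV @ Zt[:k].T

    rng = np.random.default_rng(0)
    UA, VA = rng.normal(size=(100, 5)), rng.normal(size=(80, 5))    # A is 100 x 80, rank 5
    UB, VB = rng.normal(size=(80, 4)), rng.normal(size=(60, 4))     # B is 80 x 60, rank 4
    U, V = lowrank_multiply(UA, VA, UB, VB)
    Uk, Vk = recompress(U, V, k=3)
    print("recompression error:", np.linalg.norm(UA @ VA.T @ UB @ VB.T - Uk @ Vk.T))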

We present a general theory to quantify the uncertainty from imposing structural assumptions on the second-order structure of nonstationary Hilbert space-valued processes, which can be measured via functionals of time-dependent spectral density operators. The second-order dynamics are well known to be elements of the space of trace-class operators, a Banach space of type 1 and cotype 2, which makes the development of statistical inference tools more challenging. Part of our contribution is to obtain a weak invariance principle as well as concentration inequalities for (functionals of) the sequential time-varying spectral density operator. In addition, we introduce deviation measures in the nonstationary context and derive estimators that are asymptotically pivotal. We then apply this framework and propose statistical methodology to investigate the validity of structural assumptions for nonstationary response surface data, such as low-rank assumptions in the context of time-varying dynamic fPCA and principal separable component analysis, deviations from stationarity with respect to the square-root distance, and deviations from zero functional canonical coherency.

A general a posteriori error analysis applies to five lowest-order finite element methods for two fourth-order semi-linear problems with trilinear non-linearity and a general source. A quasi-optimal smoother extends the source term to the discrete trial space and, more importantly, modifies the trilinear term in the stream-function vorticity formulation of the incompressible 2D Navier-Stokes and the von Kármán equations. This enables the first efficient and reliable a posteriori error estimates for the 2D Navier-Stokes equations in the stream-function vorticity formulation for Morley, two discontinuous Galerkin, $C^0$ interior penalty, and WOPSIP discretizations with piecewise quadratic polynomials.

While text-conditional 3D object generation and manipulation have seen rapid progress, the evaluation of coherence between generated 3D shapes and input textual descriptions lacks a clear benchmark. The reason is twofold: a) the low quality of the textual descriptions in the only publicly available dataset of text-shape pairs; b) the limited effectiveness of the metrics used to quantitatively assess such coherence. In this paper, we propose a comprehensive solution that addresses both weaknesses. Firstly, we employ large language models to automatically refine the textual descriptions associated with shapes. Secondly, we propose a quantitative metric to assess text-to-shape coherence based on cross-attention mechanisms. To validate our approach, we conduct a user study and compare our metric quantitatively with existing ones. The refined dataset, the new metric and a set of text-shape pairs validated by the user study comprise a novel, fine-grained benchmark that we publicly release to foster research on text-to-shape coherence of text-conditioned 3D generative models. Benchmark available at //cvlab-unibo.github.io/CrossCoherence-Web/.

Unveiling the underlying governing equations of nonlinear dynamic systems remains a significant challenge, especially in the presence of noisy observations and without prior knowledge. This study proposes R-DISCOVER, a framework designed to robustly uncover open-form partial differential equations (PDEs) from limited and noisy data. The framework operates through two alternating update processes: discovering and embedding. The discovering phase employs symbolic representation and a reinforcement learning (RL)-guided hybrid PDE generator to efficiently produce diverse open-form PDEs with tree structures. A neural network-based predictive model fits the system response and serves as the reward evaluator for the generated PDEs. PDEs with superior fits are used to iteratively optimize the generator via the RL method, and the best-performing PDE is selected by a parameter-free stability metric. The embedding phase integrates the initially identified PDE from the discovering process as a physical constraint into the predictive model for robust training. The traversal of PDE trees automates the construction of the computational graph and the embedding process without human intervention. Numerical experiments demonstrate our framework's capability to uncover governing equations of nonlinear dynamic systems from limited and highly noisy data, outperforming other physics-informed neural network-based discovery methods. This work opens new potential for exploring real-world systems with limited understanding.
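
As a toy, hypothetical illustration of the reward step only (synthetic data; finite differences stand in for the neural predictive model, and the candidate terms are hand-picked rather than generated): each candidate right-hand side is fit to the observed dynamics by least squares and scored by its residual.

    import numpy as np

    # Toy reward evaluation for candidate PDEs u_t = c * term(u); the candidate
    # with the best least-squares fit to the observed dynamics gets the highest reward.
    x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
    dx, dt = x[1] - x[0], 1e-3
    u0 = np.sin(x)
    u_xx = np.gradient(np.gradient(u0, dx), dx)
    u1 = u0 + dt * 0.5 * u_xx                         # data generated by u_t = 0.5 u_xx
    u_t = (u1 - u0) / dt

    candidates = {
        "u_xx": u_xx,
        "u * u_x": u0 * np.gradient(u0, dx),
        "u": u0,
    }
    for name, term in candidates.items():
        coef, residual = np.linalg.lstsq(term[:, None], u_t, rcond=None)[:2]
        reward = -residual[0]                         # higher reward = better fit
        print(f"u_t = {coef[0]:+.3f} * {name}: reward {reward:.3e}")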

Tens of thousands of simultaneous hypothesis tests are routinely performed in genomic studies to identify differentially expressed genes. However, due to unmeasured confounders, many standard statistical approaches may be substantially biased. This paper investigates the large-scale hypothesis testing problem for multivariate generalized linear models in the presence of confounding effects. Under arbitrary confounding mechanisms, we propose a unified statistical estimation and inference framework that harnesses orthogonal structures and integrates linear projections into three key stages. It first leverages the multivariate responses to separate marginal and uncorrelated confounding effects, recovering the column space of the confounding coefficients. Subsequently, latent factors and primary effects are jointly estimated, using $\ell_1$-regularization for sparsity while imposing orthogonality on the confounding coefficients. Finally, we incorporate projected and weighted bias-correction steps for hypothesis testing. Theoretically, we establish identification conditions and non-asymptotic error bounds for the various effects. We show effective Type-I error control of asymptotic $z$-tests as sample and response sizes approach infinity. Numerical experiments demonstrate that the proposed method controls the false discovery rate via the Benjamini-Hochberg procedure and is more powerful than alternative methods. By comparing single-cell RNA-seq counts from two groups of samples, we demonstrate the suitability of adjusting for confounding effects when significant covariates are absent from the model.
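
As a small, self-contained illustration of the final testing stage only (synthetic z-statistics; the estimation stages of the paper are not reproduced): two-sided p-values are formed from bias-corrected z-statistics and passed to the Benjamini-Hochberg procedure at a target FDR level q.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    z = np.concatenate([rng.normal(0, 1, 900),        # null hypotheses
                        rng.normal(4, 1, 100)])       # signals
    pvals = 2 * norm.sf(np.abs(z))                    # two-sided p-values from z-statistics

    def benjamini_hochberg(pvals, q=0.05):
        """Reject the hypotheses whose p-values fall under the BH step-up threshold."""
        m = len(pvals)
        order = np.argsort(pvals)
        below = np.nonzero(pvals[order] <= q * np.arange(1, m + 1) / m)[0]
        k = below[-1] + 1 if below.size else 0        # largest i with p_(i) <= q i / m
        rejected = np.zeros(m, dtype=bool)
        rejected[order[:k]] = True
        return rejected

    rej = benjamini_hochberg(pvals, q=0.05)
    print("rejections:", rej.sum(), "| false rejections among the 900 nulls:", rej[:900].sum())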
