
In this article, we propose an interval constraint programming method for globally solving catalog-based categorical optimization problems. It supports catalogs of arbitrary size and properties of arbitrary dimension, and does not require any modeling effort from the user. A novel catalog-based contractor (or filtering operator) guarantees consistency between the categorical properties and the existing catalog items. This results in an intuitive and generic approach that is exact, rigorous (robust to roundoff errors), and can be easily implemented in an off-the-shelf interval-based continuous solver that interleaves branching and constraint propagation. We demonstrate the validity of the approach on a numerical problem in which a categorical variable is described by a two-dimensional property space. A Julia prototype is available as open-source software under the MIT license at //github.com/cvanaret/CateGOrical.jl.
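
To make the catalog-based contractor concrete, here is a minimal Python sketch (not the CateGOrical.jl implementation) of the filtering step: given an interval box over the property space, it discards catalog items that are inconsistent with the box and contracts the box to the interval hull of the remaining items. The two-property catalog below is purely hypothetical.

from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def contains(self, x: float) -> bool:
        return self.lo <= x <= self.hi

def catalog_contractor(box, catalog):
    """Contract `box` (one Interval per property) with respect to `catalog`
    (a list of property tuples). Returns the contracted box, or None if no
    catalog item is consistent with the box (an empty domain, which would
    trigger backtracking in the solver)."""
    feasible = [item for item in catalog
                if all(iv.contains(p) for iv, p in zip(box, item))]
    if not feasible:
        return None
    return [Interval(min(item[k] for item in feasible),
                     max(item[k] for item in feasible))
            for k in range(len(box))]

# Hypothetical catalog of items described by two properties.
catalog = [(1.0, 10.0), (2.5, 7.0), (4.0, 3.5), (6.0, 1.0)]
box = [Interval(2.0, 5.0), Interval(0.0, 8.0)]
print(catalog_contractor(box, catalog))  # hull of the two consistent items: [2.5, 4.0] x [3.5, 7.0]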

Related content

Several mixed-effects models for longitudinal data have been proposed to accommodate the non-linearity of late-life cognitive trajectories and assess the putative influence of covariates on it. No prior research provides a side-by-side examination of these models to offer guidance on their proper application and interpretation. In this work, we examined five statistical approaches previously used to answer research questions related to non-linear changes in cognitive aging: the linear mixed model (LMM) with a quadratic term, the LMM with splines, the functional mixed model, the piecewise linear mixed model, and the sigmoidal mixed model. We first describe the models theoretically. Next, using data from two prospective cohorts with annual cognitive testing, we compared the interpretation of the models by investigating the association of education with cognitive change before death. Lastly, we performed a simulation study to empirically evaluate the models and provide practical recommendations. Except for the LMM with a quadratic term, the fit of all models was generally adequate to capture the non-linearity of cognitive change, and the models were relatively robust. Although spline-based models have no interpretable non-linearity parameters, their convergence was easier to achieve, and they allow graphical interpretation. In contrast, piecewise and sigmoidal models, with interpretable non-linear parameters, may require more data to achieve convergence.
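
As a minimal illustration of the first model in this comparison (the LMM with a quadratic term), the following Python sketch fits random intercepts and slopes with statsmodels on synthetic data; the variable names (cog, time, educ, id) and the simulated cohort are placeholders, not the study data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_visits = 200, 8
ids = np.repeat(np.arange(n_subj), n_visits)
time = np.tile(np.arange(n_visits, dtype=float), n_subj)   # e.g., years of follow-up
educ = np.repeat(rng.integers(8, 21, n_subj), n_visits)    # years of education
u = np.repeat(rng.normal(0, 0.5, n_subj), n_visits)        # subject-level random intercepts
cog = -0.05 * time - 0.02 * time**2 + 0.03 * educ + u \
      + rng.normal(0, 0.3, n_subj * n_visits)
df = pd.DataFrame(dict(id=ids, time=time, educ=educ, cog=cog))

# Quadratic time trend; education shifts the level and the linear slope.
# Random intercept and random slope for time within subject.
model = smf.mixedlm("cog ~ time + I(time**2) + educ + educ:time",
                    df, groups=df["id"], re_formula="~time")
result = model.fit()
print(result.summary())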

This work highlights an approach for incorporating realistic uncertainties into scientific computing workflows based on finite elements, focusing on applications in computational mechanics and design optimization. We leverage Matérn-type Gaussian random fields (GRFs) generated using the SPDE method to model aleatoric uncertainties, including environmental influences, varying material properties, and geometric ambiguities. Our focus lies on delivering practical GRF realizations that accurately capture imperfections and variations, and on understanding how they impact the predictions of computational models and the topology of optimized designs. We describe a numerical algorithm based on solving a generalized SPDE to sample GRFs on arbitrary meshed domains. The algorithm leverages established techniques and integrates seamlessly with the open-source finite element library MFEM and associated scientific computing workflows, like those found in industrial and national laboratory settings. Our solver scales efficiently for large-scale problems and supports various domain types, including surfaces and embedded manifolds. We showcase its versatility through biomechanics and topology optimization applications. The flexibility and efficiency of SPDE-based GRF generation empower us to run large-scale optimization problems on 2D and 3D domains, including finding optimized designs on embedded surfaces, and to generate topologies beyond the reach of conventional techniques. Moreover, these capabilities allow us to model geometric uncertainties of reconstructed submanifolds, such as the surfaces of cerebral aneurysms. In addition to offering benefits in these specific domains, the proposed techniques transcend specific applications and generalize to arbitrary forward and backward problems in uncertainty quantification involving finite elements.
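
For orientation, the following Python sketch samples a Matérn-type GRF on a regular 2D grid by solving the discretized SPDE $(\kappa^2 - \Delta)u = \mathcal{W}$ with finite differences; it is only a stand-in for the finite element/MFEM machinery described above, with simplified boundary treatment and noise scaling.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def matern_grf_2d(n=128, kappa=10.0, seed=None):
    """Sample a Matérn-like field on an n x n grid over the unit square by
    solving (kappa^2 - Laplacian) u = scaled white noise (integer SPDE exponent)."""
    rng = np.random.default_rng(seed)
    h = 1.0 / (n - 1)
    # 1D second-difference operator (simplified treatment at the ends).
    main = -2.0 * np.ones(n); main[0] = main[-1] = -1.0
    D = sp.diags([np.ones(n - 1), main, np.ones(n - 1)], [-1, 0, 1]) / h**2
    I = sp.identity(n)
    A = kappa**2 * sp.identity(n * n) - (sp.kron(I, D) + sp.kron(D, I))
    # Discrete white noise; the 1/h factor accounts for the cell measure.
    w = rng.standard_normal(n * n) / h
    u = spla.spsolve(A.tocsc(), w)
    return u.reshape(n, n)

sample = matern_grf_2d(seed=0)
print(sample.shape, sample.std())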

A component-splitting method is proposed to improve convergence characteristics for implicit time integration of compressible multicomponent reactive flows. The characteristic decomposition of the flux Jacobian of the multicomponent Navier-Stokes equations yields a large sparse eigensystem, presenting challenges of slow convergence and high computational cost for implicit methods. To address this issue, the component-splitting method segregates the implicit operator into two parts: one for the flow equations (density/momentum/energy) and the other for the component equations. Each part's implicit operator employs flux-vector splitting based on its respective spectral radius to achieve accelerated convergence. This approach improves the computational efficiency of the implicit iteration, mitigating the quadratic increase in time cost with the number of species. Two consistency corrections are developed to reduce the error introduced by component splitting and to ensure the numerical consistency of the mass fractions. Importantly, the impact of the component-splitting method on accuracy is minimal as the residual approaches convergence. The accuracy, efficiency, and robustness of the component-splitting method are thoroughly investigated and compared with the coupled implicit scheme through several numerical cases involving thermo-chemical nonequilibrium hypersonic flows. The results demonstrate that the component-splitting method decreases the number of iteration steps required for convergence of the residual and wall heat flux, decreases the computation time per iteration step, and diminishes the residual to a lower magnitude. The acceleration efficiency improves with increasing CFL number and number of species.
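
The following Python fragment is only a schematic sketch of the block structure behind such a splitting (it is not a CFD solver): the state is partitioned into flow variables (density/momentum/energy) and species variables, and one linearized backward-Euler update solves each block implicitly with its own approximate Jacobian instead of the full coupled system; the function and argument names are hypothetical.

import numpy as np

def split_implicit_step(q_f, q_s, R_f, R_s, J_f, J_s, dt):
    """One linearized backward-Euler update with the implicit operator
    segregated into a flow block and a species block.
    R_f, R_s: residual functions; J_f, J_s: per-block Jacobian functions."""
    nf, ns = q_f.size, q_s.size
    # Flow block: (I/dt - J_f) dq_f = R_f(q_f, q_s)
    A_f = np.eye(nf) / dt - J_f(q_f, q_s)
    dq_f = np.linalg.solve(A_f, R_f(q_f, q_s))
    # Species block, linearized about the already-updated flow variables:
    A_s = np.eye(ns) / dt - J_s(q_f + dq_f, q_s)
    dq_s = np.linalg.solve(A_s, R_s(q_f + dq_f, q_s))
    return q_f + dq_f, q_s + dq_s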

During the evolution of large models, performance evaluation is necessary to assess their capabilities and ensure safety before practical application. However, current model evaluations mainly rely on specific tasks and datasets, lacking a unified framework for assessing the multidimensional intelligence of large models. In this perspective, we advocate for a comprehensive framework of cognitive science-inspired artificial general intelligence (AGI) tests, aimed at fulfilling the testing needs of large models with enhanced capabilities. The cognitive science-inspired AGI tests encompass the full spectrum of intelligence facets, including crystallized intelligence, fluid intelligence, social intelligence, and embodied intelligence. To assess the multidimensional intelligence of large models, the AGI tests consist of a battery of well-designed cognitive tests adapted from human intelligence tests, which are then naturally encapsulated in an immersive virtual community. We propose increasing the complexity of AGI testing tasks commensurate with advancements in large models, and we emphasize the necessity of interpreting test results to avoid false negatives and false positives. We believe that cognitive science-inspired AGI tests will effectively guide the targeted improvement of large models in specific dimensions of intelligence and accelerate the integration of large models into human society.

This study addresses a class of mixed-integer linear programming (MILP) problems that involve uncertainty in the objective function parameters. The parameters are assumed to form a random vector whose probability distribution can only be observed through a finite training data set. Unlike most related studies in the literature, we also consider uncertainty in the underlying data set. The data uncertainty is described by a set of linear constraints for each random sample, and the uncertainty in the distribution (for a fixed realization of the data) is defined using a type-1 Wasserstein ball centered at the empirical distribution of the data. The overall problem is formulated as a three-level distributionally robust optimization (DRO) problem. First, we prove that the three-level problem admits a single-level MILP reformulation if the class of loss functions is restricted to biaffine functions. Second, it turns out that for several particular forms of data uncertainty, the outlined problem can be solved reasonably fast by leveraging the nominal MILP problem. Finally, we conduct a computational study in which the out-of-sample performance of our model and the computational complexity of the proposed MILP reformulation are explored numerically for several application domains.
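
For orientation, the worst-case expectation in such models typically follows the standard type-1 Wasserstein DRO template below (a generic form, not the paper's exact three-level formulation, which additionally nests the data uncertainty described by linear constraints on each sample):

$\min_{x \in X} \; \sup_{\mathbb{Q}:\, W_1(\mathbb{Q}, \widehat{\mathbb{P}}_N) \le \varepsilon} \; \mathbb{E}_{\mathbb{Q}}\big[\ell(x, \xi)\big], \qquad \widehat{\mathbb{P}}_N = \frac{1}{N}\sum_{i=1}^{N} \delta_{\widehat{\xi}_i},$

where $\widehat{\xi}_1, \ldots, \widehat{\xi}_N$ are the training samples, $\varepsilon$ is the Wasserstein radius, and $\ell$ is the (biaffine) loss.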

As the development of formal proofs is a time-consuming task, it is important to devise ways of sharing already written proofs to avoid wasting time redoing them. One of the challenges in this domain is to translate proofs written in proof assistants based on impredicative logics to proof assistants based on predicative logics, whenever impredicativity is not used in an essential way. In this paper we present a transformation for sharing proofs with a core predicative system supporting prenex universe polymorphism (as in Agda). It consists in trying to elaborate each term into a predicative universe-polymorphic term that is as general as possible. The use of universe polymorphism is justified by the fact that mapping each universe to a fixed one in the target theory is not sufficient in most cases. During the elaboration, we need to solve unification problems in the equational theory of universe levels. To do so, we give a complete characterization of when a single equation admits a most general unifier. This characterization is then employed in a partial algorithm that uses a constraint-postponement strategy to try to solve unification problems. The proposed translation is of course partial, but in practice it allows one to translate many proofs that do not use impredicativity in an essential way. Indeed, it was implemented in the tool Predicativize and then used to translate semi-automatically many non-trivial developments from Matita's library to Agda, including proofs of Bertrand's Postulate and Fermat's Little Theorem, which (as far as we know) were not previously available in Agda.

In this paper, we study how the Koopman operator framework can be combined with kernel methods to effectively control nonlinear dynamical systems. While kernel methods typically have large computational requirements, we show how random subspaces (the Nyström approximation) can be used to achieve substantial computational savings while preserving accuracy. Our main technical contribution is deriving theoretical guarantees on the effect of the Nyström approximation. More precisely, we study the linear quadratic regulator problem, showing that both the approximated Riccati operator and the regulator objective, for the associated solution of the optimal control problem, converge at the rate $m^{-1/2}$, where $m$ is the random subspace size. Theoretical findings are complemented by numerical experiments corroborating our results.
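
To make the random-subspace idea concrete, here is a minimal Python sketch of the Nyström approximation of a kernel matrix from m landmark columns; the RBF kernel and uniform landmark sampling are illustrative choices, not necessarily those used in the paper.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def nystrom(X, m, gamma=1.0, seed=None):
    """Return a rank-m factor Z with K approximately equal to Z @ Z.T."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)     # random subspace (landmarks)
    K_nm = rbf_kernel(X, X[idx], gamma)                 # n x m
    K_mm = rbf_kernel(X[idx], X[idx], gamma)            # m x m
    # Symmetric pseudo-inverse square root of K_mm.
    w, V = np.linalg.eigh(K_mm)
    w = np.clip(w, 1e-12, None)
    K_mm_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    return K_nm @ K_mm_inv_sqrt                         # n x m factor

X = np.random.default_rng(0).standard_normal((500, 3))
Z = nystrom(X, m=50, seed=1)
K = rbf_kernel(X, X)
print("relative approximation error:", np.linalg.norm(K - Z @ Z.T) / np.linalg.norm(K))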

We introduce a novel quantum programming language featuring higher-order programs and quantum control flow which ensures that all qubit transformations are unitary. Our language boasts a type system guaranteeing both unitarity and polynomial-time normalization. Unitarity is achieved by using a special modality for superpositions while requiring orthogonality among superposed terms. Polynomial-time normalization is achieved using a linear-logic-based type discipline employing Barber and Plotkin duality along with a specific modality to account for potential duplications. This type discipline also guarantees that derived values have polynomial size. Our language seamlessly combines the two modalities: quantum circuit programs uphold unitarity, and all programs are evaluated in polynomial time, ensuring their feasibility.

In this paper, we investigate the properties of standard and multilevel Monte Carlo methods for the weak approximation of solutions of stochastic differential equations (SDEs) driven by an infinite-dimensional Wiener process and a Poisson random measure, with a Lipschitz payoff function. The error of the truncated-dimension randomized numerical scheme, which is determined by two parameters, i.e., the grid density $n \in \mathbb{N}_{+}$ and the truncation dimension parameter $M \in \mathbb{N}_{+}$, is of the order $n^{-1/2}+\delta(M)$, where $\delta(\cdot)$ is positive and decreasing to $0$. We derive a complexity model and prove an upper complexity bound for the multilevel Monte Carlo method, which depends on two increasing sequences of parameters for both $n$ and $M$. The complexity is measured in terms of an upper bound for the mean-squared error and compared with the complexity of the standard Monte Carlo algorithm. Results from numerical experiments as well as Python and CUDA C implementations are also reported.
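
As a toy illustration of the multilevel Monte Carlo structure (for a scalar SDE with a Lipschitz payoff, not the infinite-dimensional jump-diffusion setting of the paper, and without the truncation-dimension parameter $M$), the Python sketch below estimates the telescoping sum $\mathbb{E}[P_L] = \mathbb{E}[P_0] + \sum_{l=1}^{L} \mathbb{E}[P_l - P_{l-1}]$ with coupled fine/coarse Euler-Maruyama paths.

import numpy as np

def payoff(x):
    return max(x - 1.0, 0.0)                      # Lipschitz payoff

def sample_pair(level, rng, T=1.0, x0=1.0, mu=0.05, sigma=0.2, n0=4):
    """Euler-Maruyama payoff on the fine grid of `level` and on the coarse
    grid of `level - 1`, driven by the SAME Brownian increments (coupling)."""
    nf = n0 * 2 ** level
    dWf = rng.standard_normal(nf) * np.sqrt(T / nf)
    xf = x0
    for k in range(nf):
        xf += mu * xf * (T / nf) + sigma * xf * dWf[k]
    if level == 0:
        return payoff(xf), 0.0
    nc = nf // 2
    dWc = dWf.reshape(nc, 2).sum(axis=1)          # coarse increments from fine ones
    xc = x0
    for k in range(nc):
        xc += mu * xc * (T / nc) + sigma * xc * dWc[k]
    return payoff(xf), payoff(xc)

def mlmc(L, N_per_level, seed=0):
    """Sum the independently estimated level corrections."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for level, N in enumerate(N_per_level[: L + 1]):
        pairs = [sample_pair(level, rng) for _ in range(N)]
        est += np.mean([f - c for f, c in pairs])
    return est

print(mlmc(L=4, N_per_level=[20000, 10000, 5000, 2500, 1250]))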

This paper presents a method for the thematic agreement assessment of geospatial data products of different semantics and spatial granularities, which may be affected by spatial offsets between test and reference data. The proposed method uses a multi-scale framework that allows a probabilistic evaluation of whether thematic disagreement between datasets is induced by spatial offsets arising from the different nature of the datasets. We test our method using real-estate-derived settlement locations and remote-sensing-derived building footprint data.
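
A hedged Python sketch of the multi-scale idea (simplified to binary grid occupancy; the paper's probabilistic evaluation is richer): rasterize the two datasets at increasing cell sizes and track how agreement changes; disagreement that disappears at coarser scales is consistent with spatial offsets rather than genuine thematic differences.

import numpy as np

def agreement_by_scale(xy_test, xy_ref, cell_sizes, extent=(0.0, 1.0, 0.0, 1.0)):
    """Fraction of grid cells with matching occupancy, per cell size."""
    xmin, xmax, ymin, ymax = extent
    rates = {}
    for s in cell_sizes:
        nx = int(np.ceil((xmax - xmin) / s))
        ny = int(np.ceil((ymax - ymin) / s))
        def occupancy(xy):
            ix = np.clip(((xy[:, 0] - xmin) / s).astype(int), 0, nx - 1)
            iy = np.clip(((xy[:, 1] - ymin) / s).astype(int), 0, ny - 1)
            grid = np.zeros((nx, ny), dtype=bool)
            grid[ix, iy] = True
            return grid
        a, b = occupancy(xy_test), occupancy(xy_ref)
        rates[s] = float(np.mean(a == b))
    return rates

rng = np.random.default_rng(1)
ref = rng.random((300, 2))
test = ref + rng.normal(0, 0.01, ref.shape)       # same pattern, small spatial offsets
print(agreement_by_scale(test, ref, cell_sizes=[0.01, 0.02, 0.05, 0.1]))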
