
Adversarial robustness and generalization are both crucial properties of reliable machine learning models. In this paper, we study these properties in the context of quantum machine learning based on Lipschitz bounds. We derive tailored, parameter-dependent Lipschitz bounds for quantum models with trainable encoding, showing that the norm of the data encoding has a crucial impact on the robustness against perturbations in the input data. Further, we derive a bound on the generalization error which explicitly depends on the parameters of the data encoding. Our theoretical findings give rise to a practical strategy for training robust and generalizable quantum models by regularizing the Lipschitz bound in the cost. Further, we show that, for fixed and non-trainable encodings as frequently employed in quantum machine learning, the Lipschitz bound cannot be influenced by tuning the parameters. Thus, trainable encodings are crucial for systematically adapting robustness and generalization during training. With numerical results, we demonstrate that, indeed, Lipschitz bound regularization leads to substantially more robust and generalizable quantum models.
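The regularization strategy can be sketched with a classical Fourier-type surrogate for a quantum model with trainable encoding (the model, weights, and data below are illustrative assumptions, not the paper's circuit):

```python
import numpy as np

# Illustrative classical surrogate: trainable-encoding models realize
# truncated Fourier-type maps f(x) = sum_j c_j * cos(<w_j, x> + b_j), whose
# Lipschitz constant is bounded by L = sum_j |c_j| * ||w_j||_2 -- a bound
# that depends on the encoding weights w_j.  Adding lam * L to the training
# cost regularizes the Lipschitz bound, trading fit for robustness.

def model(X, W, b, c):
    # X: (n, d) inputs, W: (m, d) encoding weights, b: (m,) phases, c: (m,)
    return np.cos(X @ W.T + b) @ c

def lipschitz_bound(W, c):
    # |grad f(x)| <= sum_j |c_j| * ||w_j||_2 for all x
    return float(np.sum(np.abs(c) * np.linalg.norm(W, axis=1)))

def regularized_cost(X, y, W, b, c, lam):
    mse = float(np.mean((model(X, W, b, c) - y) ** 2))
    return mse + lam * lipschitz_bound(W, c)

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 2))
y = np.sin(X[:, 0])                                # toy regression target
W, b, c = rng.normal(size=(4, 2)), np.zeros(4), rng.normal(size=4)
plain = regularized_cost(X, y, W, b, c, lam=0.0)   # unregularized cost
robust = regularized_cost(X, y, W, b, c, lam=0.1)  # Lipschitz-regularized
```

For a fixed (non-trainable) encoding, `W` is constant and the penalty term cannot be shaped by training, which mirrors the paper's observation that trainable encodings are needed to adapt robustness.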

Related content

This paper proposes a new approach to fitting linear regressions for symbolic interval-valued variables, improving both the Center Method suggested by Billard and Diday in \cite{BillardDiday2000} and the Center and Range Method suggested by Lima-Neto, E.A. and De Carvalho, F.A.T. in \cite{Lima2008, Lima2010}. As in the Center Method and the Center and Range Method, the proposed methods fit the linear regression model on the midpoints and on the half-lengths of the intervals (ranges) assumed by the predictor variables in the training data set; however, these fits are performed with the shrinkage methods Ridge Regression, Lasso, and Elastic Net proposed by Tibshirani, R., Hastie, T., and Zou, H. in \cite{Tib1996, HastieZou2005}. The lower and upper bounds of the interval response (dependent) variable are predicted from their midpoints and ranges, which are estimated from the shrinkage regression models fitted on the midpoints and ranges of the interval-valued predictors. The methods presented in this document are applied to three real data sets (the Cardiological, Prostate, and US Murder interval data sets) to compare their performance and ease of interpretation against the Center Method and the Center and Range Method. For this evaluation, the root-mean-squared error and the correlation coefficient are used. In addition, the reader may apply all the methods presented herein and verify the results using the {\tt RSDA} package written in the {\tt R} language, which can be downloaded and installed directly from {\tt CRAN} \cite{Rod2014}.
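The Center and Range idea with shrinkage can be sketched as follows; this is a minimal Python illustration with closed-form ridge regression and synthetic data (the paper's actual implementation is the {\tt RSDA} package in {\tt R}, which also covers Lasso and Elastic Net):

```python
import numpy as np

# Fit one shrinkage regression on interval midpoints and another on interval
# half-lengths (ranges), then reconstruct the response interval as mid ± range.

def ridge_fit(X, y, lam):
    # beta = (Xa'Xa + lam*I)^{-1} Xa'y with an unpenalized intercept column
    Xa = np.hstack([np.ones((len(X), 1)), X])
    I = np.eye(Xa.shape[1])
    I[0, 0] = 0.0  # do not shrink the intercept
    return np.linalg.solve(Xa.T @ Xa + lam * I, Xa.T @ y)

def ridge_predict(X, beta):
    return np.hstack([np.ones((len(X), 1)), X]) @ beta

rng = np.random.default_rng(1)
# toy interval-valued predictors: midpoints and half-ranges
Xm = rng.normal(size=(40, 2))           # predictor midpoints
Xr = np.abs(rng.normal(size=(40, 2)))   # predictor half-ranges
ym = Xm @ np.array([1.0, -0.5]) + 0.1 * rng.normal(size=40)
yr = Xr @ np.array([0.5, 0.5]) + 0.05 * np.abs(rng.normal(size=40))

bm = ridge_fit(Xm, ym, lam=1.0)                    # midpoint model
br = ridge_fit(Xr, yr, lam=1.0)                    # range model
mid = ridge_predict(Xm, bm)
half = np.maximum(ridge_predict(Xr, br), 0.0)      # ranges must be nonnegative
lower, upper = mid - half, mid + half              # predicted interval bounds
```

Clipping the predicted range at zero guarantees well-formed intervals (`upper >= lower`), a practical concern the range model must handle.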

Optimal model reduction for large-scale linear dynamical systems is studied. In contrast to most existing works, the systems under consideration are not required to be stable, neither in discrete nor in continuous time. As a consequence, the underlying rational transfer functions are allowed to have poles in general domains in the complex plane. In particular, this covers the case of specific conservative partial differential equations such as the linear Schr\"odinger and the undamped linear wave equation with spectra on the imaginary axis. By an appropriate modification of the classical continuous time Hardy space $\mathcal{H}_2$, a new $\mathcal{H}_2$ like optimal model reduction problem is introduced and first order optimality conditions are derived. As in the classical $\mathcal{H}_2$ case, these conditions exhibit a rational Hermite interpolation structure for which an iterative model reduction algorithm is proposed. Numerical examples demonstrate the effectiveness of the new method.
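For reference, the classical continuous-time $\mathcal{H}_2$ first-order conditions take the Hermite interpolation form below (the paper derives the analogous conditions for its modified $\mathcal{H}_2$-like space on general domains); for a reduced-order model with simple poles $\hat{\lambda}_i$,

```latex
\hat{H}(s) = \sum_{i=1}^{r} \frac{\phi_i}{s - \hat{\lambda}_i}, \qquad
H(-\hat{\lambda}_i) = \hat{H}(-\hat{\lambda}_i), \quad
H'(-\hat{\lambda}_i) = \hat{H}'(-\hat{\lambda}_i), \quad i = 1, \dots, r.
```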

In this paper, we investigate tumor boundary instability by employing both analytical and numerical techniques to validate previous results and extend the analytical findings presented in a prior study by Feng et al. (2023). Building upon the insights derived from the analytical reconstruction of key results in the aforementioned work in one dimension (1D) and two dimensions (2D), we extend our analysis to three dimensions (3D). Specifically, we focus on the determination of boundary instability using perturbation and asymptotic analysis along with spherical harmonics. Additionally, we validate our analytical results in a two-dimensional framework by implementing the Alternating Direction Implicit (ADI) method, as detailed in Witelski and Bowen (2003). Our primary focus is on ensuring that the numerically simulated propagation speed aligns accurately with the analytical findings. Furthermore, we match the simulated boundary stability with the analytical predictions derived from the evolution function, which is defined in subsequent sections of our paper. This alignment is essential for accurately determining the stability or instability of tumor boundaries.

In this work, we provide an algorithm to simulate from a (multivariate) characteristic function that is accessible only in black-box form. We construct a generative neural network whose loss function exploits a specific representation of the Maximum Mean Discrepancy metric to directly incorporate the targeted characteristic function. The construction is universal in the sense that it is independent of the dimension and requires no assumptions on the given characteristic function. Furthermore, finite-sample guarantees on the approximation quality in terms of the Maximum Mean Discrepancy metric are derived. The method is illustrated in a short simulation study.
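The loss idea can be sketched in a few lines (an illustration of the underlying representation, not the paper's network): for a Gaussian kernel, the squared Maximum Mean Discrepancy between the generated samples and the target law equals a Gaussian-weighted $L^2$ distance between characteristic functions, so it can be estimated by Monte Carlo over frequencies using only black-box evaluations of the target characteristic function:

```python
import numpy as np

def empirical_cf(samples, freqs):
    # phi_hat(t) = mean_j exp(i <t, x_j>), for each row t of freqs
    return np.exp(1j * samples @ freqs.T).mean(axis=0)

def cf_mmd_loss(samples, phi, n_freqs=512, seed=0):
    # Gaussian-kernel MMD^2 estimate: average |phi_hat(t) - phi(t)|^2
    # over frequencies t ~ N(0, I), the kernel's spectral measure
    freqs = np.random.default_rng(seed).normal(size=(n_freqs, samples.shape[1]))
    return float(np.mean(np.abs(empirical_cf(samples, freqs) - phi(freqs)) ** 2))

# black-box target: standard bivariate normal, phi(t) = exp(-||t||^2 / 2)
phi_normal = lambda T: np.exp(-0.5 * np.sum(T ** 2, axis=1))

rng = np.random.default_rng(42)
good = cf_mmd_loss(rng.normal(size=(2000, 2)), phi_normal)          # matched law
bad = cf_mmd_loss(rng.normal(loc=3.0, size=(2000, 2)), phi_normal)  # shifted law
```

A generator network would be trained by minimizing `cf_mmd_loss` on its own output samples; the loss vanishes (up to sampling error) exactly when the generated law matches the target.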

Modelling of systems where the full system information is unknown is a frequently encountered problem in various engineering and industrial applications, as it is either impossible to account for all the complex physics involved, or simpler models are chosen to keep within the limits of the available resources. Recent advances in grey-box modelling, such as deep hidden physics models, address this space by combining data and physics. However, for most real-life applications, model generalizability is a key issue, as retraining a model for every small change in system inputs and parameters, or for a modification in domain configuration, can render the model economically unviable. In this work we present a novel enhancement to the idea of hidden physics models which generalizes across changes in system inputs, parameters, and domains. We also show that this approach holds promise for system discovery, helping to learn the hidden physics for the changed system inputs, parameters, and domain configuration.

In this paper, we define and study variants of several complexity classes of decision problems that are defined via some criteria on the number of accepting paths of an NPTM. In these variants, we modify the acceptance criteria so that they concern the total number of computation paths instead of the number of accepting ones. This direction reflects the relationship between the counting classes #P and TotP, which are the classes of functions that count the number of accepting paths and the total number of paths of NPTMs, respectively. The former is the well-studied class of counting versions of NP problems introduced by Valiant (1979). The latter contains all self-reducible counting problems in #P whose decision version is in P, among them prominent #P-complete problems such as Non-negative Permanent, #PerfMatch, and #DNF-Sat, thus playing a significant role in the study of approximable counting problems. We show that almost all classes introduced in this work coincide with their `#accepting paths'-definable counterparts, thus providing an alternative model of computation for them. Moreover, for each of these classes, we present a novel family of complete problems, which are defined via TotP-complete problems. This way, we show that all the aforementioned classes have complete problems that are defined via counting problems whose existence version is in P, in contrast to the standard way of obtaining completeness results via counting versions of NP-complete problems. To the best of our knowledge, prior to this work, such results were known only for parity-P and C=P.

In this work we design and analyse a Discrete de Rham (DDR) method for the incompressible Navier-Stokes equations. Our focus is, more specifically, on the SDDR variant, where a reduction in the number of unknowns is obtained using serendipity techniques. The main features of the DDR approach are the support of general meshes and arbitrary approximation orders. The method we develop is based on the curl-curl formulation of the momentum equation and, through compatibility with the Helmholtz-Hodge decomposition, delivers pressure-robust error estimates for the velocity. It also enables non-standard boundary conditions, such as imposing the value of the pressure on the boundary. In-depth numerical validation on a complete panel of tests including general polyhedral meshes is provided. The paper also contains an appendix where bounds on DDR potential reconstructions and differential operators are proved in the more general framework of Polytopal Exterior Calculus.

In this paper we propose to quantify execution time variability of programs using statistical dispersion parameters. We show how the execution time variability can be exploited in mixed criticality real-time systems. We propose a heuristic to compute the execution time budget to be allocated to each low criticality real-time task according to its execution time variability. We show using experiments and simulations that the proposed heuristic reduces the probability of exceeding the allocated budget compared to algorithms which do not take into account the execution time variability parameter.
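One plausible instantiation of such a dispersion-aware budget (illustrative only; the paper's exact heuristic may differ) allocates each low-criticality task a budget of median plus a multiple of the interquartile range of its observed execution times, so higher-variability tasks receive proportionally more slack:

```python
import numpy as np

def budget(samples, k=1.5):
    # dispersion-aware execution time budget: median + k * IQR
    q1, med, q3 = np.percentile(samples, [25, 50, 75])
    return med + k * (q3 - q1)

rng = np.random.default_rng(7)
stable = rng.normal(10.0, 0.1, size=1000)    # low-variability task
jittery = rng.normal(10.0, 2.0, size=1000)   # high-variability task
b_stable, b_jittery = budget(stable), budget(jittery)
overrun = float(np.mean(jittery > b_jittery))  # empirical overrun probability
```

A fixed-margin budget ignoring dispersion would either waste capacity on the stable task or let the jittery task overrun frequently; scaling the margin by the IQR keeps the overrun probability low for both.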

In this paper, we propose a new efficient method for calculating the Gerber-Shiu discounted penalty function. The Gerber-Shiu function generally satisfies a class of integro-differential equations. We introduce physics-informed neural networks (PINNs), which embed a differential equation into the loss of the neural network using automatic differentiation. In addition, a PINN offers more freedom in setting boundary conditions and does not rely on the determination of an initial value, which allows us to compute more general Gerber-Shiu functions. Numerical examples are provided to illustrate the very good performance of our approximation.
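The core PINN principle of embedding the equation into the loss can be illustrated on a stand-in equation (not the actual Gerber-Shiu integro-differential equation): a candidate function is scored by the residual of $y'(u) + y(u) = 0$ plus the boundary condition $y(0) = 1$. Central differences stand in here for the automatic differentiation a real PINN uses, and a neural network would play the role of the candidate function `f`:

```python
import numpy as np

def pinn_loss(f, u, h=1e-4):
    dy = (f(u + h) - f(u - h)) / (2.0 * h)    # y'(u) via central differences
    residual = dy + f(u)                      # residual of y' + y = 0
    boundary = f(np.array([0.0]))[0] - 1.0    # enforce y(0) = 1
    return float(np.mean(residual ** 2) + boundary ** 2)

u = np.linspace(0.0, 3.0, 50)                 # collocation points
exact = lambda v: np.exp(-v)                  # true solution y(u) = e^{-u}
loss_exact = pinn_loss(exact, u)              # near zero: equation satisfied
loss_bad = pinn_loss(lambda v: np.cos(v), u)  # large: equation violated
```

Training drives the network toward the function that zeroes this loss; because the boundary term is just another summand, conditions can be added or replaced freely, which is the flexibility the PINN approach exploits for general Gerber-Shiu functions.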

We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over the existing information-theoretic bounds, are applicable to a wider range of algorithms, and solve two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.
