
High-dimensional Partial Differential Equations (PDEs) are a popular mathematical modelling tool, with applications ranging from finance to computational chemistry. However, standard numerical techniques for solving these PDEs are typically affected by the curse of dimensionality. In this work, we tackle this challenge while focusing on stationary diffusion equations defined over a high-dimensional domain with periodic boundary conditions. Inspired by recent progress in sparse function approximation in high dimensions, we propose a new method called compressive Fourier collocation. Combining ideas from compressive sensing and spectral collocation, our method replaces the use of structured collocation grids with Monte Carlo sampling and employs sparse recovery techniques, such as orthogonal matching pursuit and $\ell^1$ minimization, to approximate the Fourier coefficients of the PDE solution. We conduct a rigorous theoretical analysis showing that the approximation error of the proposed method is comparable with the best $s$-term approximation (with respect to the Fourier basis) to the solution. Using the recently introduced framework of random sampling in bounded Riesz systems, our analysis shows that the compressive Fourier collocation method mitigates the curse of dimensionality with respect to the number of collocation points under sufficient conditions on the regularity of the diffusion coefficient. We also present numerical experiments that illustrate the accuracy and stability of the method for the approximation of sparse and compressible solutions.
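To make the sparse-recovery step concrete, here is a minimal sketch (not the paper's implementation) of orthogonal matching pursuit recovering a sparse coefficient vector from a random-sample Fourier system; the dimensions, sampling points, and support are illustrative choices.

```python
import numpy as np

def omp(A, b, s):
    """Orthogonal matching pursuit: greedily recover an s-sparse
    coefficient vector c with A @ c ~ b."""
    m, n = A.shape
    support, residual = [], b.copy()
    c = np.zeros(n, dtype=complex)
    for _ in range(s):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        c[:] = 0
        c[support] = coef
        residual = b - A @ c
    return c

rng = np.random.default_rng(0)
n, m, s = 64, 40, 3                      # ambient dimension, samples, sparsity
x = rng.uniform(0.0, 1.0, m)             # Monte Carlo "collocation" points
A = np.exp(2j * np.pi * np.outer(x, np.arange(n))) / np.sqrt(m)
c_true = np.zeros(n, dtype=complex)
c_true[[5, 17, 40]] = [2.0, -1.0, 0.5]   # sparse Fourier coefficients
c_hat = omp(A, A @ c_true, s)            # near-exact recovery from m < n samples
```

In the compressive collocation setting, the rows of `A` would instead come from applying the differential operator to the Fourier basis functions at the random collocation points.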

Related content

We propose a data-driven way to reduce the noise of covariance matrices of nonstationary systems. In the case of stationary systems, asymptotic approaches have been proved to converge to the optimal solutions. Such methods produce eigenvalues that are highly dependent on the inputs, as common sense would suggest. Our approach instead uses a set of eigenvalues totally independent of the inputs, which encode the long-term averaging of the influence of the future on present eigenvalues. Such an influence can be the predominant factor in nonstationary systems. Using real and synthetic data, we show that our data-driven method outperforms optimal methods designed for stationary systems for the filtering of both the covariance matrix and its inverse, as illustrated by financial portfolio variance minimization, which makes our method generically relevant to many problems of multivariate inference.
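For orientation, a classical stationary-setting baseline of the kind the abstract compares against is eigenvalue clipping: eigenvalues inside the Marchenko-Pastur noise bulk are flattened to their average. This sketch is that baseline, not the paper's data-driven method; the data shape and threshold rule are illustrative.

```python
import numpy as np

def clip_eigenvalues(returns):
    """Eigenvalue clipping of a sample covariance matrix (a standard
    stationary baseline): eigenvalues below the Marchenko-Pastur bulk
    edge are treated as noise and replaced by their average."""
    T, N = returns.shape
    C = np.cov(returns, rowvar=False)
    q = N / T
    lam_max = np.var(returns) * (1 + np.sqrt(q)) ** 2  # MP bulk edge
    vals, vecs = np.linalg.eigh(C)
    noisy = vals < lam_max
    vals_clipped = vals.copy()
    if noisy.any():
        vals_clipped[noisy] = vals[noisy].mean()  # flatten the noise bulk
    return (vecs * vals_clipped) @ vecs.T         # rebuild filtered matrix

rng = np.random.default_rng(1)
T, N = 500, 50                       # observations, assets (illustrative)
R = rng.standard_normal((T, N))      # pure-noise returns
C_filtered = clip_eigenvalues(R)     # symmetric, positive definite
```

The point of the paper is that when the system is nonstationary, such input-dependent eigenvalues are exactly what should be replaced by input-independent, forward-looking ones.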

We propose a novel automatic parameter selection strategy for variational imaging problems under Poisson noise corruption. The selection of a suitable regularization parameter, whose value is crucial in order to achieve high quality reconstructions, is known to be a particularly hard task in low photon-count regimes. In this work, we extend the so-called residual whiteness principle, originally designed for additive white noise, to Poisson data. The proposed strategy relies on the study of the whiteness property of a standardized Poisson noise process. After deriving the theoretical properties that motivate our proposal, we solve the target minimization problem with a linearized version of the alternating direction method of multipliers, which is particularly suitable in the presence of a general linear forward operator. Our strategy is extensively tested on image restoration and computed tomography reconstruction problems, and compared to the well-known discrepancy principle for Poisson noise proposed by Zanella et al. and to a nearly exact version of it previously proposed by the authors.
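The whiteness idea is that residuals of a well-regularized reconstruction should be serially uncorrelated once standardized. This toy sketch (illustrative, not the paper's estimator) standardizes Poisson samples and checks the lag-1 autocorrelation; for a correct intensity the standardized residual is approximately white.

```python
import math, random

def lag_autocorrelation(r, lag):
    """Normalized sample autocorrelation of a residual sequence."""
    n = len(r)
    mean = sum(r) / n
    num = sum((r[t] - mean) * (r[t + lag] - mean) for t in range(n - lag))
    den = sum((v - mean) ** 2 for v in r)
    return num / den

def poisson(lam):
    """Poisson sampling via Knuth's method (fine for moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

random.seed(0)
lam = 9.0                                   # assumed true intensity
y = [poisson(lam) for _ in range(5000)]
# standardized Poisson residual: zero mean, unit variance, ~white
r = [(v - lam) / math.sqrt(lam) for v in y]
rho1 = lag_autocorrelation(r, 1)            # close to 0 for white residuals
```

In the variational setting, `y` would be the observed counts and `lam` the forward model applied to the reconstruction; the regularization parameter is then tuned so that the standardized residual looks whitest.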

In this work, we introduce a Variational Multi-Scale (VMS) method for the numerical approximation of parabolic problems, where sub-grid scales are approximated from the eigenpairs of an associated elliptic operator. The abstract method is particularized to the one-dimensional advection-diffusion equations, for which the sub-grid components are exactly calculated in terms of a spectral expansion when the advection velocity is approximated by piecewise constant velocities on the grid elements. We prove error estimates which imply, in particular, that when Lagrange finite element discretisations in space are used, the spectral VMS method coincides with the exact solution of the implicit Euler semi-discretisation of the advection-diffusion problem at the Lagrange interpolation nodes. We also build a feasible method to solve time-dependent advection-diffusion problems by means of an offline/online strategy with reduced computational complexity. We perform numerical tests, in good agreement with the theoretical expectations, which show an improved accuracy with respect to several stabilised methods.

This paper focuses on waveform design for joint radar and communication systems and presents a new subset selection process to improve the communication error rate performance and global accuracy of radar sensing of the random stepped frequency permutation waveform. An optimal communication receiver based on integer programming is proposed to handle any subset of permutations, followed by a more efficient sub-optimal receiver based on the Hungarian algorithm. Considering optimum maximum likelihood detection, the block error rate is analyzed under both additive white Gaussian noise and correlated Rician fading. We propose two methods to select a permutation subset with an improved block error rate, and an efficient encoding scheme to map the information symbols to selected permutations under these subsets. From the radar perspective, the ambiguity function is analyzed with regard to the local and the global accuracy of target detection. Furthermore, a subset selection method to reduce the maximum sidelobe height is proposed by extending the properties of Costas arrays. Finally, the process of remapping the frequency tones to the symbol set used to generate permutations is introduced as a method to improve both the communication and radar performance of the selected permutation subset.
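Under AWGN, maximum-likelihood detection of the transmitted permutation reduces to picking the permutation closest in Euclidean distance to the received frequency sequence. The brute-force sketch below illustrates this for a tiny subset; the tone values and subset are illustrative placeholders, and the paper's Hungarian-algorithm receiver is the efficient way to do this search at scale.

```python
import itertools, random

def ml_detect(received, subset, tones):
    """Maximum-likelihood detection over a permutation subset under AWGN:
    choose the permutation whose tone sequence minimizes the Euclidean
    distance to the received sequence."""
    def dist(p):
        return sum((received[t] - tones[p[t]]) ** 2 for t in range(len(p)))
    return min(subset, key=dist)

random.seed(2)
M = 5
tones = [float(k) for k in range(M)]             # illustrative frequency tones
subset = list(itertools.permutations(range(M)))  # here: all M! permutations
sent = subset[37]                                # transmitted codeword
noisy = [tones[sent[t]] + random.gauss(0, 0.1) for t in range(M)]
decoded = ml_detect(noisy, subset, tones)        # recovers `sent` at this SNR
```

Restricting `subset` to a well-chosen family of permutations is precisely the subset-selection problem the paper studies, trading rate for block error rate and sidelobe behaviour.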

It was observed in \citet{gupta2009differentially} that the Set Cover problem has strong impossibility results under differential privacy. In our work, we observe that these hardness results dissolve when we turn to the Partial Set Cover problem, where we only need to cover a $\rho$-fraction of the elements in the universe, for some $\rho\in(0,1)$. We show that this relaxation enables us to avoid the impossibility results: under loose conditions on the input set system, we give differentially private algorithms which output an explicit set cover with non-trivial approximation guarantees. In particular, this is the first differentially private algorithm which outputs an explicit set cover. Using our algorithm for Partial Set Cover as a subroutine, we give a differentially private (bicriteria) approximation algorithm for a facility location problem which generalizes $k$-center/$k$-supplier with outliers. As with the Set Cover problem, no algorithm had been able to give non-trivial guarantees for $k$-center/$k$-supplier-type facility location problems, due to their high sensitivity and the impossibility results. Our algorithm shows that relaxing the covering requirement to serving only a $\rho$-fraction of the population, for $\rho\in(0,1)$, enables us to circumvent the inherent hardness. Overall, our work is an important step in tackling and understanding impossibility results in private combinatorial optimization.
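For reference, the non-private baseline that a private algorithm must approximate is the classical greedy rule for Partial Set Cover: repeatedly take the set covering the most uncovered elements until a $\rho$-fraction is covered. This sketch is that baseline only; a differentially private version would, e.g., noise the greedy selection, which is not shown here.

```python
def greedy_partial_set_cover(universe, sets, rho):
    """Classical (non-private) greedy for Partial Set Cover: pick the set
    with the largest number of uncovered elements until a rho-fraction
    of the universe is covered."""
    target = rho * len(universe)
    covered, chosen = set(), []
    while len(covered) < target:
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not (sets[best] - covered):
            break  # no set adds coverage; the target is unreachable
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

universe = set(range(10))
sets = [set(range(0, 5)), set(range(4, 8)), {8}, {9}]
chosen, covered = greedy_partial_set_cover(universe, sets, 0.8)
# covers 8 of 10 elements with two sets; the full cover would need all four
```

The intuition in the abstract is visible even here: the last few stragglers ({8} and {9}) are what make full Set Cover sensitive to a single element, and the $\rho$-relaxation lets an algorithm simply ignore them.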

The main aim of this article is to analyze a mixed finite element method for the second-order Dirichlet boundary control problem. We develop both a priori and a posteriori error analyses using the energy-space-based approach. We obtain optimal-order a priori error estimates in the energy norm and the $L^2$-norm with the help of auxiliary problems. The reliability and efficiency of the proposed a posteriori error estimator are discussed using the Helmholtz decomposition. Numerical experiments are presented to confirm the theoretical findings.

Compressed Stochastic Gradient Descent (SGD) algorithms have been recently proposed to address the communication bottleneck in distributed and decentralized optimization problems, such as those that arise in federated machine learning. Existing compressed SGD algorithms assume the use of non-adaptive step-sizes (constant or diminishing) to provide theoretical convergence guarantees. Typically, the step-sizes are fine-tuned in practice to the dataset and the learning algorithm to provide good empirical performance. Such fine-tuning might be impractical in many learning scenarios, and it is therefore of interest to study compressed SGD using adaptive step-sizes. Motivated by prior work on adaptive step-size methods for SGD to train neural networks efficiently in the uncompressed setting, we develop an adaptive step-size method for compressed SGD. In particular, we introduce a scaling technique for the descent step in compressed SGD, which we use to establish order-optimal convergence rates for convex-smooth and strongly convex-smooth objectives under an interpolation condition and for non-convex objectives under a strong growth condition. We also show through simulation examples that without this scaling, the algorithm can fail to converge. We present experimental results on deep neural networks for real-world datasets, compare the performance of our proposed algorithm with previously proposed compressed SGD methods in the literature, and demonstrate improved performance on ResNet-18, ResNet-34 and DenseNet architectures for the CIFAR-100 and CIFAR-10 datasets at various levels of compression.
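A minimal sketch of the mechanics, under stated assumptions: top-k sparsification is one common compressor, and `scale` below is a placeholder for the paper's descent-step scaling (the exact rule is not reproduced here). On a toy quadratic, even heavily compressed gradients drive the iterate to the minimizer.

```python
def top_k(grad, k):
    """Top-k sparsification: keep only the k largest-magnitude coordinates."""
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    out = [0.0] * len(grad)
    for i in idx:
        out[i] = grad[i]
    return out

def compressed_sgd_step(x, grad, lr, k, scale):
    """One compressed-SGD step; `scale` stands in for the paper's
    scaling of the descent step (illustrative, not the exact rule)."""
    g = top_k(grad, k)
    return [xi - lr * scale * gi for xi, gi in zip(x, g)]

# minimize f(x) = sum(x_i^2), gradient 2x, compressing to k=2 of 4 coordinates
x = [4.0, -3.0, 2.0, -1.0]
for _ in range(200):
    grad = [2.0 * xi for xi in x]
    x = compressed_sgd_step(x, grad, lr=0.1, k=2, scale=1.0)
# x converges to the minimizer at the origin
```

An adaptive method would replace the fixed `lr` with a data-driven rule (e.g. a stochastic line search or Polyak-type step), which is where the scaling becomes necessary for convergence.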

We consider power means of independent and identically distributed (i.i.d.) non-integrable random variables. The power mean is a homogeneous quasi-arithmetic mean, and under some conditions, several limit theorems hold for the power mean, as they do for the arithmetic mean of i.i.d. integrable random variables. We establish integrability results and a limit theorem for the variances of the power mean of i.i.d. non-integrable random variables. We also consider the behavior of the power mean as the power parameter varies. A distinctive feature of our work is that the generator of the power mean is allowed to be complex-valued, which enables us to consider the power mean of random variables supported on the whole set of real numbers. The complex-valued power mean is an unbiased, strongly consistent estimator jointly for the location and scale parameters of the Cauchy distribution.
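One concrete instance of such a complex-valued mean (a $p=-1$, i.e. harmonic-type, case; a sketch motivated by the abstract, not the paper's general construction) uses the standard Cauchy-transform identity $\mathbb{E}[1/(X+i)] = 1/(\mu + i(\sigma+1))$ for $X \sim \mathrm{Cauchy}(\mu,\sigma)$: inverting the sample mean of $1/(x_j+i)$ recovers location and scale jointly.

```python
import math, random

def cauchy_location_scale(samples):
    """Estimate (mu, sigma) of a Cauchy distribution via the complex
    harmonic-type mean: E[1/(X + i)] = 1/(mu + i*(sigma + 1)), so the
    reciprocal of the sample mean of 1/(x_j + i) converges to
    mu + i*(sigma + 1). Each summand is bounded, so the law of large
    numbers applies even though X itself is non-integrable."""
    n = len(samples)
    m = sum(1.0 / complex(x, 1.0) for x in samples) / n
    z = 1.0 / m
    return z.real, z.imag - 1.0   # (location, scale)

random.seed(3)
mu, sigma = 2.0, 3.0
# Cauchy sampling via the inverse CDF: mu + sigma * tan(pi * (U - 1/2))
xs = [mu + sigma * math.tan(math.pi * (random.random() - 0.5))
      for _ in range(200000)]
mu_hat, sigma_hat = cauchy_location_scale(xs)   # close to (2.0, 3.0)
```

The boundedness of the transformed summands is exactly what restores the limit theorems that fail for the raw arithmetic mean of Cauchy samples.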

We present a mathematical and numerical investigation of the shrinking-dimer saddle dynamics for finding any-index saddle points in the solution landscape. Due to the dimer approximation of the Hessian in saddle dynamics, the local Lipschitz assumptions, and the strong nonlinearity of the saddle dynamics, the delicate analysis remains challenging, for instance the boundedness of the solutions and the dimer error. We address these issues by bounding the solutions under proper relaxation parameters, based on which we prove error estimates for the numerical discretization of the shrinking-dimer saddle dynamics by matching the dimer length and the time step size. Furthermore, Richardson extrapolation is employed to obtain a high-order approximation. The inherent reason for requiring the matching of the dimer length and the time step size is that the former serves as a different mesh size from the latter; the proposed numerical method is thus close to a fully discrete numerical scheme of some space-time PDE model, with the Hessian in the saddle dynamics and its dimer approximation serving as a "spatial operator" and its discretization, respectively, which in turn indicates the PDE nature of the saddle dynamics.
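The dimer approximation referenced above replaces the Hessian-vector product by a central difference of gradients along the dimer direction, with the dimer length $l$ acting like a mesh size. A minimal sketch (the energy and vectors are illustrative, not from the paper):

```python
def dimer_hessian_vector(grad, x, v, l):
    """Dimer (central-difference) approximation of the Hessian-vector
    product: H(x) v ~ (grad(x + l*v) - grad(x - l*v)) / (2*l),
    where l is the dimer length. The error is O(l^2) for smooth grad,
    which is why l must be matched with the time step size."""
    gp = grad([xi + l * vi for xi, vi in zip(x, v)])
    gm = grad([xi - l * vi for xi, vi in zip(x, v)])
    return [(a - b) / (2.0 * l) for a, b in zip(gp, gm)]

# toy energy F(x) = x0^2 + 3*x1^2 with gradient (2*x0, 6*x1), Hessian diag(2, 6)
grad = lambda x: [2.0 * x[0], 6.0 * x[1]]
hv = dimer_hessian_vector(grad, [1.0, 1.0], [1.0, 2.0], l=1e-3)
# for a quadratic energy the central difference is exact up to rounding:
# hv is close to [2.0, 12.0]
```

In the saddle dynamics, this approximate operator replaces the exact Hessian in the rotation of the dimer directions, which is the source of the dimer error the abstract analyzes.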

The conjoining of dynamical systems and deep learning has become a topic of great interest. In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin. Traditional parameterised differential equations are a special case. Many popular neural network architectures, such as residual networks and recurrent networks, are discretisations of differential equations. NDEs are suitable for tackling generative problems, dynamical systems, and time series (particularly in physics, finance, ...) and are thus of interest to both modern machine learning and traditional mathematical modelling. NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides. This doctoral thesis provides an in-depth survey of the field. Topics include: neural ordinary differential equations (e.g. for hybrid neural/mechanistic modelling of physical systems); neural controlled differential equations (e.g. for learning functions of irregular time series); and neural stochastic differential equations (e.g. to produce generative models capable of representing complex stochastic dynamics, or sampling from complex high-dimensional distributions). Further topics include: numerical methods for NDEs (e.g. reversible differential equation solvers, backpropagation through differential equations, Brownian reconstruction); symbolic regression for dynamical systems (e.g. via regularised evolution); and deep implicit models (e.g. deep equilibrium models, differentiable optimisation). We anticipate this thesis will be of interest to anyone interested in the marriage of deep learning with dynamical systems, and hope it will provide a useful reference for the current state of the art.
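The residual-network-as-discretisation observation can be shown in a few lines: one explicit Euler step of a neural ODE has exactly the form of a residual block. This is a generic sketch with a hand-written vector field standing in for a trained network.

```python
import math

def euler_integrate(f, x0, t0, t1, steps):
    """Explicit Euler for dx/dt = f(t, x). Each update
    x <- x + h * f(t, x) is structurally a residual-network layer:
    identity connection plus a learned increment."""
    h = (t1 - t0) / steps
    x, t = x0, t0
    for _ in range(steps):
        x = x + h * f(t, x)   # residual connection + "network" output
        t += h
    return x

# a trained network would play the role of this toy vector field: dx/dt = -x
x1 = euler_integrate(lambda t, x: -x, x0=1.0, t0=0.0, t1=1.0, steps=10000)
# the exact solution at t = 1 is exp(-1)
```

Replacing the hand-written `f` with a parameterised network and differentiating through the solver (directly or via the adjoint method) yields the neural ODE models surveyed in the thesis.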
