In this paper, we consider the finite element approximation of a parabolic Dirichlet boundary control problem and establish new a priori error estimates. For the temporal semi-discretization we apply the DG(0) method for the state and the variational discretization for the control, and obtain convergence rates for the control of $O(k^{\frac{1}{4}})$ for problems posed on polytopes with $y_0\in L^2(\Omega)$, $y_d\in L^2(I;L^2(\Omega))$, and of $O(k^{\frac{3}{4}-\varepsilon})$ $(\varepsilon>0)$ for problems posed on smooth domains with $y_0\in H^{\frac{1}{2}}(\Omega)$, $y_d\in L^2(I;H^1(\Omega))\cap H^{\frac{1}{2}}(I;L^2(\Omega))$. For the full discretization of the optimal control problem posed on polytopal domains, we apply the DG(0)-CG(1) method for the state and the variational discretization approach for the control, and derive the convergence order $O(k^{\frac{1}{4}} +h^{\frac{1}{2}})$, which improves the known results by removing the mesh size condition $k=O(h^2)$ between the spatial mesh size $h$ and the time step $k$. As a byproduct, we obtain the a priori error estimate $O(h+k^{\frac{1}{2}})$ for the full discretization of parabolic equations with inhomogeneous Dirichlet data posed on polytopes, which also improves the known error estimate by removing the above mesh size condition.
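For orientation, a typical model problem of this kind (the tracking-type cost, the homogeneous right-hand side, and the control penalty below are a generic sketch rather than the paper's exact setting) reads
$$\min_{u}\ \frac12\|y-y_d\|^2_{L^2(I;L^2(\Omega))}+\frac{\alpha}{2}\|u\|^2_{L^2(I;L^2(\Gamma))}\quad\text{subject to}\quad \partial_t y-\Delta y=0\ \text{in }I\times\Omega,\qquad y=u\ \text{on }I\times\Gamma,\qquad y(0)=y_0\ \text{in }\Omega,$$
with the state discretized in time by DG(0) (piecewise constants) and the control left undiscretized, as in the variational discretization approach.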
In the present paper, we consider one-hidden-layer ANNs with a feedforward architecture, also referred to as shallow or two-layer networks, so that the structure is determined by the number and types of neurons. The determination of the parameters that define the function, called training, is done by solving the approximation problem, i.e., by imposing interpolation at a set of specified nodes. We consider the case where the parameters are trained with the procedure known as the Extreme Learning Machine (ELM), which leads to a linear interpolation problem. Under these hypotheses, the existence of an ANN interpolating function is guaranteed. The focus is then on the accuracy of the interpolation away from the given sampling nodes when these are equispaced, Chebyshev, or randomly selected. The study is motivated by the well-known bell-shaped Runge example, which makes it clear that a global interpolating polynomial is accurate only if trained on suitably chosen nodes, for example the Chebyshev ones. In order to evaluate the behavior as the number of interpolation nodes grows, we increase the number of neurons in our network and compare it with the interpolating polynomial. We test on Runge's function and other well-known examples with different regularities. As expected, the accuracy of the approximation with a global polynomial increases only if the Chebyshev nodes are considered. In contrast, the error of the ANN interpolating function always decays, and in most cases we observe that the convergence follows what is observed in the polynomial case on Chebyshev nodes, regardless of the set of nodes used for training.
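As a minimal sketch of the ELM-type training described above (the tanh activation, the uniform sampling range for the random hidden parameters, and the use of Runge's function are illustrative assumptions): the hidden-layer weights and biases are drawn at random and kept fixed, so interpolation at $n$ nodes with $n$ neurons reduces to a square linear system for the output weights.

```python
import numpy as np

def runge(x):
    # Runge's bell-shaped example on [-1, 1]
    return 1.0 / (1.0 + 25.0 * x**2)

def elm_interpolant(x_nodes, y_nodes, rng, scale=5.0):
    """One-hidden-layer network trained ELM-style: random, fixed hidden
    parameters; output weights from a square linear interpolation system."""
    n = len(x_nodes)                       # number of neurons = number of nodes
    w = rng.uniform(-scale, scale, n)      # random input weights (kept fixed)
    b = rng.uniform(-scale, scale, n)      # random biases (kept fixed)
    H = np.tanh(np.outer(x_nodes, w) + b)  # n x n hidden-layer matrix
    c = np.linalg.solve(H, y_nodes)        # output weights via interpolation
    return lambda x: np.tanh(np.outer(x, w) + b) @ c

rng = np.random.default_rng(0)
n = 33
nodes = {
    "equispaced": np.linspace(-1.0, 1.0, n),
    "Chebyshev":  np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n)),
    "random":     np.sort(rng.uniform(-1.0, 1.0, n)),
}
x_test = np.linspace(-1.0, 1.0, 2001)
for name, xk in nodes.items():
    f = elm_interpolant(xk, runge(xk), rng)
    print(f"{name:>10s} nodes: max error {np.max(np.abs(f(x_test) - runge(x_test))):.2e}")
```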
We report some results regarding the mechanization of normative (preference-based) conditional reasoning. Our focus is on Aqvist's system E for conditional obligation (and its extensions). Our mechanization is achieved via a shallow semantical embedding in Isabelle/HOL. We consider two possible uses of the framework. The first one is as a tool for meta-reasoning about the considered logic. We employ it for the automated verification of deontic correspondences (broadly conceived) and related matters, analogous to what has been previously achieved for the modal logic cube. The second use is as a tool for assessing ethical arguments. We provide a computer encoding of a well-known paradox in population ethics, Parfit's repugnant conclusion. Whether the presented encoding increases or decreases the attractiveness and persuasiveness of the repugnant conclusion is a question we would like to pass on to philosophy and ethics.
The distributed task allocation problem, as one of the most interesting distributed optimization challenges, has received considerable research attention recently. Previous works mainly focused on the task allocation problem in a population of individuals with no constraints on the amount of tasks each individual can afford. The latter condition, however, does not always hold. In this paper, we study the task allocation problem with allocation constraints in a game-theoretic framework. We assume that each individual can afford a different amount of tasks and that the cost function is convex. To investigate the problem in the framework of population games, we construct a potential game and calculate the fitness function for each individual. We prove that when the Nash equilibrium of the potential game lies in the feasible set of the limited task allocation problem, it is the unique globally optimal solution. Otherwise, we derive the unique globally optimal solution analytically. In addition, to confirm our theoretical results, we consider exponential and quadratic cost functions for each agent. Two algorithms with these representative cost functions are proposed to numerically seek the optimal solution of the limited task allocation problem. We further perform Monte Carlo simulations, whose results agree with our analytical calculations.
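To make the constrained problem concrete, here is a minimal convex-optimization sketch (the quadratic cost $c_i(x_i)=\frac{1}{2}a_i x_i^2$, the capacity bounds $u_i$, and the dual-bisection solver are illustrative assumptions, not the game-theoretic algorithms proposed in the paper): distribute a total task amount $T$ at minimum cost while respecting each individual's affordable amount.

```python
import numpy as np

def allocate(a, u, T, tol=1e-10):
    """Minimise sum_i 0.5*a_i*x_i^2  s.t.  sum_i x_i = T, 0 <= x_i <= u_i,
    by bisection on the Lagrange multiplier (KKT: a_i*x_i = lam at interior points)."""
    assert T <= u.sum() + tol, "demand exceeds total capacity"
    x_of = lambda lam: np.clip(lam / a, 0.0, u)   # per-agent best response to lam
    lo, hi = 0.0, float(np.max(a * u))            # brackets: total 0 vs total sum(u)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if x_of(mid).sum() < T:
            lo = mid
        else:
            hi = mid
    return x_of(0.5 * (lo + hi))

rng = np.random.default_rng(1)
a = rng.uniform(0.5, 2.0, 8)    # convexity coefficients of the quadratic costs
u = rng.uniform(0.5, 1.5, 8)    # individual capacity limits
x = allocate(a, u, T=4.0)
print(x, x.sum())               # optimal allocation and its total (~4.0)
```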
In this paper we develop a numerical method for efficiently approximating solutions of certain Zakai equations in high dimensions. The key idea is to transform a given Zakai SPDE into a PDE with random coefficients. We show that under suitable regularity assumptions on the coefficients of the Zakai equation, the corresponding random PDE admits a solution random field which, for almost all realizations of the random coefficients, can be written as a classical solution of a linear parabolic PDE. This makes it possible to apply the Feynman--Kac formula to obtain an efficient Monte Carlo scheme for computing approximate solutions of Zakai equations. The approach achieves good results in up to 25 dimensions with fast run times.
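The Feynman-Kac Monte Carlo idea can be sketched as follows for a linear parabolic PDE with frozen coefficients (the zero drift, constant diffusion, terminal condition, and dimension below are illustrative assumptions, not the Zakai model of the paper): the solution value at a point is estimated as the expectation of the terminal condition evaluated along simulated diffusion paths.

```python
import numpy as np

def feynman_kac_mc(x0, T, drift, sigma, g, n_paths=100_000, n_steps=50, seed=0):
    """Estimate u(0, x0) = E[g(X_T)] for dX = drift(X) dt + sigma dW, X_0 = x0,
    by Euler-Maruyama simulation (Feynman-Kac for the associated parabolic PDE)."""
    rng = np.random.default_rng(seed)
    d, dt = len(x0), T / n_steps
    X = np.tile(np.asarray(x0, dtype=float), (n_paths, 1))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, d))
        X += drift(X) * dt + sigma * dW
    vals = g(X)
    return vals.mean(), vals.std() / np.sqrt(n_paths)

d = 25                                        # dimension, as in the reported experiments
x0, sigma, T = np.full(d, 0.2), 0.7, 1.0
g = lambda X: np.exp(-np.sum(X**2, axis=1))   # terminal condition
est, se = feynman_kac_mc(x0, T, drift=lambda X: 0.0 * X, sigma=sigma, g=g)

# sanity check: with zero drift and constant sigma, X_T,i ~ N(x0_i, sigma^2 T)
s2 = sigma**2 * T
exact = np.prod(np.exp(-x0**2 / (1 + 2 * s2)) / np.sqrt(1 + 2 * s2))
print(f"MC estimate: {est:.3e} +/- {se:.1e},  exact: {exact:.3e}")
```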
In the present paper, we develop a new goodness-of-fit test for the Birnbaum-Saunders distribution based on the probability plot. We use the sample correlation coefficient from the Birnbaum-Saunders probability plot as a measure of goodness of fit. Unfortunately, an explicit distribution of this sample correlation coefficient is extremely difficult, if not impossible, to obtain. To address this challenge, we employ extensive Monte Carlo simulations to obtain the empirical distribution of the sample correlation coefficient from the Birnbaum-Saunders probability plot. This empirical distribution allows us to determine the critical values and their corresponding significance levels, thus facilitating the computation of the p-value once the sample correlation coefficient is obtained. Finally, two real-data examples are provided for illustrative purposes.
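A hedged sketch of the Monte Carlo construction (the use of scipy's fatiguelife parameterisation of the Birnbaum-Saunders law, the fixed shape parameter, the sample size, and the plotting positions $(i-0.5)/n$ are illustrative assumptions): simulate samples under the null, record the probability-plot correlation coefficient each time, and read critical values and p-values off the resulting empirical distribution.

```python
import numpy as np
from scipy import stats

def pp_correlation(sample, shape):
    """Correlation coefficient of the Birnbaum-Saunders probability plot:
    ordered sample vs. theoretical quantiles at plotting positions (i - 0.5)/n.
    The correlation is location-scale invariant, so scale = 1 quantiles suffice."""
    n = len(sample)
    p = (np.arange(1, n + 1) - 0.5) / n
    q = stats.fatiguelife.ppf(p, shape)        # Birnbaum-Saunders quantiles
    return np.corrcoef(np.sort(sample), q)[0, 1]

rng = np.random.default_rng(0)
shape, n, n_sim = 0.5, 50, 10_000              # shape fixed here; estimated in practice
# empirical null distribution of the correlation coefficient under H0
r_null = np.array([
    pp_correlation(stats.fatiguelife.rvs(shape, size=n, random_state=rng), shape)
    for _ in range(n_sim)
])
crit = np.quantile(r_null, 0.05)               # 5% critical value: reject H0 if r < crit
r_obs = pp_correlation(stats.fatiguelife.rvs(shape, size=n, random_state=rng), shape)
p_value = np.mean(r_null <= r_obs)             # small r indicates lack of fit
print(f"critical value: {crit:.4f}, observed r: {r_obs:.4f}, p-value: {p_value:.3f}")
```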
In this paper, we investigate the physical layer security capabilities of reconfigurable intelligent surface (RIS) empowered wireless systems. In more detail, we consider a general system model in which the links between the transmitter (TX) and the RIS, as well as the links between the RIS and the legitimate receiver, are modeled as mixture Gamma (MG) random variables (RVs). Moreover, the link between the TX and the eavesdropper is also modeled as an MG RV. Building upon this system model, we derive the probability of zero secrecy capacity as well as the probability of information leakage. Finally, we extract the average secrecy rate for the cases in which the TX has full or partial channel state information.
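As a quick numerical illustration (not the closed-form analysis of the paper): since the secrecy capacity is zero whenever the legitimate SNR does not exceed the eavesdropper's, the probability of zero secrecy capacity and the average secrecy rate can be estimated by sampling the mixture Gamma SNRs directly; the mixture weights and Gamma parameters below are placeholders, not fitted channel values.

```python
import numpy as np

def sample_mixture_gamma(rng, weights, shapes, scales, size):
    """Draw samples from a mixture of Gamma distributions."""
    comp = rng.choice(len(weights), p=weights, size=size)
    return rng.gamma(np.asarray(shapes)[comp], np.asarray(scales)[comp])

rng = np.random.default_rng(0)
n = 1_000_000
# placeholder mixture-Gamma parameters for the legitimate and eavesdropper SNRs
snr_b = sample_mixture_gamma(rng, [0.6, 0.4], [2.0, 4.0], [1.5, 1.0], n)
snr_e = sample_mixture_gamma(rng, [0.7, 0.3], [1.5, 3.0], [1.0, 0.8], n)

secrecy_rate = np.maximum(np.log2(1 + snr_b) - np.log2(1 + snr_e), 0.0)
p_zero = np.mean(snr_b <= snr_e)          # probability of zero secrecy capacity
print(f"P(zero secrecy capacity) ~ {p_zero:.4f}, "
      f"average secrecy rate ~ {secrecy_rate.mean():.3f} bits/s/Hz")
```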
This paper presents a new accelerated proximal Markov chain Monte Carlo methodology to perform Bayesian inference in imaging inverse problems with an underlying convex geometry. The proposed strategy takes the form of a stochastic relaxed proximal-point iteration that admits two complementary interpretations. For models that are smooth or regularised by Moreau-Yosida smoothing, the algorithm is equivalent to an implicit midpoint discretisation of an overdamped Langevin diffusion targeting the posterior distribution of interest. This discretisation is asymptotically unbiased for Gaussian targets and is shown to converge in an accelerated manner for any target that is $\kappa$-strongly log-concave (i.e., requiring on the order of $\sqrt{\kappa}$ iterations to converge, similarly to accelerated optimisation schemes), comparing favourably to [M. Pereyra, L. Vargas Mieles, K.C. Zygalakis, SIAM J. Imaging Sciences, 13, 2 (2020), pp. 905-935], which is only provably accelerated for Gaussian targets and has bias. For models that are not smooth, the algorithm is equivalent to a Leimkuhler-Matthews discretisation of a Langevin diffusion targeting a Moreau-Yosida approximation of the posterior distribution of interest, and hence achieves a significantly lower bias than conventional unadjusted Langevin strategies based on the Euler-Maruyama discretisation. For targets that are $\kappa$-strongly log-concave, the provided non-asymptotic convergence analysis also identifies the optimal time step that maximises the convergence speed. The proposed methodology is demonstrated through a range of experiments related to image deconvolution with Gaussian and Poisson noise, with assumption-driven and data-driven convex priors.
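In equation form (a sketch with generic notation: $U$ denotes the negative log-posterior, or its Moreau-Yosida approximation in the non-smooth case, $\delta$ the step size, and $Z_k$ i.i.d. standard Gaussian vectors), the two discretisations of the overdamped Langevin diffusion referred to above are the implicit midpoint scheme
$$X_{k+1}=X_k-\delta\,\nabla U\!\left(\tfrac{X_k+X_{k+1}}{2}\right)+\sqrt{2\delta}\,Z_{k+1},$$
whose resolution in $X_{k+1}$ can be expressed through a (relaxed) proximal step applied to $U$, and the Leimkuhler-Matthews scheme
$$X_{k+1}=X_k-\delta\,\nabla U(X_k)+\sqrt{\tfrac{\delta}{2}}\,\bigl(Z_k+Z_{k+1}\bigr).$$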
In the context of finite-sum minimization, variance reduction techniques are widely used to improve the performance of state-of-the-art stochastic gradient methods, and both their practical impact and their theoretical properties are well understood. Stochastic proximal point algorithms have been studied as an alternative to stochastic gradient algorithms since they are more stable with respect to the choice of the step size, but a properly variance-reduced version is missing. In this work, we propose the first study of variance reduction techniques for stochastic proximal point algorithms. We introduce stochastic proximal versions of SVRG, SAGA, and some of their variants for smooth and convex functions. We provide several convergence results for the iterates and the objective function values. In addition, under the Polyak-{\L}ojasiewicz (PL) condition, we obtain linear convergence rates for the iterates and the function values. Our numerical experiments demonstrate the advantages of the proximal variance-reduced methods over their gradient counterparts, especially regarding stability with respect to the choice of the step size.
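As a hedged sketch of how a variance-reduced stochastic proximal point step might look for a least-squares finite sum (the SVRG-style correction inside the proximal step is an illustrative choice and not necessarily the exact update analysed in this work; the closed-form proximal operator relies on the quadratic form of each $f_i$):

```python
import numpy as np

def prox_ls(x, a, b_i, gamma):
    """prox_{gamma*f_i}(x) for f_i(x) = 0.5*(a^T x - b_i)^2 (closed form)."""
    return x - gamma * (a @ x - b_i) / (1.0 + gamma * (a @ a)) * a

def vr_stochastic_prox_point(A, b, gamma=0.2, n_epochs=30, seed=0):
    """Stochastic proximal point with an SVRG-style correction (illustrative):
       x <- prox_{gamma*f_i}( x - gamma*(grad F(x_ref) - grad f_i(x_ref)) )."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_epochs):
        x_ref = x.copy()
        g_full = A.T @ (A @ x_ref - b) / n            # grad of F(x) = (1/n) sum_i f_i(x)
        for i in rng.integers(0, n, size=n):
            corr = g_full - A[i] * (A[i] @ x_ref - b[i])
            x = prox_ls(x - gamma * corr, A[i], b[i], gamma)
    return x

rng = np.random.default_rng(1)
n, d = 200, 10
A = rng.normal(size=(n, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)         # unit-norm rows: each f_i is 1-smooth
x_true = rng.normal(size=d)
b = A @ x_true                                        # consistent system; minimiser is x_true
x_hat = vr_stochastic_prox_point(A, b)
print("distance to minimiser:", np.linalg.norm(x_hat - x_true))
```

Note that the minimiser of the full objective is a fixed point of every such step when the reference point equals the minimiser, which is the property variance reduction is meant to exploit.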
In this paper, we introduce several geometric characterizations for strong minima of optimization problems. Applying these results to nuclear norm minimization problems allows us to obtain new necessary and sufficient quantitative conditions for this important property. Our characterizations for strong minima are weaker than the Restricted Injectivity and Nondegenerate Source Condition, which are usually used to identify solution uniqueness of nuclear norm minimization problems. Consequently, we obtain the minimum (tight) bound on the number of measurements for (strong) exact recovery of low-rank matrices.
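For reference, a standard constrained form of the nuclear norm minimization problem and of the strong minimum property (the quadratic-growth formulation below is a generic statement, with $\Phi$ denoting the measurement operator) are
$$\min_{X\in\mathbb{R}^{n_1\times n_2}}\ \|X\|_*\quad\text{subject to}\quad \Phi X=b,$$
where a feasible $\bar X$ is a strong minimum if there exist $c>0$ and $\delta>0$ such that $\|X\|_*\ge\|\bar X\|_*+c\,\|X-\bar X\|_F^2$ for every feasible $X$ with $\|X-\bar X\|_F\le\delta$.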
In this paper we develop a novel neural network model for predicting the implied volatility surface, taking prior financial domain knowledge into account. A new activation function that incorporates the volatility smile is proposed and used for the hidden nodes that process the underlying asset price. In addition, financial conditions, such as the absence of arbitrage, the boundaries, and the asymptotic slope, are embedded into the loss function. This is one of the very first studies to propose a methodological framework that incorporates prior financial domain knowledge into neural network architecture design and model training. The proposed model outperforms the benchmark models on S&P 500 index option data spanning 20 years. More importantly, the domain knowledge is satisfied empirically, showing that the model is consistent with the existing financial theories and conditions related to the implied volatility surface.
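A minimal sketch of the general idea of embedding domain knowledge in both the architecture and the loss (the smile-shaped activation, the two-branch architecture, and the calendar-monotonicity penalty below are illustrative placeholders, not the activation function or arbitrage conditions proposed in the paper):

```python
import torch
import torch.nn as nn

class SmileActivation(nn.Module):
    """Placeholder convex, smile-shaped activation for the moneyness input."""
    def forward(self, x):
        return torch.sqrt(1.0 + x**2)            # smooth, convex, V-shaped in the tails

class IVSurfaceNet(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.moneyness_branch = nn.Sequential(nn.Linear(1, hidden), SmileActivation())
        self.maturity_branch = nn.Sequential(nn.Linear(1, hidden), nn.Softplus())
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, m, t):
        h = torch.cat([self.moneyness_branch(m), self.maturity_branch(t)], dim=-1)
        return nn.functional.softplus(self.head(h))   # implied vol must be positive

def loss_fn(model, m, t, iv_obs, penalty=1.0):
    """Data misfit plus a soft domain-knowledge penalty: total implied variance
    sigma^2 * T should be non-decreasing in maturity (a simplified calendar condition)."""
    iv = model(m, t)
    mse = torch.mean((iv - iv_obs) ** 2)
    w = iv ** 2 * t                                   # total variance on the batch
    w_later = model(m, t + 0.05) ** 2 * (t + 0.05)    # same strikes, later maturity
    calendar = torch.mean(torch.relu(w - w_later))    # penalise decreasing total variance
    return mse + penalty * calendar

# tiny synthetic demo
torch.manual_seed(0)
m = torch.randn(256, 1)                               # log-moneyness
t = torch.rand(256, 1) + 0.05                         # maturity in years
iv_obs = 0.2 + 0.1 * m**2 / (1 + t)                   # synthetic smile-shaped targets
model = IVSurfaceNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model, m, t, iv_obs)
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```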