In this paper, we consider the low-rank structure of the reward sequence in pure exploration problems. First, we propose the separated setting for pure exploration, in which the exploration strategy cannot receive feedback from its explorations. Due to this separation, the exploration strategy must sample the arms obliviously. By exploiting the kernel information of the reward vectors, we provide efficient algorithms for both the time-varying and the fixed case, with regret bound $O(d\sqrt{(\ln N)/n})$. We then establish a lower bound for pure exploration in multi-armed bandits with a low-rank reward sequence; there is an $O(\sqrt{\ln N})$ gap between our upper bound and this lower bound.
In the present paper, we consider one-hidden-layer ANNs with a feedforward architecture, also referred to as shallow or two-layer networks, whose structure is determined by the number and types of neurons. The determination of the parameters that define the function, called training, is done by solving the approximation problem, that is, by imposing interpolation through a set of specified nodes. We consider the case where the parameters are trained using a procedure known as the Extreme Learning Machine (ELM), which leads to a linear interpolation problem. Under these hypotheses, the existence of an ANN interpolating function is guaranteed. The focus is then on the accuracy of the interpolation outside the given sampling nodes when these are equispaced, Chebychev, or randomly selected. The study is motivated by the well-known bell-shaped Runge example, which makes it clear that the construction of a global interpolating polynomial is accurate only if it is trained on suitably chosen nodes, for example the Chebychev ones. In order to evaluate the behavior as the number of interpolation nodes grows, we increase the number of neurons in our network and compare it with the interpolating polynomial. We test on Runge's function and other well-known examples with different regularities. As expected, the accuracy of the approximation with a global polynomial increases only if the Chebychev nodes are considered. In contrast, the error of the ANN interpolating function always decays and, in most cases, we observe that the convergence follows what is observed in the polynomial case on Chebychev nodes, regardless of the set of nodes used for training.
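To make the ELM training step concrete, here is a minimal sketch, assuming a tanh activation, randomly drawn hidden weights, and Runge's function as the target (none of these choices is necessarily the authors' exact setup): once the hidden parameters are fixed, interpolation reduces to a linear least-squares problem for the output weights.

```python
# Hedged sketch of ELM-style interpolation of Runge's function.
import numpy as np

rng = np.random.default_rng(0)

def runge(x):
    return 1.0 / (1.0 + 25.0 * x**2)

def elm_fit(x_nodes, y_nodes, n_neurons):
    """Fix random hidden parameters, then solve a linear system for the
    output weights so the network interpolates (x_nodes, y_nodes)."""
    w = rng.normal(scale=5.0, size=n_neurons)   # hidden weights (assumption)
    b = rng.uniform(-1.0, 1.0, size=n_neurons)  # hidden biases (assumption)
    H = np.tanh(np.outer(x_nodes, w) + b)       # hidden-layer matrix
    beta, *_ = np.linalg.lstsq(H, y_nodes, rcond=None)  # output weights
    return lambda x: np.tanh(np.outer(x, w) + b) @ beta

n = 31
x_equi = np.linspace(-1.0, 1.0, n)
x_cheb = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))  # Chebychev nodes

x_test = np.linspace(-1.0, 1.0, 2001)
for name, nodes in [("equispaced", x_equi), ("Chebychev", x_cheb)]:
    f = elm_fit(nodes, runge(nodes), n_neurons=n)
    err = np.max(np.abs(f(x_test) - runge(x_test)))
    print(f"{name:>10s} nodes, {n} neurons: max error = {err:.2e}")
```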
We report some results regarding the mechanization of normative (preference-based) conditional reasoning. Our focus is on Aqvist's system E for conditional obligation (and its extensions). Our mechanization is achieved via a shallow semantical embedding in Isabelle/HOL. We consider two possible uses of the framework. The first one is as a tool for meta-reasoning about the considered logic. We employ it for the automated verification of deontic correspondences (broadly conceived) and related matters, analogous to what has been previously achieved for the modal logic cube. The second use is as a tool for assessing ethical arguments. We provide a computer encoding of a well-known paradox in population ethics, Parfit's repugnant conclusion. Whether the presented encoding increases or decreases the attractiveness and persuasiveness of the repugnant conclusion is a question we would like to pass on to philosophy and ethics.
In this paper we develop a numerical method for efficiently approximating solutions of certain Zakai equations in high dimensions. The key idea is to transform a given Zakai SPDE into a PDE with random coefficients. We show that under suitable regularity assumptions on the coefficients of the Zakai equation, the corresponding random PDE admits a solution random field which, for almost all realizations of the random coefficients, can be written as a classical solution of a linear parabolic PDE. This makes it possible to apply the Feynman--Kac formula to obtain an efficient Monte Carlo scheme for computing approximate solutions of Zakai equations. The approach achieves good results in up to 25 dimensions with fast run times.
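As a rough illustration of the final step, the following is a minimal Monte Carlo sketch of the Feynman--Kac representation for a linear parabolic PDE: for $u_t + b\cdot\nabla u + \tfrac12\sigma^2\Delta u + cu = 0$ with $u(T,x)=g(x)$, one has $u(t,x) = E[\exp(\int_t^T c(X_s)\,ds)\, g(X_T)]$ along paths $dX_s = b(X_s)\,ds + \sigma\,dW_s$. The drift, diffusion, potential, and terminal condition below are toy placeholders, not the Zakai-derived random coefficients of the paper.

```python
# Hedged sketch: Feynman--Kac Monte Carlo for a toy linear parabolic PDE.
import numpy as np

rng = np.random.default_rng(1)

d = 25               # spatial dimension (the paper reports up to 25)
T, n_steps = 1.0, 50
dt = T / n_steps
n_paths = 50_000

b = lambda x: -0.1 * x                              # drift (assumption)
sigma = 0.3                                         # constant diffusion (assumption)
c = lambda x: -0.05 * np.sum(x**2, axis=1)          # potential (assumption)
g = lambda x: np.exp(-0.5 * np.sum(x**2, axis=1))   # terminal condition (assumption)

def feynman_kac(x0):
    """Monte Carlo estimate of u(0, x0) via Euler--Maruyama paths."""
    X = np.tile(x0, (n_paths, 1))
    log_weight = np.zeros(n_paths)
    for _ in range(n_steps):
        log_weight += c(X) * dt
        dW = rng.normal(scale=np.sqrt(dt), size=(n_paths, d))
        X = X + b(X) * dt + sigma * dW
    payoff = np.exp(log_weight) * g(X)
    return payoff.mean(), payoff.std() / np.sqrt(n_paths)

est, se = feynman_kac(np.zeros(d))
print(f"u(0, 0) ~ {est:.4f}  (MC standard error {se:.1e})")
```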
In the present paper, we develop a new goodness-of-fit test for the Birnbaum-Saunders distribution based on the probability plot. We use the sample correlation coefficient from the Birnbaum-Saunders probability plot as a measure of goodness of fit. Unfortunately, it is impossible or extremely difficult to obtain an explicit distribution of this sample correlation coefficient. To address this challenge, we employ extensive Monte Carlo simulations to obtain the empirical distribution of the sample correlation coefficient from the Birnbaum-Saunders probability plot. This empirical distribution allows us to determine the critical values and their corresponding significance levels, and thus to compute the p-value once the sample correlation coefficient has been obtained. Finally, two real-data examples are provided for illustrative purposes.
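A minimal sketch of the Monte Carlo calibration idea follows, using scipy's `fatiguelife` family (the Birnbaum-Saunders distribution); the plotting positions, the fitting details, and the shape parameter used under the null are plausible choices, not necessarily the authors' exact construction.

```python
# Hedged sketch: empirical null distribution of the probability-plot
# correlation coefficient, critical value, and p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def bs_pp_correlation(sample):
    """Correlation between the ordered sample and fitted theoretical quantiles."""
    n = len(sample)
    p = (np.arange(1, n + 1) - 0.5) / n          # plotting positions (assumption)
    c, loc, scale = stats.fatiguelife.fit(sample, floc=0)
    q = stats.fatiguelife.ppf(p, c, loc=loc, scale=scale)
    return np.corrcoef(np.sort(sample), q)[0, 1]

def null_distribution(n, alpha_shape=0.5, n_sim=1000):
    """Empirical null distribution of the correlation coefficient (small n_sim
    for illustration; the paper uses extensive simulations)."""
    return np.array([
        bs_pp_correlation(stats.fatiguelife.rvs(alpha_shape, size=n, random_state=rng))
        for _ in range(n_sim)
    ])

n = 50
null_r = null_distribution(n)
crit_5pct = np.quantile(null_r, 0.05)            # reject H0 if R is too small

observed = stats.fatiguelife.rvs(0.5, size=n, random_state=rng)
r_obs = bs_pp_correlation(observed)
p_value = np.mean(null_r <= r_obs)
print(f"R = {r_obs:.4f}, 5% critical value = {crit_5pct:.4f}, p-value ~ {p_value:.3f}")
```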
In this paper, we investigate the physical layer security capabilities of reconfigurable intelligent surface (RIS) empowered wireless systems. In more detail, we consider a general system model in which the links between the transmitter (TX) and the RIS, as well as the links between the RIS and the legitimate receiver, are modeled as mixture Gamma (MG) random variables (RVs). Moreover, the link between the TX and the eavesdropper is also modeled as an MG RV. Building upon this system model, we derive the probability of zero secrecy capacity as well as the probability of information leakage. Finally, we extract the average secrecy rate for both the case in which the TX has full channel state information and the case of partial knowledge.
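As a numerical sanity check of the quantities involved (not the closed-form derivations of the paper), the sketch below estimates the probability of zero secrecy capacity, $P(C_s=0)=P(\gamma_E \ge \gamma_B)$, and one common definition of the average secrecy rate by Monte Carlo, using illustrative (made-up) MG parameters.

```python
# Hedged Monte Carlo check with illustrative mixture Gamma (MG) parameters.
import numpy as np

rng = np.random.default_rng(3)

def sample_mixture_gamma(weights, shapes, rates, size):
    """Draw samples from an MG distribution: pick a component, then a Gamma."""
    weights = np.asarray(weights) / np.sum(weights)
    comp = rng.choice(len(weights), p=weights, size=size)
    return rng.gamma(shape=np.asarray(shapes)[comp],
                     scale=1.0 / np.asarray(rates)[comp])

n = 1_000_000
# Illustrative MG parameters (assumptions, not taken from the paper).
gamma_B = sample_mixture_gamma([0.6, 0.4], [2.0, 3.5], [0.5, 1.0], n)  # legitimate link
gamma_E = sample_mixture_gamma([0.7, 0.3], [1.5, 2.5], [1.0, 2.0], n)  # eavesdropper link

p_zero_secrecy = np.mean(gamma_E >= gamma_B)
avg_secrecy_rate = np.mean(np.maximum(np.log2(1 + gamma_B) - np.log2(1 + gamma_E), 0.0))
print(f"P(Cs = 0) ~ {p_zero_secrecy:.4f},  average secrecy rate ~ {avg_secrecy_rate:.3f} bits/s/Hz")
```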
In this paper, we study a numerical artifact of solving the nonlinear shallow water equations with a discontinuous bottom topography. For various first-order schemes, the numerical solution of the momentum forms a spurious spike at the discontinuities of the bottom which does not exist in the exact solution. The height of the spike cannot be reduced even after the mesh is refined. For subsonic problems, this numerical artifact may cause convergence to a function far from the exact solution. To explain the formation of the spurious spike, we perform a convergence analysis by proving a Lax--Wendroff type theorem. It is shown that the spurious spike is caused by the numerical viscosity in the computation of the water height at the discontinuous bottom. The height of the spike is proportional to the magnitude of the viscosity constant in the Lax--Friedrichs flux. Motivated by this conclusion, we propose a modified scheme that adopts the central flux at the bottom discontinuity in the equation of mass conservation, and show that this numerical artifact can be removed in many cases. For various numerical tests with nontransonic Riemann solutions, we observe that the modified scheme recovers the correct convergence.
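The flux modification can be illustrated in a few lines: the Lax--Friedrichs flux carries a viscosity term proportional to $\alpha(U_L - U_R)$, and the modified scheme drops this term in the mass equation at the interface where the bottom jumps. The sketch below shows only the flux formulas, not the full finite-volume solver with source terms.

```python
# Hedged illustration of the Lax--Friedrichs flux versus the modified
# interface flux for the 1D shallow water equations, U = (h, hu).
import numpy as np

g = 9.81  # gravitational acceleration

def swe_flux(U):
    """Physical flux F(U) of the 1D shallow water equations."""
    h, hu = U
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h**2])

def lax_friedrichs_flux(UL, UR, alpha):
    """LF flux: central average plus viscosity alpha/2 * (UL - UR)."""
    return 0.5 * (swe_flux(UL) + swe_flux(UR)) + 0.5 * alpha * (UL - UR)

def modified_interface_flux(UL, UR, alpha, bottom_jumps):
    """At a bottom discontinuity, drop the viscosity in the mass equation."""
    F = lax_friedrichs_flux(UL, UR, alpha)
    if bottom_jumps:
        F[0] = 0.5 * (swe_flux(UL)[0] + swe_flux(UR)[0])  # central flux for h
    return F

# Example: states straddling a bottom step; the LF viscosity acts on h here.
UL, UR = np.array([2.0, 1.0]), np.array([1.5, 1.0])
alpha = max(abs(UL[1] / UL[0]) + np.sqrt(g * UL[0]),
            abs(UR[1] / UR[0]) + np.sqrt(g * UR[0]))
print("LF flux:      ", lax_friedrichs_flux(UL, UR, alpha))
print("modified flux:", modified_interface_flux(UL, UR, alpha, bottom_jumps=True))
```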
Modern 'smart' materials have a complex heterogeneous microscale structure, often with an unknown macroscale closure that we nonetheless need to realise for large-scale engineering and science. The multiscale Equation-Free Patch Scheme empowers us to non-intrusively, efficiently, and accurately predict the large-scale, system-level solutions through computations on only small sparse patches of the given detailed microscale system. Here the microscale system is a 2D beam of heterogeneous elasticity, with either fixed-fixed, fixed-free, or periodic boundary conditions. We demonstrate that the described multiscale Patch Scheme simply, efficiently, and stably predicts the beam's macroscale, with controllable accuracy, at finite scale separation. Dynamical systems theory supports the scheme. This article points the way for others to use this systematic non-intrusive approach, via a developing toolbox of functions, to accurately model and compute the macroscale, system-level behaviour of general complex physical and engineering systems.
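For readers unfamiliar with patch dynamics, the following heavily simplified 1D toy conveys the core coupling idea (the microscale model is computed only inside small patches, and patch-edge values are interpolated from neighbouring patch-centre values); it is a sketch under these assumptions, not the authors' 2D elasticity code or toolbox.

```python
# Hedged 1D toy of the patch-scheme coupling for heterogeneous diffusion.
import numpy as np

# Macroscale: N patches equispaced on a periodic domain of length 2*pi.
N = 8
H = 2 * np.pi / N
centres = H * np.arange(N)

# Microscale: n points per patch with spacing dx << H, heterogeneous diffusivity.
n, dx = 9, 0.01
offsets = dx * (np.arange(n) - n // 2)
x = centres[:, None] + offsets[None, :]                       # micro grid of every patch
kappa_mid = 1.0 + 0.8 * np.cos(40 * (x[:, :-1] + 0.5 * dx))   # kappa at micro midpoints

U = np.sin(x)                                  # initial field on the patches
r = (n // 2) * dx / H                          # patch half-width / macro spacing

def patch_step(U, dt):
    """One Euler step of u_t = (kappa u_x)_x inside every patch; patch-edge
    values come from quadratic interpolation of neighbouring patch-centre values."""
    centre = U[:, n // 2]
    left, right = np.roll(centre, 1), np.roll(centre, -1)
    U[:, 0]  = centre - r * (right - left) / 2 + r**2 * (right - 2 * centre + left) / 2
    U[:, -1] = centre + r * (right - left) / 2 + r**2 * (right - 2 * centre + left) / 2
    flux = kappa_mid * (U[:, 1:] - U[:, :-1]) / dx
    U[:, 1:-1] += dt * (flux[:, 1:] - flux[:, :-1]) / dx
    return U

dt = 0.2 * dx**2 / 1.8                         # stable microscale time step
for _ in range(2000):
    U = patch_step(U, dt)
print("macroscale field at patch centres:", np.round(U[:, n // 2], 3))
```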
In this paper, we introduce several geometric characterizations for strong minima of optimization problems. Applying these results to nuclear norm minimization problems allows us to obtain new necessary and sufficient quantitative conditions for this important property. Our characterizations for strong minima are weaker than the Restricted Injectivity and Nondegenerate Source Condition, which are usually used to identify solution uniqueness of nuclear norm minimization problems. Consequently, we obtain the minimum (tight) bound on the number of measurements for (strong) exact recovery of low-rank matrices.
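For context, the optimization problem under study can be stated in a few lines. The sketch below, assuming random Gaussian measurements and cvxpy as the solver (an illustrative setup rather than the paper's), recovers a low-rank matrix by nuclear norm minimization subject to linear measurement constraints.

```python
# Hedged sketch: exact recovery of a low-rank matrix via nuclear norm minimization.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
n, r, m = 10, 2, 80                      # matrix size, rank, number of measurements

X0 = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))   # ground-truth low-rank matrix
A = rng.normal(size=(m, n, n))                            # Gaussian measurement matrices
b = np.array([np.sum(A[i] * X0) for i in range(m)])       # measurements <A_i, X0>

X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(A[i], X)) == b[i] for i in range(m)]
problem = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
problem.solve()

print("relative recovery error:", np.linalg.norm(X.value - X0) / np.linalg.norm(X0))
```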
Damped wave equations arise in many real-world applications. In this paper, we study low-rank solutions of the strongly damped wave equation with damping, visco-elastic damping, and mass terms. First, a second-order finite difference method is employed for the spatial discretization, which yields a second-order matrix differential system. Next, we transform it into an equivalent first-order matrix differential system and split the transformed system into three subproblems. Applying Strang splitting to these subproblems and combining it with a dynamical low-rank approach, we obtain a low-rank algorithm. Numerical experiments are reported to demonstrate that the proposed low-rank algorithm is robust and accurate, and exhibits second-order convergence in time.
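A generic template of such a splitting step is sketched below; the three subflows and the rank truncation by SVD are placeholders standing in for the paper's three subproblems and its dynamical low-rank integrator, which is more sophisticated than plain truncation.

```python
# Hedged template: Strang splitting with low-rank truncation of a matrix unknown.
import numpy as np

def truncate(U, rank):
    """Best rank-`rank` approximation via truncated SVD."""
    P, s, Qt = np.linalg.svd(U, full_matrices=False)
    return (P[:, :rank] * s[:rank]) @ Qt[:rank, :]

def strang_step(U, dt, phi_A, phi_B, phi_C, rank):
    """One Strang step: C(dt/2) -> B(dt/2) -> A(dt) -> B(dt/2) -> C(dt/2)."""
    for flow, h in [(phi_C, dt / 2), (phi_B, dt / 2), (phi_A, dt),
                    (phi_B, dt / 2), (phi_C, dt / 2)]:
        U = truncate(flow(U, h), rank)
    return U

# Toy subflows acting on a matrix unknown U (placeholders for the subproblems).
L = np.diag(-2 * np.ones(100)) + np.diag(np.ones(99), 1) + np.diag(np.ones(99), -1)
phi_A = lambda U, h: U + h * (L @ U)        # e.g. one stiffness-like part
phi_B = lambda U, h: U + h * (U @ L.T)      # e.g. the other stiffness-like part
phi_C = lambda U, h: np.exp(-h) * U         # e.g. damping part (exact flow)

U = np.outer(np.sin(np.linspace(0, np.pi, 100)), np.cos(np.linspace(0, np.pi, 100)))
for _ in range(50):
    U = strang_step(U, dt=1e-3, phi_A=phi_A, phi_B=phi_B, phi_C=phi_C, rank=8)
print("rank kept: 8, norm of iterate:", np.linalg.norm(U))
```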
Training a deep architecture using a ranking loss has become standard for the person re-identification task. Increasingly, these deep architectures include additional components that leverage part detections, attribute predictions, pose estimators and other auxiliary information, in order to more effectively localize and align discriminative image regions. In this paper we adopt a different approach and carefully design each component of a simple deep architecture and, critically, the strategy for training it effectively for person re-identification. We extensively evaluate each design choice, leading to a list of good practices for person re-identification. By following these practices, our approach outperforms the state of the art, including more complex methods with auxiliary components, by large margins on four benchmark datasets. We also provide a qualitative analysis of our trained representation which indicates that, while compact, it is able to capture information from localized and discriminative regions, in a manner akin to an implicit attention mechanism.
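A minimal sketch of the kind of pipeline the abstract refers to is given below: a plain backbone plus an embedding head trained with a ranking (triplet) loss. The backbone choice, embedding size, and margin are illustrative assumptions rather than the paper's exact configuration, and pretrained weights are omitted so the snippet is self-contained.

```python
# Hedged sketch: backbone + embedding head with a triplet ranking loss.
import torch
import torch.nn as nn
import torchvision

class ReIDEmbedder(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)  # in practice, ImageNet-pretrained
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop classifier
        self.fc = nn.Linear(2048, dim)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return nn.functional.normalize(self.fc(f), dim=1)  # L2-normalized embedding

model = ReIDEmbedder()
criterion = nn.TripletMarginLoss(margin=0.3)

# One illustrative step on random tensors standing in for anchor/positive/negative crops.
anchor, positive, negative = (torch.randn(8, 3, 256, 128) for _ in range(3))
loss = criterion(model(anchor), model(positive), model(negative))
loss.backward()
print("triplet loss:", loss.item())
```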