
Reconstructing the structure of the soil using non-invasive techniques is a highly relevant problem in many scientific fields, such as geophysics and archaeology. This can be done, for instance, with the aid of Frequency Domain Electromagnetic (FDEM) induction devices. Inverting FDEM data is a very challenging inverse problem, as the problem is extremely ill-posed, i.e., sensitive to the presence of noise in the measured data, and non-linear. Regularization methods substitute the original ill-posed problem with a well-posed one whose solution is an accurate approximation of the desired one. In this paper we develop a regularization method to invert FDEM data. We propose to determine the electrical conductivity of the ground by solving a variational problem. The minimized functional is the sum of two terms: the data fitting term ensures that the recovered solution fits the measured data, while the regularization term enforces sparsity on the Laplacian of the solution. The trade-off between the two terms is controlled by the regularization parameter. Sparsity is enforced by minimizing an $\ell_2 - \ell_q$ functional with $0 < q \leq 2$. Although the functional we wish to minimize is non-convex, we show that the variational problem admits a solution. Moreover, we prove that, if the regularization parameter is tuned according to the amount of noise present in the data, this model induces a regularization method. Selected numerical examples on synthetic and real data show the good performance of our proposal.
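In notation suggested by the abstract (the forward FDEM operator $F$, the measured data $\mathbf{b}^{\delta}$, a discrete Laplacian $L$, and the regularization parameter $\mu$ are hypothetical symbols introduced here for illustration), the minimized $\ell_2 - \ell_q$ functional can plausibly be written as
\[
\min_{\boldsymbol{\sigma}} \; \frac{1}{2}\bigl\| F(\boldsymbol{\sigma}) - \mathbf{b}^{\delta} \bigr\|_2^{2}
\;+\; \frac{\mu}{q}\,\bigl\| L\boldsymbol{\sigma} \bigr\|_q^{q},
\qquad 0 < q \leq 2,
\]
where the first term measures the data fit, the second term promotes sparsity of the Laplacian of the conductivity $\boldsymbol{\sigma}$, and $\mu$ balances the two; this is only a sketch of the general form, not the paper's exact formulation.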

Related content

We study the computational complexity of the popular board game backgammon. We show that deciding whether a player can win from a given board configuration is NP-Hard, PSPACE-Hard, and EXPTIME-Hard under different settings of known and unknown opponents' strategies and dice rolls. Our work answers an open question posed by Erik Demaine in 2001. In particular, for the real-life setting where the opponent's strategy and dice rolls are unknown, we prove that determining whether a player can win is EXPTIME-Hard. Interestingly, it is not clear which complexity class strictly contains each problem we consider, because backgammon games can theoretically continue indefinitely as a result of the capture rule.

We propose a constrained linear data-feature-mapping model as an interpretable mathematical model for image classification using a convolutional neural network (CNN). From this viewpoint, we establish detailed connections between the traditional iterative schemes for linear systems and the architectures of the basic blocks of ResNet- and MgNet-type models. Using these connections, we present some modified ResNet models that, compared with the original models, have fewer parameters and yet produce more accurate results, thereby demonstrating the validity of the constrained linear data-feature-mapping assumption. Based on this assumption, we further propose a general data-feature iterative scheme to show the rationality of MgNet. We also provide a systematic numerical study of MgNet to show its success in image classification problems and demonstrate its advantages in comparison with established networks.
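To make the stated connection concrete, the following minimal Python sketch contrasts a classical stationary iteration for a linear system $Au = f$ with a ResNet-style residual update; the function names and the specific iteration are illustrative assumptions, not the paper's implementation.

import numpy as np

def stationary_iteration(A, f, B, u0, steps):
    # Classical scheme for A u = f: repeatedly correct u by a preconditioned
    # residual, u <- u + B (f - A u).  This is the kind of "data-feature"
    # iteration the abstract alludes to, with f the data and u the feature.
    u = u0
    for _ in range(steps):
        u = u + B @ (f - A @ u)
    return u

def residual_block(x, F):
    # A ResNet basic block computes x <- x + F(x); under the constrained
    # data-feature-mapping reading, F plays the role of B (f - A u) above.
    return x + F(x)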

This work provides a theoretical framework for the pose estimation problem using total least squares for vector observations from landmark features. First, the optimization framework is formulated with observation vectors extracted from point cloud features. Then, error-covariance expressions are derived. The attitude and position solutions obtained via the derived optimization framework are proven to reach the bounds defined by the Cram\'er-Rao lower bound under the small-angle approximation of attitude errors. The measurement data for the simulation of this problem are provided through a series of vector observation scans, and a fully populated observation noise-covariance matrix is assumed as the weight in the cost function to cover the most general case of sensor uncertainty. Here, earlier derivations are extended to the pose estimation problem to include more general error correlations than previous cases, which assumed isotropic noise. The proposed solution is simulated in a Monte Carlo framework to validate the error-covariance analysis.
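As a point of reference, one common way to write a weighted least-squares pose cost over landmark vector observations is
\[
J(A, \mathbf{p}) \;=\; \frac{1}{2}\sum_{i=1}^{n}
\bigl(\mathbf{b}_i - A\,\mathbf{r}_i - \mathbf{p}\bigr)^{\top} R_i^{-1}
\bigl(\mathbf{b}_i - A\,\mathbf{r}_i - \mathbf{p}\bigr),
\qquad A \in \mathrm{SO}(3),
\]
where $\mathbf{b}_i$ are the measured observation vectors, $\mathbf{r}_i$ the landmark positions, $\mathbf{p}$ the position, and $R_i$ fully populated noise-covariance matrices; these symbols are our own stand-ins. The total least squares formulation studied in the paper further accounts for errors on the landmark side of the residual, so the expression above should be read only as the ordinary weighted least-squares special case.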

In 2006, Arnold, Falk, and Winther developed finite element exterior calculus, using the language of differential forms to generalize the Lagrange, Raviart-Thomas, Brezzi-Douglas-Marini, and N\'ed\'elec finite element spaces for simplicial triangulations. In a recent paper, Licht asks whether, on a single simplex, one can construct bases for these spaces that are invariant with respect to permuting the vertices of the simplex. For scalar fields, standard bases all have this symmetry property, but for vector fields, this question is more complicated: such invariant bases may or may not exist, depending on the polynomial degree of the element. In dimensions two and three, Licht constructs such invariant bases for certain values of the polynomial degree $r$, and he conjectures that his list is complete, that is, that no such basis exists for other values of $r$. In this paper, we show that Licht's conjecture is true in dimension two. However, in dimension three, we show that Licht's ideas can be extended to give invariant bases for many more values of $r$; we then show that this new larger list is complete. Along the way, we develop a more general framework for the geometric decomposition ideas of Arnold, Falk, and Winther.

Inversion of the two-dimensional discrete Fourier transform (DFT) typically requires all DFT coefficients to be known. When only band-limited DFT coefficients of a matrix are known, the original matrix cannot be uniquely recovered. Using the a priori information that the matrix is binary (all elements are either 0 or 1) can compensate for the missing high-frequency DFT coefficients and restore uniqueness. We theoretically investigate the smallest pass band that can be applied while still guaranteeing unique recovery of an arbitrary binary matrix. The results depend on the dimensions of the matrix. Uniqueness results are proven for the dimensions $p\times q$, $p\times p$, and $p^\alpha\times p^\alpha$, where $p\neq q$ are prime numbers and $\alpha>1$ is an integer. An inversion algorithm is proposed for practically recovering the unique binary matrix. This algorithm is based on integer linear programming methods and significantly outperforms naive implementations. The algorithm efficiently reconstructs $17\times17$ binary matrices using 81 out of the total 289 DFT coefficients.
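Because the 2-D DFT is linear in the matrix entries, each known coefficient yields two linear equality constraints (real and imaginary parts) on 0-1 variables, so recovery can be posed as an integer-linear-programming feasibility problem. The sketch below, using SciPy's generic MILP solver, illustrates this formulation; it is a naive baseline, not the specialized and significantly faster algorithm proposed in the paper.

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def recover_binary_matrix(known_coeffs, shape, tol=1e-6):
    # known_coeffs: dict mapping (k, l) -> complex DFT coefficient; shape = (p, q).
    p, q = shape
    m_idx, n_idx = np.meshgrid(np.arange(p), np.arange(q), indexing="ij")
    rows, rhs = [], []
    for (k, l), val in known_coeffs.items():
        # X_hat[k, l] = sum_{m,n} x[m, n] * exp(-2*pi*i*(k*m/p + l*n/q)) is linear in x.
        w = np.exp(-2j * np.pi * (k * m_idx / p + l * n_idx / q)).ravel()
        rows.append(w.real); rhs.append(val.real)   # real part of the equality
        rows.append(w.imag); rhs.append(val.imag)   # imaginary part of the equality
    A, b = np.vstack(rows), np.asarray(rhs)
    res = milp(c=np.zeros(p * q),                   # pure feasibility: zero objective
               constraints=LinearConstraint(A, b - tol, b + tol),
               integrality=np.ones(p * q),          # every entry is an integer...
               bounds=Bounds(0, 1))                 # ...restricted to {0, 1}
    return np.round(res.x).reshape(p, q) if res.success else None

For instance, one could take X = (np.random.rand(7, 7) < 0.5).astype(int), compute F = np.fft.fft2(X), and pass a low-frequency subset of F's entries as known_coeffs; if the pass band is large enough for uniqueness, the recovered matrix matches X.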

Studying phenotype-gene associations can uncover mechanisms of diseases and help develop efficient treatments. In complex diseases where multiple correlated phenotypes are available, analyzing and interpreting associated genes for each phenotype separately may decrease statistical power and hamper interpretation, because the correlation between phenotypes is ignored. Typical approaches are global testing methods, such as multivariate analysis of variance (MANOVA), which test the overall association between phenotypes and each gene without considering the heterogeneity among phenotypes. In this paper, we extend and evaluate two p-value combination methods, the adaptive weighted Fisher's method (AFp) and the adaptive Fisher's method (AFz), to tackle this problem; AFp stands out as our final proposed method, based on extensive simulations and a real application. Our proposed AFp method has three advantages over traditional global testing methods. Firstly, it accounts for the heterogeneity of phenotypes and determines which specific phenotypes a gene is associated with, using phenotype-specific 0-1 weights. Secondly, AFp takes as input the p-values from the association test of each phenotype, and thus can accommodate different types of phenotypes (continuous, binary, and count). Thirdly, we apply bootstrapping to construct a variability index for the weight estimator of AFp and generate a co-membership matrix to categorize (cluster) genes based on their association patterns for intuitive biological investigation. Through extensive simulations, AFp shows superior performance over global testing methods in terms of type I error control and statistical power, as well as higher accuracy of 0-1 weight estimation than AFz. A real omics application with transcriptomic and clinical data from complex lung diseases demonstrates insightful biological findings of AFp.
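To illustrate the flavor of an adaptive 0-1-weighted Fisher combination, the Python sketch below, for a single gene, selects the subset of phenotype p-values whose Fisher sum gives the strongest evidence and reports the corresponding 0-1 weights; the subset-scoring rule and the uniform-null Monte-Carlo calibration are simplifying assumptions of this illustration (the paper calibrates by permutation so that phenotype correlation is respected) and do not reproduce the exact AFp statistic.

import numpy as np

def adaptive_fisher_sketch(pvals, n_null=2000, seed=0):
    # pvals: per-phenotype association p-values for one gene.
    rng = np.random.default_rng(seed)

    def best_subset(p):
        order = np.argsort(p)
        partial = -2.0 * np.cumsum(np.log(p[order]))      # Fisher sums over the k smallest p-values
        score = partial - 2.0 * np.arange(1, p.size + 1)  # crude centering so different k are comparable
        k = int(np.argmax(score))
        w = np.zeros(p.size, dtype=int)
        w[order[:k + 1]] = 1                              # 0-1 weights: selected phenotypes
        return score[k], w

    obs, weights = best_subset(np.asarray(pvals, dtype=float))
    null = np.array([best_subset(rng.uniform(size=len(pvals)))[0] for _ in range(n_null)])
    p_gene = (1 + np.sum(null >= obs)) / (n_null + 1)     # Monte-Carlo p-value for the gene
    return p_gene, weights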

Spatially inhomogeneous functions, which may be smooth in some regions and rough in others, are modelled naturally in a Bayesian manner using so-called Besov priors, which are given by random wavelet expansions with Laplace-distributed coefficients. This paper studies theoretical guarantees for such prior measures: specifically, we examine their frequentist posterior contraction rates in the setting of non-linear inverse problems with Gaussian white noise. Our results are first derived under a general local Lipschitz assumption on the forward map. We then verify the assumption for two non-linear inverse problems arising from elliptic partial differential equations, the Darcy flow model from geophysics as well as a model for the Schr\"odinger equation appearing in tomography. In the course of the proofs, we also obtain novel concentration inequalities for penalized least squares estimators with $\ell^1$ wavelet penalty, which have a natural interpretation as maximum a posteriori (MAP) estimators. The true parameter is assumed to belong to some spatially inhomogeneous Besov class $B^{\alpha}_{11}$, $\alpha>0$. In a setting with direct observations, we complement these upper bounds with a lower bound on the rate of contraction for arbitrary Gaussian priors. An immediate consequence of our results is that while Laplace priors can achieve minimax-optimal rates over $B^{\alpha}_{11}$-classes, Gaussian priors are limited to a contraction rate that is slower by a polynomial factor. This gives information-theoretic justification for the intuition that Laplace priors are more compatible with $\ell^1$ regularity structure in the underlying parameter.
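For orientation, a penalized least squares estimator with an $\ell^1$ wavelet penalty of the kind mentioned above can be written, in our own (hypothetical) notation with forward map $\mathscr{G}$, wavelet basis $\{\psi_{lr}\}$ in dimension $d$, and scaling $\lambda>0$, as
\[
\hat f \;\in\; \operatorname*{arg\,min}_{f}\;
\frac{1}{2}\bigl\| Y - \mathscr{G}(f) \bigr\|_{L^2}^{2}
\;+\; \lambda \sum_{l,r} 2^{\,l(\alpha + d/2 - d)} \bigl|\langle f, \psi_{lr}\rangle\bigr|,
\]
where the penalty is a constant multiple of the $B^{\alpha}_{11}$ norm written in wavelet coordinates; under the corresponding Besov (Laplace-coefficient) prior such a minimizer is interpreted as a MAP estimator. This is a sketch of the general form, not the paper's precise estimator.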

We study constrained reinforcement learning (CRL) from a novel perspective by setting constraints directly on state density functions, rather than on the value functions considered by previous works. State density has a clear physical and mathematical interpretation and is able to express a wide variety of constraints, such as resource limits and safety requirements. Density constraints can also avoid the time-consuming process of designing and tuning the cost functions required by value function-based constraints to encode system specifications. We leverage the duality between density functions and Q functions to develop an effective algorithm that solves the density-constrained RL problem optimally while guaranteeing that the constraints are satisfied. We prove that the proposed algorithm converges to a near-optimal solution with a bounded error even when the policy update is imperfect. We use a set of comprehensive experiments to demonstrate the advantages of our approach over state-of-the-art CRL methods, on a wide range of density-constrained tasks as well as standard CRL benchmarks such as Safety-Gym.
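As a generic illustration of how a per-state density constraint $\rho_\pi(s) \le \rho_{\max}(s)$ can be handled through duality, the Python sketch below runs a simple dual-ascent loop; solve_mdp and estimate_density are assumed hooks (solve the MDP under a state-wise reward shift and estimate the resulting state density), and this loop is not the paper's algorithm or its convergence-guaranteed update.

import numpy as np

def dual_ascent_density_crl(solve_mdp, estimate_density, rho_max, n_states,
                            iters=50, step=0.1):
    lam = np.zeros(n_states)                      # one Lagrange multiplier per state
    policy = None
    for _ in range(iters):
        policy = solve_mdp(lam)                   # primal step: maximize r(s, a) - lam[s]
        rho = estimate_density(policy)            # state density induced by the policy
        lam = np.maximum(0.0, lam + step * (rho - rho_max))  # dual step: raise lam where violated
    return policy, lam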

Sampling methods (e.g., node-wise, layer-wise, or subgraph sampling) have become an indispensable strategy to speed up training large-scale Graph Neural Networks (GNNs). However, existing sampling methods are mostly based on the graph structural information and ignore the dynamics of optimization, which leads to high variance in estimating the stochastic gradients. The high variance issue can be very pronounced in extremely large graphs, where it results in slow convergence and poor generalization. In this paper, we theoretically analyze the variance of sampling methods and show that, due to the composite structure of the empirical risk, the variance of any sampling method can be decomposed into \textit{embedding approximation variance} in the forward stage and \textit{stochastic gradient variance} in the backward stage, and that both types of variance must be mitigated to obtain a faster convergence rate. We propose a decoupled variance reduction strategy that employs (approximate) gradient information to adaptively sample nodes with minimal variance, and explicitly reduces the variance introduced by embedding approximation. We show theoretically and empirically that the proposed method, even with smaller mini-batch sizes, enjoys a faster convergence rate and better generalization than the existing methods.
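To give a feel for gradient-information-based adaptive sampling, the sketch below draws nodes with probability proportional to an estimate of their per-node gradient norms, which is the classical variance-minimizing choice for importance sampling, and returns the importance weights needed for an unbiased mini-batch gradient; this is a generic illustration, not the paper's decoupled variance reduction scheme (which additionally controls the embedding approximation variance in the forward stage).

import numpy as np

def sample_nodes_by_gradient(grad_norm_est, batch_size, seed=0):
    # grad_norm_est: nonnegative per-node estimates of gradient norms.
    rng = np.random.default_rng(seed)
    g = np.asarray(grad_norm_est, dtype=float)
    prob = g / g.sum()                                 # sampling distribution over nodes
    idx = rng.choice(prob.size, size=batch_size, replace=True, p=prob)
    weights = 1.0 / (prob.size * prob[idx])            # importance weights
    # Averaging weights[j] * grad[idx[j]] over the batch gives an unbiased
    # estimate of the full-graph mean gradient (1/N) * sum_i grad[i].
    return idx, weights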

This paper proposes a model-free Reinforcement Learning (RL) algorithm to synthesise policies for an unknown Markov Decision Process (MDP), such that a linear-time property is satisfied. We convert the given property into a Limit Deterministic B\"uchi Automaton (LDBA), then construct a synchronized product MDP between the automaton and the original MDP. Based on the resulting LDBA, a reward function is then defined over the state-action pairs of the product MDP. With this reward function, our algorithm synthesises a policy whose traces satisfy the linear-time property: as such, the policy synthesis procedure is "constrained" by the given specification. Additionally, we show that the RL procedure sets up an online value iteration method to calculate the maximum probability of satisfying the given property at any given state of the MDP; a convergence proof for the procedure is provided. Finally, the performance of the algorithm is evaluated via a set of numerical examples. We observe an improvement of one order of magnitude in the number of iterations required for the synthesis compared to existing approaches.
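A minimal sketch of the overall recipe (RL on a product of an MDP with an LDBA, with rewards triggered by accepting automaton states) is given below; the hooks reset, step, and accepting, the constant reward, and plain epsilon-greedy Q-learning are illustrative assumptions, and the paper's actual reward definition and value iteration details differ.

import numpy as np
from collections import defaultdict

def q_learning_on_product(reset, step, accepting, n_actions,
                          episodes=1000, horizon=200, r_accept=1.0,
                          gamma=0.99, alpha=0.1, eps=0.1, seed=0):
    # A product state is a pair (mdp_state, ldba_state); `step` advances both the
    # environment and the automaton, and `accepting` flags accepting automaton states.
    rng = np.random.default_rng(seed)
    Q = defaultdict(lambda: np.zeros(n_actions))
    for _ in range(episodes):
        s = reset()
        for _ in range(horizon):
            a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
            s_next, done = step(s, a)
            r = r_accept if accepting(s_next) else 0.0   # reward only on accepting states
            Q[s][a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s][a])
            s = s_next
            if done:
                break
    return Q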
