
We consider signal source localization from range-difference measurements. First, we give readily checked conditions on the measurement noises and the sensor deployment that guarantee asymptotic identifiability of the model, and we show the consistency and asymptotic normality of the maximum likelihood (ML) estimator. Then, we devise an estimator that attains the same asymptotic properties as the ML one. Specifically, we prove that the negative log-likelihood function converges to a limiting function that has a unique minimum and a positive-definite Hessian at the true source position. Hence, it is promising to run local iterations, e.g., the Gauss-Newton (GN) algorithm, initialized at a consistent estimate. The main issue is obtaining such a preliminary consistent estimate. To this end, we construct a linear least-squares problem via algebraic operations and constraint relaxation and obtain a closed-form solution. We then derive and eliminate the bias of the linear least-squares estimator, which yields an asymptotically unbiased (and thus consistent) estimate. Noting that the bias is a function of the noise variance, we further devise a consistent noise variance estimator that involves rooting a third-order polynomial. Starting from the preliminary consistent location estimate, we prove that a single GN iteration suffices to achieve the same asymptotic properties as the ML estimator. Simulation results demonstrate the superiority of the proposed algorithm in the large-sample regime.
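
As a concrete illustration of the two-stage idea, the sketch below implements the classical linearization of range-difference (TDOA) equations into a least-squares problem with a relaxed auxiliary range variable, followed by a single Gauss-Newton refinement of the Gaussian ML cost. It is a generic textbook-style construction under assumed i.i.d. noise and a designated reference sensor; the paper's bias elimination and noise-variance estimation steps are not reproduced here.

```python
import numpy as np

def tdoa_linear_ls(sensors, rd):
    """Closed-form linear LS source estimate from range differences.

    sensors: (m, d) array; sensors[0] is the reference sensor.
    rd:      (m-1,) range differences ||x - p_i|| - ||x - p_0||, i = 1..m-1.
    Relaxes the constraint rho = ||x - p_0|| by treating rho as a free unknown.
    """
    p0, pi = sensors[0], sensors[1:]
    A = np.hstack([-2.0 * (pi - p0), -2.0 * rd[:, None]])
    b = rd**2 - np.sum(pi**2, axis=1) + np.sum(p0**2)
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta[:-1]                      # drop the auxiliary range variable

def gn_one_step(x, sensors, rd):
    """One Gauss-Newton refinement of the Gaussian ML cost at x."""
    p0, pi = sensors[0], sensors[1:]
    d0 = np.linalg.norm(x - p0)
    di = np.linalg.norm(x - pi, axis=1)
    resid = rd - (di - d0)                           # measurement minus model
    J = (x - pi) / di[:, None] - (x - p0) / d0       # Jacobian of the model
    step, *_ = np.linalg.lstsq(J, resid, rcond=None)
    return x + step

# Toy usage with a hypothetical sensor geometry and noiseless measurements.
rng = np.random.default_rng(0)
sensors = rng.uniform(-10, 10, size=(8, 2))
x_true = np.array([3.0, -2.0])
rd = np.linalg.norm(x_true - sensors[1:], axis=1) - np.linalg.norm(x_true - sensors[0])
x0 = tdoa_linear_ls(sensors, rd)   # preliminary closed-form estimate
x1 = gn_one_step(x0, sensors, rd)  # one-step GN refinement
```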

Related Content

Conformance checking techniques allow us to evaluate how well some exhibited behaviour, represented by a trace of monitored events, conforms to a specified process model. Modern monitoring and activity recognition technologies, such as those relying on sensors, the IoT, statistics, and AI, can produce a wealth of relevant event data. However, such data are typically characterised by noise and uncertainty, in contrast to the deterministic event log assumed by conformance checking algorithms. In this paper, we extend alignment-based conformance checking to operate on a probabilistic event log. We introduce a weighted trace model, a weighted alignment cost function, and a custom threshold parameter that controls the level of confidence placed in the event data versus the process model. The resulting algorithm considers activities of lower, yet sufficiently high, probability that better align with the process model. We explain the algorithm and its motivation from both formal and intuitive perspectives, and demonstrate its functionality in comparison with deterministic alignment using real-life datasets.
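
To make the idea tangible, here is a toy sketch of aligning a probabilistic trace against a strictly sequential process model with a confidence threshold. The cost choices (a synchronous move costs one minus the activity's probability, other moves cost one), the threshold `tau`, and the name `weighted_alignment_cost` are illustrative assumptions, not the paper's exact weighted cost function or model formalism.

```python
from functools import lru_cache

def weighted_alignment_cost(prob_trace, model, tau=0.3):
    """Toy weighted alignment between a probabilistic trace and a sequential model.

    prob_trace: list of dicts mapping activity -> probability (one dict per event).
    model:      list of activities describing a strictly sequential process model.
    tau:        confidence threshold; candidate activities below tau are ignored.
    """
    @lru_cache(maxsize=None)
    def cost(i, j):
        if i == len(prob_trace) and j == len(model):
            return 0.0
        options = []
        if i < len(prob_trace):                       # log-only move
            options.append(1.0 + cost(i + 1, j))
        if j < len(model):                            # model-only move
            options.append(1.0 + cost(i, j + 1))
        if i < len(prob_trace) and j < len(model):    # synchronous move
            p = prob_trace[i].get(model[j], 0.0)
            if p >= tau:
                options.append((1.0 - p) + cost(i + 1, j + 1))
        return min(options)

    return cost(0, 0)

# Hypothetical uncertain event log: each event lists candidate activities.
trace = [{"register": 0.9, "review": 0.1},
         {"review": 0.55, "approve": 0.45},
         {"approve": 0.8, "reject": 0.2}]
print(weighted_alignment_cost(trace, ["register", "review", "approve"]))  # 0.75
```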

In the sequential decision-making setting, an agent aims to achieve systematic generalization over a large, possibly infinite, set of environments. These environments are modeled as discrete Markov decision processes whose states and actions are represented by feature vectors. The underlying structure of the environments allows the transition dynamics to be factored into two components: one that is environment-specific and one that is shared. Consider, as an example, a set of environments that share the same laws of motion. In this setting, the agent can collect a finite number of reward-free interactions from a subset of these environments. The agent must then be able to approximately solve any planning task defined over any environment in the original set, relying on the above interactions only. Can we design a provably efficient algorithm that achieves this ambitious goal of systematic generalization? In this paper, we give a partially positive answer to this question. First, we provide a tractable formulation of systematic generalization by adopting a causal viewpoint. Then, under specific structural assumptions, we provide a simple learning algorithm that guarantees any desired planning error up to an unavoidable sub-optimality term, while enjoying a polynomial sample complexity.

We propose a data-assisted two-stage method for solving an inverse random source problem for the Helmholtz equation. In the first stage, the regularized Kaczmarz method is employed to generate initial approximations of the mean and variance based on the mild solution of the stochastic Helmholtz equation. A dataset is then obtained by sampling the approximate and corresponding true profiles according to a certain a priori criterion. The second stage is formulated as an image-to-image translation problem, and several data-assisted approaches are utilized to process the dataset and obtain enhanced reconstructions. Numerical experiments demonstrate that the data-assisted two-stage method provides satisfactory reconstructions for both homogeneous and inhomogeneous media with fewer realizations.
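
For orientation, the following is a minimal sketch of a randomized Kaczmarz iteration for a generic linear system, with a relaxation parameter standing in for regularization; the regularized variant applied to the mild solution of the stochastic Helmholtz equation is considerably more involved.

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter=2000, relax=1.0, seed=0):
    """Basic randomized Kaczmarz iteration for A x = b.

    relax in (0, 2) is a relaxation parameter acting as a crude form of
    regularization; rows are sampled with probability proportional to
    their squared norm.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.sum(A**2, axis=1)
    probs = row_norms / row_norms.sum()
    for _ in range(n_iter):
        i = rng.choice(m, p=probs)
        residual = b[i] - A[i] @ x
        x += relax * residual / row_norms[i] * A[i]   # project onto row i
    return x

# Small consistent toy system.
A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
x_true = np.array([1.0, -2.0])
x_hat = randomized_kaczmarz(A, A @ x_true)
```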

A common approach to generating synthetic data is to sample from a fitted model. We show that, under general assumptions, this approach results in a sample with inefficient estimators and whose joint distribution is inconsistent with the true distribution. Motivated by this, we propose a general method of producing synthetic data that is widely applicable to parametric models, has asymptotically efficient summary statistics, and is both easily implemented and highly computationally efficient. Our approach allows for the construction of both partially synthetic datasets, which preserve certain summary statistics, and fully synthetic datasets, which satisfy the strong guarantee of differential privacy (DP), both with the same asymptotic guarantees. We also provide theoretical and empirical evidence that the distribution from our procedure converges to the true distribution. Beyond our focus on synthetic data, our procedure can also be used to perform approximate hypothesis tests in the presence of intractable likelihood functions.
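
The snippet below illustrates, under a simple normal model, the naive fit-then-sample baseline that the abstract critiques, and the resulting loss of efficiency: the mean re-estimated from the synthetic sample has roughly twice the variance of the mean estimated from the real sample. It does not implement the proposed procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def naive_synthetic(real, rng):
    """Fit a normal model to the real data, then sample a synthetic copy."""
    mu_hat, sigma_hat = real.mean(), real.std(ddof=1)
    return rng.normal(mu_hat, sigma_hat, size=real.shape)

# Monte Carlo check of the inefficiency of the fit-then-sample estimator.
n, reps = 200, 5000
real_means, synth_means = [], []
for _ in range(reps):
    real = rng.normal(0.0, 1.0, size=n)
    real_means.append(real.mean())
    synth_means.append(naive_synthetic(real, rng).mean())
print(np.var(real_means), np.var(synth_means))   # roughly 1/n vs. 2/n
```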

In the context of simulation-based methods, multiple challenges arise, two of which are considered in this work. The first challenge is to appropriately handle problems involving time-dependent phenomena with complex domain deformations, potentially even with changes in the domain topology. The second challenge arises when computational resources and the time for evaluating the model become critical, as in so-called many-query scenarios for parametric problems. Such problems occur, for example, in optimization, uncertainty quantification (UQ), or automatic control, where using highly resolved full-order models (FOMs) may become impractical. To address both types of complexity, we present a novel projection-based model order reduction (MOR) approach for deforming domain problems that takes advantage of the time-continuous space-time formulation. We apply it to two examples that are relevant for engineering or biomedical applications and conduct an error and performance analysis. In both cases, we are able to drastically reduce the computational expense of a model evaluation while maintaining an adequate level of accuracy. Overall, this work demonstrates the effectiveness of the presented MOR approach for deforming domain problems in a time-continuous space-time setting.

Orthogonality constraints naturally appear in many machine learning problems, from Principal Component Analysis to robust neural network training. They are usually solved using Riemannian optimization algorithms, which minimize the objective function while enforcing the constraint. However, enforcing the orthogonality constraint can be the most time-consuming operation in such algorithms. Recently, Ablin & Peyré (2022) proposed the landing algorithm, a method with cheap iterations that does not enforce the orthogonality constraint but is attracted towards the manifold in a smooth manner. In this article, we provide new practical and theoretical developments for the landing algorithm. First, the method is extended to the Stiefel manifold, the set of rectangular orthogonal matrices. We also consider stochastic and variance-reduction algorithms when the cost function is an average of many functions. We demonstrate that all these methods have the same rate of convergence as their Riemannian counterparts that exactly enforce the constraint. Finally, our experiments demonstrate the promise of our approach on an array of machine-learning problems that involve orthogonality constraints.
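
For intuition, here is a schematic landing-style update: a relative-gradient term that moves roughly along the manifold plus a penalty term that attracts the iterate back to orthogonality, with no projection or retraction. Constants and the exact form of the field may differ from the paper's Stiefel formulation; `landing_step` and the step sizes below are illustrative assumptions.

```python
import numpy as np

def landing_step(X, grad, eta=0.02, lam=1.0):
    """One schematic landing update X <- X - eta * Lambda(X) for min f(X), X^T X = I.

    grad is the Euclidean gradient of f at X. The skew term preserves X^T X
    (up to discretization error), while lam * X (X^T X - I) pulls X back
    towards the manifold without ever retracting onto it.
    """
    skew = 0.5 * (grad @ X.T - X @ grad.T)
    landing_field = skew @ X + lam * X @ (X.T @ X - np.eye(X.shape[1]))
    return X - eta * landing_field

# Toy usage: minimize f(X) = -trace(X^T A X), i.e. leading eigenvectors of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)); A = A + A.T
X = np.linalg.qr(rng.standard_normal((6, 2)))[0]
for _ in range(1000):
    X = landing_step(X, grad=-2.0 * A @ X)
print(np.linalg.norm(X.T @ X - np.eye(2)))   # distance to the manifold, close to 0
```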

This study investigates the use of continuous-time dynamical systems for sparse signal recovery. The proposed dynamical system takes the form of a nonlinear ordinary differential equation (ODE) derived from the gradient flow of the Lasso objective function. The sparse signal recovery process of this ODE-based approach is demonstrated by numerical simulations using the Euler method. The state of the continuous-time dynamical system eventually converges to the equilibrium point corresponding to the minimum of the objective function. To gain insight into the local convergence properties of the system, a linear approximation around the equilibrium point is applied, yielding a closed-form error-evolution ODE. This analysis characterizes how the state converges to the equilibrium point. In addition, a variational optimization problem is proposed to optimize a time-dependent regularization parameter in order to improve both convergence speed and solution quality. A deep-unfolded variational optimization method is introduced as a means of solving this optimization problem, and its effectiveness is validated through numerical experiments.
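
A minimal sketch of the basic ingredient, assuming a smoothed surrogate for the nonsmooth $\ell_1$ term: forward-Euler integration of a gradient-flow ODE for the Lasso objective $f(x) = \tfrac{1}{2}\|y - Ax\|^2 + \lambda\|x\|_1$. The paper's exact ODE, time-dependent regularization, and deep-unfolded tuning are not reproduced here.

```python
import numpy as np

def lasso_gradient_flow(A, y, lam=0.1, dt=0.01, n_steps=5000, eps=1e-3):
    """Forward-Euler discretization of a Lasso gradient-flow ODE.

    The l1 term is handled with a smoothed subgradient x / sqrt(x^2 + eps^2),
    an approximation of sign(x); the exact formulation in the paper may differ.
    """
    x = np.zeros(A.shape[1])
    for _ in range(n_steps):
        grad = A.T @ (A @ x - y) + lam * x / np.sqrt(x**2 + eps**2)
        x = x - dt * grad                      # Euler step along the flow
    return x

# Toy sparse recovery: 3-sparse signal, 50 measurements, 100 unknowns.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100); x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = lasso_gradient_flow(A, y)
```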

This paper develops several interesting, significant, and interconnected approaches to nonparametric or semi-parametric statistical inference. The overwhelmingly favoured maximum likelihood estimator (MLE) under a parametric model is renowned for its strong consistency and optimality, generally credited to Cramér. These properties, however, falter when the model is not regular or not completely accurate. In addition, they apply only to local maxima close to the unknown true parameter value. One must therefore ascertain that the global maximum of the likelihood is strongly consistent under generic conditions (Wald, 1949). Global consistency is also a vital research problem in the context of empirical likelihood (EL) (Owen, 2001). The EL is a ground-breaking platform for nonparametric statistical inference. A subsequent milestone was achieved by placing estimating functions under the EL umbrella (Qin and Lawless, 1994). The resulting profile EL function possesses many nice properties of the parametric likelihood but also shares its shortcomings. These properties cannot be utilized unless we know that the local maximum at hand is close to the unknown true parameter value. To overcome this obstacle, we first put forward a clean set of conditions under which the global maximum is consistent. We then develop a global maximum test to ascertain whether the local maximum at hand is in fact a global maximum. Furthermore, we devise a global maximum remedy that ensures global consistency by expanding the set of estimating functions under EL. Our simulation experiments on many examples from the literature firmly establish that the proposed approaches work as predicted. Our approaches also provide superior solutions to problems whose parametric counterparts were investigated by DeHaan (1981), Veall (1991), and Gan and Jiang (1999).
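
For concreteness, the sketch below computes the standard profile empirical likelihood ratio for a set of estimating-equation values via its dual form (the object whose local versus global maxima the paper studies). The global maximum test and remedy themselves are not implemented here, and the simple solver and guard used below are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def log_el_ratio(g):
    """Log empirical-likelihood ratio for estimating-equation values g (n x q).

    Dual form: log R = min_lambda  -sum_i log(1 + lambda' g_i),
    with implied weights p_i = 1 / (n (1 + lambda' g_i)).
    """
    n, q = g.shape

    def dual(lam):
        t = 1.0 + g @ lam
        if np.any(t <= 1.0 / n):          # keep the weights p_i in (0, 1]
            return np.inf
        return -np.sum(np.log(t))

    res = minimize(dual, np.zeros(q), method="Nelder-Mead")
    return res.fun

# Toy use: EL ratio test of H0: E[X] = theta0 with g(X, theta) = X - theta.
rng = np.random.default_rng(0)
x = rng.normal(0.3, 1.0, size=(200, 1))
for theta0 in (0.0, 0.3):
    stat = -2.0 * log_el_ratio(x - theta0)   # asymptotically chi-square(1) under H0
    print(theta0, stat)
```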

Here, we investigate whether (and how) experimental design can aid in the estimation of the precision matrix in a Gaussian chain graph model, with particular attention to the interplay between the design, the effect of the experiment, and prior knowledge about the effect. Estimation of the precision matrix is a fundamental task for inferring biological graphical structures such as microbial networks. We compare the marginal posterior precision of the precision matrix under four priors: flat, conjugate Normal-Wishart, Normal-MGIG, and a general independent prior. Under the flat and conjugate priors, the Laplace-approximated posterior precision is not a function of the design matrix, rendering useless any effort to find an optimal experimental design for inferring the precision matrix. In contrast, the Normal-MGIG and general independent priors do allow for the search for optimal experimental designs, yet there is a sharp upper bound on the information that can be extracted from a given experiment. We confirm our theoretical findings via a simulation study comparing (i) the KL divergence between prior and posterior and (ii) the difference in Stein's loss of the MAP estimates with and without an experiment. Our findings provide practical advice for domain scientists conducting experiments to better infer the precision matrix as a representation of a biological network.

We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs), where the involved graph structures share a consistent causal order and sparse unions of supports. Under the multi-task learning setting, we propose an $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models. We theoretically show that the joint estimator, by leveraging data across related tasks, can achieve a better sample complexity for recovering the causal order (or topological order) than separate estimation. Moreover, the joint estimator is able to recover non-identifiable DAGs by estimating them together with some identifiable DAGs. Lastly, our analysis also shows the consistency of union support recovery of the structures. To allow practical implementation, we design a continuous optimization problem whose optimizer coincides with the joint estimator and can be approximated efficiently by an iterative algorithm. We validate the theoretical analysis and the effectiveness of the joint estimator in experiments.
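
As a rough illustration, the code below evaluates a simplified $l_1/l_2$-regularized objective for $K$ linear structural equation models: a least-squares surrogate of the Gaussian likelihood plus a group penalty that couples each edge across tasks. The actual joint MLE, its variance terms, and the continuous optimization scheme in the paper are more elaborate.

```python
import numpy as np

def joint_sem_objective(X_list, B_list, lam):
    """Simplified l1/l2-regularized objective for K linear SEMs.

    X_list[k]: (n_k, d) data matrix for task k.
    B_list[k]: (d, d) weighted adjacency matrix (column j = parents of node j).
    Data fit is a least-squares surrogate of the Gaussian likelihood; the
    penalty is the l2 norm of each edge weight stacked across the K tasks.
    """
    fit = sum(0.5 / X.shape[0] * np.linalg.norm(X - X @ B, "fro") ** 2
              for X, B in zip(X_list, B_list))
    B_stack = np.stack(B_list)                       # (K, d, d)
    group = np.sqrt(np.sum(B_stack**2, axis=0))      # l2 over tasks per edge
    np.fill_diagonal(group, 0.0)                     # do not penalize self-loops
    return fit + lam * group.sum()

# Toy evaluation with two tasks sharing a common 3-node causal order.
rng = np.random.default_rng(0)
d = 3
B1 = np.array([[0, 1.0, 0], [0, 0, 0.8], [0, 0, 0]])
B2 = np.array([[0, 0.5, 0], [0, 0, 1.2], [0, 0, 0]])
X1 = rng.standard_normal((100, d)) @ np.linalg.inv(np.eye(d) - B1)
X2 = rng.standard_normal((100, d)) @ np.linalg.inv(np.eye(d) - B2)
print(joint_sem_objective([X1, X2], [B1, B2], lam=0.1))
```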
