
A technique that allows a formation-enforcing control (FEC) derived from graph rigidity theory to interface with a realistic relative localization system onboard lightweight Unmanned Aerial Vehicles (UAVs) is proposed in this paper. The proposed methodology enables reliable real-world deployment of UAVs in tight formation using real relative localization systems burdened by non-negligible sensory noise, which is typically not fully taken into account in FEC algorithms. The proposed solution decomposes the gradient-descent-based FEC command into interpretable elements and then modifies these individually, based on the estimated distribution of sensory noise, such that the resulting action limits the probability of overshooting the desired formation. The behavior of the system has been analyzed, and the practicality of the proposed solution has been compared to pure gradient descent in real-world experiments, where it showed significantly better performance in terms of oscillations, deviation from the desired state, and convergence time.
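
The noise-aware modification can be pictured as follows: a plain gradient-descent FEC term is computed from pairwise distance errors, and each correction is then attenuated according to the estimated sensory-noise level so that corrections indistinguishable from noise are suppressed. This is a minimal sketch under our own assumptions (a simple deadband-and-shrink rule, illustrative names throughout), not the paper's exact decomposition.

```python
import numpy as np

def fec_command(p_i, neighbors, desired_dists, sigma, gain=1.0):
    """Velocity command for agent i from pairwise distance errors.

    p_i: (2,) own position; neighbors: list of (2,) noisy position
    estimates of neighbors; desired_dists: matching desired distances;
    sigma: estimated std. dev. of the range measurement noise.
    """
    u = np.zeros(2)
    for p_j, d_ij in zip(neighbors, desired_dists):
        diff = p_j - p_i
        dist = np.linalg.norm(diff)
        err = dist - d_ij
        # Deadband: skip corrections indistinguishable from noise, which
        # limits the probability of overshooting the desired formation.
        if abs(err) < 2.0 * sigma:
            continue
        # Shrink the remaining error by the noise margin before descending
        # the gradient of the squared distance error.
        err_adj = np.sign(err) * (abs(err) - 2.0 * sigma)
        u += gain * err_adj * diff / dist
    return u
```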

Related content

Several mixed-effects models for longitudinal data have been proposed to accommodate the non-linearity of late-life cognitive trajectories and to assess the putative influence of covariates on it. No prior research provides a side-by-side examination of these models to offer guidance on their proper application and interpretation. In this work, we examined five statistical approaches previously used to answer research questions related to non-linear changes in cognitive aging: the linear mixed model (LMM) with a quadratic term, the LMM with splines, the functional mixed model, the piecewise linear mixed model, and the sigmoidal mixed model. We first describe the models theoretically. Next, using data from two prospective cohorts with annual cognitive testing, we compared the interpretation of the models by investigating associations of education with cognitive change before death. Lastly, we performed a simulation study to empirically evaluate the models and provide practical recommendations. Except for the LMM with a quadratic term, all models fit adequately, capturing the non-linearity of cognitive change, and were relatively robust. Although spline-based models have no interpretable non-linearity parameters, their convergence was easier to achieve, and they allow graphical interpretation. In contrast, piecewise and sigmoidal models, with interpretable non-linear parameters, may require more data to achieve convergence.
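
For concreteness, here is a minimal sketch of how two of the compared specifications, the LMM with a quadratic term and the LMM with splines, could be fit with statsmodels; the column names (y, time, edu, id) and the spline degrees of freedom are illustrative assumptions, not the cohorts' actual variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_quadratic_and_spline(df: pd.DataFrame):
    # LMM-quadratic: fixed quadratic trend in time, random intercept + slope.
    quad = smf.mixedlm("y ~ time + I(time**2) + edu + edu:time",
                       df, groups=df["id"], re_formula="~time").fit()
    # LMM-splines: a B-spline basis in time replaces the polynomial.
    spl = smf.mixedlm("y ~ bs(time, df=4) + edu + edu:time",
                      df, groups=df["id"], re_formula="~time").fit()
    return quad, spl
```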

In the context of interactive theorem provers based on a dependent type theory, automation tactics (dedicated decision procedures, calls to automated solvers, ...) are often limited to goals which lie exactly in some expected logical fragment. This very often prevents users from applying these tactics in other, even similar, contexts. This paper discusses the design and the implementation of pre-processing operations for automating formal proofs in the Coq proof assistant. It presents the implementation of a wide variety of predictable, atomic goal transformations, which can be composed in various ways to target different backends. A gallery of examples illustrates how this significantly expands the power of automation engines.
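
The compositional design can be illustrated generically: atomic, predictable transformations are chained into a pipeline aimed at a particular backend. The sketch below is a language-agnostic illustration in Python with made-up placeholder transformations; it is not the Coq plugin's code.

```python
from typing import Callable, List

Goal = str                       # stand-in for a proof goal
Transform = Callable[[Goal], Goal]

def compose(steps: List[Transform]) -> Transform:
    """Chain atomic transformations into one pre-processing pass."""
    def pipeline(goal: Goal) -> Goal:
        for step in steps:
            goal = step(goal)
        return goal
    return pipeline

# Placeholder atomic steps (pure illustrations):
drop_coercions = lambda g: g.replace("is_true ", "")
normalize      = lambda g: " ".join(g.split())

smt_preprocess = compose([drop_coercions, normalize])
print(smt_preprocess("is_true (x <=? y)  ->  x <= y"))
```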

By computing a feedback control via the linear quadratic regulator (LQR) approach and simulating a non-linear non-autonomous closed-loop system using this feedback, we combine two numerically challenging tasks. For the first task, the computation of the feedback control, we use the non-autonomous generalized differential Riccati equation (DRE), whose solution determines the time-varying feedback gain matrix. For the second task, we want to be able to simulate non-linear closed-loop systems for which the regulator is known to be valid only for sufficiently small perturbations. One therefore easily runs into numerical issues in the integrators when the closed-loop control varies greatly; for such systems, e.g., the A-stable implicit Euler method fails.

On the one hand, we implement non-autonomous versions of splitting schemes and BDF methods, well-established DRE solvers in the autonomous case, for the solution of our non-autonomous DREs. On the other hand, to tackle the numerical issues in the simulation of the non-linear closed-loop system, we apply a fractional-step-theta scheme with time-adaptivity tuned specifically to this kind of challenge: we additionally base the time-adaptivity on the activity of the control. We compare this approach to the more classical error-based time-adaptivity.

We describe techniques to make these two tasks computable in a reasonable amount of time and are able to simulate closed-loop systems with strongly varying controls while avoiding numerical issues. Our time-adaptivity approach requires fewer time steps than the error-based alternative and is more reliable.
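
The control-activity-based time-adaptivity can be sketched as a simple step-size controller: refine the step when the closed-loop control varies strongly, coarsen it when the control is quiet. The thresholds and update factors below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def adapt_step(dt, u_prev, u_curr, tol=1e-2,
               shrink=0.5, grow=1.5, dt_min=1e-6, dt_max=1e-1):
    """Return the next step size based on the activity of the control."""
    activity = np.linalg.norm(u_curr - u_prev) / max(dt, dt_min)
    if activity > tol:            # control varies greatly: refine in time
        dt = max(shrink * dt, dt_min)
    elif activity < 0.1 * tol:    # control nearly constant: coarsen
        dt = min(grow * dt, dt_max)
    return dt
```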

The scale function holds significant importance within the fluctuation theory of L\'evy processes, particularly in addressing exit problems. However, its definition is established through the Laplace transform and thus lacks an explicit representation in general. This paper introduces a novel series representation for the scale function, employing Laguerre polynomials to construct a uniformly convergent approximating sequence. Additionally, we develop statistical inference based on discrete observations, presenting estimators of the scale function that are asymptotically normal.
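
Generically, a truncated Laguerre series $f_K(x) = \sum_{k=0}^{K} a_k L_k(x)$ is straightforward to evaluate; the sketch below uses placeholder coefficients, since the paper's actual coefficients for the scale function come from its construction and are not reproduced here.

```python
import numpy as np
from scipy.special import eval_laguerre

def laguerre_series(coeffs, x):
    """Evaluate sum_k coeffs[k] * L_k(x) at the points x."""
    x = np.asarray(x, dtype=float)
    return sum(a * eval_laguerre(k, x) for k, a in enumerate(coeffs))

# Placeholder coefficients; the paper's construction supplies the real ones.
print(laguerre_series([1.0, -0.5, 0.25], np.linspace(0.0, 5.0, 6)))
```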

New biological assays like Perturb-seq link highly parallel CRISPR interventions to a high-dimensional transcriptomic readout, providing insight into gene regulatory networks. Causal gene regulatory networks can be represented by directed acyclic graphs (DAGs), but learning DAGs from observational data is complicated by a lack of identifiability and a combinatorial solution space. Score-based structure learning improves the practical scalability of inferring DAGs. Previous score-based methods are sensitive to the error variance structure; on the other hand, estimating the error variance is difficult without prior knowledge of the structure. Accordingly, we present $\texttt{dotears}$ [doo-tairs], a continuous optimization framework which leverages observational and interventional data to infer a single causal structure, assuming a linear Structural Equation Model (SEM). $\texttt{dotears}$ exploits structural consequences of hard interventions to give a marginal estimate of the exogenous error structure, bypassing the circular estimation problem. We show that $\texttt{dotears}$ is a provably consistent estimator of the true DAG under mild assumptions. $\texttt{dotears}$ outperforms other methods in varied simulations, and in real data it infers edges that validate with higher precision and recall than state-of-the-art methods through differential expression tests and high-confidence protein-protein interactions.
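
The structural observation behind the marginal variance estimate can be sketched directly: under a hard intervention on node $j$ in a linear SEM, the incoming edges of $j$ are severed, so the sample variance of $X_j$ in that interventional regime estimates its exogenous error variance. The data layout below (one sample matrix per intervened node) is an illustrative assumption.

```python
import numpy as np

def estimate_error_variances(data, p):
    """data[j]: (n_j, p) samples under a hard intervention on node j."""
    sigma2 = np.empty(p)
    for j in range(p):
        # Under the intervention on X_j, its parents are severed, so the
        # sample variance of column j estimates the exogenous variance.
        sigma2[j] = data[j][:, j].var(ddof=1)
    return sigma2
```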

We introduce a fine-grained framework for uncertainty quantification of predictive models under distributional shifts. This framework distinguishes the shift in covariate distributions from that in the conditional relationship between the outcome (Y) and the covariates (X). We propose to reweight the training samples to adjust for an identifiable covariate shift while protecting against worst-case conditional distribution shift bounded in an $f$-divergence ball. Based on ideas from conformal inference and distributionally robust learning, we present an algorithm that outputs (approximately) valid and efficient prediction intervals in the presence of distributional shifts. As a use case, we apply the framework to sensitivity analysis of individual treatment effects with hidden confounding. The proposed methods are evaluated in simulation studies and three real data applications, demonstrating superior robustness and efficiency compared with existing benchmarks.
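
The reweighting backbone of such procedures can be sketched with weighted split-conformal prediction, where calibration scores carry likelihood-ratio weights for the covariate shift; the additional $f$-divergence robustification against conditional shift is omitted here. This is a generic sketch, not the paper's algorithm.

```python
import numpy as np

def weighted_conformal_quantile(scores, w_cal, w_test, alpha=0.1):
    """Level-(1 - alpha) quantile of the weighted score distribution."""
    order = np.argsort(scores)
    s, w = scores[order], w_cal[order]
    # The test point carries weight w_test at score +infinity.
    cum = np.cumsum(w) / (w.sum() + w_test)
    idx = np.searchsorted(cum, 1.0 - alpha)
    return np.inf if idx >= len(s) else s[idx]

# Interval for a test point x with fitted regressor mu and weights w(.):
#   q = weighted_conformal_quantile(np.abs(y_cal - mu(X_cal)),
#                                   w(X_cal), w(x))
#   interval = [mu(x) - q, mu(x) + q]
```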

We present a fully-integrated lattice Boltzmann (LB) method for fluid--structure interaction (FSI) simulations that efficiently models deformable solids in complex suspensions and active systems. Our Eulerian method (LBRMT) couples finite-strain solids to the LB fluid on the same fixed computational grid via the reference map technique (RMT). An integral part of the LBRMT is a new LB boundary condition for moving deformable interfaces across different densities. With this fully Eulerian solid--fluid coupling, the LBRMT is well-suited for parallelization and for simulating multi-body contact without remeshing or extra meshes. We validate its accuracy via a benchmark of a deformable solid in a lid-driven cavity, then showcase its versatility through examples of soft solids rotating and settling. With simulations of the mixing of complex suspensions, we highlight the potential of the LBRMT for studying collective behavior in soft matter and biofluid dynamics.
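
The core of the reference map technique is the advection of the reference map field $\xi$ by the fluid velocity, $\partial_t \xi + v \cdot \nabla \xi = 0$. A minimal 1-D sketch with a first-order upwind step on a fixed Eulerian grid is given below; the full LBRMT couples this update to a lattice Boltzmann fluid solver in two and three dimensions.

```python
import numpy as np

def advect_reference_map(xi, v, dx, dt):
    """One upwind advection step for the reference map xi (periodic grid)."""
    xi_new = xi.copy()
    n = len(xi)
    for i in range(n):
        if v[i] >= 0.0:
            grad = (xi[i] - xi[i - 1]) / dx           # backward difference
        else:
            grad = (xi[(i + 1) % n] - xi[i]) / dx     # forward difference
        xi_new[i] = xi[i] - dt * v[i] * grad
    return xi_new
```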

We propose center-outward superquantile and expected shortfall functions, with applications to multivariate risk measurements, extending the standard notion of value at risk and conditional value at risk from the real line to $\mathbb{R}^d$. Our new concepts are built upon the recent definition of Monge-Kantorovich quantiles based on the theory of optimal transport, and they provide a natural way to characterize multivariate tail probabilities and central areas of point clouds. They preserve the univariate interpretation of a typical observation that lies beyond or ahead of a quantile, but in a meaningful multivariate way. We show that they characterize random vectors and their convergence in distribution, which underlines their importance. Our new concepts are illustrated on both simulated and real datasets.
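
Empirically, center-outward quantiles can be sketched by optimally matching the sample to a reference grid drawn from the spherical uniform distribution (uniform radius times uniform direction) and reading quantile contours off the matched radii; a superquantile then averages the observations mapped beyond a given radius. The sketch below is a simplified illustration of this construction, not the paper's estimator.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def center_outward_ranks(X, seed=0):
    """X: (n, d) sample; returns the reference point matched to each row."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((n, d))
    dirs = g / np.linalg.norm(g, axis=1, keepdims=True)  # uniform directions
    grid = rng.random(n)[:, None] * dirs                 # spherical uniform
    cost = ((X[:, None, :] - grid[None, :, :]) ** 2).sum(axis=-1)
    _, cols = linear_sum_assignment(cost)                # optimal matching
    return grid[cols]

def superquantile(X, ranks, tau):
    """Mean of observations mapped beyond the order-tau quantile contour."""
    return X[np.linalg.norm(ranks, axis=1) >= tau].mean(axis=0)
```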

In semantic segmentation, down-sampling of the training data is commonly performed due to limited resources, the need to adapt the image size to the model input, or to improve data augmentation. This down-sampling typically employs different strategies for the image data and the annotated labels, and the discrepancy leads to mismatches between the down-sampled color and label images. Hence, training performance decreases significantly as the down-sampling factor increases. In this paper, we bring together the down-sampling strategies for the image data and the training labels. To that end, we propose a novel framework for label down-sampling via soft-labeling that better conserves label information after down-sampling, fully aligning the soft-labels with the image data and preserving the distribution of the sampled pixels. The proposal also produces reliable annotations for under-represented semantic classes. Altogether, it allows training competitive models at lower resolutions. Experiments show that the proposal outperforms other down-sampling strategies. Moreover, state-of-the-art performance is achieved on reference benchmarks while employing significantly fewer computational resources than the foremost approaches. This enables competitive research for semantic segmentation under resource constraints.
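
One way to realize label down-sampling via soft-labeling, sketched below under our own assumptions (exact divisibility by the factor, per-cell class frequencies as the soft label), is to replace nearest-neighbor label sampling with the class histogram of the high-resolution pixels each low-resolution cell covers.

```python
import numpy as np

def soft_downsample_labels(labels, factor, num_classes):
    """labels: (H, W) int class map; returns (H//f, W//f, C) soft labels."""
    H, W = labels.shape
    h, w = H // factor, W // factor
    onehot = np.eye(num_classes)[labels]                 # (H, W, C)
    blocks = onehot.reshape(h, factor, w, factor, num_classes)
    return blocks.mean(axis=(1, 3))                      # per-cell class freqs
```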

The literature is full of inference techniques developed to estimate the parameters of stochastic dynamical systems driven by the well-known Brownian noise. Such diffusion models, however, are often inappropriate for describing the dynamics reflected in many real-world datasets, which are dominated by jump discontinuities of various sizes and frequencies. To account for the presence of jumps, jump-diffusion models have been introduced and some inference techniques developed. Yet jump-diffusion models are also inadequate, since they fail to reflect the frequent occurrence as well as the continuous spectrum of natural jumps. It is therefore crucial to depart from classical stochastic systems like diffusion and jump-diffusion models and resort to stochastic systems where the regime of stochasticity is governed by stochastic fluctuations of L\'evy type. Reconstruction of L\'evy-driven dynamical systems, however, has been a major challenge. The literature on the reconstruction of L\'evy-driven systems is rather sparse: the few reconstruction algorithms that have been developed suffer from one or several problems, such as being data-hungry, failing to provide a full reconstruction of the noise parameters, tackling only some specific systems, failing to cope with multivariate data in practice, or lacking proper validation mechanisms. This letter introduces a maximum likelihood estimation procedure which grants a full reconstruction of the system, requires less data, and is straightforward to implement for multivariate data. To the best of our knowledge, this contribution is the first to tackle all of the mentioned shortcomings. We apply our algorithm to simulated data as well as to an ice-core dataset spanning the last glaciation. In particular, we find new insights into the dynamics of the climate over the course of the last glaciation that were not found in previous studies.
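
The estimation principle can be sketched for a simple special case: discretize $dX_t = f(X_t)\,dt + \sigma\, dL_t^{\alpha}$ with an Euler scheme, so that the increments minus the drift are $\alpha$-stable, and maximize the resulting likelihood. The linear drift, parameterization, and starting values below are illustrative assumptions; the paper's procedure is more general.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import levy_stable

def neg_log_lik(theta, x, dt):
    a, b, alpha, scale = theta                 # linear drift f(x) = a + b*x
    resid = np.diff(x) - (a + b * x[:-1]) * dt
    # Alpha-stable increments over a step dt scale as dt**(1/alpha).
    s = scale * dt ** (1.0 / alpha)
    return -levy_stable.logpdf(resid, alpha, 0.0, scale=s).sum()

def fit(x, dt, theta0=(0.0, -1.0, 1.8, 0.5)):
    bounds = [(None, None), (None, None), (1.01, 2.0), (1e-6, None)]
    return minimize(neg_log_lik, theta0, args=(x, dt), bounds=bounds)
```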
