
Terrain surface roughness is often described only qualitatively, and its quantitative characterisation is challenging, with a variety of descriptors found in the literature. This study compares five commonly used roughness descriptors, exploring correlations among the terrain surface roughness maps they produce across three terrains with distinct spatial variations. The study also investigates the impact of spatial scale and interpolation method on these correlations. Dense point cloud data obtained with the Light Detection and Ranging (LiDAR) technique are used throughout. The findings highlight both global pattern similarities and local pattern distinctions in the derived roughness maps, emphasising the importance of incorporating multiple descriptors in studies where local roughness values play a crucial role in subsequent analyses. Spatial scale was found to have a smaller impact on rougher terrain, while the interpolation method had minimal influence on the roughness maps derived from the different descriptors.
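
To make one such descriptor concrete, the sketch below computes a common roughness measure, the moving-window standard deviation of elevation, on a toy gridded DEM. The abstract does not name the five descriptors it compares, so this particular measure, the window size, and the synthetic data are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def roughness_std(dem, window=5):
    """Moving-window standard deviation of elevation, one common
    terrain surface roughness descriptor; the window size sets
    the spatial scale of the roughness map."""
    mean = uniform_filter(dem, size=window)
    mean_sq = uniform_filter(dem**2, size=window)
    return np.sqrt(np.maximum(mean_sq - mean**2, 0.0))

# Toy DEM: a smooth slope plus noise, standing in for gridded LiDAR returns
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
dem = 10 * x + rng.normal(scale=0.3, size=x.shape)
rough_map = roughness_std(dem, window=5)
print(rough_map.mean())
```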

Related content

We introduce a novel Dual Input Stream Transformer (DIST) for the challenging problem of assigning fixation points from eye-tracking data, collected during passage reading, to the line of text that the reader was actually focused on. This post-processing step is crucial for the analysis of reading data because of noise in the form of vertical drift. We evaluate DIST against eleven classical approaches on a comprehensive suite of nine diverse datasets. We demonstrate that combining multiple instances of the DIST model in an ensemble achieves high accuracy across all datasets. Further combining the DIST ensemble with the best classical approach yields an average accuracy of 98.17%. Our approach presents a significant step towards addressing the bottleneck of manual line assignment in reading research. Through extensive analysis and ablation studies, we identify key factors that contribute to DIST's success, including the incorporation of line overlap features and the use of a second input stream. Via rigorous evaluation, we demonstrate that DIST is robust to various experimental setups, making it a safe first choice for practitioners in the field.
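
For context, the simplest classical baseline assigns each fixation to the line whose vertical centre is nearest, with no drift correction. The sketch below shows this naive rule; the function name and toy coordinates are invented for illustration, and this is not the DIST model itself.

```python
import numpy as np

def assign_nearest_line(fix_y, line_centers):
    """Classical baseline: assign each fixation to the text line whose
    vertical centre is nearest (no correction for vertical drift)."""
    diffs = np.abs(np.asarray(fix_y)[:, None] - np.asarray(line_centers)[None, :])
    return diffs.argmin(axis=1)

# Three lines of text, 50 px apart; the last fixation has drifted downward
lines = [100, 150, 200]
fixations = [102, 108, 151, 160, 198, 230]
print(assign_nearest_line(fixations, lines))  # -> [0 0 1 1 2 2]
```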

Under a generalised estimating equation (GEE) analysis approach, approximate design theory is used to determine Bayesian D-optimal designs. For two examples, considering simple exchangeable and exponential decay correlation structures, we compare the efficiency of the identified optimal designs to balanced stepped-wedge designs and to corresponding stepped-wedge designs determined by optimising under a normal approximation approach. The dependence of the Bayesian D-optimal designs on the assumed correlation structure is explored; for the considered settings, smaller decay in the correlation between outcomes across time periods, together with larger values of the intra-cluster correlation, means that designs closer to a balanced design are optimal. Unlike for normal data, it is shown that the optimal design need not be centro-symmetric in the binary outcome case. The efficiency of the Bayesian D-optimal design relative to a balanced design can be large, but situations are demonstrated in which the advantages are small. Similarly, the optimal design from a normal approximation approach is often not much less efficient than the Bayesian D-optimal design. Bayesian D-optimal designs can thus be readily identified for stepped-wedge cluster randomised trials with binary outcome data. In certain circumstances, principally ones with strong time period effects, they will indicate that a design unlikely to have been identified by previous methods may be substantially more efficient. However, they require a larger number of assumptions than existing optimal designs, and in many situations existing theory under a normal approximation will provide an easier means of identifying an efficient design for binary outcome data.
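
To make the optimality criterion concrete, the sketch below evaluates a D-criterion, the log-determinant of the GEE information matrix, for a small stepped-wedge design under an exchangeable working correlation and a normal-approximation working model. The design layout, parameter values, and model form are illustrative assumptions, not the paper's Bayesian procedure for binary outcomes.

```python
import numpy as np

def d_criterion(design, rho=0.05, sigma2=1.0):
    """log det of the GEE information matrix for a stepped-wedge design.
    `design` is a clusters x periods 0/1 treatment matrix; an exchangeable
    within-cluster correlation rho and a normal working model are assumed."""
    C, T = design.shape
    V = sigma2 * ((1 - rho) * np.eye(T) + rho * np.ones((T, T)))
    Vinv = np.linalg.inv(V)
    info = np.zeros((T + 1, T + 1))
    for c in range(C):
        X = np.hstack([np.eye(T), design[c][:, None]])  # period effects + treatment
        info += X.T @ Vinv @ X
    return np.linalg.slogdet(info)[1]

# Balanced stepped-wedge design: 3 sequences, 4 periods
balanced = np.array([[0, 1, 1, 1],
                     [0, 0, 1, 1],
                     [0, 0, 0, 1]])
print(d_criterion(balanced))
```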

Many computational problems involve optimization over discrete variables with quadratic interactions. Known as discrete quadratic models (DQMs), these problems are in general NP-hard. Accordingly, there is increasing interest in encoding DQMs as quadratic unconstrained binary optimization (QUBO) models so they can be solved by quantum and quantum-inspired hardware whose architectures and solution methods are designed specifically for such problem types. However, converting DQMs to QUBO models often introduces invalid solutions into the solution space of the QUBO models. These solutions must be penalized by adding appropriate constraints to the QUBO objective function, weighted by a tunable penalty parameter, to ensure that the global optimum is valid. Selecting the strength of this parameter is non-trivial, however, given its influence on the structure of the solution landscape. Here, we investigate the effects of the choice of encoding and of the penalty strength on the structure of QUBO DQM solution landscapes and their optimization, focusing specifically on one-hot and domain-wall encodings.
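
To make the encoding and its penalty concrete, the sketch below builds the penalty part of a one-hot QUBO and verifies by brute force that, for the penalty term alone, the minima are exactly the valid one-hot assignments. The sizes, penalty value, and helper names are illustrative assumptions; a real DQM objective would be added on top of Q, and the penalty must then be tuned against it.

```python
import itertools
import numpy as np

def one_hot_penalty_qubo(num_vars, num_vals, penalty):
    """One-hot encoding: binary x[i,k] = 1 iff discrete variable i takes
    value k. The constraint penalty * (sum_k x[i,k] - 1)^2 expands (using
    x^2 = x) to -penalty on the diagonal and +2*penalty on off-diagonal
    pairs within each variable; the constant +penalty per variable is dropped."""
    n = num_vars * num_vals
    Q = np.zeros((n, n))
    idx = lambda i, k: i * num_vals + k
    for i in range(num_vars):
        for k in range(num_vals):
            Q[idx(i, k), idx(i, k)] -= penalty
            for l in range(k + 1, num_vals):
                Q[idx(i, k), idx(i, l)] += 2 * penalty
    return Q

# Brute-force check: the minimum-energy bitstring is one-hot per variable
Q = one_hot_penalty_qubo(num_vars=2, num_vals=3, penalty=1.0)
best = min(itertools.product([0, 1], repeat=Q.shape[0]),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print(best)  # e.g. (0, 0, 1, 0, 0, 1): both variables take exactly one value
```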

We study a colored generalization of the well-known simple-switch Markov chain for sampling the set of graphs with a fixed degree sequence. Here we consider the space of graphs with colored vertices, in which we fix the degree sequence together with another statistic arising from the vertex coloring, and prove that this space can be connected using simple color-preserving switches, or moves. These moves form a basis for defining an irreducible Markov chain, which is necessary for testing statistical model fit to block-partitioned network data. Our methods further generalize well-known algebraic results from the 1990s: namely, that the corresponding moves can be used to construct a regular triangulation for a generalization of the second hypersimplex. On the other hand, in contrast to the monochromatic case, we show that for simple graphs the 1-norm of the moves necessary to connect the space increases with the number of colors.
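
A minimal sketch of one such color-preserving switch proposal follows, assuming the statistic being preserved is the multiset of endpoint-colour pairs of the edges. The data structures, the specific rewiring tried, and the toy cycle are illustrative assumptions rather than the paper's exact chain.

```python
import random

def switch_step(edges, color, rng=random):
    """One colour-preserving switch proposal on a simple graph. `edges` is a
    set of frozensets and `color` maps vertex -> colour. The switch
    {a,b},{c,d} -> {a,d},{c,b} always preserves degrees; we additionally
    require simplicity and an unchanged multiset of endpoint-colour pairs."""
    e1, e2 = rng.sample(list(edges), 2)
    (a, b), (c, d) = tuple(e1), tuple(e2)
    if len({a, b, c, d}) < 4:
        return edges                                  # loop/multi-edge: reject
    new1, new2 = frozenset((a, d)), frozenset((c, b))
    if new1 in edges or new2 in edges:
        return edges                                  # simplicity fails: reject
    old = sorted([tuple(sorted((color[a], color[b]))),
                  tuple(sorted((color[c], color[d])))])
    new = sorted([tuple(sorted((color[a], color[d]))),
                  tuple(sorted((color[c], color[b])))])
    if old != new:
        return edges                                  # statistic changes: reject
    return (edges - {e1, e2}) | {new1, new2}

# Toy run on a 2-coloured 6-cycle; rejected proposals leave the state unchanged
edges = {frozenset((i, (i + 1) % 6)) for i in range(6)}
color = {i: i % 2 for i in range(6)}
for _ in range(20):
    edges = switch_step(edges, color)
print(sorted(tuple(sorted(e)) for e in edges))
```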

This paper is concerned with the problem of sampling and interpolation involving derivatives in shift-invariant spaces, and with the error analysis of derivative sampling expansions for broad classes of functions. A new type of polynomial based on derivative samples is introduced, which differs from the Euler-Frobenius polynomials for multiplicity $r>1$. A complete characterization of uniform sampling with derivatives is given using Laurent operators. The rate of approximation of a signal (not necessarily continuous) by derivative sampling expansions in shift-invariant spaces generated by compactly supported functions is established in terms of the $L^p$-average modulus of smoothness. Finally, several typical examples illustrating the various problems are discussed in detail.
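
For orientation, the standard objects involved can be written as follows; the notation is the conventional one for shift-invariant spaces and derivative (Hermite-type) sampling, assumed here rather than taken from the paper.

```latex
% Shift-invariant space generated by a compactly supported \varphi:
\[
  V^p(\varphi) \;=\; \Bigl\{\, \sum_{k \in \mathbb{Z}} c_k\, \varphi(\cdot - k)
    \;:\; (c_k)_{k \in \mathbb{Z}} \in \ell^p(\mathbb{Z}) \,\Bigr\}.
\]
% Derivative sampling of multiplicity r records, for f in V^p(\varphi),
\[
  \bigl\{\, f^{(j)}(k) \;:\; k \in \mathbb{Z},\; 0 \le j \le r - 1 \,\bigr\}.
\]
```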

We develop the theory of edge coloring for infinite lattice graphs, proving a necessary and sufficient condition for a proper edge coloring of a patch of a lattice graph to induce a proper edge coloring of the entire lattice graph by translation. This condition forms the cornerstone of a method that finds nearly minimal or minimal edge colorings of infinite lattice graphs. If a nearly minimal edge coloring is requested, the running time is $O(\mu^2 D^4)$, where $\mu$ is the number of edges in one cell (or `basis graph') of the lattice graph and $D$ is the maximum distance between two cells such that there is an edge from within one cell to the other. If a minimal edge coloring is requested, we lack an upper bound on the running time, but we find that this need not pose a limitation in practice; we use the method to minimally edge-color the meshes of all $k$-uniform tilings of the plane for $k\leq 6$, using modest computational resources. We find that all these lattice graphs are Vizing class I. Relating edge colorings to quantum circuits, our work finds direct application by offering minimal-depth quantum circuits in the areas of quantum simulation, quantum optimization, and quantum state verification.
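
To illustrate the translation-induced coloring idea, the sketch below checks that a period-2 proper edge coloring of the square lattice stays proper when extended by translation, using a wrapped finite patch as a stand-in for the infinite lattice. The lattice, the coloring, and the torus check are illustrative assumptions, not the paper's algorithm or its patch condition.

```python
from itertools import product

def proper_on_torus(color, n=6):
    """Check that a periodic edge colouring is proper on an n x n wrapped
    patch of the square lattice: the four edges incident to each vertex
    must receive four distinct colours."""
    for x, y in product(range(n), repeat=2):
        incident = [color('h', (x, y)), color('h', ((x - 1) % n, y)),
                    color('v', (x, y)), color('v', (x, (y - 1) % n))]
        if len(set(incident)) < 4:
            return False
    return True

# Period-2 colouring: horizontal edges coloured by x parity (colours 0, 1),
# vertical edges by y parity (colours 2, 3); 4 colours = the maximum degree
color = lambda kind, pos: pos[0] % 2 if kind == 'h' else 2 + pos[1] % 2
print(proper_on_torus(color))  # True: the colouring extends by translation
```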

Quality assessment, including inspecting images for artifacts, is a critical step during MRI data acquisition to ensure data quality and the success of downstream analysis or interpretation. This study demonstrates a deep learning model for detecting rigid motion in T1-weighted brain images. We leveraged a 2D CNN for three-class classification and tested it on publicly available retrospective and prospective datasets. Grad-CAM heatmaps enabled the identification of failure modes and provided an interpretation of the model's results. The model achieved average precision and recall of 85% and 80%, respectively, on six motion-simulated retrospective datasets. Additionally, the model's classifications on the prospective dataset showed a strong inverse correlation (-0.84) with average edge strength, an image quality metric indicative of motion. This model is part of the ArtifactID tool, aimed at inline automatic detection of Gibbs ringing, wrap-around, and motion artifacts. The tool automates part of the time-consuming QA process and augments expertise on site, which is particularly relevant in low-resource settings where local MR knowledge is scarce.
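
As a rough indication of the model class involved, the sketch below defines a minimal 2D CNN for three-class classification of image slices in PyTorch. The architecture, layer sizes, and class labels are invented for illustration; the abstract does not specify the ArtifactID model's design.

```python
import torch
import torch.nn as nn

class MotionCNN(nn.Module):
    """Minimal 2D CNN for 3-class motion-artifact classification of
    T1-weighted slices (an illustrative stand-in, not the ArtifactID model)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                 # x: (batch, 1, H, W) slices
        return self.classifier(self.features(x).flatten(1))

logits = MotionCNN()(torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 3]) -> e.g. none / mild / severe motion
```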

Dynamical low-rank approximation has become a valuable tool for performing on-the-fly model order reduction of prohibitively large matrix differential equations. A core ingredient is the construction of integrators that are robust to the presence of small singular values and the resulting large time derivatives of the orthogonal factors in the low-rank matrix representation. Recently, the robust basis-update & Galerkin (BUG) class of integrators has been introduced. These methods require no steps that evolve the solution backward in time, often have favourable structure-preserving properties, and allow for parallel time-updates of the low-rank factors. The BUG framework is flexible enough to allow for adaptations to these and further requirements. However, the BUG methods presented so far have only first-order robust error bounds. This work proposes a second-order BUG integrator for dynamical low-rank approximation based on the midpoint rule. The integrator first performs a half-step with a first-order BUG integrator, followed by a Galerkin update with a suitably augmented basis. We prove a robust second-order error bound which additionally shows an improved dependence on the normal component of the vector field. These rigorous results are illustrated and complemented by a number of numerical experiments.
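
As background for the construction, the sketch below implements one step of the first-order fixed-rank BUG ("unconventional") integrator, a half-step of which the proposed method composes with a Galerkin update in a suitably augmented basis. Solving each subflow with a single explicit Euler step, the fixed rank, and the toy problem are simplifying assumptions for illustration.

```python
import numpy as np

def bug_step(U, S, V, F, t, h):
    """One fixed-rank first-order BUG step for Y' = F(t, Y) with Y ~ U S V^T.
    Each subflow is solved here with a single explicit Euler step for brevity."""
    Y0 = U @ S @ V.T
    # K-step and L-step: update left and right bases (parallelisable)
    K = U @ S + h * F(t, Y0) @ V
    U1, _ = np.linalg.qr(K)
    L = V @ S.T + h * F(t, Y0).T @ U
    V1, _ = np.linalg.qr(L)
    # S-step: Galerkin update of the core in the new bases
    S0 = (U1.T @ U) @ S @ (V1.T @ V).T
    S1 = S0 + h * U1.T @ F(t, U1 @ S0 @ V1.T) @ V1
    return U1, S1, V1

# Toy problem: Y' = A Y with a rank-2 initial value
rng = np.random.default_rng(1)
A = -np.eye(20) + 0.01 * rng.normal(size=(20, 20))
U, _ = np.linalg.qr(rng.normal(size=(20, 2)))
V, _ = np.linalg.qr(rng.normal(size=(20, 2)))
S = np.diag([1.0, 0.5])
U, S, V = bug_step(U, S, V, lambda t, Y: A @ Y, 0.0, 0.01)
print(U.shape, S.shape, V.shape)
```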

Hazard ratios are frequently reported in time-to-event and epidemiological studies to assess treatment effects. In observational studies, combining propensity score weights with the Cox proportional hazards model facilitates the estimation of the marginal hazard ratio (MHR). The methods for estimating the MHR are analogous to those employed for estimating common causal parameters, such as the average treatment effect. However, MHR estimation in the context of high-dimensional data remains unexplored. This paper seeks to address that gap through a simulation study that considers variable selection methods from causal inference combined with a recently proposed multiply robust approach to MHR estimation. Additionally, a case study using stroke register data demonstrates the application of these methods. The results of the simulation study indicate that the double selection method of covariate selection is preferable to several other strategies when estimating the MHR. Nevertheless, the estimation can be further improved by applying the multiply robust approach to the set of propensity score models obtained during the double selection process.
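
As a baseline illustration of the MHR estimation pipeline, the sketch below fits a propensity score model, forms inverse-probability weights, and estimates a weighted Cox model on simulated data (using scikit-learn and lifelines). The data-generating process and the single logistic propensity model are assumptions; the paper's double selection and multiply robust procedures are not reproduced here.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

# Simulated observational data: 3 confounders, binary treatment, event times
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
ps_true = 1 / (1 + np.exp(-(X @ np.array([0.5, -0.5, 0.2]))))
A = rng.binomial(1, ps_true)
T = rng.exponential(1 / np.exp(0.3 * A + X @ np.array([0.4, 0.0, 0.2])))
df = pd.DataFrame({"time": T, "event": 1, "A": A})

# Inverse-probability-of-treatment weights from a fitted propensity model
ps = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
df["w"] = np.where(A == 1, 1 / ps, 1 / (1 - ps))

# Weighted Cox model: exp(coef) on A estimates the marginal hazard ratio
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event",
                        weights_col="w", robust=True)
print(cph.summary.loc["A", "exp(coef)"])
```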

This paper takes a different look at the problem of testing the mutual independence of the components of a high-dimensional vector. Instead of testing whether all pairwise associations (e.g. all pairwise Kendall's $\tau$) between the components vanish, we are interested in the null hypothesis that all pairwise associations do not exceed a certain threshold in absolute value. These hypotheses are motivated by the observation that, in the high-dimensional regime, it is rare, and perhaps impossible, to have a null hypothesis that can be exactly modeled by assuming that all pairwise associations are precisely equal to zero. Formulating the null hypothesis as a composite hypothesis makes the problem of constructing tests non-standard, and in this paper we provide a solution for a broad class of dependence measures that can be estimated by $U$-statistics. In particular, we develop an asymptotic and a bootstrap level-$\alpha$ test for the new hypotheses in the high-dimensional regime. We also prove that the new tests are minimax-optimal and investigate their finite-sample properties by means of a small simulation study and a data example.
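
To make the hypotheses concrete, the sketch below computes the natural test statistic, the largest absolute pairwise Kendall's $\tau$, and compares it against a crude resampling threshold. The calibration shown is valid only at exact independence and is an illustrative stand-in; the paper's asymptotic and bootstrap tests are designed to handle the composite null properly.

```python
import numpy as np
from scipy.stats import kendalltau

def max_abs_tau(X):
    """Largest absolute pairwise Kendall's tau over all component pairs."""
    d = X.shape[1]
    return max(abs(kendalltau(X[:, i], X[:, j])[0])
               for i in range(d) for j in range(i + 1, d))

# H0: all |tau_ij| <= tau0. Reject when the maximum clearly exceeds tau0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))               # toy data, d = 10 components
tau0 = 0.2
stat = max_abs_tau(X)
# Crude calibration by permuting each column independently (independence null)
null = [max_abs_tau(rng.permuted(X, axis=0)) for _ in range(100)]
threshold = tau0 + np.quantile(null, 0.95)
print(stat, threshold, stat > threshold)
```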
