
This paper extends the inverse substructuring (IS) approach to the state-space domain and presents a novel state-space substructuring (SSS) technique that embeds the dynamics of connecting elements (CEs) in the Lagrange Multiplier State-Space Substructuring (LM-SSS) formulation via compatibility relaxation. This coupling approach makes it possible to incorporate into LM-SSS connecting elements that are suitable for characterization by inverse substructuring (e.g. rubber mounts), using only information from one of their off-diagonal apparent mass terms. Therefore, the information obtained from an in-situ experimental characterization of the CEs is enough to include them in the coupling formulation. Moreover, LM-SSS with compatibility relaxation makes it possible to couple an unlimited number of components and CEs, requiring only one matrix inversion to compute the coupled state-space model (SSM). Two post-processing procedures that enable the computation of minimal-order coupled models with this approach are also presented. Numerical and experimental substructuring applications are used to demonstrate the validity of the proposed methods. It is found that the IS approach can be accurately applied to state-space models representative of components linked by CEs to identify models representative of the diagonal apparent mass terms of the CEs, provided that the CEs are well described by the underlying assumptions of IS. In this way, state-space models representative of experimentally characterized CEs can be found without performing decoupling operations; hence, these models are not contaminated with spurious states. Furthermore, the developed coupling approach is found to be reliable whenever the dynamics of the CEs can be accurately characterized by IS, making it possible to compute reliable coupled models that are free of spurious states.
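
As a pointer to the mechanism (a minimal sketch, not the paper's state-space derivation): in its classical frequency-domain form, IS assumes the CE behaves as a massless, symmetric element, so its dynamic stiffness can be recovered from a single transfer (off-diagonal) term measured across the CE on the assembled structure,

$\mathbf{Z}_{\mathrm{CE}}(\omega) \approx -\left[\mathbf{Y}_{12}(\omega)\right]^{-1}$,

where $\mathbf{Y}_{12}(\omega)$ collects the FRFs between interface DoFs on opposite sides of the CE. The paper works with the analogous apparent mass description and embeds it in state-space form.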

Related content

The International Consumer Electronics Show, commonly abbreviated as CES, is held every January in Las Vegas, Nevada, United States, and is sponsored by the Consumer Electronics Association.

Statistical analysis of large datasets is challenging because of the limited memory of computing devices and excessive computation time. The Divide and Conquer (DC) algorithm is an effective solution path, but it still has limitations for statistical inference. Empirical likelihood is an important semiparametric and nonparametric statistical method for parameter estimation and statistical inference, and estimating equations build a bridge between empirical likelihood and traditional statistical methods, which makes empirical likelihood widely applicable in various traditional statistical models. In this paper, we propose a novel approach to address the challenges posed by empirical likelihood with massive data, called split sample mean empirical likelihood (SSMEL); our approach provides a unique perspective for solving big-data problems. We show that the SSMEL estimator has the same estimation efficiency as the empirical likelihood estimator on the full dataset, and maintains the important statistical property of Wilks' theorem, allowing our proposed approach to be used for statistical inference. The effectiveness of the proposed approach is illustrated using simulation studies and real data analysis.
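
The abstract does not spell out the construction, but a plausible reading of "split sample mean empirical likelihood" is: split the n observations into K blocks, average the estimating function within each block, and run standard empirical likelihood on the K block means. A minimal 1-D sketch for the mean (estimating equation g(x, theta) = x - theta); the function names and the block-mean reading are assumptions, not the paper's code:

```python
import numpy as np
from scipy.optimize import brentq

def neg2_log_el(z):
    """-2 log empirical likelihood ratio for testing E[z] = 0 (1-D)."""
    if z.min() >= 0.0 or z.max() <= 0.0:
        return np.inf                      # 0 outside the convex hull of z
    lo = -(1.0 - 1e-9) / z.max()           # keep all 1 + lam*z_i > 0
    hi = -(1.0 - 1e-9) / z.min()
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

def ssmel_stat(x, theta, K):
    """Hypothetical SSMEL-style statistic: EL applied to K block means."""
    means = np.array([b.mean() for b in np.array_split(x, K)])
    return neg2_log_el(means - theta)

# e.g. reject H0: theta = theta0 when ssmel_stat(x, theta0, K=50)
# exceeds the chi-square(1) critical value, per Wilks' theorem.
```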

Recent papers [Ber'2022], [GP'2020], [DHZ'2019] have addressed different variants of the (\Delta + 1)-edge colouring problem by concatenating or gluing together many Vizing chains to form what Bernshteyn [Ber'2022] coined \emph{multi-step Vizing chains}. In this paper, we propose a slightly more general definition of this term. We then apply multi-step Vizing chain constructions to prove combinatorial properties of edge colourings that lead to (improved) algorithms for computing edge colourings across different models of computation. This approach seems especially powerful for constructing augmenting subgraphs that respect some notion of locality. First, we construct strictly local multi-step Vizing chains and use them to show a local version of Vizing's Theorem, thus confirming a recent conjecture of Bonamy, Delcourt, Lang and Postle [BDLP'2020]. Our proof is constructive and also implies an algorithm for computing such a colouring. Then, we show that for any uncoloured edge there exists an augmenting subgraph of size O(\Delta^{7}\log n), answering an open problem of Bernshteyn [Ber'2022]. Chang, He, Li, Pettie and Uitto [CHLPU'2018] show a lower bound of \Omega(\Delta \log \frac{n}{\Delta}) for the size of such augmenting subgraphs, so the upper bound is tight up to \Delta and constant factors. These ideas also extend to give a faster deterministic LOCAL algorithm for (\Delta + 1)-edge colouring running in \tilde{O}(\poly(\Delta)\log^6 n) rounds. These results improve the recent breakthrough result of Bernshteyn [Ber'2022], who showed the existence of augmenting subgraphs of size O(\Delta^6\log^2 n) and used these to give the first (\Delta + 1)-edge colouring algorithm in the LOCAL model running in O(\poly(\Delta, \log n)) rounds. ... (see paper for the remaining part of the abstract)
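
As a hedged illustration of the basic primitive that these constructions concatenate (a sketch only, not the paper's multi-step machinery): a Vizing chain alternates fan rotations with flips of two-coloured (Kempe) paths. A minimal Python sketch of the path flip, assuming the a/b component containing `start` is a path, as in a Vizing step where one of the two colours is missing at `start`:

```python
def flip_alternating_path(colour, adj, start, a, b):
    """Swap colours a and b along the maximal a/b-alternating path that
    begins at `start` with an a-coloured edge. Assumes a proper partial
    edge colouring whose a/b component at `start` is a path.
    `colour` maps frozenset edges to colours; `adj` is an adjacency dict."""
    u, want, visited = start, a, {start}
    while True:
        nxt = next((w for w in adj[u] if w not in visited
                    and colour.get(frozenset((u, w))) == want), None)
        if nxt is None:
            return
        e = frozenset((u, nxt))
        colour[e] = b if colour[e] == a else a
        visited.add(nxt)
        u, want = nxt, (b if want == a else a)

# demo: path a-b-c-d coloured 1,2,1; flipping the 1/2 chain from `a`
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
colour = {frozenset(("a", "b")): 1, frozenset(("b", "c")): 2,
          frozenset(("c", "d")): 1}
flip_alternating_path(colour, adj, "a", 1, 2)   # colours become 2, 1, 2
```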

This study investigates the use of continuous-time dynamical systems for sparse signal recovery. The proposed dynamical system takes the form of a nonlinear ordinary differential equation (ODE) derived from the gradient flow of the Lasso objective function. The sparse signal recovery process of this ODE-based approach is demonstrated by numerical simulations using the Euler method. The state of the continuous-time dynamical system converges to the equilibrium point corresponding to the minimum of the objective function. To gain insight into the local convergence properties of the system, a linear approximation around the equilibrium point is applied, yielding a closed-form error evolution ODE; this analysis characterizes the local convergence behavior around the equilibrium point. In addition, a variational optimization problem is proposed to optimize a time-dependent regularization parameter, improving both convergence speed and solution quality. A deep-unfolded variational optimization method is introduced as a means of solving this optimization problem, and its effectiveness is validated through numerical experiments.
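
A minimal sketch of the forward Euler discretization of the Lasso gradient flow described here (fixed regularization parameter for simplicity; np.sign(x) is one valid element of the subdifferential of ||x||_1, and the step size and horizon are illustrative choices):

```python
import numpy as np

def lasso_flow_euler(A, y, lam, h=1e-3, T=5.0):
    """Euler method for dx/dt = -(A^T (A x - y) + lam * sign(x)),
    the (sub)gradient flow of 0.5*||y - A x||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(int(T / h)):
        x -= h * (A.T @ (A @ x - y) + lam * np.sign(x))
    return x
```

A time-dependent regularization parameter, as proposed in the paper, would replace `lam` with a schedule lam(t) evaluated at each Euler step.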

We study the convergence of a family of numerical integration methods in which the numerical integral is formulated as a finite matrix approximation to a multiplication operator. For bounded functions, convergence has already been established using the theory of strong operator convergence. In this article, we consider unbounded functions and domains, which pose several difficulties compared to the bounded case. A natural tool for this study is the theory of strong resolvent convergence, which has previously been applied mostly to the convergence of approximations of differential operators. The existing theory includes convergence theorems that serve directly as proofs for a limited class of functions; we extend these results to a wider class of functions, characterized in terms of their growth or discontinuities. The extended results apply to all self-adjoint operators, not just multiplication operators. We also show how Jensen's operator inequality can be used to analyse the convergence of an improper numerical integral of a function bounded by an operator convex function.
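
For intuition, a minimal numerical sketch of the finite-section idea in a concrete unbounded case: the multiplication operator by x on L^2 of the standard Gaussian, whose n-by-n truncation in the orthonormal Hermite basis is a Jacobi matrix; by the Golub-Welsch connection, the quantity e_1^T f(J_n) e_1 reproduces Gauss-Hermite quadrature (the choice of weight and integrand here is illustrative):

```python
import numpy as np

n = 40
b = np.sqrt(np.arange(1, n))            # Hermite recurrence coefficients for N(0,1)
J = np.diag(b, 1) + np.diag(b, -1)      # finite section of multiplication by x
f = lambda x: x ** 4                    # unbounded integrand
nodes, V = np.linalg.eigh(J)
approx = V[0] @ (f(nodes) * V[0])       # e_1^T f(J_n) e_1
print(approx)                           # ~3.0 = E[X^4] for X ~ N(0, 1)
```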

This work is motivated by learning the individualized minimal clinically important difference, a vital concept for assessing clinical importance in various biomedical studies. We formulate the scientific question as a high-dimensional statistical problem where the parameter of interest lies in an individualized linear threshold. The goal is to develop a hypothesis testing procedure for the significance of a single element of this parameter, as well as of a linear combination of its elements. The difficulty is due to the high-dimensional nuisance parameter involved in developing such a testing procedure, and also stems from the fact that this high-dimensional threshold model is nonregular, so the limiting distribution of the corresponding estimator is nonstandard. To deal with these challenges, we construct a test statistic via a new bias-corrected smoothed decorrelated score approach and establish its asymptotic distributions under both the null and local alternative hypotheses. We propose a double-smoothing approach to select the optimal bandwidth in our test statistic and provide theoretical guarantees for the selected bandwidth. We conduct simulation studies to demonstrate how our proposed procedure can be applied in empirical studies. We apply the proposed method to a clinical trial where the scientific goal is to assess the clinical importance of a surgical procedure.
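
For intuition about the "smoothed" ingredient (a sketch only; the paper's bias correction, decorrelation, and double-smoothing bandwidth rule are not reproduced here): the nonregularity comes from an indicator of the linear threshold, which a kernel CDF renders differentiable so that score-type statistics can be formed:

```python
import numpy as np
from scipy.stats import norm

def smoothed_threshold(u, h):
    """Smooth surrogate for the indicator 1{u > 0}: as the bandwidth
    h -> 0 this recovers the indicator, while for h > 0 the criterion
    is differentiable in the threshold parameter."""
    return norm.cdf(u / h)
```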

We present a study of a kernel-based two-sample test statistic related to the Maximum Mean Discrepancy (MMD) in the manifold data setting, assuming that high-dimensional observations are close to a low-dimensional manifold. We characterize the test level and power in relation to the kernel bandwidth, the number of samples, and the intrinsic dimensionality of the manifold. Specifically, we show that when data densities are supported on a $d$-dimensional sub-manifold $\mathcal{M}$ embedded in an $m$-dimensional space, the kernel two-sample test for data sampled from a pair of distributions $p$ and $q$ that are H\"older with order $\beta$ (up to 2) is powerful when the number of samples $n$ is large enough that $\Delta_2 \gtrsim n^{- { 2 \beta/( d + 4 \beta ) }}$, where $\Delta_2$ is the squared $L^2$-divergence between $p$ and $q$ on the manifold. We establish a lower bound on the test power for finite $n$ that is sufficiently large, where the kernel bandwidth parameter $\gamma$ scales as $n^{-1/(d+4\beta)}$. The analysis extends to cases where the manifold has a boundary and the data samples contain high-dimensional additive noise. Our results indicate that the kernel two-sample test does not suffer from the curse of dimensionality when the data lie on or near a low-dimensional manifold. We validate our theory and the properties of the kernel test for manifold data through a series of numerical experiments.
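
A minimal sketch of the statistic under study, the standard unbiased estimator of the squared MMD with a Gaussian kernel (per the theory above, the bandwidth gamma would be scaled like $n^{-1/(d+4\beta)}$ with $d$ the intrinsic dimension; the calibration by permutation is omitted):

```python
import numpy as np

def mmd2_unbiased(X, Y, gamma):
    """Unbiased estimator of MMD^2 with the Gaussian kernel
    k(x, y) = exp(-||x - y||^2 / (2 * gamma**2))."""
    def K(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * gamma ** 2))
    n, m = len(X), len(Y)
    Kxx, Kyy, Kxy = K(X, X), K(Y, Y), K(X, Y)
    return ((Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
            - 2.0 * Kxy.mean())
```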

We present PACE, a novel method for modifying motion-captured virtual agents to interact with and move throughout dense, cluttered 3D scenes. Our approach changes a given motion sequence of a virtual agent as needed to adjust to the obstacles and objects in the environment. We first take the individual frames of the motion sequence most important for modeling interactions with the scene and pair them with the relevant scene geometry, obstacles, and semantics so that interactions in the agent's motion match the affordances of the scene (e.g., standing on a floor or sitting in a chair). We then optimize the motion of the human by directly altering the high-DOF pose at each frame in the motion to better account for the unique geometric constraints of the scene. Our formulation uses novel loss functions that maintain a realistic flow and natural-looking motion. We compare our method with prior motion-generation techniques and highlight the benefits of our method with a perceptual study and physical plausibility metrics. Human raters preferred our method over the prior approaches: 57.1% of the time versus the state-of-the-art method using existing motions, and 81.0% of the time versus a state-of-the-art motion synthesis method. Additionally, our method scores significantly higher on established physical plausibility and interaction metrics, outperforming competing methods by over 1.2% on the non-collision metric and by over 18% on the contact metric. We have integrated our interactive system with Microsoft HoloLens and demonstrate its benefits in real-world indoor scenes. Our project website is available at //gamma.umd.edu/pace/.
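
As a loose illustration of the per-frame optimization step (hypothetical objective terms throughout; the paper's actual losses for realistic flow and natural-looking motion are richer than this sketch, and `grad_penetration` is an assumed helper, not PACE's API):

```python
import numpy as np

def refine_frame(pose, prev_pose, grad_penetration,
                 w_pen=10.0, w_smooth=1.0, lr=1e-2, steps=200):
    """Hypothetical per-frame refinement: gradient descent on a scene
    penetration penalty plus a temporal smoothness term tying the pose
    vector to the previous frame. `grad_penetration(pose)` is assumed
    to return the gradient of a scene-collision penalty."""
    pose = pose.copy()
    for _ in range(steps):
        g = w_pen * grad_penetration(pose) + 2.0 * w_smooth * (pose - prev_pose)
        pose -= lr * g
    return pose
```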

Words in a natural language not only transmit information but also evolve with the development of civilization and human migration. The same is true for music. To understand the complex structure behind music, we introduce an algorithm called the Essential Element Network (EEN) to encode audio into text. The network is obtained by calculating the correlations between scales, time, and volume. Optimizing the EEN so that the frequency-rank distribution of its clustering coefficients follows Zipf's law enables us to generate semantic relationships and treat them as words. We map these encoded words into the scale-temporal space, which helps us systematically organize the syntax in the deep structure of music. Our algorithm provides precise descriptions of the complex network behind the music, as opposed to the black-box nature of other deep learning approaches. As a result, the experience and properties accumulated through these processes can offer not only a new approach to applications of Natural Language Processing (NLP) but also an easier and more objective way to analyze the evolution and development of music.
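
For the Zipf criterion used as the optimization target, a generic rank-frequency sketch (the paper applies the check to the clustering coefficients of the EEN rather than to raw tokens, so the input here is an assumption for illustration):

```python
import numpy as np
from collections import Counter

def zipf_slope(tokens):
    """Fit log(frequency) ~ log(rank) over the rank-frequency curve;
    a slope near -1 indicates Zipf's law."""
    freqs = np.array(sorted(Counter(tokens).values(), reverse=True), float)
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return slope
```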

This PhD thesis contains several contributions to the field of statistical causal modeling. Statistical causal models are statistical models embedded with causal assumptions that allow for inference and reasoning about the behavior of stochastic systems affected by external manipulation (interventions). This thesis contributes to the research areas concerning the estimation of causal effects, causal structure learning, and distributionally robust (out-of-distribution generalizing) prediction methods. We present novel and consistent linear and non-linear causal effect estimators for instrumental variable settings that employ data-dependent mean squared prediction error regularization. Our proposed estimators show, in certain settings, mean squared error improvements compared to both canonical and state-of-the-art estimators. We show that recent research on distributionally robust prediction methods has connections to well-studied estimators from econometrics. This connection leads us to prove that general K-class estimators possess distributional robustness properties. Furthermore, we propose a general framework for distributional robustness with respect to intervention-induced distributions. In this framework, we derive sufficient conditions for the identifiability of distributionally robust prediction methods and present impossibility results that show the necessity of several of these conditions. We present a new structure learning method applicable in additive noise models with directed trees as causal graphs. We prove consistency in a vanishing identifiability setup and provide a method for testing substructure hypotheses with asymptotic family-wise error control that remains valid post-selection. Finally, we present heuristic ideas for learning summary graphs of nonlinear time-series models.
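
For reference, the general K-class estimator mentioned here interpolates between OLS (kappa = 0) and two-stage least squares (kappa = 1); a minimal sketch with outcome y, regressors X, and instruments Z (standard textbook form, not code from the thesis):

```python
import numpy as np

def k_class(y, X, Z, kappa):
    """K-class estimator: beta = (X' W X)^{-1} X' W y with
    W = I - kappa * (I - P_Z), P_Z the projection onto col(Z)."""
    n = len(y)
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    W = np.eye(n) - kappa * (np.eye(n) - Pz)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```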

Dynamic programming (DP) solves a variety of structured combinatorial problems by iteratively breaking them down into smaller subproblems. In spite of their versatility, DP algorithms are usually non-differentiable, which hampers their use as a layer in neural networks trained by backpropagation. To address this issue, we propose to smooth the max operator in the dynamic programming recursion using a strongly convex regularizer. This allows us to relax both the optimal value and the solution of the original combinatorial problem, and turns a broad class of DP algorithms into differentiable operators. Theoretically, we provide a new probabilistic perspective on backpropagating through these DP operators and relate them to inference in graphical models. We derive two particular instantiations of our framework: a smoothed Viterbi algorithm for sequence prediction and a smoothed DTW algorithm for time-series alignment. We showcase these instantiations on two structured prediction tasks and on structured and sparse attention for neural machine translation.
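
A minimal sketch of one of the two instantiations, smoothed DTW: with a negentropy regularizer the smoothed max (here, min) becomes a log-sum-exp, and the classic recursion is otherwise unchanged (value computation only; the backward pass is omitted):

```python
import numpy as np

def softmin(vals, gamma):
    """Negentropy-smoothed min: -gamma * log(sum(exp(-v / gamma)))."""
    v = np.asarray(vals) / -gamma
    m = v.max()                       # stabilise the log-sum-exp
    return -gamma * (m + np.log(np.exp(v - m).sum()))

def soft_dtw(C, gamma=1.0):
    """Smoothed DTW value for a pairwise cost matrix C, differentiable in C."""
    n, m = C.shape
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = C[i - 1, j - 1] + softmin(
                [D[i - 1, j], D[i, j - 1], D[i - 1, j - 1]], gamma)
    return D[n, m]
```

As gamma -> 0 the softmin approaches the hard min and the classic DTW value is recovered; for gamma > 0 the value is smooth in C, so it can serve as a differentiable layer or loss.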
