
We develop a statistical inference method applicable to a broad range of generalized linear models (GLMs) in high-dimensional settings, where the number of unknown coefficients scales proportionally with the sample size. Although a pioneering inference method exists for logistic regression, a specific instance of GLMs, it cannot be applied directly to other GLMs because of unknown hyper-parameters. We address this limitation by developing a new inference method designed for a class of GLMs. Our method is based on adjusting asymptotic normality in high dimensions and remains feasible even when the hyper-parameters are unknown. Specifically, we introduce a novel convex-loss-based estimator and its associated system, which are essential components of the inference. We then devise a moment-based method for estimating the system parameters that the inference requires. As a result, we can construct confidence intervals for GLMs in the high-dimensional regime. We prove that the proposed method enjoys desirable theoretical properties, such as strong consistency and exact coverage probability, and we confirm its validity experimentally.
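
To make the final inference step concrete, here is a minimal, purely illustrative sketch, assuming the adjusted-normality relation takes the typical proportional-regime form hat_beta_j ≈ alpha * beta_j + sigma * Z with Z standard Gaussian; `alpha_hat` and `sigma_hat` below are placeholders for the paper's moment-based system-parameter estimates, not its actual routine.

```python
# Hypothetical sketch: coordinate-wise confidence intervals from an
# "adjusted asymptotic normality" relation hat_beta_j ~= alpha * beta_j + sigma * Z,
# which is the form such high-dimensional results typically take.
import numpy as np
from scipy.stats import norm

def confidence_intervals(beta_hat, alpha_hat, sigma_hat, level=0.95):
    """De-bias each coordinate and attach a Gaussian interval."""
    z = norm.ppf(0.5 + level / 2.0)     # two-sided critical value
    center = beta_hat / alpha_hat       # bias-corrected point estimate
    half = z * sigma_hat / alpha_hat    # adjusted standard error
    return center - half, center + half

beta_hat = np.array([1.42, -0.05, 0.88])  # toy fitted coefficients
lo, hi = confidence_intervals(beta_hat, alpha_hat=1.3, sigma_hat=0.4)
print(np.c_[lo, hi])
```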

Related content

This work examines the distributed optimal control of generalized Oseen equations with non-constant viscosity. We propose and analyze a new conforming augmented mixed finite element (CG) method and a discontinuous Galerkin (DG) method for the velocity-vorticity-pressure formulation. The continuous formulation, which incorporates least-squares terms from both the constitutive equation and the incompressibility condition, is well-posed under certain assumptions on the viscosity parameter. The CG method is divergence-conforming and suits any Stokes inf-sup stable velocity-pressure finite element pair, while a generic discrete space approximates the vorticity. The DG scheme employs a stabilization technique, and a piecewise constant discretization approximates the control variable. We establish optimal a priori and residual-based a posteriori error estimates for the proposed schemes. Finally, we provide numerical experiments that showcase the performance and effectiveness of the methods.
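
For orientation, below is a schematic statement of a generalized Oseen problem in velocity-vorticity-pressure form with variable viscosity; this is the generic shape such systems take, not necessarily the exact formulation analyzed in the paper.

```latex
% Schematic generalized Oseen system in (u, omega, p) form, variable viscosity nu(x):
\begin{aligned}
\sigma\,\mathbf{u} + (\boldsymbol{\beta}\cdot\nabla)\mathbf{u}
  + \operatorname{curl}\bigl(\nu(x)\,\boldsymbol{\omega}\bigr) + \nabla p &= \mathbf{f}
  && \text{in } \Omega,\\
\boldsymbol{\omega} - \operatorname{curl}\mathbf{u} &= \mathbf{0} && \text{in } \Omega,\\
\operatorname{div}\mathbf{u} &= 0 && \text{in } \Omega.
\end{aligned}
```

In this reading, the least-squares terms mentioned in the abstract penalize the residuals of the second (constitutive) and third (incompressibility) equations.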

Tensegrities synergistically combine tensile (cable) and rigid (link) elements to achieve structural integrity, making them lightweight, packable, and impact resistant. Consequently, they have high potential for locomotion in unstructured environments. This research presents geometric modeling of a Tensegrity eXploratory Robot (TeXploR), composed of two semi-circular, curved links held together by 12 prestressed cables and actuated by an internal mass shifting along each link. This design allows for efficient rolling with stability (e.g., tip-over on an incline). However, the unique design poses static and dynamic modeling challenges given the discontinuous nature of the semi-circular, curved links, the two changing points of contact with the surface plane, and the instantaneous movement of the masses along the links. The robot is modeled using a geometric approach in which holonomic constraints confirm the experimentally observed four-state hybrid system, proving that TeXploR rolls along one link while pivoting about the end of the other. The approach also identifies the quasi-static state-transition boundaries that enable a continuous change in the robot states via internal mass shifting. This is the first time in the literature that a non-spherical two-point-contact system has been kinematically and geometrically modeled. Furthermore, the static solutions are closed-form and do not require numerical exploration. The MATLAB simulations are experimentally validated on a tetherless prototype with a mean absolute error of 4.36°.
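
As a loose illustration of the kind of quasi-static reasoning involved (generic rigid-body statics, not the paper's model), the sketch below checks whether the projected center of mass lies over the segment between the two contact points; crossing that boundary is the sort of state transition that shifting the internal masses induces.

```python
# Illustrative two-contact statics check: a quasi-static pose balances about
# the contact line only if the gravity line through the combined center of
# mass (COM) meets the support segment between the contact points.
import numpy as np

def com_over_support(com_xy, contact_a_xy, contact_b_xy, tol=1e-12):
    """Project the COM onto the ground plane and locate it w.r.t. the segment."""
    a, b, c = map(np.asarray, (contact_a_xy, contact_b_xy, com_xy))
    t = np.dot(c - a, b - a) / max(np.dot(b - a, b - a), tol)
    between_contacts = 0.0 <= t <= 1.0
    lateral_offset = np.linalg.norm(c - (a + t * (b - a)))  # sign-free offset
    return between_contacts, lateral_offset  # nonzero offset => rolls/pivots

# Toy numbers: moving an internal mass moves the COM and can cross the boundary.
print(com_over_support([0.10, 0.02], [0.0, 0.0], [0.25, 0.0]))
```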

We present new fundamental results for the mean square error (MSE)-optimal conditional mean estimator (CME) in one-bit quantized systems for a Gaussian mixture model (GMM) distributed signal of interest, possibly corrupted by additive white Gaussian noise (AWGN). We first derive novel closed-form analytic expressions for the Bussgang estimator, the well-known linear minimum mean square error (MMSE) estimator in quantized systems. Afterward, closed-form analytic expressions for the CME in special cases are presented, revealing that the optimal estimator is linear in the one-bit quantized observation, in contrast to higher-resolution cases. Through a comparison to the recently studied Gaussian case, we establish a novel MSE inequality and show that the signal of interest is correlated with the auxiliary quantization noise. We extend our analysis to multiple-observation scenarios, examining the MSE-optimal transmit sequence and conducting an asymptotic analysis that yields analytic expressions for the MSE and its limit. These contributions have broad impact on the analysis and design of various signal processing applications.
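
The scalar pure-Gaussian case (a degenerate one-component GMM) makes the linearity claim easy to verify numerically: for y = x + n and r = sign(y), the CME E[x | r] equals r times sqrt(2/pi) * sigma_x^2 / sigma_y, which coincides with the Bussgang/LMMSE estimate. The Monte Carlo check below is our own sanity check of this textbook special case, not code from the paper.

```python
# Monte Carlo check of the scalar one-bit Gaussian case: E[x | r] is linear in r.
import numpy as np

rng = np.random.default_rng(0)
sx, sn = 1.0, 0.5                                # signal / noise std deviations
x = rng.normal(0.0, sx, 1_000_000)
r = np.sign(x + rng.normal(0.0, sn, x.size))     # one-bit observation

sy = np.hypot(sx, sn)                            # std of y = x + n
theory = np.sqrt(2.0 / np.pi) * sx**2 / sy       # closed-form E[x | r = +1]
print("empirical:", x[r > 0].mean(), " theory:", theory)
```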

Score distillation sampling (SDS), in which the score from pretrained 2D diffusion models is distilled into a 3D representation, has recently brought significant advances in the text-to-3D generation task. However, the approach still suffers from critical geometric inconsistency problems such as the Janus problem. Starting from the hypothesis that such problems may be induced by multiview inconsistencies between the 2D scores predicted from different viewpoints, we introduce GSD, a simple and general plug-and-play framework that incorporates 3D consistency, and therefore geometry awareness, into the SDS process. Our methodology comprises three components: 3D consistent noising, which produces 3D-consistent noise maps that exactly follow the standard Gaussian distribution; geometry-based gradient warping, which identifies correspondences between the gradients predicted from different viewpoints; and a novel gradient consistency loss, which optimizes the scene geometry toward producing more consistent gradients. We demonstrate that our method significantly improves performance, successfully addressing the geometric inconsistency problems in text-to-3D generation at minimal computational cost while remaining compatible with existing score-distillation-based models. Our project page is available at //ku-cvlab.github.io/GSD/.
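
To fix intuition for the third component, here is a schematic of a cross-view gradient consistency penalty; the names, shapes, and the plain mean-squared form are our illustrative choices, not the authors' implementation.

```python
# Schematic cross-view gradient consistency: gradients predicted in view B are
# warped into view A through known pixel correspondences, and their
# disagreement is penalized.
import numpy as np

def gradient_consistency(grad_a, grad_b, corr_ab):
    """grad_a, grad_b: (H, W, C) per-view SDS gradients.
    corr_ab: (H, W, 2) integer (x, y) pixel coords in view B per pixel of A."""
    warped = grad_b[corr_ab[..., 1], corr_ab[..., 0]]  # gather B's gradient at A's match
    return np.mean((grad_a - warped) ** 2)             # L2 disagreement

H = W = 4
rng = np.random.default_rng(1)
corr = np.stack(np.meshgrid(np.arange(W), np.arange(H)), axis=-1)  # identity matches
print(gradient_consistency(rng.normal(size=(H, W, 3)),
                           rng.normal(size=(H, W, 3)), corr))
```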

Optimal mean shift vector (OMSV)-based importance sampling methods have long been prevalent in yield estimation and optimization as an industry standard. However, most OMSV-based methods are designed heuristically, without a rigorous understanding of their limitations. To this end, we propose VIS, the first variational analysis framework for yield problems, enabling a systematic refinement of OMSV. For instance, VIS reveals that the classic OMSV is suboptimal and that the optimal/true OMSV should always lie beyond the failure boundary, which immediately enables a free improvement for all OMSV-based methods. Using VIS, we show a progressive refinement of the classic OMSV, including closed-form incorporation of the full covariance, adjustment for asymmetric failure distributions, and capture of multiple failure regions, each of which contributes a further improvement of more than 2x. The proposed method inherits the simplicity and robustness of OMSV yet achieves up to a 29.03x speedup over state-of-the-art (SOTA) methods. We also demonstrate how the SOTA yield optimizer ASAIS can immediately benefit from our true OMSV, delivering 1.20x and 1.27x improvements in performance and efficiency, respectively, without additional computational overhead.
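
For readers unfamiliar with the OMSV family, the sketch below shows the generic mean-shift importance sampling it builds on: samples are drawn from a Gaussian shifted by `mu` toward the failure region and reweighted by the likelihood ratio. The shift `mu` here is a placeholder for whatever rule (classic or "true" OMSV) selects it.

```python
# Generic mean-shift importance sampling for failure-probability estimation.
import numpy as np

def is_failure_prob(indicator, mu, dim, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, dim)) + mu        # sample from N(mu, I)
    # likelihood ratio N(0, I) / N(mu, I) evaluated at each sample
    w = np.exp(-x @ mu + 0.5 * np.dot(mu, mu))
    return np.mean(w * indicator(x))

# Toy failure region {x1 > 3}: exact probability is Phi(-3) ~ 1.35e-3.
fail = lambda x: (x[:, 0] > 3.0).astype(float)
print(is_failure_prob(fail, mu=np.array([3.0, 0.0]), dim=2))
```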

Reduced order models (ROMs) have achieved considerable success in reducing the computational cost of traditional numerical methods across many disciplines. For convection-dominated (e.g., turbulent) flows, however, standard ROMs generally yield inaccurate results, usually affected by spurious oscillations. Thus, ROMs are usually equipped with numerical stabilization or closure models to account for the effect of the discarded modes. The literature on ROM closures and stabilizations is large and growing fast. In this paper, we focus on one particular type of ROM closure and stabilization inspired by Large Eddy Simulation (LES). These ROMs, which we call LES-ROMs, are extremely easy to implement, very efficient, and accurate. Carefully tuned LES-ROMs can accurately capture the average physical quantities of interest in challenging convection-dominated flows in many applications. LES-ROMs are constructed by leveraging spatial filtering, i.e., the same principle used to build classical LES models. This ensures modeling consistency between LES-ROMs and the approaches that generated the data used to train them. It also bridges two distinct research fields (LES and ROMs) that have been disconnected until now. This paper is a review of LES-ROMs. It starts with a description of a versatile LES strategy called evolve-filter-relax (EFR) that has been successfully used as a full order method. We then show how the EFR strategy, and spatial filtering in general, can be leveraged to construct LES-ROMs. Several applications of LES-ROMs are presented. Finally, we draw conclusions and outline several research directions and open questions in LES-ROM development. While we do not claim this review to be comprehensive, we certainly hope it serves as a brief and friendly introduction to this exciting research area, which has great potential for the practical numerical simulation of convection-dominated flows.
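
The EFR strategy is simple enough to show in a few lines. Below is a minimal sketch of one EFR step on a periodic 1D grid; the explicit Laplacian smoother (with filter radius `delta` in grid units) stands in for the differential filter typically used in EFR, and `chi` is the relaxation weight blending the evolved and filtered fields.

```python
# One evolve-filter-relax (EFR) step on a periodic 1D grid (toy stand-in).
import numpy as np

def efr_step(u, evolve, delta=0.5, chi=0.3):
    u_star = evolve(u)                                        # 1) evolve
    lap = np.roll(u_star, -1) - 2 * u_star + np.roll(u_star, 1)
    u_bar = u_star + delta**2 * lap                           # 2) filter (smooth)
    return (1.0 - chi) * u_star + chi * u_bar                 # 3) relax

# Toy evolve operator: linear advection by one grid cell per step.
u = np.sin(2 * np.pi * np.linspace(0, 1, 64, endpoint=False))
u = efr_step(u, evolve=lambda v: np.roll(v, 1))
print(u[:4])
```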

Estimating the density of a distribution from samples is a fundamental problem in statistics. In many practical settings, the Wasserstein distance is an appropriate error metric for density estimation. For example, when estimating population densities in a geographic region, a small Wasserstein distance means that the estimate is able to capture roughly where the population mass is. In this work, we study differentially private density estimation in the Wasserstein distance. We design and analyze instance-optimal algorithms for this problem that can adapt to easy instances. For distributions $P$ over $\mathbb{R}$, we consider a strong notion of instance-optimality: an algorithm that uniformly achieves the instance-optimal estimation rate is competitive with an algorithm that is told that the distribution is either $P$ or $Q_P$ for some distribution $Q_P$ whose probability density function (pdf) is within a factor of 2 of the pdf of $P$. For distributions over $\mathbb{R}^2$, we use a different notion of instance optimality: an algorithm is instance-optimal if it is competitive with an algorithm that is given a constant-factor multiplicative approximation of the density of the distribution. We characterize the instance-optimal estimation rates in both settings and show that they are uniformly achievable (up to polylogarithmic factors). Our approach for $\mathbb{R}^2$ extends to arbitrary metric spaces, as it proceeds via hierarchically separated trees. As a special case, our results yield instance-optimal private learning in TV distance for discrete distributions.
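
For context, a standard building block behind Wasserstein-accurate private density estimation on $[0,1]$ is to privatize histogram counts at every level of a dyadic tree: coarse levels pin down where the mass is, fine levels refine it. The sketch below shows this generic construction (with a simple pure-DP budget split across levels); it is not the paper's instance-optimal algorithm.

```python
# Laplace-noised dyadic histogram counts on [0, 1] at several resolutions.
import numpy as np

def private_dyadic_counts(samples, levels=6, eps=1.0, seed=0):
    rng = np.random.default_rng(seed)
    noisy = []
    for l in range(1, levels + 1):
        bins = np.histogram(samples, bins=2**l, range=(0.0, 1.0))[0]
        # one sample changes one bin per level, so spend eps / levels per level
        noisy.append(bins + rng.laplace(scale=levels / eps, size=bins.size))
    return noisy

data = np.random.default_rng(1).beta(2.0, 5.0, size=10_000)
for counts in private_dyadic_counts(data, levels=3):
    print(np.round(counts).astype(int))
```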

Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. Through detailed numerical experiments, we investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
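
The depth scaling at issue can be illustrated with a toy recurrence (our construction, not the paper's experiments): with updates x_{k+1} = x_k + L^(-beta) * f(x_k, theta_k) over L layers, beta = 1 with weights that are smooth in depth gives an ODE-like limit, while rough (here i.i.d.) weights with beta = 1/2 produce random-walk-like behavior reminiscent of an SDE limit.

```python
# Toy residual recurrence with i.i.d. "weights" and tunable depth scaling.
import numpy as np

def resnet_trajectory(x0, depth, beta, seed=0):
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(depth):
        theta = rng.normal()                          # rough, non-smooth in depth
        x = x + depth**(-beta) * np.tanh(theta * x)   # residual update
    return x

for L in (10, 100, 1000):
    # beta=1.0: increments shrink like 1/L; beta=0.5: O(1) CLT-scale fluctuations
    print(L, resnet_trajectory(1.0, L, beta=1.0), resnet_trajectory(1.0, L, beta=0.5))
```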

We introduce a multi-task setup for identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) built on shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports the construction of a scientific knowledge graph, which we use to analyze information in the scientific literature.
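
The shared-span idea is that one span encoder feeds all three task heads. The sketch below is schematic (the dimensions, linear heads, and dot-product coreference scorer are illustrative stand-ins, not SciIE's actual architecture).

```python
# One shared span representation, three task heads.
import numpy as np

rng = np.random.default_rng(0)
D, N_SPANS, N_TYPES, N_RELS = 32, 5, 6, 7
spans = rng.normal(size=(N_SPANS, D))        # shared span representations

W_ent = rng.normal(size=(D, N_TYPES))        # entity-typing head
W_rel = rng.normal(size=(2 * D, N_RELS))     # relation head over span pairs
pair = np.concatenate([spans[0], spans[1]])  # one candidate span pair
coref_score = spans[0] @ spans[1]            # toy coreference scorer

print((spans @ W_ent).shape, (pair @ W_rel).shape, coref_score)
```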

We introduce a generic framework that reduces the computational cost of object detection while retaining accuracy in scenarios where objects of varied sizes appear in high-resolution images. Detection proceeds in a coarse-to-fine manner: first on a down-sampled version of the image, and then on a sequence of higher-resolution regions identified as likely to improve detection accuracy. Built upon reinforcement learning, our approach consists of a model (R-net) that uses coarse detection results to predict the potential accuracy gain of analyzing a region at higher resolution, and another model (Q-net) that sequentially selects regions to zoom in on. Experiments on the Caltech Pedestrians dataset show that our approach reduces the number of processed pixels by over 50% without a drop in detection accuracy. The merits of our approach become more significant on a high-resolution test set collected from the YFCC100M dataset, where our approach maintains high detection performance while reducing the number of processed pixels by about 70% and the detection time by over 50%.
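
The selection loop can be summarized as follows; the R-net and Q-net are replaced here by plain gain/cost lookups, so this is an illustrative stand-in for the control flow rather than the paper's learned policy.

```python
# Coarse-to-fine region selection: zoom into the region with the highest
# predicted net benefit, and stop when no region is worth the extra pixels.
def detect_coarse_to_fine(regions, predicted_gain, cost, budget=3):
    """regions: list of region ids; predicted_gain/cost: dicts keyed by id."""
    processed = []
    for _ in range(budget):
        remaining = [r for r in regions if r not in processed]
        if not remaining:
            break
        best = max(remaining, key=lambda r: predicted_gain[r] - cost[r])
        if predicted_gain[best] <= cost[best]:  # zooming no longer pays off
            break
        processed.append(best)                  # run the detector at high res here
    return processed

gain = {"A": 0.9, "B": 0.4, "C": 0.1}
print(detect_coarse_to_fine(list(gain), gain, cost={k: 0.3 for k in gain}))
```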
