
The preimage or inverse image of a predefined subset of the range of a deterministic function, called the inverse set for short, is the set of points in the domain whose images lie in that predefined subset. To quantify the uncertainty in estimating such a set, one can construct data-dependent inner and outer confidence sets that serve, respectively, as sub- and super-sets of the true inverse set. Existing methods require strict assumptions and focus on dense functional data. In this work, we generalize the estimation of inverse sets to a wider range of data types by rigorously proving that, by inverting pre-constructed simultaneous confidence intervals (SCIs), confidence sets at multiple levels can be constructed simultaneously with the desired confidence, non-asymptotically. We provide a valid non-parametric bootstrap algorithm and open-source code for constructing confidence sets on dense functional data and multiple regression data. The method is exemplified in two distinct applications: identifying regions in North America experiencing rising temperatures using dense functional data, and evaluating the impact of statin usage and COVID-19 on the clinical outcomes of hospitalized patients using logistic regression data.
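
To make the inversion step concrete, here is a minimal sketch (ours, not the paper's released code) of how inner and outer confidence sets fall out of a pre-constructed simultaneous confidence band; the constant band half-width below is a placeholder for whatever SCI construction is used.

```python
import numpy as np

def confidence_sets(lower, upper, thresholds):
    """Invert simultaneous confidence bounds into inner/outer confidence
    sets for the inverse sets {s : mu(s) >= c}, for several levels c at
    once. `lower`/`upper` are the simultaneous bounds on a grid of domain
    points; because the band covers mu everywhere jointly, all returned
    pairs hold simultaneously at the SCI's confidence level."""
    sets = {}
    for c in thresholds:
        inner = lower >= c   # even the lower bound clears c: surely inside
        outer = upper >= c   # the upper bound clears c: cannot be ruled out
        sets[c] = (inner, outer)
    return sets

# Toy usage on a 1-D grid, with a constant half-width standing in for a
# bootstrap-calibrated simultaneous band.
grid = np.linspace(0.0, 1.0, 200)
mu_hat = np.exp(-50.0 * (grid - 0.5) ** 2)
half_width = 0.1
sets = confidence_sets(mu_hat - half_width, mu_hat + half_width, [0.25, 0.5])
```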

Related Content

We present new Dirichlet-Neumann and Neumann-Dirichlet algorithms with a time domain decomposition applied to unconstrained parabolic optimal control problems. After a spatial semi-discretization, we use the Lagrange multiplier approach to derive a coupled forward-backward optimality system, which can then be solved using a time domain decomposition. Due to the forward-backward structure of the optimality system, three variants can be found for the Dirichlet-Neumann and Neumann-Dirichlet algorithms. We analyze their convergence behavior and determine the optimal relaxation parameter for each algorithm. Our analysis reveals that the most natural algorithms are actually only good smoothers, and there are better choices which lead to efficient solvers. We illustrate our analysis with numerical experiments.
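
For readers who want the forward-backward structure on the page, here is the standard optimality system for a model tracking problem after spatial semi-discretization; the symbols $A$, $\nu$, $\hat y$ are illustrative conventions, not necessarily the paper's notation.

```latex
% Model problem: minimize over u
%   1/2 \int_0^T \|y - \hat y\|^2 \,dt  +  \nu/2 \int_0^T \|u\|^2 \,dt
% subject to  y'(t) = A y(t) + u(t),  y(0) = y_0.
% Introducing the Lagrange multiplier p and eliminating u = -p/\nu gives
% the coupled forward-backward optimality system:
\begin{aligned}
   y'(t) &= A\,y(t) - \tfrac{1}{\nu}\,p(t), & y(0) &= y_0,\\
  -p'(t) &= A^{\top} p(t) + y(t) - \hat y(t), & p(T) &= 0.
\end{aligned}
```

The state equation runs forward from $t=0$ while the adjoint runs backward from $t=T$; splitting the time window therefore leaves several defensible choices of which traces to exchange at the interface, which is where the Dirichlet-Neumann and Neumann-Dirichlet variants come from.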

Partial differential equations (PDEs) have become an essential tool for modeling complex physical systems. Such equations are typically solved numerically via mesh-based methods, such as finite element methods, whose outputs consist of the solutions on a set of mesh nodes over the spatial domain. However, these simulations are often prohibitively costly, which makes surveying the input space infeasible. In this paper, we propose an efficient emulator that simultaneously predicts the outputs on a set of mesh nodes, with theoretical justification of its uncertainty quantification. The novelty of the proposed method lies in the incorporation of the mesh node coordinates into the statistical model. In particular, the proposed method segments the mesh nodes into multiple clusters via a Dirichlet process prior and fits a Gaussian process model within each cluster. Most importantly, by revealing the underlying clustering structure, the proposed method can extract valuable flow physics present in the system, which can be used to guide further investigations. Real examples demonstrate that our proposed method has smaller prediction errors than its main competitors, with competitive computation time, and identifies interesting clusters of mesh nodes that exhibit coherent input-output relationships and possess physical significance, such as satisfying boundary conditions. An R package for the proposed methodology is provided in an open repository.
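
A minimal sketch of the two-stage idea, using off-the-shelf scikit-learn pieces as stand-ins for the paper's full Bayesian model: a truncated Dirichlet-process mixture for the node clustering, and one GP per cluster fitted to a cluster-averaged response (a simplification of the per-node model).

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy data: n_sim simulations, each producing outputs on n_node mesh nodes.
rng = np.random.default_rng(0)
n_sim, n_node = 30, 200
X = rng.uniform(size=(n_sim, 2))         # simulation inputs
coords = rng.uniform(size=(n_node, 2))   # mesh node coordinates
Y = np.sin(4 * X @ coords.T) + 0.01 * rng.standard_normal((n_sim, n_node))

# Stage 1: cluster mesh nodes by coordinates (and mean response) with a
# truncated Dirichlet-process mixture, standing in for the paper's DP prior.
feats = np.column_stack([coords, Y.mean(axis=0)])
dp = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(feats)
labels = dp.predict(feats)

# Stage 2: fit one GP emulator per cluster, mapping simulation inputs to
# the cluster-averaged output.
emulators = {}
for k in np.unique(labels):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
    emulators[k] = gp.fit(X, Y[:, labels == k].mean(axis=1))
```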

As quantum theory allows for information processing and computing tasks that are otherwise not possible with classical systems, there is a need for a quantum Internet beyond existing network systems. At the same time, the realization of a desirably functional quantum Internet is hindered by fundamental and practical challenges such as high loss during transmission of quantum systems, decoherence due to interaction with the environment, fragility of quantum states, etc. We study the implications of these constraints by analyzing the limitations on the scaling and robustness of the quantum Internet. Considering quantum networks, we present practical bottlenecks for secure communication, delegated computing, and resource distribution among end nodes. Motivated by the power of abstraction in graph theory (in association with quantum information theory), we consider graph-theoretic quantifiers to assess network robustness and provide critical values of communication lines for viable communication over the quantum Internet. In particular, we begin by discussing limitations on the usefulness of isotropic states as device-independent quantum key repeaters, which otherwise could be useful for device-independent quantum key distribution. We consider some quantum networks of practical interest, ranging from satellite-based networks connecting far-off spatial locations to currently available quantum processor architectures within computers, and analyze their robustness to perform quantum information processing tasks. Some of these tasks form primitives for delegated quantum computing, e.g., entanglement distribution and quantum teleportation. For some examples of quantum networks, we present algorithms to perform different quantum network tasks of interest, such as constructing the network structure, finding the shortest path between a pair of end nodes, and optimizing the flow of resources at a node.
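
As an illustration of the graph-theoretic side, the sketch below encodes a toy network with hypothetical per-link losses, finds the least-lossy route between end nodes, and uses node connectivity as a crude robustness proxy; none of the numbers come from the paper.

```python
import networkx as nx

# Toy quantum network: nodes are labs/satellites; edge weights are channel
# losses in dB (hypothetical values), so path length ~ total loss.
G = nx.Graph()
G.add_weighted_edges_from([
    ("Alice", "Repeater1", 3.0), ("Repeater1", "Bob", 4.5),
    ("Alice", "Satellite", 8.0), ("Satellite", "Bob", 7.0),
], weight="loss_db")

# Least-lossy route for entanglement distribution between the end nodes.
path = nx.shortest_path(G, "Alice", "Bob", weight="loss_db")
loss = nx.path_weight(G, path, weight="loss_db")

# A simple robustness proxy: minimum number of intermediate nodes whose
# removal disconnects the two end nodes.
robustness = nx.node_connectivity(G, "Alice", "Bob")
print(path, loss, robustness)
```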

In many scientific applications the aim is to infer a function which is smooth in some areas, but rough or even discontinuous in other areas of its domain. Such spatially inhomogeneous functions can be modelled in Besov spaces with suitable integrability parameters. In this work we study adaptive Bayesian inference over Besov spaces in the white noise model, from the point of view of rates of contraction, using $p$-exponential priors, which range between Laplace and Gaussian and possess regularity and scaling hyper-parameters. To achieve adaptation, we employ empirical and hierarchical Bayes approaches for tuning these hyper-parameters. Our results show that, while it is known that Gaussian priors can attain the minimax rate only in Besov spaces of spatially homogeneous functions, Laplace priors attain the minimax or nearly the minimax rate in both Besov spaces of spatially homogeneous functions and Besov spaces permitting spatial inhomogeneities.
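
For orientation, a standard series construction of a $p$-exponential prior looks as follows; the basis and scaling below are common conventions, not necessarily the paper's exact setup.

```latex
% A standard series construction of a p-exponential random function:
\[
  f \;=\; \sum_{\ell=1}^{\infty} \gamma_\ell\, \xi_\ell\, e_\ell,
  \qquad
  \xi_\ell \overset{\text{iid}}{\sim} f_p(x) \propto
     \exp\!\left(-\frac{|x|^{p}}{p}\right),
  \qquad 1 \le p \le 2,
\]
% where (e_\ell) is an orthonormal basis and the decay of the scaling
% sequence \gamma_\ell encodes the regularity hyper-parameter; p = 2
% recovers a Gaussian prior and p = 1 a Laplace prior, the two endpoints
% mentioned above.
```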

Eigenvector decomposition (EVD) is an unavoidable operation for obtaining the precoders in practical massive multiple-input multiple-output (MIMO) systems. Due to the large antenna size and the finite computation resources at the base station (BS), the overwhelming computation complexity of EVD is one of the key limiting factors of system performance. To address this problem, we propose an eigenvector prediction (EGVP) method that interpolates the precoding matrix with predicted eigenvectors. The basic idea is to exploit a few historical precoders to interpolate the rest of them without performing EVD of the channel state information (CSI). We transform the nonlinear EVD into a linear prediction problem and prove that the prediction of the eigenvectors can be achieved with a complex exponential model. Furthermore, a channel prediction method called fast matrix pencil prediction (FMPP) is proposed to cope with the CSI delay when applying the EGVP method in mobility environments. An asymptotic analysis demonstrates how many samples are needed to achieve asymptotically error-free eigenvector and channel predictions. Finally, simulation results demonstrate the spectral efficiency improvement of our scheme over the benchmarks and its robustness to different mobility scenarios.
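
The core prediction step can be sketched in a few lines: if each tracked coefficient follows a sum-of-complex-exponentials model, least-squares linear prediction extrapolates it without any EVD. The function below is our illustrative reconstruction, not the authors' implementation, and it ignores the FMPP delay-compensation stage.

```python
import numpy as np

def linear_predict(seq, order, n_ahead):
    """Extrapolate a sequence assumed to follow a sum-of-complex-
    exponentials model via least-squares linear prediction (Prony-style).
    `seq` could be one entry of an eigenvector tracked over historical
    precoding instants (hypothetical setup)."""
    seq = np.asarray(seq, dtype=complex)
    # Linear-prediction system: seq[n] = sum_k a[k] * seq[n-1-k].
    A = np.array([seq[i:i + order][::-1] for i in range(len(seq) - order)])
    b = seq[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    out = list(seq)
    for _ in range(n_ahead):
        out.append(np.dot(a, out[-1:-order - 1:-1]))  # most recent first
    return np.array(out[len(seq):])

# Toy check: two noise-free complex exponentials are predicted exactly.
n = np.arange(20)
s = np.exp(1j * 0.3 * n) + 0.5 * np.exp(1j * 1.1 * n)
pred = linear_predict(s, order=4, n_ahead=5)
```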

Neural dynamical systems with stable attractor structures, such as point attractors and continuous attractors, are hypothesized to underlie meaningful temporal behavior that requires working memory. However, working memory may not support useful learning signals necessary to adapt to changes in the temporal structure of the environment. We show that in addition to the continuous attractors that are widely implicated, periodic and quasi-periodic attractors can also support learning arbitrarily long temporal relationships. Unlike the continuous attractors that suffer from the fine-tuning problem, the less explored quasi-periodic attractors are uniquely qualified for learning to produce temporally structured behavior. Our theory has broad implications for the design of artificial learning systems and makes predictions about observable signatures of biological neural dynamics that can support temporal dependence learning and working memory. Based on our theory, we developed a new initialization scheme for artificial recurrent neural networks that outperforms standard methods for tasks that require learning temporal dynamics. Moreover, we propose a robust recurrent memory mechanism for integrating and maintaining head direction without a ring attractor.
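
To illustrate the flavor of such an initialization (an illustrative stand-in, not the authors' exact recipe), one can initialize the recurrent weights as block-diagonal 2-by-2 rotations, whose linear dynamics live on invariant tori and are hence quasi-periodic rather than decaying or exploding:

```python
import numpy as np

def rotation_block_init(n, rng=None):
    """Initialize an n x n recurrent weight matrix as block-diagonal 2x2
    rotations with random frequencies. Iterating h <- W h then traces out
    (quasi-)periodic orbits with preserved norm, avoiding the vanishing
    and exploding regimes of generic random initializations."""
    rng = np.random.default_rng(rng)
    W = np.zeros((n, n))
    for i in range(0, n - 1, 2):
        theta = rng.uniform(0, 2 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        W[i:i + 2, i:i + 2] = [[c, -s], [s, c]]
    if n % 2:                      # leftover unit on the diagonal if n odd
        W[-1, -1] = 1.0
    return W

W = rotation_block_init(64, rng=0)
assert np.allclose(W @ W.T, np.eye(64))   # orthogonal: norms preserved
```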

Many economic panel and dynamic models, such as rational behavior and Euler equations, imply that the parameters of interest are identified by conditional moment restrictions with high dimensional conditioning instruments. We develop a novel inference method for the parameters identified by conditional moment restrictions, where the dimension of the conditioning instruments is high and there is no prior information about which conditioning instruments are weak or irrelevant. Building on Bierens (1990), we propose penalized maximum statistics and combine bootstrap inference with model selection. Our method optimizes the asymptotic power against a set of $n^{-1/2}$-local alternatives of interest by solving a data-dependent max-min problem for tuning parameter selection. We demonstrate the efficacy of our method with two empirical examples: the elasticity of intertemporal substitution and rational unbiased reporting of ability status. Extensive Monte Carlo experiments based on the first empirical example show that our inference procedure is superior to those available in the literature in realistic settings.
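
A stripped-down sketch of the statistic's ingredients, namely studentized moments over many Bierens-type instruments with a Gaussian-multiplier bootstrap for the critical value; the paper's penalization and data-driven tuning-parameter selection are omitted here.

```python
import numpy as np

def max_stat_pvalue(m, psi, n_boot=999, seed=0):
    """Multiplier-bootstrap p-value for a studentized maximum statistic
    over many conditioning instruments (a simplified sketch, without the
    penalty term). m: (n,) moment residuals; psi: (n, J) instruments."""
    rng = np.random.default_rng(seed)
    n, J = psi.shape
    g = m[:, None] * psi                        # per-observation moments
    mean, sd = g.mean(0), g.std(0, ddof=1)
    stat = np.sqrt(n) * np.max(np.abs(mean / sd))
    centered = g - mean
    boot = np.empty(n_boot)
    for b in range(n_boot):
        w = rng.standard_normal(n)              # Gaussian multipliers
        gb = (w[:, None] * centered).mean(0)
        boot[b] = np.sqrt(n) * np.max(np.abs(gb / sd))
    return (1 + np.sum(boot >= stat)) / (1 + n_boot)

# Toy usage with cosine/sine (Bierens-type) instrument transforms.
rng = np.random.default_rng(1)
z = rng.standard_normal((500, 3))
psi = np.hstack([np.cos(z), np.sin(z)])
m = rng.standard_normal(500)                    # residuals under the null
print(max_stat_pvalue(m, psi))
```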

A superdirective antenna array has the potential to achieve an array gain proportional to the square of the number of antennas, making it of great value for future wireless communications. However, designing the superdirective beamformer while considering the complicated mutual-coupling effect is a practical challenge. Moreover, the superdirective antenna array is highly sensitive to excitation errors, especially when the number of antennas is large or the antenna spacing is very small, necessitating demanding and precise control over excitations. To address these problems, we first propose a novel superdirective beamforming approach based on the embedded element pattern (EEP), which contains the coupling information. The closed-form solution to the beamforming vector and the corresponding directivity factor are derived. This method relies on the beam coupling factors (BCFs) between the antennas, which are provided in closed form. To address the high sensitivity problem, we formulate a constrained optimization problem and propose an EEP-aided orthogonal complement-based robust beamforming (EEP-OCRB) algorithm. Full-wave simulation results validate our proposed methods. Finally, we build a prototype of a 5-dipole superdirective antenna array and conduct real-world experiments. The measurement results demonstrate the realization of the superdirectivity with our EEP-based method, as well as the robustness of the proposed EEP-OCRB algorithm to excitation errors.
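
The closed form referred to above is, at its core, a generalized Rayleigh quotient: with a Hermitian matrix $B$ of beam coupling factors and the stacked element responses $e_0$ in the look direction, directivity $|w^H e_0|^2 / (w^H B w)$ is maximized by $w \propto B^{-1} e_0$. Below is a small numerical sketch with a generic $B$ and $e_0$; the paper derives the BCFs from embedded element patterns in closed form.

```python
import numpy as np

def max_directivity_beamformer(B, e0):
    """Closed-form maximum-directivity weights for a coupled array.
    B  : (N, N) Hermitian beam-coupling-factor matrix (generic stand-in).
    e0 : (N,) element-pattern responses in the target direction.
    D(w) = |w^H e0|^2 / (w^H B w) is maximized by w ~ B^{-1} e0, with
    D_max = e0^H B^{-1} e0."""
    w = np.linalg.solve(B, e0)
    d_max = np.real(np.vdot(e0, w))          # e0^H B^{-1} e0
    return w / np.linalg.norm(w), d_max

# Toy usage with a well-conditioned Hermitian coupling matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
B = M @ M.conj().T + 4 * np.eye(4)
e0 = np.exp(1j * np.pi * 0.4 * np.arange(4))  # stand-in steering response
w, d_max = max_directivity_beamformer(B, e0)
```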

In a given generalized linear model with fixed effects, and under a specified loss function, what is the optimal estimator of the coefficients? We propose as a contender an ideal (oracle) shrinkage estimator, specifically, the Bayes estimator under the particular prior that assigns equal mass to every permutation of the true coefficient vector. We first study this ideal shrinker, showing some optimality properties in both frequentist and Bayesian frameworks by extending notions from Robbins's compound decision theory. To compete with the ideal estimator, taking advantage of the fact that it depends on the true coefficients only through their {\it empirical distribution}, we postulate a hierarchical Bayes model, which can be viewed as a nonparametric counterpart of the usual Gaussian hierarchical model. More concretely, the individual coefficients are modeled as i.i.d.~draws from a common distribution $\pi$, which is itself modeled as random and assigned a Polya tree prior to reflect indefiniteness. We show in simulations that the posterior mean of $\pi$ approximates well the empirical distribution of the true, {\it fixed} coefficients, effectively solving a nonparametric deconvolution problem. This allows the posterior estimates of the coefficient vector to learn the correct shrinkage pattern without parametric restrictions. We compare our method with popular parametric alternatives on the challenging task of gene mapping in the presence of polygenic effects. In this scenario, the regressors exhibit strong spatial correlation, and the signal consists of a dense polygenic component along with several prominent spikes. Our analysis demonstrates that, unlike standard high-dimensional methods such as ridge regression or the Lasso, the proposed approach recovers the intricate signal structure and results in better estimation and prediction accuracy in supporting simulations.
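
As a crude, self-contained stand-in for the hierarchical model (an EM-fitted discrete mixing distribution in a Gaussian sequence model, rather than a Polya tree in a GLM), the sketch below shows the shrinkage mechanism: estimate the common distribution $\pi$ from the data, then report posterior means under it.

```python
import numpy as np

def npmle_shrinkage(z, grid, n_iter=200):
    """Nonparametric empirical-Bayes shrinkage for z_i ~ N(theta_i, 1):
    estimate the mixing distribution pi on a fixed grid by EM, then
    return the posterior means E[theta_i | z_i] under the fitted pi."""
    lik = np.exp(-0.5 * (z[:, None] - grid[None, :]) ** 2)  # N(grid, 1)
    w = np.full(grid.size, 1.0 / grid.size)                 # initial pi
    for _ in range(n_iter):
        post = lik * w
        post /= post.sum(1, keepdims=True)   # E-step: responsibilities
        w = post.mean(0)                     # M-step: update pi
    return post @ grid                       # posterior means

# Toy usage: a sparse-plus-dense signal, as in the gene-mapping scenario.
theta_hat = npmle_shrinkage(np.array([0.1, 3.2, -2.8, 0.4]),
                            np.linspace(-5, 5, 101))
```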

We propose a new way to assess certain short constructed responses to mathematics items. Our approach uses a pipeline that identifies the key values specified by the student in their response, which allows us to determine the correctness of the response and to identify any misconceptions. The information from the value identification pipeline can then be used to provide feedback to the teacher and student. The pipeline consists of two fine-tuned language models: the first determines whether a value is implicit in the student response, and the second identifies where in the response the key value is specified. We consider both a generic model that can be used for any prompt and value, and models that are specific to each prompt and value. The value identification pipeline is a more accurate and informative way to assess short constructed responses than traditional rubric-based scoring, and it can be used to provide more targeted feedback to students, helping them improve their understanding of mathematics.
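
A sketch of the two-model pipeline using Hugging Face `pipeline` objects; the checkpoints below are public stand-ins (a sentiment classifier and a SQuAD extractive-QA model), since the paper's fine-tuned models are not specified here.

```python
from transformers import pipeline

# Stand-in checkpoints: a generic classifier playing the role of the
# value-presence model, and an extractive-QA model playing the role of
# the value-localization model.
presence = pipeline("text-classification",
                    model="distilbert-base-uncased-finetuned-sst-2-english")
extractor = pipeline("question-answering",
                     model="distilbert-base-uncased-distilled-squad")

response = "First I doubled 12 to get 24, then added 5 for a total of 29."
key_value = "24"

# Stage 1: does the response express the key value (possibly implicitly)?
stage1 = presence(f"value: {key_value} response: {response}")

# Stage 2: where in the response is the key value specified?
stage2 = extractor(question=f"Where is the value {key_value} computed?",
                   context=response)
print(stage1, stage2["start"], stage2["end"], stage2["answer"])
```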
