
A signal recovery problem is considered, where the same binary testing problem is posed over multiple, independent data streams. The goal is to identify all signals, i.e., streams where the alternative hypothesis is correct, and noises, i.e., streams where the null hypothesis is correct, subject to prescribed bounds on the classical or generalized familywise error probabilities. It is not required that the exact number of signals be a priori known, only upper bounds on the number of signals and noises are assumed instead. A decentralized formulation is adopted, according to which the sample size and the decision for each testing problem must be based only on observations from the corresponding data stream. A novel multistage testing procedure is proposed for this problem and is shown to enjoy a high-dimensional asymptotic optimality property. Specifically, it achieves the optimal, average over all streams, expected sample size, uniformly in the true number of signals, as the maximum possible numbers of signals and noises go to infinity at arbitrary rates, in the class of all sequential tests with the same global error control. In contrast, existing multistage tests in the literature are shown to achieve this high-dimensional asymptotic optimality property only under additional sparsity or symmetry conditions. These results are based on an asymptotic analysis for the fundamental binary testing problem as the two error probabilities go to zero. For this problem, unlike existing multistage tests in the literature, the proposed test achieves the optimal expected sample size under both hypotheses, in the class of all sequential tests with the same error control, as the two error probabilities go to zero at arbitrary rates. These results are further supported by simulation studies and extended to problems with non-iid data and composite hypotheses.
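As a minimal illustration of the single-stream building block (a classical sequential probability ratio test, not the multistage procedure the abstract proposes), a decentralized scheme would run one such test per data stream; all parameter choices below are illustrative:

```python
import math

def sprt(observations, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05):
    """Classical SPRT for H0: mean=mu0 vs H1: mean=mu1 (Gaussian, known sigma).

    Returns (decision, n): decision is 1 if H1 is accepted, 0 if H0 is
    accepted, None if the data ran out before a boundary was crossed.
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 above this
    lower = math.log(beta / (1 - alpha))   # accept H0 below this
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        # Log-likelihood-ratio increment for one Gaussian observation.
        llr += (x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2
        if llr >= upper:
            return 1, n
        if llr <= lower:
            return 0, n
    return None, len(observations)
```

In the decentralized formulation, each stream's sample size and decision depend only on that stream's observations, exactly as each call above sees only one sequence.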

Related Content

Sequential testing, always-valid $p$-values, and confidence sequences promise flexible statistical inference and on-the-fly decision making. However, unlike fixed-$n$ inference based on asymptotic normality, existing sequential tests either make parametric assumptions and end up under-covering/over-rejecting when these fail or use non-parametric but conservative concentration inequalities and end up over-covering/under-rejecting. To circumvent these issues, we sidestep exact at-least-$\alpha$ coverage and focus on asymptotically exact coverage and asymptotic optimality. That is, we seek sequential tests whose probability of ever rejecting a true hypothesis asymptotically approaches $\alpha$ and whose expected time to reject a false hypothesis approaches a lower bound on all tests with asymptotic coverage at least $\alpha$, both under an appropriate asymptotic regime. We permit observations to be both non-parametric and dependent and focus on testing whether the observations form a martingale difference sequence. We propose the universal sequential probability ratio test (uSPRT), a slight modification to the normal-mixture sequential probability ratio test, where we add a burn-in period and adjust thresholds accordingly. We show that even in this very general setting, the uSPRT is asymptotically optimal under mild generic conditions. We apply the results to stabilized estimating equations to test means, treatment effects, etc. Our results also provide corresponding guarantees for the implied confidence sequences. Numerical simulations verify our guarantees and the benefits of the uSPRT over alternatives.
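A rough sketch of the idea, assuming unit-variance observations under H0: mean zero, using Robbins' normal-mixture likelihood ratio with a plain 1/alpha threshold (the actual uSPRT adjusts its thresholds to account for the burn-in, which is omitted here):

```python
import math

def mixture_llr(xs, tau2=1.0):
    """Log of the normal-mixture likelihood ratio for H0: mean 0, with
    unit-variance Gaussians and a N(0, tau2) prior on the mean."""
    n = len(xs)
    s = sum(xs)
    return tau2 * s * s / (2 * (1 + n * tau2)) - 0.5 * math.log(1 + n * tau2)

def usprt_stop(stream, alpha=0.05, burn_in=3, tau2=1.0):
    """Reject H0 at the first time past the burn-in that the mixture
    likelihood ratio exceeds 1/alpha; returns the stopping time or None."""
    xs = []
    for t, x in enumerate(stream, start=1):
        xs.append(x)
        if t >= burn_in and mixture_llr(xs, tau2) >= math.log(1 / alpha):
            return t
    return None
```

The mixture statistic is a nonnegative martingale under H0, which is what makes the ever-rejection probability controllable by Ville's inequality in the parametric case; the paper's contribution is showing an analogous guarantee holds asymptotically in the non-parametric, dependent setting.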

In biomedical image analysis, the applicability of deep learning methods is directly impacted by the quantity of image data available, since deep learning models require large image datasets to achieve high-level performance. Generative Adversarial Networks (GANs) have been widely utilized to address data limitations through the generation of synthetic biomedical images. GANs consist of two models: the generator, which learns to produce synthetic images based on the feedback it receives, and the discriminator, which classifies an image as synthetic or real and provides feedback to the generator. Throughout the training process, a GAN can experience several technical challenges that impede the generation of suitable synthetic imagery. First, the mode collapse problem, whereby the generator either produces an identical image or produces a uniform image from distinct input features. Second, the non-convergence problem, whereby the gradient descent optimizer fails to reach a Nash equilibrium. Third, the vanishing gradient problem, whereby unstable training behavior occurs because the discriminator achieves optimal classification performance, leaving no meaningful feedback for the generator. These problems result in synthetic imagery that is blurry, unrealistic, and less diverse. To date, no survey article has outlined the impact of these technical challenges in the context of the biomedical imaging domain. This work presents a review and taxonomy of solutions to the training problems of GANs in the biomedical imaging domain, highlights important challenges, and outlines future research directions for the training of GANs on biomedical imagery.

Many electromagnetic time reversal (EMTR)-based fault location methods have been proposed in the last decade. In this paper, we briefly review the EMTR-based fault location method using direct convolution (EMTR-conv) and generalize it to multi-phase transmission lines. Moreover, noting that the parameters of real transmission lines are frequency-dependent, whereas previous studies often used constant parameters during the reverse process of EMTR-based methods, we investigate the influence of this simplification on fault location performance by considering frequency-dependent parameters and lossy ground in the forward process. The results show that the location error increases as the distance between the observation point and the fault position increases, especially when the ground resistivity is high. We therefore propose a correction method that reduces the location error by using two observation points. Numerical experiments carried out on a 3-phase, 300-km transmission line, covering different ground resistivities, fault types, and fault conditions, show that the method achieves small location errors and works efficiently via direct convolution of the signals collected from the fault with the pre-stored calculated transient signals.
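The matching step of such a scheme can be caricatured as a matched-filter search: compare the measured fault transient against a bank of pre-stored transients, one per candidate fault position, and pick the best match. This toy sketch (zero-lag inner-product score, synthetic signals; not the EMTR-conv method itself, which convolves the time-reversed measurement with the stored transients) only illustrates the lookup structure:

```python
def correlate_score(a, b):
    """Zero-lag inner product, used here as a crude similarity score."""
    return sum(x * y for x, y in zip(a, b))

def locate_fault(measured, templates):
    """Return the candidate fault position (a key of `templates`) whose
    pre-stored transient best matches the measured signal."""
    return max(templates, key=lambda pos: correlate_score(measured, templates[pos]))
```

In practice the pre-stored transients would be computed offline from a line model (here, frequency-dependent parameters and lossy ground), which is exactly where the modeling simplifications studied in the paper enter.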

The performance of large-scale computing systems often critically depends on high-performance communication networks. Dynamically reconfigurable topologies, e.g., based on optical circuit switches, are emerging as an innovative new technology to deal with the explosive growth of datacenter traffic. Specifically, \emph{periodic} reconfigurable datacenter networks (RDCNs) such as RotorNet (SIGCOMM 2017), Opera (NSDI 2020) and Sirius (SIGCOMM 2020) have been shown to provide high throughput, by emulating a \emph{complete graph} through fast periodic circuit switch scheduling. However, to achieve such high throughput, existing reconfigurable network designs pay a high price: in terms of potentially high delays, but also, as we show as a first contribution in this paper, in terms of high buffer requirements. In particular, we show that under buffer constraints, emulating the high-throughput complete graph is infeasible at scale, and we uncover a spectrum of unexplored and attractive alternative RDCNs, which emulate regular graphs with lower node degree than the complete graph. We present Mars, a periodic reconfigurable topology which emulates a $d$-regular graph with near-optimal throughput. In particular, we systematically analyze how the degree~$d$ can be optimized for throughput given the available buffer and delay tolerance of the datacenter. We further show empirically that Mars achieves higher throughput compared to existing systems when buffer sizes are bounded.
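The periodic schedule behind complete-graph emulation is, in essence, a round-robin tournament: the switch cycles through $n-1$ perfect matchings whose union is the complete graph. A sketch using the standard circle method (illustrative of the scheduling idea, not of any particular system's implementation) makes the degree trade-off concrete, since cycling through only the first $d$ matchings emulates a $d$-regular graph instead:

```python
def round_robin_matchings(n):
    """Circle-method round-robin schedule: n-1 perfect matchings on n
    nodes (n even) whose union is the complete graph.  A periodic switch
    cycling through only the first d of them emulates a d-regular graph."""
    assert n % 2 == 0, "circle method needs an even number of nodes"
    others = list(range(1, n))
    rounds = []
    for _ in range(n - 1):
        pairs = [(0, others[0])]          # node 0 stays fixed
        for i in range(1, n // 2):
            pairs.append((others[i], others[-i]))
        rounds.append(pairs)
        others = others[1:] + others[:1]  # rotate the ring of other nodes
    return rounds
```

A shorter cycle means each matching recurs more often (less buffering per edge) at the cost of multi-hop routing on the sparser emulated graph, which is the throughput/buffer trade-off the paper optimizes over $d$.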

Log-linear models are a family of probability distributions which capture relationships between variables. They have been proven useful in a wide variety of fields such as epidemiology, economics and sociology. The interest in using these models is that they are able to capture context-specific independencies, relationships that provide richer structure to the model. Many approaches exist for automatic learning of the independence structure of log-linear models from data. The methods for evaluating these approaches, however, are limited, and are mostly based on indirect measures of the complete density of the probability distribution. Such computation requires additional learning of the numerical parameters of the distribution, which introduces distortions when used for comparing structures. This work addresses this issue by presenting the first measure for the direct and efficient comparison of independence structures of log-linear models. Our method relies only on the independence structure of the models, which is useful when the interest lies in obtaining knowledge from said structure, or when comparing the performance of structure learning algorithms, among other possible uses. We present proof that the measure is a metric, and a method for its computation that is efficient in the number of variables of the domain.
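For contrast, a common baseline for comparing graph-encoded independence structures is the structural Hamming distance over edge sets; this is emphatically not the measure developed in the paper (which compares context-specific independencies directly), but it shows the kind of parameter-free, structure-only comparison being sought:

```python
def shd(edges_a, edges_b):
    """Structural Hamming distance between two undirected graphs, each
    given as a set of frozenset edges: the number of edges present in
    exactly one of the two structures."""
    return len(edges_a ^ edges_b)
```

Because it touches no numerical parameters, such a comparison avoids the distortions the abstract attributes to density-based evaluation; the paper's metric extends this spirit to the finer-grained context-specific independencies of log-linear models.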

This paper considers the distributed online convex optimization problem with time-varying constraints over a network of agents. This is a sequential decision making problem with two sequences of arbitrarily varying convex loss and constraint functions. At each round, each agent selects a decision from the decision set, and then only a portion of the loss function and a coordinate block of the constraint function at this round are privately revealed to this agent. The goal of the network is to minimize the network-wide loss accumulated over time. Two distributed online algorithms with full-information and bandit feedback are proposed. Both dynamic and static network regret bounds are analyzed for the proposed algorithms, and network cumulative constraint violation is used to measure constraint violation, which excludes the situation that strictly feasible constraints can compensate the effects of violated constraints. In particular, we show that the proposed algorithms achieve $\mathcal{O}(T^{\max\{\kappa,1-\kappa\}})$ static network regret and $\mathcal{O}(T^{1-\kappa/2})$ network cumulative constraint violation, where $T$ is the time horizon and $\kappa\in(0,1)$ is a user-defined trade-off parameter. Moreover, if the loss functions are strongly convex, then the static network regret bound can be reduced to $\mathcal{O}(T^{\kappa})$. Finally, numerical simulations are provided to illustrate the effectiveness of the theoretical results.
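A drastically simplified, single-agent, one-dimensional sketch of the two ingredients (projected online gradient descent with a $1/\sqrt{t}$ step, and cumulative constraint violation measured through positive parts only) may help fix ideas; the step-size and interval below are illustrative, and none of this captures the distributed, bandit, or coordinate-block aspects of the actual algorithms:

```python
import math

def ogd(grad_fns, x0=0.0, lo=-10.0, hi=10.0):
    """Projected online gradient descent on [lo, hi] with step 1/sqrt(t).
    grad_fns: one gradient oracle per round.  Returns the iterates played."""
    x, iterates = x0, []
    for t, grad in enumerate(grad_fns, start=1):
        iterates.append(x)
        x = x - grad(x) / math.sqrt(t)
        x = min(max(x, lo), hi)           # projection onto the interval
    return iterates

def cumulative_violation(g, iterates):
    """Cumulative constraint violation: sum of positive parts of g, so
    strictly feasible rounds cannot offset violated ones."""
    return sum(max(0.0, g(x)) for x in iterates)
```

Summing $\max\{0, g_t(x_t)\}$ rather than $g_t(x_t)$ is exactly the distinction the abstract draws: the plain sum would let rounds with $g_t(x_t) < 0$ cancel genuine violations.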

Dynamic Movement Primitives (DMP) have found remarkable applicability and success in various robotic tasks, which can be mainly attributed to their generalization and robustness properties. Nevertheless, their generalization is based only on the trajectory endpoints (initial and target position). Moreover, the spatial generalization of DMP is known to suffer from shortcomings like over-scaling and mirroring of the motion. In this work we propose a novel generalization scheme, based on optimizing the DMP weights online so that the acceleration profile, and hence the underlying training trajectory pattern, is preserved. This approach remedies the shortcomings of the classical DMP scaling and additionally allows the DMP to generalize to intermediate points (via-points) and external signals (coupling terms), while preserving the training trajectory pattern. Extensive comparative simulations with the classical and other DMP variants are conducted, and experimental results validate the applicability and efficacy of the proposed method.
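For readers unfamiliar with DMP, the underlying transformation system is a goal-attracting second-order system plus a learned forcing term; a minimal rollout with the forcing term omitted (so only the goal attraction is shown; gains and step size below are conventional choices, not the paper's) looks like:

```python
def dmp_rollout(y0, g, tau=1.0, az=25.0, bz=6.25, dt=0.001, T=3.0):
    """Euler-integrate the DMP transformation system with zero forcing:
        tau * dz = az * (bz * (g - y) - z),   tau * dy = z.
    With bz = az/4 the system is critically damped and converges
    smoothly from y0 to the goal g without overshoot."""
    y, z, ys = y0, 0.0, []
    for _ in range(int(T / dt)):
        dz = az * (bz * (g - y) - z) / tau
        dy = z / tau
        z += dz * dt
        y += dy * dt
        ys.append(y)
    return ys
```

The training trajectory's shape lives entirely in the omitted forcing term; the paper's scheme adapts that term's weights online so the encoded acceleration profile survives changes of start, goal, via-points, and coupling terms.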

We study the finite element approximation of the solid isotropic material with penalization (SIMP) model for the topology optimization of the compliance of a linearly elastic structure. To ensure the existence of a minimizer to the infinite-dimensional problem, we consider two popular restriction methods: $W^{1,p}$-type regularization and density filtering. Previous results prove weak(-*) convergence in the solution space of the material distribution to an unspecified minimizer of the infinite-dimensional problem. In this work, we show that, for every isolated local or global minimizer, there exists a sequence of finite element minimizers that strongly converges to the minimizer in the solution space. As a by-product, this ensures that there exists a sequence of unfiltered discretized material distributions that does not exhibit checkerboarding.
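For context, the (modified) SIMP interpolation and the compliance minimization problem it enters are standard and read, with penalization exponent $p > 1$:

```latex
% Modified SIMP interpolation of Young's modulus:
E(\rho) = E_{\min} + \rho^{p}\,(E_{0} - E_{\min}),
\qquad \rho \in [0,1],\; p > 1,
% Compliance minimization subject to elastic equilibrium
% and a volume constraint:
\min_{\rho}\; F^{\top} U(\rho)
\quad \text{s.t.} \quad
K(\rho)\, U(\rho) = F,
\qquad \int_{\Omega} \rho \,\mathrm{d}x \le V_{\max}.
```

The restriction methods named in the abstract ($W^{1,p}$-type regularization, density filtering) act on the design variable $\rho$ to make the infinite-dimensional problem well-posed before discretization.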

The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Similarly to their biological counterparts, sparse networks generalize just as well, if not better than, the original dense networks. Sparsity can reduce the memory footprint of regular networks to fit mobile devices, as well as shorten training time for ever growing networks. In this paper, we survey prior work on sparsity in deep learning and provide an extensive tutorial of sparsification for both inference and training. We describe approaches to remove and add elements of neural networks, different training strategies to achieve model sparsity, and mechanisms to exploit sparsity in practice. Our work distills ideas from more than 300 research papers and provides guidance to practitioners who wish to utilize sparsity today, as well as to researchers whose goal is to push the frontier forward. We include the necessary background on mathematical methods in sparsification, describe phenomena such as early structure adaptation, the intricate relations between sparsity and the training process, and show techniques for achieving acceleration on real hardware. We also define a metric of pruned parameter efficiency that could serve as a baseline for comparison of different sparse networks. We close by speculating on how sparsity can improve future workloads and outline major open problems in the field.
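As a concrete instance of the simplest pruning scheme such a survey covers, global magnitude pruning removes the smallest-magnitude fraction of weights (a minimal sketch on a flat weight list, not any specific paper's method):

```python
def magnitude_prune(weights, sparsity):
    """Global magnitude pruning: zero out (at least) the `sparsity`
    fraction of weights with the smallest absolute value.  Ties at the
    threshold are all pruned, so slightly more may be removed."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

Realizing actual speedups from the resulting zeros is a separate problem (structured sparsity, sparse kernels, hardware support), which is why the survey treats exploitation mechanisms alongside removal criteria.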

In this monograph, I introduce the basic concepts of Online Learning through a modern view of Online Convex Optimization. Here, online learning refers to the framework of regret minimization under worst-case assumptions. I present first-order and second-order algorithms for online learning with convex losses, in Euclidean and non-Euclidean settings. All the algorithms are clearly presented as instantiation of Online Mirror Descent or Follow-The-Regularized-Leader and their variants. Particular attention is given to the issue of tuning the parameters of the algorithms and learning in unbounded domains, through adaptive and parameter-free online learning algorithms. Non-convex losses are dealt through convex surrogate losses and through randomization. The bandit setting is also briefly discussed, touching on the problem of adversarial and stochastic multi-armed bandits. These notes do not require prior knowledge of convex analysis and all the required mathematical tools are rigorously explained. Moreover, all the proofs have been carefully chosen to be as simple and as short as possible.
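One of the canonical instantiations treated in such notes is Online Mirror Descent with the entropic regularizer over the simplex, i.e., the exponential-weights (Hedge) algorithm for prediction with expert advice; a minimal sketch computing the weights after a sequence of loss vectors (the learning rate is illustrative):

```python
import math

def hedge(loss_rounds, eta=0.5):
    """Exponential weights over experts: after observing the given
    per-round loss vectors, return weights proportional to
    exp(-eta * cumulative loss), normalized to the simplex."""
    n = len(loss_rounds[0])
    cum = [0.0] * n
    for losses in loss_rounds:
        cum = [c + l for c, l in zip(cum, losses)]
    w = [math.exp(-eta * c) for c in cum]
    z = sum(w)
    return [wi / z for wi in w]
```

Experts with smaller cumulative loss receive exponentially larger weight, which is the mechanism behind the $O(\sqrt{T \log n})$ regret bound proved for this algorithm.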
