
The Unlimited Sensing Framework (USF) was recently introduced to overcome the sensor saturation bottleneck in conventional digital acquisition systems. At its core, the USF allows for high-dynamic-range (HDR) signal reconstruction by converting a continuous-time signal into folded, low-dynamic-range (LDR), modulo samples. HDR reconstruction is then carried out by algorithmic unfolding of the folded samples. In hardware, however, implementing ideal modulo folding requires careful calibration, analog design, and high precision. At the interface of theory and practice, this paper explores a computational sampling strategy that relaxes strict hardware requirements by compensating for them via a novel, mathematically guaranteed recovery method. Our starting point is a generalized model for USF. The generalization relies on two new parameters modeling hysteresis and folding transients in addition to the modulo threshold. Hysteresis accounts for the mismatch between the reset threshold and the amplitude displacement at the folding time, and we refer to the continuous transition period in the implementation of a reset as a folding transient. Both these effects are motivated by our hardware experiments and also occur in previous, domain-specific applications. We show that the effect of hysteresis is beneficial for the USF, and we leverage it to derive the first recovery guarantees in the context of our generalized USF model. Additionally, we show how the proposed recovery can be directly generalized to the case of lower sampling rates. Our theoretical work is corroborated by hardware experiments based on a hysteresis-enabled modulo ADC testbed comprising off-the-shelf electronic components. Thus, by capitalizing on a collaboration between hardware and algorithms, our paper enables an end-to-end pipeline for HDR sampling that allows more flexible hardware implementations.
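To make the folding/unfolding principle concrete, here is a minimal sketch of ideal modulo sampling and first-order-difference recovery in the spirit of the unlimited sampling literature. The function names, the first-order variant, and the oversampling condition are illustrative assumptions; the hysteresis and folding-transient effects of the generalized model are deliberately omitted.

```python
# Minimal sketch: ideal modulo folding and "unfolding" via first-order
# differences (an assumed simplification; not the paper's generalized method).
import numpy as np

def fold(x, lam):
    """Ideal centered modulo non-linearity: maps amplitudes into [-lam, lam)."""
    return np.mod(x + lam, 2 * lam) - lam

def unfold(y, lam):
    """Unfold modulo samples y, assuming the underlying signal is
    oversampled enough that |x[n+1] - x[n]| < lam for all n."""
    dx = fold(np.diff(y), lam)      # equals diff(x) under the assumption
    return y[0] + np.concatenate(([0.0], np.cumsum(dx)))

t = np.linspace(0, 1, 2000)
x = 5.0 * np.sin(2 * np.pi * 3 * t)  # HDR signal, amplitude >> lam
lam = 1.0
y = fold(x, lam)                     # folded LDR samples
x_hat = unfold(y, lam)               # exact here since x[0] = 0
print("max reconstruction error:", np.max(np.abs(x - x_hat)))
```

The key identity is that folding commutes with finite differences up to multiples of $2\lambda$, so re-folding the differences of the folded samples recovers the differences of the original signal whenever the sampling rate is high enough.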

Related content

The ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS) is the premier conference series for model-driven software and systems engineering, organized with the support of ACM SIGSOFT and IEEE TCSE. Since 1998, MODELS has covered all aspects of modeling, from languages and methods to tools and applications. Its participants come from diverse backgrounds, including researchers, academics, engineers, and industry professionals. MODELS 2019 is a forum in which participants can exchange cutting-edge research results and innovative practical experience around modeling and model-driven software and systems. This year's edition will provide the modeling community with an opportunity to further advance the foundations of modeling and to present innovative applications of modeling in emerging areas such as cyber-physical systems, embedded systems, socio-technical systems, cloud computing, big data, machine learning, security, open source, and sustainability.
January 28, 2022

Various fields of science face a reproducibility crisis. For quantum software engineering as an emerging field, it is therefore imperative to focus on proper reproducibility engineering from the start. Yet the provision of reproduction packages is almost universally lacking, and actionable advice on how to build such packages is rare, which is particularly unfortunate in a field with many contributions from researchers whose backgrounds lie outside computer science. In this article, we argue how to rectify this deficiency by proposing a 1-2-3 approach to reproducibility engineering for quantum software experiments: using a meta-generation mechanism, we generate DOI-safe, long-term-functioning, and dependency-free reproduction packages. They are designed to satisfy the requirements of professional and learned societies solely on the basis of project-specific research artefacts (source code, measurement and configuration data), and require little time investment from researchers. Our scheme ascertains long-term traceability even when the quantum processor itself is no longer accessible. By drastically lowering the technical bar, we foster the proliferation of reproduction packages in quantum software experiments and ease the entry of non-CS researchers into the field.
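As a rough illustration of what a dependency-free reproduction package could look like, the following sketch bundles project-specific artefacts together with fixity checksums into a self-describing archive. The directory layout, manifest format, and file names are assumptions made for illustration; the paper's actual meta-generation mechanism is not reproduced here.

```python
# Illustrative sketch of a reproduction-package builder (assumed layout,
# not the paper's tooling): bundle artefacts plus a checksum manifest.
import hashlib, json, tarfile, time
from pathlib import Path

ARTEFACTS = ["src", "measurements", "config"]  # assumed project layout

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_package(out="reproduction_package.tar.gz"):
    manifest = {"created": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
                "files": {}}
    with tarfile.open(out, "w:gz") as tar:
        for top in map(Path, ARTEFACTS):
            if not top.exists():
                continue  # skip missing artefact folders in this illustration
            for f in top.rglob("*"):
                if f.is_file():
                    manifest["files"][str(f)] = sha256(f)  # fixity record
                    tar.add(f)
        # Embed the manifest so the archive is self-describing.
        m = Path("MANIFEST.json")
        m.write_text(json.dumps(manifest, indent=2))
        tar.add(m)

if __name__ == "__main__":
    build_package()
```

Recording checksums inside the archive is one simple way to support the long-term traceability goal even after the original hardware is gone.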

The effect of combined generating R- and Q-factors of measured variables on the loadings resulting from R-factor analysis was investigated. It was found algebraically that R-factor analysis of measured variables generated by a combination of R- and Q-factors results in biased population R-factor loadings. In this context, the loadings of the generating R-factors cannot be perfectly estimated by means of R-factor analysis. This effect was also shown in a simulation study at the population level. Moreover, the simulation study based on samples drawn from the populations revealed that the R-factor loadings averaged across samples were substantially larger than the population loadings of the generating R-factors. This effect was due to between-sample variability of the measured variables resulting from the generating Q-factors at the population level, which does not occur when a single sample is investigated. These results indicate that, in data generated by a combination of R- and Q-factors, the effect of the Q-factors may lead to substantial overestimation of the R-factor loadings in sample-based R-factor analysis.
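The mechanism can be mimicked in a small, assumed toy simulation (not the paper's exact algebraic model): each sample draws fresh Q-factor variable scores, so the Q-factor contributes between-sample variability that a one-factor R-analysis absorbs into its loadings.

```python
# Assumed toy setup: one R-factor (fixed loadings over variables, random
# person scores) plus one Q-factor (fixed-size person loadings, variable
# "scores" redrawn per sample), analyzed by a one-factor R-analysis.
import numpy as np
from numpy.linalg import eigh

rng = np.random.default_rng(1)
n, p, reps = 200, 8, 500
lam = np.full(p, 0.5)                  # generating R-factor loadings

est = np.zeros(p)
for _ in range(reps):
    f = rng.normal(size=(n, 1))        # R-factor person scores
    b = rng.normal(size=(n, 1))        # Q-factor person loadings
    s = rng.normal(size=(1, p))        # Q-factor variable scores (vary per sample)
    e = rng.normal(scale=np.sqrt(1 - lam**2), size=(n, p))
    x = f @ lam[None, :] + 0.5 * b @ s + e
    r = np.corrcoef(x, rowvar=False)   # sample correlation matrix
    w, v = eigh(r)                     # eigenvalues ascending
    est += np.abs(v[:, -1] * np.sqrt(w[-1]))  # principal-axis style loadings

print("generating loadings:", lam[0])
print("mean estimated loadings:", (est / reps).round(3))
```

Because the Q-factor term injects extra common variance that differs from sample to sample, the averaged sample loadings tend to exceed the generating value of 0.5, qualitatively matching the overestimation effect described above.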

We introduce a numerical technique for controlling the location and stability properties of Hopf bifurcations in dynamical systems. The algorithm consists of solving an optimization problem constrained by an extended system of nonlinear partial differential equations that characterizes Hopf bifurcation points. The flexibility and robustness of the method allow us to advance or delay a Hopf bifurcation to a target value of the bifurcation parameter, as well as to control the oscillation frequency with respect to a parameter of the system or the shape of the domain on which solutions are defined. Numerical applications are presented in systems arising from biology and fluid dynamics, such as the FitzHugh-Nagumo model, the Ginzburg-Landau equation, the Rayleigh-B\'enard convection problem, and the Navier-Stokes equations, where control of the location and oscillation frequency of periodic solutions is of high interest.
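As background on the kind of object being controlled, the sketch below merely *locates* Hopf points of the FitzHugh-Nagumo model by scanning the bifurcation parameter $I$ and watching the real part of the equilibrium's Jacobian eigenvalues cross zero. It is not the paper's optimization-based controller, and the parameter values are conventional assumed choices.

```python
# Locating Hopf bifurcations of FitzHugh-Nagumo by an eigenvalue scan
# (an illustrative diagnostic, not the paper's control algorithm).
#   v' = v - v^3/3 - w + I,   w' = eps * (v + a - b * w)
import numpy as np

a, b, eps = 0.7, 0.8, 0.08  # conventional parameter values (assumed)

def equilibrium(I):
    # v - v^3/3 - (v + a)/b + I = 0  =>  cubic in v with one real root here
    roots = np.roots([-1/3, 0.0, 1.0 - 1.0/b, I - a/b])
    v = roots[np.argmin(np.abs(roots.imag))].real
    return v, (v + a) / b

def max_real_eig(I):
    v, _ = equilibrium(I)
    J = np.array([[1.0 - v**2, -1.0],
                  [eps,        -eps * b]])
    return np.max(np.linalg.eigvals(J).real)

Is = np.linspace(0.0, 1.5, 3001)
sig = np.array([max_real_eig(I) for I in Is])
crossings = Is[np.where(np.diff(np.sign(sig)) != 0)[0]]
print("approximate Hopf points (I):", crossings)
```

The scan finds the parameter values where the equilibrium loses or regains stability; the paper's method instead *moves* such points to prescribed targets via PDE-constrained optimization.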

In optical diffraction tomography (ODT), the three-dimensional scattering potential of a microscopic object rotating around its center is recovered from a series of illuminations with coherent light. Reconstruction algorithms such as filtered backpropagation require knowledge of the complex-valued wave at the measurement plane, whereas often only intensities, i.e., phaseless measurements, are available in practice. We propose a new reconstruction approach for ODT with unknown phase information based on three key ingredients. First, the light propagation is modeled using the Born approximation, enabling us to use the Fourier diffraction theorem. Second, we stabilize the inversion of the non-uniform discrete Fourier transform via total variation regularization utilizing a primal-dual iteration, which also yields a novel numerical inversion formula for ODT with known phase. The third ingredient is a hybrid input-output scheme. We achieve convincing numerical results indicating that ODT with phaseless data is possible. The resulting 2D and 3D reconstructions are even comparable to those obtained with known phase.
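The third ingredient can be illustrated in isolation: below is a minimal sketch of a Fienup-style hybrid input-output (HIO) iteration for a toy 2D phase retrieval problem from Fourier magnitudes. The support mask, $\beta$ value, and iteration count are assumptions, and the Fourier diffraction model and TV regularization of the actual method are omitted.

```python
# Minimal hybrid input-output (HIO) sketch for toy Fourier phase retrieval
# (illustrative assumptions throughout; not the paper's full ODT pipeline).
import numpy as np

rng = np.random.default_rng(0)
n = 64
obj = np.zeros((n, n))
obj[24:40, 24:40] = rng.random((16, 16))   # unknown object with known support
mag = np.abs(np.fft.fft2(obj))             # phaseless measurements

support = np.zeros((n, n), dtype=bool)
support[24:40, 24:40] = True
beta = 0.9

x = rng.random((n, n))                     # random initialization
for _ in range(500):
    X = np.fft.fft2(x)
    y = np.fft.ifft2(mag * np.exp(1j * np.angle(X))).real  # impose magnitudes
    # HIO update: accept y inside the support (and nonnegative),
    # push back against it elsewhere.
    x = np.where(support & (y >= 0), y, x - beta * y)

print("relative error:", np.linalg.norm(x - obj) / np.linalg.norm(obj))
```

Because Fourier phase retrieval has trivial ambiguities (e.g., the twin image), a run may converge to a reflected copy, so the printed error is only indicative.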

Limited process control can cause metallurgical defect formation and inhomogeneous relative density in laser powder bed fusion manufactured parts. In this study, cylindrical 15-5 PH stainless steel specimens are investigated by computed tomography, which reveals an edge-enhanced relative density profile. Additionally, the on-axis monitoring signal, obtained by recording the thermal radiation of the melt pool, is considered. Analyzing data for the full duration of the build process reveals a statistically increased melt pool signature close to the edge, corresponding to the density profile. Edge-specific patterns in the on-axis signal are found by unsupervised time series clustering. The observations are interpreted using finite element modeling: for exemplary points at the center and edge, the simulations show different local thermal histories, attributed to the chosen laser scan pattern. The results motivate a route towards the future design of components with locally varying material parameters.
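To illustrate the clustering step in isolation, the sketch below applies plain k-means to z-normalized synthetic "center" versus "edge" melt-pool signals. The signal model, window length, and algorithm choice are all assumptions for demonstration, not the paper's monitoring pipeline.

```python
# Toy unsupervised time-series clustering of synthetic melt-pool signals
# (assumed signal shapes; plain k-means on z-normalized windows).
import numpy as np

rng = np.random.default_rng(2)
T, n_each = 100, 50
base = lambda: rng.normal(1.0, 0.1, (n_each, T))
center = base()                                       # flat thermal signature
edge = base() + 0.5 * np.exp(-np.arange(T) / 10.0)    # elevated early signature
X = np.vstack([center, edge])
X = (X - X.mean(1, keepdims=True)) / X.std(1, keepdims=True)  # z-normalize

def kmeans(X, k=2, iters=50):
    C = X[rng.choice(len(X), k, replace=False)]       # random initial centroids
    for _ in range(iters):
        lbl = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lbl == j].mean(0) if np.any(lbl == j) else C[j]
                      for j in range(k)])
    return lbl

labels = kmeans(X)
print("cluster sizes:", np.bincount(labels))
```

With two clearly distinct signatures, the two clusters align with the center/edge origin of the windows, mirroring the edge-specific patterns reported above.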

The ability of a radar to discriminate in both range and Doppler velocity is completely characterized by the ambiguity function (AF) of its transmit waveform. Mathematically, the AF is obtained by correlating the waveform with its Doppler-shifted and delayed replicas. We consider the inverse problem of designing a radar transmit waveform that attains a specified AF magnitude. This process can be viewed as signal reconstruction via a variant of phase retrieval. We provide a trust-region algorithm that minimizes a smoothed, non-convex least-squares objective function to iteratively recover the underlying signal-of-interest for either time- or band-limited support. The method first approximates the signal using an iterative spectral algorithm and then refines the attained initialization via a sequence of gradient iterations. Our theoretical analysis shows that unique signal reconstruction is possible using a number of samples no more than thrice the number of signal frequencies or time samples. Numerical experiments demonstrate that our method recovers both time- and band-limited signals from even sparsely and randomly sampled AFs, with mean-square errors of $1\times 10^{-6}$ and $9\times 10^{-2}$ for full noiseless samples and sparse noisy samples, respectively.
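For reference, the discrete ambiguity function itself is straightforward to compute. The sketch below evaluates $A[k, m] = \sum_n s[n]\,\overline{s[n-m]}\,e^{-2\pi i k n / N}$ for an assumed toy chirp, using a circular delay for simplicity; this is only the forward map, whereas the paper solves the inverse design problem from $|A|$.

```python
# Discrete ambiguity function of a toy chirp: correlate the waveform
# with delayed replicas, then take an FFT along the Doppler axis.
# (Sign/normalization conventions for the AF vary; this is one choice.)
import numpy as np

N = 128
n = np.arange(N)
s = np.exp(1j * np.pi * 0.05 * n**2 / N)    # assumed toy linear chirp

def ambiguity(s):
    N = len(s)
    A = np.empty((N, N), dtype=complex)
    for m in range(N):                       # delay index
        prod = s * np.conj(np.roll(s, m))    # circular delay for simplicity
        A[:, m] = np.fft.fft(prod)           # Doppler axis via FFT over n
    return A

A = ambiguity(s)
print("peak at zero delay/Doppler:", np.abs(A[0, 0]), "=", N)
```

The magnitude $|A|$ concentrated along the chirp's delay-Doppler ridge is exactly the kind of target specification the inverse problem starts from.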

Iterative distributed optimization algorithms involve multiple agents that communicate with each other, over time, in order to minimize/maximize a global objective. In the presence of unreliable communication networks, the Age-of-Information (AoI), which measures the freshness of data received, may be large and hence hinder algorithmic convergence. In this paper, we study the convergence of general distributed gradient-based optimization algorithms in the presence of communication that neither happens periodically nor at stochastically independent points in time. We show that convergence is guaranteed provided the random variables associated with the AoI processes are stochastically dominated by a random variable with finite first moment. This improves on previous requirements of boundedness of more than the first moment. We then introduce stochastically strongly connected (SSC) networks, a new stochastic form of strong connectedness for time-varying networks. We show that if, for any $p \ge 0$, the processes that describe the success of communication between agents in an SSC network are $\alpha$-mixing with $n^{p-1}\alpha(n)$ summable, then the associated AoI processes are stochastically dominated by a random variable with finite $p$-th moment. In combination with our first contribution, this implies that distributed stochastic gradient descent converges in the presence of AoI if $\alpha(n)$ is summable.
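A toy sketch of the setting (with i.i.d. links, a full-averaging consensus step, and diminishing step sizes, all simplifying assumptions rather than the paper's mixing conditions) shows distributed gradient descent still driving the agents toward the minimizer of the average objective despite stale information:

```python
# Toy distributed gradient descent with stale (aged) neighbor states.
# Each agent i holds f_i(x) = ||x - t_i||^2 / 2; the global optimum is
# the mean of the targets. Communication succeeds at random times, so
# agents average over possibly outdated copies of others' iterates.
import numpy as np

rng = np.random.default_rng(3)
n_agents, dim, steps, p_comm = 4, 2, 5000, 0.3
targets = rng.normal(size=(n_agents, dim))
x = rng.normal(size=(n_agents, dim))      # local iterates
stale = x.copy()                          # last successfully received states

for t in range(1, steps + 1):
    delivered = rng.random(n_agents) < p_comm   # i.i.d. links (a simplification)
    stale[delivered] = x[delivered]             # fresh states replace stale copies
    lr = 1.0 / t                                # diminishing step size
    # full-averaging consensus on possibly stale states + local gradient step
    x = stale.mean(axis=0) - lr * (x - targets)

print("distance to optimum:",
      np.linalg.norm(x.mean(axis=0) - targets.mean(axis=0)))
```

With geometric inter-communication times the AoI has all moments finite, which is far stronger than the finite-first-moment condition the paper shows to suffice.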

Subclassification and matching are often used in empirical studies to adjust for observed covariates; however, they are largely restricted to relatively simple study designs with a binary treatment and are less well developed for designs with a continuous exposure. Matching with exposure doses is particularly useful in instrumental variable designs and in understanding dose-response relationships. In this article, we propose two criteria for optimal subclassification based on subclass homogeneity in the context of a continuous exposure dose, and propose an efficient polynomial-time algorithm that is guaranteed to find an optimal subclassification with respect to one criterion and serves as a 2-approximation algorithm for the other criterion. We discuss how to incorporate dose and use appropriate penalties to control the number of subclasses in the design. Via extensive simulations, we systematically compare our proposed design to optimal non-bipartite pair matching, and demonstrate that combining our proposed subclassification scheme with regression adjustment helps reduce model dependence for parametric causal inference with a continuous dose. We apply the new design and the associated randomization-based inferential procedure to study the effect of transesophageal echocardiography (TEE) monitoring during coronary artery bypass graft (CABG) surgery on patients' post-surgery clinical outcomes using Medicare and Medicaid claims data, and find evidence that TEE monitoring lowers patients' all-cause $30$-day mortality rate.
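One concrete way to make "subclass homogeneity" computational, offered purely as an assumed illustration rather than the paper's algorithm, is a dynamic program that splits the sorted doses into $K$ contiguous subclasses minimizing the total within-subclass dose range:

```python
# Assumed illustrative criterion: partition sorted doses into K contiguous
# subclasses minimizing the sum of within-subclass dose ranges, via an
# O(K n^2) dynamic program.
import numpy as np

def subclassify(doses, K):
    d = np.sort(np.asarray(doses, dtype=float))
    n = len(d)
    cost = lambda i, j: d[j - 1] - d[i]          # range of subclass d[i:j]
    INF = float("inf")
    dp = np.full((K + 1, n + 1), INF)
    cut = np.zeros((K + 1, n + 1), dtype=int)
    dp[0, 0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):            # last subclass is d[i:j]
                c = dp[k - 1, i] + cost(i, j)
                if c < dp[k, j]:
                    dp[k, j], cut[k, j] = c, i
    bounds, j = [], n                            # backtrack subclass boundaries
    for k in range(K, 0, -1):
        i = cut[k, j]
        bounds.append((d[i], d[j - 1]))
        j = i
    return dp[K, n], bounds[::-1]

total, bounds = subclassify(np.random.default_rng(4).gamma(2.0, 1.0, 60), K=4)
print("total within-subclass range:", round(total, 3))
print("subclass dose intervals:", [(round(a, 2), round(b, 2)) for a, b in bounds])
```

A penalty on $K$ can be folded into the recursion to trade off homogeneity against the number of subclasses, echoing the penalization discussed above.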

Training datasets for machine learning often have some form of missingness. For example, to learn a model for deciding whom to give a loan, the available training data includes individuals who were given a loan in the past, but not those who were not. This missingness, if ignored, nullifies any fairness guarantee of the training procedure when the model is deployed. Using causal graphs, we characterize the missingness mechanisms in different real-world scenarios. We show conditions under which various distributions, used in popular fairness algorithms, can or cannot be recovered from the training data. Our theoretical results imply that many of these algorithms cannot guarantee fairness in practice. Modeling missingness also helps to identify correct design principles for fair algorithms. For example, in multi-stage settings where decisions are made in multiple screening rounds, we use our framework to derive the minimal distributions required to design a fair algorithm. Our proposed algorithm decentralizes the decision-making process and still achieves similar performance to the optimal algorithm that requires centralization and non-recoverable distributions.
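The loan example can be made concrete with an assumed toy simulation: when an unobserved signal influences both past approvals and repayment, the label-observed subpopulation is not representative, so a distribution estimated from observed rows differs from the one a fairness criterion needs.

```python
# Toy demonstration (assumed setup) of selective-label missingness:
# outcomes are observed only for approved applicants, and approvals
# depended on a signal u the model never sees.
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
x = rng.normal(size=n)                         # observed applicant feature
u = rng.normal(size=n)                         # unobserved signal seen by past officers
repay = rng.random(n) < 1 / (1 + np.exp(-(x + u)))       # true repayment outcome
approved = (x + u + rng.normal(scale=0.3, size=n)) > 0   # past (biased) loan policy

sl = (x > 0.0) & (x < 0.5)                     # a feature slice of interest
print("P(repay | slice), everyone:    ", repay[sl].mean().round(3))
print("P(repay | slice), labeled rows:", repay[sl & approved].mean().round(3))
```

Within the same feature slice, the labeled rows show a markedly higher repayment rate than the full population, so a model trained only on observed labels estimates the wrong conditional distribution, which is exactly the kind of non-recoverability the causal analysis formalizes.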

In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm, called multi-step primal-dual (MSPD), together with its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS), based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
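The smoothing idea behind DRS can be sketched in a few lines (step size, sample count, and the test objective are assumptions): replace the non-smooth $f$ by the Gaussian smoothing $f_\gamma(x) = \mathbb{E}[f(x + \gamma Z)]$ with $Z \sim \mathcal{N}(0, I)$, and descend along Monte Carlo estimates of $\nabla f_\gamma$.

```python
# Randomized-smoothing descent on a non-smooth objective (illustrative,
# single-machine sketch of the smoothing principle, not the distributed
# DRS algorithm). For f(x) = ||x||_1, grad f_gamma(x) = E[sign(x + gamma Z)].
import numpy as np

rng = np.random.default_rng(6)
d, gamma, lr, iters, mc = 10, 0.1, 0.05, 300, 64
f = lambda x: np.abs(x).sum()          # non-smooth test objective, ||x||_1

x = rng.normal(size=d)
for _ in range(iters):
    Z = rng.normal(size=(mc, d))
    # Monte Carlo gradient of the smoothed objective: average the
    # subgradients of f at Gaussian-perturbed points.
    g = np.sign(x + gamma * Z).mean(axis=0)
    x -= lr * g

print("f after smoothing-based descent:", round(f(x), 4))
```

With a constant step size the iterates settle in a small neighborhood of the minimizer rather than reaching it exactly; the smoothing parameter $\gamma$ trades off the bias of $f_\gamma$ against the smoothness that makes fast methods applicable.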
