
Reversible computing refers, in essence, to computation that dissipates little or no electrical energy. Since the standard binary gates are generally not reversible, we use the Fredkin gate to achieve reversibility. This paper describes an algorithm for designing reversible digital circuits. The algorithm is based on Multi Expression Programming (MEP), a Genetic Programming variant with a linear representation of individuals. The case of digital circuits for the even-parity problem is investigated. Numerical experiments show that the MEP-based algorithm can readily design reversible digital circuits for problems up to even-8-parity.
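As a minimal illustration of the building block mentioned above (not of the paper's MEP algorithm), the Fredkin gate is a controlled swap: it is its own inverse, and with constant ancilla inputs it realises AND and XOR, from which even-parity circuits can be chained. A sketch in Python:

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: swap a and b iff control c is 1."""
    return (c, b, a) if c else (c, a, b)

# Reversibility: applying the gate twice restores all inputs.
assert all(fredkin(*fredkin(c, a, b)) == (c, a, b)
           for c in (0, 1) for a in (0, 1) for b in (0, 1))

def and_gate(x, y):
    """AND from one Fredkin gate with a constant-0 ancilla."""
    return fredkin(x, y, 0)[2]   # third output is x AND y

def xor_gate(x, y):
    """XOR from one Fredkin gate with the complemented second input."""
    return fredkin(x, y, 1 - y)[1]   # second output is x XOR y

def even_parity(bits):
    """Even parity is a chain of XORs, negated: 1 iff #ones is even."""
    acc = 0
    for bit in bits:
        acc = xor_gate(acc, bit)
    return 1 - acc
```

Note that the ancilla-based constructions produce extra "garbage" outputs; minimising those is part of what makes reversible circuit design nontrivial.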


We consider structured optimisation problems defined in terms of the sum of a smooth, convex function and a proper, l.s.c., convex (typically non-smooth) one in reflexive variable exponent Lebesgue spaces $L_{p(\cdot)}(\Omega)$. Due to their intrinsic space-variant properties, such spaces can be naturally used as solution spaces and combined with space-variant functionals for the solution of ill-posed inverse problems. For this purpose, we propose and analyse two instances (primal and dual) of proximal gradient algorithms in $L_{p(\cdot)}(\Omega)$, in which the proximal step, rather than depending on the natural (non-separable) $L_{p(\cdot)}(\Omega)$ norm, is defined in terms of its modular function, which, thanks to its separability, allows for the efficient computation of algorithmic iterates. Convergence in function values is proved for both algorithms, with convergence rates depending on problem/space smoothness. To show the effectiveness of the proposed modelling, numerical tests highlighting the flexibility of the space $L_{p(\cdot)}(\Omega)$ are presented for exemplar deconvolution and mixed noise removal problems. Finally, a numerical verification of the convergence speed and computational cost of both algorithms, in comparison with analogous algorithms defined in standard $L_{p}(\Omega)$ spaces, is presented.
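To make the separability point concrete, here is a toy discretised sketch (not the paper's full primal/dual algorithms): when the gradient step is taken with respect to the separable modular $\rho(v) = \sum_i |v_i|^{p_i}/p_i$ with $p_i > 1$, the update decouples into independent scalar problems with a closed form. The step size and the exponent map used below are illustrative choices.

```python
def modular_gradient_step(u, grad, p, gamma):
    """One gradient step where the Euclidean prox is replaced by the
    separable modular rho(v) = sum_i |v_i|**p_i / p_i, p_i > 1.

    Each coordinate solves  min_v  g_i*(v - u_i) + |v - u_i|**p_i / (p_i*gamma);
    the stationarity condition gives the closed form below.
    For p_i == 2 this reduces to the usual step u_i - gamma*g_i.
    """
    out = []
    for ui, gi, pi in zip(u, grad, p):
        step = (gamma * abs(gi)) ** (1.0 / (pi - 1.0))
        out.append(ui - (step if gi > 0 else -step if gi < 0 else 0.0))
    return out

# Toy smooth objective f(u) = 0.5*||u - a||^2, with grad f(u) = u - a.
a = [1.0, 1.0]
p = [2.0, 1.5]          # space-variant exponent: one Euclidean, one p = 1.5
u = [0.0, 0.0]
for _ in range(2000):
    g = [ui - ai for ui, ai in zip(u, a)]
    u = modular_gradient_step(u, g, p, gamma=0.5)
```

The point of the sketch is that each coordinate is updated independently, which is exactly the computational advantage the modular offers over the non-separable norm.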

The optimal relaxation parameter of the projected successive overrelaxation (PSOR) method for nonnegative quadratic programming problems is problem-dependent. We present a novel adaptive PSOR algorithm that controls the relaxation parameter using the Wolfe conditions. The method and its variants can be applied to various problems without requiring specific assumptions on the matrix defining the objective function, and the cost of updating the parameter is negligible relative to the overall iteration. Numerical experiments show that the proposed methods often perform comparably to (and sometimes better than) the PSOR method with a nearly optimal relaxation parameter.
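For reference, a plain PSOR sweep for $\min \frac{1}{2}x^\top A x - b^\top x$ subject to $x \ge 0$ looks as follows, with a fixed relaxation parameter $\omega$; the paper's contribution, the Wolfe-condition-based adaptive control of $\omega$, is omitted from this sketch.

```python
def psor(A, b, omega=1.0, iters=200):
    """Projected SOR for min 0.5*x^T A x - b^T x  subject to  x >= 0.

    Gauss-Seidel-style sweeps: each coordinate takes an over-relaxed
    step on its residual, then is projected onto the nonnegative orthant.
    A is assumed symmetric positive definite with positive diagonal.
    """
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            residual = b[i] - sum(A[i][j] * x[j] for j in range(n))
            x[i] = max(0.0, x[i] + omega * residual / A[i][i])
    return x

# Small example: the unconstrained minimiser has a negative component,
# so the constrained solution (0.25, 0) lies on the boundary.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, -2.0]
x = psor(A, b, omega=1.2)
```

The sensitivity to the hand-picked `omega` in this sketch is precisely the problem dependence that the adaptive algorithm of the abstract is designed to remove.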

Bayesian inference is widely used in scientific and engineering problems, especially in linear inverse problems in the infinite-dimensional setting, where the unknowns are functions. In such problems, choosing an appropriate prior distribution is an important task. In particular, when the function to be inferred contains rich detail, such as many sharp jumps, corners, and discontinuous or nonsmooth oscillations, the so-called total variation-Gaussian (TG) prior has been proposed in function space to address this. However, the TG prior tends to produce blocky (staircase) effects in numerical results. In this work, we present a fractional order-TG (FTG) hybrid prior to deal with such problems, in which the fractional order total variation (FTV) term captures the detail of the unknowns while the Gaussian measure ensures a well-defined posterior measure. For the numerical implementation of linear inverse problems in function spaces, we also propose an efficient independence sampler based on a transport map, which uses a proposal distribution derived from a diagonal map and whose acceptance probability is independent of the discretization dimensionality. To take full advantage of the transport map, a hierarchical Bayesian framework is applied to flexibly determine the regularization parameter. Finally, we provide numerical examples that demonstrate the performance of the FTG prior and the efficiency and robustness of the proposed independence sampler.
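An independence sampler is a Metropolis-Hastings scheme whose proposal does not depend on the current state. The transport-map-derived proposal of the abstract is replaced here by a fixed Gaussian, purely to illustrate the acceptance rule; this is a finite-dimensional toy sketch, not the function-space method.

```python
import math
import random

def independence_sampler(log_target, sample_prop, log_prop, n, seed=0):
    """Independence Metropolis-Hastings: proposals y ~ q are drawn
    independently of the current state x, and accepted with probability
    min(1, pi(y) q(x) / (pi(x) q(y))), computed in log space."""
    rng = random.Random(seed)
    x = sample_prop(rng)
    chain = []
    for _ in range(n):
        y = sample_prop(rng)
        log_alpha = (log_target(y) + log_prop(x)) - (log_target(x) + log_prop(y))
        if log_alpha >= 0 or rng.random() < math.exp(log_alpha):
            x = y
        chain.append(x)
    return chain

# Toy target: standard normal (unnormalised); proposal: wider normal N(0, 2^2).
log_target = lambda x: -0.5 * x * x
sigma = 2.0
sample_prop = lambda rng: rng.gauss(0.0, sigma)
log_prop = lambda x: -0.5 * (x / sigma) ** 2

chain = independence_sampler(log_target, sample_prop, log_prop, n=20000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

A heavier-tailed proposal than the target, as here, keeps the importance ratio bounded; the dimension-independent acceptance probability claimed in the abstract comes from the transport-map construction, which this sketch does not attempt.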

We construct families of circles in the plane such that their tangency graphs have arbitrarily large girth and chromatic number. This provides a strong negative answer to Ringel's circle problem (1959). The proof relies on a (multidimensional) version of Gallai's theorem with polynomial constraints, which we derive from the Hales-Jewett theorem and which may be of independent interest.

Probabilistic circuits (PCs) are a powerful modeling framework for representing tractable probability distributions over combinatorial spaces. In machine learning and probabilistic programming, one is often interested in understanding whether the distributions learned using PCs are close to the desired distribution. Thus, given two probabilistic circuits, a fundamental problem of interest is to determine whether their distributions are close to each other. The primary contribution of this paper is a closeness test for PCs with respect to the total variation distance metric. Our algorithm utilizes two common PC queries, counting and sampling. In particular, we provide a poly-time probabilistic algorithm to check the closeness of two PCs when the PCs support tractable approximate counting and sampling. We demonstrate the practical efficiency of our algorithmic framework via a detailed experimental evaluation of a prototype implementation against a set of 475 PC benchmarks. We find that our test correctly decides the closeness of all 475 PCs within 3600 seconds.
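The counting-plus-sampling idea can be shown in miniature: when one can sample from $p$ and evaluate normalised probabilities under both models, the total variation distance has the Monte Carlo form $d_{TV}(p,q) = \mathbb{E}_{x\sim p}[\max(0, 1 - q(x)/p(x))]$. Below, two explicit toy distributions stand in for the circuits; the paper works with approximate counting/sampling and formal probabilistic guarantees, which this sketch omits.

```python
import random

def estimate_tv(sample_p, prob_p, prob_q, n, seed=0):
    """Monte Carlo estimate of total variation distance via the identity
    d_TV(p, q) = E_{x~p}[ max(0, 1 - q(x)/p(x)) ]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sample_p(rng)
        total += max(0.0, 1.0 - prob_q(x) / prob_p(x))
    return total / n

# Two toy distributions over {'a', 'b'} standing in for the circuits' outputs.
p = {'a': 0.5, 'b': 0.5}
q = {'a': 0.8, 'b': 0.2}
sample_p = lambda rng: rng.choices(list(p), weights=list(p.values()))[0]
est = estimate_tv(sample_p, p.get, q.get, n=20000)
# True TV distance here is 0.5*(|0.5-0.8| + |0.5-0.2|) = 0.3.
```

For PCs the two queries map directly onto this estimator: sampling provides the draws from $p$, and (approximate) counting provides the probability evaluations.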

As part of the Exascale Computing Project (ECP), a recent focus of development efforts for the SUite of Nonlinear and DIfferential/ALgebraic equation Solvers (SUNDIALS) has been to enable GPU-accelerated time integration in scientific applications at extreme scales. This effort has resulted in several new GPU-enabled implementations of core SUNDIALS data structures, support for programming paradigms that are aware of heterogeneous architectures, and the introduction of utilities to provide new points of flexibility. In this paper, we discuss our considerations, both internal and external, when designing these new features and present the features themselves. We also present performance results for several of the features on the Summit supercomputer and early access hardware for the Frontier supercomputer, which demonstrate negligible performance overhead resulting from the additional infrastructure and significant speedups when using both NVIDIA and AMD GPUs.

Motivated by a wide range of real-world problems whose solutions exhibit boundary and interior layers, the numerical analysis of discretizations of singularly perturbed differential equations is an established sub-discipline within the study of the numerical approximation of solutions to differential equations. Consequently, much is known about how to accurately and stably discretize such equations on \textit{a priori} adapted meshes, in order to properly resolve the layer structure present in their continuum solutions. However, despite being a key step in the numerical simulation process, much less is known about the efficient and accurate solution of the linear systems of equations corresponding to these discretizations. In this paper, we discuss problems associated with the application of direct solvers to these discretizations, and we propose a preconditioning strategy that is tuned to the matrix structure induced by using layer-adapted meshes for convection-diffusion equations, proving a strong condition-number bound on the preconditioned system in one spatial dimension, and a weaker bound in two spatial dimensions. Numerical results confirm the efficiency of the resulting preconditioners in one and two dimensions, with time-to-solution of less than one second for representative problems on $1024\times 1024$ meshes and up to $40\times$ speedup over standard sparse direct solvers.
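As a generic illustration of the iterative alternative to sparse direct solvers, the following is textbook preconditioned conjugate gradients with a diagonal (Jacobi) preconditioner; the authors' preconditioner is tuned to the matrix structure of layer-adapted meshes and is not reproduced here.

```python
def pcg(A, b, Minv_diag, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradients for a symmetric positive definite
    system A x = b, with a diagonal preconditioner M: Minv_diag = 1/diag(M)."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = b[:]                                        # residual b - A*0
    z = [mi * ri for mi, ri in zip(Minv_diag, r)]   # preconditioned residual
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [mi * ri for mi, ri in zip(Minv_diag, r)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = pcg(A, b, Minv_diag=[1 / 4.0, 1 / 3.0])  # Jacobi: M = diag(A)
```

The quality of the preconditioner governs the iteration count, which is why the condition-number bounds proved in the paper translate directly into the reported time-to-solution.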

We present an approach to solving hard geometric optimization problems in the RANSAC framework. The hard minimal problems arise from relaxing the original geometric optimization problem into a minimal problem with many spurious solutions. Our approach avoids computing large numbers of spurious solutions. We design a learning strategy for selecting a starting problem-solution pair that can be numerically continued to the problem and the solution of interest. We demonstrate our approach by developing a RANSAC solver for the problem of computing the relative pose of three calibrated cameras, via a minimal relaxation using four points in each view. On average, we can solve a single problem in under 70 $\mu$s. We also benchmark and study our engineering choices on the very familiar problem of computing the relative pose of two calibrated cameras, via the minimal case of five points in two views.
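The surrounding RANSAC framework itself is standard; the paper's contribution lies in the learned start pair and the homotopy continuation inside the minimal solver. For orientation, here is a generic RANSAC skeleton on a toy line-fitting problem, where the minimal sample of two points stands in for the minimal relative-pose problem.

```python
import random

def ransac_line(points, n_iters=100, thresh=1e-6, seed=0):
    """Generic RANSAC: draw a minimal sample, solve the minimal problem
    (here: the line through two points), score by inlier count, keep the best."""
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate minimal sample
        slope = (y2 - y1) / (x2 - x1)
        intercept = y1 - slope * x1
        inliers = sum(abs(y - (slope * x + intercept)) < thresh
                      for x, y in points)
        if inliers > best_inliers:
            best_model, best_inliers = (slope, intercept), inliers
    return best_model, best_inliers

# 14 exact inliers on y = 2x + 1, plus 6 gross outliers.
pts = [(float(x), 2.0 * x + 1.0) for x in range(14)]
pts += [(0.5, 9.0), (3.5, -4.0), (6.1, 0.0),
        (8.2, 40.0), (10.3, -7.0), (12.7, 2.0)]
model, n_inliers = ransac_line(pts)
```

In the paper's setting the "solve the minimal problem" step is the expensive one, which is why reducing the number of spurious solutions that must be tracked dominates the per-problem runtime.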

In the ice-fishing problem, a half-space of fluid lies below an infinite rigid plate (``the ice'') with a hole. In this paper, we investigate the ice-fishing problem including the effects of surface tension on the free surface; the dimensionless number that describes the effect of surface tension is the Bond number. For holes that are infinite parallel strips or circular holes, we transform the problem into an equivalent eigenvalue integro-differential equation on an interval and expand in the appropriate basis (Legendre and radial polynomials, respectively). We use computational methods to demonstrate that the high spot, i.e., the maximal elevation of the fundamental sloshing profile, is in the interior of the free surface for large Bond numbers, but for sufficiently small Bond numbers the high spot is on the boundary of the free surface. While several papers have proven high-spot results in the absence of surface tension as they depend on the shape of the container, as far as we are aware, this is the first study investigating the effects of surface tension on the location of the high spot.

Despite the considerable success of neural networks in security settings such as malware detection, such models have proved vulnerable to evasion attacks, in which attackers make slight changes to inputs (e.g., malware) to bypass detection. We propose a novel approach, \emph{Fourier stabilization}, for designing evasion-robust neural networks with binary inputs. This approach, which is complementary to other forms of defense, replaces the weights of individual neurons with robust analogs derived using Fourier analytic tools. The choice of which neurons to stabilize in a neural network is then a combinatorial optimization problem, and we propose several methods for approximately solving it. We provide a formal bound on the per-neuron drop in accuracy due to Fourier stabilization, and experimentally demonstrate the effectiveness of the proposed approach in boosting robustness of neural networks in several detection settings. Moreover, we show that our approach effectively composes with adversarial training.
