Quadratic programming over orthogonal matrices encompasses a broad class of hard optimization problems that do not have an efficient quantum representation. Such problems are instances of the little noncommutative Grothendieck problem (LNCG), a generalization of binary quadratic programs to continuous, noncommutative variables. In this work, we establish a natural embedding for this class of LNCG problems onto a fermionic Hamiltonian, thereby enabling the study of this classical problem with the tools of quantum information. This embedding is accomplished by a new representation of orthogonal matrices as fermionic quantum states, which we achieve through the well-known double covering of the orthogonal group. Correspondingly, the embedded LNCG Hamiltonian is a two-body fermion model. Determining extremal states of this Hamiltonian provides an outer approximation to the original problem, a quantum analogue to classical semidefinite relaxations. In particular, when optimizing over the \emph{special} orthogonal group our quantum relaxation obeys additional, powerful constraints based on the convex hull of rotation matrices. The classical size of this convex-hull representation is exponential in matrix dimension, whereas our quantum representation requires only a linear number of qubits. Finally, to project the relaxed solution back into the feasible space, we propose rounding procedures which return orthogonal matrices from appropriate measurements of the quantum state. Through numerical experiments we provide evidence that this rounded quantum relaxation can produce high-quality approximations.
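As a concrete illustration, the sketch below evaluates one common formulation of the LNCG objective, $\sum_{i,j} \mathrm{tr}(C_{ij} O_i O_j^{\mathsf{T}})$ over orthogonal matrices $O_i$, by naive random search. The block structure of $C$, the normalization, and the random-search baseline are illustrative assumptions, not the quantum relaxation proposed in the work.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(d, rng):
    # QR of a Gaussian matrix yields a Haar-random orthogonal matrix
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    return q * np.sign(np.diag(r))

def lncg_value(C, O):
    # assumed objective: sum_{i,j} tr(C[i,j] @ O_i @ O_j^T)
    n = len(O)
    return sum(np.trace(C[i, j] @ O[i] @ O[j].T)
               for i in range(n) for j in range(n))

n, d = 3, 2
C = rng.standard_normal((n, n, d, d))
C = (C + C.transpose(1, 0, 3, 2)) / 2   # symmetrize the block matrix

best = lncg_value(C, [np.eye(d)] * n)   # start from the identity assignment
for _ in range(2000):
    best = max(best, lncg_value(C, [random_orthogonal(d, rng) for _ in range(n)]))
print(best)
```

Such brute sampling only gives a feasible lower bound; the point of the relaxation in the text is to obtain a tractable outer bound instead.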

We discuss the second-order differential uniformity of vectorial Boolean functions. The closely related notion of second-order zero differential uniformity has recently been studied in connection to resistance to the boomerang attack. We prove that monomial functions with univariate form $x^d$ where $d=2^{2k}+2^k+1$ and $\gcd(k,n)=1$ have optimal second-order differential uniformity. Computational results suggest that, up to affine equivalence, these might be the only optimal cubic power functions. We begin work towards generalising such conditions to all monomial functions of algebraic degree 3. We also discuss further questions arising from computational results.
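As a small computational check, the sketch below computes the second-order differential uniformity of $x^d$ with $d = 2^{2k}+2^k+1$, $k=1$, over $\mathbb{F}_{2^5}$ (so $\gcd(k,n)=1$), assuming the conventional definition via second-order derivatives $D_aD_bf(x) = f(x) \oplus f(x\oplus a) \oplus f(x\oplus b) \oplus f(x\oplus a\oplus b)$. The chosen primitive polynomial and this normalization are assumptions and may differ from the paper's.

```python
N = 5                 # field degree n
MOD = 0b100101        # x^5 + x^2 + 1, irreducible (hence primitive) over GF(2)

def gf_mul(a, b):
    # carry-free multiplication in GF(2^N) with polynomial reduction
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << N):
            a ^= MOD
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

k = 1
d = (1 << (2 * k)) + (1 << k) + 1     # d = 2^{2k} + 2^k + 1 = 7
f = [gf_pow(x, d) for x in range(1 << N)]

best = 0
for a in range(1, 1 << N):
    for b in range(1, 1 << N):
        if b == a:                    # over GF(2), a != b, both nonzero => independent
            continue
        counts = {}
        for x in range(1 << N):
            # second-order derivative D_a D_b f(x)
            v = f[x] ^ f[x ^ a] ^ f[x ^ b] ^ f[x ^ a ^ b]
            counts[v] = counts.get(v, 0) + 1
        best = max(best, max(counts.values()))
print(best)
```

Since $x^7$ is cubic, every second-order derivative is affine, so each fiber size is a power of 2, and invariance under translations by $a$ and $b$ forces the maximum to be at least 4.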

We present a streamlined and simplified exponential lower bound on the length of proofs in intuitionistic implicational logic, adapted to Gordeev and Haeusler's dag-like natural deduction.

Mediation analysis aims to identify and estimate the effect of an exposure on an outcome that is mediated through one or more intermediate variables. In the presence of multiple intermediate variables, two pertinent methodological questions arise: estimating mediated effects when mediators are correlated, and performing high-dimensional mediation analysis when the number of mediators exceeds the sample size. This paper presents a two-step procedure for high-dimensional mediation analysis. The first step selects a reduced number of candidate mediators using an ad-hoc lasso penalty. The second step applies a procedure we previously developed to estimate the mediated and direct effects, accounting for the correlation structure among the retained candidate mediators. We compare the performance of the proposed two-step procedure with state-of-the-art methods using simulated data. Additionally, we demonstrate its practical application by estimating the causal role of DNA methylation in the pathway between smoking and rheumatoid arthritis using real data.
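The first (screening) step can be sketched with a hand-rolled coordinate-descent lasso on simulated data. The data-generating model, penalty level, and selection rule below are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 200                            # more candidate mediators than samples
X = rng.standard_normal(n)                # exposure
M = rng.standard_normal((n, p))
M[:, :3] += 2.0 * X[:, None]              # only the first 3 mediators depend on X
Y = M[:, :3].sum(axis=1) + 0.5 * X + 0.1 * rng.standard_normal(n)

def lasso_cd(A, y, lam, iters=500):
    # plain coordinate-descent lasso; columns of A are assumed standardized
    beta = np.zeros(A.shape[1])
    col_sq = (A ** 2).sum(axis=0)
    r = y - A @ beta
    for _ in range(iters):
        for j in range(A.shape[1]):
            r += A[:, j] * beta[j]                     # remove j's contribution
            rho = A[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= A[:, j] * beta[j]
    return beta

A = (M - M.mean(axis=0)) / M.std(axis=0)
beta = lasso_cd(A, Y - Y.mean(), lam=float(n))  # penalty set by hand for this toy
selected = np.flatnonzero(beta)
print(selected)
```

The retained indices would then be passed to the second-step estimator, which models the correlation among the selected mediators.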

We study the decidability and complexity of the non-cooperative rational synthesis problem (abbreviated as NCRSP) for some classes of probabilistic strategies. We show that NCRSP for stationary strategies and Muller objectives is in 3-EXPTIME, and that if we restrict the strategies of the environment players to be positional, NCRSP becomes NEXPSPACE-solvable. On the other hand, $\mathrm{NCRSP}_>$, a variant of NCRSP, is shown to be undecidable even for pure finite-state strategies and terminal reachability objectives. Finally, we show that NCRSP becomes EXPTIME-solvable if we restrict the memory of a strategy to the most recently visited $t$ vertices, where $t$ is linear in the size of the game.

To avoid ineffective collisions between equilibrium states, the hybrid method with deviational particles (HDP) has been proposed for integrating the Fokker-Planck-Landau system, but it leaves open a new issue: sampling deviational particles from the high-dimensional source term. In this paper, we present an adaptive sampling (AS) strategy that first adaptively reconstructs a piecewise constant approximation of the source term, based on sequential clustering via discrepancy estimation, and then samples deviational particles directly from the resulting adaptive piecewise constant function without rejection. The mixture discrepancy, which can be easily calculated thanks to its explicit analytical expression, is employed as the measure of uniformity instead of the star discrepancy, whose calculation is NP-hard. The resulting method, dubbed HDP-AS, runs approximately ten times faster than the HDP method while maintaining the same accuracy on the Landau damping, two-stream instability, bump-on-tail, and Rosenbluth test problems.
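The rejection-free sampling step can be illustrated in one dimension: once a piecewise constant approximation is available, draw a bin with probability proportional to its mass, then sample uniformly within it. The bin edges and heights below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# an (assumed) adaptive piecewise constant approximation on [0, 1]
edges = np.array([0.0, 0.2, 0.5, 0.7, 1.0])     # bin boundaries
heights = np.array([0.5, 3.0, 1.0, 0.2])        # constant value on each bin

def sample_pwc(edges, heights, size, rng):
    # rejection-free: pick a bin by its probability mass, then uniform inside it
    mass = heights * np.diff(edges)
    bins = rng.choice(len(heights), size=size, p=mass / mass.sum())
    return edges[bins] + rng.random(size) * np.diff(edges)[bins]

x = sample_pwc(edges, heights, 100_000, rng)
```

Because every draw is accepted, the cost is independent of how sharply peaked the approximated source term is, unlike rejection sampling.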

Optimal solutions of combinatorial optimization problems can be sensitive to changes in the cost of one or more elements of the ground set $E$. Single and set tolerances measure the supremum (respectively, infimum) of the change in the cost of one or more elements such that the current solution remains optimal. The current definitions do not apply to all elements of $E$ or to all subsets of $E$. In this work, we broaden the definition to all elements of $E$ for single tolerances and to all subsets of elements for set tolerances, while proving that key theoretical and computational properties still apply.
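For a toy minimization problem, single tolerances can be computed by brute force: the upper tolerance of an element in the optimal solution is the cost gap to the best feasible solution avoiding it, and the lower tolerance of an element outside it is the gap to the best solution containing it. The instance and helper names below are illustrative.

```python
from itertools import combinations

# toy min-cost problem: feasible solutions are the spanning trees of a triangle,
# i.e., any 2 of its 3 edges
E = ["e01", "e02", "e12"]
cost = {"e01": 1.0, "e02": 2.0, "e12": 3.0}
feasible = [frozenset(c) for c in combinations(E, 2)]

def value(S):
    return sum(cost[e] for e in S)

opt = min(feasible, key=value)

def upper_tolerance(e):
    # for e in opt: largest cost increase of e keeping opt optimal
    others = [S for S in feasible if e not in S]
    return min(map(value, others)) - value(opt) if others else float("inf")

def lower_tolerance(e):
    # for e not in opt: largest cost decrease of e keeping opt optimal
    withe = [S for S in feasible if e in S]
    return min(map(value, withe)) - value(opt)

print(opt, upper_tolerance("e01"), lower_tolerance("e12"))
```

The `others` case being empty is exactly the situation the broadened definition must handle: an element contained in every feasible solution has unbounded upper tolerance.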

The preservation of stochastic orders by distortion functions has become a topic of increasing interest in the reliability analysis of coherent systems. The reason for this interest is that the reliability function of a coherent system with identically distributed components can be represented as a distortion function of the common reliability function of the components. In this framework, we study the preservation of the excess wealth order, the total time on test transform order, the decreasing mean residual life order, and the quantile mean inactivity time order by distortion functions. The results are applied to the preservation of these stochastic orders under the formation of coherent systems with exchangeable components.
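The distortion representation can be checked numerically on a standard example: a 2-out-of-3 system with iid components has reliability $h(\bar F(t))$ with distortion $h(u)=3u^2-2u^3$, where $u$ is the common component reliability. The exponential component lifetimes below are an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

# 2-out-of-3 system with iid components: distortion h(u) = 3u^2 - 2u^3
h = lambda u: 3 * u**2 - 2 * u**3

lam, t = 0.7, 1.0
comp = rng.exponential(1 / lam, size=(200_000, 3))   # component lifetimes
system_life = np.sort(comp, axis=1)[:, 1]            # system dies at the 2nd failure
mc = (system_life > t).mean()
print(mc, h(np.exp(-lam * t)))
```

The Monte Carlo survival probability and $h(e^{-\lambda t})$ agree up to sampling error, which is the representation the preservation results build on.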

Given a finite set of matrices with integer entries, the matrix mortality problem asks if there exists a product of these matrices equal to the zero matrix. We consider a special case of this problem where all entries of the matrices are nonnegative. This case is equivalent to the NFA mortality problem, which, given an NFA, asks for a word $w$ such that the image of every state under $w$ is the empty set. The size of the alphabet of the NFA is then equal to the number of matrices in the set. We study the length of shortest such words depending on the size of the alphabet. We show that for an NFA with $n$ states this length can be at least $2^n - 1$ for an alphabet of size $n$, $2^{(n - 4)/2}$ for an alphabet of size $3$ and $2^{(n - 2)/3}$ for an alphabet of size $2$. We also discuss further open problems related to mortality of NFAs and DFAs.
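A shortest killing word for a small NFA can be found by breadth-first search over the tuple of images of all states, i.e., over products of the boolean matrices. The two-letter NFA below is a made-up toy example, not one of the extremal families from the text.

```python
from collections import deque

def step(img, M):
    # image of each NFA state after reading one more letter
    return tuple(frozenset().union(*(M[q] for q in s)) for s in img)

def shortest_killing_word(letters, n):
    start = tuple(frozenset({i}) for i in range(n))
    dead = tuple(frozenset() for _ in range(n))
    seen, queue = {start}, deque([(start, "")])
    while queue:
        img, w = queue.popleft()
        if img == dead:
            return w
        for c, M in letters.items():
            nxt = step(img, M)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, w + c))
    return None                       # the NFA is not mortal

# toy 2-state NFA: entry i of each matrix is the successor set of state i
letters = {
    "a": (frozenset({1}), frozenset()),        # 0 -> {1}, 1 -> {}
    "b": (frozenset({0, 1}), frozenset({0})),  # 0 -> {0,1}, 1 -> {0}
}
w = shortest_killing_word(letters, 2)
print(w)
```

The search space has up to $2^{n^2}$ distinct products, which is consistent with the exponential lower bounds on shortest killing words stated above.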

This paper deals with the nonlinear mechanics of an elevator brake system subjected to uncertainties. A deterministic model that relates the braking force to uncertain parameters is deduced from mechanical equilibrium conditions. To account for parameter variability, a parametric probabilistic approach is employed in which the uncertain parameters are modeled as random variables, with distributions specified by the maximum entropy principle. The uncertainties are propagated by the Monte Carlo method, which provides a detailed statistical characterization of the response. This work also considers the optimal design of the brake system, formulating and solving nonlinear optimization problems both with and without the effects of uncertainty.
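The probabilistic step can be sketched generically: draw parameters from maximum-entropy distributions (uniform for a parameter with only bounded support known, exponential for a positive parameter with known mean) and propagate them through the model by Monte Carlo. The braking-force model and all numbers below are hypothetical, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(4)
ns = 100_000

# maximum-entropy sampling models (illustrative):
#   bounded support only     -> uniform
#   positive with known mean -> exponential
mu = rng.uniform(0.3, 0.5, ns)          # friction coefficient on [0.3, 0.5]
Fn = rng.exponential(1000.0, ns)        # normal contact force [N], mean 1000

F = mu * Fn                             # hypothetical braking-force model

print(F.mean(), np.quantile(F, [0.025, 0.975]))
```

The resulting sample yields the mean, quantiles, and any other statistics of the braking force, which is the "detailed statistical characterization" referred to above.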

Deep learning is usually described as an experiment-driven field and is under continuous criticism for lacking theoretical foundations. This problem has been partially addressed by a large body of literature that has so far not been well organized. This paper reviews and organizes the recent advances in deep learning theory. The literature is categorized into six groups: (1) complexity- and capacity-based approaches for analyzing the generalizability of deep learning; (2) stochastic differential equations and their associated dynamical systems for modeling stochastic gradient descent and its variants, which characterize the optimization and generalization of deep learning, partially inspired by Bayesian inference; (3) the geometrical structures of the loss landscape that drive the trajectories of these dynamical systems; (4) the roles of over-parameterization of deep neural networks, from both positive and negative perspectives; (5) theoretical foundations of several special structures in network architectures; and (6) the increasingly intensive concerns regarding ethics and security, and their relationship with generalizability.
