
Given a local Hamiltonian, how difficult is it to determine the entanglement structure of its ground state? We show that this problem is computationally intractable even if one is only trying to decide if the ground state is volume-law vs near area-law entangled. We prove this by constructing strong forms of pseudoentanglement in a public-key setting, where the circuits used to prepare the states are public knowledge. In particular, we construct two families of quantum circuits which produce volume-law vs near area-law entangled states, but nonetheless the classical descriptions of the circuits are indistinguishable under the Learning with Errors (LWE) assumption. Indistinguishability of the circuits then allows us to translate our construction to Hamiltonians. Our work opens new directions in Hamiltonian complexity, for example whether it is difficult to learn certain phases of matter.
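
For orientation, the two entanglement regimes contrasted above can be summarized as follows (an illustrative sketch only; the subsystem notation and the precise "near area-law" bound are our assumptions, not the paper's exact statements). For a subsystem $A$ of the $n$-qubit state $|\psi\rangle$,
\[
  S(\rho_A) \;=\; -\operatorname{Tr}\bigl(\rho_A \log \rho_A\bigr),
  \qquad
  \rho_A \;=\; \operatorname{Tr}_{\bar A}\, |\psi\rangle\langle\psi| ,
\]
\[
  \text{volume law:}\quad S(\rho_A) = \Theta(|A|),
  \qquad\qquad
  \text{near area law:}\quad S(\rho_A) = O(\operatorname{polylog}|A|).
\]
The point of the construction is that, even though the circuit descriptions are public, no polynomial-time observer can tell which of the two regimes applies, assuming the hardness of LWE.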

Related content

Frequent and intense disasters make the post-disaster recovery process repeated and uncertain. Despite the importance of a successful recovery, previous simulation studies of post-disaster recovery have not sufficiently explored the range of household return decision model types, population sizes, and the corresponding critical transition conditions of the system. This paper simulates the recovery process with an agent-based model on multilayer networks to reveal the impact of household return decision model types and population sizes in a toy network. It then applies the agent-based model to five counties affected by Hurricane Harvey in 2017 to examine urban-rural differences in recovery across types of household return decision models. The agent-based model yields three conclusions. First, the threshold model can successfully substitute for the binary logit model. Second, high thresholds and populations of fewer than 1,000 households perturb the recovery process, yielding critical transitions. Third, urban-rural differences in recovery outcomes vary with the decision model type. This study highlights the importance of threshold models and population sizes for identifying critical transitions and urban-rural differences in the recovery process.
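
As a minimal illustration of the threshold-model mechanism referenced above, the following sketch implements a generic Granovetter-style return rule on a single-layer network; the function name, the seeding rule, and the single-layer simplification are our own assumptions and do not reproduce the paper's multilayer agent-based model.

```python
import random

def simulate_threshold_recovery(neighbors, thresholds, steps=50):
    """Generic threshold model of household return (illustrative sketch).

    neighbors[i]  -> list of household indices linked to household i
    thresholds[i] -> fraction of returned neighbors needed before i returns
    """
    n = len(thresholds)
    returned = [False] * n
    # seed: a small random set of early returnees (assumption, not the paper's rule)
    for i in random.sample(range(n), max(1, n // 20)):
        returned[i] = True

    for _ in range(steps):
        changed = False
        for i in range(n):
            if returned[i] or not neighbors[i]:
                continue
            frac = sum(returned[j] for j in neighbors[i]) / len(neighbors[i])
            if frac >= thresholds[i]:
                returned[i] = True
                changed = True
        if not changed:          # steady state reached
            break
    return sum(returned) / n     # final recovery rate
```

A household returns once the fraction of returned neighbors reaches its individual threshold; this is the mechanism through which high thresholds and small populations can produce the critical transitions discussed above.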

We consider scalar semilinear elliptic PDEs, where the nonlinearity is strongly monotone, but only locally Lipschitz continuous. To linearize the arising discrete nonlinear problem, we employ a damped Zarantonello iteration, which leads to a linear Poisson-type equation that is symmetric and positive definite. The resulting system is solved by a contractive algebraic solver such as a multigrid method with local smoothing. We formulate a fully adaptive algorithm that equibalances the various error components coming from mesh refinement, iterative linearization, and algebraic solver. We prove that the proposed adaptive iteratively linearized finite element method (AILFEM) guarantees convergence with optimal complexity, where the rates are understood with respect to the overall computational cost (i.e., the computational time). Numerical experiments investigate the involved adaptivity parameters.
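
For concreteness, one damped Zarantonello step has the following generic form (a sketch under assumed notation: $A$ denotes the strongly monotone operator, $F$ the right-hand side, and $\delta > 0$ the damping parameter):
\[
  \langle \nabla u^{k+1}, \nabla v \rangle
  \;=\;
  \langle \nabla u^{k}, \nabla v \rangle
  \;+\; \delta \,\bigl( F(v) - \langle A(u^{k}), v \rangle \bigr)
  \qquad \text{for all admissible test functions } v,
\]
so each linearization step is indeed a symmetric, positive definite Poisson-type problem, which the contractive algebraic solver (e.g., multigrid with local smoothing) then solves inexactly.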

We consider the Navier-Stokes-Fourier system governing the motion of a general compressible, heat conducting, Newtonian fluid driven by random initial/boundary data. Convergence of the stochastic collocation and Monte Carlo numerical methods is shown under the hypothesis that approximate solutions are bounded in probability. Abstract results are illustrated by numerical experiments for the Rayleigh-Bénard convection problem.
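
As a sketch of the Monte Carlo part (with assumed notation), a statistical quantity of interest $\Phi$ is approximated by averaging numerical solutions computed for i.i.d. samples $\omega_1,\dots,\omega_N$ of the random initial/boundary data:
\[
  \mathbb{E}\bigl[\Phi(U)\bigr]
  \;\approx\;
  \frac{1}{N} \sum_{i=1}^{N} \Phi\bigl(U_h(\omega_i)\bigr),
\]
where $U_h(\omega_i)$ denotes the approximate Navier-Stokes-Fourier solution for the $i$-th sample; the convergence results hold under the stated hypothesis that these approximate solutions are bounded in probability.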

Predictions under interventions are estimates of what a person's risk of an outcome would be if they were to follow a particular treatment strategy, given their individual characteristics. Such predictions can give important input to medical decision making. However, evaluating the predictive performance of interventional predictions is challenging. Standard ways of evaluating predictive performance do not apply when using observational data, because prediction under interventions involves obtaining predictions of the outcome under conditions that are different from those observed for a subset of individuals in the validation dataset. This work describes methods for evaluating the counterfactual performance of predictions under interventions for time-to-event outcomes. This means we aim to assess how well predictions would match the validation data if all individuals had followed the treatment strategy under which predictions are made. We focus on counterfactual performance evaluation using longitudinal observational data, and under treatment strategies that involve sustaining a particular treatment regime over time. We introduce an estimation approach using artificial censoring and inverse probability weighting, which involves creating a validation dataset that mimics the treatment strategy under which predictions are made. We extend measures of calibration, discrimination (c-index and cumulative/dynamic AUC(t)), and overall prediction error (Brier score) to allow assessment of counterfactual performance. The methods are evaluated using a simulation study, including scenarios in which the methods should detect poor performance. Applying our methods in the context of liver transplantation shows that our procedure allows quantification of the performance of predictions supporting crucial decisions on organ allocation.
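
As a sketch of the weighting step (with assumed notation; the paper's estimator may differ in detail): an individual is artificially censored at the first time they deviate from the treatment strategy of interest, and the remaining person-time is re-weighted by inverse probability weights of the form
\[
  W_i(t) \;=\; \prod_{k=0}^{t}
  \frac{1}{\Pr\bigl(A_{ik} = a_{k} \,\big|\, \bar{A}_{i,k-1}, \bar{L}_{ik}\bigr)},
\]
where $a_k$ is the treatment prescribed by the strategy at time $k$ and $\bar{A}$, $\bar{L}$ denote treatment and covariate histories, so that the weighted validation data mimic follow-up under the strategy and the extended calibration, discrimination, and Brier-score measures can be computed on them.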

Stabbing Planes (also known as Branch and Cut) is a recently introduced proof system which, informally speaking, extends the DPLL method by branching on integer linear inequalities instead of single variables. The techniques known so far for proving size and depth lower bounds for Stabbing Planes are generalizations of those used for the Cutting Planes proof system: size lower bounds are established by monotone circuit arguments, while depth lower bounds are obtained via communication complexity. As such, these bounds apply to lifted versions of combinatorial statements. Rank lower bounds for Cutting Planes are also obtained by geometric arguments called protection lemmas. In this work we introduce two new geometric approaches for proving size/depth lower bounds in Stabbing Planes that work for any formula: (1) the antichain method, relying on Sperner's Theorem, and (2) the covering method, which uses results on essential coverings of the Boolean cube by linear polynomials, which in turn rely on Alon's Combinatorial Nullstellensatz. We demonstrate their use on classes of combinatorial principles such as the Pigeonhole principle, the Tseitin contradictions, and the Linear Ordering Principle. By the first method we prove almost linear size lower bounds and optimal logarithmic depth lower bounds for the Pigeonhole principle, and analogous lower bounds for the Tseitin contradictions over the complete graph and for the Linear Ordering Principle. By the covering method we obtain a superlinear size lower bound and a logarithmic depth lower bound for Stabbing Planes proofs of Tseitin contradictions over a grid graph.
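
For readers unfamiliar with the proof system, the Stabbing Planes branching rule works as follows (standard definition, included for orientation): at each node one chooses an integer vector $a$ and an integer $b$ and branches on
\[
  a \cdot x \;\le\; b - 1
  \qquad \text{versus} \qquad
  a \cdot x \;\ge\; b ,
\]
a query that no integer point can escape; a branch is closed once the accumulated inequalities, together with those of the input formula, have an empty feasible region.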

In the conventional change detection (CD) pipeline, two manually registered and labeled remote sensing datasets serve as the input to the model for training and prediction. However, in realistic scenarios, data from different periods or sensors may fail to be aligned because of differing coordinate systems. Geometric distortion caused by coordinate shifting remains a thorny issue for CD algorithms. In this paper, we propose a reusable self-supervised framework that handles bitemporal geometric distortion in CD tasks. The framework is composed of Pretext Representation Pre-training, Bitemporal Image Alignment, and Downstream Decoder Fine-Tuning. With only single-stage pre-training, the key components of the framework can be reused to assist bitemporal image alignment while simultaneously enhancing the performance of the CD decoder. Experimental results in two large-scale realistic scenarios demonstrate that our proposed method can alleviate bitemporal geometric distortion in CD tasks.
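
The following skeleton only mirrors the three-stage structure described above (pretext pre-training, bitemporal alignment, downstream decoder fine-tuning); all module names and interfaces are placeholders of our own and do not reproduce the paper's implementation.

```python
import torch
import torch.nn as nn

class ChangeDetectionPipeline:
    """Structural sketch of the three-stage framework (placeholders only)."""

    def __init__(self, encoder: nn.Module, aligner: nn.Module, decoder: nn.Module):
        self.encoder = encoder      # shared representation, pre-trained once
        self.aligner = aligner      # estimates/undoes the geometric distortion
        self.decoder = decoder      # produces the change map

    def pretrain(self, unlabeled_images):
        """Stage 1: single-stage self-supervised pretext training of the encoder."""
        ...

    def align(self, img_t1, img_t2):
        """Stage 2: reuse encoder features to register the two acquisitions."""
        f1, f2 = self.encoder(img_t1), self.encoder(img_t2)
        return self.aligner(f1, f2)   # e.g., a warp applied to the second image

    def detect_change(self, img_t1, img_t2_aligned):
        """Stage 3: fine-tune the decoder on the aligned bitemporal pair."""
        f1, f2 = self.encoder(img_t1), self.encoder(img_t2_aligned)
        return self.decoder(torch.cat([f1, f2], dim=1))
```

The design point carried over from the abstract is that the encoder is trained once in Stage 1 and then reused by both the alignment and the change-detection heads.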

We propose a projection-based model order reduction procedure for the ageing of large prestressed concrete structures. Our work is motivated by applications in the nuclear industry, particularly the simulation of containment buildings. Such numerical simulations involve a multi-modeling approach: a three-dimensional nonlinear thermo-hydro-visco-elastic rheological model is used for the concrete, while the prestressing cables are described by a one-dimensional linear thermo-elastic behavior. A kinematic linkage connects the concrete nodes and the steel nodes: coincident points in each material are assumed to have the same displacement. We develop an adaptive algorithm, based on a Proper Orthogonal Decomposition (POD) in time and a greedy procedure in parameter, to build a reduced-order model (ROM). Because of the nonlinearity of the operator, the computational cost of assembling the ROM scales with the size of the high-fidelity model. We develop a hyper-reduction strategy based on empirical quadrature to bypass this computational bottleneck: our approach relies on the construction of a reduced mesh to speed up the online assembly costs of the ROM. We provide numerical results for a standard section of a double-walled containment building using a qualified and broadly used industrial-grade finite element solver for structural mechanics (code$\_$aster).
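
The POD step can be illustrated in a few lines (a generic sketch; the paper's POD-in-time/greedy-in-parameter loop, the treatment of the nonlinear operator, and the empirical-quadrature hyper-reduction are not reproduced here, and all names are our own).

```python
import numpy as np

def pod_basis(snapshots: np.ndarray, tol: float = 1e-4) -> np.ndarray:
    """Proper Orthogonal Decomposition of a snapshot matrix (generic sketch).

    snapshots : (n_dofs, n_snapshots) array of high-fidelity states in time
    tol       : relative energy truncation tolerance
    Returns the reduced basis V with orthonormal columns.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol) + 1)   # smallest r capturing 1 - tol of the energy
    return U[:, :r]

# Galerkin projection of a (linearized) operator onto the POD basis:
# A_r = V.T @ A @ V,  f_r = V.T @ f,  then solve the small r x r system online.
```

Without hyper-reduction, building A_r still requires assembling the full-size operator A at every step, which is exactly the bottleneck the empirical-quadrature reduced mesh is designed to remove.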

The aim of change-point detection is to discover the changes in behavior that lie behind time sequence data. In this article, we study the case where the data come from an inhomogeneous Poisson process or a marked Poisson process. We present a methodology for detecting multiple offline change-points based on a minimum contrast estimator. In particular, we explain how to handle the continuous nature of the process given the available discrete observations. In addition, we select the appropriate number of regimes via a cross-validation procedure, which is particularly convenient here due to the nature of the Poisson process. Through experiments on simulated and real data sets, we demonstrate the benefits of the proposed method, which has been implemented in the R package \texttt{CptPointProcess}.
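
A standard way to set up such a minimum-contrast estimator, sketched here with assumed notation (the package's exact contrast and model-selection criterion may differ): the log-likelihood of an inhomogeneous Poisson process with intensity $\lambda$ observed on $[0, T]$ is
\[
  \ell(\lambda) \;=\; \sum_{i :\, t_i \le T} \log \lambda(t_i) \;-\; \int_0^T \lambda(t)\,dt .
\]
For an intensity that is constant equal to $\lambda_k$ between consecutive change points $\tau_{k-1} < \tau_k$, the segment-wise maximizer is $\hat\lambda_k = N_k / (\tau_k - \tau_{k-1})$, where $N_k$ counts the events in the segment; the change points are then chosen to minimize the resulting contrast (the negative profile log-likelihood), and the number of regimes is selected by cross-validation as described above.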

We hypothesize that, due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the other modalities. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate the model's dependence on each modality, we compute the gain in accuracy when the model has access to that modality in addition to another one. We refer to this gain as the conditional utilization rate. In our experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy for it based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm to balance the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
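
In the notation we adopt here (an assumption; the paper may use different symbols), the conditional utilization rate of modality $m_1$ given modality $m_2$ is the accuracy gain from adding $m_1$:
\[
  u(m_1 \mid m_2) \;=\; \mathrm{Acc}(\{m_1, m_2\}) \;-\; \mathrm{Acc}(\{m_2\}).
\]
A persistent gap between $u(m_1 \mid m_2)$ and $u(m_2 \mid m_1)$ is the imbalance that signals greedy reliance on one modality; since computing these quantities requires evaluating restricted models, the conditional learning speed serves as their training-time proxy.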

The evidential regression network (ENet) estimates a continuous target and its predictive uncertainty without costly Bayesian model averaging. However, the target may be predicted inaccurately because of the gradient shrinkage problem of the ENet's original loss function, the negative log marginal likelihood (NLL) loss. In this paper, the objective is to improve the prediction accuracy of the ENet while preserving its efficient uncertainty estimation by resolving the gradient shrinkage problem. A multi-task learning (MTL) framework, referred to as MT-ENet, is proposed to accomplish this aim. Within the MTL framework, we define a Lipschitz-modified mean squared error (MSE) loss and add it to the existing NLL loss. The Lipschitz-modified MSE loss is designed to mitigate gradient conflict with the NLL loss by dynamically adjusting its Lipschitz constant; in this way, it does not disturb the uncertainty estimation of the NLL loss. The MT-ENet enhances the predictive accuracy of the ENet without losing uncertainty estimation capability on synthetic datasets and real-world benchmarks, including drug-target affinity (DTA) regression. Furthermore, the MT-ENet shows remarkable calibration and out-of-distribution detection capability on the DTA benchmarks.
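
A generic way to combine the two objectives is sketched below; the Huber-style clipping merely stands in for a Lipschitz-bounded MSE term, and the fixed bound `k` is our simplification, whereas MT-ENet adjusts the Lipschitz constant dynamically to avoid gradient conflict with the NLL loss.

```python
import torch

def lipschitz_mse(pred: torch.Tensor, target: torch.Tensor, k: float) -> torch.Tensor:
    """MSE whose gradient magnitude is bounded by k (Huber-style surrogate).

    Only a generic illustration of a Lipschitz-bounded MSE; the paper's
    dynamic adjustment of the bound is not reproduced here.
    """
    err = pred - target
    quad = 0.5 * err**2                  # gradient = err (unbounded)
    lin = k * (err.abs() - 0.5 * k)      # gradient = +/- k (bounded)
    return torch.where(err.abs() <= k, quad, lin).mean()

def multitask_loss(nll: torch.Tensor, pred, target, k: float = 1.0) -> torch.Tensor:
    """Multi-task objective: evidential NLL plus the bounded-MSE auxiliary term."""
    return nll + lipschitz_mse(pred, target, k)
```

Bounding the MSE gradient keeps the auxiliary accuracy term from overwhelming the NLL gradient, which is the intuition behind leaving the uncertainty estimate of the NLL loss undisturbed.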
