
We prove the first polynomial separation between the randomized and deterministic time-space tradeoffs of multi-output functions. In particular, we present a total function that, on an input of $n$ elements in $[n]$, outputs $O(n)$ elements, such that: (1) there exists a randomized oblivious algorithm with space $O(\log n)$, time $O(n\log n)$, and one-way access to randomness that computes the function with probability $1-O(1/n)$; (2) any deterministic oblivious branching program with space $S$ and time $T$ that computes the function must satisfy $T^2S\geq\Omega(n^{2.5}/\log n)$. This implies that logspace randomized algorithms for multi-output functions cannot be black-box derandomized without an $\widetilde{\Omega}(n^{1/4})$ overhead in time. Since all previous polynomial time-space tradeoffs for multi-output functions were proved via the Borodin-Cook method, a probabilistic method that inherently gives the same lower bound for randomized and deterministic branching programs, our lower bound proof is intrinsically different from previous works. We also examine other natural candidates for proving such separations, and show that any polynomial separation for these problems would resolve the long-standing open problem of proving an $n^{1+\Omega(1)}$ time lower bound for decision problems with $\mathrm{polylog}(n)$ space.
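
To make the claimed derandomization overhead concrete, here is a short calculation (our arithmetic, simply combining the two bounds stated above) for a deterministic program restricted to $S = \mathrm{polylog}(n)$ space:

```latex
% From T^2 S >= Omega(n^{2.5}/log n), any deterministic oblivious
% branching program with S = polylog(n) space needs time
\[
T_{\mathrm{det}} \;\geq\; \Omega\!\left(\sqrt{\frac{n^{2.5}}{S \log n}}\right)
  \;=\; \widetilde{\Omega}\!\left(n^{1.25}\right),
\]
% while the randomized algorithm runs in time T_rand = O(n log n), so
\[
\frac{T_{\mathrm{det}}}{T_{\mathrm{rand}}}
  \;\geq\; \frac{\widetilde{\Omega}(n^{1.25})}{O(n \log n)}
  \;=\; \widetilde{\Omega}\!\left(n^{1/4}\right).
\]
```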

Related Content

Many problems in science and engineering involve optimizing an expensive black-box function over a high-dimensional space. For such black-box optimization (BBO) problems, we typically assume a small budget for online function evaluations, but often have access to a fixed, offline dataset for pretraining. Prior approaches seek to utilize the offline data to approximate the function or its inverse, but are not sufficiently accurate far from the data distribution. We propose BONET, a generative framework for pretraining a novel black-box optimizer using offline datasets. In BONET, we train an autoregressive model on fixed-length trajectories derived from an offline dataset. We design a sampling strategy to synthesize trajectories from offline data using a simple heuristic of rolling out monotonic transitions from low-fidelity to high-fidelity samples. Empirically, we instantiate BONET using a causally masked Transformer and evaluate it on Design-Bench, where it ranks best on average, outperforming state-of-the-art baselines.
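
A minimal sketch of the trajectory-synthesis heuristic as we read it: sample offline points and order them by increasing function value, so the model trains on monotone low-to-high rollouts. The function name and toy objective below are our own illustrations, not BONET's API:

```python
import numpy as np

def synthesize_trajectory(xs, ys, length, rng):
    """Sample `length` points from the offline dataset and order them by
    increasing function value, yielding a monotone low-to-high-fidelity
    rollout for autoregressive training."""
    idx = rng.choice(len(xs), size=length, replace=False)
    order = np.argsort(ys[idx])              # worst -> best
    return xs[idx][order], ys[idx][order]

# Toy usage: offline designs and scores from a synthetic objective.
rng = np.random.default_rng(0)
xs = rng.normal(size=(1000, 8))
ys = -np.sum(xs**2, axis=1)
traj_x, traj_y = synthesize_trajectory(xs, ys, length=64, rng=rng)
assert np.all(np.diff(traj_y) >= 0)          # values are non-decreasing
```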

It is essential to select an efficient topology for the parameterized quantum circuits (PQCs) used in variational quantum algorithms (VQAs). However, current circuits suffer either from optimization difficulties caused by too many parameters or from performance that is hard to guarantee. Reducing the number of parameters (the number of single-qubit rotation gates and 2-qubit gates) in PQCs without reducing performance has therefore become a new challenge. To solve this problem, we propose a novel topology, called the Block-Ring (BR) topology, to construct PQCs. This topology allocates all qubits to several blocks; an all-to-all mode is adopted inside each block, and a ring mode is applied to connect different blocks. Compared with pure all-to-all topology circuits, which have the best power, the BR topology achieves similar performance while the numbers of parameters and 2-qubit gates are reduced from $O(n^2)$ to $O(mn)$, where $m$ is a hyperparameter we set. Besides, we compare the BR topology with other topology circuits in terms of expressibility and entangling capability. Considering the effects of different 2-qubit gates on circuits, we also distinguish between controlled X-rotation gates and controlled Z-rotation gates. Finally, 1- and 2-layer configurations of PQCs are taken into consideration as well, which shows the BR topology's performance improvement for multilayer circuits.
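
A minimal sketch of how such a layout can be generated, under our reading of the block/ring description (with blocks of size $m$, the intra-block all-to-all pairs contribute $O(mn)$ two-qubit gates); the exact wiring in the paper may differ:

```python
def block_ring_edges(n_qubits, block_size):
    """Block-Ring (BR) entangling layout sketch: qubits are split into
    blocks of size m = block_size (assumed to divide n_qubits), with
    all-to-all pairs inside each block and one ring edge between
    neighbouring blocks, giving O(mn) two-qubit gates instead of O(n^2)."""
    blocks = [list(range(b, b + block_size))
              for b in range(0, n_qubits, block_size)]
    edges = []
    for block in blocks:                      # all-to-all inside each block
        edges += [(q1, q2) for i, q1 in enumerate(block) for q2 in block[i+1:]]
    for b in range(len(blocks)):              # ring connecting the blocks
        edges.append((blocks[b][-1], blocks[(b + 1) % len(blocks)][0]))
    return edges

# 12 qubits in blocks of 3: 12 intra-block + 4 ring = 16 two-qubit gates,
# versus C(12, 2) = 66 for a pure all-to-all layout.
print(len(block_ring_edges(12, 3)))
```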

A large class of problems in the current era of quantum devices involves interfacing between quantum and classical systems. These include calibration procedures, characterization routines, and variational algorithms. Control in these routines iteratively switches between the classical and the quantum computer, which results in repeated compilation of the program that runs on the quantum system, scaling directly with the number of circuits and iterations. This repeated compilation adds significant overhead throughout the routine: the total runtime of the program (classical compilation plus quantum execution) carries an additional cost proportional to the circuit count, which at practical scales can account for between 5% and 80% of the round-trip CPU-QPU time, depending on the proportion of quantum execution time. To avoid repeated device-level compilation, we observe that the machine code can be parametrized in the pulse/gate parameters, which can then be dynamically adjusted during execution. We therefore develop a device-level partial-compilation (DLPC) technique that reduces compilation overhead to nearly constant by using cheap remote procedure calls (RPC) from the QPU control software to the CPU. We then demonstrate the resulting speedup on optimal pulse calibration, system characterization using randomized benchmarking (RB), and variational algorithms. We execute this modified pipeline on real trapped-ion quantum computers and observe significant reductions in compilation time, with up to a 2.7x speedup for small-scale VQE problems.
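
The caching idea behind DLPC can be sketched as follows; all class and method names here are illustrative stubs of ours, not the paper's control-software API:

```python
# Hypothetical sketch of device-level partial compilation (DLPC): compile a
# parametrized binary once per circuit structure, then patch only the numeric
# pulse/gate parameters on each iteration, instead of re-running the compiler.

class StubQPU:
    def update_params(self, binary, params):   # stands in for the cheap RPC
        binary["params"] = list(params)        # from QPU control to the CPU
    def execute(self, binary):
        return sum(binary["params"])           # placeholder "measurement"

class PartialCompiler:
    def __init__(self, compile_fn):
        self.compile_fn = compile_fn
        self.cache = {}                         # structure -> compiled binary

    def run(self, structure, params, qpu):
        if structure not in self.cache:         # full compile happens once
            self.cache[structure] = self.compile_fn(structure)
        binary = self.cache[structure]
        qpu.update_params(binary, params)       # cheap parameter patch
        return qpu.execute(binary)

# In a VQE-style loop only `params` changes, so every iteration after the
# first skips device-level compilation entirely.
pc = PartialCompiler(lambda s: {"structure": s, "params": []})
qpu = StubQPU()
for theta in (0.1, 0.2, 0.3):
    pc.run("ansatz-v1", [theta], qpu)
```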

Deep supervised learning algorithms generally require large numbers of labeled examples to achieve satisfactory performance. However, collecting and labeling so many examples can be costly and time-consuming. As a subset of unsupervised learning, self-supervised learning (SSL) aims to learn useful features from unlabeled examples without any human-annotated labels. SSL has recently attracted much attention, and many related algorithms have been developed. However, few comprehensive studies explain the connections and evolution of different SSL variants. In this paper, we provide a review of various SSL methods from the perspectives of algorithms, applications, three main trends, and open questions. First, the motivations of most SSL algorithms are introduced in detail, and their commonalities and differences are compared. Second, typical applications of SSL in domains such as image processing and computer vision (CV), as well as natural language processing (NLP), are discussed. Finally, the three main trends of SSL and the open research questions are discussed. A collection of useful materials is available at //github.com/guijiejie/SSL.

Declarative Distributed Systems (DDSs) are distributed systems grounded in logic programming. Although DDS model-checking is undecidable in general, we detect decidable cases by tweaking the data-source bounds, the message expressiveness, and the channel type.

Recently, denoising diffusion probabilistic models (DDPM) have been applied to image segmentation by generating segmentation masks conditioned on images, but these applications have mainly been limited to 2D networks, without exploiting the potential benefits of a 3D formulation. In this work, we study DDPM-based segmentation models for 3D multiclass segmentation on two large multiclass data sets (prostate MR and abdominal CT). We observe that the discrepancy between training and test procedures leads to inferior performance for existing DDPM methods. To mitigate this inconsistency, we propose a recycling method that generates corrupted masks from the model's prediction at a previous time step instead of from the ground truth. The proposed method achieves statistically significant improvements over existing DDPMs, independently of a number of other techniques for reducing the train-test discrepancy, including performing mask prediction, using Dice loss, and reducing the number of diffusion time steps during training. Within the same compute budget, the performance of the diffusion models is also competitive with, and visually similar to, a non-diffusion-based U-Net. The JAX-based diffusion framework has been released at //github.com/mathpluscode/ImgX-DiffSeg.
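
A minimal sketch of the recycling idea as we read it; the noise schedule, function names, and dummy model below are illustrative only, and the released ImgX-DiffSeg framework defines its own:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(x, noise, t, T=1000):
    # Standard DDPM-style forward corruption with a toy linear
    # alpha-bar schedule (the real framework defines its own).
    abar = 1.0 - t / T
    return np.sqrt(abar) * x + np.sqrt(1.0 - abar) * noise

def recycling_step(model, image, gt_mask, t, loss_fn):
    """Corrupt the model's own prediction from a previous (noisier) time
    step instead of the ground-truth mask, so training matches the
    self-conditioned behaviour seen at test time."""
    noise = rng.normal(size=gt_mask.shape)
    x_prev = add_noise(gt_mask, noise, t + 1)      # noisier step t+1
    pred_mask = model(image, x_prev, t + 1)        # no-grad pass in practice
    x_t = add_noise(pred_mask, noise, t)           # recycle the prediction
    return loss_fn(model(image, x_t, t), gt_mask)  # e.g. Dice loss

# Toy usage with a dummy "model" that echoes its mask input.
dummy = lambda image, mask, t: mask
mse = lambda p, y: float(np.mean((p - y) ** 2))
print(recycling_step(dummy, np.zeros((4, 4)), np.ones((4, 4)), 10, mse))
```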

The log-conformation formulation, although highly successful, was from the beginning formulated as a partial differential equation that contains an eigenvalue decomposition of the unknown field, which is unusual for PDEs. To this day, most numerical implementations have been based on this or a similar eigenvalue decomposition, with Knechtges et al. (2014) being the only notable exception, and only for two-dimensional flows. In this paper, we present an eigenvalue-free algorithm to compute the constitutive equation of the log-conformation formulation that works for two- and three-dimensional flows. To this end, we first prove that the challenging terms in the constitutive equation are representable as a matrix function of a slightly modified matrix of the log-conformation field, and we prove the equivalence of this term to the more common log-conformation formulations. Based on this representation, we develop an eigenvalue-free algorithm to evaluate the matrix function. The resulting formulation is discretized using a finite volume method and tested on the confined-cylinder and sedimenting-sphere benchmarks.
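
For intuition, a matrix function $f(A)$ of a symmetric matrix can be evaluated without any eigendecomposition, e.g. from the Taylor coefficients of $f$ via Horner's scheme. This is a generic illustration only; the paper's algorithm targets its specific matrix function and is more careful numerically:

```python
import numpy as np
from math import factorial

def matfunc_taylor(A, coeffs):
    """Evaluate f(A) from the Taylor coefficients of f using Horner's
    scheme: no eigenvalue decomposition of A is ever formed."""
    F = coeffs[-1] * np.eye(A.shape[0])
    for c in reversed(coeffs[:-1]):    # F <- A @ F + c * I
        F = A @ F + c * np.eye(A.shape[0])
    return F

# Sanity check against the eigendecomposition route for f = exp.
A = np.array([[0.2, 0.05, 0.0], [0.05, 0.1, 0.02], [0.0, 0.02, 0.3]])
coeffs = [1.0 / factorial(k) for k in range(20)]
w, V = np.linalg.eigh(A)
assert np.allclose(matfunc_taylor(A, coeffs), V @ np.diag(np.exp(w)) @ V.T)
```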

Long-term outcomes of experimental evaluations are necessarily observed after long delays. We develop semiparametric methods for combining the short-term outcomes of experiments with observational measurements of short-term and long-term outcomes, in order to estimate long-term treatment effects. We characterize semiparametric efficiency bounds for various instances of this problem. These calculations facilitate the construction of several estimators. We analyze the finite-sample performance of these estimators with a simulation calibrated to data from an evaluation of the long-term effects of a poverty alleviation program.
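
One simple estimator in this spirit, purely as an illustration (the paper characterizes efficiency bounds and constructs several estimators; the function name and the linear first stage below are our own simplifications): learn the long-term/short-term relationship on observational data, then impute long-term outcomes for experimental units and difference by treatment arm:

```python
import numpy as np

def surrogate_index_ate(s_obs, y_obs, s_exp, d_exp):
    """Two-step sketch: (1) regress long-term outcome Y on short-term
    outcomes S in observational data; (2) impute Y for experimental
    units and take the difference in means by treatment arm D."""
    X = np.column_stack([np.ones(len(s_obs)), s_obs])
    beta, *_ = np.linalg.lstsq(X, y_obs, rcond=None)
    y_hat = np.column_stack([np.ones(len(s_exp)), s_exp]) @ beta
    return y_hat[d_exp == 1].mean() - y_hat[d_exp == 0].mean()

# Toy check: Y = 2*S + noise, treatment raises S by 1 => long-term ATE ~ 2.
rng = np.random.default_rng(1)
s_obs = rng.normal(size=500); y_obs = 2 * s_obs + rng.normal(size=500)
d_exp = rng.integers(0, 2, size=500)
s_exp = rng.normal(size=500) + d_exp
print(surrogate_index_ate(s_obs, y_obs, s_exp, d_exp))  # ~ 2.0
```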

With the rapid growth of mobile data traffic, the shortage of radio spectrum resources has become increasingly prominent. Millimeter wave (mmWave) small cells can be densely deployed in macro cells to improve network capacity and spectrum utilization; such a network architecture is referred to as mmWave heterogeneous cellular networks (HetNets). Compared with traditional wired backhaul, the integrated access and backhaul (IAB) architecture with wireless backhaul is more flexible and cost-effective for mmWave HetNets. However, the imbalance of throughput between the access and backhaul links constrains the total system throughput, so it is necessary to jointly design the radio access and backhaul links. In this paper, we study the joint optimization of user association and backhaul resource allocation in mmWave HetNets, where different mmWave bands are adopted by the access and backhaul links. Considering the non-convex and combinatorial characteristics of the optimization problem and the dynamic nature of the mmWave link, we propose a multi-agent deep reinforcement learning (MADRL) based scheme to maximize the long-term total link throughput of the network. The simulation results show that the scheme can not only adjust the user association and backhaul resource allocation strategy according to the dynamics of the access-link state, but also effectively improve the link throughput under different system configurations.
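
As a toy illustration of the multi-agent interaction loop, the sketch below uses a tabular epsilon-greedy stand-in where each user-agent picks a serving cell and all agents share a throughput reward. The paper's scheme uses deep multi-agent RL with a far richer state and action space; every name and the reward model here are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_cells, eps, lr = 4, 3, 0.1, 0.5
Q = np.zeros((n_users, n_cells))            # per-agent action values

def throughput(assoc):
    """Toy reward: each cell's unit capacity is split among its users."""
    loads = np.bincount(assoc, minlength=n_cells)
    return sum(1.0 / loads[c] for c in assoc)

for step in range(2000):
    assoc = np.where(rng.random(n_users) < eps,
                     rng.integers(0, n_cells, n_users),
                     Q.argmax(axis=1))      # epsilon-greedy per agent
    r = throughput(assoc)
    for u in range(n_users):                # all agents share the reward
        Q[u, assoc[u]] += lr * (r - Q[u, assoc[u]])

# Learned association and its throughput (agents learn to spread out).
print(Q.argmax(axis=1), throughput(Q.argmax(axis=1)))
```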

Human-in-the-loop aims to train an accurate prediction model at minimum cost by integrating human knowledge and experience. Humans can provide training data for machine learning applications and, with the help of machine-based approaches, directly accomplish tasks in the pipeline that are hard for computers. In this paper, we survey existing works on human-in-the-loop from a data perspective and classify them into three categories with a progressive relationship: (1) work that improves model performance through data processing, (2) work that improves model performance through interventional model training, and (3) the design of independent human-in-the-loop systems. Using this categorization, we summarize the major approaches in the field along with their technical strengths and weaknesses, and we briefly classify and discuss their use in natural language processing, computer vision, and other domains. We also present open challenges and opportunities. This survey intends to provide a high-level summary of human-in-the-loop and to motivate interested readers to consider approaches for designing effective human-in-the-loop solutions.
