
We explore the impact of coarse quantization on matrix completion in the extreme scenario of dithered one-bit sensing, where the matrix entries are compared with time-varying threshold levels. In particular, instead of observing a subset of high-resolution entries of a low-rank matrix, we have access to a small number of one-bit samples generated by these comparisons. To recover the low-rank matrix from its coarsely quantized known entries, we first transform the problem of one-bit matrix completion (one-bit MC) with time-varying thresholds into a nuclear norm minimization problem in which the one-bit sampled information is represented as linear inequality feasibility constraints. We then adapt the popular singular value thresholding (SVT) algorithm to accommodate these inequality constraints, which yields the One-Bit SVT (OB-SVT) algorithm. Our findings demonstrate that incorporating multiple time-varying sampling threshold sequences in one-bit MC can significantly improve the performance of the matrix completion algorithm. To this end, we employ diverse thresholding schemes, namely uniform, Gaussian, and discrete thresholds. To accelerate convergence, we introduce three variants of the OB-SVT algorithm. Among them is the randomized sketched OB-SVT, which, instead of using all of the available information at each iteration, operates on sketched data; this reduces the dimension of the operational space and accelerates convergence. Numerical evaluations comparing our proposed algorithm with the maximum likelihood estimation method previously employed for one-bit MC demonstrate that our approach achieves better recovery performance.
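The key algorithmic ingredient above is an SVT-style iteration in which the usual data-consistency step is replaced by a step that only penalizes entries violating the one-bit sign constraints $r_{ij}(X_{ij}-\tau_{ij})\ge 0$. The numpy sketch below illustrates one way such an iteration could look; the step sizes, stopping rule, and variable names are illustrative assumptions rather than the paper's exact OB-SVT.

```python
import numpy as np

def ob_svt_sketch(r, thresholds, mask, shape, tau=5.0, delta=1.2, iters=500):
    """Illustrative one-bit SVT-style iteration (a sketch, not the paper's exact OB-SVT).

    r          : +/-1 observations, r[i, j] = sign(M[i, j] - thresholds[i, j]) on the mask
    thresholds : time-varying dithering thresholds used at sampling time
    mask       : boolean array marking which entries were (one-bit) observed
    """
    Y = np.zeros(shape)
    X = np.zeros(shape)
    for _ in range(iters):
        # SVT shrinkage step: soft-threshold the singular values of the dual iterate.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt
        # One-bit feasibility step: only entries whose sign disagrees with the
        # observed comparison r * (X - threshold) >= 0 contribute a correction.
        slack = r * (X - thresholds)
        correction = np.where(mask & (slack < 0), -r * slack, 0.0)
        if np.abs(correction).sum() < 1e-8:
            break
        Y = Y + delta * correction
    return X
```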

Related Content

In the realm of large multi-modal models (LMMs), efficient modality alignment is crucial yet often constrained by the scarcity of high-quality image-text data. To address this bottleneck, we introduce the ShareGPT4V dataset, a pioneering large-scale resource featuring 1.2 million highly descriptive captions, which surpasses existing datasets in diversity and information content, covering world knowledge, object properties, spatial relationships, and aesthetic evaluations. Specifically, ShareGPT4V originates from a curated set of 100K high-quality captions collected from the advanced GPT4-Vision model and has been expanded to 1.2M captions with a superb caption model trained on this subset. ShareGPT4V first demonstrates its effectiveness in the Supervised Fine-Tuning (SFT) phase: substituting an equivalent quantity of detailed captions in existing SFT datasets with a subset of our high-quality captions significantly enhances LMMs such as LLaVA-7B, LLaVA-1.5-13B, and Qwen-VL-Chat-7B on the MME and MMBench benchmarks, with respective gains of 222.8/22.0/22.3 and 2.7/1.3/1.5. We further incorporate ShareGPT4V data into both the pre-training and SFT phases, obtaining ShareGPT4V-7B, a superior LMM based on a simple architecture that achieves remarkable performance across a majority of multi-modal benchmarks. This project is available at //ShareGPT4V.github.io to serve as a pivotal resource for advancing the LMMs community.

Numerous quantum algorithms require the use of quantum error correction to overcome the intrinsic unreliability of physical qubits. However, error correction imposes a unique performance bottleneck, known as T-complexity, that can make an implementation of an algorithm as a quantum program run more slowly than on idealized hardware. In this work, we identify that programming abstractions for control flow, such as the quantum if-statement, can introduce polynomial increases in the T-complexity of a program. If not mitigated, this slowdown can diminish the computational advantage of a quantum algorithm. To enable reasoning about the costs of control flow, we present a cost model, using which a developer can analyze the T-complexity of a program under quantum error correction and pinpoint the sources of slowdown. We also present a set of program-level optimizations, using which a developer can rewrite a program to reduce its T-complexity, predict the T-complexity of the optimized program using the cost model, and then compile it to an efficient circuit via a straightforward strategy. We implement the program-level optimizations in Spire, an extension of the Tower quantum compiler. Using a set of 11 benchmark programs that use control flow, we show that the cost model is accurate, and that Spire's optimizations recover programs that are asymptotically efficient, meaning their runtime T-complexity under error correction is equal to their time complexity on idealized hardware. Our results show that optimizing a program before it is compiled to a circuit can yield better results than compiling the program to an inefficient circuit and then invoking a quantum circuit optimizer found in prior work. For our benchmarks, only 2 of 8 existing circuit optimizers recover circuits with asymptotically efficient T-complexity. Compared to these 2 optimizers, Spire uses 54x to 2400x less compile time.

Solving inverse problems requires knowledge of the forward operator, but accurate models can be computationally expensive and hence cheaper variants are desired that do not compromise reconstruction quality. This chapter reviews reconstruction methods in inverse problems with learned forward operators that follow two different paradigms. The first one is completely agnostic to the forward operator and learns its restriction to the subspace spanned by the training data. The framework of regularisation by projection is then used to find a reconstruction. The second one uses a simplified model of the physics of the measurement process and only relies on the training data to learn a model correction. We present the theory of these two approaches and compare them numerically. A common theme emerges: both methods require, or at least benefit from, training data not only for the forward operator, but also for its adjoint.
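As a concrete reading of the first, operator-agnostic paradigm: learn the forward operator's action on the subspace spanned by the training signals, then reconstruct by projection onto that subspace. The short numpy sketch below illustrates this idea under simple assumptions (a QR basis for the training span and a Tikhonov-regularised coefficient solve); it is an orientation aid, not the chapter's exact construction.

```python
import numpy as np

def learn_restricted_operator(X_train, Y_train):
    """X_train: columns are training signals x_i; Y_train: columns are measurements A x_i.
    Learns the action of the (unknown) forward operator A on span(X_train)."""
    # Orthonormal basis V for the training-signal subspace.
    V, _ = np.linalg.qr(X_train)
    # Coefficients of each x_i in that basis, and the learned reduced operator A_V = A V.
    C = V.T @ X_train
    A_V = Y_train @ np.linalg.pinv(C)   # maps basis coefficients to measurements
    return V, A_V

def reconstruct_by_projection(V, A_V, y, reg=1e-3):
    """Regularisation by projection: seek x = V c whose predicted data A_V c matches y."""
    c = np.linalg.solve(A_V.T @ A_V + reg * np.eye(A_V.shape[1]), A_V.T @ y)
    return V @ c
```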

QR decomposition is an essential operation for solving linear equations and obtaining least-squares solutions. In high-performance computing systems, large-scale parallel QR decomposition often faces node faults. We address this issue by proposing a fault-tolerant algorithm that incorporates `coded computing' into the parallel Gram-Schmidt method, commonly used for QR decomposition. Coded computing introduces error-correcting codes into computational processes to enhance resilience against intermediate failures. While traditional coding strategies cannot preserve the orthogonality of $Q$, recent work has proven a post-orthogonalization condition that allows low-cost restoration of the degraded orthogonality. In this paper, we construct a checksum-generator matrix for multiple-node failures that satisfies the post-orthogonalization condition and prove that our code satisfies the maximum-distance separable (MDS) property with high probability. Furthermore, we consider the in-node checksum storage setting, where checksums are stored in the original nodes. We obtain the minimal number of checksums required to be resilient to any $f$ failures under in-node checksum storage, and also propose an in-node systematic MDS coding strategy that achieves this lower bound. Extensive experiments validate our theory and showcase the negligible overhead of our coded computing framework for fault-tolerant QR decomposition.
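The basic coded-computing mechanism referred to above is checksum encoding of row blocks so that a failed node's blocks can be rebuilt from the survivors. The sketch below illustrates only that encode/recover step with a generic (here random) checksum-generator matrix; it does not implement the paper's construction satisfying the post-orthogonalization condition, nor the in-node storage scheme.

```python
import numpy as np

def encode_with_checksums(blocks, G):
    """Append f checksum blocks G @ [A_1; ...; A_p] to the p original row blocks.
    G plays the role of a checksum-generator matrix; a random Gaussian G is used here
    purely for illustration and does NOT enforce the paper's post-orthogonalization condition."""
    stacked = np.stack(blocks)                    # (p, rows_per_block, cols)
    checks = np.tensordot(G, stacked, axes=1)     # (f, rows_per_block, cols)
    return list(stacked), list(checks)

def recover_lost_blocks(blocks, checks, G, lost):
    """MDS-style decoding: rebuild the blocks indexed by `lost` from survivors + checksums."""
    p = len(blocks)
    survive = [i for i in range(p) if i not in lost]
    # Subtract the surviving blocks' contribution from every checksum.
    rhs = np.stack([checks[j] - sum(G[j, i] * blocks[i] for i in survive)
                    for j in range(len(checks))])
    rows, cols = rhs.shape[1], rhs.shape[2]
    # Solve G[:, lost] @ X = rhs for the lost blocks (that submatrix must have full column rank).
    X, *_ = np.linalg.lstsq(G[:, lost], rhs.reshape(len(checks), -1), rcond=None)
    return [X[k].reshape(rows, cols) for k in range(len(lost))]

# Toy usage: 4 row blocks, 2 checksums, lose blocks 1 and 3, then recover them.
rng = np.random.default_rng(0)
blocks = [rng.standard_normal((5, 3)) for _ in range(4)]
G = rng.standard_normal((2, 4))
orig, checks = encode_with_checksums(blocks, G)
recovered = recover_lost_blocks(orig, checks, G, lost=[1, 3])
print(np.allclose(recovered[0], blocks[1]), np.allclose(recovered[1], blocks[3]))
```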

Perturbation theory is developed to analyze the impact of noise on data and has been an essential part of numerical analysis. Recently, it has played an important role in designing and analyzing matrix algorithms. One of the most useful tools in this subject, the Davis-Kahan sine theorem, provides an $\ell_2$ error bound on the perturbation of the leading singular vectors (and spaces). We focus on the case when the signal matrix has low rank and the perturbation is random, which occurs often in practice. In an earlier paper, O'Rourke, Wang, and the second author showed that in this case, one can obtain an improved theorem. In particular, the noise-to-gap ratio condition in the original setting can be weakened considerably. In the current paper, we develop an infinity norm version of the O'Rourke-Vu-Wang result. The key ideas in the proof are a new bootstrapping argument and the so-called iterative leave-one-out method, which may be of independent interest. Applying the new bounds, we develop new, simple, and quick algorithms for several well-known problems, such as finding hidden partitions and matrix completion. The core of these new algorithms is the fact that one is now able to quickly approximate certain key objects in the infinity norm, which has critical advantages over approximations in the $\ell_2$ norm, Frobenius norm, or spectral norm.
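For orientation, one standard operator-norm form of the classical $\ell_2$ result being strengthened here can be stated as follows (this is the textbook bound with the unperturbed gap, quoted from memory as context, not the paper's new $\ell_\infty$ bound): let $A$ and $\tilde A = A + E$ be symmetric, let $V$ and $\tilde V$ span their top-$r$ eigenspaces, and suppose the gap $\delta = \lambda_r(A) - \lambda_{r+1}(A)$ is positive; then
\[
  \|\sin\Theta(V,\tilde V)\|_2 \;\le\; \frac{2\,\|E\|_2}{\delta}.
\]
The result above replaces this kind of spectral-norm control by entrywise (infinity-norm) control of the perturbed singular vectors, which is what enables the fast algorithms for hidden partitions and matrix completion.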

Consider a matroid where all elements are labeled with an element from $\mathbb{Z}_m$. We are interested in finding a base where the sum of the labels is congruent to $g \pmod m$. We show that this problem can be solved in $O(m^{2m} n r \log r)$ time for a matroid with $n$ elements and rank $r$, when $m$ is either the product of two primes or a prime power. The algorithm generalizes to all moduli, and in fact, to all abelian groups, if a classic additive combinatorics conjecture of Schrijver and Seymour is true. We also discuss the optimization version of the problem.
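As a toy illustration of the problem itself (not of the algorithm), the following brute-force Python snippet searches for such a base in the special case of a graphic matroid, where bases are spanning trees; all names are hypothetical and the search is exponential, in contrast to the $O(m^{2m} n r \log r)$ algorithm above.

```python
import itertools

def congruent_spanning_tree(n, edges, labels, m, g):
    """Find a spanning tree of an n-vertex graph whose edge labels sum to g (mod m)."""
    def is_spanning_tree(subset):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        merged = 0
        for (u, v) in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False          # the edge closes a cycle: not an independent set
            parent[ru] = rv
            merged += 1
        return merged == n - 1        # n-1 independent edges span all n vertices
    for idx in itertools.combinations(range(len(edges)), n - 1):
        if is_spanning_tree([edges[i] for i in idx]) and sum(labels[i] for i in idx) % m == g % m:
            return [edges[i] for i in idx]
    return None

# Example: 4-cycle with labels in Z_3; look for a spanning tree whose label sum is 0 (mod 3).
print(congruent_spanning_tree(4, [(0, 1), (1, 2), (2, 3), (3, 0)], [1, 2, 2, 2], 3, 0))
```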

We present experimental and theoretical results on a method that applies a numerical solver iteratively to solve several non-negative quadratic programming problems in geometric optimization. The method gains efficiency by exploiting the potential sparsity of the intermediate solutions. We implemented the method to call quadprog of MATLAB iteratively. In comparison with a single call of quadprog, we obtain a 10-fold speedup on two proximity graph problems in $\mathbb{R}^d$ on some public data sets, a 10-fold speedup on the minimum enclosing ball problem on random points in a unit cube in $\mathbb{R}^d$, and a 5-fold speedup on the polytope distance problem on random points from a cube in $\mathbb{R}^d$ when the input size is significantly larger than the dimension; we also obtain a 2-fold or greater speedup on deblurring some gray-scale space and thermal images via non-negative least squares. We compare with two minimum enclosing ball software packages, by G\"{a}rtner and by Fischer et al.; for 1000 nearly cospherical points or random points in a unit cube, the iterative method overtakes the software by G\"{a}rtner at 20 dimensions and the software by Fischer et al. at 170 dimensions. In the image deblurring experiments, the iterative method compares favorably with other software that can solve non-negative least squares, including FISTA with backtracking, SBB, FNNLS, and lsqnonneg of MATLAB. We analyze theoretically the number of iterations taken by the iterative scheme to reduce the gap between the current solution value and the optimum by a factor of $e$. Under certain assumptions, we prove a bound proportional to the square root of the number of variables.
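To make the idea of iteratively calling a solver while exploiting sparsity concrete, the following Python sketch applies it to non-negative least squares: it grows a small working set of variables driven by KKT violations and calls an off-the-shelf NNLS routine only on that subset. The working-set growth rule and batch size are assumptions for illustration, not the paper's scheme or its MATLAB quadprog setup.

```python
import numpy as np
from scipy.optimize import nnls

def sparse_iterative_nnls(A, b, batch=50, max_rounds=100, tol=1e-8):
    """Iterative working-set sketch for min ||Ax - b||^2 subject to x >= 0."""
    n = A.shape[1]
    x = np.zeros(n)
    support = np.array([], dtype=int)
    for _ in range(max_rounds):
        # KKT check: the gradient A^T (Ax - b) must be >= 0 on the zero coordinates.
        grad = A.T @ (A @ x - b)
        violators = np.where((x <= 0) & (grad < -tol))[0]
        if violators.size == 0:
            break
        # Grow the working set with only the most violating coordinates.
        new = violators[np.argsort(grad[violators])[:batch]]
        support = np.union1d(support, new)
        # Solve the restricted non-negative least-squares subproblem.
        x_sub, _ = nnls(A[:, support], b)
        x = np.zeros(n)
        x[support] = x_sub
    return x
```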

Quantum Hamiltonian simulation, which simulates the evolution of quantum systems and probes quantum phenomena, is one of the most promising applications of quantum computing. Recent experimental results suggest that Hamiltonian-oriented analog quantum simulation would be advantageous over circuit-oriented digital quantum simulation in the Noisy Intermediate-Scale Quantum (NISQ) machine era. However, programming analog quantum simulators is much more challenging due to the lack of a unified interface between hardware and software. In this paper, we design and implement SimuQ, the first framework for quantum Hamiltonian simulation that supports Hamiltonian programming and pulse-level compilation to heterogeneous analog quantum simulators. Specifically, in SimuQ, front-end users specify the target quantum system with Hamiltonian Modeling Language, and the Hamiltonian-level programmability of analog quantum simulators is specified through a new abstraction called the abstract analog instruction set (AAIS) and programmed in AAIS Specification Language by hardware providers. Through a solver-based compilation, SimuQ generates executable pulse schedules for real devices to simulate the evolution of desired quantum systems, which is demonstrated on superconducting (IBM), neutral-atom (QuEra), and trapped-ion (IonQ) quantum devices. Moreover, we demonstrate the advantages of exposing the Hamiltonian-level programmability of devices with native operations or interaction-based gates and establish a small benchmark of quantum simulation to evaluate SimuQ's compiler with the above analog quantum simulators.
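As background on the task itself, and entirely separate from SimuQ's analog, pulse-level compilation path, the tiny numpy sketch below shows what simulating Hamiltonian evolution means in the digital setting: a first-order Trotter product approximating $e^{-iHt}$ for a two-qubit Ising-type Hamiltonian.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and a tensor-product helper.
I2 = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])
def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Target system: H = Z(x)Z + 0.5 (X(x)I + I(x)X), a 2-qubit transverse-field Ising model.
H_zz = kron(Z, Z)
H_x = 0.5 * (kron(X, I2) + kron(I2, X))
H = H_zz + H_x

t, steps = 1.0, 100
exact = expm(-1j * H * t)
# First-order Trotter product: alternate short evolutions under the two terms.
step = expm(-1j * H_zz * t / steps) @ expm(-1j * H_x * t / steps)
trotter = np.linalg.matrix_power(step, steps)
print("Trotter error (operator norm):", np.linalg.norm(exact - trotter, 2))
```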

Categorical semantics of type theories are often characterized as structure-preserving functors. This is because in category theory both the syntax and the domain of interpretation are uniformly treated as structured categories, so that we can express interpretations as structure-preserving functors between them. This mathematical characterization of semantics makes it convenient to manipulate and to reason about relationships between interpretations. Motivated by this success of functorial semantics, we address the question of finding a functorial analogue in abstract interpretation, a general framework for comparing semantics, so that we can bring similar benefits of functorial semantics to semantic abstractions used in abstract interpretation. Major differences concern the notion of interpretation that is being considered. Indeed, conventional semantics are value-based whereas abstract interpretation typically deals with more complex properties. In this paper, we propose a functorial approach to abstract interpretation and study associated fundamental concepts therein. In our approach, interpretations are expressed as oplax functors in the category of posets, and abstraction relations between interpretations are expressed as lax natural transformations representing concretizations. We present examples of these formal concepts from monadic semantics of programming languages and discuss soundness.
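For orientation, here is a poset-level reading of the two notions used above, under one common convention (offered as an aid to the reader, not as the paper's exact definitions). An interpretation is an oplax functor $F$ into posets, so composition and identities are preserved only up to inequality:
\[
  F(g \circ f) \;\sqsubseteq\; F(g) \circ F(f), \qquad F(\mathrm{id}_X) \;\sqsubseteq\; \mathrm{id}_{FX},
\]
and a concretization from an abstract interpretation $F^{\sharp}$ to a concrete one $F$ is a lax natural transformation $\gamma$ whose relaxed naturality square is the familiar soundness inequality
\[
  F(f) \circ \gamma_X \;\sqsubseteq\; \gamma_Y \circ F^{\sharp}(f) \qquad \text{for every } f : X \to Y.
\]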

In multi-turn dialog, utterances do not always take the full form of sentences \cite{Carbonell1983DiscoursePA}, which naturally makes understanding the dialog context more difficult. However, fully grasping the dialog context is essential to generating a reasonable response. Hence, in this paper, we propose to improve response generation performance by examining the model's ability to answer a reading comprehension question focused on the information omitted from the dialog. Inspired by the multi-task learning scheme, we propose a joint framework that unifies these two tasks, sharing the same encoder to extract common, task-invariant features, with different decoders to learn task-specific features. To better fuse information from the question and the dialog history during encoding, we propose to augment the Transformer architecture with a memory updater, which is designed to selectively store and update the dialog history information so as to support downstream tasks. For the experiments, we employ human annotators to write and examine a large-scale dialog reading comprehension dataset. Extensive experiments are conducted on this dataset, and the results show that the proposed model brings substantial improvements over several strong baselines on both tasks. In this way, we demonstrate that reasoning can indeed help better response generation and vice versa. We release our large-scale dataset for further research.
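To make the memory-updater idea tangible, here is a hedged PyTorch sketch of a gated updater that lets each memory slot attend to the states of the current turn and then blends old and new content with a learned gate; the module, its dimensions, and the gating rule are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MemoryUpdater(nn.Module):
    """Gated memory updater for dialog history (an illustration of the general idea)."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, memory, turn_states):
        # memory:      (batch, mem_slots, d_model)  running summary of the dialog history
        # turn_states: (batch, turn_len, d_model)   encoder states of the current utterance
        # Let each memory slot attend to the new turn to propose an update ...
        candidate, _ = self.attn(memory, turn_states, turn_states)
        # ... then blend old and new content with a learned, slot-wise gate.
        g = torch.sigmoid(self.gate(torch.cat([memory, candidate], dim=-1)))
        return g * memory + (1.0 - g) * candidate

# Toy usage: update a 4-slot memory with the encoder states of a 10-token utterance.
updater = MemoryUpdater()
mem = torch.randn(2, 4, 512)
turn = torch.randn(2, 10, 512)
print(updater(mem, turn).shape)   # torch.Size([2, 4, 512])
```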
