
Strictly serializable datastores greatly simplify the development of correct applications by providing strong consistency guarantees. However, existing techniques pay unnecessary costs for naturally consistent transactions, which arrive at servers in an order that is already strictly serializable. We find these transactions are prevalent in datacenter workloads. We exploit this natural arrival order by executing transaction requests with minimal costs while optimistically assuming they are naturally consistent, and then leverage a timestamp-based technique to efficiently verify whether the execution is indeed consistent. In the process of designing such a timestamp-based technique, we identify a fundamental pitfall in relying on timestamps to provide strict serializability, and name it the timestamp-inversion pitfall. We find timestamp-inversion has affected several existing works. We present Natural Concurrency Control (NCC), a new concurrency control technique that guarantees strict serializability and ensures minimal costs -- i.e., one-round latency, lock-free, and non-blocking execution -- in the best (and common) case by leveraging natural consistency. NCC is enabled by three key components: non-blocking execution, decoupled response control, and timestamp-based consistency check. NCC avoids timestamp-inversion with a new technique, response timing control, and introduces two optimization techniques, asynchrony-aware timestamps and smart retry, to reduce false aborts. Moreover, NCC includes a specialized protocol for read-only transactions, which is the first to achieve the optimal best-case performance while ensuring strict serializability, without relying on synchronized clocks. Our evaluation shows that NCC outperforms state-of-the-art solutions by an order of magnitude on many workloads.
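For intuition, here is a minimal Python sketch of the optimistic execute-then-verify pattern the abstract describes. The names (`Server`, `execute`, `validate`, `commit`) are illustrative placeholders rather than NCC's actual interface, and the check shown is a generic timestamp-based validation, not the paper's protocol.

```python
# Illustrative sketch (not the paper's implementation): execute a transaction
# optimistically, assuming requests arrived in a naturally consistent order,
# then use per-key timestamps to check that the assumption actually held.

class Server:
    def __init__(self):
        self.store = {}        # key -> value
        self.version = {}      # key -> timestamp of the last committed write

    def execute(self, ts, reads):
        """Optimistic, lock-free execution: read current values immediately."""
        return {k: (self.store.get(k), self.version.get(k, 0)) for k in reads}

    def validate(self, ts, snapshot, writes):
        """Timestamp-based consistency check: fail if any key observed by the
        transaction was overwritten since it was read, or if a write would go
        backwards in timestamp order."""
        for key, (_, seen_version) in snapshot.items():
            if self.version.get(key, 0) != seen_version:
                return False
        return all(self.version.get(key, 0) < ts for key in writes)

    def commit(self, ts, writes):
        for key, value in writes.items():
            self.store[key] = value
            self.version[key] = ts
```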

Related Content

Given a set of calibrated images of a scene, we present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives. While many approaches focus on recovering high-fidelity 3D scenes, we focus on parsing a scene into mid-level 3D representations made of a small set of textured primitives. Such representations are interpretable, easy to manipulate and suited for physics-based simulations. Moreover, unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images through differentiable rendering. Specifically, we model primitives as textured superquadric meshes and optimize their parameters from scratch with an image rendering loss. We highlight the importance of modeling transparency for each primitive, which is critical for optimization and also enables handling varying numbers of primitives. We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points, while providing amodal shape completions of unseen object regions. We compare our approach to the state of the art on diverse scenes from DTU, and demonstrate its robustness on real-life captures from BlendedMVS and Nerfstudio. We also showcase how our results can be used to effortlessly edit a scene or perform physical simulations. Code and video results are available at https://www.tmonnier.com/DBW .
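For background on the primitive family mentioned above, the standard superquadric inside-outside function from the graphics literature (given here only as a reference point; the paper's exact parameterization may differ) for scales $a_1, a_2, a_3$ and shape exponents $\varepsilon_1, \varepsilon_2$ is
$$F(x,y,z) = \left( \left|\frac{x}{a_1}\right|^{2/\varepsilon_2} + \left|\frac{y}{a_2}\right|^{2/\varepsilon_2} \right)^{\varepsilon_2/\varepsilon_1} + \left|\frac{z}{a_3}\right|^{2/\varepsilon_1},$$
with $F<1$ inside the shape, $F=1$ on its surface, and $F>1$ outside; varying $\varepsilon_1, \varepsilon_2$ sweeps between box-like, cylindrical, and ellipsoidal shapes, which is what makes a small set of such primitives expressive enough to summarize a scene.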

Empirical research typically involves a robustness-efficiency tradeoff. A researcher seeking to estimate a scalar parameter can invoke strong assumptions to motivate a restricted estimator that is precise but may be heavily biased, or they can relax some of these assumptions to motivate a more robust, but variable, unrestricted estimator. When a bound on the bias of the restricted estimator is available, it is optimal to shrink the unrestricted estimator towards the restricted estimator. For settings where a bound on the bias of the restricted estimator is unknown, we propose adaptive shrinkage estimators that minimize the percentage increase in worst case risk relative to an oracle that knows the bound. We show that adaptive estimators solve a weighted convex minimax problem and provide lookup tables facilitating their rapid computation. Revisiting five empirical studies where questions of model specification arise, we examine the advantages of adapting to -- rather than testing for -- misspecification.
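As a stylized illustration of the shrinkage logic (our own simplification, not taken from the paper: the restricted estimator $\hat\theta_R$ is treated as non-random with bias at most $B$, the unrestricted estimator $\hat\theta_U$ is unbiased with variance $V$, and their covariance is ignored), shrinking by a weight $\lambda \in [0,1]$ gives
$$\hat\theta(\lambda) = (1-\lambda)\,\hat\theta_U + \lambda\,\hat\theta_R, \qquad \sup_{|b|\le B} \mathrm{MSE}\big(\hat\theta(\lambda)\big) = (1-\lambda)^2 V + \lambda^2 B^2,$$
which is minimized at $\lambda^* = V/(V+B^2)$. When the bound $B$ is unknown, no single $\lambda$ is optimal for every $B$; the adaptive estimators described above instead minimize the worst-case percentage increase in risk relative to an oracle that knows $B$.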

Linearizability is a standard correctness criterion for concurrent algorithms, typically proved by establishing the algorithms' linearization points (LP). However, LPs often hinder abstraction, and for some algorithms such as the timestamped stack, it is unclear how to even identify their LPs. In this paper, we show how to develop declarative proofs of linearizability by foregoing LPs and instead employing axiomatization of so-called visibility relations. While visibility relations have been considered before for the timestamped stack, our study is the first to show how to derive the axiomatization systematically and intuitively from the sequential specification of the stack. In addition to the visibility relation, a novel separability relation emerges to generalize real-time precedence of procedure invocation. The visibility and separability relations have natural definitions for the timestamped stack, and enable a novel proof that reduces the algorithm to a simplified form where the timestamps are generated atomically.

Linearizability is a standard correctness criterion for concurrent algorithms, typically proved by establishing the algorithms' linearization points. However, relying on linearization points leads to proofs that are implementation-dependent, and thus hinder abstraction and reuse. In this paper we show that one can develop more declarative proofs by foregoing linearization points and instead relying on a technique of axiomatization of visibility relations. While visibility relations have been considered before, ours is the first study where the challenge is to formalize the helping nature of the algorithms. In particular, we show that by axiomatizing the properties of separation between events that contain bunches of help requests, we can extract what is common for high-level understanding of several descriptor-based helping algorithms of Harris et al. (RDCSS, MCAS, and optimizations), and produce novel proofs of their linearizability that share significant components.

Segmentation of objects in a video is challenging due to nuances such as motion blurring, parallax, occlusions, changes in illumination, etc. Instead of addressing these nuances separately, we focus on building a generalizable solution that avoids overfitting to the individual intricacies. Such a solution would also help us save enormous resources involved in human annotation of video corpora. To solve Video Object Segmentation (VOS) in an unsupervised setting, we propose a new pipeline (FODVid) based on the idea of guiding segmentation outputs using flow-guided graph-cut and temporal consistency. Specifically, we design a segmentation model incorporating intra-frame appearance and flow similarities, and inter-frame temporal continuation of the objects under consideration. We perform an extensive experimental analysis of our straightforward methodology on the standard DAVIS16 video benchmark. Though simple, our approach produces results comparable (within a range of ~2 mIoU) to the existing top approaches in unsupervised VOS. The simplicity and effectiveness of our technique open up new avenues for research in the video domain.
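As a rough illustration of how intra-frame appearance and flow similarities can be combined into pairwise terms for a graph-cut style formulation (a generic sketch under assumed Gaussian affinities; not the FODVid pipeline itself):

```python
# Illustrative sketch: combine appearance and optical-flow similarity into
# pairwise weights between neighboring pixels. `frame` is an HxWx3 RGB image,
# `flow` an HxWx2 flow field; sigma_app and sigma_flow are assumed bandwidths.
import numpy as np

def pairwise_weights(frame, flow, sigma_app=0.1, sigma_flow=1.0):
    frame = frame.astype(np.float64)
    flow = flow.astype(np.float64)

    def affinity(a, b, sigma):
        # Gaussian affinity between two per-pixel feature maps.
        d2 = np.sum((a - b) ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    # Weights between each pixel and its right / bottom neighbor.
    right = (affinity(frame[:, :-1], frame[:, 1:], sigma_app)
             * affinity(flow[:, :-1], flow[:, 1:], sigma_flow))
    down = (affinity(frame[:-1, :], frame[1:, :], sigma_app)
            * affinity(flow[:-1, :], flow[1:, :], sigma_flow))
    # A min-cut/max-flow solver would then segment the frame using these
    # pairwise terms together with unary (foreground/background) terms.
    return right, down
```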

Given a graph $G=(V,E)$, for a vertex set $S\subseteq V$, let $N(S)$ denote the set of vertices in $V$ that have a neighbor in $S$. Extending the concept of binding number of graphs by Woodall (1973), for a vertex set $X \subseteq V$, we define the binding number of $X$, denoted by $\mathrm{bind}(X)$, as the maximum number $b$ such that for every $S \subseteq X$ with $N(S)\neq V(G)$ it holds that $|N(S)|\ge b|S|$. Given this definition, we prove that if a graph $G$ contains a subset $X$ with $\mathrm{bind}(X)= 1/k$ where $k$ is an integer, then $G$ possesses a matching of size at least $|X|/(k+1)$. Using this statement, we derive tight bounds for the estimators of the matching size in planar graphs. These estimators were previously used in designing sublinear-space algorithms for approximating the matching size in the data stream model of computation. In particular, we show that the number of locally superior vertices is a $3$-factor approximation of the matching size in planar graphs. The previous analysis by Jowhari (2023) proved a $3.5$ approximation factor. As another application, we show that a simple variant of an estimator by Esfandiari et al. (2015) achieves a $3$-factor approximation of the matching size in planar graphs. Namely, let $s$ be the number of edges with both endpoints having degree at most $2$ and let $h$ be the number of vertices with degree at least $3$. We prove that when the graph is planar, the size of the matching is at least $(s+h)/3$. This result generalizes the known fact that every planar graph on $n$ vertices with minimum degree $3$ has a matching of size at least $n/3$.
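The $(s+h)/3$ bound translates directly into a simple estimator. A minimal Python sketch (the helper name `matching_lower_bound` is ours, not from the paper):

```python
# Compute the planar-graph matching-size lower bound (s + h) / 3 described
# above, where s = number of edges with both endpoints of degree at most 2
# and h = number of vertices of degree at least 3.
from collections import defaultdict

def matching_lower_bound(edges):
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    s = sum(1 for u, v in edges if degree[u] <= 2 and degree[v] <= 2)
    h = sum(1 for d in degree.values() if d >= 3)
    return (s + h) / 3

# Example: a path on 4 vertices (planar, all degrees <= 2) has s = 3, h = 0,
# so the bound is 1, consistent with it having a matching of size >= 1.
print(matching_lower_bound([(0, 1), (1, 2), (2, 3)]))  # 1.0
```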

This paper focuses on the problem of coflow scheduling with precedence constraints in identical parallel networks, which is a well-known $\mathcal{NP}$-hard problem. Coflow is a relatively new network abstraction used to characterize communication patterns in data centers. Both flow-level scheduling and coflow-level scheduling problems are examined, with the key distinction being the scheduling granularity. The proposed algorithm effectively determines the scheduling order of coflows by employing the primal-dual method. When considering workload sizes and weights that are dependent on the network topology in the input instances, our proposed algorithm for the flow-level scheduling problem achieves an approximation ratio of $O(\chi)$, where $\chi$ is the coflow number of the longest path in the directed acyclic graph (DAG). Additionally, when taking into account workload sizes that are topology-dependent, the algorithm achieves an approximation ratio of $O(R\chi)$, where $R$ represents the ratio of maximum weight to minimum weight. For the coflow-level scheduling problem, the proposed algorithm achieves an approximation ratio of $O(m\chi)$, where $m$ is the number of network cores, when considering workload sizes and weights that are topology-dependent. Moreover, when considering workload sizes that are topology-dependent, the algorithm achieves an approximation ratio of $O(Rm\chi)$. For the problem of scheduling coflows of multi-stage jobs, the proposed algorithm achieves an approximation ratio of $O(\chi)$. Although our theoretical results are based on a limited set of input instances, experimental findings show that the results for general input instances outperform the theoretical results, thereby demonstrating the effectiveness and practicality of the proposed algorithm.

Supply chain operations traditionally involve a variety of complex decision-making problems. Over the last few decades, supply chains have greatly benefited from advances in computation, which allowed the transition from manual processing to automation and cost-effective optimization. Nonetheless, business operators still need to spend substantial effort explaining and interpreting the optimization outcomes to stakeholders. Motivated by the recent advances in Large Language Models (LLMs), we study how this disruptive technology can help bridge the gap between supply chain automation and human comprehension and trust thereof. We design a framework that accepts queries in plain text as input and outputs insights about the underlying optimization outcomes. Our framework does not forgo the state-of-the-art combinatorial optimization technology, but rather leverages it to quantitatively answer what-if scenarios (e.g., how would the cost change if we used supplier B instead of supplier A for a given demand?). Importantly, our design does not require sending proprietary data over to LLMs, which can be a privacy concern in some circumstances. We demonstrate the effectiveness of our framework on a real server placement scenario within Microsoft's cloud supply chain. Along the way, we develop a general evaluation benchmark, which can be used to evaluate the accuracy of the LLM output in other scenarios.
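A minimal sketch of the what-if pattern described above, with hypothetical names (`solve_placement`, `answer_what_if`) standing in for the real optimizer and framework; the point is that an LLM only needs to translate the plain-text question into a structured scenario edit, while the proprietary data and the solver stay local.

```python
# Illustrative sketch, not the paper's framework.

def solve_placement(demands, supplier_costs):
    """Stand-in for the real optimizer: assign each demand to its cheapest allowed supplier."""
    return sum(d["qty"] * min(supplier_costs[s] for s in d["allowed"]) for d in demands)

def answer_what_if(demands, supplier_costs, scenario_edit):
    """Answer e.g. 'how would the cost change if we used supplier B instead of
    supplier A for a given demand?' by re-solving the edited instance."""
    baseline = solve_placement(demands, supplier_costs)
    edited = [dict(d, **scenario_edit.get(d["id"], {})) for d in demands]
    alternative = solve_placement(edited, supplier_costs)
    return baseline, alternative, alternative - baseline

demands = [{"id": "d1", "qty": 10, "allowed": ["A", "B"]},
           {"id": "d2", "qty": 5, "allowed": ["A"]}]
costs = {"A": 3.0, "B": 4.0}
# Scenario edit produced from the user's question: force demand d1 onto supplier B.
print(answer_what_if(demands, costs, {"d1": {"allowed": ["B"]}}))  # (45.0, 55.0, 10.0)
```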

This work presents the first application of self-supervised learning to data from digital antenna arrays. Encoder-decoder networks are pretrained on digital array data to perform a self-supervised noisy-reconstruction task called channel in-painting, in which the network infers the contents of array data that has been masked with zeros. The self-supervised step requires no human-labeled data. The encoder architecture and weights from pretraining are then transferred to a new network with a task-specific decoder, and the new network is trained on a small volume of labeled data. We show that pretraining on the unlabeled data allows the new network to perform the task of bandwidth regression on the digital array data better than an equivalent network trained on the same labeled data from random initialization.
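A minimal PyTorch sketch of a channel in-painting pretext task of this kind (assumed shapes and a toy encoder-decoder; not the paper's architecture):

```python
# Illustrative sketch: zero out random channels of array data and train an
# encoder-decoder to reconstruct the masked entries. No labels are required.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, channels=16, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, channels)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def inpainting_step(model, optimizer, batch, mask_prob=0.25):
    # batch: (batch_size, channels) snapshot of per-channel samples.
    mask = (torch.rand_like(batch) < mask_prob).float()
    masked = batch * (1.0 - mask)                 # masked entries set to zero
    recon = model(masked)
    # Reconstruction loss computed only on the masked entries.
    loss = ((recon - batch) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = inpainting_step(model, opt, torch.randn(32, 16))
```

After pretraining, the encoder would be kept and paired with a small task-specific head (here, a regression head for bandwidth) trained on the limited labeled data.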

In 1954, Alston S. Householder published Principles of Numerical Analysis, one of the first modern treatments of matrix decomposition, favoring the (block) LU decomposition: the factorization of a matrix into the product of a lower and an upper triangular matrix. Matrix decomposition has since become a core technology in machine learning, largely due to the development of the backpropagation algorithm for fitting neural networks. The sole aim of this survey is to give a self-contained introduction to the concepts and mathematical tools of numerical linear algebra and matrix analysis, in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections. We recognize, however, that we cannot cover all of the useful and interesting results concerning matrix decomposition, and that the scope of this discussion is limited; for example, we omit a separate treatment of Euclidean spaces, Hermitian spaces, Hilbert spaces, and results specific to the complex domain. We refer the reader to the linear algebra literature for a more detailed introduction to these related topics.
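As a concrete companion to the LU decomposition mentioned above, a short SciPy example (illustrative, not taken from the survey):

```python
# Factor A into a permutation P, a unit lower triangular L, and an upper
# triangular U such that A = P @ L @ U, using SciPy's standard routine.
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 3.0, 2.0],
              [6.0, 3.0, 1.0],
              [2.0, 1.0, 5.0]])
P, L, U = lu(A)
print(np.allclose(P @ L @ U, A))  # True: the factors reproduce A
```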
