
Byzantine agreement (BA), the task of $n$ parties to agree on one of their input bits in the face of malicious agents, is a powerful primitive that lies at the core of a vast range of distributed protocols. Interestingly, in protocols with the best overall communication, the demands on the parties are highly unbalanced: the amortized cost is $\tilde O(1)$ bits per party, but some parties must send $\Omega(n)$ bits. In the best known balanced protocols, the overall communication is sub-optimal, with each party communicating $\tilde O(\sqrt{n})$ bits. In this work, we ask whether asymmetry is inherent for optimizing total communication. Our contributions in this line are as follows: 1) We define a cryptographic primitive, succinctly reconstructed distributed signatures (SRDS), that suffices for constructing $\tilde O(1)$ balanced BA. We provide two constructions of SRDS from different cryptographic and Public-Key Infrastructure (PKI) assumptions. 2) The SRDS-based BA follows a paradigm of boosting from "almost-everywhere" agreement to full agreement, and does so in a single round. We prove that a PKI setup and cryptographic assumptions are necessary for such protocols in which every party sends $o(n)$ messages. 3) We further explore connections between a natural approach toward attaining SRDS and average-case succinct non-interactive argument systems (SNARGs) for a particular class of NP-complete problems (generalizing Subset-Sum and Subset-Product). Our results provide new approaches forward, as well as limitations and barriers, towards minimizing the per-party communication of BA. In particular, we construct the first two BA protocols with $\tilde O(1)$ balanced communication, offering a tradeoff between setup and cryptographic assumptions, and answering an open question presented by King and Saia (DISC'09).
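
The abstract does not spell out the SRDS syntax, so the following is only a minimal Python sketch of one plausible interface (per-party signing, succinct aggregation, and threshold verification). The function names, the toy hash-based "signatures", and the counting-based verification are illustrative assumptions, not the paper's construction, and provide no security.

```python
# Illustrative (NOT secure) sketch of a possible SRDS-style interface:
# each party signs the agreed bit, signatures are aggregated into a short
# certificate, and anyone can verify the certificate against a threshold.
# All names and the hash-based "aggregation" are assumptions for exposition.
import hashlib
from dataclasses import dataclass

@dataclass
class Certificate:
    message: bytes
    digest: bytes          # succinct aggregate; independent of n in a real SRDS
    num_signers: int

def sign(secret_key: bytes, message: bytes) -> bytes:
    # Toy "signature": keyed hash. A real SRDS would use an actual signature scheme.
    return hashlib.sha256(secret_key + message).digest()

def aggregate(message: bytes, signatures: list[bytes]) -> Certificate:
    # Toy aggregation: hash the sorted signatures into one short digest.
    h = hashlib.sha256()
    for s in sorted(signatures):
        h.update(s)
    return Certificate(message, h.digest(), len(signatures))

def verify(cert: Certificate, threshold: int) -> bool:
    # A real SRDS verifier would check, from the succinct certificate and the PKI,
    # that enough distinct parties signed. Here we only check the reported count.
    return cert.num_signers >= threshold

# Usage: n parties certify bit b = 1; a certificate attesting to a majority of
# signers could then be forwarded to boost almost-everywhere agreement.
n, message = 8, b"bit:1"
sigs = [sign(f"sk{i}".encode(), message) for i in range(n)]
cert = aggregate(message, sigs)
assert verify(cert, threshold=n // 2 + 1)
```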

Related content


The increasing size and severity of wildfires across western North America have generated dangerous levels of PM$_{2.5}$ pollution in recent years. In a warming climate, expanding the use of prescribed fires is widely considered to be the most robust fire mitigation strategy. However, reliably forecasting the potential air quality impact of these prescribed fires at hourly to daily time scales, a critical ingredient in deciding where and when to burn, remains a challenging problem. This paper proposes a novel integration of prescribed fire simulation with a spatio-temporal graph neural network-based PM$_{2.5}$ forecasting model. The experiments in this work focus on determining the optimal time for implementing prescribed fires in California, as well as quantifying the potential air quality trade-offs involved in conducting more prescribed fires outside the fire season.
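
As a rough illustration of the forecasting component, the sketch below performs one graph-convolution-style update over a toy network of monitoring sites, followed by a linear PM$_{2.5}$ readout. The adjacency, feature layout (recent PM$_{2.5}$ plus simulated fire emissions), and single-layer update are assumptions of this sketch, not the paper's model.

```python
# Minimal sketch of one spatio-temporal graph update for PM2.5 forecasting:
# monitoring sites are nodes, edges connect nearby sites, and node features
# hold recent PM2.5 plus (hypothetical) simulated prescribed-fire emissions.
import numpy as np

rng = np.random.default_rng(0)
num_sites, feat_dim, hidden_dim = 5, 4, 8

A = (rng.random((num_sites, num_sites)) < 0.4).astype(float)   # toy adjacency
np.fill_diagonal(A, 1.0)                                        # self loops
A = A / A.sum(axis=1, keepdims=True)                            # row-normalize

X = rng.normal(size=(num_sites, feat_dim))       # past PM2.5 + fire emissions
W_in = rng.normal(size=(feat_dim, hidden_dim))
W_out = rng.normal(size=(hidden_dim, 1))

def forecast_step(X: np.ndarray) -> np.ndarray:
    """One graph-convolution-style step followed by a linear PM2.5 readout."""
    H = np.tanh(A @ X @ W_in)       # spatial aggregation over neighboring sites
    return H @ W_out                # next-hour PM2.5 prediction per site

print(forecast_step(X).ravel())     # one toy forecast per monitoring site
```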

The efficiency of an AI system is contingent upon its ability to align with the specified requirements of a given task. However, the inherent complexity of tasks often introduces the potential for harmful implications or adverse actions. This note explores the critical concept of capability within AI systems, representing what the system is expected to deliver. The articulation of capability involves specifying well-defined outcomes. Yet, the achievement of this capability may be hindered by deficiencies in implementation and testing, reflecting a gap in the system's competency (what it can do vs. what it does successfully). A central challenge arises in elucidating the competency of an AI system to execute tasks effectively. The exploration of system competency in AI remains in its early stages, occasionally manifesting as confidence intervals denoting the probability of success. Trust in an AI system hinges on the explicit modeling and detailed specification of its competency, connected intricately to the system's capability. This note explores this gap by proposing a framework for articulating the competency of AI systems. Motivated by practical scenarios such as the Glass Door problem, where an individual inadvertently encounters a glass obstacle due to a failure in their competency, this research underscores the imperative of delving into competency dynamics. Bridging the gap between capability and competency at a detailed level, this note contributes to advancing the discourse on bolstering the reliability of AI systems in real-world applications.
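
Since the note mentions competency occasionally being reported as a confidence interval on the probability of success, a small sketch of such a statement is shown below, using a standard Wilson score interval over hypothetical trial outcomes.

```python
# Sketch: summarizing an AI system's competency on a task as a Wilson score
# interval for its success probability, computed from observed trial outcomes.
# The trial counts below are hypothetical.
from math import sqrt

def wilson_interval(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial success probability."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return center - half, center + half

# e.g. the system completed 87 of 100 trials of a task within spec
print(wilson_interval(87, 100))   # ~ (0.79, 0.92): one possible competency statement
```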

Computing an exact solution of the system of differential-difference equations that determines the M|M|m|m system's transient probabilities is a very hard task, and the complexity grows with m. The computations are extremely tedious, and their length, together with the fact that the expressions obtained are often approximate rather than exact, prevents characterizing the behavior of the transient probabilities as functions of time. To overcome these problems, this work analyzes how the M|M|Inf system can supply approximate values for the M|M|m|m queue system. An asymptotic method is also presented that, in many cases, makes it possible to obtain simple approximate expressions for those probabilities using the M|M|Inf transient probabilities, which are very well known and much easier to study.
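
For reference, and assuming the system starts empty with arrival rate $\lambda$ and service rate $\mu$, the well-known M|M|Inf transient distribution referred to above is Poisson with a time-dependent mean: $a(t) = \frac{\lambda}{\mu}\bigl(1 - e^{-\mu t}\bigr)$ and $P\{N(t) = n\} = e^{-a(t)}\,\frac{a(t)^n}{n!}$ for $n = 0, 1, 2, \dots$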

Given a natural number $k\ge 2$, we consider the $k$-submodular cover problem ($k$-SC). The objective is to find a minimum-cost subset of a ground set $\mathcal{X}$ subject to the value of a $k$-submodular utility function being at least a predetermined threshold $\tau$. For this problem, we design a bicriteria algorithm whose cost is at most $O(1/\epsilon)$ times the optimal value, while the utility is at least $(1-\epsilon)\tau/r$, where $r$ depends on the monotonicity of the utility function.
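
The abstract does not describe the algorithm itself. Purely for orientation, the sketch below runs a generic cost-effectiveness greedy on a toy cover-type objective, stopping once the utility reaches $(1-\epsilon)\tau$. The utility and cost functions are invented for illustration, and this is not the paper's bicriteria algorithm nor does it carry its guarantees.

```python
# Illustrative greedy sketch for a k-submodular cover-type objective: repeatedly
# assign the (element, position) pair with the best marginal-utility-per-cost
# ratio until the relaxed utility target (1 - eps) * tau is met.
# Toy data and utility; NOT the paper's algorithm.
from itertools import product

ground_set = ["a", "b", "c", "d"]
k = 2
cost = {"a": 1.0, "b": 2.0, "c": 1.5, "d": 1.0}

def utility(assignment: dict) -> float:
    # Toy utility: reward covered elements, mildly reward balance across positions.
    if not assignment:
        return 0.0
    counts = [sum(1 for p in assignment.values() if p == i) for i in range(k)]
    return len(assignment) + 0.5 * min(counts)

def greedy_cover(tau: float, eps: float) -> dict:
    assignment: dict = {}
    target = (1 - eps) * tau
    while utility(assignment) < target:
        best = None
        for x, pos in product(ground_set, range(k)):
            if x in assignment:
                continue
            gain = utility({**assignment, x: pos}) - utility(assignment)
            if gain > 0 and (best is None or gain / cost[x] > best[0]):
                best = (gain / cost[x], x, pos)
        if best is None:          # no positive marginal gain left
            break
        assignment[best[1]] = best[2]
    return assignment

print(greedy_cover(tau=4.0, eps=0.2))
```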

We present $\mathcal{X}^3$ (pronounced XCube), a novel generative model for high-resolution sparse 3D voxel grids with arbitrary attributes. Our model can generate millions of voxels with a finest effective resolution of up to $1024^3$ in a feed-forward fashion without time-consuming test-time optimization. To achieve this, we employ a hierarchical voxel latent diffusion model which generates progressively higher resolution grids in a coarse-to-fine manner using a custom framework built on the highly efficient VDB data structure. Apart from generating high-resolution objects, we demonstrate the effectiveness of XCube on large outdoor scenes at scales of 100m$\times$100m with a voxel size as small as 10cm. We observe clear qualitative and quantitative improvements over past approaches. In addition to unconditional generation, we show that our model can be used to solve a variety of tasks such as user-guided editing, scene completion from a single scan, and text-to-3D. More results and details can be found at //research.nvidia.com/labs/toronto-ai/xcube/.
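
A schematic sketch of the coarse-to-fine idea follows: sample a coarse occupancy grid, then repeatedly upsample and "resample" at the next resolution. The `sample_level` placeholder stands in for a learned latent diffusion sampler; the actual XCube architecture, sparse grids, and VDB data structure are not represented.

```python
# Schematic coarse-to-fine sampling loop in the spirit of hierarchical voxel
# generation: each level is produced conditioned on an upsampled version of the
# previous, coarser level. sample_level is a placeholder, not XCube's sampler.
import numpy as np

rng = np.random.default_rng(0)

def upsample(grid: np.ndarray) -> np.ndarray:
    """Nearest-neighbor 2x upsampling of a dense occupancy grid."""
    return grid.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)

def sample_level(cond: np.ndarray) -> np.ndarray:
    """Placeholder for a diffusion sampler: perturb and re-threshold occupancy."""
    noise = rng.normal(scale=0.3, size=cond.shape)
    return (cond + noise > 0.5).astype(np.float32)

grid = (rng.random((8, 8, 8)) > 0.7).astype(np.float32)   # coarse 8^3 sample
for _ in range(3):                                         # 8^3 -> 64^3
    grid = sample_level(upsample(grid))
print(grid.shape, int(grid.sum()), "occupied voxels")
```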

Today's scientific simulations require significant data volume reduction because of the enormous amounts of data produced and the limited I/O bandwidth and storage space. Error-bounded lossy compression has been considered one of the most effective solutions to the above problem. However, little work has been done to improve error-bounded lossy compression for Adaptive Mesh Refinement (AMR) simulation data. Unlike previous work, which only leverages 1D compression, in this work we propose an approach (TAC) that leverages high-dimensional SZ compression for each refinement level of AMR data. To remove the data redundancy across different levels, we propose several pre-processing strategies and adaptively apply them based on the data features. We further optimize TAC to TAC+ by improving the lossless encoding stage of SZ compression so that it efficiently handles the many small AMR data blocks produced by pre-processing. Experiments on 10 AMR datasets from three real-world large-scale AMR simulations demonstrate that TAC+ can improve the compression ratio by up to 4.9$\times$ under the same data distortion, compared to the state-of-the-art method. In addition, we leverage the flexibility of our approach to tune the error bound for each level, which achieves much lower data distortion on two application-specific metrics.
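
As a rough sketch of per-level compression, the code below quantizes each refinement level to its own error bound and then encodes it losslessly. Uniform quantization plus zlib stands in for an error-bounded compressor such as SZ (whose real API is not used here), and the per-level error bounds are hypothetical.

```python
# Sketch of per-refinement-level compression for AMR data: each level is
# compressed as its own 3D array, with its own error bound.
import zlib
import numpy as np

def compress_level(data: np.ndarray, error_bound: float) -> bytes:
    """Quantize to the error bound, then losslessly encode (illustration only)."""
    q = np.round(data / (2 * error_bound)).astype(np.int32)
    return zlib.compress(q.tobytes())

def decompress_level(blob: bytes, shape, error_bound: float) -> np.ndarray:
    q = np.frombuffer(zlib.decompress(blob), dtype=np.int32).reshape(shape)
    return q * (2 * error_bound)

rng = np.random.default_rng(0)
levels = {0: rng.normal(size=(16, 16, 16)), 1: rng.normal(size=(32, 32, 32))}
bounds = {0: 1e-2, 1: 1e-3}          # per-level error bounds (hypothetical)

for lvl, arr in levels.items():
    blob = compress_level(arr, bounds[lvl])
    rec = decompress_level(blob, arr.shape, bounds[lvl])
    assert np.max(np.abs(rec - arr)) <= bounds[lvl] + 1e-12   # error bound holds
    print(f"level {lvl}: ratio {arr.nbytes / len(blob):.2f}x")
```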

Suppose that $K\subset\mathbb{C}$ is compact and that $z_0\in\mathbb{C}\setminus K$ is an external point. An optimal prediction measure for regression by polynomials of degree at most $n$ is one for which the variance of the prediction at $z_0$ is as small as possible. Hoel and Levine (\cite{HL}) considered the case of $K=[-1,1]$ and $z_0=x_0\in \mathbb{R}\setminus [-1,1]$, where they show that the support of the optimal measure is the $n+1$ extreme points of the Chebyshev polynomial $T_n(x)$, and characterize the optimal weights in terms of absolute values of fundamental interpolating Lagrange polynomials. More recently, \cite{BLO} established the equivalence of the optimal prediction problem with that of finding polynomials of extremal growth. They also study in detail the case of $K=[-1,1]$ and $z_0=ia\in i\mathbb{R}$, purely imaginary. In this work we generalize the Hoel-Levine formula to the general case when the support of the optimal measure is a finite set, and give a formula for the optimal weights in terms of an $\ell_1$ minimization problem.
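
A small numerical sketch of the Hoel-Levine setting described above: support at the $n+1$ extreme points of $T_n$, with weights proportional to the absolute values of the fundamental Lagrange polynomials evaluated at the external point. The normalization to sum one is an assumption of this sketch.

```python
# Numerical sketch on K = [-1, 1]: Chebyshev extreme points cos(j*pi/n) as the
# support, and weights proportional to |l_j(x_0)|, the fundamental Lagrange
# polynomials evaluated at an external point x_0 (normalized here to sum to 1).
import numpy as np

def chebyshev_extreme_points(n: int) -> np.ndarray:
    return np.cos(np.arange(n + 1) * np.pi / n)          # cos(j*pi/n), j = 0..n

def lagrange_basis_at(x0: float, nodes: np.ndarray) -> np.ndarray:
    vals = np.empty(len(nodes))
    for j, xj in enumerate(nodes):
        others = np.delete(nodes, j)
        vals[j] = np.prod((x0 - others) / (xj - others))
    return vals

n, x0 = 4, 1.5                       # degree-4 polynomials, external point x_0
nodes = chebyshev_extreme_points(n)
ell = lagrange_basis_at(x0, nodes)
weights = np.abs(ell) / np.abs(ell).sum()
print(dict(zip(np.round(nodes, 3), np.round(weights, 3))))
```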

Angluin's L$^*$ algorithm learns the minimal deterministic finite automaton (DFA) of a regular language using membership and equivalence queries. Its probably approximately correct (PAC) version replaces each equivalence query with numerous random membership queries in order to obtain high confidence in the answer. Thus it can be applied to any kind of device and may be viewed as an algorithm for synthesizing an automaton that abstracts the behavior of the device based on observations. Here we are interested in how Angluin's PAC learning algorithm behaves for devices which are obtained from a DFA by introducing some noise. More precisely, we study whether Angluin's algorithm reduces the noise and produces a DFA closer to the original one than the noisy device. We propose several ways to introduce the noise: (1) the noisy device inverts the classification of words w.r.t. the DFA with a small probability, (2) the noisy device modifies with a small probability the letters of the word before asking its classification w.r.t. the DFA, (3) the noisy device combines the classification of a word w.r.t. the DFA and its classification w.r.t. a counter automaton, and (4) the noisy DFA is obtained by a random process from two DFAs such that the language of the first one is included in that of the second one; then, when a word is accepted (resp. rejected) by the first (resp. second) one, it is also accepted (resp. rejected), and in the remaining cases it is accepted with probability 0.5. Our main experimental contributions consist in showing that: (1) Angluin's algorithm behaves well whenever the noisy device is produced by a random process, (2) but poorly with structured noise, and (3) it is able to eliminate pathological behaviours specified in a regular way. Theoretically, we show that randomness almost surely yields systems with non-recursively enumerable languages.
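
Noise model (1) is easy to state concretely. The sketch below wraps a toy DFA's membership oracle so that each answer is inverted with a small probability, and estimates how often a hypothesis disagrees with the noisy oracle on random words, in the spirit of a PAC equivalence test. The DFA and sampling scheme are assumptions of this sketch.

```python
# Sketch of noise model (1): a noisy membership oracle that inverts the DFA's
# classification of a queried word with a small probability p. The toy DFA
# (words over {a, b} with an even number of a's) is an assumption.
import random

def dfa_accepts(word: str) -> bool:
    return word.count("a") % 2 == 0          # even number of 'a's

def noisy_membership(word: str, p: float = 0.01, rng=random.Random(0)) -> bool:
    answer = dfa_accepts(word)
    return (not answer) if rng.random() < p else answer

def pac_disagreement_rate(hypothesis, num_samples: int = 1000,
                          rng=random.Random(1)) -> float:
    """Fraction of random words on which hypothesis and noisy oracle differ,
    as a PAC-style substitute for a true equivalence query."""
    mismatches = 0
    for _ in range(num_samples):
        w = "".join(rng.choice("ab") for _ in range(rng.randint(0, 10)))
        if hypothesis(w) != noisy_membership(w):
            mismatches += 1
    return mismatches / num_samples

print(pac_disagreement_rate(dfa_accepts))   # ~ p: only injected noise disagrees
```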

Given an arbitrary set of high dimensional points in $\ell_1$, there are known negative results that preclude the possibility of mapping them to a low dimensional $\ell_1$ space while preserving distances with small multiplicative distortion. This is in stark contrast with dimension reduction in Euclidean space ($\ell_2$) where such mappings are always possible. While the first non-trivial lower bounds for $\ell_1$ dimension reduction were established almost 20 years ago, there has been minimal progress in understanding what sets of points in $\ell_1$ are conducive to a low-dimensional mapping. In this work, we shift the focus from the worst-case setting and initiate the study of a characterization of $\ell_1$ metrics that are conducive to dimension reduction in $\ell_1$. Our characterization focuses on metrics that are defined by the disagreement of binary variables over a probability distribution -- any $\ell_1$ metric can be represented in this form. We show that, for configurations of $n$ points in $\ell_1$ obtained from tree Ising models, we can reduce dimension to $\mathrm{polylog}(n)$ with constant distortion. In doing so, we develop technical tools for embedding capped metrics (also known as truncated metrics) which have been studied because of their applications in computer vision, and are objects of independent interest in metric geometry.
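
The representation of $\ell_1$ metrics as disagreement probabilities of binary variables can be made concrete. The sketch below estimates $d(i,j)=\Pr[X_i\neq X_j]$ from samples of a simple binary chain (a chain is one special case of a tree-structured model, used here only as a toy stand-in).

```python
# Sketch: an l1-type metric defined as the disagreement probability of binary
# variables, d(i, j) = Pr[X_i != X_j], estimated from samples of a toy chain
# where each variable copies its parent and flips with small probability.
import numpy as np

rng = np.random.default_rng(0)
num_vars, num_samples, flip_prob = 5, 20000, 0.1

samples = np.empty((num_samples, num_vars), dtype=int)
samples[:, 0] = rng.integers(0, 2, size=num_samples)
for i in range(1, num_vars):
    flips = rng.random(num_samples) < flip_prob
    samples[:, i] = np.where(flips, 1 - samples[:, i - 1], samples[:, i - 1])

# Empirical disagreement metric d(i, j) = Pr[X_i != X_j].
d = np.array([[np.mean(samples[:, i] != samples[:, j]) for j in range(num_vars)]
              for i in range(num_vars)])
print(np.round(d, 3))
```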

Probabilistic record linkage is often used to match records from two files, in particular when the variables common to both files comprise imperfectly measured identifiers like names and demographic variables. We consider bipartite record linkage settings in which each entity appears at most once within a file, i.e., there are no duplicates within the files, but some entities appear in both files. In this setting, the analyst desires a point estimate of the linkage structure that matches each record to at most one record from the other file. We propose an approach for obtaining this point estimate by maximizing the expected $F$-score for the linkage structure. We target the approach for record linkage methods that produce either (an approximate) posterior distribution of the unknown linkage structure or probabilities of matches for record pairs. Using simulations and applications with genuine data, we illustrate that the $F$-score estimators can lead to sensible estimates of the linkage structure.
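
To illustrate the objective, the sketch below scores a candidate bipartite linkage with a ratio-of-expectations plug-in for the expected $F$-score, computed from pairwise match probabilities under an independence assumption. The probabilities are hypothetical, and this plug-in is an illustration of the objective, not the paper's estimator.

```python
# Sketch: scoring a candidate one-to-one linkage by a plug-in approximation of
# the expected F-score (ratio of expectations, not the exact expected F-score),
# computed from pairwise match probabilities. Probabilities are hypothetical.
def plugin_expected_f_score(selected_pairs, match_prob) -> float:
    exp_tp = sum(match_prob[p] for p in selected_pairs)
    exp_fp = sum(1 - match_prob[p] for p in selected_pairs)
    exp_fn = sum(q for p, q in match_prob.items() if p not in selected_pairs)
    return 2 * exp_tp / (2 * exp_tp + exp_fp + exp_fn)

# Hypothetical posterior match probabilities for record pairs (file A, file B).
match_prob = {("a1", "b1"): 0.95, ("a1", "b2"): 0.10,
              ("a2", "b2"): 0.80, ("a3", "b3"): 0.40}

linkage = {("a1", "b1"), ("a2", "b2")}          # a candidate bipartite matching
print(round(plugin_expected_f_score(linkage, match_prob), 3))
```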
