
Node repair is a crucial problem in erasure-code-based distributed storage systems. An important metric for repair efficiency is the I/O cost, which equals the total amount of data accessed at helper nodes to repair a failed node. In this work, a general formula for computing the I/O cost of linear repair schemes is derived from a new perspective, i.e., by investigating the Hamming weight of a related linear space. Applying the formula to Reed-Solomon (RS) codes, we obtain lower bounds on the I/O cost for full-length RS codes with two and three parities. Furthermore, we build linear repair schemes with improved I/O cost for these RS codes. For full-length RS codes with two parities, our scheme meets the lower bound on the I/O cost.
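To make the central object concrete: the abstract's formula is stated in terms of the Hamming weight of a linear space. The sketch below (a generic illustration, not the paper's actual I/O-cost formula) enumerates a small binary linear space from a set of generators and tallies the Hamming weights of its vectors; the generator rows are hypothetical toy data.

```python
from itertools import product

def span_gf2(generators):
    """Enumerate all vectors in the GF(2)-span of the given generator rows."""
    n = len(generators[0])
    space = set()
    for coeffs in product([0, 1], repeat=len(generators)):
        v = [0] * n
        for c, g in zip(coeffs, generators):
            if c:
                v = [a ^ b for a, b in zip(v, g)]  # addition over GF(2) is XOR
        space.add(tuple(v))
    return space

def hamming_weight(v):
    """Number of nonzero coordinates of a vector."""
    return sum(1 for x in v if x)

# Toy example: a 2-dimensional space inside GF(2)^4.
gens = [(1, 0, 1, 0), (0, 1, 1, 1)]
space = span_gf2(gens)
weights = sorted(hamming_weight(v) for v in space)
print(weights)  # weight distribution of the 4 vectors in the span: [0, 2, 3, 3]
```

In the paper's setting the relevant space is derived from the repair scheme's dual relations, and the I/O cost is read off from such weight information.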

Related content

Passwords remain a widely used authentication mechanism, despite their well-known security and usability limitations. To improve on this situation, next-generation authentication mechanisms, based on behavioral biometric factors such as eye movements and brainwaves, have emerged. However, their usability remains relatively under-explored. To fill this gap, we conducted an empirical user study (n=32 participants) to evaluate three brain-based and three eye-based authentication mechanisms, using both qualitative and quantitative methods. Our findings show good overall usability according to the System Usability Scale for both categories of mechanisms, with average SUS scores in the range of 78.6-79.6 and the best mechanisms rated with an "excellent" score. Notably, participants perceived brainwave authentication as more secure yet more privacy-invasive and effort-intensive than eye movement authentication. However, the significant number of neutral responses indicates participants' need for more detailed information about the security and privacy implications of these authentication methods. Building on the collected evidence, we identify three key areas for improvement: privacy, authentication interface design, and verification time. We offer recommendations for designers and developers to improve the usability and security of next-generation authentication mechanisms.
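For readers unfamiliar with the SUS scores cited above (78.6-79.6), the System Usability Scale maps ten 1-5 Likert responses to a 0-100 score via a standard formula: positively worded odd items contribute (response - 1), negatively worded even items contribute (5 - response), and the sum is multiplied by 2.5. A minimal implementation (the example responses are made up, not study data):

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (1, 3, 5, ...) are positively worded and score r - 1;
    even-numbered items are negatively worded and score 5 - r.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i is 0-based, so item 1 is i == 0
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

Scores above roughly 80.3 are conventionally interpreted as "excellent", which matches the paper's characterization of its best-rated mechanisms.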

This paper presents the first systematic study of the evaluation of Deep Neural Networks (DNNs) for discrete dynamical systems under stochastic assumptions, with a focus on wildfire prediction. We develop a framework to study the impact of stochasticity on two classes of evaluation metrics: classification-based metrics, which assess fidelity to observed ground truth (GT), and proper scoring rules, which test fidelity-to-statistic. Our findings reveal that evaluating for fidelity-to-statistic is a reliable alternative in highly stochastic scenarios. We extend our analysis to real-world wildfire data, highlighting limitations in traditional wildfire prediction evaluation methods, and suggest interpretable stochasticity-compatible alternatives.
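The distinction between fidelity-to-observation and fidelity-to-statistic can be illustrated with the Brier score, a standard proper scoring rule (used here as a generic example; the paper's specific metrics may differ). Under a stochastic ground truth, a forecaster that reports the true event probability minimizes the expected score, whereas a hard 0/1 prediction that matches the most frequent outcome can look fine under accuracy yet scores worse:

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and binary outcomes;
    a strictly proper scoring rule, minimised in expectation by the true probability."""
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    return float(np.mean((probs - outcomes) ** 2))

rng = np.random.default_rng(0)
p_true = 0.3                      # hypothetical true ignition probability of a cell
y = rng.random(100_000) < p_true  # stochastic ground truth realisations

honest = brier_score(np.full(y.shape, p_true), y)  # forecast the true statistic
hard = brier_score(np.zeros(y.shape), y)           # always predict "no fire"
print(honest, hard)  # roughly 0.21 vs 0.30
```

The honest probabilistic forecast is rewarded even though it never "matches" any single observed outcome, which is the sense in which fidelity-to-statistic is more reliable in highly stochastic regimes.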

The ability of Large Language Models (LLMs) to critique and refine their reasoning is crucial for their application in evaluation, feedback provision, and self-improvement. This paper introduces CriticBench, a comprehensive benchmark designed to assess LLMs' abilities to critique and rectify their reasoning across a variety of tasks. CriticBench encompasses five reasoning domains: mathematical, commonsense, symbolic, coding, and algorithmic. It compiles 15 datasets and incorporates responses from three LLM families. Utilizing CriticBench, we evaluate and dissect the performance of 17 LLMs in generation, critique, and correction reasoning, i.e., GQC reasoning. Our findings reveal: (1) a linear relationship in GQC capabilities, with critique-focused training markedly enhancing performance; (2) a task-dependent variation in correction effectiveness, with logic-oriented tasks being more amenable to correction; (3) GQC knowledge inconsistencies that decrease as model size increases; and (4) an intriguing inter-model critiquing dynamic, where stronger models are better at critiquing weaker ones, while weaker models can surprisingly surpass stronger ones in their self-critique. We hope these insights into the nuanced critique-correct reasoning of LLMs will foster further research in LLM critique and self-improvement.

We introduce a novel concept of convergence for Markovian processes within Orlicz spaces, extending beyond the conventional approach associated with $L_p$ spaces. After showing that Markovian operators are contractive in Orlicz spaces, our key technical contribution is an upper bound on their contraction coefficient, which admits a closed-form expression. The bound is tight in some settings, and it recovers well-known results, such as the connection between contraction and ergodicity, ultra-mixing and Doeblin's minorisation. Specialising our approach to $L_p$ spaces leads to a significant improvement upon the classical Riesz-Thorin interpolation method. Furthermore, by exploiting the flexibility offered by Orlicz spaces, we can tackle settings where the stationary distribution is heavy-tailed, a severely under-studied setup. As an application of the framework put forward in the paper, we introduce tighter bounds on the mixing time of Markovian processes, better exponential concentration bounds for MCMC methods, and better lower bounds on the burn-in period. To conclude, we show how our results can be used to prove the concentration of measure phenomenon for a sequence of Markovian random variables.
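The classical special case that the paper's Orlicz-space bounds generalise is worth keeping in mind: for a finite-state chain, Dobrushin's ergodicity coefficient gives the total-variation contraction factor of the transition matrix in closed form. A minimal sketch (the 2-state matrix is an arbitrary example):

```python
import numpy as np

def dobrushin_coefficient(P):
    """Dobrushin's ergodicity coefficient of a stochastic matrix P:
    the maximum total-variation distance between any two rows, which equals
    the contraction factor of P acting on probability distributions in TV."""
    P = np.asarray(P, float)
    n = P.shape[0]
    return max(0.5 * np.abs(P[i] - P[j]).sum()
               for i in range(n) for j in range(n))

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(dobrushin_coefficient(P))  # 0.7
```

A coefficient strictly below 1 implies geometric ergodicity; the paper's contribution is an analogous closed-form contraction bound in the far more flexible Orlicz-space setting.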

Optimal control problems can be solved by applying the Pontryagin maximum principle and then solving the resulting Hamiltonian dynamical system. In this paper, we propose novel learning frameworks to tackle optimal control problems. By applying the Pontryagin maximum principle to the original optimal control problem, the learning focus shifts to the reduced Hamiltonian dynamics and the corresponding adjoint variables. The reduced Hamiltonian networks can be learned by going backward in time and then minimizing a loss function deduced from the conditions of the Pontryagin maximum principle. The learning process is further improved by progressively learning a posterior distribution of reduced Hamiltonians, utilizing a variational autoencoder, which leads to a more effective path exploration process. We apply our learning frameworks to control tasks and obtain competitive results.
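To make the reduced-Hamiltonian viewpoint concrete, here is a standard textbook worked example (a scalar linear-quadratic problem, not taken from the paper). For dynamics $\dot x = u$ and cost $J = \int_0^T (x^2 + u^2)\,dt$, the Pontryagin Hamiltonian is
$$H(x, p, u) = p\,u - (x^2 + u^2).$$
The stationarity condition $\partial H/\partial u = p - 2u = 0$ gives the optimal control $u^* = p/2$, and substituting back yields the reduced Hamiltonian
$$H^*(x, p) = \frac{p^2}{4} - x^2,$$
whose canonical equations $\dot x = \partial H^*/\partial p = p/2$ and $\dot p = -\partial H^*/\partial x = 2x$ are the reduced Hamiltonian dynamics of the kind the proposed networks learn, with $p$ playing the role of the adjoint variable.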

We explore the cryptographic power of arbitrary shared physical resources. The most general such resource is access to a fresh entangled quantum state at the outset of each protocol execution. We call this the Common Reference Quantum State (CRQS) model, in analogy to the well-known Common Reference String (CRS). The CRQS model is a natural generalization of the CRS model but appears to be more powerful: in the two-party setting, a CRQS can sometimes exhibit properties associated with a Random Oracle queried once by measuring a maximally entangled state in one of many mutually unbiased bases. We formalize this notion as a Weak One-Time Random Oracle (WOTRO), where we only require the $m$-bit output to have some randomness when conditioned on the $n$-bit input. We show that when $n-m\in\omega(\lg n)$, any protocol for WOTRO in the CRQS model can be attacked by an (inefficient) adversary. Moreover, our adversary is efficiently simulatable, which rules out the possibility of proving the computational security of a scheme by a fully black-box reduction to a cryptographic game assumption. On the other hand, we introduce a non-game quantum assumption for hash functions that implies WOTRO in the CRQS model (where the CRQS consists only of EPR pairs). We first build a statistically secure WOTRO protocol where $m=n$, then hash the output. The impossibility of WOTRO has the following consequences. First, we show the fully-black-box impossibility of a quantum Fiat-Shamir transform, extending the impossibility result of Bitansky et al. (TCC 2013) to the CRQS model. Second, we show a fully-black-box impossibility result for a strengthened version of quantum lightning (Zhandry, Eurocrypt 2019) where quantum bolts have an additional parameter that cannot be changed without generating new bolts. Our results also apply to $2$-message protocols in the plain model.

This paper analyzes a popular computational framework to solve infinite-dimensional Bayesian inverse problems, discretizing the prior and the forward model in a finite-dimensional weighted inner product space. We demonstrate the benefit of working on a weighted space by establishing operator-norm bounds for finite element and graph-based discretizations of Mat\'ern-type priors and deconvolution forward models. For linear-Gaussian inverse problems, we develop a general theory to characterize the error in the approximation to the posterior. We also embed the computational framework into ensemble Kalman methods and MAP estimators for nonlinear inverse problems. Our operator-norm bounds for prior discretizations guarantee the scalability and accuracy of these algorithms under mesh refinement.

Answering questions that require reading texts in an image is challenging for current models. One key difficulty of this task is that rare, polysemous, and ambiguous words frequently appear in images, e.g., names of places, products, and sports teams. To overcome this difficulty, resorting only to pre-trained word embedding models is far from enough. A desired model should utilize the rich information in multiple modalities of the image to help understand the meaning of scene texts, e.g., the prominent text on a bottle is most likely to be the brand. Following this idea, we propose a novel VQA approach, Multi-Modal Graph Neural Network (MM-GNN). It first represents an image as a graph consisting of three sub-graphs, depicting the visual, semantic, and numeric modalities respectively. Then, we introduce three aggregators which guide the message passing from one graph to another to utilize the contexts in various modalities, so as to refine the features of nodes. The updated nodes have better features for the downstream question answering module. Experimental evaluations show that our MM-GNN represents scene texts better and clearly improves performance on two VQA tasks that require reading scene texts.
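A cross-modal aggregator of the kind described above can be sketched as a single attention-weighted message-passing step from one sub-graph's nodes to another's. This is a generic attention-style sketch under assumed feature shapes, not MM-GNN's actual aggregator; the node features here are random placeholders:

```python
import numpy as np

def attention_aggregate(targets, sources):
    """One hypothetical cross-modal aggregation step: each target node attends
    over all source-modality nodes and mixes their features into its own."""
    scores = targets @ sources.T                     # (n_t, n_s) relevance scores
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)          # row-wise softmax
    return targets + attn @ sources                  # residual feature update

rng = np.random.default_rng(1)
visual = rng.normal(size=(5, 8))    # e.g. visual-object nodes, 8-dim features
semantic = rng.normal(size=(3, 8))  # e.g. scene-text (semantic) nodes
updated = attention_aggregate(semantic, visual)
print(updated.shape)  # (3, 8): scene-text nodes refined by visual context
```

Stacking such steps across the visual, semantic, and numeric sub-graphs is what lets a text node inherit context like "this text sits on a bottle".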

Many tasks in natural language processing can be viewed as multi-label classification problems. However, most of the existing models are trained with the standard cross-entropy loss function and use a fixed prediction policy (e.g., a threshold of 0.5) for all the labels, which completely ignores the complexity and dependencies among different labels. In this paper, we propose a meta-learning method to capture these complex label dependencies. More specifically, our method utilizes a meta-learner to jointly learn the training policies and prediction policies for different labels. The training policies are then used to train the classifier with the cross-entropy loss function, and the prediction policies are further implemented for prediction. Experimental results on fine-grained entity typing and text classification demonstrate that our proposed method can obtain more accurate multi-label classification results.
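The contrast between a fixed prediction policy and per-label policies is easy to see in code. The sketch below (illustrative only; the per-label thresholds are hypothetical stand-ins for what the meta-learner would produce) shows how the same probability matrix yields different label sets under the two policies:

```python
import numpy as np

def predict(probs, thresholds):
    """Apply a per-label prediction policy: label j is emitted when its
    probability clears its own threshold, instead of a global 0.5 cutoff."""
    return (np.asarray(probs) >= np.asarray(thresholds)).astype(int)

probs = np.array([[0.45, 0.80, 0.30],   # two examples, three labels
                  [0.60, 0.20, 0.55]])

fixed = predict(probs, [0.5, 0.5, 0.5])     # standard fixed policy
learned = predict(probs, [0.4, 0.7, 0.5])   # hypothetical learned per-label policy
print(fixed.tolist())    # [[0, 1, 0], [1, 0, 1]]
print(learned.tolist())  # [[1, 1, 0], [1, 0, 1]]
```

Lowering the threshold for a label that co-occurs with an already-confident label (here, the first label) is one way a learned policy can exploit label dependencies that a global 0.5 cutoff ignores.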

We introduce a multi-task setup of identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.
