
Voting is a means of reaching a collective decision based on available choices (e.g., candidates), where participants agree to abide by the outcome. To improve some features of e-voting, decentralized blockchain-based solutions can be employed, where the blockchain serves as a public bulletin board that, in contrast to a centralized bulletin board, provides extremely high availability, censorship resistance, and correct code execution. Thanks to its immutability and append-only properties, a blockchain ensures that all entities in the voting system have the same view of the actions made by others. The existing remote blockchain-based boardroom voting solution, the Open Vote Network (OVN), provides privacy of votes, universal and end-to-end verifiability, and perfect ballot secrecy; however, it supports only two choices and lacks robustness against stalling participants. We present BBB-Voting, a blockchain-based approach to decentralized voting equivalent to OVN, which, in contrast, supports 1-out-of-$k$ choices and provides robustness that enables recovery from stalling participants. We provide a cost-optimized implementation in an Ethereum-based environment respecting Ethereum Enterprise Alliance standards, compare it with OVN, and show that our work decreases the cost for voters by 13.5% in normalized gas consumption. Finally, we show how BBB-Voting can be extended to support a number of participants limited only by the expenses paid by the authority and the computing power needed to obtain the tally.
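
To make the 1-out-of-$k$ mechanism concrete, below is a minimal, self-contained Python sketch of the homomorphic tallying idea this family of protocols builds on: each voter's blinding keys cancel in the product of all ballots, leaving an exponent that encodes the per-choice counts in base $m$. The group parameters are toy values and the names are our own illustrative assumptions, not BBB-Voting's implementation.

```python
import random
from itertools import combinations_with_replacement

p = 2**127 - 1                 # toy prime modulus (NOT cryptographically sound)
g, f = 3, 5                    # assumed independent generators
n, k, m = 5, 3, 7              # voters, choices, base m > n

x = [random.randrange(2, p - 1) for _ in range(n)]   # ephemeral secrets

def y(i):
    # y_i = sum_{j<i} x_j - sum_{j>i} x_j, hence sum_i x_i * y_i == 0
    return sum(x[:i]) - sum(x[i + 1:])

votes = [random.randrange(k) for _ in range(n)]      # each voter picks 1 of k
ballots = [pow(g, (x[i] * y(i)) % (p - 1), p) * pow(f, m**votes[i], p) % p
           for i in range(n)]

tally_point = 1                # blinding cancels: product == f^(sum_i m^v_i)
for b in ballots:
    tally_point = tally_point * b % p

# brute-force the bounded discrete log to recover the per-choice counts
for combo in combinations_with_replacement(range(k), n):
    if pow(f, sum(m**c for c in combo), p) == tally_point:
        print([combo.count(choice) for choice in range(k)])  # e.g. [2, 1, 2]
        break
```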

Related content

Context. Algorithmic racism is the term used to describe the behavior of technological solutions that constrain users based on their ethnicity. Lately, various data-driven software systems have been reported to discriminate against Black people, either through the use of biased data sets or due to prejudice propagated by software professionals in their code. As a result, Black people experience disadvantages in accessing technology-based services, such as housing, banking, and law enforcement. Goal. This study aims to explore algorithmic racism from the perspective of software professionals. Method. A survey questionnaire was administered to explore software practitioners' understanding of algorithmic racism, and data analysis was conducted using descriptive statistics and coding techniques. Results. We obtained answers from a sample of 73 software professionals discussing their understanding of and perspectives on algorithmic racism in software development. Our results demonstrate that the effects of algorithmic racism are well known among practitioners. However, there is no consensus on how the problem can be effectively addressed in software engineering. In this paper, some solutions to the problem are proposed based on the professionals' narratives. Conclusion. Combining technical and social strategies, including training on structural racism for software professionals, is the most promising way to address algorithmic racism and its effects on the software solutions delivered to our society.

Many categorical frameworks have been proposed to formalize the idea of gluing Petri nets together. Such frameworks model net gluings in terms of sharing of resources or synchronization of transitions. The interpretations given to these gluings are, however, only more or less satisfactory once we consider Petri nets with a semantics attached to them. In this work, we define a framework for composing Petri nets in such a way that their semantics is respected. In addition, we show how our framework generalizes previously defined ones.
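
As a concrete (and deliberately simplified) illustration of one gluing style mentioned above, the Python sketch below composes two place/transition nets by identifying shared places, so the composed net synchronizes through the common resource. The representation and names are our own illustrative assumptions, not the paper's categorical formalism.

```python
from collections import Counter

class Net:
    """places: set of place names; transitions: name -> (pre, post) Counters."""
    def __init__(self, places, transitions):
        self.places, self.transitions = set(places), dict(transitions)

    def fire(self, marking, t):
        pre, post = self.transitions[t]
        assert all(marking[p] >= c for p, c in pre.items()), "not enabled"
        new = Counter(marking)
        new.subtract(pre)      # consume input tokens
        new.update(post)       # produce output tokens
        return new

def glue_on_places(n1, n2, shared):
    """Identify the places in `shared`; transition names assumed disjoint."""
    assert shared <= n1.places and shared <= n2.places
    return Net(n1.places | n2.places, {**n1.transitions, **n2.transitions})

# a producer fills the shared buffer `b`; a consumer drains it
producer = Net({"b"}, {"make": (Counter(), Counter({"b": 1}))})
consumer = Net({"b"}, {"take": (Counter({"b": 1}), Counter())})
system = glue_on_places(producer, consumer, {"b"})

m = system.fire(Counter(), "make")   # a token appears in the shared place
m = system.fire(m, "take")           # the consumer synchronizes through it
```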

We study the problem of fair sequential decision making given voter preferences. In each round, a decision rule must choose a decision from a set of alternatives, where each voter reports which of these alternatives they approve. Instead of going with the most popular choice in each round, we aim for proportional representation. We formalize this aim using axioms based on Proportional Justified Representation (PJR), which were proposed in the literature on multi-winner voting and were recently adapted to multi-issue decision making. The axioms require that if a group of $\alpha\%$ of the voters agrees in every round (i.e., approves a common alternative in each round), then those voters must approve at least $\alpha\%$ of the decisions. A stronger version of the axioms requires that every group of $\alpha\%$ of the voters that agrees in a $\beta$ fraction of rounds must approve at least $\beta\cdot\alpha\%$ of the decisions. We show that three attractive voting rules satisfy axioms of this style. One of them (Sequential Phragm\'en) makes its decisions online, while the other two satisfy strengthened versions of the axioms but make decisions semi-online (Method of Equal Shares) or fully offline (Proportional Approval Voting). The first two are polynomial-time computable; the third is based on an NP-hard optimization but admits a polynomial-time local search algorithm that satisfies the same axiomatic properties. We present empirical results on the performance of these rules based on synthetic data and U.S. political elections. We also run experiments where votes are cast by preference models trained on user responses from the Moral Machine dataset about ethical dilemmas.
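
To illustrate the online rule, here is a minimal Python sketch of Sequential Phragm\'en as we read its adaptation to sequential decisions (our illustrative reading, not the paper's code): every chosen decision has cost 1, spread over its approvers, and each round selects the alternative whose supporters can cover it with the smallest resulting maximum load.

```python
def sequential_phragmen(rounds, n_voters):
    """rounds: list of dicts mapping alternative -> set of approving voter ids.
    Returns the decision chosen in each round (ties break by dict order)."""
    loads = [0.0] * n_voters
    outcome = []
    for approvals in rounds:
        best_alt, best_load = None, float("inf")
        for alt, group in approvals.items():
            if not group:
                continue
            # supporters' loads are equalized after covering cost 1 together
            s = (1.0 + sum(loads[v] for v in group)) / len(group)
            if s < best_load:
                best_alt, best_load = alt, s
        for v in approvals[best_alt]:
            loads[v] = best_load
        outcome.append(best_alt)
    return outcome

# a 2/3 bloc wins the first two rounds; the 1/3 voter gets the third round
rounds = [{"A": {0, 1}, "B": {2}}] * 3
print(sequential_phragmen(rounds, 3))   # ['A', 'A', 'B']
```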

Post-quantum security is critical in the quantum era. Quantum computers, together with quantum algorithms, render the standard cryptography based on RSA or ECDSA that underlies federated learning (FL) and blockchain systems vulnerable. How to deploy post-quantum cryptography (PQC) in such systems is still poorly understood, as PQC remains in its standardization phase. In this work, we propose a hybrid approach to employing PQC in blockchain-based FL (BFL), where we combine a stateless signature scheme such as Dilithium (or Falcon) with a stateful hash-based signature scheme such as the eXtended Merkle Signature Scheme (XMSS). To address the performance aspect, we propose a linear, formula-based approach to device role selection that weighs multiple factors. Our holistic approach of utilizing a verifiable random function (VRF) to assist the blockchain consensus mechanism shows the practicality of the proposed design. The proposed method and extensive experimental results contribute to enhancing both the security and the performance of BFL systems.
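
A hypothetical Python sketch of the hybrid-signature idea follows: a message is accepted only if both a stateless lattice-based signature (e.g., Dilithium) and a stateful hash-based signature (e.g., XMSS) verify. The Signer interface below is a placeholder of our own, not a real PQC library API.

```python
from dataclasses import dataclass
from typing import Protocol

class Signer(Protocol):             # placeholder interface, not a real PQC API
    def sign(self, msg: bytes) -> bytes: ...
    def verify(self, msg: bytes, sig: bytes) -> bool: ...

@dataclass
class HybridSignature:
    lattice_sig: bytes              # Dilithium/Falcon part (stateless)
    hash_sig: bytes                 # XMSS part (stateful: one-time leaf index)

def hybrid_sign(msg: bytes, lattice: Signer, xmss: Signer) -> HybridSignature:
    # NOTE: a real XMSS signer must persist its leaf index after every
    # signature; reusing a leaf breaks security.
    return HybridSignature(lattice.sign(msg), xmss.sign(msg))

def hybrid_verify(msg: bytes, sig: HybridSignature,
                  lattice: Signer, xmss: Signer) -> bool:
    # forging requires breaking BOTH schemes, so the hybrid stays secure
    # as long as at least one of them holds
    return (lattice.verify(msg, sig.lattice_sig)
            and xmss.verify(msg, sig.hash_sig))
```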

Patient monitoring in intensive care units, although assisted by biosensors, still requires continuous supervision by staff. To reduce the burden on staff members, IT infrastructures are built to record monitoring data and to develop clinical decision support systems. These systems, however, are vulnerable to artifacts (e.g., muscle movement due to ongoing treatment), which are often indistinguishable from real and potentially dangerous signals. Video recordings could facilitate the reliable classification of biosignals by using object detection (OD) methods to find the sources of unwanted artifacts. Due to privacy restrictions, only blurred videos can be stored, which severely impairs the ability to detect clinically relevant events such as interventions or changes in patient status with standard OD methods. Hence, new approaches are necessary that exploit all available information, given the reduced information content of blurred footage, while remaining easily implementable within the IT infrastructure of a normal hospital. In this paper, we propose a new method for exploiting the information in the temporal succession of video frames. To be efficiently implementable with off-the-shelf object detectors that comply with given hardware constraints, we repurpose the image color channels to account for temporal consistency, leading to an improved detection rate for the object classes. Our method outperforms a standard YOLOv5 baseline model by +1.7% mAP@0.5 while also training over ten times faster on our proprietary dataset. We conclude that this approach has shown its effectiveness in preliminary experiments and holds potential for more general video OD in the future.
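
A minimal sketch of the channel-repurposing idea as we read it: instead of feeding the detector one RGB frame, pack three consecutive grayscale frames into the R, G, and B channels, so a standard off-the-shelf detector sees short-term motion without any architectural change. Array shapes and the grayscale conversion are our assumptions, not the paper's exact pipeline.

```python
import numpy as np

def to_gray(frame_rgb: np.ndarray) -> np.ndarray:
    # simple luma conversion; any grayscale conversion works for the sketch
    return (frame_rgb @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

def temporal_stack(prev: np.ndarray, curr: np.ndarray, nxt: np.ndarray):
    """Each input: blurred grayscale frame, shape (H, W), dtype uint8."""
    return np.stack([prev, curr, nxt], axis=-1)   # (H, W, 3) "RGB" input

def training_inputs(frames):
    """frames: iterable of (H, W, 3) video frames; consecutive triplets
    become one detector input carrying temporal context."""
    gray = [to_gray(f) for f in frames]
    for i in range(1, len(gray) - 1):
        yield temporal_stack(gray[i - 1], gray[i], gray[i + 1])
```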

Network delays, throughput bottlenecks, and privacy issues push Artificial Intelligence of Things (AIoT) designers towards evaluating the feasibility of moving model training and execution (inference) as near as possible to the terminals. Meanwhile, results from the TinyML community demonstrate that, in some cases, it is possible to execute model inference directly on the terminals themselves, even if these are small microcontroller-based devices. However, to date, researchers and practitioners in the domain lack a convenient all-in-one toolkit to help them evaluate the feasibility of moving the execution of arbitrary models to arbitrary low-power IoT hardware. To this end, we present U-TOE, a universal toolkit we designed to facilitate the task of AIoT designers and researchers by combining functionalities from a low-power embedded OS, a generic model transpiler and compiler, an integrated performance measurement module, and an open-access remote IoT testbed. We provide an open-source implementation of U-TOE and demonstrate its use to experimentally evaluate the performance of a wide variety of models on a range of low-power boards based on popular microcontroller architectures (ARM Cortex-M and RISC-V). U-TOE thus enables easily reproducible and customisable comparative evaluation experiments on a wide variety of IoT hardware all at once. The availability of a toolkit such as U-TOE is desirable to accelerate the field of AIoT towards fully exploiting the potential of edge computing.

Stealth addresses are an approach to enhancing privacy in public and distributed blockchains, such as Ethereum and Bitcoin. Stealth address protocols generate a distinct, randomly generated address for the recipient, thereby concealing the interactions between entities. In this study, we introduce BaseSAP, an autonomous base-layer protocol for embedding stealth addresses within the application layer of programmable blockchains. BaseSAP expands upon previous research to develop a modular protocol for executing unlinkable transactions on public blockchains. Capitalizing on this modularity, BaseSAP allows additional stealth address layers using different cryptographic algorithms to be developed on top of the primary implementation. To demonstrate the effectiveness of our proposed protocol, we present simulations of an advanced Secp256k1-based dual-key stealth address protocol designed on top of BaseSAP and deployed on the Goerli and Sepolia test networks as the first prototype implementation. Furthermore, we provide cost analyses and underscore potential security ramifications and attack vectors that could affect the privacy of stealth addresses. Our study reveals the flexibility of the BaseSAP protocol and offers insight into the broader implications of stealth address technology.
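
To make the dual-key idea concrete, the following self-contained Python sketch derives a one-time stealth address on secp256k1 in the textbook dual-key style (separate scan and spend key pairs). It illustrates the general technique under our own simplifications (toy point hashing, no address encoding) and is not BaseSAP's deployed code; Python 3.8+ is assumed for pow(x, -1, P).

```python
import hashlib, secrets

P = 2**256 - 2**32 - 977                          # secp256k1 field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(p1, p2):                                  # EC point addition
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0: return None
    if p1 == p2: lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P
    else:        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, point=G):                              # double-and-add scalar mult
    result = None
    while k:
        if k & 1: result = add(result, point)
        point, k = add(point, point), k >> 1
    return result

def h(point):                                     # toy hash of a point to a scalar
    return int.from_bytes(hashlib.sha256(str(point).encode()).digest(), "big") % N

# recipient publishes scan pubkey S and spend pubkey B
s, b = secrets.randbelow(N - 1) + 1, secrets.randbelow(N - 1) + 1
S, B = mul(s), mul(b)

# sender: ephemeral key r, publishes R; pays to one-time address B + H(r*S)*G
r = secrets.randbelow(N - 1) + 1
R = mul(r)
P_stealth = add(B, mul(h(mul(r, S))))

# recipient detects with the scan key alone, spends with b + H(s*R)
assert P_stealth == add(B, mul(h(mul(s, R))))
assert mul((b + h(mul(s, R))) % N) == P_stealth
```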

Deaf or hard-of-hearing (DHH) speakers typically have atypical speech caused by their deafness. With the growing support for speech-based devices and software applications, more work needs to be done to make these devices inclusive of everyone. To this end, we analyze the use of openly available automatic speech recognition (ASR) tools on a dataset of DHH Japanese speakers. As such out-of-the-box ASR models typically do not perform well on DHH speech, we provide a thorough analysis of creating personalized ASR systems. We collected a large DHH speaker dataset of four speakers totaling around 28.05 hours and thoroughly analyzed the performance of different training frameworks by varying the training data sizes. Our findings show that 1000 utterances (or 1-2 hours) from a target speaker can already significantly improve model performance with a minimal amount of work, so we recommend that researchers collect at least 1000 utterances to build an efficient personalized ASR system. In cases where 1000 utterances are difficult to collect, we also observe significant improvements from previously proposed techniques such as intermediate fine-tuning when only 200 utterances are available.

The estimation of unknown parameters in simulations, also known as calibration, is crucial for the practical management of epidemics and the prediction of pandemic risk. A simple yet widely used approach is to estimate the parameters by minimizing the sum of squared distances between actual observations and simulation outputs. We show in this paper that this method is inefficient, particularly when the epidemic model is built on certain simplifications of reality, i.e., when it is an imperfect model, as is common in practice. To address this issue, we introduce a new estimator that is asymptotically consistent, has a smaller estimation variance than the least-squares estimator, and achieves semiparametric efficiency. Numerical studies are performed to examine its finite-sample performance. The proposed method is applied to the analysis of the COVID-19 pandemic in 20 countries based on the SEIR (Susceptible-Exposed-Infectious-Recovered) model with both deterministic and stochastic simulations. The estimated parameters, including the basic reproduction number and the average incubation period, reveal the risk of disease outbreaks in each country and provide insights into the design of public health interventions.
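
For reference, here is a minimal Python sketch of the least-squares calibration baseline the paper critiques: fit SEIR parameters by minimizing the squared distance between observed infection counts and the simulator's output. The right-hand side, synthetic data, initial conditions, and bounds are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def seir(y, t, beta, sigma, gamma):
    S, E, I, R = y
    n = S + E + I + R
    return [-beta * S * I / n,                    # susceptible
            beta * S * I / n - sigma * E,         # exposed (1/sigma = incubation)
            sigma * E - gamma * I,                # infectious
            gamma * I]                            # recovered

def simulate(theta, t, y0):
    beta, sigma, gamma = theta
    return odeint(seir, y0, t, args=(beta, sigma, gamma))[:, 2]  # I(t)

t = np.arange(60, dtype=float)
y0 = (1e6 - 10, 0.0, 10.0, 0.0)

# synthetic "observations" from a known ground truth plus noise
true_theta = [0.6, 1 / 5.2, 1 / 8]
observed = simulate(true_theta, t, y0) + np.random.default_rng(0).normal(0, 50, t.size)

fit = least_squares(lambda th: simulate(th, t, y0) - observed,
                    x0=[0.5, 1 / 5.2, 1 / 10],    # initial guesses
                    bounds=([0.01, 0.01, 0.01], [5, 1, 1]))
beta, sigma, gamma = fit.x
print("R0 estimate:", beta / gamma, " incubation period:", 1 / sigma, "days")
```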

Continued model-based decision support is associated with particular challenges, especially in long-term projects. Due to regularly changing questions and an often changing understanding of the underlying system, the models used must be regularly re-evaluated, re-modelled, and re-implemented with respect to the changing modelling purpose, system boundaries, and mapped causalities. Usually, this leads to models with continuously growing complexity and volume. In this work, we revisit the idea of the model family, dating back to the 1990s, and promote it as a mindset for creating decision support frameworks in large research projects. The idea is not to develop and enhance a single standalone model, but to divide the research tasks among smaller interacting models, each corresponding specifically to a research question. This strategy comes with many advantages, which we explain using the example of a family of models for decision support in the COVID-19 crisis and the corresponding success stories. We describe the individual models, explain their role within the family, and show how they are used, both individually and in combination.
