
Cryptanalysis of the key expansion algorithms of AES and SM4 reveals that (1) there are weaknesses in their S-Boxes, and (2) the round key expansion algorithm is reversible, i.e., the initial key can be recovered from any round key, a weakness an attacker may exploit. To address these problems, we first construct a non-degenerate 2D exponential hyperchaotic map (2D-ECM), derive a recursion formula for the number of S-Boxes satisfying three conditions, and design a strong S-Box construction algorithm free of these weaknesses. Then, based on the 2D-ECM and the S-Box, we design an irreversible key expansion algorithm that transforms the initial key into independent round keys, so that the initial key cannot be recovered from any round key. Security and statistical analyses demonstrate the flexibility and effectiveness of the proposed irreversible key expansion algorithm.
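
As a hedged illustration of the pipeline described above, the following sketch iterates a stand-in 2D exponential-style chaotic map and rank-orders the chaotic samples into a candidate S-Box. The paper's exact 2D-ECM formula and its three S-Box conditions are not given in the abstract, so both the map and the screening tests below are illustrative, not the authors' construction:

```python
import math

def ecm_step(x, y, a=21.3, b=5.7):
    # One iteration of an illustrative 2D exponential-style chaotic map;
    # the paper's exact 2D-ECM formula is not given in the abstract.
    xn = (a * math.exp(x + y)) % 1.0
    yn = (b * math.exp(x - y + 1.0)) % 1.0
    return xn, yn

def make_sbox(seed=(0.37, 0.82), max_tries=1000):
    x, y = seed
    for _ in range(200):                       # discard the transient
        x, y = ecm_step(x, y)
    for _ in range(max_tries):
        vals = []
        for _ in range(256):
            x, y = ecm_step(x, y)
            vals.append(x)
        # Rank-ordering the 256 chaotic samples yields a bijection on 0..255.
        sbox = sorted(range(256), key=vals.__getitem__)
        # Illustrative screening (the paper's three conditions are not
        # spelled out in the abstract): reject fixed points and 2-cycles.
        if all(sbox[i] != i and sbox[sbox[i]] != i for i in range(256)):
            return sbox
    raise RuntimeError("no admissible S-Box found")

print(make_sbox()[:8])
```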

Related content

Efficient contact tracing and isolation is an effective strategy to control epidemics. It was used effectively during the Ebola epidemic and successfully implemented in several parts of the world during the ongoing COVID-19 pandemic. An important consideration in contact tracing is the budget on the number of individuals asked to quarantine -- the budget is limited for socioeconomic reasons. In this paper, we present a Markov Decision Process (MDP) framework to formulate the problem of using contact tracing to reduce the size of an outbreak while asking a limited number of people to quarantine. We formulate each step of the MDP as a combinatorial problem, MinExposed, which we demonstrate is NP-Hard; as a result, we develop an LP-based approximation algorithm. Though this algorithm directly solves MinExposed, it is often impractical in the real world due to information constraints. To address this, we develop a greedy approach based on insights from the analysis of the LP-based algorithm, which we show is more interpretable. A key feature of the greedy algorithm is that it does not need complete information about the underlying social contact network. This makes the heuristic implementable in practice, an important consideration for deployment. Finally, we carry out experiments on simulations of the MDP run on real-world networks and show how the algorithms can help in bending the epidemic curve while limiting the number of isolated individuals. Our experimental results demonstrate that the greedy algorithm and its variants are especially effective, robust, and practical in a variety of realistic scenarios, such as when the contact graph and specific transmission probabilities are not known. All code can be found in our GitHub repository: //github.com/gzli929/ContactTracing.
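
A minimal sketch of a greedy quarantine step in the spirit of the paper's heuristic, assuming a simplified objective (quarantine the exposed contacts that could in turn expose the most susceptible neighbors) rather than the exact MinExposed formulation; `networkx` supplies the contact graph:

```python
import networkx as nx

def greedy_quarantine(G, infected, budget):
    infected = set(infected)
    # Exposed front: susceptible contacts of the known infected nodes.
    exposed = {v for u in infected for v in G.neighbors(u)} - infected
    # Score each exposed node by how many fresh susceptible neighbors it
    # could expose if it becomes infectious.
    def score(v):
        return sum(1 for w in G.neighbors(v)
                   if w not in infected and w not in exposed)
    # Quarantine the top-scoring exposed nodes within the budget.
    return set(sorted(exposed, key=score, reverse=True)[:budget])

# Toy usage on a small random contact network.
G = nx.erdos_renyi_graph(100, 0.05, seed=1)
print(greedy_quarantine(G, infected=[0, 1, 2], budget=5))
```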

In this work, we consider the $d$-{\sc Hyperedge Estimation} and $d$-{\sc Hyperedge Sample} problems for a hypergraph $\mathcal{H}(U(\mathcal{H}),\mathcal{F}(\mathcal{H}))$ in the query complexity framework, where $U(\mathcal{H})$ denotes the set of vertices and $\mathcal{F}(\mathcal{H})$ denotes the set of hyperedges. The oracle access to the hypergraph is called the {\sc Colorful Independence Oracle} ({\sc CID}): it takes $d$ (non-empty) pairwise disjoint subsets of vertices $A_1,\ldots,A_d \subseteq U(\mathcal{H})$ as input and answers whether there exists a hyperedge in $\mathcal{H}$ having (exactly) one vertex in each $A_i$, $i \in \{1,2,\ldots,d\}$. The problems of $d$-{\sc Hyperedge Estimation} and $d$-{\sc Hyperedge Sample} with {\sc CID} oracle access are important in their own right as combinatorial problems. Also, Dell et al. [SODA '20] established that the {\em decision} vs.\ {\em counting} complexities of a number of combinatorial optimization problems can be abstracted as $d$-{\sc Hyperedge Estimation} problems with {\sc CID} oracle access. The main technical contribution of the paper is an algorithm that estimates $m = |\mathcal{F}(\mathcal{H})|$ with $\hat{m}$ such that
$$ \frac{1}{C_{d}\log^{d-1} n} \;\leq\; \frac{\hat{m}}{m} \;\leq\; C_{d} \log^{d-1} n $$
using at most $C_{d}\log^{d+2} n$ {\sc CID} queries, where $n$ denotes the number of vertices in the hypergraph $\mathcal{H}$ and $C_{d}$ is a constant that depends only on $d$. Our result, coupled with the framework of Dell et al. [SODA '21], implies improved bounds for a number of fundamental problems.
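
For concreteness, a naive reference implementation of the CID oracle on an explicitly stored $d$-uniform hypergraph is sketched below; in the paper's model the algorithm only has black-box access to such an oracle and never sees the edge list:

```python
def cid_oracle(hyperedges, blocks):
    """hyperedges: iterable of frozensets of size d;
    blocks: pairwise-disjoint vertex sets A_1, ..., A_d.
    Answers: is there a hyperedge with exactly one vertex in each A_i?"""
    return any(all(len(e & A) == 1 for A in blocks) for e in hyperedges)

# Toy d = 2 example (a graph viewed as a 2-uniform hypergraph).
H = [frozenset({1, 4}), frozenset({2, 5}), frozenset({3, 6})]
print(cid_oracle(H, [{1, 2, 3}, {5, 6}]))   # True: {2,5} hits both blocks once
print(cid_oracle(H, [{1, 2}, {6}]))         # False: no such "colorful" edge
```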

In this paper we propose an accurate and computationally efficient method for incorporating adaptive spatial resolution into weakly-compressible Smoothed Particle Hydrodynamics (SPH) schemes. Particles are adaptively split and merged in an accurate manner. Critically, the method ensures that the number of neighbors of each particle is optimal, leading to an efficient algorithm. A set of background particles is used to specify either geometry-based spatial resolution, where the resolution is a function of distance to a solid body, or solution-based adaptive resolution, where the resolution is a function of the computed solution. This allows us to simulate problems using particles with length-scale variations of the order of 1:250, with far fewer particles than currently reported with other techniques. The method is designed to automatically adapt when any solid bodies move. The algorithms employed are fully parallel. We consider a suite of benchmark problems to demonstrate the accuracy of the approach. We then consider the classic problem of the flow past a circular cylinder at a range of Reynolds numbers and show that the proposed method produces accurate results with a significantly reduced number of particles. We provide an open source implementation and a fully reproducible manuscript.
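
A minimal sketch of geometry-based adaptive resolution in the spirit of the method: the target smoothing length grows with distance to a solid body, and particles that are too coarse are split (a merge is shown as well). Mass and momentum are conserved, but the paper's splitting patterns, neighbor-optimality criteria, and background-particle machinery are omitted, so the rules below are illustrative:

```python
import numpy as np

def target_h(x, body_center, h_min=0.01, slope=0.1):
    # Geometry-based resolution: finest at the body, coarsening with distance.
    return h_min + slope * np.linalg.norm(x - body_center)

def split(p):
    # Two daughters conserving mass and momentum.
    d = 0.35 * p["h"] * np.array([1.0, 0.0])
    h = p["h"] / 2 ** 0.5
    return [{"x": p["x"] + d, "v": p["v"], "m": p["m"] / 2, "h": h},
            {"x": p["x"] - d, "v": p["v"], "m": p["m"] / 2, "h": h}]

def merge(p, q):
    # Mass- and momentum-conserving merge (the kinetic-energy error is
    # simply accepted in this sketch).
    m = p["m"] + q["m"]
    return {"x": (p["m"] * p["x"] + q["m"] * q["x"]) / m,
            "v": (p["m"] * p["v"] + q["m"] * q["v"]) / m,
            "m": m, "h": max(p["h"], q["h"]) * 2 ** 0.5}

p = {"x": np.array([1.0, 0.0]), "v": np.zeros(2), "m": 1.0, "h": 0.3}
if p["h"] > 1.5 * target_h(p["x"], body_center=np.zeros(2)):
    daughters = split(p)    # particle too coarse for its location: refine
```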

This paper presents a machine-learning-enhanced longitudinal scanline method to extract vehicle trajectories from high-angle traffic cameras. The Dynamic Mode Decomposition (DMD) method is applied to extract vehicle strands by decomposing the Spatial-Temporal Map (STMap) into a sparse foreground and a low-rank background. A deep neural network named Res-UNet+ is designed for the semantic segmentation task by adapting two prevalent deep learning architectures. Res-UNet+ significantly improves the performance of STMap-based vehicle detection, and the DMD model provides insights into the evolution of the underlying spatial-temporal structures preserved by the STMap. The model outputs are compared with a previous image processing model and mainstream semantic segmentation deep neural networks. A thorough evaluation shows the model to be accurate and robust against many challenging factors. Finally, this paper addresses many of the quality issues found in the NGSIM trajectory data. The cleaned high-quality trajectory data are published to support future theoretical and modeling research on traffic flow and microscopic vehicle control. The method is a reliable solution for video-based trajectory extraction and has wide applicability.
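
A compact sketch of the DMD foreground/background split applied to an STMap, following the standard DMD background-subtraction recipe: modes with near-unit eigenvalues reconstruct the static background, and the residual is the sparse foreground (moving vehicles). The paper's rank choice and preprocessing are not specified in the abstract, so both are illustrative here:

```python
import numpy as np

def dmd_separate(stmap, rank=10, eps=1e-2):
    """stmap: 2D array (space x time). Returns (background, foreground)."""
    X1, X2 = stmap[:, :-1], stmap[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    evals, W = np.linalg.eig(Atilde)
    Phi = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W      # DMD modes
    omega = np.log(evals)                              # log-eigenvalues
    b = np.linalg.lstsq(Phi, X1[:, 0].astype(complex), rcond=None)[0]
    slow = np.abs(omega) < eps                         # near-unit modes: background
    t = np.arange(stmap.shape[1])
    background = (Phi[:, slow] * b[slow]) @ np.exp(np.outer(omega[slow], t))
    foreground = stmap - background.real               # residual: moving vehicles
    return background.real, foreground

stmap = np.random.rand(64, 200)        # placeholder STMap (space x time)
bg, fg = dmd_separate(stmap)
```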

Measuring the predictability and complexity of time series using entropy is an essential tool for designing and controlling nonlinear systems. However, existing methods have drawbacks related to the strong dependence of the entropy on the parameters of the methods. To overcome these difficulties, this study proposes a new method for estimating the entropy of a time series using the LogNNet neural network model. The LogNNet reservoir matrix is filled with time series elements according to our algorithm. The classification accuracy on images from the MNIST-10 database is taken as the entropy measure and denoted NNetEn. The novelty of the entropy calculation is that the time series participates in mixing the input information in the reservoir. Greater complexity in the time series leads to higher classification accuracy and higher NNetEn values. We introduce a new time series characteristic, called time series learning inertia, that determines the learning rate of the neural network. The robustness and efficiency of the method are verified on chaotic, periodic, random, binary, and constant time series. Comparison of NNetEn with other entropy estimation methods demonstrates that our method is more robust and accurate and can be widely used in practice.
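
A hedged, self-contained sketch of the NNetEn idea: fill a reservoir matrix with the time series, push inputs through it, and read off downstream classification accuracy as the entropy value. For self-containment it uses sklearn's small digits dataset in place of MNIST-10 and a logistic readout in place of the exact LogNNet architecture, so the numbers are proxies, not NNetEn itself:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def nneten(series, reservoir_size=25):
    X, y = load_digits(return_X_y=True)
    X = X / 16.0                                       # scale pixels to [0, 1]
    # Fill the reservoir matrix row by row from the time series (cyclically),
    # then normalize the filled weights.
    W = np.resize(np.asarray(series, dtype=float), (reservoir_size, X.shape[1]))
    W = (W - W.mean()) / (W.std() + 1e-12)
    H = np.tanh(X @ W.T)                               # reservoir activations
    Xtr, Xte, ytr, yte = train_test_split(H, y, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    return clf.score(Xte, yte)                         # accuracy as entropy proxy

print(nneten(np.random.rand(300)))   # complex series -> higher value
print(nneten(np.zeros(300)))         # constant series -> chance-level value
```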

We introduce a new numerical method for solving time-harmonic acoustic scattering problems. The main focus is on plane waves scattered by smoothly varying material inhomogeneities. The proposed method works for any frequency $\omega$, but is especially efficient for high-frequency problems. It is based on a time-domain approach and consists of three steps: \emph{i)} computation of a suitable incoming plane wavelet with compact support in the propagation direction; \emph{ii)} solving a scattering problem in the time domain for the incoming plane wavelet; \emph{iii)} reconstruction of the time-harmonic solution from the time-domain solution via a Fourier transform in time. An essential ingredient of the new method is a front-tracking mesh adaptation algorithm for solving the problem in \emph{ii)}. By exploiting the limited support of the wave front, this makes the number of degrees of freedom required to reach a given accuracy significantly less dependent on the frequency $\omega$. We also present a new algorithm for computing the Fourier transform in \emph{iii)} that exploits the reduced number of degrees of freedom corresponding to the adapted meshes. Numerical examples demonstrate the advantages of the proposed method and show that it can also be applied to problems with external source terms, such as point sources, and to sound-soft scatterers. The gained efficiency, however, is limited in the presence of trapping modes.
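
A minimal sketch of step \emph{iii)}, assuming the time-domain solution is available as an array of snapshots: the time-harmonic field at frequency $\omega$ is recovered by a discrete Fourier transform in time, normalized by the spectrum of the incoming wavelet. The quadrature and the synthetic signals are illustrative, not the paper's adapted-mesh algorithm:

```python
import numpy as np

def time_harmonic_from_transient(u_xt, wavelet_t, dt, omega):
    """u_xt: (n_points, n_steps) time-domain solution;
    wavelet_t: (n_steps,) incident wavelet at a reference point."""
    t = np.arange(u_xt.shape[1]) * dt
    kernel = np.exp(1j * omega * t) * dt
    u_hat = u_xt @ kernel                 # Fourier transform in time
    w_hat = wavelet_t @ kernel            # wavelet spectrum at omega
    return u_hat / w_hat                  # normalize to unit-amplitude harmonic

# Toy usage with a synthetic transient field proportional to the wavelet.
nt, dt, omega = 2048, 1e-3, 2 * np.pi * 40.0
t = np.arange(nt) * dt
wavelet = np.exp(-((t - 0.3) / 0.05) ** 2) * np.cos(omega * t)
u = np.outer(np.array([0.5, 1.0, 2.0]), wavelet)            # 3 scaled "points"
print(time_harmonic_from_transient(u, wavelet, dt, omega))  # ~ [0.5, 1.0, 2.0]
```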

Wireless sensor networks (WSNs) comprise several spatially distributed sensor nodes that communicate over an open radio channel, thereby making the network vulnerable to eavesdroppers (EDs). As a physical layer security approach, intelligent reflecting surface (IRS) technology has recently emerged as an effective technique for security in WSNs. Unlike prior works that do not consider the role of the IRS in facilitating parameter estimation in WSNs, we propose a scheme for joint transmit and reflective beamformer (JTRB) design for secure parameter estimation at the fusion center (FC) in the presence of an ED. To solve the resulting non-convex optimization problem, we develop a semidefinite relaxation (SDR)-based iterative algorithm, which alternately yields the transmit beamformer at each sensor node and the corresponding reflection phases at the IRS, to achieve the minimum mean-squared error (MMSE) parameter estimate at the FC, subject to transmit power and ED signal-to-noise ratio (SNR) constraints. Our simulation results demonstrate robust MSE and security performance of the proposed IRS-based JTRB technique.
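
The following sketch illustrates only the alternating structure of such a design: fix the IRS phases and update the transmit gains, then fix the gains and update the phases. The paper solves each step via SDR under MMSE and ED-SNR constraints; this sketch substitutes a simple matched-filter / phase-alignment heuristic that maximizes the legitimate channel gain, so it is a structural illustration, not the paper's optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 4, 16                                   # sensors, IRS elements
h_d = rng.standard_normal(K) + 1j * rng.standard_normal(K)          # sensor -> FC
G = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))  # sensor -> IRS
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)            # IRS -> FC

w = np.ones(K) / np.sqrt(K)                    # transmit gains, unit power
theta = np.ones(N, dtype=complex)              # unit-modulus IRS coefficients

for _ in range(20):
    h_eff = h_d + G.T @ (theta * g)            # effective sensor -> FC channel
    w = h_eff.conj() / np.linalg.norm(h_eff)   # matched transmit update
    c = h_d @ w                                # direct-path contribution
    s = g * (G @ w)                            # per-element IRS contribution
    theta = np.exp(1j * (np.angle(c) - np.angle(s)))  # co-phase with direct path

print("channel gain:", abs((h_d + G.T @ (theta * g)) @ w))
```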

Entropy-conserving numerical fluxes are a cornerstone of modern high-order entropy-dissipative discretizations of conservation laws. In addition to entropy conservation, other structural properties mimicking the continuous level such as pressure equilibrium and kinetic energy preservation are important. This note proves that there are no numerical fluxes conserving (one of) Harten's entropies for the compressible Euler equations that also preserve pressure equilibria and have a density flux independent of the pressure. This is in contrast to fluxes based on the physical entropy, where even kinetic energy preservation can be achieved in addition.
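
For context, a brief recap of the standard two-point entropy-conservation condition (due to Tadmor) against which such flux results are stated; this is background in the usual notation, not a result of the note:

```latex
% Tadmor's condition: for an entropy pair (\eta, q) with entropy
% variables w = \eta'(u) and flux potential \psi = w \cdot f(u) - q(u),
% a two-point numerical flux f^*(u_L, u_R) conserves the entropy \eta
% exactly when the jump condition
\[
  (w_R - w_L)^{\top} f^{*}(u_L, u_R) \;=\; \psi_R - \psi_L
\]
% holds for all state pairs. The note's negative result says that, for
% Harten's entropies, no such f^* can additionally preserve pressure
% equilibria while keeping the density flux independent of the pressure.
```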

We show that any $n$-bit string can be recovered with high probability from $\exp(\widetilde{O}(n^{1/5}))$ independent random subsequences.
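
An illustrative sketch of the underlying trace model, assuming each subsequence arises from independent bit deletions (the deletion rate below is illustrative, not part of the stated result):

```python
import random

def random_subsequence(s, keep=0.5, seed=None):
    # Each bit of the hidden string survives independently with
    # probability `keep`; the output is one random subsequence ("trace").
    rng = random.Random(seed)
    return "".join(c for c in s if rng.random() < keep)

x = "1011001110001011"
print([random_subsequence(x, seed=i) for i in range(5)])
```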

In the group testing problem, the goal is to identify a subset of defective items within a larger set of items based on tests whose outcomes indicate whether any defective item is present. This problem is relevant in areas such as medical testing, DNA sequencing, and communications. In this paper, we study a doubly-regular design in which the number of tests-per-item and the number of items-per-test are fixed. We analyze the performance of this test design alongside the Definite Defectives (DD) decoding algorithm in several settings, namely, (i) the sub-linear regime $k=o(n)$ with exact recovery, (ii) the linear regime $k=\Theta(n)$ with approximate recovery, and (iii) the size-constrained setting, where the number of items per test is constrained. Under setting (i), we show that our design, together with the DD algorithm, matches an existing achievability result for the DD algorithm with the near-constant tests-per-item design, which is known to be asymptotically optimal in broad scaling regimes. Under setting (ii), we provide novel approximate recovery bounds that complement a hardness result regarding exact recovery. Lastly, under setting (iii), we improve on the best known upper and lower bounds in scaling regimes where the maximum allowed test size grows with the total number of items.
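
A compact sketch of the Definite Defectives (DD) decoder analyzed in the paper; the test design in the toy example below is arbitrary rather than the paper's doubly-regular design:

```python
def dd_decode(tests, outcomes, n):
    """tests: list of sets of item indices; outcomes: list of bools."""
    # Step 1: items appearing in any negative test are definitely non-defective.
    dnd = set()
    for T, positive in zip(tests, outcomes):
        if not positive:
            dnd |= T
    possible = set(range(n)) - dnd
    # Step 2: a possible item that is alone among possibles in some
    # positive test is definitely defective.
    defective = set()
    for T, positive in zip(tests, outcomes):
        if positive:
            remaining = T & possible
            if len(remaining) == 1:
                defective |= remaining
    return defective

tests = [{0, 1, 2}, {2, 3}, {1, 3}]
outcomes = [True, False, True]           # consistent with item 1 defective
print(dd_decode(tests, outcomes, n=4))   # {1}
```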
