
We present an improved post-quantum version of the Sakalauskas matrix power function key-agreement protocol, using rectangular matrices instead of the original square ones. The Sakalauskas matrix power function is an efficient and secure way to generate a shared secret key, and using rectangular matrices can provide additional flexibility and security in some applications. This method reduces the computational complexity by allowing smaller random integer matrices while maintaining equal security. Another advantage of using rank-deficient rectangular matrices in key-agreement protocols is that they provide additional protection against several linearization attacks.
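For context, a minimal sketch of the two-sided matrix power function itself is given below, computing $E_{ij} = \prod_{k,l} w_{kl}^{\,x_{ik} y_{lj}} \bmod p$ over a toy prime modulus. The rectangular shapes, the modulus, and the random entries are illustrative assumptions only, not the parameters of the proposed protocol.

```python
# Minimal sketch of the two-sided matrix power function (MPF):
# E[i][j] = prod_{k,l} W[k][l] ** (X[i][k] * Y[l][j])  (mod p)
# Rectangular shapes and the modulus p are illustrative assumptions only.
import random

def mpf(X, W, Y, p):
    m, r = len(X), len(X[0])           # X is m x r, W is r x c, Y is c x s
    s = len(Y[0])
    E = [[1] * s for _ in range(m)]
    for i in range(m):
        for j in range(s):
            e = 1
            for k in range(r):
                for l in range(len(Y)):
                    # raise each base entry to the product of the two exponents
                    e = e * pow(W[k][l], X[i][k] * Y[l][j], p) % p
            E[i][j] = e
    return E

if __name__ == "__main__":
    p = 2 ** 13 - 1                                                      # toy prime modulus (assumption)
    W = [[random.randrange(1, p) for _ in range(3)] for _ in range(2)]   # 2x3 base matrix
    X = [[random.randrange(0, 50) for _ in range(2)] for _ in range(2)]  # 2x2 left exponent matrix
    Y = [[random.randrange(0, 50) for _ in range(2)] for _ in range(3)]  # 3x2 right exponent matrix
    print(mpf(X, W, Y, p))
```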

Related Content

Wang et al. (IEEE Transactions on Information Theory, vol. 62, no. 8, 2016) proposed an explicit construction of an $(n=k+2,k)$ Minimum Storage Regenerating (MSR) code with $2$ parity nodes and subpacketization $2^{k/3}$. The number of helper nodes for this code is $d=k+1=n-1$, and this code has the smallest subpacketization among all the existing explicit constructions of MSR codes with the same $n,k$ and $d$. In this paper, we present a new construction of MSR codes for a wider range of parameters. More precisely, we still fix $d=k+1$, but we allow the code length $n$ to be any integer satisfying $n\ge k+2$. The field size of our code is linear in $n$, and the subpacketization of our code is $2^{n/3}$. This value is slightly larger than the subpacketization of the construction by Wang et al. because their code construction only guarantees optimal repair for all the systematic nodes while our code construction guarantees optimal repair for all nodes.
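As a reminder of the benchmark these constructions target, an $(n,k)$ MDS code with subpacketization $\alpha$ that repairs a single node from $d$ helpers must, at the MSR point, meet the cut-set bound
$$
\beta = \frac{\alpha}{d-k+1}, \qquad \gamma = d\beta = \frac{d\,\alpha}{d-k+1},
$$
so for $d = k+1$ each helper transfers $\alpha/2$ symbols and the total repair bandwidth per failed node is $(k+1)\alpha/2$.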

Recurrent events, including cardiovascular events, are commonly observed in biomedical studies. Understanding the effects of various treatments on recurrent events and investigating the underlying mediation mechanisms by which treatments may reduce their frequency are crucial. Although causal inference methods for recurrent event data have been proposed, they cannot be used to assess mediation. This study proposes a novel causal mediation analysis methodology that accommodates recurrent outcomes of interest in a given individual. We give a formal definition of the causal estimands (direct and indirect effects) within a counterfactual framework and identify empirical expressions for these effects. To estimate these effects, we develop a semiparametric estimator with triple robustness against model misspecification. The proposed methodology is demonstrated in a real-world application: the method is applied to measure the effects of two diabetes drugs on the recurrence of cardiovascular disease and to examine the mediating role of kidney function in this process.
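For reference, the standard counterfactual decomposition of a total effect into natural direct and indirect effects is shown below; the paper's estimands adapt this decomposition to recurrent-event outcomes rather than a single terminal outcome:
$$
\mathrm{NDE} = E\!\left[Y\big(1, M(0)\big) - Y\big(0, M(0)\big)\right], \qquad
\mathrm{NIE} = E\!\left[Y\big(1, M(1)\big) - Y\big(1, M(0)\big)\right],
$$
with total effect $\mathrm{TE} = \mathrm{NDE} + \mathrm{NIE}$, where $Y(a, M(a'))$ denotes the outcome under treatment $a$ with the mediator set to the value it would take under treatment $a'$.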

The tropical semiring has proven successful in several research areas, including optimal control, bioinformatics, discrete event systems, and decision problems. In previous studies, a matrix two-factorization algorithm based on the tropical semiring was applied to investigate bipartite and tripartite networks. Tri-factorization algorithms based on standard linear algebra are used for tasks such as data fusion, co-clustering, matrix completion, community detection, and more. However, there is currently no tropical matrix tri-factorization approach that would allow the analysis of multipartite networks with a large number of parts. To address this, we propose the triFastSTMF algorithm, which performs tri-factorization over the tropical semiring. We apply it to analyze a four-partition network structure and recover the edge lengths of the network. We show that triFastSTMF performs similarly to Fast-NMTF in terms of approximation and prediction performance when fitted on the whole network. When trained on a specific subnetwork and used to predict the whole network, triFastSTMF outperforms Fast-NMTF with an error several orders of magnitude smaller. The robustness of triFastSTMF is due to tropical operations, which are less prone to predicting large values than standard operations.
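To make the tropical setting concrete, the sketch below implements max-plus matrix multiplication and evaluates the residual of a tri-factorization $A \approx U \otimes S \otimes V$. The max-plus convention, matrix sizes, and random factors are illustrative assumptions and do not reproduce the triFastSTMF update rules.

```python
# Sketch of tropical (max-plus) matrix products for a tri-factorization A ≈ U ⊗ S ⊗ V.
# The max-plus convention and the random factors are illustrative assumptions only.
import numpy as np

def tropical_matmul(A, B):
    # (A ⊗ B)[i, j] = max_k (A[i, k] + B[k, j])
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

rng = np.random.default_rng(0)
m, r1, r2, n = 6, 3, 3, 5
U = rng.uniform(0, 1, (m, r1))
S = rng.uniform(0, 1, (r1, r2))
V = rng.uniform(0, 1, (r2, n))

A_hat = tropical_matmul(tropical_matmul(U, S), V)   # tropical tri-factorization model
A = rng.uniform(0, 2, (m, n))                       # data matrix to approximate
print("RMSE of the tropical model:", np.sqrt(np.mean((A - A_hat) ** 2)))
```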

Gun violence is a major problem in contemporary American society. However, relatively little is known about the effects of firearm injuries on survivors and their family members and how these effects vary across subpopulations. To study these questions and, more generally, to address a gap in the causal inference literature, we present a framework for the study of effect modification or heterogeneous treatment effects in difference-in-differences designs. We implement a new matching technique, which combines profile matching and risk set matching, to (i) preserve the time alignment of covariates, exposure, and outcomes, avoiding pitfalls of other common approaches for difference-in-differences, and (ii) explicitly control biases due to imbalances in observed covariates in subgroups discovered from the data. Our case study shows significant and persistent effects of nonfatal firearm injuries on several health outcomes for those injured and on the mental health of their family members. Sensitivity analyses reveal that these results are moderately robust to unmeasured confounding bias. Finally, while the effects for those injured are modified largely by the severity of the injury and its documented intent, for families, effects are strongest for those whose relative's injury is documented as resulting from an assault, self-harm, or law enforcement intervention.
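For orientation, the canonical two-period difference-in-differences contrast that the matched design emulates within each discovered subgroup is
$$
\hat{\tau}_{\mathrm{DiD}}
= \big(\bar{Y}^{\text{injured}}_{\text{post}} - \bar{Y}^{\text{injured}}_{\text{pre}}\big)
- \big(\bar{Y}^{\text{matched control}}_{\text{post}} - \bar{Y}^{\text{matched control}}_{\text{pre}}\big),
$$
which identifies the average effect on the treated under a parallel-trends assumption for the matched comparison group.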

Calibrating agent-based models (ABMs) in economics and finance typically involves a derivative-free search in a very large parameter space. In this work, we benchmark a number of search methods in the calibration of a well-known macroeconomic ABM on real data, and further assess the performance of "mixed strategies" made by combining different methods. We find that methods based on random-forest surrogates are particularly efficient, and that combining search methods generally increases performance since the biases of any single method are mitigated. Moving from these observations, we propose a reinforcement learning (RL) scheme to automatically select and combine search methods on-the-fly during a calibration run. The RL agent keeps exploiting a specific method only as long as this keeps performing well, but explores new strategies when the specific method reaches a performance plateau. The resulting RL search scheme outperforms any other method or method combination tested, and does not rely on any prior information or trial and error procedure.
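A minimal sketch of such a controller is given below: a bandit-style agent keeps exploiting the current search method while the calibration loss keeps improving and switches methods when improvement plateaus. The `run_batch` interface, the method list, and the hyperparameters are hypothetical placeholders, not the actual RL scheme of the paper.

```python
# Bandit-style controller sketch: exploit the current search method while the calibration
# loss keeps improving; explore another method when improvement stalls. Hypothetical hooks.
import random

def rl_calibrate(methods, run_batch, n_batches=100, patience=3, eps=0.1):
    # run_batch(method) -> best calibration loss achieved in that batch (assumed interface)
    rewards = {m: 0.0 for m in methods}     # running average improvement per method
    counts = {m: 0 for m in methods}
    current, best_loss, stalls = random.choice(methods), float("inf"), 0
    for _ in range(n_batches):
        loss = run_batch(current)
        improvement = 0.0 if best_loss == float("inf") else max(best_loss - loss, 0.0)
        best_loss = min(best_loss, loss)
        counts[current] += 1
        rewards[current] += (improvement - rewards[current]) / counts[current]
        stalls = 0 if improvement > 0 else stalls + 1
        if stalls >= patience or random.random() < eps:
            current = random.choice(methods)          # explore a different method
            stalls = 0
        else:
            current = max(methods, key=rewards.get)   # exploit the best average improvement
    return best_loss
```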

Quantum information processing and its subfield, quantum image processing, are rapidly growing fields, driven by advances in the practical realization of quantum technologies. In this paper, we propose a quantum algorithm for processing information, such as one-dimensional time series and two-dimensional images, in the frequency domain. The information of interest is encoded in the magnitude of the probability amplitude, i.e., the coefficient of each basis state. The filtering oracle operates on the basis of postselection results, and its explicit circuit design is presented. This oracle is versatile enough to perform all basic filtering operations, including high-pass, low-pass, band-pass, and band-stop filtering, as well as many other processing techniques. Finally, we present two novel schemes for transposing matrices. They use similar encoding rules but differ in the deliberate choice of basis states. These schemes could also be useful for other quantum information processing tasks, such as edge detection. The proposed techniques are implemented on the IBM Qiskit quantum simulator, and selected results are compared with those of classical information processing to verify their correctness.
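The sketch below gives a classical NumPy emulation of the filtering principle: amplitude-encode a signal, move to the frequency basis with the QFT matrix, project onto the kept frequencies (mimicking postselection) and renormalize, then transform back. It illustrates the idea only and is not the oracle circuit proposed in the paper.

```python
# Classical emulation of frequency-domain filtering on an amplitude-encoded state.
# Illustrative only; not the paper's postselection oracle circuit.
import numpy as np

n_qubits = 3
N = 2 ** n_qubits
omega = np.exp(2j * np.pi / N)
QFT = np.array([[omega ** (j * k) for k in range(N)] for j in range(N)]) / np.sqrt(N)

t = np.arange(N)
signal = np.cos(2 * np.pi * t / N) + 0.3 * np.cos(2 * np.pi * 3 * t / N)
state = signal / np.linalg.norm(signal)            # amplitude encoding of the signal

freq = QFT @ state                                 # frequency-domain amplitudes
keep = np.zeros(N, dtype=bool)
keep[[0, 1, N - 1]] = True                         # low-pass: keep DC and the +/-1 bins
filtered = np.where(keep, freq, 0.0)
filtered = filtered / np.linalg.norm(filtered)     # postselection renormalizes the state

recovered = QFT.conj().T @ filtered                # inverse QFT back to the signal domain
print(np.round(recovered.real, 3))
```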

As the popularity of mobile photography continues to grow, considerable effort is being invested in the reconstruction of degraded images. Because the spatial variation of optical aberrations cannot be avoided during lens design, recent commercial cameras have shifted some of these correction tasks from optical design to postprocessing systems. However, without engaging with the optical parameters, these systems achieve only limited correction of aberrations. In this work, we propose a practical method for recovering images degraded by optical aberrations. Specifically, we establish an imaging simulation system based on our proposed optical point spread function model. Given the optical parameters of a camera, it generates the imaging results of that specific device. To perform the restoration, we design a spatially adaptive network model and train it on synthetic data pairs generated by the imaging simulation system, eliminating the overhead of capturing training data through extensive shooting and registration. Moreover, we comprehensively evaluate the proposed method, both in simulation and experimentally with a customized digital single-lens-reflex (DSLR) camera lens and a HUAWEI HONOR 20. The experiments demonstrate that our solution successfully removes spatially variant blur and color dispersion. Compared with state-of-the-art deblurring methods, the proposed approach achieves better results with lower computational overhead. Moreover, the reconstruction technique does not introduce artificial texture and transfers readily to current commercial cameras. Project Page: \url{//github.com/TanGeeGo/ImagingSimulation}.
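As a toy illustration of the degradation model behind such a simulation system, the sketch below blurs each image patch with its own point spread function. The Gaussian PSFs and the patch grid are assumptions for clarity; the paper derives its spatially varying PSFs from the lens's actual optical parameters.

```python
# Toy illustration of spatially varying blur: each image patch is convolved with its own
# PSF. Gaussian PSFs and the patch grid are illustrative assumptions; the paper's PSFs
# come from an optical model of the actual lens.
import numpy as np
from scipy.ndimage import gaussian_filter

def spatially_varying_blur(img, grid=4):
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for gy in range(grid):
        for gx in range(grid):
            ys = slice(gy * h // grid, (gy + 1) * h // grid)
            xs = slice(gx * w // grid, (gx + 1) * w // grid)
            # blur grows toward the image corners, mimicking field-dependent aberrations
            sigma = 0.5 + 2.0 * np.hypot(gy - (grid - 1) / 2, gx - (grid - 1) / 2) / grid
            out[ys, xs] = gaussian_filter(img.astype(float), sigma)[ys, xs]
    return out

img = np.zeros((64, 64)); img[::8, :] = 1.0        # synthetic test pattern
print(spatially_varying_blur(img).shape)
```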

Despite recent advances in syncing lip movements with arbitrary audio, current methods still struggle to balance generation quality and the model's generalization ability. Previous studies either require long-term data for training or produce similar movement patterns across all subjects at low quality. In this paper, we propose StyleSync, an effective framework that enables high-fidelity lip synchronization. We find that a style-based generator can deliver this property in both one-shot and few-shot scenarios. Specifically, we design a mask-guided spatial information encoding module that preserves the details of the given face. The mouth shapes are accurately modified by audio through modulated convolutions. Moreover, our design also enables personalized lip-sync by introducing style-space and generator refinement on only a limited number of frames, so that the identity and talking style of a target person are accurately preserved. Extensive experiments demonstrate the effectiveness of our method in producing high-fidelity results on a variety of scenes. Resources can be found at //hangz-nju-cuhk.github.io/projects/StyleSync.
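For readers unfamiliar with modulated convolutions, the sketch below shows a StyleGAN2-style weight modulation step in which a style vector (here standing in for an audio-derived code) rescales the convolution weights per input channel before demodulation. The shapes, the per-sample loop, and the toy style vector are simplifications for clarity, not the StyleSync architecture.

```python
# Minimal sketch of a style-modulated convolution (StyleGAN2-style modulation/demodulation),
# the mechanism by which a style vector can steer a convolution layer. Simplified shapes.
import torch
import torch.nn.functional as F

def modulated_conv2d(x, weight, style, eps=1e-8):
    # x: (B, Cin, H, W); weight: (Cout, Cin, k, k); style: (B, Cin) from a style/audio encoder
    outs = []
    for b in range(x.shape[0]):
        w = weight * style[b].view(1, -1, 1, 1)                 # modulate per input channel
        demod = torch.rsqrt(w.pow(2).sum(dim=[1, 2, 3]) + eps)  # demodulation factor
        w = w * demod.view(-1, 1, 1, 1)
        outs.append(F.conv2d(x[b:b + 1], w, padding=weight.shape[-1] // 2))
    return torch.cat(outs, dim=0)

x = torch.randn(2, 8, 16, 16)
weight = torch.randn(4, 8, 3, 3)
style = torch.randn(2, 8).softmax(dim=1) * 8        # toy style vector (assumption)
print(modulated_conv2d(x, weight, style).shape)     # -> torch.Size([2, 4, 16, 16])
```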

We propose a novel sparse sliced inverse regression method based on random projections in a large-$p$, small-$n$ setting. Embedded in a generalized eigenvalue framework, the proposed approach ultimately reduces to the parallel execution of low-dimensional (generalized) eigenvalue decompositions, which yields high computational efficiency. Theoretically, we prove that this method achieves the minimax-optimal rate of convergence under suitable assumptions. Furthermore, our algorithm involves a delicate reweighting scheme that can significantly enhance the identifiability of the active set of covariates. Extensive numerical studies demonstrate the superiority of the proposed algorithm over competing methods.
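As a baseline for what the generalized eigenvalue framework refers to, the sketch below implements classical sliced inverse regression as a generalized eigenproblem $M v = \lambda \Sigma v$. It omits the paper's random projections, sparsity, and reweighting scheme.

```python
# Baseline sketch of sliced inverse regression (SIR) as a generalized eigenvalue problem:
# slice the response, average the predictors within slices, and solve M v = lambda * Sigma v.
# Omits the paper's random projections and reweighting; shows only the SIR core.
import numpy as np
from scipy.linalg import eigh

def sir_directions(X, y, n_slices=10, n_dirs=1):
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    Sigma = Xc.T @ Xc / n + 1e-6 * np.eye(p)           # regularized sample covariance
    M = np.zeros((p, p))
    for s in np.array_split(np.argsort(y), n_slices):
        m = Xc[s].mean(axis=0)
        M += (len(s) / n) * np.outer(m, m)             # between-slice covariance of means
    vals, vecs = eigh(M, Sigma)                        # generalized eigen-decomposition
    return vecs[:, ::-1][:, :n_dirs]                   # leading directions

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)
print(sir_directions(X, y).ravel().round(2))
```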

Traditional methods for link prediction can be categorized into three main types: graph-structure-feature-based, latent-feature-based, and explicit-feature-based. Graph structure feature methods leverage handcrafted node proximity scores, e.g., common neighbors, to estimate the likelihood of links. Latent feature methods rely on factorizing a network's matrix representations to learn an embedding for each node. Explicit feature methods train a machine learning model on two nodes' explicit attributes. Each of the three types of methods has its unique merits. In this paper, we propose SEAL (learning from Subgraphs, Embeddings, and Attributes for Link prediction), a new framework for link prediction that combines the power of all three types in a single graph neural network (GNN). A GNN is a type of neural network that directly accepts graphs as input and outputs their labels. In SEAL, the input to the GNN is a local subgraph around each target link. We prove theoretically that these local subgraphs preserve a great deal of high-order graph structure information related to link existence. Another key feature is that our GNN can naturally incorporate latent and explicit features: node embeddings (latent features) and node attributes (explicit features) are concatenated in the node information matrix of each subgraph, thus combining the three types of features to enhance GNN learning. Through extensive experiments, SEAL shows unprecedentedly strong performance against a wide range of baseline methods, including various link prediction heuristics and network embedding methods.
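The sketch below illustrates the first step of this pipeline: extracting the $h$-hop enclosing subgraph around a candidate link, with the target edge removed before it is passed to the GNN. It uses networkx for illustration and omits the node labeling scheme and the GNN itself.

```python
# Sketch of extracting the h-hop enclosing subgraph around a candidate link (u, v), the
# input that a SEAL-style model feeds to its GNN. Labeling and the GNN itself are omitted.
import networkx as nx

def enclosing_subgraph(G, u, v, h=1):
    nodes = set()
    for root in (u, v):
        nodes |= set(nx.single_source_shortest_path_length(G, root, cutoff=h))
    sub = G.subgraph(nodes).copy()
    if sub.has_edge(u, v):
        sub.remove_edge(u, v)          # hide the target link so the model cannot cheat
    return sub

G = nx.karate_club_graph()
sub = enclosing_subgraph(G, 0, 33, h=1)
print(sub.number_of_nodes(), sub.number_of_edges())
```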
