
In this paper, we introduce applications of third-order reduced biquaternion tensors in color video processing. We first develop algorithms for computing the singular value decomposition (SVD) of a third-order reduced biquaternion tensor via a new Ht-product. As theoretical applications, we define the Moore-Penrose inverse of a third-order reduced biquaternion tensor and derive its characterizations. In addition, we discuss the general (or Hermitian) solutions of the reduced biquaternion tensor equation $\mathcal{A}\ast_{Ht} \mathcal{X}=\mathcal{B}$, as well as its least-squares solution. Finally, we compress color videos with this SVD, and the experimental results show that our method is faster than the comparison scheme.
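The abstract does not spell out the Ht-product construction, but the compression idea can be illustrated with an ordinary truncated matrix SVD applied per color channel. The snippet below is only a simplified stand-in for the reduced biquaternion tensor SVD; the array shapes and the rank `k=8` are arbitrary illustrative choices.

```python
import numpy as np

def truncated_svd_compress(channel, k):
    """Compress one channel of a video frame by a rank-k truncated SVD."""
    U, s, Vt = np.linalg.svd(channel, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Toy color video: 10 frames of 48x64 RGB noise.
video = np.random.rand(10, 48, 64, 3)
compressed = np.stack([
    np.stack([truncated_svd_compress(frame[:, :, c], k=8)
              for c in range(3)], axis=-1)
    for frame in video
])
# Relative reconstruction error of the rank-8 approximation.
print(np.linalg.norm(video - compressed) / np.linalg.norm(video))
```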

Related content

We propose a multivariate extension of the Lorenz curve based on multivariate rearrangements from optimal transport theory. We define a vector Lorenz map as the integral of the vector quantile map associated with a multivariate resource allocation. Each component of the Lorenz map is the cumulative share of the corresponding resource, as in the traditional univariate case. The pointwise ordering of such Lorenz maps defines a new multivariate majorization order, which is equivalent to preference by any social planner with an inequality-averse multivariate rank-dependent social evaluation functional. We define a family of multi-attribute Gini indices and a complete ordering based on the Lorenz map. We propose the level sets of an Inverse Lorenz Function as a practical tool to visualize and compare inequality in two dimensions, and apply it to income-wealth inequality in the United States between 1989 and 2022.
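As a point of reference for the multivariate construction, the classical univariate case that each component of the Lorenz map generalizes can be computed directly. The following sketch (sample data and function names are our own) builds the empirical Lorenz curve and the Gini index from a sample.

```python
import numpy as np

def lorenz_curve(x):
    """Empirical Lorenz curve: cumulative share of the resource."""
    x = np.sort(np.asarray(x, dtype=float))
    shares = np.cumsum(x) / x.sum()
    return np.insert(shares, 0, 0.0)          # L(0) = 0

def gini(x):
    """Gini index via the standard sorted-sample formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    return (2 * np.sum(i * x)) / (n * np.sum(x)) - (n + 1) / n

incomes = np.random.lognormal(mean=0.0, sigma=1.0, size=10_000)
# For lognormal with sigma = 1, Gini = 2*Phi(sigma/sqrt(2)) - 1 ~ 0.52.
print(gini(incomes))
```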

Matrix-valued data sets arise in many areas, and matrix regression models have consequently received considerable attention. One such model is adaptive nuclear norm regularized trace regression, which has been shown to have good statistical performance. To accelerate the computation of this model, we consider a technique called the screening rule. Based on a matrix decomposition and the optimality conditions of the model, we develop a safe subspace screening rule that identifies inactive subspaces of the solution decomposition and thereby reduces the dimension of the solution. To evaluate the efficiency of the safe subspace screening rule, we embed it into the alternating direction method of multipliers (ADMM) algorithm run over a sequence of tuning parameters. In this process, the solution at each tuning parameter provides a matrix decomposition space; the safe subspace screening rule is then applied to eliminate inactive subspaces, reduce the solution dimension, and accelerate the computation. Numerical experiments on simulated and real data sets illustrate the efficiency of our screening rule.
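The safe subspace screening rule itself is not given in the abstract, but the step it accelerates can be made concrete: in ADMM for nuclear-norm-regularized trace regression, each iteration applies singular value thresholding, and the singular subspaces zeroed by the threshold are exactly the kind of inactive subspace a screening rule tries to discard in advance. A minimal sketch, with an arbitrary test matrix:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox of tau * nuclear norm.
    Singular values below tau are zeroed; a safe screening rule aims
    to identify these inactive subspaces without the full SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    active = s_shrunk > 0                     # the retained subspace
    return U[:, active] @ np.diag(s_shrunk[active]) @ Vt[active, :]

M = np.random.randn(30, 20)
print(np.linalg.matrix_rank(svt(M, tau=3.0)))  # low-rank output
```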

Forecasting the occurrence and absence of novel disease outbreaks is essential for disease management. Here, we develop a general model, trained on no real-world data, that accurately forecasts outbreaks and non-outbreaks. We propose a novel framework that uses a feature-based time series classification method to forecast outbreaks and non-outbreaks. We tested our methods on synthetic data from a Susceptible-Infected-Recovered model for slowly changing, noisy disease dynamics. Outbreak sequences undergo a transcritical bifurcation within a specified future time window, whereas non-outbreak (null bifurcation) sequences do not. We identified incipient differences between time series of infectives leading to future outbreaks and those leading to non-outbreaks. These differences are reflected in 22 statistical features and 5 early warning signal indicators. Classifier performance, measured by the area under the receiver-operating characteristic curve, ranged from 0.99 for large expanding windows of training data to 0.7 for small rolling windows. The real-world performance of the classifiers was tested on two empirical data sets, COVID-19 data from Singapore and SARS data from Hong Kong, with two classifiers exhibiting high accuracy. In summary, we showed that there are statistical features that distinguish outbreak from non-outbreak sequences long before outbreaks occur, and that these differences can be detected in synthetic and real-world data sets well before potential outbreaks occur.
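The 22 features and 5 early warning signal indicators are not enumerated in the abstract; as an illustration of the general pipeline, the sketch below computes two classic early-warning indicators (variance and lag-1 autocorrelation) over a window of a synthetic series and feeds them to an off-the-shelf classifier. All data and parameters are toy choices.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ews_features(series, window=50):
    """Two classic early-warning indicators over the last window."""
    w = series[-window:]
    var = np.var(w)
    ac1 = np.corrcoef(w[:-1], w[1:])[0, 1]    # lag-1 autocorrelation
    return np.array([var, ac1])

rng = np.random.default_rng(0)
# Toy sequences: 'outbreak' series (label 1) drift up with growing noise.
X, y = [], []
for label in (0, 1):
    for _ in range(200):
        trend = np.linspace(0.0, label * 2.0, 300)
        noise = rng.normal(0.0, 1.0 + label * trend, 300)
        X.append(ews_features(trend + noise))
        y.append(label)
clf = RandomForestClassifier().fit(X, y)
```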

In this paper, we focus on testing multivariate normality using the BHEP test with data that are missing completely at random. Our objective is twofold: first, to gain insight into the asymptotic behavior of the BHEP test statistic under two widely used approaches for handling missing data, namely complete-case analysis and imputation; and second, to compare the power of the test statistic under these approaches. We observe that under the imputation approach the affine invariance of the test statistic is not preserved. To address this issue, we propose an appropriate bootstrap algorithm for approximating p-values. Extensive simulation studies demonstrate that both mean and median imputation exhibit greater power than testing with complete-case analysis, and we close with some open questions for further research.
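For concreteness, the complete-data BHEP statistic has a well-known closed form (data standardized by the sample mean and covariance); a sketch follows, with the smoothing parameter `beta` left at a conventional default. The complete-case and imputation variants studied in the paper would be applied on top of this.

```python
import numpy as np

def bhep_statistic(X, beta=1.0):
    """BHEP statistic in its standard closed form; rows of X are
    observations, standardized by sample mean and covariance."""
    n, d = X.shape
    S = np.cov(X, rowvar=False)
    L = np.linalg.cholesky(S)
    Y = (X - X.mean(axis=0)) @ np.linalg.inv(L).T   # scaled residuals
    D = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    t1 = np.exp(-beta**2 * D / 2).sum() / n
    t2 = (2 * (1 + beta**2) ** (-d / 2)
          * np.exp(-beta**2 * (Y**2).sum(axis=1)
                   / (2 * (1 + beta**2))).sum())
    t3 = n * (1 + 2 * beta**2) ** (-d / 2)
    return t1 - t2 + t3

rng = np.random.default_rng(0)
print(bhep_statistic(rng.standard_normal((200, 3))))  # small under H0
```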

In this paper, we analyze the preservation of asymptotic properties of partially dissipative hyperbolic systems when switching to a discrete setting. We prove that one of the simplest consistent and unconditionally stable numerical methods, the central finite-difference scheme, preserves both the asymptotic behaviour and the parabolic relaxation limit of one-dimensional partially dissipative hyperbolic systems satisfying the Kalman rank condition. The large-time asymptotic-preserving property is achieved by constructing time-weighted perturbed energy functionals in the spirit of hypocoercivity theory. For the relaxation-preserving property, drawing inspiration from the observation that solutions in the continuous setting exhibit distinct behaviours in low and high frequencies, we introduce a novel discrete Littlewood-Paley theory tailored to the central finite-difference scheme. This allows us to prove Bernstein-type estimates for discrete differential operators and leads to a new relaxation result: the strong convergence of the discrete linearized compressible Euler equations with damping towards the discrete heat equation, uniformly with respect to the mesh parameter.
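A minimal semi-discrete version of the setting can be written down directly: central finite differences in space for the one-dimensional linearized Euler system with damping, with the stiff relaxation term handled by an implicit ODE solver. This sketch (grid size, relaxation parameter `eps`, and initial data are arbitrary) covers only the spatial discretization; the paper's fully discrete analysis and discrete Littlewood-Paley theory are not reproduced here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Central differences in space for the 1-D linearized Euler system
# with damping:  rho_t + u_x = 0,  u_t + rho_x = -u/eps  (periodic).
N, eps = 200, 1e-2
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]

def rhs(t, q):
    rho, u = q[:N], q[N:]
    dxc = lambda v: (np.roll(v, -1) - np.roll(v, 1)) / (2 * dx)
    return np.concatenate([-dxc(u), -dxc(rho) - u / eps])

q0 = np.concatenate([np.exp(-100 * (x - 0.5) ** 2), np.zeros(N)])
sol = solve_ivp(rhs, (0.0, 1.0), q0, method="BDF")  # stiff source
# For small eps, u ~ -eps * rho_x, so rho obeys a slow discrete
# heat equation, mirroring the relaxation limit discussed above.
```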

Quantum algorithms offer significant speed-ups over their classical counterparts in various applications. In this paper, we develop quantum algorithms for the Kalman filter, widely used in classical control engineering, using the block encoding method. The entire calculation is carried out through matrix operations on Hamiltonians within the block encoding framework, including addition, multiplication, and inversion, which can be completed in a unified framework, in contrast to previous quantum algorithms for solving control problems. We demonstrate that the quantum algorithm exponentially accelerates the computation of the Kalman filter compared to traditional methods. The time complexity can be reduced from $O(n^3)$ to $O(\kappa\,\mathrm{poly}\log(n/\epsilon)\log(1/\epsilon'))$, where $n$ represents the matrix dimension, $\kappa$ denotes the condition number of the matrix to be inverted, $\epsilon$ indicates the desired precision in block encoding, and $\epsilon'$ the desired precision in matrix inversion. This paper provides a comprehensive quantum solution for implementing the Kalman filter and serves as an attempt to broaden the scope of quantum computation applications. Finally, we present an illustrative example implemented in Qiskit (a Python-based open-source toolkit) as a proof of concept.
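As a classical reference point, one predict/update cycle of the Kalman filter is shown below; the additions, multiplications, and the $O(n^3)$ matrix inversion in the gain computation are precisely the operations the quantum algorithm performs via block encodings. This is the textbook classical recursion, not the quantum circuit.

```python
import numpy as np

def kalman_step(x, P, A, Q, H, R, z):
    """One predict/update cycle of the classical Kalman filter."""
    # Predict: propagate state and covariance through the dynamics.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: the inversion of S is the O(n^3) bottleneck that the
    # block-encoded quantum routine targets.
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```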

In this paper, we consider using Schur complements to design preconditioners for twofold and block tridiagonal saddle point problems. One type of preconditioner is based on the nested (or recursive) Schur complement; the other is based on an additive-type Schur complement obtained after permuting the original saddle point systems. We analyze several preconditioners incorporating the exact Schur complements, and show that some of them lead to positively stable preconditioned systems if proper signs are selected in front of the Schur complements. These positively stable preconditioners outperform the others when the Schur complements are further approximated inexactly. Numerical experiments for a three-field formulation of the Biot model are provided to verify our predictions.
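For a single-fold saddle point system, the flavor of these preconditioners can be conveyed with the classical block-diagonal exact-Schur-complement preconditioner, a simplified stand-in for the twofold and block tridiagonal cases treated in the paper; with the exact Schur complement, preconditioned GMRES converges in a handful of iterations.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Saddle point system K = [[A, B^T], [B, 0]] with the exact Schur
# complement S = B A^{-1} B^T in a block-diagonal preconditioner.
n, m = 40, 10
rng = np.random.default_rng(1)
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
A = A @ A.T                                   # make A SPD
B = rng.standard_normal((m, n))
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
S = B @ np.linalg.solve(A, B.T)

def apply_prec(r):
    # diag(A, S)^{-1}; the sign placed on the S-block is the kind of
    # choice that governs positive stability in the paper.
    return np.concatenate([np.linalg.solve(A, r[:n]),
                           np.linalg.solve(S, r[n:])])

M = LinearOperator(K.shape, matvec=apply_prec)
x, info = gmres(K, rng.standard_normal(n + m), M=M)
print(info)   # 0 on convergence; very few iterations with exact S
```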

In this paper, we introduce the cumulative past information generating function (CPIG) and the relative cumulative past information generating function (RCPIG), and study their properties. We establish the relation of the CPIG to the generalized cumulative past entropy (GCPE). We define a CPIG stochastic order and relate it to the dispersive order. We express the CPIG measure of a convolution of random variables in terms of the measures of its components, and derive inequalities relating Shannon entropy, the CPIG, and the GCPE. Some characterization and estimation results for the CPIG are also discussed. Finally, we define divergence measures between two random variables: the Jensen cumulative past information generating function (JCPIG), the Jensen fractional cumulative past entropy measure, the cumulative past Taneja entropy, and the Jensen cumulative past Taneja entropy information measure.
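The abstract does not state the definition of the CPIG. Assuming, by analogy with the cumulative residual information generating function, that it takes the form $G_X(\alpha)=\int F^{\alpha}(x)\,dx$ (an assumption, not a quotation from the paper), the following numerical check for the uniform distribution illustrates how differentiating the generating function at $\alpha=1$ recovers a cumulative past entropy.

```python
from scipy.integrate import quad

# Assumed CPIG form (see lead-in): G(alpha) = integral of F(x)^alpha.
F = lambda x: x                                # CDF of Uniform(0, 1)
G = lambda a: quad(lambda x: F(x) ** a, 0, 1)[0]

# For U(0,1): G(alpha) = 1/(alpha + 1), hence -G'(1) = 1/4, which
# matches the cumulative past entropy -integral of F*log(F) = 1/4.
h = 1e-4
print(G(2.0))                                  # 1/3
print(-(G(1 + h) - G(1 - h)) / (2 * h))        # approx 0.25
```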

In this paper, we present a unified nonequilibrium model of continuum mechanics for compressible multiphase flows. The model, formulated within the framework of Symmetric Hyperbolic Thermodynamically Compatible (SHTC) equations, can describe an arbitrary number of phases, which may be heat-conducting inviscid or viscous fluids, as well as elastoplastic solids. The phases are allowed to have different velocities, pressures, temperatures, and shear stresses, while material interfaces are treated as diffuse interfaces with the volume fraction playing the role of the interface field. To relate our model to other multiphase approaches, we reformulate the SHTC governing equations in terms of the phase state parameters and cast them in the form of Baer-Nunziato-type models. It is this Baer-Nunziato form of the SHTC equations that is then solved numerically using a robust second-order path-conservative MUSCL-Hancock finite volume method on Cartesian meshes. Because the resulting governing equations are very challenging, we restrict our numerical examples to a simplified version of the model, focusing on the isentropic limit for three-phase mixtures. To address the stiffness of the relaxation source terms present in the model, the implemented scheme incorporates a semi-analytical time integration method specifically designed for the nonlinear stiff source terms governing strain relaxation. The validation process involves a wide range of benchmarks and several applications to compressible multiphase problems. Notably, results are presented for multiphase flows in all the relaxation limit cases of the model, including inviscid and viscous Newtonian fluids as well as nonlinear hyperelastic and elastoplastic solids.
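The path-conservative multiphase solver is well beyond a short sketch, but the underlying MUSCL-Hancock idea (limited slopes, a half-step predictor, then a flux evaluation) can be shown for scalar linear advection; the snippet below is this textbook scalar version, not the paper's scheme.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter."""
    return np.where(a * b > 0,
                    np.sign(a) * np.minimum(abs(a), abs(b)), 0.0)

def muscl_hancock_step(u, dt, dx, a=1.0):
    """One MUSCL-Hancock step for u_t + a u_x = 0 (periodic, a > 0):
    limited slopes, half-step evolution of the face value, upwind flux."""
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
    uL = u + 0.5 * s * (1 - a * dt / dx)    # evolved right-face value
    flux = a * uL                           # upwind flux for a > 0
    return u - dt / dx * (flux - np.roll(flux, 1))

N = 200
x = np.linspace(0.0, 1.0, N, endpoint=False)
u = np.exp(-200 * (x - 0.3) ** 2)
for _ in range(100):                        # CFL number 0.4
    u = muscl_hancock_step(u, dt=0.4 / N, dx=1.0 / N)
```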

In large-scale systems, centralised techniques for task allocation face fundamental challenges: the number of interactions is limited by resource constraints on computation, storage, and network communication. Scalability can be increased by implementing the system as a distributed task-allocation system, sharing tasks across many agents; however, this in turn raises the resource cost of communication and synchronisation, which is itself difficult to scale. In this paper we present four algorithms to address these problems. In combination, they enable each agent to improve its task-allocation strategy through reinforcement learning while adjusting how much it explores the system according to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource-usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks; we also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and it has been tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
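The four algorithms are not specified in the abstract; the sketch below shows one plausible ingredient, a toy Q-learning agent whose exploration rate is tied to how settled its value estimates are, echoing the idea of exploring more when the current strategy seems suboptimal. All class, parameter, and variable names here are our own, not the paper's.

```python
import random

class Agent:
    """Toy Q-learning allocator whose exploration rate shrinks as its
    value estimates stabilise (a proxy for strategy optimality)."""
    def __init__(self, n_peers, lr=0.1):
        self.q = [0.0] * n_peers      # estimated value of each peer
        self.lr = lr
        self.recent_error = 1.0       # running TD-error magnitude

    def choose(self):
        eps = min(1.0, self.recent_error)   # explore more when unsure
        if random.random() < eps:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)

    def update(self, peer, reward):
        err = reward - self.q[peer]
        self.q[peer] += self.lr * err
        self.recent_error = 0.9 * self.recent_error + 0.1 * abs(err)

agent = Agent(n_peers=5)
for _ in range(1000):
    p = agent.choose()
    agent.update(p, reward=random.gauss(p / 4, 0.1))  # peer 4 is best
```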
