
The discrete cosine transform (DCT) is a central tool for image and video coding because it can be related to the Karhunen-Lo\`eve transform (KLT), which is the optimal transform in terms of retained transform coefficients and data decorrelation. In this paper, we introduce 16-, 32-, and 64-point low-complexity DCT approximations obtained by individually minimizing the angle between the rows of the exact DCT matrix and those of the matrix induced by the approximate transform. According to several classical figures of merit, the proposed transforms outperformed the DCT approximations already known in the literature. Fast algorithms were also developed for the low-complexity transforms, ensuring a good balance between performance and computational cost. Practical applications in image encoding demonstrated the relevance of the transforms in this context. In fact, the experiments showed that the proposed transforms yielded better results than the known approximations in the literature for blocklengths of 16, 32, and 64.
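
To make the design criterion concrete, the following Python sketch computes the row-wise angles between the exact DCT-II matrix and a candidate low-complexity matrix; the rounded candidate T is hypothetical and merely illustrates the objective, not the paper's actual search procedure.

    import numpy as np

    def dct_matrix(n):
        # Exact orthonormal DCT-II matrix.
        k = np.arange(n)[:, None]
        i = np.arange(n)[None, :]
        c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
        c[0, :] = np.sqrt(1.0 / n)
        return c

    def row_angles(exact, approx):
        # Angle between corresponding rows; this is the quantity that is
        # minimized row by row when designing the approximation.
        dots = np.sum(exact * approx, axis=1)
        norms = np.linalg.norm(exact, axis=1) * np.linalg.norm(approx, axis=1)
        return np.arccos(np.clip(dots / norms, -1.0, 1.0))

    C = dct_matrix(16)
    T = np.round(2 * C)   # hypothetical integer candidate, for illustration only
    print(row_angles(C, T))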

Related content

We study a family of reachability problems under waiting-time restrictions in temporal and vertex-colored temporal graphs. Given a temporal graph and a set of source vertices, we find the set of vertices that are reachable from a source via a time-respecting path, where the difference in timestamps between consecutive edges is at most a resting time. Given a vertex-colored temporal graph and a multiset query of colors, we find the set of vertices reachable from a source via a time-respecting path such that the vertex colors of the path agree with the multiset query and the difference in timestamps between consecutive edges is at most a resting time. These kinds of problems have applications in understanding the spread of a disease in a network, tracing contacts in epidemic outbreaks, finding signaling pathways in the brain network, and recommending tours for tourists, among others. We present an algebraic algorithmic framework based on constrained multilinear sieving for solving the restless reachability problems we propose. In particular, parameterized by the length $k$ of the path sought, we show that the proposed problems can be solved in $O(2^k k m \Delta)$ time and $O(n \Delta)$ space, where $n$ is the number of vertices, $m$ the number of edges, and $\Delta$ the maximum resting time of an input temporal graph. In addition, we prove that our algorithms for the restless reachability problems in vertex-colored temporal graphs are optimal under plausible complexity-theoretic assumptions. Finally, with an open-source, carefully engineered, and highly optimized implementation, we demonstrate that our algorithm scales to large graphs with up to one billion temporal edges, despite the problems being NP-hard. Specifically, we present extensive experiments evaluating our scalability claims on both synthetic and real-world graphs.
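
For intuition, the sketch below computes restless reachability over walks (which may revisit vertices) with a simple dynamic program in time order; it is a polynomial-time relaxation of the path problem studied here, not the constrained multilinear sieving algorithm, and the non-decreasing-timestamp semantics is an assumption.

    from collections import defaultdict

    def restless_walk_reachability(edges, sources, delta):
        # edges: iterable of (u, v, t); sources may start at any time.
        # Keeps, per vertex, the latest arrival time of a restless walk;
        # edge (u, v, t) extends a walk if the wait at u is at most delta.
        INF = float("inf")
        arrival = defaultdict(lambda: -INF)
        for s in sources:
            arrival[s] = INF            # marker: departures allowed at any time
        for u, v, t in sorted(edges, key=lambda e: e[2]):
            a = arrival[u]
            if a == INF or 0 <= t - a <= delta:
                arrival[v] = max(arrival[v], t)
        return {v for v, a in arrival.items() if a > -INF}

    # Prints {1, 2, 3}: the edge (2, 3, 10) violates the resting time delta = 2.
    print(restless_walk_reachability([(1, 2, 1), (2, 3, 2), (2, 3, 10)], [1], 2))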

We study the identification of binary choice models with fixed effects. We provide a condition called sign saturation and show that it is sufficient for identification of the model. In particular, identification is guaranteed even with bounded regressors. We also show that, without this condition, the model is not identified unless the error distribution belongs to a small class. The same sign saturation condition is also essential for identifying the sign of treatment effects. Finally, we provide a test for the sign saturation condition that can be implemented using existing algorithms for the maximum score estimator.
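
As a toy illustration of the estimator referenced above (not the paper's test), the following sketch maximizes Manski's maximum score objective by grid search over normalized directions on simulated data with a bounded regressor; the data-generating process and all names are hypothetical.

    import numpy as np

    def max_score(beta, X, y):
        # Manski's maximum score objective: agreement between the sign of
        # the index x'beta and the observed binary outcome.
        return np.mean((2 * y - 1) * np.sign(X @ beta))

    rng = np.random.default_rng(0)
    n = 500
    X = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])  # bounded regressor
    y = (X @ np.array([0.2, 1.0]) + rng.logistic(size=n) >= 0).astype(float)

    # Scale is not identified, so search over directions on the unit circle.
    angles = np.linspace(0.01, np.pi - 0.01, 999)
    cands = np.column_stack([np.cos(angles), np.sin(angles)])
    best = cands[np.argmax([max_score(b, X, y) for b in cands])]
    print(best / best[1])   # normalize the slope coefficient to one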

Many partial differential equations (PDEs) such as Navier--Stokes equations in fluid mechanics, inelastic deformation in solids, and transient parabolic and hyperbolic equations do not have an exact, primal variational structure. Recently, a variational principle based on the dual (Lagrange multiplier) field was proposed. The essential idea in this approach is to treat the given PDE as a constraint, and to invoke an arbitrarily chosen auxiliary potential with strong convexity properties to be optimized. This leads to a convex dual functional that is minimized subject to Dirichlet boundary conditions on the dual variables, with the guarantee that even PDEs that do not possess a variational structure in primal form can be solved via a variational principle. The vanishing of the first variation of the dual functional is, up to Dirichlet boundary conditions on dual fields, the weak form of the primal PDE problem with the dual-to-primal change of variables incorporated. We derive the dual weak form for the linear, one-dimensional, transient convection-diffusion equation. A Galerkin discretization is used to obtain the discrete equations, with the trial and test functions chosen as linear combinations of either RePU activation functions (a shallow neural network) or B-spline basis functions; the corresponding stiffness matrix is symmetric. For transient problems, a space-time Galerkin implementation is used with tensor-product B-splines as approximating functions. Numerical results are presented for the steady-state and transient convection-diffusion equation, and transient heat conduction. The proposed method delivers sound accuracy for ODEs and PDEs, and convergence rates are established in the $L^2$ norm and $H^1$ seminorm for the steady-state convection-diffusion problem.
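
For orientation on the Galerkin step, here is a minimal sketch of a standard primal Galerkin discretization of the steady one-dimensional convection-diffusion equation with piecewise-linear basis functions; note that its stiffness matrix is nonsymmetric because of the convection term, which is precisely what the dual formulation above avoids. The coefficients and mesh are hypothetical.

    import numpy as np

    # Primal Galerkin for -kappa*u'' + a*u' = 1 on (0, 1), u(0) = u(1) = 0,
    # with hat functions on a uniform mesh of ne elements.
    kappa, a, ne = 0.1, 1.0, 64
    h = 1.0 / ne
    n = ne - 1                       # number of interior nodes
    K = np.zeros((n, n))
    f = np.full(n, h)                # load vector for f(x) = 1

    for i in range(n):
        K[i, i] = 2 * kappa / h
        if i + 1 < n:
            K[i, i + 1] = -kappa / h + a / 2   # convection breaks symmetry
            K[i + 1, i] = -kappa / h - a / 2

    u = np.linalg.solve(K, f)        # nodal values of the approximate solution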

The domain decomposition (DD) nonlinear-manifold reduced-order model (NM-ROM) represents a computationally efficient method for integrating underlying physics principles into a neural network-based, data-driven approach. Compared to linear subspace methods, NM-ROMs offer superior expressivity and enhanced reconstruction capabilities, while DD enables cost-effective, parallel training of autoencoders by partitioning the domain into algebraic subdomains. In this work, we investigate the scalability of this approach by implementing a "bottom-up" strategy: training NM-ROMs on smaller domains and subsequently deploying them on larger, composable ones. The application of this method to the two-dimensional time-dependent Burgers' equation shows that extrapolating from smaller to larger domains is both stable and effective. This approach achieves a relative error of 1% and provides a remarkable speedup of nearly 700 times.
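
A minimal sketch of one subdomain autoencoder is shown below; the architecture, sizes, and training loop are hypothetical placeholders, since the paper's NM-ROMs use autoencoders tailored to each algebraic subdomain.

    import torch
    import torch.nn as nn

    class SubdomainAE(nn.Module):
        # Encoder-decoder pair for one subdomain's state vector.
        def __init__(self, n_full, n_latent):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_full, 64), nn.ELU(),
                                     nn.Linear(64, n_latent))
            self.dec = nn.Sequential(nn.Linear(n_latent, 64), nn.ELU(),
                                     nn.Linear(64, n_full))
        def forward(self, x):
            return self.dec(self.enc(x))

    ae = SubdomainAE(n_full=256, n_latent=8)
    snapshots = torch.randn(1000, 256)        # placeholder solution snapshots
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for _ in range(100):                      # reconstruction training loop
        opt.zero_grad()
        loss = nn.functional.mse_loss(ae(snapshots), snapshots)
        loss.backward()
        opt.step()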

Data augmentation is a widely adopted technique used to improve the robustness of automatic speech recognition (ASR). Employing a fixed data augmentation strategy for all training data is common practice. However, factors such as background noise and speech rate can vary among the samples within a single training batch, so a fixed augmentation strategy risks driving the model into a suboptimal state. Moreover, the model's capabilities differ across training stages. To address these issues, this paper proposes sample-adaptive data augmentation with progressive scheduling (PS-SapAug). The proposed method applies dynamic data augmentation in a two-stage training approach. It employs hybrid normalization to compute sample-specific augmentation parameters based on each sample's loss, and the probability of augmentation gradually increases throughout training. Our method is evaluated on popular ASR benchmark datasets, including AISHELL-1 and LibriSpeech-100h, achieving up to 8.13% WER reduction on the LibriSpeech-100h test-clean set, 6.23% on test-other, and 5.26% on the AISHELL-1 test set, demonstrating the efficacy of our approach in enhancing performance and reducing errors.
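
The following sketch conveys the two scheduling ideas; the min-max scaling stands in for the paper's hybrid normalization, whose exact formula is not given here, and p_max is a hypothetical cap.

    import numpy as np

    def augmentation_params(losses, step, total_steps, p_max=0.8):
        # Per-sample augmentation strength from batch-normalized losses;
        # min-max scaling is an assumed stand-in for hybrid normalization.
        strength = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
        # Progressive scheduling: augmentation probability grows with training.
        p_aug = p_max * step / total_steps
        apply = np.random.rand(losses.size) < p_aug
        return strength * apply

    batch_losses = np.array([0.4, 1.2, 0.7, 2.0])
    print(augmentation_params(batch_losses, step=5000, total_steps=10000))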

The single-exponential (SE) and double-exponential (DE) formulas are known as efficient quadrature formulas for integrals with endpoint singularities. In particular, for integrals with algebraic singularity, explicit error bounds in a computable form have been given, which are useful for computation with guaranteed accuracy. Such explicit error bounds have also been given for integrals with logarithmic singularity. However, these bounds have two shortcomings. The first is an overestimation of the divergence speed of the logarithmic singularity. The second concerns the case where both logarithmic and algebraic singularities are present. To remedy these shortcomings, this study provides new error bounds for integrals with logarithmic and algebraic singularity. Although the existing and new error bounds described above handle integrals over a finite interval, the SE and DE formulas may also be applied to integrals over the semi-infinite interval. On the basis of the new results, this study also provides new error bounds for integrals over the semi-infinite interval with logarithmic and algebraic singularity at the origin.
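
As a concrete instance, the DE (tanh-sinh) rule below handles an integral whose integrand has both logarithmic and algebraic singularity at the origin, the very case the new bounds cover; the step size and truncation level are chosen ad hoc.

    import numpy as np

    # DE (tanh-sinh) rule for int_0^1 log(x)/sqrt(x) dx = -4, an integrand
    # with both logarithmic and algebraic singularity at the origin.
    def de_quad(f, h=0.1, level=40):
        t = h * np.arange(-level, level + 1)
        s = (np.pi / 2) * np.sinh(t)
        x = 1.0 / (1.0 + np.exp(-2 * s))          # stable map of R onto (0, 1)
        w = h * (np.pi / 4) * np.cosh(t) / np.cosh(s) ** 2
        return np.sum(w * f(x))

    print(de_quad(lambda x: np.log(x) / np.sqrt(x)))   # close to -4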

Several hypothesis testing methods have been proposed to validate the assumption of isotropy in spatial point patterns. A majority of these methods are characterised by an unknown distribution of the test statistic under the null hypothesis of isotropy. Parametric approaches to approximating the distribution involve simulation of patterns from a user-specified isotropic model. Alternatively, nonparametric replicates of the test statistic under isotropy can be used to remove the need to specify a model. In this paper, we first develop a general framework which allows for the integration of a selected nonparametric replication method into isotropy testing. We then conduct a large simulation study comprising application-like scenarios to assess the performance of tests with different parametric and nonparametric replication methods. In particular, we explore distortions in test size and power caused by model misspecification, and demonstrate the advantages of nonparametric replication in such scenarios.
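
The framework can be summarized by the Monte Carlo skeleton below, where replicate is any user-chosen parametric simulator or nonparametric resampling scheme producing patterns consistent with the isotropy null; the function names are hypothetical.

    import numpy as np

    def mc_isotropy_test(pattern, statistic, replicate, n_rep=999):
        # Generic Monte Carlo test: larger statistics indicate anisotropy.
        t_obs = statistic(pattern)
        t_rep = np.array([statistic(replicate(pattern)) for _ in range(n_rep)])
        # One-sided Monte Carlo p-value with the usual +1 correction.
        return (1 + np.sum(t_rep >= t_obs)) / (n_rep + 1)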

Block majorization-minimization (BMM) is a simple iterative algorithm for constrained nonconvex optimization that sequentially minimizes majorizing surrogates of the objective function in each block while the others are held fixed. BMM entails a large class of optimization algorithms such as block coordinate descent and its proximal-point variant, expectation-maximization, and block projected gradient descent. We first establish that for general constrained nonsmooth nonconvex optimization, BMM with $\rho$-strongly convex and $L_g$-smooth surrogates can produce an $\epsilon$-approximate first-order optimal point within $\widetilde{O}((1+L_g+\rho^{-1})\epsilon^{-2})$ iterations and asymptotically converges to the set of first-order optimal points. Next, we show that BMM combined with trust-region methods with diminishing radius has an improved complexity of $\widetilde{O}((1+L_g) \epsilon^{-2})$, independent of the inverse strong convexity parameter $\rho^{-1}$, allowing improved theoretical and practical performance with `flat' surrogates. Our results hold robustly even when the convex sub-problems are solved inexactly, as long as the optimality gaps are summable. Central to our analysis is a novel continuous first-order optimality measure, by which we bound the worst-case sub-optimality in each iteration by the first-order improvement the algorithm makes. We apply our general framework to obtain new results on various algorithms such as the celebrated multiplicative update algorithm for nonnegative matrix factorization by Lee and Seung, regularized nonnegative tensor decomposition, and the classical block projected gradient descent algorithm. Lastly, we numerically demonstrate that the additional use of diminishing radius can improve the convergence rate of BMM in many instances.
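
For concreteness, here are the Lee-Seung multiplicative updates for NMF, a classical instance of BMM in which each update minimizes a majorizing surrogate of $\|X - WH\|_F^2$ in one block while the other is held fixed; the small epsilon guarding against division by zero is an implementation convention.

    import numpy as np

    def nmf_multiplicative(X, r, n_iter=200, eps=1e-12):
        rng = np.random.default_rng(0)
        W = rng.random((X.shape[0], r))
        H = rng.random((r, X.shape[1]))
        for _ in range(n_iter):
            # Each line is one block update derived from a majorizing surrogate.
            H *= (W.T @ X) / (W.T @ W @ H + eps)
            W *= (X @ H.T) / (W @ H @ H.T + eps)
        return W, H

    X = np.random.default_rng(1).random((50, 40))
    W, H = nmf_multiplicative(X, r=5)
    print(np.linalg.norm(X - W @ H))   # final reconstruction error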

We present a novel formulation for the dynamics of geometrically exact Timoshenko beams and beam structures made of viscoelastic material featuring complex, arbitrarily curved initial geometries. An $\textrm{SO}(3)$-consistent and second-order accurate time integration scheme for accelerations, velocities, and rate-dependent viscoelastic strain measures is adopted. To achieve high efficiency and geometrical flexibility, the spatial discretization is carried out with the isogeometric collocation (IGA-C) method, which permits bypassing element integration while keeping all the advantages of isogeometric analysis (IGA) in terms of high-order spatial accuracy and geometry representation. Moreover, a primal formulation guarantees a minimal set of kinematic unknowns. The generalized Maxwell model is applied directly to the one-dimensional beam strain and stress measures. This makes it possible to express the internal variables in terms of the same kinematic unknowns as in the case of linear elastic, rate-independent materials, bypassing the complexities introduced by the viscoelastic material. As a result, existing $\textrm{SO}(3)$-consistent linearizations of the governing equations in strong form (and the associated update formulas) can be used straightforwardly. Through a series of numerical tests, the attributes and potential of the proposed formulation are demonstrated. In particular, we show the capability to accurately simulate beams and beam systems featuring complex initial geometry and topology, opening interesting perspectives for the inverse design of programmable mechanical metamaterials and objects.
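
A schematic scalar version of the generalized Maxwell update is sketched below with a backward-Euler evolution of the internal (viscous) strains; the material constants are hypothetical, and the paper applies the model to the beam's one-dimensional strain and stress measures within its $\textrm{SO}(3)$-consistent scheme.

    import numpy as np

    def maxwell_stress(eps, eps_v, E_inf, E, tau, dt):
        # Backward-Euler update of the viscous strain in each Maxwell branch.
        eps_v = (eps_v + (dt / tau) * eps) / (1.0 + dt / tau)
        sigma = E_inf * eps + np.sum(E * (eps - eps_v))
        return sigma, eps_v

    E_inf, E, tau = 1.0, np.array([0.5, 0.3]), np.array([0.1, 1.0])
    eps_v = np.zeros(2)
    for _ in range(100):               # stress relaxation at constant strain
        sigma, eps_v = maxwell_stress(1.0, eps_v, E_inf, E, tau, dt=0.01)
    print(sigma)                       # relaxes toward E_inf * eps over time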

Let $N$ components be partitioned into two communities, denoted ${\cal P}_+$ and ${\cal P}_-$, possibly of different sizes. Assume that they are connected via a directed and weighted Erd\H{o}s-R\'enyi random graph (DWER) with unknown parameter $p \in (0, 1)$. The weights assigned to the existing connections are of mean-field type, scaling as $N^{-1}$. At each time unit, we observe the state of each component: either it sends some signal to its successors (in the directed graph) or it remains silent. In this paper, we show that it is possible to recover the communities ${\cal P}_+$ and ${\cal P}_-$ based only on the activity of the $N$ components observed over $T$ time units. More specifically, we propose a simple algorithm for which the probability of {\it exact recovery} converges to $1$ as long as $(N/T^{1/2})\log(NT) \to 0$ as $T$ and $N$ diverge. Interestingly, this simple algorithm does not require any prior knowledge of the other model parameters (e.g. the edge probability $p$). The key step in our analysis is to derive an asymptotic approximation of the one-unit time-lagged covariance matrix associated with the states of the $N$ components as $N$ diverges. This asymptotic approximation relies on the study of the behavior of the solutions of a matrix equation of Stein type satisfied by the simultaneous (0-lagged) covariance matrix associated with the states of the components. This study is challenging, especially because the simultaneous covariance matrix is itself random, as it depends on the underlying DWER random graph.
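
The recovery idea can be sketched as follows: estimate the one-unit time-lagged covariance from the observed activity matrix and split the components by the sign of its leading singular vector. This is a hypothetical distillation of the approach, not the paper's exact procedure or guarantee.

    import numpy as np

    def recover_communities(activity):
        # activity: (T, N) matrix of observed binary states.
        X = activity - activity.mean(axis=0)
        T = X.shape[0]
        C_lag = X[:-1].T @ X[1:] / (T - 1)    # one-unit time-lagged covariance
        U, _, _ = np.linalg.svd(C_lag)
        return U[:, 0] >= 0                   # community labels from sign split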
