
Data compression algorithms typically rely on identifying repeated sequences of symbols in the original data to provide a compact representation of the same information, while maintaining the ability to recover the original data from the compressed sequence. Applying data transformations prior to compression can enhance the achievable compression, and the overall scheme remains lossless as long as the transformation is invertible. Floating-point data presents unique challenges for constructing invertible transformations with high compression potential. This paper identifies key conditions on basic operations over floating-point data that guarantee lossless transformations. We then present four methods that exploit these observations to deliver lossless compression of real datasets, improving compression rates by up to 40%.
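
The abstract does not spell out the transformations themselves, so as a minimal illustration of the invertibility requirement, the sketch below applies an XOR-delta to the raw bit patterns of consecutive float64 values (a standard trick in floating-point compressors such as FPC and Gorilla, not necessarily one of the paper's four methods). Because it operates on bits rather than values, no rounding ever occurs and the transform is exactly invertible:

```python
import numpy as np

def xor_delta_encode(values: np.ndarray) -> np.ndarray:
    """XOR each float64's bit pattern with its predecessor's.

    Working on raw bits (not float values) means no rounding occurs,
    so the transform is exactly invertible. Similar neighboring values
    yield residuals with long runs of zero bits, which a downstream
    entropy coder can exploit.
    """
    bits = values.view(np.uint64)
    out = bits.copy()
    out[1:] ^= bits[:-1]
    return out

def xor_delta_decode(residuals: np.ndarray) -> np.ndarray:
    bits = residuals.copy()
    for i in range(1, len(bits)):
        bits[i] ^= bits[i - 1]       # XOR is its own inverse
    return bits.view(np.float64)

data = np.cumsum(np.random.default_rng(0).normal(size=1000))
assert np.array_equal(xor_delta_decode(xor_delta_encode(data)), data)
```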


We consider multi-variate signals spanned by the integer shifts of a set of generating functions with distinct frequency profiles and the problem of reconstructing them from samples taken on a random periodic set. We show that such a sampling strategy succeeds with high probability provided that the density of the sampling pattern exceeds the number of frequency profiles by a logarithmic factor. The signal model includes bandlimited functions with multi-band spectra. While in this well-studied setting delicate constructions provide sampling strategies that meet the information-theoretic benchmark of Shannon and Landau, the sampling pattern that we consider provides, at the price of a logarithmic oversampling factor, a simple alternative that is accompanied by favorable a priori stability margins (snug frames). More generally, we also treat bandlimited functions with arbitrary compact spectra, together with different measures of their complexity and their approximation rates by integer tiles. At the technical level, we elaborate on recent work on relevant sampling, with the key difference that the reconstruction guarantees that we provide hold uniformly for all signals, rather than for a subset of well-concentrated ones. This is achieved by methods of concentration of measure formulated on the Zak domain.
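
As a finite-dimensional toy analog (not the paper's Zak-domain analysis), the sketch below reconstructs a signal with a known multi-band DFT support from samples taken on a random periodic set whose size exceeds the spectral support by a logarithmic-style factor; the sizes, support, and least-squares reconstruction are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 240, 24                            # signal length, sampling period
support = np.r_[0:5, 100:105]             # known multi-band DFT support
t = np.arange(N)
F = np.exp(2j * np.pi * np.outer(t, support) / N) / np.sqrt(N)

# Random periodic sampling set: a handful of residues mod T, repeated
# with period T (60 samples for 10 active modes: logarithmic-style
# oversampling rather than the critical Shannon-Landau density).
residues = rng.choice(T, size=6, replace=False)
sample_at = np.sort(np.concatenate([residues + k * T for k in range(N // T)]))

coef = rng.normal(size=len(support)) + 1j * rng.normal(size=len(support))
x = F @ coef                              # a multi-band signal
y = x[sample_at]                          # samples on the random periodic set

coef_hat, *_ = np.linalg.lstsq(F[sample_at], y, rcond=None)
print(np.linalg.norm(F @ coef_hat - x) / np.linalg.norm(x))  # tiny: exact up to roundoff
```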

We show how to reduce the computational time of the practical implementation of the Raviart-Thomas mixed method for second-order elliptic problems. The implementation takes advantage of a recent result which states that certain local subspaces of the vector unknown can be eliminated from the equations by transforming them into stabilization functions; see the paper published online in JJIAM on August 10, 2023. We describe the new implementation in detail (in MATLAB, on a laptop with an Intel(R) Core(TM) i7-8700 processor with six cores and hyperthreading) and present numerical results showing a 10 to 20% reduction in the computational time for the Raviart-Thomas method of index $k$, with $k$ ranging from 1 to 20, applied to a model problem.
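
The elimination of certain local subspaces is in the spirit of static condensation. The sketch below (a generic illustration under that assumption, not the paper's construction) shows how eliminating a local block via a Schur complement shrinks the system that is actually solved:

```python
import numpy as np

rng = np.random.default_rng(2)
n_loc, n_glob = 80, 20

# Block system [[A, B], [B.T, C]] [u_loc; u_glob] = [f; g], where u_loc
# collects element-local unknowns and u_glob the globally coupled ones.
A = np.diag(rng.uniform(1.0, 2.0, n_loc))     # local block: easy to invert
B = rng.normal(size=(n_loc, n_glob))
C = rng.normal(size=(n_glob, n_glob))
C = C @ C.T + 200.0 * np.eye(n_glob)          # keep everything nonsingular
f, g = rng.normal(size=n_loc), rng.normal(size=n_glob)

# Eliminate the local block and solve only the smaller Schur system.
d = np.diag(A)
S = C - B.T @ (B / d[:, None])
u_glob = np.linalg.solve(S, g - B.T @ (f / d))
u_loc = (f - B @ u_glob) / d

# Same answer as assembling and solving the full system directly.
full = np.block([[A, B], [B.T, C]])
ref = np.linalg.solve(full, np.concatenate([f, g]))
assert np.allclose(np.concatenate([u_loc, u_glob]), ref)
```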

In the area of query complexity of Boolean functions, the most widely studied cost measure of an algorithm is the worst-case number of queries it makes on an input. Motivated by the most natural cost measure studied in online algorithms, the competitive ratio, we consider a different cost measure for query algorithms for Boolean functions: the ratio of the cost of the algorithm on an input to the cost of an optimal algorithm that knows the input in advance. The cost of an algorithm is the largest value of this ratio over all inputs. Grossman, Komargodski and Naor [ITCS'20] introduced this measure for Boolean functions and dubbed it instance complexity. Grossman et al. showed, among other results, that monotone Boolean functions with instance complexity 1 are precisely those that depend on one or two variables. We complement this result by completely characterizing the instance complexity of symmetric Boolean functions. As a corollary we conclude that the only symmetric Boolean functions with instance complexity 1 are the Parity function and its complement. We also study the instance complexity of some graph properties, such as Connectivity and k-clique containment. For all the Boolean functions we study, as well as those studied by Grossman et al., the instance complexity turns out to be the ratio of query complexity to minimum certificate complexity. It is natural to ask whether this is the correct bound for all Boolean functions. We give a negative answer in a very strong sense by analyzing the Greater-Than and Odd-Max-Bit functions: for both, the above-mentioned ratio is linear in the input size, yet we exhibit algorithms whose instance complexity is a constant.
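
A brute-force sketch for tiny input sizes makes the ratio in question concrete; the function (Parity), the input size, and the exhaustive recursions below are illustrative choices, not the paper's algorithms:

```python
from itertools import combinations, product

N = 4
INPUTS = list(product((0, 1), repeat=N))

def parity(x):                       # a symmetric Boolean function
    return sum(x) % 2

def D(known=()):
    """Worst-case deterministic query complexity via minimax recursion:
    the algorithm picks the best next variable, the adversary the
    worst answer, until the function value is determined."""
    live = [x for x in INPUTS if all(x[i] == b for i, b in known)]
    if len({parity(x) for x in live}) == 1:
        return 0
    asked = {i for i, _ in known}
    return min(1 + max(D(known + ((i, b),)) for b in (0, 1))
               for i in range(N) if i not in asked)

def C(x):
    """Certificate complexity of x: fewest positions whose values
    already force the output."""
    for size in range(N + 1):
        for S in combinations(range(N), size):
            vals = {parity(y) for y in INPUTS
                    if all(y[i] == x[i] for i in S)}
            if len(vals) == 1:
                return size

# For Parity, D = n and every certificate needs all n bits, so the
# ratio D / min-certificate equals 1 -- consistent with the abstract's
# characterization of the symmetric functions with instance complexity 1.
print(D(), min(C(x) for x in INPUTS))        # 4 4
```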

A non-intrusive proper generalized decomposition (PGD) strategy, coupled with an overlapping domain decomposition (DD) method, is proposed to efficiently construct surrogate models of parametric linear elliptic problems. A parametric multi-domain formulation is presented, with local subproblems featuring arbitrary Dirichlet interface conditions represented through the traces of the finite element functions used for spatial discretization at the subdomain level, with no need for additional auxiliary basis functions. The linearity of the operator is exploited to devise low-dimensional problems with only a few active boundary parameters. An overlapping Schwarz method is used to glue the local surrogate models, solving a linear system for the nodal values of the parametric solution at the interfaces, without introducing Lagrange multipliers to enforce the continuity in the overlapping region. The proposed DD-PGD methodology relies on a fully algebraic formulation allowing for real-time computation based on the efficient interpolation of the local surrogate models in the parametric space, with no additional problems to be solved during the execution of the Schwarz algorithm. Numerical results for parametric diffusion and convection-diffusion problems are presented to showcase the accuracy of the DD-PGD approach, its robustness in different regimes and its superior performance with respect to standard high-fidelity DD methods.
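
A minimal sketch of the gluing step alone, for a 1D Poisson problem with two overlapping subdomains (the PGD surrogate construction is omitted, and all discretization choices below are illustrative):

```python
import numpy as np

# Alternating overlapping Schwarz for -u'' = 1 on (0,1), u(0) = u(1) = 0,
# with subdomain 1 = [0, 0.6] and subdomain 2 = [0.4, 1]. Each local
# solve takes its Dirichlet interface value from the other subdomain's
# current iterate: that exchange is the gluing step.
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
a, b = 40, 60                              # overlap: nodes 40..60
u = np.zeros(n)

def local_solve(lo, hi, left, right):
    """Solve -u'' = 1 on the interior nodes of lo..hi with Dirichlet data."""
    m = hi - lo - 1
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    rhs = np.ones(m)
    rhs[0] += left / h**2
    rhs[-1] += right / h**2
    return np.linalg.solve(A, rhs)

for _ in range(50):
    u[1:b] = local_solve(0, b, 0.0, u[b])          # trace u[b] from subdomain 2
    u[a + 1:n - 1] = local_solve(a, n - 1, u[a], 0.0)

print(np.abs(u - 0.5 * x * (1.0 - x)).max())       # geometric convergence
```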

The problem of sequential anomaly detection and identification is considered in the presence of a sampling constraint. Specifically, multiple data streams are generated by distinct sources and the goal is to quickly identify those that exhibit ``anomalous'' behavior, when it is not possible to sample every source at each time instant. Thus, in addition to a stopping rule, which determines when to stop sampling, and a decision rule, which indicates which sources to identify as anomalous upon stopping, one needs to specify a sampling rule that determines which sources to sample at each time instant. The focus of this work is on ordering sampling rules, which sample those sources, among the ones currently estimated as anomalous (resp. non-anomalous), whose local test statistics have the smallest (resp. largest) values. It is shown that with an appropriate design, which is specified explicitly, an ordering sampling rule achieves the optimal expected stopping time, among all policies that satisfy the same sampling and error constraints, to a first-order asymptotic approximation as the false positive and false negative error rates under control both go to zero. This is the first asymptotic optimality result for ordering sampling rules when multiple sources can be sampled per time instant. Moreover, it is established in a general setup where the number of anomalies is not required to be known a priori. A novel proof technique is introduced, which unifies different versions of the problem regarding the homogeneity of the sources and prior information on the number of anomalies.
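
A toy Gaussian simulation of the ordering idea is sketched below; the likelihood model, threshold, and sampling budget are illustrative assumptions rather than the paper's general setup:

```python
import numpy as np

rng = np.random.default_rng(3)
M, K = 10, 3                  # sources; sampling budget per time instant
truth = np.zeros(M, dtype=bool)
truth[:4] = True              # anomalous sources (unknown to the policy)
mu, thresh = 1.0, 8.0         # anomalous mean shift; decision threshold

S = np.zeros(M)               # local cumulative log-likelihood ratios
t = 0
while (np.abs(S) < thresh).any():
    t += 1
    undecided = np.flatnonzero(np.abs(S) < thresh)
    # Ordering rule: among sources currently estimated anomalous (S > 0)
    # take the smallest statistics, among the rest the largest -- here
    # that means probing the K least-settled undecided sources.
    probe = undecided[np.argsort(np.abs(S[undecided]))[:K]]
    obs = rng.normal(loc=mu * truth[probe], scale=1.0)
    S[probe] += mu * obs - mu**2 / 2      # Gaussian N(mu,1)-vs-N(0,1) LLR

print("stopping time:", t, " misclassified:", int(((S >= thresh) != truth).sum()))
```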

Miura surfaces are the solutions of a constrained nonlinear elliptic system of equations. This system is derived by homogenization from the Miura fold, which is a type of origami fold with multiple applications in engineering. A previous inquiry gave suboptimal conditions for the existence of solutions and proposed an $H^2$-conformal finite element method to approximate them. In this paper, the existence of Miura surfaces is studied using a mixed formulation. It is also proved that the constraints propagate from the boundary to the interior of the domain for well-chosen boundary conditions. Then, a numerical method based on a least-squares formulation, Taylor--Hood finite elements and a Newton method is introduced to approximate Miura surfaces. The numerical method is proved to converge at order one in space, and numerical tests are performed to demonstrate its robustness.
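
As a schematic of the Newton step alone, applied to a finite-difference discretization of a scalar nonlinear model problem (an illustrative stand-in, not the paper's least-squares Taylor--Hood formulation of the Miura system):

```python
import numpy as np

# Newton iteration for a discretized nonlinear model problem:
# -u'' + u**3 = 1 on (0,1), u(0) = u(1) = 0.
n = 99
h = 1.0 / (n + 1)
L = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def F(u):
    return L @ u + u**3 - 1.0

def J(u):                                  # Jacobian of F
    return L + np.diag(3.0 * u**2)

u = np.zeros(n)
for it in range(20):
    step = np.linalg.solve(J(u), -F(u))
    u += step
    if np.linalg.norm(step) < 1e-12:       # quadratic convergence near the root
        break
print(f"{it + 1} Newton steps, residual {np.linalg.norm(F(u)):.2e}")
```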

Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: While taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential of transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.
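
One pitfall from this family can be made concrete in a few lines: reporting accuracy for a segmentation target with severe class imbalance. The synthetic numbers below are purely illustrative and are not drawn from the paper:

```python
import numpy as np

truth = np.zeros((128, 128), dtype=bool)
truth[60:68, 60:68] = True                # small lesion: ~0.4% of all pixels

pred = np.zeros_like(truth)               # a model that predicts "no lesion"

accuracy = (pred == truth).mean()
dice = 2 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())

print(f"accuracy = {accuracy:.3f}")       # ~0.996: looks excellent
print(f"Dice     = {dice:.3f}")           # 0.000: every lesion pixel missed
```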

Our research proposes a novel method for reducing the dimensionality of functional data, specifically for the case where the response is a scalar and the predictor is a random function. Our method utilizes distance covariance and has several advantages over existing methods. Unlike current techniques, which require restrictive assumptions such as a linear conditional mean and constant covariance, our method imposes only mild requirements on the predictor. Additionally, our method does not involve the unbounded inverse of the covariance operator. The link function between the response and the predictor can be arbitrary, and our method remains model-free, with no need to estimate the link function. Furthermore, our method is naturally suited to sparse longitudinal data. We utilize functional principal component analysis with truncation as a regularization mechanism in the development of our method. We provide justification for the validity of our proposed method and establish statistical consistency of the estimator under certain regularization conditions. To demonstrate the effectiveness of our proposed method, we conduct simulation studies and real data analysis. The results show improved performance compared to existing methods.
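
The distance-covariance ingredient is easy to sketch; the sample version below (the Szekely-Rizzo V-statistic) and the stand-in FPCA scores are illustrative, not the paper's estimator:

```python
import numpy as np

def dcov2(X, y):
    """Squared sample distance covariance between X (n x p) and y (n,):
    double-center both pairwise-distance matrices and average their
    entrywise product."""
    def center(D):
        return D - D.mean(0, keepdims=True) - D.mean(1, keepdims=True) + D.mean()
    A = center(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
    B = center(np.abs(y[:, None] - y[None, :]))
    return (A * B).mean()

rng = np.random.default_rng(5)
n = 300
scores = rng.normal(size=(n, 3))                      # stand-in for FPCA scores
y = np.sin(scores[:, 0]) + 0.1 * rng.normal(size=n)   # arbitrary nonlinear link

print(dcov2(scores[:, :1], y))    # clearly positive: dependence detected
print(dcov2(scores[:, 1:2], y))   # near zero: no dependence on this score
```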

We investigate the combinatorics of max-pooling layers, which are functions that downsample input arrays by taking the maximum over shifted windows of input coordinates, and which are commonly used in convolutional neural networks. We obtain results on the number of linearity regions of these functions by equivalently counting the number of vertices of certain Minkowski sums of simplices. We characterize the faces of such polytopes and obtain generating functions and closed formulas for the number of vertices and facets in a 1D max-pooling layer depending on the size of the pooling windows and stride, and for the number of vertices in a special case of 2D max-pooling.
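
The linearity regions can be probed numerically for small sizes: each region of a 1D max-pooling map is labeled by which coordinate attains the maximum in every window, and random inputs lower-bound the region count (the sizes below are illustrative):

```python
import numpy as np

def pattern(x, w, s):
    """Label of the linearity region containing x: which coordinate
    attains the maximum in each pooling window."""
    return tuple(int(np.argmax(x[i:i + w]))
                 for i in range(0, len(x) - w + 1, s))

n, w, s = 5, 3, 1                 # three overlapping windows of size 3
rng = np.random.default_rng(6)
seen = {pattern(rng.standard_normal(n), w, s) for _ in range(100_000)}
windows = (n - w) // s + 1
print(len(seen), "regions found, of", w**windows, "conceivable patterns")
# Strictly fewer than w**windows: overlapping windows make some argmax
# patterns infeasible, and the feasible ones are exactly the vertices
# of the Minkowski sum of the window simplices.
```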

We propose an approach to compute inner- and outer-approximations of the sets of values satisfying constraints expressed as arbitrarily quantified formulas. Such formulas arise, for instance, when specifying important problems in control such as robustness, motion planning, or controller comparison. We propose an interval-based method which allows for tractable yet tight approximations. We demonstrate its applicability through a series of examples and benchmarks using a prototype implementation.
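
A toy instance of such an interval branch-and-bound is sketched below for a single universally quantified constraint; the constraint, bounds, and tolerance are illustrative assumptions, and the paper's method handles arbitrary quantifier alternations:

```python
# Approximate the set {p in [-2, 2] : forall x in [0, 1], p*x**2 + x - 1 <= 0}.
# An interval bound over the whole x-range certifies a box (inner
# approximation); a violation certified at a sampled x discards it;
# undecided boxes are bisected down to a tolerance.

def sup_over_x(pl, pu):
    """Interval upper bound of p*x**2 + x - 1 over p in [pl,pu], x in [0,1]:
    x**2 in [0,1], so [pl,pu]*[0,1] = [min(pl,0), max(pu,0)]; then sup x = 1."""
    return max(pu, 0.0) + 1.0 - 1.0

def inf_at_x(pl, pu, x):
    """Interval lower bound of p*x**2 + x - 1 over p in [pl,pu] at fixed x."""
    return min(pl * x * x, pu * x * x) + x - 1.0

inner, undecided, todo = [], [], [(-2.0, 2.0)]
while todo:
    pl, pu = todo.pop()
    if sup_over_x(pl, pu) <= 0.0:
        inner.append((pl, pu))                    # certified feasible
    elif any(inf_at_x(pl, pu, x) > 0.0 for x in (0.0, 0.5, 1.0)):
        continue                                  # certified infeasible
    elif pu - pl > 1e-3:
        mid = 0.5 * (pl + pu)
        todo += [(pl, mid), (mid, pu)]
    else:
        undecided.append((pl, pu))                # too thin to decide

inner_len = sum(u - l for l, u in inner)
outer_len = inner_len + sum(u - l for l, u in undecided)
print(f"inner = {inner_len:.4f}, outer = {outer_len:.4f}")
# The two lengths tightly bracket the exact feasible set [-2, 0].
```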
