
Quasi-periodic responses composed of multiple base frequencies arise widely in science and engineering problems. The multiple harmonic balance (MHB) method is one of the most commonly used approaches for such problems. However, it is limited to low-order estimations because of the complex symbolic operations required in practical use. Many variants have been developed to improve the MHB method, among which the time domain MHB-like methods are regarded as crucial improvements because of their high efficiency and simple derivation. One main drawback nevertheless remains to be addressed: time domain MHB-like methods suffer from non-physical solutions, which have been shown to be caused by aliasing (high-order harmonics folding into low-order ones). Inspired by the collocation-based harmonic balancing framework recently established by our group, we herein propose a reconstruction multiple harmonic balance (RMHB) method that reconstructs the conventional MHB method from discrete time domain collocations. Our study shows that the relation between the MHB and time domain MHB-like methods is determined by an aliasing matrix, which is non-zero when aliasing occurs. On this basis, a conditional equivalence is established to form the RMHB method. Three numerical examples demonstrate that the new method is more robust and efficient than state-of-the-art methods.
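
To make the aliasing mechanism concrete, here is a minimal NumPy sketch (our own illustration, not the RMHB algorithm itself) showing how a high-order harmonic sampled at too few equispaced collocation points becomes indistinguishable from a low-order one:

```python
import numpy as np

# Aliasing demo: on N equispaced collocation points, the harmonic of
# order N - 1 takes exactly the same values as the harmonic of order 1,
# so its energy folds ("aliases") into the low-order coefficients.
N = 8
t = 2 * np.pi * np.arange(N) / N          # equispaced collocation points

high = np.cos((N - 1) * t)                # order-7 harmonic
low = np.cos(t)                           # order-1 harmonic
print(np.allclose(high, low))             # True: identical at the nodes

# The DFT of the order-7 harmonic consequently places all its energy
# at the first harmonic, which is exactly the mixture described above.
coeffs = np.fft.rfft(high) / N
print(np.round(np.abs(coeffs), 6))        # peak at index 1
```

This is the effect the aliasing matrix quantifies: it vanishes exactly when the collocation grid is fine enough for the retained harmonics, which is the condition under which the equivalence with the conventional MHB method holds.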

Related Content

It has been observed by several authors that well-known periodization strategies like tent or Chebychev transforms lead to remarkable results for the recovery of multivariate functions from few samples. So far, theoretical guarantees have been missing. The goal of this paper is twofold. On the one hand, we give such guarantees and briefly describe the difficulties of the involved proof. On the other hand, we combine these periodization strategies with recent constructive methods for the efficient subsampling of finite frames in $\mathbb{C}$. As a result, we are able to reconstruct non-periodic multivariate functions from very few samples. The sampling nodes used are the result of a two-step procedure: first, a random draw with respect to the Chebychev measure provides an initial node set; a further sparsification technique then selects a significantly smaller subset of these nodes with equal approximation properties. This set of sampling nodes scales linearly in the dimension of the subspace onto which we project and works universally for the whole class of functions. The method is based on principles developed by Batson, Spielman, and Srivastava and can be implemented numerically. Samples on these nodes are then used in a (plain) least-squares sampling recovery step on a suitable hyperbolic cross subspace of functions, resulting in near-optimal behavior of the sampling error. Numerical experiments indicate the applicability of our results.
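
The following sketch illustrates only the two ingredients that are easy to reproduce: a random draw with respect to the Chebychev measure and a plain least-squares recovery step on a polynomial subspace. The sparsification (subsampling) step and the hyperbolic cross construction are omitted, and the test function is our own assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random draw w.r.t. the Chebychev (arcsine) measure on [-1, 1]:
# if U ~ Unif(0, 1), then cos(pi * U) has exactly this distribution.
m, deg = 400, 15
nodes = np.cos(np.pi * rng.uniform(size=m))

f = lambda x: np.exp(x) * np.sin(3 * x)   # smooth non-periodic test function
samples = f(nodes)

# Plain least-squares recovery on a Chebyshev polynomial subspace.
V = np.polynomial.chebyshev.chebvander(nodes, deg)
coef, *_ = np.linalg.lstsq(V, samples, rcond=None)

# Uniform error of the reconstruction on a fine grid.
x = np.linspace(-1, 1, 2001)
err = np.abs(f(x) - np.polynomial.chebyshev.chebval(x, coef)).max()
print(f"max recovery error: {err:.2e}")
```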

Graph databases have grown in popularity in recent years because they can efficiently store and query complex relationships between data. In particular, navigation data and road networks can be processed, sampled, or modified efficiently when stored as a graph. As a result, graph databases are a solution for route planning tasks that is attracting increasing attention from developers of autonomous vehicles. To achieve a computational performance that enables route planning on large road networks, or for a great number of agents concurrently, several aspects need to be considered in the design of the solution. Based on a concrete use case for centralized route planning, we discuss the characteristics and properties of a use case that can significantly influence the computational effort or efficiency of the database management system. Subsequently, we evaluate the performance of both Neo4j and ArangoDB depending on these properties. With these results, it is not only possible to choose the most suitable database system but also to improve the resulting performance by addressing relevant aspects in the design of the application.
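
As a rough illustration of how such a route planning query might look against a graph database, here is a sketch using the official Neo4j Python driver. The schema (`Junction` nodes, `ROAD` relationships, an `id` property) and the connection details are assumptions for the example; note also that Cypher's `shortestPath` minimizes hop count, whereas length-weighted routing would require a graph algorithms library.

```python
from neo4j import GraphDatabase

# Assumed schema: (:Junction {id})-[:ROAD]->(:Junction {id}).
URI, AUTH = "bolt://localhost:7687", ("neo4j", "password")

CYPHER = """
MATCH (a:Junction {id: $src}), (b:Junction {id: $dst})
MATCH p = shortestPath((a)-[:ROAD*]-(b))
RETURN [n IN nodes(p) | n.id] AS route
"""

def plan_route(src, dst):
    # execute_query handles session management and retries (driver >= 5.x).
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        records, _, _ = driver.execute_query(CYPHER, src=src, dst=dst)
        return records[0]["route"] if records else None
```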

Deep Neural Networks (DNNs) have demonstrated impressive performance across a wide range of tasks. However, deploying DNNs on edge devices poses significant challenges due to stringent power and computational budgets. An effective solution to this issue is software-hardware (SW-HW) co-design, which allows for the tailored creation of DNN models and hardware architectures that optimally utilize available resources. However, SW-HW co-design traditionally suffers from slow optimization speeds because its optimizers do not make use of heuristic knowledge, a limitation known as the ``cold start'' problem. In this study, we present a novel approach that leverages Large Language Models (LLMs) to address this issue. By utilizing the abundant knowledge of pre-trained LLMs in the co-design optimization process, we effectively bypass the cold start problem, substantially accelerating the design process. The proposed method achieves a significant speedup of 25x. This advancement paves the way for the rapid and efficient deployment of DNNs on edge devices.
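
Schematically, the warm start can be thought of as follows. This is our own minimal sketch under stated assumptions, not the paper's implementation; `llm_client.complete` and `optimizer.run` are hypothetical interfaces.

```python
import json

def llm_warm_start(optimizer, design_space, llm_client, n_seeds=5):
    """Seed a SW-HW co-design search with LLM-proposed configurations
    instead of random ones, sidestepping the cold-start problem.
    `llm_client.complete` and `optimizer.run` are hypothetical."""
    prompt = (
        f"Propose {n_seeds} promising DNN/accelerator co-design "
        f"configurations as a JSON list of objects with keys "
        f"{sorted(design_space)}."
    )
    proposals = json.loads(llm_client.complete(prompt))
    # Keep only proposals that lie inside the declared design space.
    valid = [p for p in proposals
             if all(p.get(k) in v for k, v in design_space.items())]
    # The downstream optimizer (evolutionary, Bayesian, ...) refines
    # from these informed seeds rather than from scratch.
    return optimizer.run(initial_points=valid)
```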

One-shot channel simulation is a fundamental data compression problem concerned with encoding a single sample from a target distribution $Q$ with a coding distribution $P$ using as few bits as possible on average. Algorithms that solve this problem find applications in neural data compression and differential privacy and can serve as a more efficient alternative to quantization-based methods. Unfortunately, existing solutions are too slow or have limited applicability, preventing widespread adoption. In this paper, we conclusively solve one-shot channel simulation for one-dimensional problems where the target-proposal density ratio is unimodal by describing an algorithm with optimal runtime. We achieve this by constructing a rejection sampling procedure equivalent to greedily searching over the points of a Poisson process. Hence, we call our algorithm greedy Poisson rejection sampling (GPRS) and analyze the correctness and time complexity of several of its variants. Finally, we empirically verify our theorems, demonstrating that GPRS significantly outperforms the current state-of-the-art method, A* coding.
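
For intuition, the sketch below phrases ordinary rejection sampling as a greedy search over the points of a Poisson process; the accepted index is what a channel-simulation encoder would transmit. GPRS itself replaces the fixed bound `M` with a stretch function computed from the density ratio, which is what yields the optimal runtime, so this is background intuition rather than the algorithm from the paper.

```python
import numpy as np

def pp_rejection_sampler(sample_p, ratio, M, rng):
    """Rejection sampling viewed as a search over a Poisson process:
    arrivals T_1 < T_2 < ... of a rate-1 process on (0, inf), each
    carrying an independent spatial mark X_n ~ P. `ratio` is dQ/dP
    and M is an upper bound on it."""
    t, n = 0.0, 0
    while True:
        t += rng.exponential()      # next arrival of the rate-1 process
        n += 1
        x = sample_p(rng)           # spatial mark X_n ~ P
        if rng.uniform() * M <= ratio(x):
            return x, n             # sample from Q and the index to encode

# Demo with a bounded ratio: Q = N(0, 0.25), P = N(0, 1),
# so dQ/dP(x) = 2 * exp(-1.5 x^2) <= 2.
rng = np.random.default_rng(0)
x, n = pp_rejection_sampler(lambda r: r.normal(),
                            lambda x: 2.0 * np.exp(-1.5 * x * x), 2.0, rng)
print(f"accepted x = {x:.3f} at index n = {n}")
```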

The Poisson-Boltzmann equation (PBE) is an implicit solvent continuum model for calculating the electrostatic potential and energies of ionic solvated biomolecules. However, its numerical solution remains a significant challenge due to the strong singularities and nonlinearity caused by the singular source terms and the exponential nonlinear terms, respectively. An efficient method for the treatment of singularities in the linear PBE was introduced in \cite{BeKKKS:18}; it is based on the RS (range-separated) tensor decomposition for both the electrostatic potential and the discretized Dirac delta distribution. In this paper, we extend this regularization method to the nonlinear PBE. We apply the PBE only to the regular part of the solution, corresponding to a modified right-hand side obtained by extracting the long-range part of the discretized Dirac delta distribution. The total electrostatic potential is obtained by adding the long-range solution to the directly precomputed short-range potential. The main computational benefit of the approach is that continuity of the Cauchy data on the solute-solvent interface is maintained automatically. The boundary conditions are also obtained from the long-range component of the precomputed canonical tensor representation of the Newton kernel. In numerical experiments, we illustrate the accuracy of the nonlinear regularized PBE (NRPBE) over the classical variant.
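
Schematically, and with notation assumed for this illustration rather than taken from the paper, the splitting has the following shape:

```latex
% Range-separated splitting (schematic; notation is ours): the discretized
% Dirac delta is split as  \delta_h = \delta_h^{s} + \delta_h^{\ell},
% the short-range potential u^{s} is precomputed from \delta_h^{s} via the
% tensor representation of the Newton kernel, and the nonlinear PBE is
% solved only for the regular long-range part u^{r}:
\[
  u \;=\; u^{s} + u^{r}, \qquad
  -\nabla\cdot\bigl(\epsilon\,\nabla u^{r}\bigr)
  \;+\; \kappa^{2}\sinh\bigl(u^{r}\bigr)
  \;=\; \rho^{\ell}_h \quad \text{in } \Omega,
\]
% with continuity of the Cauchy data on the solute-solvent interface
% holding automatically because the singular short-range part is
% localized inside the solute.
```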

This paper introduces an approach to decoupling singularly perturbed boundary value problems for fourth-order ordinary differential equations that feature a small positive parameter $\epsilon$ multiplying the highest derivative. We specifically examine Lidstone boundary conditions and demonstrate how to break down fourth-order differential equations into a system of second-order problems, with one lacking the parameter and the other featuring $\epsilon$ multiplying the highest derivative. To solve this system, we propose a mixed finite element algorithm and incorporate the Shishkin mesh scheme to capture the solution near boundary layers. Our solver is both direct and of high accuracy, with computation time that scales linearly with the number of grid points. We present numerical results to validate the theoretical results and the accuracy of our method.
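
As a concrete model of the decoupling (the paper's operator may include additional terms; this instance is for illustration only):

```latex
% Model problem with Lidstone boundary conditions:
\[
  \epsilon\, u^{(4)} - u'' = f \ \text{ on } (0,1), \qquad
  u(0)=u(1)=u''(0)=u''(1)=0 .
\]
% The substitution v = -u'' decouples it into two second-order problems,
\[
  -u'' = v, \qquad -\epsilon\, v'' + v = f, \qquad
  u(0)=u(1)=0, \quad v(0)=v(1)=0,
\]
% where the first problem is parameter-free and the second is a
% reaction-diffusion problem with \epsilon on the highest derivative,
% whose boundary layers the Shishkin mesh is designed to resolve.
```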

Conducting valid statistical analyses is challenging in the presence of missing-not-at-random (MNAR) data, where the missingness mechanism depends on the missing values themselves even conditioned on the observed data. Here, we consider an MNAR model that generalizes several prior popular MNAR models in two ways: first, it is less restrictive in terms of statistical independence assumptions imposed on the underlying joint data distribution, and second, it allows for all variables in the observed sample to have missing values. This MNAR model corresponds to a so-called criss-cross structure considered in the literature on graphical models of missing data that prevents nonparametric identification of the entire missing data model. Nonetheless, part of the complete-data distribution remains nonparametrically identifiable. By exploiting this fact and considering a rich class of exponential family distributions, we establish sufficient conditions for identification of the complete-data distribution as well as the entire missingness mechanism. We then propose methods for testing the independence restrictions encoded in such models using the odds ratio as our parameter of interest. We adopt two semiparametric approaches for estimating the odds ratio parameter and establish the corresponding asymptotic theories: one involves maximizing a conditional likelihood with order statistics and the other uses estimating equations. The utility of our methods is illustrated via simulation studies.
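
For reference, one common form of the odds ratio parameterization of a missingness mechanism (stated here in a simplified single-variable form; the paper's multivariate criss-cross setting is richer) is:

```latex
% Odds ratio of missingness R (R = 1: observed) against outcome Y,
% relative to a reference value y_0:
\[
  \mathrm{OR}(y) \;=\;
  \frac{p(R=1 \mid Y=y)\; p(R=0 \mid Y=y_0)}
       {p(R=0 \mid Y=y)\; p(R=1 \mid Y=y_0)} .
\]
% OR(y) = 1 for all y corresponds to missingness that does not depend on
% the possibly unobserved value of Y; departures from 1 encode the MNAR
% selection effect that the proposed tests and estimators target.
```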

This paper investigates the problem of simultaneously predicting multiple binary responses by utilizing a shared set of covariates. Our approach incorporates machine learning techniques for binary classification, without making assumptions about the underlying observations. Instead, our focus lies on a group of predictors, aiming to identify the one that minimizes prediction error. Unlike previous studies that primarily address estimation error, we directly analyze the prediction error of our method using PAC-Bayesian bounds. In this paper, we introduce a pseudo-Bayesian approach capable of handling incomplete response data. Our strategy is efficiently implemented using the Langevin Monte Carlo method. Through simulation studies and a practical application using real data, we demonstrate the effectiveness of our proposed method, producing results comparable, and sometimes superior, to the current state-of-the-art method.
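
The Langevin Monte Carlo step itself is standard; below is a generic sketch of the unadjusted Langevin algorithm for sampling a (pseudo-)posterior, our own illustration with a toy Gaussian target rather than the paper's problem-specific pseudo-posterior and gradients:

```python
import numpy as np

def langevin_sampler(grad_log_post, theta0, step, n_iter, rng):
    """Unadjusted Langevin algorithm: theta <- theta + h * grad_log_post
    + sqrt(2h) * Gaussian noise, which targets the (pseudo-)posterior
    whose log-density gradient is `grad_log_post` (up to O(h) bias)."""
    theta = np.asarray(theta0, dtype=float)
    draws = np.empty((n_iter, theta.size))
    for k in range(n_iter):
        noise = rng.standard_normal(theta.size)
        theta = theta + step * grad_log_post(theta) + np.sqrt(2 * step) * noise
        draws[k] = theta
    return draws

# Toy check: target N(0, 1), whose log-density gradient is -theta.
rng = np.random.default_rng(0)
draws = langevin_sampler(lambda th: -th, np.zeros(1), 1e-2, 20_000, rng)
print(draws[5_000:].mean(), draws[5_000:].std())   # approx 0 and 1
```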

Ancestry-specific proteome-wide association studies (PWAS) based on genetically predicted protein expression can reveal complex disease etiology specific to certain ancestral groups. These studies require ancestry-specific models for protein expression as a function of SNP genotypes. In order to improve protein expression prediction in ancestral populations historically underrepresented in genomic studies, we propose a new penalized maximum likelihood estimator for fitting ancestry-specific joint protein quantitative trait loci models. Our estimator borrows information across ancestral groups, while simultaneously allowing for heterogeneous error variances and regression coefficients. We propose an alternative parameterization of our model which makes the objective function convex and the penalty scale invariant. To improve computational efficiency, we propose an approximate version of our method and study its theoretical properties. Our method provides a substantial improvement in protein expression prediction accuracy in individuals of African ancestry, and in a downstream PWAS analysis, leads to the discovery of multiple associations between protein expression and blood lipid traits in the African ancestry population.
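
A stylized version of such a cross-ancestry penalized objective (the paper's exact estimator and penalty may differ) is:

```latex
% For ancestry groups g = 1, ..., G with data (y_g, X_g), error
% variances sigma_g^2, and coefficients beta_g, minimize
\[
  \sum_{g=1}^{G} \left\{
    \frac{\lVert y_g - X_g \beta_g \rVert_2^2}{2\sigma_g^2}
    + \frac{n_g}{2}\log \sigma_g^2
  \right\}
  \;+\; \lambda \sum_{g < g'} \Bigl\lVert
    \frac{\beta_g}{\sigma_g} - \frac{\beta_{g'}}{\sigma_{g'}}
  \Bigr\rVert_2^2 ,
\]
% where the fusion penalty borrows information by coupling standardized
% coefficients across groups while leaving variances heterogeneous;
% reparameterizing as (gamma_g, tau_g) = (beta_g / sigma_g, 1 / sigma_g)
% is one way to make the objective convex and the penalty scale invariant.
```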

Large language models (LLMs) have significantly advanced the field of natural language processing (NLP), providing a highly useful, task-agnostic foundation for a wide range of applications. The great promise of LLMs as general task solvers has motivated people to extend their functionality far beyond a ``chatbot'' and to use them as assistants or even replacements for domain experts and tools in specific domains such as healthcare, finance, and education. However, directly applying LLMs to solve sophisticated problems in specific domains meets many hurdles, caused by the heterogeneity of domain data, the sophistication of domain knowledge, the uniqueness of domain objectives, and the diversity of the constraints (e.g., various social norms, cultural conformity, religious beliefs, and ethical standards in the domain applications). To fill this gap, research and practice on the domain specialization of LLMs have increased explosively in recent years, which calls for a comprehensive and systematic review to better summarize and guide this promising area. In this survey paper, we first propose a systematic taxonomy that categorizes LLM domain-specialization techniques based on the level of access to the LLM and summarizes the framework for all the subcategories as well as their relations and differences to each other. We also present a comprehensive taxonomy of critical application domains that can benefit from specialized LLMs, discussing their practical significance and open challenges. Furthermore, we offer insights into the current research status and future trends in this area.
