
In this work, we propose an Operator Learning (OpL) method for solving boundary value inverse problems in partial differential equations (PDEs), focusing on recovering diffusion coefficients from boundary data. Inspired by the classical Direct Sampling Method (DSM), our operator learner, named $\gamma$-deepDSM, has two key components: (1) a data-feature generation process that applies a learnable fractional Laplace-Beltrami operator to the boundary data, and (2) a convolutional neural network that operates on these data features to produce reconstructions. To facilitate this workflow, our second contribution is a set of finite element method (FEM) modules fully integrated with PyTorch, called Learning-Automated FEM (LA-FEM), built on FEALPy \cite{wei2024fealpy}, a cross-platform Computer-Aided Engineering engine. The new LA-FEM modules in FEALPy allow efficient parallel GPU computing, batched solution of PDEs, and auto-differentiation, without the need for additional loops, data-format conversions, or device-to-device transfers. With LA-FEM, PDE solvers with learnable parameters can be directly integrated into neural network models.
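For concreteness, here is a minimal PyTorch sketch of the two-component design described above: a learnable fractional power of a (precomputed) discrete Laplace-Beltrami eigendecomposition applied to the boundary data, followed by a small CNN. All names, shapes, and the probing-grid construction are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class GammaDeepDSM(nn.Module):
    """Minimal sketch of a gamma-deepDSM-style operator learner.

    `lb_eigvecs` / `lb_eigvals` stand in for an eigendecomposition of a
    discrete Laplace-Beltrami operator on the boundary mesh; `gamma` is the
    learnable fractional exponent applied in the spectral domain.
    """
    def __init__(self, lb_eigvecs, lb_eigvals, n_out=1):
        super().__init__()
        self.register_buffer("V", lb_eigvecs)         # (n_bdry, n_modes)
        self.register_buffer("lam", lb_eigvals)       # (n_modes,)
        self.gamma = nn.Parameter(torch.tensor(0.5))  # learnable fractional order
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_out, 3, padding=1),
        )

    def forward(self, bdry_data, probe):
        # fractional Laplace-Beltrami applied spectrally: V diag(lam^gamma) V^T u
        coeffs = self.V.T @ bdry_data                  # spectral coefficients
        frac = (self.lam.clamp_min(1e-8) ** self.gamma) * coeffs
        feature = self.V @ frac                        # filtered boundary data
        # a DSM-style index field on a hypothetical 64x64 probing grid,
        # then a CNN mapping data features to a reconstruction
        field = (probe @ feature).reshape(1, 1, 64, 64)
        return self.cnn(field)
```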

Related Content

Measurement-based quantum computation (MBQC) offers a fundamentally unique paradigm to design quantum algorithms. Indeed, due to the inherent randomness of quantum measurements, the natural operations in MBQC are not deterministic and unitary, but are rather augmented with probabilistic byproducts. Yet, the main algorithmic use of MBQC so far has been to completely counteract this probabilistic nature in order to simulate unitary computations expressed in the circuit model. In this work, we propose designing MBQC algorithms that embrace this inherent randomness and treat the random byproducts in MBQC as a resource for computation. As a natural application where randomness can be beneficial, we consider generative modeling, a task in machine learning centered around generating complex probability distributions. To address this task, we propose a variational MBQC algorithm equipped with control parameters that allow one to directly adjust the degree of randomness to be admitted in the computation. Our algebraic and numerical findings indicate that this additional randomness can lead to significant gains in expressivity and learning performance, respectively, for certain generative modeling tasks. These results highlight the potential advantages of exploiting the inherent randomness of MBQC and motivate further research into MBQC-based algorithms.
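As a toy illustration of trading byproduct correction against randomness, the following sketch simulates a single MBQC wire (a 1D cluster state) in which each measurement teleports the state with a uniformly random Pauli byproduct; a control parameter `p_correct` sets how often the standard adaptive angle correction is applied. The sign conventions, the per-step correction rule, and the neglect of Z-byproduct propagation are simplifying assumptions, not the paper's algorithm.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])

def rz(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def mbqc_sample(angles, p_correct, rng):
    """One output bit from a 1D-cluster MBQC wire (illustrative toy).

    Each XY-plane measurement teleports psi -> X^s H Rz(theta) psi with a
    uniformly random byproduct bit s.  With probability `p_correct` the next
    angle is adapted to undo the previous byproduct (standard deterministic
    MBQC); otherwise the byproduct is kept as a source of randomness.
    """
    psi = np.array([1.0, 0.0], dtype=complex)
    s_prev = 0
    for theta in angles:
        if rng.random() < p_correct:
            theta = (-1) ** s_prev * theta    # adaptive byproduct correction
        s = int(rng.integers(2))              # XY-plane outcomes are uniform
        psi = np.linalg.matrix_power(X, s) @ H @ rz(theta) @ psi
        s_prev = s
    probs = np.abs(psi) ** 2
    return int(rng.choice(2, p=probs / probs.sum()))

rng = np.random.default_rng(0)
samples = [mbqc_sample([0.3, 1.1, -0.7], p_correct=0.5, rng=rng)
           for _ in range(1000)]
```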

In this paper, we address a well-known challenge of Markov chain Monte Carlo (MCMC) algorithms within the approximate Bayesian computation (ABC) framework: their inherently local exploration mechanism often leaves them trapped in local optima. We propose a novel Global-Local ABC-MCMC algorithm that combines the "exploration" capabilities of global proposals with the "exploitation" finesse of local proposals. By integrating iterative importance resampling into the likelihood-free framework, we establish an effective global proposal distribution. We select the optimal mixture of global and local moves by sequential optimization of a unit-cost version of the expected squared jumping distance. Furthermore, we propose two adaptive schemes: the first uses a normalizing-flow-based probabilistic distribution learning model to iteratively improve the proposal for importance sampling, and the second optimizes the efficiency of the local sampler by utilizing Langevin dynamics and common random numbers. We numerically demonstrate that our method improves sampling efficiency and achieves more reliable convergence for complex posteriors. A software package implementing this method is available at //github.com/caofff/GL-ABC-MCMC.
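A minimal sketch of the global-local proposal mixture, assuming the global proposal (e.g. the importance-resampling or normalizing-flow fit) is supplied by the user and the prior is flat; `simulate`, `dist`, and the tolerance `eps` follow the standard ABC-MCMC recipe rather than the paper's exact scheme.

```python
import numpy as np

def abc_mcmc_gl(simulate, dist, y_obs, theta0, eps, n_iter,
                p_global, global_sampler, global_logpdf, local_scale, rng):
    """Sketch of one global-local ABC-MCMC chain (simplified).

    With probability `p_global` we draw an independent "global" proposal
    from `global_sampler`; otherwise we make a random-walk "local" move.
    Acceptance uses the usual ABC kernel 1{dist(simulate(theta), y_obs) < eps}
    together with the proposal-density ratio (flat prior assumed for brevity).
    """
    theta, chain = np.atleast_1d(theta0).astype(float), []
    for _ in range(n_iter):
        if rng.random() < p_global:                    # exploration
            prop = global_sampler(rng)
            log_q_ratio = global_logpdf(theta) - global_logpdf(prop)
        else:                                          # exploitation
            prop = theta + local_scale * rng.standard_normal(theta.shape)
            log_q_ratio = 0.0                          # symmetric move
        if dist(simulate(prop, rng), y_obs) < eps:
            if np.log(rng.random()) < log_q_ratio:
                theta = prop
        chain.append(theta.copy())
    return np.array(chain)
```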

In a Jacobi--Davidson (JD) type method for singular value decomposition (SVD) problems, called JDSVD, a large symmetric and generally indefinite correction equation is solved iteratively at each outer iteration; these inner iterations dominate the overall efficiency of JDSVD. In this paper, a convergence analysis is made of the minimal residual (MINRES) method for the correction equation. Motivated by the results obtained, at each outer iteration a new correction equation is derived that extracts useful information from current subspaces to construct effective preconditioners for the correction equation, and it is proven to retain the same outer-iteration convergence as JDSVD. The resulting method is called the inner preconditioned JDSVD (IPJDSVD) method; it is also a new JDSVD method, and any viable preconditioner for the correction equations in JDSVD is straightforwardly applicable to those in IPJDSVD. Convergence results show that MINRES for the new correction equation can converge much faster when there is a cluster of singular values closest to a given target. A new thick-restart IPJDSVD algorithm with deflation and purgation is proposed that simultaneously accelerates the outer and inner convergence of the standard thick-restart JDSVD and computes several singular triplets. Numerical experiments justify the theory, illustrate the considerable superiority of IPJDSVD over JDSVD, and demonstrate that a similar two-stage IPJDSVD algorithm substantially outperforms the most advanced PRIMME\_SVDS software available today for computing the smallest singular triplets.
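To illustrate the role of the inner solves, the following sketch applies SciPy's MINRES with a simple symmetric positive definite (diagonal) preconditioner to a toy symmetric indefinite system standing in for the correction equation; the subspace-based preconditioners of IPJDSVD are not reproduced here.

```python
import numpy as np
from scipy.sparse import random as sprandom, eye
from scipy.sparse.linalg import minres, LinearOperator

# Toy stand-in for a symmetric, indefinite JDSVD-type correction equation
# A x = b; the paper's point is that preconditioners built from current
# subspace information speed up exactly these inner MINRES solves.
n = 500
B = sprandom(n, n, density=0.01, random_state=0)
A = (B + B.T) - 2.0 * eye(n)          # symmetric, shifted to be indefinite
b = np.random.default_rng(0).standard_normal(n)

# MINRES requires an SPD preconditioner; a positive diagonal one is the
# simplest valid choice for illustration.
d = np.abs(A.diagonal())
d[d < 1e-10] = 1.0
M = LinearOperator((n, n), matvec=lambda v: v / d)

x, info = minres(A, b, M=M)           # info == 0 signals convergence
```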

In this paper, we employ group rings and automorphism groups of binary linear codes to construct new record-breaking binary linear codes. We consider the semidirect product of abelian groups and cyclic groups and use these groups to construct linear codes. Finally, we obtain some linear codes which have better parameters than the code in \cite{bib5}. All the calculation results and corresponding data are listed in the paper or posted online.
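In the simplest (cyclic) case, multiplication by a group-ring element is a circulant matrix whose row space is a cyclic code; the sketch below illustrates this with an element of $\mathbb{F}_2[C_7]$. The semidirect-product constructions of the paper generalize this block structure and are not reproduced here.

```python
import numpy as np

def circulant_generator(first_row):
    """Generator matrix of the cyclic-group-ring code F_2[C_n] * a.

    Left multiplication by a fixed group-ring element a in F_2[C_n] acts as
    a circulant matrix; its row space over F_2 is a cyclic code.
    """
    n = len(first_row)
    return np.array([np.roll(first_row, i) for i in range(n)]) % 2

# a = 1 + x + x^3 in F_2[C_7]: since x^3 + x + 1 divides x^7 - 1 over F_2,
# the resulting cyclic code is the [7, 4] Hamming code (rank 4 over F_2).
G = circulant_generator([1, 1, 0, 1, 0, 0, 0])
```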

In this paper, we study the embedded feature selection problem in linear Support Vector Machines (SVMs), in which a cardinality constraint is employed, leading to an interpretable classification model. The problem is NP-hard due to the presence of the cardinality constraint, even though the original linear SVM amounts to a problem solvable in polynomial time. To handle the hard problem, we first introduce two mixed-integer formulations for which novel semidefinite relaxations are proposed. Exploiting the sparsity pattern of the relaxations, we decompose the problems and obtain equivalent relaxations in a much smaller cone, making the conic approaches scalable. To make the best use of the decomposed relaxations, we propose heuristics that use the information from their optimal solutions. Moreover, an exact procedure is proposed by solving a sequence of mixed-integer decomposed semidefinite optimization problems. Numerical results on classical benchmarking datasets are reported, showing the efficiency and effectiveness of our approach.
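One of several possible mixed-integer formulations is the big-M model sketched below in cvxpy: the binary variable $z_j$ switches feature $j$ on or off and the cardinality constraint bounds their sum. This is an illustrative formulation, not necessarily one of the paper's two; the semidefinite relaxations and decompositions are not reproduced, and solving the model requires a MIP-capable solver.

```python
import numpy as np
import cvxpy as cp

def l0_svm(X, y, B, C=1.0, bigM=10.0):
    """Big-M mixed-integer model of a cardinality-constrained linear SVM.

    z_j = 1 allows feature j to be used; sum(z) <= B enforces sparsity.
    y must be a vector of +/-1 labels.
    """
    n, p = X.shape
    w, b = cp.Variable(p), cp.Variable()
    xi = cp.Variable(n, nonneg=True)            # hinge-loss slacks
    z = cp.Variable(p, boolean=True)
    cons = [cp.multiply(y, X @ w + b) >= 1 - xi,
            w <= bigM * z, w >= -bigM * z,      # big-M link between w and z
            cp.sum(z) <= B]
    prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi)),
                      cons)
    prob.solve()                                # needs a MIP-capable solver
    return w.value, b.value
```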

We characterise the behaviour of the maximum Diaconis-Ylvisaker prior penalized likelihood estimator in high-dimensional logistic regression, where the number of covariates is a fraction $\kappa \in (0,1)$ of the number of observations $n$, as $n \to \infty$. We derive the estimator's aggregate asymptotic behaviour under this proportional asymptotic regime, when covariates are independent normal random variables with mean zero and the linear predictor has asymptotic variance $\gamma^2$. From this foundation, we devise adjusted $Z$-statistics, penalized likelihood ratio statistics, and aggregate asymptotic results with arbitrary covariate covariance. While the maximum likelihood estimate asymptotically exists only for a narrow range of $(\kappa, \gamma)$ values, the maximum Diaconis-Ylvisaker prior penalized likelihood estimate not only always exists but is also directly computable using maximum likelihood routines. Thus, our asymptotic results also hold for $(\kappa, \gamma)$ values where results for maximum likelihood are not attainable, with no overhead in implementation or computation. We study the estimator's shrinkage properties, compare it to alternative estimation methods that can operate with proportional asymptotics, and present procedures for the estimation of unknown constants that describe the asymptotic behaviour of our estimator. We also provide a conjecture about the behaviour of our estimator when an intercept parameter is present in the model. We present results from extensive numerical studies that demonstrate the theoretical advances and provide strong evidence supporting the conjecture, and we illustrate the methodology we put forward through the analysis of a real-world data set on digit recognition.
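Since the estimator is computable with standard maximum likelihood routines, a minimal sketch is direct. We assume here, as our reading of the construction and to be taken as an assumption, that the Diaconis-Ylvisaker penalty with a shrinkage parameter $\alpha$ amounts to fitting the logistic likelihood with responses shrunk towards one half.

```python
import numpy as np
from scipy.optimize import minimize

def dypl_fit(X, y, alpha=0.95):
    """Sketch of a Diaconis-Ylvisaker-prior penalized logistic fit.

    Assumption (not the paper's verbatim recipe): the penalized estimate is
    the maximizer of the logistic log-likelihood with shrunken responses
    y* = alpha * y + (1 - alpha) / 2, so any ML routine applies.
    """
    y_star = alpha * y + (1 - alpha) / 2
    def nll(beta):
        eta = X @ beta
        # negative log-likelihood with fractional responses y*
        return np.sum(np.logaddexp(0.0, eta) - y_star * eta)
    res = minimize(nll, np.zeros(X.shape[1]), method="BFGS")
    return res.x
```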

In this work we build optimal experimental designs for precise estimation of the functional coefficient of a function-on-function linear regression model where both the response and the factors are continuous functions of time. After obtaining the variance-covariance matrix of the estimator of the functional coefficient which minimizes the integrated sum of squared errors, we extend the classical definition of optimal design to this estimator, and we provide expressions for the A-optimal and D-optimal designs. Examples of optimal designs for dynamic experimental factors are then computed through a suitable algorithm, and we discuss different scenarios in terms of the set of basis functions used for their representation. Finally, we present an example with simulated data to illustrate the feasibility of our methodology.
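The two criteria themselves are simple to state and evaluate; the sketch below computes them for a toy information matrix built from a scalar basis expansion, standing in for the functional-coefficient setting (the basis and time grid are illustrative).

```python
import numpy as np

def a_criterion(M):
    """A-optimality: minimize the trace of the inverse information matrix."""
    return np.trace(np.linalg.inv(M))

def d_criterion(M):
    """D-optimality: maximize the determinant of the information matrix."""
    return np.linalg.det(M)

# Toy illustration: information matrix from basis functions evaluated at
# candidate time points (a stand-in for the functional design problem).
t = np.linspace(0, 1, 50)
Phi = np.column_stack([np.ones_like(t), t, t**2, np.sin(2 * np.pi * t)])
M = Phi.T @ Phi / len(t)
print(a_criterion(M), d_criterion(M))
```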

Not accounting for competing events in survival analysis can lead to biased estimates, as individuals who die from other causes do not have the opportunity to develop the event of interest. Formal definitions and considerations for causal effects in the presence of competing risks have been published, but not for the mediation analysis setting. We propose, for the first time, an approach based on the path-specific effects framework to account for competing risks in longitudinal mediation analysis with time-to-event outcomes. We do so by considering the pathway through the competing event as another mediator, which is nested within our longitudinal mediator of interest. We provide a theoretical formulation and related definitions of the effects of interest based on the mediational g-formula, as well as a detailed description of the algorithm. We also present an application of our algorithm to data from the Strong Heart Study, a prospective cohort of American Indian adults. In this application, we evaluated the mediating role of the blood pressure trajectory (measured during three visits) on the association between arsenic and cadmium exposure (in separate models) and time to cardiovascular disease, accounting for the competing risk of death. Identifying the effects through different paths enables us to evaluate the impact of metals on the outcome of interest, as well as through competing risks, more transparently.
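A minimal Monte Carlo sketch of the resulting g-formula computation, assuming pre-fit models for the mediator path, the competing event (treated as a nested mediator), and the outcome risk are supplied by the user; all function names and the discrete-time simplification are assumptions, not the paper's algorithm.

```python
import numpy as np

def gformula_risk(n_mc, a, a_star, draw_mediator, draw_competing,
                  event_risk, rng):
    """Monte Carlo mediational g-formula with a competing event (toy sketch).

    draw_mediator(a, rng)      -> longitudinal mediator path under exposure a
    draw_competing(a, m, rng)  -> competing-event indicator (nested mediator)
    event_risk(a, m)           -> risk of the event of interest given survival
    Individuals who experience the competing event contribute zero risk for
    the event of interest, mirroring the competing-risks logic in the text.
    """
    total = 0.0
    for _ in range(n_mc):
        m = draw_mediator(a_star, rng)   # mediator path set under a*
        d = draw_competing(a, m, rng)    # competing-event path under a
        total += 0.0 if d else event_risk(a, m)
    return total / n_mc
```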

Evaluating the performance of classifiers is critical in machine learning, particularly in high-stakes applications where the reliability of predictions can significantly impact decision-making. Traditional performance measures, such as accuracy and F-score, often fail to account for the uncertainty inherent in classifier predictions, leading to potentially misleading assessments. This paper introduces the Certainty Ratio ($C_\rho$), a novel metric designed to quantify the contribution of confident (certain) versus uncertain predictions to any classification performance measure. By integrating the Probabilistic Confusion Matrix ($CM^\star$) and decomposing predictions into certainty and uncertainty components, $C_\rho$ provides a more comprehensive evaluation of classifier reliability. Experimental results across 21 datasets and multiple classifiers, including Decision Trees, Naive Bayes, 3-Nearest Neighbors, and Random Forests, demonstrate that $C_\rho$ reveals critical insights that conventional metrics often overlook. These findings emphasize the importance of incorporating probabilistic information into classifier evaluation, offering a robust tool for researchers and practitioners seeking to improve model trustworthiness in complex environments.
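Since the paper's exact definitions of $CM^\star$ and $C_\rho$ are not reproduced in the abstract, the sketch below shows only the underlying idea: split predictions by confidence and measure how much of the performance is carried by the confident group. The threshold and the final ratio are our proxies, not the paper's metric.

```python
import numpy as np

def certainty_decomposition(proba, y_true, threshold=0.75):
    """Proxy for a certainty-ratio-style analysis (illustrative only).

    Splits predictions into "certain" (max class probability >= threshold)
    and "uncertain" groups, reports accuracy within each, and the share of
    all correct predictions contributed by the certain group.
    """
    pred = proba.argmax(axis=1)
    certain = proba.max(axis=1) >= threshold
    correct = pred == y_true
    share = correct[certain].sum() / max(int(correct.sum()), 1)
    return {
        "acc_certain": float(correct[certain].mean()) if certain.any() else np.nan,
        "acc_uncertain": float(correct[~certain].mean()) if (~certain).any() else np.nan,
        "certain_share_of_correct": float(share),
    }
```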

For many problems, quantum algorithms promise speedups over their classical counterparts. However, these results predominantly rely on asymptotic worst-case analysis, which overlooks significant overheads due to error correction and the fact that real-world instances often contain exploitable structure. In this work, we employ the hybrid benchmarking method to evaluate the potential of quantum backtracking and Grover's algorithm against the 2023 SAT competition main track winner in solving random $k$-SAT instances with tunable structure, designed to represent industry-like scenarios, using both $T$-depth and $T$-count as cost metrics to estimate quantum run times. Our findings reproduce the results of Campbell, Khurana, and Montanaro (Quantum '19) in the unstructured case using hybrid benchmarking. However, we offer a more sobering perspective in practically relevant regimes: almost all quantum speedups vanish, even asymptotically, when minimal structure is introduced or when $T$-count is considered instead of $T$-depth. Moreover, when the requirement is for the algorithm to find a solution within a single day, we find that only Grover's algorithm has the potential to outperform classical algorithms, but only in a very limited regime and only when using $T$-depth. We also discuss how more sophisticated heuristics could restore the asymptotic scaling advantage for quantum backtracking, but our findings suggest that the potential for practical quantum speedups in more structured $k$-SAT solving will remain limited.
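A back-of-the-envelope version of this comparison is easy to reproduce: charge Grover its worst-case $\tfrac{\pi}{4}\sqrt{2^n}$ iterations at an assumed $T$-depth per iteration and an assumed wall-clock time per $T$-layer, then compare against a classical time budget. All numbers below are illustrative and do not reflect the paper's cost model.

```python
import numpy as np

def grover_vs_classical(n_vars, t_depth_per_iter, sec_per_t_layer,
                        classical_time):
    """Rough comparison in the spirit of hybrid benchmarking (illustrative).

    Grover needs ~ (pi/4) * sqrt(2^n) iterations in the worst case; each
    iteration is charged `t_depth_per_iter` T-layers at `sec_per_t_layer`
    seconds per layer.  Returns the estimated quantum time and whether it
    beats the given classical time.
    """
    iters = (np.pi / 4) * np.sqrt(2.0 ** n_vars)
    quantum_time = iters * t_depth_per_iter * sec_per_t_layer
    return quantum_time, quantum_time < classical_time

# e.g. 60 variables, 10^3 T-depth per iteration, 10 microseconds per layer,
# against a one-day (86,400 s) classical budget -- all assumed figures
q_time, wins = grover_vs_classical(60, 1e3, 1e-5, classical_time=86_400.0)
```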
