
Motivated by a recent method for the approximate solution of Fredholm equations of the first kind, we develop a corresponding method for a class of Fredholm equations of the \emph{second kind}. In particular, we consider the class of equations for which the solution is a probability measure. The approach centres on specifying a functional whose minimizer corresponds to a regularized version of the solution of the underlying equation, and on using a mean-field particle system to approximately simulate the gradient flow of that functional. Theoretical support for the method is presented, along with some illustrative numerical results.
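To make the idea concrete, the following is a toy, fully discretized sketch in the spirit of this abstract: a probability vector on a grid is evolved by gradient descent on a penalized residual of a second-kind equation, with a softmax parameterization keeping it on the simplex. This is an illustration only, not the authors' mean-field particle scheme; the specific equation, kernel, penalty, and all tuning constants are assumptions.

```python
# Toy discretized analogue (illustration only, not the paper's particle method):
# seek a probability density p solving p = (1-lam)*g + lam*K p on a grid by
# gradient descent on ||residual||^2 + eps * entropy penalty.
import numpy as np

m = 200
x = np.linspace(-3, 3, m)
dx = x[1] - x[0]

# Gaussian Markov-type kernel K(y, x), normalized so each column integrates to 1.
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.5)
K /= K.sum(axis=0, keepdims=True) * dx

g = np.exp(-0.5 * (x + 1) ** 2)
g /= g.sum() * dx                        # inhomogeneous term, itself a density
lam, eta, eps = 0.5, 5.0, 1e-3           # mixing weight, step size, penalty weight

theta = np.zeros(m)                      # unconstrained parameters; p = softmax(theta)
for _ in range(2000):
    w = np.exp(theta - theta.max())
    p = w / (w.sum() * dx)
    r = p - (1 - lam) * g - lam * (K @ p) * dx          # residual of the equation
    # d(loss)/dp for loss = sum(r^2)*dx + eps * sum(p log p)*dx
    dLdp = (2 * r * dx - 2 * lam * (K.T @ r) * dx**2
            + eps * (np.log(p + 1e-12) + 1) * dx)
    # chain rule through the normalized softmax: dp_j/dtheta_i = p_i(delta_ij - p_j dx)
    theta -= eta * p * (dLdp - (dLdp * p).sum() * dx)

w = np.exp(theta - theta.max())
p = w / (w.sum() * dx)
r = p - (1 - lam) * g - lam * (K @ p) * dx
print("final residual L2 norm:", np.linalg.norm(r) * np.sqrt(dx))
```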

Related Content

Integration, the VLSI Journal. Publisher: Elsevier.

In this work, we close the fundamental gap between theory and practice by providing an improved regret bound for linear ensemble sampling. We prove that, with an ensemble size logarithmic in $T$, linear ensemble sampling can achieve a frequentist regret bound of $\tilde{\mathcal{O}}(d^{3/2}\sqrt{T})$, matching state-of-the-art results for randomized linear bandit algorithms, where $d$ and $T$ are the dimension of the parameter and the time horizon, respectively. Our approach introduces a general regret analysis framework for linear bandit algorithms. Additionally, we reveal a significant relationship between linear ensemble sampling and Linear Perturbed-History Exploration (LinPHE), showing that LinPHE is a special case of linear ensemble sampling when the ensemble size equals $T$. This insight allows us to derive a new regret bound of $\tilde{\mathcal{O}}(d^{3/2}\sqrt{T})$ for LinPHE, independent of the number of arms. Our contributions advance the theoretical foundation of ensemble sampling, bringing its regret bounds in line with the best known bounds for other randomized exploration algorithms.
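A minimal sketch of one generic form of linear ensemble sampling, as our reading of the abstract suggests: $M$ perturbed regularized least-squares estimates share a Gram matrix, and each round the learner acts greedily with respect to a uniformly sampled ensemble member. The perturbation scheme, the ensemble-size constant, and all hyperparameters below are illustrative assumptions, not the paper's exact specification.

```python
# Generic linear ensemble sampling sketch (assumptions noted in the lead-in).
import numpy as np

rng = np.random.default_rng(1)
d, T, n_arms = 5, 2000, 20
M = int(np.ceil(10 * np.log(T)))               # ensemble size logarithmic in T
arms = rng.normal(size=(n_arms, d))
arms /= np.linalg.norm(arms, axis=1, keepdims=True)
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)

lam, sigma = 1.0, 0.1
V = lam * np.eye(d)                            # shared regularized Gram matrix
b = rng.normal(size=(M, d)) * np.sqrt(lam)     # per-member perturbed prior term
best = arms @ theta_star
regret = 0.0

for t in range(T):
    m = rng.integers(M)                        # sample one ensemble member
    theta_m = np.linalg.solve(V, b[m])         # its least-squares estimate
    a = int(np.argmax(arms @ theta_m))         # act greedily w.r.t. that member
    x = arms[a]
    r = x @ theta_star + sigma * rng.normal()
    V += np.outer(x, x)
    # every member sees the reward plus its own fresh Gaussian perturbation
    b += x[None, :] * (r + sigma * rng.normal(size=M))[:, None]
    regret += best.max() - best[a]

print(f"ensemble size M={M}, cumulative regret ~ {regret:.1f}")
```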

When there are multiple outcome series of interest, Synthetic Control analyses typically proceed by estimating separate weights for each outcome. In this paper, we instead propose estimating a common set of weights across outcomes, by balancing either a vector of all outcomes or an index or average of them. Under a low-rank factor model, we show that these approaches lead to lower bias bounds than separate weights, and that averaging leads to further gains when the number of outcomes grows. We illustrate this via a re-analysis of the impact of the Flint water crisis on educational outcomes.
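A sketch of the "common weights" idea above: one weight vector, chosen to balance an average (index) of all pre-period outcomes, is reused for every outcome series. The data shapes, the SLSQP solver, and all names are illustrative assumptions, not the authors' implementation.

```python
# Common-weights synthetic control sketch: balance the across-outcome average.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
J, T0, n_out = 30, 40, 3                        # donor units, pre-periods, outcomes
Y_donor = rng.normal(size=(n_out, T0, J))       # donor outcomes (synthetic stand-in)
Y_treat = rng.normal(size=(n_out, T0))          # treated unit's outcomes

# Balance the across-outcome average; for the "vector of all outcomes" variant,
# instead stack: A = Y_donor.reshape(n_out * T0, J), b = Y_treat.reshape(-1).
A = Y_donor.mean(axis=0)                        # (T0, J)
b = Y_treat.mean(axis=0)                        # (T0,)

res = minimize(lambda w: np.sum((A @ w - b) ** 2),
               np.full(J, 1.0 / J),
               bounds=[(0.0, 1.0)] * J,
               constraints=({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},),
               method='SLSQP')
w_common = res.x                                # one simplex weight vector, all outcomes
for k in range(n_out):
    print(f"outcome {k}: pre-period imbalance "
          f"{np.linalg.norm(Y_donor[k] @ w_common - Y_treat[k]):.3f}")
```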

We formulate and analyze interior penalty discontinuous Galerkin methods for coupled elliptic PDEs modeling excitable tissue, represented by intracellular and extracellular domains sharing a common interface. The PDEs are coupled through a dynamic boundary condition, posed on the interface, that relates the normal gradients of the solutions to the time derivative of their jump. This system is referred to as the Extracellular Membrane Intracellular model or the cell-by-cell model. Due to the dynamic nature of the interface condition and to the presence of corner singularities, the analysis of discontinuous Galerkin methods is non-standard. We prove the existence and uniqueness of solutions by a reformulation of the problem to one posed on the membrane. Convergence is shown by utilizing face-to-element lifting operators and notions of weak consistency suitable for solutions with low spatial regularity. Further, we present parameter-robust preconditioned iterative solvers. Numerical examples in idealized geometries demonstrate our theoretical findings, and simulations in multiple cells portray the robustness of the method.
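For readers unfamiliar with the model, a standard statement of the EMI (cell-by-cell) system consistent with the description above is as follows; the notation (conductivities $\sigma_i, \sigma_e$, membrane capacitance $C_m$, ionic current $I_{\mathrm{ion}}$, outward normals $n_i, n_e$) and the sign conventions are assumed, as the abstract does not fix them.

```latex
\begin{align*}
  -\nabla\cdot(\sigma_i \nabla u_i) &= 0 && \text{in } \Omega_i \quad \text{(intracellular)},\\
  -\nabla\cdot(\sigma_e \nabla u_e) &= 0 && \text{in } \Omega_e \quad \text{(extracellular)},\\
  \sigma_i \nabla u_i \cdot n_i &= -\,\sigma_e \nabla u_e \cdot n_e \equiv I_m && \text{on } \Gamma \quad \text{(flux continuity)},\\
  C_m\, \partial_t (u_i - u_e) &= I_m - I_{\mathrm{ion}}(u_i - u_e) && \text{on } \Gamma \quad \text{(dynamic interface condition)}.
\end{align*}
```

The last equation is the dynamic boundary condition the abstract refers to: it relates the normal gradients of the solutions to the time derivative of their jump $u_i - u_e$ across the membrane $\Gamma$.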

The quantum no-cloning theorem gives rise to the intriguing possibility of quantum copy protection, where we encode a program or functionality in a quantum state such that a user in possession of k copies cannot create k+1 copies, for any k. Introduced by Aaronson (CCC'09) over a decade ago, copy protection has proven to be notoriously hard to achieve. Previous work has been able to achieve copy-protection for various functionalities only in restricted models: (i) in the bounded-collusion setting, where k -> k+1 security is achieved for an a priori fixed collusion bound k (in the plain model with the same computational assumptions as ours, by Liu, Liu, Qian, Zhandry [TCC'22]), or (ii) where only k -> 2k security is achieved (relative to a structured quantum oracle, by Aaronson [CCC'09]). In this work, we give the first unbounded-collusion-resistant (i.e. multiple-copy secure) copy-protection schemes, answering the long-standing open question of constructing such schemes, raised by multiple previous works starting with Aaronson (CCC'09). More specifically, we obtain the following results.
- We construct (i) public-key encryption, (ii) public-key functional encryption, (iii) signature and (iv) pseudorandom function schemes whose keys are copy-protected against unbounded collusions in the plain model (i.e. without any idealized oracles), assuming (post-quantum) subexponentially secure iO and LWE.
- We show that any unlearnable functionality can be copy-protected against unbounded collusions, relative to a classical oracle.
- As a corollary of our results, we rule out the existence of hyperefficient quantum shadow tomography, (*) even given non-black-box access to the measurements, assuming subexponentially secure iO and LWE, or (*) unconditionally relative to a quantumly accessible classical oracle, and hence answer an open question by Aaronson (STOC'18).

In this study, we seek to understand how macroeconomic factors such as GDP, inflation, Unemployment Insurance, and the S&P 500 index, as well as microeconomic factors such as health, race, and educational attainment, impacted the unemployment rate in the United States over roughly the past 20 years. Our research question is to identify, using linear regression, which factors contributed the most to the surge in the unemployment rate. Results from our study showed that GDP (negative), inflation (positive), Unemployment Insurance (contrary to popular opinion, negative), and the S&P 500 index (negative) were all significant factors, with inflation being the most important one. As for health factors, our model produced correlation scores relating occurrences of cardiovascular disease, neurological disease, and interpersonal violence to unemployment. Race as a factor showed large discrepancies in the unemployment rate between Black Americans and their counterparts; Asians had the lowest unemployment rate throughout the years. As for educational attainment, results showed that higher educational attainment significantly reduced one's chance of unemployment: people with higher degrees had the lowest unemployment rate. The results of this study will be beneficial for policymakers and researchers in understanding the unemployment rate during the pandemic.
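A minimal sketch of the kind of OLS analysis described above. The data below are synthetic stand-ins and every column name is hypothetical; the abstract does not specify the dataset or variable definitions.

```python
# Hypothetical OLS regression of unemployment on macro factors (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 240  # e.g., monthly observations over ~20 years
df = pd.DataFrame({
    "gdp_growth": rng.normal(2, 1, n),
    "inflation": rng.normal(3, 1, n),
    "ui_claims": rng.normal(0, 1, n),
    "sp500_return": rng.normal(5, 10, n),
})
# Synthetic response built to mimic the sign pattern reported above
# (GDP, UI, and S&P negative; inflation positive).
df["unemployment_rate"] = (6 - 0.5 * df.gdp_growth + 0.8 * df.inflation
                           - 0.3 * df.ui_claims - 0.05 * df.sp500_return
                           + rng.normal(0, 0.5, n))

X = sm.add_constant(df[["gdp_growth", "inflation", "ui_claims", "sp500_return"]])
print(sm.OLS(df["unemployment_rate"], X).fit().summary())
```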

Higher order artificial neurons, whose outputs are computed by applying an activation function to a higher order multinomial function of the inputs, have been considered in the past, but did not gain acceptance due to the extra parameters and computational cost. However, higher order neurons have significantly greater learning capabilities, since their decision boundaries can be complex surfaces instead of just hyperplanes. The boundary of a single quadratic neuron can be a general hyper-quadric surface, allowing it to learn many nonlinearly separable datasets. Since quadratic forms can be represented by symmetric matrices, only $\frac{n(n+1)}{2}$ additional parameters are needed instead of $n^2$. A quadratic logistic regression model is first presented. Solutions to the XOR problem with a single quadratic neuron are considered. The complete vectorized equations for both forward and backward propagation in feedforward networks composed of quadratic neurons are derived. A reduced-parameter quadratic neural network model, with just $n$ additional parameters per neuron, that provides a compromise between learning ability and computational cost is presented. Comparisons on benchmark classification datasets demonstrate that a final layer of quadratic neurons enables networks to achieve higher accuracy with significantly fewer hidden layer neurons. In particular, this paper shows that any dataset composed of $\mathcal{C}$ bounded clusters can be separated with only a single layer of $\mathcal{C}$ quadratic neurons.
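A sketch of a single quadratic neuron and the reduced-parameter variant described above, including the XOR example; this is an illustrative implementation, and the paper's exact parameterization may differ.

```python
# Quadratic neuron sketches (illustrative; logistic activation assumed).
import numpy as np

def quadratic_neuron(x, W, w, b):
    """Full quadratic neuron: sigma(x^T W x + w^T x + b), W symmetric
    (n(n+1)/2 independent parameters)."""
    z = x @ W @ x + w @ x + b
    return 1.0 / (1.0 + np.exp(-z))

def reduced_quadratic_neuron(x, a, w, b):
    """Reduced variant with only n extra parameters: diagonal quadratic term
    sum_i a_i x_i^2 + w.x + b."""
    z = (a * x) @ x + w @ x + b
    return 1.0 / (1.0 + np.exp(-z))

# XOR with one quadratic neuron, using +/-1 encoding: x^T W x = -x1*x2 is
# positive exactly on the XOR-true (mixed-sign) inputs.
W = np.array([[0.0, -0.5], [-0.5, 0.0]])
for pt in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
    x = np.array(pt, dtype=float)
    print(pt, quadratic_neuron(x, W, np.zeros(2), 0.0) > 0.5)
```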

In this work, we study the generalizability of diffusion models by looking into the hidden properties of the learned score functions, which are essentially a series of deep denoisers trained on various noise levels. We observe that as diffusion models transition from memorization to generalization, their corresponding nonlinear diffusion denoisers exhibit increasing linearity. This discovery leads us to investigate the linear counterparts of the nonlinear diffusion models, which are a series of linear models trained to match the function mappings of the nonlinear diffusion denoisers. Surprisingly, these linear denoisers are approximately the optimal denoisers for a multivariate Gaussian distribution characterized by the empirical mean and covariance of the training dataset. This finding implies that diffusion models have an inductive bias towards capturing and utilizing the Gaussian structure (covariance information) of the training dataset for data generation. We empirically demonstrate that this inductive bias is a unique property of diffusion models in the generalization regime, which becomes increasingly evident when the model's capacity is relatively small compared to the training dataset size. When the model is highly overparameterized, this inductive bias emerges during the initial training phases, before the model fully memorizes its training data. Our study provides crucial insights into understanding the strong generalization phenomenon recently observed in real-world diffusion models.
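The optimal Gaussian denoiser the abstract refers to has a closed form: for $y = x + \sigma\varepsilon$ with prior $x \sim \mathcal{N}(\mu, \Sigma)$, the MMSE denoiser is the affine map $\mathbb{E}[x \mid y] = \mu + \Sigma(\Sigma + \sigma^2 I)^{-1}(y - \mu)$. This is the standard Gaussian posterior-mean formula; the sketch below applies it with an empirical mean and covariance, using synthetic stand-in data.

```python
# Closed-form MMSE denoiser for a Gaussian prior (standard posterior-mean formula).
import numpy as np

def gaussian_mmse_denoiser(y, mu, Sigma, sigma):
    d = mu.shape[0]
    # E[x | y] = mu + Sigma (Sigma + sigma^2 I)^{-1} (y - mu)
    return mu + Sigma @ np.linalg.solve(Sigma + sigma**2 * np.eye(d), y - mu)

# Toy usage with the empirical mean/covariance of a synthetic "training set":
rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 8)) @ np.diag(np.linspace(0.2, 2.0, 8))
mu, Sigma = X.mean(axis=0), np.cov(X, rowvar=False)
y = X[0] + 0.5 * rng.normal(size=8)      # noisy observation at noise level 0.5
print(gaussian_mmse_denoiser(y, mu, Sigma, 0.5))
```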

This study develops an algorithm to solve a variation of the Shortest Common Superstring (SCS) problem, with two modifications to the base problem. First, one string in the set S is allowed to have up to K mistakes, defined as differing from the SCS in at most K positions. Second, no string in S may be a substring of another string in S. The proposed algorithm is exact.
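The paper's exact algorithm is not given in the abstract; the helper sketches below only illustrate the quantities involved in the two modifications (mismatch counting and substring-freeness) plus the standard overlap primitive used throughout SCS algorithms.

```python
# Helper sketches for the SCS variant above (not the paper's algorithm).

def overlap(a: str, b: str) -> int:
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def substring_free(S: list[str]) -> bool:
    """Second condition: no string in S is a substring of another."""
    return not any(i != j and s in t
                   for i, s in enumerate(S) for j, t in enumerate(S))

def mismatches(s: str, sup: str, pos: int) -> int:
    """First condition: positions where s disagrees with the superstring when
    placed at offset pos (must be <= K for the one designated string)."""
    return sum(c1 != c2 for c1, c2 in zip(s, sup[pos:pos + len(s)]))
```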

In this article, we explore the use of the equivariance criterion for estimation in the linear model with fixed design (fixed-X), and extend the model to allow multiple populations, which, in turn, leads to a larger transformation group. The minimum risk equivariant estimators of the coefficient vector and the covariance matrix are derived via maximal invariants, consistent with earlier work. This article serves as an early exploration of the equivariance criterion in linear models.

We describe a mesh-free three-dimensional (3D) numerical scheme for solving the incompressible semi-geostrophic equations, based on semi-discrete optimal transport techniques, generalising previous two-dimensional (2D) implementations. The optimal transport methods we adopt are known for their structure-preserving qualities, and the scheme achieves an excellent level of efficiency and numerical energy conservation. We use this scheme to generate numerical simulations of an important cyclone benchmark problem. To our knowledge, this is the first fully 3D simulation of this cyclone, evidencing the model's applicability to atmospheric and oceanic phenomena and offering a novel, robust tool for meteorological and oceanographic modelling.
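A minimal sketch of the semi-discrete optimal transport building block mentioned above: stochastic ascent on the Kantorovich dual, with Laguerre (power-cell) assignments estimated by Monte Carlo. The geometric solver and the semi-geostrophic time stepping used in the paper are not reproduced; all sizes and step parameters are illustrative assumptions.

```python
# Semi-discrete OT sketch: find dual weights psi so that each discrete target
# point receives its prescribed mass from a continuous (here, uniform) source.
import numpy as np

rng = np.random.default_rng(3)
n = 50
Y = rng.random((n, 3))                  # discrete target points ("fluid parcels")
nu = np.full(n, 1.0 / n)                # prescribed masses
psi = np.zeros(n)                       # dual weights

for _ in range(500):
    X = rng.random((4096, 3))           # samples from the uniform source density
    # Laguerre/power-cell assignment: each sample goes to argmin |x-y_j|^2 - psi_j
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1) - psi[None, :]
    cell = d2.argmin(axis=1)
    mass = np.bincount(cell, minlength=n) / len(X)   # Monte Carlo cell masses
    psi += 0.5 * (nu - mass)            # ascend the concave dual: gradient = nu - mass

print("max cell-mass error:", np.abs(mass - nu).max())
```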
