
Nowadays, low-rank approximations of matrices are an important component of many methods in science and engineering. Traditionally, low-rank approximations are considered in unitarily invariant norms; recently, however, element-wise approximations have also received significant attention in the literature. In this paper, we propose an accelerated alternating minimization algorithm for solving the problem of low-rank approximation of matrices in the Chebyshev norm. Through numerical evaluation, we demonstrate the effectiveness of the proposed procedure for large-scale problems. We also theoretically investigate the alternating minimization method and introduce the notion of a $2$-way alternance of rank $r$. We show that the presence of a $2$-way alternance of rank $r$ is a necessary condition for an optimal low-rank approximation in the Chebyshev norm and that all limit points of the alternating minimization method satisfy this condition.
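To make the alternating structure concrete, here is a minimal sketch of the basic (non-accelerated) alternating scheme for $\min_{U,V} \|A - UV^T\|_C$, where $\|\cdot\|_C$ denotes the entrywise maximum norm. With one factor fixed, each row (or column) subproblem is a classical Chebyshev linear fitting problem, solved exactly below as a small linear program; the function names are illustrative, and the paper's acceleration is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_row_fit(a, V):
    """Solve min_x ||a - V @ x||_inf exactly as an LP in variables (x, t)."""
    m, r = V.shape
    c = np.zeros(r + 1); c[-1] = 1.0              # minimize t
    # |a_j - V_j x| <= t  <=>  V_j x - t <= a_j  and  -V_j x - t <= -a_j
    A_ub = np.block([[ V, -np.ones((m, 1))],
                     [-V, -np.ones((m, 1))]])
    b_ub = np.concatenate([a, -a])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * r + [(0, None)])
    return res.x[:r]

def altmin_chebyshev(A, r, iters=20, seed=0):
    """Plain alternating minimization for min ||A - U V^T||_C at rank r."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    V = rng.standard_normal((n, r))
    for _ in range(iters):
        U = np.array([chebyshev_row_fit(A[i], V) for i in range(m)])
        V = np.array([chebyshev_row_fit(A[:, j], U) for j in range(n)])
    return U, V
```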

Related Content

Analytical workflows in functional magnetic resonance imaging are highly flexible, with few established best practices for choosing a pipeline. While it has been shown that the use of different pipelines might lead to different results, there is still a lack of understanding of the factors that drive these differences and of the stability of these differences across contexts. We use community detection algorithms to explore the pipeline space and assess the stability of pipeline relationships across different contexts. We show that there are subsets of pipelines that give similar results, especially those sharing specific parameters (e.g. number of motion regressors, software packages, etc.). Those pipeline-to-pipeline patterns are stable across groups of participants but not across different tasks. By visualizing the differences between communities, we show that the pipeline space is mainly driven by the size of the activation area in the brain and the scale of statistic values in statistic maps.
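As a rough illustration of the methodology (not the authors' exact procedure), one can build a graph whose nodes are pipelines, connect pipelines whose statistic maps agree strongly, and apply an off-the-shelf community detection algorithm; `pipeline_communities`, the correlation measure, and the threshold are assumptions of this sketch.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def pipeline_communities(stat_maps, threshold=0.8):
    """stat_maps: dict {pipeline_name: 1-D array of voxel statistics}.
    Link pipelines whose maps are strongly correlated, then group them
    with a modularity-based community detection algorithm."""
    names = list(stat_maps)
    X = np.vstack([stat_maps[n] for n in names])
    C = np.corrcoef(X)                          # pipeline-to-pipeline similarity
    G = nx.Graph()
    G.add_nodes_from(names)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if C[i, j] >= threshold:            # keep only strong agreements
                G.add_edge(names[i], names[j], weight=C[i, j])
    return [set(c) for c in greedy_modularity_communities(G, weight="weight")]
```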

Machine learning is becoming increasingly popular in the context of particle physics. Supervised learning, which uses labeled Monte Carlo (MC) simulations, remains one of the most widely used methods for discriminating signals beyond the Standard Model. However, this paper suggests that supervised models may depend excessively on artifacts and approximations from Monte Carlo simulations, potentially limiting their ability to generalize well to real data. This study aims to enhance the generalization properties of supervised models by reducing the sharpness of local minima. It reviews the application of four distinct white-box adversarial attacks in the context of classifying Higgs boson decay signals. The attacks are divided into weight-space attacks and feature-space attacks. To study and quantify the sharpness of different local minima, this paper presents two analysis methods: gradient ascent and reduced Hessian eigenvalue analysis. The results show that white-box adversarial attacks significantly improve generalization performance, albeit with increased computational complexity.
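As one concrete example of a weight-space attack aimed at flattening local minima, the sketch below implements a sharpness-aware update in the spirit of SAM: ascend to a worst-case nearby point in weight space, then descend with the gradient taken there. This is a generic illustration rather than the paper's specific attacks; `sam_step` and the radius `rho` are assumptions.

```python
import torch

def sam_step(model, loss_fn, x, y, opt, rho=0.05):
    """One sharpness-aware update: ascend in weight space to a nearby
    high-loss point (a weight-space "attack"), then descend using the
    gradient evaluated there."""
    opt.zero_grad()
    loss_fn(model(x), y).backward()             # gradient at current weights
    grads = [p.grad.detach().clone() for p in model.parameters()]
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    with torch.no_grad():                       # worst-case step of radius rho
        eps = [rho * g / norm for g in grads]
        for p, e in zip(model.parameters(), eps):
            p.add_(e)
    opt.zero_grad()
    loss_fn(model(x), y).backward()             # gradient at the perturbed point
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)                           # restore original weights
    opt.step()                                  # update with the sharp-point gradient
```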

Threshold tolerance graphs and their complement graphs, known as co-TT graphs, were introduced by Monma, Reed, and Trotter [24]. Building on this, Hell et al. [19] introduced the concept of a negative interval. They then defined signed-interval digraphs/bigraphs, demonstrating their equivalence to several seemingly distinct classes of digraphs/bigraphs. They also showed that co-TT graphs are equivalent to symmetric signed-interval digraphs, where some vertices of the digraphs have loops and others do not. We show that this actually solves the representation characterization problem for co-TT graphs posed by Monma, Reed, and Trotter [24]. In this paper, we characterize signed-interval bigraphs and signed-interval graphs in terms of their biadjacency matrices and adjacency matrices, respectively. Moreover, we emphasize the geometric representation of signed-interval graphs, i.e., co-TT graphs. Finally, by utilizing the geometric representation of signed-interval graphs, we resolve the open problem of characterizing co-TT graphs in terms of minimal forbidden induced subgraphs, a problem initially posed by Monma, Reed, and Trotter in the same paper.

Characteristic formulae give a complete logical description of the behaviour of processes modulo some chosen notion of behavioural semantics. They allow one to reduce equivalence or preorder checking to model checking, and are exactly the formulae in the modal logics characterizing classic behavioural equivalences and preorders for which model checking can be reduced to equivalence or preorder checking. This paper studies the complexity of determining whether a formula is characteristic for some finite, loop-free process in each of the logics providing modal characterizations of the simulation-based semantics in van Glabbeek's branching-time spectrum. Since characteristic formulae in each of those logics are exactly the consistent and prime ones, it presents complexity results for the satisfiability and primality problems, and investigates the boundary between modal logics for which those problems can be solved in polynomial time and those for which they become computationally hard. Amongst other contributions, this article also studies the complexity of constructing characteristic formulae in the modal logics characterizing simulation-based semantics, both when such formulae are presented in explicit form and via systems of equations.
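To illustrate the reduction the abstract alludes to, consider the fragment of Hennessy-Milner logic (truth, conjunction, diamonds) that characterizes the simulation preorder. The sketch below model checks such formulae on a finite, loop-free labelled transition system and builds the characteristic formula $\chi(s) = \bigwedge_{s \xrightarrow{a} s'} \langle a \rangle \chi(s')$, so that $s$ is simulated by $t$ exactly when $t$ satisfies $\chi(s)$; the tuple encoding of formulae is an assumption of this sketch.

```python
# Formulas: ('tt',), ('and', f, g), ('dia', a, f) -- the fragment that
# characterizes simulation; a process is a finite, loop-free LTS given
# as {state: [(action, successor), ...]}.
def sat(lts, state, phi):
    """Model check a simulation-logic formula at `state` of a finite LTS."""
    tag = phi[0]
    if tag == 'tt':
        return True
    if tag == 'and':
        return sat(lts, state, phi[1]) and sat(lts, state, phi[2])
    if tag == 'dia':                    # <a>phi: some a-successor satisfies phi
        return any(sat(lts, s2, phi[2])
                   for (a, s2) in lts.get(state, []) if a == phi[1])
    raise ValueError(tag)

def char_formula(lts, state):
    """Characteristic formula for simulation of a loop-free process:
    the conjunction of <a>chi(s') over all transitions s -a-> s'."""
    f = ('tt',)
    for (a, s2) in lts.get(state, []):
        f = ('and', f, ('dia', a, char_formula(lts, s2)))
    return f
```

With these definitions, preorder checking reduces to model checking: `sat(lts, t, char_formula(lts, s))` holds precisely when `t` simulates `s`.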

Despite significant effort, the quantum machine learning community has only demonstrated quantum learning advantages for artificial cryptography-inspired datasets when dealing with classical data. In this paper we address the challenge of finding learning problems where quantum learning algorithms can achieve a provable exponential speedup over classical learning algorithms. We reflect on computational learning theory concepts related to this question and discuss how subtle differences in definitions can result in significantly different requirements and tasks for the learner to meet and solve. We examine existing learning problems with provable quantum speedups and find that they largely rely on the classical hardness of evaluating the function that generates the data, rather than identifying it. To address this, we present two new learning separations where the classical difficulty primarily lies in identifying the function generating the data. Furthermore, we explore computational hardness assumptions that can be leveraged to prove quantum speedups in scenarios where data is quantum-generated, which implies likely quantum advantages in a plethora of more natural settings (e.g., in condensed matter and high energy physics). We also discuss the limitations of the classical shadow paradigm in the context of learning separations, and how physically-motivated settings such as characterizing phases of matter and Hamiltonian learning fit in the computational learning framework.

Shifts in data distribution can substantially harm the performance of clinical AI models. Hence, various methods have been developed to detect the presence of such shifts at deployment time. However, root causes of dataset shifts are varied, and the choice of shift mitigation strategies is highly dependent on the precise type of shift encountered at test time. As such, detecting test-time dataset shift is not sufficient: precisely identifying which type of shift has occurred is critical. In this work, we propose the first unsupervised dataset shift identification framework, effectively distinguishing between prevalence shift (caused by a change in the label distribution), covariate shift (caused by a change in input characteristics) and mixed shifts (simultaneous prevalence and covariate shifts). We discuss the importance of self-supervised encoders for detecting subtle covariate shifts and propose a novel shift detector leveraging both self-supervised encoders and task model outputs for improved shift detection. We report promising results for the proposed shift identification framework across three different imaging modalities (chest radiography, digital mammography, and retinal fundus images) on five types of real-world dataset shifts, using four large publicly available datasets.
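A deliberately crude sketch of the two-signal idea: self-supervised encoder features respond to covariate shift, while the task model's output distribution responds to prevalence shift. The detector below applies Kolmogorov-Smirnov tests to each signal; the thresholds, the Bonferroni correction, and the rule for "mixed" are assumptions, and unlike the paper's framework this naive rule conflates the two shift types when covariate shift also moves the predictions.

```python
import numpy as np
from scipy.stats import ks_2samp

def identify_shift(ref_feats, test_feats, ref_preds, test_preds, alpha=0.01):
    """Crude shift identification: KS tests on self-supervised encoder
    features (covariate signal) and on task-model output probabilities
    (prevalence signal) between a reference batch and a test batch."""
    # covariate signal: is any feature dimension distributed differently?
    cov_p = min(ks_2samp(ref_feats[:, k], test_feats[:, k]).pvalue
                for k in range(ref_feats.shape[1]))
    # prevalence signal: have the predicted class probabilities shifted?
    prev_p = ks_2samp(ref_preds, test_preds).pvalue
    covariate = cov_p < alpha / ref_feats.shape[1]    # Bonferroni-corrected
    prevalence = prev_p < alpha
    if covariate and prevalence:
        return "mixed shift"
    if covariate:
        return "covariate shift"
    if prevalence:
        return "prevalence shift"
    return "no shift detected"
```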

This paper analyzes a full discretization of a three-dimensional stochastic Allen-Cahn equation with multiplicative noise. The discretization uses the Euler scheme for temporal discretization and the finite element method for spatial discretization. A key contribution of this work is the introduction of a novel stability estimate for a discrete stochastic convolution, which plays a crucial role in establishing pathwise uniform convergence estimates for fully discrete approximations of nonlinear stochastic parabolic equations. By using this stability estimate in conjunction with the discrete stochastic maximal $L^p$-regularity estimate, the study derives a pathwise uniform convergence rate that encompasses general spatial $L^q$-norms. Moreover, the theoretical convergence rate is verified by numerical experiments.
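For intuition only, here is a semi-implicit Euler discretization of a one-dimensional toy analogue of the equation (the paper treats the three-dimensional problem with finite elements); the multiplicative noise $g(u) = u$, the grid, and the initial datum are assumptions of the sketch.

```python
import numpy as np

def allen_cahn_1d(n=128, T=1.0, steps=1000, seed=0):
    """Semi-implicit Euler for a 1-D toy analogue of the stochastic
    Allen-Cahn equation du = (u_xx + u - u^3) dt + u dW, with discrete
    space-time white noise as a stand-in for the driving noise."""
    rng = np.random.default_rng(seed)
    h, dt = 1.0 / (n + 1), T / steps
    # Dirichlet Laplacian on the interior grid points
    L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    M = np.eye(n) - dt * L                     # implicit in the stiff linear part
    x = np.linspace(h, 1 - h, n)
    u = np.cos(np.pi * x)                      # an arbitrary initial datum
    for _ in range(steps):
        dW = rng.standard_normal(n) * np.sqrt(dt / h)   # white-noise increment
        u = np.linalg.solve(M, u + dt * (u - u**3) + u * dW)
    return x, u
```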

We consider the stochastic heat equation driven by a multiplicative Gaussian noise that is white in time and spatially homogeneous. Assuming that the spatial correlation function is given by a Riesz kernel of order $\alpha \in (0,1)$, we prove a central limit theorem for power variations and other related functionals of the solution. To our surprise, there is no asymptotic bias despite the low regularity of the noise coefficient in the multiplicative case. We trace this circumstance back to cancellation effects between error terms arising naturally in second-order limit theorems for power variations.
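The power variation statistic itself is elementary; the sketch below computes the $p$-th power variation of a discretely observed path and illustrates it on a Brownian-motion stand-in (simulating the actual stochastic heat equation is omitted here).

```python
import numpy as np

def power_variation(u_path, p=2.0):
    """p-th power variation of a discretely observed path u(t_0),...,u(t_n):
    V_n = sum |u(t_i) - u(t_{i-1})|^p, the statistic whose fluctuations a
    central limit theorem of this kind describes."""
    increments = np.diff(u_path)
    return np.sum(np.abs(increments) ** p)

# Illustration on a Brownian-motion stand-in for the solution path:
rng = np.random.default_rng(1)
n = 10_000
bm = np.cumsum(rng.standard_normal(n) * np.sqrt(1.0 / n))
print(power_variation(bm, p=2.0))   # quadratic variation of BM on [0,1] is ~1
```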

The Markov chain Monte Carlo (MCMC) method is widely used in various fields as a powerful numerical integration technique for systems with many degrees of freedom. In MCMC methods, probabilistic state transitions can be considered as a random walk in state space, and random walks allow for sampling from complex distributions. However, paradoxically, it is necessary to carefully suppress the randomness of the random walk to improve computational efficiency. By breaking detailed balance, we can create a probability flow in the state space and perform more efficient sampling along this flow. Motivated by this idea, practical and efficient nonreversible MCMC methods have been developed over the past ten years. In particular, the lifting technique, which introduces probability flows in an extended state space, has been applied to various systems and has proven more efficient than conventional reversible updates. We review and discuss several practical approaches to implementing nonreversible MCMC methods, including the shift method in the cumulative distribution and the directed-worm algorithm.
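The lifting technique is easy to demonstrate on a toy target: extend the state with a direction variable, move persistently along that direction, and flip it only on rejection. The sketch below (a standard lifted Metropolis construction on a one-dimensional chain, not the shift or directed-worm methods discussed in the text) breaks detailed balance while leaving the target distribution invariant.

```python
import numpy as np

def lifted_metropolis(pi, n_samples, seed=0):
    """Lifted (nonreversible) Metropolis on states 0..N-1: the state carries
    a direction sigma; moves go only along sigma, and sigma flips on
    rejection. Detailed balance is broken, yet pi remains invariant."""
    rng = np.random.default_rng(seed)
    N = len(pi)
    i, sigma = 0, 1
    samples = np.empty(n_samples, dtype=int)
    for t in range(n_samples):
        j = i + sigma
        if 0 <= j < N and rng.random() < min(1.0, pi[j] / pi[i]):
            i = j                      # persistent move along the flow
        else:
            sigma = -sigma             # rejection flips the direction
        samples[t] = i
    return samples

# Example: sample a discretized Gaussian and check the empirical histogram.
x = np.arange(50)
pi = np.exp(-0.5 * ((x - 25) / 6.0) ** 2)
pi /= pi.sum()
s = lifted_metropolis(pi, 200_000)
print(np.bincount(s, minlength=50) / len(s))   # should be close to pi
```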

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, which itself becomes difficult to scale. In this paper we present four algorithms to solve these problems. In combination, these algorithms enable each agent to improve its task allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, limiting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum within the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
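As a toy illustration of one ingredient (not the paper's four algorithms), the agent below keeps running quality estimates for delegating each subtask type to each neighbour and shrinks its exploration rate as its best-known option improves; rewards are assumed to lie in $[0, 1]$, and all names are hypothetical.

```python
import random
from collections import defaultdict

class AllocatorAgent:
    """Toy distributed-allocation agent: learns a quality estimate for each
    (subtask type, neighbour) pair and explores less as its current
    strategy's estimated quality improves (rewards assumed in [0, 1])."""
    def __init__(self, lr=0.1, eps_max=0.5, eps_min=0.02):
        self.q = defaultdict(float)            # (task_type, agent) -> value
        self.lr, self.eps_max, self.eps_min = lr, eps_max, eps_min

    def choose(self, task_type, neighbours):
        best = max(neighbours, key=lambda a: self.q[(task_type, a)])
        # explore more while the best-known option still looks poor
        conf = max(0.0, min(1.0, self.q[(task_type, best)]))
        eps = self.eps_max - (self.eps_max - self.eps_min) * conf
        return random.choice(neighbours) if random.random() < eps else best

    def update(self, task_type, agent, reward):
        key = (task_type, agent)
        self.q[key] += self.lr * (reward - self.q[key])   # running average
```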
