
This paper proposes a verification method for sparse linear systems $Ax=b$ with general, nonsingular coefficient matrices. A verification method produces a rigorous error bound for a given approximate solution. Conventional methods take one of two approaches. The first verifies the computed solution of the normal equation $A^TAx=A^Tb$ by exploiting its symmetric positive definiteness; however, the condition number of $A^TA$ is the square of that of $A$. The second uses an approximate inverse of the coefficient matrix; however, the approximate inverse may be dense even if $A$ is sparse. Here, we propose a method for verifying solutions of sparse linear systems based on $LDL^T$ decomposition. The proposed method reduces fill-in and is applicable to many problems. Moreover, an efficient iterative refinement method is proposed for obtaining accurate solutions.
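
As a rough illustration of the iterative-refinement component only (the paper's rigorous verification additionally requires directed rounding or interval arithmetic, and uses an $LDL^T$ factorization rather than the LU factorization below), a minimal SciPy sketch might look as follows; the test matrix and tolerance are arbitrary assumptions:

# Sketch: iterative refinement for a sparse system Ax = b.
# Illustrates the refinement loop only; a verified method would also
# compute a rigorous error bound (e.g., via interval arithmetic).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def refine(A, b, max_iter=5, tol=1e-14):
    lu = spla.splu(A.tocsc())          # approximate factorization of A
    x = lu.solve(b)                    # initial approximate solution
    for _ in range(max_iter):
        r = b - A @ x                  # residual (extended precision would help here)
        dx = lu.solve(r)               # correction from the same factorization
        x = x + dx
        if np.linalg.norm(dx) <= tol * np.linalg.norm(x):
            break
    return x

n = 1000
A = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = refine(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))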

Related content

This paper proposes novel noise-free Bayesian optimization strategies that rely on a random exploration step to enhance the accuracy of Gaussian process surrogate models. The new algorithms retain the ease of implementation of the classical GP-UCB algorithm, but the additional random exploration step accelerates their convergence, nearly achieving the optimal convergence rate. Furthermore, to facilitate Bayesian inference with an intractable likelihood, we propose to utilize optimization iterates for maximum a posteriori estimation to build a Gaussian process surrogate model for the unnormalized log-posterior density. We provide bounds for the Hellinger distance between the true and the approximate posterior distributions in terms of the number of design points. We demonstrate the effectiveness of our Bayesian optimization algorithms on non-convex benchmark objective functions, on a machine learning hyperparameter tuning problem, and on a black-box engineering design problem. The effectiveness of our posterior approximation approach is demonstrated in two Bayesian inference problems for parameters of dynamical systems.
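
As a rough sketch of a GP-UCB loop augmented with an occasional random exploration step, using scikit-learn's Gaussian process regressor (the toy objective, exploration probability, and confidence weight are assumptions, not the paper's exact algorithm):

# Sketch: GP-UCB with occasional uniform random exploration (1-D, noise-free).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
f = lambda x: -(x - 0.3) ** 2 + 0.05 * np.sin(20 * x)    # toy objective (assumed)

grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
X = rng.uniform(0.0, 1.0, size=(3, 1))                   # initial design
y = f(X).ravel()

for t in range(30):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1)).fit(X, y)
    if rng.random() < 0.2:                                # random exploration step (assumed schedule)
        x_next = rng.uniform(0.0, 1.0, size=(1, 1))
    else:                                                 # standard UCB acquisition
        mu, sigma = gp.predict(grid, return_std=True)
        beta = 2.0 * np.log((t + 2) ** 2)
        x_next = grid[[np.argmax(mu + np.sqrt(beta) * sigma)]]
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).ravel())

print("best observed value:", y.max(), "at x =", X[np.argmax(y), 0])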

Fog computing offers increased performance and efficiency for Industrial Internet of Things (IIoT) applications through distributed data processing in close proximity to sensors. Given resource constraints and contention for resources in IoT networks, current strategies strive to optimise which data processing tasks should be selected to run on fog devices. In this paper, we advance a more effective data processing architecture for optimisation purposes. Specifically, we consider the distinct functions of sensor data streaming, multi-stream data aggregation and event handling required by IoT applications for identifying actionable events. We retrofit this event processing pipeline into a logical architecture, structured as a service function tree (SFT) comprising service function chains. We present a novel algorithm for mapping the SFT onto a fog network topology in which the nodes selected to process SFT functions (microservices) have the requisite resource capacity and network speed to meet their event processing deadlines. We used simulations to validate the algorithm's effectiveness in finding a successful SFT mapping to a physical network. Overall, our approach overcomes the bottlenecks of single-service placement strategies for fog computing through composite service placements of SFTs.
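
The mapping algorithm itself is not spelled out in the abstract; the following is only a naive greedy sketch of placing service-function-tree functions onto fog nodes under capacity and deadline constraints, with all data structures and numbers being illustrative assumptions:

# Naive greedy sketch: place each SFT function on a fog node with enough
# free capacity and low enough latency (not the paper's algorithm).
def place_sft(functions, fog_nodes):
    """functions: list of dicts {name, cpu, deadline_ms}
       fog_nodes: list of dicts {name, cpu_free, latency_ms}"""
    placement = {}
    for fn in sorted(functions, key=lambda f: f["deadline_ms"]):       # tightest deadline first
        feasible = [n for n in fog_nodes
                    if n["cpu_free"] >= fn["cpu"] and n["latency_ms"] <= fn["deadline_ms"]]
        if not feasible:
            return None                                                # mapping failed
        best = min(feasible, key=lambda n: n["latency_ms"])            # fastest feasible node
        best["cpu_free"] -= fn["cpu"]
        placement[fn["name"]] = best["name"]
    return placement

fog = [{"name": "fog-A", "cpu_free": 4, "latency_ms": 5},
       {"name": "fog-B", "cpu_free": 2, "latency_ms": 2}]
sft = [{"name": "stream", "cpu": 1, "deadline_ms": 3},
       {"name": "aggregate", "cpu": 2, "deadline_ms": 10},
       {"name": "event-handler", "cpu": 1, "deadline_ms": 10}]
print(place_sft(sft, fog))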

This paper considers the graph signal processing problem of anomaly detection in time series of graphs. We examine two related, complementary inference tasks: the detection of anomalous graphs within a time series, and the detection of temporally anomalous vertices. We approach these tasks by adapting statistically principled methods for joint graph inference, specifically \emph{multiple adjacency spectral embedding} (MASE). We demonstrate that our method is effective for both inference tasks, and we assess its performance in terms of the underlying nature of detectable anomalies. We further provide theoretical justification for our method and insight into its use. Applied to the Enron communication graph, a large-scale commercial search engine time series of graphs, and a larval Drosophila connectome dataset, our approach demonstrates its applicability and identifies anomalous vertices beyond those with merely large degree changes.
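
As a rough numpy sketch of how a MASE-style joint embedding can be used to score graphs and vertices for anomalies (the embedding dimension, score definitions, and synthetic data are assumptions for illustration, not the paper's exact procedure):

# Sketch: MASE-style anomaly scoring for a time series of graphs.
import numpy as np

def ase(A, d):
    """Adjacency spectral embedding: top-d eigenvectors of a symmetric A, scaled."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(np.abs(vals))[::-1][:d]
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

def mase_scores(graphs, d=2):
    # 1) joint vertex basis: leading left singular vectors of the concatenated ASEs
    V, _, _ = np.linalg.svd(np.hstack([ase(A, d) for A in graphs]), full_matrices=False)
    V = V[:, :d]
    # 2) per-graph score matrices R_t = V^T A_t V
    R = np.stack([V.T @ A @ V for A in graphs])
    # 3) graph-level anomaly score: deviation of R_t from the time-averaged score matrix
    graph_score = np.linalg.norm(R - R.mean(axis=0), axis=(1, 2))
    # 4) vertex-level score: deviation of each vertex's fitted connectivity over time
    fitted = np.stack([V @ Rt @ V.T for Rt in R])
    vertex_score = np.linalg.norm(fitted - fitted.mean(axis=0), axis=2)
    return graph_score, vertex_score

rng = np.random.default_rng(1)
graphs = [np.triu((rng.random((50, 50)) < 0.1).astype(float), 1) for _ in range(10)]
graphs = [A + A.T for A in graphs]                        # undirected, no self-loops
graphs[7][:10, :10] = 1.0 - np.eye(10)                    # inject a dense anomalous clique
g_score, v_score = mase_scores(graphs)
print("most anomalous graph index:", int(np.argmax(g_score)))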

Continual learning on edge devices poses unique challenges due to stringent resource constraints. This paper introduces a novel method that leverages stochastic competition principles to promote sparsity, significantly reducing the memory footprint and computational demands of deep networks. Specifically, we propose deep networks comprising blocks of units that compete locally, in a stochastic manner, to win the representation of each newly arising task. This type of network organization yields sparse task-specific representations in each network layer; the sparsity pattern is obtained during training and differs among tasks. Crucially, our method sparsifies both the weights and the weight gradients, thus facilitating training on edge devices. Sparsification is governed by the winning probability of each unit in a block. During inference, the network retains only the winning unit and zeroes out all weights pertaining to non-winning units for the task at hand. Thus, our approach is specifically tailored for deployment on edge devices, providing an efficient and scalable solution for continual learning in resource-limited environments.
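
A small numpy sketch of stochastic local winner-take-all competition over blocks of units follows; the block size, the Gumbel-max sampling, and the way non-winners are zeroed are illustrative assumptions rather than the paper's exact formulation:

# Sketch: stochastic local winner-take-all (LWTA) over blocks of units.
import numpy as np

rng = np.random.default_rng(0)

def lwta(h, block_size=4, train=True):
    """h: (batch, features) pre-activations, features divisible by block_size.
    Training: sample one winner per block from a softmax over the block.
    Inference: keep the most probable winner; zero out all other units."""
    b, f = h.shape
    blocks = h.reshape(b, f // block_size, block_size)
    logits = blocks - blocks.max(axis=-1, keepdims=True)     # numerically stable softmax
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    if train:
        gumbel = rng.gumbel(size=probs.shape)                # Gumbel-max sampling of winners
        winners = np.argmax(np.log(probs + 1e-12) + gumbel, axis=-1)
    else:
        winners = np.argmax(probs, axis=-1)
    mask = np.zeros_like(blocks)
    np.put_along_axis(mask, winners[..., None], 1.0, axis=-1)
    return (blocks * mask).reshape(b, f), mask.reshape(b, f)

h = rng.normal(size=(2, 8))
sparse_h, mask = lwta(h, block_size=4)
print("fraction of units kept:", mask.mean())                # 1 / block_size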

Additive noise models (ANMs) are an important setting studied in causal inference. Most of the existing works on ANMs assume causal sufficiency, i.e., there are no unobserved confounders. This paper focuses on confounded ANMs, where a set of treatment variables and a target variable are affected by an unobserved confounder that follows a multivariate Gaussian distribution. We introduce a novel approach for estimating the average causal effects (ACEs) of any subset of the treatment variables on the outcome and demonstrate that a small set of interventional distributions is sufficient to estimate all of them. In addition, we propose a randomized algorithm that further reduces the number of required interventions to poly-logarithmic in the number of nodes. Finally, we demonstrate that these interventions are also sufficient to recover the causal structure between the observed variables. This establishes that a poly-logarithmic number of interventions is sufficient to infer the causal effects of any subset of treatments on the outcome in confounded ANMs with high probability, even when the causal structure between treatments is unknown. The simulation results indicate that our method can accurately estimate all ACEs in the finite-sample regime. We also demonstrate the practical significance of our algorithm by evaluating it on semi-synthetic data.

This invited paper is a passionate pitch for the significance of logic in scientific education. Logic helps focus on the essential core of ideas to identify their foundations, and it provides corresponding longevity in the resulting approach to new and old problems. Logic operates symbolically, where each part has a precise meaning and the meaning of the whole is compositional, i.e., a simple function of the meaning of its pieces. This compositionality in the meaning of logical operators is the basis for compositionality in reasoning about logical operators. Both semantic and deductive compositionality help explain what happens in reasoning. The correctness-critical core of an idea or an algorithm is often expressible eloquently and particularly concisely in logic. The opinions voiced in this paper are influenced by the author's teaching of courses on cyber-physical systems, constructive logic, compiler design, programming language semantics, and imperative programming principles. In each of those courses, different aspects of logic come up for different purposes to elucidate significant ideas particularly clearly. While the thoughts in this paper are biased toward computer science, some of these courses have been heavily frequented by students from other majors, so some transfer to other science and engineering disciplines is plausible.

While large language models (LLMs) can already achieve strong performance on standard generic summarization benchmarks, their performance on more complex summarization task settings is less studied. Therefore, we benchmark LLMs on instruction controllable text summarization, where the model input consists of both a source article and a natural language requirement for desired summary characteristics. To this end, we curate an evaluation-only dataset for this task setting and conduct human evaluations of five LLM-based systems to assess their instruction-following capabilities in controllable summarization. We then benchmark LLM-based automatic evaluation for this task with 4 different evaluation protocols and 11 LLMs, resulting in 40 evaluation methods. Our study reveals that instruction controllable text summarization remains a challenging task for LLMs, since (1) all LLMs evaluated still make factual and other types of errors in their summaries; (2) no LLM-based evaluation methods can achieve a strong alignment with human annotators when judging the quality of candidate summaries; (3) different LLMs show large performance gaps in summary generation and evaluation capabilities. We make our collected benchmark InstruSum publicly available to facilitate future research in this direction.

In this paper, we suggest new SAT encodings of the partial-ordering based ILP model for the graph coloring problem (GCP) and the bandwidth coloring problem (BCP). The GCP asks for the minimum number of colors that can be assigned to the vertices of a given graph such that any two adjacent vertices receive different colors. The BCP is a generalization in which each edge has a weight that enforces a minimum "distance" between the assigned colors, and the goal is to minimize the "largest" color used. For the widely studied GCP, we experimentally compare our new SAT encoding to the state-of-the-art approaches on the DIMACS benchmark set. Our evaluation confirms that this SAT encoding is effective for sparse graphs and even outperforms the state of the art on some DIMACS instances. For the BCP, our theoretical analysis shows that the partial-ordering based SAT and ILP formulations have an asymptotically smaller size than the classical assignment-based model. Our practical evaluation confirms not only a dominance over the assignment-based encodings but also over the state-of-the-art approaches on a set of benchmark instances. To the best of our knowledge, we have solved several open instances of the BCP from the literature for the first time.
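
A small sketch of a partial-ordering style SAT encoding for k-colorability follows (solved with the python-sat package if it is installed); the exact clause set in the paper may differ, and the example graph is an assumption:

# Sketch of a partial-ordering style SAT encoding for k-colorability.
# Variable y(v, i), 1 <= i <= k-1, means "color(v) > i"; colors are 1..k,
# with y(v, 0) implicitly true and y(v, k) implicitly false, so
# "v has color i" corresponds to y(v, i-1) and not y(v, i).
def k_coloring_clauses(edges, n, k):
    def y(v, i):                           # 1-based CNF variable index
        return v * (k - 1) + i
    clauses = []
    for v in range(n):
        for i in range(1, k - 1):          # monotonicity: y(v, i+1) -> y(v, i)
            clauses.append([-y(v, i + 1), y(v, i)])
    for (u, v) in edges:
        for i in range(1, k + 1):          # u and v must not both take color i
            clause = []
            if i > 1:
                clause += [-y(u, i - 1), -y(v, i - 1)]
            if i < k:
                clause += [y(u, i), y(v, i)]
            clauses.append(clause)
    return clauses

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]   # a triangle plus one pendant vertex
clauses = k_coloring_clauses(edges, n=4, k=3)
try:
    from pysat.solvers import Glucose3     # pip install python-sat
    with Glucose3(bootstrap_with=clauses) as solver:
        print("3-colorable:", solver.solve())
except ImportError:
    print(len(clauses), "clauses generated")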

In this paper, we study the statistical properties of the output of Newton's method in a specific case. The relative frequency density of the resulting sample converges to a well-defined function, prompting us to explore its distribution. Through rigorous mathematical proof, we demonstrate that the probability density function follows a Cauchy distribution. Additionally, a new method to generate a uniform distribution is proposed. To further confirm our findings, we employed statistical tests, including the Kolmogorov-Smirnov test and the Anderson-Darling test, both of which yielded high p-values. Furthermore, we show that the distribution of the distance between two successive outputs can be obtained through a transformation method applied to the Cauchy distribution.
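
One classical instance with this kind of behavior is Newton's method applied to $f(x) = x^2 + 1$, which has no real root: the induced map $x \mapsto (x - 1/x)/2$ leaves the standard Cauchy distribution invariant. Since the abstract does not state which case it studies, the following SciPy check is only an illustrative assumption:

# Sketch: Newton iterates for f(x) = x^2 + 1 (no real root), compared against
# the standard Cauchy distribution with a Kolmogorov-Smirnov test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def newton_orbit(x0, n_steps):
    x = x0
    for _ in range(n_steps):
        x = x - (x**2 + 1) / (2 * x)       # Newton step: x - f(x)/f'(x) = (x - 1/x)/2
    return x

x0 = rng.uniform(1.0, 2.0, size=100_000)   # arbitrary smooth initial distribution
samples = newton_orbit(x0, n_steps=25)
samples = samples[np.isfinite(samples)]    # guard against rare numerical blow-ups

ks = stats.kstest(samples, "cauchy")
print(f"KS statistic = {ks.statistic:.4f}, p-value = {ks.pvalue:.3f}")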

In this paper, we propose a novel Feature Decomposition and Reconstruction Learning (FDRL) method for effective facial expression recognition. We view expression information as the combination of shared information (expression similarities) across different expressions and unique information (expression-specific variations) for each expression. More specifically, FDRL mainly consists of two crucial networks: a Feature Decomposition Network (FDN) and a Feature Reconstruction Network (FRN). FDN first decomposes the basic features extracted from a backbone network into a set of facial action-aware latent features to model expression similarities. FRN then captures the intra-feature and inter-feature relationships among the latent features to characterize expression-specific variations, and reconstructs the expression feature. To this end, FRN comprises two modules: an intra-feature relation modeling module and an inter-feature relation modeling module. Experimental results on both in-the-lab databases (including CK+, MMI, and Oulu-CASIA) and in-the-wild databases (including RAF-DB and SFEW) show that the proposed FDRL method consistently achieves higher recognition accuracy than several state-of-the-art methods. This clearly highlights the benefit of feature decomposition and reconstruction for classifying expressions.
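
The abstract describes FDN and FRN only at a high level; the following PyTorch sketch is a loose approximation of a decompose-then-reconstruct head (the number of latent features, the relation-modeling mechanisms, and the classifier are assumptions, not the authors' architecture):

# Loose sketch of a decompose-then-reconstruct expression head (not the exact FDRL model).
import torch
import torch.nn as nn

class DecomposeReconstructHead(nn.Module):
    def __init__(self, feat_dim=512, num_latent=8, latent_dim=64, num_classes=7):
        super().__init__()
        # "FDN"-like step: decompose backbone features into several latent features
        self.decompose = nn.ModuleList(
            [nn.Sequential(nn.Linear(feat_dim, latent_dim), nn.ReLU()) for _ in range(num_latent)]
        )
        # "FRN"-like step: intra-feature weights (one per latent feature) ...
        self.intra = nn.ModuleList([nn.Linear(latent_dim, 1) for _ in range(num_latent)])
        # ... and inter-feature mixing weights conditioned on all latent features
        self.inter = nn.Linear(num_latent * latent_dim, num_latent)
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x):                                    # x: (batch, feat_dim)
        latents = [f(x) for f in self.decompose]             # each (batch, latent_dim)
        stacked = torch.stack(latents, dim=1)                # (batch, num_latent, latent_dim)
        intra_w = torch.sigmoid(
            torch.cat([m(z) for m, z in zip(self.intra, latents)], dim=1))   # (batch, num_latent)
        inter_w = torch.softmax(self.inter(stacked.flatten(1)), dim=1)       # (batch, num_latent)
        weights = (intra_w * inter_w).unsqueeze(-1)
        reconstructed = (weights * stacked).sum(dim=1)       # weighted reconstruction
        return self.classifier(reconstructed)

logits = DecomposeReconstructHead()(torch.randn(4, 512))
print(logits.shape)                                          # torch.Size([4, 7])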
