
Posts

Paper Summary #2: Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering (NIPS 2016)

This post summarizes the NIPS 2016 paper by Defferrard et al. on "Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering". Convolutional Neural Networks have accomplished many breakthroughs, ranging from the classification of a million ImageNet images to very complex tasks like object tracking in surveillance videos. These advancements are not restricted to image data. CNNs (and deep learning methods in general) have achieved state-of-the-art results on text and speech applications as well. CNNs have proved to be a very powerful tool for solving problems in the image, text and speech domains. The question we want to ask here is: can we use CNNs to solve problems on graphs as well? If we take a closer look at the data domains we have been dealing with, we realize that this data has a specific structure, e.g. images are nothing but a 2-D grid of pixels, text is a stream of words and can be…
Recent posts

Paper Summary #1: Learning Convolutional Neural Networks for Graphs (ICML 2016)

This post summarizes the PATCHY-SAN algorithm proposed by Niepert et al. in their ICML 2016 paper "Learning Convolutional Neural Networks for Graphs". In the literature, the convolution operation on graphs is defined in 3 domains: the spectral domain [Defferrard et al., Kipf et al.], the spatial domain [Masci et al., Boscaini et al.] and the embedding domain [Maron et al.]. The method proposed by Niepert et al. in "Learning Convolutional Neural Networks for Graphs" falls into the category of spatial-domain algorithms. The broad aim of the paper is to learn a function over a set of graphs that gives us a general representation of them. These representations can then be used for any task, such as graph classification. Challenges in applying convolution directly on graphs: There are two main problems that need to be addressed before we can apply convolution on graphs. Challenge 1: Images can be thought of as a regular graph, where each pixel is denoted by a…
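As an aside, here is a minimal sketch (my own illustration, not the paper's code) of the "image as a regular graph" view mentioned in Challenge 1: each pixel becomes a node, and edges connect 4-neighbouring pixels.

# Minimal sketch (not from the paper): view an H x W image as a regular
# grid graph with one node per pixel and edges between 4-neighbouring pixels.
import numpy as np

def grid_adjacency(h, w):
    """Adjacency matrix of a 4-connected h x w pixel grid (h*w nodes)."""
    n = h * w
    A = np.zeros((n, n), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            node = i * w + j
            if j + 1 < w:   # edge to the right neighbour
                A[node, node + 1] = A[node + 1, node] = 1
            if i + 1 < h:   # edge to the bottom neighbour
                A[node, node + w] = A[node + w, node] = 1
    return A

A = grid_adjacency(3, 3)   # a 3x3 "image" -> 9 nodes
print(A.sum() // 2)        # 12 undirected edges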

CNN for Graph: Notes on IPAM UCLA talk- Part II

This post summarizes a talk by Federico Monti on "Deep Geometric Matrix Completion". It is the second post in a series of four. The first post discusses a talk by Xavier Bresson at the IPAM UCLA workshop: Link to first post. IPAM recently hosted a workshop on New Deep Learning Techniques. This blog post is about a talk at that workshop by Federico Monti on Deep Geometric Matrix Completion, a geometric deep learning based approach for Recommendation Systems. The problem of recommending items to customers can be formulated as a matrix completion task. Matrix completion as a constrained minimization problem: Many researchers have posed matrix completion as a constrained minimization problem. Candes et al., 2008 proposed a method that reconstructs a matrix \(X\) which is as close to the original user-item matrix as possible. The method puts an additional low-rank constraint on the matrix that acts as a…
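For reference, that constrained formulation is usually written along these lines (my notation, not necessarily the talk's slides): given the observed user-item ratings \(Y_{ij}\) on an index set \(\Omega\), one solves
\[
\min_{X} \; \operatorname{rank}(X) \quad \text{subject to} \quad X_{ij} = Y_{ij} \;\; \forall (i,j) \in \Omega,
\]
where in practice the non-convex rank term is replaced by the nuclear norm \(\|X\|_{*}\), which acts as the low-rank regularizer on the reconstruction.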

CNN for Graph: Notes on IPAM UCLA talk- Part I

This post is about my understanding of Xavier Bresson's talk at IPAM UCLA on "Convolutional Neural Networks on Graphs". This post has 3 sections: the first section talks about how convolution and pooling operations are defined on graphs, the second section talks about the different graph CNN architectures proposed so far, and the last section talks about the Graph Neural Network, a framework to deal with graphs of arbitrary shape. The Institute for Pure & Applied Mathematics (IPAM), UCLA recently organized a workshop on New Deep Learning Techniques with many great talks on emerging methods in deep learning. The concept of applying convolution on non-Euclidean data is interesting in its own way. Xavier Bresson presented a nice overview of it in his talk at the workshop. In this post, I have discussed some of the concepts that I found worth noting down. CNNs have been successful in solving many problems in computer vision, NLP and speech that seemed impossible to solve…