NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Reviewer 1
Originality: This work introduces a novel featurization of graphs, with a focused application in chemistry. I feel it is an important contribution to the field, and I would read this paper all the way through if I encountered the abstract in another context.

Quality: This is a complete work with careful empirical validation. There are error bars in one plot, but no mention I saw of what they indicate. Error bars in the tables would be appreciated, to ascertain whether small differences are significant or worth considering further. I am unqualified to critique the theory portion of the paper, so I hope another reviewer can comment on its validity and impact.

Clarity: The paper is overall well-written. The field of graph neural networks is large and growing, but I feel the work is adequately cited. I have a few typographical nits:
- line 38: "supervisedly" is awkward
- line 56: "that has not parameters", typo
- line 76: improper use of "entails"
- line 112: "now it suffices", awkward phrasing

Notationally:
- between lines 78 and 79, the notation for V is confusing to me. Shouldn't there be a third index, e.g. V_{i,0,C} should be the {0,1} value indicating whether atom i is a carbon? Otherwise, it should be made clearer that V_{i,0} is a vector.

Significance: I believe this work is significant, as it opens a route to CBOW-like methods for graph classification and regression problems.
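To make the notational point concrete: here is a minimal sketch of the one-hot vertex encoding the reviewer seems to have in mind, where V[i] is a vector over atom types and V[i][j] is the {0,1} indicator described above. The atom-type list and example molecule are illustrative only, not taken from the paper.

```python
# Hypothetical one-hot encoding of atom types; ATOM_TYPES and the
# example molecule below are illustrative, not from the paper.
ATOM_TYPES = ["C", "N", "O", "H"]

def one_hot_vertices(atoms):
    """Return V, where V[i] is a vector and V[i][j] is the {0,1}
    indicator that atom i has type ATOM_TYPES[j]."""
    V = []
    for symbol in atoms:
        row = [1 if symbol == t else 0 for t in ATOM_TYPES]
        V.append(row)
    return V

# Toy atom list: two carbons, an oxygen, and a hydrogen.
V = one_hot_vertices(["C", "C", "O", "H"])
print(V[0])  # [1, 0, 0, 0] -> atom 0 is a carbon
```

Under this convention, V_{i,0,C} in the review's suggested notation corresponds to V[i][ATOM_TYPES.index("C")] here.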
Reviewer 2
The authors provide a simple method for creating graph representations in an unsupervised fashion. These representations are used in multiple prediction tasks. The results are interesting in that this simple method works surprisingly well. However, it is not very clear whether these methods are broadly applicable (beyond the molecule domain) or whether there are any conditions under which they may not work well. The baselines also look weak. The authors refer to the Appendix, but I could not find the Appendix in the supplementary material (only source code was available).

-- Update: After reading the author feedback

My main complaint was the lack of comparison with MPNN and DTNN on QM9 and QM8. But this was because I wrongly assumed that the supplementary material was not available. In the feedback, the authors pointed out that the supplementary material was indeed available, and the comparison with MPNN and DTNN is present in Table S18 of the paper. Though they claim that a direct comparison with MPNN and DTNN is not fair because MPNN and DTNN use 3D information, this table gives us an idea of how the method fares against MPNN and DTNN. NGram XGB performs best in 6 out of 12 tasks. If we incorporate this result into Table 2 (where they claim that their method performs best in 9 of those 12 tasks), it will not significantly alter their claims of performance. In light of this, I will increase my rating.
Reviewer 3
[Originality] This paper proposes a novel method for learning unsupervised representations for molecules. This is critical because most molecule datasets are small. Learning an unsupervised representation allows the model to generalize better and potentially utilize unlabeled data in a semi-supervised setting. Currently there are few methods for learning unsupervised molecular representations, and therefore I think this paper is original.

[Quality] The paper is technically sound. It provides theoretical analysis characterizing the model's representation power and generalization bound, which is important for understanding the model. It would be good to see the average sparsity of c(n) on some molecule datasets. The paper performed an extensive empirical comparison against a wide range of baselines; therefore I believe the experimental results support the claims.

[Clarity] The submission is mostly clear. Due to the space limit, the paper is very dense and most of the details are provided in the supplementary material. If this paper gets accepted, I think the authors should reorganize the paper to move some parts of the appendix into the main text and improve its readability.

[Significance] As I mentioned, the paper conducted experiments on standard benchmarks and compared against many baselines. The results are significant, and I believe this paper will encourage many researchers to design unsupervised / pretraining methods for molecules. As an extension, the authors could test the model in a semi-supervised scenario, using unlabeled molecules to derive the graph embeddings. Ideally, the method should work even better when unlabeled molecules are incorporated.

===============================================

Upon reading the other reviewers' comments, I found that MPNN baselines are missing on QM9 and Delaney, where MPNN outperforms the proposed method. Despite that, I think the proposed method is still novel. Therefore, I am keeping my original score but lowering my confidence due to the missing MPNN baselines.
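The sparsity diagnostic suggested in the Quality section could be computed in a few lines. A minimal sketch, assuming the graph-level embeddings c(n) are available as plain vectors (the variable names and toy data are hypothetical, not from the paper):

```python
# Sketch of the suggested diagnostic: average sparsity (fraction of
# near-zero entries) of embedding vectors c(n) across a dataset.
# `embeddings` is a hypothetical stand-in for the paper's c(n) vectors.

def average_sparsity(embeddings, tol=1e-12):
    """Fraction of near-zero entries, pooled over all vectors."""
    total = zeros = 0
    for vec in embeddings:
        total += len(vec)
        zeros += sum(1 for x in vec if abs(x) <= tol)
    return zeros / total

# Toy data: 5 zero entries out of 8 total.
embeddings = [[0.0, 1.3, 0.0, 2.1], [0.0, 0.0, 0.5, 0.0]]
print(average_sparsity(embeddings))  # 0.625
```

Reporting this single number per dataset would directly answer the reviewer's question without adding much to the paper's length.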