Top 10 Papers on Static and Dynamic Graph Neural Networks (GNNs)

Answered using GPT-4o

Question:

Explain 5 papers each for static and dynamic GNNs where the graph is pre-defined. Use real, verifiable references.

Answer:

Certainly! Below are five papers each for static and dynamic Graph Neural Networks (GNNs) where the graph structure is pre-defined. These papers are well-cited and have contributed significantly to the field.

Static GNNs

  1. Graph Convolutional Networks (GCNs)

    • Paper: Kipf, T. N., & Welling, M. (2017). "Semi-Supervised Classification with Graph Convolutional Networks." ICLR.
    • Summary: This seminal paper introduces Graph Convolutional Networks (GCNs), which extend convolutional neural networks (CNNs) to graph-structured data. The authors propose a layer-wise propagation rule based on a first-order approximation of spectral graph convolutions. GCNs have been widely adopted for tasks like node classification and link prediction.
    • Link: arXiv:1609.02907
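    • Sketch: a minimal dense-matrix illustration of the propagation rule H' = σ(D̂^(-1/2) Â D̂^(-1/2) H W), where Â adds self-loops. A toy PyTorch sketch, not the authors' reference code:

```python
import torch

def gcn_layer(adj: torch.Tensor, feats: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """One GCN propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + torch.eye(adj.size(0))        # add self-loops
    d_inv_sqrt = torch.diag(a_hat.sum(1).pow(-0.5))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt    # symmetric normalization
    return torch.relu(a_norm @ feats @ weight)  # propagate, then transform

# Toy pre-defined graph: 4 nodes on a path, 3-dim features, 2-dim output.
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
out = gcn_layer(adj, torch.randn(4, 3), torch.randn(3, 2))
print(out.shape)  # torch.Size([4, 2])
```
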
  2. Graph Attention Networks (GATs)

    • Paper: Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P., & Bengio, Y. (2018). "Graph Attention Networks." ICLR.
    • Summary: This paper introduces Graph Attention Networks (GATs), which leverage attention mechanisms to assign different weights to different nodes in a neighborhood. This allows the model to focus on the most relevant parts of the graph, improving performance on tasks like node classification.
    • Link: arXiv:1710.10903
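    • Sketch: a single attention head over a small dense adjacency, masked so each node attends only to its neighbors and itself. Illustrative only; the paper's multi-head version concatenates or averages several such heads:

```python
import torch
import torch.nn.functional as F

def gat_layer(adj, feats, weight, attn_vec):
    """Single-head GAT layer over a dense adjacency matrix."""
    h = feats @ weight                                   # (N, F') linear transform
    n = h.size(0)
    # e_ij = LeakyReLU(a^T [h_i || h_j]) for every ordered node pair.
    pairs = torch.cat([h.repeat_interleave(n, 0), h.repeat(n, 1)], dim=1)
    e = F.leaky_relu(pairs @ attn_vec).view(n, n)
    # Attend only over existing edges plus self-loops.
    e = e.masked_fill((adj + torch.eye(n)) == 0, float('-inf'))
    alpha = torch.softmax(e, dim=1)                      # per-neighborhood weights
    return torch.elu(alpha @ h)                          # weighted aggregation

adj = torch.tensor([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
out = gat_layer(adj, torch.randn(3, 4), torch.randn(4, 8), torch.randn(16, 1))
print(out.shape)  # torch.Size([3, 8])
```
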
  3. GraphSAGE

    • Paper: Hamilton, W. L., Ying, R., & Leskovec, J. (2017). "Inductive Representation Learning on Large Graphs." NeurIPS.
    • Summary: GraphSAGE (Graph Sample and AggregatE) is designed for inductive learning on large graphs. It generates embeddings by sampling and aggregating features from a node's local neighborhood, making it scalable and applicable to unseen nodes.
    • Link: arXiv:1706.02216
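    • Sketch: the sample-and-aggregate step with a mean aggregator over an adjacency list; the fixed-size neighbor sample is what keeps the method scalable and inductive. A simplified sketch, not the reference implementation:

```python
import random
import torch

def sage_layer(neighbors, feats, w_self, w_neigh, num_samples=2):
    """One GraphSAGE step: sample a fixed-size neighborhood, mean-aggregate."""
    rows = []
    for v in range(feats.size(0)):
        sampled = random.sample(neighbors[v], min(num_samples, len(neighbors[v])))
        agg = feats[sampled].mean(dim=0)                   # mean aggregator
        rows.append(torch.cat([feats[v] @ w_self, agg @ w_neigh]))
    h = torch.stack(rows)
    return h / h.norm(dim=1, keepdim=True)                 # L2-normalize embeddings

# Adjacency list of a small pre-defined graph.
neighbors = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
out = sage_layer(neighbors, torch.randn(4, 5), torch.randn(5, 4), torch.randn(5, 4))
print(out.shape)  # torch.Size([4, 8])
```
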
  4. ChebNet

    • Paper: Defferrard, M., Bresson, X., & Vandergheynst, P. (2016). "Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering." NeurIPS.
    • Summary: This paper introduces ChebNet, which approximates spectral graph convolutions with K-th order Chebyshev polynomials of the graph Laplacian. The resulting filters are exactly K-localized and avoid the expensive eigendecomposition required by earlier spectral methods.
    • Link: arXiv:1606.09375
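    • Sketch: Chebyshev filtering via the recurrence T_k = 2 L~ T_{k-1} - T_{k-2}, using the common λ_max ≈ 2 shortcut to rescale the normalized Laplacian. Illustrative only:

```python
import torch

def cheb_conv(laplacian, feats, weights):
    """Chebyshev filtering: sum_k T_k(L~) X W_k via the three-term recurrence."""
    l_tilde = laplacian - torch.eye(laplacian.size(0))  # rescale, using lambda_max ~ 2
    t_prev, t_curr = feats, l_tilde @ feats             # T_0 X = X, T_1 X = L~ X
    out = t_prev @ weights[0] + t_curr @ weights[1]
    for k in range(2, len(weights)):
        t_next = 2 * l_tilde @ t_curr - t_prev          # T_k = 2 L~ T_{k-1} - T_{k-2}
        out = out + t_next @ weights[k]
        t_prev, t_curr = t_curr, t_next
    return out

adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
d_inv_sqrt = torch.diag(adj.sum(1).pow(-0.5))
lap = torch.eye(3) - d_inv_sqrt @ adj @ d_inv_sqrt      # normalized Laplacian
out = cheb_conv(lap, torch.randn(3, 4), [torch.randn(4, 2) for _ in range(3)])
print(out.shape)  # torch.Size([3, 2])
```
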
  5. Graph Isomorphism Network (GIN)

    • Paper: Xu, K., Hu, W., Leskovec, J., & Jegelka, S. (2019). "How Powerful are Graph Neural Networks?" ICLR.
    • Summary: The authors propose the Graph Isomorphism Network (GIN), which is theoretically proven to be as powerful as the Weisfeiler-Lehman graph isomorphism test. GIN achieves state-of-the-art results on various graph classification benchmarks.
    • Link: arXiv:1810.00826
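    • Sketch: the GIN update h'_v = MLP((1 + ε)·h_v + Σ_{u∈N(v)} h_u); the injective sum aggregator is what matches the Weisfeiler-Lehman test. A minimal sketch:

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """GIN update: h_v' = MLP((1 + eps) * h_v + sum over neighbors of h_u)."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))       # learnable epsilon
        self.mlp = nn.Sequential(nn.Linear(dim_in, dim_out), nn.ReLU(),
                                 nn.Linear(dim_out, dim_out))

    def forward(self, adj, feats):
        # Sum aggregation is injective on feature multisets, matching the 1-WL test.
        return self.mlp((1 + self.eps) * feats + adj @ feats)

adj = torch.tensor([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
layer = GINLayer(4, 8)
print(layer(adj, torch.randn(3, 4)).shape)  # torch.Size([3, 8])
```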

Dynamic GNNs

  1. EvolveGCN

    • Paper: Pareja, A., Domeniconi, G., Chen, J., Ma, T., Suzumura, T., Kanezashi, H., ... & Karypis, G. (2020). "EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs." AAAI.
    • Summary: EvolveGCN adapts GCNs to dynamic graphs by evolving the GCN weight matrices over time with a recurrent network (a GRU in the -H variant, an LSTM in the -O variant). Temporal dynamics are thus captured in the model parameters themselves, which helps when nodes appear and disappear across snapshots.
    • Link: arXiv:1902.10191
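    • Sketch: the core idea in the style of the EvolveGCN-O variant, where a recurrent cell evolves the GCN weight matrix itself between snapshots. Simplified, with hypothetical class names (a GRU cell stands in for the paper's LSTM):

```python
import torch
import torch.nn as nn

class EvolveGCNSketch(nn.Module):
    """EvolveGCN-O flavor: a recurrent cell evolves the GCN weight matrix itself."""
    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(dim, dim))  # initial GCN weight
        self.cell = nn.GRUCell(dim, dim)                   # evolves the weight rows

    def forward(self, snapshots, feats):
        w, outputs = self.weight, []
        for adj in snapshots:                     # one adjacency matrix per time step
            w = self.cell(w, w)                   # update the parameters, not embeddings
            a_hat = adj + torch.eye(adj.size(0))
            d = torch.diag(a_hat.sum(1).pow(-0.5))
            outputs.append(torch.relu(d @ a_hat @ d @ feats @ w))
        return outputs

snapshots = [torch.bernoulli(torch.full((4, 4), 0.5)) for _ in range(3)]
model = EvolveGCNSketch(dim=6)
print(model(snapshots, torch.randn(4, 6))[-1].shape)  # torch.Size([4, 6])
```
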
  2. Dynamic Graph Convolutional Networks

    • Paper: Manessi, F., Rozza, A., & Manzo, M. (2020). "Dynamic Graph Convolutional Networks." Pattern Recognition.
    • Summary: This paper combines graph convolutions with modified LSTMs (the WD-GCN and CD-GCN architectures) to learn from sequences of graph snapshots, jointly exploiting graph structure and its evolution over time.
    • Link: arXiv:1704.06199
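    • Sketch: the general recipe of per-snapshot graph convolution followed by an LSTM over each node's embedding sequence. A loose sketch of the idea, not the paper's exact WD-GCN/CD-GCN architectures:

```python
import torch
import torch.nn as nn

class GCNThenLSTM(nn.Module):
    """Per-snapshot graph convolution followed by an LSTM over each node's sequence."""
    def __init__(self, dim_in, dim_hidden):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(dim_in, dim_hidden))
        self.lstm = nn.LSTM(dim_hidden, dim_hidden, batch_first=True)

    def forward(self, snapshots, feats_seq):
        spatial = []
        for adj, feats in zip(snapshots, feats_seq):   # graph conv per snapshot
            a_hat = adj + torch.eye(adj.size(0))
            d = torch.diag(a_hat.sum(1).pow(-0.5))
            spatial.append(torch.relu(d @ a_hat @ d @ feats @ self.weight))
        seq = torch.stack(spatial, dim=1)              # (nodes, time, hidden)
        out, _ = self.lstm(seq)                        # temporal model per node
        return out[:, -1]                              # embeddings at the last step

snapshots = [torch.bernoulli(torch.full((5, 5), 0.4)) for _ in range(4)]
feats_seq = [torch.randn(5, 3) for _ in range(4)]
print(GCNThenLSTM(3, 8)(snapshots, feats_seq).shape)  # torch.Size([5, 8])
```
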
  3. Temporal Graph Networks (TGN)

    • Paper: Rossi, E., Chamberlain, B., Frasca, F., Eynard, D., Monti, F., & Bronstein, M. (2020). "Temporal Graph Networks for Deep Learning on Dynamic Graphs." ICML 2020 Workshop on Graph Representation Learning and Beyond.
    • Summary: TGN is a general framework for learning on continuous-time dynamic graphs represented as streams of timestamped events. It combines a node-wise memory module, updated at each interaction, with graph-based embedding operators such as temporal graph attention, capturing both the temporal and structural aspects of the graph.
    • Link: arXiv:2006.10637
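    • Sketch: an event-driven node memory in the spirit of TGN's memory module; each timestamped interaction updates both endpoints via a GRU cell. The embedding module and training logic are omitted, and all names are illustrative:

```python
import torch
import torch.nn as nn

class TGNMemorySketch(nn.Module):
    """Event-driven node memory: each interaction (src, dst, t, msg) updates
    both endpoints' memory vectors with a GRU cell."""
    def __init__(self, num_nodes, mem_dim, msg_dim):
        super().__init__()
        self.memory = torch.zeros(num_nodes, mem_dim)   # persistent node state
        self.last_update = torch.zeros(num_nodes)       # last event time per node
        self.cell = nn.GRUCell(msg_dim + 1, mem_dim)    # input: message + time delta

    def update(self, src, dst, t, msg):
        for node in (src, dst):
            dt = torch.tensor([t - self.last_update[node].item()])  # elapsed time
            inp = torch.cat([msg, dt]).unsqueeze(0)
            new_mem = self.cell(inp, self.memory[node].unsqueeze(0))
            self.memory[node] = new_mem[0].detach()     # training logic omitted
            self.last_update[node] = t

mem = TGNMemorySketch(num_nodes=10, mem_dim=8, msg_dim=4)
mem.update(src=0, dst=3, t=1.0, msg=torch.randn(4))     # timestamped interactions
mem.update(src=3, dst=7, t=2.5, msg=torch.randn(4))
print(mem.memory[3])                                    # node 3's evolved memory
```
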
  4. DySAT

    • Paper: Sankar, A., Wu, Y., Gou, L., Zhang, W., & Yang, H. (2020). "DySAT: Deep Neural Representation Learning on Dynamic Graphs via Self-Attention Networks." WSDM.
    • Summary: DySAT computes node representations through joint self-attention along two dimensions: structural attention within each graph snapshot and temporal attention across snapshots. Capturing both kinds of dependency yields strong performance on dynamic link prediction benchmarks.
    • Link: arXiv:1812.09430
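    • Sketch: the temporal half of DySAT, where each node attends over its own sequence of per-snapshot embeddings under a causal mask. Assumes the structural (per-snapshot) embeddings are already computed; illustrative only:

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Temporal block: each node attends over its own sequence of per-snapshot
    embeddings, with a causal mask so no snapshot sees the future."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=2, batch_first=True)

    def forward(self, snapshot_embs):
        # snapshot_embs: (nodes, time, dim) from the structural attention stage.
        t = snapshot_embs.size(1)
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
        out, _ = self.attn(snapshot_embs, snapshot_embs, snapshot_embs,
                           attn_mask=causal)
        return out[:, -1]                          # final-time node representations

embs = torch.randn(6, 4, 8)                        # 6 nodes, 4 snapshots, dim 8
print(TemporalAttention(dim=8)(embs).shape)        # torch.Size([6, 8])
```
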
  5. JODIE

    • Paper: Kumar, S., Zhang, X., & Leskovec, J. (2019). "Predicting Dynamic Embedding Trajectory in Temporal Interaction Networks." KDD.
    • Summary: JODIE focuses on temporal interaction networks (e.g., user-item interactions) and learns embedding trajectories rather than static embeddings. Two coupled recurrent networks update the user and item embeddings after each interaction, and a projection operator estimates where a user's embedding will drift between interactions.
    • Link: arXiv:1908.01207
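    • Sketch: the two mutually recursive update networks and the time-scaled projection operator û(t + Δ) = (1 + w·Δ) ⊙ u(t). A minimal sketch with GRU cells standing in for the paper's RNN updates; batching and the training loss are omitted:

```python
import torch
import torch.nn as nn

class JodieSketch(nn.Module):
    """Two coupled recurrent updates plus a time-scaled embedding projection."""
    def __init__(self, dim):
        super().__init__()
        self.user_rnn = nn.GRUCell(dim, dim)   # updates the user from the item
        self.item_rnn = nn.GRUCell(dim, dim)   # updates the item from the user
        self.time_w = nn.Linear(1, dim)        # learns the temporal drift vector

    def interact(self, user, item):
        return self.user_rnn(item, user), self.item_rnn(user, item)

    def project(self, user, dt):
        # Projection operator: u(t + dt) = (1 + w * dt) elementwise-times u(t).
        return (1 + self.time_w(dt)) * user

model = JodieSketch(dim=8)
user, item = torch.randn(1, 8), torch.randn(1, 8)
user, item = model.interact(user, item)              # one observed interaction
future = model.project(user, torch.tensor([[0.5]]))  # drift 0.5 time units ahead
print(future.shape)  # torch.Size([1, 8])
```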

These papers provide a comprehensive overview of both static and dynamic GNNs with pre-defined graph structures, showcasing the evolution and diversity of approaches in this rapidly growing field.