[1] Pan, S., Hu, R., Long, G., Jiang, J., Yao, L., & Zhang, C. (2018). Adversarially regularized graph autoencoder for graph embedding. arXiv preprint arXiv:1802.04407.
[2] Kipf, T. N., & Welling, M. (2016). Variational graph auto-encoders. arXiv preprint arXiv:1611.07308.
[3] Kipf, T. N., & Welling, M. (2016). Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
[4] Wang, C., Pan, S., Long, G., Zhu, X., & Jiang, J. (2017). MGAE: Marginalized graph autoencoder for graph clustering. In Proceedings of the 2017 ACM Conference on Information and Knowledge Management (pp. 889–898).
[5] Li, F., Zhu, Z., Zhang, X., Cheng, J., & Zhao, Y. (2019). Diffusion-induced graph representation learning. Neurocomputing, 360, 220–229.
[6] Park, J., Lee, M., Chang, H. J., Lee, K., & Choi, J. Y. (2019). Symmetric graph convolutional autoencoder for unsupervised graph representation learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 6519–6528).
[7] Rezende, D. J., Mohamed, S., & Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning (pp. 1278–1286). PMLR.
[8] Xie, Q., Huang, J., Du, P., & Peng, M. (2021). Graph relational topic model with higher-order graph attention auto-encoders, 2604–2613.
[9] He, L., Bai, L., Yang, X., Du, H., & Liang, J. (2023). High-order graph attention network. Information Sciences, 630, 222–234.
[10] Li, B., Jiao, P., He, D., Zhang, W., & Jin, D. (2019). Network-specific variational autoencoder for embedding in attribute networks. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (pp. 2663–2669).
[11] Niknam, G., Molaei, S., Zare, H., Clifton, D., & Pan, S. (2023). Graph representation learning based on deep generative Gaussian mixture models. Neurocomputing, 523, 157–169.
[12] Davidson, T. R., Falorsi, L., De Cao, N., Kipf, T., & Tomczak, J. M. (2018). Hyperspherical variational auto-encoders. arXiv preprint arXiv:1804.00891.
[13] Zheng, S., Zhu, Z., Zhang, X., Liu, Z., Cheng, J., & Zhao, Y. (2020). Distribution-induced bidirectional generative adversarial network for graph representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7224–7233).
[14] Dilokthanakul, N., Mediano, P. A., Garnelo, M., Lee, M. C., Salimbeni, H., Arulkumaran, K., & Shanahan, M. (2016). Deep unsupervised clustering with Gaussian mixture variational autoencoders. arXiv preprint arXiv:1611.02648.
[15] Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
[16] Bojchevski, A., & Günnemann, S. (2017). Deep Gaussian embedding of graphs: Unsupervised inductive learning via ranking. arXiv preprint arXiv:1707.03815.
[17] Namata, G., London, B., Getoor, L., & Huang, B. (2012). Query-driven active surveying for collective classification. In 10th International Workshop on Mining and Learning with Graphs (Vol. 8, p. 1).
[18] Sen, P., Namata, G., Bilgic, M., Getoor, L., Gallagher, B., & Eliassi-Rad, T. (2008). Collective classification in network data. AI Magazine, 29(3), 93–106.
[19] Wang, W., Liu, X., Jiao, P., Chen, X., & Jin, D. (2018). A unified weakly supervised framework for community detection and semantic matching. In Pacific-Asia Conference on Knowledge Discovery and Data Mining (pp. 218–230). Springer.
[20] Shchur, O., Mumme, M., Bojchevski, A., & Günnemann, S. (2018). Pitfalls of graph neural network evaluation. arXiv preprint arXiv:1811.05868.
[21] Tang, L., & Liu, H. (2009). Relational learning via latent social dimensions. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 817–826).
[22] Huang, X., Li, J., & Hu, X. (2017). Label informed attributed network embedding. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining (pp. 731–739).
[23] Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
[24] Yang, Z., Cohen, W., & Salakhudinov, R. (2016). Revisiting semi-supervised learning with graph embeddings. In International Conference on Machine Learning (pp. 40–48). PMLR.
[25] Ma, J., Tang, W., Zhu, J., & Mei, Q. (2019). A flexible generative framework for graph-based semi-supervised learning. Advances in Neural Information Processing Systems, 32.
[26] Zhang, Z., Cui, P., Pei, J., Wang, X., & Zhu, W. (2021). Eigen-GNN: A graph structure preserving plug-in for GNNs. IEEE Transactions on Knowledge and Data Engineering.
[27] Zhu, M., Wang, X., Shi, C., Ji, H., & Cui, P. (2021). Interpreting and unifying graph neural networks with an optimization framework. In Proceedings of the Web Conference 2021 (pp. 1215–1226).
[28] Song, Z., Zhang, Y., & King, I. (2022). Towards an optimal asymmetric graph structure for robust semi-supervised node classification. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 1656–1665).
[29] Xu, K., Li, C., Tian, Y., Sonobe, T., Kawarabayashi, K.-i., & Jegelka, S. (2018). Representation learning on graphs with jumping knowledge networks. In International Conference on Machine Learning (pp. 5453–5462). PMLR.
[30] Chen, M., Wei, Z., Huang, Z., Ding, B., & Li, Y. (2020). Simple and deep graph convolutional networks. In International Conference on Machine Learning (pp. 1725–1735). PMLR.
[31] Wang, Y., Yi, K., Liu, X., Wang, Y. G., & Jin, S. (2022). ACMP: Allen-Cahn message passing with attractive and repulsive forces for graph neural networks. In The Eleventh International Conference on Learning Representations.
[32] Zhao, T., Liu, Y., Neves, L., Woodford, O., Jiang, M., & Shah, N. (2021). Data augmentation for graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, 35, 11015–11023.
[33] Liu, M., Gao, H., & Ji, S. (2020). Towards deeper graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 338–348).
[34] Monti, F., Boscaini, D., Masci, J., Rodola, E., Svoboda, J., & Bronstein, M. M. (2017). Geometric deep learning on graphs and manifolds using mixture model CNNs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 5115–5124).
[35] Velickovic, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., & Bengio, Y. (2017). Graph attention networks. arXiv preprint arXiv:1710.10903.
[36] Hamilton, W., Ying, Z., & Leskovec, J. (2017). Inductive representation learning on large graphs. Advances in Neural Information Processing Systems, 30.
[37] Verma, V., Qu, M., Kawaguchi, K., Lamb, A., Bengio, Y., Kannala, J., & Tang, J. (2021). GraphMix: Improved training of GNNs for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 35, 10024–10032.
[38] Chapelle, O., Schölkopf, B., & Zien, A. (2009). Semi-supervised learning (Chapelle, O. et al., Eds.; 2006) [Book reviews]. IEEE Transactions on Neural Networks, 20(3), 542–542.
[39] Wang, X., Zhu, M., Bo, D., Cui, P., Shi, C., & Pei, J. (2020). AM-GCN: Adaptive multi-channel graph convolutional networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 1243–1253).
[40] Defferrard, M., Bresson, X., & Vandergheynst, P. (2016). Convolutional neural networks on graphs with fast localized spectral filtering. Advances in Neural Information Processing Systems, 29.
[41] Wu, J., He, J., & Xu, J. (2019). DEMO-Net: Degree-specific graph neural networks for node and graph classification. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 406–415).
[42] Abu-El-Haija, S., Perozzi, B., Kapoor, A., Alipourfard, N., Lerman, K., Harutyunyan, H., Ver Steeg, G., & Galstyan, A. (2019). MixHop: Higher-order graph convolutional architectures via sparsified neighborhood mixing. In International Conference on Machine Learning (pp. 21–29). PMLR.
[43] Perozzi, B., Al-Rfou, R., & Skiena, S. (2014). DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 701–710).
[44] Tang, J., Qu, M., Wang, M., Zhang, M., Yan, J., & Mei, Q. (2015). LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web (pp. 1067–1077).
[45] Zhu, Y., Xu, Y., Yu, F., Liu, Q., Wu, S., & Wang, L. (2020). Deep graph contrastive representation learning. arXiv preprint arXiv:2006.04131.
[46] Zhu, Y., Xu, Y., Yu, F., Liu, Q., Wu, S., & Wang, L. (2021). Graph contrastive learning with adaptive augmentation. In Proceedings of the Web Conference 2021 (pp. 2069–2080).
[47] Peng, Z., Huang, W., Luo, M., Zheng, Q., Rong, Y., Xu, T., & Huang, J. (2020). Graph representation learning via graphical mutual information maximization. In Proceedings of The Web Conference 2020 (pp. 259–270).
[48] Fatemi, B., El Asri, L., & Kazemi, S. M. (2021). SLAPS: Self-supervision improves structure learning for graph neural networks. Advances in Neural Information Processing Systems, 34, 22667–22681.