Hypergraph example in documentation

Hi DGL team,

I want to ask about the incidence matrix in the HGNN example here: Hypergraph Neural Networks — DGL 1.0.3 documentation.

In the Loading Data part with the Cora dataset, you use its adjacency matrix plus self-loops to get the incidence matrix of the co-citation hypergraph. If I understand correctly, shouldn't the final matrix be transposed to obtain the incidence matrix of shape (num_nodes, num_hyperedges)?
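To make sure we are talking about the same construction, here is a minimal sketch of what I mean (using a small toy undirected graph, not the actual tutorial code or the real Cora data): the matrix is built as the adjacency matrix plus self-loops, H = A + I.

```python
import numpy as np

# Toy undirected citation graph with 4 papers (not the real Cora data):
# edges 0-1, 0-2, 1-3.  A is the symmetric adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])

# Incidence matrix of the co-citation hypergraph: adjacency plus self-loops.
H = A + np.eye(A.shape[0], dtype=int)

print(H.shape)  # (4, 4): is this (num_nodes, num_hyperedges) or its transpose?
print(H)
```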

Looking forward to your insights.

As you suggested, the incidence matrix is of shape (num_nodes, num_hyperedges). It just happens that num_hyperedges equals num_nodes in this particular example.
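To make that concrete (a toy sketch, not the tutorial's code): in the co-citation construction, hyperedge j collects paper j together with its neighbors, so there is exactly one hyperedge per node and the columns of H index the hyperedges.

```python
import numpy as np

# Same toy graph as above; hyperedge j = {j} union neighbours(j).
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])
hyperedges = [set(np.flatnonzero(A[:, j])) | {j} for j in range(A.shape[0])]

# Build H column by column: H[i, j] = 1 iff node i belongs to hyperedge j.
H = np.zeros_like(A)
for j, e in enumerate(hyperedges):
    H[list(e), j] = 1

print(len(hyperedges), A.shape[0])               # num_hyperedges == num_nodes
print((H == A + np.eye(4, dtype=int)).all())     # True: same matrix as A + I
```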

Thanks a lot for the reply! I totally agree that it just happens to be the case. The part I am confused about is that after adding the self-loops, the rows seem to correspond to hyperedges (like [1, 1, 1, 1] in the first row) rather than the columns, so do we need to transpose it to get the incidence matrix?

This might depend on whether your graph is directed or not. Most GNN work deals with undirected graphs, where A is a symmetric matrix and it simply does not matter.
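A quick check of that point (toy matrices, not the tutorial code): for an undirected graph A + I is symmetric, so H and its transpose are identical; only for a directed citation graph would the orientation matter.

```python
import numpy as np

# Undirected case: A is symmetric, so H = A + I equals its transpose and it
# makes no difference whether rows are read as nodes or as hyperedges.
A_undirected = np.array([[0, 1, 1, 0],
                         [1, 0, 0, 1],
                         [1, 0, 0, 0],
                         [0, 1, 0, 0]])
H = A_undirected + np.eye(4, dtype=int)
print(np.array_equal(H, H.T))  # True

# Directed case: A is not symmetric, so H != H.T and you would have to be
# explicit about whether rows index nodes or hyperedges.
A_directed = np.array([[0, 1, 1, 0],
                       [0, 0, 0, 1],
                       [0, 0, 0, 0],
                       [0, 0, 0, 0]])
H_dir = A_directed + np.eye(4, dtype=int)
print(np.array_equal(H_dir, H_dir.T))  # False
```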

Thanks for the answer!

