Experiments
Datasets
Bitcoin-Alpha, Bitcoin-OTC, WikiRfA, Epinions: https://snap.stanford.edu/data
Slashdot: https://www.aminer.cn/data-sna
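The SNAP signed-network files are commonly distributed as comma-separated edge lists of the form SOURCE,TARGET,RATING,TIME; a minimal loading sketch, assuming that format (the sample rows below are illustrative, not real Bitcoin-Alpha data), is:

```python
import csv
import io

def load_signed_edges(fh):
    """Parse a SNAP-style signed edge list (SOURCE,TARGET,RATING,TIME).
    Positive ratings become +1 links, negative ratings become -1 links."""
    edges = []
    for src, dst, rating, _time in csv.reader(fh):
        sign = 1 if int(rating) > 0 else -1
        edges.append((int(src), int(dst), sign))
    return edges

# Toy sample in the assumed format.
sample = "1,2,4,1407470400\n2,3,-10,1376539200\n3,1,1,1369713600\n"
edges = load_signed_edges(io.StringIO(sample))
print(edges)  # [(1, 2, 1), (2, 3, -1), (3, 1, 1)]
```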
Parameter Settings
Dimensionality of embedding = 128
SNE [PAKDD'17]
Learning rate = 0.025
Number of training samples = 10 million
Window size = 2
Maximum length of random walk path = 40
Number of random walks starting at each node = 20
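The random-walk corpus implied by these settings (20 walks per node, maximum walk length 40) can be sketched as follows; the toy graph and function name are illustrative, not taken from SNE's released code:

```python
import random

def generate_walks(adj, num_walks=20, walk_length=40, seed=0):
    """Uniform random walks: num_walks walks of at most walk_length
    nodes starting from every node; a walk stops early at a sink node."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in adj:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = adj[walk[-1]]
                if not neighbors:
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# Toy directed graph for illustration.
adj = {1: [2, 3], 2: [3], 3: [1]}
walks = generate_walks(adj)
print(len(walks))  # 3 nodes * 20 walks = 60
```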
SiNE [SDM'17]
Learning rate = 0.5
L1 regularization = 0.001
L2 regularization = 0.0001
SIDE [WWW'18]
Learning rate = 0.025
Window size = 5
Maximum length of random walk path = 40
Number of random walks starting at each node = 80
BESIDE [CIKM'18]
Learning rate = 0.01
Regularization = 0.0001
SGCN [ICDM'18]
Learning rate = 0.5
Number of layers = 2
Regularization weight of the second loss term = 5
SLF [KDD'19]
Learning rate = 0.025
Sample size of the null relationships = 10 (for Bitcoin-OTC, Epinions), 20 (for Bitcoin-Alpha, WikiRfA, Slashdot)
Initialization parameter for the logistic activation function (i.e., p0) = 0.001
node2vec [KDD'16]
Number of random walks starting at each node = 10
Maximum length of random walk path = 80
Window size = 10
GraphGAN [AAAI'18]
Learning rate = 0.003
L2 regularization = 0.00005
Window size = 2
Number of samples for generator = 20
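For reproducibility, the per-method settings above can be collected into a single configuration mapping; the dictionary below simply transcribes the values listed in this section (the key names are our own, and the per-dataset SLF null-sample sizes are kept separately):

```python
EMBED_DIM = 128  # shared across all methods

CONFIGS = {
    "SNE":      {"lr": 0.025, "num_samples": 10_000_000, "window": 2,
                 "walk_length": 40, "walks_per_node": 20},
    "SiNE":     {"lr": 0.5, "l1_reg": 0.001, "l2_reg": 0.0001},
    "SIDE":     {"lr": 0.025, "window": 5, "walk_length": 40,
                 "walks_per_node": 80},
    "BESIDE":   {"lr": 0.01, "reg": 0.0001},
    "SGCN":     {"lr": 0.5, "num_layers": 2, "loss2_reg": 5},
    "SLF":      {"lr": 0.025, "p0": 0.001},
    "node2vec": {"walks_per_node": 10, "walk_length": 80, "window": 10},
    "GraphGAN": {"lr": 0.003, "l2_reg": 5e-5, "window": 2,
                 "generator_samples": 20},
}

# SLF null-relationship sample size varies by dataset.
SLF_NULL_SAMPLES = {"Bitcoin-OTC": 10, "Epinions": 10,
                    "Bitcoin-Alpha": 20, "WikiRfA": 20, "Slashdot": 20}

print(CONFIGS["SIDE"]["walks_per_node"])  # 80
```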
Experimental Results
RQ1: Are signed NE methods consistently more effective than unsigned NE methods across various types of tasks?
NOTE: On Epinions, the accuracies of GraphGAN and SGCN could not be obtained because their training did not finish within a week.
RQ2: Among the signed NE methods, does utilizing negative links provide higher accuracy in various tasks?
NOTE: On Epinions, the accuracies of SGCN_ALL and SGCN_P could not be obtained because their training did not finish within a week.