Hi,
Thank you so much for the amazing work. I have one question about the implementation.
In the paper, the propagation rule in matrix form is
E^(l) = LeakyReLU( (L + I) E^(l-1) W_1^(l) + L E^(l-1) ⊙ E^(l-1) W_2^(l) )
i.e. a single LeakyReLU applied to the sum of the two transformed message terms. But in the code, it seems you apply leaky_relu to each part separately and then add the results. I am a bit confused about how the implementation matches the equation:
side_embeddings = tf.concat(temp_embed, 0)
# transformed sum messages of neighbors.
sum_embeddings = tf.nn.leaky_relu(
    tf.matmul(side_embeddings, self.weights['W_gc_%d' % k]) + self.weights['b_gc_%d' % k])
# bi messages of neighbors (element-wise product).
bi_embeddings = tf.multiply(ego_embeddings, side_embeddings)
# transformed bi messages of neighbors.
bi_embeddings = tf.nn.leaky_relu(
    tf.matmul(bi_embeddings, self.weights['W_bi_%d' % k]) + self.weights['b_bi_%d' % k])
# aggregate the two (already activated) message terms.
ego_embeddings = sum_embeddings + bi_embeddings
# message dropout (TF1-style keep_prob argument).
ego_embeddings = tf.nn.dropout(ego_embeddings, 1 - self.mess_dropout[k])
# normalize the distribution of embeddings.
norm_embeddings = tf.math.l2_normalize(ego_embeddings, axis=1)
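For comparison, here is what I believe a literal transcription of the equation would look like. This is only a sketch, not your code; it reuses self.weights, ego_embeddings, and side_embeddings from the snippet above and assumes they are defined the same way:

# Hypothetical rewrite matching the paper's equation verbatim:
# one LeakyReLU over the summed messages instead of one per message.
sum_messages = (tf.matmul(side_embeddings, self.weights['W_gc_%d' % k])
                + self.weights['b_gc_%d' % k])
bi_messages = (tf.matmul(tf.multiply(ego_embeddings, side_embeddings),
                         self.weights['W_bi_%d' % k])
               + self.weights['b_bi_%d' % k])
ego_embeddings = tf.nn.leaky_relu(sum_messages + bi_messages)

Since LeakyReLU is not additive, the two forms agree only where the two message terms have the same sign element-wise, so they are not mathematically identical in general.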