Formulate page-rank as a torch.nn Layer #21
Comments
Do you mean wrap the stateless page_rank function in a torch.nn.Module?

Yes, why didn't I think of that?

A simple way is to make all of the optional parameters values that are passed to the constructor.
Below is what I came up with.

import torch
from torch_ppr import page_rank


class PageRank(torch.nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return page_rank(edge_index=x)


edge_index = torch.as_tensor(data=[(0, 1), (1, 2), (1, 3), (2, 4)]).t()
model = PageRank()
print(model(edge_index))
What about the rest of the arguments? All of the following can be passed to page_rank as well (torch-ppr/src/torch_ppr/api.py, lines 34 to 40 in a5de688).
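One generic way to expose them, sketched below, is to store whatever keyword arguments the wrapper receives and forward them to page_rank on every call (the attribute name self.kwargs is just illustrative):

import torch
from torch_ppr import page_rank


class PageRank(torch.nn.Module):
    def __init__(self, **kwargs):
        super().__init__()
        # Keep the optional page_rank arguments, e.g. alpha=..., max_iter=..., device=...
        self.kwargs = kwargs

    def forward(self, edge_index):
        # Forward the stored keyword arguments together with the edge index.
        return page_rank(edge_index=edge_index, **self.kwargs)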
Hoping below is correct usage:

import torch
from torch_ppr import page_rank
from typing import Optional, Union

DeviceHint = Union[None, str, torch.device]


class PageRank(torch.nn.Module):
    def __init__(self,
                 add_identity: bool = False,
                 max_iter: int = 1000,
                 alpha: float = 0.05,
                 epsilon: float = 1.0e-04,
                 x0: Optional[torch.Tensor] = None,
                 use_tqdm: bool = False,
                 device: DeviceHint = None):
        super().__init__()
        # Store the optional arguments so forward() can pass them on to page_rank.
        self.add_identity = add_identity
        self.max_iter = max_iter
        self.alpha = alpha
        self.epsilon = epsilon
        self.x0 = x0
        self.use_tqdm = use_tqdm
        self.device = device

    def forward(self, x):
        return page_rank(edge_index=x,
                         add_identity=self.add_identity,
                         max_iter=self.max_iter,
                         alpha=self.alpha,
                         epsilon=self.epsilon,
                         x0=self.x0,
                         use_tqdm=self.use_tqdm,
                         device=self.device)


edge_index = torch.as_tensor(data=[(0, 1), (1, 2), (1, 3), (2, 4)]).t()
model = PageRank(device='cuda')
print(model(edge_index))

# Input something to the model
x = edge_index
torch.onnx.export(model,               # model being run
                  x,                   # model input (or a tuple for multiple inputs)
                  "page_rank.onnx",    # where to save the model (can be a file or file-like object)
                  export_params=False) # there are no trained weights to store in the model file
The ONNX conversion is not ready for this operator; the export raises:

torch.onnx.symbolic_registry.UnsupportedOperatorError: Exporting the operator ::sparse_coo_tensor to ONNX opset version 13 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.

Is there a work-around, i.e., doing the power_iteration with a regular (dense) matrix?
In theory, you can run the page-rank iterations with a full matrix; however, you lose much of the computational benefit and are restricted to rather small graphs. Essentially, you would need:

adj = torch.zeros(n, n)
adj[edge_index[0], edge_index[1]] = 1.0
adj = adj + adj.t()
if add_identity:
    adj = adj + torch.eye(adj.shape[0])
adj = adj / adj.sum(dim=1, keepdim=True).clamp_min(1.0e-08)
x = (1 - alpha) * (adj @ x) + alpha * x0
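Wrapped into a module, a self-contained dense variant could look roughly like the sketch below (not part of torch-ppr; the class name, the fixed num_nodes, the iteration count, and the uniform x0 are illustrative choices, and whether the indexing assignment exports cleanly still depends on the ONNX opset):

import torch


class DensePageRank(torch.nn.Module):
    """Dense power-iteration PageRank; mirrors the snippet above (sketch only)."""

    def __init__(self, num_nodes: int, alpha: float = 0.05, max_iter: int = 100,
                 add_identity: bool = False):
        super().__init__()
        self.num_nodes = num_nodes
        self.alpha = alpha
        self.max_iter = max_iter
        self.add_identity = add_identity

    def forward(self, edge_index: torch.Tensor) -> torch.Tensor:
        n = self.num_nodes
        # Build a dense, symmetrized adjacency matrix from the edge list.
        adj = torch.zeros(n, n, dtype=torch.float32)
        adj[edge_index[0], edge_index[1]] = 1.0
        adj = adj + adj.t()
        if self.add_identity:
            adj = adj + torch.eye(n)
        adj = adj / adj.sum(dim=1, keepdim=True).clamp_min(1.0e-08)
        # Run a fixed number of power iterations from a uniform start vector.
        x0 = torch.full((n,), 1.0 / n)
        x = x0
        for _ in range(self.max_iter):
            x = (1 - self.alpha) * (adj @ x) + self.alpha * x0
        return x


edge_index = torch.as_tensor(data=[(0, 1), (1, 2), (1, 3), (2, 4)]).t()
model = DensePageRank(num_nodes=5)
print(model(edge_index))

Note that tracing with torch.onnx.export unrolls the Python loop, so the exported graph will contain max_iter copies of the update step.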
Thank you for this repo!
The reason to request a 'layer' formulation is to convert the function page_rank to an ONNX graph with torch.onnx (which only accepts models). Once I have the ONNX model, I can compile it for different hardware (other than CUDA).
Maybe only the forward pass is needed, no backward pass, although I think the computation will be differentiable.
Thanks.