Thank you for your work on HyperTransformer! When I tried HyperTransformer on the CAVE dataset for 32x scale-factor super-resolution, the loss was enormous. To adapt the code to the 32x task, I upsampled the MS_image by 4x and then fed it into the feature extractor in the backbone (i.e., self.SFE in the code). The HSIs in CAVE have been normalized to 0~1, but the numerical range of the reconstructed HSI is also huge, about 3e4.
Size of MS_image: 8x8x31; size of PAN_image (RGB image): 32x32x3; batch size: 5.
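For reference, a minimal sketch of the upsampling step described above, assuming NCHW tensors and bicubic interpolation (the tensor names and interpolation mode are my assumptions, not taken from the repo):

```python
import torch
import torch.nn.functional as F

# Hypothetical tensors matching the shapes above (NCHW layout):
# MS image: batch 5, 31 bands, 8x8; PAN/RGB image: batch 5, 3 channels, 32x32.
ms = torch.rand(5, 31, 8, 8)
pan = torch.rand(5, 3, 32, 32)

# Upsample the MS image 4x before feeding it to the backbone's feature
# extractor (self.SFE in the repo). Bicubic is one common choice here.
ms_up = F.interpolate(ms, scale_factor=4, mode="bicubic", align_corners=False)

print(ms_up.shape)  # torch.Size([5, 31, 32, 32]) -- spatially matches the PAN image
```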
Training Epoch: 1 Loss: 3612763.764423077
Training Epoch: 2 Loss: 766998.2764423077
Training Epoch: 3 Loss: 350786.46514423075
Training Epoch: 4 Loss: 230237.5733173077
Training Epoch: 5 Loss: 184773.47115384616
Training Epoch: 6 Loss: 160125.31189903847
Training Epoch: 7 Loss: 137537.77524038462
Training Epoch: 8 Loss: 117618.86358173077
Training Epoch: 9 Loss: 106500.88341346153
Training Epoch: 10 Loss: 96507.72115384616
Furthermore, the output is displayed in RGB. I want to know if there is something wrong.
(https://github.com/Caoxuheng/imgs/blob/main/1.png)
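As a sanity check before rendering the RGB preview, one could report the numeric range of the reconstruction and clamp it back to [0, 1], since the network output apparently far exceeds the normalized input range. This is an illustrative debugging sketch; the function name and workflow are hypothetical and not from the HyperTransformer code:

```python
import torch

def check_and_clamp(recon: torch.Tensor) -> torch.Tensor:
    """Print the numeric range of a reconstructed HSI and clamp it to
    [0, 1] before building an RGB preview from selected bands.
    (Illustrative helper, not part of the repo.)"""
    print(f"min={recon.min().item():.3g}, max={recon.max().item():.3g}")
    return recon.clamp(0.0, 1.0)

# Simulate the runaway output range reported above (~3e4).
recon = torch.randn(5, 31, 32, 32) * 3e4
safe = check_and_clamp(recon)
```

A range this far outside [0, 1] usually points to a scaling or loss-normalization issue upstream rather than a display bug, so clamping is only a stopgap for visualization.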