
Code for the paper: "A Novel Cross Fusion Model with Fine-grained Detail Reconstruction for Remote Sensing Image Pan-sharpening", GSIS 2024.


A novel cross fusion model with fine-grained detail reconstruction for remote sensing image pan-sharpening.





Abstract


Pan-sharpening aims to obtain high-resolution multispectral (HRMS) images by integrating the information in the panchromatic (PAN) and multispectral (MS) images. Existing pan-sharpening methods have demonstrated impressive sharpening performance. However, these methods inherently overlook the complementary characteristics of, and interaction between, diverse source images, resulting in sharpened outcomes accompanied by distortion. To solve these problems, we construct a novel cross fusion model with fine-grained detail reconstruction from a frequency-domain perspective. The motivation of the model is twofold: (1) to reconstruct spatial detail representations from diverse source images, laying the foundation for the generation of fine details in the subsequent fused images; and (2) to enhance the interaction between diverse source features during the fusion process in order to attain high-fidelity fusion outcomes. Based on this theoretical model, we develop a frequency-spectral dual-domain cross fusion network (CF2N) using deep learning. The CF2N consists of two main stages: frequency-domain dominated detail reconstruction (FD2R) and frequency-spectral cross fusion (FSCF). Specifically, a more reasonable reconstruction of fine frequency details in HRMS can be achieved by performing adaptive weighted fusion of frequency details in the FD2R stage. Furthermore, the FSCF module seamlessly integrates frequency- and spectral-domain details in a highly interactive cross fusion manner. As a result, the CF2N is able to attain results with high frequency-spectral fidelity and excellent interpretability. Extensive experiments show the superior performance of our method over the state of the art, while maintaining high efficiency. All implementations of this work will be published at our website.

Method

The overall framework:


Flowchart of the proposed CF2N, which is guided by the constructed cross fusion model. The CF2N consists of two main stages: FD2R and FSCF.
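To illustrate the idea behind the FD2R stage, the sketch below performs adaptive weighted fusion of frequency details from two sources in PyTorch. The module name, weighting network, and input shapes are assumptions for illustration, not the paper's exact design:

```python
import torch
import torch.nn as nn


class FrequencyDetailFusion(nn.Module):
    """Toy sketch of FD2R-style adaptive weighted fusion of frequency
    details from a PAN input and an upsampled MS input (an assumption,
    not the repository's implementation)."""

    def __init__(self, channels):
        super().__init__()
        # Predict per-location fusion weights from the two magnitude spectra
        self.weight_net = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, pan, ms_up):
        # Move both sources to the frequency domain
        pan_f = torch.fft.rfft2(pan, norm="ortho")
        ms_f = torch.fft.rfft2(ms_up, norm="ortho")
        # Adaptive weights computed from the magnitude spectra
        w = self.weight_net(torch.cat([pan_f.abs(), ms_f.abs()], dim=1))
        # Weighted cross fusion of the complex spectra
        fused_f = w * pan_f + (1 - w) * ms_f
        # Back to the spatial domain at the original resolution
        return torch.fft.irfft2(fused_f, s=pan.shape[-2:], norm="ortho")


pan = torch.rand(1, 4, 64, 64)    # PAN replicated to 4 bands (toy input)
ms_up = torch.rand(1, 4, 64, 64)  # upsampled MS (toy input)
out = FrequencyDetailFusion(4)(pan, ms_up)
print(out.shape)  # torch.Size([1, 4, 64, 64])
```

Learning the weights from magnitude spectra lets the fusion favor whichever source carries stronger detail at each frequency, which is the intuition behind reconstructing fine details before the spectral-domain fusion.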

Dependencies

Our released implementation is tested on:

  • Ubuntu 20.04
  • Python 3.7.x
  • PyTorch 1.11 / torchvision 0.12.0
  • Tensorboard 1.8
  • NVIDIA CUDA 11.3
  • 2x NVIDIA RTX 3060
$ cd CF2N
$ pip install -r requirements.yaml

The provided requirements.yaml lists all libraries installed in our conda environment; you may install only the ones needed for this project. Note that requirements.yaml might differ slightly across machines. If a specific version cannot be found, try installing a related version with pip first.

Dataset

The MS data supporting the findings of this study are available in PanCollection.

The HS data supporting the findings of this study are available in HyperPanCollection.

The SAR data supporting the findings of this study are available in SEN2MS-CR.

You can download the corresponding datasets via the links above.

Training

You can easily integrate your methodology into our framework.

$ cd CF2N
# An example command for training
$ python train.py --option_path option.yml

During training, TensorBoard logs are saved under the experiments directory. To launch TensorBoard:

$ cd ./logs/CF2N
$ tensorboard --logdir=. --bind_all

The tensorboard visualization includes metric curves and map visualization.
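As a minimal sketch of the kind of metric logging the training loop performs, the snippet below writes scalar curves with PyTorch's SummaryWriter. The log path and tag names are assumptions, not the repository's exact code:

```python
from torch.utils.tensorboard import SummaryWriter

# Log directory assumed to mirror the ./logs/CF2N layout used above
writer = SummaryWriter(log_dir="./logs/CF2N/demo")
for epoch in range(3):
    # In the real training loop these would be loss/metric values per epoch
    writer.add_scalar("metrics/loss", 1.0 / (epoch + 1), epoch)
writer.close()
```

Pointing `tensorboard --logdir` at the parent directory then renders these scalars as the metric curves mentioned above.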

Testing

Testing with a batch size of 1 is recommended.

$ cd ./

# An example command for testing
$ python test.py --option_path option.yml
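The batch-size-1 recommendation above can be sketched with a toy test loop; the dataset shapes and variable names are placeholders, not the repository's actual loaders:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy test set: 4 samples of low-res MS tiles paired with PAN tiles
test_set = TensorDataset(
    torch.rand(4, 4, 16, 16),  # MS: 4 bands, 16x16
    torch.rand(4, 1, 64, 64),  # PAN: 1 band, 64x64
)
# batch_size=1 so each iteration processes one full-resolution tile,
# keeping GPU memory use minimal at test time
loader = DataLoader(test_set, batch_size=1, shuffle=False)

for ms, pan in loader:
    assert ms.shape[0] == 1 and pan.shape[0] == 1
```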

Pre-trained Models and Results

• Pre-trained Models

We provide the trained models at [download link]. You can test directly with our trained weights.

• The reduced-resolution test results in the multispectral pan-sharpening task (taking WorldView-3 as an example).


• The full-resolution test results in the multispectral pan-sharpening task (taking QuickBird as an example).


• The results in the hyperspectral pan-sharpening task.


• The results in the SAR-optical fusion task.


• Visualization of detail representation.


Eight examples of detail representation in different methods. The first row shows the original image pairs, with PAN on the left and MS on the right; the second, third, and fourth rows show the detail representations, the fusion results, and the corresponding AEMs, respectively, with FusionNet on the left and the proposed CF2N on the right.

We provide all test results at [download link].

Citation

This paper is published in Geo-spatial Information Science (GSIS), 2024.

@article{doi:10.1080/10095020.2024.2416899,
  author    = {Chuang Liu and Zhiqi Zhang and Mi Wang and Shao Xiang and Guangqi Xie},
  title     = {A novel cross fusion model with fine-grained detail reconstruction for remote sensing image pan-sharpening},
  journal   = {Geo-spatial Information Science},
  volume    = {0},
  number    = {0},
  pages     = {1--29},
  year      = {2024},
  publisher = {Taylor \& Francis},
  doi       = {10.1080/10095020.2024.2416899},
  url       = {https://doi.org/10.1080/10095020.2024.2416899}
}

Contact

We are glad to hear from you. If you have any questions, please feel free to contact us.
