From 30545d63809e65aaf8ce7acd44de43f85d887eb3 Mon Sep 17 00:00:00 2001
From: WangYue0000 <142971062+WangYue0000@users.noreply.github.com>
Date: Wed, 11 Dec 2024 06:56:29 +0800
Subject: [PATCH] Update checkpointing documentation to mark
 resume_from_checkpoint as deprecated (#20361) (#20477)

* Update checkpointing documentation to mark resume_from_checkpoint as deprecated

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update docs/source-pytorch/common/checkpointing_basic.rst

Co-authored-by: Luca Antiga

* Update docs/source-pytorch/common/checkpointing_basic.rst

Co-authored-by: Luca Antiga

* Address review comments

* Address review comments

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Luca Antiga
Co-authored-by: Luca Antiga
---
 .../common/checkpointing_basic.rst            | 24 ++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/docs/source-pytorch/common/checkpointing_basic.rst b/docs/source-pytorch/common/checkpointing_basic.rst
index 5c74178f0eaaa..1026e972849ef 100644
--- a/docs/source-pytorch/common/checkpointing_basic.rst
+++ b/docs/source-pytorch/common/checkpointing_basic.rst
@@ -20,6 +20,13 @@ PyTorch Lightning checkpoints are fully usable in plain PyTorch.
 
 ----
 
+.. important::
+
+   **Deprecated:** ``resume_from_checkpoint``
+
+   The ``resume_from_checkpoint`` Trainer argument was deprecated in PyTorch Lightning v1.5 and removed in v2.0. To resume training from a checkpoint, pass the ``ckpt_path`` argument to the ``fit()`` method instead.
+   Please update your code accordingly to avoid compatibility issues.
+
 ************************
 Contents of a checkpoint
 ************************
@@ -197,16 +204,31 @@ You can disable checkpointing by passing:
 
 ----
 
+
 *********************
 Resume training state
 *********************
 
 If you don't just want to load weights, but instead restore the full training, do the following:
 
+Correct usage:
+
 .. code-block:: python
 
     model = LitModel()
     trainer = Trainer()
 
     # automatically restores model, epoch, step, LR schedulers, etc...
-    trainer.fit(model, ckpt_path="some/path/to/my_checkpoint.ckpt")
+    trainer.fit(model, ckpt_path="path/to/your/checkpoint.ckpt")
+
+.. warning::
+
+    The ``resume_from_checkpoint`` Trainer argument was deprecated in PyTorch Lightning v1.5 and removed in v2.0.
+    To resume training from a checkpoint, use the ``ckpt_path`` argument of the ``fit()`` method instead.
+
+Incorrect (deprecated) usage:
+
+.. code-block:: python
+
+    trainer = Trainer(resume_from_checkpoint="path/to/your/checkpoint.ckpt")
+    trainer.fit(model)