
Build GPU Variants of Current Images #1557

Open
sdwalker62 opened this issue Dec 24, 2021 · 24 comments
Labels
type:Enhancement A proposed enhancement to the docker images

Comments

@sdwalker62

Hello everyone! First, I just want to say that at work we almost exclusively use Jupyter as our data science platform, and we appreciate everything that the community and the Jupyter team have done to create such a wonderful tool!

I have searched through the current list of issues and pull requests open in this repository and have not found any that match this request, so I am starting a new one. Apologies if I have missed any. Our work necessitates the use of CUDA-enabled containers, as we primarily train deep learning models. We have made a slight modification to the Makefile that allows us to re-use most of the infrastructure already in place to build the various levels of notebooks.


I am aware of the fantastic work being done at https://github.com/iot-salzburg/gpu-jupyter/ and suggest that anyone who needs GPU-enabled containers check it out as a first stop. That solution will work for many, and it contains some additional features which are nice to have, but it is slightly different from what we propose.

The current docker-stacks images start from base-notebook, which is built on an Ubuntu 20.04 layer. Our proposal is to build a separate set of notebooks based on the nvidia/cuda:11.3.0-cudnn8-runtime-ubuntu20.04 image (we chose this for compatibility with PyTorch). This is similar to what the iot-salzburg repository does in its images, but it maintains the minimalism found in the base notebooks for those who are looking for a solid foundation for their own custom images. With the change to the Makefiles it would be easy to build two sets of images: one for the standard notebooks as they exist now, and another for the CUDA-enabled variants.
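To make this concrete, below is a rough command-line sketch of the proposed dual build. It is purely illustrative: the GPU_BASE variable and both make targets are stand-in names for the Makefile hook described above, not a claim about the project's actual targets.

  # Existing flow: build the standard Ubuntu-based images (illustrative target name)
  make build-all

  # Proposed flow: the same Dockerfiles, rebuilt on an NVIDIA CUDA runtime base
  # and published under separate tags (hypothetical variable and target names)
  GPU_BASE=nvidia/cuda:11.3.0-cudnn8-runtime-ubuntu20.04 make build-all-cuda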

To address the selection criteria found in the documentation for new features:

  • The majority of the data science community working on neural networks uses GPUs to train their models, and being able to pull down mirrors of the docker-stacks containers with CUDA baked in would be tremendously useful.
  • One of the most beautiful things about the docker-stacks project is the ability to base custom images on any of the notebook images, depending on the requirements of the user. Building two sets of containers would be in line with the current design philosophy of the project.
  • The complexity of this change is low. We have a simple Python script that replaces lines in base-notebook, and a new section in the Makefile that is minimal and could be hooked into the current build process.
  • The build times are identical to the current build times, as only the base layer changes. None of the current images will change; only new tags will be added.
  • Sustainment of the change should not be difficult at all, as the nvidia/cuda containers are regularly updated by NVIDIA. We can write any tests needed to support the pull request.

I would be happy to answer any questions about this proposal!

sdwalker62 added the type:Enhancement label Dec 24, 2021
@mathbunnyru
Member

mathbunnyru commented Feb 11, 2022

@sdwalker62 first of all, thank you for your suggestion!
I'm sorry it took such a long time to answer you.

I see growing interest in GPU-related docker stacks, because people ask about them more and more.

But I think right now it's not easy to add a new image - and in this case, it would be a whole set of new images.
The main reason is our complicated and slow build system.

The build times are identical to the current build times, as only the base layer changes.

Right now, if we add new images, this will add time to our workflow.
Please take a look at our proposal and what our build system might look like in the future.
#1407

The complexity of this change is low. We have a simple Python script that replaces lines in base-notebook, and a new section in the Makefile that is minimal and could be hooked into the current build process.

I think you can actually get rid of this patching, because we have a way to pass custom arguments to docker build, like this:

  DOCKER_BUILD_ARGS='--build-arg ROOT_CONTAINER="nvidia/cuda:12.3.1-devel-ubuntu22.04"' make build/docker-stacks-foundation

I think it builds the image you wanted to build.

Please tell me if this helps.

Update 2024.01.17: fixed the command to use a more recent image and the docker-stacks-foundation image (which is now the root image).
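As a quick sanity check (a hedged sketch, not an official workflow step), you can then confirm that the CUDA toolkit actually ended up in the resulting image. The local tag assumes the default OWNER of jupyter, and the nvcc path assumes the -devel flavour of the nvidia/cuda base:

  # Verify the CUDA compiler is present in the freshly built foundation image
  docker run --rm jupyter/docker-stacks-foundation /usr/local/cuda/bin/nvcc --version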

@mathbunnyru
Member

mathbunnyru commented Feb 11, 2022

This works because we have ARG ROOT_CONTAINER in our docker-stacks-foundation image.
https://github.com/jupyter/docker-stacks/blob/main/images/docker-stacks-foundation/Dockerfile#L6
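If you prefer to bypass the Makefile, the same ARG can be overridden with a plain docker build from a local checkout of the repository (a sketch; the output tag is arbitrary):

  # Build the foundation image directly, overriding the root container
  docker build images/docker-stacks-foundation \
    --build-arg ROOT_CONTAINER=nvidia/cuda:12.3.1-devel-ubuntu22.04 \
    --tag my-cuda-docker-stacks-foundation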

@mathbunnyru
Member

mathbunnyru commented Jul 5, 2022

@sdwalker62 if you're still interested in this issue - we've changed our build system completely, so it should probably be easier now to add CUDA-based images.

@markusschmaus

It would be awesome if such CUDA-based images were available.

@mathbunnyru
Member

I think it's actually possible right now to have CUDA-based images without increasing build time at all.
If someone wants to do it, please take a look here; this is where you will need to add the new steps:
https://github.com/jupyter/docker-stacks/blob/main/.github/workflows/docker.yml

mathbunnyru changed the title from "Small Change to Makefile to facilitate GPU Variants of Current Images" to "[ENH] - Build GPU Variants of Current Images" Aug 28, 2023
mathbunnyru changed the title from "[ENH] - Build GPU Variants of Current Images" to "Build GPU Variants of Current Images" Sep 10, 2023
@benz0li
Contributor

benz0li commented Oct 23, 2023

@sdwalker62 You may be interested in my/b-data's CUDA-enabled JupyterLab Python docker stack:

[...]

Similar projects

What makes this project different:

  1. Multi-arch: linux/amd64, linux/arm64/v8
  2. Derived from nvidia/cuda:11.8.0-cudnn8-devel-ubuntu22.04
    • including development libraries and headers
  3. TensorRT and TensorRT plugin libraries
    • including development libraries and headers
  4. IDE: code-server next to JupyterLab
  5. Just Python – no Conda / Mamba

See Notes for tweaks, settings, etc.

P.S.: I have whitelisted your GitHub account at https://demo.cuda.jupyter.b-data.ch so you may quickly test online.

mathbunnyru pinned this issue Jan 17, 2024
@mathbunnyru
Member

mathbunnyru commented Jan 17, 2024

I updated some advice in my comments above, so it works with the current set of images.

If someone comes across this issue and feels they need GPU images, please upvote this issue - this will make it clearer to me and the other maintainers that the community is really interested in such images.
Also, I would highly appreciate information on how to choose the relevant nvidia/cuda base image - there are many of them, and I'm not sure which version the community needs and wants the most.

@mathbunnyru
Member

New CUDA-enabled pytorch-notebook images are pushed and ready to use 🎉
https://quay.io/repository/jupyter/pytorch-notebook?tab=tags

@fzyzcjy

fzyzcjy commented Mar 12, 2024

Hi, I am interested in having a Docker image for Jupyter + PyTorch + GPU, because Docker is quite convenient and deep learning really needs GPUs.

Therefore, I wonder whether there are any shortcomings when using these? I would love to hear from you experts!

@mathbunnyru
Member

mathbunnyru commented Mar 27, 2024

@sdwalker62 @KopfKrieg @joglekara @TylerSpears @romainrossi @ChristofKaufmann @fzyzcjy @mfreeman451 @markusschmaus @benz0li
Now we have cuda-enabled variants of jupyter/pytorch-notebook and jupyter/tensorflow-notebook.
You can find more information here: https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html#cuda-enabled-variant.

These images are still based on regular Ubuntu images, but they install GPU versions of PyTorch or TensorFlow.
I am not sure builds on top of Nvidia Ubuntu images are possible (because of the licensing).
If you need such images, it would be nice to know the reasons and the use cases though.

I would like to thank @johanna-reiml-hpi for implementing the general variant concept and making it work for PyTorch (#2091), and @ChristofKaufmann for making it work for TensorFlow (#2100).

@yuvipanda
Contributor

@mathbunnyru do you think it would be a useful idea to make a blog post on the Jupyter blog announcing these?

@benz0li
Contributor

benz0li commented Mar 27, 2024

I am not sure builds on top of Nvidia Ubuntu images are possible (because of the licensing).

Everything branded NVIDIA and/or CUDA comes with proprietary licenses.

The nvidia/cuda images entail the NVIDIA Deep Learning Container License.

Every nvidia-* Python package entails its own NVIDIA and/or CUDA license terms.

Whether or not you are aware of it, you agree to their End User License Agreement (EULA).

@fzyzcjy

fzyzcjy commented Mar 27, 2024

Thanks for the image!

If you need such images, it would be nice to know the reasons and the use cases though.

My two cents: I have not tried it yet, but IIRC some pip packages require a proper CUDA environment to compile C++ code that is generated on the fly to maximize performance. (Maybe https://github.com/microsoft/DeepSpeed or some other lib, I do not remember very clearly.)

Whether or not you are aware of it, you agree to their End User License Agreement (EULA).

Wondering whether it is possible to mark Jupyter's corresponding images with that license, while keeping the other images under the original license.

@mathbunnyru
Member

@mathbunnyru do you think it would be a useful idea to make a blog post on the Jupyter blog announcing these?

Thanks for the suggestion. I’ll do it!

@benz0li
Contributor

benz0li commented Mar 29, 2024

I am not sure builds on top of Nvidia Ubuntu images are possible (because of the licensing).

Cross reference: Uploading of container developed on top of nvidia/cuda images (#224) · Issues · nvidia / container-images / cuda · GitLab

@mathbunnyru
Member

@mathbunnyru do you think it would be a useful idea to make a blog post on the Jupyter blog announcing these?

Thanks for the suggestion. I’ll do it!

Sent for a review.

@mathbunnyru
Member

@benz0li @yuvipanda could you please tell me what I need to do to make the GPU work?
Simply add --gpus all when running the docker image, or is there something else?

I don't use GPU versions of the images, so I don't have any hands-on experience, sorry about that.

@benz0li
Contributor

benz0li commented Apr 10, 2024

@benz0li @yuvipanda could you please tell me what I need to do to make the GPU work?

NVIDIA GPU + NVIDIA Linux driver + NVIDIA Container Toolkit

ℹ️ The host running the GPU-accelerated images only requires the NVIDIA driver; the CUDA toolkit does not have to be installed.

Prerequisites: See https://github.com/b-data/jupyterlab-python-docker-stack/blob/83e8c3b830db83b8c73457487605f44be9e4e487/CUDA.md#prerequisites

Simply add --gpus all when running the docker image, or is there something else?

Yes.

  • Docker: --gpus all or --gpus '"device=all"'
  • Podman: --device 'nvidia.com/gpu=all'

The above information is for Linux hosts. If you are working on a Windows host:

ℹ️ Current Apple hardware: No NVIDIA GPU = No CUDA support.
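For example, to check that a container can actually see the GPU (a quick sketch; any CUDA-enabled image will do, and the tag below is only an assumption - check the registry for the exact names):

  # Docker
  docker run --rm --gpus all quay.io/jupyter/pytorch-notebook:cuda12-latest nvidia-smi

  # Podman (CDI)
  podman run --rm --device nvidia.com/gpu=all quay.io/jupyter/pytorch-notebook:cuda12-latest nvidia-smi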

@ChristofKaufmann
Contributor

According to the NVIDIA docs: when the environment variable is set with ENV NVIDIA_VISIBLE_DEVICES all, as is done in the CUDA variants, you don't need to specify the devices on the command line. I haven't tested it, though.

@benz0li
Contributor

benz0li commented Apr 11, 2024

According to the NVIDIA docs: when the environment variable is set with ENV NVIDIA_VISIBLE_DEVICES all, as is done in the CUDA variants, you don't need to specify the devices on the command line. I haven't tested it, though.

@ChristofKaufmann This only works if either

  • the default runtime is set to nvidia
    • i.e. "default-runtime": "nvidia" set in /etc/docker/daemon.json

or

  • the runtime is set to nvidia using the CLI
    • i.e. --runtime=nvidia set on the docker run command-line

ℹ️ Specifying --gpus in the CLI automatically invokes the nvidia runtime.
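In other words (a hedged sketch; the image tag is an assumption):

  # NVIDIA_VISIBLE_DEVICES=all is already set in the CUDA variants, so explicitly
  # selecting the nvidia runtime is enough - no --gpus flag needed
  docker run --rm --runtime=nvidia quay.io/jupyter/pytorch-notebook:cuda12-latest nvidia-smi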

@ChristofKaufmann
Contributor

Thanks for the clarification!

@mathbunnyru
Member

NVIDIA GPU + NVIDIA Linux driver + NVIDIA Container Toolkit

[...]

Thanks, @benz0li 👍
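So, putting the pieces together, something like this should work (a minimal sketch; the tag name is an assumption - see the selecting-an-image docs linked earlier for the exact variant tags):

  # Start JupyterLab from the CUDA-enabled PyTorch image with all GPUs attached
  docker run -it --rm --gpus all -p 8888:8888 quay.io/jupyter/pytorch-notebook:cuda12-latest

  # Or just check that PyTorch can see a GPU
  docker run --rm --gpus all quay.io/jupyter/pytorch-notebook:cuda12-latest \
    python -c "import torch; print(torch.cuda.is_available())"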

@mathbunnyru
Member

Posted 🎉
https://blog.jupyter.org/cuda-enabled-jupyter-docker-images-8a9f8b8f2158

@mathbunnyru
Member

I'm unpinning this issue - I think our TensorFlow and PyTorch GPU-enabled images are already quite good, and even though having general GPU-enabled images would be great, it might not be relevant for all our users.

mathbunnyru unpinned this issue Dec 24, 2024