
Switch the WebGPU Canvas Interop to be WebGPU first #53

Open
greggman opened this issue Nov 3, 2024 · 2 comments

Comments


greggman commented Nov 3, 2024

IIUC, there is a proposal to allow a canvas to transferToWebGPU.

This seems like it has several issues.

  • The canvas might not be compatible with the WebGPU device. In that case, a copy is needed to transfer the canvas's contents into the WebGPU device and back.
  • The canvas might be the wrong texture format (the app can't choose the format the way it can with WebGPU).
  • The canvas format might change on each transfer, forcing the app to be built around checking for format changes and remaking resources on demand. (Mozilla confirmed this is an issue.)
  • Setting canvas.width / canvas.height after transferring to WebGPU is strange.
  • Texture management is too magic.
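
For concreteness, here is a rough sketch of the canvas-first flow these issues refer to. The method names are assumed from the transferToGPUTexture proposal mentioned in the replies; the exact signatures may differ:

```js
// Sketch of the canvas-first proposal (method names assumed; details may differ).
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();
const ctx = document.querySelector('canvas').getContext('2d');

// The 2D context hands its backing texture over to WebGPU...
const texture = ctx.transferToGPUTexture({ device });
// ...the app renders to `texture` with WebGPU commands...
// ...and then gives it back so 2D drawing can continue.
ctx.transferBackFromGPUTexture();
```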

Instead of starting with Canvas2D and transferring to WebGPU, would it be better to start with WebGPU and transfer to Canvas2D?

const device = await (await navigator.gpu.requestAdapter()).requestDevice({
  canvas2DCompatible: true,  // Give the implementation a chance to
                             // add any features it needs the device to have
});

// Create a 2D context for this device
const ctx = CanvasRenderingContext2D.createFromWebGPUDevice(device);

// Make textures in WebGPU as normal
const texture = device.createTexture({
  size: [200, 300],
  format: 'rgba8unorm',
  usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING,
});

// Draw to a texture using the 2D context
ctx.transferWebGPUTextureToContext(texture);  // texture is now unusable by WebGPU
ctx.beginPath();
ctx.arc(100, 150, 75, 0, Math.PI * 2);
ctx.fill();
ctx.removeWebGPUTextureFromContext();  // texture is now usable by WebGPU again and the 2D draw commands are submitted. (Maybe this should take a device or device.queue.)

Advantages:

  • Can draw to any texture in a supported format. Not just some magic canvas texture, but any texture created by WebGPU (assuming the format is compatible with the 2D rendering context API).
  • Texture ownership is clear: the WebGPU app owns the texture.
  • Displaying the texture works as normal. Call webgpuContext.getCurrentTexture() if you want a texture displayed in a canvas, then transfer it to the context.
  • No copy is ever needed. Since the WebGPU app is making the textures, they're guaranteed to be compatible with the device.
  • No sizing issues. (The ctx is not connected to a canvas, so you can't change the size.)
  • No magic.
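
The display path in the list above can be sketched like this. It combines the transfer methods proposed earlier with the existing WebGPU canvas APIs (webgpuContext.configure() and getCurrentTexture() are real WebGPU APIs; the transfer methods are only the proposal above):

```js
// Present 2D-drawn content through a normal WebGPU canvas context.
const webgpuContext = canvas.getContext('webgpu');
webgpuContext.configure({
  device,
  format: navigator.gpu.getPreferredCanvasFormat(),
});

// Grab this frame's swap-chain texture and draw 2D content into it.
const frameTexture = webgpuContext.getCurrentTexture();
ctx.transferWebGPUTextureToContext(frameTexture);
ctx.fillStyle = 'red';
ctx.fillRect(0, 0, 100, 100);
ctx.removeWebGPUTextureFromContext();  // the canvas now shows the 2D drawing
```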

I'm not saying the above API is perfect. I'm just trying to throw out the idea that it might be more useful to start at WebGPU instead of starting at Canvas2D. In particular, being able to render to any texture, not just a "canvas texture", seems useful. It also seems like a more interesting model: a CanvasRenderingContext2D is just an API that draws to a texture, any texture, not just canvas-blessed textures.


greggman commented Nov 4, 2024

@kainino0x


graveljp commented Nov 7, 2024

Hi,
Thanks for the feedback!

This proposal is interesting, but it does look like a mirror image of the transferToGPUTexture proposal, with both having similar limitations. The two proposals could even exist in parallel. Your proposal is nicer for users who are WebGPU-first, but not for Canvas2D users. In particular, your proposal doesn't allow presenting the results to a normal 2D canvas. CanvasRenderingContext2D.createFromWebGPUDevice(...) would need to return a new type of offscreen canvas context which cannot be used to present frames. In contrast, the transferToGPUTexture API allows Canvas2D content to be presented by a WebGPU context, or WebGPU content to be presented by a Canvas2D context.

Regarding the issues you are bringing up:

The canvas might not be compatible with the WebGPU device. In this case, a copy is needed to transfer the canvas's contents into the WebGPU device and back.

Note that in Chromium's current implementation of transferToGPUTexture, only the first transfer might cause a copy. On this first transfer, we set a flag at the document level telling the canvases to create WebGPU-compatible textures from now on, which won't require a copy. We are thinking about adding a hint at context creation to declare that the context will transfer textures to WebGPU. This way, we can create the right Canvas2D texture from the start, therefore avoiding the copy on the initial call to transferToGPUTexture.
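
Such a creation hint might look something like this. The option name below is made up purely to illustrate the idea; nothing has been specified:

```js
// Hypothetical option name (`willTransferToGPU`); not a specified API.
const ctx = canvas.getContext('2d', { willTransferToGPU: true });
// With the hint, the browser can back the canvas with a
// WebGPU-compatible texture from the start, so even the first
// transferToGPUTexture() call needs no copy.
```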

The canvas might be the wrong texture format. (app can't choose like it can with WebGPU)

This is an issue in both proposals. If we transfer from Canvas2D to WebGPU, we need to configure WebGPU to use the Canvas2D texture format. If we transfer from WebGPU to Canvas2D, we need to make sure the WebGPU texture was created with the right format in the first place, or else Canvas2D won't be able to use it. Because Canvas2D is the more restrictive of the two, it makes sense for the restrictive Canvas2D to create the texture and the more flexible WebGPU to accept it (Postel's law: "Be liberal in what you accept, and conservative in what you send").
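
To illustrate the asymmetry: in the canvas-first direction, WebGPU can adapt to whatever format the transferred texture arrives with, since a GPUTexture exposes its format attribute. In this sketch, the transferToGPUTexture call is the proposal's API, `module` is an assumed GPUShaderModule, and the rest is standard WebGPU:

```js
// Canvas2D picks the format; WebGPU adapts to it.
const texture = ctx.transferToGPUTexture({ device });  // per the proposal

// Standard WebGPU: build the pipeline's color target from the
// texture's actual format instead of hard-coding one up front.
const pipeline = device.createRenderPipeline({
  layout: 'auto',
  vertex: { module, entryPoint: 'vs' },
  fragment: {
    module,
    entryPoint: 'fs',
    targets: [{ format: texture.format }],
  },
});
```

In the reverse direction, the WebGPU app would have to guess, at creation time, the one format Canvas2D accepts.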

The canvas format might change on each transfer requiring the app to be built around checking and remaking resources on demand. (Mozilla confirmed this is an issue)

I'm curious to hear more about this. Why would the canvas format change on each transfer? Wouldn't that make it even more of a headache if Canvas2D wasn't the one creating the texture? You'd need WebGPU to create textures with a different format on every transfer. Could you clarify?

Setting the canvas.width,height after transferring to WebGPU is strange.

Can you expand on what you find strange here? Changing the size of a canvas resets the canvas to its default state, as if newly created. It was the de facto and only way to reset a canvas before the reset() API was added. If the canvas is reset, it makes sense to reset all state and release all resources. Thus, calling reset() aborts the transfer and destroys the GPUTexture.
