A basic editing workflow resulting in a published asset #915
---
Draco is more widely supported by tools; if you only need these models to be edited and displayed by your own tools and viewer, that may not matter. Both Meshopt and Draco compress mesh geometry. Meshopt can also compress point clouds, animation, and morph targets, whereas Draco leaves these uncompressed. Meshopt tends to decompress more quickly. Compression ratios on geometry are usually similar, provided you gzip the entire file after applying Meshopt; Meshopt is designed with that additional compression in mind. Note that both Meshopt and Draco are "lossy" compression methods, with tunable parameters. The defaults usually work pretty well, though.
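For context, a minimal sketch of applying Meshopt through the scripting API, assuming the `meshopt` transform from `@gltf-transform/functions` and placeholder file names:

```ts
import { NodeIO } from '@gltf-transform/core';
import { ALL_EXTENSIONS } from '@gltf-transform/extensions';
import { meshopt } from '@gltf-transform/functions';
import { MeshoptEncoder } from 'meshoptimizer';

const io = new NodeIO().registerExtensions(ALL_EXTENSIONS);
const document = await io.read('input.glb');

await MeshoptEncoder.ready; // WASM encoder must finish initializing.
await document.transform(meshopt({ encoder: MeshoptEncoder }));

// Serve or store the result gzipped to get the full benefit.
await io.write('output.glb', document);
```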
Meshopt also requires a decoder, although the Meshopt decoder is much smaller than the Draco decoder. See this three.js example for how to use it. There is one option that requires no decoder at all: quantization. It won't compress as much as Draco or Meshopt, however. Applying Meshopt implicitly also applies quantization.
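Quantization alone looks much the same in a script; a minimal sketch using the `quantize` transform with default options:

```ts
import { NodeIO } from '@gltf-transform/core';
import { ALL_EXTENSIONS } from '@gltf-transform/extensions';
import { quantize } from '@gltf-transform/functions';

const io = new NodeIO().registerExtensions(ALL_EXTENSIONS);
const document = await io.read('input.glb');

// Stores vertex attributes in smaller integer types (KHR_mesh_quantization);
// no decoder is needed at runtime.
await document.transform(quantize());

await io.write('output.glb', document);
```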
Nothing to flag, except that the textureCompress step currently works only in Node.js environments, and not in a browser. There's a more advanced version of this transform flow in the CLI, in glTF-Transform/packages/cli/src/cli.ts (lines 268 to 316 at commit aa9861f). You can quickly test how that does on your assets from the command line.
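A sketch of that invocation (the `optimize` command exists in recent versions of the CLI; treat the exact flags here as assumptions and check `gltf-transform optimize --help` for what your version supports):

```bash
gltf-transform optimize input.glb output.glb --compress meshopt --texture-compress webp
```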
UVs are defined in [0, 1] space, as fractions of the texture's width and height, so resizing textures does not require updating UVs. Most texture compression (MozJPEG, OxiPNG, WebP, AVIF) doesn't resize the texture unless requested, and preserves aspect ratio even then. Block-based compression (KTX2) will require power-of-two or multiple-of-four dimensions, depending on your target environment, and can handle that resizing itself. I haven't seen issues related to resizing, but block-based compression is fairly prone to compression artifacts. You'll need a bit of padding around UV islands, and probably some tuning of the parameters. I'd be hesitant to use KTX2 on users' textures without human visual review.
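A sketch of resizing during compression with the scripting API, assuming `sharp` as the encoder; `resize` fits each texture within the given bounds while preserving aspect ratio:

```ts
import sharp from 'sharp';
import { NodeIO } from '@gltf-transform/core';
import { ALL_EXTENSIONS } from '@gltf-transform/extensions';
import { textureCompress } from '@gltf-transform/functions';

const io = new NodeIO().registerExtensions(ALL_EXTENSIONS);
const document = await io.read('input.glb');

// Node.js only: textureCompress relies on sharp for encoding and resizing.
await document.transform(
  textureCompress({ encoder: sharp, targetFormat: 'webp', resize: [1024, 1024] })
);

await io.write('output.glb', document);
```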
If you're writing scripts, then you can specify that with the output path:

```bash
# will overwrite textures
gltf-transform cp in.gltf out.gltf

# will not overwrite textures
gltf-transform cp a/in.gltf b/out.gltf
```
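In a script, the same rule applies to the output path passed to `NodeIO.write`; a minimal sketch:

```ts
import { NodeIO } from '@gltf-transform/core';

const io = new NodeIO();
const document = await io.read('a/in.gltf');

// Resources (textures, .bin) are written alongside the output file,
// so writing to a different directory leaves the originals untouched.
await io.write('b/out.gltf', document);
```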
The library can read from HTTP endpoints (on by default with WebIO, opt-in with NodeIO), but it cannot write to them. Per the earlier comment, if you need to pass this data into some other JS library for storage, I would suggest having the library 'write' to an in-memory JSONDocument. The resource entries will be Uint8Arrays (equivalent to Node.js Buffers), and the json entry can be serialized as a string and written to a `.gltf` file.

If it's easier to upload to S3 once the files are already on disk, you could use the usual I/O methods to do that first, and then upload the files. I've worked with Google Cloud Storage more than S3, but I assume the APIs are fairly similar.
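A sketch of that in-memory path, where `uploadToBucket` is a hypothetical stand-in for your S3/GCS client's upload call:

```ts
import { Document, JSONDocument, NodeIO } from '@gltf-transform/core';

// Hypothetical storage helper; replace with your SDK's upload call.
declare function uploadToBucket(key: string, body: string | Uint8Array): Promise<void>;

async function publish(document: Document): Promise<void> {
  const io = new NodeIO();

  // 'Write' to memory instead of disk.
  const jsonDocument: JSONDocument = await io.writeJSON(document);

  // The JSON chunk becomes the .gltf file...
  await uploadToBucket('scene.gltf', JSON.stringify(jsonDocument.json));

  // ...and each resource entry (buffers, textures) is a Uint8Array.
  for (const [uri, data] of Object.entries(jsonDocument.resources)) {
    await uploadToBucket(uri, data);
  }
}
```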
---
@donmccurdy - thank you so much for that detailed response to my questions. I really appreciate the effort you've put into all of this. I'll make some POCs over the next few weeks and see how I get on, particularly with the View package. Thanks again Don.
---
Hey Don,
Apologies if this has already been covered - I'm going blind from doc overload on a ton of stuff at the moment and getting confused by picking bits from here and there ;-)
I was hoping to get a very clear understanding of how best to use the packages to achieve a simple workflow to begin with.
The scenario is this: the app is an Electron app; the user loads a model from disk, performs editing tasks on it, and then we save that model to cloud blob storage, with the intention of compressing and cleaning up the model during that process.
A few questions that I have:
File management is really important for us here, especially allowing the user to manage the placement of their files for a model. So, if we performed a textureCompress resize, does that replace the original files or create new ones? If the latter, where are those files placed? Is there a way to specify a filepath?
And lastly, with writing the file: I see the conversion of the document to a binary file. I did see a method for providing a fetch implementation to work with URLs, but I was not sure whether that applied only to getting the source model file, or also to writing one. Either way, understanding how to correctly write a file both to disk and to a remote location like an S3 bucket would be really helpful.
Thanks for any direction here.