
fix(deps): update machine-learning #14891

Merged · 1 commit into main on Dec 24, 2024

Conversation

renovate[bot] (Contributor) commented on Dec 24, 2024

This PR contains the following updates:

| Package | Change |
| --- | --- |
| huggingface-hub | 0.26.5 -> 0.27.0 |
| pydantic (changelog) | 2.10.3 -> 2.10.4 |
| pydantic-settings (changelog) | 2.6.1 -> 2.7.0 |
| pytest-asyncio (changelog) | 0.24.0 -> 0.25.0 |
| python-multipart (changelog) | 0.0.19 -> 0.0.20 |
| uvicorn (changelog) | 0.32.1 -> 0.34.0 |

Release Notes

huggingface/huggingface_hub (huggingface-hub)

v0.27.0: DDUF tooling, torch model loading helpers & multiple quality-of-life improvements and bug fixes

Compare Source

📦 Introducing DDUF tooling


DDUF (DDUF's Diffusion Unified Format) is a single-file format for diffusion models that aims to unify the various model distribution methods and weight-saving formats by packaging all model components into one file. Detailed documentation will be available soon.

The huggingface_hub library now provides tooling to handle DDUF files in Python. It includes helpers to read and export DDUF files, and built-in rules to validate file integrity.

How to write a DDUF file?

```python
>>> from huggingface_hub import export_folder_as_dduf

>>> # Export the "path/to/FLUX.1-dev" folder as a DDUF file
>>> export_folder_as_dduf("FLUX.1-dev.dduf", folder_path="path/to/FLUX.1-dev")
```
How to read a DDUF file?

```python
>>> import json
>>> import safetensors.torch
>>> from huggingface_hub import read_dduf_file

>>> # Read DDUF metadata (only the metadata is loaded; this is a lightweight operation)
>>> dduf_entries = read_dduf_file("FLUX.1-dev.dduf")

>>> # Returns a mapping of filename -> DDUFEntry
>>> dduf_entries["model_index.json"]
DDUFEntry(filename='model_index.json', offset=66, length=587)

>>> # Load the `model_index.json` content
>>> json.loads(dduf_entries["model_index.json"].read_text())
{'_class_name': 'FluxPipeline', '_diffusers_version': '0.32.0.dev0', '_name_or_path': 'black-forest-labs/FLUX.1-dev', 'scheduler': ['diffusers', 'FlowMatchEulerDiscreteScheduler'], 'text_encoder': ['transformers', 'CLIPTextModel'], 'text_encoder_2': ['transformers', 'T5EncoderModel'], 'tokenizer': ['transformers', 'CLIPTokenizer'], 'tokenizer_2': ['transformers', 'T5TokenizerFast'], 'transformer': ['diffusers', 'FluxTransformer2DModel'], 'vae': ['diffusers', 'AutoencoderKL']}

>>> # Load VAE weights using safetensors
>>> with dduf_entries["vae/diffusion_pytorch_model.safetensors"].as_mmap() as mm:
...     state_dict = safetensors.torch.load(mm)
```

⚠️ Note that this is a very early version of the parser. The API and implementation can evolve in the near future.
👉 More details about the API in the documentation here.
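Since the release notes describe DDUF as packaging all model components into one file, and Hugging Face documents the container as ZIP-based, the core idea can be sketched with the stdlib alone. This is a hedged illustration only: `write_dduf_like` and `read_dduf_like` are hypothetical stand-ins, not the real huggingface_hub API or the DDUF spec.

```python
# Illustrative stand-in for a DDUF-like single-file container, stdlib only.
# Assumption: the container is a ZIP archive whose entries are stored
# uncompressed, so large weight files could later be memory-mapped in place.
import json
import os
import tempfile
import zipfile

def write_dduf_like(path, entries):
    # Store every entry uncompressed (ZIP_STORED).
    with zipfile.ZipFile(path, "w", compression=zipfile.ZIP_STORED) as zf:
        for name, payload in entries.items():
            zf.writestr(name, payload)

def read_dduf_like(path):
    # Return a mapping of filename -> raw bytes.
    with zipfile.ZipFile(path) as zf:
        return {info.filename: zf.read(info.filename) for info in zf.infolist()}

path = os.path.join(tempfile.mkdtemp(), "model.dduf")
write_dduf_like(path, {"model_index.json": json.dumps({"_class_name": "FluxPipeline"})})
entries = read_dduf_like(path)
print(json.loads(entries["model_index.json"])["_class_name"])  # → FluxPipeline
```

The real read_dduf_file additionally reports each entry's offset and length, which is what makes memory-mapping without extraction possible and why uncompressed storage matters.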

DDUF parser v0.1 by @Wauplin in #2692

💾 Serialization

Following the introduction of the torch serialization module in 0.22.* and the support for saving torch state dicts to disk in 0.24.*, we now provide helpers to load torch state dicts from disk.
By centralizing these functionalities in huggingface_hub, we ensure a consistent implementation across the HF ecosystem while allowing external libraries to benefit from standardized weight handling.

```python
>>> from huggingface_hub import load_torch_model, load_state_dict_from_file

>>> # Load a state dict from a single file
>>> state_dict = load_state_dict_from_file("path/to/weights.safetensors")

>>> # Directly load weights into a PyTorch model
>>> model = ...  # A PyTorch model
>>> load_torch_model(model, "path/to/checkpoint")
```

More details in the serialization package reference.

[Serialization] support loading torch state dict from disk by @hanouticelina in #2687

We added an is_main_process flag to the save_torch_state_dict() helper to properly handle model saving in distributed environments, aligning with existing implementations across the Hugging Face ecosystem:

[Serialization] Add is_main_process argument to save_torch_state_dict() by @hanouticelina in #2648

A bug with shared-tensor handling reported in transformers#35080 has been fixed:

add argument to pass shared tensors keys to discard by @hanouticelina in #2696

✨ HfApi

The following changes align the client with server-side updates in how security metadata is handled and exposed in API responses. In particular, the repository security status returned by HfApi().model_info() is now available in the security_repo_status field:

```diff
from huggingface_hub import HfApi

api = HfApi()
model = api.model_info("your_model_id", securityStatus=True)

# get the security status info of your model
- security_info = model.securityStatus
+ security_info = model.security_repo_status
```

🌐 📚 Documentation

Thanks to @miaowumiaomiaowu, more documentation is now available in Chinese! And thanks to @13579606 for reviewing these PRs. Check out the result here.

📝 Translating docs to Simplified Chinese by @miaowumiaomiaowu in #2689, #2704 and #2705.

💔 Breaking changes

A few breaking changes have been introduced:

  • RepoCardData serialization now preserves None values in nested structures.
  • InferenceClient.image_to_image() now takes a target_size argument instead of height and width arguments. This has been reflected in the async InferenceClient equivalent as well.
  • InferenceClient.table_question_answering() no longer accepts a parameter argument. This has been reflected in the async InferenceClient equivalent as well.
  • Due to low usage, list_metrics() has been removed from HfApi.

⏳ Deprecations

Some deprecations have been introduced as well:

  • Legacy token permission checks are deprecated, as they are no longer relevant with fine-grained tokens. This includes is_write_action in build_hf_headers() and write_permission=True in login methods. get_token_permission has been deprecated as well.
  • The labels argument is deprecated in InferenceClient.zero_shot_classification() and InferenceClient.image_zero_shot_classification(). This has been reflected in the async InferenceClient equivalent as well.

🛠️ Small fixes and maintenance

😌 QoL improvements
🐛 Bug and typo fixes
🏗️ internal
pydantic/pydantic (pydantic)

v2.10.4

Compare Source

GitHub release

What's Changed
Packaging
Fixes
New Contributors
pydantic/pydantic-settings (pydantic-settings)

v2.7.0

Compare Source

What's Changed

New Contributors

Full Changelog: pydantic/pydantic-settings@v2.6.1...v2.7.0

pytest-dev/pytest-asyncio (pytest-asyncio)

v0.25.0: pytest-asyncio 0.25.0

Compare Source

0.25.0 (2024-12-13)
  • Deprecated: Added a warning when an asyncio test requests an async @pytest.fixture in strict mode. This will become an error in a future version of pytest-asyncio. #979
  • Updates the error message about pytest.mark.asyncio's scope keyword argument to say loop_scope instead. #1004
  • The verbose log displays the correct parameter name: asyncio_default_fixture_loop_scope #990
  • Propagates contextvars set in async fixtures to other fixtures and tests on Python 3.11 and above. #1008
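The contextvars item above can be illustrated with the stdlib alone. This is a hedged sketch of the behavior, not pytest-asyncio code: fixture_like and test_like are hypothetical stand-ins for an async fixture and a test running in the same context, so a ContextVar set in the former is visible in the latter.

```python
# Stdlib-only illustration of contextvars propagation between an async
# "fixture" and a "test" (hypothetical stand-ins, not pytest-asyncio API).
import asyncio
import contextvars

request_id = contextvars.ContextVar("request_id", default=None)

async def fixture_like():
    # An async fixture that sets a context variable.
    request_id.set("abc123")

async def test_like():
    # A test body that reads the variable afterwards.
    return request_id.get()

async def main():
    # Direct awaits run in the caller's context, so the value set in
    # fixture_like() is visible to test_like(). Wrapping fixture_like()
    # in its own Task would copy the context and the value would be lost
    # to the caller; that isolation is what pytest-asyncio 0.25.0 avoids
    # for async fixtures on Python 3.11 and above.
    await fixture_like()
    return await test_like()

result = asyncio.run(main())
print(result)  # → abc123
```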
Kludex/python-multipart (python-multipart)

v0.0.20

Compare Source

  • Handle messages containing only the end boundary #142.
encode/uvicorn (uvicorn)

v0.34.0

Compare Source

Added
  • Add content-length to 500 response in wsproto implementation (#2542)
Removed
  • Drop support for Python 3.8 (#2543)

v0.33.0

Compare Source

Removed
  • Remove WatchGod support for --reload (#2536)

Configuration

📅 Schedule: Branch creation - "on tuesday" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

renovate bot added the changelog:skip and dependencies labels on Dec 24, 2024
renovate bot requested a review from mertalev as a code owner on December 24, 2024 at 00:10
mertalev merged commit ef0070c into main on Dec 24, 2024 (41 checks passed)
mertalev deleted the renovate/machine-learning branch on December 24, 2024 at 01:04