Merge pull request #173 from acon96/release/v0.3.3
Release v0.3.3
acon96 authored Jun 15, 2024
2 parents 9f7aa19 + 7393eae commit 75813de
Showing 12 changed files with 224 additions and 118 deletions.
1 change: 1 addition & 0 deletions README.md
@@ -132,6 +132,7 @@ In order to facilitate running the project entirely on the system where Home Assistant
 ## Version History
 | Version | Description |
 |---------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| v0.3.3 | Improvements to the Generic OpenAI Backend, improved area handling, fix issue using RGB colors, remove EOS token from responses, replace requests dependency with aiohttp included with Home Assistant |
 | v0.3.2 | Fix for exposed script entities causing errors, fix missing GBNF error, trim whitespace from model output |
 | v0.3.1 | Adds basic area support in prompting, Fix for broken requirements, fix for issue with formatted tools, fix custom API not registering on startup properly |
 | v0.3 | Adds support for Home Assistant LLM APIs, improved model prompting and tool formatting options, and automatic detection of GGUF quantization levels on HuggingFace |
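The v0.3.3 entry replaces the `requests` dependency with `aiohttp`, which ships with Home Assistant. A minimal self-contained sketch of the non-blocking request pattern this enables — the endpoint path and JSON payload are hypothetical, and a throwaway local server stands in for a real backend so the example needs no network access:

```python
import asyncio
from aiohttp import ClientSession, web

async def handle(request):
    # hypothetical endpoint standing in for a remote backend API
    return web.json_response({"status": "ok"})

async def main():
    # spin up a throwaway local server purely for illustration
    app = web.Application()
    app.router.add_get("/api/status", handle)
    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, "127.0.0.1", 0)  # port 0: bind any free port
    await site.start()
    port = runner.addresses[0][1]
    try:
        # the non-blocking replacement for requests.get(...).json():
        # awaiting the response yields control back to the event loop
        async with ClientSession() as session:
            async with session.get(f"http://127.0.0.1:{port}/api/status") as resp:
                data = await resp.json()
    finally:
        await runner.cleanup()
    return data

if __name__ == "__main__":
    print(asyncio.run(main()))
```

Unlike `requests`, no call here blocks the event loop thread, which is why Home Assistant integrations are expected to use `aiohttp` for HTTP I/O.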
8 changes: 4 additions & 4 deletions TODO.md
@@ -1,14 +1,14 @@
 # TODO
-- [ ] support new LLM APIs
+- [x] support new LLM APIs
   - rewrite how services are called
   - handle no API selected
   - rewrite prompts + service block formats
   - implement new LLM API that has `HassCallService` so old models can still work
-  - update dataset so new models will work with the API
+- [ ] update dataset so new models will work with the API
 - [ ] make ICL examples into conversation turns
 - [ ] translate ICL examples + make better ones
-- [ ] areas/room support
-- [ ] convert requests to aiohttp
+- [x] areas/room support
+- [x] convert requests to aiohttp
 - [x] detection/mitigation of too many entities being exposed & blowing out the context length
 - [ ] figure out DPO to improve response quality
 - [x] setup github actions to build wheels that are optimized for RPIs
5 changes: 4 additions & 1 deletion custom_components/llama_conversation/__init__.py
@@ -75,10 +75,13 @@ def create_agent(backend_type):
 
         return agent_cls(hass, entry)
 
-    # load the model in an executor job because it takes a while and locks up the UI otherwise
+    # create the agent in an executor job because the constructor calls `open()`
     backend_type = entry.data.get(CONF_BACKEND_TYPE, DEFAULT_BACKEND_TYPE)
     agent = await hass.async_add_executor_job(create_agent, backend_type)
+
+    # call load model
+    await agent._async_load_model(entry)
 
     # handle updates to the options
     entry.async_on_unload(entry.add_update_listener(update_listener))
 
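The hunk above keeps the blocking constructor off the event loop by running it in an executor job. The same pattern can be sketched with stdlib asyncio alone — `hass.async_add_executor_job` is effectively a wrapper over `loop.run_in_executor`; the agent factory and backend name below are hypothetical stand-ins, not the component's real API:

```python
import asyncio
import time

def create_agent(backend_type):
    # stand-in for the real constructor, which blocks on file I/O (open())
    time.sleep(0.1)  # simulate the slow, blocking part
    return f"agent[{backend_type}]"

async def async_setup_entry():
    loop = asyncio.get_running_loop()
    # run the blocking constructor in a worker thread; the event loop
    # (and therefore the UI) stays responsive while it runs
    agent = await loop.run_in_executor(None, create_agent, "llama_cpp")
    return agent

if __name__ == "__main__":
    print(asyncio.run(async_setup_entry()))
```

Calling `create_agent` directly inside the coroutine would stall every other task for the duration of the blocking call, which is exactly the UI lock-up the original comment described.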
