Releases: smallcloudai/refact

v1.8.0

03 Dec 15:03
1b094ba

Refact.ai Self-hosted:

  • CUDA and cuDNN Version Update: 11.8.0 -> 12.4.1.
  • New models: llama3.1, llama3.2, and the qwen2.5/coder families (see the sketch after this list).
  • New provider support: 3rd-party APIs for groq and cerebras.
  • Support for multiline code completion models.
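
To try one of the newly added models against a self-hosted server, here is a minimal sketch. It assumes the server exposes an OpenAI-compatible chat endpoint on port 8008; the endpoint path and model name are illustrative and may differ in your deployment.

```python
# Minimal sketch: querying a newly added model family on a self-hosted
# Refact server. Assumes an OpenAI-compatible chat endpoint on port 8008;
# the exact path and model name may differ in your deployment.
import requests

resp = requests.post(
    "http://localhost:8008/v1/chat/completions",  # assumed endpoint
    json={
        "model": "llama3.1-8b-instruct",          # illustrative model name
        "messages": [{"role": "user", "content": "Write hello world in Rust."}],
        "max_tokens": 256,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```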

v1.7.0

20 Sep 17:05
5411936

Refact.ai Self-hosted:

  • New models: the latest OpenAI models are now available in Docker.
  • Tool usage support for 3rd-party models: turn on 3rd-party APIs to use the latest features of Refact (see the sketch after this list).
  • Removed deprecated models.
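
As an illustration of what a tool-enabled request can look like, here is a sketch in the widely used OpenAI function-calling format. The endpoint path, model name, and the get_weather tool are hypothetical; check your server's API for the exact shape.

```python
# Sketch of a tool-call request in the OpenAI function-calling format,
# assuming the self-hosted server relays it to a 3rd-party model.
# The endpoint, model name, and get_weather tool are hypothetical.
import requests

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = requests.post(
    "http://localhost:8008/v1/chat/completions",  # assumed endpoint
    json={
        "model": "gpt-4o",                        # served via a 3rd-party API key
        "messages": [{"role": "user", "content": "What's the weather in Berlin?"}],
        "tools": tools,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"].get("tool_calls"))
```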

v1.6.4

05 Jul 11:44

Refact.ai Self-hosted:

  • Claude-3.5 Sonnet Support: New model from Anthropic is now available in Docker.

Refact.ai Enterprise:

  • Llama3 vLLM Support: Added vLLM version of Llama-3-8B-Instruct for better performance.

v1.6.3

20 Jun 16:44

Refact.ai Self-hosted:

  • Llama3 8k Context: llama3 models now support 8k context.
  • Credentials Management: We added information about tokens and keys.
  • Deprecated Models: The models starcoder, wizardlm, and llama2 are deprecated and will be removed in the next release.

Refact.ai Enterprise:

  • Refact Model 4k Context: refact model now supports 4k context.

v1.6.2

27 May 14:33

Refact.ai Self-hosted:

  • Models Support: We've introduced support for gated models and the new llama3 model.
  • Even More Models: GPT-4o and GPT-4 Turbo are now available.

Refact.ai Enterprise:

  • vLLM Speed Improvement: Faster processing times with our optimized vLLM.
  • vLLM LoRA-less Mode: When LoRA is not set up, vLLM now runs 20% faster thanks to the new LoRA-less mode.
  • Empty Prompt and OOM Handling: We've fixed issues in vLLM that caused broken generations.

v1.6.1

02 May 09:04
eec8585

Context Switching Mechanism

We've implemented a context-switching mechanism, available in the latest version of the VS Code plugin. You can now change the max context value per model depending on your needs: a small context for lower memory usage and faster operation, or a large context for deeper insights (see the sketch below).
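
As a rough illustration of the memory trade-off, the KV cache a transformer keeps during generation grows linearly with context length. A back-of-the-envelope estimate, using illustrative 7B-class model dimensions rather than the parameters of any specific hosted model:

```python
# Back-of-the-envelope KV-cache estimate showing why a smaller max context
# saves memory. The dimensions are illustrative (a 7B-class model), not the
# exact parameters of any model shipped with Refact.
def kv_cache_bytes(context_len, n_layers=32, n_kv_heads=32,
                   head_dim=128, bytes_per_value=2):
    # 2x for keys and values; one entry per layer, head, and position
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_value

for ctx in (2048, 4096, 8192):
    print(f"{ctx:>5} tokens -> {kv_cache_bytes(ctx) / 2**30:.2f} GiB per sequence")
```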

Model Deprecation

Our UI updates now flag models slated for removal. This ensures you're always working with the latest and most efficient models.

Factory Reset Fix

We've resolved issues with the factory reset process for when you need a fresh start.

v1.6.0

19 Apr 16:03

Refact.ai Self-hosted Updates:

Multiple Source Projects:

  • Versatile Source Handling: The single "Sources" tab has become multiple projects, each with its own set of sources.
  • Fine-Tune on Demand: You can now fine-tune on a specific project.

Fine-Tune Enhancements:

  • Unified Fine-Tuning Process: We've simplified fine-tuning by merging the filtering and fine-tuning steps into a single process.
  • Multi-GPU Support: You can use multiple GPUs for faster fine-tuning!
  • Simultaneous Fine-Tuning: Execute multiple fine-tuning processes concurrently to save time.
  • Full Model Support: Load full models with LoRA weights patched in (see the sketch after this list; note that this requires substantial RAM).
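
For readers curious what patching LoRA weights into a full model generally involves, here is a sketch using the Hugging Face peft library. It shows the general technique only, not Refact's internal loading code, and the paths are placeholders.

```python
# General technique for merging LoRA weights into a full model using the
# Hugging Face peft library. Illustrative only; not Refact's internal code.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/base-model")     # placeholder
patched = PeftModel.from_pretrained(base, "path/to/lora-checkpoint")  # placeholder

# Fold the low-rank updates into the base weights. The result is a plain
# full-size model, which is why substantial RAM is required.
merged = patched.merge_and_unload()
merged.save_pretrained("path/to/merged-model")
```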

Refact.ai Enterprise Updates:

  • Customization Tab: Personalize system prompts and toolbox commands specifically for your team.
  • Keycloak Integration: Secure user authentication with Keycloak, including a dedicated account page for users, ensuring both convenience and security. Check out the documentation for more information: https://docs.refact.ai/guides/keycloak/

v1.5.0

25 Mar 08:19

  • Fine-tune Process Enhancements: We've made fine-tuning for starcoder models both faster and higher quality with new default settings.
  • Fine-tune UI: The fine-tune setup has moved to the Model Hosting tab for easier access.
  • Plugin Fine-tune Switching: VS Code and JetBrains plugins now support switching between fine-tuned models.
  • Chat Tab Redesign: The Chat tab is temporarily hidden for a redesign and will return in the next release.

Compatibility Issues

  • Plugin Support: Older plugin versions will fall back to the base model because they don't support the new fine-tuning capabilities. Make sure to update your plugin.

v1.4.0

09 Feb 10:21

What's New

  • WebGUI Chat: Now, we ship a chat UI with our docker image!
  • Embeddings: The Docker image now starts the embeddings model by default; this is required for VecDB support (see the sketch after this list).
  • Shared Memory Issue Resolved: A critical performance issue related to shared memory has been fixed. For more details, check out the GitHub issue.
  • Anthropic Integration: We've added the ability to supply API keys for third-party models!
  • stable-code-3b: The list of available models keeps growing! This time, we added stabilityai/stable-code-3b!
  • Optional API Key for OSS: The Refact.ai Self-hosted version can now be protected with an optional API key when deployed in the cloud.
  • Build Information: The settings now include an About page listing the packages in use, their versions, and commit hashes.
  • LoRA Switch Fix: Fixed an issue where switching between LoRAs showed no information in the logs!
  • vLLM Out-of-Memory (OOM) Fix: We've fixed an out-of-memory issue with vLLM in Refact.ai Enterprise!
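
The sketch below shows what an embeddings request against the self-hosted server might look like, with the optional API key passed as a Bearer token. The endpoint path, model name, and auth scheme are assumptions in the style of OpenAI-compatible servers, not a documented Refact API.

```python
# Sketch: requesting embeddings from the self-hosted server, passing the
# optional API key as a Bearer token. Endpoint, model name, and auth scheme
# are assumptions modeled on OpenAI-compatible servers.
import requests

resp = requests.post(
    "http://localhost:8008/v1/embeddings",             # assumed endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # only if a key is configured
    json={"model": "embeddings-model", "input": ["fn main() {}"]},  # placeholder model
    timeout=30,
)
resp.raise_for_status()
vector = resp.json()["data"][0]["embedding"]
print(f"{len(vector)}-dimensional embedding")
```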

v1.3.1

16 Jan 09:53

Open-Source Updates:

  • Memory Consumption Fix for local Cassandra.
  • Unified Volume: One volume for all data, including the database.
  • Encodings Fix for the fine-tuning process.
  • Minor Fixes addressing various small issues.

Enterprise Updates:

  • Tag Upgrade: The image tag in docker-compose.yml moved from beta to latest. Be sure to update your compose file.
  • Runpod Support:
    • Local database integration.
    • One storage solution for all data.
  • Minor UI Fixes: Improvements and bug fixes.