An Elixir SDK for building AI-powered applications with streaming support and easy integration with various AI providers.
The package can be installed by adding `ai_sdk` to your list of dependencies in `mix.exs`:
```elixir
def deps do
  [
    {:ai_sdk, "~> 0.1.0"}
  ]
end
```
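Then fetch the new dependency:

```bash
mix deps.get
```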
Set your OpenAI API key in your environment:
```bash
export OPENAI_API_KEY=your-api-key
```
Or in your `config/config.exs`:
```elixir
config :ai_sdk, :openai_api_key, "your-api-key"
```
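For releases, you may prefer to resolve the key at boot rather than compile time; a minimal sketch in `config/runtime.exs`, assuming the same `:openai_api_key` config key is read there:

```elixir
# config/runtime.exs — read the API key from the environment at startup
import Config

config :ai_sdk, :openai_api_key, System.get_env("OPENAI_API_KEY")
```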
```elixir
# Simple chat completion
{:ok, response} = Ai.Providers.OpenAI.chat("What is the capital of France?")

# Streaming chat completion
Ai.Providers.OpenAI.chat("Tell me a story", %{stream: true})
|> Stream.map(fn chunk ->
  chunk.choices
  # Guard against chunks whose delta carries no text content
  |> Enum.map(&(&1.delta.content || ""))
  |> Enum.join()
end)
|> Stream.each(&IO.write/1)
|> Stream.run()
```
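If you need the whole story as one string rather than incremental output, the same stream can be reduced into a binary; a short sketch, assuming the chunk shape shown above:

```elixir
# Accumulate streamed deltas into a single string
story =
  Ai.Providers.OpenAI.chat("Tell me a story", %{stream: true})
  |> Enum.reduce("", fn chunk, acc ->
    acc <> (chunk.choices |> Enum.map(&(&1.delta.content || "")) |> Enum.join())
  end)
```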
```elixir
# Chat with history
messages = [
  %{role: "system", content: "You are a helpful assistant."},
  %{role: "user", content: "What's the weather like?"},
  %{role: "assistant", content: "I don't have access to real-time weather data."},
  %{role: "user", content: "What can you help me with then?"}
]

Ai.Providers.OpenAI.chat(messages, %{
  model: "gpt-4",
  temperature: 0.7
})
```
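Pulling the assistant's reply out of a successful call looks something like this (a sketch, assuming the response mirrors the OpenAI chat payload with atom keys, as the streaming example suggests):

```elixir
{:ok, response} = Ai.Providers.OpenAI.chat(messages, %{model: "gpt-4"})

case response.choices do
  # The first choice carries the assistant message
  [%{message: %{content: content}} | _] -> IO.puts(content)
  [] -> IO.puts("no choices returned")
end
```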
```elixir
# Function calling
functions = [
  %{
    name: "get_weather",
    description: "Get the current weather in a location",
    parameters: %{
      type: "object",
      properties: %{
        location: %{
          type: "string",
          description: "The city and state, e.g., San Francisco, CA"
        }
      },
      required: ["location"]
    }
  }
]

Ai.Providers.OpenAI.chat("What's the weather in San Francisco?", %{
  functions: functions,
  function_call: "auto"
})
```
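Acting on the model's choice then means matching on the returned message; a sketch, assuming an OpenAI-style `function_call` field with JSON-encoded arguments and `Jason` available for decoding:

```elixir
{:ok, response} =
  Ai.Providers.OpenAI.chat("What's the weather in San Francisco?", %{
    functions: functions,
    function_call: "auto"
  })

case response.choices do
  [%{message: %{function_call: %{name: "get_weather", arguments: args}}} | _] ->
    # Arguments arrive as a JSON string matching the schema above
    {:ok, %{"location" => location}} = Jason.decode(args)
    IO.puts("Fetch weather for #{location}")

  [%{message: %{content: content}} | _] ->
    # The model answered directly instead of calling the function
    IO.puts(content)
end
```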
```elixir
# Simple text generation
{:ok, text} = Ai.Providers.OpenAI.generate_text("Explain quantum computing in simple terms", %{
  temperature: 0.7,
  max_tokens: 150
})

# Text generation with a different model
{:ok, text} = Ai.Providers.OpenAI.generate_text(
  "Write a haiku about programming",
  %{
    model: "gpt-4",
    temperature: 0.9
  }
)
```
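Since these calls return tagged tuples, failures are best handled with a `case`; a sketch, assuming errors surface as `{:error, reason}`:

```elixir
case Ai.Providers.OpenAI.generate_text("Explain quantum computing in simple terms") do
  {:ok, text} -> IO.puts(text)
  {:error, reason} -> IO.inspect(reason, label: "text generation failed")
end
```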
```elixir
# Define a schema for the response structure
schema = %{
  "type" => "object",
  "properties" => %{
    "title" => %{"type" => "string"},
    "summary" => %{"type" => "string"},
    "key_points" => %{
      "type" => "array",
      "items" => %{"type" => "string"}
    },
    "difficulty" => %{
      "type" => "string",
      "enum" => ["beginner", "intermediate", "advanced"]
    }
  },
  "required" => ["title", "summary", "key_points", "difficulty"]
}

# Generate structured data from text
{:ok, article_analysis} = Ai.Providers.OpenAI.generate_structured(
  "Analyze this article about machine learning basics",
  schema,
  %{
    temperature: 0.7,
    model: "gpt-4"
  }
)

# The result will be a structured map matching the schema:
%{
  "title" => "Introduction to Machine Learning",
  "summary" => "A comprehensive overview of ML fundamentals...",
  "key_points" => [
    "Data preprocessing is crucial",
    "Different types of learning algorithms",
    "Model evaluation techniques"
  ],
  "difficulty" => "beginner"
}
```
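Because the schema declares string keys, the result is accessed with string keys as well; for example:

```elixir
# Work with the validated, string-keyed map
IO.puts(article_analysis["title"])

Enum.each(article_analysis["key_points"], fn point ->
  IO.puts("- #{point}")
end)
```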
```elixir
# Simple completion
{:ok, response} = Ai.Providers.OpenAI.completion("Complete this: The quick brown fox")

# Streaming completion
Ai.Providers.OpenAI.completion("Tell me a story", %{stream: true})
|> Stream.map(fn chunk ->
  chunk.choices
  # Guard against chunks whose delta carries no text content
  |> Enum.map(&(&1.delta.content || ""))
  |> Enum.join()
end)
|> Stream.each(&IO.write/1)
|> Stream.run()

# Completion with options
options = %{
  model: "gpt-3.5-turbo-instruct",
  temperature: 0.7,
  max_tokens: 100,
  echo: true,
  suffix: "over the lazy dog"
}

Ai.Providers.OpenAI.completion("The quick brown fox", options)
```
Common options:

- `:model` - The model to use (default varies by endpoint)
- `:temperature` - Controls randomness (0-1)
- `:max_tokens` - Maximum tokens in the response
- `:top_p` - Controls diversity via nucleus sampling
- `:frequency_penalty` - Decreases the likelihood of repeating tokens
- `:presence_penalty` - Increases the likelihood of new topics
- `:stop` - Sequences where the API will stop generating
- `:stream` - Whether to stream the response

Chat-specific options:

- `:functions` - List of functions the model may call
- `:function_call` - Controls function calling behavior

Completion-specific options:

- `:echo` - Echo back the prompt in addition to the completion
- `:suffix` - Text to append to the completion
- `:logit_bias` - Modify the likelihood of specific tokens

Streaming options:

- `:chunk_timeout` - Timeout for receiving chunks (default: 10000 ms)
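Several of these can be combined in a single call; an illustrative sketch (option values are arbitrary):

```elixir
Ai.Providers.OpenAI.chat("Summarize the plot of Hamlet", %{
  model: "gpt-4",
  temperature: 0.5,
  max_tokens: 200,
  stream: true,
  # Streaming option: wait up to 15 seconds for each chunk
  chunk_timeout: 15_000
})
```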
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.