LLMs
Connecting Peeps AI to LLMs
Supported LLM Integrations
Peeps AI supports a wide range of LLMs to suit different requirements:
Cloud-based LLMs:
OpenAI GPT (e.g., GPT-4, GPT-3.5)
Anthropic Claude
Google PaLM
Local LLMs:
LLaMA-based models
GPT-J and GPT-NeoX
Any Hugging Face-supported transformer models
This flexibility ensures compatibility across a spectrum of projects, from cost-sensitive local deployments to high-performance cloud-based solutions.
Configuring Peeps AI to Use LLMs
By default, Peeps AI is set up to work with the OpenAI API. However, you can easily reconfigure agents to use alternative models or APIs. Follow these steps:
❖ Install Required Dependencies
Depending on the LLM you wish to use, you may need additional Python packages. For example:
OpenAI:
pip install openai
Hugging Face Transformers (for local models):
pip install transformers
Ollama (for local LLaMA-based models):
pip install ollama
❖ Configure Agents in agents.yaml
Modify the agents.yaml file to specify the desired model and provider for each agent. For example:
# agents.yaml
researcher:
  role: "AI Researcher"
  goal: "Conduct in-depth research on specified topics"
  provider: "openai"        # Options: openai, huggingface, ollama
  model: "gpt-4"            # Model identifier for the provider
  api_key: "OPENAI_API_KEY" # Optional: if required by the provider

reporting_analyst:
  role: "Data Analyst"
  goal: "Summarize findings into actionable insights"
  provider: "huggingface"
  model: "gpt2"
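If you want to sanity-check this file from Python (for instance, when debugging why an agent picked the wrong provider), here is a minimal sketch using PyYAML; the printed fields are just the ones defined above:

import yaml  # pip install pyyaml

# Load agents.yaml and inspect the parsed configuration.
with open("agents.yaml") as f:
    agents_config = yaml.safe_load(f)

print(agents_config["researcher"]["provider"])  # -> openai
print(agents_config["researcher"]["model"])     # -> gpt-4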
❖ Set API Keys in .env
Store sensitive information, like API keys, securely in the .env file:
OPENAI_API_KEY=sk-...
HUGGINGFACE_API_TOKEN=hf_...
OLLAMA_API_KEY=...
The .env file ensures secure and centralized management of credentials.
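At runtime, these keys can be pulled into the process environment before any agent is created; a minimal sketch, assuming the python-dotenv package:

import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # Reads key=value pairs from .env into os.environ
openai_key = os.environ["OPENAI_API_KEY"]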
❖ Update Agent Initialization in group.py
Modify the group.py file to ensure agents are initialized with the specified provider and model:
from peepsai import Agent

# Inside your PeepsBase subclass in group.py:
@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config['researcher'],
        tools=[...],  # Add any required tools here
        provider=self.agents_config['researcher']['provider'],
        model=self.agents_config['researcher']['model']
    )
Advanced LLM Connection Features
❖ Local Model Hosting
For privacy and cost-efficiency, Peeps AI supports hosting local models using tools like Ollama and Hugging Face. For example:
Use Ollama to pull and serve a LLaMA-based model:
ollama pull llama2:13b
ollama serve
Then configure your agent to connect locally:
provider: "ollama" model: "llama-2-13b" api_url: "http://localhost:8000"
❖ Dynamic Model Switching
Peeps AI enables agents to dynamically switch between models based on task complexity. For example:
Simple summarization tasks: GPT-3.5
Complex, detailed analysis: GPT-4
This can be implemented in your group.py logic:
def select_model(task_complexity):
    # Route complex tasks to the stronger (and costlier) model.
    if task_complexity > 7:
        return "gpt-4"
    return "gpt-3.5-turbo"
❖ Fine-Tuned Models
If your use case requires domain-specific knowledge, Peeps AI supports integrating fine-tuned models. Fine-tune a Hugging Face model, then specify it in agents.yaml:
model: "path/to/fine-tuned-model"
Tips for Optimizing LLM Integration
Batch Processing: Reduce API calls and improve efficiency by batching multiple agent queries.
Context Window Management: Optimize prompt length to prevent exceeding token limits.
Caching Results: Use caching to store frequent queries and avoid redundant LLM calls.
Rate Limits: Be mindful of API rate limits and implement retry mechanisms where necessary; a sketch of both techniques follows this list.
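As a sketch of the caching and retry tips above (call_llm is a placeholder for your provider's client call, not a Peeps AI function):

import functools
import random
import time

def call_llm(prompt: str) -> str:
    # Placeholder: replace with your provider's client call.
    raise NotImplementedError

def query_with_retry(prompt: str, max_attempts: int = 5) -> str:
    # Exponential backoff with jitter to stay under provider rate limits.
    for attempt in range(max_attempts):
        try:
            return call_llm(prompt)
        except Exception:  # Narrow this to your provider's rate-limit error
            if attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt + random.random())

@functools.lru_cache(maxsize=256)
def cached_query(prompt: str) -> str:
    # Identical prompts are answered from the in-process cache.
    return query_with_retry(prompt)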
Example: Connecting Peeps AI to OpenAI GPT
Below is a practical example of integrating OpenAI GPT with Peeps AI:
agents.yaml
ai_writer:
  role: "Creative Writer"
  goal: "Generate engaging content based on prompts"
  provider: "openai"
  model: "gpt-4"
main.py
from peepsai import Peeps, Process  # Core classes (import paths may differ in your setup)
from peepsai.project import PeepsBase, group

class WritingGroup(PeepsBase):
    @group
    def creative_group(self) -> Peeps:
        return Peeps(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential
        )

if __name__ == "__main__":
    inputs = {"prompt": "Write a blog post about AI ethics."}
    WritingGroup().creative_group().kickoff(inputs=inputs)
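To try it out, run python main.py from the project root; the writer agent should generate the requested post using the configured GPT-4 model.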
Common Issues and Troubleshooting
❖ API Connection Errors:
Ensure the correct API key and endpoint are specified in .env.
Check your network configuration for any firewall restrictions.
❖ Model Performance Issues:
Use the latest models for improved accuracy.
Experiment with different models to find the best fit for your tasks.
❖ Local Model Deployment Errors:
Verify that the model server (e.g., Ollama) is running and accessible.