Guides
These guides walk you through common workflows and advanced use cases in ProxifAI.
Setting Up a Development Workflow
A complete setup for a team using ProxifAI for the full development lifecycle:
Create an Organization
Set up your organization, invite team members, and configure roles. See Creating an Organization.
Create Projects and Repositories
Create projects for planning and repositories for code. Link them together so issues connect to PRs automatically.
Set Up CI/CD Pipelines
Define YAML-based pipelines in your repositories. Configure secrets, approval gates, and deploy environments.
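The exact pipeline schema depends on your ProxifAI version, so the sketch below is hypothetical: the field names (stages, approval, secrets, environment) are illustrative, not the authoritative syntax. It shows the three pieces this step mentions, a secret, an approval gate, and a deploy environment:

```yaml
# Hypothetical pipeline definition; field names are illustrative,
# not the authoritative ProxifAI schema.
name: build-and-deploy
on: [push]
stages:
  - name: test
    image: node:20
    script:
      - npm ci
      - npm test
  - name: deploy
    environment: production
    approval: required        # approval gate: a human must sign off before this stage runs
    secrets: [DEPLOY_TOKEN]   # injected from repository secrets, never committed to the repo
    script:
      - ./deploy.sh
```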
Configure AI Agents
Set up AI models under Settings → AI Configuration. Configure agent images and compute resources.
Build Automation Workflows
Create workflows in the Flows section to automate repetitive tasks — issue triage, code review, testing, and more.
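As a sketch of what such a workflow might look like, here is a hypothetical flow definition for AI code review; the trigger and action names (trigger, event, agent, post_comments) are assumptions for illustration, not the real Flows syntax:

```yaml
# Hypothetical flow definition; trigger and action names are illustrative.
name: ai-code-review
trigger:
  event: pull_request.opened   # fire whenever a PR is opened
actions:
  - agent: code-reviewer       # the AI agent configured under AI Configuration
    input: "{{ pull_request.diff }}"
    post_comments: true        # leave review comments directly on the PR
```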
Common Workflows
Issue-to-PR Flow
- Create an issue describing the work
- Create a branch referencing the issue
- Push code and open a pull request
- CI pipeline runs automatically
- Team reviews and approves the PR
- Merge closes the linked issue
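The branch-to-issue link in step 2 typically relies on a naming convention. The "number-slug" pattern below is an assumption for illustration (check your ProxifAI settings for the pattern that actually auto-links branches); a small helper shows the idea:

```python
import re

def branch_for_issue(issue_number: int, title: str) -> str:
    """Derive a branch name like '142-fix-login-timeout' from an issue.

    The 'number-slug' convention is illustrative only; confirm the
    pattern ProxifAI recognizes for automatic issue linking.
    """
    # Lowercase the title, collapse non-alphanumeric runs to hyphens,
    # and trim stray hyphens from the ends.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{issue_number}-{slug}"

print(branch_for_issue(142, "Fix login timeout!"))  # 142-fix-login-timeout
```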
AI-Powered Code Review
- Open a pull request
- The AI agent is triggered via a workflow
- Agent clones the repo, reviews the diff, and leaves comments
- Developer addresses feedback
- Agent re-reviews and approves
Automated Issue Triage
- New issues arrive via the API, CLI, or web
- A workflow trigger fires on issue creation
- AI agent analyzes the issue content
- Agent assigns priority, labels, and a suggested assignee
- Team reviews and adjusts as needed
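In ProxifAI the analysis step is performed by the AI agent; the toy function below is only a stand-in that shows the shape of a triage result (priority, labels), using keyword rules where the agent would use a model:

```python
def triage(issue_title: str, issue_body: str) -> dict:
    """Toy stand-in for the AI triage step: derive a priority and
    labels from keywords. The real analysis is done by the agent;
    these rules exist purely to illustrate the output shape."""
    text = f"{issue_title} {issue_body}".lower()
    labels = []
    if "crash" in text or "error" in text:
        labels.append("bug")
    if "docs" in text:
        labels.append("documentation")
    # Escalate anything that mentions a crash.
    priority = "high" if "crash" in text else "normal"
    return {"priority": priority, "labels": labels}

print(triage("App crash on login", "Stack trace attached"))
# {'priority': 'high', 'labels': ['bug']}
```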
Sprint Planning
- Create a new sprint with start and end dates
- Review the backlog and drag issues into the sprint
- Assign issues to team members
- Track progress throughout the sprint
- Close the sprint and review completion stats
Integrations
Slack Integration
Connect Slack to receive notifications and interact with ProxifAI:
- Receive alerts for PR reviews, pipeline failures, and workflow approvals
- Use slash commands to create issues and check status
- Use interactive buttons for quick actions
Webhook Integration
Set up outbound webhooks for custom integrations:
- Subscribe to specific event types
- View delivery logs and retry failed deliveries
- Test webhooks before going live
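Webhook payloads are commonly signed with an HMAC so receivers can reject forged deliveries. Assuming ProxifAI signs payloads this way (the header format and "sha256=" prefix below are assumptions borrowed from the common convention; confirm them in your webhook settings), verification on the receiving end looks like:

```python
import hashlib
import hmac

def verify_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    """Check an HMAC-SHA256 hex digest of the raw request body against
    the signature header. The 'sha256=' prefix mirrors the widespread
    convention; confirm the actual header format ProxifAI sends."""
    expected = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(expected, signature_header)

body = b'{"event": "issue.created"}'
sig = "sha256=" + hmac.new(b"whsec_test", body, hashlib.sha256).hexdigest()
print(verify_signature("whsec_test", body, sig))  # True
```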
AI Gateway as API
Use the AI Gateway as a drop-in, OpenAI-compatible replacement for the OpenAI API. The gateway is mounted at /api/v1/llm/* on your deployment host:
```bash
curl -X POST https://your-host/api/v1/llm/v1/chat/completions \
  -H "Authorization: Bearer pfai_your_token" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4.6",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```
The AI Gateway ships with three provider types out of the box — openai, anthropic, gemini — plus a generic openai-compatible driver for any OpenAI-shaped endpoint (Ollama, vLLM, OpenRouter, your own proxy). See AI Gateway for the full default model catalog.