Productive Devs Need Great Tools. So Does Your AI.
The complete platform for building, testing, and scaling AI-powered software engineering products.
from anthropic import Anthropic
from runloop_api_client import Runloop

ai = Anthropic()
completion = ai.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Generate a python script to generate a maze!"}]
)

client = Runloop()
devbox = client.devboxes.create()
client.devboxes.write_file(
    id=devbox.id,
    contents=completion.content[0].text,
    file_path="maze_generator.py"
)
diagnostics = client.devboxes.language_server.get_diagnostics(
    devbox.id, file="maze_generator.py"
)
client.devboxes.language_server.apply_autofixes(devbox.id, diagnostics=diagnostics)
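For a sense of what the written file might contain, here is a minimal, self-contained script of the kind a model typically returns for that prompt: an iterative depth-first-search ("recursive backtracker") maze generator. It is purely illustrative and uses no Runloop or model APIs.

```python
import random

def generate_maze(width, height):
    """Generate a maze as text: '#' for walls, ' ' for passages.

    Uses iterative depth-first search (the recursive-backtracker
    algorithm): carve from a random unvisited neighbor until stuck,
    then backtrack.
    """
    # Character grid: each cell plus the walls around it.
    grid = [["#"] * (2 * width + 1) for _ in range(2 * height + 1)]
    stack = [(0, 0)]
    visited = {(0, 0)}
    grid[1][1] = " "  # carve the starting cell
    while stack:
        x, y = stack[-1]
        # Orthogonal neighbors still inside the grid and unvisited.
        neighbors = [
            (nx, ny)
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in visited
        ]
        if neighbors:
            nx, ny = random.choice(neighbors)
            # Knock down the wall between (x, y) and (nx, ny),
            # then carve the neighbor cell itself.
            grid[y + ny + 1][x + nx + 1] = " "
            grid[2 * ny + 1][2 * nx + 1] = " "
            visited.add((nx, ny))
            stack.append((nx, ny))
        else:
            stack.pop()  # dead end: backtrack
    return "\n".join("".join(row) for row in grid)

if __name__ == "__main__":
    print(generate_maze(8, 8))
```

Because the algorithm only ever removes a wall between a visited cell and a new one, the result is a perfect maze: every cell is reachable by exactly one path.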
import os

from openai import OpenAI
from runloop_api_client import Runloop

ai_client = OpenAI(
    base_url="https://api-inference.huggingface.co/v1/",
    api_key=os.environ.get("HF_API_KEY"),
)
completion = ai_client.chat.completions.create(
    model="meta-llama/CodeLlama-70b-hf",
    messages=[{"role": "user", "content": "Generate a python script to generate a maze!"}]
)

runloop_client = Runloop()
devbox = runloop_client.devboxes.create()
runloop_client.devboxes.write_file(
    id=devbox.id,
    contents=completion.choices[0].message.content,
    file_path="maze_generator.py"
)
snapshot = runloop_client.devboxes.snapshot_disk(id=devbox.id)
runloop_client.devboxes.create(snapshot_id=snapshot.id)
// Start a benchmark scenario run. (The original snippet was truncated
// before this call; the method name below is assumed.)
const scenarioRun = await runloop.scenarioRuns.startRun({
  name: 'swe-bench-scipy-1',
  run_name: 'optional run name'
})
const devboxId = scenarioRun.startingEnvironment
await runloop.devboxes.awaitRunning(devboxId)
const myAgent = new MyAgent({
  prompt: scenarioRun.scenario.context.problemStatement,
  tools: [runloop.devboxes.shellTools(devboxId)],
})
const validateResults = await runloop.scenarioRuns.runScoring({
  runId: scenarioRun.id
})
console.log(validateResults)
Observable Development Environment for AI
Advanced Code Understanding Tools
AI Performance Tracking and Improvement
Empower Your AI, Accelerate Your Innovation
Build, refine, and scale your AI-powered development solutions with confidence.
The Platform for AI-Driven Software Engineering Tools
Discover how Runloop empowers teams at every stage to build, test, and optimize AI solutions for software engineering.
AI Pair Programming Assistant
Your company is creating an AI that provides real-time coding suggestions and assistance.
High-Performance Infrastructure
Ensure your AI responds rapidly to user inputs.
Contextual Code Analysis
Utilize deep code understanding for relevant recommendations.
Suggestion Quality Metrics
Evaluate the helpfulness and accuracy of your AI-generated code snippets and advice.


AI-Enhanced Code Review System
Your product streamlines code reviews using AI to identify issues and suggest improvements.
Parallel Processing Capabilities
Analyze multiple pull requests concurrently, enhancing scalability.
Customizable Evaluation Criteria
Adapt your AI's review standards to different coding guidelines.
Review Quality Assessments
Measure the accuracy and relevance of your AI-generated comments.
Intelligent Test Generation Platform
You're developing an AI solution that automatically generates comprehensive test coverage.
Language-Agnostic Environments
Deploy your AI across various programming languages.
Development Tool Integrations
Leverage IDE and language server connections for precise code analysis.
Test Coverage Evaluations
Quantify the comprehensiveness and effectiveness of your AI-generated tests.

Scale your AI solution faster.
Stop building infrastructure. Start building your AI engineering product.