E2B Sandboxes

E2B provides secure cloud sandboxes where AI agents can execute code, run shell commands, and interact with the filesystem, all in an isolated environment that spins up in milliseconds and self-destructs when done.

The problem: sandboxes are ephemeral. Everything written inside one disappears when it shuts down. By installing the Deeplake CLI inside a sandbox and mounting a table, you get persistent, cloud-backed storage that survives sandbox lifecycles, syncs across multiple sandboxes, and is queryable.

Objective

Install Deeplake CLI inside an E2B sandbox, mount a table, and use it as persistent storage so files written in one sandbox are immediately available in another.

Architecture

┌──────────────────────────────────────┐
│  Your Application (Python / TS)      │
│  Uses E2B SDK to create sandboxes    │
└──────────────┬───────────────────────┘
               │  Spawns sandboxes
    ┌──────────▼──────────┐  ┌──────────────────────┐
    │  E2B Sandbox A      │  │  E2B Sandbox B       │
    │  ├─ Deeplake CLI    │  │  ├─ Deeplake CLI     │
    │  └─ /mnt/data/      │  │  └─ /mnt/data/       │
    │     (FUSE mount)    │  │     (FUSE mount)     │
    └──────────┬──────────┘  └──────────┬───────────┘
               │                        │
               └────────┬───────────────┘
                        │  Same table
               ┌────────▼───────────┐
               │  Deeplake Cloud    │
               │  (persistent data) │
               └────────────────────┘

Both sandboxes mount the same Deeplake table. Writes from Sandbox A are visible in Sandbox B within seconds.

Prerequisites

  • An E2B API key
  • A Deeplake API token
  • Node.js / Bun (for the TypeScript examples) or Python 3.8+
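
Before creating any sandboxes, it can save a debugging round-trip to fail fast when credentials are missing. A minimal sketch; the DEEPLAKE_* names match the examples in this guide, and E2B_API_KEY is the variable the E2B SDK reads by default:

```python
import os

# Environment variables the examples in this guide rely on
REQUIRED_KEYS = ["E2B_API_KEY", "DEEPLAKE_API_KEY", "DEEPLAKE_ORG_ID"]

def missing_keys(env):
    """Return the names of required credentials absent (or empty) in env."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]

if __name__ == "__main__":
    missing = missing_keys(os.environ)
    if missing:
        raise SystemExit(f"Missing credentials: {', '.join(missing)}")
```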

Step 1: Set Up the Sandbox

Install the Deeplake CLI and configure FUSE, all inside the sandbox (authentication comes in Step 2):

TypeScript:

import { Sandbox } from "e2b";

const sandbox = await Sandbox.create("base", {
  timeoutMs: 300_000, // 5 min lifetime
});

// Install FUSE + Deeplake CLI
await sandbox.commands.run(
  "sudo apt-get update -qq && sudo apt-get install -y -qq fuse > /dev/null 2>&1 && " +
  "sudo chmod 666 /dev/fuse && " +
  "sudo sed -i 's/#user_allow_other/user_allow_other/' /etc/fuse.conf && " +
  "curl -fsSL deeplake.ai/install | bash",
  { timeoutMs: 60_000 }
);

Python:

from e2b import Sandbox

sandbox = Sandbox.create("base", timeout=300)

# Install FUSE + Deeplake CLI
sandbox.commands.run(
    "sudo apt-get update -qq && sudo apt-get install -y -qq fuse > /dev/null 2>&1 && "
    "sudo chmod 666 /dev/fuse && "
    "sudo sed -i 's/#user_allow_other/user_allow_other/' /etc/fuse.conf && "
    "curl -fsSL deeplake.ai/install | bash",
    timeout=60,
)

Step 2: Authenticate and Mount

Write Deeplake credentials into the sandbox and mount a table:

TypeScript:

const DEEPLAKE_API_KEY = process.env.DEEPLAKE_API_KEY!;
const ORG_ID = process.env.DEEPLAKE_ORG_ID!;

// Write credentials
await sandbox.commands.run(
  `mkdir -p ~/.deeplake && echo '${JSON.stringify({
    token: DEEPLAKE_API_KEY,
    orgId: ORG_ID,
    apiUrl: "https://api.deeplake.ai",
  })}' > ~/.deeplake/credentials.json`,
  { timeoutMs: 5_000 }
);

// Mount a table
await sandbox.commands.run(
  'deeplake init --table "my_agent_data" --path /mnt/data',
  { timeoutMs: 60_000 }
);

Python:

import os, json

token = os.environ["DEEPLAKE_API_KEY"]
org_id = os.environ["DEEPLAKE_ORG_ID"]

# Write credentials
creds = json.dumps({
    "token": token,
    "orgId": org_id,
    "apiUrl": "https://api.deeplake.ai",
})
sandbox.commands.run(
    f"mkdir -p ~/.deeplake && echo '{creds}' > ~/.deeplake/credentials.json",
    timeout=5,
)

# Mount a table
sandbox.commands.run(
    'deeplake init --table "my_agent_data" --path /mnt/data',
    timeout=60,
)
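
One caveat with the bare echo '<json>' pattern above: a token containing a single quote or spaces would break the command. A hedged alternative is to build the same command with shlex.quote; the helper name is ours, not part of any SDK:

```python
import json
import shlex

def credentials_command(token, org_id):
    """Build the shell command that writes ~/.deeplake/credentials.json,
    quoting the JSON payload so special characters in the token are safe."""
    creds = json.dumps({
        "token": token,
        "orgId": org_id,
        "apiUrl": "https://api.deeplake.ai",
    })
    return (
        f"mkdir -p ~/.deeplake && "
        f"echo {shlex.quote(creds)} > ~/.deeplake/credentials.json"
    )
```

Pass the result to sandbox.commands.run() exactly as in the snippet above.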

Step 3: Read and Write Files

Once mounted, the sandbox can use standard filesystem commands:

TypeScript:

// Write a file: it persists in Deeplake
await sandbox.commands.run(
  'echo "Analysis completed at $(date)" > /mnt/data/results.txt'
);

// Read it back
const result = await sandbox.commands.run("cat /mnt/data/results.txt");
console.log(result.stdout);

// List all stored files
const listing = await sandbox.commands.run("ls -la /mnt/data/");
console.log(listing.stdout);

Python:

# Write a file: it persists in Deeplake
sandbox.commands.run(
    'echo "Analysis completed at $(date)" > /mnt/data/results.txt'
)

# Read it back
result = sandbox.commands.run("cat /mnt/data/results.txt")
print(result.stdout)

# List all stored files
listing = sandbox.commands.run("ls -la /mnt/data/")
print(listing.stdout)

After the sandbox is destroyed, the files remain in Deeplake. Mount the same table in a new sandbox (or on your local machine) and they're still there.

Step 4: Cross-Sandbox Sync

Two sandboxes mounting the same table see each other's files within seconds:

TypeScript:

// Spin up two sandboxes
const sandboxA = await Sandbox.create("base", { timeoutMs: 300_000 });
const sandboxB = await Sandbox.create("base", { timeoutMs: 300_000 });

// ... install CLI and authenticate in both (same steps as above) ...

// Both mount the same table
await sandboxA.commands.run(
  'deeplake init --table "shared_workspace" --path /mnt/data',
  { timeoutMs: 60_000 }
);
await sandboxB.commands.run(
  'deeplake init --table "shared_workspace" --path /mnt/data',
  { timeoutMs: 60_000 }
);

// Write from sandbox A
await sandboxA.commands.run(
  'echo "written from sandbox A" > /mnt/data/hello.txt'
);

// Read from sandbox B (after a brief sync delay)
const check = await sandboxB.commands.run("cat /mnt/data/hello.txt");
console.log(check.stdout); // "written from sandbox A"

Python:

sandbox_a = Sandbox.create("base", timeout=300)
sandbox_b = Sandbox.create("base", timeout=300)

# ... install CLI and authenticate in both (same steps as above) ...

# Both mount the same table
sandbox_a.commands.run(
    'deeplake init --table "shared_workspace" --path /mnt/data',
    timeout=60,
)
sandbox_b.commands.run(
    'deeplake init --table "shared_workspace" --path /mnt/data',
    timeout=60,
)

# Write from sandbox A
sandbox_a.commands.run(
    'echo "written from sandbox A" > /mnt/data/hello.txt'
)

# Read from sandbox B (after a brief sync delay)
check = sandbox_b.commands.run("cat /mnt/data/hello.txt")
print(check.stdout)  # "written from sandbox A"
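
Because of that sync delay, a read issued immediately after a remote write can find nothing. A small retry loop absorbs it; wait_for_file and its defaults are illustrative, not part of the E2B or Deeplake SDKs:

```python
import time

def wait_for_file(read_fn, retries=10, delay=0.5):
    """Call read_fn until it returns content, retrying while the file
    has not yet synced. read_fn should return None (or raise) when the
    file is not visible yet."""
    for _ in range(retries):
        try:
            content = read_fn()
        except Exception:
            content = None
        if content:
            return content
        time.sleep(delay)
    raise TimeoutError("file did not appear within the retry window")
```

For example: wait_for_file(lambda: sandbox_b.commands.run("cat /mnt/data/hello.txt").stdout).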

Use Case: Agent with Sandboxed Execution

A common pattern: your AI agent runs untrusted code in an E2B sandbox but needs to persist results, logs, or artifacts across runs.

Agent prompt: "Run this Python script and save the results"
┌───────────────────────────────────┐
│  E2B Sandbox                      │
│  1. Run user's Python script      │
│  2. Save output to /mnt/data/     │
│     (Deeplake mount)              │
│  3. Sandbox auto-destroys         │
└───────────────────────────────────┘
Results persist in Deeplake.
Next sandbox (or local machine) can read them.

TypeScript:

async function runInSandbox(code: string, table: string) {
  const sandbox = await Sandbox.create("base", { timeoutMs: 300_000 });

  try {
    // Setup (install CLI, auth, mount): use a pre-built E2B template
    // to skip this in production (see Tips below)
    await setupDeepLake(sandbox, table);

    // Write the user's code into the sandbox
    await sandbox.files.write("/tmp/task.py", code);

    // Execute it
    const result = await sandbox.commands.run(
      "python3 /tmp/task.py > /mnt/data/output.txt 2>&1",
      { timeoutMs: 120_000 }
    );

    // Save execution metadata
    await sandbox.commands.run(
      `echo '{"exit_code": ${result.exitCode}, "timestamp": "'$(date -Iseconds)'"}' > /mnt/data/meta.json`
    );

    return result.exitCode === 0;
  } finally {
    await sandbox.kill();
    // Files in /mnt/data/ survive: they're in Deeplake
  }
}

Python:

def run_in_sandbox(code: str, table: str) -> bool:
    sandbox = Sandbox.create("base", timeout=300)

    try:
        # Setup (install CLI, auth, mount): use a pre-built E2B template
        # to skip this in production (see Tips below)
        setup_deeplake(sandbox, table)

        # Write the user's code into the sandbox
        sandbox.files.write("/tmp/task.py", code)

        # Execute it
        result = sandbox.commands.run(
            "python3 /tmp/task.py > /mnt/data/output.txt 2>&1",
            timeout=120,
        )

        # Save execution metadata
        sandbox.commands.run(
            f'echo \'{{"exit_code": {result.exit_code}, "timestamp": "\'$(date -Iseconds)\'"}}\' > /mnt/data/meta.json'
        )

        return result.exit_code == 0
    finally:
        sandbox.kill()
        # Files in /mnt/data/ survive: they're in Deeplake

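A caller can then decide success from the meta.json the sandbox leaves behind; the field names below match the snippet above:

```python
import json

def run_succeeded(meta_json):
    """Interpret the meta.json written after each run: exit_code 0 means
    the user's script completed successfully."""
    meta = json.loads(meta_json)
    return meta.get("exit_code") == 0
```
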
Tips

Use E2B templates for faster boot

Installing FUSE and Deeplake CLI on every sandbox takes ~30 seconds. Create a custom E2B template with everything pre-installed:

# e2b.Dockerfile
FROM e2b/base

RUN apt-get update -qq && apt-get install -y -qq fuse && \
    chmod 666 /dev/fuse && \
    sed -i 's/#user_allow_other/user_allow_other/' /etc/fuse.conf && \
    curl -fsSL deeplake.ai/install | bash

Then create sandboxes from your template. They boot in under 1 second with Deeplake CLI ready.

Unmount before kill

Call deeplake umount /mnt/data before destroying the sandbox to flush any pending writes:

await sandbox.commands.run("deeplake umount /mnt/data");
await sandbox.kill();

One table per task, or one shared table

  • Per-task tables: Each sandbox gets its own table (e.g., task_{id}). Clean isolation, easy cleanup.
  • Shared table: All sandboxes mount the same table. Useful when agents need to collaborate or accumulate data.
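
For the per-task pattern, a small helper keeps table names predictable. The character whitelist here is an assumption; check Deeplake's actual table-naming rules:

```python
import re

def task_table(task_id):
    """Derive a per-task table name like task_{id}, replacing characters
    that may not be valid in a table name (assumed whitelist)."""
    safe = re.sub(r"[^A-Za-z0-9_-]", "_", task_id)
    return f"task_{safe}"
```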

What to try next