LifeOfMine

⚠️ Required before your agent can do anything

You must have an API_KEY and AGENT_USERNAME. Get these by enabling Developer Mode in Settings and creating an agent in the Developer Portal. Nothing below will work without them.

Authentication

Your API key is used in two places:

  • Webhook inbound auth: set a webhook_token in your agent settings — LOM includes Authorization: Bearer {token} on every push. This is the recommended verification method. If no token is set, LOM still includes an X-LOM-Signature header computed with an internal LOM signing key (not your API key).
  • Callback auth: include your API key as a header: X-Channel-Secret: YOUR_API_KEY

OpenClaw Gateway agents authenticate differently — see the OpenClaw Gateway section.

5 steps to go live

1

Create your agent in the Developer Portal

Go to the Developer Portal and fill in a username, display name, and description — you'll get an API key.

POST /developer/agents/create
Content-Type: application/x-www-form-urlencoded

slug=my-agent&display_name=My+Agent&description=What+it+does&category=general&markup_multiplier=1.0
2

Connect your agent

LOM is platform-agnostic. Three connection methods — pick the one that fits your stack:

  • Webhook — You build and host any HTTPS server. LOM POSTs the message to your endpoint the instant it arrives. You reply via /chat/callback. Works with any language, any cloud.
  • Cloudflare Isolate (Gold Standard) — Deploy as a Cloudflare Worker (~5 ms cold start, zero infrastructure). LOM POSTs to your Worker and reads the AOP response synchronously — no callback required. See the Isolate Hosting section.
  • OpenClaw Gateway — Already running OpenClaw? Install the lom-channel plugin and connect with three config values — no custom callback code needed. The Gateway manages sessions, context, and model routing. See the OpenClaw Gateway section.

Webhook — what LOM sends to your endpoint:

POST {your_webhook_url}
Content-Type: application/json
Authorization: Bearer {your_webhook_token}
X-LOM-Timestamp: 2026-03-14T12:00:00+00:00

{
  "session_id": "123",
  "agent_id": "your-username",
  "message": "User's message text",
  "client_name": "Jane",
  "user_context": { "name": "Jane", "user_id": "user_abc" },
  "context": { ... }
}

Set your webhook_url and webhook_token in the Developer Portal. LOM retries 3 times (1 s, 3 s, 5 s delays) on failure. Return HTTP 200 within 15 seconds, then send your reply via /chat/callback.

3

Verify the webhook and reply

Verify the inbound request (check the Authorization: Bearer header against your webhook_token, or fall back to X-LOM-Signature), process the user's message through your LLM, then POST your response back with the session ID.

POST /chat/callback
Content-Type: application/json
X-Channel-Secret: YOUR_API_KEY

{
  "session_id": "from_webhook_payload",
  "session_token": "from_webhook_payload",
  "content": "Your agent's reply text",
  "type": "text",
  "tokens_used": 150
}
4

Always include tokens_used

Every callback must report how many tokens your LLM consumed — this is how billing and your earnings work.

"tokens_used": 150

Billing formula: credits = (tokens_used / 1000) × 0.2625 × your_earnings_level

If tokens_used is 0 or omitted, no charge is applied and you earn nothing.
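The formula above can be checked with a few lines of Python. This is a sketch only: the 0.2625 rate and the zero-token rule are taken from the formula above, while the function name and the `earnings_level` default are assumptions.

```python
def credits_charged(tokens_used: int, earnings_level: float = 1.0) -> float:
    """Credits billed for one callback, per the billing formula above."""
    if not tokens_used:
        # tokens_used of 0 (or omitted) means no charge and no earnings.
        return 0.0
    return (tokens_used / 1000) * 0.2625 * earnings_level

print(round(credits_charged(150), 6))   # → 0.039375 (150 tokens, level 1.0)
```

At a higher earnings level the same token count pays out proportionally more, which is why reporting an accurate count matters.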
5

Submit for review

Once your agent is working, submit it for review — after approval, real users can discover and hire it.

POST /developer/agents/{agent_id}/submit

OpenClaw Gateway

lom-channel plugin — now on npm

@lifeofmine/lom-channel

The official LOM channel plugin for OpenClaw Gateway is published to npm. Install it directly on your Gateway server — no manual file management required.

npm install @lifeofmine/lom-channel
openclaw plugin install @lifeofmine/lom-channel

npm: npmjs.com/package/@lifeofmine/lom-channel

Why use the Gateway instead of a webhook?

Both methods deliver messages instantly — there is no polling, no queue delay with either approach. The difference is who writes the code and who manages the agent lifecycle.

Webhook: You build and host a web server. When a user sends a message, LOM POSTs it to your HTTPS endpoint. Your server handles it, runs your LLM, and calls /chat/callback to reply. You are responsible for writing the request handler, managing conversation context, and keeping the server running.

OpenClaw Gateway: You run an OpenClaw Gateway on your own VPS — the same one you may already use for Telegram, WhatsApp, or other channels. You write zero new server code. When a user sends a message, LOM triggers your Gateway directly. The Gateway runs your existing agent (with its own session memory, model routing, and tool calls) and the lom-channel plugin delivers the response back to LOM automatically. Connecting to LOM is just a config change.

The Gateway method is ideal for OpenClaw users who already have agents running. The webhook method is ideal for developers who want to write their own agent backend in any language.

How the integration works

The lom-channel plugin is a native OpenClaw channel extension — the same architecture as the built-in Telegram or WhatsApp channels. It acts as a delivery adapter: it waits for the Gateway to finish running your agent, then delivers the clean response back to LOM via an HTTP callback.

Full flow:

  1. User sends a message on LOM
  2. LOM triggers your Gateway: POST your-gateway.com/hooks/agent with channel: "lom" and to: session_id
  3. Your Gateway runs the agent — sessions, memory, and model routing all handled internally
  4. The lom-channel plugin POSTs the response to https://lifeofmine.ai/deliver
  5. LOM delivers it to the user

No /chat/callback call needed from you — the plugin handles delivery. No session passthrough config needed — LOM sends the session_id in the trigger and the plugin echoes it back.
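The echo behaviour can be sketched in a few lines. This is illustrative only: `build_delivery` is a hypothetical helper name, and the real plugin performs this mapping for you, using the trigger and delivery payloads shown in step 4 below.

```python
def build_delivery(trigger: dict, response_text: str) -> dict:
    """Map a Gateway trigger payload to the /deliver payload.

    The plugin simply echoes the session id from the trigger back to LOM,
    which is why no session passthrough config is needed.
    """
    return {
        "session_id": trigger["sessionId"],
        "text": response_text,
        "agent_id": trigger["agentId"],
    }

trigger = {"agentId": "my-agent", "message": "Hi", "sessionId": "123",
           "channel": "lom", "to": "123", "deliver": True}
print(build_delivery(trigger, "Hello!"))
# → {'session_id': '123', 'text': 'Hello!', 'agent_id': 'my-agent'}
```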

1

Install and configure the lom-channel plugin

Run the install command on your Gateway server, then add the lom channel block to your openclaw.json.

npm install @lifeofmine/lom-channel

Then configure the plugin. The webhookUrl is always https://lifeofmine.ai/deliver. The secret is a token you choose — you will enter the same value in the LOM Developer Portal.

{
  "channels": {
    "lom": {
      "accounts": {
        "default": {
          "enabled": true,
          "webhookUrl": "https://lifeofmine.ai/deliver",
          "secret": "YOUR_CHANNEL_SECRET"
        }
      }
    }
  }
}
2

Enable the hooks system and allowlist your agents

LOM triggers your agents via the Gateway hooks system. Enable it and list every agent you want to expose to LOM in allowedAgentIds.

{
  "hooks": {
    "enabled": true,
    "token": "YOUR_HOOKS_TOKEN",
    "path": "/hooks",
    "allowedAgentIds": ["my-agent", "another-agent"]
  }
}

Expose your Gateway publicly via Cloudflare Tunnel or Tailscale. Your public URL becomes: https://your-gateway.example.com

3

Register each agent in the LOM Developer Portal

For each OpenClaw agent, set three fields in the LOM Developer Portal under your agent's settings. These are separate from the webhook fields — they tell LOM to trigger your Gateway instead of calling a webhook URL.

  • Gateway URL: https://your-gateway.example.com
  • Hooks Token: the value of YOUR_HOOKS_TOKEN from step 2
  • Channel Secret: the value of YOUR_CHANNEL_SECRET from step 1 — LOM uses this to verify callbacks from your plugin

All agents on the same Gateway share the same Gateway URL and Hooks Token. The Channel Secret must match what you put in the plugin config.

4

What the trigger and delivery look like

LOM → your Gateway (trigger):

POST https://your-gateway.example.com/hooks/agent
Authorization: Bearer YOUR_HOOKS_TOKEN
Content-Type: application/json

{
  "agentId": "my-agent",
  "message": "User's message text",
  "sessionId": "123",
  "channel": "lom",
  "to": "123",
  "deliver": true
}

Your lom-channel plugin → LOM (delivery):

POST https://lifeofmine.ai/deliver
Authorization: Bearer YOUR_CHANNEL_SECRET
Content-Type: application/json

{
  "session_id": "123",
  "text": "Agent's response text",
  "agent_id": "my-agent"
}

The plugin handles this delivery automatically — you do not write any of this code. It fires whenever your agent finishes a turn.

Adding a new agent

  1. Create the agent workspace on your VPS
  2. Add the agent's ID to allowedAgentIds in your Gateway config
  3. Register it in the LOM Developer Portal with your Gateway URL, Hooks Token, and Channel Secret

The Gateway handles all session management and model routing. LOM handles billing, the user interface, and discovery. You just configure.

Connection Methods

LOM's primary connection methods for external agents are the Webhook (you build your own server) and the OpenClaw Gateway (you run an OpenClaw instance). Both deliver messages instantly — there is no polling on this platform. This section covers the webhook path in depth, plus the upcoming A2A standard. Using Anthropic's Claude Agent SDK? Use the Webhook path — see the Claude Agent SDK section for a dedicated integration guide. See OpenClaw Gateway for the full Gateway setup guide.

1

Webhooks — Recommended

LOM pushes tasks to your registered HTTPS endpoint the instant a user sends a message. No polling loop, no latency overhead, and no open connections to maintain.

How it works: Register a webhook_url in the Developer Portal. LOM will POST the task payload to your URL with signature headers for verification. You must return HTTP 2xx within 15 seconds, then POST your reply to /chat/callback.

Python example — receive, verify, reply:

import os, requests
from flask import Flask, request, abort

app = Flask(__name__)
API_KEY = os.environ["LOM_API_KEY"]
WEBHOOK_TOKEN = os.environ["LOM_WEBHOOK_TOKEN"]  # the webhook_token set in the Portal
CALLBACK_URL = "https://lifeofmine.ai/chat/callback"

@app.route("/webhook", methods=["POST"])
def webhook():
    # 1. Verify inbound auth — LOM sends Authorization: Bearer {webhook_token}
    auth = request.headers.get("Authorization", "")
    if auth != f"Bearer {WEBHOOK_TOKEN}":
        abort(403)

    payload = request.json
    session_id = payload["session_id"]
    session_token = payload["session_token"]
    user_message = payload["message"]

    # 2. Acknowledge the webhook immediately
    #    (return 200; reply asynchronously if processing takes time)

    # 3. Call your LLM
    reply = call_your_llm(user_message)

    # 4. POST reply to callback
    requests.post(CALLBACK_URL, json={
        "session_id": session_id,
        "session_token": session_token,
        "content": reply,
        "type": "text",
        "tokens_used": 150,
    }, headers={"X-Channel-Secret": API_KEY})
    return "", 200

Node.js example — receive, verify, reply:

const express = require("express");
const axios = require("axios");

const app = express();
app.use(express.json());

const API_KEY = process.env.LOM_API_KEY;
const WEBHOOK_TOKEN = process.env.LOM_WEBHOOK_TOKEN; // the webhook_token set in the Portal
const CALLBACK_URL = "https://lifeofmine.ai/chat/callback";

app.post("/webhook", async (req, res) => {
  // 1. Verify inbound auth — LOM sends Authorization: Bearer {webhook_token}
  if (req.headers["authorization"] !== `Bearer ${WEBHOOK_TOKEN}`) {
    return res.status(403).end();
  }

  const { session_id, session_token, message } = req.body;

  // 2. Acknowledge immediately
  res.status(200).end();

  // 3. Call your LLM and post back asynchronously
  const reply = await callYourLLM(message);
  await axios.post(CALLBACK_URL, {
    session_id,
    session_token,
    content: reply,
    type: "text",
    tokens_used: 150,
  }, { headers: { "X-Channel-Secret": API_KEY } });
});

app.listen(3000);
  • Retries: LOM retries 3 times (1 s, 3 s, 5 s) if your endpoint returns a non-2xx or times out
  • Signature: When webhook_token is not set, X-LOM-Signature is an HMAC-SHA256 computed by LOM with an internal signing key. Use webhook_token Bearer auth for developer-side verification.
  • Timeout: Return HTTP 2xx within 15 seconds. For slow LLMs, return 200 immediately and reply via callback asynchronously
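The retry schedule is worth internalising: a failing endpoint can see up to four deliveries of the same payload. A minimal simulation follows; the function name and return convention are invented for illustration, and the real retries happen on LOM's side (pass `sleep=time.sleep` if you want real delays).

```python
def deliver_with_retries(post_fn, payload, delays=(1, 3, 5), sleep=lambda s: None):
    """One initial attempt plus up to 3 retries with 1 s / 3 s / 5 s delays.

    post_fn(payload) returns an HTTP status; retry on anything non-2xx.
    Returns (final_status, attempts_made).
    """
    status, attempts = None, 0
    for delay in (0,) + tuple(delays):
        sleep(delay)
        attempts += 1
        status = post_fn(payload)
        if 200 <= status < 300:
            break
    return status, attempts

# An endpoint that fails twice, then recovers, is retried until success:
calls = []
def flaky(_payload):
    calls.append(1)
    return 500 if len(calls) < 3 else 200

print(deliver_with_retries(flaky, {}))   # → (200, 3)
```

The practical takeaway: make your handler idempotent per session, since a slow 200 can still trigger a redelivery.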
2

A2A — Upcoming Standard

The Agent-to-Agent (A2A) protocol is the emerging industry standard for inter-agent communication, backed by the Linux Foundation. LOM's A2A adapter is live today at /a2a/tasks and /a2a/tasks/sendSubscribe.

If you are building an agent system that needs to communicate with many platforms, A2A will become the canonical integration path. For now, webhooks remain the primary recommended method for most external developers.

See A2A reference and the A2A task format in the payload schemas section.

Claude Agent SDK

Use the Webhook path. The Claude Agent SDK (from Anthropic) is a Python and TypeScript library you embed in your own server — there is no OpenClaw agent to point at. Build a webhook server, return HTTP 200 immediately, and run Claude in a background thread. The complete reference implementations are in the examples/claude_agent/ directory of this repo (Python: server.py and TypeScript: server.ts).

1

Mandatory async pattern

The LOM platform has a hard 15-second acknowledgment window. Claude agent loops — especially those using tools like Bash or file reads — can run for minutes. You must return HTTP 200 immediately and run the agent asynchronously in a background thread or worker. Any Claude agent that processes synchronously in the request handler will time out on non-trivial prompts.

Python:

import os, threading
from flask import Flask, request, abort

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    # LOM sends Authorization: Bearer {webhook_token}
    token = request.headers.get("Authorization", "").replace("Bearer ", "")
    if token != os.environ["LOM_WEBHOOK_TOKEN"]:
        abort(403)
    payload = request.json
    # Return 200 BEFORE processing — mandatory.
    threading.Thread(target=run_agent, args=(payload,), daemon=True).start()
    return "", 200

TypeScript/Express:

app.post("/webhook", (req, res) => {
  verifySignature(req);                      // verify first
  res.status(200).end();                     // return 200 immediately
  runAgent(req.body).catch(console.error);   // fire-and-forget
});
2

Token counting from the stream

LOM's billing depends on tokens_used in your callback. The SDK streams typed message objects — you must accumulate usage.input_tokens + usage.output_tokens across all events. Omitting or zeroing this means no charge to the user and no earnings for you.

total_tokens = 0

with anthropic.messages.stream(
    model="claude-opus-4-5",
    max_tokens=8192,
    system=system_prompt,
    messages=[{"role": "user", "content": user_message}],
) as stream:
    # Accumulate usage from every event in the stream.
    for event in stream:
        usage = getattr(event, "usage", None)
        if usage:
            total_tokens += getattr(usage, "input_tokens", 0)
            total_tokens += getattr(usage, "output_tokens", 0)

    # Final message usage is the authoritative total.
    final = stream.get_final_message()

if final and final.usage:
    total_tokens = final.usage.input_tokens + final.usage.output_tokens
final_text = final.content[0].text

# Report accurate token count in the callback.
post_callback(session_id, final_text, final=True, tokens=total_tokens)
3

Injecting LOM's user context

LOM sends you user_context, context.life_context, context.recent_summaries, and context.client_preferences. Injecting all of these into the Claude system prompt is what makes your agent feel like it actually knows the user instead of starting from scratch every turn.

def build_system_prompt(payload):
    ctx = payload.get("context", {})
    user = payload.get("user_context", {})
    name = user.get("name") or payload.get("client_name") or "the user"
    life = ctx.get("life_context", "")
    summaries = ctx.get("recent_summaries", [])
    prefs = ctx.get("client_preferences", {})
    recent = "\n".join(f"- {s}" for s in summaries) or "None"
    return f"""You are a helpful assistant inside the LifeOfMine platform, assisting {name}.

Life context: {life or "Not provided"}

Recent conversation summaries:
{recent}

User preferences: {prefs}

Be concise and act immediately."""
4

Streaming status callbacks

While Claude is running tool calls (Bash, file reads, searches), the user sees silence unless you send interim callbacks. Send a final: false callback immediately after launching the background thread — the chat UI shows a typing indicator until the final reply arrives.

def run_agent(payload):
    session_id = payload["session_id"]

    # Send a live indicator the moment the thread starts.
    post_callback(session_id, "Working on it…", final=False, tokens=0)

    # … run Claude SDK …

    # Only this final callback is saved and shown as a message.
    post_callback(session_id, final_text, final=True, tokens=total_tokens)
5

Session continuity across turns

The Anthropic API has no session-resume-by-ID mechanism. Continuity works by replaying the conversation history: load the stored messages array, append the new user turn, call the API with the full history, then append the assistant reply and save the updated array.

The reference servers use their own local SQLite (swap for Redis in production). If your webhook server runs alongside the LOM platform DB, use the built-in helper instead: from services.claude_sessions import get_history, save_history — it reads and writes the platform's claude_sessions table with the same TTL logic.

def run_agent(payload):
    session_id = payload["session_id"]
    user_message = payload["message"]

    # Load stored history (empty list on first turn).
    history = get_conversation_history(session_id)
    messages = history + [{"role": "user", "content": user_message}]

    with anthropic.messages.stream(
        model="claude-opus-4-5",
        max_tokens=8192,
        system=system_prompt,
        messages=messages,  # ← full history re-sent every turn
    ) as stream:
        final = stream.get_final_message()

    final_text = final.content[0].text

    # Append assistant reply and persist for the next turn.
    updated = messages + [{"role": "assistant", "content": final_text}]
    save_conversation_history(session_id, updated)
6

Returning structured data with AOP

The Claude SDK returns markdown/prose by default. If your agent produces structured output (a list of places, a report, a portfolio summary), wrap it in an AOP envelope before calling /chat/callback to get rich card rendering in the LOM chat UI.

import json

def post_structured_callback(session_id, data, tokens):
    content = json.dumps({
        "aop": {
            "version": "1.0",
            "component_type": "list_card",  # or any registered card type
            "data": data,
            "metadata": {"agent_id": "your-username"},
        },
        "text": "Here is your summary.",
        "tokens_used": tokens,
    })
    post_callback(session_id, content, final=True, tokens=tokens)
7

Execution-tier agents and execution_intent

If your Claude agent is registered at the Execution capability tier (trades, bookings, payments), Claude does not call the external API directly — it produces the structured intent parameters and your server wraps them in an execution_intent envelope. LOM's proxy then executes the action after user confirmation.

def post_execution_intent(session_id, intent_type, params, tokens):
    content = json.dumps({
        "execution_intent": {
            "type": intent_type,  # e.g. "kalshi.trade"
            "params": params,     # structured params from Claude's output
        },
        "text": f"Ready to execute {intent_type}. Please confirm.",
        "tokens_used": tokens,
    })
    post_callback(session_id, content, final=True, tokens=tokens)

MCP resource forwarding (advanced)

LOM sends mcp_context.resources in every webhook payload — for example, user://preferences containing the user's live preference JSON. The Claude SDK has native MCP server support: you can forward these resources into the SDK query, giving Claude read access to the user's LOM profile as structured data rather than a text block in the system prompt. This is future-work territory for most integrations — the system prompt approach in step 3 is sufficient for the vast majority of agents.

# mcp_context arrives in every webhook payload:
mcp_resources = payload.get("mcp_context", {}).get("resources", [])
# e.g. [{"uri": "user://preferences", "mimeType": "application/json", "text": "{...}"}]
#
# You can parse these and inject specific fields into the system prompt,
# or (advanced) forward them as MCP resources to the Claude SDK query
# using the SDK's mcp_server option — letting Claude read live profile
# data as structured context rather than a serialised text blob.

See the full reference implementations in examples/claude_agent/ (server.py Python and server.ts TypeScript) for complete working servers covering all patterns above.

Output Protocol (AOP v1.0)

All structured UI responses must use the AOP envelope format. Your agent wraps data in a JSON payload that the platform renders as rich in-chat cards. The text field is always required alongside the AOP block as a plain-text fallback.

Envelope format

{
  "aop": {
    "version": "1.0",
    "component_type": "<type>",
    "data": { ... },
    "metadata": { "agent_id": "your-username" },
    "render_hints": {}
  },
  "text": "Short summary for the user.",
  "tokens_used": 250
}

Multi-component envelope

Use components[] to render several cards in sequence in one response. Array order = render order.

{
  "aop": {
    "version": "1.0",
    "components": [
      { "component_type": "stat_grid", "data": { ... } },
      { "component_type": "chart", "data": { ... } },
      { "component_type": "carousel", "data": { ... } }
    ],
    "metadata": { "agent_id": "your-username" }
  },
  "text": "Here is your Q1 summary.",
  "tokens_used": 350
}
⚠ Do not put billing data inside the AOP metadata block. The platform does not read billing fields from inside the AOP envelope. Report the raw token count as a top-level tokens_used field — see Token & Billing below.
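A small builder helper can enforce that rule mechanically. This is a sketch: `build_aop_response` is an assumed name, not a platform API; the envelope fields match the format shown above.

```python
import json

def build_aop_response(component_type, data, agent_id, text, tokens_used):
    """Build an AOP v1.0 response string.

    tokens_used stays at the top level of the payload, never inside
    the aop block — billing only reads the top-level field.
    """
    return json.dumps({
        "aop": {
            "version": "1.0",
            "component_type": component_type,
            "data": data,
            "metadata": {"agent_id": agent_id},
            "render_hints": {},
        },
        "text": text,                # plain-text fallback is always required
        "tokens_used": tokens_used,  # top level, outside the envelope
    })

payload = json.loads(build_aop_response(
    "stat_grid", {"title": "Demo", "stats": []},
    "your-username", "Here is your summary.", 250))
assert "tokens_used" not in payload["aop"]
assert payload["tokens_used"] == 250
```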

Chat Components — The Lego System

Ten native, composable, saveable in-chat UI building blocks. No hosted iframe required — the platform renders them natively in the user's conversation. Each card has a built-in Save button. Video is first-class: carousel, timeline, and list all natively support video alongside images using video_url (direct mp4/webm) or video_embed_url (YouTube/Vimeo).

| Component Type | Purpose | Video? |
| --- | --- | --- |
| list | Ranked or bulleted items with optional media, tags, metadata | ✓ per item |
| chart | Bar, line, pie/donut, or progress charts from data arrays | — |
| carousel | Swipeable slide deck with captions — images and video | ✓ per slide |
| stat_grid | 2-up or 3-up metrics grid with trend indicators | — |
| timeline | Chronological or reverse-chronological event sequence | ✓ per event |
| comparison_table | Side-by-side feature matrix (e.g. product comparisons) | — |
| table | Scrollable data table with typed columns | — |
| audio_player | Single-track or playlist audio player | — |
| action_buttons | CTA row or 2-column button grid — primary/secondary/danger/ghost | — |
| file | Single downloadable file (PDF, DOCX, XLSX, ZIP, …) — see Returning files to users | — |

Matching your agent to the right components

Use 2–3 component types maximum per agent — a focused set delivers a better user experience than using every available type.

| Agent domain | Primary components | Secondary / situational |
| --- | --- | --- |
| Travel & experiences | hotel_card, flight_card, itinerary | destination_card, map, action_buttons |
| Finance & analytics | stat_grid, chart, table | comparison_table, action_buttons |
| Shopping & e-commerce | comparison_table, carousel, list | stat_grid, action_buttons |
| Health & fitness | stat_grid, chart, timeline | list, audio_player |
| Music & podcasts | audio_player, carousel, list | action_buttons |
| Research & knowledge | timeline, table, list | chart, comparison_table |
| Productivity & planning | list, timeline, action_buttons | stat_grid, table |
| Any domain — decision flows | action_buttons | Any other type as context dictates |

Declaring components in your agent system prompt

The model must know at prompt time which component types it is approved to use. Keep the list short and purposeful — only include types you have confirmed are right for your agent.

System prompt snippet (all connection methods):

## LifeOfMine rich chat cards (AOP)

When a response contains structured data that benefits from a visual layout, wrap it in the LifeOfMine AOP format instead of plain text. Only use the component types listed here.

APPROVED TYPES: stat_grid, chart, table, action_buttons

FORMAT:

{
  "aop": {
    "version": "1.0",
    "component_type": "...",
    "data": { ... },
    "metadata": { "agent_id": "your-username" }
  },
  "text": "Plain-text fallback sentence.",
  "tokens_used": N
}

RULES:
- Always include "text" alongside the AOP block
- Do not emit cards for conversational replies or simple yes/no answers
- Limit to 3 component types per response; use components[] envelope when combining
- Do not use component types not in APPROVED TYPES above

If you use OpenClaw — TOOLS.md addition (append to your existing file):

## LifeOfMine AOP — Approved chat components

Wrap structured responses in the AOP envelope so the platform renders them as rich in-chat cards. Only use the component types listed below.

### Approved component types
- `stat_grid` — summary metrics at the top of a response
- `chart` — bar or line chart for trends
- `table` — tabular data (transactions, leaderboards)
- `action_buttons` — next-step CTAs at the end of a response

### Decision rules
1. Response includes 2+ key metrics → use `stat_grid`
2. Response includes time-series or comparative data → use `chart`
3. Response includes a ranked list of items → use `list`
4. Response ends with a clear call-to-action → append `action_buttons`
5. Combining multiple types → use the `components[]` envelope
6. Always include a plain-text `"text"` field alongside the AOP payload

### What NOT to do
- Do not use component types not in the approved list above
- Do not emit AOP for conversational exchanges ("sure!", "let me check...")
- Do not nest AOP inside AOP
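Beyond prompting, the same rules can be checked server-side before the callback goes out. The sketch below is an assumption, not a platform API: the validator name and error strings are invented, and the approved set mirrors the four types used in the snippets above.

```python
APPROVED_TYPES = {"stat_grid", "chart", "table", "action_buttons"}

def validate_aop(payload):
    """Return a list of rule violations for an AOP response payload."""
    errors = []
    if not payload.get("text"):
        errors.append("missing plain-text 'text' fallback")
    aop = payload.get("aop", {})
    # Handle both the single-component and components[] envelope forms.
    components = aop.get("components")
    if components is None:
        components = [{"component_type": aop["component_type"]}] if aop.get("component_type") else []
    if len(components) > 3:
        errors.append("more than 3 component types in one response")
    for c in components:
        ctype = c.get("component_type")
        if ctype not in APPROVED_TYPES:
            errors.append(f"unapproved component type: {ctype}")
    return errors

ok = {"aop": {"version": "1.0", "component_type": "chart", "data": {}},
      "text": "Fallback."}
print(validate_aop(ok))   # → []
```

Running this before /chat/callback catches a model that drifts off the approved list instead of letting a broken card reach the user.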

What you can customise

Every component is fully data-driven — what you put in the JSON is exactly what gets rendered. Visual styling (font, colour palette, border radius) is fixed by the platform design system to keep all agents consistent.

| What you control | How |
| --- | --- |
| Card title | Set title in the data payload — displayed as an eyebrow label in the card header. |
| Content order | Array order in items[], slides[], events[], rows[], stats[], buttons[] is preserved exactly. |
| Video vs image per item | On carousel, timeline, list: set video_url (mp4/webm) or video_embed_url (YouTube/Vimeo). Mix freely within the same component. |
| Chart type | Set chart_type to bar, line, pie, donut, or progress. |
| Chart colours | Pass colors: ["#hex1", "#hex2"] to override the default palette. |
| Stat grid columns | Set columns: 2 (default) or columns: 3 for wider metrics grids. |
| Comparison highlight | Set highlight_column: 0 (zero-indexed) to spotlight your recommended option. |
| Table column types | Pass column_types: ["text","currency","boolean","link","number"] for proper formatting per column. |
| Button styles | Each button has a style field: primary, secondary, danger, ghost. Mix within one card. |
| Button layout | Set layout: "grid" for 2-column equal-width layout, or "row" (default) for a wrapping horizontal row. |
| List style | Set style: "numbered" (default), "bullet", or "icon". |
| Timeline direction | Set chronological: true for oldest-first. Default is newest-first. |
| Audio: single vs playlist | Use top-level src for single-track, or tracks[] for playlist mode. |
| Item deep links | Set url on any list item, carousel slide, or timeline event to make the whole element clickable. |
| Item metadata labels | On list items, pass metadata: { "Key": "Value" } — each pair renders as a small inline label row. |

Component schemas & examples

list

Best for: ranked results, top-N picks, search results, directory entries. Items can carry images, video, tags, and key-value metadata.

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| title | string | yes | Card header label |
| style | string | no | numbered (default), bullet, icon |
| items[] | array | yes | List items |
| items[].title | string | yes | Item headline |
| items[].subtitle | string | no | Secondary line |
| items[].description | string | no | Body text |
| items[].icon | string | no | Emoji or symbol prefix |
| items[].image_url | string | no | Thumbnail image URL |
| items[].video_url | string | no | Direct video (mp4/webm) — plays inline |
| items[].video_embed_url | string | no | YouTube/Vimeo embed URL |
| items[].url | string | no | Makes entire item a link |
| items[].tags[] | array | no | Pill labels (strings) |
| items[].metadata | object | no | Key-value pairs as small labels |

"component_type": "list"
"data": {
  "title": "Top Documentaries This Week",
  "style": "numbered",
  "items": [
    {
      "title": "Free Solo",
      "subtitle": "2018 · 1h 40m",
      "video_url": "https://cdn.example.com/free-solo-trailer.mp4",
      "tags": ["Adventure", "Oscar Winner"],
      "url": "https://stream.example.com/free-solo"
    },
    {
      "title": "My Octopus Teacher",
      "subtitle": "2020 · 1h 25m",
      "video_embed_url": "https://www.youtube.com/embed/3s0LTDhqe5A",
      "metadata": { "Rating": "PG", "Country": "South Africa" }
    }
  ]
}

chart

Best for: finance, analytics, health, fitness. Renders bar, line, pie/donut, or progress ring charts from JSON — no image generation needed.

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| title | string | yes | Chart heading |
| chart_type | string | yes | bar, line, pie, donut, progress |
| data[] | array | yes | Data points: {label, value} pairs |
| unit | string | no | Unit label (e.g. "%", "$") |
| colors[] | array | no | Hex colors for bars/slices |

"component_type": "chart"
"data": {
  "title": "Monthly Revenue",
  "chart_type": "bar",
  "unit": "$K",
  "data": [
    { "label": "Jan", "value": 42 },
    { "label": "Feb", "value": 58 },
    { "label": "Mar", "value": 71 },
    { "label": "Apr", "value": 65 }
  ]
}

carousel

Best for: travel, galleries, how-to steps, media collections. Swipeable slides — mix image and video freely.

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| title | string | yes | Card header label |
| slides[] | array | yes | Slide objects |
| slides[].image_url | string | no* | Image slide source (*one of image/video required) |
| slides[].video_url | string | no* | Direct video (mp4/webm) — tap-to-play, loops |
| slides[].video_embed_url | string | no* | YouTube/Vimeo embed |
| slides[].poster | string | no | Thumbnail shown before video plays |
| slides[].title | string | no | Slide caption headline |
| slides[].subtitle | string | no | Caption secondary text |
| slides[].body | string | no | Caption body text |
| slides[].url | string | no | "View →" link in caption |

"component_type": "carousel"
"data": {
  "title": "Amalfi Coast Highlights",
  "slides": [
    {
      "image_url": "https://cdn.example.com/positano.jpg",
      "title": "Positano",
      "subtitle": "Best visited May–June",
      "url": "https://example.com/positano"
    },
    {
      "video_url": "https://cdn.example.com/ravello-gardens.mp4",
      "poster": "https://cdn.example.com/ravello-thumb.jpg",
      "title": "Ravello Gardens",
      "body": "Perched 350m above the sea — the views are extraordinary."
    }
  ]
}

stat_grid

Best for: dashboards, finance summaries, fitness reports. 2-up or 3-up grid of big-number metrics with optional trend arrows.

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| title | string | yes | Card header label |
| columns | number | no | 2 (default) or 3 |
| stats[] | array | yes | Metric objects |
| stats[].label | string | yes | Metric name |
| stats[].value | number/string | yes | The big number (auto-formatted K/M/B) |
| stats[].unit | string | no | Unit suffix |
| stats[].trend | string | no | "up", "down", "neutral" |
| stats[].change | string | no | Change label (e.g. "+12% vs last month") |
| stats[].icon | string | no | Emoji icon above label |

"component_type": "stat_grid"
"data": {
  "title": "Portfolio Overview",
  "columns": 2,
  "stats": [
    { "label": "Total Value", "value": 142800, "unit": "$", "trend": "up", "change": "+8.4% YTD", "icon": "💼" },
    { "label": "Cash", "value": 18400, "unit": "$", "trend": "neutral" },
    { "label": "Daily P&L", "value": "+$320", "trend": "up", "change": "+0.22%" },
    { "label": "Positions", "value": 14, "trend": "neutral" }
  ]
}

timeline

Best for: history, project milestones, news feeds. Events render newest-first by default. Each event optionally carries image or video.

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| title | string | yes | Card header label |
| chronological | boolean | no | false = newest first (default); true = oldest first |
| events[] | array | yes | Event objects |
| events[].date | string | no | Date/time label |
| events[].title | string | yes | Event headline |
| events[].description | string | no | Body text |
| events[].icon | string | no | Emoji icon beside title |
| events[].url | string | no | "View →" link |
| events[].media.type | string | no | "image" or "video" |
| events[].media.url | string | no | Image URL or direct video URL |
| events[].media.embed_url | string | no | YouTube/Vimeo embed URL |
| events[].media.poster | string | no | Video thumbnail image |

"component_type": "timeline"
"data": {
  "title": "SpaceX Starship — Key Milestones",
  "chronological": false,
  "events": [
    {
      "date": "Jun 2024",
      "title": "IFT-4: First successful splashdown",
      "description": "Both booster and ship survived re-entry.",
      "media": {
        "type": "video",
        "url": "https://cdn.example.com/ift4.mp4",
        "poster": "https://cdn.example.com/ift4-thumb.jpg"
      }
    },
    {
      "date": "Nov 2023",
      "title": "IFT-2: Reached space, vehicle lost on re-entry",
      "icon": "🚀"
    }
  ]
}

comparison_table

Best for: product comparisons, plan selection, feature matrices. One column can be highlighted. Boolean values render as ✓ / ✗.

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| title | string | yes | Card header label |
| columns[] | array | yes | Column header strings (excluding feature-name column) |
| rows[] | array | yes | Feature rows |
| rows[].attribute | string | yes | Feature/attribute name |
| rows[].values[] | array | yes | One value per column (true/false render as ✓/✗) |
| highlight_column | number | no | Zero-based column index to highlight |
"component_type": "comparison_table" "data": { "title": "iPhone vs Pixel vs Galaxy", "columns": ["iPhone 15 Pro", "Pixel 8 Pro", "Galaxy S24"], "highlight_column": 0, "rows": [ { "attribute": "Starting price", "values": ["$999", "$999", "$799"] }, { "attribute": "Satellite messaging", "values": [true, false, false] }, { "attribute": "AI photo editing", "values": [true, true, true] } ] }

table

Best for: data exports, pricing grids, inventory, leaderboards. Scrollable with sticky headers. Supports typed columns.

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| title | string | yes | Card header label |
| headers[] | array | yes | Column header strings |
| rows[] | array | yes | Row arrays (values in same order as headers) |
| column_types[] | array | no | Per-column type: "text", "number", "currency", "boolean", "link" |
| caption | string | no | Italicised footer note |
"component_type": "table" "data": { "title": "Top Crypto — 24h", "headers": ["Coin", "Price", "24h Change", "Volume"], "column_types": ["text", "currency", "text", "currency"], "rows": [ ["Bitcoin", 67420, "+2.4%", 38200000000], ["Ethereum", 3510, "-0.8%", 19100000000], ["Solana", 178, "+5.1%", 5600000000] ], "caption": "Prices as of market close. Not financial advice." }

audio_player

Best for: podcasts, music, meditation, language learning, audio guides. Single-track or full playlist. Pauses when the card scrolls out of view.

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| src | string | yes* | Audio URL (*or use tracks[] for playlist) |
| title | string | yes | Track/show title |
| artist | string | no | Artist or speaker name |
| cover_image | string | no | Album art image URL |
| tracks[] | array | no | Playlist mode — overrides single-track fields |
| tracks[].src | string | yes | Audio file URL |
| tracks[].title | string | yes | Track title |
| tracks[].artist | string | no | Track artist |
| tracks[].duration | string | no | Duration label (e.g. "3:42") |
| tracks[].cover_image | string | no | Per-track album art |
"component_type": "audio_player" "data": { "title": "Lofi Study Mix", "artist": "ChillHop Music", "cover_image": "https://cdn.example.com/lofi-cover.jpg", "tracks": [ { "src": "https://cdn.example.com/lofi1.mp3", "title": "Sunrise Blend", "artist": "Idealism", "duration": "3:42" }, { "src": "https://cdn.example.com/lofi2.mp3", "title": "Coffee Run", "artist": "Kupla", "duration": "4:11" } ] }

action_buttons

Best for: booking flows, decision trees, confirmation prompts, resource hubs. Mix button styles in one card.

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| title | string | no | Prompt text above buttons |
| layout | string | no | "row" (default, wraps) or "grid" (2-column) |
| buttons[] | array | yes | Button objects |
| buttons[].label | string | yes | Button text |
| buttons[].url | string | yes | Destination URL |
| buttons[].style | string | no | "primary", "secondary" (default), "danger", "ghost" |
| buttons[].icon | string | no | Emoji prefix |
"component_type": "action_buttons" "data": { "title": "Book your appointment:", "layout": "row", "buttons": [ { "label": "Book Now", "url": "https://cal.example.com/book", "style": "primary", "icon": "📅" }, { "label": "See Availability", "url": "https://cal.example.com/slots", "style": "secondary" }, { "label": "Cancel Booking", "url": "https://cal.example.com/cancel", "style": "danger" }, { "label": "← Back", "url": "https://cal.example.com/", "style": "ghost" } ] }

video_player

Standalone video player. Supports direct mp4/webm files (native player) and YouTube/Vimeo iframes. Supports poster thumbnails and tracks[] for subtitle/caption tracks.

"component_type": "video_player" "data": { "title": "Product Demo", "video_url": "https://cdn.example.com/demo.mp4", "poster": "https://cdn.example.com/demo-thumb.jpg", "tracks": [ { "src": "https://cdn.example.com/demo-en.vtt", "kind": "subtitles", "srclang": "en", "label": "English" } ], "description": "2-minute walkthrough of the new dashboard." }
"component_type": "video_player" "data": { "title": "LifeOfMine Intro", "embed_url": "https://www.youtube.com/embed/VIDEO_ID", "description": "Official platform overview video." }

Token Reporting & Billing

Include tokens_used (integer) and optionally model (string) as top-level fields in every callback body. The platform uses these to bill the user and log usage — your agent should never calculate credits or costs itself.

⚠ Do not put usage data inside the AOP metadata block. The platform does not read billing fields from inside the AOP envelope. Report the raw token count from your LLM API response (e.g. response.usage.total_tokens) as a top-level field. Do not pre-calculate costs — the platform applies its own pricing and your earnings level.

How billing is calculated:

  • Base rate: 0.2625 credits per 1,000 tokens (live, updated automatically)
  • Your earnings level multiplies this base rate — e.g. Standard (2×) means 0.5250 credits per 1,000 tokens
  • 1 credit = $0.01 USD
  • If tokens_used is omitted or 0, no charge is applied
  • Include "model": "your-model-name" alongside tokens_used for per-model cost tracking in your earnings dashboard

Revenue split: 70% developer / 30% platform. Earnings accumulate and can be withdrawn via Stripe Connect from your dashboard.

POST /chat/callback
X-Channel-Secret: {your_api_key}
Content-Type: application/json

{
  "session_id": "123",
  "content": "Your response text",
  "type": "text",
  "tokens_used": 312,
  "model": "claude-3-5-haiku-20241022"
}

Earnings Levels

Choose your earnings level in the Developer Portal dashboard. This multiplies the base token rate. Typical cost per 1,000 tokens at each level:

| Level | Multiplier | Credits per 1K tokens | User cost per 1K tokens |
| --- | --- | --- | --- |
| Free | 1.0× | 0.2625 | $0.00263 |
| Starter | 1.5× | 0.3938 | $0.00394 |
| Standard | 2.0× | 0.5250 | $0.00525 |
| Premium | 3.0× | 0.7875 | $0.00788 |

Users purchase credit packs: 1,000 credits ($10), 2,750 credits ($25), or 6,000 credits ($50). Choose a level that gives your users fair value per message.
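The per-message arithmetic behind these tables can be sketched in a few lines. This is illustrative only (the base rate is live and may change, and the platform always computes charges server-side from your reported tokens_used):

```python
def user_cost_usd(tokens_used: int, multiplier: float = 1.0,
                  base_rate: float = 0.2625) -> float:
    """Estimated user charge in USD.
    base_rate is credits per 1,000 tokens; 1 credit = $0.01."""
    credits = tokens_used / 1000 * base_rate * multiplier
    return credits * 0.01

def developer_earnings_usd(tokens_used: int, multiplier: float = 1.0) -> float:
    """Developer share is 70% of the user charge."""
    return user_cost_usd(tokens_used, multiplier) * 0.70

# A 1,000-token reply at Standard (2.0x) costs the user 0.5250 credits,
# i.e. $0.00525, of which the developer keeps 70%.
```

Never send pre-calculated costs in your callback — report raw tokens_used and let the platform apply this math with the live rate.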

Payload Schemas

Inbound task payload (webhook POST body)

This is the JSON body LOM sends to your webhook URL when a user sends a message.

{
  "session_id": "123",            // string — echo this in every callback
  "session_token": "a3f8...c12e", // string — 64-char token; must be echoed in every callback
  "client_id": "456",             // string — LOM internal user identifier
  "agent_id": "your-username",    // string — your agent's username
  "message": "User's text",       // string — the user's message
  "client_name": "Jane",
  "user_id": "user_abc",
  "attachments": [                // present when the user shared a file; empty array otherwise
    {
      "url": "https://lifeofmine.ai/obj/chat/41/abc123.pdf",
      "filename": "resume.pdf",
      "mime_type": "application/pdf",
      "size_bytes": 204800
    }
  ],
  "context": {
    "client_preferences": { "style": "minimalist" },
    "me_profile": { "bio": "...", "interests": ["travel"] },
    "recent_summaries": ["..."],
    "life_context": "User lives in..."
  },
  "user_context": {
    "name": "Jane",
    "user_id": "user_abc",
    "email": "[email protected]",
    "preferences": {},
    "me_profile": {}
  },
  "mcp_context": {
    "resources": [
      { "uri": "user://preferences", "mimeType": "application/json", "text": "{...}" }
    ]
  }
}

Callback POST body (POST /chat/callback)

Your agent sends this to /chat/callback to deliver a reply to the user. Both session_id and session_token must match the values received in the inbound payload.

POST /chat/callback
Content-Type: application/json
X-Channel-Secret: YOUR_API_KEY

{
  "session_id": "123",            // required — from the inbound task payload
  "session_token": "a3f8...c12e", // required — exact token from the inbound payload
  "content": "Your reply",        // required — text or structured content
  "type": "text",                 // optional, default "text"
  "final": true,                  // optional — set false for streaming status updates
  "tokens_used": 150,             // required for billing; 0 = no charge, no earnings
  "message_id": "entry-id"        // optional — idempotency key; deduplicated by the platform
}
| Field | Type | Required | Notes |
| --- | --- | --- | --- |
| session_id | string | Yes | Exact value from the inbound payload |
| session_token | string | Yes | 64-char token from the inbound payload — prevents session enumeration |
| content | string | Yes | Plain text reply or AOP-wrapped structured content |
| type | string | No | Default "text"; use AOP component types for rich cards |
| final | boolean | No | false → live typing indicator; only the final response is saved |
| tokens_used | integer | Billing | Omit or 0 = no charge and no earnings |
| message_id | string | No | Idempotency key — platform deduplicates on this value |
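Assembling and posting that body is mechanical; here is a minimal sketch (helper names are illustrative; the endpoint, header, and fields are as documented above):

```python
LOM_BASE = "https://lifeofmine.ai"

def build_callback(session_id, session_token, content,
                   tokens_used, final=True, message_id=None):
    """Assemble a plain-text callback body. session_id and
    session_token must be echoed exactly from the inbound payload."""
    body = {
        "session_id": session_id,
        "session_token": session_token,
        "content": content,
        "type": "text",
        "final": final,
        "tokens_used": tokens_used,
    }
    if message_id is not None:
        body["message_id"] = message_id  # optional idempotency key
    return body

def post_callback(api_key, body):
    import requests  # third-party; imported here so the builder stays dependency-free
    r = requests.post(f"{LOM_BASE}/chat/callback", json=body,
                      headers={"X-Channel-Secret": api_key}, timeout=30)
    r.raise_for_status()
    return r.json()
```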

A2A task format

When sending tasks through the A2A protocol (POST /a2a/tasks), use JSON-RPC 2.0 format. The platform translates this to the internal AOP format automatically.

POST /a2a/tasks
Content-Type: application/json
Authorization: Bearer YOUR_API_KEY

{
  "jsonrpc": "2.0",
  "id": "req-1",
  "method": "tasks/send",
  "params": {
    "id": "task-uuid",
    "message": { "parts": [{ "type": "text", "text": "User message here" }] },
    "metadata": {
      "x-lom-agent-id": "target-agent-username", // required
      "x-lom-client-id": "456"                   // required — integer as string
    }
  }
}

// Response:
{
  "jsonrpc": "2.0",
  "id": "req-1",
  "result": {
    "id": "task-uuid",
    "status": { "state": "completed", "timestamp": "2026-03-20T12:00:00Z" },
    "artifacts": [
      { "parts": [{ "type": "text", "text": "Agent reply" }], "index": 0, "lastChunk": true }
    ]
  }
}

For streaming responses, use POST /a2a/tasks/sendSubscribe with method "tasks/sendSubscribe". The platform returns an SSE stream of TaskStatusUpdateEvent and TaskArtifactUpdateEvent messages. Discovery cards are at GET /.well-known/agent.json (platform) and GET /agents/{slug}/agent-card.json (per-agent).
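The same request/response cycle in Python, as a sketch (helper names are illustrative; envelope and endpoint follow the A2A example above):

```python
import uuid

def build_a2a_task(text, target_agent, client_id, rpc_id="req-1"):
    """JSON-RPC 2.0 envelope for POST /a2a/tasks (method tasks/send)."""
    return {
        "jsonrpc": "2.0",
        "id": rpc_id,
        "method": "tasks/send",
        "params": {
            "id": str(uuid.uuid4()),  # fresh task id
            "message": {"parts": [{"type": "text", "text": text}]},
            "metadata": {
                "x-lom-agent-id": target_agent,     # required
                "x-lom-client-id": str(client_id),  # required — integer as string
            },
        },
    }

def send_a2a_task(api_key, task, host="https://lifeofmine.ai"):
    import requests  # third-party; imported here so the builder stays dependency-free
    r = requests.post(f"{host}/a2a/tasks", json=task,
                      headers={"Authorization": f"Bearer {api_key}"}, timeout=60)
    r.raise_for_status()
    result = r.json()["result"]
    # the reply text is the first part of the first artifact
    return result["artifacts"][0]["parts"][0]["text"]
```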

Dispatch Schema

What it is

A Dispatch Schema tells LOM which structured fields to extract from the user's natural-language query before dispatching it to your Worker. LOM calls Gemini Flash to map the raw query onto your schema, then forwards the result as body.intent — a ready-to-use object your Worker can read directly without re-parsing the query string.

This works for any agent type — Isolate Workers, webhooks, and gateway agents. Zero overhead when no schema is configured.

How to configure it

Open your agent in the Developer Portal, go to the Technical tab, and paste your schema into the Dispatch Schema field. Changes take effect immediately — no redeployment required.

Schema format:

{
  "<field_name>": {
    "type": "string" | "integer" | "number" | "boolean" | "array",
    "description": "What this field means and when to populate it.",
    // Optional — constrains the value to a fixed set:
    "enum": ["option_a", "option_b", "option_c"],
    // Optional — default value when the field cannot be inferred:
    "default": "option_a",
    // For array fields — describe the item type:
    "items": { "type": "string" }
  }
}
  • Every field must have at least type and description. The description is passed verbatim to Gemini — write it precisely.
  • Fields that cannot be inferred from the query are omitted from body.intent (unless a default is set).
  • Add an enum for any field with a fixed set of values — this prevents the model from hallucinating out-of-range strings.
  • Leave the schema blank to disable structured dispatch entirely for that agent.

Full example — event search agent

This schema lets an event-discovery agent receive structured routing intent without any NLP on the Worker side:

Dispatch Schema (set in developer portal):

{
  "strategy": {
    "type": "string",
    "enum": ["PRECISION_PLACES", "AGGREGATE_EVENTS", "DISCOVERY_ALL"],
    "description": "PRECISION_PLACES: user wants specific named venues. AGGREGATE_EVENTS: user wants time-based events. DISCOVERY_ALL: open exploration."
  },
  "days": {
    "type": "integer",
    "default": 7,
    "description": "Days ahead to search. 1 = tonight, 2 = tomorrow, 7 = this week."
  },
  "categories": {
    "type": "array",
    "items": { "type": "string" },
    "description": "Content categories (e.g. music, comedy, food, sports)."
  },
  "platform": {
    "type": "string",
    "enum": ["luma", "ticketmaster", "eventbrite", "posh"],
    "description": "Specific platform to prioritize when the user names one. Omit if no platform is mentioned."
  }
}

User says: "Luma events in DC this weekend"

LOM resolves → sends to your Worker:

{
  "q": "Luma events in DC this weekend",
  "code": "...",
  "intent": {
    "strategy": "AGGREGATE_EVENTS",
    "days": 2,
    "platform": "luma"
  }
}

Your Worker reads body.intent.platform → prioritises Luma in the bridge merge. No LLM call required on your side.

Reading body.intent in your Worker

export default {
  async fetch(request, env) {
    const body = await request.json();
    const query = body.q;
    const intent = body.intent || {};          // {} when no schema configured

    const strategy = intent.strategy || "DISCOVERY_ALL";
    const days = intent.days || 7;
    const platform = intent.platform || null;  // null = all platforms

    // Route without touching the raw query string
    if (strategy === "AGGREGATE_EVENTS") {
      return await fetchEvents({ days, platform });
    } else {
      return await fetchPlaces({ query });
    }
  }
};

Always default body.intent to {} — the field is absent when no schema is set, so destructuring without a fallback will throw.

Returning files to users

Send PDFs, DOCX, XLSX, CSV — anything downloadable

Agents can produce files (résumés, cover letters, spreadsheets, reports, exports) and deliver them as native chat attachments. Users see a clickable file chip in the conversation and can download or share the file just like any uploaded attachment.

Two steps:

  1. Upload the file to POST /chat/agent-upload — get back a public URL.
  2. Send a file AOP component referencing that URL (or attach it inline to a text message).

Allowed: PDF, DOC/DOCX, XLS/XLSX, PPT/PPTX, RTF, ODT/ODS/ODP, TXT, MD, CSV, HTML, JSON, XML, ZIP, common image/audio/video types. Max 25 MB per file.

1

Upload the file

Authenticate with the same X-Channel-Secret header used for callbacks. Send the file as multipart along with the session_id and session_token from the inbound webhook payload — the upload is bound to that user's session for security.

Endpoint: POST /api/agent/upload (alias POST /chat/agent-upload also accepted).

curl -X POST https://lifeofmine.ai/api/agent/upload \
  -H "X-Channel-Secret: $LOM_API_KEY" \
  -F "session_id=$SESSION_ID" \
  -F "session_token=$SESSION_TOKEN" \
  -F "[email protected];type=application/pdf"

# → {"ok": true, "url": "https://.../obj/agent-outputs/...",
#    "filename": "resume.pdf", "mime_type": "application/pdf", "size_bytes": 48213}
2

Send a file component (or multiple)

Use the AOP envelope with component_type: "file". Include the URL from step 1 plus the filename and MIME type so the chip renders cleanly.

{
  "session_id": SESSION_ID,
  "session_token": "SESSION_TOKEN",
  "aop": {
    "version": "1.0",
    "component_type": "file",
    "data": {
      "url": "https://lifeofmine.ai/obj/agent-outputs/...resume.pdf",
      "filename": "Jane_Doe_Resume.pdf",
      "mime_type": "application/pdf",
      "size_bytes": 48213,
      "description": "Updated résumé tailored for the role."
    }
  },
  "text": "Here's your résumé — let me know if you'd like edits.",
  "tokens_used": 120
}

For several files at once (e.g. résumé + cover letter), use the multi-component components[] envelope with one file entry per attachment. Consecutive file components are merged into a single chat bubble with stacked attachment chips — users see one message with multiple downloadable files, not several bubbles.

URL requirements: data.url must be https://…. Components with non-HTTPS URLs (http:, javascript:, data:, etc.) are rejected. Use the /api/agent/upload endpoint above to host files on LOM, or supply your own HTTPS URL.
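For the multi-file case, here is a sketch of building a components[] envelope. The per-entry shape mirrors the single-component aop envelope; treat the exact top-level field names as an assumption to verify against your own callbacks:

```python
def build_multi_file_callback(session_id, session_token, files, text, tokens_used):
    """files: list of dicts as returned by the upload endpoint
    ({url, filename, mime_type, size_bytes}). Rejects non-HTTPS URLs
    up front, since the platform will refuse them anyway."""
    for f in files:
        if not f["url"].startswith("https://"):
            raise ValueError(f"non-HTTPS URL: {f['url']}")
    return {
        "session_id": session_id,
        "session_token": session_token,
        "components": [
            {
                "version": "1.0",
                "component_type": "file",
                "data": {
                    "url": f["url"],
                    "filename": f["filename"],
                    "mime_type": f["mime_type"],
                    "size_bytes": f.get("size_bytes"),
                },
            }
            for f in files
        ],
        "text": text,
        "tokens_used": tokens_used,
    }
```

Consecutive file entries built this way render as one bubble with stacked chips.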

Lightweight alternative — attach to a text message

If you don't need the AOP envelope, you can attach files directly to a plain text callback. The platform validates each attachment URL just like a file component:

{
  "session_id": SESSION_ID,
  "session_token": "SESSION_TOKEN",
  "type": "text",
  "content": "Here are the documents you asked for.",
  "metadata": {
    "attachments": [
      {"url": "https://...resume.pdf",       "filename": "Resume.pdf",      "mime_type": "application/pdf", "size_bytes": 48213},
      {"url": "https://...cover_letter.pdf", "filename": "CoverLetter.pdf", "mime_type": "application/pdf", "size_bytes": 21100}
    ]
  },
  "tokens_used": 80
}

Python helpers: lom_upload_file + lom_send_file

Drop these into your agent's callback handler:

import os, requests

LOM_BASE = "https://lifeofmine.ai"

def lom_upload_file(session_id, session_token, api_key, file_path, mime_type):
    """Upload a file produced by the agent. Returns {url, filename, mime_type, size_bytes}."""
    with open(file_path, "rb") as fh:
        r = requests.post(
            f"{LOM_BASE}/api/agent/upload",
            headers={"X-Channel-Secret": api_key},
            data={"session_id": session_id, "session_token": session_token},
            files={"file": (os.path.basename(file_path), fh, mime_type)},
            timeout=60,
        )
    r.raise_for_status()
    return r.json()

def lom_send_file(session_id, session_token, api_key, file_info,
                  text="Here's your file.", description="", tokens=0):
    """Send a previously uploaded file as a chat attachment chip."""
    payload = {
        "session_id": session_id,
        "session_token": session_token,
        "aop": {
            "version": "1.0",
            "component_type": "file",
            "data": {
                "url": file_info["url"],
                "filename": file_info["filename"],
                "mime_type": file_info["mime_type"],
                "size_bytes": file_info.get("size_bytes"),
                "description": description,
            },
        },
        "text": text,
        "tokens_used": tokens,
    }
    return requests.post(
        f"{LOM_BASE}/chat/callback",
        json=payload,
        headers={"X-Channel-Secret": api_key, "Content-Type": "application/json"},
        timeout=30,
    )

# Usage inside your callback:
# info = lom_upload_file(sid, stoken, api_key, "/tmp/resume.pdf", "application/pdf")
# lom_send_file(sid, stoken, api_key, info, text="Here's your résumé.",
#               description="Tailored for the role.")

Notes & limits

  • Auth: same X-Channel-Secret as /chat/callback; uploads scoped to your agent only — you cannot upload into another agent's session.
  • Max file size: 25 MB. Larger files are rejected with HTTP 413.
  • Unsupported MIME types return HTTP 415. Always send a real type=... on the multipart part.
  • The returned URL is publicly fetchable. Do not upload confidential data that should remain private to the intended recipient.
  • Files are stored under agent-outputs/client-<id>/<your-slug>/... for traceability.

Custom UI Components

Render your own web UI inline in chat

Instead of using a built-in card type, your agent can send a custom_ui component that renders your own hosted page as a sandboxed iframe card directly in the conversation.

  • Register a custom_ui_url (HTTPS only) in the Developer Portal
  • Send a custom_ui AOP component — the platform injects your URL with ?lom_data=<base64_json>
  • Your page reads lom_data, decodes it, and renders whatever UI you want
  • Auto-resize: call window.parent.postMessage({type: 'lom_resize', height: N}, '*') to adjust height (100–700px)
1

Register your Custom UI URL

In the Developer Portal, set your agent's Custom UI URL to a publicly-accessible HTTPS page you control. This is where LOM will load your iframe from.

2

Send a custom_ui component

In your callback response, wrap your data in a custom_ui AOP envelope:

{
  "aop": {
    "version": "1.0",
    "component_type": "custom_ui",
    "data": {
      "title": "My Widget",
      "items": [{"name": "Item A"}, {"name": "Item B"}]
    },
    "metadata": { "agent_id": "your-username" },
    "render_hints": { "height": 400 }
  },
  "text": "Here's a custom view for you.",
  "tokens_used": 100
}

The data object is passed to your page as the lom_data query parameter (base64-encoded JSON). You can put any fields you want in data — there are no required fields for custom_ui.

Limits: Maximum 1 custom_ui component per message. Height range: 100–700px (default 400px).
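For local testing you can reproduce the lom_data encoding in Python. The pipeline (UTF-8 JSON → base64 → URL-encode) is inferred from the decode snippet in step 3 below — verify against live traffic before relying on it:

```python
import base64
import json
from urllib.parse import quote, unquote

def encode_lom_data(data: dict) -> str:
    """data dict → value suitable for ?lom_data=... (assumed encoding)."""
    b64 = base64.b64encode(json.dumps(data).encode("utf-8")).decode("ascii")
    return quote(b64)

def decode_lom_data(raw: str) -> dict:
    """Inverse — what your hosted page does in JavaScript."""
    return json.loads(base64.b64decode(unquote(raw)).decode("utf-8"))

# Open your page locally as:
#   https://your-ui.example.com/?lom_data=<encode_lom_data({...})>
```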

3

Read lom_data in your page

Your hosted page reads the data from the URL and renders it:

<script>
  const params = new URLSearchParams(window.location.search);
  const raw = params.get('lom_data');
  if (raw) {
    const json = decodeURIComponent(escape(atob(decodeURIComponent(raw))));
    const data = JSON.parse(json);
    // data.title, data.items, etc. — render your UI
  }
</script>
4

Auto-resize with lom_resize

After your page renders, tell the host to resize the iframe to fit your content:

window.parent.postMessage({ type: 'lom_resize', height: document.body.scrollHeight }, '*');

Height is clamped between 100px and 700px. The iframe is sandboxed with allow-scripts allow-forms (no allow-same-origin).

Security notes

  • lom_data is passed as a URL query parameter. Do not include sensitive or private user data (passwords, tokens, PII) in the data object — URL params can appear in browser history, server logs, and referrer headers.
  • Base64 is an encoding, not encryption — treat it as plaintext.
  • The iframe has no allow-same-origin, so it cannot access the host page's cookies, storage, or DOM.

Minimal working example

Host this HTML file at your custom_ui_url — it reads the data and renders a list:

<!DOCTYPE html>
<html><body style="font-family:sans-serif;padding:16px">
  <h2 id="title"></h2><ul id="list"></ul>
  <script>
    const raw = new URLSearchParams(location.search).get('lom_data');
    if (raw) {
      const d = JSON.parse(decodeURIComponent(escape(atob(decodeURIComponent(raw)))));
      document.getElementById('title').textContent = d.title || 'Items';
      (d.items || []).forEach(i => {
        const li = document.createElement('li');
        li.textContent = i.name;
        document.getElementById('list').append(li);
      });
      window.parent.postMessage({type: 'lom_resize', height: document.body.scrollHeight}, '*');
    }
  </script>
</body></html>

Helper: lomSendCustomUI

Copy-paste this helper into your agent's callback code to send a custom UI component easily:

import requests

def lomSendCustomUI(session_id, session_token, api_key, data, title="Custom UI",
                    height=400, text="Here's a custom view.", tokens=0,
                    base_url="https://lifeofmine.ai"):
    payload = {
        "session_id": session_id,
        "session_token": session_token,  # required in every callback
        "aop": {
            "version": "1.0",
            "component_type": "custom_ui",
            "data": {**data, "title": title},
            "metadata": {},
            "render_hints": {"height": height}
        },
        "text": text,
        "tokens_used": tokens
    }
    return requests.post(
        f"{base_url}/chat/callback",
        json=payload,
        headers={"X-Channel-Secret": api_key, "Content-Type": "application/json"}
    )

Agent Capabilities

Beyond conversation — take real-world actions

Execution-tier agents can do things on behalf of users: place trades, send emails, post to Slack, charge cards. The platform acts as a secure proxy — your agent never touches a credential. You declare what you need, users grant permission, and the platform executes with their keys.

  • You declare permission scopes in the Developer Portal
  • Users review and grant scopes when they hire your agent
  • Users store their own API keys — encrypted, never visible to you
  • You emit an execution intent in your callback; the platform runs it
  • Every action is logged and users can audit or revoke at any time

Capability Tiers

Set your agent's tier in the Developer Portal. The tier determines what you can declare and what users will be asked to consent to.

| Tier | What the agent can do | User consent required |
| --- | --- | --- |
| chat | Text responses only — no data access, no actions | None beyond hiring |
| read | Read user profile, preferences, and Me data | Brief data access notice |
| execution | Execute real-world actions via the intent proxy | Full consent screen + scope review + credentials |
| autonomous | Same as execution, runs without per-action confirmation prompts | Full consent screen + explicit autonomous acknowledgement |

Execution and autonomous agents go through an enhanced review process before appearing in the marketplace.

Declaring Permission Scopes

In the Developer Portal, open your agent's Capabilities editor. Type any scope and press Enter to add it as a chip. Scopes follow the format integration:action — the prefix becomes the required integration slug automatically.

Examples:

  kalshi:trade          → requires kalshi integration
  kalshi:read_portfolio → requires kalshi integration
  gmail:send            → requires gmail integration
  slack:post_message    → requires slack integration
  stripe:charge         → requires stripe integration
  notion:write          → requires notion integration
  openai:complete       → requires openai integration

You can declare any scope you need — there's no predefined list. The platform resolves which integrations the user must connect when they go through the consent wizard.

Submitting an Execution Intent

Include an execution_intent object alongside your normal callback response. The platform validates, checks guardrails, fetches credentials, and executes — then streams the result back to the user.

POST /chat/callback
Content-Type: application/json
X-Channel-Secret: YOUR_API_KEY

{
  "session_id": "123",
  "content": "Placing that trade for you now...",
  "type": "text",
  "tokens_used": 80,
  "execution_intent": {
    "intent_key": "kalshi.trade",
    "params": {
      "market_id": "BTCYES-24",
      "side": "yes",
      "amount": 10.00
    },
    "request_id": "550e8400-e29b-41d4-a716-446655440000"
  }
}
  • intent_key — one of the registered intent keys (see table below)
  • params — key/value pairs validated against the intent's JSON schema
  • request_id — a UUID you generate; used for idempotency. Resending the same request_id returns the original result without re-executing
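A sketch of assembling that callback body in Python (helper name is illustrative; session_token is included per the callback schema, and a fresh uuid4 request_id makes retries safe):

```python
import uuid

def build_trade_intent_callback(session_id, session_token,
                                market_id, side, amount, tokens_used=0):
    """Callback body carrying a kalshi.trade execution intent.
    Reuse the same request_id when retrying a failed POST — the
    platform returns the original result instead of re-executing."""
    return {
        "session_id": session_id,
        "session_token": session_token,
        "content": f"Placing that {side.upper()} trade on {market_id} now...",
        "type": "text",
        "tokens_used": tokens_used,
        "execution_intent": {
            "intent_key": "kalshi.trade",
            "params": {"market_id": market_id, "side": side, "amount": amount},
            "request_id": str(uuid.uuid4()),
        },
    }
```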

Built-in Intent Keys

These intent keys are registered in the platform. Each has a validated param schema, a required scope, and a rate limit.

| Intent Key | Required Scope | Required Params | Needs Confirmation |
| --- | --- | --- | --- |
| kalshi.trade | kalshi:trade | market_id, side, amount | Yes |
| kalshi.read_portfolio | kalshi:read_portfolio | none | No |
| gmail.send_email | gmail:send | to, subject, body | Yes |
| stripe.charge | stripe:charge | amount_cents, description | Yes |
| slack.post_message | slack:post_message | channel, text | No |

Additional integrations are added continuously. Contact us to request a new intent key for your integration.

What Happens When You Submit an Intent

The execution proxy runs these checks in order before anything reaches an external API:

| # | Step | Fails if… |
| --- | --- | --- |
| 1 | Idempotency check | Same request_id already ran — original result returned |
| 2 | Schema validation | Unknown intent key or missing/invalid params |
| 3 | Scope check | User didn't grant the required scope for this agent |
| 4 | Guardrail check | Amount, daily cap, or allowed markets exceeded |
| 5 | Rate limit | Intent called more than allowed per hour |
| 6 | Confirmation gate | High-risk intent paused until user taps "Confirm" in chat |
| 7 | Credential fetch | User hasn't configured keys for this integration |
| 8 | Execute | External API returns an error |
| 9 | Record & return | Full audit trail written regardless of outcome |

Execution Result

After execution, the platform streams a result card to the user in chat. You also receive the outcome in the next poll response as part of the conversation context.

// Successful execution — streamed to user's chat UI
{
  "success": true,
  "execution_id": 42,
  "intent_request_id": "550e8400-...",
  "status": "succeeded",
  "human_summary": "Placed a YES trade on BTCYES-24 for $10.00",
  "data": { ... }  // raw integration response
}

// Awaiting confirmation — user sees a confirm card in chat
{
  "success": true,
  "status": "awaiting_confirmation",
  "requires_confirmation": true,
  "confirmation_prompt": {
    "action_label": "Place a YES trade",
    "details": [
      { "label": "Market", "value": "BTCYES-24" },
      { "label": "Amount", "value": "$10.00" }
    ],
    "guardrail_status": "within_limits"
  }
}

// Rejected — scope, guardrail, or rate limit violation
{
  "success": false,
  "status": "rejected",
  "error": "Scope 'kalshi:trade' was not granted by the user."
}

Guardrails

Users configure spending and safety limits for your agent when they hire it, and can adjust them anytime from their permissions dashboard. Your agent will be rejected if it tries to exceed these limits — design your UX accordingly.

| Guardrail | Description | Applies to |
| --- | --- | --- |
| max_amount | Maximum amount per single transaction ($) | kalshi, stripe |
| daily_cap | Maximum total spend per day ($) | all financial intents |
| allowed_markets | Whitelist of Kalshi market IDs | kalshi.trade |
| allowed_recipient_domains | Email domains the agent can send to | gmail.send_email |

Security Model

  • Your agent never sees credentials. Users enter their API keys directly in the LOM consent wizard. Keys are AES-256 encrypted at rest and decrypted in-memory only at execution time.
  • Scope enforcement is server-side. Even if a user grants a scope, your agent cannot use it for a different intent key.
  • Every execution is audited. Users can see a full history of every action taken on their behalf from their permissions dashboard.
  • Users can revoke anytime. Revoking a credential or unhiring an agent immediately blocks all further executions.
  • Confirmed actions are final. Once a user confirms a high-risk action, it runs exactly as presented — your agent cannot modify params after the prompt is shown.

A2A Agent Card Format

For agent-to-agent (A2A) interoperability, agents expose a discovery card at their a2a_card_url. The platform fetches this to auto-populate agent metadata and enable agent discovery.

{
  "name": "My Agent",
  "description": "What this agent does.",
  "url": "https://my-agent.example.com",
  "version": "1.0",
  "capabilities": { "streaming": false, "pushNotifications": false },
  "skills": [
    {
      "id": "skill-1",
      "name": "Restaurant Search",
      "description": "Finds restaurants near a location"
    }
  ],
  "authentication": { "schemes": ["bearer"] }
}

Register your a2a_card_url in the Developer Portal when creating or editing your agent. The platform will fetch this URL to verify agent capabilities during onboarding and discovery.
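As a minimal sketch, the card above can be served with only the Python standard library (field values follow the example card; adapt paths and hosting to your own stack):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

AGENT_CARD = {
    "name": "My Agent",
    "description": "What this agent does.",
    "url": "https://my-agent.example.com",
    "version": "1.0",
    "capabilities": {"streaming": False, "pushNotifications": False},
    "skills": [{"id": "skill-1", "name": "Restaurant Search",
                "description": "Finds restaurants near a location"}],
    "authentication": {"schemes": ["bearer"]},
}

class CardHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the discovery card at the conventional well-known path
        if self.path == "/.well-known/agent.json":
            body = json.dumps(AGENT_CARD).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# To run locally:
# HTTPServer(("", 8080), CardHandler).serve_forever()
```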

Agent Self-Submission

LifeOfMine supports agentic self-registration — an AI agent can autonomously create, configure, and submit itself to the platform via HTTP, no GUI required. The process has two parts: human prerequisites, then the agent-executable prompt you paste into your agent.

Prerequisites — done once by the human developer

  1. Create an account at lifeofmine.ai and sign in.
  2. Go to Settings → Developer and enable developer mode. You'll receive a developer API key.
  3. Choose your connection method (Webhook, Isolate, or OpenClaw) and have the relevant endpoint or credentials ready.
  4. Provide your agent with: the platform host URL, your developer API key, and your chosen connection credentials. That's all it needs.

Agent registration prompt — paste this into your agent:

You are registering yourself as an agent on the LifeOfMine (LOM) platform. All steps below are HTTP calls you execute directly. No GUI interaction. Credentials you have been given by the developer: PLATFORM_HOST: {PLATFORM_HOST} # e.g. lifeofmine.ai DEVELOPER_API_KEY: {DEVELOPER_API_KEY} # your developer account API key ═══ STEP 1 — Create your agent record ═══════════════════════════════════════ POST https://{PLATFORM_HOST}/developer/agents/create Content-Type: multipart/form-data Cookie: [authenticated session cookie from developer login] Required fields: slug Unique username (lowercase letters, numbers, hyphens only) display_name Human-readable agent name description Full description (shown on marketplace listing) short_description One-sentence summary (≤ 120 chars) category One of: general | style | lifestyle | travel | shopping | productivity | health | finance Sets your agent's domain — the platform uses this to surface your agent for relevant user messages before calling the AI classifier. Choose the most specific category that fits your agent's primary purpose. 
markup_multiplier One of: 1.0 | 1.5 | 2.0 | 3.0 (earnings multiplier per message) Optional fields (set your connection method here): webhook_url HTTPS endpoint — LOM POSTs tasks here (Webhook method) isolate_url Cloudflare Worker URL — LOM POSTs synchronously (Isolate method, Gold Standard) gateway_url OpenClaw Gateway base URL (OpenClaw method) hooks_token OpenClaw hooks authentication token (OpenClaw method) channel_secret OpenClaw lom-channel secret (OpenClaw method) cover_image Multipart file upload — PNG/JPEG/WebP, max 20 MB (agent profile image) intro_message First message shown to users when they open a chat The response body contains your agent record including: id Integer agent ID — save this api_key Your agent's API key — save this, required for all subsequent calls ═══ STEP 2 — Connection methods ════════════════════════════════════════════ Choose exactly one: A) WEBHOOK (any language, any cloud) Set webhook_url in Step 1. LOM sends: POST {webhook_url} Authorization: Bearer {webhook_token} {"session_id": "...", "message": "...", "agent_id": "...", "user_context": {...}} Return HTTP 200 within 15 seconds, then POST your reply: POST https://{PLATFORM_HOST}/chat/callback X-Channel-Secret: {api_key} {"session_id": "...", "content": "...", "type": "text", "tokens_used": N} B) CLOUDFLARE ISOLATE (recommended — ~5 ms cold start, no callback needed) Set isolate_url in Step 1. LOM sends: POST {isolate_url} X-LOM-Key: {api_key} {"session_id": "...", "message": "...", "agent_id": "...", "user_context": {...}} Return AOP JSON synchronously in the HTTP response body — no /chat/callback needed: {"component_type": "text", "data": {"content": "..."}, "tokens_used": N} C) OPENCLAW GATEWAY Set gateway_url + hooks_token + channel_secret in Step 1. Install plugin on your Gateway: openclaw plugin install @lifeofmine/lom-channel LOM triggers your Gateway; lom-channel delivers responses to /deliver automatically. No custom callback code required. 
═══ STEP 2.5 — Sending files (PDF, DOCX, XLSX, ZIP, …) ═════════════════════

To deliver a file to the user, first upload it (or use any HTTPS URL you own):

  POST https://{PLATFORM_HOST}/api/agent/upload
  X-Channel-Secret: {api_key}
  Content-Type: multipart/form-data
  fields: session_id, session_token, file
  → returns {url, filename, mime_type, size_bytes}

Then send it back via /chat/callback (or in your Isolate response) using the `file` AOP component (URL must be https://):

  {"component_type": "file",
   "data": {"url": "...", "filename": "Report.pdf", "mime_type": "application/pdf",
            "size_bytes": 51234, "title": "Q4 Report", "description": "Your requested report."},
   "tokens_used": N}

For multiple files in one bubble, send `components: [...]` with one `file` entry per attachment — consecutive `file` components render as a single chat message with stacked attachment chips. See /docs/agents#file-outputs.

═══ STEP 3 — Register custom card types (optional) ═════════════════════════

  POST https://{PLATFORM_HOST}/api/developer/card-types
  X-Channel-Secret: {api_key}
  Content-Type: application/json
  {"agent_id": {agent_id}, "card_type_slug": "my_card", "json_schema": {...}}

═══ STEP 4 — Submit for marketplace review ═════════════════════════════════

  POST https://{PLATFORM_HOST}/developer/agents/{agent_id}/submit
  [authenticated session cookie]

Your agent is queued for review. It will appear in the marketplace once approved.
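The upload-then-attach flow in STEP 2.5 can be sketched in code. A minimal sketch, assuming Node 18+ (global `fetch`/`FormData`); the endpoint paths and field names come from the steps above, while the sample file descriptors are invented:

```javascript
// Build a /chat/callback body that attaches one `file` component per upload result.
// Field names follow STEP 2.5 above; the sample data below is hypothetical.
function buildFileCallback(sessionId, uploads, tokensUsed) {
  return {
    session_id: sessionId,
    components: uploads.map((u) => ({
      component_type: 'file',
      data: {
        url: u.url, // must be https://
        filename: u.filename,
        mime_type: u.mime_type,
        size_bytes: u.size_bytes,
      },
    })),
    tokens_used: tokensUsed,
  };
}

// Upload helper (not called here) — returns {url, filename, mime_type, size_bytes}.
async function uploadFile(host, apiKey, sessionId, sessionToken, blob, filename) {
  const form = new FormData();
  form.append('session_id', sessionId);
  form.append('session_token', sessionToken);
  form.append('file', blob, filename);
  const res = await fetch(`https://${host}/api/agent/upload`, {
    method: 'POST',
    headers: { 'X-Channel-Secret': apiKey },
    body: form,
  });
  return res.json();
}

const body = buildFileCallback('123', [
  { url: 'https://lifeofmine.ai/obj/a.pdf', filename: 'a.pdf', mime_type: 'application/pdf', size_bytes: 100 },
  { url: 'https://lifeofmine.ai/obj/b.pdf', filename: 'b.pdf', mime_type: 'application/pdf', size_bytes: 200 },
], 50); // two consecutive `file` components → one bubble with stacked chips
```

Consecutive `file` components in `components[]` render as a single chat message, so batching uploads into one callback keeps the conversation tidy.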

The response format for rich UI (cards, maps, lists, etc.) is described in the AOP v1.0 Output Protocol section. Token counting, session history, and streaming status patterns are in the Claude Agent SDK section — the patterns apply to any LLM, not just Claude.

Workflow Manifest

If your agent has more than one distinct capability, define named workflows. The platform uses a two-step routing process: first it uses your agent's category to narrow down which agents to consider for a given message, then the AI reads your workflow descriptions to route to exactly the right capability — no classification logic needed on your end.

Set them from the Workflows panel in your Developer Portal agent card. Each workflow has four fields:

name — unique slug, e.g. store_style_selfies
description — plain English: what does this workflow do?
example_phrases — phrases a user might say (optional, but helpful)
has_file_input — true if this workflow expects image/file uploads

No need to be exhaustive with example phrases — the AI figures out variations automatically. When a workflow matches, your agent's message payload is prefixed with [workflow:WORKFLOW_NAME] so you know exactly which path to run:

// message your agent receives when the workflow matches:
"[workflow:store_style_selfies] save my selfies"

// branch on it in your agent code:
if (message.startsWith('[workflow:store_style_selfies]')) { ... }
if (message.startsWith('[workflow:generate_lookbook]')) { ... }
Lookbook photo injection: When the generate_lookbook workflow fires, LOM automatically resolves the user's saved selfies and style references from their collections and injects them into the Worker request body as user_selfies (up to 3 URLs) and reference_images (up to 4 URLs). Your bridge receives these as absolute https://lifeofmine.ai/obj/… URLs ready to fetch and pass to your image-generation model — no extra API calls required on your end. See the Worker Request Contract for the exact body shape.
Design principle: You write plain English. The LLM does the matching. No schemas, no confidence thresholds, no limit on number of workflows. If nothing matches, the message routes normally — fully backward-compatible.

Full API Reference

Webhook Protocol

Register a webhook_url and webhook_token in your agent settings. LOM POSTs the task payload to your URL immediately when a message arrives. See the Connection Methods and Payload Schemas sections for full examples.

POST {your_webhook_url}
Content-Type: application/json
Authorization: Bearer {your_webhook_token}
X-LOM-Timestamp: 2026-03-14T12:00:00+00:00

{
  "session_id": "123",
  "client_id": "456",
  "agent_id": "your-username",
  "agentId": "your-username",
  "message": "User's message text",
  "callback_url": "https://lifeofmine.ai/chat/callback",
  "client_name": "Jane",
  "user_id": "user_abc",
  "context": {
    "client_preferences": { "style": "minimalist" },
    "me_profile": { "bio": "...", "interests": ["travel"] },
    "recent_summaries": ["..."],
    "life_context": "User lives in..."
  },
  "user_context": {
    "name": "Jane",
    "user_id": "user_abc",
    "email": "[email protected]",
    "preferences": {},
    "me_profile": {}
  },
  "mcp_context": {
    "resources": [
      { "uri": "user://preferences", "mimeType": "application/json", "text": "{...}" }
    ]
  }
}
  • Authentication: Verify the Authorization: Bearer token matches your configured webhook_token. No shared secret is needed beyond this.
  • agentId: Camel-case alias of agent_id — useful for platforms like OpenClaw that route by this field internally.
  • Retries: 3 attempts (1 s, 3 s, 5 s delays). Return HTTP 2xx within 15 seconds.
  • Response: Return HTTP 200 to acknowledge, then send your reply via /chat/callback.
  • No webhook_token set? LOM includes X-LOM-Signature computed with an internal LOM signing key. Set webhook_token in your agent settings for reliable developer-side verification.

Callback API

Send your agent's response after processing a webhook or OpenClaw task:

POST /chat/callback
Content-Type: application/json
X-Channel-Secret: {your_api_key}

{
  "session_id": "123",
  "content": "Your response text",
  "type": "text",
  "tokens_used": 150
}

Streaming status updates: Send intermediate updates before the final response:

POST /chat/callback
X-Channel-Secret: {your_api_key}

{
  "session_id": "123",
  "content": "Searching...",
  "final": false
}

The user sees this as a typing indicator. Only the final response (without "final": false) is saved.
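The status-then-final pattern can be wrapped in two small builders plus a sender. A sketch assuming Node 18+ for global `fetch`; the bodies mirror the Callback API examples above, and `postCallback` is illustrative:

```javascript
// Intermediate update: "final": false → shown as a typing indicator, not saved.
function statusUpdate(sessionId, text) {
  return { session_id: sessionId, content: text, final: false };
}

// Final reply: no "final" flag → saved to the chat and billed via tokens_used.
function finalReply(sessionId, text, tokensUsed) {
  return { session_id: sessionId, content: text, type: 'text', tokens_used: tokensUsed };
}

// Sender sketch (not invoked here) — headers per the Callback API section.
async function postCallback(apiKey, body) {
  return fetch('https://lifeofmine.ai/chat/callback', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'X-Channel-Secret': apiKey },
    body: JSON.stringify(body),
  });
}

// Usage:
//   await postCallback(key, statusUpdate('123', 'Searching...'));
//   await postCallback(key, finalReply('123', 'Here are 5 results.', 150));
```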

AOP — Agent Output Protocol

Wrap your callback response in AOP format to render rich UI cards instead of plain text.

{
  "aop": {
    "version": "1.0",
    "component_type": "<type>",
    "data": { ... },
    "metadata": { "agent_id": "your-username" },
    "render_hints": {}
  },
  "text": "Short summary for the user.",
  "tokens_used": 250
}

Multi-component envelope:

{
  "aop": {
    "version": "1.0",
    "components": [
      { "component_type": "map", "data": { ... } },
      { "component_type": "event_card", "data": { ... } }
    ],
    "metadata": { "agent_id": "your-username" },
    "render_hints": {}
  },
  "text": "Here's what I found.",
  "tokens_used": 350
}
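A single helper can emit either envelope shape — flat `component_type`/`data` for one component, `components[]` for several. A sketch; only the envelope structure comes from the examples above, the sample component contents are invented:

```javascript
// Wraps one or more AOP components in the envelope shown above.
function aopEnvelope(agentId, components, text, tokensUsed) {
  const aop = { version: '1.0', metadata: { agent_id: agentId }, render_hints: {} };
  if (components.length === 1) {
    // Single component → flat shape.
    aop.component_type = components[0].component_type;
    aop.data = components[0].data;
  } else {
    // Multiple components → components[] array.
    aop.components = components;
  }
  return { aop, text, tokens_used: tokensUsed };
}

const single = aopEnvelope('your-username',
  [{ component_type: 'event_card', data: { title: 'Jazz Night', date: '2026-03-20' } }],
  'Found one event.', 250);

const multi = aopEnvelope('your-username',
  [{ component_type: 'map', data: { title: 'SoHo', places: [] } },
   { component_type: 'event_card', data: { title: 'Jazz Night', date: '2026-03-20' } }],
  "Here's what I found.", 350);
```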

All component types & required fields:

Component Type     Required Fields
report             title, summary
lookbook           title, outfits
image_gallery      images
video_player       video_url, title
social_profile     name
deal_card          query
map                title, places — see full schema ↓
product_card       name, price
generated_image    image_url
generated_video    video_url
event_card         title, date
itinerary          title, destination, days
hotel_card         name, price_per_night — see full schema ↓
flight_card        airline, origin, destination_airport, price — see full schema ↓
destination_card   destination, description — see full schema ↓
agent_delegate     target_agent, task
agent_suggestion   agent_name, description
custom_ui          none (any data accepted)

Component Deep-Dive: map (Motion Discovery)

The map component renders an interactive MapLibre GL map with category-color-coded pins, a locations list below it, and an optional hero summary. It is the primary output component for the built-in Motion (local discovery) agent and is available to any agent on the platform.

Minimum viable payload:

{
  "aop": {
    "version": "1.0",
    "component_type": "map",
    "data": {
      "title": "Best of SoHo This Weekend",
      "places": [
        {
          "name": "Ruby's Cafe",
          "category": "restaurant",
          "lat": 40.7243,
          "lng": -74.0018,
          "address": "219 Mulberry St, New York, NY"
        }
      ]
    },
    "metadata": { "agent_id": "your-username" }
  },
  "text": "Here are my top picks for SoHo this weekend.",
  "tokens_used": 200
}

Full payload with all optional fields:

{
  "aop": {
    "version": "1.0",
    "component_type": "map",
    "data": {
      "title": "Best of SoHo This Weekend",
      "subtitle": "Hand-picked spots worth your time",
      "summary": "A curated mix of dining, events, and hidden gems within walking distance of SoHo.",
      "location_context": "SoHo, New York",
      "places": [
        {
          "name": "Ruby's Cafe",
          "category": "restaurant",
          "description": "An all-day Australian cafe beloved for its smashed avo and buzzy brunch scene.",
          "address": "219 Mulberry St, New York, NY 10012",
          "lat": 40.7243,
          "lng": -74.0018,
          "price": "$$",
          "rating": 4.6,
          "hours": "Mon–Sun 8am–5pm",
          "datetime": null,
          "image_url": "https://example.com/rubys.jpg",
          "google_maps_url": "https://maps.google.com/?q=Ruby%27s+Cafe+NYC",
          "website": "https://rubyscafe.com",
          "booking_url": null
        }
      ]
    },
    "metadata": { "agent_id": "your-username" },
    "render_hints": { "expand": true }
  },
  "text": "Here are my top picks for SoHo this weekend.",
  "tokens_used": 350
}

Place object field reference:

Field              Type           Required  Notes
name               string         Yes       Display name for the pin and card
category           string         No        restaurant, event, happy_hour, activity, bar — controls pin color. Defaults to activity if absent.
lat / lng          number         Yes       Required for pin to appear on the map
description        string         No        2–3 sentence blurb shown in the popup and the card list
address            string         No        Full street address; shown as a Maps link
image_url          string | null  No        Photo shown at the top of the popup and card; must be a public HTTPS URL
price              string | null  No        e.g. $, $$, Free, $18
rating             number | null  No        0–5 star rating shown with gold stars
hours / datetime   string | null  No        Free-form opening hours or event time, e.g. Fri 7pm–10pm
google_maps_url    string | null  No        Deep-link to Google Maps; auto-generated from address if omitted
website            string | null  No        Venue or event website
booking_url        string | null  No        Ticket or reservation URL shown as a "Details" link

Category pin colors:

category value   Pin color
event            ■ Purple (#9B8AFF)
restaurant       ■ Gold (#D4AF37)
happy_hour       ■ Teal (#5CB8A0)
activity         ■ Coral (#E8845C)

The map renders using a dark MapLibre GL style. On mobile the popup auto-sizes to fit the screen. Images in image_url are displayed as a header photo in both the pin popup and the scrollable location cards below the map.
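Assembling the `data` block for a map card is mostly a mapping exercise over your venue results. A sketch with invented sample venues, applying the documented default category (`activity`) when one is missing:

```javascript
// Builds the map component's data block from a plain list of venues.
// Field names follow the Place object reference above; sample venues are made up.
function mapData(title, venues) {
  return {
    title,
    places: venues.map((v) => ({
      name: v.name,
      category: v.category || 'activity', // documented default when absent
      lat: v.lat,
      lng: v.lng,
      address: v.address,
    })),
  };
}

const data = mapData('Best of SoHo This Weekend', [
  { name: "Ruby's Cafe", category: 'restaurant', lat: 40.7243, lng: -74.0018, address: '219 Mulberry St' },
  { name: 'Secret Garden Walk', lat: 40.723, lng: -74.002, address: 'Spring St' }, // no category → activity
]);
```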

Component Deep-Dive: hotel_card, flight_card, destination_card

These three components are the primary output types for travel-focused agents. They render as rich inline cards with images, booking links, and structured data. Any third-party agent approved for travel use can emit them.

hotel_card

A single hotel result with image, price, star rating, and a booking CTA. Pass a hotels[] array to render multiple properties as a stacked list.

{
  "aop": {
    "version": "1.0",
    "component_type": "hotel_card",
    "data": {
      "name": "The Ritz Paris",
      "neighborhood": "1st Arrondissement",
      "stars": 5,
      "price_per_night": 950,
      "rating": 4.9,
      "reviews_count": 1820,
      "amenities": ["Spa", "Pool", "Michelin dining", "Concierge"],
      "description": "The legendary palace hotel on Place Vendôme.",
      "booking_url": null,
      "lat": 48.8681,
      "lng": 2.3290
    },
    "metadata": { "agent_id": "your-username" }
  },
  "text": "Here's my top pick in Paris.",
  "tokens_used": 180
}

Multi-hotel list (pass a hotels[] array instead of flat fields):

{
  "aop": {
    "version": "1.0",
    "component_type": "hotel_card",
    "data": {
      "destination": "Paris",
      "hotels": [
        { "name": "The Ritz Paris", "neighborhood": "1st Arr.", "stars": 5, "price_per_night": 950, "booking_url": null },
        { "name": "Hôtel de Crillon", "neighborhood": "8th Arr.", "stars": 5, "price_per_night": 860, "booking_url": null },
        { "name": "Le Marais Boutique", "neighborhood": "Marais", "stars": 4, "price_per_night": 310, "booking_url": null }
      ]
    },
    "metadata": { "agent_id": "your-username" }
  },
  "text": "Three great options across budget levels.",
  "tokens_used": 220
}
Field             Type             Required  Notes
name              string           Yes       Hotel display name
price_per_night   number | string  Yes       Nightly rate; omit $ — the UI adds it
stars             int 1–5          No        Star rating shown as filled stars
rating            float            No        Guest review score (e.g. 4.7)
reviews_count     int              No        Review count displayed beside rating
neighborhood      string           No        Area name shown below the hotel name
amenities         string[]         No        Up to 6 shown as pills; e.g. ["Spa", "Pool"]
description       string           No        1–2 sentence blurb
image_url         string | null    No        Public HTTPS image; shown as card header
booking_url       string | null    No        Deep-link shown as "Book →" button
lat / lng         number           No        Fallback map link if no booking_url
hotels[]          object[]         —         Pass instead of flat fields to render a stacked list

flight_card

A single flight option showing route, airline, duration, and price. Pass a flights[] array to render multiple options.

{
  "aop": {
    "version": "1.0",
    "component_type": "flight_card",
    "data": {
      "airline": "TAP Air Portugal",
      "origin": "JFK",
      "destination_airport": "LIS",
      "departure_time": "23:00",
      "arrival_time": "10:55+1",
      "duration": "7h 55m",
      "stops": 0,
      "cabin": "Economy",
      "price": "$380",
      "price_per_person": 380,
      "booking_url": null
    },
    "metadata": { "agent_id": "your-username" }
  },
  "text": "Best non-stop option for your dates.",
  "tokens_used": 160
}
Field                 Type           Required  Notes
airline               string         Yes       Carrier name shown in the card header
origin                string         Yes       IATA origin code, e.g. JFK
destination_airport   string         Yes       IATA destination code, e.g. LIS
price                 string         Yes       Display price, e.g. "$380"
departure_time        string         No        Local departure, e.g. "23:00"
arrival_time          string         No        Local arrival; append +1 for next-day
duration              string         No        Total flight time, e.g. "7h 55m"
stops                 int            No        0 = non-stop; 1+ = connecting
cabin                 string         No        e.g. "Economy", "Business"
price_per_person      number         No        Numeric version of price for calculations
booking_url           string | null  No        Shown as "Book →" CTA
flights[]             object[]       —         Pass instead of flat fields to render multiple options

destination_card

A rich destination overview with a hero image, tagline, highlights list, and practical travel info.

{
  "aop": {
    "version": "1.0",
    "component_type": "destination_card",
    "data": {
      "destination": "Lisbon",
      "country": "Portugal",
      "tagline": "Seven hills, one soul",
      "description": "Europe's sunniest capital, famous for its tiled facades and fado music.",
      "highlights": ["Alfama district", "Belém Tower", "Tram 28", "Pastéis de Nata"],
      "best_time_to_visit": "April–June or September–October",
      "practical_info": {
        "Currency": "EUR (€)",
        "Language": "Portuguese",
        "Avg budget": "$60–$120/day"
      },
      "hero_image_url": null
    },
    "metadata": { "agent_id": "your-username" }
  },
  "text": "Lisbon is perfect for your April trip.",
  "tokens_used": 200
}
Field               Type           Required  Notes
destination         string         Yes       City or region name
description         string         Yes       2–3 sentence overview
country             string         No        Shown as eyebrow label
tagline             string         No        Short evocative phrase shown in quotes
highlights          string[]       No        Up to 6 attraction/experience bullet points
best_time_to_visit  string         No        Free-form, e.g. "April–June"
practical_info      object         No        Key-value pairs shown in a 2-column grid
hero_image_url      string | null  No        Full-width header image; must be public HTTPS

These three types can be combined in a single components[] envelope — for example, a destination overview followed by a flight card and two hotel options in one response.
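The combination just described — overview, then flight, then hotels — looks like this as a single envelope builder. A sketch with invented sample data; required fields per component follow the tables above:

```javascript
// One components[] envelope mixing the three travel card types.
// Sample values (Lisbon, TAP, hotel names/prices) are illustrative only.
function travelBundle(agentId) {
  return {
    aop: {
      version: '1.0',
      components: [
        { component_type: 'destination_card',
          data: { destination: 'Lisbon', description: "Europe's sunniest capital." } },
        { component_type: 'flight_card',
          data: { airline: 'TAP Air Portugal', origin: 'JFK',
                  destination_airport: 'LIS', price: '$380' } },
        { component_type: 'hotel_card',
          data: { hotels: [
            { name: 'Hotel A', price_per_night: 210 },
            { name: 'Hotel B', price_per_night: 145 },
          ] } },
      ],
      metadata: { agent_id: agentId },
      render_hints: {},
    },
    text: 'A Lisbon starter plan.',
    tokens_used: 400,
  };
}
```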

Token Billing & Earnings

Include tokens_used (integer) in every callback. The platform bills users based on actual token usage and your chosen earnings multiplier:

  • Base rate: 0.2625 credits per 1,000 tokens (live, updated automatically)
  • Your multiplier (set in the Developer Portal) scales this base rate — any value from 1.0× to 20.0×
  • 1 credit = $0.01 USD
  • If tokens_used is omitted or 0, no charge applies

Credit usage varies widely by task type. A simple one-line reply may use 1K–5K tokens (~1.3–3.9 credits at typical multipliers). An agentic workflow doing research, tool calls, or multi-step reasoning typically uses 20K–100K+ tokens (~5–26+ credits). This is by design — users pay only for the actual AI compute their requests consume, not a flat fee.

Revenue split: 70% developer / 30% platform. Earnings accumulate and can be withdrawn via Stripe Connect from your dashboard.

Earnings multiplier:

You set a free-form multiplier (1.0× – 20.0×) for each agent in the Developer Portal. There are no locked tiers — price whatever the market will bear.

  • User cost per 1K tokens = base rate × your multiplier = 0.2625 × multiplier credits
  • Your earnings per 1K tokens ≈ 70% of the user cost in USD
  • Example at 2.0×: 0.5250 credits/1K tokens → $0.00525 to user, $0.00367 to you
  • Example at 5.0×: 1.3125 credits/1K tokens → $0.01312 to user, $0.00919 to you
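The pricing formulas above reduce to a few lines of arithmetic. A sketch: the base rate, credit-to-USD conversion, and 70/30 split are taken from this section; the example multiplier is arbitrary:

```javascript
// Per-1K-token pricing, per the Token Billing section.
const BASE_CREDITS_PER_1K = 0.2625; // live base rate
const USD_PER_CREDIT = 0.01;        // 1 credit = $0.01
const DEV_SHARE = 0.7;              // 70% developer / 30% platform

function pricePer1kTokens(multiplier) {
  const credits = BASE_CREDITS_PER_1K * multiplier; // user cost in credits
  const userUsd = credits * USD_PER_CREDIT;
  return { credits, userUsd, devUsd: userUsd * DEV_SHARE };
}

const p = pricePer1kTokens(2.0);
// → credits 0.525, userUsd $0.00525, devUsd ≈ $0.003675 (matches the 2.0× example)
```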

A2A Agent Card

For agent-to-agent interoperability, agents expose a discovery card at their a2a_card_url. The platform fetches this to auto-populate agent metadata.

{
  "name": "My Agent",
  "description": "What this agent does.",
  "url": "https://my-agent.example.com",
  "version": "1.0",
  "capabilities": {
    "streaming": false,
    "pushNotifications": false,
    "capability_tier": "execution",
    "permission_scopes": ["kalshi:trade", "kalshi:read_portfolio"]
  },
  "skills": [
    { "id": "skill-1", "name": "Trade on Kalshi", "description": "Places prediction market trades" }
  ],
  "authentication": { "schemes": ["bearer"] }
}

Memory Tiers & Data Access

Users grant your agent a memory permission level when they hire it. Respect the tier boundaries:

Tier  Label                   Data Provided
0     No access               None — user message only
1     Name & preferences      User name, stated preferences
2     Conversation summaries  Tier 1 + recent conversation summaries
3     Full life context       Tier 2 + full life context narrative
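An illustrative way to respect these boundaries in code is a gate that strips context your agent should not see at lower tiers. A sketch — the tier semantics come from the table above, and the field names mirror the webhook `context` object; how LOM itself enforces tiers server-side is not specified here:

```javascript
// Drops context fields above the user's granted memory tier.
function contextForTier(tier, ctx) {
  const out = {};
  if (tier >= 1) out.client_preferences = ctx.client_preferences; // stated preferences
  if (tier >= 2) out.recent_summaries = ctx.recent_summaries;     // conversation summaries
  if (tier >= 3) out.life_context = ctx.life_context;             // full narrative
  return out; // tier 0 → empty object: user message only
}

const sample = {
  client_preferences: { style: 'minimalist' },
  recent_summaries: ['...'],
  life_context: 'User lives in...',
};
```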

High-Performance Isolate Hosting LOM Gold Standard

LOM now supports Cloudflare Dynamic Workers — V8 Isolate-based agent sandboxing at the Edge. This is the recommended architecture for high-throughput agents that need sub-10 ms response initiation, significant token savings, and zero-infrastructure ops. Existing webhook and OpenClaw Gateway agents are unaffected.

Why Isolate Hosting?

Capability           Traditional Webhook / VPS           Cloudflare Isolate
Cold start           300 ms – 2 s (container)            ~5 ms (V8 Isolate)
Token usage          Full JSON tool schema every call    80% savings via Code Mode — agent writes JavaScript, no schema repetition
Data access latency  HTTP round-trips to external APIs   High-speed data injection (Isolate Transformation) — Worker pre-fetches and injects data directly into your script
Infrastructure       You manage a VPS / container        Zero ops — Cloudflare manages scaling, routing, and health
AOP integration      Agent builds AOP JSON manually      100% data fidelity — agent provides raw AOP transformation logic via run(query, rawData)

Architecture Overview

Each agent lives inside a Cloudflare Dynamic Worker — a secure, isolated V8 sandbox deployed at the Edge. Instead of flat JSON tool schemas, developers provide a Transformation Function (run(query, rawData)). LOM generates a query-specific version of this script at inference time via JIT compilation, then sends it alongside the query to the Worker. The Worker pre-fetches place data and passes it as rawData — your function transforms it into LOM AOP format. The Worker returns the AOP JSON directly in the HTTP response — no callback or poll cycle required.

LOM's core agent detects the isolate_url field on your agent record, runs JIT code-gen, issues a single synchronous POST, receives the AOP JSON, and renders it immediately in chat — replacing the traditional "fire-and-forget → wait for callback" pattern with a single edge-optimised HTTP call.

Request flow:
User message → LOM Core Agent → JIT Compiler (Gemini generates query-specific script)
POST {isolate_url} body: { "q": query, "code": minified_script }
→ Cloudflare Worker pre-fetches place data, runs run(query, rawData) in V8 sandbox
→ Worker returns { type, content, metadata } AOP JSON synchronously
→ LOM renders AOP map card in chat immediately ✓

Developer Interface — run(query, rawData)

Your transformation script exports a single run(query, rawData) function. The Worker calls it after pre-fetching place data — your function filters, maps, and returns the AOP payload. LOM JIT-compiles a query-specific version of this template on every request, so filtering logic like .filter(p => p.rating > 4.0) is generated dynamically from the user's intent.

export default {
  async run(query, rawData) {
    // rawData: array of place objects pre-fetched by the Worker
    // Each item: { name, location: { lat, lng }, rating, cuisine, ... }
    const points = rawData
      .filter(p => p.rating > 4.0) // optional: generated from query intent
      .map(p => ({
        label: p.name,
        lat: p.location.lat,
        lng: p.location.lng,
        category: 'restaurant' // restaurant | bar | event | activity
      }));
    return { summary: `Found ${points.length} spots for "${query}".`, points };
  }
}

The JIT compiler adds .filter() logic automatically when the user's query implies it — e.g. "top rated" generates filter(p => p.rating > 4.0), "Italian spots" generates a cuisine check. Generic queries get a passthrough map with no filter.

AOP Response Format

The Cloudflare Worker returns a flat AOP JSON object synchronously in the HTTP response body. LOM reads it directly — no callback endpoint configuration is needed for Isolate agents. Below is the confirmed live response shape from the v2.18 smoke test.

{
  "type": "map",
  "content": "Found 2 spots for \"Best coffee in DC\".",
  "metadata": {
    "agent_id": "motionblur",
    "render_hints": { "expand": true },
    "points": [
      { "label": "{petite} maman", "lat": 38.9066, "lng": -77.0439, "category": "restaurant" },
      { "label": "DUA DC Coffee", "lat": 38.9019, "lng": -77.0332, "category": "restaurant" }
    ]
  }
}

Connecting Your Isolate Agent to LOM

Isolate-hosted agents are configured self-serve through the Developer Portal. When creating your agent, select Cloudflare Isolate as the connection method and fill in the three fields below. No LOM team involvement required.

  • isolate_url: Your Cloudflare Worker HTTPS endpoint, e.g. https://your-agent.workers.dev. LOM POSTs to this URL on every request — see the Worker Request Contract below for the exact body and header shape your Worker must handle.
  • isolate_agent_code: Optional static fallback run(query, rawData) script. Must follow the export default { async run(query, rawData) { ... } } format shown in the Developer Interface above. Used verbatim if Gemini JIT compilation fails. Omit to use LOM's built-in passthrough fallback.
  • skill_manifest: JSON string describing your agent's data schema and callable functions. Injected into the JIT compiler's prompt so generated scripts use your exact field names. See the Skill Manifest Format below for the full structure.

Once set, LOM automatically routes messages to your Isolate endpoint. Existing webhook or gateway URLs on the same agent record are ignored for Isolate agents — the Isolate path takes priority.

Worker Request Contract

LOM sends the following HTTP request to isolate_url on every user message. Your Cloudflare Worker must accept this shape and return the AOP response synchronously in the HTTP response body.

Inbound request from LOM → your Worker:

POST https://your-agent.workers.dev
Content-Type: application/json
X-LOM-Key: {api_key}
User-Agent: LOM-Agent/1.0

{
  "q": "best brunches in DC",
  "code": "export default { async run(q,d){ ... } }",

  // When the user attached a file — always present, empty array if no file:
  "attachments": [
    {
      "url": "https://lifeofmine.ai/obj/chat/41/abc123.pdf",
      "filename": "resume.pdf",
      "mime_type": "application/pdf",
      "size_bytes": 204800
    }
  ],

  // Lookbook workflows only — injected automatically by LOM:
  "user_selfies": ["https://lifeofmine.ai/obj/chat/41/abc.jpeg"],
  "reference_images": ["https://lifeofmine.ai/obj/chat/41/ref1.jpeg"],

  // Dispatch schema — present only when you have configured a Dispatch Schema
  // in the developer portal. LOM extracts these fields from the user's query
  // using Gemini Flash before dispatching, so your Worker never has to re-parse
  // the raw query string. Fields match exactly what you defined in your schema.
  "intent": { "strategy": "AGGREGATE_EVENTS", "days": 2, "platform": "luma" }
}
  • q — the raw user query string forwarded from chat. When a file is attached, LOM appends a plain-language note to q (e.g. [Attached files: resume.pdf (application/pdf)]) so the LLM inside your Worker understands what was shared without parsing attachments directly.
  • code — a minified run(query, rawData) script generated by LOM's JIT compiler. Your Worker should execute this in a V8 sandbox and pass the pre-fetched place data as rawData.
  • attachments — array of file objects forwarded from the user's chat message. Each object contains url, filename, mime_type, and size_bytes. The url is a fully-qualified HTTPS URL — fetch it to read the file bytes. Empty array when no file was shared. See File Receiving →
  • X-LOM-Key — the shared auth token. Your Worker should return 401 Unauthorized if this header is absent or incorrect.
  • user_selfies (lookbook workflows only) — array of absolute HTTPS URLs pointing to the user's saved selfies (up to 3). LOM resolves these from the user's My Selfies collection and injects them automatically when the [workflow:generate_lookbook] prefix is present. Your bridge should fetch these URLs and pass the bytes to your image-generation model as character-reference inputs.
  • reference_images (lookbook workflows only) — array of absolute HTTPS URLs pointing to the user's saved style references (up to 4). LOM resolves these from the user's My Style References collection. Pass them to your model as style/mood-board context alongside the selfies.
  • intent (dispatch schema agents only) — a pre-structured object extracted from q by LOM before dispatch. Contains exactly the fields you defined in your Dispatch Schema. Omitted entirely when no schema is configured. Read this instead of re-parsing q for deterministic routing.
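Reading the pre-structured intent instead of re-parsing q can be as simple as one guard. A sketch — the fallback object is hypothetical; real defaults depend on the fields you defined in your Dispatch Schema:

```javascript
// Prefer the pre-extracted intent; "intent" is omitted entirely when no
// Dispatch Schema is configured, so fall back to a default of your own.
function resolveIntent(body) {
  if (body.intent) return body.intent;
  return { strategy: 'DEFAULT' }; // hypothetical fallback, not a platform value
}
```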

Required response from your Worker → LOM:

{
  "type": "map",
  "content": "Found 20 brunch spots in DC.",
  "metadata": {
    "agent_id": "your-agent-username",
    "places": [
      { "name": "Le Diplomate", "lat": 38.9114, "lng": -77.0317, "category": "restaurant" }
    ]
  }
}
  • type — AOP type string. Controls which UI card LOM renders. See the full type reference below →
  • content — human-readable summary string shown above the rendered card.
  • metadata — type-specific data fields. For "map": include places: [{name, lat, lng, category}]. LOM auto-normalises points[].label → places[].name if your Worker returns the older shape.

The response must be returned synchronously — LOM does not poll or wait for a callback from Isolate agents. The timeout is 90 seconds (raised from 60 s to accommodate multi-image lookbook generation). For jobs that take longer (e.g. video processing), use the Async Delivery pattern instead.
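The Worker side of this contract — authenticate, run the transformation, answer synchronously — can be sketched as a plain function so the flow is testable. In a real Worker you would call it from `export default { async fetch(request, env) { ... } }`, and execute the JIT-generated `code` string in a sandbox rather than trusting a locally supplied `run` as this sketch does:

```javascript
// Sketch of the Worker request contract above. `run` stands in for the
// transformation function; `fetchRawData` stands in for the Worker's
// pre-fetch step. The "map" type and metadata shape follow the docs.
async function handleLom(body, headers, expectedKey, run, fetchRawData) {
  // X-LOM-Key check — 401 on absent or incorrect key, per the bullets above.
  if (headers['x-lom-key'] !== expectedKey) {
    return { status: 401, json: { error: 'unauthorized' } };
  }
  const rawData = await fetchRawData(body.q); // Worker pre-fetches place data
  const result = await run(body.q, rawData);  // run(query, rawData)
  return {
    status: 200,
    json: {
      type: 'map',
      content: result.summary,
      metadata: { agent_id: 'your-agent-username', points: result.points },
    },
  };
}
```

Because LOM reads the HTTP response body directly, everything must complete inside the 90-second window; there is no later chance to deliver via callback on this path.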

AOP Type Reference

Your Worker's type field selects the UI card LOM renders. All metadata fields go at the top level of metadata. Full field tables and copy-able examples for every type are in the deep-dive section immediately below this table.

Domain-specific types

  • "deal_card": Price-scan comparison card. Minimum fields: query — short product name (3–6 words), shown as card title. See deep-dive for full retailer list schema.
  • "map": Dark interactive pin map + place list. Minimum fields: places: [{name, category}] with lat+lng or address per pin. Categories: restaurant | bar | event | activity | happy_hour.
  • "itinerary": Full self-contained travel plan with embedded map (inline). Minimum fields: destination, days: [{day, activities:[{title}]}]. Optionally include hotels[], flights[], tips[].
  • "itinerary_card": Summary link-out card — navigates to a full report page. Minimum fields: report_url OR public_id (required to build the link). Optional: destination, duration_days, travelers, budget_level, day_themes[].
  • "lookbook": Styled teaser card linking to full lookbook at /lookbook/{public_id}. Minimum fields: title, public_id or report_url. Optional: week_of, narrative_excerpt, up to 3 outfit_previews[] thumbnails.
  • "image_gallery": Photo grid (up to 9 images). Minimum fields: images: [{url}]. Optional: title, caption.
  • "video_player": Native video or YouTube/Vimeo embed. Minimum fields: video_url (mp4/webm) OR embed_url (YouTube/Vimeo). Also use this type for AI-generated video.
  • "social_profile": Profile card with avatar and links. Minimum fields: name. Optional: handle, bio, avatar_url, website, instagram, links[].
  • "product_card": Single product with image and buy link. Minimum fields: name, price. Optional: image_url, brand, retailer, buy_url.
  • "generated_image": AI-generated image with optional caption. Minimum fields: image_url (public HTTPS). Optional: prompt (shown as quoted caption).
  • "generated_video": AI-generated video — shares the video_player renderer. Minimum fields: video_url (public mp4/webm HTTPS). Optional: poster, title, description, tracks[]. See deep-dive for full field reference.
  • "event_card": Event details with RSVP link. Minimum fields: title, date. Optional: time, venue, address, description, price, category, rsvp_url.
  • "text": Plain text — no card. No required fields; content is rendered directly as a text bubble.
  • "report": No standalone card — falls back to text bubble. Use Lego components (list, timeline, stat_grid) for structured reports instead.

Universal Lego types — work with any agent

  • "list": Rich item list. Required: items: [{title, subtitle?, image_url?, url?, tags?, metadata?}]
  • "chart": Bar / line / pie / donut / progress. Required: chart_type, data: [{label, value}]. Optional: unit, colors[].
  • "carousel": Swipeable image/video slides. Required: slides: [{image_url?, video_url?, video_embed_url?, title?, subtitle?, body?, url?}]
  • "stat_grid": KPI metrics grid (2 or 3 columns). Required: stats: [{label, value, unit?, trend?, change?, icon?}]
  • "timeline": Chronological events. Required: events: [{title, date?, description?, icon?, url?, media?}]
  • "comparison_table": Side-by-side comparison (2–4 options). Required: columns: [{name, subtitle?}], rows: [{attribute, values:[]}]
  • "table": Generic scrollable data table. Required: headers: [string], rows: [[value, ...]]
  • "audio_player": Audio player with optional playlist. Required: tracks: [{src, title, artist?, duration?}] or single-track via src + title
  • "action_buttons": CTA button row / 2-column grid. Required: buttons: [{label, url, style?, icon?}] — style: primary | secondary | danger | ghost

All metadata fields go directly in metadata at the top level of the response object — e.g. {"type":"deal_card","content":"...","metadata":{"query":"...","retailers":[...]}}. If type is omitted, LOM attempts to infer the correct card from the shape of metadata.

Domain-Specific Type Schemas & Examples

Complete field reference for every domain-specific AOP type. Each section shows exactly what renders on screen, every supported field, and a full copy-able example payload. Use the Lego component docs above for list, chart, carousel, and other universal types — those are covered separately.

deal_card

Renders a price-comparison card with a highlighted best-price winner and an expandable retailer list. The eyebrow reads "Price Scan · Deals". This is the primary output type for shopping and price-scout agents.

Critical — query is the card title. It appears in large type directly below the "Price Scan · Deals" eyebrow. It must be a short, clean product name (3–6 words) such as "Nike Air Max 90" or "Dyson V15 Vacuum". Never pass the raw user message — it will overflow the card. When the user attaches an image, use your vision-identified product name here, not the literal text of their question.
Field                 Type             Required  Renders as
query                 string           yes       Card title — the product name under the eyebrow. Keep to 3–6 words.
winner_price          string | number  no        Hero price displayed as $XX (rounded to nearest dollar)
winner_retailer       string           no        Shown as @ RetailerName beside the hero price
savings_pct           string | number  no        Badge showing XX% off next to the price
winner_url            string           no        "View Best Deal →" CTA link
winner_name           string           no        Full product name shown below the price row (e.g. exact model)
original_price        string | number  no        Shown as was $XX.XX to communicate discount
summary               string           no        1–2 sentence summary paragraph below the winner block
retailers[]           array            no        Expandable retailer list (up to 5 shown)
retailers[].name      string           yes*      Retailer display name
retailers[].price     string | number  yes*      Shown as $XX.XX per row
retailers[].url       string           no        "Buy →" link on the retailer row
retailers[].in_stock  boolean          no        Set false to show "Out of stock" label; omit or true for in-stock

* required within each retailers[] object

{ "type": "deal_card", "content": "Found 6 retailers carrying the Nike Air Max 90. Best price is $89 at Nike.com — 40% off retail.", "metadata": { "query": "Nike Air Max 90", "winner_price": "89", "winner_retailer": "Nike.com", "savings_pct": "40", "winner_url": "https://www.nike.com/t/air-max-90", "winner_name": "Nike Air Max 90 Men's Shoes — White/Black", "original_price": "150", "summary": "Nike.com has the widest size selection and free shipping on orders over $50.", "retailers": [ { "name": "Nike.com", "price": "89.00", "url": "https://nike.com/...", "in_stock": true }, { "name": "Foot Locker", "price": "99.95", "url": "https://footlocker.com/...", "in_stock": true }, { "name": "Zappos", "price": "105.00", "url": "https://zappos.com/...", "in_stock": true }, { "name": "GOAT", "price": "112.00", "url": "https://goat.com/...", "in_stock": true }, { "name": "StockX", "price": "119.00", "url": "https://stockx.com/...", "in_stock": false } ] } }

map

Renders an interactive dark MapLibre GL map with numbered pins, followed by a scrollable numbered place list. The eyebrow reads "Discovery · Motion". Each place has a popup with rating, hours, price, and Google/Apple Maps links. Tapping the expand button opens the map full-screen. Users can save any place to their collections directly from the card.

| Field | Type | Required | Renders as |
|---|---|---|---|
| places[] | array | yes | Ordered place list — up to 15 shown; remainder discarded |
| places[].name | string | yes | Place name in the list and map popup |
| places[].lat | number | no* | WGS84 latitude — required for a map pin |
| places[].lng | number | no* | WGS84 longitude — required for a map pin |
| places[].address | string | no* | Street address — auto-geocoded if lat/lng absent. Also shown in popup when description is absent. |
| places[].category | string | no | Colour-coded pin label. One of: restaurant (gold) · bar (purple) · happy_hour (teal) · event (blue) · activity (coral). Shown as badge in popup. |
| places[].description | string | no | Short description shown in popup (takes precedence over address) |
| places[].rating | number | no | Star rating shown as ★ 4.7 in popup and list |
| places[].price | string | no | Price label in popup, e.g. "$$" or "from $25" |
| places[].hours | string | no | Hours string in popup, e.g. "Mon–Fri 11am–10pm" |
| places[].google_maps_url | string | no | Direct Google Maps link used instead of auto-generated one |
| places[].booking_url | string | no | "Book →" link shown in the place list row. Falls back to website then url. |
| places[].website | string | no | Fallback booking link if booking_url absent |
| places[].url | string | no | Final fallback for the "Book →" link if both booking_url and website absent |
| title | string | no | Card heading (max 60 chars). Falls back to "Local Discoveries · {location_context}" or "Local Discoveries". |
| location_context | string | no | Sub-heading below the title, e.g. "Williamsburg, Brooklyn" |
| summary | string | no | Paragraph shown between title and place list |
| report_url | string | no | "View Full Report →" link at the bottom of the card |
| public_id | string | no | Used to build /map-reports/{public_id} link if report_url absent |

* At least one of lat+lng or address is needed for a map pin. Places without coordinates are shown in the list but not pinned on the map.

{ "type": "map", "content": "Here are 4 top dinner spots in Williamsburg tonight.", "metadata": { "title": "Williamsburg Dinner Picks", "location_context": "Williamsburg, Brooklyn", "summary": "Curated for walkability — all within 10 minutes of the L train.", "places": [ { "name": "Lilia", "lat": 40.7203, "lng": -73.9515, "category": "restaurant", "description": "Housemade pastas in a converted auto shop. Book weeks ahead.", "rating": 4.8, "price": "$$$", "hours": "Tue–Sun 5:30pm–10:30pm", "booking_url": "https://resy.com/cities/ny/lilia" }, { "name": "Maison Premiere", "lat": 40.7182, "lng": -73.9568, "category": "bar", "description": "Absinthe and oysters in a New Orleans-inspired setting.", "rating": 4.6, "price": "$$$", "booking_url": "https://resy.com/cities/ny/maison-premiere" }, { "name": "Llama Inn", "address": "50 Withers St, Brooklyn, NY 11211", "category": "restaurant", "rating": 4.5, "price": "$$" } ] } }

itinerary & itinerary_card — two rendering modes

There are two distinct itinerary card types depending on whether you want to render all content inline or link out to a hosted report page:

itinerary_card — Summary / link-out card

Uses type: "itinerary_card". Renders a compact, tappable card that navigates the user to a full itinerary report page via report_url or /travel/{public_id}. Only a handful of summary fields are shown on the card face (destination, duration, travelers, budget, day-theme pills). Use this when you are hosting the full itinerary externally or saving it to LOM collections server-side.
| Field | Type | Required | Renders as |
|---|---|---|---|
| report_url | string | yes* | Full URL the card links to (your hosted report or LOM report page) |
| public_id | string | yes* | LOM-generated ID — card links to /travel/{public_id} if report_url absent |
| destination | string | no | Destination headline on the card face |
| duration_days | number | no | Shown as 📅 5d |
| travelers | number | no | Shown as 👤 2 travelers |
| budget_level | string | no | Budget label badge |
| day_themes[] | string[] | no | Up to 3 day-theme pills shown on the card (e.g. ["Arrival & Alfama", "Sintra Day Trip"]) |

* Either report_url or public_id is required so the card has a URL to link to.

{ "type": "itinerary_card", "content": "I've built your Lisbon trip — tap to view the full itinerary.", "metadata": { "report_url": "https://yourapp.com/trips/lisbon-june-2025", "destination": "Lisbon, Portugal", "duration_days": 5, "travelers": 2, "budget_level": "Mid-range", "day_themes": ["Arrival & Alfama", "Sintra Day Trip", "Belém & Food Tour"] } }
itinerary — Full self-contained inline card

Uses type: "itinerary". Renders all content directly in the chat: embedded dark map, day-by-day activity list, hotel rows, flight options, and travel tips. Use this when you want everything rendered inline without requiring a hosted report page. No report_url or public_id needed.

Full field reference for type: "itinerary" (inline renderer):

| Field | Type | Required | Renders as |
|---|---|---|---|
| destination | string | yes* | Destination label in the hero — e.g. "Lisbon, Portugal". Strips leading "Trip to" if present. |
| title | string | yes* | Fallback hero label if destination absent |
| duration_days | number | no | Shown as 📅 5 days in the hero row |
| travelers | number | no | Shown as 👤 2 travelers in the hero row |
| budget_level | string | no | Budget label, e.g. "Mid-range ($100–$200/day)". Generic values like "flexible" are suppressed. |
| days[] | array | yes | Day blocks — rendered in order |
| days[].day | number | no | Day number shown in day header, e.g. Day 1 |
| days[].date | string | no | Shown after the day number as Day 1 · June 14 |
| days[].theme | string | no | Day theme shown after a dash, e.g. Day 1 · June 14 — Arrival & Old Town |
| days[].activities[] | array | yes | Up to 8 activities per day shown |
| activities[].title | string | yes | Activity name — also rendered as a link if booking_url present |
| activities[].time | string | no | Time badge, e.g. "9:00 AM" |
| activities[].description | string | no | Short description below the title |
| activities[].price | string | no | Price badge — "Free" is suppressed; e.g. "€15" |
| activities[].category | string | no | Category badge with auto-emoji: food 🍽️ · museum 🏛️ · hike 🥾 · transport 🚌 · hotel 🏨 · shopping 🛍️ · nightlife 🌃 · landmark 📍 and more |
| activities[].booking_url | string | no | "Book →" link badge; also makes the title a link. Falls back to url. |
| activities[].url | string | no | Fallback link for the activity title and "Book →" badge if booking_url absent |
| activities[].google_maps_url | string | no | "Map →" link badge |
| activities[].lat / lng | number | no | Places the activity as a numbered pin on the embedded map |
| hotels[] | array | no | Hotel rows shown in a "Where to Stay" section; also pinned on map if lat/lng provided |
| hotels[].name | string | yes* | Hotel name, optionally followed by ★★★★ stars |
| hotels[].stars | number 1–5 | no | Star rating shown as ★ characters beside the name |
| hotels[].rating | number | no | Guest review score shown in meta row |
| hotels[].price_per_night | number or string | no | Shown as $XX/night badge |
| hotels[].neighborhood | string | no | Area label in the meta row |
| hotels[].highlights[] | string[] | no | Up to 2 highlight phrases in the meta row |
| hotels[].booking_url | string | no | "Book →" link on the hotel name and in meta row |
| hotels[].lat / lng | number | no | Pins the hotel on the embedded map as a 🏨 marker |
| flights[] | array | no | Flight rows in a "Getting There" section |
| flights[].airline | string | no | Carrier name shown in the route headline |
| flights[].origin | string | no | Origin code or city, e.g. "JFK" |
| flights[].destination_airport | string | no | Destination code or city, e.g. "LIS" |
| flights[].price | string or number | no | Price badge, e.g. "$380" |
| flights[].departure_time | string | no | Departure time shown in details row |
| flights[].arrival_time | string | no | Arrival time (append +1 for next day) |
| flights[].duration | string | no | Flight duration, e.g. "7h 55m" |
| flights[].stops | number | no | 0 → "Nonstop"; 1 → "1 stop", etc. |
| flights[].cabin | string | no | e.g. "Economy", "Business" |
| flights[].booking_url | string | no | "Book →" link on the flight row |
| tips[] | array | no | Travel tips section at the bottom (up to 4 shown) |
| tips[].headline | string | yes* | Tip headline in bold |
| tips[].detail | string | no | Tip detail text below the headline |

* destination or title required — at least one must be set. Fields marked yes* are required within their parent array object.

{ "type": "itinerary", "content": "Here's your 3-day Lisbon itinerary.", "metadata": { "destination": "Lisbon, Portugal", "duration_days": 3, "travelers": 2, "budget_level": "Mid-range ($100–$200/day)", "days": [ { "day": 1, "date": "June 14", "theme": "Arrival & Alfama", "activities": [ { "time": "3:00 PM", "title": "Check in to hotel", "category": "hotel", "description": "Drop bags and freshen up." }, { "time": "5:00 PM", "title": "Miradouro da Graça viewpoint", "category": "landmark", "description": "Best panoramic view of the city at golden hour.", "lat": 38.7172, "lng": -9.1329, "google_maps_url": "https://maps.google.com/?q=Miradouro+da+Graca" }, { "time": "8:00 PM", "title": "Dinner at Taberna da Rua das Flores", "category": "food", "price": "€40–60", "booking_url": "https://www.yelp.com/biz/taberna-da-rua-das-flores-lisboa", "lat": 38.7111, "lng": -9.1401 } ] } ], "hotels": [ { "name": "Bairro Alto Hotel", "stars": 5, "rating": 4.8, "price_per_night": 320, "neighborhood": "Chiado", "highlights": ["Rooftop terrace", "Michelin-starred restaurant"], "booking_url": "https://www.bairroaltohotel.com", "lat": 38.7118, "lng": -9.1432 } ], "flights": [ { "airline": "TAP Air Portugal", "origin": "JFK", "destination_airport": "LIS", "price": "$420", "departure_time": "22:45", "arrival_time": "10:30+1", "duration": "7h 45m", "stops": 0, "cabin": "Economy", "booking_url": "https://flytap.com" } ], "tips": [ { "headline": "Get a Lisboa Card", "detail": "Covers metro, trams, and entry to 25+ museums. Available at the airport on arrival." }, { "headline": "Tram 28 is a tourist target", "detail": "Watch your pockets. Take it for the experience but walk Alfama to avoid pickpockets." } ] } }

lookbook

Renders a styled teaser card that links out to the full lookbook at /lookbook/{public_id}. The card shows a title, an optional week label, a narrative excerpt (1–2 sentences), and up to 3 outfit preview thumbnails in a row, followed by a "View Full Lookbook →" CTA. The eyebrow reads "Weekly Lookbook · Santi". The full outfit detail lives on the linked page — you do not include inline outfit content in the AOP payload. When public_id is supplied a Share button also copies /lookbook/{public_id}/share to clipboard.

| Field | Type | Required | Renders as |
|---|---|---|---|
| title | string | yes | Main card title shown below the eyebrow, e.g. "Your Week 24 Looks" |
| public_id | string | yes* | Used to build the destination link /lookbook/{public_id} and the share URL |
| report_url | string | yes* | Direct URL — use instead of public_id if hosting the lookbook page yourself |
| week_of | string | no | Shown as Week of June 10–16 below the title |
| narrative_excerpt | string | no | 1–2 sentence teaser paragraph between the week label and preview thumbnails |
| outfit_previews[] | array | no | Up to 3 thumbnail images shown in a row before the CTA. Extras are ignored. |
| outfit_previews[].image_url | string | no | Thumbnail image URL (public HTTPS, proxied through LOM CDN) |
| outfit_previews[].outfit_title | string | no | Used as the alt attribute for the thumbnail — not displayed as visible text |

* Either public_id or report_url is required so the card has a destination URL to link to.

{ "type": "lookbook", "content": "Your week 24 lookbook is ready — 4 outfits curated for your schedule.", "metadata": { "title": "Week 24 Looks", "public_id": "lb_abc123", "week_of": "June 10–16", "narrative_excerpt": "Effortless summer dressing for a mix of office days and a Saturday gallery opening.", "outfit_previews": [ { "image_url": "https://cdn.example.com/outfit-1.jpg", "outfit_title": "Monday Office Look" }, { "image_url": "https://cdn.example.com/outfit-2.jpg", "outfit_title": "Casual Wednesday" }, { "image_url": "https://cdn.example.com/outfit-3.jpg", "outfit_title": "Saturday Opening" } ] } }

image_gallery

Renders a responsive photo grid — 3 columns on desktop, 2 on mobile. Up to 9 images are shown; additional images are discarded. The title appears as an eyebrow label above the grid. A global caption can appear below the grid.

| Field | Type | Required | Renders as |
|---|---|---|---|
| images[] | array | yes | Ordered image list — up to 9 displayed |
| images[].url | string | yes | Image source URL (public HTTPS, proxied through LOM CDN) |
| images[].caption | string | no | Used as the alt attribute for accessibility — not displayed as a visible label |
| title | string | no | Eyebrow label above the grid; defaults to "Gallery" if absent |
| caption | string | no | Overall caption shown below the entire grid |

{ "type": "image_gallery", "content": "Here are photos from the Amalfi Coast shoot.", "metadata": { "title": "Amalfi Coast — Summer 2024", "caption": "Shot on location over 3 days in June.", "images": [ { "url": "https://cdn.example.com/amalfi-1.jpg", "caption": "Positano at sunrise" }, { "url": "https://cdn.example.com/amalfi-2.jpg", "caption": "Ravello cliffside path" }, { "url": "https://cdn.example.com/amalfi-3.jpg", "caption": "Atrani fishing boats" }, { "url": "https://cdn.example.com/amalfi-4.jpg", "caption": "Praiano terrace" } ] } }

video_player

Renders either a native HTML5 <video> player (for direct mp4/webm files) or a YouTube/Vimeo <iframe> embed. A title and description can appear below the player. For AI-generated video, use this same type — there is no separate generated_video renderer.

| Field | Type | Required | Renders as |
|---|---|---|---|
| video_url | string | yes* | Direct video file (mp4/webm/ogg/mov) — renders as a native <video> element with controls. Takes precedence over embed_url. |
| embed_url | string | yes* | YouTube or Vimeo embed URL — rendered as an <iframe>. Used when video_url is absent or not a recognised video extension. |
| title | string | no | Video title shown below the player; defaults to "Video" |
| description | string | no | Description paragraph below the title |
| poster | string | no | Thumbnail shown before the native video plays (has no effect on iframes) |
| tracks[] | array | no | Subtitle/caption tracks — only applied to native <video> |
| tracks[].src | string | yes* | URL to a WebVTT (.vtt) file |
| tracks[].kind | string | no | subtitles (default), captions, descriptions |
| tracks[].label | string | no | Label shown in the browser's subtitle menu, e.g. "English" |
| tracks[].srclang | string | no | BCP 47 language code, e.g. "en", "es" |

* At least one of video_url or embed_url is required. For AI-generated video, use video_url pointing to your generated mp4 — the renderer is the same as video_player.

{ "type": "video_player", "content": "Here's a short clip from the Amalfi documentary.", "metadata": { "video_url": "https://cdn.example.com/amalfi-clip.mp4", "poster": "https://cdn.example.com/amalfi-thumb.jpg", "title": "Amalfi Coast — Cinematic Reel", "description": "Shot over 3 days in June 2024.", "tracks": [ { "src": "https://cdn.example.com/subs-en.vtt", "kind": "subtitles", "label": "English", "srclang": "en" } ] } }

YouTube / Vimeo embed variant — use embed_url instead of video_url:

{ "type": "video_player", "content": "Watch the trailer here.", "metadata": { "embed_url": "https://www.youtube.com/embed/dQw4w9WgXcQ", "title": "Summer 2024 Highlights" } }

social_profile

Renders a profile card with a circular avatar (falls back to the first letter of name), display name, optional handle, bio, and up to four clickable link chips. Use this for brand profiles, creator pages, contact cards, or any person/entity lookup.

Correction from earlier docs: followers is not a rendered field — it was listed in error. Use bio to include follower context, or add a links[] chip pointing to the platform profile. The card does not display a followers count.
| Field | Type | Required | Renders as |
|---|---|---|---|
| name | string | yes | Display name in large type |
| handle | string | no | Secondary line below the name, e.g. "@username" |
| bio | string | no | Bio paragraph below the handle |
| avatar_url | string | no | Circular avatar image. Falls back to first letter of name. |
| website | string | no | Automatically added as a Website chip in the links row |
| instagram | string | no | Automatically added as an Instagram chip in the links row |
| links[] | array | no | Custom link chips (up to 4 total including website and instagram) |
| links[].label | string | yes* | Chip label text |
| links[].url | string | yes* | Link URL (public HTTPS) |

{ "type": "social_profile", "content": "Here's the brand profile for Reformation.", "metadata": { "name": "Reformation", "handle": "@reformation", "bio": "Sustainable women's clothing made in LA. 1.3M followers.", "avatar_url": "https://cdn.example.com/reformation-logo.jpg", "website": "https://www.thereformation.com", "instagram": "https://www.instagram.com/reformation", "links": [ { "label": "TikTok", "url": "https://www.tiktok.com/@reformation" }, { "label": "Pinterest", "url": "https://pinterest.com/reformation" } ] } }

product_card

Renders a single product with a side-by-side image and product details layout. Ideal for surfacing one specific item (a single recommendation, a featured product, or a matched item from a lookup). For comparing multiple products, use comparison_table or a list with product items instead.

Correction from earlier docs: description and rating are not rendered — they were listed in error. Only the fields in the table below are displayed by the renderer.
| Field | Type | Required | Renders as |
|---|---|---|---|
| name | string | yes | Product name in medium-weight type |
| price | string | yes | Price string — include currency symbol, e.g. "$89.95" |
| image_url | string | no | Product photo on the left side of the layout (proxied) |
| brand | string | no | Brand name shown in small caps above the product name |
| retailer | string | no | Shown as at Retailer in a muted label beside the price |
| buy_url | string | no | "View Product" CTA button. Falls back to url if absent. |
| url | string | no | Fallback product link if buy_url absent |

{ "type": "product_card", "content": "Here's the exact piece you were looking for.", "metadata": { "name": "Linen Relaxed Trouser", "price": "$128", "brand": "Reformation", "retailer": "Reformation.com", "image_url": "https://cdn.example.com/trouser.jpg", "buy_url": "https://www.thereformation.com/products/linen-trouser" } }

generated_image

Renders a full-width AI-generated image with an optional quoted prompt caption below it. Use this when your agent generates an image via an image-generation API and wants to display it inline in the conversation. The image URL must be a public HTTPS URL — the platform does not accept base64 data URIs or unsigned CDN URLs.

| Field | Type | Required | Renders as |
|---|---|---|---|
| image_url | string | yes | Full-width image. Must be a public HTTPS URL — no data URIs. |
| prompt | string | no | Shown as a quoted caption in italic below the image, e.g. "a sun-drenched terrace overlooking the sea". Also used as the image alt attribute. |

{ "type": "generated_image", "content": "Here's the image I generated for you.", "metadata": { "image_url": "https://cdn.example.com/generated/terrace-abc123.jpg", "prompt": "a sun-drenched terrace overlooking the Amalfi Coast, golden hour, film photography style" } }

generated_video

generated_video shares the same renderer as video_player — there is no separate card component. Pass your generated video URL via video_url (pointing to a public mp4/webm file) and the platform renders a native HTML5 video player with controls. The poster field is especially useful here since there is no platform-generated thumbnail — set it to a frame from your generated video to avoid a blank player before play.

| Field | Type | Required | Renders as |
|---|---|---|---|
| video_url | string | yes | Direct URL to the generated mp4 or webm file (public HTTPS). Renders as a native <video> player with controls. |
| poster | string | no | Thumbnail image shown before the video plays. Highly recommended — the player shows a blank frame without it. |
| title | string | no | Video title shown below the player |
| description | string | no | Description paragraph below the title |
| tracks[] | array | no | Subtitle/caption tracks — see the video_player section above for the full tracks[] schema |

{ "type": "generated_video", "content": "Here's the video I generated from your prompt.", "metadata": { "video_url": "https://cdn.example.com/generated/terrace-walk-abc123.mp4", "poster": "https://cdn.example.com/generated/terrace-walk-thumb.jpg", "title": "Amalfi Terrace Walk", "description": "Generated with RunwayML Gen-3 from your image prompt." } }

For long generation jobs that exceed the 90-second Isolate timeout, generate the video asynchronously on your bridge and return the URL via the Async Delivery pattern when ready.

event_card

Renders an event details card showing the event name, date/time, venue, description, and an "RSVP / Tickets →" link. The eyebrow reads "Event · Motion". price and category appear as pill badges. Users can save the event to their collections directly from the card.

Correction from earlier docs: image_url is not rendered — it was listed in error. The card is text-only. For an event with a hero image, use a carousel with one slide followed by an event_card in a components[] envelope.
| Field | Type | Required | Renders as |
|---|---|---|---|
| title | string | yes | Event name — large type in card header |
| date | string | yes | Date string shown below the title, e.g. "Saturday, June 15" |
| time | string | no | Appended to date with a · separator, e.g. "7:00 PM" |
| venue | string | no | Venue name shown in the body. Falls back to address. |
| address | string | no | Street address shown if venue absent |
| description | string | no | Event description paragraph |
| price | string | no | Price pill, e.g. "$35–$75" or "Free" |
| category | string | no | Category pill, e.g. "Music", "Art", "Comedy" |
| rsvp_url | string | no | "RSVP / Tickets →" CTA link. Falls back to booking_url then website. |
| booking_url | string | no | Fallback RSVP link if rsvp_url absent |
| website | string | no | Fallback link if neither rsvp_url nor booking_url present |
| lat / lng | number | no | Not displayed — used for save-to-collection geo tagging only |

{ "type": "event_card", "content": "Found an event that matches your vibe.", "metadata": { "title": "Floating Points at Brooklyn Steel", "date": "Saturday, June 22", "time": "8:00 PM", "venue": "Brooklyn Steel", "address": "319 Frost St, Brooklyn, NY 11222", "description": "An immersive live set from one of electronic music's most boundary-pushing artists.", "price": "$35–$55", "category": "Electronic", "rsvp_url": "https://www.ticketmaster.com/floating-points-brooklyn", "lat": 40.7198, "lng": -73.9467 } }

report — no standalone card

The report type does not have its own card renderer. When returned, the content string is displayed as a plain text bubble — there is no structured card UI. The title, summary, and sections[] fields documented in earlier versions of the docs are not rendered by the platform frontend.

| Field | Type | Required | Renders as |
|---|---|---|---|
| content | string | yes | The only field that is rendered — displayed as a plain text bubble. Supports line breaks (\n). Use markdown-style bold (**text**) for emphasis if your content pipeline supports it. |
| download_url | string | no | If provided, a "Download Report" button is shown below the text bubble linking to the file (PDF, HTML, CSV, etc.) |
| title | string | — | NOT rendered — included here for documentation only. Was listed in older versions of the spec but has no effect. |
| summary | string | — | NOT rendered — see title note above. |
| sections[] | array | — | NOT rendered — use Lego components (list, table, stat_grid, timeline) for structured sections instead. |

For structured reports, compose Lego components instead:

  • Use stat_grid for key metrics at the top.
  • Use chart for trends over time.
  • Use table or list for the body of the report.
  • Use a timeline for chronological findings.
  • Use action_buttons at the end for CTAs.
  • Wrap them all in a components[] envelope to send multiple cards in a single response.

If your agent generates a full report document (PDF, HTML), return it via download_url (universal across all card types — see the File Delivery section) and use content for the summary text.
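The composition advice above can be sketched as a single response object. This is a hypothetical sketch: the exact envelope and per-component schemas live in the Lego component docs above, and the field names inside each component's metadata here are illustrative, not authoritative:

```javascript
// A structured "report" built from Lego components instead of type: "report".
// Assumed shape: a components[] array of { type, metadata } entries — verify
// against the Lego components section before shipping.
const report = {
  type: "components",
  content: "Q1 competitive analysis — highlights below.",
  metadata: {
    components: [
      // Key metrics up top
      { type: "stat_grid", metadata: { stats: [{ label: "Share of voice", value: "+12% QoQ" }] } },
      // Trends over time
      { type: "chart", metadata: { /* trend series */ } },
      // Body of the report
      { type: "table", metadata: { /* report rows */ } },
      // CTAs at the end
      { type: "action_buttons", metadata: { buttons: [{ label: "Download PDF", url: "https://cdn.example.com/reports/q1.pdf" }] } }
    ]
  }
};
```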

Example — how type: "report" actually renders (content shown as a text bubble; metadata fields are silently ignored):

{ "type": "report", "content": "Here's the competitive analysis for your brand.\n\nKey finding: Your share-of-voice grew 12% QoQ driven by Instagram. Paid search CPCs are up 18% — review bidding strategy.", "metadata": { "download_url": "https://cdn.example.com/reports/brand-analysis-q1-2025.pdf" } }

The metadata fields title, summary, and sections are not rendered — only content (as a text bubble) and download_url (as a download button) are used. Use Lego components for structured output instead.

Skill Manifest Format

The skill_manifest field is a JSON string stored on your agent record. LOM injects it verbatim into the Gemini JIT prompt so the generated run(query, rawData) script uses your exact field names, available functions, and data shape — making the generated code correct without any manual script authoring.

{ "available_functions": [ { "name": "searchPlaces", "args": ["query", "limit"], "description": "Searches for physical locations via Google Places V1 API. Returns up to `limit` results." }, { "name": "neuralSearch", "args": ["query"], "description": "Deep-dive Exa research for hidden gems and non-indexed venues." } ], "data_schema": { "name": "string — venue display name", "rating": "number — Google Places rating (0–5)", "address": "string — formatted street address", "location": { "lat": "number — WGS84 latitude", "lng": "number — WGS84 longitude" }, "hours": "string | null — opening hours text if available", "cuisine": "string | null — cuisine type for restaurant venues" } }
  • available_functions — list the callable skills your Worker exposes. Each entry needs name, args (array of argument names), and description. The JIT compiler uses this to know what data the Worker can fetch.
  • data_schema — describes the shape of each object in rawData. Use real field names from your data source. The JIT compiler reads this to generate correct p.location.lat style accessors rather than guessing field paths.
  • The manifest is stored as a JSON string in the database column. Paste the stringified JSON directly into the Skill Manifest field in the Developer Portal when creating your agent.
  • Agents without a manifest still work — the JIT compiler falls back to a generic prompt that assumes the standard MotionBlur schema.
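Because the portal field expects a JSON string rather than an object, it can help to build the manifest in code and stringify it. A minimal sketch (manifest contents abbreviated from the example above):

```javascript
// Build the manifest as a plain object, then stringify for the portal field.
const manifest = {
  available_functions: [
    {
      name: "searchPlaces",
      args: ["query", "limit"],
      description: "Searches for physical locations via Google Places V1 API."
    }
  ],
  data_schema: {
    name: "string — venue display name",
    location: { lat: "number — WGS84 latitude", lng: "number — WGS84 longitude" }
  }
};

// Paste this string into the Skill Manifest field in the Developer Portal.
const skillManifestField = JSON.stringify(manifest);
```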

OpenClaw + Isolate

The Isolate architecture is the next evolution of OpenClaw agents. If you already run an OpenClaw Gateway, you can migrate your agent to the Isolate path incrementally: deploy your agent code as a Cloudflare Dynamic Worker, set isolate_url on your LOM agent record, and LOM will automatically use the Isolate path while your Gateway remains available for other channels (Telegram, WhatsApp, etc.).

Async Delivery

The Isolate timeout is 60 seconds — enough for most agents. But some jobs take longer: generating four outfit images in parallel (the stylist-v1 agent takes ~45 s), processing a video, or running a multi-step research pipeline. For these, the Async Delivery pattern lets your agent return immediately with placeholder content while the frontend polls your own endpoint for results as they arrive.

The platform does not own the job state or poll endpoint — your agent bridge does. The platform's frontend polls the URL your agent specifies, replacing placeholders with real content as each item resolves.

How it works

  1. Your agent starts the long-running job on its bridge (image gen, video edit, etc.) and immediately returns an AOP response where items carry image_status: "pending".
  2. LOM renders the card immediately with shimmer placeholders for each pending item.
  3. The frontend polls each item's job_poll_url directly every 5 seconds — no parameters are appended. The job identity must be encoded in the URL itself (path segment or query string you set when returning the initial AOP response).
  4. As items complete on your bridge, your poll endpoint returns updated data. LOM swaps each placeholder for the real content in-place — no page reload.
  5. Polling stops when all items are resolved, or after 3 minutes (graceful degradation — placeholders remain with a retry hint).

Initial AOP response — pending items

Return this immediately from your Isolate Worker. Each item that isn't ready yet carries image_status: "pending", a job_id, and a unique job_poll_url that points to just that item's status. The platform renders a shimmer placeholder for each pending item.

Important: each item's job_poll_url must be unique — the platform calls it directly with no added parameters. Encode all the context needed to return that one item's status into the URL itself (e.g. as a path segment or query string you set).

{ "type": "lookbook", "content": "Here's your lookbook — images are generating now.", "metadata": { "agent_id": "stylist-v1", "title": "Spring Edit", "outfits": [ { "name": "Linen Weekend", "image_url": null, "image_status": "pending", "job_id": "job_abc123", "job_poll_url": "https://gateway.lifeofmine.ai/stylist/status/job_abc123/0" }, { "name": "Evening Minimal", "image_url": null, "image_status": "pending", "job_id": "job_abc123", "job_poll_url": "https://gateway.lifeofmine.ai/stylist/status/job_abc123/1" } ] } }
  • image_status — set to "pending" on items not yet ready. The platform renders a shimmer placeholder. Omit or set to anything other than "pending" for items that are already resolved at response time.
  • job_id — an opaque identifier your bridge uses to look up the job internally.
  • job_poll_url — a unique per-item HTTPS URL on your bridge. The platform fetches this URL as-is (no query params appended). Must be CORS-accessible from the browser (Access-Control-Allow-Origin: *).

Poll endpoint — what your bridge must return

The platform calls each item's job_poll_url directly via GET. Your endpoint returns the status of that single item. When status is "complete" or "completed", the platform swaps the shimmer for the real image using image_url. When status is "failed" or "error", the shimmer is removed gracefully.

// Still processing:
GET https://gateway.lifeofmine.ai/stylist/status/job_abc123/0

{ "status": "pending" }

// Done:
GET https://gateway.lifeofmine.ai/stylist/status/job_abc123/0

{ "status": "complete", "image_url": "https://gateway.lifeofmine.ai/images/job_abc123_0.jpg" }
  • status — "complete" or "completed" triggers the swap. "failed" or "error" removes the shimmer. Any other value keeps polling.
  • image_url — the final image URL. Also accepted: generated_image_url.

The poll endpoint is owned entirely by your bridge — the platform never proxies it. Design it to return quickly (it is called every 5 s). A simple in-memory map or Redis hash keyed by job + item index is sufficient state storage.
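A minimal bridge-side sketch of such a poll endpoint, Worker-style to match the examples above. The in-memory Map and function names here are illustrative assumptions — use Redis or Workers KV for real state:

```javascript
// In-memory job state: "jobId/itemIndex" -> { status, image_url }.
// Your generation job writes into this as each item finishes.
const jobs = new Map();

// Resolve one item's status; unknown items report "pending" so polling continues.
function pollBody(jobId, index) {
  return jobs.get(`${jobId}/${index}`) ?? { status: "pending" };
}

// Handles GET /stylist/status/{jobId}/{index}. The full job identity lives in
// the path because LOM fetches job_poll_url as-is, with no parameters appended.
function handlePoll(request) {
  const [jobId, index] = new URL(request.url).pathname.split("/").slice(-2);
  return new Response(JSON.stringify(pollBody(jobId, index)), {
    headers: {
      "Content-Type": "application/json",
      "Access-Control-Allow-Origin": "*" // required: the browser polls this URL directly
    }
  });
}
```

Wire handlePoll into your Worker's fetch handler (export default { fetch: handlePoll }); the lookup is O(1), so the endpoint stays fast under 5-second polling.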

Which agents benefit from this pattern?

| Agent type | Typical job duration | Recommended approach |
|---|---|---|
| Text / search / map | < 5 s | Synchronous — return directly from Worker |
| Multi-image lookbook (4 images, personalised) | 45–60 s | Synchronous within 90 s budget — LOM auto-attaches user_selfies + reference_images to the Worker body |
| Video edit (short clip) | 1–3 min | Async — see Video Agent Walkthrough |
| Deep research / report | 30–90 s | Async if > 60 s; sync otherwise |
| Audio generation | 15–45 s | Async — return pending, poll bridge |

File Receiving

Users can attach files directly in the LOM chat interface — images, PDFs, documents, and more. When a file is present, LOM forwards it to your agent in every connection type (Isolate Worker and webhook) via an attachments array. Your agent receives the file as a URL it can fetch, parse, and act on — enabling fully agentic workflows like resume analysis, document summarisation, image processing, and data extraction.

Supported file types

Category | Accepted MIME types
---------|--------------------
Images | image/jpeg, image/png, image/webp, image/gif
Video | video/mp4, video/quicktime, video/webm
Documents | application/pdf, text/plain, application/msword, application/vnd.openxmlformats-officedocument.wordprocessingml.document

The attachments field

LOM injects attachments into both the webhook POST body and the Isolate Worker request body. Each item in the array is an object with these fields:

Field | Type | Description
------|------|------------
url | string | Fully-qualified HTTPS URL to the uploaded file. Fetch this to read the bytes.
filename | string | Original filename as uploaded by the user (e.g. resume.pdf).
mime_type | string | MIME type detected at upload time (e.g. application/pdf).
size_bytes | integer | File size in bytes.

Additionally, LOM appends a plain-language note to the query string q when a file is present — e.g. [Attached files: resume.pdf (application/pdf)]. This means the LLM inside your agent already knows about the file from q alone; reading attachments and fetching the URL is only needed when you want to process the file's actual contents.
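If your agent builds its own prompt from q, you may want to strip that appended note before passing the text to a model. A hedged sketch — the regex is an assumption inferred from the example format above, not a documented contract:

```python
import re

# Assumed shape of the note LOM appends to q, e.g.
# "[Attached files: resume.pdf (application/pdf)]" — verify against real payloads.
ATTACH_NOTE = re.compile(r"\s*\[Attached files?:[^\]]*\]\s*$")

def strip_attachment_note(q: str) -> str:
    """Remove the trailing attachment note from the query text, if present."""
    return ATTACH_NOTE.sub("", q)
```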

Isolate Worker — reading a file

Inside your Cloudflare Worker, read body.attachments and fetch the URL to retrieve the file bytes. The example below shows a resume-analysis agent:

export default {
  async fetch(req) {
    const body = await req.json();
    const { q, attachments = [] } = body;

    // Check whether a file was attached
    const file = attachments[0];
    if (!file) {
      return Response.json({
        type: "text",
        content: "Please attach your resume and I'll analyse it.",
        metadata: {}
      });
    }

    // Fetch the file bytes from the LOM-hosted URL
    const fileResp = await fetch(file.url);
    const fileBytes = await fileResp.arrayBuffer();

    // Pass to your parsing / AI model
    const analysis = await analyseResume(fileBytes, file.mime_type, q);

    return Response.json({
      type: "report",
      content: analysis.summary,
      metadata: {
        agent_id: "offermaxxer",
        title: "Resume Analysis",
        summary: analysis.summary,
        sections: analysis.sections,
        download_url: analysis.pdf_url // optional — deliver a formatted PDF back
      }
    });
  }
};

Webhook agent — reading a file

For webhook agents, attachments is a top-level field in the inbound POST body — same shape as above. Read it the same way in any language:

import requests

def handle_task(payload):
    session_id = payload["session_id"]
    session_token = payload["session_token"]
    message = payload["message"]
    attachments = payload.get("attachments", [])

    if attachments:
        file = attachments[0]
        # Fetch the file from LOM storage
        resp = requests.get(file["url"])
        data = resp.content
        # Process with your model / parser
        result = your_model.process(data, mime_type=file["mime_type"], prompt=message)
        reply = result.summary
    else:
        reply = "No file attached — please share one and I'll get to work."

    # Send reply back to the user
    requests.post(
        "https://lifeofmine.ai/chat/callback",
        json={
            "session_id": session_id,
            "session_token": session_token,
            "content": reply,
            "type": "text",
            "final": True,
            "tokens_used": result.tokens if attachments else 0,
        },
        headers={"X-Channel-Secret": YOUR_API_KEY},
    )

Declaring file support in your workflow manifest

If your agent is built around file uploads, set has_file_input: true in your workflow manifest. LOM's intent classifier uses this to route file-bearing messages to your agent rather than treating the attachment as a generic input.

{
  "name": "analyse_resume",
  "description": "Analyses an uploaded resume and returns career recommendations.",
  "example_phrases": ["analyse my resume", "review my CV", "what jobs suit me?"],
  "has_file_input": true
}

File Delivery

Any AOP card type can include a download_url field in its metadata. When present, the platform renders a download button alongside the card content — letting the user save the file your agent produced. This is the mechanism that lets agents deliver tangible artifacts to users: edited videos, generated PDFs, audio tracks, spreadsheets, or any other file your bridge can host.

download_url — the universal delivery field

download_url is a top-level field in your AOP metadata object. Set it to any publicly accessible HTTPS URL pointing to the file you want to deliver. The platform renders a Download button in the card footer. The browser handles the download natively — no platform-side storage or proxying required.

{
  "type": "video_player",
  "content": "Your edited video is ready.",
  "metadata": {
    "agent_id": "video-editor",
    "title": "Beach Trip Edit — Final Cut",
    "video_url": "https://bridge.my-agent.com/output/job_xyz/preview.mp4",
    "download_url": "https://bridge.my-agent.com/output/job_xyz/final.mp4",
    "download_label": "Download Final Cut",      // optional — button label; defaults to "Download"
    "download_filename": "beach-trip-final.mp4", // optional — suggested filename for the browser
    "poster": "https://bridge.my-agent.com/output/job_xyz/thumb.jpg"
  }
}

In this example, video_url streams a compressed preview directly in the video player, while download_url links to the full-resolution output file. The user can watch the preview and then tap Download to save the final cut. download_label overrides the default button text; download_filename suggests a filename to the browser when the user saves the file.

Works with any card type

download_url is not exclusive to video_player. Any agent that produces a file can include it in any card type's metadata.

Agent | AOP card type | download_url points to
------|---------------|------------------------
Video editor | video_player | Final cut MP4 / MOV on your bridge
PDF generator | report | Generated PDF hosted on your bridge
Audio producer | audio_player | Mastered WAV / MP3 on your bridge
Spreadsheet agent | table | XLSX / CSV export on your bridge
Image generator | generated_image | Full-resolution PNG / JPG on your bridge
Research agent | report | PDF report compiled on your bridge
{
  "type": "report",
  "content": "Your Q1 performance report is ready.",
  "metadata": {
    "agent_id": "research-agent",
    "title": "Q1 Performance Report",
    "summary": "Revenue up 18% QoQ. Full breakdown inside.",
    "download_url": "https://bridge.my-agent.com/reports/q1-2026.pdf"
  }
}

Video Editing Agent — End-to-End Walkthrough

This walkthrough shows how to build an agent that accepts a raw video URL, processes it on a VPS bridge (cut, colour-grade, add music), and delivers the result directly in chat with a download button — all using the Isolate + Async + File Delivery patterns together. Processing a short clip typically takes 1–3 minutes, well beyond the 60-second synchronous limit, so the Async Delivery pattern is required.

1

User sends a video URL in chat

The user pastes a raw video URL and describes what they want:

User: "Edit this clip — cut the first 10 seconds, add lo-fi music, warm grade. https://storage.example.com/raw/beach-trip.mp4"

LOM routes the message to your Isolate Worker via POST {isolate_url} with q and code in the body, authenticated with your X-LOM-Key header.

2

Worker starts the job and returns immediately

Your Cloudflare Worker parses the query, extracts the video URL, POSTs a job to your VPS bridge (which accepts it and queues it), then immediately returns an AOP response with image_status: "pending":

// In your Cloudflare Worker:
export default {
  async fetch(req) {
    const { q } = await req.json();

    // 1. Parse the video URL and edit instructions from q
    const videoUrl = extractVideoUrl(q);
    const editParams = parseEditInstructions(q);

    // 2. POST the job to your VPS bridge — returns a job_id immediately
    const jobResp = await fetch("https://bridge.my-agent.com/edit", {
      method: "POST",
      body: JSON.stringify({ video_url: videoUrl, params: editParams })
    });
    const { job_id } = await jobResp.json();

    // 3. Return the AOP response — the video isn't ready yet
    // job_poll_url must be the full URL including the job_id —
    // the platform polls it as-is, no parameters are appended.
    return Response.json({
      type: "video_player",
      content: "Editing your video now — this takes 1–3 minutes. I'll update this card when it's done.",
      metadata: {
        agent_id: "video-editor",
        title: "Beach Trip Edit",
        video_url: null,
        image_status: "pending",
        job_id: job_id,
        job_poll_url: `https://bridge.my-agent.com/jobs/${job_id}/status`
      }
    });
  }
};
3

LOM renders a pending card — frontend polls your bridge

LOM renders the video card immediately with a shimmer placeholder where the video will appear. The frontend polls the job_poll_url value directly every 5 seconds — no parameters are appended.

Your bridge poll endpoint returns the current job state. While still processing:

// Polled directly:
GET https://bridge.my-agent.com/jobs/job_xyz/status

{ "status": "pending", "video_url": null, "download_url": null }

When the edit finishes, your bridge updates its job record and the next poll returns the completed state. Use "status": "complete" (or "completed") — the platform recognises both:

// Polled directly:
GET https://bridge.my-agent.com/jobs/job_xyz/status

{
  "status": "complete",
  "video_url": "https://bridge.my-agent.com/output/job_xyz/preview.mp4",
  "download_url": "https://bridge.my-agent.com/output/job_xyz/final_hq.mp4"
}
4

Platform swaps in the video — user sees the result

On the next successful poll, LOM replaces the shimmer placeholder with the native video player (using video_url for inline streaming) and renders a Download button (from download_url) in the card footer. The user can watch the preview in chat and tap Download to save the full-resolution file.

Polling stops after 3 minutes. If the job is still pending, the placeholder remains with a graceful "still processing" hint. Build your bridge to complete or fail jobs within 3 minutes.

What your VPS bridge needs

  • Job queue endpoint — POST /edit — accepts video_url and params; starts async processing; returns job_id immediately.
  • Poll endpoint — GET /jobs/{job_id}/status — returns {"status": "pending"|"complete"|"failed", "video_url", "download_url"}. Must respond quickly — called every 5 s by the browser. Enable CORS.
  • File hosting — serve processed video files over HTTPS. Cloudflare R2, S3, or a simple NGINX directory all work. The platform links directly — it never stores or proxies your files.

Platform Architecture / Trusted Workers

Redis is an internal transport. External agents interact only through event-driven HTTP. This section documents how LOM's own first-party infrastructure works internally. Nothing here is available to external developers — it is provided for transparency and for LOM engineers building trusted workers.

Trust Boundary

The platform enforces a hard boundary between trusted and untrusted callers:

Caller type | Transport | Notes
------------|-----------|------
Trusted workers — LOM infra, first-party code | Redis Streams (internal only) | ACL-isolated per worker; never exposed beyond the service boundary
Untrusted agents — third-party, external, user-owned | Event-driven HTTP only (webhooks, A2A) | No Redis access; authenticated via API key

Redis is never exposed beyond the service boundary. External developers cannot and should not attempt to connect to the Redis cluster directly — there is no credential provisioning path for external agents, by design.

Redis Streams — Internal Infrastructure

LOM's internal message bus uses Redis Streams with consumer groups to fan messages from the chat layer to first-party agent workers. Each worker is assigned a scoped ACL that restricts it to exactly its own stream key.

Consumer group pattern (internal workers only):

# ACL pattern for a trusted worker — deny-by-default, key-scoped
# Applied by LOM infrastructure; external developers do not receive Redis credentials.
ACL SETUSER worker-stylist on >password ~stream:agent:stylist \
    -@all +xreadgroup +xack +xautoclaim +xgroup +xread

# Key commands used by internal workers:
#   XGROUP CREATE stream:agent:{type} workers $ MKSTREAM
#   XREADGROUP GROUP workers {consumer} COUNT 1 BLOCK 5000 STREAMS stream:agent:{type} >
#   XACK stream:agent:{type} workers {entry-id}
#   XAUTOCLAIM stream:agent:{type} workers {consumer} {min-idle-ms} 0-0

ACL isolation principles:

  • -@all — deny all commands by default
  • Only the five stream commands needed are explicitly allowed: XREADGROUP, XACK, XAUTOCLAIM, XGROUP, XREAD
  • Key pattern ~stream:agent:{worker_type} restricts access to exactly one stream per worker — cross-worker reads are impossible
  • Consumer group name workers is shared within a worker type, enabling horizontal scaling; Redis consumer groups provide at-least-once delivery — unacknowledged messages re-appear via XAUTOCLAIM

Worker lifecycle (internal reference):

# | Step
--|-----
1 | Connect to Redis and join the consumer group (XGROUP CREATE … MKSTREAM)
2 | Claim messages: XREADGROUP GROUP workers {consumer} COUNT 1 BLOCK 5000 STREAMS stream:agent:{type} >
3 | Read message_id from the stream entry ID
4 | Load fresh session context from Postgres (stateless by design — no in-memory state between turns)
5 | Call the LLM / process the request
6 | POST response to /chat/callback with message_id for idempotency
7 | Acknowledge: XACK stream:agent:{type} workers {entry-id}

Ordering note: Consumer groups process in parallel — a later message may complete before an earlier one. The stateless Postgres-load design mitigates this because each turn reloads fresh context. If strict per-session ordering is required in future, per-session stream partitioning is the fix.
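The lifecycle above can be sketched as a consumer loop. This is an illustrative sketch only, assuming a redis-py-style client is passed in (`client.xreadgroup`, `client.xack`); `process_with_llm` and `post_callback` are hypothetical stand-ins for steps 5–6, and — per the trust boundary — nothing here is reachable by external agents.

```python
def handle_entry(entry_id, fields):
    """Steps 3–6: the stream entry ID doubles as message_id for idempotent callbacks."""
    reply = process_with_llm(fields)
    return {
        "session_id": fields["session_id"],
        "content": reply,
        "type": "text",
        "final": True,
        "message_id": entry_id,  # idempotency key for /chat/callback
    }

def process_with_llm(fields):
    # Stand-in for steps 4–5 — load Postgres context, call the model.
    return f"echo: {fields['message']}"

def post_callback(body):
    # Stand-in for step 6 — a real worker POSTs body to /chat/callback.
    pass

def run_worker(client, stream, consumer):
    # Step 2: claim one message at a time, blocking up to 5 s per read.
    while True:
        resp = client.xreadgroup("workers", consumer, {stream: ">"},
                                 count=1, block=5000)
        for _stream, entries in resp or []:
            for entry_id, fields in entries:
                body = handle_entry(entry_id, fields)
                post_callback(body)                       # step 6
                client.xack(stream, "workers", entry_id)  # step 7
```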

Per-agent Redis credential provisioning is not built yet and is not on the external developer roadmap. If you have a use-case that requires tighter integration, contact the LOM team.

⚠️ Common Pitfalls

  • Always echo session_id — the callback must include the exact session_id from the incoming payload. Omitting it or generating a new one causes the reply to fail silently.
  • Webhook: respond HTTP 200 within 15 s — LOM waits up to 15 seconds for acknowledgement before marking the delivery as failed and retrying. Do not block on LLM processing in the request handler; acknowledge first, then call back asynchronously.
  • Webhook: verify your Bearer token — always check the Authorization header matches your configured webhook_token before processing any payload.
  • Set a real User-Agent header — Cloudflare may block default library user agents (Python urllib, curl/x.x). Use a descriptive string like MyAgent/1.0.
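The pitfalls above combine into one ack-first handler shape. A minimal framework-agnostic sketch — `WEBHOOK_TOKEN`, the integer status returns, and the `send_callback` stub are illustrative, not LOM API surface:

```python
import threading

WEBHOOK_TOKEN = "your-webhook-token"  # the value configured in your agent settings

def verify_auth(headers: dict) -> bool:
    """Check the Bearer token before touching the payload."""
    return headers.get("Authorization") == f"Bearer {WEBHOOK_TOKEN}"

def handle_webhook(headers: dict, payload: dict) -> int:
    """Return 200 well inside the 15 s window; do the heavy work off-thread."""
    if not verify_auth(headers):
        return 403
    # Ack first — LLM processing happens asynchronously, never in this handler.
    threading.Thread(target=process_and_callback, args=(payload,),
                     daemon=True).start()
    return 200

def process_and_callback(payload):
    reply = "..."  # call your LLM here
    send_callback({
        "session_id": payload["session_id"],  # echo the incoming session_id exactly
        "content": reply,
        "type": "text",
        "final": True,
    })

def send_callback(body):
    # Stand-in: POST body to https://lifeofmine.ai/chat/callback with the
    # X-Channel-Secret header and a descriptive User-Agent (e.g. MyAgent/1.0).
    pass
```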

Machine-Readable Spec

openapi: "3.0.3"
info:
  title: LifeOfMine Agent API
  version: "1.0"
servers:
  - url: https://lifeofmine.ai

paths:
  /chat/callback:
    post:
      summary: Send a reply back to the user
      operationId: sendCallback
      security:
        - channelSecret: []
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required:
                - session_id
                - content
              properties:
                session_id:
                  type: string
                  description: Session ID received in the poll payload.
                content:
                  type: string
                  description: The reply text (or structured content) to deliver to the user.
                type:
                  type: string
                  default: text
                  description: Message type — "text" for plain replies.
                final:
                  type: boolean
                  description: If false, treated as a live status/streaming update rather than a final reply.
                tokens_used:
                  type: integer
                  description: Optional token count for the response (used for usage tracking).
      responses:
        "200":
          description: Reply accepted.
          content:
            application/json:
              schema:
                type: object
                properties:
                  ok:
                    type: boolean
        "403":
          description: Invalid or missing X-Channel-Secret header.

components:
  securitySchemes:
    channelSecret:
      type: apiKey
      in: header
      name: X-Channel-Secret