MetaGrid Pro and Claude API

For a while now we’ve been turning over an idea: what if AI could help users build buttons in MetaGrid Pro — fast, smart, and with zero friction?

We think we cracked it. :tada:

By combining the right architecture with ready-made datasets — like dynamically fetched keyboard shortcut databases for each app — MetaGrid Pro and the Claude API can now generate beautifully structured, genuinely useful button sets at incredible speed.

There’s still some polish left before it lands in your hands, but it’s working — and we are seriously excited about it. :blush:

:backhand_index_pointing_down: Watch the video to see MetaGrid Pro spin up a full Figma button set in seconds.


How AI Works in MetaGrid Pro — Under the Hood

We’ve been building a new AI-powered button configuration feature for MetaGrid Pro and wanted to share the thinking behind the key technical and design decisions. This is aimed at power users who are curious about how it actually works, not just what it does.


The core idea

The feature — currently called AI Button Wizard — lets you select buttons on your grid, describe what you want them to do in plain language, and have them configured automatically. The goal was to make this genuinely reliable, not just impressive in a demo. That drove almost every decision below.

One important thing to understand upfront: the buttons are created and applied entirely on your device. Claude’s role is purely decision-making — which actions to assign, which SF Symbols to use as icons, which colours and theme to apply. The actual button construction happens locally in MetaGrid Pro using the same engine that powers manual configuration. This is why applying feels instant once Claude has made its choices — there’s no server-side button building, just a fast local apply of Claude’s instructions. The AI call returns a set of decisions; MetaGrid Pro executes them.
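To make the split concrete, here is a minimal sketch of the "decisions, not buttons" idea. The payload shape, field names, and apply function are all illustrative (MetaGrid Pro's actual schema isn't public), but the division of labour is the same: the model returns structured choices, and the local engine turns them into buttons.

```python
# Hypothetical shape of the decision payload the model returns.
# Field names are illustrative, not MetaGrid Pro's real schema.
import json

claude_response = json.dumps({
    "buttons": [
        {
            "slot": 0,
            "action": {"type": "keyboard", "keys": ["cmd", "D"], "label": "Duplicate"},
            "icon": "plus.square.on.square",   # an SF Symbol name
            "color": "#4A90D9",
        }
    ],
    "theme": "dark",
})

def apply_decisions(payload: str) -> list[str]:
    """Local apply step: no server round trip, just execute the decisions."""
    decisions = json.loads(payload)
    applied = []
    for b in decisions["buttons"]:
        # In the real app this would call the same engine as manual configuration.
        applied.append(f"slot {b['slot']}: {b['action']['label']} ({b['icon']})")
    return applied
```

The expensive part (the API call) happens once; the apply is a cheap local loop, which is why it feels instant.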


Why we don’t just let Claude guess

The first instinct with any AI feature is to throw a prompt at a model and trust it to figure things out. We tried this. The results were plausible-sounding but often wrong — keyboard shortcuts that don’t exist, wrong modifier keys, commands that aren’t available in the app you’re targeting.

MetaGrid Pro is a precision tool. If a shortcut is wrong, your button doesn’t work. That’s worse than no button at all.

So the system is built around a principle: Claude only suggests from real data. It doesn’t guess. Here’s how that works in practice.


The data source cascade

When you describe what you want, the system resolves each action by working through a priority chain:

1. Keyboard shortcut catalog. The first check is always the shortcut catalog for your target app. If it’s cached, the relevant shortcuts are filtered and passed to Claude silently — you never see this happen. If the catalog hasn’t been fetched yet, Claude asks you to load it with a single tap before proceeding.

2. DAW commands (for supported DAWs). If the app is a DAW — Logic Pro, Cubase, Ableton, Pro Tools, Reaper, and others — and no keyboard shortcut is found, the system checks the DAW command list. Same logic: cached data is injected silently, uncached data triggers a one-tap load prompt.

3. Menu commands. If neither a shortcut nor a DAW command is found, Claude asks whether you’d like to use a menu command instead. One tap loads the live menu structure from the running app via MetaServer. Claude then assigns the exact menu path.

4. Claude’s own knowledge — last resort. If all three sources are exhausted or unavailable, Claude falls back to its own training knowledge. When this happens, the suggestion is clearly marked as unverified in the UI — amber badge, “AI suggested — may not be accurate.” You can still use it, but you know to double-check.

This cascade means that for well-supported apps with catalog data, you get verified results every time. For obscure apps or edge cases, you get honest uncertainty rather than silent errors.
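For the curious, the cascade boils down to a simple priority lookup. This sketch uses made-up names and plain dicts in place of the real cached catalogs; only the ordering and the unverified-fallback flag reflect the behaviour described above.

```python
# Illustrative sketch of the four-tier source cascade. The real lookups and
# caching live inside MetaGrid Pro; all names here are made up.
from dataclasses import dataclass

@dataclass
class Resolution:
    source: str      # which tier resolved the action
    verified: bool   # False only for the model-knowledge fallback

def resolve_action(action: str, shortcut_catalog: dict,
                   daw_commands: dict, menu_commands: dict) -> Resolution:
    if action in shortcut_catalog:
        return Resolution("shortcut_catalog", True)
    if action in daw_commands:
        return Resolution("daw_commands", True)
    if action in menu_commands:
        return Resolution("menu_commands", True)
    # Last resort: the model's own training knowledge, flagged unverified
    # so the UI can show the amber "AI suggested" badge.
    return Resolution("model_knowledge", False)
```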


Verification and auto-correction

When catalog data is available, every suggestion is verified against it:

  • Key combo found in catalog → green dot, confirmed accurate

  • Action name found but with a different key combo → Claude’s combo is silently replaced with the catalog’s version, green dot. This happens automatically and transparently — the catalog is always the source of truth.

  • Neither name nor combo found in catalog → amber dot, “Not in catalog”

  • No catalog available → amber dot, “No catalog”

The silent auto-correction is deliberate. Claude knows action names reliably but key combos vary by app version, locale, and custom remapping. The catalog wins, always.


User-owned data: KM macros and Apple Shortcuts

Keyboard Maestro macros and Apple Shortcuts are a different category entirely. Claude has no idea what’s in your personal library — it can’t suggest macro names it’s never seen.

For these, the flow is explicit: if you ask for KM or Shortcuts buttons, Claude prompts you to load your full library. Once loaded, the list is passed to Claude as context and it picks the most relevant items based on your description. Because Claude is selecting from your actual data, the results are always correct — no verification step needed.
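As a sketch of why this is reliable: the loaded library is handed to the model as explicit context, and anything the model picks can be checked against that same list. The function names below are hypothetical.

```python
# Hypothetical sketch: pass the user's actual macro library as model context,
# then check any picks against that same list.
def build_library_context(macros: list[str]) -> str:
    listing = "\n".join(f"- {name}" for name in macros)
    return ("Choose only from these Keyboard Maestro macros; "
            "never invent names:\n" + listing)

def validate_picks(picks: list[str], macros: list[str]) -> bool:
    return all(p in macros for p in picks)
```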



How the API works — your key, your usage

MetaGrid Pro connects directly to the Anthropic API using your own API key. There’s no middleware, no MetaGrid Pro server in the middle, no subscription tier on our end for AI features. When you configure a button using the wizard, the request goes from your device straight to Anthropic’s API.

This means a few things practically:

  • You control costs — usage is billed directly by Anthropic to your account at their standard API rates. For typical use (a few wizard sessions per day), the cost is negligible — a full 30-button generation with Sonnet costs a few cents at most.

  • Your prompts stay between you and Anthropic — we don’t log, store, or process your prompts on our servers.

  • You need an Anthropic account — get your API key at platform.claude.com. The key is stored securely in your device’s keychain via our existing MGAnthropicKeychain infrastructure and never leaves the device except to authenticate with Anthropic directly.

  • Auto-reload is recommended — if your Anthropic credit balance runs out mid-session, API calls will fail. Set up auto-reload in your Anthropic billing settings to avoid interruptions during active use.
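For those wondering what "no middleware" means in practice: the request is a standard Anthropic Messages API payload, built and sent on-device. The sketch below builds such a payload; the prompt and default model are placeholders, and the commented lines show the equivalent call with Anthropic's official Python SDK.

```python
# A standard Anthropic Messages API payload, built on-device. The prompt and
# model here are placeholders, not MetaGrid Pro's actual prompts.
def build_request(prompt: str, model: str = "claude-haiku-4-5") -> dict:
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }

# With Anthropic's official Python SDK the call itself is one line:
#   import anthropic
#   client = anthropic.Anthropic(api_key=key_from_keychain)
#   reply = client.messages.create(**build_request("Configure my Figma grid"))

req = build_request("Configure my Figma grid")
```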


Model choices

We use two different Claude models depending on the task:

Conversation (Haiku) — the back-and-forth chat on the entry screen uses claude-haiku-4-5. It’s fast and cheap, and the conversational task doesn’t require deep reasoning — just understanding what app and workflow you’re targeting.

Generation (Sonnet or Haiku) — the final button configuration call uses claude-sonnet-4-6 via a batched request that handles all selected buttons in a single API round trip. Claude returns a structured set of decisions — action assignments, SF Symbol choices, colours, layout — and MetaGrid Pro applies them locally in milliseconds. We’re currently testing whether Haiku produces acceptable quality for this step too — if it does, the API call itself will be noticeably faster.

The batch approach is important for performance. Rather than one API call per button (which would take seconds per button for large selections), all buttons are configured in a single round trip. At 16,384 tokens by default — scaling to 49,152 for large selections — this comfortably handles 30+ buttons in one call.
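Roughly, the budget selection looks like this. The two budget sizes are the real numbers mentioned above; the exact threshold at which the larger budget kicks in is illustrative, not the shipping value.

```python
# The two output-token budgets described above; the cutoff for "large
# selections" is an assumed value for illustration.
DEFAULT_MAX_TOKENS = 16_384
LARGE_MAX_TOKENS = 49_152
LARGE_SELECTION_THRESHOLD = 24   # assumed, not a documented value

def max_tokens_for(button_count: int) -> int:
    if button_count > LARGE_SELECTION_THRESHOLD:
        return LARGE_MAX_TOKENS
    return DEFAULT_MAX_TOKENS
```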


What happens if generation fails

The entire generate step is wrapped in a single undo group. If the API call fails, times out (60-second limit), or returns a malformed response, any buttons that were partially configured are automatically rolled back. The grid is always left in a clean state — either everything was applied or nothing was. There’s no Cancel button during generation for this reason.
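The rollback behaviour amounts to a snapshot-and-restore around the batch apply. A minimal sketch with illustrative names:

```python
# Snapshot-and-restore sketch of the all-or-nothing apply.
import copy

def apply_all(grid: dict, decisions: list, apply_one) -> dict:
    snapshot = copy.deepcopy(grid)
    try:
        for d in decisions:
            apply_one(grid, d)
    except Exception:
        # Roll back: the grid ends up exactly as it was before generation.
        grid.clear()
        grid.update(snapshot)
    return grid

def apply_one(grid, decision):
    if decision == "malformed":
        raise ValueError("bad decision payload")
    grid[decision] = "configured"
```

Either every decision lands or none do, which is also why there is no Cancel button mid-generation.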


Dictation

Because describing complex workflows by typing on an iPad is awkward, we’ve added dictation support to the prompt field. It uses Apple’s native SFSpeechRecognizer stack — no third-party service, and no data leaves your device beyond what Apple’s speech recognition normally sends. Just tap the mic, describe your workflow, tap again to stop, then send.


Manual configuration path

The AI chat is the fast path, but it’s not the only one. For users who prefer full control, there’s a manual configuration path that skips the conversation entirely.

You select your buttons, tap “Configure manually,” and work through a step-by-step flow:

  • Intent — choose between App commands (DAW, menu, keyboard, Keyboard Maestro, Apple Shortcuts) or MIDI

  • App — select your target app and load the command sources you want to use

  • Actions — browse and pick commands from each available source (DAW commands, menu commands, shortcut catalog, KM macros, Apple Shortcuts) with a global search across all sources. The selection counter enforces the constraint — you need exactly as many actions as buttons selected before you can proceed

  • MIDI (if you chose MIDI) — configure CC messages, notes, program changes, or increment/decrement with full parameter control
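
The counter from the Actions step enforces a simple invariant — exactly one action per selected button. A minimal sketch:

```python
# The Actions step's counter invariant: one picked action per selected button.
def can_proceed(selected_buttons: int, picked_actions: int) -> bool:
    return picked_actions == selected_buttons

def counter_label(picked: int, needed: int) -> str:
    return f"{picked}/{needed} selected"
```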

Once you’ve selected your actions manually, the wizard hands off to the same AI-powered Appearance step — Claude chooses appropriate SF Symbols for each button based on the action it’s been assigned, applies your chosen theme and layout, and configures everything in one batch call. The difference is that you’ve done the command selection yourself rather than describing it in natural language.

This path is particularly useful for precise MIDI assignments, or when you want to hand-pick specific menu commands or KM macros that AI might not prioritise the same way you would.

Both paths — AI chat and manual — converge at the same Appearance step and use the same generation infrastructure underneath.


What’s still ahead

The feature is functional but not finished. The Appearance step (theme, accent colour, icon style, layout) is the last piece of the wizard flow. We’re also iterating on the system prompt — getting Claude to behave consistently across all the edge cases (obscure apps, ambiguous descriptions, requests it can’t map to any supported action type) takes more tuning than you’d expect.

Happy to answer questions about any of the above.


This has such great potential. It would be really good to have an illustrated example to study - or a video when you have the time. Will this be dual platform? You are obviously working this up on Mac & iPad.

Today we were fine-tuning the AI integration with Cubase and the results are fantastic. Just look how easy it is to create buttons for adding tracks. It is all there - macros, icons - in several seconds.

Super excited about this.

Yes, we will create a rough walkthrough video next week. And it will be dual platform.

Great feature! Are we going to need a paid Claude subscription to use it?

That’s not a Claude subscription; it is pay-per-use billing for the Claude API. We optimized the engine to use the smaller, cheaper models, and the token costs are really acceptable:

  • configuring 12 buttons with Haiku 4.5 costs about $0.02

And we want to keep it that way.