Open-source prompt infrastructure

Centralize AI behavior without leaving your codebase.

Turn hardcoded prompt strings, system instructions, tools, and model settings into reusable, environment-aware assets — rendered for OpenAI, Anthropic, Gemini, and OpenRouter, and shipped with your app.

npm install promptopskit

Your prompts are already in Git. The problem is how they live there.

They sit in source as scattered strings and provider-specific glue. Model parameters live in separate config objects. Tool definitions drift independently. Environment differences are handled with ad-hoc if/else branches. Shared instructions get copy-pasted across features. Every change is buried in application commits with no clear ownership or review path.

Changing a model touches everything

Swapping from gpt-4o to gpt-5.4 means finding every file that references the model name, the temperature, the max tokens. One missed spot and you're running two models in production without knowing it.

Settings, tools, and prompts drift apart

The prompt string lives in one file. Model config lives in another. Tool definitions live in a third. When someone updates the prompt but forgets the tool list, the model hallucinates tool calls that don't exist.

Every environment is a fork

Dev uses a cheaper model. Prod uses a bigger one. Free-tier users get lower output limits. Without structured overrides, each combination becomes another copy of the same prompt with slightly different settings.

A control layer for prompt-driven applications

PromptOpsKit separates prompt operations from application code — without taking them out of source control. Each prompt becomes a structured, reusable asset that captures model settings, tools, context rules, overrides, and composable instructions in one file.

One file. Everything that matters.

Prompt text, model name, sampling parameters, reasoning effort, tool definitions, and context rules live together in a single .md file with YAML front matter. Change the model in one line — it applies everywhere that prompt is used.

Environment and tier overrides

Define environments and tiers in front matter. Dev gets gpt-5.4-mini with low temperature. Prod gets gpt-5.4. Free-tier caps output tokens. One prompt file handles all variations — no forks, no duplication.

Tools and MCP declared with the prompt

Bind tool definitions and MCP server references directly in front matter. When the prompt ships, its tools ship with it. No separate config to keep in sync, no drift between prompt intent and available capabilities.

Runtime context injection

{{ variable }} placeholders inject user data, app state, and session context at render time. Declare expected inputs in front matter — missing or extra variables are caught before the API call, not after.
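In spirit, the render-time check amounts to something like the sketch below — a hypothetical helper, not the library's internals — where placeholders resolve against supplied variables and both missing and unused inputs fail fast:

```typescript
type Vars = Record<string, string>;

// Substitute {{ name }} placeholders and validate against declared inputs.
function renderTemplate(template: string, declared: string[], vars: Vars): string {
  const used = new Set<string>();
  const text = template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_match, name: string) => {
    used.add(name);
    if (!(name in vars)) throw new Error(`Missing variable: ${name}`);
    return vars[name];
  });
  // Variables supplied but never consumed by a placeholder are flagged too.
  const extras = Object.keys(vars).filter((k) => !used.has(k));
  if (extras.length > 0) throw new Error(`Unused variables: ${extras.join(', ')}`);
  // Every input declared in front matter must be provided before the API call.
  const missing = declared.filter((k) => !(k in vars));
  if (missing.length > 0) throw new Error(`Missing inputs: ${missing.join(', ')}`);
  return text;
}
```

The point is the timing: a bad variable set throws during render, not as a confusing model response after the request has already been paid for.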

Composable shared instructions

includes lets you define tone, policy, or safety instructions once and compose them into any prompt. Update the shared file and every prompt that includes it gets the change — no copy-paste, no drift.
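Conceptually, composition is just resolution plus concatenation. A minimal sketch, with an in-memory map standing in for the filesystem and illustrative file contents:

```typescript
// Stand-in for shared instruction files on disk; contents are illustrative.
const sharedFiles: Record<string, string> = {
  './shared/tone.md': 'Be concise and friendly.',
  './shared/safety.md': 'Never reveal account credentials.',
};

// Resolve each include and prepend it to the prompt body.
// A broken include path fails loudly rather than silently dropping policy text.
function composeInstructions(includes: string[], promptBody: string): string {
  const parts = includes.map((path) => {
    const content = sharedFiles[path];
    if (content === undefined) throw new Error(`Broken include: ${path}`);
    return content;
  });
  return [...parts, promptBody].join('\n\n');
}
```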

Render for any provider

Write one prompt. Render it for OpenAI, Anthropic, Gemini, or OpenRouter. Each adapter produces the correct request body shape — you own the HTTP call, auth, and transport.

How it works

1. Install and scaffold

npm install promptopskit
npx promptopskit init ./prompts

Creates a prompts/ directory with starter files and a shared instructions folder.

2. Write a prompt file

One Markdown file defines everything: model, parameters, tools, context, overrides, includes, and prompt text.

---
id: support.reply
schema_version: 1
provider: openai
model: gpt-5.4
reasoning:
  effort: medium
sampling:
  temperature: 0.7
tools:
  - lookup_account
  - reset_password
context:
  inputs:
    - user_message
    - app_context
  history:
    max_items: 10
includes:
  - ./shared/tone.md
  - ./shared/safety.md
environments:
  dev:
    model: gpt-5.4-mini
    sampling:
      temperature: 0.2
  prod:
    model: gpt-5.4
tiers:
  free:
    model: gpt-5.4-mini
    sampling:
      max_output_tokens: 1024
  pro:
    model: gpt-5.4
---

# System instructions

You are a helpful support assistant working in {{ app_context }}.
Use tools to look up account details before answering.

# Prompt template

{{ user_message }}

3. Render for any provider

Load the prompt, supply variables, and get a ready-to-send request body.

import { createPromptOpsKit } from 'promptopskit';

const kit = createPromptOpsKit({ sourceDir: './prompts' });

const { request } = await kit.renderPrompt({
  path: 'support/reply',
  provider: 'openai',
  environment: 'prod',
  tier: 'pro',
  variables: {
    user_message: 'How do I reset my password?',
    app_context: 'Account settings page',
  },
});

// request.body is shaped for OpenAI — send it with your own HTTP client
const res = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify(request.body),
});

Features for engineering teams

Validation & schema checks

  • Zod schema validation for front matter
  • Levenshtein-based "did you mean?" for field typos
  • Variable usage checks — missing and unused inputs
  • Strict mode fails on any unresolved placeholder

CLI for local dev and CI

  • init — scaffold starter prompts
  • validate — check all prompts before merge
  • compile — pre-compile .md → JSON or ESM
  • render — preview output with test variables
  • inspect — print normalized prompt as JSON
  • skill — deploy AI agent instructions

Compiled artifacts for production

  • Pre-compile prompts to JSON or ESM — skip parsing at runtime
  • Auto mode loads compiled artifacts when available and falls back to source
  • LRU cache with mtime-based invalidation during development

No transport lock-in

  • Adapters return request body only — no HTTP client bundled
  • You control auth, retries, headers, and observability
  • Works with any fetch wrapper, SDK, or infrastructure

Frequently asked questions

What happens when I need to change models?

Change one line in the prompt file's front matter. The model, parameters, and any per-environment overrides update together. No hunting through application code for scattered references.

How do environment and tier overrides work?

Define environments (dev/staging/prod) and tiers (free/pro/enterprise) in front matter. At render time, pass environment and tier — PromptOpsKit merges the right model, sampling, and tool settings. Precedence: base → env → tier → runtime.
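The precedence chain can be pictured as a layered merge, sketched below. Field names mirror the front matter example above; the merge logic itself is an assumption about behavior, not the library's source:

```typescript
interface Settings {
  model?: string;
  sampling?: { temperature?: number; max_output_tokens?: number };
}

// Later layers win: base → env → tier → runtime.
// Sampling merges per-field so an override can change one knob without
// clobbering the rest.
function mergeSettings(...layers: (Settings | undefined)[]): Settings {
  const out: Settings = {};
  for (const layer of layers) {
    if (!layer) continue;
    if (layer.model !== undefined) out.model = layer.model;
    if (layer.sampling) out.sampling = { ...out.sampling, ...layer.sampling };
  }
  return out;
}

const base: Settings = { model: 'gpt-5.4', sampling: { temperature: 0.7 } };
const dev: Settings = { model: 'gpt-5.4-mini', sampling: { temperature: 0.2 } };
const free: Settings = { sampling: { max_output_tokens: 1024 } };

const resolved = mergeSettings(base, dev, free);
// model from the env layer, temperature from the env layer,
// max_output_tokens from the tier layer
```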

How do tools work with prompts?

List tool names or inline tool definitions in front matter under tools. They travel with the prompt and appear in the rendered request body — no separate wiring needed in your app.
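As a rough sketch of what "travel with the prompt" means, a named tool could be resolved from a registry into OpenAI's function-tool shape in the rendered body. The registry and tool definition here are illustrative, not part of the library:

```typescript
// Hypothetical registry mapping tool names from front matter to definitions.
const toolRegistry: Record<string, { description: string; parameters: object }> = {
  lookup_account: {
    description: 'Look up a customer account by email.',
    parameters: {
      type: 'object',
      properties: { email: { type: 'string' } },
      required: ['email'],
    },
  },
};

// Expand names into OpenAI's function-tool request shape.
// An unknown name fails at render time, before the model can hallucinate it.
function toOpenAITools(names: string[]) {
  return names.map((name) => {
    const def = toolRegistry[name];
    if (!def) throw new Error(`Unknown tool: ${name}`);
    return { type: 'function', function: { name, ...def } };
  });
}
```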

How does context injection work at runtime?

Declare expected inputs in front matter. At render time, pass variables like user messages, app state, or session data. PromptOpsKit validates that all declared inputs are provided and flags any extras.

Does it support conversation history?

Yes. Pass a history array at render time and configure context.history.max_items in front matter to control how many turns are included in the request body.
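The trimming behavior described above reduces to keeping the most recent turns, something like this sketch (the message shape is illustrative):

```typescript
interface Turn {
  role: 'user' | 'assistant';
  content: string;
}

// Keep at most maxItems of the most recent turns.
function trimHistory(history: Turn[], maxItems: number): Turn[] {
  return history.length <= maxItems ? history : history.slice(-maxItems);
}
```

Trimming from the front keeps the latest exchanges, which is usually what matters for continuity, at a bounded token cost.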

Is PromptOpsKit only for one model provider?

No. It ships with adapters for OpenAI, Anthropic, Gemini, and OpenRouter. Each produces the correct request body shape for that API. Switch providers by changing one field.
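The reason adapters exist is that the same prompt maps to differently shaped bodies per provider. A simplified sketch, following the public OpenAI Chat Completions and Anthropic Messages shapes (the normalized form here is an assumption):

```typescript
interface NormalizedPrompt {
  model: string;
  system: string;
  user: string;
  maxTokens: number;
}

// OpenAI Chat Completions: system prompt is a message with role "system".
function toOpenAIBody(p: NormalizedPrompt) {
  return {
    model: p.model,
    max_tokens: p.maxTokens,
    messages: [
      { role: 'system', content: p.system },
      { role: 'user', content: p.user },
    ],
  };
}

// Anthropic Messages: system prompt is a top-level field, not a message.
function toAnthropicBody(p: NormalizedPrompt) {
  return {
    model: p.model,
    max_tokens: p.maxTokens,
    system: p.system,
    messages: [{ role: 'user', content: p.user }],
  };
}
```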

Does it make HTTP requests for me?

No. PromptOpsKit returns request-body payloads. Your application owns transport, auth, retries, and headers — so it works with any HTTP client or infrastructure.

Can I validate prompts in CI?

Yes. Run promptopskit validate ./prompts --strict in your pipeline. It catches schema errors, missing variables, broken includes, and field typos before deploy.

Repo-native, not dashboard-native

PromptOpsKit is MIT-licensed and lives entirely in your codebase. No hosted dashboards, no external admin tools. Source prompts in development, compiled artifacts in production, and CLI validation in CI — all from one npm install.

View repository