Changing a model touches everything
Swapping from gpt-4o to gpt-5.4 means finding every file that references the model name, the temperature, the max tokens. One missed spot and you're running two models in production without knowing it.
Open-source prompt infrastructure
Turn hardcoded prompt strings, system instructions, tools, and model settings into reusable, environment-aware assets — rendered for OpenAI, Anthropic, Gemini, and OpenRouter, and shipped with your app.
npm install promptopskit
Prompts are already in your codebase — as scattered strings and provider-specific glue. Model parameters live in separate config objects. Tool definitions drift independently. Environment differences are handled with ad-hoc if/else branches. Shared instructions get copy-pasted across features. Every change is buried in application commits with no clear ownership or review path.
The prompt string lives in one file. Model config lives in another. Tool definitions live in a third. When someone updates the prompt but forgets the tool list, the model hallucinates tool calls that don't exist.
Dev uses a cheaper model. Prod uses a bigger one. Free-tier users get lower output limits. Without structured overrides, each combination becomes another copy of the same prompt with slightly different settings.
PromptOpsKit separates prompt operations from application code — without taking them out of source control. Each prompt becomes a structured, reusable asset that captures model settings, tools, context rules, overrides, and composable instructions in one file.
Prompt text, model name, sampling parameters, reasoning effort, tool definitions, and context rules live together in a single .md file with YAML front matter. Change the model in one line — it applies everywhere that prompt is used.
Define environments and tiers in front matter. Dev gets gpt-5.4-mini with low temperature. Prod gets gpt-5.4. Free-tier caps output tokens. One prompt file handles all variations — no forks, no duplication.
Bind tool definitions and MCP server references directly in front matter. When the prompt ships, its tools ship with it. No separate config to keep in sync, no drift between prompt intent and available capabilities.
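To picture what "tools travel with the prompt" means, here is how a bound tool might surface in a rendered request body. The wrapping shape follows OpenAI's function-tool format; the `lookup_account` schema itself is illustrative, not something PromptOpsKit defines.

```typescript
// An inline tool definition as it might appear once rendered into an
// OpenAI-style request body. The parameter schema is illustrative.
const lookupAccount = {
  type: 'function',
  function: {
    name: 'lookup_account',
    description: 'Fetch account details for the current user.',
    parameters: {
      type: 'object',
      properties: { account_id: { type: 'string' } },
      required: ['account_id'],
    },
  },
};

// Because the tool is declared in the prompt's front matter, it lands in
// the same request body as the prompt text: the two cannot drift apart.
const requestBody = {
  model: 'gpt-5.4',
  messages: [{ role: 'user', content: 'How do I reset my password?' }],
  tools: [lookupAccount],
};
```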
{{ variable }} placeholders inject user data, app state, and session context at render time. Declare expected inputs in front matter — missing or extra variables are caught before the API call, not after.
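The pre-call check can be sketched in a few lines. Here `declared` stands in for the `context.inputs` list from front matter; the function name and return shape are illustrative, not PromptOpsKit's actual API.

```typescript
// Compare declared inputs against the variables supplied at render time.
// Returns the discrepancies so the caller can fail before any API call.
function checkVariables(
  declared: string[],
  provided: Record<string, unknown>,
): { missing: string[]; extra: string[] } {
  return {
    missing: declared.filter((name) => !(name in provided)),
    extra: Object.keys(provided).filter((name) => !declared.includes(name)),
  };
}

const result = checkVariables(
  ['user_message', 'app_context'],
  { user_message: 'How do I reset my password?', session_id: 'abc' },
);
// result.missing → ['app_context'], result.extra → ['session_id']
```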
includes lets you define tone, policy, or safety instructions once and compose them into any prompt. Update the shared file and every prompt that includes it gets the change — no copy-paste, no drift.
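Conceptually, include resolution inlines each shared file into the prompt's instructions at render time. The resolver below reads from an in-memory map rather than disk and is a sketch of the idea, not the library's implementation.

```typescript
// Shared instruction files, keyed by the paths a prompt's front matter
// would reference. Editing one entry changes every prompt that includes it.
const sharedFiles: Record<string, string> = {
  './shared/tone.md': 'Be concise and friendly.',
  './shared/safety.md': 'Never reveal account credentials.',
};

// Prepend each included file's text to the prompt's own instructions.
function composeInstructions(includes: string[], promptText: string): string {
  const shared = includes.map((path) => {
    const text = sharedFiles[path];
    if (text === undefined) throw new Error(`broken include: ${path}`);
    return text;
  });
  return [...shared, promptText].join('\n\n');
}
```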
Write one prompt. Render it for OpenAI, Anthropic, Gemini, or OpenRouter. Each adapter produces the correct request body shape — you own the HTTP call, auth, and transport.
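The difference an adapter absorbs is concrete: the same prompt becomes differently shaped request bodies per provider. The two shapes below follow OpenAI's Chat Completions and Anthropic's Messages APIs; the `NormalizedPrompt` type is an illustrative stand-in for PromptOpsKit's internal representation.

```typescript
interface NormalizedPrompt {
  model: string;
  system: string;
  userMessage: string;
  maxOutputTokens: number;
}

// OpenAI Chat Completions: the system prompt travels as a message.
function toOpenAI(p: NormalizedPrompt) {
  return {
    model: p.model,
    max_tokens: p.maxOutputTokens,
    messages: [
      { role: 'system', content: p.system },
      { role: 'user', content: p.userMessage },
    ],
  };
}

// Anthropic Messages: the system prompt is a top-level field.
function toAnthropic(p: NormalizedPrompt) {
  return {
    model: p.model,
    max_tokens: p.maxOutputTokens,
    system: p.system,
    messages: [{ role: 'user', content: p.userMessage }],
  };
}
```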
npm install promptopskit
npx promptopskit init ./prompts
Creates a prompts/ directory with starter files and a shared instructions folder.
One Markdown file defines everything: model, parameters, tools, context, overrides, includes, and prompt text.
---
id: support.reply
schema_version: 1
provider: openai
model: gpt-5.4
reasoning:
  effort: medium
sampling:
  temperature: 0.7
tools:
  - lookup_account
  - reset_password
context:
  inputs:
    - user_message
    - app_context
  history:
    max_items: 10
includes:
  - ./shared/tone.md
  - ./shared/safety.md
environments:
  dev:
    model: gpt-5.4-mini
    sampling:
      temperature: 0.2
  prod:
    model: gpt-5.4
tiers:
  free:
    model: gpt-5.4-mini
    sampling:
      max_output_tokens: 1024
  pro:
    model: gpt-5.4
---
# System instructions
You are a helpful support assistant working in {{ app_context }}.
Use tools to look up account details before answering.
# Prompt template
{{ user_message }}
Load the prompt, supply variables, and get a ready-to-send request body.
import { createPromptOpsKit } from 'promptopskit';

const kit = createPromptOpsKit({ sourceDir: './prompts' });

const { request } = await kit.renderPrompt({
  path: 'support/reply',
  provider: 'openai',
  environment: 'prod',
  tier: 'pro',
  variables: {
    user_message: 'How do I reset my password?',
    app_context: 'Account settings page',
  },
});

// request.body is shaped for OpenAI — send it with your own HTTP client
const res = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify(request.body),
});
init — scaffold starter prompts
validate — check all prompts before merge
compile — pre-compile .md → JSON or ESM
render — preview output with test variables
inspect — print normalized prompt as JSON
skill — deploy AI agent instructions

Change one line in the prompt file's front matter. The model, parameters, and any per-environment overrides update together. No hunting through application code for scattered references.
Define environments (dev/staging/prod) and tiers (free/pro/enterprise) in front matter. At render time, pass environment and tier — PromptOpsKit merges the right model, sampling, and tool settings. Precedence: base → env → tier → runtime.
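That precedence chain reads as a left-to-right merge, with later layers overriding earlier ones and nested blocks like `sampling` merged key by key. A minimal sketch, assuming plain-object settings (this is not PromptOpsKit's actual merge code):

```typescript
type Settings = { [key: string]: unknown };

// Merge layers left to right: base → environment → tier → runtime.
// Nested objects merge recursively; scalars and arrays are replaced.
function mergeLayers(...layers: Settings[]): Settings {
  const out: Settings = {};
  for (const layer of layers) {
    for (const [key, value] of Object.entries(layer)) {
      const prev = out[key];
      if (
        typeof value === 'object' && value !== null && !Array.isArray(value) &&
        typeof prev === 'object' && prev !== null && !Array.isArray(prev)
      ) {
        out[key] = mergeLayers(prev as Settings, value as Settings);
      } else {
        out[key] = value;
      }
    }
  }
  return out;
}

const resolved = mergeLayers(
  { model: 'gpt-5.4', sampling: { temperature: 0.7 } },      // base
  { model: 'gpt-5.4-mini', sampling: { temperature: 0.2 } }, // environment: dev
  { sampling: { max_output_tokens: 1024 } },                 // tier: free
);
// resolved → { model: 'gpt-5.4-mini',
//              sampling: { temperature: 0.2, max_output_tokens: 1024 } }
```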
List tool names or inline tool definitions in front matter under tools. They travel with the prompt and appear in the rendered request body — no separate wiring needed in your app.
Declare expected inputs in front matter. At render time, pass variables like user messages, app state, or session data. PromptOpsKit validates that all declared inputs are provided and flags any extras.
Yes. Pass a history array at render time and configure context.history.max_items in front matter to control how many turns are included in the request body.
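The trimming that `max_items` implies is presumably "keep the most recent N turns"; a sketch under that assumption, with an illustrative `Turn` shape:

```typescript
interface Turn {
  role: 'user' | 'assistant';
  content: string;
}

// Keep only the most recent maxItems turns, preserving their order.
function trimHistory(history: Turn[], maxItems: number): Turn[] {
  return maxItems >= history.length ? history : history.slice(-maxItems);
}
```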
No. It ships with adapters for OpenAI, Anthropic, Gemini, and OpenRouter. Each produces the correct request body shape for that API. Switch providers by changing one field.
No. PromptOpsKit returns request-body payloads. Your application owns transport, auth, retries, and headers — so it works with any HTTP client or infrastructure.
Yes. Run promptopskit validate ./prompts --strict in your pipeline. It catches schema errors, missing variables, broken includes, and field typos before deploy.
PromptOpsKit is MIT-licensed and lives entirely in your codebase. No hosted dashboards,
no external admin tools. Source prompts in development, compiled artifacts in production,
and CLI validation in CI — all from one npm install.