Building an AI content pipeline: from generation to publishing

Architecture for a system where an LLM generates content, a human or rule-based system approves it, and Postproxy publishes it. Covers the full chain from prompt to live post.

The pipeline that always stopped one step short

Content generation is a solved problem. LLMs produce text. Image models produce visuals. Workflow engines stitch them together. You can go from a prompt to a finished post in seconds.

And then it stops. Someone copies the text into a social media dashboard. Someone uploads the image manually. Someone clicks publish on three platforms, one at a time. The most automated part of the system — generation — hands off to the least automated part — publishing — and a person fills the gap.

This has been the state of things for a while. Not because publishing is inherently hard to automate, but because each social platform requires its own upload protocol, its own authentication flow, its own format constraints, and its own failure modes. Building and maintaining that layer was its own engineering project.

That layer is what Postproxy is. And with it, the full cycle — from generation to live post — can finally run without a person in the middle.

What the full cycle looks like

A complete AI content pipeline has three stages:

Generate ──▶ Approve ──▶ Publish

Generate is whatever system produces the content. An LLM API call. A workflow in n8n. An agent. A script that pulls from a content calendar and sends a prompt to Claude or GPT. The specifics do not matter here — what matters is that this stage produces text and optionally media.

Approve is whatever decides the content should go live. A human reviewing a draft. A rule-based check. An AI classifier. Or nothing at all, if you trust your generation step enough. This stage is optional but usually wise.

Publish is where the content crosses the boundary into the real world. Multiple platforms, each with different APIs, different upload flows, different constraints. This is the stage that used to require a person or a custom integration per platform.

Postproxy replaces the last stage. Your system generates content however it wants, approves it however it wants, and calls one API to publish it everywhere.

Generation is not Postproxy’s job

Postproxy does not generate content. It does not run prompts, choose topics, or produce images. That is the job of whatever system you build or adopt upstream.

Your generation system might be:

  • A script that calls an LLM API on a schedule
  • An n8n workflow that pulls topics from a content calendar and generates posts with AI
  • An agent framework that decides what to post based on signals
  • A human who writes the text and uses AI only for images
  • A fully autonomous system that runs without human involvement

Postproxy does not care. It accepts text, media URLs, and a list of target platforms. Where those came from is your business.

The only requirement from the generation side is that any media — images, videos — must be available at a publicly accessible URL by the time the publish call happens. If your generation step produces images, upload them to cloud storage (S3, R2, GCS, or similar) first. Postproxy needs a URL, not a file path or base64 string.
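That requirement is easy to check before the publish call ever fires. A minimal sketch in Python (the helper name and rules are illustrative, not part of the Postproxy API): accept only http(s) URLs, and reject local file paths and inline base64 data URIs up front.

```python
from urllib.parse import urlparse

def is_publishable_media_url(value: str) -> bool:
    """Return True if `value` looks like a publicly fetchable media URL.

    Postproxy needs an http(s) URL, so local file paths and base64
    data URIs are rejected before any API call is made. (Helper name
    and validation rules are illustrative, not a Postproxy feature.)
    """
    if value.startswith("data:"):  # inline base64, not fetchable by Postproxy
        return False
    parsed = urlparse(value)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```

Running this against `/tmp/generated.png` or a `data:image/png;base64,...` string returns False, which is your signal to upload to cloud storage first and pass the resulting URL instead.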

Approval: the stage most pipelines skip

Between generation and publishing, there is a decision: should this content actually go live?

Fully autonomous pipelines skip this step. The LLM output goes straight to publishing. This works for low-stakes content where mistakes are cheap. For anything else — brand accounts, company pages, regulated industries — some form of approval is worth having.

Postproxy supports this through draft posts. Create a post with draft: true, and it will be saved without publishing:

curl -X POST "https://api.postproxy.dev/api/posts" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "post": {
      "body": "Your generated content here"
    },
    "profiles": ["twitter", "instagram", "linkedin", "threads"],
    "media": ["https://your-storage.com/generated-image.png"],
    "draft": true
  }'

The post sits in draft status. Nothing is published. Nothing reaches any platform. The content waits.

When the approval comes — a human clicking a button, a script deciding the content passes checks, an agent confirming it looks right — a single call publishes the draft:

curl -X POST "https://api.postproxy.dev/api/posts/POST_ID/publish" \
  -H "Authorization: Bearer YOUR_API_KEY"

The gap between these two calls is where approval lives. What fills that gap is up to you. A Slack notification with an approve button. A daily review queue. A rule-based system that checks for flagged terms and publishes automatically if nothing is flagged. Postproxy does not prescribe how you approve content. It gives you a clean boundary — draft and publish — and you decide what happens in between.

This is what we described in "human-in-the-loop is a state, not a fallback." The system is not broken while waiting. It is in a known state, with explicit transitions.

For fully autonomous pipelines that skip approval, omit draft: true. Postproxy publishes immediately.
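The draft-then-publish boundary reduces to two request shapes. A sketch of both as plain request builders (function names are hypothetical; the endpoint paths and payload fields come from the curl examples above):

```python
API_BASE = "https://api.postproxy.dev/api/posts"

def draft_request(body: str, profiles: list[str], media: list[str]) -> dict:
    """Build the create-draft request: the same payload as an immediate
    publish, plus draft: true so nothing goes live yet."""
    return {
        "url": API_BASE,
        "json": {
            "post": {"body": body},
            "profiles": profiles,
            "media": media,
            "draft": True,
        },
    }

def publish_request(post_id: str) -> dict:
    """Build the request that releases an approved draft."""
    return {"url": f"{API_BASE}/{post_id}/publish", "json": None}
```

Whatever fills the approval gap — a Slack button handler, a review queue, a rule-based check — only needs to hold on to the post ID from the first call and hand it to the second.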

Publishing: the part that is finally automated

This is where Postproxy fits. Your system calls one endpoint, and the content goes out to every connected platform.

curl -X POST "https://api.postproxy.dev/api/posts" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "post": {
      "body": "We just shipped dark mode across all three apps."
    },
    "profiles": ["twitter", "instagram", "linkedin", "threads"],
    "media": ["https://your-storage.com/dark-mode-announcement.png"],
    "platforms": {
      "instagram": {
        "first_comment": "Link in bio for the full writeup"
      }
    }
  }'

One request. Postproxy handles the seven different upload protocols, the per-platform format validation, the processing waits, the container creation on Instagram and Threads, the binary uploads to LinkedIn, the chunked upload to X. Your pipeline does not need to know any of that.

This is the step that used to be manual. Someone copying text, uploading images, clicking publish, platform by platform. Or worse — a custom integration per platform, maintained in-house, breaking every time an API changes. Now it is one API call from whatever system generates the content.

Observing outcomes

Publishing to multiple platforms means partial success is normal. Three platforms might succeed while one fails. Your pipeline should know.

After publishing, check what happened:

curl "https://api.postproxy.dev/api/posts/POST_ID" \
  -H "Authorization: Bearer YOUR_API_KEY"

The response includes per-platform status:

{
  "id": "abc123",
  "status": "processed",
  "platforms": [
    { "platform": "twitter", "status": "published" },
    { "platform": "instagram", "status": "published" },
    { "platform": "linkedin", "status": "failed" },
    { "platform": "threads", "status": "published" }
  ]
}

Three published, one failed. Your pipeline can log this, retry the failure, notify someone, or move on. The important thing is that the outcome is explicit. The system does not pretend everything worked when it didn’t. Partial success is the default, not an edge case.
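Extracting the failures from a status response like the one above takes only a few lines. A sketch (field names taken from the sample response; the function name is illustrative):

```python
def failed_platforms(status_response: dict) -> list[str]:
    """Return the platforms whose publish attempt failed, so the
    pipeline can retry or alert on exactly those."""
    return [
        entry["platform"]
        for entry in status_response.get("platforms", [])
        if entry.get("status") == "failed"
    ]
```

Against the sample response above, this returns ["linkedin"] — the one platform your retry or notification logic needs to act on.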

Postproxy also collects insights — impressions per platform — updated periodically after a post is published. Over time, this data can inform which content performs where, feeding back into your generation step.

The full cycle, connected

Here is what the complete pipeline looks like:

┌──────────────────┐
│   Your system    │
│                  │
│    Generate      │   LLM, image model, content calendar,
│    content       │   agent, script — whatever you use
│                  │
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│   Your system    │
│                  │
│     Approve      │   Human review, rule-based checks,
│    (optional)    │   auto-approve, or skip entirely
│                  │
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│    Postproxy     │
│                  │
│ POST /api/posts  │   One call. All platforms.
│                  │   Media uploads, format validation,
│                  │   processing, publishing — handled.
│                  │
└────────┬─────────┘
         │
         ▼
┌──────────────────┐
│    Postproxy     │
│                  │
│ GET /api/posts   │   Per-platform outcomes.
│                  │   What published, what failed, why.
│                  │   Insights over time.
│                  │
└──────────────────┘

Everything above the Postproxy line is yours. Generate content however you want. Approve it however you want. Schedule it however you want. Use any language, any framework, any orchestration tool.

Everything below the line is Postproxy. One API. Eight platforms. Per-platform outcomes.

The full cycle — from prompt to live post — no longer requires a person to bridge the gap between generation and publishing.

How teams run this in practice

The architecture above can be implemented in different ways depending on what you already have.

A cron job. A script runs daily. It calls an LLM to generate a post, uploads any media to storage, and calls the Postproxy API. Ten lines of code, fully automated.
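That daily script stays about that small. A sketch (the `send` parameter is injectable so the function can be exercised without a network call; everything except the endpoint and payload shape is an assumption for illustration):

```python
import json
import urllib.request

def publish(body: str, profiles: list[str], media: list[str],
            api_key: str, send=None) -> dict:
    """Send one generated post to the Postproxy publish endpoint.

    By default this POSTs with urllib; pass a `send` callable to test
    the pipeline offline. (Wrapper shape is illustrative, not an SDK.)
    """
    payload = {
        "post": {"body": body},
        "profiles": profiles,
        "media": media,
    }
    if send is None:
        def send(url, data, headers):
            req = urllib.request.Request(url, data=data,
                                         headers=headers, method="POST")
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
    return send(
        "https://api.postproxy.dev/api/posts",
        json.dumps(payload).encode(),
        {"Authorization": f"Bearer {api_key}",
         "Content-Type": "application/json"},
    )
```

A cron entry then just wires your LLM call to this function: generate the text, upload any media to storage, call publish with the resulting URLs.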

A workflow engine. An n8n workflow pulls from a content calendar, generates text and images with AI, and publishes via the Postproxy node. Each step is visible, debuggable, and can be paused for human review.

An AI agent. An agent with access to Postproxy’s remote MCP server can check publishing history, generate content, save drafts when uncertain, and publish directly when confident. The agent handles the full cycle in a single session.

A custom application. Your app generates content through whatever flow makes sense — user-assisted, fully automated, or hybrid — and calls the Postproxy API at the end. The publishing layer is a single HTTP call, not a platform integration project.

In every case, the pattern is the same. Your system does the thinking. Postproxy does the publishing. The gap that used to require manual work — logging into platforms, formatting per-platform, uploading media, clicking publish — is closed.

What changes when the full cycle is automated

When publishing stops being manual, upstream automation becomes more valuable.

A content calendar that generates posts automatically is useful on its own. But if someone still has to copy-paste into eight platforms every morning, the automation only saved half the work. When publishing is also automated, the calendar produces content and it reaches the audience. End to end.

An AI agent that generates updates based on signals — new product releases, trending topics, customer milestones — can now act on what it generates. Not by queuing work for a person, but by publishing it. Or by saving it as a draft when it is not confident, and publishing it when a human confirms.

A development team that wants to announce releases can generate an announcement, review it, and publish it from the same pipeline that deploys the code.

The generation tools already exist. The approval patterns are well understood. What was missing was the last step — reliable, multi-platform publishing through a single API. That is what Postproxy provides, and it is what makes the full cycle possible.

What Postproxy handles

Postproxy is the publishing layer. It does not generate content, and it does not make approval decisions. It executes publishing intent and reports what actually happened.

  • Publishing to eight platforms via a single API call
  • Draft creation for content awaiting approval, with a publish endpoint to release it
  • Seven different media upload protocols — chunked, container, resumable, URL pull
  • Automatic video transcoding to each platform’s required codec and format
  • Per-platform format validation before upload begins
  • Per-platform processing status polling
  • Per-platform outcome reporting — what published, what failed, and why
  • Scheduling for future publishing
  • Insights collection after publishing

Your system handles everything upstream. Postproxy handles the last mile.

Connect your accounts and start building your pipeline through the Postproxy API.

Ready to get started?

Start with our free plan and scale as your needs grow. No credit card required.