Using LLMs to generate platform-specific social media content

Each social platform rewards different formats: threads on X, hashtag-heavy captions on Instagram, professional tone on LinkedIn. This post shows how to prompt LLMs to generate tailored variations from a single content brief.

The same post everywhere is a missed opportunity

Most cross-posting workflows take one piece of text and push it to every platform. The same 200 characters go to X, Instagram, LinkedIn, Threads, and Facebook. It works. It publishes. But it misses the point of each platform.

X rewards punchy, opinionated, conversational writing, and threads — a series of connected posts — let you develop an idea across multiple beats. Instagram audiences expect visual captions with relevant hashtags that help discovery. LinkedIn favors a professional, slightly longer-form tone with clear structure. Meta's Threads is casual and conversational, closer to how people talk than how brands write.

When you send the same text everywhere, you are optimizing for none of them. The content is not wrong on any platform, but it is not native to any of them either. It reads like what it is — a cross-post.

LLMs change this. Instead of writing one post and copying it everywhere, you can write one brief and generate platform-native variations. Each version is tailored to the format, tone, and conventions that work on that specific platform. And the whole process takes seconds.

Start with a content brief, not a post

The mistake most people make is writing a finished post and then asking an LLM to “adapt it for Instagram.” That produces awkward rewrites — the same structure wearing different clothes.

A better approach is to give the LLM a content brief and let it generate each platform version from scratch. The brief captures the intent. The LLM handles the execution per platform.

A content brief has four parts:

  • Topic: What the post is about
  • Key message: The one thing someone should take away
  • Tone: How it should feel (conversational, authoritative, playful, urgent)
  • Supporting details: Facts, quotes, links, or context the LLM should use

Here is an example brief:

Topic: We just launched a webhooks feature for real-time post status updates
Key message: You no longer need to poll for status — Postproxy tells you when something publishes or fails
Tone: Confident but not hype-y. Developer audience.
Details:
- Webhook fires on publish success, publish failure, and processing timeout
- Configurable per workspace
- Payload includes per-platform status
- Docs at postproxy.dev/docs/webhooks

This brief contains everything the LLM needs to write for any platform. It does not contain any platform-specific formatting, because that is the LLM’s job.
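If you generate briefs programmatically, it helps to keep them as structured data and render them into prompt text on demand. A minimal sketch — the `ContentBrief` class and its `render` helper are illustrative conveniences, not part of any library:

```python
from dataclasses import dataclass, field

@dataclass
class ContentBrief:
    topic: str
    key_message: str
    tone: str
    details: list[str] = field(default_factory=list)

    def render(self) -> str:
        """Render the brief as the plain-text block the prompts expect."""
        lines = [
            f"Topic: {self.topic}",
            f"Key message: {self.key_message}",
            f"Tone: {self.tone}",
            "Details:",
        ]
        lines += [f"- {d}" for d in self.details]
        return "\n".join(lines)

brief = ContentBrief(
    topic="We just launched a webhooks feature for real-time post status updates",
    key_message="You no longer need to poll for status",
    tone="Confident but not hype-y. Developer audience.",
    details=["Configurable per workspace", "Docs at postproxy.dev/docs/webhooks"],
)
```

The same `brief.render()` output can then be substituted into every platform prompt, so a change to the brief propagates everywhere.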

Prompting for X

X rewards brevity, opinion, and a conversational voice. Good X posts often read like something a person would say out loud. They have a point of view. They skip the preamble.

For single tweets, you want the LLM to get to the point fast. For threads, you want it to structure an argument across multiple beats, where each post stands on its own but builds toward a conclusion.

Single post prompt:

You are writing a post for X (Twitter). The audience is developers.
Rules:
- Maximum 280 characters
- Lead with the most interesting or surprising part
- Sound like a person, not a brand
- No hashtags unless they add real meaning
- If there is a link, put it at the end
- Do not start with "Exciting news" or "We're thrilled"
Content brief:
{brief}
Write one post.

Thread prompt:

You are writing a thread for X (Twitter). The audience is developers.
Rules:
- 3-6 posts in the thread
- First post must hook — it should make someone want to read the rest
- Each post is under 280 characters
- Each post should make sense on its own if someone sees it out of context
- Last post can include a link or call to action
- Conversational tone. No corporate language.
- Number each post (1/, 2/, etc.)
Content brief:
{brief}
Write the thread.

What good output looks like:

Single post:

You can stop polling for post status. Postproxy now fires webhooks on publish, fail, and timeout — per platform, per post. Docs: postproxy.dev/docs/webhooks

Thread:

1/ We kept seeing the same pattern: publish a post, then poll every few seconds to check if it actually went live. That polling loop is now unnecessary.
2/ Postproxy webhooks fire on three events: publish success, publish failure, and processing timeout. Per platform. So you know exactly what happened on X vs Instagram vs LinkedIn.
3/ The payload includes the full per-platform status breakdown. No guessing, no parsing HTML emails, no checking dashboards manually.
4/ Configure once per workspace. Point it at your endpoint. That's it. Docs: postproxy.dev/docs/webhooks

The thread version develops the idea across beats. Each post is self-contained but they build a narrative. The first post hooks by naming a familiar pain. The last post closes with a link.
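Both rules in the thread prompt — numbering and the per-post character limit — are easy to check mechanically before publishing. A small validator along these lines works; the function is illustrative, not part of any SDK:

```python
def validate_thread(posts, max_chars=280):
    """Check that each thread post is numbered (1/, 2/, ...) and fits the
    character limit. Returns a list of (post_number, problem) tuples;
    an empty list means the thread passes."""
    problems = []
    for i, post in enumerate(posts, start=1):
        if len(post) > max_chars:
            problems.append((i, f"{len(post)} chars, limit is {max_chars}"))
        if not post.startswith(f"{i}/"):
            problems.append((i, "missing or wrong number prefix"))
    return problems

thread = [
    "1/ We kept seeing the same pattern: publish a post, then poll every few seconds.",
    "2/ Postproxy webhooks fire on publish success, failure, and timeout.",
]
```

Run it on every generated thread and re-prompt when the list is non-empty.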

Prompting for LinkedIn

LinkedIn is professional but not stiff. The best-performing LinkedIn posts tend to be structured, use line breaks generously, and open with a statement that makes someone stop scrolling. Lists and lessons resonate. So does vulnerability — admitting what went wrong before explaining what you learned.

You are writing a LinkedIn post. The audience is tech professionals and engineering leaders.
Rules:
- 800-1500 characters (LinkedIn truncates after about 210 characters with a "see more" link, so the first two lines must hook)
- Open with a strong first line that works as a standalone statement
- Use line breaks between thoughts — dense paragraphs get skipped
- Professional but human. Not formal. Not stiff.
- Structure with a clear beginning (hook), middle (substance), and end (takeaway or CTA)
- Hashtags at the end if relevant, 3-5 maximum
- Do not use emoji as bullet points
- Do not start with "I'm excited to announce" or "Big news"
Content brief:
{brief}
Write one LinkedIn post.

What good output looks like:

Polling for publish status is a solved problem that nobody should still be solving.
Every team that builds social media publishing eventually writes the same loop: publish, wait, check, wait, check again. It works. It is also wasteful, fragile, and unnecessary.
We just shipped webhooks for Postproxy.
When a post publishes, fails, or times out — on any platform — your endpoint gets a payload with the full status breakdown. X succeeded, Instagram is processing, LinkedIn failed with a token error. All in one request, not five polling loops.
Configure once per workspace. Point it at your URL. Delete the polling code.
Documentation: postproxy.dev/docs/webhooks
#devtools #socialmedia #apis #automation

The first line works as a hook even before the “see more” fold. The structure is scannable. The tone is confident without being promotional.

Prompting for Instagram

Instagram captions serve a different purpose than posts on other platforms. The image or video does the heavy lifting. The caption provides context, tells a story, or invites engagement. Hashtags are genuinely useful for discovery on Instagram in a way they are not on other platforms.

You are writing an Instagram caption. The audience is a mix of tech-savvy professionals and creators.
Rules:
- 400-1000 characters for the caption
- The first line is critical — it appears before the "more" truncation
- Tell a micro-story or share an insight, not just announce a feature
- Conversational and slightly informal
- End with a question or call to action to encourage engagement
- Add 15-25 relevant hashtags in a separate block after the caption (separated by two line breaks)
- Use a mix of broad hashtags (#developer, #tech) and niche ones (#publishingapi, #socialmediaautomation)
- No links in the caption (Instagram does not make them clickable)
- If there is a link, reference "link in bio" naturally
Content brief:
{brief}
Write one Instagram caption with hashtags.

What good output looks like:

That moment when you realize you've been polling an API every 5 seconds for something that could just... tell you when it's done.
We built webhooks into Postproxy so your system gets notified the instant a post publishes, fails, or times out. Per platform. No more status-checking loops.
One less thing running in the background. One less thing to debug at 2am.
Full docs in bio →
What's the worst polling loop you've ever written? 👇
#developer #devtools #api #webhooks #socialmedia #socialmediamarketing #automation #techstartup #buildinpublic #coding #programming #publishingapi #socialmediaautomation #contentcreator #digitalmarketing #saas #indiedev #devlife #techinnovation #apiintegration

The caption tells a tiny story. The hashtags are a separate block. The call to action invites engagement. There is no link in the body — just a natural reference to the bio.
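The separate-hashtag-block convention is also checkable in code. A hedged sketch — the helper assumes the hashtag block is the last blank-line-separated chunk, which is what the prompt above asks for:

```python
def check_hashtag_block(post, min_tags=15, max_tags=25):
    """Verify the post ends with a separate hashtag block (after a blank
    line) containing an Instagram-friendly number of hashtags."""
    parts = post.strip().rsplit("\n\n", 1)
    if len(parts) != 2:
        return False, "no separate hashtag block found"
    tags = [t for t in parts[1].split() if t.startswith("#")]
    if not min_tags <= len(tags) <= max_tags:
        return False, f"{len(tags)} hashtags, want {min_tags}-{max_tags}"
    return True, f"{len(tags)} hashtags"

caption = "That polling loop? Gone.\n\n" + " ".join(f"#tag{i}" for i in range(20))
```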

Prompting for Threads

Threads is conversational. It is where people write the way they talk. Short sentences. Incomplete thoughts. Reactions. Hot takes. The platform rewards authenticity over polish.

You are writing a Threads post. The audience is tech-adjacent and online-native.
Rules:
- Maximum 500 characters
- Casual, conversational tone — like texting a smart friend
- Short sentences. Fragments are fine.
- Can be a thought, observation, hot take, or reaction
- No hashtags (they exist on Threads but are not part of the culture)
- No "link in bio" language
- No corporate voice whatsoever
- It should feel like a person posted this, not a company
Content brief:
{brief}
Write one Threads post.

What good output looks like:

just replaced 47 lines of polling code with one webhook url. genuinely mad i didn't do this sooner.
postproxy now just tells you when your post published or failed. per platform. you don't have to ask.
the bar for "quality of life improvement" should not be this low and yet here we are.

Lowercase. Conversational. Self-deprecating. It reads like a person who just had a small win and wanted to share it. That is exactly what performs on Threads.

Prompting for Facebook

Facebook pages have a wide audience range and support longer text with link previews. The tone is less specialized than other platforms — it sits between LinkedIn’s professionalism and Threads’ casualness. Link posts generate preview cards automatically, which changes how you structure the text.

You are writing a Facebook page post. The audience is a broad professional audience.
Rules:
- 300-800 characters
- Clear and direct. Not overly casual, not overly formal.
- If including a link, write the post so it complements the link preview card (do not repeat the page title — Facebook will display it from the og:title)
- Front-load the value — what will the reader get from this?
- Can include a brief call to action
- 0-3 hashtags maximum (hashtags are low-value on Facebook but acceptable)
- Do not use emoji as structural elements
Content brief:
{brief}
Write one Facebook post.

Generating all platforms at once

In practice, you rarely want to generate for one platform at a time. You want all variations from a single prompt call.

You are generating social media posts for multiple platforms from a single content brief. Generate one post per platform, tailored to each platform's conventions.
Platforms and rules:
**X (single post):**
- Max 280 characters. Punchy, opinionated, conversational. No hashtags unless meaningful. Link at end if needed.
**X (thread):**
- 3-5 posts, each under 280 characters. First post hooks. Each stands alone. Numbered (1/, 2/, etc.). Link in last post.
**LinkedIn:**
- 800-1500 characters. Strong first line (appears before "see more"). Line breaks between thoughts. Professional but human. 3-5 hashtags at end.
**Instagram:**
- 400-1000 characters caption. Micro-story or insight. Question or CTA at end. 15-25 hashtags in separate block. No links in body. Reference "link in bio" if needed.
**Threads:**
- Max 500 characters. Casual, like texting a friend. Short sentences. No hashtags. Must feel like a real person wrote it.
**Facebook:**
- 300-800 characters. Direct, complements link preview. Not too casual, not too formal. 0-3 hashtags.
Content brief:
{brief}
Output format: Return each platform version under a clear heading. For X, include both a single post and a thread option.

This single prompt produces six variations. Each one is native to its platform. The brief stays the same — only the execution changes.
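If you keep this heading-based format for human review rather than asking for JSON, a small parser can still split the output into per-platform sections. This sketch assumes the model emits markdown-style `##` headings, which you would pin down explicitly in the prompt:

```python
import re

def split_by_headings(text):
    """Split LLM output into {heading: body} using markdown-style headings."""
    sections = {}
    current = None
    for line in text.splitlines():
        m = re.match(r"^#{1,3}\s+(.*)", line)
        if m:
            current = m.group(1).strip()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return {k: "\n".join(v).strip() for k, v in sections.items()}

sample = "## LinkedIn\nFirst line.\n\n## Threads\ncasual post"
```

Heading parsing is fragile compared to structured output, which is why the pipeline below asks for JSON instead.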

Plugging generation into a publishing pipeline

Generating platform-specific content is useful on its own. But the real value comes when you connect it to automated publishing. The LLM generates variations, and each variation goes to its target platform — without a person copying and pasting between tabs.

Here is a minimal pipeline in Python:

import anthropic
import requests
import json

client = anthropic.Anthropic()
POSTPROXY_API_KEY = "your-api-key"

def generate_platform_posts(brief):
    """Generate platform-specific posts from a content brief."""
    response = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": f"""Generate social media posts from this brief.
Return valid JSON with keys: x_single, x_thread (array of strings),
linkedin, instagram_caption, instagram_hashtags, threads, facebook.
Return only the JSON — no markdown fences, no commentary.
Content brief:
{brief}""",
        }],
    )
    return json.loads(response.content[0].text)
def publish_all(posts, media_urls=None):
    """Publish each platform variation to its target."""
    results = {}
    # Single posts for each platform
    platform_content = {
        "twitter": posts["x_single"],
        "linkedin": posts["linkedin"],
        "instagram": posts["instagram_caption"] + "\n\n" + posts["instagram_hashtags"],
        "threads": posts["threads"],
        "facebook": posts["facebook"],
    }
    for platform, content in platform_content.items():
        response = requests.post(
            "https://api.postproxy.dev/api/posts",
            headers={
                "Authorization": f"Bearer {POSTPROXY_API_KEY}",
                "Content-Type": "application/json",
            },
            json={
                "post": {"body": content},
                "profiles": [platform],
                "media": media_urls or [],
            },
        )
        results[platform] = response.json()
    return results
# Usage
brief = """
Topic: We just launched webhooks for real-time post status
Key message: Stop polling — get notified when posts publish or fail
Tone: Confident, developer-focused
Details:
- Fires on success, failure, and timeout
- Per-platform status in payload
- Configure per workspace
- Docs at postproxy.dev/docs/webhooks
"""
posts = generate_platform_posts(brief)
results = publish_all(posts, media_urls=["https://your-cdn.com/webhook-announcement.png"])

Each platform gets its own content. The X post is 280 characters and punchy. The LinkedIn post is structured and professional. The Instagram caption has hashtags. The Threads post reads like a person. One brief, six native posts, zero manual formatting.

Making the output consistent with few-shot examples

LLMs produce better platform-specific content when you show them examples of what good looks like. Instead of relying entirely on rules, include two or three examples of real posts that performed well on each platform.

You are writing a post for X. Here are examples of posts that performed well for this brand:
Example 1: "We accidentally published the same post 47 times to LinkedIn. That's how we learned about idempotency keys. Here's what they are and why your publishing system needs them."
Example 2: "Hot take: if your social media publishing requires a dashboard, it's not automated. It's manual with extra steps."
Example 3: "Shipped: per-platform status on every post. You can now see exactly what happened on X vs Instagram vs LinkedIn after publishing. No more guessing."
Now write a new post in this style.
Content brief:
{brief}

Few-shot examples anchor the voice, length, and structure more reliably than rules alone. The LLM picks up on patterns — sentence length, level of technical detail, use of humor — that are hard to describe in instructions but easy to demonstrate.

If you have a backlog of posts, pick three to five that represent the voice you want and include them in every prompt for that platform. The output quality improves immediately.
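Assembling that prompt is mechanical enough to automate. A minimal sketch, assuming you keep the curated examples in a list per platform (`build_few_shot_prompt` is a hypothetical helper, not a library function):

```python
def build_few_shot_prompt(platform, examples, brief, rules=""):
    """Assemble a few-shot prompt from curated past posts for one platform."""
    example_block = "\n".join(
        f'Example {i}: "{post}"' for i, post in enumerate(examples, start=1)
    )
    return (
        f"You are writing a post for {platform}. "
        "Here are examples of posts that performed well for this brand:\n"
        f"{example_block}\n"
        f"{rules}\n"
        "Now write a new post in this style.\n"
        f"Content brief:\n{brief}"
    )
```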

Handling the edge cases

A few things that come up in practice.

Character limits. LLMs are not great at counting characters. They often produce X posts that are 290 or 310 characters. Always validate the output programmatically before publishing. If a post exceeds the limit, either ask the LLM to shorten it or truncate with a simple rule (cut to the last complete sentence under the limit).

def enforce_limit(text, max_chars):
    """Trim to the last complete sentence that fits the limit."""
    if len(text) <= max_chars:
        return text
    result = ""
    for sentence in text.split(". "):
        # Re-attach the period that split() stripped, without doubling it
        sentence = sentence if sentence.endswith(".") else sentence + "."
        candidate = (result + " " + sentence).strip()
        if len(candidate) <= max_chars:
            result = candidate
        else:
            break
    # If even the first sentence is too long, fall back to a hard cut
    return result if result else text[:max_chars].rstrip()

Link handling. Instagram does not support clickable links in captions. X counts URLs as 23 characters regardless of actual length. LinkedIn renders link previews. Your prompt should specify link behavior per platform, but also validate that the output follows the rules.
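The X rule is worth encoding, because a naive `len()` check will reject posts that X would actually accept. A simplified sketch (the real counting algorithm also weights some Unicode ranges differently, and this regex only catches URLs with an explicit scheme):

```python
import re

URL_RE = re.compile(r"https?://\S+")
TCO_LENGTH = 23  # X wraps every URL to a fixed-length t.co link

def x_effective_length(text):
    """Length as X counts it: every URL costs TCO_LENGTH characters,
    regardless of its real length."""
    stripped = URL_RE.sub("", text)
    urls = URL_RE.findall(text)
    return len(stripped) + TCO_LENGTH * len(urls)
```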

Hashtag quality. LLMs tend to over-generate hashtags and default to generic ones. For Instagram, specify a mix of broad and niche hashtags, and consider maintaining a curated hashtag list per topic that you inject into the prompt rather than letting the LLM invent them.
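Injecting curated hashtags can be as simple as sampling from hand-maintained pools. The topic keys and tag lists below are hypothetical placeholders — the point is the broad/niche mix, not these specific tags:

```python
import random

# Hypothetical curated lists, keyed by topic — maintained by hand.
HASHTAGS = {
    "webhooks": {
        "broad": ["#developer", "#tech", "#api", "#coding", "#saas"],
        "niche": ["#webhooks", "#publishingapi", "#socialmediaautomation",
                  "#apiintegration", "#devtools"],
    },
}

def pick_hashtags(topic, n_broad=3, n_niche=3):
    """Sample a mix of broad and niche hashtags to inject into the prompt,
    instead of letting the LLM invent its own."""
    pools = HASHTAGS[topic]
    tags = random.sample(pools["broad"], n_broad) + random.sample(pools["niche"], n_niche)
    return " ".join(tags)
```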

Brand voice drift. Over many generations, the LLM’s output can drift from your brand voice. Few-shot examples help, but periodic human review of generated content keeps the voice anchored. Even in fully automated pipelines, spot-checking a sample of posts weekly catches drift before it becomes a pattern.

The generation-publishing contract

The pattern that emerges is clean. Generation and publishing are separate concerns with a clear interface between them.

The LLM handles: what to say, how to say it per platform, and what format to use.

The publishing API handles: getting the content to each platform, managing uploads, handling failures, and reporting outcomes.

Your system handles: the brief (what the content should be about), the trigger (when to generate and publish), and the review step (whether a human should approve before it goes live).

This separation means you can improve each part independently. Better prompts improve content quality without touching the publishing pipeline. Better publishing infrastructure improves reliability without touching the prompts. And the brief — the thing that captures your intent — stays the same regardless of which LLM you use or how many platforms you publish to.

One brief. Platform-native content everywhere. That is the workflow that LLMs make possible and that API-first publishing makes practical.

Ready to get started?

Start with our free plan and scale as your needs grow. No credit card required.