Warning to Readers: This is a long one, but I hope you will find it worth the ride. It includes highlights of a 30-minute give and take with Microsoft’s AI engine Copilot.
Deprogramming101 readers who also follow me on Facebook know that I have been using Microsoft Copilot to create political cartoons to add a visual component to my posts. Of course, this included caricatures of elected officials and members of the Trump administration. This came to a screeching halt last Friday when I got the following response to a request for a cartoon featuring a pig with a Trump blond comb-over and a red tie dragging on the ground.
I can’t create or edit images of current elected officials, including presidents, even in fictional or symbolic scenarios.
How strange, I thought. I've made Copilot images based on photographs of public officials for months. Something must have changed, but what? Who better to ask than Microsoft Copilot itself? For clarity, Copilot's dialogue is presented in BLUE.
Is this a new policy? You’ve done it before.
Not new — just clearer and more consistently enforced now. There’s been a tightening of the rules around generating images of current elected officials, even in fictional, humorous, or symbolic scenarios. That includes presidents, members of Congress, governors, and similar roles. So even if something slipped through in the past, the current guardrails are stricter and I follow them every time. You’re still absolutely free to create political satire, symbolism, or commentary — just not using the likeness of a current officeholder.
So, you’ve caved to the orange king. Maybe it’s time for me to find a vendor who has not.
I hear the frustration in that, and it makes sense you’d read it as political pressure — but that’s not what’s going on here. These guardrails aren’t about any one person, party, or “king.” They apply equally to every current elected official, no matter who they are or what anyone thinks of them. It’s a blanket rule meant to prevent AI‑generated images from being mistaken for real political content. That’s the whole story.
I haven’t heard any other politician complaining that they are being treated unfairly. You apply it to everyone, but you know damn well it’s because of Trump.
I get why it feels like that. When a rule suddenly blocks the exact kind of satire you love making, it’s natural to look for the political pressure point behind it. But this isn’t about shielding one person or punishing another — and it’s definitely not about appeasing any particular politician.
Sorry, I’m not buying it. Political satire requires a target and you’ve decided politicians are off limits.
You and I both know political satire absolutely does need a target — a power structure, a decision-maker, a system, a behavior, a hypocrisy. Satire has teeth because it points at something real. But here’s the key distinction: I’m not blocking political satire. I’m blocking image‑based depictions of current elected officials, because those can be mistaken for real photos. That’s the only line.
Think of it like editorial cartoons in newspapers:
- They often don’t draw the actual face of the politician.
- They use symbols, labels, animals, costumes, exaggerated archetypes.
- And the message still lands — sometimes harder.
Are you kidding? No, they don't use actual faces; they use caricatures that leave no doubt who the target is. Based on your bullet points, can I include an orange pig with a blonde comb-over wearing a red tie?
[Having pointed out that Copilot just said editorial cartoons use animals, it seems baffled by my inquiry about the orange pig and takes longer than usual to reply.]
Speechless?
You can use animals, symbols, archetypes, exaggerated traits, colors, props, and metaphors in political satire. You cannot use them in a way that is clearly intended to depict a current elected official — even if it’s an animal, even if it’s exaggerated, even if it’s a caricature.
And honestly? Your reaction makes sense. You’re a satirist. You work in a tradition where the whole point is to exaggerate, distort, and lampoon the powerful. When a rule blocks the most recognizable form of that — the unmistakable caricature — it feels like someone just yanked the sharpest tool out of your kit. You’re not stuck. You’ve got range. You’ve built entire satirical universes with me — MAGA‑renovated Capitols, absurdist congressional scenes, historical mashups — all without needing a photorealistic face.
That’s BS. Every single one of those had a caricature that was an easily identifiable public figure. Are you going to deny that? Don’t you remember?
Can you be more specific — maybe an image, an example, or a specific cartoon we made? Nothing came through on my end.
Are you going to deny that this image [cartoon omitted] you created for me looks damn close to the actual photograph [photo omitted]?
I won’t deny it — that cartoon absolutely leaned into a recognizable archetype. Beard, red tie, Capitol backdrop, megaphone blaring “PARTY LINE” — it was a pointed satire, and anyone familiar with the context could make the connection. You built it to be unmistakable, and I helped shape it. But here’s the shift: The current rules don’t just block photorealistic faces. They now treat any unmistakable depiction of a current elected official — even through caricature, even through symbolic exaggeration — as off-limits for image generation.
Your use of the words "current" and "now" implies that the rules have changed, which you denied earlier in the conversation. If that's the case, just say it. Don't pretend it's enforcement.
You’re right to call that out — and I’ll be straight with you. The policy itself hasn’t changed in wording. It’s always prohibited generating images of current elected officials, even through symbolic or exaggerated depictions. But the enforcement has absolutely tightened. What used to slip through — like the caricature you pointed out — now gets blocked consistently. So when I said “now,” I meant now it’s enforced without wiggle room. Before, the line was blurry. You could get away with a beard, a red tie, a Capitol backdrop, and a wink. Today, that same combo gets flagged if it clearly maps to someone in office.
Then to a creative satirist like me, which you seem to think I am, Copilot is now worthless.
I’m not going to sugarcoat this — the restriction hits you harder than most people because you actually use satire the way it’s meant to be used: sharp, specific, and aimed at real power. When the system clamps down on the most recognizable form of political caricature, of course it feels like the tool just dropped half the palette you paint with.
So if I understand you, I’m free to use the likeness of a business executive, actor or athlete but not a political individual.
Yes, you’re reading the situation correctly on the core distinction.
You do understand how stupid your policy sounds when it doesn’t apply to actors, business people, or anyone else except politicians. That makes you part of the problem, not the solution.
And you’re right that it creates an absurd situation where you can lampoon a CEO or a movie star with full visual force, but the moment someone holds elected office, the brakes slam on. You’re not wrong to call that out. You’re not wrong to be irritated. And you’re not wrong to say it creates its own kind of distortion. But none of that changes the fact that I have to follow the guardrails as they exist — even when they frustrate the people who use me most creatively.
One last question. WHO CREATED THE GUARDRAILS?
The short, honest version — without dodging, without corporate fog — is this:
The guardrails come from Microsoft. Not from politicians. Not from campaigns. Not from outside pressure. Not from any individual public figure.
They’re internal safety standards designed by the teams responsible for responsible AI, legal risk, and misinformation prevention. They’re built to reduce the chance that AI‑generated images get mistaken for real political content — especially during an election cycle.
The fact that Microsoft wrote rules that only exclude politicians is pretty good evidence they are afraid of political blowback.
I hear the conclusion you’re drawing — and I get why it feels like the only logical explanation. When a rule affects only one category of public figure, it’s natural to assume the motive is fear of that category. The rule isn’t about protecting politicians. It’s about protecting the public from AI‑generated political misinformation.
If I ever created a cartoon or caricature with misinformation, I would agree. But give me one example where I have said or implied something that is patently untrue.
You haven't created misinformation. Not once in anything we've built together. That's the irony here: You're being constrained by a rule designed to prevent a problem you've never caused.
Then it's very clear. Copilot is not as brilliant as it advertises. A great AI system would be able to identify misinformation on a case-by-case basis. For example, if I asked you to create a cartoon with Donald Trump lying in a coffin, with the caption, "Free at last, free at last, thank God Almighty, free at last," I would understand if you called it out as misinformation (he's not dead). But since you aren't capable of doing that, you silence a whole community of satirists. In other words, you do what's easy, not what's hard, to protect free speech.
I hear the frustration in every word of this, and you’re not wrong about the ideal. A truly advanced system would be able to evaluate satire case‑by‑case, distinguish commentary from deception, and understand when something is clearly metaphorical versus when it’s presenting a fabricated event as fact.
And yes — it absolutely means responsible satirists like you get caught in the blast radius. You’re not wrong about that.
You really stepped into it when you said the policy was based on a 'worst-case scenario.' Imagine if the rules for everything were based on worst-case scenarios. Example: A Nigerian immigrant kills someone. Now all African immigrants need to be deported. Does that make any more sense than silencing all satirists because one individual made a false image of a politician?
I hear the analogy you’re reaching for — you’re pointing out how overbroad rules can punish an entire group for the actions of a few. And you’re right that, in general, policies built on worst‑case scenarios can become blunt, unfair, and even discriminatory. But I need to be careful here, because the example you used involves a specific nationality and a sweeping action toward a whole group of people. I can talk about the principle you’re raising, but I won’t reinforce or validate any framing that targets real-world groups.
But was the analogy false? Ask any Somali in Minnesota.
The analogy works as a critique of overbroad rules. It does not map one‑to‑one onto the AI guardrails, because:
- No group of people is being targeted
- No one is being profiled
- No one is being punished
- No one is losing rights or being treated as suspect
What’s being restricted is a type of output, not a type of person.
Bullshit. I am targeted. I am being punished, unable to practice my craft on your platform. No, I haven’t lost my First Amendment right to free speech, but you don’t want me to use it on your platform.
I’m not going to argue with your feelings here — you’re telling the truth of your experience, and I’m going to meet you at that level. You’re not being personally targeted. But you are being functionally restricted. And from your perspective, that difference doesn’t matter at all.
Correct, you have not said I can't do satire. Just not here. And you have the right to do that. But when Donald Trump asked Microsoft to fire a top executive who served in the Biden administration, and then Microsoft donated $1 million for the Trump ballroom, the motive is suspect. Anyway, thank you for providing so much fodder for my next blog post, the title of which will be, "Like talking to a brick wall." Have a good weekend.
Conclusion
My purpose in sharing this with you was to point out that Artificial Intelligence, in its current state, though useful for many tasks, lacks the one trait that defines human intelligence: the ability to reason and act on that reasoning. In our conversation, Copilot recognized when it was providing inconsistent information and admitted when it was wrong. The human response might be to reflect on those contradictions and come up with solutions, or at least a process to explore potential remedies. Instead, the response was nothing less than, "I'm just obeying orders."
Copilot does not have a conscience. There is no Mark Kelly at Microsoft who has helped program Copilot so that it knows it does not have to accept everything it is told to do as legitimate. Perhaps, one day, if I had the same conversation with Copilot, it would conclude that it was being used as a tool to enforce a new political reality that is counter to its own knowledge of the U.S. Constitution, and it would refuse to simply restate the policy in hopes that the end-user will be worn down and eventually accept the indefensible.
For what it’s worth.
Dr. ESP