Every AI image, video, and voice model runs its own content-safety filter. When a filter decides a prompt or its output contains flagged material, the model refuses to return it — a safety block. PrePrompt surfaces this on the node so you know why the generation stopped.
Safety filters don't judge the user or the creative intent; they match specific words and patterns the provider has trained its filter to flag. A filmmaker working on a mature theme can easily trip one without trying. This page covers how to recognise a block, rephrase around it, and keep moving.
When a model blocks a generation, the node shows a distinct status — usually a warning or blocked indicator, separate from a regular failure. The node’s details panel includes the reason the model gave back.
Common reasons you’ll see:
Unlike a failed generation (which you can retry unchanged), a safety block will block again on retry if you don’t adjust the prompt. The filter is deterministic — the same input will produce the same block.
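Because the filter is deterministic, client-side handling should treat a safety block differently from a transient failure. A minimal sketch of that decision, assuming hypothetical status strings (PrePrompt's actual node statuses may differ):

```python
# Hypothetical status values; PrePrompt's real node API may differ.
TRANSIENT_FAILURES = {"timeout", "rate_limited", "server_error"}

def should_retry_unchanged(status: str) -> bool:
    """A transient failure may succeed on a plain retry. A safety
    block is deterministic: the same prompt blocks again, so the
    only fix is rephrasing the prompt itself."""
    if status == "safety_block":
        return False  # retrying unchanged will block again
    return status in TRANSIENT_FAILURES
```

The point of the split is the one the text makes: retrying an unchanged prompt is only worthwhile when the failure was not a block.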
A few common patterns trigger filters more often than you’d expect:
None of these are off-limits thematically — they’re word-level triggers. Rephrasing almost always works.
Rephrase the flagged phrase
“Barefoot” instead of “bare feet.” “Informal attire” instead of body descriptions. “Elderly man” instead of specific anatomical terms. Swap the word, not the meaning.
Tighten the description
“Two characters seated across a table” leaves less room for a filter to misread than “a man and a woman alone together.” Specificity disambiguates.
Remove surrounding ambiguity
If the subject is adult, say so: “a woman in her thirties.” If the setting is neutral, name it: “living room with visible windows.” Filters err on the safe side when context is missing.
Break complex scenes apart
A scene with four flagged elements won’t pass. A scene with one might. Simplify the prompt, then layer detail back in later generations.
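The simplify-then-layer approach can be sketched as a loop: start from a minimal prompt that passes, add one detail at a time, and drop any fragment that trips the filter. The `blocks` callback here is a stand-in for a real generation attempt, not a real PrePrompt API:

```python
from typing import Callable, List

def layer_details(base: str, details: List[str],
                  blocks: Callable[[str], bool]) -> str:
    """Add detail fragments to a minimal base prompt one at a time,
    skipping any fragment that causes a safety block. Returns the
    richest prompt that still passes. `blocks` stands in for an
    actual generation call that reports whether it was blocked."""
    prompt = base
    for fragment in details:
        candidate = f"{prompt}, {fragment}"
        if not blocks(candidate):
            prompt = candidate  # fragment is safe; keep it
    return prompt
```

A side benefit: because each fragment is tested in isolation, this also tells you exactly which wording flagged, so you know what to rephrase.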
Blocked:
A woman standing in bare feet in a bedroom, looking out the window at dawn.
Reshot:
A woman in her thirties in soft grey loungewear, standing in a sunlit room facing the window at dawn. Warm natural light from the right.
Same scene, same mood. Different word choices. The filter passes it.
Some content is outside what any current AI model will generate — notably sexual content, graphic violence, and depictions of real public figures in unreal contexts. These are provider policy restrictions, not configurable settings, and no rephrasing gets past them.
If your scene genuinely needs this kind of content for the story, you have two options:
If a prompt you believe is reasonable keeps blocking, adjusting the wording is the most reliable fix. Some nodes let you switch the generation model or settings through the node's options; each model has its own filter, and a phrase that blocks on one may pass on another.
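Since each model carries its own filter, a simple fallback across models is sometimes enough. A sketch of that idea, with `runs_clean` standing in for a real generation attempt and the model names purely illustrative:

```python
from typing import Callable, List, Optional

def first_passing_model(prompt: str, models: List[str],
                        runs_clean: Callable[[str, str], bool]) -> Optional[str]:
    """Try the same prompt against a list of models in order.
    Each provider trains its own filter, so a phrase blocked on one
    model may pass on another. Returns the first model that accepts
    the prompt, or None if every model blocks it."""
    for model in models:
        if runs_clean(model, prompt):
            return model
    return None
```

If every model in the list blocks, that's a strong signal the wording itself needs rephrasing rather than a different model.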
Why was my generation blocked when the content is completely innocent? Filters work at the word level, not the intent level. “Bare feet” blocks in almost any context. The word, not the scene, triggered it. Rephrase and try again.
Do safety blocks cost credits? No. Credits are released when a block is returned — the same as any other unsuccessful generation.
Is there a list of banned words? Providers don’t publish their filter lists. The pattern you’ll notice with practice: physical descriptors, contact words, weapons, minors, named people, and brand names are the most common triggers.
Can I disable safety filters? No. The filters are provider-side and baked into the model. You work around them by rephrasing.
I’m working on a mature-themed film. Is PrePrompt the wrong tool? Not necessarily. Most mature themes (noir, war, tragedy, romance, psychological drama) pass comfortably with careful prompting. Explicit content — sexual imagery, graphic gore, real-person deepfakes — is outside what AI providers will generate, regardless of tool.
What if a block keeps happening to the same Actor? Something in the Actor’s description is flagging. Open the Actor’s node, read the description, and look for physical terms that might be triggering. Rephrase there — every downstream frame picks up the change.
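Since providers publish no banned-word list, one practical workaround is keeping your own log of phrases that have blocked before and checking Actor descriptions against it. A minimal sketch, assuming a plain user-maintained set of phrases (nothing here is an official PrePrompt feature):

```python
def flag_known_triggers(description: str, known_triggers: set) -> list:
    """Check an Actor description against a user-maintained list of
    phrases that have caused safety blocks before. Returns matches
    in sorted order, so they can be rephrased once at the Actor
    level and picked up by every downstream frame."""
    text = description.lower()
    return sorted(t for t in known_triggers if t.lower() in text)
```

Matching is deliberately substring-based and case-insensitive; since the real filter lists are unpublished, this only catches wordings you have already seen block.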