

IAMCCS Dataset Creator Workflow + Flux 2 Klein 9B + QE Prompt Enhancer (updated)


  • February 25, 2026

Hi folks, this is CCS.
Today I’m sharing something I’ve been using directly in my own production pipeline: a Dataset Creator workflow (version 1) built on Flux 2 Klein 9B, integrated with the IAMCCS QE Prompt Enhancer and a dedicated new preset designed exactly for this task.

And now, please welcome our charming testimonial, who will be presenting the post!

WHY A DATASET MATTERS FOR VIDEO GENERATION

If you work with video generation models (WAN, LTX-2, SVI Pro, Hunyuan, it doesn't matter), you know that feeding just one or two reference images of a character or subject almost never gives you the consistency and motion coherence you need for real cinematic work.

What models need is variety across a consistent identity.

Multiple angles. Multiple expressions. Multiple distance shots. The same subject, described differently, rendered with controlled variation, so that when you feed this material into an I2V pipeline, the model has a richer visual vocabulary to pull from.

That’s exactly what this workflow produces, automatically.

Grab the nodes to run the workflow:

https://github.com/IAMCCS/IAMCCS-nodes

https://github.com/IAMCCS/IAMCCS_annotate

https://github.com/IAMCCS/IAMCCS_QE_prompt_enhancer

THE BACKBONE: FLUX 2 KLEIN 9B

Flux 2 Klein 9B is a smaller, focused variant of the Flux 2 family, and in practice, for dataset generation purposes, it behaves almost like a LoRA-influenced model out of the box.

What I mean is: it doesn’t need heavy conditioning to stay consistent. It’s fast, it produces structured outputs, and it handles prompt variations gracefully without drifting into completely different aesthetics between iterations.

For us filmmakers, this is the key property. When you’re building a dataset, you don’t want 30 images that look like they came from 30 different directors. You want 30 images that look like they came from the same production.

Flux 2 Klein 9B delivers that.

The workflow supports both paths:

– Safetensors (flux-2-klein-9b.safetensors)

– GGUF (flux-2-klein-9b-Q8_0.gguf), for VRAM-constrained machines.

Use the GGUF path if you're on 12-16 GB VRAM. Use the safetensors path if you have headroom and want full precision.
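The choice can be sketched as a simple rule (the threshold is just the rule of thumb above; `pick_checkpoint` is an illustrative helper, not part of the nodes):

```python
def pick_checkpoint(vram_gb: float) -> str:
    """Choose the Flux 2 Klein 9B checkpoint for the loader node.

    Rule of thumb from above: quantized GGUF on 12-16 GB cards,
    full-precision safetensors when there is VRAM headroom.
    """
    if vram_gb <= 16:
        return "flux-2-klein-9b-Q8_0.gguf"   # load via ComfyUI-GGUF
    return "flux-2-klein-9b.safetensors"     # standard safetensors loader

print(pick_checkpoint(12))   # flux-2-klein-9b-Q8_0.gguf
print(pick_checkpoint(24))   # flux-2-klein-9b.safetensors
```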

WHAT YOU NEED INSTALLED

– IAMCCS_nodes (updated): for IAMCCS_MultiSwitch and the workflow utilities

– IAMCCS_QE_prompt_enhancer (updated): the prompt enhancer node and the new Dataset preset

– comfyui-easy-use: for the easy promptLine node that drives multi-variation generation

– ComfyUI-GGUF: if you want to use the GGUF path for Flux 2 Klein

Models you’ll need:

– flux-2-klein-9b.safetensors (or flux-2-klein-9b-Q8_0.gguf for GGUF users)

– qwen_3_8b_fp8mixed.safetensors: CLIP encoder, type flux2

– flux2-vae.safetensors

THE NEW QE PROMPT ENHANCER PRESET: Dataset generator_1

The IAMCCS QE Prompt Enhancer now includes a dedicated preset: "Dataset generator_1" 📊
This preset is specifically designed for dataset generation tasks. It structures prompt output for multi-angle, multi-variation rendering: it formats descriptions in a way that guides the model to produce consistent subject representation across different compositional setups, from wide shots and medium shots to close-ups, three-quarter angles, and profile views.
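As a rough mental model (illustrative Python, not the preset's actual internals), you can think of it as expanding one subject description into a line per compositional setup:

```python
# The five setups named above; the function name is hypothetical.
SETUPS = ["wide shot", "medium shot", "close-up",
          "three-quarter angle", "profile view"]

def dataset_prompts(subject: str, style: str) -> list[str]:
    """One prompt line per setup: same subject and style descriptors,
    so only the framing varies and the identity stays consistent."""
    return [f"{subject}, {setup}, {style}" for setup in SETUPS]

for line in dataset_prompts("woman in a red coat", "soft studio light"):
    print(line)
```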

You select the preset inside the IAMCCS_QE_PromptEnhancer node, and it becomes the primary prompt source for the generation chain.

The workflow has a CUSTOM mode too: if you want to write your own multi-line prompt manually without going through the enhancer, the IAMCCS_MultiSwitch in the workflow lets you switch seamlessly between the QE output and your own Text Multiline node.

One toggle.

No rewiring.

HOW THE WORKFLOW RUNS: STEP BY STEP

Step 1: Load your models

Load Flux 2 Klein 9B, the Qwen CLIP, and the Flux 2 VAE. These never change between runs; set them once.

Step 2: Load your reference image

Feed a LoadImage node with your subject image.

The workflow pipes it through ImageScaleToTotalPixels for normalization, then GetImageSize captures the exact dimensions, which are passed directly to EmptyFlux2LatentImage so the latent space always matches your reference proportions.

No manual W/H calculation. No aspect ratio guessing.
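A sketch of what that normalization computes (an approximation of ImageScaleToTotalPixels behavior, assuming a ~1 MP budget and snapping to multiples of 8; the real node may round differently):

```python
import math

def scale_to_total_pixels(w: int, h: int, megapixels: float = 1.0) -> tuple[int, int]:
    """Rescale so width * height is close to the pixel budget while
    keeping the reference aspect ratio. Snapping to multiples of 8
    keeps the dimensions compatible with the VAE's downsampling."""
    scale = math.sqrt(megapixels * 1024 * 1024 / (w * h))
    new_w = max(8, round(w * scale / 8) * 8)
    new_h = max(8, round(h * scale / 8) * 8)
    return new_w, new_h

print(scale_to_total_pixels(3000, 2000))   # a 3:2 reference stays ~3:2
```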

Step 3: Write your prompts with the PromptLine node

This is the heart of the batch generation logic.

The easy promptLine node, titled "PromptLine – Empty Line will generate same image", works like this:

> Each non-empty line = one prompt variation = one generated image.

> An empty line = regenerate with the same prompt (useful for seeding variety).

Write one prompt per line:

subject, front view, neutral expression, soft studio light

subject, side profile, slightly turned left, natural daylight

subject, three-quarter view, low angle, cinematic lighting

subject, medium shot, dynamic pose, golden hour

Four lines → four images → four dataset entries, all generated in a single queue run.
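The line semantics can be sketched like this (an approximation of the easy promptLine behavior, not the node's actual code):

```python
def expand_prompt_lines(text: str) -> list[str]:
    """Each non-empty line is one prompt/image; an empty line repeats
    the previous prompt so the same shot gets another seed."""
    prompts: list[str] = []
    last = ""
    for raw in text.split("\n"):
        line = raw.strip()
        if line:
            last = line
            prompts.append(line)
        elif last:                          # empty line: rerun last prompt
            prompts.append(last)
    return prompts

script = "subject, front view\n\nsubject, side profile"
print(expand_prompt_lines(script))
# ['subject, front view', 'subject, front view', 'subject, side profile']
```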

This is how you build a dataset without running the workflow 30 times manually.

You load your prompts, queue once, and the pipeline processes every line sequentially.

Step 4: Switch between QE preset and custom prompts

The IAMCCS_MultiSwitch node sits between your prompt sources and the easy promptLine input.

– Input 1: output from IAMCCS_QE_PromptEnhancer (with the `Dataset generator_1 📊` preset active)

– Input 2 (CUSTOM): output from your own `Text Multiline` node

One widget toggle decides which source feeds the generation chain.

Use QE when you want structured, enhanced prompt variety. Use CUSTOM when you already know exactly what you want to write.

Step 5: Generate and save

CLIPTextEncode encodes the prompt → BasicGuider / CFGGuider chains through the standard Flux 2 sampler stack (RandomNoise, KSamplerSelect, Flux2Scheduler, SamplerCustomAdvanced) → VAEDecode → SaveImage.

Output goes to the IAMCCS/dataset folder automatically.

PreviewImage lets you monitor the generation as it runs.

THE MULTILINE PROMPT SYSTEM: HOW TO THINK ABOUT IT

Don’t think of the easy promptLine as a “batch” system in the traditional sense.

Think of it as a script for your subject.

You're writing a shot list, the same way you'd prep a character for a photoshoot or a storyboard for a scene. Each line is a "shot":

  • what angle

  • what distance

  • what lighting

  • what expression or pose

The workflow then executes the shot list for you, frame by frame.

A solid dataset for a human character might be 12-20 lines. For a prop or environment, 6-10 is usually enough. For a vehicle or object that needs to work well in I2V pipelines, think about the cardinal and diagonal directions, plus distance variations.

One practical rule I use: always include at least one front-neutral, one side, and one three-quarter. Those three angles alone cover 80% of what video models need to interpolate motion correctly from a reference set.
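One way to script that rule (a hypothetical helper that crosses the three must-have angles with distance variations):

```python
ANGLES = ["front view, neutral expression", "side profile", "three-quarter view"]
DISTANCES = ["close-up", "medium shot", "wide shot"]

def minimal_shot_list(subject: str) -> list[str]:
    """Nine lines: the three core angles at three distances, ready to
    paste into the promptLine node as a starting shot list."""
    return [f"{subject}, {angle}, {distance}"
            for angle in ANGLES for distance in DISTANCES]

print("\n".join(minimal_shot_list("subject")))
```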

WHY THIS MATTERS FOR FILMMAKERS

Here’s the honest reason I built this workflow for myself.

When I work on a scene that involves a character or a specific visual subject (vehicle, location detail, costume) and I want to use AI video generation as part of the production, I need that subject to be learnable by the model I'm using.

The models that perform best on consistent characters aren’t magic. They perform well because they’re fed good data: multiple clear views, consistent descriptors, clean variation.

That's what a dataset is. And that's what this workflow generates directly from a single reference image and a prompt list, in one session, ready to be used downstream, whether you're working with SVI Pro, WAN 2.2, LTX-2, or anything that accepts image conditioning.

This is not speculative. I use this workflow in production. Flux 2 Klein 9B is fast enough that the dataset generation itself doesn’t become a bottleneck.

Thank you for following, experimenting, and supporting.

Every piece of this work comes from real production problems I hit as a filmmaker, and sharing it back to the open-source community is the point.

Grab the workflow, load your reference, write your shot list, and queue.

More soon.

– CCS
