Hi folks, this is CCS.
I’m a filmmaker first — not a technologist who discovered cinema later.
Alongside my film work (including my first feature film, Sometimes in the Dark; more info on my website), I’ve been conducting hands-on research in AI cinema and generative film language, working directly on models, workflows and optical behavior, as you may know from my blog and my Patreon.
Before theory, there was practice. Everything you’re about to read didn’t start in a paper.
It started on set, inside images that resisted clarity.
Sometimes in the Dark was my first direct confrontation with the limits of digital perfection.
Shot with ultra-defined sensors, yet deliberately destabilized through vintage and anamorphic optics, the film exposed a simple truth:
when the image becomes too clean, cinema starts to disappear.

Me with my RED camera during a shoot
The uneven sharpness, the imperfect depth, the refusal of total visibility were not stylistic gestures. They were defensive acts — ways to protect meaning from collapsing into information.
This is where the questions behind this research were born. Not in abstraction, but inside real images, real lenses, real compromises.
The paper you’ll find on Zenodo doesn’t theorize cinema from outside.
It emerges from inside the image, from the same struggle that shaped Sometimes in the Dark. You can find more information and stills from the film on my websites: www.carminecristalloscalzi.com
or
www.faidenblass.com

A screenshot from the film
This research is developed first-hand under IAMCCS Research, not as speculation but as practice: training LoRAs, building pipelines, testing what generative systems preserve — and what they erase — of cinematic language.
The text you’re reading comes from that process.
👉 The full theoretical paper
“When Cinema Thinks Through Form – Anamorphic, Optical Imperfection and Filmic Language in Generative Cinema”
is openly available on Zenodo (DOI):
https://doi.org/10.5281/zenodo.18069441
What follows here is a compressed manifesto — an extract — that sets the ground for the next post, where I’ll present anamorph1x, my anamorphic LoRA research, together with the related Z-Image / Flux workflows.
Intro to cinematic imperfection as language
Most generative systems today equate quality with visibility: sharpness, uniform detail, cleanliness. But cinema has never worked like that.
Cinema is not a competition of visibility.
It is a negotiation between what is shown and what is held back.
This is the central problem of both streaming culture and generative models:
the more correct the image becomes, the less cinematic it feels.
Anamorphic cinema as counter-model
Anamorphic cinema offers a different definition of quality.
Oval bokeh, edge distortion, uneven sharpness, spatial compression
are not defects to be fixed — they are optical language.
Anamorphic cinema encodes authorship in the geometry of the lens itself,
transforming imperfection into dramaturgy.
Anamorphic lenses do not just widen the frame.
They reorganize space, attention, distance, ambiguity. They allow cinema to think through form.
The generative paradox
Diffusion models are trained to normalize. They erase exactly those optical variations
that historically made cinema cinema. Even when you ask for “cinematic”,
the result is often cinema as decoration, not as structure.
When format and optics are treated as interchangeable parameters,
the image remains spectacular — but anonymous.
This is not a prompt issue.
It’s a training and pipeline issue.
Research context
This post introduces a broader research project:
“When Cinema Thinks Through Form – Anamorphic, Optical Imperfection and Filmic Language in Generative Cinema”
A theoretical paper developed alongside my anamorphic LoRA research,
focused on preserving optical behavior, not simulating looks.
The goal is simple:
Generative cinema should not be cleaner than cinema.
It should be intentional.
A second part of this article is available on my website: Cinema Before Theory.
It is presented separately, as it addresses purely filmic questions and deliberately excludes AI-related discourse.
What’s next
In the next Patreon post, I’ll present:
🎥 anamorph1x
An experimental (free) LoRA trained to preserve anamorphic optical grammar
inside Z-Image / Flux-based workflows.
Safetensors files hosted on Hugging Face.
Images and examples coming next.
—
CCS / IAMCCS