The Nespresso flagship on 5th Avenue in New York needed a video wall that could bridge the retail zone and the experience zone. The piece had to feel premium, continuous, and responsive to visitor presence without becoming a literal interface or repeating canned loops. This is the story of how the project found its content approach.
Starting with pre-rendered content
The initial exploration looked at what could be achieved with pre-rendered video. This is the default assumption in many retail media projects: commission a set of beautifully produced loops, load them onto a media player, and schedule playback. It is a known workflow, relatively simple to produce, and easy to hand off to an operations team.
But the limitations showed up quickly. A loop, however long, has a fixed duration. In a space where people linger, return, and spend extended time near the wall, the moment the content resets becomes visible. The piece stops feeling alive and starts feeling like a recording. The visual language loses its connection to the room.
There is also the question of responsiveness. Pre-rendered content cannot react to the space. It plays the same sequence whether the store is empty or crowded, whether someone is standing close to the wall or walking past at a distance. For a flagship that wanted the wall to feel connected to visitor presence, this was a fundamental mismatch.
Rather than arguing against pre-rendered content in the abstract, we let the client see these limitations through the exploration itself. The ceiling of the approach became clear through the process, not through a pitch.
The shift to GLSL and TouchDesigner
Once the limitations of pre-rendered content were established, the project moved into generative territory. The question became: how do you build a visual system that produces continuous, non-repeating content, responds to real-time inputs, and still stays within the authored visual language of the brand?
The answer was GLSL programming inside TouchDesigner. GLSL, the OpenGL Shading Language, runs directly on the GPU and let us build a fluid simulation in which coffee and cream behave as a slow visual language, continuously emerging and dissolving. The simulation runs in real time: the content is generated frame by frame rather than played back from a file, so no two moments are identical.
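The idea of frame-by-frame generation can be illustrated with a toy sketch. This is not the production shader and the palette values are invented for the example; it borrows a common shader trick (a sine-based pseudo-noise, evaluated per pixel per frame) and is written in Python rather than GLSL purely for readability:

```python
import math

# Hypothetical coffee/cream palette (RGB, 0..1) -- illustrative values,
# not the production colours.
COFFEE = (0.24, 0.15, 0.10)
CREAM = (0.94, 0.89, 0.80)

def pseudo_noise(x, y, t):
    """Cheap shader-style pseudo-noise: deterministic per (x, y, t)."""
    v = math.sin(x * 12.9898 + y * 78.233 + t * 0.37) * 43758.5453
    return v - math.floor(v)  # fract(): keep the fractional part, 0..1

def pixel_colour(x, y, t):
    """Blend coffee toward cream by a slowly evolving field. The rule is
    evaluated fresh for every pixel on every frame, so there is no file,
    no duration, and no loop point."""
    n = pseudo_noise(x * 0.01, y * 0.01, t)
    return tuple(c0 + (c1 - c0) * n for c0, c1 in zip(COFFEE, CREAM))

# One frame is just the rule applied over the pixel grid at the current time.
frame = [[pixel_colour(x, y, t=1.5) for x in range(4)] for y in range(4)]
```

The point of the sketch is the shape of the system, not the visuals: colour is a function of position and time, so "playback" is simply re-evaluating the function.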
TouchDesigner provided the framework around the shader: the sensing pipeline, the parameter control, the state management, and the integration with the hardware layer. It handled the connection between the LiDAR and computer vision sensing system and the visual output, translating visitor movement into forces within the fluid simulation.
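The translation from sensing to simulation can be sketched in the same spirit. The visitor tuple format, grid size, and constants below are assumptions for illustration; the real pipeline runs inside TouchDesigner against whatever the LiDAR and computer vision layer actually reports:

```python
import math

GRID_W, GRID_H = 32, 8  # coarse velocity grid covering the wall (illustrative size)

def inject_presence(velocity, visitors, strength=0.5, radius=3.0):
    """Add a force around each tracked visitor position.
    `visitors` is a list of (grid_x, grid_y, proximity) tuples -- a stand-in
    for the sensing system's output. Closer visitors (higher proximity,
    0..1) push harder on the fluid."""
    for gy in range(GRID_H):
        for gx in range(GRID_W):
            fx = fy = 0.0
            for vx, vy, prox in visitors:
                dx, dy = gx - vx, gy - vy
                d = math.hypot(dx, dy)
                if 0.0 < d < radius:
                    # Radial push away from the visitor, fading with distance.
                    push = strength * prox * (1.0 - d / radius) / d
                    fx += dx * push
                    fy += dy * push
            u, v = velocity[gy][gx]
            velocity[gy][gx] = (u + fx, v + fy)
    return velocity

velocity = [[(0.0, 0.0)] * GRID_W for _ in range(GRID_H)]
inject_presence(velocity, visitors=[(16, 4, 1.0)])  # one visitor, close to the wall
```

Movement becomes a force field, the force field perturbs the fluid, and the fluid carries the brand's visual language; the visitor never touches an interface.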
The visual language itself was tightly authored. The colour palette stayed within coffee and cream tones. The fluid dynamics were tuned to feel slow, premium, and organic. The interaction layer was subtle: nearby presence gains weight in the simulation, leaving ripples, trails, and wake patterns that linger briefly before merging back. The system does not announce its interactivity. It rewards attention without demanding it.
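The "linger briefly before merging back" behaviour is, at its core, a decay rule. A minimal sketch, assuming an exponential fade (the half-life value here is invented, not the tuned production constant):

```python
def decay_trails(influence, dt, half_life=2.0):
    """Fade visitor-left ripples back toward the resting state.
    Exponential decay: after `half_life` seconds, half the disturbance
    remains, so trails linger briefly and then merge back into the flow."""
    k = 0.5 ** (dt / half_life)
    return [[cell * k for cell in row] for row in influence]

field = [[1.0, 0.5], [0.0, 0.25]]
field = decay_trails(field, dt=2.0)  # one half-life later
# Disturbed cells have halved; undisturbed cells stay at rest.
```

Tuning this one constant is much of what makes the interaction feel subtle rather than demonstrative: too short and the wall seems inert, too long and it starts to read as an interface.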
What GLSL makes possible
Shader-based content has specific advantages for permanent installations. The GPU handles the heavy computation, which means high-resolution fluid simulation at stable frame rates. The visual output can be driven by multiple inputs simultaneously: time, noise fields, sensing data, scheduled parameters, and operator overrides. Because the content is defined as a set of rules rather than a timeline, there is no duration limit and no loop point.
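The "multiple inputs simultaneously" idea can be sketched as a parameter resolver. The layer names, precedence order, and weights below are assumptions for illustration, not the production logic:

```python
import math

def resolve_param(name, t, schedule, sensing, override=None):
    """Combine the input layers that drive one visual parameter.
    Illustrative layer order: scheduled baseline -> slow autonomous
    drift -> sensing modulation -> operator override."""
    value = schedule.get(name, 0.5)                        # scheduled baseline
    seed = sum(map(ord, name)) % 7                         # per-parameter phase
    value += 0.05 * math.sin(t * 0.13 + seed)              # slow noise-like drift
    value *= 1.0 + 0.3 * sensing.get("activity", 0.0)      # busier room, livelier fluid
    if override is not None:                               # operator override wins
        value = override
    return max(0.0, min(1.0, value))                       # clamp to authored range

speed = resolve_param("flow_speed", t=120.0,
                      schedule={"flow_speed": 0.4},
                      sensing={"activity": 0.6})
```

Because every parameter is resolved this way on every frame, there is nothing to "play": the system is always computing its current state from its current inputs.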
For the Nespresso wall, this meant the content could run from opening to closing, every day, without repetition. The visual character stays consistent because the rules are authored. The variation stays fresh because the simulation is continuous and responsive to real conditions in the space.
This is the core distinction between pre-rendered and generative: a video file is a fixed artefact. A generative system is a living set of authored behaviours. The authored part is what keeps it on-brand. The living part is what keeps it from going stale.
Operational implications
The shift from pre-rendered to generative also changed the operational model. There are no content files to manage, update, or schedule, and no loop durations to coordinate with store hours. The system starts, runs, and responds to the space. Maintenance is system-level: hardware health, sensor calibration, and software stability, not content rotation.
For a permanent installation in a flagship store, this simplicity matters. The fewer moving parts in the daily workflow, the more reliably the system performs over months and years.
Related services: real-time generative visuals, interactive video walls, and people tracking and computer vision.
Related project: Nespresso New York interactive video wall.