Alice Bloomfield remembers the moment with the kind of wry disbelief that has become common in an industry caught between labour‑intensive craft and the rapid promises of generative tools. After three months hand‑drawing a stop‑motion style music video for screening at Outernet London, Bloomfield told a Nicer Tuesdays audience that the venue requested the project be played back at twice the frame rate. “I’d just spent three months in my room, working, and now you want twice as many frames?” she said, describing the panic of being asked to effectively double her workload at the eleventh hour. The solution, she recalled, was to work with an AI specialist to interpolate a new frame between each existing frame and then do light manual clean‑up — a hybrid fix that preserved the hand‑made look while meeting the technical brief. (Sources informing this account emphasise that the interpolation step smoothed motion for the immersive display rather than replacing the original hand work.)
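
To make the mechanics of that fix concrete, the sketch below is purely illustrative and assumes nothing about the toolchain Bloomfield’s team actually used: it doubles a clip’s frame rate by writing one synthesised frame between every pair of originals, standing in for the AI interpolation step with a naive 50/50 blend that a real workflow would replace with a learned interpolator plus manual clean-up. The function name double_frame_rate and the use of OpenCV are assumptions made for the example.

```python
# Illustrative only: not Bloomfield's pipeline. Doubles the frame rate by
# inserting one synthesised frame between each pair of originals (12fps -> 24fps).
import cv2

def double_frame_rate(in_path: str, out_path: str) -> None:
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    ok, prev = cap.read()
    if not ok:
        raise ValueError(f"could not read frames from {in_path}")

    height, width = prev.shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps * 2, (width, height))
    writer.write(prev)

    while True:
        ok, nxt = cap.read()
        if not ok:
            break
        # Placeholder for the interpolation step: a 50/50 blend. A production
        # workflow would call an AI interpolator here, then hand-clean the output.
        mid = cv2.addWeighted(prev, 0.5, nxt, 0.5, 0)
        writer.write(mid)
        writer.write(nxt)
        prev = nxt

    cap.release()
    writer.release()
```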

Bloomfield’s experience is far from an isolated curiosity: generative AI has seeped into visual effects and post‑production pipelines across film and advertising. As VFX supervisor Jim Geduldick told The Guardian, “Everybody’s using it. Everybody’s playing with it.” Industry conversations now routinely move between excitement for new workflows and anxiety about what those workflows are displacing, from entry‑level jobs to long‑established crafts. According to commentators and practitioners, the prevailing pattern is not wholesale replacement so much as practical augmentation — teams using AI where it saves time, while reserving human judgement for nuance and creative intent.

Many practitioners frame the best outcomes as deliberately hybrid. Directors and studios cited in the original report say they build bespoke datasets from hand‑drawn or filmed assets so that models respond to a project’s particular aesthetic. Studios such as Unveil and animators like Jeremy Higgins have described workflows where human input — whether the initial drawings, the curation of training material, or final frame‑by‑frame clean‑up — is the thing that keeps work from feeling hollow. The argument is straightforward: the machine can generate motion or fill gaps, but the human touch supplies the “soul” that audiences read as authenticity.

Technically, the interpolation Bloomfield used belongs to a growing suite of AI-based frame interpolation tools that span older optical-flow techniques and newer learned motion prediction. Technical write-ups describe the same basic pipeline: analyse motion across the input frames, predict the intermediate frames, and render them in a way that handles occlusions and complex motion more robustly than many traditional algorithms. Popular models and toolchains referenced by engineers and technical blogs include RIFE and DAIN, with practical applications ranging from slow-motion effects and restoration to smoothing animation for high-frame-rate displays. Developers now talk about building these capabilities into streaming players and mobile editors so that smoother playback and accessibility features can run in real time.
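
That pipeline can be sketched with classical building blocks. The example below is a simplified stand-in rather than the RIFE or DAIN algorithm: it estimates dense optical flow between two frames, warps each frame halfway along the motion, and blends the results, whereas learned interpolators replace the hand-tuned flow and blending with trained networks and explicit occlusion handling. The function name midframe and the specific OpenCV parameters are assumptions for illustration.

```python
# Simplified midframe synthesis with classical optical flow: analyse motion,
# warp each input halfway towards the temporal midpoint, then blend. Learned
# interpolators such as RIFE or DAIN follow the same analyse/predict/render
# outline but handle occlusions and complex motion far more robustly.
import cv2
import numpy as np

def midframe(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # 1. Analyse motion: dense optical flow from frame_a to frame_b
    #    (args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags).
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))

    # 2. Predict the intermediate frame: backward-warp each input half a step
    #    along the estimated motion (assumes locally smooth flow).
    warp_a = cv2.remap(frame_a, grid_x - 0.5 * flow[..., 0],
                       grid_y - 0.5 * flow[..., 1], cv2.INTER_LINEAR)
    warp_b = cv2.remap(frame_b, grid_x + 0.5 * flow[..., 0],
                       grid_y + 0.5 * flow[..., 1], cv2.INTER_LINEAR)

    # 3. Render: a plain 50/50 blend stands in for occlusion-aware fusion.
    return cv2.addWeighted(warp_a, 0.5, warp_b, 0.5, 0)
```

In a doubling pass like the one sketched earlier, a function of this kind would replace the naive blend.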

The practical impetus for some of these fixes comes from exhibition hardware. Outernet London’s specifications underline why a 12→24fps change matters: its internal LED canvas is a monumental, multi-storey surface with extremely high pixel density, backed by controller infrastructure designed to render hundreds of millions of pixels in real time. On such a surface, judder and motion artefacts are amplified, so a higher apparent frame rate and interpolated motion can materially change how handcrafted work reads to audiences in an immersive setting.

But the same technical affordances that enable last-minute rescue work also intensify industry-wide concerns. Practitioners warn of two related risks: the normalisation of lower-quality, mass-produced content, often derided as “slop”, and the erosion of the work opportunities that sustain emerging artists and technicians. Those anxieties echoed loudly during the 2023 actors’ and writers’ industrial disputes, when unions sought protections against the unconsented replication of performers and the use of synthetic replacements for entry-level roles. Union negotiators, industry lawyers and many creatives framed the debate as one about consent, fair pay and the preservation of routes into the industry as automation accelerates.

Public reaction to AI‑led creative experiments has sometimes been immediate and caustic. A high‑profile example came when a major brand released an AI‑generated holiday advertisement and met widespread criticism for uncanny faces and an overall lack of warmth; journalists and critics framed the backlash as a symptom of rushed technical substitution for human craft. The brand defended the work as a collaboration between human storytellers and generative tools, but the episode underscored the reputational risk companies face when automated processes produce work that audiences perceive as inauthentic.

Against that polarised backdrop, many industry voices and trade reports advise a cautious, case‑by‑case adoption strategy. Studios are experimenting where the economics and ethics feel manageable, while unions and regulators press for contractual safeguards, transparency about AI use and limits on how training datasets are created. Practitioners who have embraced hybrid workflows tend to stress three practical rules: use AI to augment repetitive or technically prohibitive tasks, keep humans in the loop for creative decisions and disclosure, and curate training data so models reflect the project’s aesthetic rather than generic mass content.

Looking forward, technical and commercial trends point to broader adoption of real‑time interpolation and other frame‑level AI tools in distribution and playback — a development that could make hybrid workflows more scalable and less threatening if paired with clear labour protections. The industry’s immediate test will be whether those safeguards are written into contracts and studio practice, and whether practitioners can make “augmentation” mean genuine enhancement of craft, not its cheapening. As Bloomfield’s example shows, the most defensible position for creatives is to treat AI as a specialised tool that can amplify hand work — not as a substitute for the labour and judgment that give animation its human resonance.

Source: Noah Wire Services