Pause the video for a second and take a deep look into Tilly Norwood's big, dark green eyes. They might look human, but they don't speak to you.
Unpause the video and look at the gestures -- the unsubtle turns of the body, coded to predict and mimic one motion after another by scanning a database of human references. The way Tilly moves seems human enough, but only for a split second.
Decades into the evolution of computing and visual effects, the uncanny valley still exists, and Tilly -- the much-hyped new 'AI actress' that Hollywood agents are scrambling to 'represent', no different from an uninspired picture animated by computation -- is no exception. 'It' (you cannot, in my opinion, call a piece of code 'she' or 'her') is just the latest blip in a rapidly evolving fad; a new poster child for fakery.
In a way, Tilly's case is art -- or rather, programming -- imitating life. It embodies the inevitable trajectory of AI-generated social media influencers graduating from feeds to film.
So, what's stopping the other digitally created influencers on TikTok and Instagram from doing the same? After all, the technology needed to get them going is just a text prompt away.
The hype and hysteria around Tilly Norwood, the artificial intelligence-generated actress, often obscures a critical question: is the technology actually good enough to replace VFX artists, animators or actors just yet? And as Pakistan's visual industry scrambles to jump on the bandwagon, does any of it have a soul?
Unsurprisingly, therein lies another parallel: like the great bulk of real human 'influencers' who amass a following but show little talent when they eventually make it to the screen, Tilly's 'performance' -- stitched together from bits of hundreds of thousands of films, actors and people -- is manufactured to display conviction, but not deliver it... yet.
A bad actor is always a bad actor, digital or real. And yes, somehow 'yet' manages to creep into sentences when analysing or reporting on artificial intelligence. There is a good reason for that. Tilly's showreels invite questions larger than the fleeting hype they create, or the heebie-jeebies they seem to give real actors and their unions.
The 2023 SAG-AFTRA strike -- which ran for 118 days from July to November -- was Hollywood's first true showdown with AI, after studios had begun scanning actors and storing their likenesses, with plans to reanimate them indefinitely. With 160,000 members on strike, the union forced new lines into contracts: mandatory consent, compensation, and streaming bonuses.
Yet, beneath the clauses, one fact lingered: technology had quietly made one's identity licensable.
There is a thriving community of filmmakers who just want to create without restrictions, and who feel it is easier to create new actors than to tolerate the demands of real ones. After all, 'unlimited' subscriptions cost between $20 and $200 a month.
Let's be real though: there are obvious logistical weaknesses to the whole pro-AI discourse, and yet -- again, there is always a 'yet' -- some of it might prove useful for countries such as Pakistan, even if it still qualifies as 'AI slop'.
But let's ease into that with some details first.
Generative AI -- the umbrella term for systems that create images, video or text from prompts -- carries a heavy invisible cost. Producing 1,000 AI images burns roughly 2.9 kWh of electricity; spread over 24 hours, that works out to about 120 watts, the same energy a laptop uses running continuously for a full day. Scale that up to the millions of images generated daily and the environmental toll becomes staggering. Imagine vast data centres burning through electricity, and through water to cool their systems down, all to sustain what feels like effortless digital fun.
The ethereal, intangible output of a simple text prompt rests on an infrastructure that quietly drains real-world resources. It gives temporary highs to people who can't seem to look away from their smartphones or computers, because typing words gives them pretty pictures in return.
But since there are hundreds of millions craving that self-gratification, every major conglomerate has jumped into the rat-race.
Google (through its suite of 'creative' AI tools that include Imagen and Nano Banana), Meta (Facebook and WhatsApp, and their lacklustre Llama 4), OpenAI (Sora), Alibaba (Qwen, Wan), ByteDance (makers of TikTok, with Seedance and Seedream), Kuaishou (Kling) and Runway (Gen 4 and Aleph) push updates every few months, rolling out new features, offering free trials and refining interfaces that deliver results in a click or two.
It is now a contest about who can stay in users' minds -- and on their screens -- the longest.
In that ecosystem, creating 'Tilly' -- or something very much like it -- becomes trivial.
Then there are open-source frameworks such as Stable Diffusion -- made by Stability AI, the company whose board James Cameron has joined -- which users can download and run free of cost. Anyone with a modest computer configuration and a good graphics card can create a virtual person, or deep-fake someone they know, to do who knows what.
Once you know how to engineer the prompt ("prompt engineering" is a refined way of saying you can describe an image better than others), creating landmarks, fantasy landscapes, 3D characters or people is as simple as adding the words "in the style of" at the beginning.
To make Tilly, one need only write: portrait of a beautiful young petite woman, brown shoulder-length hair, light freckles, realistic dark green eyes, with soft cinematic lighting, 3/4 view, neutral expression, background blurred, filmic colour grade -- and you're halfway there.
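For the technically curious, that is quite literally all the 'programming' involved. Below is a minimal sketch in Python, using the open-source diffusers library to run a Stable Diffusion model on one's own machine; the model checkpoint, settings and file name are illustrative assumptions, not a recipe for Tilly:

    # A minimal sketch: text-to-image with an openly available Stable
    # Diffusion checkpoint via Hugging Face's diffusers library.
    # The model ID and settings below are illustrative, not prescriptive.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1",  # an openly available model
        torch_dtype=torch.float16,           # half precision for consumer GPUs
    ).to("cuda")                             # that 'good graphics card'

    prompt = ("portrait of a young woman, brown shoulder-length hair, "
              "light freckles, dark green eyes, soft cinematic lighting, "
              "3/4 view, neutral expression, blurred background")

    # One text prompt in, one synthetic face out.
    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save("synthetic_portrait.png")

On a decent consumer graphics card, those dozen-odd lines typically return a photoreal face in well under a minute -- which is rather the point.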
Still, quantifying a digital identity like Tilly is a slippery thing, since it exists without weight or origin. It is, after all, a synthetic bundle of probabilities and presets that could belong to anyone, or no one at all.
One could call 'Tilly' a toy, but even toys have tangibility -- one can contort them into stupid poses (like we all did as children), and they have backstories. With digital characters, however, the name, face and charm (if there is any) are instantly reproducible, and infinitely disposable at the click of a delete button.
And lest we forget, building a world around digital characters is anything but easy. Making 10-second reels is one thing, but making an entire film out of reference images and text prompts is a logistical nightmare, as many filmmakers learn the hard way.
Move from frame to frame, and the illusion begins to slip. Characters' jawlines shift, the lighting forgets itself, the eyes lose their emotional thread (if, that is, they had any in the first place), and continuity disintegrates like dust after Thanos' snap. What looks uncanny in a 10-second clip becomes incoherent at anything longer.
Performance, too, has no instinct or rhythm in an AI actor; it is devoid of the flicker of discovery that a real actor brings.
But it would not be surprising if some of that criticism turns out to be as short-lived as the shelf-life of this article.
Mere days before this piece was written, OpenAI's Sora 2 arrived and turned the industry on its head. The results are, for lack of a better word, unparalleled -- though that is a lead others will now sprint to close.
People are buying in even at its $200-a-month price tag, and new fads -- like the slew of videos showing Stephen Hawking pulling off skateboard stunts, jumping into swimming pools, stealing food and running away from the police -- are becoming fun trends. Trends that carry no remorse for faking images of a real man who has passed away, all for the sake of stupid entertainment.
One can argue that the images pose no real harm, and that they are nothing worse than memes -- except that, where it once took someone a few hours and some measure of Photoshop skill to make something so disposable, it now takes seconds for anyone with no skill at all.
For the past year, this writer has been dabbling with most of the high-end AI tools in the market, testing the limits of their digital conjurings. What one learns is that it takes a lot of trial and error, and a good amount of waiting, to get the right images for making decent videos. Keeping the 'vision' consistent is easier said than done.
A few tools, however, could genuinely alter how AI assists in filmmaking.
Runway ML -- which has partnered with Lionsgate to train on the studio's library of movies -- is fast becoming the industry's sandbox for generative video, allowing creators to materialise shots from text, extend shots (where the last frame of a clip is used as the starting point of the next), and even stylise entire sequences with some precision (to be honest, these options are available in almost all AI services now).
Aleph, one of Runway's more refined tools, promises photoreal, scene-level control: the sort of polish that used to take VFX teams weeks now seems to materialise in seconds.
But is any of this a one-click solution? Hardly. The results remain mercurial: one shot breath-taking, the next unusable.
The producers of an animated film in Pakistan insisted on -- read: borderline banked on -- character-animation shortcuts through AI, because they liked what they saw on the internet.
However, what those producers -- like most producers keen on using AI -- don't factor in is the lack of nuance. For starters, most showreels only show the best of tens (if not hundreds) of iterations. Secondly, no matter how refined, the emotion and movement are never convincing. An easy way to re-inject some measure of emotion is through performance capture -- and in this case, one doesn't need expensive motion-capture suits to make it work. One simply shoots a fifteen-second video on a smartphone, and the AI copies those poses, and the lip-sync, over to the characters.
The catch here is the limitation. The camera will most likely have to remain stationary (as some tools recommend), or offer limited moving shots that deliver less-than-stellar results. Creating a semblance of sense out of that becomes a problem for the editor, and for the viewer's eye.
So, to answer the earlier question: no one is replacing VFX artists, animators or actors just yet. For now, these tools are minor collaborators rather than conquerors of the business. They're brilliant in bursts -- small fixes that would otherwise require tedious back and forth between hundreds of artists -- but remain unreliable in an industry that demands continuity and quality control.
As for Pakistan, the country lives in a different algorithm altogether.
On paper, tools such as Sora 2 or Runway could be a godsend -- trimming costs, skipping the need to erect elaborate sets, replacing backgrounds, or even resurrecting half-funded projects that were deemed way too expensive to make.
However, in practical use, one can see things unravelling faster than the time it takes to write a workable prompt.
One can see the technology's many shortcomings -- the lack of consistency and realism -- in the brave (and at times beautiful) attempts from AI Box Cinema, actor and filmmaker Shamoon Abbasi's dabbling in AI narrative, or in the commercials from Ufone (Dil Se Ba-ikhtiar, where PTCL Group mentored and trained 100 women to create and market their handcrafted designs, showcased by AI models), Dawlance (celebrating Independence Day), Zong (streaming ICC matches free on its network) and Golden Pearl, whose facewash ad features a digitally created actress -- fake as fake can be -- showing us that the product can erase dark spots and boost collagen levels on a woman who is but a figment of digital imagination.
Let that thought sink in for a moment (also FYI, pixels do not need collagen).
Pakistan's fallibility comes from our one consistent flaw: an utter lack of planning, foresight, and the intelligent, appropriate use of technology. The examples above make it clear that what we do is follow the trend without question.
For an industry that has few visual effects houses and even less talent (and the artists we do have demand unreasonable fees), the flexibility of AI's quick-fix solutions is nothing short of a minor miracle.
AI can generate striking images, yes, but stories and narratives? I think not.
As I write this piece on Google Docs, Gemini's star-shaped icon at the top constantly reminds me that a quick fix of grammar -- or even a paragraph's entire reconstruction -- is but a right-click away. But is rephrasing not an invasion and belittlement of one's own voice and individuality as a writer?
A filmmaker, whom I will not name here, said that he is using ChatGPT's help to write his films. Giving that a spin just for the heck of it, one finds that the stories it concocts are Generic with a capital 'G'.
Although there are specialised tools that help screenwriters write entire films, the effort of extracting and reworking your own voice out of that text will likely bring you back to square one.
We, the reviewers, might forever keep chastising local writers for weak scripts, yet that very human flaw is also what gives texture to our cinema.
Maybe some use of AI can help write better stories, or identify their shortfalls (a regular contention in our reviews in Icon). And maybe it can help build the industry, as far as short-form visual effects fixes go. But until AI learns to dream in flaws, contradictions and the elusive human touch, it will only mimic, not imagine.
Published in Dawn, ICON, October 19th, 2025