
Visual effects (VFX) is the creation and manipulation of moving images in post-production. It ranges from adding a simple title to creating an entire world, like Pandora in Avatar.
Any film can benefit from VFX – there is no need to go to the expense of taking your unit to New York or renting a helicopter when both the city and the aircraft can be created digitally and composited with foregrounds filmed at your convenience. Why hire 300 extras for a crowd scene when VFX can multiply 50 of them as required?
Everything we do in VFX depends on computing power. What’s possible depends on the cost and speed of processors and memory.
In 1965 Gordon Moore, who went on to co-found Intel, predicted the exponential growth in processor density that continues today. Simply put, he observed that the number of transistors that fit on a chip – and with it the available computing power – doubles roughly every two years. The prediction has held remarkably well ever since.
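As a rough numerical sketch (my own illustration, not from the original text), the compounding effect of that two-year doubling is easy to see:

```python
# Illustrative sketch of Moore's observation: relative computing power
# doubling every two years, starting from 1.0. Function name and the
# doubling period are assumptions for demonstration only.

def relative_power(years: float, doubling_period: float = 2.0) -> float:
    """Relative processing power after `years`, starting from 1.0."""
    return 2.0 ** (years / doubling_period)

# Over one decade, power grows by a factor of 2^5.
print(relative_power(10))            # 32.0

# Over the ~50 years since 1965, the factor is 2^25.
print(f"{relative_power(50):.0f}")   # 33554432
```

Thirty-odd million times the computing power in fifty years is what moves an effect from impossible to routine.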

For VFX it means that over time the impossible becomes possible, difficult things become easy and expensive things become cheaper.
One current concern is the spatial resolution of an image – how many pixels make up a frame. Images 2,000 pixels across are being superseded by images 4,000 pixels across, carrying four times as much data. Another doubling of resolution (quadrupling the data set again) will give image detail approaching the maximum the human eye can resolve.
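The scaling is simple to verify with back-of-envelope arithmetic. The sketch below uses assumed parameters (a 16:9 frame, three colour channels, two bytes per channel) purely for illustration:

```python
# Back-of-envelope frame sizes. Assumptions (mine, for illustration):
# 16:9 aspect ratio, 3 colour channels, 2 bytes per channel.

def frame_bytes(width_px: int, bytes_per_channel: int = 2, channels: int = 3) -> int:
    """Uncompressed size of one frame, in bytes."""
    height_px = width_px * 9 // 16        # assume a 16:9 frame
    return width_px * height_px * channels * bytes_per_channel

for width in (2048, 4096, 8192):          # roughly "2K", "4K", "8K"
    print(f"{width} px wide: {frame_bytes(width) / 1e6:.1f} MB per frame")

# Doubling the width quadruples the data, as the text says:
assert frame_bytes(4096) == 4 * frame_bytes(2048)
```

At 24 frames per second those per-frame numbers multiply into terabytes per hour of footage, which is why storage and file formats matter so much to the pipeline.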
Where film currently leads digital acquisition is in colour depth, or latitude. Better image sensors and faster processors will give us new cameras that match celluloid in this respect – with a commensurate increase in the size of the data set.
The entire path of a movie image, from shooting to the viewer, will eventually be as digits with no compromise in quality.
New file formats will be introduced to store and move that data around. Creative decisions will no longer have to be locked in before the next step in the production chain can begin: all the data that makes up a shot, a scene or, one day, a complete movie will remain live and adjustable until the director – or perhaps the viewer – decides exactly what they want to see.

Entirely new camera technology will ultimately revolutionise acquisition.
Lytro in the US are prototyping a lens and sensor array that captures light from a scene in such a way that the geometry of the shot can be reconstructed later – the way light falls in 3D space is recorded and understood as a mathematical model. Focus is irrelevant to the raw numbers and can be set in post-production. Right now there is a consumer-level prototype, but in a few years, when the technology is applied to movie cameras, it could be a total game changer.
Object recognition techniques are being researched, and that research is being condensed into algorithms that take advantage of greater processor speeds. Computers will be able to intelligently isolate an object in a scene without the need for green screen or rotoscoping. Software will understand the volume an object occupies, giving us the ability to relight and otherwise manipulate it.
We will be able to calculate and recreate backgrounds that are hidden behind foreground objects by detecting patterns and referring to an experiential library. Similar techniques will let us see the back of an object – the side turned away from the camera. Eventually this will be possible for the most complex thing in a shot, a human.
Techniques are being developed that can reconstruct a three-dimensional model of a face from a library data set, a collection of mathematical relationships and a still photo, then animate that face at will. A recognisable and convincing on-screen character will be created from a snapshot of an actor who has since aged or retired.

The study and analysis of human form and movement happening in other fields, for example medicine and security, is giving us spatial and behavioural models that will allow better and faster creation of crowds and armies of virtual humans that act like real people, without having to be animated frame by frame and character by character. Ultimately CG actors will have believable facial expressions.
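A classic example of crowds driven by behavioural rules rather than per-character animation is Craig Reynolds' "boids" approach: each agent follows a few simple local steering rules, and crowd behaviour emerges. The sketch below is a minimal illustration of that idea – every name, rule weight and constant is my own assumption, not a production tool:

```python
import random

# Minimal agent-based crowd sketch in the spirit of Reynolds' "boids":
# each agent steers by simple local rules (cohesion, separation, drag)
# instead of being animated frame by frame.

def step(agents, cohesion=0.01, separation=0.05, min_dist=1.0, drag=0.95):
    """Advance every agent one frame; an agent is an [x, y, vx, vy] list."""
    cx = sum(a[0] for a in agents) / len(agents)  # crowd centre
    cy = sum(a[1] for a in agents) / len(agents)
    for a in agents:
        # Cohesion: drift gently toward the crowd's centre.
        a[2] += (cx - a[0]) * cohesion
        a[3] += (cy - a[1]) * cohesion
        # Separation: push away from any neighbour that is too close.
        for b in agents:
            if b is not a:
                dx, dy = a[0] - b[0], a[1] - b[1]
                if dx * dx + dy * dy < min_dist ** 2:
                    a[2] += dx * separation
                    a[3] += dy * separation
        # Drag keeps the simulation stable; then integrate position.
        a[2] *= drag
        a[3] *= drag
        a[0] += a[2]
        a[1] += a[3]

random.seed(1)
crowd = [[random.uniform(0, 50), random.uniform(0, 50), 0.0, 0.0]
         for _ in range(50)]
for _ in range(100):      # simulate 100 frames
    step(crowd)
```

Fifty agents need only two rules here; scaling the same idea to an army is a matter of processing power, which brings us back to Moore.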
The interaction between human skin and light is incredibly complex, and we’ve all been observing it since birth, so we can easily spot a fake. New mathematical techniques supported by massive processing power will bring us closer to being able to make a digital human that is indistinguishable from the real thing.
Right now we can put a small Computer Graphics (CG) person at the back of a scene and the viewer will have no idea that he isn’t real. Placing him in the mid-ground, closer to the camera, is possible but time consuming and expensive. In the future we will be able to create convincing people in close-up.

Rather than spend large budgets on building complete sets of physical scenery, filmmakers will need to construct only enough of a real environment for the director and actors to interact with comfortably. The un-built parts will be created in CG in real time, so the complete shot can be watched live on set as it is blocked and recorded. The designer and DoP will be able to make changes after the scene is in the can (or in the memory bank).
We will develop more sophisticated mathematical models of physical things like explosions, the deformation of metal in a car crash, or the way water splashes. It will become cheaper and easier to create many more such events in CG than to shoot them for real.
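At their simplest, such models are particles obeying physical laws, stepped forward one film frame at a time. The sketch below (purely illustrative; the emitter, constants and integration scheme are my assumptions) throws a splash-like burst of particles upward and integrates them under gravity:

```python
import random

# Illustrative-only sketch: a burst of particles under gravity, the
# simplest form of physical simulation, stepped at 24 frames per second
# with semi-implicit Euler integration.

GRAVITY = -9.81      # m/s^2
DT = 1.0 / 24.0      # one film frame

def emit_burst(n, speed=6.0):
    """Spray n particles upward with random horizontal spread."""
    return [
        {"x": 0.0, "y": 0.0,
         "vx": random.uniform(-1.0, 1.0),
         "vy": random.uniform(0.5, 1.0) * speed}
        for _ in range(n)
    ]

def step(particles):
    for p in particles:
        p["vy"] += GRAVITY * DT      # gravity accelerates downward
        p["x"] += p["vx"] * DT
        p["y"] += p["vy"] * DT

random.seed(0)
splash = emit_burst(100)
for frame in range(48):              # simulate two seconds
    step(splash)

# After two seconds every particle has fallen back below its launch point.
assert all(p["y"] < 0 for p in splash)
```

Real water or pyro solvers add pressure, viscosity and collisions on top of exactly this kind of per-frame integration, which is why faster processors translate so directly into richer effects.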
The lines between movies and computer games will blur. Games will become photoreal, stories and assets will be shared between the two media, and viewer interactivity with films will become possible. The audience will decide whether to sit passively and be told a story or to participate in it and influence the plotline and outcome.