[3D is being pushed as the next step for all screen-based entertainment, and Ubisoft's Murray Pannell has stated his expectation that it will become standard within three years. So why has reaction to the technology been so mixed?]
Ubisoft's Head of UK Marketing, Murray Pannell, expressed his opinion in a Eurogamer interview this week that it will take just three years for 3D televisions to become the standard in people's homes.
This forecast seems over-ambitious, not least as HD televisions have
been commercially available since 2005 and by all accounts have yet to
be taken up by a majority of television-owning households.
It's possible that Mr. Pannell's belief in the technology's market potential is driven not only by the public's enthusiastic reception of James Cameron's Avatar, which pioneered the latest version of the technology in cinemas, but also by the way it offers viewers a genuine evolution in how they interact with their televisions. I can see that to a lot of low- or middle-income families, or even those less technically minded, upgrading to HD resolution wouldn't seem to offer any substantial improvement over a standard-definition set. Think back to the launch of the DVD: its bid to supplant VHS was bolstered not only by improved picture quality, but also by scene selection and extra content, all on a format both compact and reliable – something its predecessor, the LaserDisc, definitely wasn't. Offering viewers better picture resolution wasn't exactly a new concept either: the difference was scarcely greater than upgrading from a cheap old SD television to an expensive new one. While in a minority, I'm surely not the only person who also finds HD occasionally ugly to look at, artificially sharpening things that in life are pleasantly muted (such as shadows) and enhancing the uncanny valley effect of many modern videogame graphical styles.
But while many 3D demonstrations, for both gaming and standard
viewing, have been greeted by the public with enthusiasm, there doesn't
seem to be any clamour for it to supplant the two-dimensional experience.
2D films haven't seen any noticeable drop in income, even when a 3D
alternative is offered, while even the prospect of 3D gaming has yet to
truly excite that most technologically-driven of audiences. The
revelation of Nintendo's 3DS at E3 got a lot of people salivating, but
the discussion surrounding it seems as much focused on the broad
library of games announced for the system (something Nintendo fans
aren't used to) and the improved graphical abilities as on the
added depth itself. Even the absence of glasses, long considered the biggest
barrier to 3D adoption, doesn't appear to have completely settled 3D in
people's minds as the new standard for the gaming experience: news of
Sony's 3D compatible games, albeit requiring both glasses and the
purchase of a new television, fell largely on deaf ears.
The logical conclusion is that while people are happy to accept the technology as a side-dish to the standard 2D experience, they aren't all that thrilled by the prospect of it becoming the sole means through which screen-based entertainment can be enjoyed. Avatar represented the pinnacle of the technology at its release, yet respected critics such as the BBC's Mark Kermode (in his review of the film) declared themselves unconvinced of its ability to offer anything more than a visual gimmick.
The difficulty of producing realistic depth also raises questions about whether the integration of 3D can compromise an image's artistic integrity. I greatly enjoyed Avatar in 3D, but it occurred to me how much damage the effect might do to a film less reliant on CGI and spectacle, or worse still a classic by visual masters such as Sergio Leone or Francis Ford Coppola. Those directors layer their shots in the same way a painter does, following the rules of foreground, middle-ground and background. Broadly speaking, the foreground images represent the key figures of the painting or shot, the middle-ground gives them context and the background a sense of time and location. Although the natural assumption would be that 3D could greatly enhance these qualities, it is the stark division between these layers on a flat surface that allows the creator to guide the viewer's experience. By adding literal depth, it becomes much harder to distinguish where the foreground ends and the middle-ground begins, and the same is true of the boundary between middle-ground and background. Great artists can change the way we interpret one layer through what we see immediately behind or in front of it, a deliberate and calculated artificiality that uses the limitations of a flat image to its advantage. While games aiming for 'realistic' graphical styles may be enhanced by simulated depth (by removing the need to imaginatively interpret depth on-screen, it closes the gap between the real-world depth our minds project and the uncanny valley visuals in front of us), those presenting more stylised visuals, such as killer7 and Okami, will surely only lose some of their impact: when was the last time you saw a Japanese sumi-e brush painting in three dimensions?
There's also the danger of the 3D effect being done badly. This can be seen in some of the films hurriedly retrofitted to take advantage of the technology in the wake of Avatar's success. Clash of the Titans (famous for its advertising slogan 'Also available in 2D') was lambasted by critics as unwatchable in its 3D form, so badly had the conversion been done. Many will argue that these deficiencies will be ironed out as the tech improves, but that doesn't take into account people's never-ending ability to use good tools badly. Do we really expect low-budget programming to produce the same quality of 3D images as a multi-million-dollar drama? With 2D, directors may have to make budget cuts in getting their work to screen, but they're still painting on the same canvas as their more expensive brethren. When push comes to shove, would a strapped-for-cash producer choose to abandon his project altogether or save money by compromising the quality of his 3D?
For the gaming medium, where players are able to move around in the worlds being drawn in three dimensions, such misjudgments will become all the more noticeable. Already games seen from the first-person perspective can cause motion-sickness when there seems to be a conflict between layers of movement on a 2D screen: how much worse might this issue become when those movements are tracked across simulated depth as well? Plenty of people have already reported headaches from prolonged watching of 3D images, so walking around them isn't likely to help the situation.
Leaving aside the technical barriers still to be overcome, such as the requirement for glasses or a straight viewing angle, the problems with screen darkness and the fact that those with eye problems may not be able to view a 3D image at all, proof is still needed that the technology can provide a consistent experience with the same artistic strengths as 2D and no added health issues. Watching a 3D movie is fun, and no doubt playing a 3D game is too, but once the wow factor has worn off, is the technology really offering anything better than what is already on the market? I, for one, remain sceptical.