Escape at DigiPro & SIGGRAPH 2023

Written by Jason Jenner

SIGGRAPH - or to give the conference its full name, the Special Interest Group on Computer Graphics and Interactive Techniques - is 50 this year! 

SIGGRAPH pulls a diverse, global audience of professionals and academics, and provides an up-close-and-personal view of the latest technology and production techniques for computer graphics image-makers. With our commitment to remain experts in CG and VFX and deliver that knowledge to our customers, and with one eye on a commemorative 50th birthday T-shirt, we made our way to downtown LA and the Los Angeles Convention Center to be educated, engaged and inspired. What did we discover? Generative AI, super-computing, USD, USD and yet… more USD.


DigiPro Highlights - JW Marriott Hotel, Downtown LA

First up, DigiPro. DigiPro precedes SIGGRAPH, and is a professionals-only, one-day event. Where SIGGRAPH draws from the wider audience of researchers, production, educators and enthusiasts, DigiPro brings together - in its own words - the world's premier creators of digital visual effects, animation, and interactive experiences. No false promise this, with senior staff from all the major production houses across the world in attendance. DigiPro 2023 offered a rich blend of technical and creative updates across virtual production and technology infrastructure, with a keynote from Walt Disney Animation Studios' Marlon West providing a journey through the past and highlighting technology innovation at the house of mouse.

Industrial Light & Magic delivered the morning's first talk, with a detailed look at StageCraft, their virtual production platform, as deployed on The Mandalorian series 2 and Obi-Wan Kenobi. For anybody interested in the 'world-building' possibilities of VFX, this talk was an absolute delight, and really made you want to see the shows. StageCraft allows ILM to integrate virtual cameras, LED volume content, motion-capture performance and colour pipelines into truly real-time VFX, and the results are game-changing, both in image quality and in production cost and efficiency. VFX studios' adoption of games-engine technology for real-time results is a subject that really piques our interest, and we were excited to see ILM's Helios, their cinematic render engine for real-time environments, and in particular how it enabled ILM to produce exceptional lighting and colour work on Obi-Wan Kenobi. The environments we were shown were painterly, lush and utterly immersive. Beautiful work, and a fascinating talk from one of the finest VFX companies in the history of the medium.

Weta FX's work on Avatar: The Way of Water loomed very large at both DigiPro and SIGGRAPH - which is perhaps no surprise. This is a community primarily interested in outstanding technical achievements, rather than the broader merits of a piece of entertainment. Whatever your feelings about the blue creatures of the planet Pandora (and mine are somewhat lukewarm), Weta's work on this film is quite simply exceptional. Just before morning recess at DigiPro on Saturday, Weta's senior VFX supervisor, Joe Letteri, took to the stage to run through highlights of the studio's technical achievements on Jim Cameron's Avatar sequel. Weta achieved a variety of quantum leaps with their work on this production, notably the development of a new facial animation system and the ability to simulate the infinitely complex qualities of water to a degree never previously achieved. Particularly fascinating was the role of machine learning in these effects: the fluid effects team at Weta fed the initial solves from their on-set volumetric data into a neural network that had learned the unique visual properties of water, which was able to give them - with further adjustments for lighting and texture - highly realistic results, for a show where nearly every scene takes place in water. Overall, one could only marvel at Weta's spirit of invention and ingenuity. Coming onto the project, Letteri's manifesto seems to have been to create new tools and practices wherever the existing ones were in any way flawed, and the results speak for themselves.

Marlon West is head of effects and a VFX supervisor at Walt Disney Animation Studios. His was the keynote speech of the day, and he used the studio's centenary this year as a jumping-off point to reflect upon the contribution technology has made to animation across its storied history. The sometimes-held view that digital tools simply overthrew traditional techniques is misleading, as West was keen to point out, showing the innovative use of digital tools on Tarzan, Pocahontas and Beauty and the Beast. West's career is relatively unusual, having successfully transitioned from hand-drawn animation to CG. He joined the studio in 1993 and worked on The Lion King, where he contributed to the famous stampede sequence, an early example of CG techniques enhancing traditional animation. He spoke eloquently and sensitively about the sometimes painful transition between classical and 3D animation that the studio, and individual animators, faced in the late nineties as CG overtook classical animation to become the dominant art form. West's talk provided a useful counterpoint to the other, more technical sessions throughout the day, and perhaps inadvertently proved the point that the better and more sophisticated the tools, the more they favour the artist rather than the technician.

Weta - ‘How Much Cloud is Too Much Cloud?’

One for the sysadmins rather than the artists, this: a deep dive (sorry!) into the scale of the rendering challenge on Avatar: The Way of Water. The systems team at Weta gave an inside look at their deployment of a hybrid renderfarm, with their on-premises capacity necessarily extended via an AWS spot fleet spread across three AWS geographical regions. The gigantic data sets and global distribution demanded consistently performant storage, and the development of a custom-built NFS re-export cache to counter the additional latency between regions. Individual render nodes - some eight thousand of them - carried 128 cores and 1TB of RAM, with internal 100Gb networking! One can only wonder at the size of the rendering bill on that show!
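Weta didn't share implementation details, but the principle behind a re-export cache - serve repeat reads locally so only the first touch of a file pays the cross-region latency - can be sketched as a toy read-through cache. All names here are hypothetical illustrations, not Weta's actual system:

```python
# Toy sketch of a read-through cache in front of slow, remote storage.
# The real NFS re-export cache is far more sophisticated (eviction,
# consistency, concurrency); this only shows the core idea.

class ReadThroughCache:
    """Serve reads locally where possible; fall back to the (slow) origin."""

    def __init__(self, origin_fetch):
        self._origin_fetch = origin_fetch  # callable simulating a cross-region read
        self._store = {}                   # local copies of previously fetched data
        self.hits = 0
        self.misses = 0

    def read(self, key):
        if key in self._store:
            self.hits += 1                 # local copy: no cross-region round trip
            return self._store[key]
        self.misses += 1                   # cache miss: pay the latency once...
        data = self._origin_fetch(key)
        self._store[key] = data            # ...then keep the data for later reads
        return data


# Simulated origin; in reality this would be an NFS read across regions.
origin = {"tex/char_0042.tx": b"texture-bytes", "geo/reef.usd": b"geo-bytes"}
cache = ReadThroughCache(lambda k: origin[k])

cache.read("tex/char_0042.tx")   # miss: fetched from origin
cache.read("tex/char_0042.tx")   # hit: served locally
print(cache.hits, cache.misses)  # 1 1
```

On a farm where thousands of nodes re-read the same textures and geometry, this pattern means the expensive inter-region transfer happens once per asset rather than once per read.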


SIGGRAPH Highlights

Monday Keynote - Dr. Dario Gil, Senior Vice President and Director of IBM Research ‘What’s Next in Quantum Computing’

For the uninitiated (of which I was one, prior to sitting through this session), quantum computers are… well, they're different, and they have the potential to approach complex problems differently, and perhaps more successfully, than traditional - or 'classical' - computers. This was the term Dr Dario Gil used to differentiate between today's computers, which process information via bits and bytes - zeros and ones, if you like - and new super-computers whose calculations are performed using qubits. Qubits rely on quantum mechanics: unlike a classical bit, which can only exist in one of two states, a qubit can exist in a superposition of its two basic states. Quantum computers are also not constructed from the same hardware as classical computers, using large quantum processors housed in supercooled 'chandeliers' situated in a mainframe (to use an antediluvian term) structure.
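To put Gil's distinction in notation (my own summary, not a slide from the talk): a classical bit is either 0 or 1, whereas a qubit's state is a weighted combination of both basis states at once:

```latex
% A qubit's state is a superposition of the two basis states:
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \alpha, \beta \in \mathbb{C}, \qquad
  \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
% Measurement yields 0 with probability |alpha|^2 and 1 with
% probability |beta|^2; a register of n qubits spans a 2^n-dimensional
% state space, which is where the claimed advantage comes from.
```

It is that exponentially large state space, rather than raw clock speed, that gives quantum machines their potential edge on certain problems.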

So what's this got to do with SIGGRAPH, and why is it important? Whilst SIGGRAPH's remit is specifically the use of computers to produce pictures - putting it simply - the conference has a pioneering spirit at its heart, hailing from the days of Jim Blinn and the early graphics adventurers. It's assumed that paradigmatic leaps in computing will ultimately benefit the CG practitioner, and thus topics such as this, and generative AI, find a natural home at SIGGRAPH. The exciting possibility of quantum computers is that, when paired with large language models (LLMs) - machine learning, basically - they potentially have the capability to solve incredibly complex problems much more quickly than classical computers, if classical computers can answer them at all. Quantum computers' multivariate processing capability makes them more adept at interpreting the enormous data sets intrinsic to machine learning and AI, enabling them to solve complex optimization and logistical tasks at large scale - routing and traffic control, for instance. At the more exciting end of the spectrum, though, is their potential for solving the big human crises in drug and medical research and, ultimately, climate change. Riveting stuff, and it should be said that Dr Gil is that rarest of breeds, a gifted technologist and orator, who made a potentially bewildering subject compelling and (relatively) easy to follow.

Production Session: Playing with Fire - Ember & Wade in Pixar's Elemental

Elemental was another piece of entertainment that kept cropping up in various places at SIGGRAPH. This particular session was a show and tell narrowing in on the creative development of the two lead characters, Ember and Wade, the former composed entirely of fire, the latter of water. Elemental is allegorical, and understood to have been inspired by director Peter Sohn's childhood experiences of growing up as part of a Korean immigrant family in The Bronx, New York City. The technological achievement in realising characters formed entirely from elemental dynamics was impressive, and required Pixar to innovate to achieve new effects - however, what really struck home, as it often does with Pixar, is just how expansive and rigorous their creative process is, in its exploration of different ideas and concepts. The character and VFX teams worked through endless iterations of Ember's look and feel, playing continuously with the challenges of a character whom Sohn described as being 'made of fire, not on fire', ultimately landing on a clever combination of illustrative and volumetric simulations to create a realistic but emotionally engaging character. Likewise, we were treated to some early look-development work on Wade, with various fluid simulations melded with stylised character elements to achieve the right balance. The process through which ideas are pushed and pulled, with occasionally strong visual ideas sidelined for those which better serve character and storytelling, is always fascinating to witness. Elsewhere at SIGGRAPH, Pixar opened the book on how an OpenUSD pipeline enabled the production of Elemental, allowing the studio to cope with more-complex-than-ever-before shots and the attendant rendering workload - one of many examples of USD's ubiquitous presence across the week.

SIGGRAPH Tuesday Keynote - Nvidia's Jensen Huang

Naturally this was SIGGRAPH's headline keynote. One of a handful of global tech CEO-rockstars, Jensen Huang's presence at SIGGRAPH couldn't be overlooked, and Nvidia were seriously committed to ensuring it couldn't be, with a large team of black-clad reps walking the halls in the days preceding his appearance, carrying branded placards bearing the time of his forthcoming address. Huang drew probably the largest audience of the show, and thus we all processed - coffee-fuelled - into the largest of the main auditoria at the LA Convention Center for an 8am kick-off (it seems that if you are one of Time Magazine's 100 most influential people, you can pull conference start times an hour earlier!)

Nvidia presentations at major tech shows naturally bring a sense of anticipation: what imminent new releases will be announced, and what horizon technology will be roadmapped? Whilst it was undeniably exciting to attend one of Jensen's keynotes in person, the technologies presented were reconfigurations and extensions of existing products, rather than anything definitively new. That said, in terms of an update keenly relevant to our industry, Nvidia did, whilst Huang was still onstage, announce the release of the RTX 4000, RTX 4500 and RTX 5000 on the Ada Lovelace architecture, all set to become workstation standards for Escape and our customers over the coming months. The presentation divided cleanly in two - the first half concerning the release of the Grace Hopper GH200 Superchip for GPU supercomputing, and the second focused exclusively on OpenUSD, Omniverse and generative AI.

The GH200 superchip combines Nvidia's Grace CPU - a 72-core, best-in-class ARM data centre chip - and the Hopper H100 Tensor Core GPU in a dual configuration, delivering a coherent CPU-and-GPU memory model for accelerated AI and high-performance computing (HPC) applications. GH200 superchips can be aggregated using Nvidia's NVLink and deployed in a range of modular configurations to meet the giant compute needs of large language models and generative AI in the data centre. The message here was one of capability and cost effectiveness, with the modular, configurable GH200 systems scaling to meet the demands of the world's most aggressive HPC workloads in a more consolidated, lower-power-consumption footprint.

Exciting, and genuinely empowering for the outer reaches of HPC, but not directly relevant to the daily tasks of your regular CG artist. Not so Omniverse, though, to which Huang turned his attention throughout the second phase of his address.

Omniverse, for those unaware, is Nvidia's platform built around the OpenUSD format - or to give it its full name, Open Universal Scene Description. OpenUSD, developed in production at Pixar Animation Studios, is a robust file format that provides a common language for the exchange of 3D scene data across multiple applications and, critically, allows artists to collaborate on the same assets and scenes simultaneously. In a nutshell, OpenUSD is about workflow optimization, and its potential impact on CG creativity is enormous. To quote Huang directly, "OpenUSD is a very… big… deal." Indeed it is, and when allied to machine learning - or 'generative AI', as Huang named it - potentially even bigger. These tools combined enable the creation of digital twins of factories, warehouses, even rail networks, with the potential to operate their real-world counterparts digitally - for which Omniverse has to be real-time. Attempting to harness the power of OpenUSD (in the form of Omniverse) and generative AI, Nvidia has made its DGX Cloud available to developers and creatives through the Hugging Face platform and its own AI Workbench. The mission: to fine-tune LLMs and accelerate the adoption of generative AI - and the use of Nvidia L40 GPUs - across a dizzying array of business-specific application workloads.
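The heart of that 'simultaneous collaboration' claim is USD's layered composition model: each artist or department contributes a layer of opinions about the scene, and stronger layers override weaker ones without destroying them. A conceptual toy of that idea - in plain Python, emphatically not the real pxr/USD API, with entirely hypothetical layer names - might look like this:

```python
# Conceptual toy of USD-style layer composition: each department
# contributes a "layer" of opinions about attributes on scene prims,
# and stronger layers win where opinions conflict.

def compose(*layers):
    """Compose layers given strongest-first; the strongest opinion wins."""
    scene = {}
    # Apply weakest-to-strongest so stronger opinions overwrite weaker ones.
    for layer in reversed(layers):
        for prim, attrs in layer.items():
            scene.setdefault(prim, {}).update(attrs)
    return scene

# Hypothetical layers from three departments working on the same asset,
# each able to work in parallel without touching the others' files.
modeling = {"/Ship": {"points": "hull_v3.geo", "displayColor": "grey"}}
shading  = {"/Ship": {"material": "rustedSteel", "displayColor": "red"}}
lighting = {"/Ship": {"visibility": "inherited"}}

# Lighting is strongest here, then shading, then modeling.
scene = compose(lighting, shading, modeling)
print(scene["/Ship"]["displayColor"])  # red: shading's opinion beats modeling's
print(scene["/Ship"]["points"])        # hull_v3.geo: modeling still contributes
```

The non-destructive part is the point: modeling's geometry survives untouched even while shading overrides its placeholder colour, which is what lets large teams work on one scene at the same time.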

Elsewhere at SIGGRAPH, Nvidia hosted various breakout sessions demonstrating the advantages of Omniverse and USD for visualisation and architecture workflows. Whether in an Nvidia wrapper or open source, USD was everywhere at SIGGRAPH, and it clearly has the ability to rewrite the way complex computer graphics are created - yet what is not being talked about with equal enthusiasm is the impact USD will have on the underlying IT infrastructure; storage and networking solutions will have to evolve to meet the stresses that USD will place upon them.

Overall it was a coin-toss as to whether OpenUSD or Machine Learning (generative AI, LLMs - the phraseology is still evolving) was more prevalent across the wealth of presentations and discussions we attended at DigiPro and SIGGRAPH. Both cropped up repeatedly, and there is no doubt they will have a transformative impact on the creation of computer graphics across all industries throughout the next few years.

Production Session: Sony Imageworks - Spider-Man: Across the Spider-Verse

Across the Spider-Verse is a visually exciting and inventive film, a collision of interesting styles; it cannot be easily classified as either 2D or 3D, as it contains elements of both, alongside hand-drawn, painterly, futuristic and punk visual references that bring the different worlds within the Spider-Verse to life. The production design and character teams took to the stage at SIGGRAPH to run through the tools and techniques they had employed to produce a film that is genuinely visually arresting. The characters in the film occupy different worlds, each with its own unique look - 'Mumbattan', for instance, takes its visual cues from the Indian 'Indrajal' comics of the 60s and 70s - many of which require a very 2D or 'print' look to function within a 3D environment. The senior artists demonstrated how they solved the problem of elegantly combining the film's more illustrative elements with the requirements of 3D. One section of the film takes place on 'Earth-65', which required an expressive, painterly look, and the Sony Imageworks team integrated the painting software Rebelle into their pipeline, using machine learning to develop a tool that provided an animated watercolour effect. In terms of harnessing creativity and technology to produce something unique, this was among my favourite sessions, and is what makes SIGGRAPH such a unique experience.

SIGGRAPH - Birds of a Feather Sessions

In and amongst the keynote and production sessions that garner huge general audiences at SIGGRAPH hide the Birds of a Feather sessions. Those with converging interests flock together to discuss a prevailing technical issue, which can take various forms, including sharing ideas on a specific technical challenge, or industry standardisation. BoF sessions are intended to be non-commercial and provide an open forum for technical knowledge sharing, and as such are not usually open to tech vendors. This year two sessions in particular stood out: the VFX Reference Platform, and a gathering to discuss rendering at scale for production.

VFX Reference Platform - the VFX Reference Platform is a set of tool and library versions intended as a common target platform for building software for the VFX industry. The Reference Platform is updated annually by a group of software vendors in collaboration with the Visual Effects Society Technology Committee. Its primary goals are:

  • Align software providers to minimise incompatibilities between industry-standard digital content creation tools

  • Reduce complexity and ease the support burden for Linux-based integrated pipelines

  • Transparency: creating a platform and process that is in the shared interests of both vendors and content creation studios

The VFX Reference Platform session was chaired by friend of Escape Nick Cannon, CTO at Walt Disney Animation Studios, and ILM CTO Francois Chardavoine. This year's SIGGRAPH session was a great overview of the achievements of the last year and the plans for 2024. With Red Hat's recent discontinuation of CentOS, the choice of Linux distribution was a central topic of discussion. Both Rocky and Alma were positioned as replacements, and it was interesting that whilst Rocky is thought to be the leading contender, Alma had achieved a 40% adoption rate among attendees at the session. Further discussions as to whether OpenUSD or machine-learning tool sets could be pinned to specific versions in the platform were rejected, on account of how fast those particular tool sets are currently moving.

Rendering at Scale - what is often interesting at SIGGRAPH, and at BoF sessions in particular, is how common some of the technical challenges are, regardless of location. It was great to see all the participating studios discussing the pros and cons of the various approaches to rendering at scale. The discussion surveyed a variety of topics, including disparities between workstation GPUs and cloud GPUs, the impact of USD workflows on rendering, and the application of machine learning.

Escape Technology had a fabulous experience at SIGGRAPH 2023 and we’d like to thank all the organisers, participants and volunteers who make the show what it is. It remains at the forefront of learning and exploration for computer graphics techniques across entertainment and visualisation, and we hope to attend SIGGRAPH 2024 in Denver!
