I started at HP Research Labs just after the turn of the new millennium. Since the first computers, enterprises have craved flexibility, scale and agility from their computing investments. This was true in 2000 and it is still true now; it is uncanny how those needs have remained constant over the course of the last two decades, despite the technical advancements that have occurred.
Back then, we were working on something called ‘Utility Computing’, and HP Labs were pioneering the adoption of HP’s new Adaptive Enterprise strategy. As Utility Computing became more ubiquitous, the work was later renamed ‘cloud research’. This was really the beginning of what we recognise today as ‘Cloud’.
HP Labs were pioneering cloud at least three years before AWS went public in 2006. It was an exciting time. We built our cloud platform internally and sold an instance to Philips in the Netherlands under the name ‘Utility Data Centre’. Our own on-premises utility data centre was far more advanced than what we believed Amazon had: more advanced in how security worked, and in how customers could define and redefine pools of resource on isolated networks. Customers suddenly had the control and flexibility they’d been looking for. However, HP needed a marketing hook for a public cloud, rather than just on-premises and private, and we didn’t have much to point to.
The media and entertainment industry is a great place to try out new technologies. It’s very open to change, with every production looking for a competitive edge, and it faces a huge computing resource requirement for rendering. It was exactly the right place to test and develop what we were building.
We had the technology we wanted to test and we had the industry we wanted to test it in; all we needed was a location that would give us the connectivity we needed. Bristol was the perfect place.
Bristol has a great high band network and one of the reasons for that, funnily enough, can be traced back to Brunel and his groundbreaking engineering in the city. Around Bristol’s docks, and throughout the city centre, there is an extensive network of steam ducts – Brunel was one of the first service providers! He built a steam pump at the Underfall Yard that can still be visited today.
The steam was delivered through the network to power bridges and locks, and to move cranes all around the city. This custom-built ducting was then much later adapted and used to run fibre, saving Bristol time and money.
HP played an active part in the group that enabled one of the first metropolitan, high band networks to take hold. We were responsible for setting up the BMEX project, which gave places like Watershed, Bristol University, the BBC and many post-production houses connectivity. If you were going to have services like the ones we were providing, you needed to be able to connect to them. And if you’re going to connect to something, you’re going to want services at the end of it. It all went hand in hand.
We partnered with a company in Bristol called 422 South. We contracted them to render a film with the brief that it should be something they couldn’t render internally in their studio.
422 South were going to send their scenes to us over the internet and we were then going to render them in our data centre. Unfortunately, 422 South couldn’t get a connection onto the high band network because they were too far away from the city centre and still on 1Mbps SDSL! So, whilst the rendering happened on HP Labs’ utility computing platform, the actual data transfer took place over sub-broadband speeds. We weren’t afraid to reach out and use any technology available to us!
"The advantage of the solution built was that you had more time to iterate the animation and the feedback loop was much faster. This gave us more time for creativity." – David Corfield, 422 South
The end result was rendered in a dynamically configured environment. Was it the first film ever rendered in the cloud? Almost certainly. I worked on the infrastructure for this, particularly the system imaging and monitoring tools.
We got a lot of attention from the 422 South production, but fast forward six months and we needed something bigger: something that would make a really big statement about the capability we had developed.
We had a good relationship with DreamWorks, as they were just up the road from HP Labs in Palo Alto. They initially wanted to buy more hardware from HP, but we lit up the ‘cloud rendering’ conversation because we felt it would be more interesting for them.
We took some of the lessons we learned from 422 South around data transfer optimisation and applied them to a different sort of setup with DreamWorks, particularly the technology around data de-duplication and compression.
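To illustrate the general idea (this is a minimal sketch of chunk-level de-duplication, not HP’s actual implementation – the fixed chunk size and SHA-256 hashing are illustrative assumptions), the trick is to hash the data in chunks and only compress and send the chunks the remote side hasn’t already seen:

```python
import hashlib
import zlib

CHUNK_SIZE = 64 * 1024  # illustrative; real systems often use content-defined chunking


def chunk_and_hash(data: bytes):
    """Split data into fixed-size chunks and pair each with its hash."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]


def plan_transfer(data: bytes, remote_hashes: set):
    """Return only the compressed chunks the remote side does not already hold."""
    to_send = []
    for digest, chunk in chunk_and_hash(data):
        if digest not in remote_hashes:
            to_send.append((digest, zlib.compress(chunk)))
    return to_send


# A second, near-identical scene file transfers almost nothing:
scene_v1 = b"A" * CHUNK_SIZE * 4
scene_v2 = scene_v1 + b"B" * CHUNK_SIZE  # one new chunk appended

remote = {digest for digest, _ in chunk_and_hash(scene_v1)}
delta = plan_transfer(scene_v2, remote)
print(len(delta))  # → 1: only the new chunk needs to go over the wire
```

Over a 1Mbps line, sending only the changed chunks of an iterated scene, compressed, is the difference between a workable feedback loop and an unusable one.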
Our work led to large parts of Shrek 2 being rendered in HP’s data centre. Ironically, it wasn’t quite as flexible as the solution we gave 422 South, but we still allowed DreamWorks to access a remote pool of nodes. This was a watershed moment for the concept of cloud rendering for the box office: for the first time, a major production had been delivered from a flexible, shared platform.
After Shrek 2, we began thinking about how we could meet the peak rendering demand of not just one, but lots of customers, on a shared platform. At this point, I became very involved – helping to set up a community in Bristol, engaging with developers and content creators who were open to working in this way. We found several studios through a partnership with Watershed in Bristol. The project was called SE3D.
We gave these studios access to a shared pool of massive compute resource. 3D artists got hundreds of times more power than they could have dreamt of getting in their own studio.
In terms of the 3D integration, we just supported Maya. This was in 2004, back when Maya was owned by Alias. We had a good relationship with Alias, who were receptive enough to see the potential of ‘cloud rendering’, so they granted us some licences to help us run at scale.
So, we ran that programme, the animations got finished and we showcased a thread at the Encounters Film Festival in 2004.
We had two initiatives: URS, the remote rendering service; and the utility platform on which it sat. It was one of the first – if not the first – attempts to separate out Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS).
HP’s interest was in the utility platform. We were thinking about how we could sell it to enterprises. URS was put to one side whilst the utility platform became the focus.
The irony is that HP had a platform that they could have gone to market with in 2004. They were arguably the number one IT company in the world at the time. But HP waited, because they already had an excellent, money-making business selling infrastructure to enterprises. Then in 2006, AWS beat them to it. From that moment, it was hard for HP to claw back the lost ground.
I learned a lot about the scalability of platforms and managing technology at scale. I apply much of that experience to my day-to-day work at YellowDog, as we look to sustainably grow a dynamic, secure shared service platform – whether that is for rendering films and animation or for any number of other applications.
My main takeaway from the experience: if you can ever be first to market, do it – do everything it takes to make it happen.