Blade Announces $33 Million In Funding For Its Cloud Gaming Platform

Blade has announced that it raised $33 million in a Series B round to expand its cloud gaming service, Shadow. Based in Paris with offices in Mountain View, California, the company was founded in 2015 and has over 200 employees.

To date, Blade has raised over $104 million. Investors in this round, including previous backers, include Serena Capital, Western Digital, Charter Communications, Financiere Saint James, 2CRSi and multiple individual investors. Alongside the funding, Blade is also bringing on a new CEO, Jérôme Arnaud, to help build out the company’s international presence in both the consumer and professional markets.

Shadow, the company’s first product, launched back in January 2018 and currently has 70,000 subscribers globally, all paying $35 a month. As part of the latest announcement, the company has cut its subscription prices in half in hopes of building momentum before Google’s November 19 launch of Stadia.


The new subscription service comes in three tiers rather than a single price. It is currently available in the UK, France, Germany and Luxembourg (pricing for the US has not yet been announced):

Boost – at £12.99 per month (£14.99 without commitment), this highly accessible offer allows subscribers to play all the latest games in Full HD from any device.

Ultra – at £24.99 per month (£29.99 without commitment), gamers get superior graphics performance: 4K, or up to 144 FPS in Full HD, with ray tracing compatibility. This offer provides the power of a GeForce RTX 2080 graphics card, as well as a better processor, more RAM and more storage.

Infinite – at £39.99 per month (£49.99 without commitment), Infinite grants the most demanding gamers, streamers and creators direct access to the “best available” on the market: a dream computer including ray tracing combined with the top graphics card at the moment (RTX Titan), and 1TB of storage. With this dream line-up, users will be able to play all the latest games in 4K.

Additional storage can be purchased. What makes Shadow unique among cloud gaming solutions is that it provides a full Windows 10 PC, which means you can run any software you want: AAA games, enterprise CAD work in SolidWorks, or content creation with Photoshop and video editing suites.

“Making stupid objects very smart” is one of Shadow’s philosophies, and as part of that mantra the company has also announced a redesigned interface that adapts any PC experience to a living room display. The TV version is compatible with smart TVs and Android and will soon extend to smartphones and tablets; it is currently in beta on Android and iOS. Shadow has also demoed live streaming to VR and is working to bring that to consumers.

Shadow has also partnered with OVHcloud, a data service provider that will oversee infrastructure and data center rollout. Other key partners include Nvidia, Intel, Microsoft, 2CRSi and Ericsson.


What We Think

Shadow is the first of several products coming from Blade, and full Windows 10 support is a good starting point: it lets the company focus not only on gaming but also on high-end visualization and enterprise workloads like CAD, which will help drive further optimizations.

It has to be said, though, that they are up against behemoth competition in Google, Microsoft and Sony. What Shadow lacks in endless funds, it makes up for in its ability to be nimble and seize key opportunities.

With Stadia’s launch on November 19th, and news soon coming from both Microsoft and Sony, we expect the cloud gaming sector to heat up. While M2 Insights doesn’t predict meteoric growth for the first few years, we do believe this is the start of a slow, steady change in content creation, content delivery and the user experience.

If you want to hear more, join us at The International Future Computing Summit, taking place at the Computer History Museum, November 5th–6th, where Blade will be speaking.

The conference is focused on client-to-cloud and edge computing with discussions centered around games, visual effects, education, and enterprise. A few of the companies we’ve got speaking include: Intel, AMD, Nvidia, Dell, Lenovo, Unity, Epic, The Foundry, Cintoo, Game Tech at Amazon Web Services, Hatch Entertainment, ShadowTech, MagicLeap, AccelByte, Ultra, Adshir, Xilinx, Ericsson, Looking Glass Factory, Poco Loco Amusements, VentureBeat, M2 Insights, Jon Peddie Research, TIRIAS Research, Stanford University, Intel Capital, Boost VC, Alsop Louie Ventures, The Venture Reality Fund, with additional speakers still being added. 

Verizon Develops 5G Edge Technology For VR, AR and MR

Verizon recently built and tested an independent GPU-based orchestration system and developed enterprise mobility capabilities for virtual reality (VR), mixed reality (XR), augmented reality (AR), and cinematic reality (CR). Together, these capabilities could pave the way for a new class of affordable mobile cloud services, provide a platform for developing ultra low-latency cloud gaming, and enable the development of scalable GPU cloud-based services.

GPU-based orchestration system

5G technology and Verizon’s Intelligent Edge Network are designed to provide real-time cloud services at the edge of the network, nearest the customer. Because of their heavy imaging and graphics workloads, many of the applications that would benefit from this technology run significantly better on a GPU. Artificial intelligence and machine learning (AI/ML), augmented, virtual and mixed reality (AR/VR/XR), AAA gaming, and real-time enterprise applications are highly dependent on GPUs for compute. The limited availability of efficient GPU resource management is a barrier to scalable deployment of such technologies.

To meet this need, the Verizon team developed a prototype that uses GPU slicing and virtualization management to support any GPU-based service and increase capacity for multiple workloads and tenants.
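Verizon has not published details of the prototype, but the basic idea behind GPU slicing can be sketched as an allocator that partitions one physical GPU’s memory into fixed-size slices and leases them to tenants. This is purely illustrative; the class and method names below are hypothetical, not Verizon’s API.

```python
# Toy sketch of GPU slicing: carve one physical GPU's memory into
# fixed-size slices and lease them to multiple tenants, instead of
# giving one process exclusive use of the whole card. Illustrative
# only -- all names here (GpuSlicer, lease, release) are hypothetical.

class GpuSlicer:
    def __init__(self, total_mem_gb, slice_gb):
        self.slice_gb = slice_gb
        self.free = list(range(total_mem_gb // slice_gb))  # free slice ids
        self.leases = {}  # tenant name -> list of slice ids

    def lease(self, tenant, mem_gb):
        """Grant a tenant enough slices to cover mem_gb, or raise."""
        needed = -(-mem_gb // self.slice_gb)  # ceiling division
        if needed > len(self.free):
            raise RuntimeError(f"GPU exhausted: {tenant} wants {needed} slices")
        granted = [self.free.pop() for _ in range(needed)]
        self.leases.setdefault(tenant, []).extend(granted)
        return granted

    def release(self, tenant):
        """Return all of a tenant's slices to the free pool."""
        self.free.extend(self.leases.pop(tenant, []))

# A 24 GB card carved into 2 GB slices (12 slices) can host several
# lighter tenants concurrently:
slicer = GpuSlicer(total_mem_gb=24, slice_gb=2)
slicer.lease("vision-service", 4)  # takes 2 slices
slicer.lease("game-stream", 8)     # takes 4 slices
```

The multiplier effect Verizon reports (8x and 80x concurrent users) comes from exactly this kind of sharing: many workloads need only a fraction of a card, so slicing turns one exclusive tenant into many.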

In proof-of-concept trials on a live network in Houston, TX, Verizon engineers successfully tested the new GPU orchestration technology in combination with edge services. In one test of computer vision as a service, the new orchestration supported eight times the number of concurrent users; for a graphics gaming service, it supported over 80 times as many.

“Creating a scalable, low cost, edge-based GPU compute [capability] is the gateway to the production of inexpensive, highly powerful mobile devices,” said Nicki Palmer, Chief Product Development Officer. “This new innovation will lead to more cost effective and user friendly mobile mixed reality devices and open up a new world of possibilities for developers, consumers and enterprises.”

Edge application capabilities

To assist developers in creating these new applications and solutions, Verizon’s team developed a suite of edge capabilities. These capabilities, similar in nature to APIs (Application Programming Interfaces), describe processes that developers can use to build an application without writing additional code. This eases the burden on developers and creates more consistency across apps. Building on this technology, the team has created eight services for developers to use when creating applications and solutions on 5G Edge technology:

  1. 2D Computer vision – Users provide 2D images that a device can recognize and track. (Examples: A consumer may view a poster through glasses and 2D computer vision could be used to make it come alive. A consumer may view a movie poster and a trailer would automatically play. A consumer may view a product box and see an overlay of nutritional info, coupons, etc.)
  2. XR lighting – Currently when 3D objects are inserted into the real world they appear as 2D and can seem out of place. XR lighting can send back environment lighting and video info to reproduce a scene reflecting accurate lighting, shadows, roughness, reflections, and metallics on the 3D object so that it blends perfectly into the environment around it.
  3. Split rendering – Split rendering enables the delivery of PC/console level graphics to any mobile device. Split rendering splits the graphics processing workload of games, 3D models, or other complex graphics and pushes the most difficult calculations onto the server allowing the lighter calculations to remain on the device.
  4. Real time ray tracing – Traditional 3D scenes or objects are designed using complex custom algorithms to extrapolate or calculate how light will fall and how ending colors will look and feel. With real-time ray tracing capability, each pixel can receive accurate light coloring, greatly advancing the realism of 3D.
  5. Spatial audio – In the ongoing evolution of sound (from mono to stereo to 7.1 surround sound) spatial audio is the next step. This type of audio processing is extremely processor heavy. In designing for spatialized audio, objects in a 3D scene must react with the sound so that users have a true sense of the space and relative location of an object in an augmented reality environment. As audio is emitted it bounces off digital objects and based on directionality and where your head location is, reproduces what you would hear in the real world.
  6. 3D Computer vision – 3D computer visioning leads to 3D object recognition by training the edge network to understand what a 3D object is from all angles. (Examples: A football helmet could provide highlights, stats, etc. for players seen from any angle. At a grocery store, 3D computer visioning would allow a consumer using AR to recognize and respond to objects with unusual 3D shapes such as fruits and vegetables.)
  7. Real time transcoding – Transcoding converts one file format to another; raw footage is usually much larger than delivery formats. Real time transcoding handles the conversion automatically, so a consumer doesn’t have to worry about which format goes up and which comes down. It is a content creation tool that saves time and optimizes workflows.
  8. Asset caching – Asset caching provides for real time use of assets on the edge. It allows people to work collaboratively. (Example: Multiple people could work on a video file in real time altogether instead of handing it off to avoid overwriting each other’s work.) The fast file format for 5G and this caching optimization tool allow a limitless number of people to work on the same file in real time.
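Of the eight services above, split rendering is the easiest to illustrate in code. A minimal sketch of the idea, under an entirely hypothetical division of labor (Verizon’s actual protocol is not public): the edge server does the expensive per-pixel shading, and the device only performs a cheap final composite.

```python
# Minimal sketch of split rendering: the server shades every pixel
# (the heavy step), and the device blends a lightweight UI overlay
# on top (the light step). The specific division of work here is an
# assumption for illustration, not Verizon's published design.

def server_render(scene, width, height):
    """Heavy step (runs at the edge): shade every pixel."""
    # Stand-in for real shading: brightness falls off with depth.
    return [
        [scene["base_color"] * (1.0 / (1 + scene["depth"][y][x]))
         for x in range(width)]
        for y in range(height)
    ]

def client_composite(frame, ui_overlay):
    """Light step (runs on the device): blend a UI overlay on top."""
    return [
        [px if ov is None else ov
         for px, ov in zip(row, ov_row)]
        for row, ov_row in zip(frame, ui_overlay)
    ]

# 2x2 toy frame: the device never touches the depth buffer or shading,
# only the already-shaded pixels streamed back from the edge.
scene = {"base_color": 1.0, "depth": [[0, 1], [1, 3]]}
frame = server_render(scene, 2, 2)
overlay = [[None, None], [None, 0.5]]  # a HUD element in one corner
final = client_composite(frame, overlay)
```

The payoff is that the device-side work stays constant no matter how expensive the server-side shading becomes, which is what lets a phone display PC/console-level graphics.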

With the development of these innovative new technologies and more than a dozen patents pending, the Verizon team was awarded “Biggest contribution to Edge Computing R&D” at the International London Edge Conference. Now you can see some of this exciting technology in person: at Mobile World Congress Americas in LA next week, Verizon will demonstrate some of these new capabilities at the Verizon 5G Built Right booth. Some of the demos include:

  • Reimagined workspace with augmented reality: 1000 Augmented Realities + AR smart glasses map the workspace and render overlays and information displays.
  • Qwake Tech: AR helmet attachment allows firefighters to see in low visibility environments.
  • 3D handheld scanner: Volumetric scanner creates detailed renderings of objects and entire scenes.
  • Verizon AR Shopping: Instantly and seamlessly overlays digital displays on top of physical products.
  • Visceral Science: Educational VR experiences covering essential science concepts (e.g., the lifecycle of a star), aligned with middle and high school curricula.

“The future is now. We’re no longer simply talking about the possibilities of 5G and edge computing,” said Palmer. “The work our Verizon 5G Lab team is doing is pushing the envelope of innovation and leading our industry into a new day where the possibilities inherent in 5G technology are becoming reality.”

  • Verizon’s 5G Lab, which is comprised of both network and XR technologists, continues to combine 5G network and XR expertise to produce ground breaking capabilities for the edge.
  • The Verizon team recently developed a suite of enterprise XR technologies to drive emerging mobility experiences.
  • These technologies could pave the way for several new mobile cloud based applications and services and provide a platform for developing ultra low-latency cloud gaming, enabling the development of scalable GPU cloud-based services.
  • Demonstrations of some of these technologies will take place in the Verizon booth at Mobile World Congress Americas.

Unity Technologies Acquires ChilliConnect

ChilliConnect’s cloud-based toolkit to provide live game management services for building and managing connected games

Unity Technologies announced the acquisition of ChilliConnect. With the addition of ChilliConnect, developers will benefit from easier, better-integrated access to live operations and backend game management services for building and managing connected games.

“Creating games on Unity is a great first step, but developer teams of all sizes are then faced with the challenge of managing and maintaining those games, which can be very complicated and costly,” said Luc Barthelet, Vice President and General Manager, Cloud Services, Unity Technologies. “ChilliConnect is the connecting piece for Unity developers, providing the essential LiveOps and backend services needed to successfully operate a connected game. Having ChilliConnect onboard closes the loop for developers to create, operate and monetize their game with Unity and we’re very excited to have them as part of the team.”

ChilliConnect provides developers with cloud-based game services to enable backend operations at scale, allowing developers to add online game features and run LiveOps without needing to manage their own server infrastructure. ChilliConnect’s services, and the offerings from Unity’s other recent acquisition deltaDNA, round out Unity’s full LiveOps solution for building and operating connected games. With these acquisitions, developers can now access cost-effective, customizable live game solutions, regardless of individual needs.

“Much like Unity, we believe in democratizing game development and management regardless of studio size, engine technology or commercial aspirations. We want developers to have a choice,” said Paul Farley, Chief Executive Officer, ChilliConnect. “By joining Unity, we are enhancing that choice for so many more developers worldwide. Our services will remain engine agnostic, but we’re excited to provide Unity developers with an easier path to successful connected game operation.”

Unity exists to empower the success of the world’s creators with the most accessible and powerful real-time 3D development and monetization platform. Games and experiences made with Unity have reached more than 3 billion devices worldwide this year and were installed more than 34 billion times in the last 12 months.