
Ray Project Facilitates Building of Distributed Apps

Anyscale raises new funding to help the Ray distributed programming framework gain traction.

While code often starts with a single developer on a single system, distributed computing is increasingly the norm, with code running on many different systems across a network.

Among the emerging efforts in the distributed computing space is the open source Ray project, which is a framework for building and running distributed apps. Ray got its start in 2018 at the RISELab computing laboratory at the University of California, Berkeley, and it is now being used by such large organizations as J.P. Morgan, Intel and Ericsson.

To help push Ray forward, the founders of the project started Anyscale. On Dec. 17, Anyscale announced that it has raised $20.6 million in a Series A round of funding led by Andreessen Horowitz.

"Distributed computing is becoming increasingly essential to AI [artificial intelligence] and computing, and we expect distributed applications to be the norm rather than the exception going forward," Ion Stoica, co-founder and executive chair, told ITPro Today.

Stoica noted that a key challenge in building distributed apps is that it requires deep programming and infrastructure expertise. The Ray project aims to make programming clusters and building distributed apps as easy as programming on a laptop.

"With Ray, a Python programmer should be able to take her program running on her laptop and scale it to a cluster of hundreds or thousands of machines by just changing a few lines of code," he said. "To do the same thing today, it takes an entire team of experts to develop, deploy and manage such a distributed application."

How Ray Works

Ray is a computation framework that uses database-like techniques to scale and provide fault tolerance.

For example, Ray uses a distributed scheduling architecture to support high-throughput scheduling without a centralized bottleneck, according to Stoica. It uses a sharded database to store system metadata and also uses shared memory and zero-copy serialization to avoid compute-intensive deserialization costs.
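The shared-memory object store shows up in Ray's Python API through ray.put. The following is a rough sketch, with an illustrative array and task, of how a large object can be stored once and read by many tasks without each one paying a deserialization cost.

import numpy as np
import ray

ray.init()

# Store a large array once in Ray's shared-memory object store.
# Workers on the same machine read it zero-copy rather than each
# deserializing a private copy. The array size here is illustrative.
big_array = np.random.rand(10_000_000)
array_ref = ray.put(big_array)

@ray.remote
def total(arr):
    # `arr` is backed by the shared object store, not a per-task copy.
    return arr.sum()

# Many tasks can share the same stored object through its reference.
print(ray.get([total.remote(array_ref) for _ in range(8)]))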

How Ray Fits into Cloud Native

For many developers, distributed deployment of an application is achieved using a cloud-native approach with the Kubernetes container orchestration platform. Ray is orthogonal to Kubernetes: it runs on top of Kubernetes, as well as on laptops and public clouds, Stoica said.

In Stoica's view, Kubernetes makes it easy to deploy and manage different applications on the same cluster and is targeted at DevOps. That said, he noted that Kubernetes doesn't solve the problem of actually making it easy to develop distributed apps. For example, to implement a large-scale reinforcement learning application today, the application itself must handle the logic for scheduling tasks, recovering from machine failures, transferring data efficiently and so on. Kubernetes simply runs applications once they have been developed, he added.

"In contrast, Ray is a programming framework and it targets software developers," Stoica said.

What's Next for Ray

It's not yet clear what exactly Anyscale will be bringing to market for Ray. Stoica said the company is just getting started, so it is too early to comment much on the business model.

"This being said, we are planning to build a product company, rather than a support company," he said.

Stoica also emphasized that the main priority for Anyscale is to make the open source Ray project highly successful and the standard for building the next generation of distributed apps.

"On one hand, this means improving the performance, scale and fault tolerance of Ray," Stoica said. "On the other hand, it means building great libraries for machine learning and other popular workloads."
