Apache Spark and the Typesafe Reactive Platform

Typesafe offers support for customers worldwide deploying Apache Spark through our Typesafe Together program. Because Spark is built on Scala and leverages Akka, Typesafe is uniquely qualified to provide the essential training, consulting, and support to accelerate the path to productivity and ensure project success for teams building Reactive applications with Spark.

Our approach is to provide the best-practice tips and ongoing support your development team needs to become productive building Reactive Big Data applications with Spark and Typesafe technologies.

Spark Core, Spark SQL, and Spark Streaming modules

Typesafe provides support for the Spark Core, Spark SQL, and Spark Streaming modules, covering the Scala and Java APIs. Support includes unlimited incidents and answers plus bug fixes and patches. Support questions for other languages and technologies, including the GraphX and MLlib APIs for Spark, will be answered on a best-effort basis, but they are not officially supported.

Spark on Mesos

Support covers “standalone” and Mesos clustering for the full lifecycle of your project, which means that we support Spark’s integration with Mesos in production environments, but not Mesos itself. Support includes unlimited incidents and answers for Spark. Incidents, bug fixes, and patches for Mesos are provided by Mesosphere. Issues involving integration with third-party tools that aren't part of Typesafe Reactive Platform or Mesos are not officially supported, but a best effort will be made.
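For reference, pointing a Spark application at a Mesos cluster is a matter of configuration. The sketch below, in Scala, shows the general shape; the hostname and application name are placeholders for illustration, not part of any Typesafe offering:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Run against a Mesos cluster by using a mesos:// master URL
// (5050 is the default Mesos master port). The hostname below
// is a placeholder for your own Mesos master.
val conf = new SparkConf()
  .setAppName("spark-on-mesos-example")
  .setMaster("mesos://mesos-master.example.com:5050")
val sc = new SparkContext(conf)
```

The same application can target a "standalone" cluster simply by switching to a spark:// master URL.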

Spark on YARN

Because YARN developers could also benefit from our Spark expertise, we provide development support for teams targeting YARN. Production YARN support is provided by the team's Hadoop vendor.

Expert Answers from Engineers

Your support questions are fielded by an engineer for validation and then passed to our Spark engineers for resolution.

Typical support questions may include:

  • When should I leverage Akka Streams rather than Spark Streaming?
  • How can I improve the performance of my Spark applications?
  • How can I monitor and debug my Spark applications?
  • Which resource manager should I use for my Spark cluster?
  • Which Scala libraries and tools will provide the best support for what I need in my Spark application?

Optional Training and Consulting

Typesafe offers an optional 10-day services package designed to accelerate your team’s path to productivity and ensure project success by focusing on areas where our expertise can offer you the highest value. Some examples of how we can help include:

  • Two-day Spark developer training to give developers the best opportunity for success
  • Architecture and code reviews to ensure your development efforts are on track
  • Production readiness review to ensure your application rollout is successful
  • Periodic health checks to keep applications tuned for continued success


Typesafe's Spark solution focuses on Big Data production deployments on Apache Mesos or Standalone Spark clusters, rather than Hadoop.

Why Spark and Mesos?

We believe Mesos is a winning long-term solution for Reactive Big Data applications, because Mesos is a next-generation resource manager with the flexibility to manage all applications and services in your entire infrastructure. You don’t need separate clusters, one for Hadoop YARN, one for Cassandra, etc. We believe this will be a more efficient approach for companies looking to leverage Spark for their Reactive applications. In fact, Mesos is being used at Apple and Twitter to optimize the utilization of their infrastructures.

Why Spark and Scala?

Spark is written in Scala. Scala is a language where data is treated as a first-class citizen. The collections library of the language reflects this—the APIs are designed to provide an easy and standard way to manipulate the data that they contain, based on common constructs that data scientists and developers alike are already familiar with. As a result, when the Spark team needed to pick a JVM language to implement Spark, Scala was a natural fit. Spark’s own API is a natural extension of the Scala collections library.
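As an illustration of that parallel, here is the same transformation written against a plain Scala collection and against a Spark RDD. This is a sketch, assuming a `SparkContext` named `sc` is in scope (as it is in `spark-shell`):

```scala
// Scala collections: filter and transform a local Seq
val local = Seq(1, 2, 3, 4, 5)
val localResult = local.filter(_ % 2 == 0).map(_ * 10)   // Seq(20, 40)

// Spark RDD: the same operations, distributed across a cluster.
// Assumes a SparkContext `sc` is already available.
val distributed = sc.parallelize(Seq(1, 2, 3, 4, 5))
val distributedResult =
  distributed.filter(_ % 2 == 0).map(_ * 10).collect()   // Array(20, 40)
```

The method names and chaining style are identical; the RDD version simply distributes the work and adds an action (`collect()`) to bring results back to the driver.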


To get started with the Typesafe Reactive Platform and Spark, we recommend checking out the following resources:

Code Samples & Tutorials

Spark Workshop

This Activator template teaches you how to write Apache Spark applications that analyze real data sets, using Spark's batch-mode, streaming, and SQL APIs.


Hello Apache Spark!

Apache Spark is a fast and general engine for large-scale data processing. This Typesafe Activator template will get you started with Spark.


Apache Spark in Action

A starter application with Apache Spark.


Reading Materials

Apache Spark: Preparing for the next wave of Reactive Big Data

Over 2,000 respondents answered a survey on Apache Spark usage and adoption, highlighting the industry's increasing demand for features like fast processing of large data sets and event streaming.


Getting Started with Spark

If you are exploring Big Data and wondering if Spark is right for your Reactive application, this white paper is for you. It provides an insightful overview of new trends in Big Data and includes handy diagrams of representative architectures.


Databricks Application Spotlight: Typesafe

Spark has emerged as the next-generation platform for writing Big Data applications for Hadoop and Mesos clusters. Part of Spark’s success is due to the foundation it is built upon, components of the Typesafe Reactive Platform.


Spark Programming Guide

Programming guide for getting started with Spark.


Videos & Webinars

Spark and the Typesafe Reactive Platform

Apache Spark has emerged as the next-generation platform for writing Big Data applications. The combination of Spark and the Typesafe Reactive Platform, including Scala, Akka, Play, and Slick, gives enterprise developers a comprehensive suite of tools for building "Certified on Spark" applications with minimal effort.


Apache Spark: Preparing for the Next Wave of Reactive Big Data

In this webinar, resident Big Data expert Dean Wampler presents the survey's findings and discusses why JVM developers should care about Big Data tooling in 2015.


Why Spark Is the Next Top (Compute) Model

In this presentation, Dean Wampler argues that Spark/Scala is a better data processing engine than MapReduce/Java because constructs inspired by mathematics, such as functional programming, are ideal tools for working with data.


Why Scala Is Taking Over the Big Data World

Those dealing with Big Data are increasingly won over by the power of Scala. In this presentation Dean Wampler, recognized Scala author and Big Data expert, explains why data-centric applications are driving Scala adoption.

(Registration with Skills Matter is required to view this content.)


Ensure the Success of Your Spark Application

If you plan to develop a commercial application, your business could benefit from a relationship with Typesafe. The Typesafe Together annual subscription program is designed to mitigate risk and ensure the successful launch and operation of your application by delivering certified builds and amazing service throughout the entire project lifecycle—from prototyping to production.

Get In Contact with Typesafe
  • White Paper

    Introducing the Typesafe Reactive Platform

    Learn about the Typesafe technology ecosystem

  • White Paper

    Getting Started with Spark

    New trends in Big Data and handy representative architectures

  • White Paper

    Introducing Typesafe ConductR

    A Reactive Application Manager for Operations