Comparing TypedDatasets with Spark's Datasets

Goal: This tutorial compares the standard Spark Datasets API with the one provided by Frameless' TypedDataset. It shows how TypedDatasets allow for an expressive and type-safe API with no compromises on performance.

For this tutorial we first create a simple dataset and save it on disk as a parquet file. Parquet is a popular columnar format that is well supported by Spark. It's important to note that when operating on parquet datasets, Spark knows that each column is stored separately, so if we only need a subset of the columns Spark will optimize for this and avoid reading the entire dataset. This is a rather simplistic view of how Spark and parquet work together, but it will serve us well for this discussion.

import spark.implicits._
// import spark.implicits._

// Our example case class Foo acting here as a schema
case class Foo(i: Long, j: String)
// defined class Foo

// Assuming spark is loaded and the SparkSession is bound to the variable spark
val initialDs = spark.createDataset( Foo(1, "Q") :: Foo(10, "W") :: Foo(100, "E") :: Nil )
// initialDs: org.apache.spark.sql.Dataset[Foo] = [i: bigint, j: string]

// Assuming you are on Linux or Mac OS
initialDs.write.parquet("/tmp/foo")

val ds = spark.read.parquet("/tmp/foo").as[Foo]
// ds: org.apache.spark.sql.Dataset[Foo] = [i: bigint, j: string]

ds.show()
// +---+---+
// |  i|  j|
// +---+---+
// |100|  E|
// | 10|  W|
// |  1|  Q|
// +---+---+
//

The value ds holds the content of initialDs read back from the parquet file. Let's try to use only field i from Foo and see how Spark's Catalyst (the query optimizer) optimizes this.

// Using a standard Spark TypedColumn in select()
val filteredDs = ds.filter($"i" === 10).select($"i".as[Long])
// filteredDs: org.apache.spark.sql.Dataset[Long] = [i: bigint]

filteredDs.show()
// +---+
// |  i|
// +---+
// | 10|
// +---+
//

The filteredDs is of type Dataset[Long]. Since we only access field i from Foo, the type is correct. Unfortunately, this syntax requires handholding: we have to explicitly set the TypedColumn in the select statement to return type Long (note the as[Long] cast). We will discuss this limitation in more detail shortly. Now, let's take a quick look at the optimized Physical Plan that Spark's Catalyst generated.

filteredDs.explain()
// == Physical Plan ==
// *(1) Filter (isnotnull(i#23L) AND (i#23L = 10))
// +- *(1) ColumnarToRow
//    +- FileScan parquet [i#23L] Batched: true, DataFilters: [isnotnull(i#23L), (i#23L = 10)], Format: Parquet, Location: InMemoryFileIndex[file:/tmp/foo], PartitionFilters: [], PushedFilters: [IsNotNull(i), EqualTo(i,10)], ReadSchema: struct<i:bigint>
// 
//

The last line is very important (see ReadSchema). The schema read from the parquet file requires reading only column i; column j is never accessed. This is great! We have both an optimized query plan and type-safety!
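Note, however, that nothing forces the as[...] cast to match the column's actual type. The following sketch (our own illustration, reusing the ds from above) compiles even though it reinterprets the bigint column i as Double; the compiler silently accepts the mismatch:

```scala
// The compiler accepts any encodable target type here: as[Double]
// type-checks even though column i is a bigint, so a mismatch like
// this slips through to run time unnoticed.
val miscastDs = ds.filter($"i" === 10).select($"i".as[Double])
```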

Unfortunately, this syntax is not bulletproof: it fails at run time if we try to access a non-existent column x:

scala> ds.filter($"i" === 10).select($"x".as[Long])
org.apache.spark.sql.AnalysisException: cannot resolve '`x`' given input columns: [i, j];
'Project ['x]
+- Filter (i#23L = cast(10 as bigint))
   +- Relation[i#23L,j#24] parquet

  at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$$nestedInanonfun$checkAnalysis$1$2.applyOrElse(CheckAnalysis.scala:155)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$$nestedInanonfun$checkAnalysis$1$2.applyOrElse(CheckAnalysis.scala:152)
  at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformUp$2(TreeNode.scala:341)
  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:73)
  at org.apache.spark.sql.catalyst.trees.TreeNode.transformUp(TreeNode.scala:341)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$transformExpressionsUp$1(QueryPlan.scala:104)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$1(QueryPlan.scala:116)
  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:73)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:116)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:127)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$3(QueryPlan.scala:132)
  at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:285)
  at scala.collection.immutable.List.foreach(List.scala:431)
  at scala.collection.TraversableLike.map(TraversableLike.scala:285)
  at scala.collection.TraversableLike.map$(TraversableLike.scala:278)
  at scala.collection.immutable.List.map(List.scala:305)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.recursiveTransform$1(QueryPlan.scala:132)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.$anonfun$mapExpressions$4(QueryPlan.scala:137)
  at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:243)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:137)
  at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsUp(QueryPlan.scala:104)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1(CheckAnalysis.scala:152)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1$adapted(CheckAnalysis.scala:93)
  at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:183)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:93)
  at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:90)
  at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:154)
  at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:175)
  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:228)
  at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:172)
  at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:73)
  at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
  at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:143)
  at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
  at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:143)
  at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:73)
  at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:71)
  at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:63)
  at org.apache.spark.sql.Dataset.<init>(Dataset.scala:210)
  at org.apache.spark.sql.Dataset.<init>(Dataset.scala:216)
  at org.apache.spark.sql.Dataset.select(Dataset.scala:1517)
  ... 42 elided

There are two things to improve here. First, we want to avoid the as[Long] cast that we are required to write for type-safety. This is clearly an area where we may introduce a bug by casting to an incompatible type. Second, we want a solution where a reference to a non-existent column name fails at compile time. The standard Spark Dataset can achieve this using the following syntax.

ds.filter(_.i == 10).map(_.i).show()
// +-----+
// |value|
// +-----+
// |   10|
// +-----+
//

This looks great! It reminds us of the familiar syntax from Scala. The two closures in filter and map are functions that operate on Foo, and the compiler will help us catch all the mistakes mentioned above.

scala> ds.filter(_.i == 10).map(_.x).show()
<console>:20: error: value x is not a member of Foo
       ds.filter(_.i == 10).map(_.x).show()
                                  ^

Unfortunately, this syntax does not allow Spark to optimize the code.

ds.filter(_.i == 10).map(_.i).explain()
// == Physical Plan ==
// *(1) SerializeFromObject [input[0, bigint, false] AS value#74L]
// +- *(1) MapElements <function1>, obj#73: bigint
//    +- *(1) Filter <function1>.apply
//       +- *(1) DeserializeToObject newInstance(class $line14.$read$$iw$$iw$$iw$$iw$Foo), obj#72: $line14.$read$$iw$$iw$$iw$$iw$Foo
//          +- *(1) ColumnarToRow
//             +- FileScan parquet [i#23L,j#24] Batched: true, DataFilters: [], Format: Parquet, Location: InMemoryFileIndex[file:/tmp/foo], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<i:bigint,j:string>
// 
//

As we see from the explained Physical Plan, Spark was not able to optimize our query as before. Reading the parquet file now requires loading all the fields of Foo. This might be OK for small datasets or for datasets with few columns, but it will be extremely slow for most practical applications. Intuitively, Spark currently has no way to look inside the code we pass in these two closures. It only knows that they both take one argument of type Foo, but it has no way of knowing whether we use just one or all of Foo's fields.
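A common mitigation in vanilla Spark (a sketch of our own, not part of the example above) is to project the needed columns with the column-based API before dropping into typed closures. The closure is still opaque to Catalyst, so the filter is not pushed down, but the ReadSchema should now contain only column i:

```scala
// Project column i with the untyped API first, so Catalyst can prune
// the scan down to struct<i:bigint>, then apply the typed closure on
// the resulting Dataset[Long].
ds.select($"i".as[Long]).filter(_ == 10).show()
```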

The TypedDataset in Frameless solves this problem. It allows for a simple and type-safe syntax with a fully optimized query plan.

import frameless.TypedDataset
// import frameless.TypedDataset

import frameless.syntax._
// import frameless.syntax._

val fds = TypedDataset.create(ds)
// fds: frameless.TypedDataset[Foo] = [i: bigint, j: string]

fds.filter(fds('i) === 10).select(fds('i)).show().run()
// +-----+
// |value|
// +-----+
// |   10|
// +-----+
//

And the optimized Physical Plan:

fds.filter(fds('i) === 10).select(fds('i)).explain()
// == Physical Plan ==
// *(1) Project [i#23L AS value#158L]
// +- *(1) Filter (isnotnull(i#23L) AND (i#23L = 10))
//    +- *(1) ColumnarToRow
//       +- FileScan parquet [i#23L] Batched: true, DataFilters: [isnotnull(i#23L), (i#23L = 10)], Format: Parquet, Location: InMemoryFileIndex[file:/tmp/foo], PartitionFilters: [], PushedFilters: [IsNotNull(i), EqualTo(i,10)], ReadSchema: struct<i:bigint>
// 
//

And the compiler is our friend.

scala> fds.filter(fds('i) === 10).select(fds('x))
<console>:24: error: No column Symbol with shapeless.tag.Tagged[String("x")] of type A in Foo
       fds.filter(fds('i) === 10).select(fds('x))
                                            ^

Differences in Encoders

Encoders in Spark's Datasets are partially type-safe. If you try to create a Dataset using a type that is not a Scala Product then you get a compilation error:

class Bar(i: Int)
// defined class Bar

Bar is neither a case class nor a Product, so the following correctly gives a compilation error in Spark:

scala> spark.createDataset(Seq(new Bar(1)))
<console>:24: error: Unable to find encoder for type Bar. An implicit Encoder[Bar] is needed to store Bar instances in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._  Support for serializing other types will be added in future releases.
       spark.createDataset(Seq(new Bar(1)))
                          ^

However, the compile-time guards implemented in Spark are not sufficient to detect non-encodable members. For example, using the following case class leads to a runtime failure:

case class MyDate(jday: java.util.Date)
// defined class MyDate
val myDateDs = spark.createDataset(Seq(MyDate(new java.util.Date(System.currentTimeMillis))))
// java.lang.UnsupportedOperationException: No Encoder found for java.util.Date
// - field (class: "java.util.Date", name: "jday")
// - root class: "MyDate"
//   at org.apache.spark.sql.catalyst.ScalaReflection$.$anonfun$serializerFor$1(ScalaReflection.scala:591)
//   at scala.reflect.internal.tpe.TypeConstraints$UndoLog.undo(TypeConstraints.scala:73)
//   at org.apache.spark.sql.catalyst.ScalaReflection.cleanUpReflectionObjects(ScalaReflection.scala:904)
//   at org.apache.spark.sql.catalyst.ScalaReflection.cleanUpReflectionObjects$(ScalaReflection.scala:903)
//   at org.apache.spark.sql.catalyst.ScalaReflection$.cleanUpReflectionObjects(ScalaReflection.scala:49)
//   at org.apache.spark.sql.catalyst.ScalaReflection$.serializerFor(ScalaReflection.scala:432)
//   at org.apache.spark.sql.catalyst.ScalaReflection$.$anonfun$serializerFor$6(ScalaReflection.scala:577)
//   at scala.collection.immutable.List.map(List.scala:293)
//   at org.apache.spark.sql.catalyst.ScalaReflection$.$anonfun$serializerFor$1(ScalaReflection.scala:562)
//   at scala.reflect.internal.tpe.TypeConstraints$UndoLog.undo(TypeConstraints.scala:73)
//   at org.apache.spark.sql.catalyst.ScalaReflection.cleanUpReflectionObjects(ScalaReflection.scala:904)
//   at org.apache.spark.sql.catalyst.ScalaReflection.cleanUpReflectionObjects$(ScalaReflection.scala:903)
//   at org.apache.spark.sql.catalyst.ScalaReflection$.cleanUpReflectionObjects(ScalaReflection.scala:49)
//   at org.apache.spark.sql.catalyst.ScalaReflection$.serializerFor(ScalaReflection.scala:432)
//   at org.apache.spark.sql.catalyst.ScalaReflection$.$anonfun$serializerForType$1(ScalaReflection.scala:421)
//   at scala.reflect.internal.tpe.TypeConstraints$UndoLog.undo(TypeConstraints.scala:73)
//   at org.apache.spark.sql.catalyst.ScalaReflection.cleanUpReflectionObjects(ScalaReflection.scala:904)
//   at org.apache.spark.sql.catalyst.ScalaReflection.cleanUpReflectionObjects$(ScalaReflection.scala:903)
//   at org.apache.spark.sql.catalyst.ScalaReflection$.cleanUpReflectionObjects(ScalaReflection.scala:49)
//   at org.apache.spark.sql.catalyst.ScalaReflection$.serializerForType(ScalaReflection.scala:413)
//   at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$.apply(ExpressionEncoder.scala:56)
//   at org.apache.spark.sql.Encoders$.product(Encoders.scala:285)
//   at org.apache.spark.sql.LowPrioritySQLImplicits.newProductEncoder(SQLImplicits.scala:251)
//   at org.apache.spark.sql.LowPrioritySQLImplicits.newProductEncoder$(SQLImplicits.scala:251)
//   at org.apache.spark.sql.SQLImplicits.newProductEncoder(SQLImplicits.scala:32)
//   ... 42 elided

In comparison, a TypedDataset reports the encoding problem at compile time:

TypedDataset.create(Seq(MyDate(new java.util.Date(System.currentTimeMillis))))
// <console>:25: error: could not find implicit value for parameter encoder: frameless.TypedEncoder[MyDate]
//        TypedDataset.create(Seq(MyDate(new java.util.Date(System.currentTimeMillis))))
//                           ^
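When a field like java.util.Date really is needed, frameless lets you supply the missing encoder yourself via an Injection into an already-encodable type. The sketch below uses frameless' Injection API, but this particular instance is our own assumption (it represents dates as epoch milliseconds); with it in scope, TypedDataset.create compiles for MyDate:

```scala
import frameless._

// Inject java.util.Date into Long (epoch milliseconds) and back.
// With this implicit in scope, frameless can derive
// TypedEncoder[java.util.Date], and hence TypedEncoder[MyDate].
implicit val dateToLongInjection: Injection[java.util.Date, Long] =
  Injection(_.getTime, new java.util.Date(_))

val myDateTds = TypedDataset.create(Seq(MyDate(new java.util.Date(System.currentTimeMillis))))
```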

Aggregate vs Projected columns

Spark's Datasets do not distinguish between columns created from aggregate operations, such as summing or averaging, and simple projections/selections. This is problematic when you start mixing the two.

import org.apache.spark.sql.functions.sum
// import org.apache.spark.sql.functions.sum
ds.select(sum($"i"), $"i"*2)
// org.apache.spark.sql.AnalysisException: grouping expressions sequence is empty, and '`i`' is not an aggregate function. Wrap '(sum(`i`) AS `sum(i)`)' in windowing function(s) or wrap '`i`' in first() (or first_value) if you don't care which value you get.;
// Aggregate [sum(i#23L) AS sum(i)#164L, (i#23L * cast(2 as bigint)) AS (i * 2)#165L]
// +- Relation[i#23L,j#24] parquet
// 
//   at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.failAnalysis(CheckAnalysis.scala:50)
//   at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.failAnalysis$(CheckAnalysis.scala:49)
//   at org.apache.spark.sql.catalyst.analysis.Analyzer.failAnalysis(Analyzer.scala:154)
//   at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkValidAggregateExpression$1(CheckAnalysis.scala:263)
//   at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$12(CheckAnalysis.scala:272)
//   at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$12$adapted(CheckAnalysis.scala:272)
//   at scala.collection.immutable.List.foreach(List.scala:431)
//   at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkValidAggregateExpression$1(CheckAnalysis.scala:272)
//   at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$12(CheckAnalysis.scala:272)
//   at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$12$adapted(CheckAnalysis.scala:272)
//   at scala.collection.immutable.List.foreach(List.scala:431)
//   at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkValidAggregateExpression$1(CheckAnalysis.scala:272)
//   at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$15(CheckAnalysis.scala:299)
//   at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$15$adapted(CheckAnalysis.scala:299)
//   at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
//   at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
//   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
//   at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1(CheckAnalysis.scala:299)
//   at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$1$adapted(CheckAnalysis.scala:93)
//   at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:183)
//   at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:93)
//   at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:90)
//   at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:154)
//   at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:175)
//   at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:228)
//   at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:172)
//   at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:73)
//   at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
//   at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:143)
//   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
//   at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:143)
//   at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:73)
//   at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:71)
//   at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:63)
//   at org.apache.spark.sql.Dataset$.$anonfun$ofRows$1(Dataset.scala:90)
//   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
//   at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:88)
//   at org.apache.spark.sql.Dataset.withPlan(Dataset.scala:3715)
//   at org.apache.spark.sql.Dataset.select(Dataset.scala:1462)
//   ... 42 elided

In Frameless, mixing the two results in a compilation error.

// To avoid confusing frameless' sum with the standard Spark's sum
import frameless.functions.aggregate.{sum => fsum}
// import frameless.functions.aggregate.{sum=>fsum}
fds.select(fsum(fds('i)))
// <console>:26: error: polymorphic expression cannot be instantiated to expected type;
//  found   : [Out]frameless.TypedAggregate[Foo,Out]
//  required: frameless.TypedColumn[Foo,?]
//        fds.select(fsum(fds('i)))
//                       ^

As the error suggests, we expected a TypedColumn but we got a TypedAggregate instead.

Here is how you apply an aggregation method in Frameless:

fds.agg(fsum(fds('i))+22).show().run()
// +-----+
// |value|
// +-----+
// |  133|
// +-----+
//

Similarly, mixing projections with aggregations does not make sense, and in Frameless you get a compilation error.

fds.agg(fsum(fds('i)), fds('i)).show().run()
// <console>:26: error: polymorphic expression cannot be instantiated to expected type;
//  found   : [A]frameless.TypedColumn[Foo,A]
//  required: frameless.TypedAggregate[Foo,?]
//        fds.agg(fsum(fds('i)), fds('i)).show().run()
//                                  ^
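If you do need a projected column alongside an aggregate, the usual pattern (a sketch assuming the fds and fsum from above) is to group by the projected column and aggregate within each group:

```scala
// Group by column j and sum i within each group. The projected key
// and the aggregated value are combined in a type-safe way as a
// TypedDataset[(String, Long)].
fds.groupBy(fds('j)).agg(fsum(fds('i))).show().run()
```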
