DataFrame

To create a DataFrame from an RDD of Rows, there are two main options:

1) As already pointed out, you could use toDF(), which becomes available once you import sqlContext.implicits._. However, this approach only works for the following types of RDDs:

```scala
RDD[Int]
RDD[Long]
RDD[String]
RDD[T <: scala.Product]
```

(source: Scaladoc of the SQLContext.implicits object)

The last signature actually means that it can work for an RDD of tuples or an RDD of case classes (because tuples and case classes are subclasses of scala.Product).

So, to use this approach for an RDD[Row], you have to map it to an RDD[T <: scala.Product]. This can be done by mapping each row to a custom case class or to a tuple, as in the following code snippets:

```scala
val df = rdd.map({
  case Row(val1: String, ..., valN: Long) => (val1, ..., valN)
}).toDF("col1_name", ..., "colN_name")
```

or

```scala
case class MyClass(val1: String, ..., valN: Long = 0L)

val df = rdd.map({
  case Row(val1: String, ..., valN: Long) => MyClass(val1, ..., valN)
}).toDF("col1_name", ..., "colN_name")
```

The main drawback of this approach (in my opinion) is that you have to explicitly set the schema of the resulting DataFrame in the map function, column by column. Maybe this can be done programmatically if you don't know the schema in advance, but things can get a little messy there.
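For concreteness, here is a minimal, self-contained sketch of the tuple variant; the input RDD[Row], column names, and values are made up for illustration, and sc/sqlContext are assumed to be in scope:

```scala
import org.apache.spark.sql.Row

// Hypothetical input: an RDD[Row] with a String column and a Long column
val rdd = sc.parallelize(Seq(Row("alice", 1L), Row("bob", 2L)))

// toDF on an RDD of tuples requires the implicits
import sqlContext.implicits._

val df = rdd.map {
  case Row(name: String, count: Long) => (name, count)
}.toDF("name", "count")
```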

2) Alternatively, you can use createDataFrame(rowRDD: RDD[Row], schema: StructType), as in the accepted answer, which is available on the SQLContext. Example of converting the RDD of an old DataFrame:

```scala
val rdd = oldDF.rdd
val newDF = oldDF.sqlContext.createDataFrame(rdd, oldDF.schema)
```

Note that there is no need to explicitly set any schema column. We reuse the old DF's schema, which is of the StructType class and can be easily extended. However, this approach sometimes is not possible, and in some cases can be less efficient than the first one.
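To illustrate the point about extending the schema, here is a sketch that appends one column; the column name and default value are made up:

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{LongType, StructField, StructType}

// Append a hypothetical Long column to the old schema
val extendedSchema = StructType(
  oldDF.schema.fields :+ StructField("new_col", LongType, nullable = false))

// Widen each Row to match the extended schema (here with a default value)
val extendedRdd = oldDF.rdd.map(r => Row.fromSeq(r.toSeq :+ 0L))

val extendedDF = oldDF.sqlContext.createDataFrame(extendedRdd, extendedSchema)
```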

Example of retrieving row values

```scala
val transactions = Seq((1, 2), (1, 4), (2, 3)).toDF("user_id", "category_id")

val transactions_with_counts = transactions
  .groupBy($"user_id", $"category_id")
  .count

transactions_with_counts.printSchema
// root
//  |-- user_id: integer (nullable = false)
//  |-- category_id: integer (nullable = false)
//  |-- count: long (nullable = false)
```

There are a few ways to access Row values and keep expected types:

Pattern matching

```scala
import org.apache.spark.sql.Row

// Target type used by the examples below (defined here so the snippets compile)
case class Rating(user_id: Int, category_id: Int, rating: Long)

transactions_with_counts.map {
  case Row(user_id: Int, category_id: Int, rating: Long) =>
    Rating(user_id, category_id, rating)
}
```

Typed get* methods like getInt, getLong:

```scala
transactions_with_counts.map(r => Rating(r.getInt(0), r.getInt(1), r.getLong(2)))
```

getAs method which can use both names and indices:

```scala
transactions_with_counts.map(r => Rating(
  r.getAs[Int]("user_id"),
  r.getAs[Int]("category_id"),
  r.getAs[Long]("count")
))
```

It can be used to properly extract user-defined types, including mllib.linalg.Vector. Obviously, accessing by name requires a schema.
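For instance, extracting a vector column might look like the following minimal sketch; the DataFrame and its column names are made up for illustration:

```scala
import org.apache.spark.mllib.linalg.{Vector, Vectors}

// Hypothetical DataFrame with a vector column
val points = Seq(
  (0.0, Vectors.dense(1.0, 2.0)),
  (1.0, Vectors.dense(3.0, 4.0))
).toDF("label", "features")

points.rdd.map(r => r.getAs[Vector]("features"))
```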

Converting to statically typed Dataset (Spark 1.6+ / 2.0+):

```scala
transactions_with_counts.as[(Int, Int, Long)]
```
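Once converted, the Dataset can be processed with statically typed fields; a minimal sketch:

```scala
val ds = transactions_with_counts.as[(Int, Int, Long)]

// The fields now have static types, so this transformation is checked at compile time
ds.map { case (userId, categoryId, count) => (userId, count) }
```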

Inferring the Schema Using Reflection

The Scala interface for Spark SQL supports automatically converting an RDD containing case classes to a DataFrame. The case class defines the schema of the table. The names of the arguments to the case class are read using reflection and become the names of the columns. Case classes can also be nested or contain complex types such as Sequences or Arrays. This RDD can be implicitly converted to a DataFrame and then be registered as a table.
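A minimal sketch of this approach, following the pattern in the Spark SQL guide; the Person class, file path, and table name are illustrative:

```scala
// Assumes sc: SparkContext and sqlContext: SQLContext are in scope
import sqlContext.implicits._

// The case class defines the schema: its field names become the column names
case class Person(name: String, age: Int)

// Hypothetical input file with lines like "Michael, 29"
val people = sc.textFile("examples/src/main/resources/people.txt")
  .map(_.split(","))
  .map(p => Person(p(0), p(1).trim.toInt))
  .toDF()

// Register the DataFrame as a table and query it with SQL
people.registerTempTable("people")
val teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
```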

Programmatically Specifying the Schema

When case classes cannot be defined ahead of time (for example, the structure of records is encoded in a string, or a text dataset will be parsed and fields will be projected differently for different users), a DataFrame can be created programmatically in three steps, as sketched after the list below.

1. Create an RDD of Rows from the original RDD;
2. Create the schema represented by a StructType matching the structure of Rows in the RDD created in Step 1;
3. Apply the schema to the RDD of Rows via the createDataFrame method provided by SQLContext.
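A minimal sketch of these three steps, following the pattern in the Spark SQL guide; the file path and field names are illustrative:

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Hypothetical input file with lines like "Michael, 29"
val people = sc.textFile("examples/src/main/resources/people.txt")

// The schema is encoded in a string (e.g. read from configuration)
val schemaString = "name age"

// Step 2: create the schema represented by a StructType
val schema = StructType(
  schemaString.split(" ").map(fieldName => StructField(fieldName, StringType, nullable = true)))

// Step 1: convert the records of the original RDD to Rows
val rowRDD = people.map(_.split(",")).map(p => Row(p(0), p(1).trim))

// Step 3: apply the schema to the RDD of Rows
val peopleDataFrame = sqlContext.createDataFrame(rowRDD, schema)
```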
