Joining Spark DataFrames on the key



Alias approach using Scala (this example is for an older version of Spark; for Spark 2.x, see my other answer):

You can use case classes to prepare a sample dataset, which is optional; you can also get a DataFrame from hiveContext.sql, for example.

import org.apache.spark.sql.functions.col

case class Person(name: String, age: Int, personid: Int)

case class Profile(name: String, personid: Int, profileDescription: String)

val df1 = sqlContext.createDataFrame(
  Person("Bindu", 20, 2)
    :: Person("Raphel", 25, 5)
    :: Person("Ram", 40, 9) :: Nil)

val df2 = sqlContext.createDataFrame(
  Profile("Spark", 2, "SparkSQLMaster")
    :: Profile("Spark", 5, "SparkGuru")
    :: Profile("Spark", 9, "DevHunter") :: Nil)

// you can use aliases to refer to column names, which increases readability

val df_asPerson = df1.as("dfperson")
val df_asProfile = df2.as("dfprofile")


val joined_df = df_asPerson.join(
    df_asProfile
, col("dfperson.personid") === col("dfprofile.personid")
, "inner")


joined_df.select(
  col("dfperson.name")
, col("dfperson.age")
, col("dfprofile.name")
, col("dfprofile.profileDescription"))
.show
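With the sample data above, the select should print something like this (row order may vary):

+------+---+-----+------------------+
|  name|age| name|profileDescription|
+------+---+-----+------------------+
| Bindu| 20|Spark|    SparkSQLMaster|
|Raphel| 25|Spark|         SparkGuru|
|   Ram| 40|Spark|         DevHunter|
+------+---+-----+------------------+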


Sample temp table approach, which I don't like personally...


The reason to use the registerTempTable( tableName ) method for a DataFrame is that, in addition to being able to use the Spark-provided methods of a DataFrame, you can also issue SQL queries via the sqlContext.sql( sqlQuery ) method that use that DataFrame as an SQL table. The tableName parameter specifies the table name to use for that DataFrame in the SQL queries.

df_asPerson.registerTempTable("dfperson");
df_asProfile.registerTempTable("dfprofile")

sqlContext.sql("""SELECT dfperson.name, dfperson.age, dfprofile.profileDescription
                  FROM  dfperson JOIN  dfprofile
                  ON dfperson.personid == dfprofile.personid""")


If you want to know more about joins, please see this nice post: beyond-traditional-join-with-apache-spark

Note: 1) As mentioned by @RaphaelRoth, val resultDf = PersonDf.join(ProfileDf, Seq("personId")) is a good approach, since it doesn't produce duplicate columns from both sides when you do an inner join on the same key.
2) A Spark 2.x example is available in my other answer, covering the full set of join operations supported by Spark 2.x, with examples and results.
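As a minimal sketch of why that matters, using the df1 and df2 defined above (the question's PersonDf/ProfileDf are assumed to have the same shape; column names follow the case classes):

// Joining on a Seq of column names keeps a single personid column,
// whereas the expression-based join above retains both
// dfperson.personid and dfprofile.personid.
val resultDf = df1.join(df2, Seq("personid"))
resultDf.printSchema()
// root
//  |-- personid: integer (nullable = false)
//  |-- name: string (nullable = true)        <- from df1
//  |-- age: integer (nullable = false)
//  |-- name: string (nullable = true)        <- from df2
//  |-- profileDescription: string (nullable = true)

Note that only the join key is deduplicated; non-key columns with the same name, like the two name columns here, still both appear.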

Tip:

Also, an important point for joins: the broadcast function can help as a join hint; please see my answer.
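A minimal sketch of such a hint, reusing the aliased DataFrames from above and assuming df_asProfile is small enough to be replicated to every executor:

import org.apache.spark.sql.functions.broadcast

// broadcast() marks the smaller side so Spark can perform a broadcast
// hash join (map-side join) instead of shuffling both DataFrames.
val broadcastJoined = df_asPerson.join(
  broadcast(df_asProfile),
  col("dfperson.personid") === col("dfprofile.personid"),
  "inner")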