且构网 - Sharing the ins and outs of programming and development

How to create a DataFrame from a dictionary where each item is a column in PySpark

Updated: 2022-12-08 23:11:57

Easiest way is to create a pandas DataFrame and convert to a Spark DataFrame:

import pandas as pd

col_dict = {'col1': [1, 2, 3],
            'col2': [4, 5, 6]}

pandas_df = pd.DataFrame(col_dict)
df = sqlCtx.createDataFrame(pandas_df)  # sqlCtx: an existing SQLContext (or a SparkSession in Spark 2+)
df.show()
#+----+----+
#|col1|col2|
#+----+----+
#|   1|   4|
#|   2|   5|
#|   3|   6|
#+----+----+
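As a quick sanity check that needs no Spark at all, the dict-of-lists already maps keys to columns in pandas; a minimal sketch:

```python
import pandas as pd

# Each dict key becomes a column name; each list becomes that column's values.
col_dict = {'col1': [1, 2, 3],
            'col2': [4, 5, 6]}
pandas_df = pd.DataFrame(col_dict)

print(list(pandas_df.columns))      # column names come from the dict keys
print(pandas_df['col2'].tolist())   # column values come from the dict's lists
```

Whatever structure the pandas DataFrame has is what createDataFrame() will carry over to the Spark side.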

Without Pandas

If pandas is not available, you'll just have to manipulate your data into a form that works for the createDataFrame() function. Quoting myself from a previous answer:

I find it's useful to think of the argument to createDataFrame() as a list of tuples where each entry in the list corresponds to a row in the DataFrame and each element of the tuple corresponds to a column.

colnames, data = zip(*col_dict.items())
print(colnames)
#('col1', 'col2')
print(data)
#([1, 2, 3], [4, 5, 6])

(In Python 3.7+, dictionaries preserve insertion order, so the columns come out in the order they were defined.)

Now we need to modify data so that it's a list of tuples, where each element contains the data for the corresponding column. Luckily, this is easy using zip:

data = list(zip(*data))  # in Python 3, zip returns an iterator, so materialize it
print(data)
#[(1, 4), (2, 5), (3, 6)]

Now call createDataFrame():

df = sqlCtx.createDataFrame(data, colnames)
df.show()
#+----+----+
#|col1|col2|
#+----+----+
#|   1|   4|
#|   2|   5|
#|   3|   6|
#+----+----+
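The two steps above (unzipping the dict, then transposing the column lists into row tuples) can be wrapped into one small pure-Python helper; dict_to_rows is a name introduced here for illustration, not a PySpark API:

```python
def dict_to_rows(col_dict):
    """Turn a {column_name: [values]} dict into (rows, column_names),
    the (data, schema) shape that createDataFrame() accepts."""
    names = list(col_dict.keys())
    rows = list(zip(*col_dict.values()))  # transpose columns into row tuples
    return rows, names

rows, names = dict_to_rows({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
print(names)  # ['col1', 'col2']
print(rows)   # [(1, 4), (2, 5), (3, 6)]
# df = sqlCtx.createDataFrame(rows, names)  # then build the Spark DataFrame as above
```

Note that zip truncates to the shortest list, so columns of unequal length will silently drop rows; validate lengths first if that matters.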