Updated: 2021-07-31 05:51:35
Here is a way to do it without using a udf:
# create example DataFrame
import pyspark.sql.functions as f
from pyspark.sql.types import StructType, StructField, IntegerType

data = [({'fld': 0},)]
schema = StructType(
    [
        StructField(
            'state',
            StructType([StructField('fld', IntegerType())])
        )
    ]
)
# sqlCtx is a SQLContext; on Spark 2.0+ you can call spark.createDataFrame instead
df = sqlCtx.createDataFrame(data, schema)
df.printSchema()
#root
# |-- state: struct (nullable = true)
# | |-- fld: integer (nullable = true)
Now use withColumn() and add the new field using lit() and alias().
val = 1
df_new = df.withColumn(
'state',
f.struct(*[f.col('state')['fld'].alias('fld'), f.lit(val).alias('a')])
)
df_new.printSchema()
#root
# |-- state: struct (nullable = false)
# | |-- fld: integer (nullable = true)
# | |-- a: integer (nullable = false)
If you have a lot of fields in the nested struct, you can use a list comprehension, using df.schema["state"].dataType.names to get the field names. For example:
val = 1
s_fields = df.schema["state"].dataType.names # ['fld']
df_new = df.withColumn(
'state',
f.struct(*([f.col('state')[c].alias(c) for c in s_fields] + [f.lit(val).alias('a')]))
)
df_new.printSchema()
#root
# |-- state: struct (nullable = false)
# | |-- fld: integer (nullable = true)
# | |-- a: integer (nullable = false)