Updated: 2023-02-16 20:30:45
In simple cases you can provide an initial schema that is a superset of the expected schema. For example, in your case:
import spark.implicits._

// Assumed shape of MyType, inferred from the columns used here
// and the array<string> cast below
case class MyType(column1: Option[String], column2: Option[Seq[String]])

val schema = Seq.empty[MyType].toDF.schema

Seq("a", "b", "c").map(Option(_))
  .toDF("column1")
  .write.parquet("/tmp/column1only")

val df = spark.read.schema(schema).parquet("/tmp/column1only").as[MyType]
df.show
+-------+-------+
|column1|column2|
+-------+-------+
| a| null|
| b| null|
| c| null|
+-------+-------+
df.first
MyType = MyType(Some(a),None)
This approach can be a little fragile, so in general you should rather use SQL literals to fill in the blanks:
import org.apache.spark.sql.functions.lit

spark.read.parquet("/tmp/column1only")
  // or cast to ArrayType(StringType) if you build the type programmatically
  .withColumn("column2", lit(null).cast("array<string>"))
  .as[MyType]
  .first
MyType = MyType(Some(a),None)