You can achieve this by using a pyspark.sql.Window which orders by DateTime, together with pyspark.sql.functions.concat_ws and pyspark.sql.functions.collect_list.
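For reference, here is a minimal sketch of the input data. This is an assumption reconstructed from the sample output: the name, DateTime, and Seq values appear there, while the sessionCount and row_number values are placeholders.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# Reconstructed sample data (assumed); sessionCount and row_number
# values are placeholders, and both columns are dropped before display.
df = spark.createDataFrame(
    [
        ("abc", 1521572913344, 17, 5, 1),
        ("xyz", 1521572916109, 17, 5, 2),
        ("rafa", 1521572916118, 17, 5, 3),
        ("{}", 1521572916129, 17, 5, 4),
        ("experience", 1521572917816, 17, 5, 5),
    ],
    ["name", "DateTime", "Seq", "sessionCount", "row_number"],
)

With that in place: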
import pyspark.sql.functions as f
from pyspark.sql import Window
w = Window.orderBy("DateTime") # define Window for ordering
df.drop("Seq", "sessionCount", "row_number").select(
"*",
f.concat_ws(
"",
f.collect_list(f.col("name")).over(w)
).alias("effective_name")
).show(truncate=False)
#+----------+-------------+----------------------+
#|name      |DateTime     |effective_name        |
#+----------+-------------+----------------------+
#|abc       |1521572913344|abc                   |
#|xyz       |1521572916109|abcxyz                |
#|rafa      |1521572916118|abcxyzrafa            |
#|{}        |1521572916129|abcxyzrafa{}          |
#|experience|1521572917816|abcxyzrafa{}experience|
#+----------+-------------+----------------------+
I dropped "Seq", "sessionCount", and "row_number" to make the output display friendlier.
If you need to do this per group, you can add a partitionBy to the Window. Say in this case you want to group by Seq; you can do the following:
w = Window.partitionBy("Seq").orderBy("DateTime")  # define Window per group
df.drop("sessionCount", "row_number").select(
    "*",
    f.concat_ws(
        "",
        f.collect_list(f.col("name")).over(w)
    ).alias("effective_name")
).show(truncate=False)
#+----------+-------------+---+----------------------+
#|name      |DateTime     |Seq|effective_name        |
#+----------+-------------+---+----------------------+
#|abc       |1521572913344|17 |abc                   |
#|xyz       |1521572916109|17 |abcxyz                |
#|rafa      |1521572916118|17 |abcxyzrafa            |
#|{}        |1521572916129|17 |abcxyzrafa{}          |
#|experience|1521572917816|17 |abcxyzrafa{}experience|
#+----------+-------------+---+----------------------+
If you prefer to use withColumn, the above is equivalent to:
df.drop("sessionCount", "row_number").withColumn(
"effective_name",
f.concat_ws(
"",
f.collect_list(f.col("name")).over(w)
)
).show(truncate=False)
Explanation
You want to apply a function over multiple rows, which is called an aggregation. With any aggregation, you need to define which rows to aggregate over and in what order. We do this using a Window. In this case, w = Window.partitionBy("Seq").orderBy("DateTime") will partition the data by Seq and sort by DateTime.
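One detail worth making explicit (this is standard Spark window behavior, not something specific to this answer): when a Window has an orderBy but no explicit frame, the frame defaults to a running one, from the start of the partition up to the current row. Spelling that default out gives an equivalent window definition:

# Equivalent Window with the default running frame written out:
# everything from the start of the partition up to the current row.
w = (
    Window.partitionBy("Seq")
    .orderBy("DateTime")
    .rangeBetween(Window.unboundedPreceding, Window.currentRow)
)

This default is why each row sees only the names up to and including itself, rather than the whole partition at once.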
We first apply the aggregate function collect_list("name") over the window. This gathers all of the values from the name column and puts them in a list. The order of insertion is defined by the Window's ordering.
For example, the intermediate output of this step would be:
df.select(
    f.collect_list("name").over(w).alias("collected")
).show(truncate=False)
#+--------------------------------+
#|collected                       |
#+--------------------------------+
#|[abc]                           |
#|[abc, xyz]                      |
#|[abc, xyz, rafa]                |
#|[abc, xyz, rafa, {}]            |
#|[abc, xyz, rafa, {}, experience]|
#+--------------------------------+
Now that the appropriate values are in the list, we can concatenate them together with an empty string as the separator.
df.select(
    f.concat_ws(
        "",
        f.collect_list("name").over(w)
    ).alias("concatenated")
).show(truncate=False)
#+----------------------+
#|concatenated          |
#+----------------------+
#|abc                   |
#|abcxyz                |
#|abcxyzrafa            |
#|abcxyzrafa{}          |
#|abcxyzrafa{}experience|
#+----------------------+
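One closing caveat (general Spark behavior): the very first example uses a Window with an orderBy but no partitionBy, which forces Spark to pull all rows into a single partition to compute the window, and Spark logs a performance warning when it does so. Whenever a grouping column like Seq exists, prefer the partitionBy variant.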