[Spark][Python] Building a key-value RDD

Updated: 2022-09-17 20:00:49



[training@localhost ~]$ cat users.txt
user001 Fred Flintstone
user090 Bugs Bunny
user111 Harry Potter
[training@localhost ~]$ hdfs dfs -put users.txt
[training@localhost ~]$ 
[training@localhost ~]$ 
[training@localhost ~]$ hdfs dfs -cat users.txt
user001 Fred Flintstone  <<<<<<<<<<<<<<<<<< fields are tab-separated
user090 Bugs Bunny
user111 Harry Potter
[training@localhost ~]$

user01 = sc.textFile("users.txt")

user02 = user01.map(lambda line : line.split("\t"))

In [16]: user02.take(3)
Out[16]: 
[[u'user001', u'Fred Flintstone'],
 [u'user090', u'Bugs Bunny'],
 [u'user111', u'Harry Potter']]
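The `split("\t")` step can be checked outside Spark with plain Python. The sketch below mirrors the three `users.txt` lines as literals (the variable names are illustrative, not part of the original transcript):

```python
# Plain-Python check of the split step applied by user02 above.
# Each sample line mirrors a row of users.txt; fields are tab-separated.
lines = [
    "user001\tFred Flintstone",
    "user090\tBugs Bunny",
    "user111\tHarry Potter",
]

# The same lambda that is passed to map() in the RDD version.
split_fn = lambda line: line.split("\t")
fields = [split_fn(line) for line in lines]
print(fields)  # list of [user_id, full_name] lists, as in Out[16]
```

Applying the lambda to each element of a list is exactly what `map()` does to each record of the RDD, just executed locally instead of across partitions.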

user03 = user02.map(lambda fields: (fields[0],fields[1]))

user03.take(3)

Out[20]: 
[(u'user001', u'Fred Flintstone'), <<<<<<<<<<<<<<<< key-value pairs are built here
 (u'user090', u'Bugs Bunny'),
 (u'user111', u'Harry Potter')]
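Once each record is a `(key, value)` tuple, Spark treats the RDD as a pair RDD and key-based operations (such as `lookup()`, `join()`, or `reduceByKey()`) become available. A minimal plain-Python sketch of the same pair construction, with a dict lookup standing in for a key-based operation (the input list and names are illustrative):

```python
# Build (key, value) tuples the same way user03's lambda does.
records = [
    ["user001", "Fred Flintstone"],
    ["user090", "Bugs Bunny"],
    ["user111", "Harry Potter"],
]
pairs = [(fields[0], fields[1]) for fields in records]

# With (key, value) pairs in hand, access by key is straightforward --
# the local analogue of pair-RDD operations like lookup().
by_key = dict(pairs)
print(by_key["user090"])  # Bugs Bunny
```

The same two-step shape (split into fields, then pick key and value) is the standard way to turn a delimited text file into a pair RDD ready for aggregation.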

This article was originally published on the 健哥的数据花园 blog on cnblogs; original link: http://www.cnblogs.com/gaojian/p/008-Aggregating-Data-with-Pair-RDDs.html. Please contact the original author before reposting.