Using non-English characters in a dataframe column in Spark Scala

Updated: 2023-11-18 22:56:46

Here is part of a file I am trying to load into a dataframe:

alphabet|Sentence|Comment1
è|Small e|None
Ü|Capital U|None
ã|Small a|
Ç|Capital C|None

When I load this file into a dataframe, all the non-English characters are converted into boxes. I tried option("encoding","UTF-8"), but there was no change.

val nonEnglishDF = spark.read.format("com.databricks.spark.csv").option("delimiter","|").option("header",true).option("encoding","UTF-8").load(hdfs file path)

Please let me know if there is a solution for this. I need to save the file at the end with the non-English characters unchanged. Currently, when the file is saved, it contains boxes or question marks instead of the non-English characters.

It works with option("encoding","ISO-8859-1"), e.g.:

val nonEnglishDF = spark.read.format("com.databricks.spark.csv").option("delimiter","|").option("header",true).option("encoding","ISO-8859-1").load(hdfs file path)
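For completeness, here is a minimal end-to-end sketch of the same idea on Spark 2+, using the built-in CSV source instead of the external com.databricks.spark.csv package. The paths and app name below are hypothetical placeholders; the key point is that the encoding option must match the actual encoding of the bytes on disk, on read and on write alike:

import org.apache.spark.sql.SparkSession

object NonEnglishCsv {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("NonEnglishCsv")
      .getOrCreate()

    // Hypothetical placeholder paths; substitute your own HDFS locations.
    val inputPath  = "hdfs:///data/alphabet.csv"
    val outputPath = "hdfs:///data/alphabet_out"

    // Read with the encoding that matches the file on disk (ISO-8859-1 here).
    val nonEnglishDF = spark.read
      .option("delimiter", "|")
      .option("header", "true")
      .option("encoding", "ISO-8859-1")
      .csv(inputPath)

    // With the right encoding, the accented characters should print correctly.
    nonEnglishDF.show(false)

    // Specify the encoding on write as well so the characters survive the round trip.
    nonEnglishDF.write
      .option("delimiter", "|")
      .option("header", "true")
      .option("encoding", "ISO-8859-1")
      .csv(outputPath)

    spark.stop()
  }
}

If the saved file still shows boxes or question marks, it is worth confirming the source file's real encoding first (for example with file -i on the raw file): passing UTF-8 for a file that is actually ISO-8859-1 mangles the accented characters at read time, before any write option can help.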