I'm new to PySpark and I'm trying to use the word_tokenize() function. Here is my code:

import nltk
from nltk import word_tokenize
import pandas as pd

df_pd = df2.select("*").toPandas()
df2.select('text').apply(word_tokenize)
df_pd.show()

I'm using JDK 1.8, Python 3.7, and Spark 2.4.3. Can you tell me what I'm doing wrong, and how to fix it? The code below this part runs fine without any errors. The message I get is:

Py4JJavaError: An error occurred while calling o106.collectToPython.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 14.0 failed 1 times, most recent failure: Lost task 0.0 in stage 14.0 (TID 330, localhost, executor driver): java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:3236)
    at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
    at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
    at org.apache.spark.util.ByteBufferOutputStream.write(ByteBufferOutputStream.scala:41)
    at java.io.ObjectOutputStream$BlockDataOutputStream.write(ObjectOutputStream.java:1853)
    at java.io.ObjectOutputStream.write(ObjectOutputStream.java:709)
    at org.apache.spark.util.Utils$.writeByteBuffer(Utils.scala:260)
    at org.apache.spark.scheduler.DirectTaskResult$$anonfun$writeExternal$1.apply$mcV$sp(TaskResult.scala:50)
    at org.apache.spark.scheduler.DirectTaskResult$$anonfun$writeExternal$1.apply(TaskResult.scala:48)
    at org.apache.spark.scheduler.DirectTaskResult$$anonfun$writeExternal$1.apply(TaskResult.scala:48)
    at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1326)
    at org.apache.spark.scheduler.DirectTaskResult.writeExternal(TaskResult.scala:48)
    at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1459)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1430)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
    and more....
1 Answer

嚕嚕噠
toPandas is optimized for smaller datasets. As the error suggests, you most likely ran out of memory.

Try limiting the size of your dataset: df_pd = df2.limit(10).select("*").toPandas()

Apply your function, then run .head(10) to rule the memory error out.