Here is the code I'm using:

df = None
from pyspark.sql.functions import lit

for category in file_list_filtered:
    data_files = os.listdir('HMP_Dataset/' + category)
    for data_file in data_files:
        print(data_file)
        temp_df = spark.read.option('header', 'false').option('delimiter', ' ').csv('HMP_Dataset/' + category + '/' + data_file, schema=schema)
        temp_df = temp_df.withColumn('class', lit(category))
        temp_df = temp_df.withColumn('source', lit(data_file))
        if df is None:
            df = temp_df
        else:
            df = df.union(temp_df)

And I get this error:

NameError                                 Traceback (most recent call last)
<ipython-input-4-4296b4e97942> in <module>
      9     for data_file in data_files:
     10         print(data_file)
---> 11         temp_df = spark.read.option('header', 'false').option('delimiter', ' ').csv('HMP_Dataset/'+category+'/'+data_file, schema = schema)
     12         temp_df = temp_df.withColumn('class', lit(category))
     13         temp_df = temp_df.withColumn('source', lit(data_file))

NameError: name 'spark' is not defined

How can I fix this?
2 Answers

慕工程0101907
Contributed 1887 pieces of experience, received over 5 upvotes
Initialize a SparkSession first, then use spark inside your loop:
df = None
from pyspark.sql.functions import lit
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('app_name').getOrCreate()
for category in file_list_filtered:
...

小怪獸愛吃肉
Contributed 1852 pieces of experience, received over 1 upvote
Try defining the spark variable:
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
sc = SparkContext('local')
spark = SparkSession(sc)