1 Answer

You can pass a list of dictionaries to the createDataFrame function.
l = [{'a': 1, 'b': 2, 'c': 3}, {'b': 4, 'c': 5, 'd': 6, 'e': 7}]
df = spark.createDataFrame(l)
# UserWarning: inferring schema from dict is deprecated,
# please use pyspark.sql.Row instead
df.show()
+----+---+---+----+----+
| a| b| c| d| e|
+----+---+---+----+----+
| 1| 2| 3|null|null|
|null| 4| 5| 6| 7|
+----+---+---+----+----+
Also, supply an explicit schema for the columns, since inferring the schema from dictionaries is deprecated. Note that creating the DataFrame from Row objects requires all of the dictionaries to have the same columns.
Define the schema programmatically by merging the keys from all of the dictionaries involved.
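Since Row objects need a consistent set of fields, one way to satisfy that requirement is to normalize every dictionary to the union of all keys, filling the gaps with None, before building the Rows. A minimal sketch in plain Python (no Spark session needed; all_keys and normalized are illustrative names, not part of the original answer):

```python
# Hypothetical sketch: give every dict the same key set so each record
# has identical columns (missing values become None).
l = [{'a': 1, 'b': 2, 'c': 3}, {'b': 4, 'c': 5, 'd': 6, 'e': 7}]

# Union of keys across all dicts, sorted for a stable column order
all_keys = sorted(set().union(*(d.keys() for d in l)))

# dict.get(k) returns None when k is missing
normalized = [{k: d.get(k) for k in all_keys} for d in l]

print(normalized[0])  # {'a': 1, 'b': 2, 'c': 3, 'd': None, 'e': None}
```

With the dictionaries normalized like this, something along the lines of spark.createDataFrame([Row(**d) for d in normalized]) would give every Row the same fields.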
from pyspark.sql.types import StructType, StructField, IntegerType

# Function to merge keys from several dicts
def merge_keys(*dict_args):
    result = set()
    for dict_arg in dict_args:
        for key in dict_arg.keys():
            result.add(key)
    return sorted(list(result))

# Generate a schema given a column list
def generate_schema(columns):
    result = StructType()
    for column in columns:
        result.add(column, IntegerType(), nullable=True)  # change type and nullability as needed
    return result

df = spark.createDataFrame(l, schema=generate_schema(merge_keys(*l)))
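As a quick sanity check on the key-merging step, here is merge_keys run on the sample list from the answer (the function is reproduced so this snippet is self-contained and does not need a Spark session):

```python
# Same helper as in the answer: union of keys across all dicts,
# sorted so the resulting column order is deterministic.
def merge_keys(*dict_args):
    result = set()
    for dict_arg in dict_args:
        for key in dict_arg.keys():
            result.add(key)
    return sorted(list(result))

l = [{'a': 1, 'b': 2, 'c': 3}, {'b': 4, 'c': 5, 'd': 6, 'e': 7}]
print(merge_keys(*l))  # ['a', 'b', 'c', 'd', 'e']
```

These five names become the columns of the generated StructType, matching the a..e header in the df.show() output above.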