Suppose I'm doing something like:

```scala
val df = sqlContext.load("com.databricks.spark.csv", Map("path" -> "cars.csv", "header" -> "true"))
df.printSchema()
```

```
root
 |-- year: string (nullable = true)
 |-- make: string (nullable = true)
 |-- model: string (nullable = true)
 |-- comment: string (nullable = true)
 |-- blank: string (nullable = true)
```

```scala
df.show()
```

```
year make  model comment           blank
2012 Tesla S     No comment
1997 Ford  E350  Go get one now th...
```

But I really want `year` as `Int` (and perhaps convert some other columns as well). The best I could come up with is:

```scala
df.withColumn("year2", 'year.cast("Int")).select('year2 as 'year, 'make, 'model, 'comment, 'blank)
// org.apache.spark.sql.DataFrame = [year: int, make: string, model: string, comment: string, blank: string]
```

which is a bit convoluted. I come from R, where I'm used to writing, for example:

```r
df2 <- df %>% mutate(year = year %>% as.integer, make = make %>% toupper)
```

I'm probably missing something, since there should be a better way to do this.
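For reference, a minimal sketch of a shorter form of the same cast: in current Spark versions, passing an existing column name to `withColumn` replaces that column in place, so the temporary `year2` column and the `select`/rename round-trip are unnecessary. This assumes a `DataFrame` named `df` like the one loaded above; it is an illustration, not code from the question.

```scala
// Sketch: cast the "year" column to Int in place.
// Assumes `df` is the DataFrame loaded above with a string "year" column.
import org.apache.spark.sql.functions.col

val df2 = df.withColumn("year", col("year").cast("int"))
df2.printSchema()
// year should now appear as integer; the other columns are unchanged
```

The same pattern can be chained for several columns, one `withColumn` call per column to convert.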
How to change column types in Spark SQL's DataFrame?
jeck貓
2019-11-05 10:42:25