I am trying to read a CSV file into SparkR (running Spark 2.0.0), mainly to try out the new features.
I am using RStudio here.
I get an error while "reading" the source file.
My code:
Sys.setenv(SPARK_HOME = "C:/spark-2.0.0-bin-hadoop2.6")
library(SparkR, lib.loc = c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib")))
sparkR.session(master = "local[*]", appName = "SparkR")
df <- loadDF("F:/file.csv", "csv", header = "true")
I get an error in the loadDF function.
The error:
loadDF("F:/file.csv", "csv", header = "true")
Error in invokeJava(isStatic = TRUE, className, methodName, …) :
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:258)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:359)
at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:263)
at org.apache.spark.sql.hive.HiveSharedState.metadataHive$lzycompute(HiveSharedState.scala:39)
at org.apache.spark.sql.hive.HiveSharedState.metadataHive(HiveSharedState.scala:38)
at org.apache.spark.sql.hive.HiveSharedState.externalCatalog$lzycompute(HiveSharedState.scala:46)
at org.apache.spark.sql.hive.HiveSharedSt
Am I missing some configuration here? Any pointers on how to proceed would be much appreciated.
Best answer: I had the same problem.
Even this simple code ran into a similar issue:
createDataFrame(iris)
Maybe something was wrong with the installation?
UPD. Yes! I found a solution.
For R, just start the session with the following code:
sparkR.session(sparkConfig = list(spark.sql.warehouse.dir="/file:C:/temp"))
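For completeness, here is a minimal sketch that combines that fix with the CSV read from the question. The paths (C:/spark-2.0.0-bin-hadoop2.6, C:/temp, F:/file.csv) are taken from the question and are placeholders, so adjust them to your machine:

# Point SparkR at the local Spark 2.0.0 installation (path taken from the question)
Sys.setenv(SPARK_HOME = "C:/spark-2.0.0-bin-hadoop2.6")
library(SparkR, lib.loc = c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib")))

# Start the session with an explicit warehouse directory (the value from the answer above),
# so the Hive metastore initialization does not fail on Windows
sparkR.session(master = "local[*]", appName = "SparkR",
               sparkConfig = list(spark.sql.warehouse.dir = "/file:C:/temp"))

# The CSV read from the question should now work; read.df and loadDF are equivalent here
df <- read.df("F:/file.csv", "csv", header = "true")
head(df)

If "/file:C:/temp" does not resolve on your setup, a standard file URI such as "file:///C:/temp" is another commonly used form for this setting.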