Fixing the error after creating a new Hue account

Reader submission · 2022-11-17

When a newly created Hue account runs a Hive-on-Spark query, the job fails with the following log:

22/06/26 19:03:45 INFO client.SparkClientImpl: 22/06/26 19:03:45 INFO conf.Configuration: resource-types.xml not found
22/06/26 19:03:45 INFO client.SparkClientImpl: 22/06/26 19:03:45 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
22/06/26 19:03:45 INFO client.SparkClientImpl: 22/06/26 19:03:45 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (32768 MB per container)
22/06/26 19:03:45 INFO client.SparkClientImpl: 22/06/26 19:03:45 INFO yarn.Client: Will allocate AM container, with 2048 MB memory including 1024 MB overhead
22/06/26 19:03:45 INFO client.SparkClientImpl: 22/06/26 19:03:45 INFO yarn.Client: Setting up container launch context for our AM
22/06/26 19:03:45 INFO client.SparkClientImpl: 22/06/26 19:03:45 INFO yarn.Client: Setting up the launch environment for our AM container
22/06/26 19:03:45 INFO client.SparkClientImpl: 22/06/26 19:03:45 INFO yarn.Client: Preparing resources for our AM container
22/06/26 19:03:45 INFO client.SparkClientImpl: Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=wangyx, access=WRITE, inode="/user/wangyx":mapred:mapred:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:256)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1855)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1798)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:61)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3101)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1123)
    ...
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2341)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1237)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:673)
    at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:428)
    at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:868)
    at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:183)
    at org.apache.spark.deploy.yarn.Client.run(Client.scala:1143)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=wangyx, access=WRITE, inode="/user/wangyx":mapred:mapred:drwxr-xr-x
    ... (same NameNode stack as above)
22/06/26 19:03:45 INFO client.SparkClientImpl: 22/06/26 19:03:45 INFO util.ShutdownHookManager: Shutdown hook called
22/06/26 19:03:45 INFO client.SparkClientImpl: 22/06/26 19:03:45 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-ca9cf15c-8a40-40e1-83ee-e6457fa4577e
22/06/26 19:03:46 ERROR client.SparkClientImpl: Error while waiting for Remote Spark Driver to connect back to HiveServer2.
java.util.concurrent.ExecutionException: java.lang.RuntimeException: spark-submit process failed with exit code 1 and error ?
    at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:41)
    at org.apache.hive.spark.client.SparkClientImpl.(SparkClientImpl.java:103)
    at org.apache.hive.spark.client.SparkClientFactory.createClient(SparkClientFactory.java:90)
    at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.createRemoteClient(RemoteHiveSparkClient.java:104)
    at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:131)
    at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:122)
    ...
Caused by: java.lang.RuntimeException: spark-submit process failed with exit code 1 and error ?
    at org.apache.hive.spark.client.SparkClientImpl$2.run(SparkClientImpl.java:495)
    at java.lang.Thread.run(Thread.java:748)
22/06/26 19:03:46 ERROR spark.SparkTask: Failed to execute Spark task "Stage-1"
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create Spark client for Spark session 281661e4-07fd-4d6a-84c2-f3bd2a37c0b5_0: java.lang.RuntimeException: spark-submit process failed with exit code 1 and error ?
    ... (same Hive driver stack as above)
FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client for Spark session 281661e4-07fd-4d6a-84c2-f3bd2a37c0b5_0: java.lang.RuntimeException: spark-submit process failed with exit code 1 and error ?
22/06/26 19:03:46 ERROR ql.Driver: FAILED: Execution Error, return code 30041 from org.apache.hadoop.hive.ql.exec.spark.SparkTask.
22/06/26 19:03:46 INFO ql.Driver: Completed executing command(queryId=wangyx_20220626190342_4084f851-bd19-441f-9198-7de25138fef4); Time taken: 3.59 seconds
22/06/26 19:03:46 INFO conf.HiveConf: Using the default value passed in for log id: 281661e4-07fd-4d6a-84c2-f3bd2a37c0b5

The decisive line in that log is:

22/06/26 19:03:45 INFO client.SparkClientImpl: Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=wangyx, access=WRITE, inode="/user/wangyx":mapred:mapred:drwxr-xr-x

The job fails because the new user has no write permission: the HDFS home directory /user/wangyx is owned by mapred:mapred with mode drwxr-xr-x, so user wangyx cannot write to it. Fix the ownership as follows (run as a user with HDFS superuser rights):

hdfs dfs -chown -R wangyx:wangyx /user/wangyx
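If the home directory does not exist yet, it has to be created before the chown. A minimal sketch of provisioning a home directory for a new Hue account, assuming you can act as the HDFS superuser; the hdfs commands are commented out because they need a live cluster, so only the path construction actually executes:

```shell
#!/bin/sh
# Sketch: provision an HDFS home directory for a new Hue account.
# Uncomment the hdfs lines and run them as the HDFS superuser
# (for example via: sudo -u hdfs <command>).
USER_NAME="wangyx"
HOME_DIR="/user/${USER_NAME}"

# hdfs dfs -mkdir -p "${HOME_DIR}"                              # create the home directory
# hdfs dfs -chown -R "${USER_NAME}:${USER_NAME}" "${HOME_DIR}"  # hand ownership to the new user
# hdfs dfs -ls /user                                            # verify: owner column should show wangyx

echo "would provision ${HOME_DIR} for ${USER_NAME}"
```

After the chown, rerunning the failed query should no longer hit the AccessControlException, because Spark can now stage its resources under /user/wangyx.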

Then point the job at the correct YARN queue and rerun the query:

set mapreduce.job.queuename=root.tools;  -- specify the queue
select count(1) from ods_sony_watch_llk_ps;
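The same two statements can also be run outside the Hue editor, e.g. through Beeline. A sketch; the HiveServer2 host and port are placeholders for your cluster, so the invocation is commented out and only the queue name is printed:

```shell
#!/bin/sh
# Hypothetical Beeline invocation -- hs2-host:10000 and the login name are placeholders.
# beeline -u "jdbc:hive2://hs2-host:10000" -n wangyx -e "
#   set mapreduce.job.queuename=root.tools;
#   select count(1) from ods_sony_watch_llk_ps;
# "
QUEUE="root.tools"
printf 'mapreduce.job.queuename=%s\n' "${QUEUE}"
```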
