PSQLException: ERROR: permission denied: no privilege to create a readable gpfdist(s) external table
When writing data from Spark to Greenplum through the Greenplum-Spark Connector (GSC), the job fails with:

PSQLException: ERROR: permission denied: no privilege to create a readable gpfdist(s) external table

Cause: to move the data, GSC creates a readable gpfdist external table on the Greenplum side (note createReadableExternalTable in the stack trace below), and the connecting user does not have the privilege to create external tables.
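Under the hood, GSC stages each Spark partition behind a gpfdist endpoint served from the executor and asks Greenplum to pull the rows through a readable external table. The DDL it issues is roughly of the following shape; this is a minimal sketch, and the table name, columns, host, and port are illustrative placeholders, not the connector's actual identifiers:

    -- Illustrative sketch of the kind of statement GSC issues during a write.
    -- Greenplum reads rows from a gpfdist endpoint served by the Spark executor.
    -- All names, hosts, and ports below are placeholders.
    CREATE READABLE EXTERNAL TABLE public.spark_stage_example (
        id   integer,
        name text
    )
    LOCATION ('gpfdist://spark-executor-host:12900/spark_stage_example')
    FORMAT 'CSV';

This is why a pure write from Spark still requires the privilege to create a readable external table: from Greenplum's point of view, loading the staged data is a read.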
The full error is as follows:
21/07/01 10:04:24 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 7.0 (TID 8, hddatanode01, executor 2): org.postgresql.util.PSQLException: ERROR: permission denied: no privilege to create a readable gpfdist(s) external table
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2310)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2023)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:217)
    at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:421)
    at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:318)
    at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:310)
    at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:95)
    at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java)
    at io.pivotal.greenplum.spark.SqlExecutor$$anonfun$execute$2$$anonfun$apply$2.apply(SqlExecutor.scala:24)
    at io.pivotal.greenplum.spark.SqlExecutor$$anonfun$execute$2$$anonfun$apply$2.apply(SqlExecutor.scala:22)
    at scala.Function1$$anonfun$andThen$1.apply(Function1.scala:52)
    at resource.AbstractManagedResource$$anonfun$5.apply(AbstractManagedResource.scala:88)
    at scala.util.control.Exception$Catch$$anonfun$either$1.apply(Exception.scala:125)
    at scala.util.control.Exception$Catch$$anonfun$either$1.apply(Exception.scala:125)
    at scala.util.control.Exception$Catch.apply(Exception.scala:103)
    at scala.util.control.Exception$Catch.either(Exception.scala:125)
    at resource.AbstractManagedResource.acquireFor(AbstractManagedResource.scala:88)
    at resource.DeferredExtractableManagedResource.acquireFor(AbstractManagedResource.scala:27)
    at resource.ManagedResourceOperations$$anon$2$$anonfun$acquireFor$1.apply(ManagedResourceOperations.scala:49)
    at resource.ManagedResourceOperations$$anon$2$$anonfun$acquireFor$1.apply(ManagedResourceOperations.scala:49)
    at resource.AbstractManagedResource$$anonfun$5.apply(AbstractManagedResource.scala:88)
    at scala.util.control.Exception$Catch$$anonfun$either$1.apply(Exception.scala:125)
    at scala.util.control.Exception$Catch$$anonfun$either$1.apply(Exception.scala:125)
    at scala.util.control.Exception$Catch.apply(Exception.scala:103)
    at scala.util.control.Exception$Catch.either(Exception.scala:125)
    at resource.AbstractManagedResource.acquireFor(AbstractManagedResource.scala:88)
    at resource.ManagedResourceOperations$$anon$2.acquireFor(ManagedResourceOperations.scala:49)
    at resource.ManagedResourceOperations$class.apply(ManagedResourceOperations.scala:26)
    at resource.ManagedResourceOperations$$anon$2.apply(ManagedResourceOperations.scala:47)
    at resource.DeferredExtractableManagedResource$$anonfun$tried$1.apply(AbstractManagedResource.scala:33)
    at scala.util.Try$.apply(Try.scala:192)
    at resource.DeferredExtractableManagedResource.tried(AbstractManagedResource.scala:33)
    at io.pivotal.greenplum.spark.SqlExecutor.tryFromManaged(SqlExecutor.scala:74)
    at io.pivotal.greenplum.spark.SqlExecutor.execute(SqlExecutor.scala:20)
    at io.pivotal.greenplum.spark.externaltable.GreenplumTableManager.createReadableExternalTable(GreenplumTableManager.scala:124)
    at io.pivotal.greenplum.spark.externaltable.GreenplumTableManager.createReadableExternalTableIfNotExists(GreenplumTableManager.scala:113)
    at io.pivotal.greenplum.spark.externaltable.GreenplumDataMover.moveData(GreenplumDataMover.scala:41)
    at io.pivotal.greenplum.spark.externaltable.PartitionWriter$$anonfun$getClosure$1.apply(PartitionWriter.scala:39)
    at io.pivotal.greenplum.spark.externaltable.PartitionWriter$$anonfun$getClosure$1.apply(PartitionWriter.scala:31)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:853)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:853)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$11.apply(Executor.scala:407)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1408)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:413)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Solution: log in to the gpmaster node, switch to the gpadmin user, and grant the missing privilege. The change takes effect as soon as ALTER ROLE succeeds; Greenplum has no flush command, so no extra step is needed.
Replace user in the statement below with the name of the role Spark uses to connect to Greenplum:

cdp=# ALTER ROLE user CREATEEXTTABLE;
ALTER ROLE
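With no arguments, CREATEEXTTABLE grants the default combination, type='readable' with protocol='gpfdist', which is what the connector needs here. If you prefer to be explicit, or want to verify the grant afterwards, something like the following should work; this is a sketch in which the role name etl_user is a placeholder, and rolcreaterextgpfd / rolcreatewextgpfd are Greenplum-specific columns added to pg_roles:

    -- Grant the readable gpfdist external-table privilege explicitly.
    -- 'etl_user' is a placeholder for the role Spark connects as.
    ALTER ROLE etl_user CREATEEXTTABLE(type='readable', protocol='gpfdist');

    -- If Spark also reads from Greenplum through GSC, the writable
    -- variant may be needed as well:
    ALTER ROLE etl_user CREATEEXTTABLE(type='writable', protocol='gpfdist');

    -- Verify the grants (Greenplum-specific columns in pg_roles):
    SELECT rolname, rolcreaterextgpfd, rolcreatewextgpfd
    FROM pg_roles
    WHERE rolname = 'etl_user';

After the grant, rerun the Spark job; new connections pick up the role's privileges immediately.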