Native snappy library not available: this version of libhadoop was built without snappy support
When using Spark MLlib, a trained model is saved, and the online service later needs to load that model to make predictions.
When actually loading it, the following exception is thrown: Native snappy library not available: this version of libhadoop was built without snappy support
Looking into it, this happens because Hadoop was built without snappy support, so there are two ways to fix it: one is to switch to a different compression codec, the other is to build and install Hadoop's snappy support:
One approach is to use a different Hadoop codec, such as BZip2, as below (CompressionType here is org.apache.hadoop.io.SequenceFile.CompressionType):

import org.apache.hadoop.io.SequenceFile.CompressionType

sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress", "true")
sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress.type", CompressionType.BLOCK.toString)
sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress.codec", "org.apache.hadoop.io.compress.BZip2Codec")
sc.hadoopConfiguration.set("mapreduce.map.output.compress", "true")
sc.hadoopConfiguration.set("mapreduce.map.output.compress.codec", "org.apache.hadoop.io.compress.BZip2Codec")
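As a fuller sketch of this workaround, the codec settings can be applied right after creating the SparkContext, before any model save or load, so that neither path touches the snappy codec. This is a configuration sketch that needs a running Spark installation; the application name, model type, and HDFS path are illustrative assumptions, not from the original article:

```scala
import org.apache.hadoop.io.SequenceFile.CompressionType
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.classification.LogisticRegressionModel

object CodecWorkaround {
  def main(args: Array[String]): Unit = {
    // Illustrative app name -- substitute your own.
    val sc = new SparkContext(new SparkConf().setAppName("codec-workaround"))

    // Route Hadoop map and final output compression through BZip2
    // instead of the (unavailable) native snappy codec.
    sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress", "true")
    sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress.type", CompressionType.BLOCK.toString)
    sc.hadoopConfiguration.set("mapreduce.output.fileoutputformat.compress.codec", "org.apache.hadoop.io.compress.BZip2Codec")
    sc.hadoopConfiguration.set("mapreduce.map.output.compress", "true")
    sc.hadoopConfiguration.set("mapreduce.map.output.compress.codec", "org.apache.hadoop.io.compress.BZip2Codec")

    // Hypothetical model type and path -- the original article does not
    // name them; any MLlib model with save/load works the same way.
    val model = LogisticRegressionModel.load(sc, "hdfs:///models/my-lr-model")

    sc.stop()
  }
}
```

Note that these settings must be applied in the same SparkContext that performs the save; a model already written with snappy compression still cannot be read without snappy support.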
The second approach is to pass --driver-library-path /usr/hdp/<whatever is your current version>/hadoop/lib/native/ as a parameter to the spark-submit job (on the command line), so the JVM can find the native snappy library.
Summary

To resolve "Native snappy library not available", either switch Hadoop's compression to a codec such as BZip2, or point the driver at the native library directory via spark-submit; alternatively, rebuild Hadoop with snappy support.