Developing Applications for Spark Standalone Mode
1. Scala version

The program is as follows:
```scala
package scala

import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

object Test {
  def main(args: Array[String]) {
    val logFile = "file:///spark-bin-0.9.1/README.md"
    val conf = new SparkConf().setAppName("Spark Application in Scala")
    val sc = new SparkContext(conf)
    // Read the file as an RDD with 2 partitions and cache it,
    // since it is scanned twice below.
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}
```
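Two remarks on this program. First, putting your own code in a package named `scala` shadows the Scala standard library's root package and is usually best avoided; it is kept here only because the spark-submit command in section 4 below addresses the class as `scala.Test`. Second, the program leaves the master URL unset and relies on spark-submit to supply it. For quick experiments you can instead hardcode a master with `setMaster` — a minimal sketch, where the `LocalTest` name and the `local[4]` master are illustrative assumptions, not part of the original program:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Illustrative variant that runs without spark-submit supplying a master.
object LocalTest {
  def main(args: Array[String]) {
    val conf = new SparkConf()
      .setAppName("Spark Application in Scala")
      .setMaster("local[4]") // run locally with 4 worker threads (assumed value)
    val sc = new SparkContext(conf)
    val logData = sc.textFile("file:///spark-bin-0.9.1/README.md", 2).cache()
    println("Lines with a: " + logData.filter(_.contains("a")).count())
    sc.stop() // shut the context down cleanly when finished
  }
}
```

Hardcoding the master this way makes the jar less portable, so for anything beyond local testing the command-line `--master` flag is the more flexible choice.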
To compile this program you need to create an .sbt build file, which plays a role similar to Maven's pom.xml. Here we create a file named scala.sbt with the following contents:
```sbt
name := "Spark application in Scala"

version := "1.0"

scalaVersion := "2.10.4"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.0.0"

resolvers += "Akka Repository" at "http://repo.akka.io/releases/"
```
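For sbt to pick everything up, the build definition and the source need to sit in the expected places. The layout below is an assumption based on sbt's default conventions (the original post does not show it); sbt compiles sources found under src/main/scala or in the project root:

```
simple-project/
├── scala.sbt                   # the build definition above
└── src/main/scala/Test.scala   # the program from section 1
```

Note also that sbt derives the jar name from the `name` setting, so the artifact produced under target/scala-2.10/ may not match the simple-project_2.10-1.0.jar path used in the test run below; adjust the path to whatever sbt reports when packaging.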
Compile it:
```
# sbt/sbt package
[info] Done packaging.
[success] Total time: 270 s, completed Jun 11, 2014 1:05:54 AM
```
2. Java version
```java
/**
 * User: 过往记忆 (iteblog)
 * Date: 14-6-10
 * Time: 23:37
 * Blog: https://www.iteblog.com
 * Original post: https://www.iteblog.com/archives/1041
 */
/* SimpleApp.java */
import org.apache.spark.api.java.*;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.Function;

public class SimpleApp {
    public static void main(String[] args) {
        String logFile = "file:///spark-bin-0.9.1/README.md";
        SparkConf conf = new SparkConf().setAppName("Spark Application in Java");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> logData = sc.textFile(logFile).cache();

        // Count lines containing the letter "a"
        long numAs = logData.filter(new Function<String, Boolean>() {
            public Boolean call(String s) { return s.contains("a"); }
        }).count();

        // Count lines containing the letter "b"
        long numBs = logData.filter(new Function<String, Boolean>() {
            public Boolean call(String s) { return s.contains("b"); }
        }).count();

        System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);
    }
}
```
This program likewise counts the lines in README.md that contain a and b, respectively. The project's pom.xml is as follows:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>spark</groupId>
    <artifactId>spark</artifactId>
    <version>1.0</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>1.0.0</version>
        </dependency>
    </dependencies>
</project>
```
Build the project with Maven:
```
# mvn install
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 5.815s
[INFO] Finished at: Wed Jun 11 00:01:57 CST 2014
[INFO] Final Memory: 13M/32M
[INFO] ------------------------------------------------------------------------
```
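A note on the artifact name: Maven names the jar from the artifactId and version in the pom, so this build should produce target/spark-1.0.jar, while the test run below references target/spark-1.0-SNAPSHOT.jar (which would correspond to a version of 1.0-SNAPSHOT). Adjust the path to whatever your build actually produces.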
3. Python version
```python
#
# User: 过往记忆 (iteblog)
# Date: 14-6-10
# Time: 23:37
# Blog: https://www.iteblog.com
# Original post: https://www.iteblog.com/archives/1041
#
from pyspark import SparkContext

logFile = "file:///spark-bin-0.9.1/README.md"
# Note: the master is hardcoded to "local" here, so it does not need
# to be supplied on the command line.
sc = SparkContext("local", "Spark Application in Python")
logData = sc.textFile(logFile).cache()

numAs = logData.filter(lambda s: 'a' in s).count()
numBs = logData.filter(lambda s: 'b' in s).count()

print "Lines with a: %i, lines with b: %i" % (numAs, numBs)
```
4. Test runs
本程序的程序環(huán)境是Spark 1.0.0,單機模式,測試如下:
1) Testing the Scala program
```
# bin/spark-submit --class "scala.Test" \
                   --master local[4]    \
                   target/scala-2.10/simple-project_2.10-1.0.jar

14/06/11 01:07:53 INFO spark.SparkContext: Job finished: count at Test.scala:18, took 0.019705 s
Lines with a: 62, Lines with b: 35
```
2) Testing the Java program
```
# bin/spark-submit --class "SimpleApp" \
                   --master local[4]   \
                   target/spark-1.0-SNAPSHOT.jar

14/06/11 00:49:14 INFO spark.SparkContext: Job finished: count at SimpleApp.java:22, took 0.019374 s
Lines with a: 62, lines with b: 35
```
3) Testing the Python program
```
# bin/spark-submit --master local[4] \
                   simple.py

Lines with a: 62, lines with b: 35
```
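All three test runs above use a `local[4]` master, i.e. local threads rather than an actual cluster. To exercise standalone mode proper, the same jar can be submitted to a running Spark standalone master. A sketch, where spark://master-host:7077 is a placeholder for your own cluster's master URL:

```
# bin/spark-submit --class "scala.Test"              \
                   --master spark://master-host:7077 \
                   target/scala-2.10/simple-project_2.10-1.0.jar
```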