1. Hive
Export data with a script, and set the run queue
bin/beeline -u 'url' --outputformat=tsv -e "set mapreduce.job.queuename=queue_1" -e "select * from search_log where date <= 20150525 and date >= 20150523" > test.txt
2. Spark
Submit a Spark job
$SPARK_HOME/bin/spark-submit --class com.test.SimilarQuery --master yarn-cluster --num-executors 40 --driver-memory 4g --executor-memory 2g --executor-cores 1 similar-query-0.0.1-SNAPSHOT-jar-with-dependencies.jar 20150819 /user/similar-query
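The actual com.test.SimilarQuery class is not shown here; the sketch below (Spark 1.x-era Java API) only illustrates how such an entry class might receive the two trailing arguments that spark-submit passes through after the jar: the date 20150819 and the output directory /user/similar-query. The class name, input path, and body are assumptions for illustration.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class SimilarQuerySketch {
    public static void main(String[] args) {
        // Everything after the jar on the spark-submit line arrives here:
        // args[0] = the date (e.g. 20150819), args[1] = the output directory.
        String date = args[0];
        String outputDir = args[1];

        // Master, executor count, and memory come from the spark-submit flags,
        // so they are not hard-coded in the application itself.
        SparkConf conf = new SparkConf().setAppName("SimilarQuery-" + date);
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Hypothetical input path built from the date argument.
        JavaRDD<String> logs = sc.textFile("/data/search_log/" + date);
        // ... similarity computation would go here ...
        logs.saveAsTextFile(outputDir + "/" + date);

        sc.stop();
    }
}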
3. Hadoop
Run a MapReduce job, and set the run queue
hadoop jar game-query-down-0.0.1-SNAPSHOT.jar QueryDownJob -Dmapreduce.job.queuename=sns_default arg1 arg2
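A -D option such as -Dmapreduce.job.queuename=sns_default is only picked up automatically when the driver runs through ToolRunner, which applies generic options via GenericOptionsParser before calling run(). The real QueryDownJob is not shown here; the sketch below assumes it follows that Tool pattern, with placeholder job details (mapper/reducer omitted).

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class QueryDownJobSketch extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // getConf() already contains the -D overrides (e.g. mapreduce.job.queuename),
        // because ToolRunner strips generic options before calling run().
        Configuration conf = getConf();
        Job job = Job.getInstance(conf, "query-down");
        job.setJarByClass(QueryDownJobSketch.class);
        // The remaining command-line arguments (arg1, arg2) arrive here,
        // treated in this sketch as input and output paths.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Mapper/Reducer classes omitted; they depend on the actual job.
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new QueryDownJobSketch(), args));
    }
}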