Spark on YARN: when does Spark ship the application jar to the node's /usercache/root/filecache directory?

Calling Java classes from Oracle, and loading the jars they depend on into Oracle to support them.
  Per the client's requirements, the original approach of sending and receiving mail purely with Oracle was abandoned (the previous article described that implementation); instead, Oracle must call Java and handle the mail processing through JavaMail. That is where the problem appeared: I had written Java and I had written PL/SQL, but I had never heard of calling Java from Oracle, and none of my colleagues had done it either. After digging through the documentation it turned out the technique really does exist, so the following is a record of it.
  The first task is to load a compression Java class I wrote earlier, together with the jar it needs, into Oracle and make it callable. How the files are actually compressed will be dealt with later; first, how to load a Java class and a jar into Oracle.
  First, the environment the compression feature needs:
     1. The operating system must have a JDK/Oracle client that provides the loadjava command.
     2. Load the jlha.jar package into the Oracle database.
     Procedure: at a DOS prompt, run the command: loadjava
     loadjava is the Oracle utility that loads jlha.jar into the database.
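     A minimal sketch of the invocation (the scott/tiger@orcl connection string is a placeholder, and the exact flags can vary by Oracle client version):
     loadjava -user scott/tiger@orcl -resolve -verbose jlha.jar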
  Write the class that does the compression: DirectoryZip.
     Insert one line at the top of its source file: create or replace and compile java source named directoryzip as
     Then execute the whole source in a database command window, and the class is compiled and imported into the database.
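     For illustration only, the statement has the following shape; the trivial DirectoryZip body below is a hypothetical placeholder, not the real compression class:
     create or replace and compile java source named directoryzip as
     package jp.co.uss.cares.common;
     public class DirectoryZip {
         // placeholder method so the example compiles; the real class implements the zip logic
         public static String ping() {
             return "ok";
         }
     }
     /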
  Now that the class has been imported into Oracle, the next step is to write a function so that Oracle code can call into it:
create or replace function zipblob (returnBlob BLOB, inBlob BLOB, filename VARCHAR2) return BLOB
as language java name
'jp.co.uss.cares.common.DirectoryZip.zip(oracle.sql.BLOB, oracle.sql.BLOB, java.lang.String) return oracle.sql.BLOB';
A quick test fragment that exercises the function:
--original data
--compressed data
typrow uss_
select d0030
  into pBlob
  from dewey.cysct0291
 where d0020 = '300';
rBlob := empty_blob();
delete from dewey.cysct0291 where d0010 = 'tst';
insert into dewey.cysct0291 values ('tst', '100', rBlob, '', '', '', '');
select d0030 into rBlob from dewey.cysct0291 where d0010 = 'tst';
--rBlob := zipblob(rBlob, pBlob, '.pdf');
rBlob := zipListToBlob(rBlob, '1,2,3,4,54'||chr(13)||chr(10)||'2,2,3,4,54', '.csv');
Spark On YARN Memory Allocation
This article looks at how memory is allocated when Spark runs on YARN. I have not studied the Spark source code in depth, so I can only work backwards from the logs to the relevant source to understand why things turn out the way they do.
Depending on where the Spark application's driver runs, Spark on YARN has two modes: yarn-client mode and yarn-cluster mode.
When a Spark job runs on YARN, each Spark executor runs as a YARN container, and Spark can run multiple tasks inside the same container.
The following diagram (sourced from the web) shows how a job executes in yarn-cluster mode:
For the full set of Spark On YARN configuration parameters, see the Spark configuration documentation. This article is only concerned with memory allocation, so only the following memory-related parameters matter:
spark.driver.memory: default 512m
spark.executor.memory: default 512m
spark.yarn.am.memory: default 512m
spark.yarn.executor.memoryOverhead: executorMemory * 0.07, with a minimum of 384
spark.yarn.driver.memoryOverhead: driverMemory * 0.07, with a minimum of 384
spark.yarn.am.memoryOverhead: AM memory * 0.07, with a minimum of 384
--executor-memory/spark.executor.memory controls the executor heap size, but the JVM itself also needs some memory beyond the heap, for example for interned Strings and direct byte buffers.
The spark.yarn.XXX.memoryOverhead properties determine how much additional memory is requested from YARN for each executor, driver, or AM on top of its heap; the default is
max(384, 0.07 * spark.executor.memory). Configuring a very large executor memory often causes long GC pauses, and 64G is a commonly recommended upper bound for executor size. The HDFS client also has trouble with large numbers of concurrent threads; a rough estimate is that at most five concurrent tasks per executor can saturate the write bandwidth. Additionally, because the job is submitted to YARN, a few YARN parameters are also relevant (see the YARN memory and CPU configuration docs):
yarn.app.mapreduce.am.resource.mb: the maximum memory the AM can request, default 1536MB
yarn.nodemanager.resource.memory-mb: the maximum memory a NodeManager can offer, default 8192MB
yarn.scheduler.minimum-allocation-mb: the minimum resource the scheduler will allocate to a container, default 1024MB
yarn.scheduler.maximum-allocation-mb: the maximum resource the scheduler will allocate to a container, default 8192MB
The Spark test cluster is:
master: 64G RAM, 16 CPU cores
worker: 128G RAM, 32 CPU cores
worker: 128G RAM, 32 CPU cores
worker: 128G RAM, 32 CPU cores
worker: 128G RAM, 32 CPU cores
Note: the YARN cluster is deployed on top of the Spark cluster, with a NodeManager on each worker node, and the YARN cluster is configured as follows:
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>106496</value> <!-- 104G -->
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>106496</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>2048</value>
</property>
Set Spark's logging to DEBUG, and set log4j.logger.org.apache.hadoop to WARN to cut down on unnecessary output, by editing /etc/spark/conf/log4j.properties:
# Set everything to be logged to the console
log4j.rootCategory=DEBUG, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# Settings to quiet third party logs that are too verbose
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.apache.hadoop=WARN
log4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
Next, run a test program, using the bundled SparkPi example. The tests below use client mode; for cluster mode, follow the same procedure. Run the following command:
spark-submit --class org.apache.spark.examples.SparkPi \
--master yarn-client \
--num-executors 4 \
--driver-memory 2g \
--executor-memory 3g \
--executor-cores 4 \
/usr/lib/spark/lib/spark-examples-1.3.0-cdh5.4.0-hadoop2.6.0-cdh5.4.0.jar
Observe the output log (irrelevant lines omitted):
15/06/08 13:57:01 INFO SparkContext: Running Spark version 1.3.0
15/06/08 13:57:02 INFO SecurityManager: Changing view acls to: root
15/06/08 13:57:02 INFO SecurityManager: Changing modify acls to: root
15/06/08 13:57:03 INFO MemoryStore: MemoryStore started with capacity 1060.3 MB
15/06/08 13:57:04 DEBUG YarnClientSchedulerBackend: ClientArguments called with: --arg bj03-bi-pro-hdpnamenn:51568 --num-executors 4 --num-executors 4 --executor-memory 3g --executor-memory 3g --executor-cores 4 --executor-cores 4 --name Spark Pi
15/06/08 13:57:04 DEBUG YarnClientSchedulerBackend: [actor] handled message (24.52531 ms) ReviveOffers from Actor[akka://sparkDriver/user/CoarseGrainedScheduler#]
15/06/08 13:57:05 INFO Client: Requesting a new application from cluster with 4 NodeManagers
15/06/08 13:57:05 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (106496 MB per container)
15/06/08 13:57:05 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/06/08 13:57:05 INFO Client: Setting up container launch context for our AM
15/06/08 13:57:07 DEBUG Client: ===============================================================================
15/06/08 13:57:07 DEBUG Client: Yarn AM launch context:
15/06/08 13:57:07 DEBUG Client:
user class: N/A
15/06/08 13:57:07 DEBUG Client:
15/06/08 13:57:07 DEBUG Client:
CLASSPATH -> <CPS>/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/*<CPS>$HADOOP_COMMON_HOME/lib/*<CPS>$HADOOP_HDFS_HOME/*<CPS>$HADOOP_HDFS_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/*<CPS>$HADOOP_MAPRED_HOME/lib/*<CPS>$HADOOP_YARN_HOME/*<CPS>$HADOOP_YARN_HOME/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*<CPS>:/usr/lib/spark/lib/spark-assembly.jar::/usr/lib/hadoop/lib/*:/usr/lib/hadoop/*:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/*:/usr/lib/hive/lib/*:/usr/lib/flume-ng/lib/*:/usr/lib/paquet/lib/*:/usr/lib/avro/lib/*
15/06/08 13:57:07 DEBUG Client:
SPARK_DIST_CLASSPATH -> :/usr/lib/spark/lib/spark-assembly.jar::/usr/lib/hadoop/lib/*:/usr/lib/hadoop/*:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop-mapreduce/lib/*:/usr/lib/hadoop-mapreduce/*:/usr/lib/hadoop-yarn/lib/*:/usr/lib/hadoop-yarn/*:/usr/lib/hive/lib/*:/usr/lib/flume-ng/lib/*:/usr/lib/paquet/lib/*:/usr/lib/avro/lib/*
15/06/08 13:57:07 DEBUG Client:
SPARK_YARN_CACHE_FILES_FILE_SIZES ->
15/06/08 13:57:07 DEBUG Client:
SPARK_YARN_STAGING_DIR -> .sparkStaging/application_6_0001
15/06/08 13:57:07 DEBUG Client:
SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE
15/06/08 13:57:07 DEBUG Client:
SPARK_USER -> root
15/06/08 13:57:07 DEBUG Client:
SPARK_YARN_MODE -> true
15/06/08 13:57:07 DEBUG Client:
SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 9
15/06/08 13:57:07 DEBUG Client:
SPARK_YARN_CACHE_FILES -> hdfs://mycluster:8020/user/root/.sparkStaging/application_6_0001/spark-assembly-1.3.0-cdh5.4.0-hadoop2.6.0-cdh5.4.0.jar#__spark__.jar
15/06/08 13:57:07 DEBUG Client:
resources:
15/06/08 13:57:07 DEBUG Client:
__spark__.jar -> resource { scheme: "hdfs" host: "mycluster" port: 8020 file: "/user/root/.sparkStaging/application_6_0001/spark-assembly-1.3.0-cdh5.4.0-hadoop2.6.0-cdh5.4.0.jar" } size:
timestamp: 9 type: FILE visibility: PRIVATE
15/06/08 13:57:07 DEBUG Client:
15/06/08 13:57:07 DEBUG Client:
/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp '-Dspark.eventLog.enabled=true' '-Dspark.executor.instances=4' '-Dspark.executor.memory=3g' '-Dspark.executor.cores=4' '-Dspark.driver.port=51568' '-Dspark.serializer=org.apache.spark.serializer.KryoSerializer' '-Dspark.driver.appUIAddress=http://bj03-bi-pro-hdpnamenn:4040' '-Dspark.executor.id=<driver>' '-Dspark.kryo.classesToRegister=scala.collection.mutable.BitSet,scala.Tuple2,scala.Tuple1,org.apache.spark.mllib.recommendation.Rating' '-Dspark.driver.maxResultSize=8g' '-Dspark.jars=file:/usr/lib/spark/lib/spark-examples-1.3.0-cdh5.4.0-hadoop2.6.0-cdh5.4.0.jar' '-Dspark.driver.memory=2g' '-Dspark.eventLog.dir=hdfs://mycluster:8020/user/spark/applicationHistory' '-Dspark.app.name=Spark Pi' '-Dspark.fileserver.uri=http://X.X.X.X:49172' '-Dspark.tachyonStore.folderName=spark-81aef2-867b-65ee7c922357' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.deploy.yarn.ExecutorLauncher --arg 'bj03-bi-pro-hdpnamenn:51568' --executor-memory 3072m --executor-cores 4 --num-executors 4 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
15/06/08 13:57:07 DEBUG Client: ===============================================================================
From the log line Will allocate AM container, with 896 MB memory including 384 MB overhead we can see that the AM occupies 896 MB; after subtracting the 384 MB of overhead, only 512 MB remain, which is the default value of
spark.yarn.am.memory. We can also see that the YARN cluster has 4 NodeManagers and that each container can use at most 106496 MB of memory.
The Yarn AM launch context starts a Java process whose JVM heap is set to 512m, see /bin/java -server -Xmx512m.
Why does it fall back to the default here? Look at the code that prints the log line above, in org.apache.spark.deploy.yarn.Client:
private def verifyClusterResources(newAppResponse: GetNewApplicationResponse): Unit = {
  val maxMem = newAppResponse.getMaximumResourceCapability().getMemory()
  logInfo("Verifying our application has not requested more than the maximum " +
    s"memory capability of the cluster ($maxMem MB per container)")
  val executorMem = args.executorMemory + executorMemoryOverhead
  if (executorMem > maxMem) {
    throw new IllegalArgumentException(s"Required executor memory (${args.executorMemory}" +
      s"+$executorMemoryOverhead MB) is above the max threshold ($maxMem MB) of this cluster!")
  }
  val amMem = args.amMemory + amMemoryOverhead
  if (amMem > maxMem) {
    throw new IllegalArgumentException(s"Required AM memory (${args.amMemory}" +
      s"+$amMemoryOverhead MB) is above the max threshold ($maxMem MB) of this cluster!")
  }
  logInfo("Will allocate AM container, with %d MB memory including %d MB overhead".format(
    amMem,
    amMemoryOverhead))
}
args.amMemory comes from the ClientArguments class, which validates the input arguments:
private def validateArgs(): Unit = {
  if (numExecutors <= 0) {
    throw new IllegalArgumentException(
      "You must specify at least 1 executor!\n" + getUsageMessage())
  }
  if (executorCores < sparkConf.getInt("spark.task.cpus", 1)) {
    throw new SparkException("Executor cores must not be less than " +
      "spark.task.cpus.")
  }
  if (isClusterMode) {
    for (key <- Seq(amMemKey, amMemOverheadKey, amCoresKey)) {
      if (sparkConf.contains(key)) {
        println(s"$key is set but does not apply in cluster mode.")
      }
    }
    amMemory = driverMemory
    amCores = driverCores
  } else {
    for (key <- Seq(driverMemOverheadKey, driverCoresKey)) {
      if (sparkConf.contains(key)) {
        println(s"$key is set but does not apply in client mode.")
      }
    }
    sparkConf.getOption(amMemKey)
      .map(Utils.memoryStringToMb)
      .foreach { mem => amMemory = mem }
    sparkConf.getOption(amCoresKey)
      .map(_.toInt)
      .foreach { cores => amCores = cores }
  }
}
From the code above: when isClusterMode is true, args.amMemory takes the value of driverMemory; otherwise it is read from spark.yarn.am.memory, and if that property is not set, the default of 512m is used. isClusterMode is true when userClass is not null
(def isClusterMode: Boolean = userClass != null), i.e. the arguments must include --class, and the log below shows that the arguments ClientArguments was called with do not include it.
15/06/08 13:57:04 DEBUG YarnClientSchedulerBackend: ClientArguments called with: --arg bj03-bi-pro-hdpnamenn:51568 --num-executors 4 --num-executors 4 --executor-memory 3g --executor-memory 3g --executor-cores 4 --executor-cores 4 --name Spark Pi
So, to control how much memory the AM requests, either use cluster mode, or, in client mode, set the spark.yarn.am.memory property explicitly with --conf, for example:
spark-submit --class org.apache.spark.examples.SparkPi \
--master yarn-client \
--num-executors 4 \
--driver-memory 2g \
--executor-memory 3g \
--executor-cores 4 \
--conf spark.yarn.am.memory=1024m \
/usr/lib/spark/lib/spark-examples-1.3.0-cdh5.4.0-hadoop2.6.0-cdh5.4.0.jar
Open the YARN web UI, and you can see that:
a. the Spark Pi application started 5 containers, using 18G of memory and 5 CPU cores;
b. YARN started one container for the AM, occupying 2048M of memory;
c. YARN started 4 containers to run tasks, each occupying 4096M of memory.
Why 2G + 4G * 4 = 18G? The first container requested only 2G because our program asked for just 512m for the AM, and the
yarn.scheduler.minimum-allocation-mb parameter forces a minimum allocation of 2G. As for the remaining containers, we set executor-memory to 3G, so why does each container occupy 4096M?
To find the pattern, I ran a few more tests, collecting the container memory requested per executor for executor-memory values of 3G, 4G, 5G and 6G:
executor-memory=3g: 2G + 4G * 4 = 18G
executor-memory=4g: 2G + 6G * 4 = 26G
executor-memory=5g: 2G + 6G * 4 = 26G
executor-memory=6g: 2G + 8G * 4 = 34G
To explain this, I went back to the source code; following the classes org.apache.spark.deploy.yarn.ApplicationMaster -> YarnRMClient -> YarnAllocator, I found this code in YarnAllocator:
// Executor memory in MB.
protected val executorMemory = args.executorMemory
// Additional memory overhead.
protected val memoryOverhead: Int = sparkConf.getInt("spark.yarn.executor.memoryOverhead",
math.max((MEMORY_OVERHEAD_FACTOR * executorMemory).toInt, MEMORY_OVERHEAD_MIN))
// Number of cores per executor.
protected val executorCores = args.executorCores
// Resource capability requested for each executors
private val resource = Resource.newInstance(executorMemory + memoryOverhead, executorCores)
Since I have not dug into the YARN source code, my guess is that the container size is computed from executorMemory + memoryOverhead, and the rough rule is that each container's size must be an integer multiple of
yarn.scheduler.minimum-allocation-mb. When executor-memory=3g,
executorMemory + memoryOverhead is 3G + 384M = 3456M, so the container that has to be requested is
yarn.scheduler.minimum-allocation-mb * 2 = 4096m = 4G; the other cases follow the same rule.
Yarn always rounds up memory requirement to multiples of yarn.scheduler.minimum-allocation-mb , which by default is 1024 or 1GB.
Spark adds an overhead to SPARK_EXECUTOR_MEMORY/SPARK_DRIVER_MEMORY before asking Yarn for the amount.
Also pay attention to how memoryOverhead is computed: when executorMemory is large, memoryOverhead grows with it and is no longer 384m, so the container request grows correspondingly. For example, when executorMemory is set to 90G, memoryOverhead is
math.max(0.07 * 90G, 384m) = 6.3G, and the corresponding container request is 98G.
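To make the rounding rule concrete, here is a small standalone Scala sketch; it is not Spark or YARN code, just the arithmetic described above, using this cluster's yarn.scheduler.minimum-allocation-mb of 2048 MB as the default rounding unit:
object ContainerSizeSketch {
  val MemoryOverheadFactor = 0.07
  val MemoryOverheadMin = 384 // MB

  // Overhead requested on top of the executor heap: max(384, 0.07 * heap).
  def overhead(executorMemoryMb: Int): Int =
    math.max((MemoryOverheadFactor * executorMemoryMb).toInt, MemoryOverheadMin)

  // YARN rounds each request up to a multiple of yarn.scheduler.minimum-allocation-mb.
  def containerMb(executorMemoryMb: Int, minAllocationMb: Int = 2048): Int = {
    val requested = executorMemoryMb + overhead(executorMemoryMb)
    ((requested + minAllocationMb - 1) / minAllocationMb) * minAllocationMb
  }

  def main(args: Array[String]): Unit = {
    // 3g -> 4096 MB, 6g -> 8192 MB, 90g -> 100352 MB (~98G), matching the observations above.
    Seq(3, 6, 90).foreach { g =>
      println(s"executor-memory=${g}g -> container of ${containerMb(g * 1024)} MB")
    }
  }
}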
Coming back to why the AM's container was allocated 2G: 512 + 384 = 896, which is less than 2G, so 2G is allocated. You can set spark.yarn.am.memory and observe the result again.
Open the Spark web UI, and you can see the memory usage of the driver and the executors:
The screenshot shows each executor occupying 1566.7 MB of memory. How is that calculated? Per the referenced article, totalExecutorMemory is computed as follows:
//yarn/common/src/main/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.scala
val MEMORY_OVERHEAD_FACTOR = 0.07
val MEMORY_OVERHEAD_MIN = 384
//yarn/common/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala
protected val memoryOverhead: Int = sparkConf.getInt("spark.yarn.executor.memoryOverhead",
math.max((MEMORY_OVERHEAD_FACTOR * executorMemory).toInt, MEMORY_OVERHEAD_MIN))
val totalExecutorMemory = executorMemory + memoryOverhead
numPendingAllocate.addAndGet(missing)
logInfo(s"Will allocate $missing executor containers, each with $totalExecutorMemory MB " +
s"memory including $memoryOverhead MB overhead")
Here we set executor-memory to 3G, so memoryOverhead is math.max(0.07 * 3072, 384) = 384. The maximum available storage memory is computed by the following code:
//core/src/main/scala/org/apache/spark/storage/BlockManager.scala
/** Return the total amount of storage memory available. */
private def getMaxMemory(conf: SparkConf): Long = {
val memoryFraction = conf.getDouble("spark.storage.memoryFraction", 0.6)
val safetyFraction = conf.getDouble("spark.storage.safetyFraction", 0.9)
(Runtime.getRuntime.maxMemory * memoryFraction * safetyFraction).toLong
}
That is, with executor-memory set to 3G, the executor's storage memory is roughly 3072m * 0.6 * 0.9 = 1658.88m. Note that the calculation should actually use the value of
Runtime.getRuntime.maxMemory, which is somewhat less than 3072m, hence the 1566.7 MB shown in the UI.
The screenshot shows the driver occupying 1060.3 MB; driver-memory is 2G here, so the driver's storage memory works out to 2048m * 0.6 * 0.9 = 1105.92m. Again, the calculation should really use the value of
Runtime.getRuntime.maxMemory, which is less than 2048m.
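The same estimate as a tiny standalone Scala sketch (plain arithmetic, not Spark code; the 0.6 and 0.9 factors are the defaults quoted above, and the heap sizes are the -Xmx values, which slightly overstate Runtime.getRuntime.maxMemory):
object StorageMemorySketch {
  // Defaults of spark.storage.memoryFraction and spark.storage.safetyFraction.
  val MemoryFraction = 0.6
  val SafetyFraction = 0.9

  // Upper-bound estimate of storage memory for a given heap size in MB.
  def storageMemoryMb(heapMb: Double): Double = heapMb * MemoryFraction * SafetyFraction

  def main(args: Array[String]): Unit = {
    println(f"executor with 3072m heap: ~${storageMemoryMb(3072)}%.2f MB") // 1658.88; the UI shows 1566.7
    println(f"driver with 2048m heap:   ~${storageMemoryMb(2048)}%.2f MB") // 1105.92; the UI shows 1060.3
  }
}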
Now look at how the CoarseGrainedExecutorBackend process was started on a worker node:
46841 Worker
21894 CoarseGrainedExecutorBackend
21816 ExecutorLauncher
24300 NodeManager
38012 JournalNode
36929 QuorumPeerMain
$ ps -ef|grep 21894
21894 21892 99 17:28 ?
00:04:49 /usr/java/jdk1.7.0_71/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms3072m -Xmx3072m
-Djava.io.tmpdir=/data/yarn/local/usercache/root/appcache/application_6_0069/container_6_003/tmp -Dspark.driver.port=60235 -Dspark.yarn.app.container.log.dir=/data/yarn/logs/application_6_0069/container_6_003 org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://sparkDriver@bj03-bi-pro-hdpnamenn:60235/user/CoarseGrainedScheduler --executor-id 2 --hostname X.X.X.X --cores 4 --app-id application_6_0069 --user-class-path file:/data/yarn/local/usercache/root/appcache/application_6_0069/container_6_003/__app__.jar
You can see that each CoarseGrainedExecutorBackend process is given 3072m of heap (-Xms3072m -Xmx3072m). If you want to watch each executor's JVM at runtime, you can enable JMX by adding the following line to /etc/spark/conf/spark-defaults.conf:
spark.executor.extraJavaOptions -Dcom.sun.management.jmxremote.port=1099 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false
Then use jconsole to monitor the JVM heap, which makes it much easier to tune the memory sizes.
To summarize: in client mode, the memory of the AM's container is determined by spark.yarn.am.memory plus spark.yarn.am.memoryOverhead; the memory each executor container requests is the executor memory plus
spark.yarn.executor.memoryOverhead; and the storage memory of the driver and executors is roughly their heap size multiplied by 0.54 (i.e. 0.6 * 0.9, not counting the overhead). In YARN, the memory a container requests must be an integer multiple of
yarn.scheduler.minimum-allocation-mb.
The diagram below shows the Spark on YARN memory structure (image sourced from the web):
For the analysis of cluster mode, follow the same procedure as above. I hope this article helps!
Hadoop YARN: 1/1 local-dirs are bad: /var/lib/hadoop-yarn/cache/yarn/nm-local-dir; 1/1 log-dirs are bad: /var/log/hadoop-yarn/containers
1/1 local-dirs are bad: /var/lib/hadoop-yarn/cache/yarn/nm-local-dir;
1/1 log-dirs are bad: /var/log/hadoop-yarn/containers
Node Manager logs
yarn.server.nodemanager.DirectoryCollection: Directory /var/lib/hadoop-yarn/cache/yarn/nm-local-dir error, used space above threshold of 90.0%, removing from list of valid directories
17:45:00,713 WARN org.apache.hadoop.yarn.server.nodemanager.DirectoryCollection: Directory /var/log/hadoop-yarn/containers error, used space above threshold of 90.0%, removing from list of valid directories
17:45:00,713 INFO org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService: Disk(s) failed: 1/1 local-dirs are bad: /var/lib/hadoop-yarn/cache/yarn/nm-local-dir; 1/1 log-dirs are bad: /var/log/hadoop-yarn/containers
Resource Manager logs
16:57:07,301 INFO org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl: Node localhost:34650 reported UNHEALTHY with details: 1/1 local-dirs are bad: /var/lib/hadoop-yarn/cache/yarn/nm-local-dir; 1/1 log-dirs are bad: /var/log/hadoop-yarn/containers
When I ran into this problem, my solution was to raise
yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage
in yarn-site.xml (see the sketch below) and then restart YARN.
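A sketch of the property block; the 98.5 threshold is an illustrative value I picked, not one from the article, so raise it only as far as your disks can tolerate:
<property>
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>98.5</value>
</property>
After updating yarn-site.xml, restart the NodeManager: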
yarn-daemon.sh stop nodemanager
yarn-daemon.sh start nodemanager
Several very useful features were added in recent Hadoop YARN versions, including:
(1) ResourceManager HA
Starting with Apache Hadoop 2.4 / CDH 5.0.0, the ResourceManager HA feature supports ZooKeeper-based active/standby failover; for the specific configuration parameters, refer to Cloudera's documentation.
Note that only the first phase of the ResourceManager HA design has been completed: after the standby ResourceManager takes over, it kills the applications that were running, reads their metadata from the shared storage system, and resubmits them. Once the ApplicationMaster has been started, the remaining fault tolerance is left to the ApplicationMaster itself; for example, the MapReduce ApplicationMaster continuously writes information about completed tasks to HDFS, so that after a restart it can re-read those logs and only re-run the unfinished tasks. The goal of the second phase of ResourceManager HA is for the standby ResourceManager to take over from the active one without killing the running applications, letting them carry on as if nothing had happened.
(2) Disk fault tolerance
Starting with Apache Hadoop 2.4 / CDH 5.0.0, several parameters were added that make YARN much friendlier to NodeManagers with multiple disks (see the related JIRA). The three new parameters are:
yarn.nodemanager.disk-health-checker.min-healthy-disks: the minimum fraction of healthy disks the NodeManager must have; when the fraction of healthy disks drops below this value, the NodeManager stops accepting and launching new containers. The default is 0.25, i.e. 25%.
yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage: the maximum utilization of a single disk; when a disk's utilization exceeds this value, the disk is considered bad and is no longer used. The default is 100, i.e. 100%, and it can be lowered as appropriate.
yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb: the minimum free space a disk must retain; when a disk's free space drops below this value, the disk is no longer used. The default is 0, i.e. 0 MB.
(3) Resource schedulers
Fair Scheduler: the Fair Scheduler gained a very useful new feature that lets users move a running application from one queue to another online, for example moving an important job from a low-priority queue to a high-priority one. The command is: bin/yarn application -movetoqueue appID -queue targetQueueName (see the related JIRA).
Capacity Scheduler: the resource preemption feature in the Capacity Scheduler has been thoroughly tested and is now ready for use.