Personal Hadoop Error List

Error 1: Too many fetch-failures

After a reduce task starts, its first phase is the shuffle, in which it fetches data from the map side. Each fetch can fail because of a connect timeout, a read timeout, a checksum error, and so on. The reduce task keeps a counter per map that records how many times fetching that map's output has failed. When the count reaches a threshold, the reduce task reports to the JobTracker that fetching that map's output has failed too many times, and prints a log like:

Failed to fetch map-output from attempt_201105261254_102769_m_001802_0 even after MAX_FETCH_RETRIES_PER_MAP retries... reporting to the JobTracker

The threshold is computed as:

max(MIN_FETCH_RETRIES_PER_MAP,
    getClosestPowerOf2((this.maxBackoff * 1000 / BACKOFF_INIT) + 1));

By default MIN_FETCH_RETRIES_PER_MAP = 2, maxBackoff = 300, and BACKOFF_INIT = 4000, so the default threshold is max(2, getClosestPowerOf2(300 * 1000 / 4000 + 1)) = max(2, getClosestPowerOf2(76)) = 6. It can be adjusted via the mapred.reduce.copy.backoff parameter, which supplies maxBackoff.
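The parameter goes in mapred-site.xml. A minimal sketch; the value 600 below (double the default 300) is only an illustration:

<property>
  <!-- illustrative value: maximum backoff in seconds while fetching a map output (default 300); a larger value also raises the fetch-retry threshold -->
  <name>mapred.reduce.copy.backoff</name>
  <value>600</value>
</property>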

Once the threshold is reached, the reduce task tells its TaskTracker over the umbilical protocol, and the TaskTracker notifies the JobTracker at the next heartbeat. When the JobTracker finds that more than 50% of the reduces have reported repeated failures fetching a particular map's output, it fails that map and reschedules it, printing a log like:

"Too many fetch-failures for output of task: attempt_201105261254_102769_m_001802_0 ... killing it"

Error 2: Task attempt failed to report status for 622 seconds. Killing

The description of mapred.task.timeout, which defaults to 600 seconds (the value itself is in milliseconds, i.e. 600000), says: "The number of milliseconds before a task will be terminated if it neither reads an input, writes an output, nor updates its status string."

Increasing the value of mapred.task.timeout might solve the problem, but you need to figure out whether the map task really needs more than 600 seconds to process its input data, or whether there is a bug in the code that needs to be debugged.
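If a longer timeout is genuinely needed, it can be raised in mapred-site.xml. A minimal sketch; the value 1200000 ms (20 minutes) is an arbitrary illustration:

<property>
  <!-- illustrative value: task timeout in milliseconds; 1200000 ms = 20 minutes (default is 600000 ms) -->
  <name>mapred.task.timeout</name>
  <value>1200000</value>
</property>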

According to Hadoop best practices, a map task should on average take about a minute to process its InputSplit.

Error 3: Hadoop: Blacklisted tasktracker

Put the following config in conf/hdfs-site.xml:

<property>
  <name>dfs.hosts</name>
  <value>/full/path/to/whitelisted/node/file</value>
</property>
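
The whitelist file referenced by dfs.hosts is a plain text file with one permitted hostname or IP address per line, for example (the hostnames below are made up):

node01.example.com
node02.example.com
node03.example.com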

Use the following command to ask Hadoop to refresh node status based on the configuration:

./bin/hadoop dfsadmin -refreshNodes

http://serverfault.com/questions/288440/hadoop-blacklisted-tasktracker