Namenode: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.NotReplicatedYetException): Not replicated yet

dfs.namenode.handler.count is 64 by default in our cluster
(the system-calculated value is 20 × {number of datanodes} = 60 for a 3-datanode cluster).
This property does not change automatically as the number of nodes grows, so it has to be
increased manually.
The Hadoop RPC server consists of a single RPC queue per port and multiple handler
(worker) threads that dequeue and process requests. If the number of handlers is
insufficient, then the RPC queue starts building up and eventually overflows.
You may start seeing task failures and eventually job failures and unhappy users.
It is recommended that the RPC handler count be set to 20 * log2(cluster size), with an
upper limit of 200.
For example, for a 64-node cluster you should set this to 20 * log2(64) = 120.
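The rule of thumb above can be sketched as a small helper (the function name and cap parameter are mine, not part of Hadoop):

```python
import math

def recommended_handler_count(cluster_size: int, cap: int = 200) -> int:
    """Recommended dfs.namenode.handler.count: 20 * log2(cluster size),
    capped at an upper limit (200 per the guideline above)."""
    return min(cap, int(20 * math.log2(cluster_size)))

print(recommended_handler_count(64))    # 20 * log2(64) = 20 * 6 = 120
print(recommended_handler_count(4096))  # 20 * 12 = 240, capped to 200
```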

The RPC handler count can be configured with the following setting in hdfs-site.xml.
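A minimal hdfs-site.xml fragment, using the value computed for the 64-node example above:

```xml
<property>
  <name>dfs.namenode.handler.count</name>
  <value>120</value>
</property>
```

Restart the NameNode after changing this value for it to take effect.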

