HA-Federation-HDFS + Yarn cluster deployment

Add Date : 2018-11-21
After an afternoon of attempts, I finally got this cluster up and running. A full deployment like this isn't strictly necessary just for learning, but building it now lays the groundwork for a real environment later.

The following walks through an HA-Federation HDFS + Yarn cluster deployment.

First, my configuration. The four nodes run the following roles:

1. abctest117: active namenode

2. abctest118: standby namenode, journalnode, datanode

3. abctest119: active namenode, journalnode, datanode

4. abctest120: standby namenode, journalnode, datanode

All of these roles are squeezed onto four virtual machines only because my computer cannot run more; in a real deployment each would sit on its own server. In short, 117 and 119 act as the active namenodes, 118 and 120 as the standbys, and the datanodes and journalnodes are placed on 118, 119 and 120.
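For orientation, a layout like this normally maps onto hdfs-site.xml roughly as follows. This is a minimal sketch, not my actual configuration: the nameservice ID hadoop-cluster1 and namenode IDs nn1/nn2 match the bootstrap output later in this post, but the second nameservice name, ports, and everything else are assumptions to adapt.

```xml
<!-- Minimal sketch of the HA/federation part of hdfs-site.xml (Hadoop 2.x).
     hadoop-cluster1, nn1/nn2 and the hostnames match this post's cluster;
     hadoop-cluster2 and the ports are illustrative assumptions. -->
<configuration>
  <!-- Federation: one entry per nameservice -->
  <property>
    <name>dfs.nameservices</name>
    <value>hadoop-cluster1,hadoop-cluster2</value>
  </property>
  <!-- HA: the two namenodes inside nameservice hadoop-cluster1 -->
  <property>
    <name>dfs.ha.namenodes.hadoop-cluster1</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hadoop-cluster1.nn1</name>
    <value>abctest117:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.hadoop-cluster1.nn2</name>
    <value>abctest119:8020</value>
  </property>
  <!-- Shared edit log: the JournalNode quorum on 118/119/120 -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://abctest118:8485;abctest119:8485;abctest120:8485/hadoop-cluster1</value>
  </property>
</configuration>
```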

I'll skip the lengthy configuration details here. The problems I ran into along the way are recorded below:

 1. Start the journalnodes. (To be honest, I don't yet fully understand what the journalnode does and will study it later; in short, the JournalNodes store the shared edit log that keeps the standby namenode in sync with the active one.) Start a journalnode on each of those nodes:

[abctest@abctest118 hadoop-2.6.0]$ sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/abctest/hadoop-2.6.0/logs/hadoop-abctest-journalnode-abctest118.abctest.out
[abctest@abctest118 hadoop-2.6.0]$ jps
11447 JournalNode
11485 Jps
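Since the same start command has to be run on each of the three journalnode hosts, a small loop over SSH saves typing. This is only a sketch under assumptions: the HOSTS list and HADOOP_HOME path are the ones used in this post, and the leading echo makes it a dry run that just prints the commands; drop the echo to actually execute them.

```shell
# Dry-run sketch: print the command that would start a JournalNode on each
# quorum host over SSH. Remove the leading "echo" to really run it.
# HOSTS and HADOOP_HOME follow this post's cluster; adapt them to yours.
HOSTS="abctest118 abctest119 abctest120"
HADOOP_HOME=/home/abctest/hadoop-2.6.0
for h in $HOSTS; do
  echo ssh "$h" "$HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode"
done
```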

 2. Formatting the namenode threw errors. (After a long hunt, the cause turned out to be the firewall; I had set up passwordless SSH but never thought to turn the firewall off.)

15/08/20 02:12:45 INFO ipc.Client: Retrying connect to server: abctest119/192.168.75.119:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/08/20 02:12:46 INFO ipc.Client: Retrying connect to server: abctest118/192.168.75.118:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/08/20 02:12:46 INFO ipc.Client: Retrying connect to server: abctest120/192.168.75.120:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/08/20 02:12:46 INFO ipc.Client: Retrying connect to server: abctest119/192.168.75.119:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/08/20 02:12:46 WARN namenode.NameNode: Encountered exception during format:
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 2 exceptions thrown:
192.168.75.120:8485: No Route to Host from 43.49.49.59.broad.ty.sx.dynamic.163data.com.cn/59.49.49.43 to abctest120:8485 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost
192.168.75.119:8485: No Route to Host from 43.49.49.59.broad.ty.sx.dynamic.163data.com.cn/59.49.49.43 to abctest119:8485 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost
    at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
    at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:884)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:937)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1379)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
15/08/20 02:12:47 INFO ipc.Client: Retrying connect to server: abctest118/192.168.75.118:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/08/20 02:12:47 FATAL namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 2 exceptions thrown:
192.168.75.120:8485: No Route to Host from 43.49.49.59.broad.ty.sx.dynamic.163data.com.cn/59.49.49.43 to abctest120:8485 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost
192.168.75.119:8485: No Route to Host from 43.49.49.59.broad.ty.sx.dynamic.163data.com.cn/59.49.49.43 to abctest119:8485 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost
    at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
    at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:884)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:937)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1379)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
15/08/20 02:12:47 INFO util.ExitUtil: Exiting with status 1
15/08/20 02:12:47 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at 43.49.49.59.broad.ty.sx.dynamic.163data.com.cn/59.49.49.43
************************************************************/
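A NoRouteToHostException like the one above is almost always a host-level network or firewall problem rather than Hadoop itself. Before reformatting, it is worth verifying that the JournalNode port 8485 is actually reachable from the namenode host. A quick probe using bash's built-in /dev/tcp pseudo-device (so no nc or telnet is needed) might look like this; the hostnames are this post's, the helper function is my own sketch:

```shell
# Quick reachability probe for the JournalNode RPC port (8485).
# check_port is a hypothetical helper, not a Hadoop tool; it uses
# bash's /dev/tcp redirection for a fast connect test.
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c ">/dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port reachable"
  else
    echo "$host:$port unreachable"   # firewall, routing, or daemon down
  fi
}
for h in abctest118 abctest119 abctest120; do
  check_port "$h" 8485
done
```

If the ports are blocked, stopping the firewall on every node (on CentOS 6-era systems, `service iptables stop` plus `chkconfig iptables off`) is the usual fix.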

 After turning off the firewall, formatting succeeded. (Note the -clusterId flag: with federation, every nameservice must be formatted with the same cluster ID so that they all join the same federated cluster.)

[abctest@abctest117 hadoop-2.6.0]$ bin/hdfs namenode -format -clusterId hadoop-cluster

15/08/20 02:22:05 INFO namenode.FSNamesystem: Append Enabled: true
15/08/20 02:22:06 INFO util.GSet: Computing capacity for map INodeMap
15/08/20 02:22:06 INFO util.GSet: VM type = 64-bit
15/08/20 02:22:06 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
15/08/20 02:22:06 INFO util.GSet: capacity = 2^20 = 1048576 entries
15/08/20 02:22:06 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/08/20 02:22:06 INFO util.GSet: Computing capacity for map cachedBlocks
15/08/20 02:22:06 INFO util.GSet: VM type = 64-bit
15/08/20 02:22:06 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
15/08/20 02:22:06 INFO util.GSet: capacity = 2^18 = 262144 entries
15/08/20 02:22:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/08/20 02:22:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/08/20 02:22:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
15/08/20 02:22:06 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/08/20 02:22:06 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/08/20 02:22:06 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/08/20 02:22:06 INFO util.GSet: VM type = 64-bit
15/08/20 02:22:06 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
15/08/20 02:22:06 INFO util.GSet: capacity = 2^15 = 32768 entries
15/08/20 02:22:06 INFO namenode.NNConf: ACLs enabled? false
15/08/20 02:22:06 INFO namenode.NNConf: XAttrs enabled? true
15/08/20 02:22:06 INFO namenode.NNConf: Maximum size of an xattr: 16384
15/08/20 02:22:08 INFO namenode.FSImage: Allocated new BlockPoolId: BP-971817124-192.168.75.117-1440062528650
15/08/20 02:22:08 INFO common.Storage: Storage directory /home/abctest/hadoop/hdfs/name has been successfully formatted.
15/08/20 02:22:10 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/08/20 02:22:10 INFO util.ExitUtil: Exiting with status 0
15/08/20 02:22:10 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at abctest117/192.168.75.117
************************************************************/

 3. Start the namenode:

[abctest@abctest117 hadoop-2.6.0]$ sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /home/abctest/hadoop-2.6.0/logs/hadoop-abctest-namenode-abctest117.out
[abctest@abctest117 hadoop-2.6.0]$ jps
18550 NameNode
18604 Jps

4. Format the standby namenode:

[abctest@abctest119 hadoop-2.6.0]$ bin/hdfs namenode -bootstrapStandby
15/08/20 02:36:26 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = abctest119/192.168.75.119
STARTUP_MSG:   args = [-bootstrapStandby]
STARTUP_MSG:   version = 2.6.0
.....
.....
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.8.0_51
************************************************************/
15/08/20 02:36:26 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/08/20 02:36:26 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
=====================================================
About to bootstrap Standby ID nn2 from:
           Nameservice ID: hadoop-cluster1
        Other Namenode ID: nn1
  Other NN's HTTP address: http://abctest117:50070
   Other NN's IPC address: abctest117/192.168.75.117:8020
             Namespace ID: 1244139539
            Block pool ID: BP-971817124-192.168.75.117-1440062528650
               Cluster ID: hadoop-cluster
           Layout version: -60
=====================================================
15/08/20 02:36:28 INFO common.Storage: Storage directory /home/abctest/hadoop/hdfs/name has been successfully formatted.
15/08/20 02:36:29 INFO namenode.TransferFsImage: Opening connection to http://abctest117:50070/imagetransfer?getimage=1&txid=0&storageInfo=-60:1244139539:0:hadoop-cluster
15/08/20 02:36:30 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
15/08/20 02:36:30 INFO namenode.TransferFsImage: Transfer took 0.01s at 0.00 KB/s
15/08/20 02:36:30 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 352 bytes.
15/08/20 02:36:30 INFO util.ExitUtil: Exiting with status 0
15/08/20 02:36:30 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at abctest119/192.168.75.119
************************************************************/

5. Start the standby namenode:

[abctest@abctest119 hadoop-2.6.0]$ sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /home/abctest/hadoop-2.6.0/logs/hadoop-abctest-namenode-abctest119.out
[abctest@abctest119 hadoop-2.6.0]$ jps
14401 JournalNode
15407 NameNode
15455 Jps
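The walkthrough stops once the namenodes are up. For completeness, the remaining steps on a setup like this usually include starting the datanodes and YARN and, with manual HA (no automatic failover), promoting one namenode to active. The sketch below only prints the standard Hadoop 2.x commands as a checklist; the nameservice and namenode IDs come from the bootstrap output above, and each command would be run from the Hadoop install directory.

```shell
# Checklist sketch of the usual remaining steps; printed, not executed.
# hadoop-cluster1 / nn1 follow this post's bootstrap output; with automatic
# failover (ZKFC) the transitionToActive step would not be done by hand.
STEPS='sbin/hadoop-daemons.sh start datanode
sbin/start-yarn.sh
bin/hdfs haadmin -ns hadoop-cluster1 -transitionToActive nn1
bin/hdfs haadmin -ns hadoop-cluster1 -getServiceState nn1'
printf '%s\n' "$STEPS"
```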