         
  HA-Federation-HDFS + Yarn cluster deployment
     
  Add Date : 2018-11-21      
         
       
         
After an afternoon of attempts, I finally got this cluster up. A full build like this isn't strictly necessary for learning, but it lays the groundwork for setting up a real environment later.

What follows is an HA-Federation HDFS + YARN cluster deployment.

First, my configuration. The four nodes run:

1. abctest117: active namenode
2. abctest118: standby namenode, journalnode, datanode
3. abctest119: active namenode, journalnode, datanode
4. abctest120: standby namenode, journalnode, datanode

Everything runs in virtual machines only because my computer could not hold more; in a real deployment each of these roles should live on its own server. In short: abctest117 and abctest119 act as active namenodes, abctest118 and abctest120 as standby namenodes, and the datanodes and journalnodes are placed on abctest118, abctest119, and abctest120.
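For reference, a federated HA layout like this is expressed in hdfs-site.xml roughly as follows. This is only a partial sketch: it shows the one nameservice and namenode IDs that appear in the logs below (hadoop-cluster1, nn1 on abctest117, nn2 on abctest119); the second nameservice and the failover/fencing settings are omitted.

```xml
<!-- Partial hdfs-site.xml sketch for one HA nameservice (illustrative). -->
<property>
  <name>dfs.nameservices</name>
  <value>hadoop-cluster1,hadoop-cluster2</value>
</property>
<property>
  <name>dfs.ha.namenodes.hadoop-cluster1</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hadoop-cluster1.nn1</name>
  <value>abctest117:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hadoop-cluster1.nn2</name>
  <value>abctest119:8020</value>
</property>
<!-- The journalnode quorum shared by the HA pair. -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://abctest118:8485;abctest119:8485;abctest120:8485/hadoop-cluster1</value>
</property>
```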

I will skip over the many configuration steps here. The problems I ran into are recorded below:

1. Start the journalnodes. I don't yet fully understand what the journalnode does; something to study later. Start it on each journal node:

[abctest@abctest118 hadoop-2.6.0]$ sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/abctest/hadoop-2.6.0/logs/hadoop-abctest-journalnode-abctest118.abctest.out
[abctest@abctest118 hadoop-2.6.0]$ jps
11447 JournalNode
11485 Jps
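Since the journalnodes live on abctest118, abctest119, and abctest120 here, the same start command has to be run on each. A small sketch that loops over them (the hostnames and install path are the ones used in this walkthrough; it prints the commands by default, set DRY_RUN=0 to actually run them over SSH):

```shell
#!/bin/sh
# JournalNode hosts, matching the layout described above.
JN_HOSTS="abctest118 abctest119 abctest120"

# Print (default) or, with DRY_RUN=0, execute the start command on each host.
start_journalnodes() {
  for host in $JN_HOSTS; do
    cmd="ssh $host /home/abctest/hadoop-2.6.0/sbin/hadoop-daemon.sh start journalnode"
    if [ "${DRY_RUN:-1}" = "1" ]; then
      echo "$cmd"
    else
      $cmd
    fi
  done
}

start_journalnodes
```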

2. Formatting the namenode threw errors. It eventually turned out to be the firewall: having passwordless SSH working does not mean the firewall is off.

15/08/20 02:12:45 INFO ipc.Client: Retrying connect to server: abctest119/192.168.75.119:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/08/20 02:12:46 INFO ipc.Client: Retrying connect to server: abctest118/192.168.75.118:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/08/20 02:12:46 INFO ipc.Client: Retrying connect to server: abctest120/192.168.75.120:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/08/20 02:12:46 INFO ipc.Client: Retrying connect to server: abctest119/192.168.75.119:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/08/20 02:12:46 WARN namenode.NameNode: Encountered exception during format:
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 2 exceptions thrown:
192.168.75.120:8485: No Route to Host from 43.49.49.59.broad.ty.sx.dynamic.163data.com.cn/59.49.49.43 to abctest120:8485 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost
192.168.75.119:8485: No Route to Host from 43.49.49.59.broad.ty.sx.dynamic.163data.com.cn/59.49.49.43 to abctest119:8485 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost
    at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
    at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:884)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:937)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1379)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
15/08/20 02:12:47 INFO ipc.Client: Retrying connect to server: abctest118/192.168.75.118:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/08/20 02:12:47 FATAL namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 2 exceptions thrown:
192.168.75.120:8485: No Route to Host from 43.49.49.59.broad.ty.sx.dynamic.163data.com.cn/59.49.49.43 to abctest120:8485 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost
192.168.75.119:8485: No Route to Host from 43.49.49.59.broad.ty.sx.dynamic.163data.com.cn/59.49.49.43 to abctest119:8485 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost
    at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
    at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
    at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:884)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:171)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:937)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1379)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
15/08/20 02:12:47 INFO util.ExitUtil: Exiting with status 1
15/08/20 02:12:47 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at 43.49.49.59.broad.ty.sx.dynamic.163data.com.cn/59.49.49.43
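Since the "No route to host" above came from the firewall rather than SSH, a quick way to diagnose it is to probe the journalnode RPC port (8485) from the node doing the formatting. A sketch using bash's /dev/tcp redirection (hostnames are this cluster's; any "unreachable" host points at a firewall or routing problem):

```shell
#!/bin/bash
# Probe TCP port 8485 (JournalNode RPC) on each quorum host.
check_jn_ports() {
  for host in abctest118 abctest119 abctest120; do
    if timeout 2 bash -c "exec 3<>/dev/tcp/$host/8485" 2>/dev/null; then
      echo "$host:8485 reachable"
    else
      echo "$host:8485 unreachable"
    fi
  done
}

check_jn_ports
```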

With the firewall out of the way, formatting succeeded:

[abctest@abctest117 hadoop-2.6.0]$ bin/hdfs namenode -format -clusterId hadoop-cluster

15/08/20 02:22:05 INFO namenode.FSNamesystem: Append Enabled: true
15/08/20 02:22:06 INFO util.GSet: Computing capacity for map INodeMap
15/08/20 02:22:06 INFO util.GSet: VM type = 64-bit
15/08/20 02:22:06 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
15/08/20 02:22:06 INFO util.GSet: capacity = 2^20 = 1048576 entries
15/08/20 02:22:06 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/08/20 02:22:06 INFO util.GSet: Computing capacity for map cachedBlocks
15/08/20 02:22:06 INFO util.GSet: VM type = 64-bit
15/08/20 02:22:06 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
15/08/20 02:22:06 INFO util.GSet: capacity = 2^18 = 262144 entries
15/08/20 02:22:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/08/20 02:22:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/08/20 02:22:06 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
15/08/20 02:22:06 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/08/20 02:22:06 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/08/20 02:22:06 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/08/20 02:22:06 INFO util.GSet: VM type = 64-bit
15/08/20 02:22:06 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
15/08/20 02:22:06 INFO util.GSet: capacity = 2^15 = 32768 entries
15/08/20 02:22:06 INFO namenode.NNConf: ACLs enabled? false
15/08/20 02:22:06 INFO namenode.NNConf: XAttrs enabled? true
15/08/20 02:22:06 INFO namenode.NNConf: Maximum size of an xattr: 16384
15/08/20 02:22:08 INFO namenode.FSImage: Allocated new BlockPoolId: BP-971817124-192.168.75.117-1440062528650
15/08/20 02:22:08 INFO common.Storage: Storage directory /home/abctest/hadoop/hdfs/name has been successfully formatted.
15/08/20 02:22:10 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/08/20 02:22:10 INFO util.ExitUtil: Exiting with status 0
15/08/20 02:22:10 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at abctest117/192.168.75.117
************************************************************/

3. Start the namenode:

[abctest@abctest117 hadoop-2.6.0]$ sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /home/abctest/hadoop-2.6.0/logs/hadoop-abctest-namenode-abctest117.out
[abctest@abctest117 hadoop-2.6.0]$ jps
18550 NameNode
18604 Jps

4. Bootstrap the standby namenode:

[abctest@abctest119 hadoop-2.6.0]$ bin/hdfs namenode -bootstrapStandby
15/08/20 02:36:26 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = abctest119/192.168.75.119
STARTUP_MSG:   args = [-bootstrapStandby]
STARTUP_MSG:   version = 2.6.0
.....
.....
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.8.0_51
************************************************************/
15/08/20 02:36:26 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/08/20 02:36:26 INFO namenode.NameNode: createNameNode [-bootstrapStandby]
=====================================================
About to bootstrap Standby ID nn2 from:
           Nameservice ID: hadoop-cluster1
        Other Namenode ID: nn1
  Other NN's HTTP address: http://abctest117:50070
   Other NN's IPC address: abctest117/192.168.75.117:8020
             Namespace ID: 1244139539
            Block pool ID: BP-971817124-192.168.75.117-1440062528650
               Cluster ID: hadoop-cluster
           Layout version: -60
=====================================================
15/08/20 02:36:28 INFO common.Storage: Storage directory /home/abctest/hadoop/hdfs/name has been successfully formatted.
15/08/20 02:36:29 INFO namenode.TransferFsImage: Opening connection to http://abctest117:50070/imagetransfer?getimage=1&txid=0&storageInfo=-60:1244139539:0:hadoop-cluster
15/08/20 02:36:30 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
15/08/20 02:36:30 INFO namenode.TransferFsImage: Transfer took 0.01s at 0.00 KB/s
15/08/20 02:36:30 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 352 bytes.
15/08/20 02:36:30 INFO util.ExitUtil: Exiting with status 0
15/08/20 02:36:30 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at abctest119/192.168.75.119
************************************************************/

5. Start the standby namenode:

[abctest@abctest119 hadoop-2.6.0]$ sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /home/abctest/hadoop-2.6.0/logs/hadoop-abctest-namenode-abctest119.out
[abctest@abctest119 hadoop-2.6.0]$ jps
14401 JournalNode
15407 NameNode
15455 Jps
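With the daemons up, each host's jps output should match the layout described at the top. A tiny sketch that encodes the expected set per host (hostnames are this walkthrough's; compare its output against jps on each node):

```shell
#!/bin/sh
# Expected daemons per host, from the cluster layout described at the top.
expected_daemons() {
  case "$1" in
    abctest117)                       echo "NameNode" ;;
    abctest118|abctest119|abctest120) echo "NameNode JournalNode DataNode" ;;
    *)                                echo "unknown host" ;;
  esac
}

expected_daemons abctest119  # prints: NameNode JournalNode DataNode
```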
     
         
       
         