Hadoop 2.7.1 High Availability (HA) Installation and Configuration Based on QJM
     
  Add Date : 2018-11-21      
         
         
         
 


1. Modify the hostname and hosts file

10.205.22.185  nn1 (primary)  roles: namenode, resourcemanager, datanode, zk, hive, sqoop
10.205.22.186  nn2 (standby)  roles: namenode, resourcemanager, datanode, zk
10.205.22.187  dn1            roles: datanode, zk
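
Each node's /etc/hosts must resolve all three hostnames. A minimal sketch matching the table above (append on every node):

10.205.22.185 nn1
10.205.22.186 nn2
10.205.22.187 dn1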

1.1 Configure passwordless SSH

The master must be able to reach every node over SSH without a password; set up keys as sketched below, then verify:
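
A minimal key setup, assuming everything runs as root (consistent with the /root/.ssh/id_rsa fencing key configured later); run on each node:

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa    # generate a key pair without a passphrase
ssh-copy-id root@nn1                        # push the public key to every node
ssh-copy-id root@nn2
ssh-copy-id root@dn1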

ssh nn1
ssh nn2
ssh dn1

2. Install JDK 1.8 and ZooKeeper (Hive and Sqoop can be installed once the cluster builds successfully)

2.1 Edit the profile file (/etc/profile) to configure the environment variables

export JAVA_HOME=/usr/java/jdk1.8.0_65
export JRE_HOME=/usr/java/jdk1.8.0_65/jre
export HADOOP_HOME=/app/hadoop-2.7.1
export HIVE_HOME=/app/hive
export SQOOP_HOME=/app/sqoop
export ZOOKEEPER_HOME=/app/zookeeper-3.4.6
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$HIVE_HOME/bin:$SQOOP_HOME/bin:$MAVEN_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
ulimit -SHn 65536
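
Reload the profile so the new variables take effect in the current shell:

source /etc/profile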

2.2 Modify the ZooKeeper configuration file zoo.cfg

Add:

server.1=nn1:2888:3888
server.2=nn2:2888:3888
server.3=dn1:2888:3888
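
Each server.N entry must be matched by a myid file containing N in the ZooKeeper data directory. A sketch, assuming dataDir is set to /home/hadoop/zookeeper in zoo.cfg (an assumed path; match your own dataDir):

echo 1 > /home/hadoop/zookeeper/myid    # on nn1
echo 2 > /home/hadoop/zookeeper/myid    # on nn2
echo 3 > /home/hadoop/zookeeper/myid    # on dn1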

3. Install hadoop-2.7.1 and modify the configuration files

Create the required directories:

mkdir -p /home/hadoop/tmp
mkdir -p /home/hadoop/hdfs/data
mkdir -p /home/hadoop/journal
mkdir -p /home/hadoop/name

Modify the slaves file:

nn1
nn2
dn1

Modify the hadoop-env.sh file:

export JAVA_HOME=/usr/java/jdk1.8.0_65

3.1 Configure hdfs-site.xml

<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>masters</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.masters</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.masters.nn1</name>
        <value>nn1:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.masters.nn1</name>
        <value>nn1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.masters.nn2</name>
        <value>nn2:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.masters.nn2</name>
        <value>nn2:50070</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/hdfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/name</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://nn1:8485;nn2:8485;dn1:8485/masters</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/hadoop/journal</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.masters</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>

3.2 Configure core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://masters</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>nn1:2181,nn2:2181,dn1:2181</value>
    </property>
    <property>
        <name>io.compression.codecs</name>
        <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
    </property>
    <property>
        <name>io.compression.codec.lzo.class</name>
        <value>com.hadoop.compression.lzo.LzoCodec</value>
    </property>
</configuration>

3.3 Configure yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>rm-cluster</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>nn1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>nn2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>nn1:2181,nn2:2181,dn1:2181</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>nn1:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>nn2:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
        <value>nn1:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
        <value>nn2:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>nn1:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>nn2:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address.rm1</name>
        <value>nn1:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address.rm2</name>
        <value>nn2:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>nn1:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>nn2:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.client.failover-proxy-provider</name>
        <value>org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider</value>
    </property>
</configuration>

3.4 Configure mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>nn1:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>nn2:19888</value>
    </property>
    <property>
        <name>mapred.compress.map.output</name>
        <value>true</value>
    </property>
    <property>
        <name>mapred.map.output.compression.codec</name>
        <value>com.hadoop.compression.lzo.LzoCodec</value>
    </property>
    <property>
        <name>mapred.child.env</name>
        <value>LD_LIBRARY_PATH=/usr/local/lzo/lib</value>
    </property>
</configuration>

3.5 Sync Hadoop and the configuration files above to each node (a sketch follows)
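
A minimal sync sketch, assuming the layout from section 2.1 (/app/hadoop-2.7.1, /etc/profile) and the root SSH access set up in section 1.1:

scp -r /app/hadoop-2.7.1 root@nn2:/app/
scp -r /app/hadoop-2.7.1 root@dn1:/app/
scp /etc/profile root@nn2:/etc/    # keep the environment variables consistent
scp /etc/profile root@dn1:/etc/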

4. Start the services

4.1 Start ZooKeeper on each node and check its status

zkServer.sh start
zkServer.sh status

On the master node, format the HA state in ZooKeeper:

hdfs zkfc -formatZK
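
If the format succeeded, a hadoop-ha znode should now exist; a quick check (assuming the default ZooKeeper client script on the PATH):

zkCli.sh -server nn1:2181 ls /    # expect hadoop-ha to be listed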


4.2 Start the JournalNode on each node

hadoop-daemon.sh start journalnode

4.3 Format HDFS on the primary NameNode

hadoop namenode -format


4.4 Start the NameNode process on the primary node

hadoop-daemon.sh start namenode


4.5 On the standby node, bootstrap the standby NameNode: this formats its name directory and syncs the NameNode metadata over from the primary. Then start its NameNode and ResourceManager.

hdfs namenode -bootstrapStandby
hadoop-daemon.sh start namenode
yarn-daemon.sh start resourcemanager

4.6 Start the remaining services

start-dfs.sh
start-yarn.sh
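
To sanity-check, jps on each node should show daemons matching the roles in section 1 (an illustrative list, not from the original article):

jps
# nn1/nn2: NameNode, DataNode, JournalNode, DFSZKFailoverController,
#          ResourceManager, NodeManager, QuorumPeerMain
# dn1:     DataNode, JournalNode, NodeManager, QuorumPeerMain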

4.7 Check HA status

hdfs haadmin -getServiceState nn1    # NameNode state (likewise nn2)
yarn rmadmin -getServiceState rm1    # ResourceManager state (likewise rm2)

4.8 View status in the web UI

http://nn1:50070
http://nn1:8088
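
One way to exercise automatic failover (a sketch, assuming nn1 is currently active): kill its NameNode and confirm the standby takes over:

jps                                  # on nn1, note the NameNode pid
kill -9 <NameNode-pid>               # simulate a crash
hdfs haadmin -getServiceState nn2    # should report active shortly afterwards
hadoop-daemon.sh start namenode      # bring nn1 back; it rejoins as standby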

     
         
         
         