Hadoop + ZooKeeper: achieving NameNode high availability
     
  Add Date : 2018-11-21      
         
         
         
 

Hadoop + ZooKeeper installation and configuration:
 
Export the JAVA_HOME environment variable in hadoop-env.sh.
Set the hostname, and in /etc/hosts add the mapping between hostnames and IP addresses for the master and all slaves.
 
Passwordless SSH key configuration:
ssh-keygen -t rsa
Two files are generated in ~/.ssh: id_rsa (the private key) and id_rsa.pub (the public key).
cat id_rsa.pub >> ~/.ssh/authorized_keys
scp authorized_keys user@ipaddress:/home/user/.ssh/
Change the permissions of authorized_keys to 600.
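Put together, a minimal sketch of the passwordless-SSH setup on the master (user and slave1 are placeholders for your own account and slave hosts):

# generate the key pair (accept the defaults, leave the passphrase empty)
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# push the authorized_keys file to each slave, then test the login
scp ~/.ssh/authorized_keys user@slave1:/home/user/.ssh/
ssh user@slave1 hostname    # should print the slave hostname without asking for a password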
 
// NameNode high availability is in fact achieved through a JournalNode cluster or NFS: the active and standby NameNodes share a common edits directory, the standby keeps itself synchronized with the active machine, and automatic NameNode failover is usually implemented with a ZooKeeper cluster.
 
NameNode high availability configuration:
In core-site.xml, set fs.defaultFS to hdfs://mycluster.
In hdfs-site.xml:
        add dfs.nameservices with value mycluster
        add dfs.ha.namenodes.mycluster with value nn1,nn2
        add dfs.namenode.rpc-address.mycluster.nn1 with value hostname1:8020
        add dfs.namenode.rpc-address.mycluster.nn2 with value hostname2:8020
        add dfs.namenode.http-address.mycluster.nn1 with value hostname1:50070   // web UI of the first NameNode
        add dfs.namenode.http-address.mycluster.nn2 with value hostname2:50070
        add dfs.namenode.shared.edits.dir with the location of the shared edits storage (all JournalNodes, port 8485)
        add dfs.client.failover.proxy.provider.mycluster with value org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider   // the Java class the Hadoop client uses to find, and communicate with, the currently active NameNode
        add dfs.ha.fencing.methods with value sshfence   // use ssh during a switchover
// only one NameNode may be active at any time; this setting makes the failover controller ssh into the NameNode that was active and kill its process
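Once HA is in place, one way to check which NameNode is active, or to force a manual switchover, is the hdfs haadmin tool; a small sketch using the nn1/nn2 ids from the configuration above:

hdfs haadmin -getServiceState nn1    # prints "active" or "standby"
hdfs haadmin -getServiceState nn2
hdfs haadmin -failover nn1 nn2       # hand the active role over to nn2 (fencing is applied to nn1)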
 
Here is the full hadoop + zookeeper configuration:
hdfs-site.xml configuration
<configuration>
  <property>
  <name>dfs.replication</name>
  <value>3</value>                       // replication factor of 3
  </property>
  <property>
  <name>heartbeat.recheckinterval</name>     // datanode heartbeat time is 10s
  <value>10</value>
  </property>
  <property>
  <name>dfs.name.dir</name>
  <value>file:/mnt/vdc/hadoopstore/hdfs/name</value>   // directory where the HDFS metadata is stored; several directories may be listed to keep multiple backups of the metadata
  </property>
  <property>
  <name>dfs.data.dir</name>
  <value>file:/mnt/vdc/hadoopstore/hdfs/data</value>   // directory where HDFS block data is stored; it can sit on a different partition
  </property>
  <property>
  <name>dfs.webhdfs.enabled</name>     // enable access to HDFS over the web (WebHDFS)
  <value>true</value>
  </property>
  <property>
  <name>dfs.nameservices</name>     // define the nameservice ID
  <value>mycluster</value>
  </property>
  <property>
  <name>dfs.ha.namenodes.mycluster</name>     // the two NameNodes of this nameservice are nn1 and nn2
  <value>nn1,nn2</value>
  </property>
  <property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>   // RPC address of the first NameNode, port 8020
  <value>master1:8020</value>
  </property>
  <property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>   // RPC address of the second NameNode, port 8020
  <value>master2:8020</value>
  </property>
  <property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>master1:50070</value>   // HTTP port of the first NameNode
  </property>
  <property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>master2:50070</value>   // HTTP port of the second NameNode
  </property>
  <property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://master1:8485;master2:8485;slave1:8485;slave2:8485;slave3:8485;slave4:8485;slave5:8485;slave6:8485;slave7:8485;slave8:8485;slave9:8485;slave10:8485/mycluster</value>
  </property>   // shared edits directory on the JournalNode quorum
// client failover configuration
  <property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>   // the class that implements the automatic switch to the active NameNode
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>   // use ssh to fence the old active NameNode during a switchover
  </property>

  <property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/kduser/.ssh/id_rsa</value>   // location of the private key used by sshfence
  </property>
  <property>
  <name>dfs.ha.automatic-failover.enabled</name>   // whether to fail over automatically when the active NameNode fails
  <value>true</value>
  </property>
  // the NameNode id of this node is nn1
  <property>
  <name>dfs.ha.namenode.id</name>
  <value>nn1</value>
  </property>
</configuration>
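Because dfs.ha.automatic-failover.enabled is set to true, a ZKFailoverController (zkfc) has to run next to each NameNode; a minimal sketch of the usual steps (Hadoop 2.x script names, with the ZooKeeper ensemble already running):

hdfs zkfc -formatZK              # run once on one NameNode: creates the HA znode in ZooKeeper
hadoop-daemon.sh start zkfc      # run on each NameNode host: starts the failover controller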
 
 
mapred-site.xml configuration
<configuration>
  <property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>   // from Hadoop 2.x onward the framework is YARN
  </property>
  <property>
  <name>mapreduce.reduce.shuffle.input.buffer.percent</name>   // the default is 0.70; lowered here to improve system stability
  <value>0.1</value>
  </property>
</configuration>
 
 
yarn-site.xml configuration
<configuration>
  <property>
  <name>yarn.nodemanager.resource.memory-mb</name>   // total physical memory (MB) available to the NodeManager
  <value>10240</value>
  </property>
  <property>
  <name>yarn.resourcemanager.address</name>
  // the address the ResourceManager exposes to clients; applications are submitted and killed through it
  <value>master1:8032</value>
  </property>
  <property>
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>95.0</value>
  </property>
  <property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>master1:8030</value>
  </property>
  <property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>master1:8031</value>
  </property>
  <property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
  </property>
  <property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>master1:8033</value>
  </property>
  <property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>master1:8088</value>
  </property>
</configuration>
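With yarn-site.xml in place, YARN is normally started from the ResourceManager host and checked against the addresses configured above; a minimal sketch:

start-yarn.sh       # starts the ResourceManager and the NodeManagers listed in the slaves file
yarn node -list     # lists the NodeManagers that have registered
# the ResourceManager web UI is then reachable at http://master1:8088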
 
core-site.xml configuration
<configuration>
  <property>
  <name>hadoop.native.lib</name>
  <value>true</value>
  <description>Should native hadoop libraries, if present, be used.</description>
  // use the native libraries if they are present (this is the default)
  </property>
<!--
  <property>
  <name>fs.default.name</name>
  <value>hdfs://0.0.0.0:9000</value>     // default filesystem URL
  </property>
-->
  <property>
  <name>hadoop.tmp.dir</name>
  <value>/mnt/vdc/hadoopstore/tmp</value>     // directory for Hadoop temporary files
  </property>
  <property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>   // point the default filesystem at the HA nameservice mycluster (the two NameNodes configured above)
  </property>
  <property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/mnt/vdc/hadoopstore/journal/data</value>
  </property>
  <property>
  <name>ha.zookeeper.quorum.mycluster</name>
  <value>master1:2181,master2:2181,slave1:2181</value>
  </property>
  <property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <value>*</value>
  </property>
  <property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>*</value>
  </property>
  <property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
  </property>
  <property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
  </property>
</configuration>
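The qjournal URI in dfs.namenode.shared.edits.dir only works if a JournalNode daemon is running on every host listed there (port 8485); a minimal sketch, to be run on each of those hosts (Hadoop 2.x script name):

hadoop-daemon.sh start journalnode    # serves the shared edits directory on port 8485
jps                                   # should now show a JournalNode process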
 
The first time you start the cluster you need to format the NameNode:   hadoop namenode -format
Check the cluster status with jps, or with:
hadoop dfsadmin -report
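For an HA cluster the first start is order-sensitive; the sequence below is a sketch of one common order, assuming the master1/master2 layout used above:

# 1. on every JournalNode host
hadoop-daemon.sh start journalnode
# 2. on master1: format and start the first NameNode
hdfs namenode -format
hadoop-daemon.sh start namenode
# 3. on master2: copy the formatted metadata from master1, then start the standby
hdfs namenode -bootstrapStandby
hadoop-daemon.sh start namenode
# 4. start the DataNodes and check the cluster
start-dfs.sh
hadoop dfsadmin -report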
 
ZooKeeper commands explained:
Configure the basic environment variables:
export ZOOKEEPER_HOME=/home/zookeeper-3.3.3
export PATH=$PATH:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf
ZooKeeper configuration file zoo.cfg:
tickTime=2000   // a heartbeat is sent every 2 seconds by default
dataDir=/disk1/zookeeper   // where the in-memory database snapshots are stored
dataLogDir=/disk2/zookeeper   // where the transaction logs are stored
clientPort=2181
initLimit=5       // connection timeout in ticks; 5 here means a follower gives up after 10s
syncLimit=2
server.1=zookeeper1:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888
 
ZooKeeper uses port 2181 for clients, port 2888 for followers to connect to the leader, and port 3888 for leader election.
Each server's dataDir contains a myid file holding that server's id (1, 2 or 3).
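The myid files themselves are created by hand; a small sketch for the three servers above (run the matching line on its own host, paths as in zoo.cfg):

echo 1 > /disk1/zookeeper/myid    # on zookeeper1
echo 2 > /disk1/zookeeper/myid    # on zookeeper2
echo 3 > /disk1/zookeeper/myid    # on zookeeper3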
 
zkServer.sh start/stop/status   // start / stop / check status
zkCli.sh -server ipaddress:2181   // connect to a ZooKeeper server
ls /   // list the children of a node
get /xxxx   // view the data stored in a node
set / create / delete /xxxx   // set / create / delete a node's data
In practice, though, ZooKeeper is mostly accessed through its API.
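Besides zkCli.sh, a quick way to see each server's role is ZooKeeper's four-letter commands sent to the client port; a small sketch (nc is assumed to be installed):

echo ruok | nc zookeeper1 2181                 # prints "imok" if the server is running
echo stat | nc zookeeper1 2181 | grep Mode     # prints "Mode: leader" or "Mode: follower"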

     
         
         
         