  Hadoop + ZooKeeper NameNode High Availability
     
  Add Date : 2018-11-21      
         
         
         
 

Hadoop + ZooKeeper installation and configuration:

Add the JAVA_HOME export to hadoop-env.sh. In /etc/hosts, configure the mapping between hostnames and IP addresses, adding the hostnames and IPs of both the master and all slave nodes.

Passwordless ssh key configuration:
ssh-keygen -t rsa
Two files are generated under ~/.ssh: id_rsa (the private key) and id_rsa.pub (the public key).
cat id_rsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys user@ipaddress:/home/user/.ssh/
Change the permissions of authorized_keys to 600.
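
Put together, a minimal sketch of the passwordless-ssh setup; the user name and the slave hostnames (slave1, slave2, slave3) are placeholders for your own cluster:

    # on the master: generate a key pair without a passphrase
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
    # authorize the key locally so the master can also ssh to itself
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys
    # distribute the authorized_keys file to every slave
    for host in slave1 slave2 slave3; do
        scp ~/.ssh/authorized_keys user@$host:/home/user/.ssh/
    done
    # verify: this should print the hostname without asking for a password
    ssh user@slave1 hostname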
 
// NameNode high availability is in fact achieved through a JournalNode cluster (or NFS): the active and standby NameNodes share a common edits directory. The standby machine keeps itself synchronized with the active NameNode, and automatic NameNode failover is generally implemented with a ZooKeeper cluster.
 
NameNode high availability configuration:
In core-site.xml, set the fs.defaultFS property to hdfs://mycluster
In hdfs-site.xml:
        set dfs.nameservices to mycluster
        set dfs.ha.namenodes.mycluster to nn1,nn2
        set dfs.namenode.rpc-address.mycluster.nn1 to hostname1:8020
        set dfs.namenode.rpc-address.mycluster.nn2 to hostname2:8020
        set dfs.namenode.http-address.mycluster.nn1 to hostname1:50070   // web interface of the first namenode
        set dfs.namenode.http-address.mycluster.nn2 to hostname2:50070   // web interface of the second namenode
        set dfs.namenode.shared.edits.dir to the shared edits directory (a qjournal:// URI listing all JournalNodes on port 8485)
        set dfs.client.failover.proxy.provider.mycluster to org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider   // the Java class Hadoop clients use to communicate with the active node and to determine which namenode is active
        set dfs.ha.fencing.methods to sshfence   // use ssh during failover
// at any time only one namenode may be active; this setting uses ssh to connect to the formerly active namenode and kill its process
 
Here is the complete Hadoop + ZooKeeper configuration:
hdfs-site.xml configuration
<configuration>
  <property>
  <name>dfs.replication</name>
  <value>3</value>                       // keep 3 copies of each block
  </property>
  <property>
  <name>heartbeat.recheckinterval</name>     // datanode heartbeat check interval, 10s here
  <value>10</value>
  </property>
  <property>
  <name>dfs.name.dir</name>
  <value>file:/mnt/vdc/hadoopstore/hdfs/name</value>   // directory where the hdfs filesystem metadata is stored; if set to multiple directories, multiple backups of the metadata are kept
  </property>
  <property>
  <name>dfs.data.dir</name>
  <value>file:/mnt/vdc/hadoopstore/hdfs/data</value>   // directory where hdfs data blocks are stored; can be placed on a different partition
  </property>
  <property>
  <name>dfs.webhdfs.enabled</name>     // enable web access to hdfs (webhdfs)
  <value>true</value>
  </property>
  <property>
  <name>dfs.nameservices</name>     // define a nameservice
  <value>mycluster</value>
  </property>
  <property>
  <name>dfs.ha.namenodes.mycluster</name>     // two namenode nodes are supported; the two nodes are nn1 and nn2
  <value>nn1,nn2</value>
  </property>
  <property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name> // rpc communication address of the first namenode, port 8020
  <value>master1:8020</value>
  </property>
  <property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name> // rpc communication address of the second namenode, port 8020
  <value>master2:8020</value>
  </property>
  <property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>master1:50070</value>   // http port of the first namenode
  </property>
  <property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>master2:50070</value>   // http port of the second namenode
  </property>
  <property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://master1:8485;master2:8485;slave1:8485;slave2:8485;slave3:8485;slave4:8485;slave5:8485;slave6:8485;slave7:8485;slave8:8485;slave9:8485;slave10:8485/mycluster</value>
  </property>   // shared edits directory on the journalnodes
// client failover configuration
  <property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>   // the class through which automatic failover is implemented on the client side
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value> // use ssh to fence the old active namenode during a switchover
  </property>

  <property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/kduser/.ssh/id_rsa</value> // location of the private key used by sshfence
  </property>
  <property>
  <name>dfs.ha.automatic-failover.enabled</name>   // whether to fail over automatically when a fault occurs
  <value>true</value>
  </property>
  // the namenode id of this node, configured as nn1
  <property>
  <name>dfs.ha.namenode.id</name>
  <value>nn1</value>
  </property>
</configuration>
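
With these properties in place, the HA pair has to be initialized once before normal use. A minimal sketch of the usual Hadoop 2.x sequence, assuming the hostnames and namenode ids configured above:

    hadoop-daemon.sh start journalnode   # on every journalnode host first
    hdfs namenode -format                # on master1 (nn1) only, first time
    hadoop-daemon.sh start namenode      # on master1
    hdfs namenode -bootstrapStandby      # on master2 (nn2): copy the formatted metadata
    hdfs zkfc -formatZK                  # once: create the HA znode in zookeeper
    start-dfs.sh                         # start namenodes, datanodes and zkfc daemons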
 
 
mapred-site.xml configuration file
<configuration>
  <property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value> // from hadoop 2.x onward the framework is yarn
  </property>
  <property>
  <name>mapreduce.reduce.shuffle.input.buffer.percent</name> // default is 0.7; lowered here to improve system stability
  <value>0.1</value>
  </property>
</configuration>
 
 
yarn-site.xml configuration
<configuration>
  <property>
  <name>yarn.nodemanager.resource.memory-mb</name> // total physical memory available to the nodemanager
  <value>10240</value>
  </property>
  <property>
  <name>yarn.resourcemanager.address</name>
  // the address the ResourceManager exposes to clients; through it clients submit applications, kill applications, and so on
        <value>master1:8032</value>
        </property>
        <property>
                <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
                <value>95.0</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.address</name>
                <value>master1:8030</value>
        </property>
        <property>
                <name>yarn.resourcemanager.resource-tracker.address</name>
                <value>master1:8031</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master1:8033</value>
    </property>
        <property>
                <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
          <name>yarn.resourcemanager.webapp.address</name>
          <value>master1:8088</value>
        </property>
</configuration>
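
Once YARN is running, the ResourceManager web UI from the configuration above is reachable at master1:8088, and the registered NodeManagers can be checked from the command line; a quick sanity-check sketch:

    start-yarn.sh      # on master1
    yarn node -list    # lists the nodemanagers that have registered with the RM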
 
core-site.xml configuration
<configuration>
  <property>
  <name>hadoop.native.lib</name>
  <value>true</value>
  <description>Should native hadoop libraries, if present, be used</description>
  // enable the native library; it is used by default when present
  </property>
<!--
  <property>
  <name>fs.default.name</name>
  <value>hdfs://0.0.0.0:9000</value>     // url of the default filesystem
  </property>
-->
  <property>
  <name>hadoop.tmp.dir</name>
  <value>/mnt/vdc/hadoopstore/tmp</value>     // directory for hdfs temporary files
  </property>
  <property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value> // point hdfs at the nameservice mycluster (the two namenodes), i.e. the namenode high-availability configuration
  </property>
  <property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/mnt/vdc/hadoopstore/journal/data</value>
  </property>
  <property>
  <name>ha.zookeeper.quorum.mycluster</name>
  <value>master1:2181,master2:2181,slave1:2181</value>
  </property>
  <property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <value>*</value>
  </property>
  <property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>*</value>
  </property>
  <property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
  </property>
  <property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
  </property>
</configuration>
 
The first time you start the cluster you need to format the namenode: hadoop namenode -format
Check the daemons on each node with jps, and view the cluster status with:
hadoop dfsadmin -report
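
For the HA pair specifically, the state of each namenode and a manual switchover can be checked with hdfs haadmin; a short sketch, assuming the nn1/nn2 ids configured above:

    hdfs haadmin -getServiceState nn1     # prints active or standby
    hdfs haadmin -getServiceState nn2
    # manual failover from nn1 to nn2 (automatic failover is handled by the zkfc)
    hdfs haadmin -failover nn1 nn2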
 
ZooKeeper commands explained:
Configure the basic environment variables:
export ZOOKEEPER_HOME=/home/zookeeper-3.3.3
export PATH=$PATH:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf
ZooKeeper configuration file zoo.cfg:
tickTime=2000   // by default a heartbeat is sent every two seconds
dataDir=/disk1/zookeeper   // where the in-memory database snapshots are stored
dataLogDir=/disk2/zookeeper   // where the transaction logs are stored
clientPort=2181
initLimit=5         // connection timeout in heartbeats; 5 here means it gives up after 10s
syncLimit=2
server.1=zookeeper1:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888
 
ZooKeeper uses port 2181 for clients, port 2888 for connections from followers to the leader, and port 3888 for leader election.
Each server's myid file, placed in the dataDir configured above, contains 1, 2 or 3 respectively.
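
For example, the myid files for the three servers above could be created like this (assuming dataDir=/disk1/zookeeper from zoo.cfg):

    # run on zookeeper1; use "echo 2" on zookeeper2 and "echo 3" on zookeeper3
    echo 1 > /disk1/zookeeper/myid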
 
zkServer.sh start/stop/status   start / stop / show status
zkCli.sh -server ipaddress:2181   // connect to one of the zookeeper servers
Use ls /   to view the child nodes
get /xxxx   to view the data stored in a node
set / create / delete xxx   to set / create / delete the contents of a node
In practice, however, ZooKeeper is mostly accessed through its API.
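
Still, the command-line client is handy for inspection. A short zkCli.sh session sketch (the /test znode and its data are made up for the example; the /hadoop-ha path exists only after hdfs zkfc -formatZK has run):

    zkCli.sh -server master1:2181
    ls /                  # list the root znodes
    create /test "hello"  # create a znode holding the string "hello"
    get /test             # print the data and metadata of /test
    set /test "world"     # replace the data
    delete /test          # remove the znode
    ls /hadoop-ha         # the HA state for mycluster lives under here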

     
         
         
         