Hadoop + ZooKeeper NameNode High Availability

Add Date: 2018-11-21
         
         
         
 

Hadoop + ZooKeeper installation and configuration:

Add the JAVA_HOME export to the environment variables.
In hadoop-env.sh set JAVA_HOME; in /etc/hosts configure the mapping between hostnames and IP addresses, adding the hostnames and IPs of the master and all slave nodes, as in the sketch below.
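For example, a minimal sketch of these two files (the JDK path and the IP addresses are only placeholders):

# hadoop-env.sh -- point Hadoop at the JDK (example path)
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64

# /etc/hosts -- hostname/IP mapping on every node (example addresses)
192.168.1.10   master1
192.168.1.11   master2
192.168.1.20   slave1
192.168.1.21   slave2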
 
Passwordless SSH key configuration:
ssh-keygen -t rsa
Two files are generated in ~/.ssh: id_rsa (the private key) and id_rsa.pub (the public key).
cat id_rsa.pub >> ~/.ssh/authorized_keys
scp authorized_keys user@ipaddress:/home/user/.ssh/
Change the permissions of authorized_keys to 600.
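Put together, a minimal sketch of the key setup (user name and address are placeholders):

# generate a key pair (press Enter to accept the defaults)
ssh-keygen -t rsa
# authorize the key locally
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# distribute authorized_keys to each of the other nodes
scp ~/.ssh/authorized_keys user@ipaddress:/home/user/.ssh/
# verify that login no longer asks for a password
ssh user@ipaddress hostname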
 
// NameNode high availability is in fact achieved through a JournalNode cluster or NFS: the active and standby NameNodes share a common edits directory, and the standby machine stays synchronized with the active NameNode. Automatic NameNode failover is generally implemented with a ZooKeeper cluster.
 
NameNode high availability configuration:
In core-site.xml, set fs.defaultFS to hdfs://mycluster
In hdfs-site.xml:
        set dfs.nameservices to mycluster
        set dfs.ha.namenodes.mycluster to nn1,nn2
        set dfs.namenode.rpc-address.mycluster.nn1 to hostname1:8020
        set dfs.namenode.rpc-address.mycluster.nn2 to hostname2:8020
        set dfs.namenode.http-address.mycluster.nn1 to hostname1:50070   // web UI address of the first NameNode
        set dfs.namenode.http-address.mycluster.nn2 to hostname2:50070
        set dfs.namenode.shared.edits.dir to the shared storage location (the JournalNodes, all on port 8485)
        set dfs.client.failover.proxy.provider.mycluster to org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider   // the Java class the Hadoop client uses to contact the NameNodes and determine which one is active
        set dfs.ha.fencing.methods to sshfence   // use ssh during a failover
// at any time only one NameNode may be active; this setting uses ssh to connect to the NameNode host and kill the NameNode that is still in the active state
 
Here is the full Hadoop + ZooKeeper configuration:
hdfs-site.xml configuration
<configuration>
  <property>
  <name>dfs.replication</name>
  <value>3</value>                       // number of block replicas, 3 here
  </property>
  <property>
  <name>heartbeat.recheckinterval</name>     // DataNode heartbeat interval, 10s here
  <value>10</value>
  </property>
  <property>
  <name>dfs.name.dir</name>
  <value>file:/mnt/vdc/hadoopstore/hdfs/name</value>   // directory where the HDFS metadata is stored; if set to multiple directories, multiple backups of the metadata are kept
  </property>
  <property>
  <name>dfs.data.dir</name>
  <value>file:/mnt/vdc/hadoopstore/hdfs/data</value>   // directory where HDFS data blocks are stored; it can be placed on a different partition
  </property>
  <property>
  <name>dfs.webhdfs.enabled</name>     // enable access to HDFS over the web (WebHDFS)
  <value>true</value>
  </property>
  <property>
  <name>dfs.nameservices</name>     // define the nameservice ID
  <value>mycluster</value>
  </property>
  <property>
  <name>dfs.ha.namenodes.mycluster</name>     // the two NameNode IDs under this nameservice, nn1 and nn2
  <value>nn1,nn2</value>
  </property>
  <property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>   // RPC address of the first NameNode, port 8020
  <value>master1:8020</value>
  </property>
  <property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>   // RPC address of the second NameNode, port 8020
  <value>master2:8020</value>
  </property>
  <property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>master1:50070</value>   // HTTP port of the first NameNode
  </property>
  <property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>master2:50070</value>   // HTTP port of the second NameNode
  </property>
  <property>
  <name>dfs.namenode.shared.edits.dir</name>   <value>qjournal://master1:8485;master2:8485;slave1:8485;slave2:8485;slave3:8485;slave4:8485;slave5:8485;slave6:8485;slave7:8485;slave8:8485;slave9:8485;slave10:8485/mycluster</value>
  </property>   // shared edits (JournalNode) locations
// client failover configuration
  <property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>   // the class that implements automatic failover on the client side
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>   // use ssh to fence the old active NameNode during failover
  </property>

  <property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/home/kduser/.ssh/id_rsa</value>   // location of the ssh private key
  </property>
  <property>
  <name>dfs.ha.automatic-failover.enabled</name>   // whether to fail over automatically when a fault occurs
  <value>true</value>
  </property>
  // the NameNode ID of this node, configured as nn1
  <property>
  <name>dfs.ha.namenode.id</name>
  <value>nn1</value>
  </property>
</configuration>
 
 
mapred-site.xml configuration file
<configuration>
  <property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>   // in Hadoop 2.x and later versions the framework is yarn
  </property>
  <property>
  <name>mapreduce.reduce.shuffle.input.buffer.percent</name>   // the default is 0.70; lowered here to improve system stability
  <value>0.1</value>
  </property>
</configuration>
 
 
yarn-site.xml configuration
<configuration>
  <property>
  <name>yarn.nodemanager.resource.memory-mb</name>   // total physical memory (MB) available to the NodeManager
  <value>10240</value>
  </property>
  <property>
  <name>yarn.resourcemanager.address</name>
// address the ResourceManager exposes to clients; clients use it to submit applications, kill applications, etc.
        <value>master1:8032</value>
        </property>
        <property>
                <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
                <value>95.0</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.address</name>
                <value>master1:8030</value>
        </property>
        <property>
                <name>yarn.resourcemanager.resource-tracker.address</name>
                <value>master1:8031</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.resourcemanager.admin.address</name>
                <value>master1:8033</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
                <name>yarn.resourcemanager.webapp.address</name>
                <value>master1:8088</value>
        </property>
</configuration>
 
core-site.xml configuration
<configuration>
  <property>
  <name>hadoop.native.lib</name>
  <value>true</value>
  <description>Should native hadoop libraries, if present, be used</description>
// use the native libraries; they are the default
  </property>
<!--
  <property>
  <name>fs.default.name</name>
  <value>hdfs://0.0.0.0:9000</value>     // default filesystem url
  </property>
-->
  <property>
  <name>hadoop.tmp.dir</name>
  <value>/mnt/vdc/hadoopstore/tmp</value>     // directory for HDFS temporary files
  </property>
  <property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>   // specify that the HDFS nameservice is mycluster (the two NameNodes), matching the NameNode high-availability configuration
  </property>
  <property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/mnt/vdc/hadoopstore/journal/data</value>
  </property>
  <property>
  <name>ha.zookeeper.quorum.mycluster</name>
  <value>master1:2181,master2:2181,slave1:2181</value>
  </property>
  <property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <value>*</value>
  </property>
  <property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>*</value>
  </property>
  <property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
  </property>
  <property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
  </property>
</configuration>
 
The first time you start the cluster you need to format the NameNode: hadoop namenode -format
Check the cluster status with jps, or with:
hadoop dfsadmin -report
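For reference, a sketch of a typical first-start sequence for this HA setup, assuming Hadoop 2.x scripts and the node and NameNode names used above:

# 1. start the JournalNodes on every JournalNode host
hadoop-daemon.sh start journalnode
# 2. format and start the first NameNode (on master1)
hdfs namenode -format
hadoop-daemon.sh start namenode
# 3. copy the metadata to the second NameNode (on master2)
hdfs namenode -bootstrapStandby
# 4. initialize the failover state in ZooKeeper (on master1)
hdfs zkfc -formatZK
# 5. start HDFS and check the cluster
start-dfs.sh
jps
hdfs dfsadmin -report
hdfs haadmin -getServiceState nn1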
 
ZooKeeper commands explained:
Configure the basic environment variables:
export ZOOKEEPER_HOME=/home/zookeeper-3.3.3
export PATH=$PATH:$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf
ZooKeeper configuration file zoo.cfg:
tickTime=2000   // a heartbeat is sent every 2 seconds by default
dataDir=/disk1/zookeeper   // where the in-memory database snapshots are stored
dataLogDir=/disk2/zookeeper   // where the transaction logs are stored
clientPort=2181
initLimit=5         // connection timeout in ticks; 5 here means the connection is dropped after 10s
syncLimit=2
server.1=zookeeper1:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888
 
ZooKeeper port 2181 is used by clients, port 2888 is used by followers to connect to the leader, and port 3888 is used for leader election.
The myid file in dataDir contains 1, 2 or 3 on each server, matching the server.N entries in the configuration file, as in the sketch below.
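For example, a minimal sketch of creating the myid files, assuming the dataDir above and the three servers listed in zoo.cfg:

# on zookeeper1
echo 1 > /disk1/zookeeper/myid
# on zookeeper2
echo 2 > /disk1/zookeeper/myid
# on zookeeper3
echo 3 > /disk1/zookeeper/myid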
 
zkServer.sh start / stop / status   // start / stop / check status
zkCli.sh -server ipaddress:2181   // connect to a ZooKeeper server
ls /   // list the child nodes
get /xxxx   // view the data stored in a node
set / create / delete xxx   // set / create / delete the contents of a node
In practice, however, ZooKeeper is mostly accessed through its API.
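For illustration, a minimal zkCli.sh session using the commands above (the znode name and data are placeholders):

zkCli.sh -server master1:2181   // connect to one of the ZooKeeper servers
create /mytest "hello"   // create a node with some data
ls /   // list the children of the root node
get /mytest   // read the data stored in /mytest
set /mytest "world"   // update the data
delete /mytest   // remove the node
quit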

     
         
         
         