  HBase cluster installation and deployment
     
  Add Date: 2018-11-21
         
       
         
  Software Environment

OS: Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-32-generic x86_64)
Java: jdk1.7.0_75
Hadoop: hadoop-2.6.0
Hbase: hbase-1.0.0

Cluster machines:

IP HostName Master RegionServer
10.4.20.30 master yes no
10.4.20.31 slave1 no yes
10.4.20.32 slave2 no yes
Preparation
This guide assumes you have already installed Java and deployed a Hadoop cluster; if not, you can refer to the Spark on YARN Cluster Deployment Guide article.
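As a quick sanity check before proceeding, you can confirm that the required tools are on the PATH (a sketch; the version numbers in the comments are the ones this guide uses):

```shell
# Report which of the tools this guide depends on are available here.
for cmd in java hadoop jps; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "found: $cmd"
  else
    echo "missing: $cmd"
  fi
done
# With everything present, 'java -version' should report 1.7.0_75 and
# 'hadoop version' should report 2.6.0 to match this guide.
```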

Download and extract
HBase can be downloaded from the official download page; the stable binary release is recommended. I downloaded hbase-1.0.0-bin.tar.gz. Make sure the version you download is compatible with your existing Hadoop version (see the compatibility list) and with a supported JDK (HBase 1.0.x no longer supports JDK 6).

Extract the archive:

tar -zxvf hbase-1.0.0-bin.tar.gz
cd hbase-1.0.0

Configuring HBase
Edit conf/hbase-env.sh and set JAVA_HOME to your JDK path.

# The java implementation to use. Java 1.7+ required.
export JAVA_HOME=/home/spark/workspace/jdk1.7.0_75

Edit conf/hbase-site.xml:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave1,slave2</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/spark/workspace/zookeeper/data</value>
  </property>
</configuration>

The first property specifies the HDFS directory where HBase stores its data; it must be consistent with the fs.defaultFS setting in the Hadoop cluster's core-site.xml. The second property sets HBase's running mode; true means fully distributed. The third property lists the machines managed by ZooKeeper, usually an odd number of them. The fourth property is the path where ZooKeeper stores its data. Here I use the ZooKeeper instance bundled with HBase.
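To double-check that hbase.rootdir matches Hadoop, a small sketch like the following (the sed expression assumes the name/value layout shown above) extracts the value so you can compare it with fs.defaultFS from core-site.xml:

```shell
# Extract hbase.rootdir from hbase-site.xml; the hdfs://host:port prefix
# must match fs.defaultFS in Hadoop's core-site.xml.
rootdir=$(sed -n '/<name>hbase.rootdir<\/name>/{n;s/.*<value>\(.*\)<\/value>.*/\1/p;}' conf/hbase-site.xml)
echo "hbase.rootdir = $rootdir"
# On a running cluster, compare against: hdfs getconf -confKey fs.defaultFS
```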

Configure the region servers by adding the following lines to the conf/regionservers file:

slave1
slave2

The regionservers file lists all machines that run an HRegionServer. It is very similar to Hadoop's slaves file: each line gives the hostname of one machine. When HBase starts, every machine listed in this file is started, and likewise all of them are stopped on shutdown. With our configuration, a RegionServer will be started on slave1 and slave2.

Distribute the configured HBase directory to the slaves:

scp -r hbase-1.0.0 spark@slave1:~/workspace/
scp -r hbase-1.0.0 spark@slave2:~/workspace/
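The same copy can be written as a loop, which scales better if more region servers are added later (hostnames and target path are the ones used in this guide):

```shell
# Copy the configured HBase directory to every region server host.
for host in slave1 slave2; do
  scp -r hbase-1.0.0 "spark@${host}:~/workspace/"
done
```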

Raise the ulimit limits
HBase opens a large number of file handles and processes at the same time, exceeding the default Linux limits, which may cause errors like the following:

2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901

To fix this, edit /etc/security/limits.conf and add the following two lines, which raise the number of file handles and processes that can be opened. Replace spark with the username that runs HBase.

spark - nofile 32768
spark - nproc 32000

You also need to add this line to /etc/pam.d/common-session:

session required pam_limits.so

Otherwise, the settings in /etc/security/limits.conf will not take effect.

Finally, log out and log back in for the configuration to take effect. Use the ulimit -n -u command to check that the maximum numbers of open files and processes have changed. Remember to do this on every machine where HBase is installed.
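After logging back in, the effective limits for the session can be checked like this (a sketch; 32768 and 32000 are the values set above):

```shell
# Print the current session's limits; after re-login they should match
# the values written to /etc/security/limits.conf.
echo "open files:    $(ulimit -n)"   # expect 32768
echo "max processes: $(ulimit -u)"   # expect 32000
```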

Run HBase
Run on the master:

cd ~/workspace/hbase-1.0.0
bin / start-hbase.sh

Verify successful installation HBase
Run jps on the master; you should see an HMaster process. On each slave, jps should show two processes: HQuorumPeer and HRegionServer.

Open http://master:16010 in a browser to see the HBase Web UI.
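For a functional check beyond jps, you can run a small smoke test from the HBase shell (a sketch; it assumes you are in the HBase install directory on the master, and the table name smoke_test is arbitrary):

```shell
# Feed a short script to the HBase shell: create a table with one column
# family, write a cell, scan it back, then drop the table again.
bin/hbase shell <<'EOF'
create 'smoke_test', 'cf'
put 'smoke_test', 'row1', 'cf:a', 'value1'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'
exit
EOF
```

If the scan shows row1 with cf:a=value1, the master and region servers are cooperating correctly.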
     
         
       
         
  CopyRight 2002-2016 newfreesoft.com, All Rights Reserved.