  How to install Hadoop on CentOS7
     
  Add Date : 2018-11-21      
         
         
         
Hadoop is a distributed computing framework that lets users develop distributed applications without needing to know the underlying details of how the work is distributed.

Hadoop has two core components: HDFS and MapReduce. HDFS is responsible for storage, and MapReduce is responsible for computation.

The following focuses on how to install Hadoop.

Installing Hadoop itself is actually not much trouble. The main prerequisites are the two items below; once they are in place, configuring Hadoop according to the official documentation is straightforward.

1. A Java runtime environment (the Oracle/Sun release is recommended)

2. Passwordless SSH public key authentication

Once the environment above is ready, the rest is just Hadoop configuration. That part differs between versions, so refer to the official documentation for the details.

Environment

VM: VMware 10.0.1 build-1379776

Operating System: CentOS 7, 64-bit

Install the Java environment

Download: http://www.Oracle.com/technetwork/cn/java/javase/downloads/jdk8-downloads-2133151-zhs.html

Select the download package that matches your operating system. If your system supports RPM, download the RPM directly, or install it straight from the RPM URL:

rpm -ivh http://download.oracle.com/otn-pub/java/jdk/8u20-b26/jdk-8u20-linux-x64.rpm

The JDK is updated continually, so to install the latest version, visit the official Oracle site yourself to get the current RPM download URL.
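
The RPM installs the JDK but does not set JAVA_HOME, which Hadoop needs later. A minimal sketch of verifying the installation and exporting JAVA_HOME follows; /usr/java/default is the symlink the Oracle JDK RPM normally creates, so adjust the path if your package installs elsewhere.

java -version

# Point JAVA_HOME at the JDK for all users (path is an assumption; check with: ls /usr/java)
echo 'export JAVA_HOME=/usr/java/default' > /etc/profile.d/java.sh
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> /etc/profile.d/java.sh
source /etc/profile.d/java.sh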

Configure passwordless SSH public key authentication

CentOS ships with openssh-server, openssh-clients, and rsync by default; if your system does not have them, install them yourself.
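
For example, on CentOS 7 the three packages can be installed from the standard repositories with yum, and sshd managed through systemd; a short sketch:

yum install -y openssh-server openssh-clients rsync

# Make sure sshd is running and starts on boot
systemctl enable sshd.service
systemctl start sshd.service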

Create a regular account

Create a hadoop account (the name is up to you) on all machines, and set its password to hadoop uniformly everywhere:

useradd -d /home/hadoop -s /usr/bin/bash -g wheel hadoop

passwd hadoop

SSH configuration

vi /etc/ssh/sshd_config

Find the following three configuration items and set them as shown below. If an item is commented out, just remove the leading # to uncomment it.

RSAAuthentication yes

PubkeyAuthentication yes

# The default is to check both .ssh/authorized_keys and .ssh/authorized_keys2

# but this is overridden so installations will only check .ssh/authorized_keys

AuthorizedKeysFile .ssh/authorized_keys

.ssh/authorized_keys is the path where authorized public keys are stored.
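
After saving the changes, restart sshd so they take effect; on CentOS 7 the service is managed by systemd:

systemctl restart sshd.service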

Public key generation

Log in with the hadoop account.

cd ~

ssh-keygen -t rsa -P ''

Copy the resulting ~/.ssh/id_rsa.pub file into ~/.ssh/authorized_keys:

cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
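
Note that cp is only safe for the very first key; if authorized_keys already contains entries from other hosts, append to it instead:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys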

Then copy the .ssh directory to the other machines with scp. The lazy approach is to give every machine the same key pair, so they all share one public key:

scp ~/.ssh/* hadoop@slave1:~/.ssh/

Also make sure that the permissions on ~/.ssh/id_rsa are 600, so that other users cannot access it; see the commands below.
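
A short sketch of tightening the permissions and testing that passwordless login works; slave1 is the example hostname used above:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa ~/.ssh/authorized_keys

# Should log in without prompting for a password
ssh hadoop@slave1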

Hadoop installation

Refer to the official configuration documentation for the details.
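
For reference, a minimal single-node sketch of what that setup looks like, assuming a Hadoop 2.x release; the version number, mirror URL, and install path below are only examples, so substitute the ones from the official site:

# Download and unpack a Hadoop 2.x release (version and mirror are examples)
wget http://archive.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
tar -xzf hadoop-2.7.3.tar.gz -C /home/hadoop
cd /home/hadoop/hadoop-2.7.3

# Tell Hadoop where the JDK lives
echo "export JAVA_HOME=/usr/java/default" >> etc/hadoop/hadoop-env.sh

# etc/hadoop/core-site.xml: set fs.defaultFS, e.g. hdfs://localhost:9000
# etc/hadoop/hdfs-site.xml: set dfs.replication, e.g. 1 for a single node

# Format HDFS and bring it up
bin/hdfs namenode -format
sbin/start-dfs.sh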
     
         
         
         
  More:      
 
- Keepalived + Nginx Installation and Configuration (Server)
- Java deserialization test (Programming)
- Hadoop 2.0 Detailed Configuration Tutorial (Server)
- Linux Apache server security (Linux)
- How to import JNI resulting .so libraries in Android Studio (Programming)
- Linux Routine Task Scheduler (Linux)
- Linux console password solution (Programming)
- shellinabox: one uses AJAX Web-based terminal emulator (Linux)
- RedHat virtual machine to install VMware Tools (Linux)
- Install Ruby on Rails in Ubuntu 15.04 in (Linux)
- Why is the ibdata1 file growing in MySQL? (Database)
- Redis is installed in Ubuntu 14.04 (Database)
- SQL in the specific internal Oracle process (Database)
- Squid proxy server configuration under Linux (Server)
- By way of a binary installation innobackupex (Database)
- SUSE Linux firewall configuration notes (Linux)
- DM9000 timing settings (Programming)
- Using Ruby to build a simple HTTP service and sass environment (Server)
- Win7 + Ubuntu Kylin + CentOS 6.5 installed three systems (Linux)
- Can not remember how to solve the problem under Ubuntu brightness setting (Linux)
     
           
     
  CopyRight 2002-2022 newfreesoft.com, All Rights Reserved.