Hadoop 2.7.1 Installation and Configuration under RedHat Linux 6.5

Add Date : 2018-11-21
1. Set up the Linux environment

  The prepared environment is a VM running RedHat Linux 6.5 64-bit.
    Set a fixed IP:
              vim /etc/sysconfig/network-scripts/ifcfg-eth0

              Set the IP address to 192.168.38.128

  Modify the host name: vim /etc/hosts

              Set the host name to itbuilder1 (see the sketch below)
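
  A minimal sketch of the two files, assuming the single network interface is eth0; the netmask and gateway values are placeholders to adjust for your own VM network:

      # /etc/sysconfig/network-scripts/ifcfg-eth0
      DEVICE=eth0
      ONBOOT=yes
      BOOTPROTO=static
      IPADDR=192.168.38.128
      NETMASK=255.255.255.0      # assumption: a /24 network
      GATEWAY=192.168.38.2       # assumption: replace with your gateway

      # /etc/hosts -- map the fixed IP to the host name used by Hadoop
      192.168.38.128   itbuilder1

  Restart networking afterwards (service network restart) so the static IP takes effect.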

2. Install the JDK

    Configure the JDK environment variables (a sketch of the installation follows).
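
    A minimal sketch, assuming the Oracle JDK 8 RPM has been downloaded; the file name jdk-8u20-linux-x64.rpm is illustrative, chosen to match the /usr/java/jdk1.8.0_20 path used later:

      # Install the JDK RPM; it unpacks under /usr/java/jdk1.8.0_20
      rpm -ivh jdk-8u20-linux-x64.rpm

      # Verify the installation
      /usr/java/jdk1.8.0_20/bin/java -version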

3. Install the Hadoop environment

    Download version 2.7.1 of the Hadoop core package from the official Apache website.

    Address: http://archive.apache.org/dist/hadoop/core/stable2/hadoop-2.7.1.tar.gz

    3.1 Unpack the package to the specified directory

      First, create a directory: mkdir /usr/local/hadoop

        Extract the files to the /usr/local/hadoop directory: tar -zxvf hadoop-2.7.1.tar.gz -C /usr/local/hadoop
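
      Collected into one sequence, the download and extraction steps look roughly like this (assuming wget is installed and the commands are run as root):

        # Download the package, create the target directory, and extract into it
        wget http://archive.apache.org/dist/hadoop/core/stable2/hadoop-2.7.1.tar.gz
        mkdir -p /usr/local/hadoop
        tar -zxvf hadoop-2.7.1.tar.gz -C /usr/local/hadoop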

    3.2 Modify the configuration files

          Hadoop 2.7.1 requires five configuration files to be modified, as follows:

            1. hadoop-env.sh

            2. core-site.xml

            3. hdfs-site.xml

            4. mapred-site.xml (created from mapred-site.xml.template)

            5. yarn-site.xml

        These five files are under etc/hadoop; the specific directory is: /usr/local/hadoop/hadoop-2.7.1/etc/hadoop/

      3.2.1 Modify the environment variables (hadoop-env.sh)

            Open the hadoop-env.sh file with the vim command.

            At the line that sets JAVA_HOME, point it to the JDK root directory:

  export JAVA_HOME=/usr/java/jdk1.8.0_20

      3.2.2 Configure core-site.xml: specify the HDFS NameNode address and the temporary file directory

          <configuration>
            <!-- Specify the address of the HDFS master (NameNode) -->
            <property>
              <name>fs.defaultFS</name>
              <value>hdfs://itbuilder1:9000</value>
            </property>
            <!-- Specify the directory where files generated by Hadoop at runtime are stored -->
            <property>
              <name>hadoop.tmp.dir</name>
              <value>/usr/local/hadoop/hadoop-2.7.1/tmp</value>
            </property>
          </configuration>

      3.2.3 hdfs-site.xml (specify the number of replicas)

        <!-- Specify the number of replicas HDFS keeps for each block -->
        <configuration>
          <property>
            <name>dfs.replication</name>
            <value>1</value>
          </property>
        </configuration>

        3.2.4 mapred-site.xml (created from mapred-site.xml.template, see the commands below): tell Hadoop that MapReduce runs on YARN

          <configuration>
            <property>
              <name>mapreduce.framework.name</name>
              <value>yarn</value>
            </property>
          </configuration>
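
          mapred-site.xml does not exist by default; as noted in the file list above, it is created from the template that ships with Hadoop before editing it:

            # Copy the bundled template to create the editable configuration file
            cd /usr/local/hadoop/hadoop-2.7.1/etc/hadoop/
            cp mapred-site.xml.template mapred-site.xml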

        3.2.5 yarn-site.xml

            <configuration>
              <!-- Tell the NodeManager that the auxiliary service used for fetching data is mapreduce_shuffle -->
              <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
              </property>

              <!-- Specify the address of the YARN master (ResourceManager) -->
              <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>itbuilder1</value>
              </property>
            </configuration>

4. Add Hadoop to the environment variables

vim /etc/profile

export JAVA_HOME=/usr/java/jdk1.8.0_20
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.7.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin

# Refresh /etc/profile
 source /etc/profile
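
A quick check, not part of the original steps: after sourcing /etc/profile, both binaries should resolve from the PATH and print their versions.

 # Confirm the environment variables took effect
 hadoop version
 java -version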

5. Initialize (format) the file system (HDFS)
    #hadoop namenode -format (obsolete)
    hdfs namenode -format (current; this takes a while)

6. Start Hadoop (HDFS and YARN)
Instead of ./start-all.sh (deprecated, and it prompts repeatedly for the Linux password unless passwordless SSH is configured; see the sketch below), use the following two commands from the sbin directory:
./start-dfs.sh
./start-yarn.sh
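
To avoid the repeated password prompts mentioned above, passwordless SSH to this host can be set up before starting the daemons. A minimal sketch, assuming the daemons are started as root:

# Generate an SSH key pair (empty passphrase) and authorize it for logins to this host
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Verify: this should log in without asking for a password
ssh itbuilder1 exit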

Use the jps command to view the currently running processes:

[root@abctest ~]# jps
3461 ResourceManager
3142 DataNode
3751 NodeManager
3016 NameNode
5034 Jps
3307 SecondaryNameNode

Access the management interfaces:
http://192.168.38.128:50070 (HDFS management interface)
http://192.168.38.128:8088 (MR/YARN management interface)

If both interfaces open, the installation is successful.
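
As an extra check beyond the web interfaces, a small HDFS smoke test can confirm that the cluster accepts writes and reads (optional, not part of the original steps):

# Create a directory in HDFS, upload a local file, and list it back
hdfs dfs -mkdir -p /test
hdfs dfs -put /etc/hosts /test/
hdfs dfs -ls /test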
     
         
         
         