  Hadoop 2.5 Pseudo-Distributed Installation
     
  Add Date : 2018-11-21      
         
         
         
The latest Hadoop 2.5 release makes some changes to the installation directory, and installation has become a little simpler.

First, install the prerequisite tools:

  $ sudo apt-get install ssh
  $ sudo apt-get install rsync

Configure ssh:

  $ ssh localhost

If you cannot ssh to localhost without a passphrase, execute the following commands:

  $ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
  $ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
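As a quick check (not part of the original steps), ssh to localhost once more; it should now log you in without asking for a passphrase:

  $ ssh localhost
  $ exit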

Edit etc/hadoop/hadoop-env.sh to configure the runtime environment:

  # Set to the root of your Java installation
  export JAVA_HOME=/usr/java/latest

  # Assuming your installation directory is /usr/local/hadoop
  export HADOOP_PREFIX=/usr/local/hadoop
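To confirm these settings are picked up, you can run the hadoop command from the installation directory (an extra check, not in the original article):

  $ cd /usr/local/hadoop
  $ bin/hadoop version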

Configure the HDFS port and the number of data replicas

etc/hadoop/core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <!-- used by ClientDatanodeProtocol when calling getBlockLocalPathInfo -->
    <property>
        <name>dfs.block.local-path-access.user</name>
        <value>infomorrow</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/infomorrow/hadoop-tmp</value>
    </property>
</configuration>
etc/hadoop/hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

Configure MapReduce to use YARN

etc/hadoop/mapred-site.xml:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
etc/hadoop/yarn-site.xml:

The NodeManager loads a shuffle server at start-up. This shuffle server is actually a Jetty/Netty server that reduce tasks use to remotely copy the intermediate results produced by map tasks on each NodeManager.

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

Start the processes:

HDFS:

  $ bin/hdfs namenode -format      (first use only)
  $ sbin/start-dfs.sh

Open the monitoring page at http://localhost:50070/ to view the cluster.

Create folders on HDFS:

  $ bin/hdfs dfs -mkdir /user
  $ bin/hdfs dfs -mkdir /user/

View the folders created on HDFS:

  $ bin/hadoop fs -ls /

YARN:

  $ sbin/start-yarn.sh

Open the monitoring page at http://localhost:8088/ to view the jobs.

Shutdown:

  $ sbin/stop-dfs.sh
  $ sbin/stop-yarn.sh

To leave safe mode:

  $ bin/hadoop dfsadmin -safemode leave
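To verify the setup, jps should list the NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager daemons, and you can run one of the bundled MapReduce examples. A minimal smoke test, assuming the example jar shipped with this release (the jar name is an assumption; adjust the version suffix to your build):

  $ jps
  $ bin/hdfs dfs -put etc/hadoop /input
  $ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar grep /input /output 'dfs[a-z.]+'
  $ bin/hdfs dfs -cat /output/*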

To use Spark, just install Scala and Spark on the cluster nodes and add the following configuration to spark-env.sh:

export SCALA_HOME=/home/juxinli/scala-2.11.5
export JAVA_HOME=/usr/lib/jvm/java-8-sun
export HADOOP_HOME=/home/juxinli/hadoop-2.5.0
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop-2.5.0
export SPARK_JAR=/home/juxinli/spark-1.2.0-bin-hadoop2.4/lib/spark-assembly-1.2.0-hadoop2.4.0.jar
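With these variables set, a job can be submitted to YARN with spark-submit. A minimal sketch, assuming the SparkPi example jar bundled with the spark-1.2.0-bin-hadoop2.4 distribution (the example jar name and path are assumptions; adjust them to your layout):

  $ cd /home/juxinli/spark-1.2.0-bin-hadoop2.4
  $ bin/spark-submit --class org.apache.spark.examples.SparkPi \
      --master yarn-cluster \
      lib/spark-examples-1.2.0-hadoop2.4.0.jar 10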

Finally, add the hostname of each worker node to the slaves file, as in the example below.
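For a pseudo-distributed setup on a single machine this is just localhost; the commented entry below is only a hypothetical illustration of a multi-node hostname:

  # slaves -- one worker hostname per line
  localhost
  # node1.example.com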
     
         
         
         