Apache Spark 1.1.0 Deployment and Development Environment Setup

Add Date : 2018-11-21

Spark is a parallel computing framework from Apache that builds on the Hadoop Distributed File System (HDFS). Unlike MapReduce, Spark is not limited to writing map and reduce functions: it provides a more powerful in-memory computing model that lets users load data into cluster memory and query it repeatedly with low latency, which makes it well suited to machine learning algorithms. This article describes how to deploy Apache Spark 1.1.0 and set up a development environment for it.
0. Preparation

For learning purposes, this article deploys Spark in a virtual machine running under VMware Workstation. Inside the virtual machine you need to install the following software:

Ubuntu 14.04.1 LTS 64-bit desktop version
hadoop-2.4.0.tar.gz
jdk-7u67-linux-x64.tar.gz
scala-2.10.4.tgz
spark-1.1.0-bin-hadoop2.4.tgz
For Spark development this article uses the Windows 7 platform, with IntelliJ IDEA as the IDE. On Windows you need to install the following software:

IntelliJ IDEA 13.1.4 Community Edition
apache-maven-3.2.3-bin.zip (installation is straightforward and left to the reader)
1. Install JDK

Unpack the JDK archive into the /usr/lib directory:

sudo cp jdk-7u67-linux-x64.tar.gz /usr/lib
cd /usr/lib
sudo tar -xvzf jdk-7u67-linux-x64.tar.gz
sudo gedit /etc/profile

Add the following environment variables to the end of /etc/profile:

export JAVA_HOME=/usr/lib/jdk1.7.0_67
export JRE_HOME=/usr/lib/jdk1.7.0_67/jre
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
Save the file and reload /etc/profile:

source /etc/profile

Test whether the JDK was installed successfully:

java -version
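If the PATH change took effect, `java -version` should report the 1.7 JDK installed above. A small hedged check (a sketch, not part of the original article) that inspects only the first line of the version banner:

```shell
# Check that the JDK found on the PATH reports the expected major version.
# "1.7" matches the JDK installed above; adjust if you use a different one.
ver=$(java -version 2>&1 | head -n1)
case "$ver" in
  *1.7*) echo "JDK 7 detected" ;;
  *)     echo "unexpected or missing JDK: $ver" ;;
esac
```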



 

2. Install and configure SSH

sudo apt-get update
sudo apt-get install openssh-server
sudo /etc/init.d/ssh start
Generate a key pair and add the public key to the authorized keys:

ssh-keygen -t rsa -P ""
cd /home/hduser/.ssh
cat id_rsa.pub >> authorized_keys
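sshd refuses keys whose files are too permissive, so if the passwordless login below still prompts for a password, tightening the permissions is the usual fix. A hedged sketch (assumes the same user's home directory as above):

```shell
# Ensure .ssh and authorized_keys have the permissions sshd requires:
# the directory must not be readable by others, nor the key file.
# mkdir -p and touch make the snippet safe to re-run.
mkdir -p ~/.ssh
chmod 700 ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```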
Log in via ssh:

ssh localhost


 

3. Install hadoop2.4.0

Install hadoop 2.4.0 in pseudo-distributed mode. Unpack hadoop-2.4.0 into the /usr/local directory:

sudo cp hadoop-2.4.0.tar.gz /usr/local/
cd /usr/local
sudo tar -xzvf hadoop-2.4.0.tar.gz
Add the following environment variables to the end of /etc/profile:

export HADOOP_HOME=/usr/local/hadoop-2.4.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
Save the file and reload /etc/profile:

source /etc/profile

Set the JDK path in the hadoop-env.sh and yarn-env.sh files located in /usr/local/hadoop-2.4.0/etc/hadoop:

cd /usr/local/hadoop-2.4.0/etc/hadoop
sudo gedit hadoop-env.sh
sudo gedit yarn-env.sh
hadoop-env.sh:

#The java implementation to use.
export JAVA_HOME=/usr/lib/jdk1.7.0_67

yarn-env.sh:

#some Java parameters
#export JAVA_HOME=/home/y/libexec/jdk1.6.0/
export JAVA_HOME=/usr/lib/jdk1.7.0_67


Modify core-site.xml:

sudo gedit core-site.xml
Add the following between <configuration> and </configuration>:
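The original article's configuration snippet is cut off here. As an assumption, the standard pseudo-distributed core-site.xml for Hadoop 2.x is a single fs.defaultFS property pointing HDFS at localhost; the port 9000 and the tmp directory below are the conventional choices, not values confirmed by the source:

```xml
<!-- Typical pseudo-distributed core-site.xml body (assumption: the
     conventional localhost:9000 setting; adjust to your layout) -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop-2.4.0/tmp</value>
</property>
```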