Building a Spark Development Environment on Ubuntu
     
  Add Date : 2018-11-21      
         
         
         
The following steps configure Spark on Ubuntu for Python application development.

Basic environment configuration (64-bit Ubuntu)

Install the JDK: download jdk-8u45-linux-x64.tar.gz and extract it to /opt/jdk1.8.0_45

Download: http://www.oracle.com/technetwork/java/javase/downloads/index.html

Install Scala: download scala-2.11.6.tgz and extract it to /opt/scala-2.11.6

Download: http://www.scala-lang.org/

Install Spark: download spark-1.3.1-bin-hadoop2.6.tgz and extract it to /opt/spark-hadoop

Download: http://spark.apache.org/downloads.html
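The three extraction steps above can be sketched as a single loop; this assumes the tarballs were saved to the current directory and that the filenames match the versions listed above:

```shell
# Sketch: extract each downloaded archive into /opt
for archive in jdk-8u45-linux-x64.tar.gz scala-2.11.6.tgz spark-1.3.1-bin-hadoop2.6.tgz; do
  if [ -f "$archive" ]; then
    sudo tar -xzf "$archive" -C /opt
  else
    echo "not found: $archive"
  fi
done
# The extracted Spark directory is then renamed to match the paths used below:
# sudo mv /opt/spark-1.3.1-bin-hadoop2.6 /opt/spark-hadoop
```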



Configure the environment variables: edit /etc/profile by running the following command:

python@ubuntu:~$ sudo gedit /etc/profile

At the end of the file, add:

    # Set the JDK environment variables
    export JAVA_HOME=/opt/jdk1.8.0_45
    export JRE_HOME=${JAVA_HOME}/jre
    export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
    export PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin:$PATH

    # Set the Scala environment variables
    export SCALA_HOME=/opt/scala-2.11.6
    export PATH=${SCALA_HOME}/bin:$PATH

    # Set the Spark environment variable
    export SPARK_HOME=/opt/spark-hadoop/

    # Add Spark's pyspark module to the Python search path
    export PYTHONPATH=/opt/spark-hadoop/python

Restart the computer to make the /etc/profile changes take effect permanently. To apply them immediately in the current session, open a terminal and run source /etc/profile.

Testing the installation

Open a terminal and switch to the root directory of the Spark installation.


Run ./bin/spark-shell to open a Scala shell connected to Spark.


If no startup errors appear and the scala> prompt is shown, the shell started successfully.

Run ./bin/pyspark to open a Python shell connected to Spark.



If it starts without errors, as above, the launch succeeded.

Accessing Spark through a browser shows the following page:

   

This confirms that Spark is available.

Developing a Spark application in Python

The PYTHONPATH set earlier adds the pyspark module to Python's search path.
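Exporting PYTHONPATH before launching the interpreter is equivalent to appending the directory to sys.path at runtime; a minimal sketch, assuming the /opt/spark-hadoop install location used above:

```python
import sys

# Install location assumed earlier in this guide
spark_python = "/opt/spark-hadoop/python"

# Exporting PYTHONPATH=/opt/spark-hadoop/python before starting Python
# has the same effect as appending the directory to sys.path here
if spark_python not in sys.path:
    sys.path.append(spark_python)

print(spark_python in sys.path)  # True
```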

In the Spark installation directory, open the python/build folder, find the py4j folder, and copy it into the python directory, as shown:


Open a terminal and run python (version 2.7.6 here); note that this version of Spark does not support Python 3.


Run import pyspark; if no error is raised, the setup so far is complete.


Finally, create a new project in PyCharm and run the test code highlighted in the screenshot to verify the setup.
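The highlighted test code itself is not reproduced in the text; as a sketch, a smoke-test script of roughly this shape could be run from PyCharm. It falls back to plain Python when pyspark is not importable, so the arithmetic can be checked either way; the app name "SmokeTest" is an arbitrary choice:

```python
# Minimal smoke test: sum 0..9 with Spark if pyspark is importable
# (requires the PYTHONPATH setup above), otherwise with plain Python.
try:
    from pyspark import SparkContext

    sc = SparkContext("local", "SmokeTest")
    total = sc.parallelize(range(10)).reduce(lambda a, b: a + b)
    sc.stop()
except ImportError:
    # pyspark not on the search path yet; equivalent pure-Python computation
    total = sum(range(10))

print(total)  # 45
```

Either branch prints 45; if the pyspark branch runs without errors, the whole environment is working end to end.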
     
         
         
         
           
     
  CopyRight 2002-2022 newfreesoft.com, All Rights Reserved.