Using IntelliJ IDEA to Import and Compile the Latest Spark Source Code
     
  Add Date : 2017-08-31      
         
         
         
After gaining some experience with Spark, you may want to follow the project's development and read its source code in detail. This article explains how to use IntelliJ IDEA to import the latest Spark source code from GitHub and compile it.

Preparation

First, install JDK 1.6+ and Scala on your system. After downloading the latest version of IntelliJ IDEA, install the Scala plugin (IDEA will suggest it the first time you open it); the installation steps are not covered here. At this point you should be able to run Scala from the command line (see the quick check after the list below). My environment is as follows:

1. Mac OS X (10.9.5)

2. JDK 1.7.0_71

3. Scala 2.10.4

4. IntelliJ IDEA 14
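
A quick way to confirm that the toolchain is on your PATH, run in a terminal; the version strings shown are from my machine and will differ on yours:

  # Verify that the JDK and Scala are installed and visible
  java -version     # expect 1.6 or later, e.g. java version "1.7.0_71"
  scala -version    # expect the installed Scala version, e.g. 2.10.4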

In addition, I recommend that you first use a pre-built Spark distribution: run Spark, get familiar with how it is used, and write a few Spark applications of your own. Only then should you read the source code, try modifying it, and compile it by hand.

Import the Spark project from GitHub

After opening IntelliJ IDEA, select VCS -> Check out from Version Control -> Git from the menu bar, fill in the Git Repository URL field with the address of the Spark project, and pick a suitable local path.

Click Clone in the dialog to start cloning the project from GitHub. Depending on your network speed, this takes roughly 3 to 10 minutes. A command-line alternative is shown below.
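
If you prefer the command line, the same clone can be done with git directly and the resulting directory opened in IDEA afterwards; the URL is the official Apache Spark repository on GitHub:

  # Clone the Spark repository (may take several minutes)
  git clone https://github.com/apache/spark.git
  cd spark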

Compile Spark

When the clone is complete, IntelliJ IDEA automatically notices that the project has a pom.xml file and prompts you to open it. Choose to open the pom.xml file directly, and the IDE will resolve the project's dependencies; how long this step takes depends on your network and system environment.

After this step completes, manually edit the pom.xml file in the Spark root directory and find the line that specifies the Java version (the java.version property). Depending on your environment, if you are using JDK 1.7 you may need to change its value to 1.7 (the default is 1.6).
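
A sketch of the relevant fragment of the root pom.xml after the change; the surrounding properties element contains many other entries, omitted here, that vary by Spark version:

  <properties>
    <!-- Java version used by the build; the default shipped value is 1.6 -->
    <java.version>1.7</java.version>
  </properties>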

Then open a terminal, change into the root directory of the spark project you just imported, and run:

sbt/sbt assembly

This command compiles Spark entirely with the default configuration. If you want to build against specific versions of related components, see the Building Spark page on the official site (http://spark.apache.org/docs/latest/building-spark.html) for all the commonly used build options. In my case the process completed without a VPN. To estimate how much compile time remains, you can open another terminal and watch the size of the spark project directory grow; building with the default configuration, my spark directory was about 2.0 GB after a successful compile.
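
For example, to build against a particular Hadoop version you can pass a system property to sbt; the property name follows the Building Spark documentation, the version number below is only illustrative, and du is just one way to watch the directory grow from a second terminal:

  # Build against a specific Hadoop version (illustrative value)
  sbt/sbt -Dhadoop.version=2.4.0 assembly

  # In another terminal, from the directory containing the checkout:
  du -sh spark    # current size of the project directory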

Conclusion

To test the result of the build, enter the spark/bin directory and run spark-shell from the command line; if it starts up normally, the compile succeeded (a quick smoke test is sketched below). If you modify the Spark source code afterwards, you can recompile with sbt, and the rebuild will not take nearly as long as the first compile. Questions and comments are welcome!
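
A minimal smoke test of the freshly built Spark, assuming you are in the project root; the sample expression typed at the scala> prompt is only illustrative:

  # Launch the Spark shell from the build you just produced
  cd bin
  ./spark-shell

  # At the scala> prompt, a quick sanity check:
  #   sc.parallelize(1 to 100).count()   // should print res0: Long = 100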
     
         
         
         