This morning, starting Hadoop after powering on the virtual machine threw an exception; reformatting made things look normal again, cause unknown (to format, run bin/hdfs namenode -format in the hadoop directory). The exception:
java.net.ConnectException: Call From chenph-Ubuntu/127.0.1.1 to localhost:9000 failed on connection
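For reference, the format-and-restart sequence, assuming the stock Hadoop 2.x layout and running from the installation root:

    bin/hdfs namenode -format    # wipes HDFS metadata, use with care
    sbin/start-dfs.sh            # then bring HDFS back up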
Continuing from yesterday's result, the Eclipse plugin: configuring it according to the online example failed, the symptom being that Eclipse could not connect to Hadoop on the virtual machine, so I tested the following situations.
Checked the virtual machine's IP with ifconfig: 192.168.203.136. From the VM I can ping the Win7 IP, 192.168.101.120.
From Win7 I could not ping 192.168.203.136, because a network adapter created by the virtual machine software was disabled. After enabling it, running ipconfig /all in Win7 shows an adapter with IP 192.168.203.1; this puts Win7 and Ubuntu on the same network segment, so the two systems can now ping each other.
But Eclipse still could not connect to Hadoop remotely, so I kept looking for fixes and went to modify the hosts file on Win7. At first I could not find my hosts file at all; it turned out it was not hidden but marked as a protected system file, and removing the protection took care of that. I then added the corresponding entry, 192.168.203.136 localhost, as shown below.
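The entry itself is a single line in the standard Win7 location (mapping the VM's hostname instead of localhost would work the same way):

    # C:\Windows\System32\drivers\etc\hosts
    192.168.203.136  localhost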
Eclipse still didn't work. This time, running ifconfig on Ubuntu again, the IP had turned into 192.168.203.137. My head was spinning???
I unified the configuration on 192.168.203.137 and tested again; still nothing.
This time I went after the Hadoop configuration itself, replacing every occurrence of localhost with the machine's hostname; after restarting the services it still didn't work.
Because the IP itself kept becoming a problem (probably a DHCP auto-assignment issue; I'll go back later and try setting a fixed one), I used the machine name in the previous step instead of unifying everything on an IP again.
Yesterday afternoon I continued developing and testing between Eclipse and Hadoop, and it went very smoothly, even though the virtual machine's IP really did change yet again. Setting it manually in the virtual machine's network settings didn't work, and checking the parameters got me nowhere, but as long as it doesn't fall back to the old IP, that's good enough.
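For completeness: pinning a static IP on Ubuntu of that era goes through /etc/network/interfaces, then restarting networking for it to take effect. A sketch, where the interface name and the addresses are my assumptions based on the 192.168.203.x segment above:

    auto eth0
    iface eth0 inet static
        address 192.168.203.137
        netmask 255.255.255.0
        gateway 192.168.203.1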
Under Win7, using Eclipse 4.3.2 plus the Hadoop 2.4 plugin I had built the day before, the test I ran reported an exception:
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
According to write-ups online, you need to download two files, hadoop.dll and winutils.exe, and put them into the bin folder under your Hadoop directory on Windows. (The posts say replacing the existing bin outright works; for me the problem still occurred, possibly a version issue, since I'm using 2.4 and the two files above are built from 2.2.)
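For what it's worth, the layout those posts describe looks like the following; the install path C:\hadoop-2.4.0 is my example, not from the posts. If the files still are not picked up, pointing the JVM at the Hadoop root via the hadoop.home.dir system property is worth trying:

    C:\hadoop-2.4.0\bin\winutils.exe
    C:\hadoop-2.4.0\bin\hadoop.dll

    rem VM argument in the Eclipse run configuration:
    -Dhadoop.home.dir=C:\hadoop-2.4.0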
After putting them in place I reran the program (I run it as a plain Java Application), and the same error was reported. So I tracked down the source of the offending class, copied it into my project, debugged it, found line 571, and changed it to simply return true;
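The patched method in the copied org.apache.hadoop.io.nativeio.NativeIO (its Windows inner class) ends up looking roughly like this; a sketch of the idea, not the exact 2.4 source:

    public static boolean access(String path, AccessRight desiredAccess)
        throws IOException {
      // original body: return access0(path, desiredAccess.accessRight());
      return true; // skip the native check that throws UnsatisfiedLinkError
    }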
Reran the program once more, and this time it succeeded.
After the plugin is installed and configured, it displays the HDFS directory, which is handier than doing it in Ubuntu; you can add, delete, and view files from Eclipse.
For the error mentioned above, this is the class I modified directly.
One thing to note: if you leave out the log4j configuration file, you will miss a lot of useful error information. I didn't have this file in place at first, and that alone cost me at least 5 or 6 hours of detours.
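A minimal log4j.properties on the project classpath (for example in src/) is enough; a standard sketch:

    log4j.rootLogger=INFO, console
    log4j.appender.console=org.apache.log4j.ConsoleAppender
    log4j.appender.console.target=System.err
    log4j.appender.console.layout=org.apache.log4j.PatternLayout
    log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n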
The input and output file paths are defined in the code. Worth saying here: if your output folder already exists, the run will fail with an error.
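A common workaround is to delete the output path in the driver before submitting the job. A sketch; the class name and the paths are made-up examples:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ClearOutput {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // example output path, substitute your own
        Path output = new Path("hdfs://192.168.203.137:9000/user/chenph/output");
        FileSystem fs = FileSystem.get(output.toUri(), conf);
        if (fs.exists(output)) {
          fs.delete(output, true); // true = delete recursively
        }
      }
    }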
After a successful run you can see the job information and inspect the results in the files under the output folder.
I nearly broke down several times during the debugging process because the project did not include log4j; what finally brought light was that debugging from Eclipse inside Ubuntu succeeded (the Eclipse there uses a 2.2 plugin downloaded from the Internet).
Under Win7 I don't have Cygwin. I used the Eclipse 2.4 plugin I compiled myself, put the dll and exe downloaded from the Internet straight into Hadoop's bin directory, and also configured the Hadoop root directory in Eclipse; the material online counts as reference at best.
With the study above done, it's basically time to write a few small programs to try things out, but first some preparatory work:
1. Be clear about what I want Hadoop for
2. Get a general understanding of MapReduce
3. Figure out why, after Ubuntu restarts, starting Hadoop reports a connection exception
Answers:
1. Data mining and data analysis
2. map = split apart, reduce = merge
3. Restarting clears the /tmp folder, and by default the namenode data lives there, so core-site.xml needs the addition below (don't forget to create the folder; if you lack permission, create it as root and chmod it to 777):
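A sketch of the addition; the folder path /home/chenph/hadoop_tmp is my example, pick your own:

    <!-- core-site.xml, inside <configuration> -->
    <property>
      <name>hadoop.tmp.dir</name>
      <value>/home/chenph/hadoop_tmp</value>
    </property>

Create the folder first, as root if ordinary permissions won't do:

    sudo mkdir -p /home/chenph/hadoop_tmp
    sudo chmod 777 /home/chenph/hadoop_tmp

After changing hadoop.tmp.dir the namenode generally needs one more format (bin/hdfs namenode -format) so it initializes under the new location.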