  Hadoop file upload error, solved
     
  Add Date : 2018-11-21      
         
         
         
  I first suspected the firewall, or a DataNode that had failed to start, but both checked out fine. In the end I found the solution on a foreign site.
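The first-pass checks mentioned above (is the DataNode process alive, is a firewall in the way) can be sketched roughly as below. The helper names, the host IP, and the Hadoop 2.x default DataNode transfer port 50010 are assumptions for illustration, not details from the original post:

```shell
# Sketch of the usual first checks on a suspected DataNode problem.

# Is a DataNode JVM running on this node? jps lists Java processes by name.
check_datanode() {
  jps 2>/dev/null | grep -q DataNode
}

# Can we open a TCP connection to a DataNode transfer port (2.x default: 50010)?
# Uses bash's /dev/tcp pseudo-device; $1 = host, $2 = port.
check_datanode_port() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

check_datanode && echo "DataNode process found" || echo "DataNode process NOT found"
check_datanode_port 10.25.5.101 50010 && echo "port reachable" || echo "port unreachable"
```

In this case both checks passed, which is what made the error so confusing.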

  The fix was to modify /etc/security/limits.conf to raise the open-file limit; after that, the upload succeeded. Hadoop's error output gives no hint of this cause; nothing in the log below points at a file-descriptor limit, so this is the kind of fix you only pick up through slow accumulation of experience. Add the following two lines to /etc/security/limits.conf:

* soft nofile 65536

* hard nofile 65536
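limits.conf changes take effect only for new login sessions (applied via pam_limits), so after editing the file, log back in as the user that runs the HDFS daemons and confirm the new values took hold. A minimal check:

```shell
# Read the soft and hard open-file limits for the current shell session.
# After a fresh login with the limits.conf entries above, both should
# report 65536 (on the original poster's setup; values vary by distro).
soft_limit=$(ulimit -Sn)
hard_limit=$(ulimit -Hn)
echo "soft nofile limit: $soft_limit"
echo "hard nofile limit: $hard_limit"
```

Note that already-running DataNode processes keep the limit they started with; restart them from a new session so they inherit the raised value.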

The failing command:

hadoop dfs -put 1.txt /input/

The error log was as follows:

15/06/24 14:45:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/06/24 14:45:41 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2103)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1380)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1302)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536)
15/06/24 14:45:41 INFO hdfs.DFSClient: Abandoning BP-651950990-127.0.0.1-1435153229562:blk_1073741839_1015
15/06/24 14:45:41 INFO hdfs.DFSClient: Excluding datanode 10.25.5.102:50010
15/06/24 14:45:41 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2103)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1380)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1302)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536)
15/06/24 14:45:41 INFO hdfs.DFSClient: Abandoning BP-651950990-127.0.0.1-1435153229562:blk_1073741840_1016
15/06/24 14:45:41 INFO hdfs.DFSClient: Excluding datanode 10.25.5.101:50010
15/06/24 14:45:41 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /input/1.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1492)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3027)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:614)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:188)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:476)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)

        at org.apache.hadoop.ipc.Client.call(Client.java:1411)
        at org.apache.hadoop.ipc.Client.call(Client.java:1364)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:391)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1473)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1290)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:536)
put: File /input/1.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 2 datanode(s) running and 2 node(s) are excluded in this operation.
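When every DataNode ends up excluded like this, it helps to ask the NameNode what it thinks of the cluster, and to check what limit the running DataNode JVM actually inherited. A small sketch, assuming the `hdfs` CLI and `jps` are on PATH (the grep patterns match the Hadoop 2.x `dfsadmin -report` and Linux `/proc/<pid>/limits` formats):

```shell
# Count DataNodes the NameNode considers live, per `hdfs dfsadmin -report`
# (each DataNode entry in the report starts with a "Name:" line).
live_datanodes() {
  hdfs dfsadmin -report 2>/dev/null | grep -c '^Name:'
}

# Show the open-file limit the running DataNode JVM actually has
# (run on the DataNode host; assumes a single DataNode process).
datanode_nofile_limit() {
  local pid
  pid=$(jps 2>/dev/null | awk '/DataNode/ {print $1}')
  [ -n "$pid" ] && grep 'Max open files' "/proc/$pid/limits"
}
```

If the second helper still shows the old limit after editing limits.conf, the DataNode was not restarted from a fresh login session and is still running with the lower value.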
     
         
         
         
           
     
  Copyright 2002-2020 newfreesoft.com, All Rights Reserved.