Running a Hadoop 0.20.2 M/R application in Eclipse 6.91.
I get these errors and warnings after execution:
13/07/24 16:52:52 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
13/07/24 16:52:52 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/07/24 16:52:52 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
13/07/24 16:52:52 INFO input.FileInputFormat: Total input paths to process : 1
13/07/24 16:52:54 INFO mapred.JobClient: Running job: job_local_0001
13/07/24 16:52:54 INFO input.FileInputFormat: Total input paths to process : 1
13/07/24 16:52:54 INFO mapred.MapTask: io.sort.mb = 100
13/07/24 16:52:54 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/24 16:52:54 INFO mapred.MapTask: record buffer = 262144/327680
13/07/24 16:52:55 INFO mapred.MapTask: Starting flush of map output
13/07/24 16:52:55 INFO mapred.MapTask: Finished spill 0
13/07/24 16:52:55 INFO mapred.TaskRunner: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
13/07/24 16:52:55 INFO mapred.LocalJobRunner:
13/07/24 16:52:55 INFO mapred.TaskRunner: Task 'attempt_local_0001_m_000000_0' done.
13/07/24 16:52:55 INFO mapred.LocalJobRunner:
13/07/24 16:52:55 INFO mapred.Merger: Merging 1 sorted segments
13/07/24 16:52:55 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 7204 bytes
13/07/24 16:52:55 INFO mapred.LocalJobRunner:
13/07/24 16:52:55 WARN mapred.LocalJobRunner: job_local_0001
java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:197)
at java.io.DataInputStream.readLong(DataInputStream.java:416)
at java.io.DataInputStream.readDouble(DataInputStream.java:468)
at Continents$CountryPropertiesWritable.readFields(Continents.java:62)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
at org.apache.hadoop.mapreduce.ReduceContext.nextKeyValue(ReduceContext.java:116)
at org.apache.hadoop.mapreduce.ReduceContext.nextKey(ReduceContext.java:92)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:175)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:566)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:408)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:216)
13/07/24 16:52:55 INFO mapred.JobClient: map 100% reduce 0%
13/07/24 16:52:55 INFO mapred.JobClient: Job complete: job_local_0001
13/07/24 16:52:55 INFO mapred.JobClient: Counters: 13
13/07/24 16:52:55 INFO mapred.JobClient: FileSystemCounters
13/07/24 16:52:55 INFO mapred.JobClient: HDFS_BYTES_READ=29783
13/07/24 16:52:55 INFO mapred.JobClient: FILE_BYTES_WRITTEN=24461
13/07/24 16:52:55 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=17085
13/07/24 16:52:55 INFO mapred.JobClient: Map-Reduce Framework
13/07/24 16:52:55 INFO mapred.JobClient: Reduce input groups=0
13/07/24 16:52:55 INFO mapred.JobClient: Combine output records=0
13/07/24 16:52:55 INFO mapred.JobClient: Map input records=252
13/07/24 16:52:55 INFO mapred.JobClient: Reduce shuffle bytes=0
13/07/24 16:52:55 INFO mapred.JobClient: Reduce output records=0
13/07/24 16:52:55 INFO mapred.JobClient: Spilled Records=251
13/07/24 16:52:55 INFO mapred.JobClient: Map output bytes=6700
13/07/24 16:52:55 INFO mapred.JobClient: Combine input records=0
13/07/24 16:52:55 INFO mapred.JobClient: Map output records=251
13/07/24 16:52:55 INFO mapred.JobClient: Reduce input records=0
13/07/24 16:52:55 ERROR hdfs.DFSClient: Exception closing file /user/ww-pc/cyg_server/exitHouseProperties1/_temporary/_attempt_local_0001_r_000000_0/part-r-00000 : org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /user/ww-pc/cyg_server/exitHouseProperties1/_temporary/_attempt_local_0001_r_000000_0/part-r-00000 by DFSClient_1179141130
at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /user/ww-pc/cyg_server/exitHouseProperties1/_temporary/_attempt_local_0001_r_000000_0/part-r-00000 by DFSClient_1179141130
at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
at org.apache.hadoop.ipc.Client.call(Client.java:740)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at com.sun.proxy.$Proxy0.complete(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at com.sun.proxy.$Proxy0.complete(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3264)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3188)
at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:1043)
at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:237)
at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:269)
at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:1424)
at org.apache.hadoop.fs.FileSystem.closeAll(FileSystem.java:217)
at org.apache.hadoop.fs.FileSystem$ClientFinalizer.run(FileSystem.java:202)
I think the WARN mapred.LocalJobRunner: job_local_0001 java.io.EOFException is the cause of the failure. What should I change in my code? Could the input and output types of the keys and values be wrong?
The relevant code:
public static class Map extends Mapper<LongWritable, Text, Text, HousePropertiesWritable> {
    private Text housekey = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        ......
        ......
        context.write(housekey, new HousePropertiesWritable(...variables..));
    }
}
public static class Reduce extends Reducer<Text, HousePropertiesWritable, Text, Text> {
    @Override
    public void reduce(Text key, Iterable<HousePropertiesWritable> values, Context context)
            throws IOException, InterruptedException {
        ....
        ....
        context.write(key, new Text(output));
    }
}
The relevant code from main(), where the Job is configured:
Job job = new Job(conf, "HouseCalculation");
job.setJarByClass(House.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(HousePropertiesWritable.class);
job.setMapperClass(Map.class);
job.setReducerClass(Reduce.class);
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(TextOutputFormat.class);
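A side note on the type question (an aside, not the accepted fix below): the mapper emits HousePropertiesWritable values while the reducer emits Text, yet the job only declares setOutputValueClass(HousePropertiesWritable.class). In Hadoop's new API the map output value class defaults to the job output value class, so the map side happens to line up here, but when the two types differ the usual explicit configuration looks like this sketch:

// Intermediate (map output) types, declared separately from the final output types.
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(HousePropertiesWritable.class);
// Final (reduce output) types, matching what the reducer actually writes.
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);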
Best answer
Thanks to Chris White, I found the bug in the readFields and write methods of my HousePropertiesWritable class: I was using in.readLine() and out.writeBytes() for the String fields. I changed them to in.readUTF() and out.writeUTF(), and everything works now.
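For illustration, here is a minimal sketch of what the corrected Writable could look like. The real HousePropertiesWritable isn't shown in the question, so the field names and types below are hypothetical; the point is that write() and readFields() must be exact mirrors of each other, using symmetric pairs such as writeUTF()/readUTF() for Strings:

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Hypothetical fields; only the read/write symmetry matters.
public class HousePropertiesWritable implements Writable {
    private String country;
    private double price;

    public HousePropertiesWritable() {}  // Hadoop needs the no-arg constructor for deserialization

    public HousePropertiesWritable(String country, double price) {
        this.country = country;
        this.price = price;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(country);   // writeUTF prefixes the string with its byte length
        out.writeDouble(price);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        country = in.readUTF();  // symmetric with writeUTF above
        price = in.readDouble();
    }
}

The original pairing was asymmetric: writeBytes() emits raw bytes with no length prefix, while readLine() consumes input up to a newline, so the stream desynchronizes and the next readDouble()/readLong() runs past the end of the record, which is exactly the EOFException raised inside readFields in the trace above.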
Thanks again,
Cheers
On java - Hadoop Map/Reduce WARN mapred.LocalJobRunner: job_local_0001 java.io.EOFException, we found a similar question on Stack Overflow: https://stackoverflow.com/questions/17839267/