hadoop source_id definitions in the jellyfish_server:share_feedback table
Web/H5: 1: Weibo  2: QQ  3: Qzone  5: WeChat  6: Tencent Weibo  7: Renren
Client: -1: show all  1: Sina Weibo  2: QQ  3: Qzone  4: WeChat friend  5: WeChat Moments |
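The Web/H5 mapping above can be written as a small lookup for log post-processing. This is a minimal sketch; the function name `web_source_name` is my own, not from the source:

```shell
# Map a Web/H5 source_id from share_feedback to its channel name.
# Table taken from the note above; the function name is an assumption.
web_source_name() {
  case "$1" in
    1) echo "Weibo" ;;
    2) echo "QQ" ;;
    3) echo "Qzone" ;;
    5) echo "WeChat" ;;
    6) echo "Tencent Weibo" ;;
    7) echo "Renren" ;;
    *) echo "unknown" ;;
  esac
}
```

Usage: `web_source_name 5` prints `WeChat`; unmapped ids (e.g. 4) fall through to `unknown`.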
|||
hadoop ops error BUG
Hive session log:
Logging initialized using configuration in file:/etc/emr/hive-conf-1.0.1/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apps/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/apps/apache-hive-1.0.1-bin/lib/hive-jdbc-1.0.1-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Added [/home/hadoop/udf/Hive_Function_v2.jar] to class path
Added resources: [/home/hadoop/udf/Hive_Function_v2.jar]
OK
Time taken: 0.321 seconds
OK
Time taken: 0.33 seconds
Query ID = hadoop_20160613103131_07dc1024-73a6-4a19-95c6-80b3bc922afc
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1465730615273_7971, Tracking URL = http://iZ23v79xiomZ:20888/proxy/application_1465730615273_7971/
Kill Command = /usr/lib/hadoop-current/bin/hadoop job -kill job_1465730615273_7971
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2016-06-13 10:32:14,947 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1465730615273_7971 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://emr-header-1.cluster-9247:8088/cluster/app/application_1465730615273_7971
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
BUG: java.io.IOException: Rename cannot overwrite non empty destination directory /mnt/disk1/yarn/usercache/hadoop/filecache/0/161702
This is a known bug in Hadoop 2.6.0/2.7.0; see https://issues.apache.org/jira/browse/YARN-2624
Fix: clear the NodeManager's usercache directory. |
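The workaround above (clearing the usercache) can be sketched as a small helper, to be run on each NodeManager host with the NodeManager stopped. The directory layout comes from the error message; treat the helper name and parameters as assumptions:

```shell
# clear_usercache DIR USER:
# Remove the per-user localized filecache under one
# yarn.nodemanager.local-dirs entry (YARN-2624 workaround).
# Safe to re-run: it is a no-op when the directory is absent.
clear_usercache() {
  cache="$1/usercache/$2/filecache"
  if [ -d "$cache" ]; then
    rm -rf "$cache"
  fi
  return 0
}
```

Usage, matching the path in the error above: `clear_usercache /mnt/disk1/yarn hadoop`, repeated for every configured local-dir.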
|||
hadoop hdfs path Loading HDFS data into Hive reports a path error http://blog.csdn.net/z363115269/article/details/39048589 |
|||
Linux ssh cannot log in permissions http://blog.itpub.net/137293/viewspace-896312/
After setting up ssh keys, `ssh 192.168.0.90` still asked for a password. The secure log on w5 kept reporting: Authentication refused: bad ownership or modes for directory /home/www
At first I only checked the .ssh directory on w5; hours later I found that the /home/www directory itself was mode 775.
Troubleshooting checklist:
- The user's home directory must be 755 or 700, never 77x (group-writable).
- The .ssh directory must be 755.
- id_rsa.pub and authorized_keys must be 644.
- id_rsa must be 600. |
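The checklist above can be applied in one pass. A minimal sketch; wrapping it in a function that takes the home directory as a parameter is my addition, so it can be tried on a scratch directory first:

```shell
# fix_ssh_perms HOME_DIR:
# Apply the permission checklist that clears the sshd error
# "Authentication refused: bad ownership or modes for directory ...".
# In the post, HOME_DIR would be /home/www on host w5.
fix_ssh_perms() {
  home="$1"
  chmod 755 "$home"        # home dir: 755 or 700, never group-writable (77x)
  chmod 755 "$home/.ssh"
  [ -f "$home/.ssh/id_rsa.pub" ]      && chmod 644 "$home/.ssh/id_rsa.pub"
  [ -f "$home/.ssh/authorized_keys" ] && chmod 644 "$home/.ssh/authorized_keys"
  [ -f "$home/.ssh/id_rsa" ]          && chmod 600 "$home/.ssh/id_rsa"
  return 0
}
```

Usage: `fix_ssh_perms /home/www`, run as the target user on the server, then retry the passwordless login.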
|||
Spark Integrating Spark-1.3.1 with Hive for query analysis; building Spark against Hadoop-2.2.0 http://itindex.net/detail/53456-spark-1.3.1-hive |
|||
cluster A simple way to set up passwordless ssh login among multiple machines in a cluster http://blog.csdn.net/zzu09huixu/article/details/36416007 |
|||
hadoop hadoop-2.2.0 configuration notes http://blog.csdn.net/jiushuai/article/details/16817311 |