More Hue + Oozie pitfalls: workflow and coordinator finally both run

Earlier posts summarized some pitfalls with Sqoop 1, Oozie, and HBase under Hue. The project is due today, so the Oozie workflow and its scheduled (coordinator) execution absolutely had to get working — so here we go.

1. The earlier Sqoop MySQL import/export pitfalls have all been covered. It turned out that, apart from CDH (5.15) not auto-configuring Sqoop 1, this mattered little: after configuring it manually, installing the sharelib, and copying in the missing jars (sqoop, hbase, mysql, oozie, etc.), jobs basically run in Hue too (note that when Hue runs Sqoop through Oozie, its Python-generated XML has an escaping bug, so the command cannot contain quotes). At first the MySQL driver could never be found; adding the driver jar under Oozie's lib, libext, and libtools directories and under Sqoop's lib, as various posts suggest, still did not help. What finally fixed it was changing the proxy-user settings in HDFS core-site.xml:

<property><name>hadoop.proxyuser.hue.hosts</name><value>*</value></property>
<property><name>hadoop.proxyuser.hue.groups</name><value>*</value></property>
<property><name>hadoop.proxyuser.oozie.hosts</name><value>*</value></property>
<property><name>hadoop.proxyuser.oozie.groups</name><value>*</value></property>
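Note that proxy-user changes in core-site.xml only take effect after redeploying client configs and restarting (or refreshing) the affected services; on HDFS and YARN the refresh can be done without a full restart. A sketch, assuming you run it as an HDFS superuser:

```shell
# Re-read the hadoop.proxyuser.* settings on the NameNode
hdfs dfsadmin -refreshSuperUserGroupsConfiguration

# Do the same on the ResourceManager
yarn rmadmin -refreshSuperUserGroupsConfiguration
```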

Manual Sqoop 1 configuration:

Sqoop 1 Client service environment advanced configuration snippet (safety valve):

SQOOP_CONF_DIR=/etc/sqoop/conf
HADOOP_COMMON_HOME=/opt/cloudera/parcels/CDH/lib/hadoop
HADOOP_MAPRED_HOME=/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
HBASE_HOME=/opt/cloudera/parcels/CDH/lib/hbase
HIVE_HOME=/opt/cloudera/parcels/CDH/lib/hive
ZOOCFGDIR=/opt/cloudera/parcels/CDH/lib/zookeeper

Sqoop 1 Client advanced configuration snippet (safety valve) for sqoop-conf/sqoop-env.sh:

#Set path to where bin/hadoop is available
export HADOOP_COMMON_HOME=/opt/cloudera/parcels/CDH/lib/hadoop
#Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
#set the path to where bin/hbase is available
export HBASE_HOME=/opt/cloudera/parcels/CDH/lib/hbase
#Set the path to where bin/hive is available
export HIVE_HOME=/opt/cloudera/parcels/CDH/lib/hive
#Set the path for where zookeeper config dir is
export ZOOCFGDIR=/opt/cloudera/parcels/CDH/lib/zookeeper
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/opt/cloudera/parcels/SQOOP_TERADATA_CONNECTOR-1.7c5/lib/tdgssconfig.jar:/opt/cloudera/parcels/SQOOP_TERADATA_CONNECTOR-1.7c5/lib/terajdbc4.jar
export SQOOP_CONF_DIR=/etc/sqoop/conf

2. Since the source is a relational database whose rows get updated, importing into HBase solves the duplicate-row problem when incrementally syncing changed data, keeping the data consistent. Then there are type conversions: int, float(11,2), and datetime all need mapping to Java types. int is the tricky one — some columns are tinyint, and if left unmapped they land in HBase as true/false. The Hive warehouse on top doesn't even complain; the problem only surfaces once you query from Impala.
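One common way to avoid the tinyint-to-boolean surprise is to disable the MySQL JDBC driver's tinyInt1isBit mapping in the connection URL and/or force Java types explicitly with --map-column-java. A hedged sketch — the database, table, and column names here are made up for illustration:

```shell
# tinyInt1isBit=false stops Connector/J from mapping tinyint(1) to a boolean;
# --map-column-java pins the Java types Sqoop uses for specific columns
sqoop import \
  --connect "jdbc:mysql://master:3306/mydb?tinyInt1isBit=false" \
  --username bigdata --password xxxxx \
  --table orders \
  --map-column-java status=Integer,amount=Double \
  --hbase-table orders --column-family cf --hbase-row-key id \
  -m 1
```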

3. With the data in place, Hive external tables over HBase bring it into the warehouse and computation starts. Because the computation is incremental, tables have to be dropped, recreated, and exported to MySQL on a schedule. The export pitfalls were covered before: the --columns list, the NULL-conversion parameters, and a custom class to settle the type issues once and for all. Another pitfall: the export table lives on HDFS, and everyone's Hue permissions differ. Hue does have a share feature, but it only shares scripts — shared jobs do not execute correctly and throw HDFS permission errors. Adding all team members as proxy users in core-site.xml, as above, silenced the errors but problems remained: drop table removes someone else's table from the Hive metastore yet leaves its HDFS directory behind, and table creation has similar issues. Conclusion: in Hue, everyone should run their own programs and scripts, and for going live everything gets copied under one account and run there. Real collaboration is not possible — scripts can be shared for viewing during development, but people cannot work inside one flow calling each other's pieces, because with differing permissions some temporary directories cannot be created or touched by the proxy user.
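For reference, the NULL-conversion parameters mentioned above are Sqoop's --input-null-string / --input-null-non-string, which map Hive's \N placeholder back to SQL NULL on export. A sketch with hypothetical table, directory, and column names:

```shell
# Export a Hive-managed directory to MySQL; \N in the files becomes NULL,
# and --columns fixes the column order to match the HDFS field order
sqoop export \
  --connect jdbc:mysql://master:3306/mydb \
  --username bigdata --password xxxxx \
  --table result_table \
  --export-dir /user/hive/warehouse/result_table \
  --columns "id,name,score" \
  --input-fields-terminated-by '\001' \
  --input-null-string '\\N' --input-null-non-string '\\N' \
  -m 1
```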

4. With the import/compute/export SQL scripts done, they had to be chained into a flow and scheduled. But Oozie, just like Sqoop 1 earlier, would at first only run from the command line. Eventually the workflow itself ran in Hue, but editing the config under the workspace, as various posts describe, had no effect — the config just kept reverting. It turns out many Hue settings cannot be changed by editing config files the way an open-source install allows, because Hue does not read those files: it keeps its own directories, and key parameters such as oozie.wf.application.path and oozie.coord.application.path have built-in defaults. Worse, the cron schedule generated from the workflow designer's menu could not be submitted at all, failing with "undefined" --! After ruling out everything else, I went back to the stock oozie-examples: they still ran fine on the command line, but the same examples copied into Hue would not run either. After several passes through the Oozie fundamentals, a couple of posts are worth mentioning: http://shiyanjun.cn/archives/684.html

https://www.cnblogs.com/en-heng/p/5581331.html. The key realization: a workflow and a coordinator are peers, not parent and child. Creating a brand-new coordinator from the Hue menu and then selecting the workflow finally got the schedule running. So the schedule option inside the workflow editor is the real trap — many posts online never actually ran it in Hue; they just wrote workflow files in Hue the way one would on the command line, while in practice Hue 4.1 (CDH 5.15.1) simply refuses to run them, either complaining about duplicate parameters or reverting your configuration after submission.

A few smaller pitfalls remain. For the time zone, most posts online still assume UTC, but following the official docs you can change it via the Oozie Server advanced configuration snippet (safety valve) for oozie-site.xml:

<property><name>oozie.processing.timezone</name><value>GMT+0800</value></property>

and use Shanghai time in the coordinator:

<coordinator-app name="MY_APP" frequency="${coord:minutes(2)}" start="${start}" end="${end}" timezone="Asia/Shanghai" xmlns="uri:oozie:coordinator:0.2">
 <action>
 <workflow>
 <app-path>${workflowAppUri}</app-path>
 </workflow>
 </action>
</coordinator-app>

But on submission, it complains the format is wrong and needs a +0800 suffix --!

Error: E1003 : E1003: Invalid coordinator application attributes, parameter [start] = [2018-09-19T16:35] must be Date in GMT+08:00 format (yyyy-MM-dd'T'HH:mm+0800). Parsing error java.text.ParseException: Could not parse [2018-09-19T16:35] using [yyyy-MM-dd'T'HH:mm+0800] mask
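In other words, once oozie.processing.timezone is GMT+0800, the start/end values must carry a literal +0800 suffix. A quick way to generate timestamps in that mask (assuming GNU date, as on the CDH hosts):

```shell
# Build a start value in the mask Oozie expects once
# oozie.processing.timezone=GMT+0800 is set: yyyy-MM-dd'T'HH:mm+0800
start=$(date -d "2018-09-27 16:35" "+%Y-%m-%dT%H:%M+0800")
echo "$start"   # 2018-09-27T16:35+0800
```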

So in job.properties:

oozie.use.system.libpath=true
security_enabled=False
dryrun=False
send_email=False
jobTracker=master:8032
start=2018-09-27T16:35+0800
nameNode=hdfs://master:8020
end=2018-09-27T18:35+0800
# the auto-generated workflow does not write these two lines at all
workflowAppUri=${nameNode}/user/hue/oozie/apps/sqoop
oozie.coord.application.path=${nameNode}/user/hue/oozie/apps/sqoop

And the hand-written test workflow.xml:
<workflow-app name="My Workflow" xmlns="uri:oozie:workflow:0.5">
 <start to="sqoop-ace0"/>
 <kill name="Kill">
 <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
 </kill>
 <action name="sqoop-ace0">
 <sqoop xmlns="uri:oozie:sqoop-action:0.2">
 <job-tracker>${jobTracker}</job-tracker>
 <name-node>${nameNode}</name-node>
 <command> list-databases --connect jdbc:mysql://master:3306/ --username bigdata --password xxxxx </command>
 </sqoop>
 <ok to="End"/>
 <error to="Kill"/>
 </action>
 <end name="End"/>
</workflow-app>
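As a sanity check, the hand-written app above can be submitted from the command line (assuming the Oozie server listens on master:11000, the default port):

```shell
# Push the workflow app to the HDFS path referenced by job.properties
hdfs dfs -mkdir -p /user/hue/oozie/apps/sqoop
hdfs dfs -put -f workflow.xml /user/hue/oozie/apps/sqoop/

# Submit and start the coordinator described by job.properties
oozie job -oozie http://master:11000/oozie -config job.properties -run

# Inspect it afterwards (replace <job-id> with the id printed by -run)
oozie job -oozie http://master:11000/oozie -info <job-id>
```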

The workflow and job.properties created in Hue:

<workflow-app name="Workflow-1" xmlns="uri:oozie:workflow:0.5">
 <start to="hive-d593"/>
 <kill name="Kill">
 <message>Action failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
 </kill>
 <action name="hive-d593" cred="hive2">
 <hive2 xmlns="uri:oozie:hive2-action:0.1">
 <job-tracker>${jobTracker}</job-tracker>
 <name-node>${nameNode}</name-node>
 <jdbc-url>jdbc:hive2://master:10000/default</jdbc-url>
 <script>${wf:appPath()}/hive-d593.sql</script>
 </hive2>
 <ok to="hive-9e6a"/>
 <error to="Kill"/>
 </action>
 <action name="hive-9e6a" cred="hive2">
 <hive2 xmlns="uri:oozie:hive2-action:0.1">
 <job-tracker>${jobTracker}</job-tracker>
 <name-node>${nameNode}</name-node>
 <jdbc-url>jdbc:hive2://master:10000/default</jdbc-url>
 <script>${wf:appPath()}/hive-9e6a.sql</script>
 </hive2>
 <ok to="hive-016c"/>
 <error to="Kill"/>
 </action>
 <action name="hive-016c" cred="hive2">
 <hive2 xmlns="uri:oozie:hive2-action:0.1">
 <job-tracker>${jobTracker}</job-tracker>
 <name-node>${nameNode}</name-node>
 <jdbc-url>jdbc:hive2://master:10000/default</jdbc-url>
 <script>${wf:appPath()}/hive-016c.sql</script>
 </hive2>
 <ok to="hive-02ec"/>
 <error to="Kill"/>
 </action>
 <action name="hive-3c77" cred="hive2">
 <hive2 xmlns="uri:oozie:hive2-action:0.1">
 <job-tracker>${jobTracker}</job-tracker>
 <name-node>${nameNode}</name-node>
 <jdbc-url>jdbc:hive2://master:10000/default</jdbc-url>
 <script>${wf:appPath()}/hive-3c77.sql</script>
 </hive2>
 <ok to="hive-6ffa"/>
 <error to="Kill"/>
 </action>
 <action name="hive-02ec" cred="hive2">
 <hive2 xmlns="uri:oozie:hive2-action:0.1">
 <job-tracker>${jobTracker}</job-tracker>
 <name-node>${nameNode}</name-node>
 <jdbc-url>jdbc:hive2://master:10000/default</jdbc-url>
 <script>${wf:appPath()}/hive-02ec.sql</script>
 </hive2>
 <ok to="hive-3c77"/>
 <error to="Kill"/>
 </action>
 <action name="hive-6ffa" cred="hive2">
 <hive2 xmlns="uri:oozie:hive2-action:0.1">
 <job-tracker>${jobTracker}</job-tracker>
 <name-node>${nameNode}</name-node>
 <jdbc-url>jdbc:hive2://master:10000/default</jdbc-url>
 <script>${wf:appPath()}/hive-6ffa.sql</script>
 </hive2>
 <ok to="hive-bbd4"/>
 <error to="Kill"/>
 </action>
 <action name="hive-bbd4" cred="hive2">
 <hive2 xmlns="uri:oozie:hive2-action:0.1">
 <job-tracker>${jobTracker}</job-tracker>
 <name-node>${nameNode}</name-node>
 <jdbc-url>jdbc:hive2://master:10000/default</jdbc-url>
 <script>${wf:appPath()}/hive-bbd4.sql</script>
 </hive2>
 <ok to="End"/>
 <error to="Kill"/>
 </action>
 <end name="End"/>
</workflow-app>
oozie.use.system.libpath=True
security_enabled=False
dryrun=False
start_date=2018-09-27T17:28
end_date=2018-09-27T18:28

And the auto-generated coordinator:

<coordinator-app name="Schedule-1"
 frequency="0,33,40 * * * *"
 start="${start_date}" end="${end_date}" timezone="Asia/Shanghai"
 xmlns="uri:oozie:coordinator:0.2"
 >
 <controls>
 <execution>FIFO</execution>
 </controls>
 <action>
 <workflow>
 <app-path>${wf_application_path}</app-path>
 <configuration>
 <property>
 <name>oozie.use.system.libpath</name>
 <value>True</value>
 </property>
 <property>
 <name>start_date</name>
 <value>${start_date}</value>
 </property>
 <property>
 <name>end_date</name>
 <value>${end_date}</value>
 </property>
 </configuration>
 </workflow>
 </action>
</coordinator-app>

The configuration Hue actually submitted (note it adds +0800 to the dates itself):

oozie.use.system.libpath=True
security_enabled=False
oozie.coord.application.path=hdfs://master:8020/user/hue/oozie/deployments/_hue_-oozie-3509-1538040624.53
dryrun=False
end_date=2018-09-27T18:28+0800
jobTracker=master:8032
mapreduce.job.user.name=hue
user.name=hue
hue-id-c=3509
nameNode=hdfs://master:8020
wf_application_path=hdfs://master:8020/user/hue/oozie/workspaces/hue-oozie-1537954075.59
start_date=2018-09-27T17:28+0800
