Oozie Study Notes

User-contributed · 2022-11-23


Oozie Installation and Deployment

1. System requirements:

   - Unix (tested on Linux and Mac OS X)
   - Java 1.6+ (the Java 1.6+ bin directory should be in the command path)
   - Apache Hadoop (tested with 1.0.0 & 0.23.1)
   - ExtJS 2.2 library (optional, to enable the Oozie web console)

2. Upload the installation package and extract it:

   [hadoop@db01 softwares]$ pwd
   /opt/softwares
   [hadoop@db01 softwares]$ tar -zxvf oozie-4.0.0-cdh5.3.6.tar.gz -C /opt/cdh-5.3.6/

3. Add the following proxy-user settings to Hadoop's core-site.xml, then restart the Hadoop cluster. Generic form:

   <property>
       <name>hadoop.proxyuser.[OOZIE_SERVER_USER].hosts</name>
       <value>[OOZIE_SERVER_HOSTNAME]</value>
   </property>
   <property>
       <name>hadoop.proxyuser.[OOZIE_SERVER_USER].groups</name>
       <value>[USER_GROUPS_THAT_ALLOW_IMPERSONATION]</value>
   </property>

   Values used for this cluster:

   <property>
       <name>hadoop.proxyuser.hadoop.hosts</name>
       <value>db01</value>
   </property>
   <property>
       <name>hadoop.proxyuser.hadoop.groups</name>
       <value>*</value>
   </property>

4. Extract the hadooplibs jar package inside the Oozie installation directory:

   [hadoop@db01 oozie-4.0.0]$ tar -zxvf oozie-hadooplibs-4.0.0-cdh5.3.6.tar.gz

5. Create the libext directory:

   [hadoop@db01 oozie-4.0.0]$ pwd
   /opt/cdh-5.3.6/oozie-4.0.0
   [hadoop@db01 oozie-4.0.0]$ mkdir libext/

6. Copy the hadooplib jars extracted in step 4 into the libext directory created in step 5:

   [hadoop@db01 oozie-4.0.0]$ cp -r oozie-4.0.0-cdh5.3.6/hadooplibs/hadooplib-2.5.0-cdh5.3.6.oozie-4.0.0-cdh5.3.6/* libext/

7. If using the ExtJS library, copy the ZIP file to the libext/ directory:

   [hadoop@db01 oozie-4.0.0]$ cp /opt/softwares/ext-2.2.zip libext/

8. Build the war:

   [hadoop@db01 oozie-4.0.0]$ bin/oozie-setup.sh prepare-war

9. Start the Hadoop services (omitted here).

10. Copy the Hadoop client configuration into Oozie's conf directory:

   [hadoop@db01 oozie-4.0.0]$ cp /opt/cdh-5.3.6/hadoop-2.5.0/etc/hadoop/core-site.xml /opt/cdh-5.3.6/hadoop-2.5.0/etc/hadoop/hdfs-site.xml /opt/cdh-5.3.6/oozie-4.0.0/conf/

11. Create the sharelib on HDFS:

   [hadoop@db01 oozie-4.0.0]$ bin/oozie-setup.sh sharelib create -fs hdfs://db01:8020 -locallib oozie-sharelib-4.0.0-cdh5.3.6-yarn.tar.gz
   setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"
   log4j:WARN
No appenders could be found for logger (org.apache.hadoop.util.Shell).
   (Further log4j and SLF4J multiple-binding warnings omitted; they do not affect the result.)
   the destination path for sharelib is: /user/hadoop/share/lib/lib_20170324165042

12. Create the database:

   [hadoop@db01 oozie-4.0.0]$ bin/ooziedb.sh create -sqlfile oozie.sql -run

   (The tool validates the DB connection, then creates the schema from oozie.sql.)

13. Start Oozie:

   [hadoop@db01 oozie-4.0.0]$ bin/oozied.sh start

14. Point oozie-site.xml at the Hadoop conf directory:

   <property>
       <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
       <value>*=/opt/cdh-5.3.6/hadoop-2.5.0/etc/hadoop</value>
       <description>
           Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the HOST:PORT of
           the Hadoop service (JobTracker, HDFS). The wildcard '*' configuration is
           used when there is no exact match for an authority. The HADOOP_CONF_DIR contains
           the relevant Hadoop *-site.xml files. If the path is relative it is looked up within
           the Oozie configuration directory; though the path can be absolute (i.e. to point
           to Hadoop client conf/ directories in the local filesystem).
       </description>
   </property>
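Before restarting with the setting above, it can be worth confirming that the directory named in the *=... value actually contains the client *-site.xml files Oozie needs. A minimal sketch; the CDH path is the one from this walkthrough, while the helper name check_conf_dir and the throwaway demo directory are illustrative, since /opt/cdh-5.3.6 only exists on the real host:

```shell
#!/usr/bin/env bash
# Sanity check: a HADOOP_CONF_DIR is usable only if it holds the client
# core-site.xml and hdfs-site.xml copied there in step 10.
set -euo pipefail

check_conf_dir() {
    local dir=$1 f
    for f in core-site.xml hdfs-site.xml; do
        if [ ! -f "$dir/$f" ]; then
            echo "missing: $dir/$f"
            return 1
        fi
    done
    echo "conf dir OK: $dir"
}

# Demo against a temporary stand-in for /opt/cdh-5.3.6/hadoop-2.5.0/etc/hadoop.
demo=$(mktemp -d)
touch "$demo/core-site.xml" "$demo/hdfs-site.xml"
check_conf_dir "$demo"   # prints: conf dir OK: <demo dir>
```

On the real host you would call check_conf_dir /opt/cdh-5.3.6/hadoop-2.5.0/etc/hadoop instead of the demo directory.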
15. Restart Oozie and check the server status from the console:

   [hadoop@db01 oozie-4.0.0]$ bin/oozied.sh stop
   [hadoop@db01 oozie-4.0.0]$ bin/oozied.sh start
   [hadoop@db01 oozie-4.0.0]$ bin/oozie admin -oozie -status
   System mode: NORMAL

16. Store Oozie metadata in a MySQL database.

   1) Edit the configuration file:

   <property>
       <name>oozie.service.JPAService.jdbc.driver</name>
       <value>com.mysql.jdbc.Driver</value>
       <description>JDBC driver class.</description>
   </property>
   <property>
       <name>oozie.service.JPAService.jdbc.url</name>
       <value>jdbc:mysql://db01:3306/oozie</value>
       <description>JDBC URL.</description>
   </property>
   <property>
       <name>oozie.service.JPAService.jdbc.username</name>
       <value>root</value>
       <description>DB user name.</description>
   </property>
   <property>
       <name>oozie.service.JPAService.jdbc.password</name>
       <value>mysql</value>
       <description>
           DB user password. IMPORTANT: if the password is empty, leave a 1-space string;
           the service trims the value, and if empty, Configuration assumes it is NULL.
       </description>
   </property>

   2) Copy the MySQL driver into libext/:

   cp /opt/cdh-5.3.6/hive-0.13.1/lib/mysql-connector-java-5.1.27-bin.jar /opt/cdh-5.3.6/oozie-4.0.0/libext/

   3) Create the database:

   bin/ooziedb.sh create -sqlfile oozie.sql -run

   4) Rebuild the war and upload the sharelib to HDFS:

   bin/oozie-setup.sh prepare-war
   bin/oozie-setup.sh sharelib create -fs hdfs://db01:8020 -locallib oozie-sharelib-4.0.0-cdh5.3.6-yarn.tar.gz

   5) Restart:

   [hadoop@db01 oozie-4.0.0]$ bin/oozied.sh stop
   [hadoop@db01 oozie-4.0.0]$ bin/oozied.sh start

Examples

   Run the bundled map-reduce example:

   bin/oozie job -oozie -config examples/apps/map-reduce/job.properties -run

   Hive query used below:

   insert overwrite directory '/user/hadoop/hive/output'
   select empno, ename, mgr, job, sal, comm, deptno from chavin.emp;

   Sqoop import from the command line:

   bin/sqoop import --connect jdbc:mysql://db01:3306/chavin --username root --password mysql --table emp --target-dir ${nameNode}/${oozieDataRoot}/${outputDir} --num-mappers 1 --as-parquetfile

   Sqoop import arguments as used in an Oozie sqoop action:

   import --connect jdbc:mysql://db01:3306/chavin --username root --password mysql --table emp --target-dir
${nameNode}/${oozieDataRoot}/${outputDir} --num-mappers 1 --fields-terminated-by "\t"

   Sqoop export arguments as used in an Oozie sqoop action:

   export --connect jdbc:mysql://chavin.king:3306/chavin --username root --password mysql --table emp --num-mappers 1 --fields-terminated-by "\t" --export-dir /user/hadoop/oozie/datas/bi-select-emp/output

   File entries (path#symlink) referenced by the example workflow:

   db.hsqldb.properties#db.hsqldb.properties
   db.hsqldb.script#db.hsqldb.script

   Sqoop export from the command line:

   bin/sqoop export \
   --connect jdbc:mysql://db01:3306/chavin \
   --username root \
   --password mysql \
   --table emp01 \
   --export-dir /user/hadoop/sqoop/import/emp

   Hive target tables:

   create table chavin.emp02(
       EMPNO int,
       ENAME string,
       JOB string
   ) row format delimited fields terminated by '\t';

   create table chavin.emp01(
       EMPNO int,
       ENAME string,
       JOB string
   ) row format delimited fields terminated by '\t';
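The ${nameNode}, ${oozieDataRoot} and ${outputDir} placeholders in the Sqoop action arguments above are resolved from the job's job.properties file. A hedged sketch of what such a file might look like for this cluster; only nameNode (hdfs://db01:8020) and the oozie/datas path appear in these notes, so the application path, queue, and directory names are illustrative:

```properties
# Assumed values for a sqoop-action workflow; adjust to your cluster.
nameNode=hdfs://db01:8020
jobTracker=db01:8032
queueName=default
oozieAppsRoot=user/hadoop/oozie/apps
oozieDataRoot=user/hadoop/oozie/datas
outputDir=sqoop-import-emp/output
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/${oozieAppsRoot}/sqoop-import-emp
```

Oozie substitutes these values into the workflow definition at submission time, which is why the same command template can be reused across environments.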

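Step 16 assumes the oozie database already exists in MySQL before ooziedb.sh runs. A minimal sketch of the bootstrap SQL that would typically be run first; the database name and the root/mysql credentials follow the JPAService properties above, and the GRANT syntax assumes a MySQL 5.x server (the generation commonly paired with CDH 5):

```shell
#!/usr/bin/env bash
# Write out the bootstrap SQL; on the real host you would feed it to MySQL with:
#   mysql -u root -p < oozie-db.sql
# (Host and credentials come from oozie-site.xml above; adjust as needed.)
set -euo pipefail

cat > oozie-db.sql <<'SQL'
-- Create the metadata database referenced by oozie.service.JPAService.jdbc.url
CREATE DATABASE IF NOT EXISTS oozie DEFAULT CHARACTER SET utf8;
-- MySQL 5.x style grant; on MySQL 8+ create the user separately first
GRANT ALL PRIVILEGES ON oozie.* TO 'root'@'db01' IDENTIFIED BY 'mysql';
FLUSH PRIVILEGES;
SQL

echo "wrote oozie-db.sql"
```

Once the database exists, bin/ooziedb.sh create -sqlfile oozie.sql -run can populate the schema as in step 16.3.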