2022-10-14
A Few Performance Best Practices for AWS Redshift ETL
Amazon Redshift is a fast, scalable, columnar-storage database that supports SQL queries over petabyte-scale data, making it well suited as an enterprise data warehouse. Redshift also supports massively concurrent queries and result-set caching, with response times as low as sub-second, nearly ten times faster than other data warehouses. With Redshift you spend less time waiting and more time getting insights from your data.
ETL is a popular concept in computing. It refers to the process of copying data from one or more sources into a target system, and it consists of three steps (a minimal Redshift-flavored sketch follows below):
1. Extract: select and pull the data to be exported from the data source.
2. Transform: convert the exported data's format or representation as the business requires.
3. Load: import the transformed data into the target system.
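To make the three steps concrete in a Redshift context, here is a minimal sketch, assuming files have already been extracted and staged on S3; the table, bucket, and IAM role names are all hypothetical placeholders:

-- Load: ingest the staged, gzipped CSV files from S3 into a raw table
-- (all object names here are hypothetical).
COPY raw_sales
FROM 's3://my-etl-bucket/staging/sales/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS CSV
GZIP;

-- Transform: reshape the raw rows into a reporting table with plain SQL.
INSERT INTO sales_by_day
SELECT sale_date, SUM(amount)
FROM raw_sales
GROUP BY sale_date;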
Before you can use Redshift you must first load data into it; this is Redshift ETL. A typical example is a database migration, moving the contents of a legacy database into Redshift. This post shares some of the tips and suggestions about Redshift ETL that we at bosicloud have gathered in our day-to-day work:
Figure: Redshift's WLM (workload management) configuration screen
3. Use BEGIN…COMMIT to reduce the number of COMMITs. As noted earlier, ETL is a multi-step task, and each step usually ends with a COMMIT, which is an expensive operation. We at bosicloud therefore suggest using BEGIN…COMMIT to merge as many steps as possible into a single transaction that commits only once, for example:

BEGIN;
CREATE TEMPORARY TABLE staging_table (..);
INSERT INTO staging_table SELECT .. FROM source;
DELETE FROM table1 WHERE ???=???;
INSERT INTO table1 SELECT .. FROM staging_table;
DELETE FROM table2 WHERE ???=???;
INSERT INTO table2 SELECT .. FROM staging_table;
COMMIT;
4. Use Redshift Spectrum for ad-hoc queries. In the past, we might run a full ETL to load data from S3 into Redshift just to answer an ad-hoc question. If the data will only be queried once or twice, that ETL is very expensive and hardly worth it. Don't forget that AWS recently introduced Redshift Spectrum, which lets you query data on S3 directly from Redshift without loading it first. Although a Spectrum query is slower than a regular Redshift query, it is still a big speedup compared to running an ETL just to query.
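A minimal sketch of the Spectrum approach, assuming the S3 data is in Parquet; the schema, catalog database, IAM role, bucket, and column names are all hypothetical:

-- Register an external schema backed by the AWS Glue Data Catalog
-- (all names are hypothetical).
CREATE EXTERNAL SCHEMA spectrum_demo
FROM DATA CATALOG
DATABASE 'spectrum_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/MySpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;

-- Map the S3 files to an external table.
CREATE EXTERNAL TABLE spectrum_demo.sales_s3 (
    sale_id   INT,
    amount    DECIMAL(10,2),
    sale_date DATE
)
STORED AS PARQUET
LOCATION 's3://my-etl-bucket/sales/';

-- The ad-hoc query runs directly against S3; nothing is loaded into Redshift.
SELECT sale_date, SUM(amount) AS total
FROM spectrum_demo.sales_s3
GROUP BY sale_date
ORDER BY total DESC;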
5. Some handy SQL scripts for ETL health checks.

Return queue statistics for the past day, such as maximum queue length and queue time:

select startqueue, node,
       datediff(ms, startqueue, startwork) as queue_time,
       datediff(ms, startwork, endtime) as commit_time,
       queuelen
from stl_commit_stats
where startqueue >= dateadd(day, -1, current_Date)
order by queuelen desc, queue_time desc;

Return information about the COPYs executed during the past week, such as the start time (starttime), query ID (query), SQL text (querytxt), number of files copied (n_files), and data size (size_mb):

select q.starttime, s.query, substring(q.querytxt,1,120) as querytxt,
       s.n_files, size_mb, s.time_seconds,
       s.size_mb/decode(s.time_seconds,0,1,s.time_seconds) as mb_per_s
from (select query, count(*) as n_files,
             sum(transfer_size/(1024*1024)) as size_MB,
             (max(end_Time) - min(start_Time))/(1000000) as time_seconds,
             max(end_time) as end_time
      from stl_s3client
      where http_method = 'GET' and query > 0 and transfer_time > 0
      group by query) as s
LEFT JOIN stl_Query as q on q.query = s.query
where s.end_Time >= dateadd(day, -7, current_Date)
order by s.time_Seconds desc, size_mb desc, s.end_time desc
limit 50;
Create a view to track how much space each table uses, and consider UNLOADing the contents of fast-growing tables to S3 (see the UNLOAD sketch after the view definition below).
CREATE OR REPLACE VIEW admin.v_space_used_per_tbl
AS with info_table as (
SELECT TRIM(pgdb.datname) AS dbase_name,
       TRIM(pgn.nspname) AS schemaname,
       TRIM(pgc.relname) AS tablename,
       id AS tbl_oid,
       b.mbytes AS megabytes,
       CASE WHEN pgc.reldiststyle = 8
            THEN a.rows_all_dist
            ELSE a.rows END AS rowcount,
       CASE WHEN pgc.reldiststyle = 8
            THEN a.unsorted_rows_all_dist
            ELSE a.unsorted_rows END AS unsorted_rowcount,
       CASE WHEN pgc.reldiststyle = 8
            THEN decode(det.n_sortkeys, 0, NULL,
                 DECODE(a.rows_all_dist, 0, 0,
                 (a.unsorted_rows_all_dist::DECIMAL(32)/a.rows_all_dist)*100))::DECIMAL(20,2)
            ELSE decode(det.n_sortkeys, 0, NULL,
                 DECODE(a.rows, 0, 0,
                 (a.unsorted_rows::DECIMAL(32)/a.rows)*100))::DECIMAL(20,2)
       END AS pct_unsorted
FROM ( SELECT db_id, id, name,
              MAX(ROWS) AS rows_all_dist,
              MAX(ROWS) - MAX(sorted_rows) AS unsorted_rows_all_dist,
              SUM(rows) AS rows,
              SUM(rows) - SUM(sorted_rows) AS unsorted_rows
       FROM stv_tbl_perm
       GROUP BY db_id, id, name ) AS a
INNER JOIN pg_class AS pgc ON pgc.oid = a.id
INNER JOIN pg_namespace AS pgn ON pgn.oid = pgc.relnamespace
INNER JOIN pg_database AS pgdb ON pgdb.oid = a.db_id
INNER JOIN ( SELECT attrelid,
                    MIN(CASE attisdistkey WHEN 't' THEN attname ELSE NULL END) AS "distkey",
                    MIN(CASE attsortkeyord WHEN 1 THEN attname ELSE NULL END) AS head_sort,
                    MAX(attsortkeyord) AS n_sortkeys,
                    MAX(attencodingtype) AS max_enc,
                    SUM(case when attencodingtype <> 0 then 1 else 0 end)::DECIMAL(20,3)
                      / COUNT(attencodingtype)::DECIMAL(20,3) * 100.00 AS pct_enc
             FROM pg_attribute
             GROUP BY 1 ) AS det ON det.attrelid = a.id
LEFT OUTER JOIN ( SELECT tbl, COUNT(*) AS mbytes
                  FROM stv_blocklist
                  GROUP BY tbl ) AS b ON a.id = b.tbl
WHERE pgc.relowner > 1 )
select info.*,
       CASE WHEN info.rowcount = 0 THEN 'n/a'
            WHEN info.pct_unsorted >= 20 THEN 'VACUUM SORT recommended'
            ELSE 'n/a'
       END AS recommendation
from info_table info;
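Once the view flags a fast-growing table, its older rows can be offloaded to S3. A minimal sketch with UNLOAD, where the table, date predicate, bucket, and IAM role names are all hypothetical:

-- Hypothetical names: archive pre-2022 rows of a fast-growing table to S3
-- as Parquet; once verified, the archived rows can be deleted from the table.
UNLOAD ('SELECT * FROM big_history_table WHERE event_date < ''2022-01-01''')
TO 's3://my-etl-bucket/archive/big_history_table_'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
FORMAT AS PARQUET;

A nice side effect of unloading to Parquet is that the archived files can still be queried in place via Redshift Spectrum, as in tip 4.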
Find the 50 most time-consuming SQL queries of the past week (runs of identical SQL text have their times aggregated):

-- query runtimes
select trim(database) as DB, count(query) as n_qry,
       max(substring(qrytext,1,80)) as qrytext,
       min(run_seconds) as "min", max(run_seconds) as "max",
       avg(run_seconds) as "avg", sum(run_seconds) as total,
       max(query) as max_query_id, max(starttime)::date as last_run,
       aborted,
       listagg(event, ', ') within group (order by query) as events
from (
  select userid, label, stl_query.query, trim(database) as database,
         trim(querytxt) as qrytext, md5(trim(querytxt)) as qry_md5,
         starttime, endtime,
         datediff(seconds, starttime, endtime)::numeric(12,2) as run_seconds,
         aborted,
         decode(alrt.event,
                'Very selective query filter','Filter',
                'Scanned a large number of deleted rows','Deleted',
                'Nested Loop Join in the query plan','Nested Loop',
                'Distributed a large number of rows across the network','Distributed',
                'Broadcasted a large number of rows across the network','Broadcast',
                'Missing query planner statistics','Stats',
                alrt.event) as event
  from stl_query
  left outer join (
    select query, trim(split_part(event,':',1)) as event
    from STL_ALERT_EVENT_LOG
    where event_time >= dateadd(day, -7, current_Date)
    group by query, trim(split_part(event,':',1))
  ) as alrt on alrt.query = stl_query.query
  where userid <> 1
    -- and (querytxt like 'SELECT%' or querytxt like 'select%')
    -- and database = ''
    and starttime >= dateadd(day, -7, current_Date)
)
group by database, label, qry_md5, aborted
order by total desc
limit 50;