
kudu

batsom · 2022-03-21 · Data Processing

Introduction to Kudu

Create a partitioned table
 
CREATE TABLE odata.batsom_test2 (
  uuid       STRING,
  ceshi_time STRING,
  sys_time   STRING,
  sd_total   BIGINT,
  PRIMARY KEY (uuid)
)
PARTITION BY HASH (uuid) PARTITIONS 3
STORED AS KUDU;
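`PARTITION BY HASH (uuid) PARTITIONS 3` spreads rows across three tablets by hashing the primary key. A minimal Python sketch of the principle (Kudu's internal hash function is not Python's, and the `partition_for` helper below is a hypothetical stand-in, not Kudu's API):

```python
import hashlib

NUM_PARTITIONS = 3  # matches PARTITIONS 3 in the DDL above

def partition_for(uuid: str) -> int:
    """Map a uuid to one of NUM_PARTITIONS buckets (stand-in for Kudu's hashing).

    Like Kudu, the mapping is deterministic: the same key always lands in the
    same bucket, which is why the hash column must be part of the primary key.
    """
    digest = hashlib.md5(uuid.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_PARTITIONS

rows = ["a1", "b2", "c3", "d4"]
print({u: partition_for(u) for u in rows})
```

Because the bucket is derived from the key alone, point lookups on `uuid` touch a single tablet, while scans without the key must fan out to all three.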
 
Using Impala or Hive, add a column to a table in a Kudu database, then modify a column and adjust its position:
ALTER TABLE spark_kudu_test ADD COLUMNS (column_name STRING COMMENT 'column name');
ALTER TABLE spark_kudu_test CHANGE column_name column_name STRING;

ALTER TABLE ods_data.aa RENAME TO ods_data.aa_new;  -- aa_new is a placeholder; renaming a table to its own name is a no-op
 
ALTER TABLE dwd_data.aa DROP COLUMN sell_way_desc;
 
Kudu ingestion latency with Attunity:
select rowid,
       from_unixtime(cast(ceshi_time / 1000 as int), 'yyyyMMddHHmmss') ceshi_time,
       substr(sys_time, 1, 14) sys_time,
       cast(ceshi_time / 1000 as int)
         - unix_timestamp(from_unixtime(unix_timestamp(substr(sys_time, 1, 14), 'yyyyMMddHHmmss'))) time_xiangcha
from (select rowid, cast(ceshi_time as bigint) ceshi_time, sys_time
      from odata.inkudutest001) tmp
order by ceshi_time desc;


Kudu ingestion latency with 袋鼠云 (DTStack):
select uuid,
       from_unixtime(unix_timestamp(substr(flink_run_time, 1, 14), 'yyyyMMddHHmmss') + 28800) flink_run_time,
       from_unixtime(unix_timestamp(substr(mysql_insert_time, 1, 14), 'yyyyMMddHHmmss')) mysql_insert_time,
       unix_timestamp(from_unixtime(unix_timestamp(substr(flink_run_time, 1, 14), 'yyyyMMddHHmmss') + 28800))
         - unix_timestamp(from_unixtime(unix_timestamp(substr(mysql_insert_time, 1, 14), 'yyyyMMddHHmmss'))) tmp1
from odata.batsom_test2
where mysql_insert_time <> 'test'
order by flink_run_time desc
limit 10;
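Both latency queries do the same arithmetic: one timestamp arrives as epoch milliseconds (or a `yyyyMMddHHmmss` string), the other as a formatted wall-clock string, and the lag is their difference in seconds; the `+ 28800` in the second query is an 8-hour (UTC+8) offset for a side recorded in a different zone. A minimal Python sketch of the conversion (the `lag_seconds` helper is illustrative, and both inputs are assumed to be in the same time zone):

```python
from datetime import datetime, timezone

def lag_seconds(event_time_ms: int, wall_time: str) -> int:
    """Seconds between an epoch-millisecond event time and a
    'yyyyMMddHHmmss'-formatted wall-clock time (same zone assumed)."""
    event = event_time_ms // 1000  # drop milliseconds, like cast(ceshi_time/1000 as int)
    wall = int(
        datetime.strptime(wall_time[:14], "%Y%m%d%H%M%S")  # substr(..., 1, 14)
        .replace(tzinfo=timezone.utc)
        .timestamp()
    )
    return event - wall

# Event at 2022-03-21 10:00:05 UTC vs. a wall-clock reading of 10:00:00 -> 5 s lag.
print(lag_seconds(1647856805000, "20220321100000"))  # 5
```

If the two sides are in different zones, add the offset in seconds to one of them before subtracting, exactly as the query does with `+ 28800`.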
