-
-
-
-The edge gateway then performs acquisition according to the configured collection rules; for now the data can be viewed through the edge-side InfluxDB web UI:
-
-
-
-The collected data is sent to the server over MQTT; see the next section (Real-Time Preview of Collected Data).
-
-Meanwhile, after the acquisition configuration is changed (deployed) on the platform, a POST to http://localhost:8088/edge/002/sync triggers the gateway to synchronize its configuration.
-
-
-
-### Real-Time Preview of Collected Data
-
-Data collected by the DAC is pushed to the server's MQTT broker in real time; the server **stores** it in the database and **pushes** it to the front end over WebSocket.
-
-WebSocket address: ws://localhost:8088/edge/ws/{device}
-
-Real-time data preview page: http://localhost:8088/edge/rt/{device}
-
-
-
-
-
-### Binding a Structure That Contains Vibration Devices
-
-Create a structure that contains vibration devices; the test looks like this:
-
-
-
-As above, bind the structure to the gateway.
-
-
-
-Simulate a vibration device connecting to the gateway; the logs show the gateway starting to acquire data from the vibration sensors:
-
-
-
-Vibration data is stored locally; the database's scheduled aggregation (continuous queries, CQ) produces minute-level aggregates. The real-time data looks like this:
-
-
-
-
-
-### Real-Time Preview of Dynamic Data
-
-Vibration real-time data is **not pushed** to the platform by default.
-
-When the front end opens a vibration device's real-time data page, it creates a WebSocket subscription, which notifies the device to start reporting data (similar to how video streaming services work); from then on the data is handled like ordinary data.
-
-The real-time data refresh page looks like this:
-
-
-
-When the WebSocket subscription ends, the device is notified to stop the real-time stream (saving bandwidth, device load, and server-side storage).
-
-Planned next: keeping a recent playback history in the cloud, and replaying historical data stored on the device.
-
-
-
-### Use as Standalone Vibration Acquisition Software
-
-Covers vibration acquisition configuration, acquisition, computation, storage, and forwarding. In some scenarios it can replace the DAAS software running on a local industrial PC.
-
-> Note: in cloud working mode, the Vib page on the device shows the configuration but does not allow changes.
-
-
-
-Vibration device configuration: http://10.8.30.244:8828/vib
-
- 
-
-Vibration channel configuration:
-
- 
-
-IP settings:
-
- 
-
-Gateway-side real-time data preview:
-
- 
\ No newline at end of file
diff --git a/doc/技术文档/EDGE-V0.2功能计划.md b/doc/技术文档/EDGE-V0.2功能计划.md
deleted file mode 100644
index 7001f7e..0000000
--- a/doc/技术文档/EDGE-V0.2功能计划.md
+++ /dev/null
@@ -1 +0,0 @@
-1. Historical data query
\ No newline at end of file
diff --git a/doc/技术文档/EDGE-V0.2调试手册.md b/doc/技术文档/EDGE-V0.2调试手册.md
deleted file mode 100644
index e843237..0000000
--- a/doc/技术文档/EDGE-V0.2调试手册.md
+++ /dev/null
@@ -1,286 +0,0 @@
-## Deployment and Startup
-
-### EDGE
-
-**Device model**: ok-3399C
-
-**OS**: ubuntu-18.04
-
-**Default user**: forlinx / forlinx
-
-**Network**: set the network address via netplan (netplan apply)
-
-**Installation:**
-
-```sh
-# Connect to the Console port with a serial cable, or configure the network and SSH into the board via its IP address
-# Installation currently supports online mode only; the device must have Internet access
-# 1. Install Docker
-$ sudo apt-get update
-$ sudo apt-get upgrade
-$ curl -fsSL https://test.docker.com -o get-docker.sh && sh get-docker.sh
-$ sudo usermod -aG docker $USER
-$ sudo apt install gnupg2 pass
-
-# 2. Install the application
-# Copy the disk package to the gateway
-$ chmod +x docker-compose
-$ docker-compose up -d
-```
-
-
-
-After installation completes, open http://ip:8828 in a browser; the page below indicates the device initialized successfully
-
-
-
-
-
-
-
-
-
-### SERVER
-
-**Base services**
-
-+ Emqx
-
-  Start the MQTT broker: emqx start
-
-+ Prometheus
-
-  Configure scraping of device metrics
-
-  ```yaml
-  scrape_configs:
-    - job_name: "edge-server"
-      static_configs:
-        - targets: ["localhost:19202"]
-    # For debugging (scrape monitoring metrics from a device on the internal network)
-    - job_name: "dac"
-      static_configs:
-        - targets: ["10.8.30.244:19201"]
-  ```
-
-  Default UI: http://localhost:9090/
-
-+ Grafana
-
-  Works with Prometheus to display EDGE status and performance metrics.
-
-+ Others
-
-  + Connect to the test Iota database `postgres://postgres:postgres@10.8.30.156:5432/iota20211206?sslmode=disable`
-  + Deploy the Iota site http://10.8.30.38/
-  + Postman for API debugging
-
-
-
-**Start the SERVER**
-
-Configure `server.conf`
-
-```json
-{
-  "msg.mqtt.center": "10.8.30.236:1883", -- MQTT broker address
-  "web.url": ":8088", -- web API listen address
-  "db.type": "postgres",
-  "db.conn": "postgres://postgres:postgres@10.8.30.156:5432/iota20211206?sslmode=disable", -- Iota database address
-  "log.file": true,
-  "log.file.loc": "runtime/logs/log"
-}
-```
-
-Start the server.
-
-
-
-## Feature Demonstration
-
-
-
-### Adding an Edge Gateway on the Platform
-
-The CRUD APIs are implemented.
-
-**Add a device:**
-
-URL: POST http://localhost:8088/edges
-
-BODY:
-
-```json
-{"serial_no":"002","name":"DEMO-2","hardware":{"name":"FS-EDGE-01"},"software":{"ver":"0.2.1"}}
-```
-
-RET: 200
-
-> The serial_no set on the platform must match the device's SerialNo, otherwise the device cannot be controlled
-
-
-
-**Query all current devices**:
-
-URL: GET localhost:8088/edges
-
-RET:
-
-```json
-{"001":{"serial_no":"001","name":"DEMO-WW","hardware":{"name":"FS-EDGE-01"},"software":{"ver":"0.2.1"},"set_ver":"1","config_ver":"9"},"002":{"serial_no":"002","name":"DEMO-2","properties":{"hb":"true"},"hardware":{"name":"FS-EDGE-01"},"software":{"ver":"0.2.1"},"set_ver":"0","config_ver":"0"}}
-```
-
-
-
-Others: **PUT** to modify and **DELETE** to remove
-
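The device map returned by `GET /edges` keys each record by serial number. A minimal decoding sketch in Go (the struct below is reconstructed from the sample response above, not taken from the server's source):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Edge mirrors the fields visible in the sample /edges response.
type Edge struct {
	SerialNo   string            `json:"serial_no"`
	Name       string            `json:"name"`
	Properties map[string]string `json:"properties"`
	SetVer     string            `json:"set_ver"`
	ConfigVer  string            `json:"config_ver"`
}

// decodeEdges unmarshals the serial-number-keyed device map.
func decodeEdges(body []byte) (map[string]Edge, error) {
	edges := map[string]Edge{}
	err := json.Unmarshal(body, &edges)
	return edges, err
}

func main() {
	body := []byte(`{"001":{"serial_no":"001","name":"DEMO-WW","set_ver":"1","config_ver":"9"},
	                 "002":{"serial_no":"002","name":"DEMO-2","properties":{"hb":"true"},"set_ver":"0","config_ver":"0"}}`)
	edges, err := decodeEdges(body)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(edges), edges["002"].Name) // 2 DEMO-2
}
```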
-
-
-### Gateway Online Status and Performance Statistics
-
-Gateways report heartbeat data, Prometheus scrapes it, and it can be viewed in Grafana:
-
-
-
-The heartbeat data format is:
-
-```json
-{
-  "time": 1642734937400741643, -- device time of this sample (used for clock synchronization)
-  "ver": {
-    "pv": "v0.0.1" -- current configuration version (device config plus acquisition config)
-  },
-  "machine": {
-    "mt": 3845, -- total memory
-    "mf": 2616, -- free memory
-    "mp": 10.074738688877986, -- memory usage percentage
-    "dt": 12031, -- total disk
-    "df": 7320, -- free disk space
-    "dp": 36, -- disk usage percentage
-    "u": 7547, -- system uptime
-    "pform": "ubuntu", -- OS platform
-    "pver": "18.04", -- OS version
-    "load1": 0.09, -- 1-minute load average
-    "load5": 0.02, -- 5-minute load average
-    "load15": 0.01 -- 15-minute load average
-  }
-}
-```
-
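The memory fields can be cross-checked in Go. Note that the reported `mp` (10.07) is much lower than `(mt - mf) / mt` (about 32%), so the gateway presumably computes usage on a different basis (e.g. excluding buffers/cache); the struct and calculation below are illustrative only, not the gateway's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Heartbeat mirrors the annotated sample above (illustrative, not the gateway's source).
type Heartbeat struct {
	Time    int64             `json:"time"`
	Ver     map[string]string `json:"ver"`
	Machine struct {
		MT float64 `json:"mt"` // total memory
		MF float64 `json:"mf"` // free memory
		MP float64 `json:"mp"` // memory usage percentage as reported
	} `json:"machine"`
}

// memUsedPercent recomputes a usage percentage from the totals.
func memUsedPercent(total, free float64) float64 {
	if total == 0 {
		return 0
	}
	return (total - free) / total * 100
}

func main() {
	raw := []byte(`{"time":1642734937400741643,"ver":{"pv":"v0.0.1"},
	  "machine":{"mt":3845,"mf":2616,"mp":10.07}}`)
	var hb Heartbeat
	if err := json.Unmarshal(raw, &hb); err != nil {
		panic(err)
	}
	fmt.Printf("recomputed usage: %.1f%%, reported mp: %.2f\n",
		memUsedPercent(hb.Machine.MT, hb.Machine.MF), hb.Machine.MP)
}
```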
-
-
-### Binding a Structure to the Gateway
-
-Create a structure in Iota (test environment); here we simulate a vibrating-wire acquisition scenario, as follows
-
-
-
-Push the structure down to the edge gateway
-
-URL: POST http://localhost:8088/edge/002/things
-
-BODY:
-
-```json
-["f73d1b17-f2d5-46dd-9dd1-ebbb66b11854"]
-```
-
-RET: 200
-
-> Get the structures bound to a gateway: GET http://localhost:8088/edge/002/things
-
-
-
-After the push, the edge gateway updates its configuration automatically (if it is offline, it updates the next time it comes online) and restarts
-
-
-
-
-
-Simulate a DTU device coming online to the edge gateway,
-
-
-
-
-
-The edge gateway then performs acquisition according to the configured collection rules; for now the data can be viewed through the edge-side InfluxDB web UI:
-
-
-
-The collected data is sent to the server over MQTT; see the next section (Real-Time Preview of Collected Data).
-
-Meanwhile, after the acquisition configuration is changed (deployed) on the platform, a POST to http://localhost:8088/edge/002/sync triggers the gateway to synchronize its configuration.
-
-
-
-### Real-Time Preview of Collected Data
-
-Data collected by the DAC is pushed to the server's MQTT broker in real time; the server **stores** it in the database and **pushes** it to the front end over WebSocket.
-
-WebSocket address: ws://localhost:8088/edge/ws/{device}
-
-Real-time data preview page: http://localhost:8088/edge/rt/{device}
-
-
-
-
-
-### Binding a Structure That Contains Vibration Devices
-
-Create a structure that contains vibration devices; the test looks like this:
-
-
-
-As above, bind the structure to the gateway.
-
-
-
-Simulate a vibration device connecting to the gateway; the logs show the gateway starting to acquire data from the vibration sensors:
-
-
-
-Vibration data is stored locally; the database's scheduled aggregation (continuous queries, CQ) produces minute-level aggregates. The real-time data looks like this:
-
-
-
-
-
-### Real-Time Preview of Dynamic Data
-
-Vibration real-time data is **not pushed** to the platform by default.
-
-When the front end opens a vibration device's real-time data page, it creates a WebSocket subscription, which notifies the device to start reporting data (similar to how video streaming services work); from then on the data is handled like ordinary data.
-
-The real-time data refresh page looks like this:
-
-
-
-When the WebSocket subscription ends, the device is notified to stop the real-time stream (saving bandwidth, device load, and server-side storage).
-
-Planned next: keeping a recent playback history in the cloud, and replaying historical data stored on the device.
-
-
-
-### Use as Standalone Vibration Acquisition Software
-
-Covers vibration acquisition configuration, acquisition, computation, storage, and forwarding. In some scenarios it can replace the DAAS software running on a local industrial PC.
-
-> Note: in cloud working mode, the Vib page on the device shows the configuration but does not allow changes.
-
-
-
-Vibration device configuration: http://10.8.30.244:8828/vib
-
- 
-
-Vibration channel configuration:
-
- 
-
-IP settings:
-
- 
-
-Gateway-side real-time data preview:
-
- 
\ No newline at end of file
diff --git a/doc/技术文档/EDGE-V0.2调试手册.pdf b/doc/技术文档/EDGE-V0.2调试手册.pdf
deleted file mode 100644
index 6dc96f5..0000000
Binary files a/doc/技术文档/EDGE-V0.2调试手册.pdf and /dev/null differ
diff --git a/doc/技术文档/EDGE-环境准备.md b/doc/技术文档/EDGE-环境准备.md
deleted file mode 100644
index 7145164..0000000
--- a/doc/技术文档/EDGE-环境准备.md
+++ /dev/null
@@ -1,69 +0,0 @@
-Connect a USB adapter cable to the board's Console port, as shown:
-
-
-
-
-
-
-
-The computer installs the driver automatically; once it finishes, the serial port number can be found in Device Manager:
-
-
-
-
-
-You can then open an SSH session with putty, xshell, or a similar remote tool:
-
-
-
-
-
-
-
-> The default username and password are both forlinx; use `sudo su` to switch to the superuser account, whose password is also `forlinx`
-
-
-
-Configure the network:
-
-Connect the board to the office router with an Ethernet cable,
-
-```sh
-root@forlinx:/etc/netplan# cd /etc/netplan/
-root@forlinx:/etc/netplan# ls
-50-cloud-init.yaml
-root@forlinx:/etc/netplan# vi 50-cloud-init.yaml
-network:
- ethernets:
- eth0:
- dhcp4: no
- addresses: [10.8.30.244/24]
- gateway4: 10.8.30.1
- nameservers:
- addresses: [114.114.114.114]
- search: [localdomain]
- version: 2
-~
-root@forlinx:/etc/netplan# netplan apply
-root@forlinx:/etc/netplan# ip a
-```
-
-
-
-My configuration here is:
-
-```yaml
-network:
-  ethernets:
-    eth0:
-      dhcp4: no
-      addresses: [10.8.30.244/24]  # address and netmask
-      gateway4: 10.8.30.1          # gateway address
-      nameservers:
-        addresses: [114.114.114.114]  # DNS
-        search: [localdomain]
-  version: 2
-
-```
-
-Once the network is configured, you can run the follow-up commands; for details see 《EDGE-V-N调试手册.pdf》
\ No newline at end of file
diff --git a/doc/技术文档/EDGE-环境准备.pdf b/doc/技术文档/EDGE-环境准备.pdf
deleted file mode 100644
index addc941..0000000
Binary files a/doc/技术文档/EDGE-环境准备.pdf and /dev/null differ
diff --git a/doc/技术文档/Flink升级差异性文档.docx b/doc/技术文档/Flink升级差异性文档.docx
deleted file mode 100644
index 7c42162..0000000
Binary files a/doc/技术文档/Flink升级差异性文档.docx and /dev/null differ
diff --git a/doc/技术文档/IOT产品线汇报1020.pdf b/doc/技术文档/IOT产品线汇报1020.pdf
deleted file mode 100644
index 4b7b14a..0000000
Binary files a/doc/技术文档/IOT产品线汇报1020.pdf and /dev/null differ
diff --git a/doc/技术文档/Java调用js函数.docx b/doc/技术文档/Java调用js函数.docx
deleted file mode 100644
index 1527923..0000000
Binary files a/doc/技术文档/Java调用js函数.docx and /dev/null differ
diff --git a/doc/技术文档/Script-analysis接口.docx b/doc/技术文档/Script-analysis接口.docx
deleted file mode 100644
index fc88f73..0000000
Binary files a/doc/技术文档/Script-analysis接口.docx and /dev/null differ
diff --git a/doc/技术文档/UCloud-DAC上云测试.md b/doc/技术文档/UCloud-DAC上云测试.md
deleted file mode 100644
index eca360b..0000000
--- a/doc/技术文档/UCloud-DAC上云测试.md
+++ /dev/null
@@ -1,505 +0,0 @@
-## UCloud Cloud Host
-
-https://console.ucloud.cn/
-
-Account password FS12345678
-
-
-
-## Environment Setup
-
-**Postgres**
-
-```sh
-apt update
-apt install postgresql postgresql-contrib
-
-su postgres
-> psql
-> # alter user postgres with password 'ROOT';
-
-vi /etc/postgresql/9.5/main/pg_hba.conf
-# host all all 10.60.178.0/24 md5
-service postgresql restart
-
-createdb iOTA_console
-psql -d iOTA_console < dump.sql
-```
-
-
-
-**Docker**
-
-```sh
-curl -sSL https://get.daocloud.io/docker | sh
-```
-
-
-
-**Redis**
-
-Because Redis's default port is unsafe when exposed to the public network, enable the ubuntu firewall
-
-```sh
-ufw enable
-
-ufw status
-
-# By default, allow external access to this host
-ufw default allow
-
-# Deny external access to port 6379
-ufw deny 6379
-
-# A few other examples
-# Allow 10.0.1.0/10 to access port 7277 on 10.8.30.117
-ufw allow proto tcp from 10.0.1.0/10 to 10.8.30.117 7277
-
-Status: active
-
-To                         Action      From
---                         ------      ----
-6379                       DENY        Anywhere
-6379 (v6)                  DENY        Anywhere (v6)
-```
-
-With the firewall configured, the public network still could not reach the open ports. In the UCloud console:
-
-Basic network (UNet) > External firewall > Create firewall (custom rules)
-
-Open all TCP ports and block only redis 6379
-
-
-
-UHost > Associated resource operations > Change external firewall
-
-
-
-
-
-Install redis
-
-```sh
-apt update
-apt install redis-server
-```
-
-
-
-
-
-
-
-## Traffic Mirroring Test
-
-With the server room being relocated, we plan to run a single DAC instance in the cloud for data acquisition.
-
-Preparation: run a live traffic-mirroring test that does not affect the production DAC's acquisition. Setup:
-
-1. Passive connections on the proxy are mirrored to UCloud.
-   1. The stream is copied one way only: the device -> proxy -> DAC path is open, while DAC -> proxy -|-> device is cut.
-2. Active connections
-   1. For MQTT and HTTP connections made out to third-party servers,
-   2. append a suffix to the MQTT client ID
-3. Cut off the driver's writes
-
-
-Key code
-```go
-// io.Copy cannot be run more than once on the same stream
-
-
-// If OutTarget is configured, copy the stream locally while also mirroring it outward
-func Pipeout(conn1, conn2 net.Conn, port string, wg *sync.WaitGroup, reg []byte) {
- if OutTarget != "" {
- tt := fmt.Sprintf("%s:%s", OutTarget, port)
- tw := NewTeeWriter(tt, reg)
- tw.Start()
- if _, err := io.Copy(tw, io.TeeReader(conn2 /*read*/, conn1 /*write*/)); err != nil {
- log.Error("pipeout error: %v", err)
- }
- tw.Close()
- } else {
- io.Copy(conn1, conn2)
- }
- conn1.Close()
- log.Info("[tcp] close the connect at local:%s and remote:%s", conn1.LocalAddr().String(), conn1.RemoteAddr().String())
- wg.Done()
-}
-
-// traffic-mirroring writer
-type TeeWriter struct {
-	target    string           // forwarding target address
-	conn      net.Conn         // forwarding connection
-	isConnect bool             // whether the forward connection is up
-	exitCh    chan interface{} // exit signal
-	registry  []byte
-}
-
-func NewTeeWriter(target string, reg []byte) *TeeWriter {
- return &TeeWriter{
- target: target,
- exitCh: make(chan interface{}),
- registry: reg,
- }
-}
-
-func (w *TeeWriter) Start() error {
- go w.keep_connect()
- return nil
-}
-
-func (w *TeeWriter) Close() error {
- close(w.exitCh)
- return nil
-}
-
-func (w *TeeWriter) Write(p []byte) (n int, err error) {
- defer func() {
- if err := recover(); err != nil {
- log.Error("teewrite failed %s", w.target)
- }
- }()
- if w.isConnect {
- go w.conn.Write(p)
- }
-	// this method never returns an error
- return len(p), nil
-}
-
-func (w *TeeWriter) keep_connect() {
- defer func() {
- if err := recover(); err != nil {
- log.Error("teewrite keep connect error: %v", err)
- }
- }()
- for {
- if cont := func() bool {
- var err error
- w.conn, err = net.Dial("tcp", w.target)
- if err != nil {
- select {
- case <-time.After(time.Second):
- return true
- case <-w.exitCh:
- return false
- }
- }
- w.isConnect = true
- defer func() {
- w.isConnect = false
- }()
- defer w.conn.Close()
-
- if w.registry != nil {
- _, err := w.conn.Write(w.registry)
- if err != nil {
- return true
- }
- }
-
- if err := w.conn.(*net.TCPConn).SetKeepAlive(true); err != nil {
- return true
- }
- if err := w.conn.(*net.TCPConn).SetKeepAlivePeriod(30 * time.Second); err != nil {
- return true
- }
-
- connLostCh := make(chan interface{})
- defer close(connLostCh)
-
-		// watch the remote bconn connection
- go func() {
- defer func() {
- log.Info("bconn check exit")
- recover() // write to closed channel
- }()
- one := make([]byte, 1)
- for {
- if _, err := w.conn.Read(one); err != nil {
- log.Info("bconn disconnected")
- connLostCh <- err
- return
- }
- time.Sleep(time.Second)
- }
- }()
-
- select {
- case <-connLostCh:
- time.Sleep(10 * time.Second)
- return true
- case <-w.exitCh:
- return false
- }
- }(); !cont {
- break
- } else {
- time.Sleep(time.Second)
- }
- }
-}
-```
-
-
-
-The traffic-mirroring test was never actually executed...
-
-
-
-## DAC Online Test
-
-Configuration:
-
-```json
-
-```
-
-`url.maps.json` needs to be configured
-
-```json
-"47.106.112.113:1883"
-"47.104.249.223:1883"
-"mqtt.starwsn.com:1883"
-"test.tdzntech.com:1883"
-"mqtt.tdzntech.com:1883"
-
-"s1.cn.mqtt.theiota.cn:8883"
-"mqtt.datahub.anxinyun.cn:1883"
-
-"218.3.126.49:3883"
-"221.230.55.28:1883"
-
-"anxin-m1:1883"
-"10.8.25.201:8883"
-"10.8.25.231:1883"
-"iota-m1:1883"
-```
-
-
-
-The following data cannot be retrieved:
-
-1. GNSS data
-
-   http.get error: Get "http://10.8.25.254:7005/gnss/6542/data?startTime=1575443410000&endTime=1637628026000": dial tcp 10.8.25.254:7005: i/o timeout
-
-2. 时
-
-
-
-## Investigating a DAC Memory Issue
-
-> These notes are not very well organized; see https://www.cnblogs.com/gao88/p/9849819.html for background
->
-> Using pprof:
->
-> https://segmentfault.com/a/1190000020964967
->
-> https://cizixs.com/2017/09/11/profiling-golang-program/
-
-Check the process's memory usage:
-
-```sh
-top -c
-# shift+M to sort by memory
-top - 09:26:25 up 1308 days, 15:32,  2 users,  load average: 3.14, 3.70, 4.37
-Tasks: 582 total,   1 running, 581 sleeping,   0 stopped,   0 zombie
-%Cpu(s):  5.7 us,  1.5 sy,  0.0 ni, 92.1 id,  0.0 wa,  0.0 hi,  0.8 si,  0.0 st
-KiB Mem : 41147560 total,   319216 free, 34545608 used,  6282736 buff/cache
-KiB Swap:        0 total,        0 free,        0 used.  9398588 avail Mem
-
-  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
-18884 root      20   0 11.238g 0.010t  11720 S  48.8 26.7  39:52.43 ./dac
-```
-
-The dac process is using more than 10 GB of memory
-
-
-
-Find the container it belongs to:
-
-```sh
-root@iota-n3:/home/iota/etwatcher# systemd-cgls | grep 18884
-│ │ ├─32574 grep --color=auto 18884
-│ │ └─18884 ./dac
-```
-
-
-
-```sh
-for i in $(docker container ls --format "{{.ID}}"); do docker inspect -f '{{.State.Pid}} {{.Name}}' $i; done | grep 18884
-```
-
-This locates dac-2
-
-
-
-> To get the PID of a given container:
->
-> docker top container_id
->
-> To get the PIDs of all containers:
->
-> ```sh
-> for l in `docker ps -q`;do docker top $l|awk -v dn="$l" 'NR>1 {print dn " PID is " $2}';done
-> ```
->
-> Or via docker inspect:
->
-> ```sh
-> docker inspect --format "{{.State.Pid}}" container_id/name
-> ```
-
-Inspect the dac-2 container:
-
-```sh
-root@iota-n3:~# docker ps | grep dac-2
-05b04c4667bc repository.anxinyun.cn/iota/dac "./dac" 2 hours ago Up 2 hours k8s_iota-dac_iota-dac-2_iota_d9879026-465b-11ec-ad00-c81f66cfe365_1
-be5682a82cda theiota.store/iota/filebeat "filebeat -e" 4 hours ago Up 4 hours k8s_iota-filebeat_iota-dac-2_iota_d9879026-465b-11ec-ad00-c81f66cfe365_0
-f23499bc5c22 gcr.io/google_containers/pause-amd64:3.0 "/pause" 4 hours ago Up 4 hours k8s_POD_iota-dac-2_iota_d9879026-465b-11ec-ad00-c81f66cfe365_0
-c5bcbf648268 repository.anxinyun.cn/iota/dac "./dac" 6 days ago Up 6 days k8s_iota-dac_iota-dac-2_iota_2364cf27-41a0-11ec-ad00-c81f66cfe365_0
-```
-
-> There are two dac containers? (ignore the zombie one for now)
-
-
-
-Enter the container:
-
-```sh
-docker exec -it 05b04c4667bc /bin/ash
-```
-
-
-
-> There is no curl command in the container?
->
-> Use wget -q -O - https://www.baidu.com to print a response directly
-
-
-
-On the host machine:
-
-```sh
-go tool pprof -inuse_space http://10.244.1.235:6060/debug/pprof/heap
-
-# top: show the current top-10 memory consumers
-(pprof) top
-Showing nodes accounting for 913.11MB, 85.77% of 1064.60MB total
-Dropped 215 nodes (cum <= 5.32MB)
-Showing top 10 nodes out of 109
- flat flat% sum% cum cum%
- 534.20MB 50.18% 50.18% 534.20MB 50.18% runtime.malg
- 95.68MB 8.99% 59.17% 95.68MB 8.99% iota/vendor/github.com/yuin/gopher-lua.newLTable
- 61.91MB 5.82% 64.98% 90.47MB 8.50% iota/vendor/github.com/yuin/gopher-lua.newFuncContext
- 50.23MB 4.72% 69.70% 50.23MB 4.72% iota/vendor/github.com/yuin/gopher-lua.newRegistry
- 34.52MB 3.24% 72.94% 34.52MB 3.24% iota/vendor/github.com/yuin/gopher-lua.(*LTable).RawSetString
- 33MB 3.10% 76.04% 33MB 3.10% iota/vendor/github.com/eclipse/paho%2emqtt%2egolang.outgoing
- 31MB 2.91% 78.95% 31MB 2.91% iota/vendor/github.com/eclipse/paho%2emqtt%2egolang.errorWatch
- 31MB 2.91% 81.87% 31MB 2.91% iota/vendor/github.com/eclipse/paho%2emqtt%2egolang.keepalive
- 27.06MB 2.54% 84.41% 27.06MB 2.54% iota/vendor/github.com/yuin/gopher-lua.newFunctionProto (inline)
- 14.50MB 1.36% 85.77% 14.50MB 1.36% iota/vendor/github.com/eclipse/paho%2emqtt%2egolang.alllogic
-```
-
-
-
-> top lists the biggest consumers
->
-> list shows function source annotated with sample data
->
-> disasm shows assembly annotated with sample data
->
-> the web command generates an SVG graph
-
-
-
-Run go tool pprof on the server to generate a profile file, copy it to a local Windows machine, and run:
-
-
-
-
-
-> Install graphviz
->
-> https://graphviz.gitlab.io/_pages/Download/Download_windows.html
->
-> Download the zip, extract it, and add it to the system environment variables
->
-> ```sh
-> C:\Users\yww08>dot -version
-> dot - graphviz version 2.45.20200701.0038 (20200701.0038)
-> There is no layout engine support for "dot"
-> Perhaps "dot -c" needs to be run (with installer's privileges) to register the plugins?
-> ```
-
-> ```sh
-> # run dot initialization
->
-> dot -c
-> ```
-
-
-
-Run pprof locally:
-
-```sh
-go tool pprof --http=:8080 pprof.dac.alloc_objects.alloc_space.inuse_objects.inuse_space.003.pb.gz
-```
-
-
-
-Memory usage is concentrated in:
-
-runtime.malg
-
-A search through the available material shows the Go project has a long-standing official issue for this; it is understood but hard to fix. As described there:
-Your observation is correct. Currently the runtime never frees the g objects created for goroutines, though it does reuse them. The main reason for this is that the scheduler often manipulates g pointers without write barriers (a lot of scheduler code runs without a P, and hence cannot have write barriers), and this makes it very hard to determine when a g can be garbage collected.
-
-In short, Go uses a concurrent garbage collector, and the scheduler manipulates goroutine (g) pointers without write barriers (much scheduler code runs without a P and therefore cannot use write barriers — see draveness's analysis), so the collector can hardly ever decide that a g is reclaimable; the g objects are reused but never freed, and the goroutine pointer information held by the scheduler effectively leaks.
-————————————————
-Copyright notice: this passage is an original article by CSDN blogger 「wuyuhao13579」, licensed under CC 4.0 BY-SA; the original link and this notice are retained as required.
-Original link: https://blog.csdn.net/wuyuhao13579/article/details/109079570
-
-
-
-Find the process logs:
-
-The problematic DAC's log repeats:
-
-```sh
-Loss connection
-```
-
-This log line is emitted in the DAC code when the MQTT connection drops. The source:
-
-```go
-func (d *Mqtt) Connect() (err error) {
-
- //TODO not safe
- d.setConnStat(statInit)
- //decode
-
- //set opts
- opts := pahomqtt.NewClientOptions().AddBroker(d.config.URL)
- opts.SetClientID(d.config.ClientID)
- opts.SetCleanSession(d.config.CleanSessionFlag)
- opts.SetKeepAlive(time.Second * time.Duration(d.config.KeepAlive)) // 30s
- opts.SetPingTimeout(time.Second * time.Duration(d.config.KeepAlive*2))
- opts.SetConnectionLostHandler(func(c pahomqtt.Client, err error) {
-		// callback invoked when the MQTT connection drops
- log.Debug("[Mqtt] Loss connection, %s %v", err, d.config)
- d.terminateFlag <- true
- //d.Reconnect()
- })
-}
-```
-
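One detail worth noting in the handler above: if nothing is currently reading `terminateFlag`, `d.terminateFlag <- true` blocks the paho callback goroutine forever, and repeated disconnects would then strand one goroutine per drop — consistent with the `runtime.malg` growth seen earlier. A hedged sketch (the channel type is assumed) of delivering the signal without blocking:

```go
package main

import "fmt"

// signalTerminate delivers the terminate flag without ever blocking the
// caller: if nobody is listening and no buffer space is free, the signal
// is dropped instead of stranding the callback goroutine.
func signalTerminate(terminateFlag chan bool) bool {
	select {
	case terminateFlag <- true:
		return true // delivered
	default:
		return false // would block; drop instead
	}
}

func main() {
	ch := make(chan bool, 1)         // one buffered slot, no receiver yet
	fmt.Println(signalTerminate(ch)) // true: the buffered slot takes it
	fmt.Println(signalTerminate(ch)) // false: buffer full, signal dropped
}
```

A buffered channel plus the `select`/`default` pattern keeps the connection-lost callback from ever parking, at the cost of possibly coalescing repeated terminate signals — usually the desired behavior for a shutdown flag.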
-
-
-## Object Storage (OSS)
-
-Alibaba Cloud OSS basic concepts: https://help.aliyun.com/document_detail/31827.html
-
-
-
diff --git a/doc/技术文档/flink关键函数说明.docx b/doc/技术文档/flink关键函数说明.docx
deleted file mode 100644
index b8861b3..0000000
Binary files a/doc/技术文档/flink关键函数说明.docx and /dev/null differ
diff --git a/doc/技术文档/flink数据仓库.docx b/doc/技术文档/flink数据仓库.docx
deleted file mode 100644
index ed69c14..0000000
Binary files a/doc/技术文档/flink数据仓库.docx and /dev/null differ
diff --git a/doc/技术文档/iceberg预研/roadmap.pptx b/doc/技术文档/iceberg预研/roadmap.pptx
deleted file mode 100644
index 129f998..0000000
Binary files a/doc/技术文档/iceberg预研/roadmap.pptx and /dev/null differ
diff --git a/doc/技术文档/iceberg预研/杨华.pdf b/doc/技术文档/iceberg预研/杨华.pdf
deleted file mode 100644
index 5cc3886..0000000
Binary files a/doc/技术文档/iceberg预研/杨华.pdf and /dev/null differ
diff --git a/doc/技术文档/iceberg预研/胡争.pdf b/doc/技术文档/iceberg预研/胡争.pdf
deleted file mode 100644
index a8bf272..0000000
Binary files a/doc/技术文档/iceberg预研/胡争.pdf and /dev/null differ
diff --git a/doc/技术文档/iceberg预研/邵赛赛.pdf b/doc/技术文档/iceberg预研/邵赛赛.pdf
deleted file mode 100644
index 7b4697b..0000000
Binary files a/doc/技术文档/iceberg预研/邵赛赛.pdf and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121123929955.png b/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121123929955.png
deleted file mode 100644
index 540b9a3..0000000
Binary files a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121123929955.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121135940527.png b/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121135940527.png
deleted file mode 100644
index ec96b77..0000000
Binary files a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121135940527.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121152314499.png b/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121152314499.png
deleted file mode 100644
index e0a29be..0000000
Binary files a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121152314499.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121152705457.png b/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121152705457.png
deleted file mode 100644
index 6b5e363..0000000
Binary files a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121152705457.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121154630802.png b/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121154630802.png
deleted file mode 100644
index 889b076..0000000
Binary files a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121154630802.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121162513190.png b/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121162513190.png
deleted file mode 100644
index 56bbd15..0000000
Binary files a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121162513190.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121162951692.png b/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121162951692.png
deleted file mode 100644
index 96ec62a..0000000
Binary files a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121162951692.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121163144291.png b/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121163144291.png
deleted file mode 100644
index fec4ea8..0000000
Binary files a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121163144291.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121163903101.png b/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121163903101.png
deleted file mode 100644
index 049a55f..0000000
Binary files a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121163903101.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121164158554.png b/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121164158554.png
deleted file mode 100644
index 21fce5e..0000000
Binary files a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121164158554.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121164306992.png b/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121164306992.png
deleted file mode 100644
index 702a35f..0000000
Binary files a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121164306992.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121164715214.png b/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121164715214.png
deleted file mode 100644
index 2c32c55..0000000
Binary files a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121164715214.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121165041737.png b/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121165041737.png
deleted file mode 100644
index 4f5f18f..0000000
Binary files a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121165041737.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121165146403.png b/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121165146403.png
deleted file mode 100644
index 6bd6e0d..0000000
Binary files a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121165146403.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121165230596.png b/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121165230596.png
deleted file mode 100644
index 82c716d..0000000
Binary files a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121165230596.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121165302506.png b/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121165302506.png
deleted file mode 100644
index 3c60f51..0000000
Binary files a/doc/技术文档/imgs/EDGE-V0.1调试手册/image-20220121165302506.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-环境准备/image-20220407085859032.png b/doc/技术文档/imgs/EDGE-环境准备/image-20220407085859032.png
deleted file mode 100644
index 7422476..0000000
Binary files a/doc/技术文档/imgs/EDGE-环境准备/image-20220407085859032.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-环境准备/image-20220407090121447.png b/doc/技术文档/imgs/EDGE-环境准备/image-20220407090121447.png
deleted file mode 100644
index 79102e3..0000000
Binary files a/doc/技术文档/imgs/EDGE-环境准备/image-20220407090121447.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-环境准备/image-20220407090243473.png b/doc/技术文档/imgs/EDGE-环境准备/image-20220407090243473.png
deleted file mode 100644
index 5614cae..0000000
Binary files a/doc/技术文档/imgs/EDGE-环境准备/image-20220407090243473.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-环境准备/image-20220407090353559.png b/doc/技术文档/imgs/EDGE-环境准备/image-20220407090353559.png
deleted file mode 100644
index 6cf92bf..0000000
Binary files a/doc/技术文档/imgs/EDGE-环境准备/image-20220407090353559.png and /dev/null differ
diff --git a/doc/技术文档/imgs/EDGE-环境准备/image-20220407090848867.png b/doc/技术文档/imgs/EDGE-环境准备/image-20220407090848867.png
deleted file mode 100644
index 131cde1..0000000
Binary files a/doc/技术文档/imgs/EDGE-环境准备/image-20220407090848867.png and /dev/null differ
diff --git a/doc/技术文档/imgs/UCloud-DAC上云测试/image-20211116103902511.png b/doc/技术文档/imgs/UCloud-DAC上云测试/image-20211116103902511.png
deleted file mode 100644
index 9998e62..0000000
Binary files a/doc/技术文档/imgs/UCloud-DAC上云测试/image-20211116103902511.png and /dev/null differ
diff --git a/doc/技术文档/imgs/UCloud-DAC上云测试/image-20211116112452820.png b/doc/技术文档/imgs/UCloud-DAC上云测试/image-20211116112452820.png
deleted file mode 100644
index a4ed750..0000000
Binary files a/doc/技术文档/imgs/UCloud-DAC上云测试/image-20211116112452820.png and /dev/null differ
diff --git a/doc/技术文档/imgs/UCloud-DAC上云测试/image-20211122152046659.png b/doc/技术文档/imgs/UCloud-DAC上云测试/image-20211122152046659.png
deleted file mode 100644
index 60eb595..0000000
Binary files a/doc/技术文档/imgs/UCloud-DAC上云测试/image-20211122152046659.png and /dev/null differ
diff --git a/doc/技术文档/imgs/UCloud-DAC上云测试/image-20211122152136855.png b/doc/技术文档/imgs/UCloud-DAC上云测试/image-20211122152136855.png
deleted file mode 100644
index dd265c5..0000000
Binary files a/doc/技术文档/imgs/UCloud-DAC上云测试/image-20211122152136855.png and /dev/null differ
diff --git a/doc/技术文档/imgs/数据湖2/377adab44aed2e73ddb8d5980337718386d6faf4.jpeg b/doc/技术文档/imgs/数据湖2/377adab44aed2e73ddb8d5980337718386d6faf4.jpeg
deleted file mode 100644
index e288465..0000000
Binary files a/doc/技术文档/imgs/数据湖2/377adab44aed2e73ddb8d5980337718386d6faf4.jpeg and /dev/null differ
diff --git a/doc/技术文档/imgs/数据湖2/77094b36acaf2edd63d01449f226d1e139019328.jpeg b/doc/技术文档/imgs/数据湖2/77094b36acaf2edd63d01449f226d1e139019328.jpeg
deleted file mode 100644
index 2a04e41..0000000
Binary files a/doc/技术文档/imgs/数据湖2/77094b36acaf2edd63d01449f226d1e139019328.jpeg and /dev/null differ
diff --git a/doc/技术文档/imgs/数据湖2/a6efce1b9d16fdfa26174a12c9b95c5c95ee7b96.jpeg b/doc/技术文档/imgs/数据湖2/a6efce1b9d16fdfa26174a12c9b95c5c95ee7b96.jpeg
deleted file mode 100644
index 3d00bc4..0000000
Binary files a/doc/技术文档/imgs/数据湖2/a6efce1b9d16fdfa26174a12c9b95c5c95ee7b96.jpeg and /dev/null differ
diff --git a/doc/技术文档/imgs/数据湖2/b58f8c5494eef01f5824f06566c8492dbc317d19.jpeg b/doc/技术文档/imgs/数据湖2/b58f8c5494eef01f5824f06566c8492dbc317d19.jpeg
deleted file mode 100644
index 2659d70..0000000
Binary files a/doc/技术文档/imgs/数据湖2/b58f8c5494eef01f5824f06566c8492dbc317d19.jpeg and /dev/null differ
diff --git a/doc/技术文档/imgs/数据湖2/f3d3572c11dfa9ec7f198010e3e6270b918fc146.jpeg b/doc/技术文档/imgs/数据湖2/f3d3572c11dfa9ec7f198010e3e6270b918fc146.jpeg
deleted file mode 100644
index 8d2c29f..0000000
Binary files a/doc/技术文档/imgs/数据湖2/f3d3572c11dfa9ec7f198010e3e6270b918fc146.jpeg and /dev/null differ
diff --git a/doc/技术文档/imgs/数据湖2/image-20220119142219318.png b/doc/技术文档/imgs/数据湖2/image-20220119142219318.png
deleted file mode 100644
index 87f5e39..0000000
Binary files a/doc/技术文档/imgs/数据湖2/image-20220119142219318.png and /dev/null differ
diff --git a/doc/技术文档/imgs/数据湖2/image-20220120164032739.png b/doc/技术文档/imgs/数据湖2/image-20220120164032739.png
deleted file mode 100644
index 8727740..0000000
Binary files a/doc/技术文档/imgs/数据湖2/image-20220120164032739.png and /dev/null differ
diff --git a/doc/技术文档/imgs/数据湖2/image-20220127110428706.png b/doc/技术文档/imgs/数据湖2/image-20220127110428706.png
deleted file mode 100644
index a86d371..0000000
Binary files a/doc/技术文档/imgs/数据湖2/image-20220127110428706.png and /dev/null differ
diff --git a/doc/技术文档/imgs/视频产品构想/image-20220129153126420.png b/doc/技术文档/imgs/视频产品构想/image-20220129153126420.png
deleted file mode 100644
index 34b1a52..0000000
Binary files a/doc/技术文档/imgs/视频产品构想/image-20220129153126420.png and /dev/null differ
diff --git a/doc/技术文档/imgs/视频产品构想/image-20220129153140317.png b/doc/技术文档/imgs/视频产品构想/image-20220129153140317.png
deleted file mode 100644
index 2e96076..0000000
Binary files a/doc/技术文档/imgs/视频产品构想/image-20220129153140317.png and /dev/null differ
diff --git a/doc/技术文档/imgs/视频产品构想/image-20220129153624593.png b/doc/技术文档/imgs/视频产品构想/image-20220129153624593.png
deleted file mode 100644
index 8ce924f..0000000
Binary files a/doc/技术文档/imgs/视频产品构想/image-20220129153624593.png and /dev/null differ
diff --git a/doc/技术文档/imgs/视频产品构想/image-20220303173016767.png b/doc/技术文档/imgs/视频产品构想/image-20220303173016767.png
deleted file mode 100644
index 6041e29..0000000
Binary files a/doc/技术文档/imgs/视频产品构想/image-20220303173016767.png and /dev/null differ
diff --git a/doc/技术文档/imgs/视频产品构想/image-20220304094035019.png b/doc/技术文档/imgs/视频产品构想/image-20220304094035019.png
deleted file mode 100644
index f3d34b6..0000000
Binary files a/doc/技术文档/imgs/视频产品构想/image-20220304094035019.png and /dev/null differ
diff --git a/doc/技术文档/imgs/视频产品构想/image-20220305195430986.png b/doc/技术文档/imgs/视频产品构想/image-20220305195430986.png
deleted file mode 100644
index 93b8605..0000000
Binary files a/doc/技术文档/imgs/视频产品构想/image-20220305195430986.png and /dev/null differ
diff --git a/doc/技术文档/imgs/视频产品构想/image-20220305200649152.png b/doc/技术文档/imgs/视频产品构想/image-20220305200649152.png
deleted file mode 100644
index 423bfda..0000000
Binary files a/doc/技术文档/imgs/视频产品构想/image-20220305200649152.png and /dev/null differ
diff --git a/doc/技术文档/imgs/视频产品构想/image-20220307090023722.png b/doc/技术文档/imgs/视频产品构想/image-20220307090023722.png
deleted file mode 100644
index 208563f..0000000
Binary files a/doc/技术文档/imgs/视频产品构想/image-20220307090023722.png and /dev/null differ
diff --git a/doc/技术文档/imgs/视频产品构想/image-20220307092436931.png b/doc/技术文档/imgs/视频产品构想/image-20220307092436931.png
deleted file mode 100644
index eb183de..0000000
Binary files a/doc/技术文档/imgs/视频产品构想/image-20220307092436931.png and /dev/null differ
diff --git a/doc/技术文档/imgs/视频产品构想/image-20220307111257305.png b/doc/技术文档/imgs/视频产品构想/image-20220307111257305.png
deleted file mode 100644
index 3093b6b..0000000
Binary files a/doc/技术文档/imgs/视频产品构想/image-20220307111257305.png and /dev/null differ
diff --git a/doc/技术文档/imgs/视频产品构想/webp.webp b/doc/技术文档/imgs/视频产品构想/webp.webp
deleted file mode 100644
index 2202277..0000000
Binary files a/doc/技术文档/imgs/视频产品构想/webp.webp and /dev/null differ
diff --git a/doc/技术文档/imgs/视频产品构想/视频GB平台.png b/doc/技术文档/imgs/视频产品构想/视频GB平台.png
deleted file mode 100644
index 24c3a2e..0000000
Binary files a/doc/技术文档/imgs/视频产品构想/视频GB平台.png and /dev/null differ
diff --git a/doc/技术文档/imgs/边缘网关功能说明/image-20220407085859032.png b/doc/技术文档/imgs/边缘网关功能说明/image-20220407085859032.png
deleted file mode 100644
index 7422476..0000000
Binary files a/doc/技术文档/imgs/边缘网关功能说明/image-20220407085859032.png and /dev/null differ
diff --git a/doc/技术文档/imgs/边缘网关功能说明/image-20220407090121447.png b/doc/技术文档/imgs/边缘网关功能说明/image-20220407090121447.png
deleted file mode 100644
index 79102e3..0000000
Binary files a/doc/技术文档/imgs/边缘网关功能说明/image-20220407090121447.png and /dev/null differ
diff --git a/doc/技术文档/imgs/边缘网关功能说明/image-20220407090243473.png b/doc/技术文档/imgs/边缘网关功能说明/image-20220407090243473.png
deleted file mode 100644
index 5614cae..0000000
Binary files a/doc/技术文档/imgs/边缘网关功能说明/image-20220407090243473.png and /dev/null differ
diff --git a/doc/技术文档/imgs/边缘网关功能说明/image-20220407090353559.png b/doc/技术文档/imgs/边缘网关功能说明/image-20220407090353559.png
deleted file mode 100644
index 6cf92bf..0000000
Binary files a/doc/技术文档/imgs/边缘网关功能说明/image-20220407090353559.png and /dev/null differ
diff --git a/doc/技术文档/imgs/边缘网关功能说明/image-20220407090848867.png b/doc/技术文档/imgs/边缘网关功能说明/image-20220407090848867.png
deleted file mode 100644
index 131cde1..0000000
Binary files a/doc/技术文档/imgs/边缘网关功能说明/image-20220407090848867.png and /dev/null differ
diff --git a/doc/技术文档/imgs/边缘网关功能说明/image-20220410164834468.png b/doc/技术文档/imgs/边缘网关功能说明/image-20220410164834468.png
deleted file mode 100644
index 7e202c5..0000000
Binary files a/doc/技术文档/imgs/边缘网关功能说明/image-20220410164834468.png and /dev/null differ
diff --git a/doc/技术文档/imgs/边缘网关功能说明/image-20220410165008488.png b/doc/技术文档/imgs/边缘网关功能说明/image-20220410165008488.png
deleted file mode 100644
index 7e202c5..0000000
Binary files a/doc/技术文档/imgs/边缘网关功能说明/image-20220410165008488.png and /dev/null differ
diff --git a/doc/技术文档/imgs/边缘网关功能说明/image-20220410195611807.png b/doc/技术文档/imgs/边缘网关功能说明/image-20220410195611807.png
deleted file mode 100644
index 33c9fa9..0000000
Binary files a/doc/技术文档/imgs/边缘网关功能说明/image-20220410195611807.png and /dev/null differ
diff --git a/doc/技术文档/imgs/边缘网关功能说明/image-20220410201814278.png b/doc/技术文档/imgs/边缘网关功能说明/image-20220410201814278.png
deleted file mode 100644
index 3680a36..0000000
Binary files a/doc/技术文档/imgs/边缘网关功能说明/image-20220410201814278.png and /dev/null differ
diff --git a/doc/技术文档/imgs/边缘网关功能说明/image-20220410202445108.png b/doc/技术文档/imgs/边缘网关功能说明/image-20220410202445108.png
deleted file mode 100644
index 1e8a6b0..0000000
Binary files a/doc/技术文档/imgs/边缘网关功能说明/image-20220410202445108.png and /dev/null differ
diff --git a/doc/技术文档/imgs/边缘网关功能说明/image-20220410202631604.png b/doc/技术文档/imgs/边缘网关功能说明/image-20220410202631604.png
deleted file mode 100644
index 69f07fc..0000000
Binary files a/doc/技术文档/imgs/边缘网关功能说明/image-20220410202631604.png and /dev/null differ
diff --git a/doc/技术文档/imgs/边缘网关功能说明/image-20220410202731912.png b/doc/技术文档/imgs/边缘网关功能说明/image-20220410202731912.png
deleted file mode 100644
index 10b3939..0000000
Binary files a/doc/技术文档/imgs/边缘网关功能说明/image-20220410202731912.png and /dev/null differ
diff --git a/doc/技术文档/imgs/边缘网关功能说明/image-20220410203228982.png b/doc/技术文档/imgs/边缘网关功能说明/image-20220410203228982.png
deleted file mode 100644
index 6b9a97a..0000000
Binary files a/doc/技术文档/imgs/边缘网关功能说明/image-20220410203228982.png and /dev/null differ
diff --git a/doc/技术文档/imgs/边缘网关功能说明/image-20220410203454972.png b/doc/技术文档/imgs/边缘网关功能说明/image-20220410203454972.png
deleted file mode 100644
index 0b67254..0000000
Binary files a/doc/技术文档/imgs/边缘网关功能说明/image-20220410203454972.png and /dev/null differ
diff --git a/doc/技术文档/imgs/边缘网关功能说明/image-20220410203744505.png b/doc/技术文档/imgs/边缘网关功能说明/image-20220410203744505.png
deleted file mode 100644
index bc38cc4..0000000
Binary files a/doc/技术文档/imgs/边缘网关功能说明/image-20220410203744505.png and /dev/null differ
diff --git a/doc/技术文档/imgs/边缘网关功能说明/image-20220410204251741.png b/doc/技术文档/imgs/边缘网关功能说明/image-20220410204251741.png
deleted file mode 100644
index b0c4a23..0000000
Binary files a/doc/技术文档/imgs/边缘网关功能说明/image-20220410204251741.png and /dev/null differ
diff --git a/doc/技术文档/imgs/边缘网关功能说明/image-20220410204712400.png b/doc/技术文档/imgs/边缘网关功能说明/image-20220410204712400.png
deleted file mode 100644
index 685a2d4..0000000
Binary files a/doc/技术文档/imgs/边缘网关功能说明/image-20220410204712400.png and /dev/null differ
diff --git a/doc/技术文档/imgs/边缘网关功能说明/image-20220410204908890.png b/doc/技术文档/imgs/边缘网关功能说明/image-20220410204908890.png
deleted file mode 100644
index 047fb25..0000000
Binary files a/doc/技术文档/imgs/边缘网关功能说明/image-20220410204908890.png and /dev/null differ
diff --git a/doc/技术文档/~$$机房拓扑--非专业.~vsdx b/doc/技术文档/~$$机房拓扑--非专业.~vsdx
deleted file mode 100644
index be4c93e..0000000
Binary files a/doc/技术文档/~$$机房拓扑--非专业.~vsdx and /dev/null differ
diff --git a/doc/技术文档/信息办2022年度工作计划.xlsx b/doc/技术文档/信息办2022年度工作计划.xlsx
deleted file mode 100644
index 598680a..0000000
Binary files a/doc/技术文档/信息办2022年度工作计划.xlsx and /dev/null differ
diff --git a/doc/技术文档/和风天气接口.docx b/doc/技术文档/和风天气接口.docx
deleted file mode 100644
index 00ec18e..0000000
Binary files a/doc/技术文档/和风天气接口.docx and /dev/null differ
diff --git a/doc/技术文档/声光告警下发.docx b/doc/技术文档/声光告警下发.docx
deleted file mode 100644
index b45488f..0000000
Binary files a/doc/技术文档/声光告警下发.docx and /dev/null differ
diff --git a/doc/技术文档/存储.png b/doc/技术文档/存储.png
deleted file mode 100644
index 4a6ecfd..0000000
Binary files a/doc/技术文档/存储.png and /dev/null differ
diff --git a/doc/技术文档/安心云Et模块业务代码梳理.docx b/doc/技术文档/安心云Et模块业务代码梳理.docx
deleted file mode 100644
index 472efda..0000000
Binary files a/doc/技术文档/安心云Et模块业务代码梳理.docx and /dev/null differ
diff --git a/doc/技术文档/振动边缘场景方案设计-GODAAS.pdf b/doc/技术文档/振动边缘场景方案设计-GODAAS.pdf
deleted file mode 100644
index 7d54ee9..0000000
Binary files a/doc/技术文档/振动边缘场景方案设计-GODAAS.pdf and /dev/null differ
diff --git a/doc/技术文档/数据湖2.md b/doc/技术文档/数据湖2.md
deleted file mode 100644
index 74159eb..0000000
--- a/doc/技术文档/数据湖2.md
+++ /dev/null
@@ -1,998 +0,0 @@
-### Environment recovery
-
-**Install a new MySQL**
-
-```shell
-# command 1
-sudo apt-get update
-# command 2
-sudo apt-get install mysql-server
-
-# Initialize the secure configuration (optional)
-sudo mysql_secure_installation
-
-# Remote access and privileges (optional)
-# Note: a freshly initialized MySQL does not allow remote login, which is good for security. If you need remote access, create a lower-privileged user instead of logging in remotely as root -- an exposed root account risks the database being dropped by an attacker.
-# First, edit /etc/mysql/my.cnf and comment out the line bind-address = 127.0.0.1 to remove the address binding (or bind a fixed address of your own; with a dynamically assigned broadband IP that is not practical).
-# Then grant a user remote-login privileges with the two statements below.
-
-grant all PRIVILEGES on *.* to user1@'%' identified by '123456' WITH GRANT OPTION;
-FLUSH PRIVILEGES;
-service mysql restart
-```
-
-
-
-**Restart Hive**
-
-Still on the `37` test machine, in `/home/anxin/apache-hive-3.1.2-bin`
-
-```sh
-./schematool -initSchema -dbType mysql
-# Load my environment variables, because this machine also has Ambari's Hive installed
-source /etc/profile
-hive --service metastore
-
-# P.S. my environment variables
-export JAVA_HOME=/usr/local/java/jdk1.8.0_131
-export JAVA_HOME=/home/anxin/jdk8_322/jdk8u322-b06
-export JRE_HOME=$JAVA_HOME/jre
-export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
-export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
-export HIVE_HOME=/home/anxin/apache-hive-3.1.2-bin
-export HIVE_CONF_DIR=$HIVE_HOME/conf
-export PATH=$HIVE_HOME/bin:$PATH
-export HADOOP_HOME=/usr/hdp/3.1.4.0-315/hadoop
-export HADOOP_CONF_DIR=/usr/hdp/3.1.4.0-315/hadoop/conf
-export HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath`
-export FLINK_HOME=/home/anxin/flink-1.13.6
-
-
-```
-
-
-
-### Hive basics
-
-Reference: https://www.cnblogs.com/wangrd/p/6275162.html
-
-```sql
--- Creates a tableset.db folder under [/user/hive/warehouse/] on HDFS.
-CREATE DATABASE tableset;
-
--- Switch the current database
-USE tableset;
-
--- Create a table (general syntax)
-CREATE [EXTERNAL] TABLE [IF NOT EXISTS] table_name
-[(col_name data_type [COMMENT col_comment], ...)]
-[COMMENT table_comment]
-[PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
-[CLUSTERED BY (col_name, col_name, ...)
-[SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]
-[ROW FORMAT row_format]
-[STORED AS file_format]
-[LOCATION hdfs_path]
-
-CREATE TABLE t_order (
- id int,
- name string
-)
-ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' -- field delimiter
-STORED AS TEXTFILE; -- data storage format
-
--- View the table schema
-DESC t_order;
-
--- Load data
-load data local inpath '/home/anxin/data/data.txt' [OVERWRITE] into table t_order;
-
--- EXTERNAL tables
--- Creating an external table leaves the source files where they are;
--- dropping it does not delete them.
-CREATE EXTERNAL TABLE ex_order (
- id int,
- name string
-) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
-STORED AS TEXTFILE
-LOCATION '/external/hive';
-
--- Partitioned tables
-CREATE TABLE t_order(id int,name string) partitioned by (part_flag string)
-row format delimited fields terminated by '\t';
-load data local inpath '/home/hadoop/ip.txt' overwrite into table t_order
-partition(part_flag='part1'); -- the data is uploaded into the part1 subdirectory
-
--- List tables
-SHOW TABLES;
-SHOW TABLES 'TMP';
-SHOW PARTITIONS TMP_TABLE; -- list the table's partitions
-DESCRIBE TMP_TABLE; -- view the table schema
-
--- Bucketed tables
-create table stu_buck(Sno int,Sname string,Sex string,Sage int,Sdept string)
-clustered by(Sno)
-sorted by(Sno DESC)
-into 4 buckets
-row format delimited
-fields terminated by ',';
--- insert data via insert into ... select ...
-set hive.enforce.bucketing = true;
-set mapreduce.job.reduces=4;
-insert overwrite table stu_buck
-select * from student cluster by(Sno); -- equivalent to distribute by(Sno) sort by(Sno asc);
-
--- Drop a table
-DROP TABLE tablename;
-
--- Create a table from a query (CTAS)
-CREATE TABLE tmp_table
-AS
-SELECT id,name
-FROM t_order
-SORT BY id;
-
--- UDF (user-defined functions)
--- Implement the UDF class, package it into a jar, then register the function
-CREATE TEMPORARY function tolowercase as 'cn.demo.Namespace';
-
-select id,tolowercase(name) from t_order;
-```
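-The registered class must expose an `evaluate` method that Hive calls once per row. A hypothetical sketch of what `cn.demo.Namespace` might look like (the real class is not shown in this document; Scala is used here for consistency with the rest of the code):
-
-```scala
-package cn.demo
-
-import org.apache.hadoop.hive.ql.exec.UDF
-
-// Hypothetical tolowercase UDF: null-safe lower-casing of a string column.
-class Namespace extends UDF {
-  def evaluate(s: String): String =
-    if (s == null) null else s.toLowerCase
-}
-```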
-
-
-
-### Hadoop basics
-
-In this project we ultimately chose a Hadoop Catalog for Iceberg data storage:
-
-```sh
-
-# -skipTrash deletes immediately instead of moving to the trash
-hdfs dfs -rm -skipTrash /path/to/file/you/want/to/remove/permanently
-# Purge all data currently in the trash
-hdfs dfs -expunge
-
-## Remove all data under a given folder
-hdfs dfs -rm -r -skipTrash /user/hadoop/*
-
-## Hadoop startup errors:
-chown -R hdfs:hdfs /hadoop/hdfs/namenode
-# DataNode fails to start: likely caused by formatting more than once. Edit the DataNode clusterID to match the NameNode's:
-/hadoop/hdfs/data/current/VERSION
-/hadoop/hdfs/namenode/current/VERSION
-
-# View the DataNode startup log
-root@node38:/var/log/hadoop/hdfs# tail -n 1000 hadoop-hdfs-datanode-node38.log
-```
-
-
-
-Check the recovered Hadoop cluster:
-
-
-
-
-
-### Flink SQL: streaming from Kafka into Hive
-
-https://www.cnblogs.com/Springmoon-venn/p/13726089.html
-
-SQL for reading from Kafka:
-
-```sql
-tableEnv.getConfig.setSqlDialect(SqlDialect.DEFAULT)
-
-create table myhive.testhive.iotaKafkatable(
-`userId` STRING,
-`dimensionId` STRING,
-`dimCapId` STRING,
-`scheduleId` STRING,
-`jobId` STRING,
-`jobRepeatId` STRING,
-`thingId` STRING ,
-`deviceId` STRING,
-`taskId` STRING,
-`triggerTime` STRING,
-`finishTime` STRING,
-`seq` STRING,
-`result` STRING,
- `data` STRING
-)with
-('connector' = 'kafka',
-'topic'='iceberg',
-'properties.bootstrap.servers' = '10.8.30.37:6667',
-'properties.group.id' = 'iceberg-demo' ,
-'scan.startup.mode' = 'latest-offset',
-'format' = 'json',
-'json.ignore-parse-errors'='true')
-```
-
-Create the Hive table:
-
-```sql
-tableEnv.getConfig.setSqlDialect(SqlDialect.HIVE)
-
-CREATE TABLE myhive.testhive.iotatable2(
-`userId` STRING,
-`dimensionId` STRING,
-`dimCapId` STRING,
-`scheduleId` STRING,
-`jobId` STRING,
-`jobRepeatId` STRING,
-`thingId` STRING ,
-`deviceId` STRING,
-`taskId` STRING,
-`triggerTime` TIMESTAMP,
-`seq` STRING,
-`result` STRING,
- `data` STRING
-)
-PARTITIONED BY ( finishTime STRING) -- partition column; the data files do not store its values
-STORED AS PARQUET
-TBLPROPERTIES (
- 'sink.partition-commit.policy.kind' = 'metastore,success-file',
- 'partition.time-extractor.timestamp-pattern' = '$finishTime'
- )
-```
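-The INSERT that connects the two tables is not shown above; a minimal sketch, assuming the same `tableEnv` as in this section and casting the STRING `triggerTime` with `TO_TIMESTAMP` (the dynamic-partition column `finishTime` goes last):
-
-```scala
-// Sketch: stream rows from the Kafka source table into the Hive sink table.
-tableEnv.getConfig.setSqlDialect(SqlDialect.DEFAULT)
-tableEnv.executeSql(
-  """
-    |INSERT INTO myhive.testhive.iotatable2
-    |SELECT userId, dimensionId, dimCapId, scheduleId, jobId, jobRepeatId,
-    |       thingId, deviceId, taskId,
-    |       TO_TIMESTAMP(triggerTime),       -- source column is STRING
-    |       seq, `result`, data,
-    |       finishTime                       -- dynamic partition value
-    |FROM myhive.testhive.iotaKafkatable
-  """.stripMargin)
-```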
-
-
-
-
-
-### IceBerg
-
-Concepts revisited:
-
-> Recommended reading:
->
-> + [Choosing a data lake storage architecture](https://blog.csdn.net/u011598442/article/details/110152352)
-
-Reference: [Flink + Iceberg + object storage: building a data lake](https://baijiahao.baidu.com/s?id=1705407920794793309&wfr=spider&for=pc)
-
-
-
-
-
-Iceberg table data organization:
-
-Namespace -> Table -> Snapshot -> table data (in Parquet/ORC/Avro, etc.)
-
-- **Snapshot metadata**: table schema, partition, partition spec, manifest list path, current snapshot, etc.
-- **Manifest list**: manifest file paths with their partitions and data-file statistics.
-- **Manifest file**: data file paths plus per-column upper and lower bounds.
-- **Data file**: the actual table content, stored as Parquet, ORC, Avro, etc.
-
- 
-
-The DataWorker reads and parses the metadata, then hands each record to Iceberg for storage; Iceberg writes the record into the predefined partition, producing new files.
-
-Flink finishes writing this batch of files when it executes a checkpoint, then generates a manifest of the batch and submits it to the Commit Worker.
-
-The Commit Worker reads the current snapshot, merges it with the file list produced in this round, and generates a new manifest list plus the table's subsequent metadata files. Once the commit succeeds, a new snapshot is formed.
-
- 
-
- 
-
-The catalog is the Iceberg component that manages tables (create, drop, rename, etc.). Iceberg currently supports two main implementations: HiveCatalog and HadoopCatalog.
-
-HiveCatalog provides ACID through the metastore database (usually MySQL), while HadoopCatalog guarantees ACID commits via optimistic locking and the atomicity of HDFS rename.
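-Side by side, the two catalogs differ only in their WITH options. A sketch (the thrift URI is an assumption; the warehouse path matches the one used later in this document):
-
-```scala
-// HadoopCatalog: metadata pointer lives on HDFS; commit atomicity comes from HDFS rename.
-tenv.executeSql(
-  """CREATE CATALOG hadoop_catalog WITH (
-    |  'type'='iceberg',
-    |  'catalog-type'='hadoop',
-    |  'warehouse'='hdfs://node37:8020/user/hadoop'
-    |)""".stripMargin)
-
-// HiveCatalog: metadata pointer kept in the Hive metastore (MySQL), which provides the locking.
-tenv.executeSql(
-  """CREATE CATALOG hive_catalog WITH (
-    |  'type'='iceberg',
-    |  'catalog-type'='hive',
-    |  'uri'='thrift://node37:9083'
-    |)""".stripMargin)
-```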
-
-
-
-Flink compatibility
-
-
-
-
-
-### Writing to Iceberg
-
-+ Iceberg website: https://iceberg.apache.org/#flink/
-
-+ Translated official docs: https://www.cnblogs.com/swordfall/p/14548574.html
-
-+ A HiveCatalog issue (data not written to Hive): https://issueexplorer.com/issue/apache/iceberg/3092
-
-+ [Flink + Iceberg: How to Construct a Whole-scenario Real-time Data Warehouse](https://www.alibabacloud.com/blog/flink-%2B-iceberg-how-to-construct-a-whole-scenario-real-time-data-warehouse_597824)
-
-+ A thorough walkthrough: https://miaowenting.site/2021/01/20/Apache-Iceberg/
-
-
-
-#### 1. Using HadoopCatalog
-
-https://cloud.tencent.com/developer/article/1807008
-
-Key code:
-
-svn: http://svn.anxinyun.cn/Iota/branches/fs-iot/code/flink-iceberg/flink-iceberg/src/main/scala/com/fs/IceBergDealHadoopApplication.scala
-
-```scala
-...
-```
-
-
-
-#### 2. Using HiveCatalog
-
-> Status: ??? Data can be queried from Hive, but not from Flink SQL.
-
-Key code, annotated:
-
-```scala
-env.enableCheckpointing(5000)
- // Create the Iceberg catalog and database
-val createIcebergCatalogSql =
-"""CREATE CATALOG iceberg WITH(
- | 'type'='iceberg',
- | 'catalog-type'='hive',
- | 'hive-conf-dir'='E:\Iota\branches\fs-iot\code\flink-iceberg\flink-iceberg'
- |)
- """.stripMargin
-
-// Create the raw-data table iota_raw
-val createIotaRawSql =
- """CREATE TABLE iceberg.iceberg_dba.iota_raw (
- |`userId` STRING,
- |`dimensionId` STRING,
- |`dimCapId` STRING,
- |`scheduleId` STRING,
- |`jobId` STRING,
- |`jobRepeatId` STRING,
- |`thingId` STRING ,
- |`deviceId` STRING,
- |`taskId` STRING,
- |`triggerTime` TIMESTAMP,
- |`day` STRING,
- |`seq` STRING,
- |`result` STRING,
- | `data` STRING
- |) PARTITIONED BY (`thingId`,`day`)
- |WITH (
- | 'engine.hive.enabled' = 'true',
- | 'table.exec.sink.not-null-enforcer'='ERROR'
- |)
- """.stripMargin
-
- val kafka_iota_sql =
- """create table myhive.testhive.iotaKafkatable(
- |`userId` STRING,
- |`dimensionId` STRING,
- |`dimCapId` STRING,
- |`scheduleId` STRING,
- |`jobId` STRING,
- |`jobRepeatId` STRING,
- |`thingId` STRING ,
- |`deviceId` STRING,
- |`taskId` STRING,
- |`triggerTime` STRING,
- |`finishTime` STRING,
- |`seq` STRING,
- |`result` STRING,
- | `data` STRING
- |)with
- |('connector' = 'kafka',
- |'topic'='iceberg',
- |'properties.bootstrap.servers' = '10.8.30.37:6667',
- |'properties.group.id' = 'iceberg-demo' ,
- |'scan.startup.mode' = 'latest-offset',
- |'format' = 'json',
- |'json.ignore-parse-errors'='true'
- |)
- """.stripMargin
-
-// Register the custom transform functions
- tenv.createTemporarySystemFunction("dcFunction", classOf[DateCgFunction])
- tenv.createTemporarySystemFunction("tcFunction", classOf[TimeStampFunction])
-val insertSql =
- """
- |insert into iceberg.iceberg_dba.iota_raw
- | select userId, dimensionId,dimCapId,scheduleId,jobId,jobRepeatId,thingId,deviceId,taskId,
- |tcFunction(triggerTime),
- |DATE_FORMAT(dcFunction(triggerTime),'yyyy-MM-dd'),
- |seq,`result`,data
- |from myhive.testhive.iotakafkatable
- """.stripMargin
-```
-
-> 1. With the HiveCatalog approach, 'engine.hive.enabled' = 'true' must be specified.
->
-> 2. 'table.exec.sink.not-null-enforcer'='ERROR' sets how inserting a null into a non-null column is handled.
->
-> 3. Custom function implementation:
->
-> ```scala
-> class TimeStampFunction extends ScalarFunction {
-> def eval(@DataTypeHint(inputGroup = InputGroup.UNKNOWN) o: String): Timestamp = {
-> val v = DateParser.parse(o)
-> if (v.isEmpty) {
-> null
-> } else {
-> new Timestamp(v.get.getMillis)
-> }
-> }
-> }
-> ```
->
-> 4. PARTITIONED BY (`thingId`,`day`) partitions by thingId and date; file paths look like: http://10.8.30.37:50070/explorer.html#/user/hive/warehouse/iceberg_dba.db/iota_raw/data/thingId=b6cfc716-3766-4949-88bc-71cb0dbf31ee/day=2022-01-20
->
-> 5. Full code: http://svn.anxinyun.cn/Iota/branches/fs-iot/code/flink-iceberg/flink-iceberg/src/main/scala/com/fs/DataDealApplication.scala
-
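-The job registers `DateCgFunction` as well, but only `TimeStampFunction` is shown. A hypothetical sketch of the date variant (the real implementation lives in the SVN path in note 5; `DateParser` is the same helper used by `TimeStampFunction`):
-
-```scala
-import java.sql.Date
-import org.apache.flink.table.annotation.{DataTypeHint, InputGroup}
-import org.apache.flink.table.functions.ScalarFunction
-
-// Hypothetical: parse the raw trigger-time string and return a SQL DATE, or null on failure.
-class DateCgFunction extends ScalarFunction {
-  def eval(@DataTypeHint(inputGroup = InputGroup.UNKNOWN) o: String): Date = {
-    val v = DateParser.parse(o)
-    if (v.isEmpty) null else new Date(v.get.getMillis)
-  }
-}
-```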
-
-
-View the generated table DDL:
-
-```sql
-show create table iota_raw;
-
-CREATE EXTERNAL TABLE `iota_raw`(
- `userid` string COMMENT 'from deserializer',
- `dimensionid` string COMMENT 'from deserializer',
- `dimcapid` string COMMENT 'from deserializer',
- `scheduleid` string COMMENT 'from deserializer',
- `jobid` string COMMENT 'from deserializer',
- `jobrepeatid` string COMMENT 'from deserializer',
- `thingid` string COMMENT 'from deserializer',
- `deviceid` string COMMENT 'from deserializer',
- `taskid` string COMMENT 'from deserializer',
- `triggertime` timestamp COMMENT 'from deserializer',
- `day` string COMMENT 'from deserializer',
- `seq` string COMMENT 'from deserializer',
- `result` string COMMENT 'from deserializer',
- `data` string COMMENT 'from deserializer')
-ROW FORMAT SERDE
- 'org.apache.iceberg.mr.hive.HiveIcebergSerDe'
-STORED BY
- 'org.apache.iceberg.mr.hive.HiveIcebergStorageHandler'
-
-LOCATION
- 'hdfs://node37:8020/user/hive/warehouse/iceberg_dba.db/iota_raw'
-TBLPROPERTIES (
- 'engine.hive.enabled'='true',
- 'metadata_location'='hdfs://node37:8020/user/hive/warehouse/iceberg_dba.db/iota_raw/metadata/00010-547022ad-c615-4e2e-854e-8f85592db7b6.metadata.json',
- 'previous_metadata_location'='hdfs://node37:8020/user/hive/warehouse/iceberg_dba.db/iota_raw/metadata/00009-abfb6af1-13dd-439a-88f5-9cb822d6c0e4.metadata.json',
- 'table_type'='ICEBERG',
- 'transient_lastDdlTime'='1642579682')
-```
-
-Query the data from Hive:
-
-```sql
-hive> add jar /tmp/iceberg-hive-runtime-0.12.1.jar;
-hive> select * from iota_raw;
-
-```
-
-#### Error log
-
-1. HiveTableOperations$WaitingForLockException
-
- ```sql
- -- In the HIVE_LOCKS table of the Hive metastore, delete the lock rows for the table that reported the error
- select hl_lock_ext_id,hl_table,hl_lock_state,hl_lock_type,hl_last_heartbeat,hl_blockedby_ext_id from HIVE_LOCKS;
-
- delete from HIVE_LOCKS;
- ```
-
-
-
-
-
-### Querying Iceberg
-
-#### Starting the Flink SQL client
-
-In the Flink conf, set masters to `localhost:8081` and workers to `localhost`.
-
-Configure flink-conf.yaml (optional):
-
-```ini
-# The number of task slots that each TaskManager offers. Each slot runs one parallel pipeline.
-
-taskmanager.numberOfTaskSlots: 4
-
-# The parallelism used for programs that did not specify and other parallelism.
-
-parallelism.default: 1
-
-```
-
-Configure sql-client-defaults.yaml (optional):
-
-```yaml
-execution:
- # select the implementation responsible for planning table programs
- # possible values are 'blink' (used by default) or 'old'
- planner: blink
- # 'batch' or 'streaming' execution
- type: streaming
- # allow 'event-time' or only 'processing-time' in sources
- time-characteristic: event-time
- # interval in ms for emitting periodic watermarks
- periodic-watermarks-interval: 200
- # 'changelog', 'table' or 'tableau' presentation of results
- result-mode: table
- # maximum number of maintained rows in 'table' presentation of results
- max-table-result-rows: 1000000
- # parallelism of the program
- # parallelism: 1
- # maximum parallelism
- max-parallelism: 128
- # minimum idle state retention in ms
- min-idle-state-retention: 0
- # maximum idle state retention in ms
- max-idle-state-retention: 0
- # current catalog ('default_catalog' by default)
- current-catalog: default_catalog
- # current database of the current catalog (default database of the catalog by default)
- current-database: default_database
- # controls how table programs are restarted in case of a failures
- # restart-strategy:
- # strategy type
- # possible values are "fixed-delay", "failure-rate", "none", or "fallback" (default)
- # type: fallback
-```
-
-Start the Flink cluster:
-
-```sh
-./bin/start-cluster.sh
-```
-
-Open the Flink UI at http://node37:8081
-
-
-
-Start the SQL client
-
-```sh
-export HADOOP_CLASSPATH=`hadoop classpath`
-
-./bin/sql-client.sh embedded \
--j /home/anxin/iceberg/iceberg-flink-runtime-0.12.0.jar \
--j /home/anxin/iceberg/flink-sql-connector-hive-2.3.6_2.11-1.11.0.jar \
--j /home/anxin/flink-1.11.4/lib/flink-sql-connector-kafka-0.11_2.11-1.11.4.jar \
-shell
-```
-
-#### Query basics
-
-```sql
-CREATE CATALOG iceberg WITH(
- 'type'='iceberg',
- 'catalog-type'='hadoop',
- 'warehouse'='hdfs://node37:8020/user/hadoop',
- 'property-version'='1'
-);
-use catalog iceberg;
-use iceberg_db; -- select the database
-
-
--- Optional settings
-SET; -- show the current configuration
-SET sql-client.execution.result-mode = table; -- changelog/tableau
-SET sql-client.verbose=true; -- print exception stack traces
-SET sql-client.execution.max-table-result.rows=1000000; -- rows cached in table mode
-SET table.planner = blink; -- planner: either blink (default) or old
-SET execution.runtime-mode = streaming; -- execution mode either batch or streaming
-SET sql-client.execution.result-mode = table; -- available values: table, changelog and tableau
-SET parallelism.default = 1; -- optional: Flink's parallelism (1 by default)
-SET pipeline.auto-watermark-interval = 200; --optional: interval for periodic watermarks
-SET pipeline.max-parallelism = 10; -- optional: Flink's maximum parallelism
-SET table.exec.state.ttl = 1000; -- optional: table program's idle state time
-SET restart-strategy = fixed-delay;
-
-SET table.optimizer.join-reorder-enabled = true;
-SET table.exec.spill-compression.enabled = true;
-SET table.exec.spill-compression.block-size = 128kb;
-
-SET execution.savepoint.path = tmp/flink-savepoints/savepoint-cca7bc-bb1e257f0dab; -- restore from the specific savepoint path
--- Execute a group of SQL statements
-BEGIN STATEMENT SET;
- -- one or more INSERT INTO statements
- { INSERT INTO|OVERWRITE