[TiDB in Action] Online Deployment of a TiDB Cluster on a Single Machine (Simulating a Cluster with One Host)

1. Host Environment

The environment is my own laptop, running Ubuntu 20 as a Windows subsystem (WSL). If you want to know how that was set up, follow my earlier post:
Uninstall the VM, Ditch Dual Boot, and Experience Linux on Windows 10

System environment

  • Host OS: Windows 10 Home
  • Guest OS: Ubuntu 20.04.1 LTS
  • CPU: Intel i7-10510U (low-voltage)
  • Memory: 16 GB
  • Disk: 256 GB SSD

2. Configure the /etc/ssh/sshd_config File

Because we are simulating a multi-host deployment on a single machine, the connection limit of the sshd service needs to be raised as the root user.

  • Edit /etc/ssh/sshd_config and raise MaxSessions to 20.
  • Restart the sshd service: # systemctl restart sshd (a combined sketch of both steps follows below).
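
A minimal sketch of both steps, assuming the stock Ubuntu sshd_config layout (back the file up first if in doubt):

# Raise the per-connection session limit (uncomments or replaces the MaxSessions line)
sudo sed -i 's/^#\?MaxSessions.*/MaxSessions 20/' /etc/ssh/sshd_config
# Confirm the new value
grep MaxSessions /etc/ssh/sshd_config
# Restart sshd so the limit takes effect
sudo systemctl restart sshd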

3. Download and Install TiUP

curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh

The detailed output looks roughly like this:

[root@tidb ~]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 8606k  100 8606k    0     0   605k      0  0:00:14  0:00:14 --:--:--  647k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Set mirror to https://tiup-mirrors.pingcap.com success
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================

With that, tiup is installed.
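
To use it in the current shell and confirm the installation, a quick check (using the profile path shown in the output above):

# Load the updated PATH, or simply open a new terminal
source /root/.bash_profile
# Print the installed tiup version
tiup --version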

4. Write the Deployment YAML File

The TiDB deployment is driven by this YAML (topology) file, which specifies where each component lives and which ports it uses. The topology for the test environment is as follows:

# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
 user: "tidb"
 ssh_port: 22
 deploy_dir: "/u01/tidb/tidb-deploy"
 data_dir: "/u01/tidb/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
 node_exporter_port: 9100
 blackbox_exporter_port: 9115

server_configs:
 tidb:
   log.slow-threshold: 300
 tikv:
   readpool.storage.use-unified-pool: false
   readpool.coprocessor.use-unified-pool: true
 pd:
   replication.enable-placement-rules: true
   replication.location-labels: ["host"]
 tiflash:
   logger.level: "info"

pd_servers:
 - host: 192.168.198.133

tidb_servers:
 - host: 192.168.198.133

tikv_servers:
 - host: 192.168.198.133
   port: 20160
   status_port: 20180
   config:
     server.labels: { host: "logic-host-1" }

 - host: 192.168.198.133
   port: 20161
   status_port: 20181
   config:
     server.labels: { host: "logic-host-2" }

 - host: 192.168.198.133
   port: 20162
   status_port: 20182
   config:
     server.labels: { host: "logic-host-3" }

tiflash_servers:
 - host: 192.168.198.133

monitoring_servers:
 - host: 192.168.198.133

grafana_servers:
 - host: 192.168.198.133

As you can see, the config file specifies the IP address each component runs on. Also note that it sets the program and data paths:

  • deploy_dir: "/u01/tidb/tidb-deploy"
  • data_dir: "/u01/tidb/tidb-data"

These two paths should be created in advance, and passwordless SSH between hosts must be set up for the specified tidb user, since files are copied around with scp. (For a single-machine simulation this can be skipped.) A preparation sketch follows below.
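
A minimal preparation sketch, assuming a single host and that the deploy command is run as root (tiup cluster deploy normally creates the tidb user itself, so the last two lines are optional):

# Create the deploy and data directories declared in topo.yaml
mkdir -p /u01/tidb/tidb-deploy /u01/tidb/tidb-data
# Optional: pre-create the tidb user and hand the directories over to it
id tidb >/dev/null 2>&1 || useradd -m tidb
chown -R tidb:tidb /u01/tidb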

5. Deploy Online with tiup

The command is as follows (note where it is run from: /u01/tidb, because topo.yaml sits in that directory; if you pass an absolute path to the file, this doesn't matter):

tiup cluster deploy myft_ticlu nightly ./topo.yaml --user root -p

Here myft_ticlu is the custom cluster name and nightly is the version (the latest build); the versions available can be listed with tiup list tidb. Follow the prompts, entering "y" and the root password, to complete the deployment. Normal output looks like this:

[root@eomsdr tidb]# tiup cluster deploy myft_ticlu nightly ./topo.yaml --user root -p
Starting component `cluster`: /root/.tiup/components/cluster/v1.2.4/tiup-cluster deploy myft_ticlu nightly ./topo.yaml --user root -p
Please confirm your topology:
Cluster type:    tidb
Cluster name:    myft_ticlu
Cluster version: nightly
Type        Host             Ports                            OS/Arch       Directories
----        ----             -----                            -------       -----------
pd          192.168.198.133  2379/2380                        linux/x86_64  /u01/tidb/tidb-deploy/pd-2379,/u01/tidb/tidb-data/pd-2379
tikv        192.168.198.133  20160/20180                      linux/x86_64  /u01/tidb/tidb-deploy/tikv-20160,/u01/tidb/tidb-data/tikv-20160
tikv        192.168.198.133  20161/20181                      linux/x86_64  /u01/tidb/tidb-deploy/tikv-20161,/u01/tidb/tidb-data/tikv-20161
tikv        192.168.198.133  20162/20182                      linux/x86_64  /u01/tidb/tidb-deploy/tikv-20162,/u01/tidb/tidb-data/tikv-20162
tidb        192.168.198.133  4000/10080                       linux/x86_64  /u01/tidb/tidb-deploy/tidb-4000
tiflash     192.168.198.133  9000/8123/3930/20170/20292/8234  linux/x86_64  /u01/tidb/tidb-deploy/tiflash-9000,/u01/tidb/tidb-data/tiflash-9000
prometheus  192.168.198.133  9090                             linux/x86_64  /u01/tidb/tidb-deploy/prometheus-9090,/u01/tidb/tidb-data/prometheus-9090
grafana     192.168.198.133  3000                             linux/x86_64  /u01/tidb/tidb-deploy/grafana-3000
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:nightly (linux/amd64) ... Done
  - Download tikv:nightly (linux/amd64) ... Done
  - Download tidb:nightly (linux/amd64) ... Done
  - Download tiflash:nightly (linux/amd64) ... Done
  - Download prometheus:nightly (linux/amd64) ... Done
  - Download grafana:nightly (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 192.168.198.133:22 ... Done
+ Copy files
  - Copy pd -> 192.168.198.133 ... Done
  - Copy tikv -> 192.168.198.133 ... Done
  - Copy tikv -> 192.168.198.133 ... Done
  - Copy tikv -> 192.168.198.133 ... Done
  - Copy tidb -> 192.168.198.133 ... Done
  - Copy tiflash -> 192.168.198.133 ... Done
  - Copy prometheus -> 192.168.198.133 ... Done
  - Copy grafana -> 192.168.198.133 ... Done
  - Copy node_exporter -> 192.168.198.133 ... Done
  - Copy blackbox_exporter -> 192.168.198.133 ... Done
+ Check status

Enabling component pd
+ Enable cluster
        Enable pd 192.168.198.133:2379 success
+ Enable cluster
+ Enable cluster
Enabling component tikv
        Enabling instance tikv 192.168.198.133:20162
        Enabling instance tikv 192.168.198.133:20160
+ Enable cluster
+ Enable cluster
+ Enable cluster
        Enable tikv 192.168.198.133:20162 success
Enabling component tidb
+ Enable cluster
        Enable tidb 192.168.198.133:4000 success
Enabling component tiflash
+ Enable cluster
        Enable tiflash 192.168.198.133:9000 success
Enabling component prometheus
+ Enable cluster
        Enable prometheus 192.168.198.133:9090 success
Enabling component grafana
+ Enable cluster
+ Enable cluster
Deployed cluster `myft_ticlu` successfully, you can start the cluster via `tiup cluster start myft_ticlu`
[root@eomsdr tidb]#
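
Before starting anything, you can confirm that tiup has registered the new cluster on the control machine:

# List the clusters managed by this tiup installation
tiup cluster list
# Show the topology of the freshly deployed (not yet started) cluster
tiup cluster display myft_ticlu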

6. Start the Cluster

The command is as follows:

tiup cluster start myft_ticlu

The detailed output is as follows:

[root@eomsdr tidb]# tiup cluster start myft_ticlu
Starting component `cluster`: /root/.tiup/components/cluster/v1.2.4/tiup-cluster start myft_ticlu
Starting cluster myft_ticlu...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/myft_ticlu/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/myft_ticlu/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.198.133
+ [Parallel] - UserSSH: user=tidb, host=192.168.198.133
+ [Parallel] - UserSSH: user=tidb, host=192.168.198.133
+ [Parallel] - UserSSH: user=tidb, host=192.168.198.133
+ [Parallel] - UserSSH: user=tidb, host=192.168.198.133
+ [Parallel] - UserSSH: user=tidb, host=192.168.198.133
+ [Parallel] - UserSSH: user=tidb, host=192.168.198.133
+ [Parallel] - UserSSH: user=tidb, host=192.168.198.133
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance pd 192.168.198.133:2379
        Start pd 192.168.198.133:2379 success
Starting component node_exporter
        Starting instance 192.168.198.133
        Start 192.168.198.133 success
Starting component blackbox_exporter
        Starting instance 192.168.198.133
        Start 192.168.198.133 success
Starting component tikv
        Starting instance tikv 192.168.198.133:20162
        Starting instance tikv 192.168.198.133:20160
        Starting instance tikv 192.168.198.133:20161
        Start tikv 192.168.198.133:20161 success
        Start tikv 192.168.198.133:20160 success
        Start tikv 192.168.198.133:20162 success
Starting component tidb
        Starting instance tidb 192.168.198.133:4000
        Start tidb 192.168.198.133:4000 success
Starting component tiflash
        Starting instance tiflash 192.168.198.133:9000
        Start tiflash 192.168.198.133:9000 success
Starting component prometheus
        Starting instance prometheus 192.168.198.133:9090
        Start prometheus 192.168.198.133:9090 success
Starting component grafana
        Starting instance grafana 192.168.198.133:3000

        Start grafana 192.168.198.133:3000 success
+ [ Serial ] - UpdateTopology: cluster=myft_ticlu
Started cluster `myft_ticlu` successfully
[root@eomsdr tidb]#
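
After a successful start, a quick status check is worthwhile; every component should now report Up, and the monitoring UIs are reachable at the hosts and ports from the topology (the admin/admin login below is the stock Grafana default):

# All components should report an Up status
tiup cluster display myft_ticlu
# Grafana:    http://192.168.198.133:3000   (default login admin/admin)
# Prometheus: http://192.168.198.133:9090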

7. Key Components

Information on a few key components:

  • PD: the metadata and scheduling/control component
  • TiKV: the storage component
  • TiDB: the database instance (SQL layer) component; see the connection example below
  • TiFlash: the columnar storage component for analytical queries
  • Although TiDB resembles MySQL, its strength lies in being distributed. With MySQL, once the data grows you have to think about splitting databases and tables and introducing routing middleware such as MyCat; TiDB is designed as a distributed system from the ground up, with a storage architecture similar to HDFS, so distribution is part of the native architecture.
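
Because TiDB speaks the MySQL protocol, a plain MySQL client works as a smoke test; a minimal sketch, assuming the mysql client is installed and the freshly deployed root account still has an empty password:

# Connect to the TiDB SQL layer on its default port 4000 and confirm the server is TiDB
mysql -h 192.168.198.133 -P 4000 -u root -e "SELECT tidb_version()\G"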

Viewing component logs
Listing the processes with ps shows each component's log path, which you can then inspect. For example, ps -ef | grep tidb shows the log file used by the tidb process:

root      22590  22548  0 16:27 pts/2    00:00:02 /root/.tiup/components/tidb/v4.0.8/tidb-server -P 4000 --store=tikv --host=127.0.0.1 --status=10080 --path=127.0.0.1:2379 --log-file=/root/.tiup/data/SH9Q7P0/tidb-0/tidb.log
root      23061  23007  2 16:29 pts/1    00:00:27 /root/.tiup/components/tidb/v4.0.8/tidb-server -P 36829 --store=tikv --host=127.0.0.1 --status=32166 --path=127.0.0.1:33644 --log-file=/root/.tiup/data/SH9QQAS/tidb-0/tidb.log
root      23812  21745  0 16:32 pts/2    00:00:00 tail -f /root/.tiup/data/SH9QQAS/tidb-0/tidb.log
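
A sketch of following such a log, where the exact path is whatever --log-file shows in the ps output (for the cluster deployed above, the logs would sit under the deploy_dir from topo.yaml; that layout is an assumption based on tiup's standard directory structure):

# Locate the tidb-server process and note its --log-file argument
ps -ef | grep tidb-server | grep -v grep
# Follow the log at the reported path, e.g. for the deployed cluster:
tail -f /u01/tidb/tidb-deploy/tidb-4000/log/tidb.log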

END

If you are in a restricted environment without internet access and need to install TiDB offline, see my other article:

[TiDB in Action] Offline Installation and Deployment of TiDB, Simulating a Cluster on a Single Machine


Title: [TiDB in Action] Online Deployment of a TiDB Cluster on a Single Machine (Simulating a Cluster with One Host)
Author: hacken
Link: https://www.01open.com/articles/2022/01/14/1642169747595.html