
6.8.3 Setting Up InnoDB Cluster and MySQL Router

In the following example, we use a sandbox deployment and the AdminAPI to carry out the tasks required to deploy an InnoDB Cluster together with MySQL Router.

Deploying and using local sandbox instances of MySQL lets you test functionality locally before deploying on production servers. AdminAPI has built-in functionality for creating preconfigured sandbox instances for use with InnoDB Cluster, InnoDB ClusterSet, and InnoDB ReplicaSet in a locally deployed scenario.
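
For reference, these are the AdminAPI sandbox functions that this example relies on; a minimal sketch of the sandbox lifecycle, using port 3310 as a placeholder:

    mysql-js> dba.deploySandboxInstance(3310)   // create and start a new sandbox on port 3310
    mysql-js> dba.stopSandboxInstance(3310)     // stop it gracefully (prompts for the root password)
    mysql-js> dba.startSandboxInstance(3310)    // start a stopped sandbox again
    mysql-js> dba.killSandboxInstance(3310)     // kill the process to simulate a crash
    mysql-js> dba.deleteSandboxInstance(3310)   // delete the sandbox and its data directory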

This example includes the following sections:

  • Installation

  • Creating an InnoDB Cluster Sandbox Configuration

  • Creating an InnoDB Cluster

  • Bootstrapping MySQL Router

  • Testing the MySQL Router Configuration

Warning

Sandbox instances are only suitable for deploying and running on your local machine for testing purposes.

Installation

Install the following components:

  • MySQL Server

  • MySQL Shell

  • MySQL Router
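
To confirm that each component is installed and available on your PATH, you can check its version from a terminal (a quick sketch; the exact version output depends on your installation):

    > mysqld --version
    > mysqlsh --version
    > mysqlrouter --version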

Creating an InnoDB Cluster Sandbox Configuration

To provide tolerance for one failure, create an InnoDB Cluster that has three instances. Group Replication requires a majority of members to be available, so a three-instance cluster keeps a quorum (two of three) when a single instance fails. In this example, we use three sandbox instances running on the same machine. In a real-world setup, these three instances would run on different hosts on the network.

  1. To start MySQL Shell, issue:

    > mysqlsh
  2. To create and start the MySQL sandbox instances, use the dba.deploySandboxInstance() function that is part of the X AdminAPI. Issue the following three statements in MySQL Shell and enter a root password for each instance (a scripted alternative is sketched after this procedure):

    mysql-js> dba.deploySandboxInstance(3310)
    mysql-js> dba.deploySandboxInstance(3320)
    mysql-js> dba.deploySandboxInstance(3330)
    Note

    Use the same root password for all instances.
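
If you prefer to script this step instead of entering the password interactively three times, dba.deploySandboxInstance() also accepts an options dictionary. The sketch below assumes the password option and uses an obviously fake placeholder password; check the option name against your Shell version:

    mysql-js> var pw = 'sandbox-root-password'
    mysql-js> dba.deploySandboxInstance(3310, {password: pw})
    mysql-js> dba.deploySandboxInstance(3320, {password: pw})
    mysql-js> dba.deploySandboxInstance(3330, {password: pw})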

Creating an InnoDB Cluster

To create an InnoDB Cluster, complete the following steps:

  1. Connect to the MySQL instance you want to be the primary instance in the InnoDB Cluster by issuing:

    mysql-js> shell.connect('root@localhost:3310')
  2. Issue the dba.createCluster() command to create the cluster, and use the assigned variable cluster to hold the outputted value:

    mysql-js> cluster = dba.createCluster('devCluster')

    This command outputs:

    A new InnoDB cluster will be created on instance 'localhost:3310'.
    
    Validating instance configuration at localhost:3310...
    NOTE: Instance detected as a sandbox.
    Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.
    
    This instance reports its own address as 127.0.0.1:3310
    
    Instance configuration is suitable.
    NOTE: Group Replication will communicate with other members using '127.0.0.1:33101'. 
          Use the localAddress option to override.
    
    Creating InnoDB cluster 'devCluster' on '127.0.0.1:3310'...
    
    Adding Seed Instance...
    Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
    At least 3 instances are needed for the cluster to be able to withstand up to
    one server failure.
    
    <Cluster:devCluster>
  3. Verify that the creation was successful by using the assigned variable cluster with the cluster.status() function:

    mysql-js> cluster.status()

    The following status is output:

    {
        "clusterName": "devCluster",
        "defaultReplicaSet": {
            "name": "default",
            "primary": "127.0.0.1:3310",
            "ssl": "REQUIRED",
            "status": "OK_NO_TOLERANCE",
            "statusText": "Cluster is NOT tolerant to any failures.",
            "topology": {
                "127.0.0.1:3310": {
                    "address": "127.0.0.1:3310",
                    "memberRole": "PRIMARY",
                    "mode": "R/W",
                    "readReplicas": {},
                    "replicationLag": null,
                    "role": "HA",
                    "status": "ONLINE",
                    "version": "8.0.28"
                }
            },
            "topologyMode": "Single-Primary"
        },
        "groupInformationSourceMember": "127.0.0.1:3310"
    }
  4. The cluster is up and running but not yet tolerant to a failure. Add another MySQL server instance to the cluster using the <Cluster>.addInstance() function:

    mysql-js> cluster.addInstance('root@localhost:3320')
    NOTE: The target instance '127.0.0.1:3320' has not been pre-provisioned (GTID set is empty). 
          The Shell is unable to decide whether incremental state recovery can correctly provision it.
    The safest and most convenient way to provision a new instance is through automatic clone provisioning, 
    which will completely overwrite the state of '127.0.0.1:3320' with a physical snapshot from an existing 
    cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.
    
    The incremental state recovery may be safely used if you are sure all updates ever executed in the 
    cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains 
    the same GTID set as the cluster or a subset of it. To use this method by default, set the 
    'recoveryMethod' option to 'incremental'.
    
    Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone):
  5. Select a recovery method from the prompt (a non-interactive alternative using the recoveryMethod option is sketched after this procedure). The options are:

    • Clone: Clones the instance that you are adding to the primary cluster, deleting any transactions the instance contains. The MySQL Clone plugin is installed automatically. Assuming you are adding either an empty instance (one that has not processed any transactions) or an instance that contains transactions you do not need to keep, select the Clone option.

    • Incremental recovery: Recovers all transactions processed by the cluster to the joining instance using asynchronous replication. Incremental recovery is appropriate if you are sure all updates ever processed by the cluster were done with global transaction IDs (GTIDs) enabled, there are no purged transactions, and the new instance contains the same GTID set as the cluster, or a subset of it.

    In this example, select C for Clone:

    Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): C
    Validating instance configuration at localhost:3320...
    NOTE: Instance detected as a sandbox.
    Please note that sandbox instances are only suitable for deploying test clusters for 
    use within the same host.
    
    This instance reports its own address as 127.0.0.1:3320
    
    Instance configuration is suitable.
    NOTE: Group Replication will communicate with other members using '127.0.0.1:33201'. 
          Use the localAddress option to override.
    
    A new instance will be added to the InnoDB cluster. Depending on the amount of
    data on the cluster this might take from a few seconds to several hours.
    
    Adding instance to the cluster...
    
    Monitoring recovery process of the new cluster member. Press ^C to stop monitoring 
    and let it continue in background.
    Clone based state recovery is now in progress.
    
    NOTE: A server restart is expected to happen as part of the clone process. If the
    server does not support the RESTART command or does not come back after a
    while, you may need to manually start it back.
    
    * Waiting for clone to finish...
    NOTE: 127.0.0.1:3320 is being cloned from 127.0.0.1:3310
    ** Stage DROP DATA: Completed
    ** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
    
    NOTE: 127.0.0.1:3320 is shutting down...
    
    * Waiting for server restart... ready
    * 127.0.0.1:3320 has restarted, waiting for clone to finish...
    ** Stage RESTART: Completed
    * Clone process has finished: 72.61 MB transferred in about 1 second (~72.61 MB/s)
    
    State recovery already finished for '127.0.0.1:3320'
    
    The instance '127.0.0.1:3320' was successfully added to the cluster.
  6. Add the third instance you created, and again select C for the Clone recovery method:

    mysql-js> cluster.addInstance('root@localhost:3330')
  7. Check the status of the cluster by issuing:

    mysql-js> cluster.status()

    This outputs the following:

    {
        "clusterName": "devCluster",
        "defaultReplicaSet": {
            "name": "default",
            "primary": "127.0.0.1:3310",
            "ssl": "REQUIRED",
            "status": "OK",
            "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
           "topology": {
               "127.0.0.1:3310": {
                   "address": "127.0.0.1:3310",
                   "memberRole": "PRIMARY",
                   "mode": "R/W",
                   "readReplicas": {},
                   "replicationLag": null,
                   "role": "HA",
                   "status": "ONLINE",
                   "version": "8.0.28"
               },
               "127.0.0.1:3320": {
                   "address": "127.0.0.1:3320",
                   "memberRole": "SECONDARY",
                   "mode": "R/O",
                   "readReplicas": {},
                   "replicationLag": null,
                   "role": "HA",
                   "status": "ONLINE",
                   "version": "8.0.28"
               },
               "127.0.0.1:3330": {
                   "address": "127.0.0.1:3330",
                   "memberRole": "SECONDARY",
                   "mode": "R/O",
                   "readReplicas": {},
                   "replicationLag": null,
                   "role": "HA",
                   "status": "ONLINE",
                   "version": "8.0.28"
               }
           },
           "topologyMode": "Single-Primary"
       },
       "groupInformationSourceMember": "127.0.0.1:3310"
    }
    The setup of the InnoDB Cluster was successful!
  8. The cluster can now tolerate up to one failure. Quit MySQL Shell by issuing: \q
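
As the addInstance() output above notes, you can avoid the interactive recovery prompt by setting the recoveryMethod option explicitly. The following is a minimal non-interactive sketch of the same cluster assembly, using the instance addresses from this example:

    mysql-js> shell.connect('root@localhost:3310')
    mysql-js> cluster = dba.createCluster('devCluster')
    mysql-js> cluster.addInstance('root@localhost:3320', {recoveryMethod: 'clone'})
    mysql-js> cluster.addInstance('root@localhost:3330', {recoveryMethod: 'clone'})
    mysql-js> cluster.status()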

Bootstrapping MySQL Router

After setting up a MySQL InnoDB Cluster, test the cluster's high availability. For this purpose, use MySQL Router. If one instance fails, MySQL Router updates its routing configuration automatically and ensures that new connections are routed to the remaining instances.

Before MySQL Router can perform routing operations, make it aware of the new InnoDB Cluster. To do this, use the --bootstrap option and point MySQL Router at the current R/W MySQL server instance (the primary instance) of the cluster. Use the -d option to store the router's configuration in its own folder (called mysql-router or mysqlrouter in the examples below).

  1. Open a terminal in your home directory:

    • On a Linux system, issue:

      [demo-user@localhost]$> mysqlrouter --bootstrap root@localhost:3310 -d mysqlrouter
    • On a Windows system, issue:

      C:\Users\demo-user> mysqlrouter --bootstrap root@localhost:3310 -d mysql-router

      MySQL Router then prints the TCP/IP ports that it will use for routing connections. For more information, see Deploying MySQL Router. (A quick way to verify these routed ports is sketched after this procedure.)

  2. After MySQL Router has been configured successfully, start it in the background:

    • On a Windows system, use the start /B command and point the router at the configuration file that was generated with the --bootstrap option:

      C:\> start /B mysqlrouter -c %HOMEPATH%\mysql-router\mysqlrouter.conf
    • Or invoke the Windows PowerShell script in the mysql-router folder created earlier:

      \mysql-router\start.ps1
    • On a Linux system that uses systemd, issue:

      sudo systemctl start mysqlrouter.service
    • Or, on a Linux system, invoke the shell script in the mysqlrouter folder created earlier:

      /mysqlrouter/start.sh
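
The ports MySQL Router uses are listed in the bootstrap report and in the generated mysqlrouter.conf file. With a default bootstrap configuration, classic MySQL protocol read/write sessions are typically routed through port 6446 and read-only sessions through port 6447; the sketch below assumes those defaults, so adjust the ports to whatever your bootstrap output reported:

    > mysqlsh --sql root@localhost:6446 -e "SELECT @@port"
    > mysqlsh --sql root@localhost:6447 -e "SELECT @@port"

The first command should reach the current primary, and the second should reach one of the secondaries.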

Testing the MySQL Router Configuration

Now that the InnoDB Cluster and MySQL Router are running, test the cluster setup.

Instead of connecting to one of the MySQL server instances directly, connect through MySQL Router.

  1. Issue the following connection command:

    > mysqlsh root@localhost:6446
  2. Provide the root password to connect to the InnoDB Cluster.

  3. Check the status of the InnoDB Cluster by creating a variable cluster and assigning it the value of the dba.getCluster() operation:

    mysql-js> cluster = dba.getCluster()
    mysql-js> cluster.status()
  4. Switch to SQL mode:

    mysql-js> \sql
  5. Query the port on which the instance is running by issuing:

    mysql-sql> SELECT @@port;
    +--------+
    | @@port |
    +--------+
    |   3310 |
    +--------+
    1 row in set (0.0007 sec)
  6. Switch back to JavaScript mode:

    mysql-js> \js
  7. Stop the MySQL server instance using the dba.killSandboxInstance() function:

    mysql-js> dba.killSandboxInstance(3310)

    Killing MySQL instance...
    
    Instance localhost:3310 successfully killed.
  8. Check that MySQL Router is routing traffic correctly by running the SELECT @@port command against the instance you just shut down and examining the result:

    • Switch to SQL mode:

      mysql-js> \sql
    • Check the MySQL port:

      mysql-sql> SELECT @@port;
  9. An error is returned: ERROR: 2013 (HY000): Lost connection to MySQL server during query. This error means that the instance running on port 3310 is no longer running.

  10. Check the port again:

    mysql-sql> SELECT @@port;
    +--------+
    | @@port |
    +--------+
    |   3320 |
    +--------+
  11. This output shows that the instance running on port 3320 has been promoted to be the new Read/Write primary instance.

  12. Return to JavaScript mode and check the status of the cluster:

    mysql-js> cluster.status()
    {
       "clusterName": "devCluster",
       "defaultReplicaSet": {
           "name": "default",
           "primary": "127.0.0.1:3320",
           "ssl": "REQUIRED",
           "status": "OK_NO_TOLERANCE",
           "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active.",
           "topology": {
               "127.0.0.1:3310": {
                   "address": "127.0.0.1:3310",
                   "memberRole": "SECONDARY",
                   "mode": "n/a",
                   "readReplicas": {},
                   "role": "HA",
                   "shellConnectError": "MySQL Error 2003: Could not open connection to '127.0.0.1:3310': 
                                         Can't connect to MySQL server on '127.0.0.1:3310' (10061)",
                   "status": "(MISSING)"
               },
               "127.0.0.1:3320": {
                   "address": "127.0.0.1:3320",
                   "memberRole": "PRIMARY",
                   "mode": "R/W",
                   "readReplicas": {},
                   "replicationLag": null,
                   "role": "HA",
                   "status": "ONLINE",
                   "version": "8.0.28"
               },
               "127.0.0.1:3330": {
                   "address": "127.0.0.1:3330",
                   "memberRole": "SECONDARY",
                   "mode": "R/O",
                   "readReplicas": {},
                   "replicationLag": null,
                   "role": "HA",
                   "status": "ONLINE",
                   "version": "8.0.28"
               }
           },
           "topologyMode": "Single-Primary"
       },
       "groupInformationSourceMember": "127.0.0.1:3320"
    }
  13. The MySQL server instance that was formerly running on port 3310 is MISSING.

  14. Restart this instance by issuing the dba.startSandboxInstance() operation with the port number:

    mysql-js> dba.startSandboxInstance(3310)
  15. Checking the status of the cluster shows that the instance has been restored as active in the cluster, but as a SECONDARY member:

    mysql-js> cluster.status()
    {
      "clusterName": "devCluster",
      "defaultReplicaSet": {
          "name": "default",
          "primary": "127.0.0.1:3320",
          "ssl": "REQUIRED",
          "status": "OK",
          "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
          "topology": {
              "127.0.0.1:3310": {
                  "address": "127.0.0.1:3310",
                  "memberRole": "SECONDARY",
                  "mode": "R/O",
                  "readReplicas": {},
                  "replicationLag": null,
                  "role": "HA",
                  "status": "ONLINE",
                  "version": "8.0.28"
              },
              "127.0.0.1:3320": {
                  "address": "127.0.0.1:3320",
                  "memberRole": "PRIMARY",
                  "mode": "R/W",
                  "readReplicas": {},
                  "replicationLag": null,
                  "role": "HA",
                  "status": "ONLINE",
                  "version": "8.0.28"
              },
              "127.0.0.1:3330": {
                  "address": "127.0.0.1:3330",
                  "memberRole": "SECONDARY",
                  "mode": "R/O",
                  "readReplicas": {},
                  "replicationLag": null,
                  "role": "HA",
                  "status": "ONLINE",
                  "version": "8.0.28"
              }
          },
          "topologyMode": "Single-Primary"
      },
      "groupInformationSourceMember": "127.0.0.1:3320"
    }
  16. All instances are back online, and the cluster can tolerate one failure again.
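
When you are finished testing, you can tear down the sandbox deployment from MySQL Shell. This is only a sketch: dba.deleteSandboxInstance() removes the sandbox's data directory, so run it only against test instances, and stop MySQL Router separately (for example with the stop.sh or stop.ps1 script that bootstrap places next to start.sh and start.ps1, if your installation generated one):

    mysql-js> dba.stopSandboxInstance(3310)
    mysql-js> dba.stopSandboxInstance(3320)
    mysql-js> dba.stopSandboxInstance(3330)
    mysql-js> dba.deleteSandboxInstance(3310)
    mysql-js> dba.deleteSandboxInstance(3320)
    mysql-js> dba.deleteSandboxInstance(3330)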