
ClickHouse 3-Shard 2-Replica Cluster Deployment

Original article by 冯刚 · 2022-10-08

Preface

This article walks through deploying a ClickHouse cluster with 3 shards and 2 replicas on CentOS 7.

1 Environment

1.1 Topology

(Topology diagram not reproduced here.)
1.2 Environment Details

| Hostname | IP | Ports | Service | Config files |
|---|---|---|---|---|
| clickhou001 | 192.168.6.8 | 2181 | ZooKeeper | /etc/zookeeper/zoo.cfg |
| clickhou002 | 192.168.6.6 | 2181 | ZooKeeper | /etc/zookeeper/zoo.cfg |
| clickhou003 | 192.168.6.13 | 2181 | ZooKeeper | /etc/zookeeper/zoo.cfg |
| clickhou001 | 192.168.6.8 | 9000/8123 | ClickHouse client & server | /etc/clickhouse-server/config_9000.xml, users01.xml, metrika01.xml |
| clickhou001 | 192.168.6.8 | 9200/8223 | ClickHouse client & server | /etc/clickhouse-server/config_9200.xml, users02.xml, metrika02.xml |
| clickhou002 | 192.168.6.6 | 9000/8123 | ClickHouse client & server | /etc/clickhouse-server/config_9000.xml, users01.xml, metrika01.xml |
| clickhou002 | 192.168.6.6 | 9200/8223 | ClickHouse client & server | /etc/clickhouse-server/config_9200.xml, users02.xml, metrika02.xml |
| clickhou003 | 192.168.6.13 | 9000/8123 | ClickHouse client & server | /etc/clickhouse-server/config_9000.xml, users01.xml, metrika01.xml |
| clickhou003 | 192.168.6.13 | 9200/8223 | ClickHouse client & server | /etc/clickhouse-server/config_9200.xml, users02.xml, metrika02.xml |

The deployment below uses clickhou001 for the example commands; wherever a step must be performed on all three machines, that is noted explicitly. The resulting shard layout is: shard 1 = 192.168.6.8:9000 + 192.168.6.13:9200, shard 2 = 192.168.6.6:9000 + 192.168.6.8:9200, shard 3 = 192.168.6.13:9000 + 192.168.6.6:9200.

2 ZooKeeper Installation

2.1 zoo.cfg Configuration

Performed on all three machines.
[ root@clickhou001:~ ]# cat /etc/zookeeper/zoo.cfg 
dataDir=/data/zookeeper
# the port at which the clients will connect
clientPort=2181
server.1=192.168.6.8:2888:3888
server.2=192.168.6.6:2888:3888
server.3=192.168.6.13:2888:3888
[ root@clickhou001:~ ]# echo stat | nc localhost 2181 | grep version
Zookeeper version: 3.4.13
Official documentation: https://zookeeper.apache.org/doc/r3.4.14/zookeeperStarted.html
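
One prerequisite the listing above does not show: each ZooKeeper node needs a myid file under dataDir whose number matches its server.N entry in zoo.cfg, or the ensemble will not form. A minimal sketch (run the matching line on each host):

[ root@clickhou001:~ ]# echo 1 > /data/zookeeper/myid    # use 2 on clickhou002 and 3 on clickhou003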

2.2 Autostart

[ root@clickhou001:~ ]# cat /etc/systemd/system/zookeeper.service 
[Unit]
Description=ZooKeeper Service
Requires=network.target
After=syslog.target

[Service]

Type=forking
User=zookeeper
Group=zookeeper
ExecStart=/usr/local/zookeeper/bin/zkServer.sh start /etc/zookeeper/zoo.cfg
ExecStop=/usr/local/zookeeper/bin/zkServer.sh stop /etc/zookeeper/zoo.cfg
ExecReload=/usr/local/zookeeper/bin/zkServer.sh restart /etc/zookeeper/zoo.cfg


[Install]
WantedBy=default.target

systemctl daemon-reload
systemctl enable zookeeper.service
systemctl start zookeeper.service
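
To confirm the ensemble elected a leader, zkServer.sh status can be run on each node; one node should report Mode: leader and the other two Mode: follower. A quick check, using the same invocation style as the unit file above:

[ root@clickhou001:~ ]# /usr/local/zookeeper/bin/zkServer.sh status /etc/zookeeper/zoo.cfg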

3 ClickHouse Installation

3.1 RPM Installation

Performed on all three machines.
yum -y install yum-utils
rpm --import https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG
yum-config-manager --add-repo https://repo.clickhouse.com/rpm/stable/x86_64
yum install clickhouse-server clickhouse-client
Official documentation: https://clickhouse.com/docs/en/install

3.2 config.xml Configuration

Performed on all three machines. Apart from the ports and directories, only the display_name differs per instance:

On clickhou002, config_9000.xml: <display_name>[clickhou002] {分片2-副本1} {9000} > </display_name>
On clickhou002, config_9200.xml: <display_name>[clickhou002] {分片3-副本2} {9200} > </display_name>
On clickhou003, config_9000.xml: <display_name>[clickhou003] {分片3-副本1} {9000} > </display_name>
On clickhou003, config_9200.xml: <display_name>[clickhou003] {分片1-副本2} {9200} > </display_name>
Default content is omitted; only the settings that need to be modified are shown below, mainly ports and directories.
[ root@clickhou001:~ ]# cat /etc/clickhouse-server/config_9000.xml 
<yandex>
    <logger>
        <!-- Possible levels [1]:
          - none (turns off logging)
          - fatal
          - critical
          - error
          - warning
          - notice
          - information
          - debug
          - trace
            [1]: https://github.com/pocoproject/poco/blob/poco-1.9.4-release/Foundation/include/Poco/Logger.h#L105-L114
        -->
        <level>trace</level>
        <log>/data/clickhouse/clickhouse_9000/log/clickhouse-server.log</log>
        <errorlog>/data/clickhouse/clickhouse_9000/log/clickhouse-server.err.log</errorlog>
    </logger>

    <!-- It is the name that will be shown in the clickhouse-client.
         By default, anything with "production" will be highlighted in red in query prompt.
    -->
    <!--display_name>production</display_name-->
    <display_name>[clickhou001] {分片1-副本1} {9000} > </display_name>

    <!-- Port for HTTP API. See also 'https_port' for secure connections.
         This interface is also used by ODBC and JDBC drivers (DataGrip, Dbeaver, ...)
         and by most of web interfaces (embedded UI, Grafana, Redash, ...).
      -->
    <http_port>8123</http_port>

    <!-- Port for interaction by native protocol with:
         - clickhouse-client and other native ClickHouse tools (clickhouse-benchmark, clickhouse-copier);
         - clickhouse-server with other clickhouse-servers for distributed query processing;
         - ClickHouse drivers and applications supporting native protocol
         (this protocol is also informally called as "the TCP protocol");
         See also 'tcp_port_secure' for secure connections.
    -->
    <tcp_port>9000</tcp_port>

    <!-- Compatibility with MySQL protocol. -->
    <mysql_port>9004</mysql_port>

    <!-- Compatibility with PostgreSQL protocol. -->
    <postgresql_port>9005</postgresql_port>

    <!-- Port for communication between replicas. Used for data exchange.
         This port should not be accessible from untrusted networks.
      -->
    <interserver_http_port>9009</interserver_http_port>
    <interserver_http_host>192.168.6.8</interserver_http_host>

    <!-- Listen specified address.
         Use :: (wildcard IPv6 address), if you want to accept connections both with IPv4 and IPv6 from everywhere.
      -->
    <listen_host>0.0.0.0</listen_host>

    <!-- Used with https_port and tcp_port_secure. -->
    <openSSL>
        <server> <!-- Used for https server AND secure tcp port -->
            <certificateFile>/etc/clickhouse-server/server.crt</certificateFile>
            <privateKeyFile>/etc/clickhouse-server/server.key</privateKeyFile>
        </server>
    </openSSL>

    <!-- Path to data directory, with trailing slash. -->
    <path>/data/clickhouse/clickhouse_9000/</path>

    <!-- Path to temporary data for processing hard queries. -->
    <tmp_path>/data/clickhouse/clickhouse_9000/tmp/</tmp_path>

    <!-- Directory with user provided files that are accessible by 'file' table function. -->
    <user_files_path>/data/clickhouse/clickhouse_9000/user_files/</user_files_path>

    <!-- Sources to read users, roles, access rights, profiles of settings, quotas. -->
    <user_directories>
        <users_xml>
            <!-- Path to configuration file with predefined users. -->
            <path>users01.xml</path>
        </users_xml>
        <local_directory>
            <!-- Path to folder where users created by SQL commands are stored. -->
            <path>/data/clickhouse/clickhouse_9000/access/</path>
        </local_directory>
    </user_directories>

    <!-- Directory in <clickhouse-path> containing schema files for various input formats. -->
    <format_schema_path>/data/clickhouse/clickhouse_9000/format_schemas/</format_schema_path>
</yandex>

[ root@clickhou001:~ ]# cat /etc/clickhouse-server/config_9200.xml 
<yandex>
    <logger>
        <!-- Possible levels [1]:

          - none (turns off logging)
          - fatal
          - critical
          - error
          - warning
          - notice
          - information
          - debug
          - trace

            [1]: https://github.com/pocoproject/poco/blob/poco-1.9.4-release/Foundation/include/Poco/Logger.h#L105-L114
        -->
        <level>trace</level>
        <log>/data/clickhouse/clickhouse_9200/log/clickhouse-server.log</log>
        <errorlog>/data/clickhouse/clickhouse_9200/log/clickhouse-server.err.log</errorlog>
    </logger>

    <!-- It is the name that will be shown in the clickhouse-client.
         By default, anything with "production" will be highlighted in red in query prompt.
    -->
    <!--display_name>production</display_name-->

    <display_name>[clickhou001] {分片2-副本2} {9200}  > </display_name>    

    <!-- Port for HTTP API. See also 'https_port' for secure connections.
         This interface is also used by ODBC and JDBC drivers (DataGrip, Dbeaver, ...)
         and by most of web interfaces (embedded UI, Grafana, Redash, ...).
      -->
    <http_port>8223</http_port>

    <!-- Port for interaction by native protocol with:
         - clickhouse-client and other native ClickHouse tools (clickhouse-benchmark, clickhouse-copier);
         - clickhouse-server with other clickhouse-servers for distributed query processing;
         - ClickHouse drivers and applications supporting native protocol
         (this protocol is also informally called as "the TCP protocol");
         See also 'tcp_port_secure' for secure connections.
    -->
    <tcp_port>9200</tcp_port>

    <!-- Compatibility with MySQL protocol.
         ClickHouse will pretend to be MySQL for applications connecting to this port.
    -->
    <mysql_port>9204</mysql_port>

    <!-- Compatibility with PostgreSQL protocol.
         ClickHouse will pretend to be PostgreSQL for applications connecting to this port.
    -->
    <postgresql_port>9205</postgresql_port>

    <!-- Port for communication between replicas. Used for data exchange.
         It provides low-level data access between servers.
         This port should not be accessible from untrusted networks.
         See also 'interserver_http_credentials'.
         Data transferred over connections to this port should not go through untrusted networks.
         See also 'interserver_https_port'.
      -->
    <interserver_http_port>9209</interserver_http_port>

    <interserver_http_host>192.168.6.8</interserver_http_host>

    <!-- Listen specified address.
         Use :: (wildcard IPv6 address), if you want to accept connections both with IPv4 and IPv6 from everywhere.
         Notes:
         If you open connections from wildcard address, make sure that at least one of the following measures applied:
         - server is protected by firewall and not accessible from untrusted networks;
         - all users are restricted to subset of network addresses (see users.xml);
         - all users have strong passwords, only secure (TLS) interfaces are accessible, or connections are only made via TLS interfaces.
         - users without password have readonly access.
         See also: https://www.shodan.io/search?query=clickhouse
      -->
    <listen_host>0.0.0.0</listen_host>

    <!-- Used with https_port and tcp_port_secure. Full ssl options list: https://github.com/ClickHouse-Extras/poco/blob/master/NetSSL_OpenSSL/include/Poco/Net/SSLManager.h#L71 -->
    <openSSL>
        <server> <!-- Used for https server AND secure tcp port -->
            <!-- openssl req -subj "/CN=localhost" -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout /etc/clickhouse-server/server.key -out /etc/clickhouse-server/server.crt -->
            <certificateFile>/etc/clickhouse-server/server.crt</certificateFile>
            <privateKeyFile>/etc/clickhouse-server/server.key</privateKeyFile>
        </server>
    </openSSL>

    <!-- Path to data directory, with trailing slash. -->
    <path>/data/clickhouse/clickhouse_9200/</path>

    <!-- Path to temporary data for processing hard queries. -->
    <tmp_path>/data/clickhouse/clickhouse_9200/tmp/</tmp_path>

    <!-- Policy from the <storage_configuration> for the temporary files.
         If not set <tmp_path> is used, otherwise <tmp_path> is ignored.

         Notes:
         - move_factor              is ignored
         - keep_free_space_bytes    is ignored
         - max_data_part_size_bytes is ignored
         - you must have exactly one volume in that policy
    -->
    <!-- <tmp_policy>tmp</tmp_policy> -->

    <!-- Directory with user provided files that are accessible by 'file' table function. -->
    <user_files_path>/data/clickhouse/clickhouse_9200/user_files/</user_files_path>

    <!-- Sources to read users, roles, access rights, profiles of settings, quotas. -->
    <user_directories>
        <users_xml>
            <!-- Path to configuration file with predefined users. -->
            <path>users02.xml</path>
        </users_xml>
        <local_directory>
            <!-- Path to folder where users created by SQL commands are stored. -->
            <path>/data/clickhouse/clickhouse_9200/access/</path>
        </local_directory>
    </user_directories>

    <!-- Directory in <clickhouse-path> containing schema files for various input formats.
         The directory will be created if it doesn't exist.
      -->    
    <format_schema_path>/data/clickhouse/clickhouse_9200/format_schemas/</format_schema_path>
</yandex>
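
Before starting the instances, the data and log directories referenced in these configs must exist and be writable by the clickhouse user. A sketch, performed on all three machines (paths taken from the configs above):

[ root@clickhou001:~ ]# mkdir -p /data/clickhouse/clickhouse_9000/log /data/clickhouse/clickhouse_9200/log
[ root@clickhou001:~ ]# chown -R clickhouse:clickhouse /data/clickhouse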

3.3 metrika.xml Configuration

metrika01.xml is loaded by the 9000 instance and metrika02.xml by the 9200 instance. The zookeeper-servers and clickhouse_remote_servers sections are identical on every node; only the macros block differs per instance.

[ root@clickhou001:~ ]# cat /etc/clickhouse-server/metrika01.xml 
<?xml version="1.0"?>

<yandex>
  <zookeeper-servers>
    <node index="1">
      <host>192.168.6.8</host>
      <port>2181</port>
    </node>
    <node index="2">
      <host>192.168.6.6</host>
      <port>2181</port>
    </node>
    <node index="3">
      <host>192.168.6.13</host>
      <port>2181</port>
    </node>
  </zookeeper-servers>


<clickhouse_remote_servers>
  <!-- 3 shards, 2 replicas -->
  <!-- cluster name -->
  <cluster_slowlog>
    <!-- data shard 1 -->
    <shard>
      <weight>1</weight>
      <!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
      <internal_replication>true</internal_replication>
      <replica>
        <host>192.168.6.8</host>
        <port>9000</port>
      </replica>
      <replica>
         <host>192.168.6.13</host>
         <port>9200</port>
      </replica>

    </shard>
    <!-- data shard 2 -->
    <shard>
      <weight>1</weight>
      <internal_replication>true</internal_replication>
      <replica>
        <host>192.168.6.6</host>
        <port>9000</port>
      </replica>
      <replica>
        <host>192.168.6.8</host>
        <port>9200</port>
      </replica>

    </shard>
    <!-- data shard 3 -->
    <shard>
      <weight>1</weight>
      <internal_replication>true</internal_replication>
      <replica>
        <host>192.168.6.13</host>
        <port>9000</port>
      </replica>
      <replica>
        <host>192.168.6.6</host>
        <port>9200</port>
      </replica>

    </shard>
  <!-- cluster name -->
  </cluster_slowlog>

</clickhouse_remote_servers>


<macros>
  <cluster>cluster_slowlog</cluster>
  <shard>01</shard>
  <replica>click1-shard01-replica01</replica>
</macros>

<!-- listen networks -->
<networks>
   <ip>::/0</ip>
</networks>

<!-- data compression method -->
<clickhouse_compression>
<case>
  <min_part_size>10000000000</min_part_size>
  <min_part_size_ratio>0.01</min_part_size_ratio>
  <method>lz4</method>
</case>
</clickhouse_compression>

</yandex>
[ root@clickhou001:~ ]# cat /etc/clickhouse-server/metrika02.xml 
<?xml version="1.0"?>

<yandex>
  <zookeeper-servers>
    <node index="1">
      <host>192.168.6.8</host>
      <port>2181</port>
    </node>
    <node index="2">
      <host>192.168.6.6</host>
      <port>2181</port>
    </node>
    <node index="3">
      <host>192.168.6.13</host>
      <port>2181</port>
    </node>
  </zookeeper-servers>


<clickhouse_remote_servers>
  <!-- 3 shards, 2 replicas -->
  <!-- cluster name -->
  <cluster_slowlog>
    <!-- data shard 1 -->
    <shard>
      <weight>1</weight>
      <!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
      <internal_replication>true</internal_replication>
      <replica>
        <host>192.168.6.8</host>
        <port>9000</port>
      </replica>
      <replica>
         <host>192.168.6.13</host>
         <port>9200</port>
      </replica>

    </shard>
    <!-- data shard 2 -->
    <shard>
      <weight>1</weight>
      <internal_replication>true</internal_replication>
      <replica>
        <host>192.168.6.6</host>
        <port>9000</port>
      </replica>
      <replica>
        <host>192.168.6.8</host>
        <port>9200</port>
      </replica>

    </shard>
    <!-- data shard 3 -->
    <shard>
      <weight>1</weight>
      <internal_replication>true</internal_replication>
      <replica>
        <host>192.168.6.13</host>
        <port>9000</port>
      </replica>
      <replica>
        <host>192.168.6.6</host>
        <port>9200</port>
      </replica>

    </shard>
  <!-- cluster name -->
  </cluster_slowlog>

</clickhouse_remote_servers>


<macros>
  <cluster>cluster_slowlog</cluster>
  <shard>02</shard>
  <replica>click2-shard02-replica02</replica>
</macros>

<!-- listen networks -->
<networks>
   <ip>::/0</ip>
</networks>

<!-- data compression method -->
<clickhouse_compression>
<case>
  <min_part_size>10000000000</min_part_size>
  <min_part_size_ratio>0.01</min_part_size_ratio>
  <method>lz4</method>
</case>
</clickhouse_compression>

</yandex>
[ root@clickhou002:~ ]# cat /etc/clickhouse-server/metrika01.xml 
<?xml version="1.0"?>

<yandex>
  <zookeeper-servers>
    <node index="1">
      <host>192.168.6.8</host>
      <port>2181</port>
    </node>
    <node index="2">
      <host>192.168.6.6</host>
      <port>2181</port>
    </node>
    <node index="3">
      <host>192.168.6.13</host>
      <port>2181</port>
    </node>
  </zookeeper-servers>


<clickhouse_remote_servers>
  <!-- 3 shards, 2 replicas -->
  <!-- cluster name -->
  <cluster_slowlog>
    <!-- data shard 1 -->
    <shard>
      <weight>1</weight>
      <!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
      <internal_replication>true</internal_replication>
      <replica>
        <host>192.168.6.8</host>
        <port>9000</port>
      </replica>
      <replica>
         <host>192.168.6.13</host>
         <port>9200</port>
      </replica>

    </shard>
    <!-- data shard 2 -->
    <shard>
      <weight>1</weight>
      <internal_replication>true</internal_replication>
      <replica>
        <host>192.168.6.6</host>
        <port>9000</port>
      </replica>
      <replica>
        <host>192.168.6.8</host>
        <port>9200</port>
      </replica>

    </shard>
    <!-- data shard 3 -->
    <shard>
      <weight>1</weight>
      <internal_replication>true</internal_replication>
      <replica>
        <host>192.168.6.13</host>
        <port>9000</port>
      </replica>
      <replica>
        <host>192.168.6.6</host>
        <port>9200</port>
      </replica>

    </shard>
  <!-- cluster name -->
  </cluster_slowlog>

</clickhouse_remote_servers>


<macros>
  <cluster>cluster_slowlog</cluster>
  <shard>02</shard>
  <replica>click2-shard02-replica01</replica>
</macros>

<!-- listen networks -->
<networks>
   <ip>::/0</ip>
</networks>

<!-- data compression method -->
<clickhouse_compression>
<case>
  <min_part_size>10000000000</min_part_size>
  <min_part_size_ratio>0.01</min_part_size_ratio>
  <method>lz4</method>
</case>
</clickhouse_compression>

</yandex>
[ root@clickhou002:~ ]# cat /etc/clickhouse-server/metrika02.xml 
<?xml version="1.0"?>

<yandex>
  <zookeeper-servers>
    <node index="1">
      <host>192.168.6.8</host>
      <port>2181</port>
    </node>
    <node index="2">
      <host>192.168.6.6</host>
      <port>2181</port>
    </node>
    <node index="3">
      <host>192.168.6.13</host>
      <port>2181</port>
    </node>
  </zookeeper-servers>


<clickhouse_remote_servers>
  <!-- 3 shards, 2 replicas -->
  <!-- cluster name -->
  <cluster_slowlog>
    <!-- data shard 1 -->
    <shard>
      <weight>1</weight>
      <!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
      <internal_replication>true</internal_replication>
      <replica>
        <host>192.168.6.8</host>
        <port>9000</port>
      </replica>
      <replica>
         <host>192.168.6.13</host>
         <port>9200</port>
      </replica>

    </shard>
    <!-- data shard 2 -->
    <shard>
      <weight>1</weight>
      <internal_replication>true</internal_replication>
      <replica>
        <host>192.168.6.6</host>
        <port>9000</port>
      </replica>
      <replica>
        <host>192.168.6.8</host>
        <port>9200</port>
      </replica>

    </shard>
    <!-- data shard 3 -->
    <shard>
      <weight>1</weight>
      <internal_replication>true</internal_replication>
      <replica>
        <host>192.168.6.13</host>
        <port>9000</port>
      </replica>
      <replica>
        <host>192.168.6.6</host>
        <port>9200</port>
      </replica>

    </shard>
  <!-- cluster name -->
  </cluster_slowlog>

</clickhouse_remote_servers>


<macros>
  <cluster>cluster_slowlog</cluster>
  <shard>03</shard>
  <replica>click3-shard03-replica02</replica>
</macros>

<!-- listen networks -->
<networks>
   <ip>::/0</ip>
</networks>

<!-- data compression method -->
<clickhouse_compression>
<case>
  <min_part_size>10000000000</min_part_size>
  <min_part_size_ratio>0.01</min_part_size_ratio>
  <method>lz4</method>
</case>
</clickhouse_compression>

</yandex>
[ root@clickhou003:~ ]# cat /etc/clickhouse-server/metrika01.xml
<?xml version="1.0"?>

<yandex>
  <zookeeper-servers>
    <node index="1">
      <host>192.168.6.8</host>
      <port>2181</port>
    </node>
    <node index="2">
      <host>192.168.6.6</host>
      <port>2181</port>
    </node>
    <node index="3">
      <host>192.168.6.13</host>
      <port>2181</port>
    </node>
  </zookeeper-servers>


<clickhouse_remote_servers>
  <!-- 3 shards, 2 replicas -->
  <!-- cluster name -->
  <cluster_slowlog>
    <!-- data shard 1 -->
    <shard>
      <weight>1</weight>
      <!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
      <internal_replication>true</internal_replication>
      <replica>
        <host>192.168.6.8</host>
        <port>9000</port>
      </replica>
      <replica>
        <host>192.168.6.13</host>
        <port>9200</port>
      </replica>

    </shard>
    <!-- data shard 2 -->
    <shard>
      <weight>1</weight>
      <internal_replication>true</internal_replication>
      <replica>
        <host>192.168.6.6</host>
        <port>9000</port>
      </replica>
      <replica>
        <host>192.168.6.8</host>
        <port>9200</port>
      </replica>

    </shard>
    <!-- data shard 3 -->
    <shard>
      <weight>1</weight>
      <internal_replication>true</internal_replication>
      <replica>
        <host>192.168.6.13</host>
        <port>9000</port>
      </replica>
      <replica>
        <host>192.168.6.6</host>
        <port>9200</port>
      </replica>

    </shard>
  <!-- cluster name -->
  </cluster_slowlog>

</clickhouse_remote_servers>


<macros>
  <cluster>cluster_slowlog</cluster>
  <shard>03</shard>
  <replica>click3-shard03-replica01</replica>
</macros>

<!-- listen networks -->
<networks>
   <ip>::/0</ip>
</networks>

<!-- data compression method -->
<clickhouse_compression>
<case>
  <min_part_size>10000000000</min_part_size>
  <min_part_size_ratio>0.01</min_part_size_ratio>
  <method>lz4</method>
</case>
</clickhouse_compression>

</yandex>
[ root@clickhou003:~ ]# cat /etc/clickhouse-server/metrika02.xml
<?xml version="1.0"?>

<yandex>
  <zookeeper-servers>
    <node index="1">
      <host>192.168.6.8</host>
      <port>2181</port>
    </node>
    <node index="2">
      <host>192.168.6.6</host>
      <port>2181</port>
    </node>
    <node index="3">
      <host>192.168.6.13</host>
      <port>2181</port>
    </node>
  </zookeeper-servers>


<clickhouse_remote_servers>
  <!-- 3 shards, 2 replicas -->
  <!-- cluster name -->
  <cluster_slowlog>
    <!-- data shard 1 -->
    <shard>
      <weight>1</weight>
      <!-- Optional. Whether to write data to just one of the replicas. Default: false (write data to all replicas). -->
      <internal_replication>true</internal_replication>
      <replica>
        <host>192.168.6.8</host>
        <port>9000</port>
      </replica>
      <replica>
         <host>192.168.6.13</host>
         <port>9200</port>
      </replica>

    </shard>
    <!-- data shard 2 -->
    <shard>
      <weight>1</weight>
      <internal_replication>true</internal_replication>
      <replica>
        <host>192.168.6.6</host>
        <port>9000</port>
      </replica>
      <replica>
        <host>192.168.6.8</host>
        <port>9200</port>
      </replica>

    </shard>
    <!-- data shard 3 -->
    <shard>
      <weight>1</weight>
      <internal_replication>true</internal_replication>
      <replica>
        <host>192.168.6.13</host>
        <port>9000</port>
      </replica>
      <replica>
        <host>192.168.6.6</host>
        <port>9200</port>
      </replica>

    </shard>
  <!-- cluster name -->
  </cluster_slowlog>

</clickhouse_remote_servers>


<macros>
  <cluster>cluster_slowlog</cluster>
  <shard>01</shard>
  <replica>click1-shard01-replica02</replica>
</macros>

<!-- listen networks -->
<networks>
   <ip>::/0</ip>
</networks>

<!-- data compression method -->
<clickhouse_compression>
<case>
  <min_part_size>10000000000</min_part_size>
  <min_part_size_ratio>0.01</min_part_size_ratio>
  <method>lz4</method>
</case>
</clickhouse_compression>

</yandex>
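
The config excerpts in 3.2 do not show how each instance picks up its metrika file; presumably that is among the omitted defaults. As a sketch, each config_*.xml would need an include_from pointing at its own metrika file, plus incl references matching the element names above (shown for the 9000 instance; the 9200 instance points at metrika02.xml instead):

    <include_from>/etc/clickhouse-server/metrika01.xml</include_from>
    <zookeeper incl="zookeeper-servers" optional="true" />
    <remote_servers incl="clickhouse_remote_servers" />
    <macros incl="macros" optional="true" />
    <compression incl="clickhouse_compression" optional="true" />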


3.4 Autostart

Performed on all three machines.
[ root@clickhou001:~ ]# cat /etc/systemd/system/clickhouse-server9000.service 
[Unit]
Description=ClickHouse Server (analytic DBMS for big data)
Requires=network-online.target
After=network-online.target

[Service]
Type=simple
User=clickhouse
Group=clickhouse
Restart=always
RestartSec=30
RuntimeDirectory=clickhouse-server
ExecStart=/usr/bin/clickhouse-server --config=/etc/clickhouse-server/config_9000.xml --pid-file=/data/clickhouse/clickhouse_9000/clickhouse-server9000.pid
LimitCORE=infinity
LimitNOFILE=500000
CapabilityBoundingSet=CAP_NET_ADMIN CAP_IPC_LOCK CAP_SYS_NICE

[Install]
WantedBy=multi-user.target
[ root@clickhou001:~ ]# cat /etc/systemd/system/clickhouse-server9200.service
[Unit]
Description=ClickHouse Server (analytic DBMS for big data)
Requires=network-online.target
After=network-online.target

[Service]
Type=simple
User=clickhouse
Group=clickhouse
Restart=always
RestartSec=30
RuntimeDirectory=clickhouse-server
ExecStart=/usr/bin/clickhouse-server --config=/etc/clickhouse-server/config_9200.xml --pid-file=/data/clickhouse/clickhouse_9200/clickhouse-server9200.pid
LimitCORE=infinity
LimitNOFILE=500000
CapabilityBoundingSet=CAP_NET_ADMIN CAP_IPC_LOCK CAP_SYS_NICE

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl restart clickhouse-server9000.service
systemctl enable clickhouse-server9000.service
systemctl status clickhouse-server9000.service

systemctl restart clickhouse-server9200.service
systemctl enable clickhouse-server9200.service
systemctl status clickhouse-server9200.service
[ root@clickhou001:~ ]# systemctl status clickhouse-server9000.service
 clickhouse-server9000.service - ClickHouse Server (analytic DBMS for big data)
Loaded: loaded (/etc/systemd/system/clickhouse-server9000.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-11-09 18:23:36 CST; 10 months 28 days ago
Main PID: 14116 (clckhouse-watch)
CGroup: /system.slice/clickhouse-server9000.service
├─14116 clickhouse-watchdog --config=/etc/clickhouse-server/config_9000.xml --pid-file=/data/clickhouse/clickhouse_9000/clickhouse-server9000.pid
└─14117 /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config_9000.xml --pid-file=/data/clickhouse/clickhouse_9000/clickhouse-server9000.pid

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
[ root@clickhou001:~ ]# systemctl status clickhouse-server9200.service
 clickhouse-server9200.service - ClickHouse Server (analytic DBMS for big data)
Loaded: loaded (/etc/systemd/system/clickhouse-server9200.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-11-09 18:23:43 CST; 10 months 28 days ago
Main PID: 14296 (clckhouse-watch)
CGroup: /system.slice/clickhouse-server9200.service
├─14296 clickhouse-watchdog --config=/etc/clickhouse-server/config_9200.xml --pid-file=/data/clickhouse/clickhouse_9200/clickhouse-server9200.pid
└─14297 /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config_9200.xml --pid-file=/data/clickhouse/clickhouse_9200/clickhouse-server9200.pid

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
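
A quick sanity check that both instances are listening on their native and HTTP ports (ss is assumed available here; netstat -lntp works equally well):

[ root@clickhou001:~ ]# ss -lntp | grep clickhouse
Expect LISTEN entries for 9000/8123/9004/9005/9009 (first instance) and 9200/8223/9204/9205/9209 (second instance).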

4 Login Verification

[ root@clickhou001:~ ]# clickhouse-client --port=9000 --multiline
ClickHouse client version 21.10.2.15 (official build).
Connecting to localhost:9000 as user default.
Connected to ClickHouse server version 21.10.2 revision 54449.

[clickhou001-ops-prod-bj4] {分片1-副本1} {9000}  > select cluster,shard_num,replica_num,host_name,host_address,port,is_local,user from system.clusters;

SELECT
    cluster,
    shard_num,
    replica_num,
    host_name,
    host_address,
    port,
    is_local,
    user
FROM system.clusters

Query id: f6555600-bae2-4f90-b41d-474458ac4e66

┌─cluster─────────┬─shard_num─┬─replica_num─┬─host_name────┬─host_address─┬─port─┬─is_local─┬─user────┐
│ cluster_slowlog │         1 │           1 │ 192.168.6.8  │ 192.168.6.8  │ 9000 │        1 │ default │
│ cluster_slowlog │         1 │           2 │ 192.168.6.13 │ 192.168.6.13 │ 9200 │        0 │ default │
│ cluster_slowlog │         2 │           1 │ 192.168.6.6  │ 192.168.6.6  │ 9000 │        0 │ default │
│ cluster_slowlog │         2 │           2 │ 192.168.6.8  │ 192.168.6.8  │ 9200 │        0 │ default │
│ cluster_slowlog │         3 │           1 │ 192.168.6.13 │ 192.168.6.13 │ 9000 │        0 │ default │
│ cluster_slowlog │         3 │           2 │ 192.168.6.6  │ 192.168.6.6  │ 9200 │        0 │ default │
└─────────────────┴───────────┴─────────────┴──────────────┴──────────────┴──────┴──────────┴─────────┘

6 rows in set. Elapsed: 0.002 sec. 
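
With the cluster visible in system.clusters, the {shard} and {replica} macros can be exercised by creating a replicated local table plus a distributed table over it. A sketch (the slowlog_local/slowlog_all names and columns are hypothetical, not from the original deployment):

CREATE TABLE default.slowlog_local ON CLUSTER cluster_slowlog
(
    event_time DateTime,
    query String
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{cluster}/{shard}/slowlog_local', '{replica}')
ORDER BY event_time;

CREATE TABLE default.slowlog_all ON CLUSTER cluster_slowlog AS default.slowlog_local
ENGINE = Distributed(cluster_slowlog, default, slowlog_local, rand());

Rows inserted through slowlog_all are spread across the three shards, and with internal_replication=true each shard's data reaches its second copy via ReplicatedMergeTree and ZooKeeper.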

5 Monitoring Deployment

The clickhouse_exporter binary used here was recompiled (a customized build).

[ root@clickhou001:/data/app ]# nohup ./clickhouse_exporter -scrape_uri=http://192.168.6.8:8123/ -log.level=info >> /dev/null 2>&1 &
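
If the recompiled build kept the stock defaults, the exporter serves Prometheus metrics on port 9116; a quick check (the port is an assumption about this custom build):

[ root@clickhou001:/data/app ]# curl -s http://localhost:9116/metrics | head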
Last modified: 2022-10-17 11:14:05