
Building a Highly Available FastDFS Distributed Storage Cluster

feelwow 2020-03-31

Cluster Architecture Diagram

A quick walkthrough of the architecture: the front end is a two-node nginx + keepalived high-availability load-balancing cluster that balances traffic across the two tracker servers behind it, and the back end is a distributed storage pool built on FastDFS, with the trackers handling tracking, file scheduling, and so on.

Environment

  • Front end: nginx + keepalived (two machines, one master and one backup, 192.168.3.21/22)

  • Tracker servers: nginx, fastdfs, tracker (two machines, peers, 192.168.3.19/20)

  • Storage servers: nginx, fastdfs, storage, plus the FastDFS nginx module (two machines, two peer groups, 192.168.3.23/24)

Highly available load balancing with keepalived and nginx

Since the front end only does simple load balancing and needs no extra modules, a plain installation is enough; in a production environment, it is best to compile and install everything identically to keep the nodes consistent.

1) Install nginx

Not covered in detail here (Google it if needed); for completeness, a minimal source build is sketched below.
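A minimal sketch, assuming a build from the official nginx.org source tarball (the version is illustrative; the --prefix matches the /usr/local/nginx paths used later in this article):

    cd /usr/local/src && wget http://nginx.org/download/nginx-1.16.1.tar.gz
    tar zxf nginx-1.16.1.tar.gz && cd nginx-1.16.1
    ./configure --prefix=/usr/local/nginx
    make && make install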

2) Install keepalived

    cd /usr/local/src && wget http://www.keepalived.org/software/keepalived-1.2.18.tar.gz
    tar zxf keepalived-1.2.18.tar.gz && cd keepalived-1.2.18
    ./configure --prefix=/usr/local/keepalived && make && make install

3) Copy files to the default locations

    mkdir /etc/keepalived
    cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
    cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
    cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
    ln -s /usr/local/sbin/keepalived /usr/sbin/
    ln -s /usr/local/keepalived/sbin/keepalived /sbin/

4) Register as a system service and enable at boot

    # cat /lib/systemd/system/keepalived.service
    [Unit]
    Description=Keepalived
    After=syslog.target network.target remote-fs.target nss-lookup.target

    [Service]
    Type=forking
    PIDFile=/var/run/keepalived.pid
    ExecStart=/usr/local/keepalived/sbin/keepalived -D
    ExecReload=/bin/kill -s HUP $MAINPID
    ExecStop=/bin/kill -s QUIT $MAINPID
    PrivateTmp=true

    [Install]
    WantedBy=multi-user.target

    # systemctl daemon-reload
    # systemctl enable keepalived
    # systemctl start keepalived

Note:

    Up to this point, both machines get exactly the same treatment: install nginx and keepalived, then register the system services as above.
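To confirm keepalived is registered and running on each node, a quick check might be:

    # systemctl status keepalived
    # ps -ef | grep keepalived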

5) Edit the master and backup configuration files

    # cat /etc/keepalived/keepalived.conf
    ! Configuration File for keepalived
    global_defs {
        router_id master
    }
    vrrp_script chk_nginx {
        script "/etc/keepalived/nginx_check.sh"
        interval 2
        weight -20
    }
    vrrp_instance VI_1 {
        state MASTER
        interface enp0s3
        virtual_router_id 33
        mcast_src_ip 192.168.3.21
        priority 100
        nopreempt
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        track_script {
            chk_nginx
        }
        virtual_ipaddress {
            192.168.3.100
        }
    }

    [root@keepalived-nginx-backup /etc/keepalived]# cat /etc/keepalived/keepalived.conf
    ! Configuration File for keepalived
    global_defs {
        router_id backup
    }
    vrrp_script chk_nginx {
        script "/etc/keepalived/nginx_check.sh"
        interval 2
        weight -20
    }
    vrrp_instance VI_1 {
        state BACKUP
        interface enp0s3
        virtual_router_id 33
        mcast_src_ip 192.168.3.22
        priority 90
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        track_script {
            chk_nginx
        }
        virtual_ipaddress {
            192.168.3.100
        }
    }
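Once both nodes are running, the VIP should be bound on the master only. A quick way to verify (using the enp0s3 interface from the configs above):

    # On the master, 192.168.3.100 should be listed; on the backup, it should not
    ip addr show enp0s3 | grep 192.168.3.100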

6) Write the nginx health-check script

    # cat /etc/keepalived/nginx_check.sh
    #!/bin/bash
    # Count running nginx processes
    A=`ps -C nginx --no-header | wc -l`
    if [ $A -eq 0 ]; then
        # nginx is down: try to restart it
        /usr/local/nginx/sbin/nginx
        sleep 2
        # If nginx still will not start, kill keepalived so the VIP fails over
        if [ `ps -C nginx --no-header | wc -l` -eq 0 ]; then
            killall keepalived
        fi
    fi

Note: this script has a problem as written; the --no-header option needs to be removed, otherwise nginx reports an error and fails to start, complaining that the port is already in use.
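As a hedge against that problem, a variant of the check script based on pgrep (a sketch, not from the original article; pgrep needs no header-suppression flag at all) could look like this:

    #!/bin/bash
    # Alternative check: pgrep -x matches the exact process name "nginx"
    if ! pgrep -x nginx > /dev/null; then
        /usr/local/nginx/sbin/nginx
        sleep 2
        if ! pgrep -x nginx > /dev/null; then
            # nginx would not come back: stop keepalived so the VIP fails over
            killall keepalived
        fi
    fi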

7) Testing

The general idea: make the two machines distinguishable in their responses, then stop keepalived and nginx on the master, hit the VIP in a browser to see whether it still answers, confirm the VIP has floated over to the backup, and check the system log for keepalived's VIP state transitions.
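Concretely, the test could run along these lines (a sketch; the log location varies by distribution):

    # On the master: stop both services to force a failover
    systemctl stop keepalived && /usr/local/nginx/sbin/nginx -s stop

    # From any host: the VIP should still answer, now served by the backup
    curl -I http://192.168.3.100/

    # On the backup: watch for the transition to the MASTER state
    tail -f /var/log/messages | grep -i vrrp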

Building the FastDFS tracker servers

nginx on the tracker machines does not need the fastdfs-nginx-module.
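The FastDFS build itself is not shown in the original; the usual procedure, sketched here against the happyfish100 GitHub repositories, is to install libfastcommon first and then FastDFS:

    # libfastcommon is a prerequisite of fastdfs
    cd /usr/local/src
    git clone https://github.com/happyfish100/libfastcommon.git
    cd libfastcommon && ./make.sh && ./make.sh install

    cd /usr/local/src
    git clone https://github.com/happyfish100/fastdfs.git
    cd fastdfs && ./make.sh && ./make.sh install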

Building the FastDFS storage servers

Editing the configuration files

1) Tie the storage and tracker servers together

Make the following changes on the tracker servers:

    # egrep -v "^#|^$" tracker.conf
    disabled=false
    bind_addr=
    port=22122
    connect_timeout=30
    network_timeout=60
    base_path=/data/tracker
    max_connections=256
    accept_threads=1
    work_threads=4
    min_buff_size = 8KB
    max_buff_size = 128KB
    store_lookup=2
    store_group=group2
    store_server=0
    store_path=0
    download_server=0
    reserved_storage_space = 10%
    log_level=info
    run_by_group=
    run_by_user=
    allow_hosts=*
    sync_log_buff_interval = 10
    check_active_interval = 120
    thread_stack_size = 64KB
    storage_ip_changed_auto_adjust = true
    storage_sync_file_max_delay = 86400
    storage_sync_file_max_time = 300
    use_trunk_file = false
    slot_min_size = 256
    slot_max_size = 16MB
    trunk_file_size = 64MB
    trunk_create_file_advance = false
    trunk_create_file_time_base = 02:00
    trunk_create_file_interval = 86400
    trunk_create_file_space_threshold = 20G
    trunk_init_check_occupying = false
    trunk_init_reload_from_binlog = false
    trunk_compress_binlog_min_interval = 0
    use_storage_id = false
    storage_ids_filename = storage_ids.conf
    id_type_in_filename = ip
    store_slave_file_use_link = false
    rotate_error_log = false
    error_log_rotate_time=00:00
    rotate_error_log_size = 0
    log_file_keep_days = 0
    use_connection_pool = false
    connection_pool_max_idle_time = 3600
    http.server_port=8000
    http.check_alive_interval=30
    http.check_alive_type=tcp
    http.check_alive_uri=/status.html

    # egrep -v "^#|^$" client.conf
    connect_timeout=30
    network_timeout=60
    base_path=/data/tracker
    tracker_server=192.168.3.20:22122
    tracker_server=192.168.3.19:22122
    log_level=info
    use_connection_pool = false
    connection_pool_max_idle_time = 3600
    load_fdfs_parameters_from_tracker=false
    use_storage_id = false
    storage_ids_filename = storage_ids.conf
    http.tracker_server_port=8000
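With tracker.conf and client.conf in place, create the base_path and start the tracker daemon on both machines (a sketch; binary paths can differ by FastDFS version):

    mkdir -p /data/tracker
    /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf start
    # Verify the tracker is listening on 22122
    netstat -lntp | grep 22122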

Edit the nginx configuration so it reverse-proxies the back-end storage servers:

    # egrep -v "^#|^$" /usr/local/nginx/conf/nginx.conf
    user root;
    worker_processes 1;
    events {
        worker_connections 1024;
        use epoll;
    }
    http {
        include mime.types;
        default_type application/octet-stream;
        #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
        # '$status $body_bytes_sent "$http_referer" '
        # '"$http_user_agent" "$http_x_forwarded_for"';
        #access_log logs/access.log main;
        sendfile on;
        tcp_nopush on;
        #keepalive_timeout 0;
        keepalive_timeout 65;
        #gzip on;
        # cache-related settings
        server_names_hash_bucket_size 128;
        client_header_buffer_size 32k;
        large_client_header_buffers 4 32k;
        client_max_body_size 300m;
        proxy_redirect off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 90;
        proxy_send_timeout 90;
        proxy_read_timeout 90;
        proxy_buffer_size 16k;
        proxy_buffers 4 64k;
        proxy_busy_buffers_size 128k;
        proxy_temp_file_write_size 128k;
        # cache storage path, layout, shared memory size, max disk usage, expiry
        proxy_cache_path /data/cache/nginx/proxy_cache levels=1:2
        keys_zone=http-cache:200m max_size=1g inactive=30d;
        proxy_temp_path /data/cache/nginx/proxy_cache/tmp;
        # group1 servers
        upstream fdfs_group1 {
            #server 10.0.3.89:8888 weight=1 max_fails=2 fail_timeout=30s;
            server 192.168.3.23:8888 weight=1 max_fails=2 fail_timeout=30s;
        }
        # group2 servers
        upstream fdfs_group2 {
            #server 10.0.3.88:8888 weight=1 max_fails=2 fail_timeout=30s;
            server 192.168.3.24:8888 weight=1 max_fails=2 fail_timeout=30s;
        }
        server {
            listen 8000;
            server_name localhost;
            # load-balancing rules per group
            location /group1/M00 {
                proxy_next_upstream http_502 http_504 error timeout invalid_header;
                proxy_cache http-cache;
                proxy_cache_valid 200 304 12h;
                proxy_cache_key $uri$is_args$args;
                proxy_pass http://fdfs_group1;
                expires 30d;
            }
            location /group2/M00 {
                proxy_next_upstream http_502 http_504 error timeout invalid_header;
                proxy_cache http-cache;
                proxy_cache_valid 200 304 12h;
                proxy_cache_key $uri$is_args$args;
                proxy_pass http://fdfs_group2;
                expires 30d;
            }
            #error_page 404 /404.html;
            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }
    }

The two tracker servers just need matching configurations.
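Before starting this nginx, make sure the cache directories referenced above exist, then test and launch (paths as configured above):

    mkdir -p /data/cache/nginx/proxy_cache/tmp
    /usr/local/nginx/sbin/nginx -t && /usr/local/nginx/sbin/nginx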

2) Configure the storage servers

    # egrep -v "^$|^#" storage.conf
    disabled=false
    group_name=group1
    bind_addr=
    client_bind=true
    port=23000
    connect_timeout=30
    network_timeout=60
    heart_beat_interval=30
    stat_report_interval=60
    base_path=/data/storage
    max_connections=256
    buff_size = 256KB
    accept_threads=1
    work_threads=4
    disk_rw_separated = true
    disk_reader_threads = 1
    disk_writer_threads = 1
    sync_wait_msec=50
    sync_interval=0
    sync_start_time=00:00
    sync_end_time=23:59
    write_mark_file_freq=500
    store_path_count=1
    store_path0=/data/storage
    subdir_count_per_path=256
    tracker_server=192.168.3.19:22122
    tracker_server=192.168.3.20:22122
    log_level=info
    run_by_group=
    run_by_user=
    allow_hosts=*
    file_distribute_path_mode=0
    file_distribute_rotate_count=100
    fsync_after_written_bytes=0
    sync_log_buff_interval=10
    sync_binlog_buff_interval=10
    sync_stat_file_interval=300
    thread_stack_size=512KB
    upload_priority=10
    if_alias_prefix=
    check_file_duplicate=0
    file_signature_method=hash
    key_namespace=FastDFS
    keep_alive=0
    use_access_log = false
    rotate_access_log = false
    access_log_rotate_time=00:00
    rotate_error_log = false
    error_log_rotate_time=00:00
    rotate_access_log_size = 0
    rotate_error_log_size = 0
    log_file_keep_days = 0
    file_sync_skip_invalid_record=false
    use_connection_pool = false
    connection_pool_max_idle_time = 3600
    http.domain_name=
    http.server_port=8888

    # egrep -v "^$|^#" mod_fastdfs.conf
    connect_timeout=100
    network_timeout=300
    base_path=/tmp
    load_fdfs_parameters_from_tracker=true
    storage_sync_file_max_delay = 86400
    use_storage_id = false
    storage_ids_filename = storage_ids.conf
    tracker_server=192.168.3.19:22122
    tracker_server=192.168.3.20:22122
    storage_server_port=23000
    group_name=group1
    url_have_group_name = true
    store_path_count=1
    store_path0=/data/storage
    log_level=info
    log_filename=
    response_mode=proxy
    if_alias_prefix=
    flv_support = true
    flv_extension = flv
    group_count = 2
    [group1]
    group_name=group1
    storage_server_port=23000
    store_path_count=1
    store_path0=/data/storage
    [group2]
    group_name=group2
    storage_server_port=23000
    store_path_count=1
    store_path0=/data/storage
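Once both storage nodes carry these configs, create the store path, start the storage daemon, and check the cluster from either tracker (fdfs_monitor ships with FastDFS; the binary paths are assumptions):

    mkdir -p /data/storage
    /usr/bin/fdfs_storaged /etc/fdfs/storage.conf start
    # Both storage servers should report status ACTIVE under their groups
    /usr/bin/fdfs_monitor /etc/fdfs/client.conf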

Edit the nginx configuration file:

    # egrep -v "^$|^#" /usr/local/nginx/conf/conf.d/storage.conf
    server {
        listen 8888;
        server_name localhost;
        location ~/group([0-9])/M00 {
            #alias /fastdfs/storage/data;
            ngx_fastdfs_module;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

The two storage servers just need to stay in step with each other.
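One step the original does not spell out: the ngx_fastdfs_module directive above reads /etc/fdfs/mod_fastdfs.conf at startup, and the module also expects http.conf and mime.types from the FastDFS source tree to be present in /etc/fdfs. Roughly (source paths are assumptions; adjust to wherever the trees were unpacked):

    # Assumed source locations; copy the module's support files into /etc/fdfs
    cp /usr/local/src/fastdfs/conf/http.conf /etc/fdfs/
    cp /usr/local/src/fastdfs/conf/mime.types /etc/fdfs/
    cp /usr/local/src/fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs/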

3) Configure the load balancers

    # egrep -v "^$|^#" nginx.conf
    user root;
    worker_processes 1;
    events {
        worker_connections 1024;
    }
    http {
        include mime.types;
        default_type application/octet-stream;
        #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
        # '$status $body_bytes_sent "$http_referer" '
        # '"$http_user_agent" "$http_x_forwarded_for"';
        #access_log logs/access.log main;
        sendfile on;
        #tcp_nopush on;
        #keepalive_timeout 0;
        keepalive_timeout 65;
        proxy_read_timeout 150;
        #gzip on;
        ## FastDFS Tracker Proxy
        upstream fastdfs_tracker {
            #server 10.0.3.90:8000 weight=1 max_fails=2 fail_timeout=30s;
            #server 10.0.3.90:8000 weight=1 max_fails=2 fail_timeout=30s;
            server 192.168.3.19:8000 weight=1 max_fails=2 fail_timeout=30s;
            server 192.168.3.20:8000 weight=1 max_fails=2 fail_timeout=30s;
        }
        server {
            listen 80;
            server_name localhost;
            #charset koi8-r;
            #access_log logs/host.access.log main;
            location / {
                root html;
                index index.html index.htm;
            }
            #error_page 404 /404.html;
            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
            ## FastDFS Proxy
            location /dfs {
                root html;
                index index.html index.htm;
                proxy_pass http://fastdfs_tracker/;
                proxy_set_header Host $http_host;
                proxy_set_header Cookie $http_cookie;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                client_max_body_size 300m;
            }
        }
    }

All that remains is testing.

1) Test fastdfs first

Upload an image from a tracker server, then access it on a storage server:

    /usr/bin/fdfs_upload_file /etc/fdfs/client.conf image.jpg

Then access through the tracker server to check that the request is reverse-proxied to the back-end storage.
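fdfs_upload_file prints the file ID under which the image was stored, of the form group1/M00/00/00/...jpg (the ID below is purely illustrative). Appending that ID to a tracker's port-8000 listener should return the image via the reverse proxy:

    # Hypothetical file ID; substitute the one printed by fdfs_upload_file
    curl -I http://192.168.3.19:8000/group1/M00/00/00/wKgDF1example.jpg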

2) Test the load balancer

Access the load balancer's VIP to confirm it works.
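Given the /dfs proxy location configured on the load balancers, the same file should come back through the VIP (again with the illustrative file ID):

    curl -I http://192.168.3.100/dfs/group1/M00/00/00/wKgDF1example.jpg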

With that, a simple highly available, load-balanced FastDFS cluster is up and running; some tuning can follow later.

