GlusterFS Geo Replication help

Hello,

I was trying to connect my local GlusterFS volume to a remote OpenVZ VPS. My local volume runs on Ubuntu 16.04 with GlusterFS 3.7, and the remote VPS runs the same versions. The remote VPS has FUSE enabled. When I try to do this:

gluster volume geo-replication vm hkstore:/data/myna/ start
Staging failed on localhost. Please check the log file for more details.
geo-replication command failed

[2016-08-09 20:13:48.535465] W [dict.c:896:str_to_data] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.14/xlator/mgmt/glusterd.so(glusterd_get_slave_details_confpath+0x13b) [0x7f141521c02b] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_set_str+0x16) [0x7f141a03ad26] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(str_to_data+0x80) [0x7f141a039430] ) 0-dict: value is NULL [Invalid argument]

Any suggestion please? Thanks in advance.

Comments

  • @jibon57 said: (quoting the original post above in full)

    Hi,

    That looks like only part of the log; can you post the rest?

    Also, are you following a particular guide?
    The official geo-replication guide is here: https://gluster.readthedocs.io/en/latest/Administrator Guide/Geo Replication/
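
    One thing to double-check against that guide: it runs a create step with push-pem before start, while the command in the original post jumps straight to start. With the volume name from your command, and assuming the slave end is a Gluster volume called slavevol (adjust to whatever actually exists on hkstore), the documented order is roughly:

    gluster volume geo-replication vm hkstore::slavevol create push-pem
    gluster volume geo-replication vm hkstore::slavevol start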

  • Thanks for the reply. Yes, I was following that tutorial. Here is the log:

    [2016-08-10 06:10:13.876864] E [MSGID: 106025] [glusterd-geo-rep.c:5829:glusterd_get_slave_info] 0-management: Invalid slave name [Invalid argument]
    [2016-08-10 06:10:13.876985] W [dict.c:896:str_to_data] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.7.14/xlator/mgmt/glusterd.so(glusterd_get_slave_details_confpath+0x13b) [0x7fe832f0b02b] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_set_str+0x16) [0x7fe837d29d26] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(str_to_data+0x80) [0x7fe837d28430] ) 0-dict: value is NULL [Invalid argument]
    [2016-08-10 06:10:13.876998] E [MSGID: 106060] [glusterd-geo-rep.c:5728:glusterd_get_slave_details_confpath] 0-management: Unable to store slave volume name.
    [2016-08-10 06:10:13.877006] E [MSGID: 106300] [glusterd-geo-rep.c:3027:glusterd_op_stage_gsync_create] 0-management: Unable to fetch slave or confpath details.
    [2016-08-10 06:10:13.877018] E [MSGID: 106301] [glusterd-syncop.c:1281:gd_stage_op_phase] 0-management: Staging of operation 'Volume Geo-replication Create' failed on localhost
    [2016-08-10 06:10:36.974923] W [socket.c:591:__socket_rwv] 0-nfs: readv on /var/run/gluster/d8bcb47211906ed835be34bb3284c162.socket failed (Invalid argument)
    The message "I [MSGID: 106006] [glusterd-svc-mgmt.c:323:glusterd_svc_common_rpc_notify] 0-management: nfs has disconnected from glusterd." repeated 39 times between [2016-08-10 06:09:21.759297] and [2016-08-10 06:11:19.109328]
    [2016-08-10 06:11:22.109700] I [MSGID: 106006] [glusterd-svc-mgmt.c:323:glusterd_svc_common_rpc_notify] 0-management: nfs has disconnected from glusterd.
    [2016-08-10 06:12:43.427978] W [socket.c:591:__socket_rwv] 0-nfs: readv on /var/run/gluster/d8bcb47211906ed835be34bb3284c162.socket failed (Invalid argument)
    The message "I [MSGID: 106006] [glusterd-svc-mgmt.c:323:glusterd_svc_common_rpc_notify] 0-management: nfs has disconnected from glusterd." repeated 39 times between [2016-08-10 06:11:22.109700] and [2016-08-10 06:13:19.431901]
    [2016-08-10 06:13:22.432386] I [MSGID: 106006] [glusterd-svc-mgmt.c:323:glusterd_svc_common_rpc_notify] 0-management: nfs has disconnected from glusterd.

    Thanks again for your suggestion.
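
    A note on the first error line above: "Invalid slave name" means glusterd could not parse the slave argument itself. Geo-replication expects the slave to be a Gluster volume written as [user@]host::volname, with a double colon, not a directory path like hkstore:/data/myna/. Assuming a slave volume named slavevol exists on hkstore, the create command would take this form:

    gluster volume geo-replication vm hkstore::slavevol create push-pem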

  • Hi, I have followed the official instructions line by line. Now the situation is like this:

    gluster volume geo-replication vm hkstore::slavevol create push-pem force

    output error:

    Unable to fetch slave volume details. Please check the slave cluster and slave volume.
    geo-replication command failed

    From log:

    [2016-08-10 08:37:45.599835] E [MSGID: 106316] [glusterd-geo-rep.c:2715:glusterd_verify_slave] 0-management: Not a valid slave
    [2016-08-10 08:37:45.599914] E [MSGID: 106316] [glusterd-geo-rep.c:3102:glusterd_op_stage_gsync_create] 0-management: hkstore::slavevol is not a valid slave volume. Error: Unable to fetch slave volume details. Please check the slave cluster and slave volume.
    [2016-08-10 08:37:45.599921] E [MSGID: 106301] [glusterd-syncop.c:1281:gd_stage_op_phase] 0-management: Staging of operation 'Volume Geo-replication Create' failed on localhost : Unable to fetch slave volume details. Please check the slave cluster and slave volume.

    But in "hkstore" remote VPS, if I run:

    gluster volume info

    Output:

    Volume Name:vm
    Type: Replicate
    Volume ID: 309de7aa-36c8-48f4-add7-4a364edd7636
    Status: Started
    Number of Bricks: 1 x 2 = 2
    Transport-type: tcp
    Bricks:
    Brick1: store2:/data/virtual
    Brick2: store1db2:/data/virtual
    Options Reconfigured:
    auth.allow: 192.168.100.*
    performance.readdir-ahead: on

    command:

    gluster volume profile slavevol info

    output:

    Volume slavevol does not exist

    But

    cat /etc/glusterfs/glusterd.vol

    Output

    volume management
    type mgmt/glusterd
    option transport.socket.read-fail-log off
    option transport.socket.keepalive-interval 2
    option working-directory /var/lib/glusterd
    option mountbroker-geo-replication.geoaccount slavevol
    option rpc-auth-allow-insecure on
    option transport.socket.keepalive-time 10
    option geo-replication-log-group geogroup
    option event-threads 1
    option mountbroker-root /var/mountbroker-root
    option ping-timeout 0
    option transport-type socket,rdma
    # option base-port 49152

    Any idea?
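
    A hedged reading of the output above: the mountbroker-geo-replication.geoaccount slavevol option only tells the mountbroker which volume the unprivileged geoaccount user may mount; it does not create that volume. Since "gluster volume profile slavevol info" reports that slavevol does not exist, the remote side apparently has no volume by that name yet, which would explain "Unable to fetch slave volume details". The slave volume would need to be created and started there first, along these lines, where the brick path is only a placeholder:

    gluster volume create slavevol hkstore:/data/slavevol-brick
    gluster volume start slavevol

    Alternatively, if the plan was to replicate into an existing volume on the remote side, the slave part of the geo-replication command should name that volume instead of slavevol.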

  • gluster volume geo-replication vm hkstore::slavevol create push-pem force
    should be
    gluster volume geo-replication vm geoaccount@hkstore::slavevol create push-pem force
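
    For the geoaccount@ form to work, the non-root (mountbroker) setup on the slave side also has to be complete before the create step. Roughly, following the guide and reusing the account and group names already present in your glusterd.vol, and noting that the service name can differ on Ubuntu:

    groupadd geogroup
    useradd -m -G geogroup geoaccount
    mkdir -p /var/mountbroker-root
    chmod 0711 /var/mountbroker-root
    service glusterfs-server restart

    After a successful create push-pem, the guide then has you run set_geo_rep_pem_keys.sh as root on the slave node. The path below is an assumption based on where the GlusterFS helper scripts usually live on Ubuntu:

    /usr/lib/x86_64-linux-gnu/glusterfs/set_geo_rep_pem_keys.sh geoaccount vm slavevol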

  • Thanks, but:

    gluster volume geo-replication vm geoaccount@hkstore::slavevol create push-pem force

    Output:

    Unable to fetch slave volume details. Please check the slave cluster and slave volume.
    geo-replication command failed

    Log:

    [2016-08-10 14:58:48.756415] E [MSGID: 106316] [glusterd-geo-rep.c:2715:glusterd_verify_slave] 0-management: Not a valid slave
    [2016-08-10 14:58:48.756486] E [MSGID: 106316] [glusterd-geo-rep.c:3102:glusterd_op_stage_gsync_create] 0-management: geoaccount@hkstore::slavevol is not a valid slave volume. Error: Unable to fetch slave volume details. Please check the slave cluster and slave volume.
    [2016-08-10 14:58:48.756503] E [MSGID: 106301] [glusterd-syncop.c:1281:gd_stage_op_phase] 0-management: Staging of operation 'Volume Geo-replication Create' failed on localhost : Unable to fetch slave volume details. Please check the slave cluster and slave volume.
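
    One more prerequisite worth ruling out: glusterd validates the slave by reaching hkstore over SSH from the master node, so the guide requires passwordless SSH from a master node to the slave (root@hkstore, or geoaccount@hkstore for the non-root setup) plus the common pem file generated on the master before create is run. A quick sanity check from the master node, assuming the account name used above:

    ssh geoaccount@hkstore
    gluster system:: execute gsec_create

    The ssh command should log in without a password prompt, and gsec_create only needs to run once on one of the master nodes.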

  • Hmm.

    Have you verified slave/master compatibility using gverify?

  • How do I run that?

    bash -x /usr/lib/x86_64-linux-gnu/glusterfs/gverify.sh vm hkstore slavevol /home/log.txt

    Output:

    + BUFFER_SIZE=104857600
    + SSH_PORT=
    ++ gluster --print-logdir
    + slave_log_file=/var/log/glusterfs/geo-replication-slaves/slave.log
    + main vm hkstore slavevol /home/log.txt
    + log_file=
    /usr/lib/x86_64-linux-gnu/glusterfs/gverify.sh: line 121: $log_file: ambiguous redirect
    + ping_host slavevol
    + '[' 1 -ne 0 ']'
    + echo 'FORCE_BLOCKER|slavevol not reachable.'
    /usr/lib/x86_64-linux-gnu/glusterfs/gverify.sh: line 133: $log_file: ambiguous redirect
    + exit 1
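
    Judging from that trace, log_file ends up empty, which explains the two ambiguous redirect errors, and ping_host is handed "slavevol" as if it were a hostname, so the arguments look shifted: this build of gverify.sh seems to expect a slave user and an SSH port as well. The argument order below is inferred from the trace, so treat it as a guess, but a re-run along these lines might get further:

    bash -x /usr/lib/x86_64-linux-gnu/glusterfs/gverify.sh vm geoaccount hkstore slavevol 22 /home/log.txt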
