Proxmox 5 - containers / VMs cannot communicate with each other using the public IP

t_anjan Member
edited February 2018 in Help

Hello,

I have a dedicated server at Hetzner with a single NIC and a single public IP on it. There are quite a few discussions about setting up Proxmox networking in such a situation. Specifically, I have followed the following guides:

Find my interfaces file contents below:

# cat /etc/network/interfaces                                                                                     
### Hetzner Online GmbH installimage

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback
iface lo inet6 loopback

auto eno1
iface eno1 inet static
  address 145.250.76.40
  netmask 255.255.255.224
  gateway 145.250.76.33
  # route 145.250.76.32/27 via 145.250.76.33
  up route add -net 145.250.76.32 netmask 255.255.255.224 gw 145.250.76.33 dev eno1
  up ip link set eno1 txqueuelen 10000

iface eno1 inet6 static
  address 2b01:4f8:212:4138::2
  netmask 64
  gateway fe71::1

auto vmbr2
iface vmbr2 inet static
  address 192.168.22.254
  netmask 255.255.255.0
  bridge_ports none
  bridge_stp off
  bridge_fd 0
  up ip link set vmbr2 txqueuelen 10000
  post-up echo 1 > /proc/sys/net/ipv4/ip_forward
  post-up echo 1 > /proc/sys/net/ipv4/conf/vmbr2/proxy_arp
  post-up iptables -t nat -A POSTROUTING -s '192.168.22.0/24' -o eno1 -j MASQUERADE
  post-down iptables -t nat -D POSTROUTING -s '192.168.22.0/24' -o eno1 -j MASQUERADE
  post-up iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to 192.168.22.5:22
  post-down iptables -t nat -D PREROUTING -i eno1 -p tcp --dport 2222 -j DNAT --to 192.168.22.5:22
  post-up iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 80 -j DNAT --to 192.168.22.5:80
  post-down iptables -t nat -D PREROUTING -i eno1 -p tcp --dport 80 -j DNAT --to 192.168.22.5:80
  post-up iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 443 -j DNAT --to 192.168.22.5:443
  post-down iptables -t nat -D PREROUTING -i eno1 -p tcp --dport 443 -j DNAT --to 192.168.22.5:443

My sysctl.conf: https://pastebin.com/KN9drab7

To summarize:

  • The NIC on the host has the public IP assigned on the interface named eno1.
  • There is a separate bridge, vmbr2, which has a private IP range (192.168.22.x) assigned to it.
  • All containers and VMs are connected to vmbr2 and have an IP in the same private range. The IP of vmbr2 is the gateway for all the VMs and containers. Internet access works from all the VMs and containers.
  • I have one VM on 192.168.22.5 which is set up as a reverse proxy. Three ports on the Proxmox host (2222, 80, 443) are forwarded to this VM. This has been set up using iptables PREROUTING (DNAT) rules.
  • On this reverse-proxy VM, HAProxy runs on 80 and 443 and forwards each request to the appropriate VM / container's (private) IP based on the hostname of the request (see the sketch below).
  • So, when I make a request to abcd.example.com from the outside world, it gets routed correctly to the VM with IP 192.168.22.25.

All of the above works as expected.
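
For reference, a minimal HAProxy sketch of the hostname-based routing described above; the frontend/backend names are made up, only the hostname abcd.example.com and the target IP 192.168.22.25 are taken from this post:

# /etc/haproxy/haproxy.cfg (fragment, sketch only)
frontend http_in
    mode http
    bind *:80
    acl host_abcd hdr(host) -i abcd.example.com
    use_backend bk_abcd if host_abcd

backend bk_abcd
    mode http
    server abcd1 192.168.22.25:80 check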

Now, to the problem I am facing: If I make the same request as above, to abcd.example.com, from one of the other VMs / containers, the request fails.

Say I SSH into 192.168.22.10 and run the command below:

$ curl http://abcd.example.com -v
* Rebuilt URL to: http://abcd.example.com/
*   Trying 145.250.76.40...
* connect to 145.250.76.40 port 80 failed: Connection refused
* Failed to connect to abcd.example.com port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to abcd.example.com port 80: Connection refused

I have checked the reverse-proxy server and the intended target VM. Neither of them receives the request at all. For some reason, the host does not forward the request when it originates from the internal network.

Could somebody tell me what I am missing here?

Comments

  • I am not sure, but the last time something like this happened to me it was a netmask problem. I just had to use 255.255.255.0 or 255.255.255.255 on all VMs.

  • lemon Member
    edited February 2018

    That's because you've got a NAT rule missing. I guess you can ping your public IP from the containers, right?

  • Falzo Member
    edited February 2018

    you should mask/change your public IP when posting here, as you don't want to be a target for whoever comes along.

    afaik your problem here is that packets from your VMs don't hit the PREROUTING chain, because they are simply generated and handled locally already.

    therefore the node is looking for a service listening on port 80 on the main IP, which of course is not there.

    never messed around with this before, but I think you need to specify an additional OUTPUT rule for the nat table on the host node, like:

    iptables -t nat -A OUTPUT -p tcp --destination 145.250.xx.yy/32 --dport 80 -j DNAT --to 192.168.22.5:80
  • t_anjan Member
    edited February 2018

    Thank you for the replies.

    @hammad - Thanks for the suggestion. My netmasks are already all /24 (255.255.255.0).

    @lemon - Yes, you are right. I can ping the public IP from the containers.

    @Falzo:

    Actually, the public IP you see in the post is not really the actual IP. It is made up. I appreciate the concern, though :-)

    I just tried what you suggested. I read about the OUTPUT chain and what you suggest makes perfect logical sense, but it doesn't seem to work. :-(

    $ iptables -t nat -L                                                                                              
    Chain PREROUTING (policy ACCEPT)
    target     prot opt source               destination
    DNAT       tcp  --  anywhere             anywhere             tcp dpt:2222 to:192.168.22.5:22
    DNAT       tcp  --  anywhere             anywhere             tcp dpt:http to:192.168.22.5:80
    DNAT       tcp  --  anywhere             anywhere             tcp dpt:https to:192.168.22.5:443
    
    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination
    
    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination
    DNAT       tcp  --  anywhere             <name_of_host>        tcp dpt:http to:192.168.22.5:80
    DNAT       tcp  --  anywhere             <name_of_host>        tcp dpt:https to:192.168.22.5:443
    
    Chain POSTROUTING (policy ACCEPT)
    target     prot opt source               destination
    MASQUERADE  all  --  192.168.22.0/24      anywhere
    

    Even after adding the OUTPUT rules, the behaviour is exactly the same when I make a request from any of the VMs.

    Is there any other debugging step you can suggest? For example, which log do I have to monitor to get some clue as to where the request is getting blocked? I tried looking at the files in /var/log, but none of them are touched when the request is made from a VM.
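
    (One way to see where the packets actually end up is to log them with iptables rather than hunting through /var/log; a sketch, assuming the public IP from the post and that tcpdump / conntrack-tools are installed:)

    # log port-80 packets as they enter the raw PREROUTING and OUTPUT chains
    iptables -t raw -A PREROUTING -p tcp --dport 80 -j LOG --log-prefix "RAW-PRE: "
    iptables -t raw -A OUTPUT -p tcp -d 145.250.76.40 --dport 80 -j LOG --log-prefix "RAW-OUT: "

    # repeat the curl from a VM and watch the kernel log
    tail -f /var/log/kern.log

    # or watch the bridge / NIC and the connection-tracking table directly
    tcpdump -ni vmbr2 tcp port 80
    conntrack -L | grep 192.168.22

    # remove the LOG rules again when done
    iptables -t raw -D PREROUTING -p tcp --dport 80 -j LOG --log-prefix "RAW-PRE: "
    iptables -t raw -D OUTPUT -p tcp -d 145.250.76.40 --dport 80 -j LOG --log-prefix "RAW-OUT: "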

  • svmo Member
    edited February 2018

    Your POSTROUTING should do a SNAT to your external IP.

    • with masquerade, your source will be the bridge address, which will not be the expected address for the return packets.

    Traffic reflected from the external IP back to your containers goes through the FORWARD chain.

  • @svmo - Could you please elaborate?

    iptables -t nat -A POSTROUTING -s '192.168.22.0/24' -o eno1 -j MASQUERADE

    I thought the above MASQUERADE post-routing rule says the following: All traffic from the 192.168.22.0/24 internal subnet, and going out of the eno1 interface - change the source address of all these packets to the eno1 interface's address (which is the external (public) IP).

    Isn't that what you are saying as well? Masquerade is just a type of SNAT, isn't it?
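
    (For comparison, a sketch of the two POSTROUTING forms; <public_ip> stands in for the address on eno1:)

    # MASQUERADE picks up the outgoing interface's current address automatically
    iptables -t nat -A POSTROUTING -s 192.168.22.0/24 -o eno1 -j MASQUERADE

    # SNAT does the same translation with a fixed, explicitly given address
    iptables -t nat -A POSTROUTING -s 192.168.22.0/24 -o eno1 -j SNAT --to-source <public_ip>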

    @falzo, @lemon and @svmo - I am struggling to understand something. The request I am making from the VMs (e.g. curl http://abcd.example.com -v) - this is not strictly traffic "between the VMs", right? The request actually goes from the VM out into the world (at least, out into Hetzner's networks), gets resolved into an IP address and then gets routed back to the Proxmox host "from the outside world", right? Running a tracepath from the VM shows this:

    $ tracepath abcd.example.com                                                                      
     1?: [LOCALHOST]                                         pmtu 1500
     1:  static.yy.xx.250.145.clients.your-server.de           0.036ms reached
     1:  static.yy.xx.250.145.clients.your-server.de           0.052ms reached
         Resume: pmtu 1500 hops 1 back 1 
    

    So, the request should actually be hitting the PREROUTING chain just like any other request, right?

  • jackb Member, Host Rep
    edited February 2018

    Did you enable the sysctl option that calls iptables on bridge traffic?

    If not, that could be your problem

  • Falzo Member
    edited February 2018

    t_anjan said: The request actually goes from the VM out into the world (at least, out into Hetzner's networks), gets resolved into an IP address and then gets routed back to the Proxmox host "from the outside world", right?

    no it does not. the dns resolving request doesn't have anything to do with your actual traffic to port 80 afterwards, and your tracepath proves it exactly: there is no other IP involved than your server itself. if the traffic left your box, you'd at least need to see it hit the gateway IP...

    t_anjan said: So, the request should actually be hitting the PREROUTING chain just like any other request, right?

    as said before: not the case, hence your problem ;-)

    think of it the other way round: as the prerouting rule does no further checks for anything besides the destination port, all requests sent to port 80 from your VMs would be NATted to your webserver box, in case those packets hit that rule. you would get the desired result for your domain, but not be able to access any other (public) webservers/domains, as they would get routed to your webserver instead.

    you really don't want your packets to go through that chain.

    @svmo said:
    Your POSTROUTING should do a SNAT to your external IP

    • with masquerade your source will be the bridge address which will not be the expected address for the return packets.

    t_anjan said: Isn't that what you are saying as well? Masquerade is just a type of SNAT, isn't it?

    that's my understanding too.

    Traffic between your containers goes through the FORWARD chain.

    afaik the FORWARD chain is not part of the nat table at all, so you can't do any redirect/NAT things with it. and as long as there is no default DROP policy for it, I don't think there is much you can change with FORWARD here.

    @t_anjan general question to rule this out: do you have any other iptables/firewall rules that might interfere?

    jackb said: Did you enable the sysctl option that calls iptables on bridge traffic?

    if this relates to the use of

    sysctl -w net.ipv4.conf.eno1.route_localnet=1

    I am not so sure about the outcome. if this means your local/guest-VM packets will go through the prerouting chain, I am afraid the outcome won't be as desired (as mentioned before).

    @t_anjan depending on your use case, maybe try another approach, like adding the local IP of the webserver VM for your domain(s) to the hosts file? which means having the guest directly access the neighbour on its private IP...
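
    (roughly like this in /etc/hosts on the guest; the hostname and IPs are the ones from this thread - point it either at the webserver VM directly, or at the HAProxy VM on .5 to keep the hostname routing:)

    # /etc/hosts on the guest VM / container (sketch)
    192.168.22.25   abcd.example.com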

  • jackb Member, Host Rep

    @Falzo said:

    jackb said: Did you enable the sysctl option that calls iptables on bridge traffic?

    if this relates to the use of

    sysctl -w net.ipv4.conf.eno1.route_localnet=1

    I am not so sure about the outcome. if this means your local/guest-VM packets will go through the prerouting chain, I am afraid the outcome won't be as desired (as mentioned before).

    net.bridge.bridge-nf-call-iptables
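
    (a sketch of checking / enabling it; on newer kernels the br_netfilter module has to be loaded before this sysctl exists:)

    # check whether bridged traffic is passed through iptables (1 = yes)
    sysctl net.bridge.bridge-nf-call-iptables

    # enable it at runtime
    modprobe br_netfilter
    sysctl -w net.bridge.bridge-nf-call-iptables=1

    # persist it, e.g. in /etc/sysctl.conf:
    # net.bridge.bridge-nf-call-iptables = 1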

  • @Falzo said:

    t_anjan said: The request actually goes from the VM out into the world (at least, out into Hetzner's networks), gets resolved into an IP address and then gets routed back to the Proxmox host "from the outside world", right?

    no it does not. the dns resolving request doesn't have anything to do with your actual traffic to port 80 afterwards, and your tracepath proves it exactly: there is no other IP involved than your server itself. if the traffic left your box, you'd at least need to see it hit the gateway IP...

    Precisely

    @svmo said:
    Your POSTROUTING should do a SNAT to your external IP

    • with masquerade your source will be the bridge address which will not be the expected address for the return packets.

    t_anjan said: Isn't that what you are saying as well? Masquerade is just a type of SNAT, isn't it?

    that's my understanding too.


    Yes, but masquerade does SNAT to whatever address the outgoing interface has - in this case the outgoing interface is vmbr2.

    Traffic between your containers goes through the FORWARD chain.

    afaik the FORWARD chain is not part of the nat table at all, so you can't do any redirect/NAT things with it. and as long as there is no default DROP policy for it, I don't think there is much you can change with FORWARD here.

    Should have been clearer:

    Traffic reflected from the external IP back to your container goes through the FORWARD chain.

    Thanks @Falzo

    Scenario: all traffic from 192.168.22.0/24 to the external IP reflected back to 192.168.22.5:

    iptables -t nat -A PREROUTING -i vmbr2 -d external-IP -j DNAT --to-destination 192.168.22.5
    iptables -t nat -A POSTROUTING -s 192.168.22.0/24 -o vmbr2 -j SNAT --to-source external-IP
    

    Your proxy arp should not be needed, as this is straightforward routing.

  • @svmo said:

    Yes, but masquerade does SNAT to whatever address the outgoing interface has - in this case the outgoing interface is vmbr2.

    I see where you are coming from... yet for 'normal' NAT operations of guests masquerade works just fine, as it usually is just needed for enabling them to access the public net. so this rule should not be wrong in general ;-)

    Traffic between your containers goes through the FORWARD chain.

    afaik the FORWARD chain is not part of the nat table at all, so you can't do any redirect/NAT things with it. and as long as there is no default DROP policy for it, I don't think there is much you can change with FORWARD here.

    Should have been clearer:

    Traffic reflected from the external IP back to your container goes through the FORWARD chain.

    yes, that of course is true.

    Scenario all traffic from 192.168.22.0/24 to external IP reflected back to 192.168.22.5 :

    iptables -t nat -A PREROUTING -i vmbr2 -d external-IP -j DNAT --to-destination 192.168.22.5
    iptables -t nat -A POSTROUTING -s 192.168.22.0/24 -o vmbr2 -j SNAT --to-source external-IP
    

    I am not sure the local traffic will hit PREROUTING even when using vmbr2 in those rules, but it's at least worth a try and might work... you would most probably want to narrow it down to the ports in question, though (80/443).

    Your proxy arp should not be needed as this is straightforward routing:

    agreed.

    jackb said: net.bridge.bridge-nf-call-iptables

    Thanks for pointing that out! I have to admit I never looked into this... it seems to be a good hint and maybe the missing piece.

  • Thank you for all the input. I am going to try all that you guys have suggested and get back with the results in a few hours.

  • Sorry about the delay. I finally got around to testing the suggestions.

    In short, it works!

    What made it work?

    Adding another set of rules to the PREROUTING chain, targeting the vmbr2 interface, was the trick.

    # New rules - catch requests to the public IP coming from the guests on vmbr2 (NAT reflection / hairpin NAT)
    iptables -t nat -A PREROUTING -i vmbr2 -p tcp --dport 80 -j DNAT --destination <public_ip> --to-destination 192.168.22.5:80
    iptables -t nat -A PREROUTING -i vmbr2 -p tcp --dport 443 -j DNAT --destination <public_ip> --to-destination 192.168.22.5:443
    
    # Existing rules - catch requests to the public IP arriving from the internet on eno1
    iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 80 -j DNAT --destination <public_ip> --to-destination 192.168.22.5:80
    iptables -t nat -A PREROUTING -i eno1 -p tcp --dport 443 -j DNAT --destination <public_ip> --to-destination 192.168.22.5:443
    

    What made no difference?

    Adding rules to the OUTPUT chain did not seem to make any difference.

    # Rules that did not help
    iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --destination <public_ip>/32 --to-destination 192.168.22.5:80
    iptables -t nat -A OUTPUT -p tcp --dport 443 -j DNAT --destination <public_ip>/32 --to-destination 192.168.22.5:443
    

    I still kept these rules, because they make sense.

    Regarding the suggestion to use net.bridge.bridge-nf-call-iptables=1, it was already set to 1 for me. So, I did not change anything here.

    What broke things for me?

    Adding an explicit SNAT rule on the vmbr2 interface broke things for me, because it changed the source on all packets in the communication between VMs. So, for example, where I had locked MySQL access down to the private network addresses, the MySQL server saw all traffic as if it were coming from the public IP.

    # Broke things for me
    iptables -t nat -A POSTROUTING -o vmbr2 -j SNAT --source '192.168.22.0/24' --to-source <public_ip>
    

    In the end, I removed this SNAT rule and stuck to just the masquerade rule on eno1, because it just worked.
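
    (For completeness: one way to persist the new rules is with post-up / post-down lines under "iface vmbr2 inet static", mirroring the pattern already used for the eno1 rules; a sketch, not the exact contents of the pastebin below:)

    post-up iptables -t nat -A PREROUTING -i vmbr2 -p tcp -d <public_ip> --dport 80 -j DNAT --to-destination 192.168.22.5:80
    post-down iptables -t nat -D PREROUTING -i vmbr2 -p tcp -d <public_ip> --dport 80 -j DNAT --to-destination 192.168.22.5:80
    post-up iptables -t nat -A PREROUTING -i vmbr2 -p tcp -d <public_ip> --dport 443 -j DNAT --to-destination 192.168.22.5:443
    post-down iptables -t nat -D PREROUTING -i vmbr2 -p tcp -d <public_ip> --dport 443 -j DNAT --to-destination 192.168.22.5:443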

    Full interfaces file

    https://pastebin.com/VHufJDzz

    Thank you to everybody for helping me. :-)

  • t_anjan Member
    edited February 2018

    Is there any way I can restrict the PREROUTING DNAT operation so that it happens only if the request originated from a specific IP address?

    Any firewall rules I add in the Proxmox GUI are bypassed by traffic that gets DNATed. I am guessing this is because the Proxmox GUI adds its firewall rules to the INPUT chain, which is not traversed by the DNATed packets.

    Should I add my "IP address" filter to the FORWARD chain? But the FORWARD chain seems to be set up by Proxmox's firewall to allow all packets.

  • @falzo, @svmo - Sorry to bug you guys. Do you have any suggestions for my last question?

  • Falzo Member
    edited February 2018

    @t_anjan said:
    @falzo, @svmo - Sorry to bug you guys. Do you have any suggestions for my last question?

    for the proxmox firewall it might depend heavily on what you want to achieve and where you put the rules, as there are three levels (datacenter/node/guest) where you can set them up.

    also, you might look into setting up those rules for both devices, eno1 and vmbr2, accordingly?

    I have to admit I usually don't do much with that, but use iptables on the node or guest if needed.

    PS: and whether that's something to put into FORWARD depends very much on what you want to achieve, I'd say ;-)

  • PS: and whether that's something to put into FORWARD depends very much on what you want to achieve, I'd say ;-)

    @falzo - I'm sorry if I did not make it really clear about what I want to achieve.

    I have MySQL installed on one of my VMs. This MySQL server VM has an IP in the 192.168.22.x series, just like all other VMs. The other VMs connect to this MySQL server directly using this private IP. All of this works just fine.

    I usually connect (from my laptop) to this MySQL server by forwarding a port on my laptop to the VM's 3306 (MySQL) port through an SSH tunnel. Then I can connect to the MySQL server from my laptop using just 127.0.0.1:<forwarded port>. This works fine, as long as I can SSH into the VM.
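
    (Roughly like this; the user name and the SSH entry point are placeholders, not details from the thread:)

    # sketch of the tunnel described above
    ssh -L 3306:localhost:3306 someuser@<mysql_vm_or_jump_host>

    # then, from the laptop
    mysql -h 127.0.0.1 -P 3306 -u dbuser -p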

    I also want to access the MySQL server directly using the public IP, without having to SSH into the VM first. To achieve this, I have added a DNAT rule on the Proxmox host to forward port 3306 of the host to 192.168.22.106:3306 (the MySQL VM's private IP). This works too, as long as the VM itself has port 3306 open in its iptables, and the MySQL server software is set up to accept connections from outside the local network.

    Doing this makes the MySQL server open to the world on the public IP of the Proxmox host. I don't want this. I want to limit this public-IP access to only the (static) IP address of my office. In other words, the DNAT should work only if the request is coming from my office's static IP.

    I see that I cannot add a --source parameter to the DNAT rules in the PREROUTING chain.

    Any filter rules I add to the INPUT chain of the host are not applied to the NATed packets. The NATed packets traverse the FORWARD chain only. Hence my question: should I add my filter rules to the FORWARD chain? Currently, Proxmox, by default, allows everything in the FORWARD chain.

    For now, I have added these rules to the iptables of the VM (using ufw), but I would prefer to do this on the Proxmox host itself. How should I go about achieving this?

    Hope I have made it clear.

  • t_anjan said: I see that I cannot add a --source parameter to the DNAT rules in the PREROUTING chain.

    I can't remember having used that, but what makes you think it is not possible?

    for the FORWARD chain I'd say simply go and try if it does what you want :-)

    but afaik there is no FORWARD chain in the nat table at all, only in the filter table.
    your NATted packets also shouldn't go through INPUT/OUTPUT in general; they traverse PREROUTING, FORWARD and POSTROUTING instead, hence they are not touching the proxmox firewall rules.
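
    for what it's worth, iptables does accept a source match together with DNAT; a sketch, assuming a hypothetical office IP of 203.0.113.10 and the MySQL VM at 192.168.22.106 from the thread:

    # only DNAT port 3306 when the request comes from the office IP (hypothetical 203.0.113.10)
    iptables -t nat -A PREROUTING -i eno1 -p tcp -s 203.0.113.10 --dport 3306 -j DNAT --to-destination 192.168.22.106:3306

    # and/or filter the forwarded traffic in the (filter table's) FORWARD chain
    iptables -A FORWARD -i eno1 -p tcp -s 203.0.113.10 -d 192.168.22.106 --dport 3306 -j ACCEPT
    iptables -A FORWARD -i eno1 -p tcp -d 192.168.22.106 --dport 3306 -j DROP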

  • gattytto Member
    edited February 2018

    @t_anjan there is a WAY simpler approach to this using the proxmox openvswitch-switch package. I really can't believe you got this far with iptables tables and routes without losing your mind lol..

    There's never only one way to do stuff, but I always prefer the one that can be reproduced using generic configuration files, especially if you plan on having a cluster of many proxmox hosts living in the same class B subnet, each with class C subnets of their own.. I can imagine how messy your tables will get, right? So here's the simple approach (which took me days without sleep to shorten and simplify).

    openvswitch-switch lets you define virtual interfaces in /etc/network/interfaces

    it's like this:
    physical server:
    eth0: whatever, e.g. 192.168.1.10 / 200.121.22.83 (public IPv4)
    OVSBridge vmbr0: 10.0.1.1
    OVSIntPort int1: 10.0.1.2 (bridge=vmbr0)
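
    (in /etc/network/interfaces that layout looks roughly like the sketch below; it uses the example addresses above and is untested:)

    # sketch: OVS bridge plus an internal port, as provided by openvswitch-switch
    auto vmbr0
    iface vmbr0 inet static
        address 10.0.1.1
        netmask 255.255.255.0
        ovs_type OVSBridge
        ovs_ports int1

    auto int1
    iface int1 inet static
        address 10.0.1.2
        netmask 255.255.255.0
        ovs_type OVSIntPort
        ovs_bridge vmbr0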

    then you add the shorewall firewall package to your server.
    I won't go into great detail for this part as there are many guides on it,
    but the key proxmox parts are:
    interfaces, zones, masq (meaning masquerade) (to eth0 from int1 and vmbr0), and policies.

    shorewall also lets you add external router providers (other proxmox hosts) for other subnets using the /etc/shorewall/providers and route_rules files.

    you can also add the webmin package to your proxmox box and use it to manage a DHCP server, which will be a great addition to ease and automate VM first runs.

    doing it this way you can have different proxmox hosts running in local networks as:
    10.0.1.x host1
    10.0.2.x host2
    and then have as many 10.0.x.254 virtual machines running inside every host.

    additionally, I recommend you use the strongSwan VPN solution to connect your proxmox hosts; that way you can also make a cluster of the hosts and have high-availability services between them..

    edit: after re-reading I can see you talk a lot about ports for specific programs like SQL.. with this approach you use /etc/shorewall/rules to accept ports on your proxmox hosts, and then later in the same file you can DNAT single ports or port ranges to VM IPs inside each proxmox host. e.g.:

    DNAT net dmz:10.0.1.13:22 tcp 9374

  • @gattytto - Thank you very much for the detailed response. I will definitely look into your suggestions. You are right that the iptables rules on the host have become quite messy.
