Building a VPN custom network mesh-like thing (in the cloud)

raindog308 Administrator, Veteran

I'm going to sound like a total dork here, but I'm just not that deep a networking guy.

I was thinking recently that it would be convenient if I could have my own ipv4/ipv6 space, mainly because then I could write one set of firewall rules and not have to add in random IPs every time I buy a new VPS.

But then I thought...could I do that with a VPN network on top of the normal ipv4 network? So imagine I have a dozen VPSes at a dozen providers because, well, I do. Can I have them all on my own private 10.x network and talk to each other on it?

I'd want any of them to be able to talk to any other (well, I'd layer firewall rules on top of that).

If so...how would I set that up?

  • do I need some sort of VPN concentrator? I'm assuming I can run it on a LEB. Can I make it HA? Because I'm assuming if it goes down, everything goes down.

  • I assume if I don't go across subnets I don't need to get into routers, though that probably isn't a big deal because every Linux VPS is potentially a router

  • Could I do DHCP on that? Can I have multiple IPs, like a floating IP?

  • Could I segment it with virtual firewall appliances...see...this is where things always go with me, to extreme complexity...

I'm used to the idea of VPNing into a box or VPNing into work, but not the idea of building a VPN network. Though maybe it's all the same and I'm just weak on my network-fu, which I am.

Comments

  • edited June 2016

    Use Tinc: hook your nodes up via their public IPs, and give the mesh whatever size of private subnet you like.

    Then I suppose you could give multiple servers the same subnet, so you can install OVPN with the same config everywhere and just connect to any of them? It will choose the lowest latency.

    So connect to Tinc first, then connect to OVPN.
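
    For reference, a minimal sketch of what that could look like with tinc 1.0; the netname "mynet", node names, and addresses are all made up:

        # /etc/tinc/mynet/tinc.conf on each node
        Name = vps1
        ConnectTo = vps2     # at least one peer reachable on its public IP
        Mode = switch        # all nodes appear plugged into one Ethernet switch

        # /etc/tinc/mynet/tinc-up -- assigns this node's private address
        #!/bin/sh
        ip link set "$INTERFACE" up
        ip addr add 10.10.0.1/24 dev "$INTERFACE"

    Generate each node's keypair with "tincd -n mynet -K", then exchange the files in /etc/tinc/mynet/hosts/ (each holds a node's public Address plus its public key) between the nodes.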

  • 127001127001 Member
    edited June 2016

    You are going down the complex route. As I was reading, I wanted to suggest something, and then your topology kept getting more complex. Kinda sounds like you're interested in creating a stealthy mock botnet on the application layer. Anyway, I think that overall you could use any implementation of PPP. This sort of thing should involve some crypto when the machines in the network are VPSes on multiple networks. A VPN concentrator does this, basically, but I've never used one on a VPS, just bare-metal Cisco concentrators, so I'm just gonna shut up.

  • ALinuxNinja
    edited June 2016

    I've worked on something exactly like this.

    Note that the guide below includes the following additional pieces, which may be useful for adding additional networks, segmenting the nodes, or having access to subnets on the "central" tinc nodes:

    • HA access to subnets on any core node, useful with VPNs for access from the internet or separate tinc networks
    • preference-based routing, to prefer routes from a particular node over another

    Both mainly have to do with HA access to additional subnets (segmented networks, etc). The features above require BGP; that setup is described below, and it is the main addition over the default tinc guide needed to provide the HA routing features.

    First, choose a number of "central" nodes that all nodes will connect to.
    The non-central nodes connecting to the "central" nodes will automatically mesh with each other as the "central" nodes already have the keys for all nodes.
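
    As a sketch, the tinc.conf on a non-central node then only needs meta-connections to the "central" nodes (the names here are hypothetical):

        # /etc/tinc/mynet/tinc.conf on a non-central node
        Name = leaf7
        ConnectTo = core1    # meta-connections to the "central" nodes only;
        ConnectTo = core2    # keys for all other nodes are learned over these

    Only the cores' host files need to exist locally; the leaf's own host file gets copied to the cores.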

    Second, the "central" nodes are meshed with OSPF. In my case, one of the nodes had access to additional subnets, which OSPF distributed. You may not require OSPF in your setup; it's your preference, and you can use whatever routing protocol you want.
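
    If you do use OSPF, a minimal BIRD 1.x fragment on a "central" node could look like this; the interface names are assumptions:

        # /etc/bird/bird.conf fragment on a "central" node
        protocol kernel {
            scan time 10;
            export all;                      # install learned routes in the kernel
        }
        protocol device {
            scan time 10;
        }
        protocol ospf {
            area 0.0.0.0 {
                interface "tnc0" { cost 10; };   # the tinc interface between cores
                interface "eth1" { stub; };      # advertise an attached subnet
            };
        }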

    Third, all nodes are set up with iBGP within the same ASN; the "central" nodes are in charge of distributing routes. The central nodes are set up with default bgp_local_pref <value>; to change the preference. BGP detection of one of the "central" nodes going down is very fast, so non-central nodes will connect to the central nodes in order of preference.
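
    In BIRD 1.x terms, that per-core preference could look roughly like this (ASN, names, and addresses invented):

        # bird.conf fragment on a "central" node, one session per other node
        protocol bgp leaf7 {
            local as 65001;
            neighbor 10.10.0.107 as 65001;  # same ASN on both ends = iBGP
            import none;
            export all;                     # hand the mesh's routes to the leaf
            default bgp_local_pref 200;     # the other core uses e.g. 100, so
        }                                   # leaves prefer this core's routes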

    The main issue was with the BGP preference. I ended up sectioning all my nodes into different zones (was going to use consul with this anyways...), and setting a bgp_local_pref for each zone based on the location of the "central" node.

    Side note #1:
    This works well if you have several tinc networks that you want to route between.

    Side note #2:
    I have a complete Salt formula for this, though it may require some work to suit your needs. It happens to come with a dnsmasq configurator, so that all nodes are reachable by hostname.
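
    The dnsmasq side of that can be as small as one extra hosts file (the paths and names here are invented):

        # /etc/dnsmasq.d/tinc.conf
        addn-hosts=/etc/tinc/mynet/hosts.list

        # /etc/tinc/mynet/hosts.list -- one line per node
        10.10.0.1   core1
        10.10.0.107 leaf7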

    Side Note #3:
    DHCP - I found it was not the best idea. If you need to bind services to an IP, you quickly lose the ability to do that when you distribute IPs using Zeroconf or DHCP. The MAC addresses of tinc interfaces are random, so you would need to assign a static MAC address to each node's interface for its lease to stay stable. Why do that when you can assign an IP directly?
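
    To illustrate the trade-off, both options live in tinc-up anyway (MAC and IP invented):

        # DHCP workaround: pin the tap interface's MAC so leases stay stable...
        ip link set "$INTERFACE" address 02:10:10:00:01:07
        # ...versus just assigning the address yourself and skipping DHCP:
        ip addr add 10.10.0.107/24 dev "$INTERFACE"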

    Side Note #4:
    Firewall. I'm running CSF on all of the core nodes; with some adjustments, you can block routed traffic however you want.
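
    With CSF, trusting the mesh can be as simple as whitelisting its subnet (the subnet is made up):

        # /etc/csf/csf.allow -- trust traffic from the tinc network
        10.10.0.0/24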

  • rm_ IPv6 Advocate, Veteran
    edited June 2016

    So imagine I have a dozen VPSes at a dozen providers because, well, I do. Can I have them all on my own private 10.x network and talk to each other on it?

    I'd want any of them to be able to talk to any other

    Yes you can, and it's quite simple: just set up Tinc. In fact, we already have a recent thread about a similar topic: https://www.lowendtalk.com/discussion/85650/peer-2-peer-vpn
    To quote a part, "[Tinc] makes your servers appear as if they all are plugged into the same Ethernet switch via one more NIC. And from there on it's just normal routing/forwarding and iptables to do whatever you can imagine."
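
    For instance, "whatever you can imagine" can start as plain per-interface iptables rules (interface name and subnet invented):

        # Let mesh members in over the tinc NIC, drop everything else on it
        iptables -A INPUT -i tnc0 -s 10.10.0.0/24 -j ACCEPT
        iptables -A INPUT -i tnc0 -j DROP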

    ALinuxNinja said: First, choose a number of "central" nodes that all nodes will connect to. The non-central nodes connecting to the "central" nodes will automatically mesh with each other as the "central" nodes already have the keys for all nodes.

    With you up to this point. But you don't explicitly mention that you're talking about Tinc.

    ALinuxNinja said: Second, the "central" nodes are meshed with OSPF. Third, all nodes are setup with iBGP within the same ASN.

    Whoa, hold it right there. You need to OSPF the iBGP of ASNs... for what purpose exactly? I invite you to kindly reread the first post; the way I see it, the author would be happy with just a simple Tinc setup, and none of the complex shit you explain further.

    Heck, I've been using Tinc for years with a complex network of up to a dozen nodes, and I still can't say for certain what exactly you achieve on top of regular Tinc by doing all that complex OSPF b/s. Faster failover?... It's fast enough already. Do you want it to be sub-second fast? The OP never specified that's a requirement. And so on and so on.
