
How many IPv6 per client


Comments

  • Does the router incur load from many IPv6 addresses? It sounds like so many; some providers give 1 or 10. What about real usage?

  • LeviLevi Member

    @skorupion said:

    @LTniger said: What to do with a /48? What is the practicality of this?

    What's the practicality of having more than 1 IPv4 then?
    You can use a different IP address for every domain you host,
    you don't have to worry about running out of ports or changing ports for your app,
    and in case of a DDoS attack on one of the addresses, you can just turn that address off.
    There are many uses.

    Excuse me, but I don't believe that for those reasons a single person should receive 65,536 IPs. It is some sort of hoarding / wasting of resources that will never be used in full.

  • skorupion Member, Host Rep

    @notarobo said: Does the router incur load from many IPv6 addresses? It sounds like so many; some providers give 1 or 10. What about real usage?

    It doesn't, and you need new routers anyway; newer ones are faster and actually support IPv6.

    Thanked by 1: notarobo
  • skorupion Member, Host Rep

    @LTniger said: Excuse me, but I don't believe that for those reasons a single person should receive 65,536 IPs. It is some sort of hoarding / wasting of resources that will never be used in full.

    It will never be fully used anyway, so we might as well give the addresses out.

    Read this article: http://www.networkworld.com/article/2223248/cisco-subnet/the-logic-of-bad-ipv6-address-management.html

    Thanked by 1: kkrajk
  • brueggus Member, IPv6 Advocate

    @LTniger said:

    @skorupion said:

    @LTniger said: What to do with a /48? What is the practicality of this?

    What's the practicality of having more than 1 IPv4 then?
    You can use a different IP address for every domain you host,
    you don't have to worry about running out of ports or changing ports for your app,
    and in case of a DDoS attack on one of the addresses, you can just turn that address off.
    There are many uses.

    Excuse me, but I don't believe that for those reasons a single person should receive 65,536 IPs. It is some sort of hoarding / wasting of resources that will never be used in full.

    This article was quoted above but I'll quote it again: https://www.networkworld.com/article/2223248/the-logic-of-bad-ipv6-address-management.html

    IPv6 addressing works differently from IPv4.

  • @brueggus said:

    @LTniger said:

    @skorupion said:

    @LTniger said: What to do with a /48? What is the practicality of this?

    What's the practicality of having more than 1 IPv4 then?
    You can use a different IP address for every domain you host,
    you don't have to worry about running out of ports or changing ports for your app,
    and in case of a DDoS attack on one of the addresses, you can just turn that address off.
    There are many uses.

    Excuse me, but I don't believe that for those reasons a single person should receive 65,536 IPs. It is some sort of hoarding / wasting of resources that will never be used in full.

    This article was quoted above but I'll quote it again: https://www.networkworld.com/article/2223248/the-logic-of-bad-ipv6-address-management.html

    IPv6 addressing works differently from IPv4.

    Unless I missed it, this article doesn't take IoT into account.

  • But why does Kimsufi only give a /128 of IPv6?

  • Harambe Member, Host Rep

    @ariq01 said:
    But why does Kimsufi only give a /128 of IPv6?

    You can use the whole /64 they assign, but you can only set rDNS on the single IP
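
    For example, extra addresses from the assigned /64 can simply be added to the interface. A minimal sketch, assuming a typical Linux install and substituting a documentation prefix for the real assignment:

        # Suppose the assigned block is 2001:db8:1:2::/64 (placeholder).
        # Add any further address out of it to the primary interface:
        ip -6 addr add 2001:db8:1:2::beef/64 dev eth0
        # Confirm outbound traffic works from the new address:
        ping -6 -I 2001:db8:1:2::beef example.com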

    Thanked by 2: skorupion, ariq01
  • @freerangecloud said:
    We assign a /64 to each VPS by default; customers can request up to a /56 (with justification) and up to a /48 for a nominal extra fee.

    What would justification look like for a single VPS?

  • Daniel15 Veteran
    edited March 2021

    @notarobo said:
    Does the router incur load from many IPv6 addresses?

    If anything, it's the opposite... Routing IPv6 is easier than IPv4 (since there's no weird subnetting like with IPv4), plus no NAT is needed, so the routing overhead should be lower than with IPv4.

    Thanked by 1: xms
  • I don't know why you guys refer to that site, which talks about subnets for networks with multiple devices, not single ones. Its main point is future-proofing, so you don't need to worry about too many devices per subnet.

    It doesn't even mention the popular reason of /64 blacklisting (which is more valid than talking about future device counts in the thousands) as a reason to assign /64s to single devices.

    It really is a management burden to have so many unique subnets. As long as people still need IPv4, its limit of 254 devices per subnet will come into play long before you get large numbers of devices on a single network.

  • LittleCreek Member, Patron Provider

    So I asked my provider what their policy was, and they didn't answer that exactly; they just went ahead and gave me a /48. So I think I am good for a while.

    So now I just need to learn how to implement this effectively in Virtualizor. Anybody have any tips?

  • freerangecloud Member, Patron Provider

    @TimboJones said:

    @freerangecloud said:
    We assign a /64 to each VPS by default; customers can request up to a /56 (with justification) and up to a /48 for a nominal extra fee.

    What would justification look like for a single VPS?

    Needing to sub-allocate for VPN tunnels is usually what we set it up for.

    Thanked by 1: TimboJones
  • @Jio said:

    @LittleCreek said: So you are saying each client I have should be given 18,446,744,073,709,551,616 addresses?

    The absolute minimum should be a /64. You don't need to give them the whole /64 (especially if you use OpenVZ) but each client needs to be allocated out of a separate /64.

    Most places on the internet ban by the /64, so a /64 = 1 IPv4 address.

    With OpenVZ it seems common to give a prefix, which can be a /64, to the container. But then the client needs to configure each IPv6 address in the management panel. I don't understand why that's required and why the prefix isn't instead routed to the instance. Then the instance could use any address within the prefix.

  • jmginer Member, Patron Provider

    We provide one /48 per customer. Then the customer can split it into different /64s, which are routed to the VPS.

    That is, we route only the /64 to the VPS.
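
    On a Linux host node, that kind of split might look roughly like this (a sketch, not jmginer's actual setup: the prefixes, the link-local nexthops, and the vmbr0 bridge name are all placeholders):

        # Customer holds 2001:db8:abcd::/48 (documentation prefix).
        # Route one /64 out of it to each of their VPSes, using each VPS's
        # link-local address on the bridge as the nexthop:
        ip -6 route add 2001:db8:abcd:1::/64 via fe80::1:1 dev vmbr0
        ip -6 route add 2001:db8:abcd:2::/64 via fe80::2:1 dev vmbr0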

  • skorupion Member, Host Rep

    IPv6 is like IPv4 in the old days, when you didn't need an exceptional reason to get free addresses from registries like ARIN.

    Thanked by 1: TimboJones
  • yoursunny Member, IPv6 Advocate

    @lebuser said:
    With OpenVZ it seems common to give a prefix, which can be a /64, to the container. But then the client needs to configure each IPv6 address in the management panel. I don't understand why that's required and why the prefix isn't instead routed to the instance. Then the instance could use any address within the prefix.

    The OpenVZ venet interface doesn't have a MAC address, so it doesn't work well with IPv6. You have to statically assign each address from the host node side.
    https://wiki.openvz.org/IPv6
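
    On the host node, that per-address assignment looks like this (vzctl syntax as documented on the OpenVZ wiki; the container ID and address are placeholders):

        # Add one IPv6 address to container 101's venet0 and persist it:
        vzctl set 101 --ipadd 2001:db8::101:1 --save
        # Remove it again later:
        vzctl set 101 --ipdel 2001:db8::101:1 --save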

    The OpenVZ veth interface would work properly, just like KVM, but none of the providers offer that.

    When I created TraceArt at Hack Arizona 2016, which required routed IPv6, I resorted to disabling the provider's IPv6 and using routed address space from Tunnel Broker.

    Thanked by 1: lokuzard
  • @yoursunny said:

    @lebuser said:
    With OpenVZ it seems common to give a prefix, which can be a /64, to the container. But then the client needs to configure each IPv6 address in the management panel. I don't understand why that's required and why the prefix isn't instead routed to the instance. Then the instance could use any address within the prefix.

    The OpenVZ venet interface doesn't have a MAC address, so it doesn't work well with IPv6. You have to statically assign each address from the host node side.
    https://wiki.openvz.org/IPv6

    Yes, I know it's a point-to-point interface, which means you can't use SLAAC. But I don't see why it wouldn't be possible to route a prefix to the interface; you don't even need a via gateway since it's point-to-point.

    BTW, it shouldn't be any problem to have both a /64 route to the interface and a /128 route for each IPv6 address the customer configures in the management panel. That way, customers who want to use the whole prefix would be able to do so.
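
    In other words, something like this on the host node (a sketch of this proposal only; prefix and interface names are placeholders):

        # Route the container's whole /64 at its point-to-point interface;
        # no 'via' nexthop is needed on a point-to-point link:
        ip -6 route add 2001:db8:42::/64 dev venet0
        # Panel-configured addresses can coexist as more-specific /128 routes:
        ip -6 route add 2001:db8:42::5/128 dev venet0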

    The OpenVZ veth interface would work properly, just like KVM, but none of the providers offer that.

    I don't agree that KVM instances are usually configured properly. Often they only get a /64 prefix assigned to the external interface, which means you need an NDP proxy to use it on another interface such as a Docker bridge. This is sort of broken. A KVM instance should get a routed IPv6 prefix.

  • yoursunny Member, IPv6 Advocate

    @lebuser said:

    @yoursunny said:

    @lebuser said:
    With OpenVZ it seems common to give a prefix, which can be a /64, to the container. But then the client needs to configure each IPv6 address in the management panel. I don't understand why that's required and why the prefix isn't instead routed to the instance. Then the instance could use any address within the prefix.

    The OpenVZ venet interface doesn't have a MAC address, so it doesn't work well with IPv6. You have to statically assign each address from the host node side.
    https://wiki.openvz.org/IPv6

    Yes, I know it's a point-to-point interface, which means you can't use SLAAC. But I don't see why it wouldn't be possible to route a prefix to the interface; you don't even need a via gateway since it's point-to-point.

    BTW, it shouldn't be any problem to have both a /64 route to the interface and a /128 route for each IPv6 address the customer configures in the management panel. That way, customers who want to use the whole prefix would be able to do so.

    https://wiki.openvz.org/Virtual_network_device

    Venet drops IP packets from the container with a source address, and into the container with a destination address, that does not correspond to an IP address of the container.

    Thus, routing isn't possible with venet.

    The OpenVZ veth interface would work properly, just like KVM, but none of the providers offer that.

    I don't agree that KVM instances are usually configured properly. Often they only get a /64 prefix assigned to the external interface, which means you need an NDP proxy to use it on another interface such as a Docker bridge. This is sort of broken. A KVM instance should get a routed IPv6 prefix.

    My workaround is using robbertkl/docker-ipv6nat for Docker IPv6.
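
    For reference, the usual invocation per that project's README (check the repo for the current command):

        # Run ipv6nat with host networking and read-only access to the Docker
        # socket, so it can watch containers and maintain ip6tables NAT rules:
        docker run -d --name ipv6nat --privileged --network host \
          --restart unless-stopped \
          -v /var/run/docker.sock:/var/run/docker.sock:ro \
          -v /lib/modules:/lib/modules:ro \
          robbertkl/ipv6nat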

    Let's ask @MaxKVM and @SpartanHost and @EvolutionHost why they don't have routed IPv6.

    Thanked by 1: Ananchoreta
  • HostEONS Member, Patron Provider

    @LittleCreek said:
    I am about ready to offer IPv6 to clients but was wondering: what is the typical number of IPv6 addresses clients expect? I have a /64.

    We provide a /64 per VPS, but we allow only 50 IPv6 addresses to be used from that /64 per VPS, as adding too many IPv6 addresses can saturate your router/switch.

    We ran into an issue with one client: he was running some IPv6 proxy daemon, "ndppd", and it was causing pfem (the PFE manager) in our switch to crash, causing short outages. We then instructed the client to limit himself to 50 IPv6 addresses, restricted it on the VPS node as well, and never had issues again.

    Most clients hardly use 1-2 IPv6 addresses, but even 1-2 abusive users can cause a lot of issues.

  • yoursunny Member, IPv6 Advocate

    @HostEONS said:

    @LittleCreek said:
    I am about ready to offer IPv6 to clients but was wondering: what is the typical number of IPv6 addresses clients expect? I have a /64.

    We provide a /64 per VPS, but we allow only 50 IPv6 addresses to be used from that /64 per VPS, as adding too many IPv6 addresses can saturate your router/switch.

    We ran into an issue with one client: he was running some IPv6 proxy daemon, "ndppd", and it was causing pfem (the PFE manager) in our switch to crash, causing short outages. We then instructed the client to limit himself to 50 IPv6 addresses, restricted it on the VPS node as well, and never had issues again.

    Most clients hardly use 1-2 IPv6 addresses, but even 1-2 abusive users can cause a lot of issues.

    If you provide routed IPv6, the number of addresses in use would not affect the router in any way.
    Then, you can limit on-link IPv6 to one address only. It needs to be in another /64, which could be a link-local address.
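
    Seen from the host side, that layout might look like this (a sketch; the prefixes, the link-local nexthop, and the vmbr0 bridge name are placeholders):

        # The VPS keeps a single on-link address outside the customer /64,
        # here its link-local address fe80::100 on the bridge.
        # The customer's whole /64 is then routed at that one address:
        ip -6 route add 2001:db8:64::/64 via fe80::100 dev vmbr0
        # The router/switch now needs only one neighbor entry per VPS,
        # however many addresses are in use inside 2001:db8:64::/64.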

    Thanked by 1: brueggus
  • SpartanHost Member, Host Rep

    @yoursunny said:
    Let's ask @MaxKVM and @SpartanHost and @EvolutionHost why they don't have routed IPv6.

    The main goal for us was to make it possible to move /64 subnets between VPS nodes in the same VLAN, which makes routed IPv6 impossible. But maybe no one cares about being able to keep the same IPv6 subnet if they're migrated to another VPS node? It would certainly be operationally easier on our end if we did routed IPv6 to each VPS node.

  • yoursunny Member, IPv6 Advocate

    @SpartanHost said:

    @yoursunny said:
    Let's ask @MaxKVM and @SpartanHost and @EvolutionHost why they don't have routed IPv6.

    The main goal for us was to make it possible to move /64 subnets between VPS nodes in the same VLAN, which makes routed IPv6 impossible. But maybe no one cares about being able to keep the same IPv6 subnet if they're migrated to another VPS node? It would certainly be operationally easier on our end if we did routed IPv6 to each VPS node.

    For many use cases, it is very important to keep the same routed IPv6 subnet during a live migration event. The subnet would be written into config files, DNS records, etc.
    Changing the routed IPv6 subnet is as bad as changing the IPv4 address: you have to schedule a maintenance window, inform users in advance, and keep both subnets attached for a few days so that DNS updates take effect.

  • SpartanHost Member, Host Rep

    @yoursunny said:

    @SpartanHost said:

    @yoursunny said:
    Let's ask @MaxKVM and @SpartanHost and @EvolutionHost why they don't have routed IPv6.

    The main goal for us was to make it possible to move /64 subnets between VPS nodes in the same VLAN, which makes routed IPv6 impossible. But maybe no one cares about being able to keep the same IPv6 subnet if they're migrated to another VPS node? It would certainly be operationally easier on our end if we did routed IPv6 to each VPS node.

    For many use cases, it is very important to keep the same routed IPv6 subnet during a live migration event. The subnet would be written into config files, DNS records, etc.
    Changing the routed IPv6 subnet is as bad as changing the IPv4 address: you have to schedule a maintenance window, inform users in advance, and keep both subnets attached for a few days so that DNS updates take effect.

    There wouldn't really be any technical way to do that: e.g. if we routed a /48 to a VPS node and gave each VPS a /64, it wouldn't be possible to migrate any /64s from that /48 to a different VPS node. I'm aware of setups where L3 is run on the VPS node, e.g. a BGP session back to the upstream L3 router, which would allow migrating /64s between VPS nodes, but Virtualizor doesn't natively support such a setup.

  • yoursunny Member, IPv6 Advocate

    @SpartanHost said:

    @yoursunny said:

    @SpartanHost said:

    @yoursunny said:
    Let's ask @MaxKVM and @SpartanHost and @EvolutionHost why they don't have routed IPv6.

    The main goal for us was to make it possible to move /64 subnets between VPS nodes in the same VLAN, which makes routed IPv6 impossible. But maybe no one cares about being able to keep the same IPv6 subnet if they're migrated to another VPS node? It would certainly be operationally easier on our end if we did routed IPv6 to each VPS node.

    For many use cases, it is very important to keep the same routed IPv6 subnet during a live migration event. The subnet would be written into config files, DNS records, etc.
    Changing the routed IPv6 subnet is as bad as changing the IPv4 address: you have to schedule a maintenance window, inform users in advance, and keep both subnets attached for a few days so that DNS updates take effect.

    There wouldn't really be any technical way to do that: e.g. if we routed a /48 to a VPS node and gave each VPS a /64, it wouldn't be possible to migrate any /64s from that /48 to a different VPS node. I'm aware of setups where L3 is run on the VPS node, e.g. a BGP session back to the upstream L3 router, which would allow migrating /64s between VPS nodes, but Virtualizor doesn't natively support such a setup.

    1. Don't route the /48.
    2. On the L3 router: set up one /64 route per VPS, with the host node as nexthop. This can be automated by sending SSH or NETCONF commands to the router.
    3. On the host node: set up one /64 route per VPS, with the VPS link-local address as nexthop. This can be automated by executing ip route commands.

    Blame Virtualizor if a certain feature is missing.
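
    Steps 2 and 3 as concrete commands might look like this (a sketch; the router syntax is vendor-specific pseudo-config, and every prefix, nexthop, and interface name is a placeholder):

        # Step 2, on the L3 router (pseudo-syntax, pushed via SSH/NETCONF):
        #   set route 2001:db8:abcd:1::/64 next-hop <host node address>
        # Step 3, on the host node: point the /64 at the VPS's link-local
        # address with plain iproute2:
        ip -6 route add 2001:db8:abcd:1::/64 via fe80::aa:1 dev vmbr0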

  • SpartanHost Member, Host Rep

    @yoursunny said:

    @SpartanHost said:

    @yoursunny said:

    @SpartanHost said:

    @yoursunny said:
    Let's ask @MaxKVM and @SpartanHost and @EvolutionHost why they don't have routed IPv6.

    The main goal for us was to make it possible to move /64 subnets between VPS nodes in the same VLAN, which makes routed IPv6 impossible. But maybe no one cares about being able to keep the same IPv6 subnet if they're migrated to another VPS node? It would certainly be operationally easier on our end if we did routed IPv6 to each VPS node.

    For many use cases, it is very important to keep the same routed IPv6 subnet during a live migration event. The subnet would be written into config files, DNS records, etc.
    Changing the routed IPv6 subnet is as bad as changing the IPv4 address: you have to schedule a maintenance window, inform users in advance, and keep both subnets attached for a few days so that DNS updates take effect.

    There wouldn't really be any technical way to do that: e.g. if we routed a /48 to a VPS node and gave each VPS a /64, it wouldn't be possible to migrate any /64s from that /48 to a different VPS node. I'm aware of setups where L3 is run on the VPS node, e.g. a BGP session back to the upstream L3 router, which would allow migrating /64s between VPS nodes, but Virtualizor doesn't natively support such a setup.

    1. Don't route the /48.
    2. On the L3 router: set up one /64 route per VPS, with the host node as nexthop. This can be automated by sending SSH or NETCONF commands to the router.
    3. On the host node: set up one /64 route per VPS, with the VPS link-local address as nexthop. This can be automated by executing ip route commands.

    Blame Virtualizor if a certain feature is missing.

    What you mention did come to mind, but it's sadly completely unsupported by Virtualizor, so that's where we're let down.

  • brueggus Member, IPv6 Advocate

    @yoursunny said:
    Blame Virtualizor if a certain feature is missing.

    How long is your grace period until providers not offering routed subnets get added to the list?

    Thanked by 1: yoursunny
  • yoursunny Member, IPv6 Advocate

    @SpartanHost said:

    @yoursunny said:
    1. Don't route the /48.
    2. On the L3 router: set up one /64 route per VPS, with the host node as nexthop. This can be automated by sending SSH or NETCONF commands to the router.
    3. On the host node: set up one /64 route per VPS, with the VPS link-local address as nexthop. This can be automated by executing ip route commands.

    Blame Virtualizor if a certain feature is missing.

    What you mention did come to mind, but it's sadly completely unsupported by Virtualizor, so that's where we're let down.

    It's time to ditch Virtualizor and make new control software.
    Let's call it hyperbrueggus.


    @brueggus said:

    @yoursunny said:
    Blame Virtualizor if a certain feature is missing.

    How long is your grace period until providers not offering routed subnets get added to the list?

    Do one thing at a time. It will take a while.

    1. I'll first wait for the top three in the current list (GitHub, Google, Oracle) to come off.
    2. Then there would be a new list for providers offering less than a /64.
    3. Not offering routed subnets comes last.
  • /64 or /114; technically you don't need more than a /114.

  • Francisco Top Host, Host Rep, Veteran

    @yoursunny said:

    @SpartanHost said:

    @yoursunny said:
    1. Don't route the /48.
    2. On the L3 router: set up one /64 route per VPS, with the host node as nexthop. This can be automated by sending SSH or NETCONF commands to the router.
    3. On the host node: set up one /64 route per VPS, with the VPS link-local address as nexthop. This can be automated by executing ip route commands.

    Blame Virtualizor if a certain feature is missing.

    What you mention did come to mind, but it's sadly completely unsupported by Virtualizor, so that's where we're let down.

    It's time to ditch Virtualizor and make new control software.
    Let's call it hyperbrueggus.


    @brueggus said:

    @yoursunny said:
    Blame Virtualizor if a certain feature is missing.

    How long is your grace period until providers not offering routed subnets get added to the list?

    Do one thing at a time. It will take a while.

    1. I'll first wait for the top three in the current list (GitHub, Google, Oracle) to come off.
    2. Then there would be a new list for providers offering less than a /64.
    3. Not offering routed subnets comes last.

    Are you mad or just fucking with people?

    Francisco
