New on LowEndTalk? Please Register and read our Community Rules.
Comments
They reached 4.4 Gbit/s at 87k pps.
Again, it's not about the Gbit/s; it's the packets per second that matter.
https://github.com/Netgate/netmap-fwd
https://github.com/Netgate/netmap-fwd/issues/3
I don’t see it production-ready, working out of the box, anywhere?
Out of the box, of course not. But it's possible from a technical point of view. You only need to know C and Netlink (rtnetlink, to be precise) to import/delete routes based on the kernel routing table:
http://man7.org/linux/man-pages/man7/rtnetlink.7.html
That's basically what the Gandi guys did with their packet-journey. Instead of netmap, they use DPDK.
Everything is possible, but there's no ready-made solution at the moment.
We're not talking about potential future solutions here; let me remind you of the OP:
That was an addition to @techhelper1's post to point out the technical possibilities; it's already clear to me what the OP wants.
For a normal use case, I suggest a Juniper MX104 / MX240 - depending on the redundancy requirements.
OSPF is making a comeback, tho.
@jsg I'm gonna admit, I skipped past your wall of a comment, because I know the difference between how a real router works on an ASIC vs. a PC-powered router. But I will say that a Cisco 6500 does not transfer state to another SUP as quickly as you think.
@Clouvider It's true that netmap hasn't had the necessary upkeep, but something will come of it some day. Getting the packet is still only part of the battle. I would prefer DPDK, as it has an actual stack that is more mature for this kind of purpose, whereas netmap and pf_ring are used for analyzers, DDoS systems, etc. The pfSense company, Netgate, has TNSR (https://www.netgate.com/products/tnsr/); that would be the one-stop shop if they sold it to the public.
Agreed.
Earlier this week I came across another DPDK packet routing solution that supports LACP and a firewall of sorts, https://github.com/alexk99/the_router, and found a Mellanox card (http://www.mellanox.com/related-docs/prod_adapter_cards/PB_ConnectX-5_VPI_Card_SocketDirect.pdf) that can handle 126 Mpps (roughly 84.6 Gbit/s). Now imagine two of these cards in a system, plus a 100G switch with 10G ports: that's quite a bit of bandwidth going through a single system.
You know what I think? Uhum ...
Relation to this topic?
Routing primarily isn't about massive bandwidth but about routing traffic between a lot of ports. 2 ports, even at 400Gb, could be valuable for a lot of things, but they wouldn't turn a server into a router (except for the exotic case of a 1:1 gateway).
Now throw 2 of those Mellanox cards into 2 servers with 2 100Gb switches with 10Gb ports for redundancy, add failover software (with poor latency anyway) ... and then look at the price tag of what you've got.
If you really knew what you were talking about (see your 1st paragraph) you would have suggested 2 (redundant) OFN 100Gb/n x 10Gb switches along with 2 servers (plain machines with 10Gb will do fine) ... et voilà, you'd have a better solution for way less money (those ConnectX-5 cards aren't exactly cheap).
That would get you multiple (e.g. 4) 40Gb or 100Gb ports plus 48 (or more) 10Gb ports, with 1Tb+ of internal capacity, hundreds of millions of pps, and very low latency thanks to the hardware in those switches (like typhon3 or Cavium). Plus, btw, an almost fully open-source stack if you choose the switches wisely.
@jsg I was just sharing a piece of knowledge to update the first post on this thread.
The massive bandwidth in packets per second lets you know how many 10G/40G ports it can handle. Of course you can oversubscribe the 10G ports by 50% and get double. But in reality, all of this is about handling multiple full tables on the cheap. I also know that networking should be done with redundancy.
It is also possible to pick up just a couple of whitebox switches, and setup some sFlow monitoring and BIRD to handle your routing and 10G inbound at full speed, then allow some
Whoops I thought I had finished my post yesterday before submitting it today, but guess not.
The point of my second paragraph was using sFlow to sample outbound IPs, compare the possible BGP paths, run ping tests, then send the proper BGP route to the switch so it routes at full ASIC speed.
In reality, no one uses the full table 24/7/365. It's more like 10-15K routes, which is more than enough for today's whitebox switches that hold 200k routes.
Yes, that's more like it. As for the routes they would be either directly pushed into the ASIC (edge, small network) or they would be pushed into the ASIC for packet tagging (larger network) by the controller.
Our edge switches are VMs and can easily handle 10Gbps+ worth of traffic (we use NSX). There are options out there.
Sigh. Because your switches handle full table in L3, at 10G linerate, on commodity hw.
No full table, 10g linerate, on HP servers. My point was there are options outside cisco/juniper, depending on the use cases.
But that’s not what the OP wants. To remind you of the OP: