IX traffic
Hi Folks
Want to ask whether any dedicated server providers offer a separate network interface for the peering exchange (IX) they are connected to.
Let's say I want to install a CDN edge with two network interfaces: one connected to a transit provider, and the other facing the exchange. And obviously I want different traffic plans, as IX traffic is almost free.
How is this usually handled when you are not Netflix and willing to ship your own servers for colo?
Comments
You send @clouvider a nicely worded message.
Thanks! But it is UK/NL/FRA only. My question is whether there are any providers operating at the IX level at all, because @Clouvider does not seem to offer that off the shelf.
It's not that common that a customer needs this, so it's rarely "off the shelf". I also provide IXP Access in FRA, but only on a VM level.
Many dedicated server providers (like @Zare, @Clouvider) offer this if you ask, as @teamacc said, nicely ;)
Which IXPs / locations do you want? UK/NL/FRA gives you access to almost every European network.
Quite frankly, this is the way most companies do it. Just as you are more likely to find 1U colo than a dedi with an IXP connection.
I’d say US coasts, Northern Europe, Russia.
VMs are somewhat better when you are starting/growing, but how much value can they offer with minimal RAM/disk space?
Is there any approximate price range for such VMs? Are they cheaper than regular ones?
Bet most dedis are IX connected, as it's money out of nothing.
I get that most heavy players are using colo, but that’s lowendtalk, right? Ima gonna build cdn for pennies 🙂
@ruben I don't think he's looking for an IX port + routeserver + BGP. I think he just wants a 2nd NIC with cheaper traffic that's IX only... or different billing buckets for IX and transit traffic while the upstream is on both.
no, they will be more expensive... and a lot more complex. This is not what you think.
Thanks @teamacc
@at0mic it depends what you're looking to achieve and your individual circumstances. To join an IX you need an ASN, a routable subnet, usually an incorporated business, and you need to run BGP. If you have all that, and assuming you have the scale, you find a dedicated server provider who is a partner/reseller of the IX you're looking to join; perhaps the IX sales team will be able to direct you. For example, we are a partner of LINX and can supply you access to LINX on a VLAN. We can then provide you a session with a full table on one VLAN, billed separately, and a VLAN to the IX that you would handle yourself, also billed separately.
If you need a hand in London, Amsterdam or Frankfurt, we’re your people and happy to help :-)
Basically yes, second IX nic. No need for upstream on IX nic though, down only.
I suppose IX traffic is not “cheaper”, it should be free, no? Or almost free at least.
Cross connects (port from router to IX) can be $1k+ PER MONTH per port in the US. It's not free. (This doesn't include traffic or IX membership at all)
This also helps upstream providers balance a bit, because imagine if you were the one paying by 95th percentile instead of per GB.
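For context, 95th-percentile billing works by sampling port utilization (typically every 5 minutes over a month), discarding the top 5% of samples, and billing the highest remaining rate. A minimal sketch, with illustrative sample values:

```python
def p95_billable_mbps(samples_mbps):
    """95th-percentile billing: sort the utilization samples,
    discard the top 5%, and bill the highest remaining rate."""
    ordered = sorted(samples_mbps)
    # index of the last sample inside the bottom 95%
    idx = int(len(ordered) * 0.95) - 1
    return ordered[max(idx, 0)]

# 100 samples: mostly idle at 50 Mbps, with a short 900 Mbps spike.
# The spike falls entirely in the discarded top 5%.
samples = [50] * 95 + [900] * 5
print(p95_billable_mbps(samples))  # -> 50
```

This is why short traffic spikes are "free" under 95th-percentile billing but would cost real money under per-GB billing.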
In that case unless you have a very specific use case with a ton of traffic that goes that particular cheaper route it will be more expensive.
First of all - complicated and so adds to costs.
Second of all - I won't charge you the same price for a full table if you explicitly intend to push only the most expensive traffic through it and the remainder through the cheaper option. You'll pay more for the "remaining traffic" and less for the IX-only traffic; with all the complexity involved, you'll likely lose, not save.
Can you please elaborate on that? I've checked a couple of IXes, and it looks like the fees are quite reasonable for a 10G port with unlimited traffic for colo.
For example there's an IX in one of the DCs I use. The DC's cost to have a cable run from my rack to the IX router costs more than the 10G port per month. This doesn't include the IX cost.
Serving video content is exactly the case you’ve described. Minimal transit, maximum IX. Guess in the result it’s going to be more profitable.
Don't quite get where the complexity comes from. Adding a BGP entry? I'm almost sure you already route internal traffic to the IX for all your existing servers.
Wow, that’s tough! Does that mean, that it’s easier for them to route internal IX traffic on that 10G port for you, rather than letting you do it?
Is this IX inter-connected with any other IXes?
Any provider worth working with will negotiate bandwidth pricing with you based on your transit/peering ratios. What you're trying to achieve here is possible but it doesn't sound like you're at the scale where it'd be worthwhile.
You are totally right, but I’ve worked in content delivery before and know how expensive hitting the roof can be. Especially with video.
Getting to know the options does not seem to be worthless.
The real tl;dr here is you need enough scale before it's even worth the engineer time to respond to a sales quote like this. Saving 10TB of bw is literal pennies.
FYI: getting on an IX does not imply you will be able to send traffic over the IX. Larger players, home ISPs, etc will refuse this unless you meet minimum xx Gbps 24/7 traffic ratios over transit already AND can peer in multiple locations simultaneously meeting the same rules.
This is normal.
Cannot agree completely, depends on the business type we are talking about. Sure thing, considering one specific traffic generator it does not make sense.
But clearing things up before re-selling traffic doesn't look meaningless.
I sincerely appreciate your comments and explanations.
If you are willing to commit to 20-30 Gbps on a minimum one-year contract immediately, I would bet people will reduce pricing based on your transit:peering ratios after measuring them over time. If this is for a single dedicated server, I don't think you'd find anyone willing to do this.
@at0mic I am doing what you seem to be doing, and at some point I had the same idea and even reached out to providers to see how they would respond. I then figured out it won't work, because a provider has no motivation to offer such a thing. When they bill you for bandwidth and give you a good price on it, they already count on a lot of your traffic going over IX links rather than transit, so they can make a healthy profit AND still give you good pricing. Asking them to separate your bandwidth and bill you less for the part that goes over IXes makes no sense to them.

To make this happen, you go out there, colo yourself, buy the equipment and so on, and then either work with someone who can help you get a port at an exchange cheaper (that's where you can contact these providers again for help), or deal directly with the exchange.
Are you talking about cross-connects with them?
I see, very valid point. That’s why I thought some dedi providers operating on a scale might offer this option already.
Thanks for clarification that it’s better to PM for such offers.
Thanks! That makes A LOT of sense 👍🏻 And once you are at this point, you join the group of providers making a healthy profit with no intent to re-sell split traffic.
No. I mean they will not accept your IX traffic or peer with you (the selective ones are often not on the routeservers either) - and require that traffic to them go over transit, unless you are willing to push xx Gbps + peer at ALL IXes you both are present at, with a minimum of some amount.
And they require your own network topology to more or less be that you have your own private transport between locations - they expect the same routes to be announced from all locations consistently.
I apologise for this bunch of dumb questions, but why would they prefer transit over IX? The whole IX idea is about avoiding transit as much as possible, no?
Won’t it affect their 95/5?
What is the benefit of getting off the routeserver?
They do not want "anyone" to crap up their sessions. Or they are Comcast and want people to pay them both for sending and receiving the traffic. They will only accept your IX traffic directly if you can demonstrate 24/7 instant NOC response, consistent and always available routes, and they will kill your peering ports if you let them go past x% of port capacity (ie if you are sending 8 Gbps on a 10G port, they will stop accepting IX traffic from you until you upgrade to 20G, 100G, whatever, otherwise their users will get lossy at peak)
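The port-capacity rule above boils down to a simple headroom check. A sketch, assuming an 80% utilization threshold (the actual cut-off is each peer's own policy; 80% matches the 8 Gbps-on-10G example given):

```python
def needs_port_upgrade(peak_gbps, port_gbps, max_util=0.8):
    """Return True if peak utilization reaches the peer's tolerated
    fraction of port capacity (threshold is an assumed example)."""
    return peak_gbps / port_gbps >= max_util

print(needs_port_upgrade(8, 10))  # 8 Gbps on a 10G port -> True, upgrade
print(needs_port_upgrade(8, 20))  # same traffic on a 20G port -> False
```

The point of the headroom is that traffic peaks are bursty: a port averaging 8 Gbps will momentarily saturate 10G, and those microbursts are what make the peer's users "get lossy at peak".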
They do not prefer, it is just policy that they do not peer with risky small players that might do dumb shit accidentally, or some other thing like that
They don't care, they do terabits of traffic, your 1G port is nothing
Sounds pretty logical for companies that are both transit providers and ISPs themselves. Accepting anything from the IX would mean losses on the transit side.
It's also more that accepting peering with a tiny 1-2 person company for 1 Gbps is just a liability to them in terms of route leaks, accidental hijacks, unfiltered sessions, and whatever other enterprisey stuff. The engineer time required to resolve any of these for 30 seconds completely outweighs the cost savings of having 1-10G on an IX.
Keep in mind at those scales, your 10G of transit probably costs nearly as much as 10G of IX
If you're providing 100G+ of value to their customers, then they miiiiight possibly give half a shit.
What you want usually only makes sense at a volume where you should already be considering running your own network.
If you just want to stream some gbps of video, get some cheap bandwidth from budget providers like OVH. If that bandwidth becomes some tens of gigabits per second, then you can start considering other options.
That’s what I was doing, and it sounds good on paper but in reality does not work that well.
When you are streaming live video, a 1 Gbps port can handle only about 400 users watching a stream in 720p, and that port had better not be shared.
Secondly, streaming from OVH to the US west coast is hardly an option because of hops and jitter, meaning you'll never provide decent stream quality without edges somewhere in LA, for instance.
And this traffic is highly unpredictable: one stream might easily have 500 viewers or 5,000. They also come in spikes, so you literally have no time to react to traffic clogs.
Besides that, in just 2 hours, 1,000 viewers will generate at least 2 TB of traffic, and that's one stream only; running a couple of them in parallel gets you more. And this traffic spikes only during weekends, while the servers sit idle the rest of the week.
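The numbers above check out if you assume roughly 2.5 Mbps per 720p viewer (an assumed average bitrate; real encodes vary):

```python
BITRATE_720P_MBPS = 2.5   # assumed average 720p stream bitrate
PORT_MBPS = 1000          # 1 Gbps port

# Concurrent viewers a dedicated 1 Gbps port can serve.
viewers_per_port = PORT_MBPS // BITRATE_720P_MBPS
print(viewers_per_port)   # -> 400.0

# Traffic generated by 1000 viewers watching a 2-hour stream.
viewers, hours = 1000, 2
total_bits = BITRATE_720P_MBPS * 1e6 * viewers * hours * 3600
print(total_bits / 8 / 1e12)  # -> 2.25 (TB), matching "at least 2 TB"
```

At 5,000 viewers on a single stream, the same arithmetic gives over 11 TB in two hours, which is why the spikes matter so much for billing.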
And using traditional CDN like Akamai is damn expensive.
What I'm trying to say is that volume and scalability issues in such situations aren't theoretical; they are a typical problem from day one. You can't even start that business without an idea of how you'll scale from the very beginning.