Live Streaming Bandwidth.

Is there any solution available on AWS/GCP/Azure for reducing the bandwidth requirements of a live stream?

I want to live stream to 5k+ users. I don't think it's possible to do that on a normal VPS from these providers.

How do Zoom and Twitch handle their bandwidth requirements?

Comments

  • hzr Member
    edited April 2020

    They serve standard video files from a CDN. End users do not get a live stream; they get 10-30 second chunks (basically webm/mp4/vp9 files) one after another.

    Effectively the stream endpoint (server) encodes and dumps chunks of frames to disk, serving them via standard http/s.

    See https://en.wikipedia.org/wiki/Dynamic_Adaptive_Streaming_over_HTTP - but the tl;dr is that the stream is delayed x seconds, and those x seconds are "saved" as (for example) 00001-720p.webm, 00002-720p.webm, 00003-1080p.webm, and your video player just keeps incrementing numbers as it loads them in advance, etc.

    The adaptive part is that your player can also check whether it's downloading fast enough and upgrade itself from 720p to 1080p or similar; this is why on many sites you start off with pixelated video that resolves itself after a few seconds: it just downloads a different file.
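    The incrementing/upgrading logic above can be sketched in a few lines of Python. The bitrates, the 1.5x headroom factor, and the filename scheme are made-up illustrative numbers, not taken from any real player:

    ```python
    # Sketch of the adaptive part: the player measures how fast the last
    # segment downloaded and picks the rendition for the next one.
    RENDITIONS = {"720p": 2_500_000, "1080p": 5_000_000}  # bits per second

    def next_segment(seq: int, measured_bps: float, headroom: float = 1.5) -> str:
        """Pick the highest rendition the measured throughput can sustain
        (with some headroom) and return the next segment's filename."""
        best = "720p"  # default to the lowest rendition
        for name, bps in sorted(RENDITIONS.items(), key=lambda kv: kv[1]):
            if measured_bps >= bps * headroom:
                best = name
        return f"{seq:05d}-{best}.webm"

    print(next_segment(1, 3_000_000))   # slow start -> 00001-720p.webm
    print(next_segment(2, 9_000_000))   # faster link -> 00002-1080p.webm
    ```

    Real players (hls.js, dash.js) do essentially this, just with smoother throughput estimation and buffer-level checks.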

  • Understood, but I can't have that much latency. My requirement is more like a Zoom meeting: multiple concurrent live streams with 50-70 users each, interacting with the streamer in realtime. I'm thinking of using WebRTC instead of HLS.

  • hzr Member
    edited April 2020

    We've been able to get it down to 1-3 seconds of latency with very short frames. This is easiest for "one to many", most scalable, and can theoretically scale infinitely because it's just plain boring files. This will honestly be the easiest way.

    The tradeoff is that you will need either a LOT of CPU time for encoding at high quality, or encoding at lower quality in exchange for faster speed and less latency/stream delay.

    The more realtime it is, the more you will have to compensate.

    If you are doing "many to many" all over the place (like all 70 people streaming back), WebRTC or otherwise, you'll need a shitload of bandwidth; good luck.
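    To put rough numbers on the one-to-many vs. many-to-many difference (1.5 Mbit/s is an assumed SD bitrate, purely illustrative):

    ```python
    # Back-of-the-envelope bandwidth for the two topologies discussed above.
    SD_BPS = 1_500_000  # assumed SD bitrate, bits per second

    # One-to-many (HLS/DASH behind a CDN): the origin uploads one stream
    # per rendition; the CDN absorbs the fan-out to viewers.
    viewers = 5_000
    cdn_egress_gbps = viewers * SD_BPS / 1e9
    print(f"CDN egress for {viewers} viewers: {cdn_egress_gbps:.1f} Gbit/s")  # 7.5

    # Many-to-many full mesh: each of n participants uploads to the other
    # n-1, so the number of simultaneous streams grows as n*(n-1).
    n = 70
    mesh_streams = n * (n - 1)
    print(f"Full mesh with {n} participants: {mesh_streams} simultaneous streams")  # 4830
    ```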

  • hzr said: We've been able to get it down to 1-3 seconds of latency with very short frames. [...]

    I'm planning to stream SD only, and those 70 people won't stream back video/audio, just realtime text messaging. Thank you for sharing your experience. Let me dig further into the CloudFront and Akamai documentation.

  • @hzr said: We've been able to get it down to 1-3 seconds of latency with very short frames. [...]

    That's the part I don't get about Zoom. They seem to mostly use a mix of AWS and colocation, according to their listed IP ranges, yet they can offer 1 host + 100 participants for $15. Like, how does that make financial sense? I know most people won't use it all the time (hence a large amount of overselling is expected), but even a couple of hours of a conference with a dozen participants would wipe out their profits.

  • AC_Fan said: That's the part I don't get about Zoom. They seem to mostly use a mix of AWS and colocation, according to their listed IP ranges, yet they can offer 1 host + 100 participants for 15 dollars. [...]

    Maybe deploying servers in a colocation facility and peering at major IXPs would make sense.

  • @AC_Fan said:

    @hzr said: We've been able to get it down to 1-3 seconds of latency with very short frames. [...]

    That's the part I don't get about Zoom. They seem to mostly use a mix of AWS and colocation, according to their listed IP ranges, yet they can offer 1 host + 100 participants for 15 dollars. Like, how does that make financial sense? I know most people won't use it all the time (hence a large amount of overselling is expected), but even a couple hours of a conference with a dozen participants will wipe out their profits.

    You are thinking about it the wrong way: as if you were going to be the one providing this product as a small business paying retail. The way they keep their costs down is by purchasing things like bandwidth, colocation, and power in advance, in bulk, or on contract (2-10 year terms). A serious business isn't buying single servers or single bandwidth allotments; it's colocating hundreds of servers and purchasing hundreds of gigabits of transit per month, and at that level of purchasing, prices come down dramatically. Zoom is definitely at that level of purchasing/contracting, so they are paying pennies on the dollar for those resources compared to, say, 'joe blow startup' with 2 rented servers and a metered gigabit connection.

    My 2 cents.

    Cheers!

  • @TheLinuxBug said: You are thinking about it in the wrong way, you are thinking about it as if you were going to be the one providing this product as a small business who would have to pay retail. [...]

    I do understand that, but in their more exotic locations, the best pricing I could find (at 40G+ commits) was still roughly double what they would require. Perhaps I made some bad assumptions (usage, peaks, etc.), but I would definitely love to see some figures from them. I've been trying to work out how they can afford their packages for a couple of days now, since they hit mainstream news.

  • You can use AWS or Azure for live transcoding if it's just one ad-hoc event and not 24/7; that is very economical.

    Use it together with a CDN to serve. 5k+ isn't that big.
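    For scale, a rough transfer-cost sketch for a single 2-hour event with 5k viewers. The 1.5 Mbit/s SD bitrate and the $0.05/GB CDN price are illustrative assumptions; check the actual CloudFront/Akamai rate cards:

    ```python
    # Rough transfer-cost estimate for one ad-hoc event.
    SD_BPS = 1_500_000          # assumed SD bitrate, bits per second
    viewers = 5_000
    hours = 2
    price_per_gb = 0.05         # assumed CDN $/GB, not a real rate card

    total_bytes = SD_BPS / 8 * 3600 * hours * viewers
    total_gb = total_bytes / 1e9
    print(f"~{total_gb:,.0f} GB transferred, ~${total_gb * price_per_gb:,.2f}")
    ```

    At these assumptions the event moves on the order of a few thousand GB, which is CDN territory, not single-VPS territory.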

  • PUSHR_Victor Member, Host Rep

    Zoom uses a hybrid model that prefers P2P and falls back to proxying via their servers if needed. So for a large portion of the traffic they only do the signaling to connect the participants; the media itself is not served by them.

  • @PUSHR_Victor said:
    Zoom uses a hybrid model which prefers P2P and falls back to proxy via their servers if needed. So for a large portion of the traffic they are only doing the signaling to connect the participants but the traffic is actually not served by them.

    Do they manage to make that work with CGNAT on both sides? I wonder how they manage that, but yes, that would largely eliminate the traffic requirements.

  • PUSHR_Victor Member, Host Rep

    @AC_Fan said:
    Do they manage to make that work through CGNATs on both sides? Wonder how they manage that, but yes, that would largely eliminate the traffic requirements.

    I am not sure, to be honest. I would guess TURN would be needed for CGNAT if it's on both sides, so this part of the traffic is probably served by them.

  • @AC_Fan said:
    Do they manage to make that work through CGNATs on both sides? Wonder how they manage that, but yes, that would largely eliminate the traffic requirements.

    You could use something similar to tinc or ZeroTier.

  • @PUSHR_Victor said:
    I am not sure, to be honest. I would guess TURN would be needed for CGNAT if it's on both sides, so this part of the traffic is probably served by them.

    But wouldn't P2P result in high upload bandwidth usage, since each participant would need to send their stream to all the other participants, whereas via TURN they only need to upload once to that server?
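    Exactly the tradeoff in that question, in numbers (1.5 Mbit/s is an assumed SD bitrate, purely illustrative):

    ```python
    # Per-participant upload bandwidth: mesh P2P vs. relaying through a
    # single server (a TURN relay or an SFU).
    SD_BPS = 1_500_000  # assumed SD bitrate, bits per second

    def upload_bps(participants: int, topology: str) -> int:
        if topology == "mesh":
            # each peer sends a copy of its stream to every other peer
            return SD_BPS * (participants - 1)
        if topology == "relay":
            # each peer sends exactly one copy; the server does the fan-out
            return SD_BPS
        raise ValueError(topology)

    print(upload_bps(70, "mesh") / 1e6, "Mbit/s")   # 103.5 - hopeless on home uplinks
    print(upload_bps(70, "relay") / 1e6, "Mbit/s")  # 1.5
    ```

    This is why conferencing products relay through servers rather than running a full mesh: the server eats the fan-out cost, and each client uploads a single stream.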

  • donko Member

    I save a lot using a P2P script with Google Cloud; mostly what I pay for is storage.

  • Bandwidth is cheap; cloud is expensive.
    nginx-rtmp can serve hundreds of people easily, but without a Flash video player there is no easy or free way to serve live video in HTML5 in realtime. HLS delay is about 8-10 seconds, and LHLS is elusive.
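    For reference, a minimal nginx-rtmp HLS setup looks roughly like this (paths and fragment lengths are illustrative; shorter fragments trade encoding/delivery efficiency for lower latency):

    ```nginx
    rtmp {
        server {
            listen 1935;
            application live {
                live on;
                # turn the incoming RTMP stream into HLS segments
                hls on;
                hls_path /var/www/hls;
                hls_fragment 2s;          # shorter fragments -> lower latency
                hls_playlist_length 10s;
            }
        }
    }
    ```

    The segments under hls_path are then served as plain static files over HTTP, which is what makes this approach scale behind a CDN.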

  • Jitsi Meet on a 4 GB instance from Hetzner could do it.

  • datadisk Member
    edited May 2020

    If you want to deliver high-quality live streams to a large audience, go with a server provider that offers dedicated bandwidth. As for the streaming software, use a media server such as Flussonic.

  • perennate Member, Host Rep
    edited May 2020

    AC_Fan said: I do understand that, but in their more exotic locations, the best pricing I could find (at 40G+ commits) was still roughly double of what they would require. [...]

    I don't think Zoom pays AWS "best pricing"; they get even better pricing because they are a huge user.

    @Ozoneflare I haven't used Oracle Cloud, but I hear it has much better bandwidth pricing than AWS. I've also heard Zoom is switching to Oracle Cloud and chose them over AWS, possibly for this reason.

    Edit: oh these people keep bumping a thread from April ...

  • Clouvider Member, Patron Provider

    @perennate I'm getting a lot of notifications recently about Zoom increasing their IX capacity globally, and I also know for a fact they are buying more colo space, so I'm pretty sure they run it on their own metal now.

  • Wow. So much bad info in this thread. All the major players are using Janus and WebRTC (Zoom actually uses a WebSocket fallback). Janus acts as an SFU, so each sender's upstream bandwidth is O(1), not O(n), and latency is superb.
