Using timezone offset for finding closest server

elwebmaster Member
edited July 2021 in General

I am building an open-source framework for p2p real-time communication through relays, which is supposed to work in web browsers as well as on mobile devices. There is no backend; there is a bunch of relays scattered around the world, and peers can talk to each other only when connected to the same relay.
Originally I was planning to build an API to get the user's IP location and return the closest relay, but I decided not to have this single point of failure, as the API may not always be accessible.
So instead I just host a file on GitHub with a list of all active relays and their locations. I want to loosely match a peer to the closest relay for lower latency and load distribution; if the peer can't connect to that specific relay, it will try the next closest, and so on.

I was originally thinking of using the user's timezone and mapping that to geographic coordinates, but that would be a very big file, and every OS/browser may use different timezone names. So I decided to instead map each timezone offset to an "average" longitude within the zone and use that to find the closest relay. Is that a crazy idea? Can you guys suggest a better solution?

With this idea, all users in Europe or Africa will be closest in longitude to one of the EU servers, users in Asia/Australia to HK/SG, users on the West Coast to LA/Seattle, and users on the East Coast to NY.
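For what it's worth, the offset-to-longitude idea can be sketched in a few lines of browser JavaScript. The relay hosts and coordinates below are made-up placeholders; note that `getTimezoneOffset()` returns minutes *behind* UTC, so UTC+8 comes back as -480:

```javascript
// Placeholder relay list; in the real framework this would come from the
// GitHub-hosted file of active relays and their locations.
const relays = [
  { host: "relay-ny.example.com", lon: -74 },   // New York
  { host: "relay-la.example.com", lon: -118 },  // Los Angeles
  { host: "relay-fra.example.com", lon: 8 },    // Frankfurt
  { host: "relay-sg.example.com", lon: 104 },   // Singapore
];

// getTimezoneOffset() is minutes behind UTC (UTC+8 => -480), and each hour
// of offset spans 15 degrees, so longitude ≈ -offsetMinutes / 60 * 15.
function approxLongitude(offsetMinutes) {
  return -offsetMinutes / 4;
}

// Circular distance so UTC+12 and UTC-12 end up adjacent, not 360° apart.
function lonDistance(a, b) {
  const d = Math.abs(a - b) % 360;
  return d > 180 ? 360 - d : d;
}

// Returns relays ordered nearest-first, so the client can fall through to
// the next one when a connection fails.
function relaysByCloseness(offsetMinutes) {
  const lon = approxLongitude(offsetMinutes);
  return [...relays].sort(
    (a, b) => lonDistance(lon, a.lon) - lonDistance(lon, b.lon)
  );
}
```

In the browser this would be driven by `relaysByCloseness(new Date().getTimezoneOffset())`.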

Comments

  • hnzlet Member
    edited July 2021

    Could you have the client ping the relays and then just connect to the one with the lowest ping?

    Thanked by 1 0xbkt
  • @hnzlet said:
    Could you have the client ping the relays and then just connect to the one with the lowest ping?

    Since it needs to work in the browser it can't really ping, but it could attempt connecting to multiple relays at the same time and take the first one to succeed. However, I am worried about the load implications for the relays, since every user would be hitting several of them. But yeah, it's a great idea!
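The race-the-relays approach can be sketched as below. `connect` stands in for whatever actually opens a connection (e.g. a WebSocket wrapped in a Promise), and to limit load on the relays you would pass only the two or three nearest candidates rather than the full list:

```javascript
// Start a connection attempt to every candidate relay, resolve with the
// first one that succeeds, and reject only once all of them have failed.
function connectToFastest(relayUrls, connect) {
  return new Promise((resolve, reject) => {
    let failures = 0;
    for (const url of relayUrls) {
      connect(url).then(
        (conn) => resolve({ url, conn }), // first success wins the race
        () => {
          failures += 1;
          if (failures === relayUrls.length) {
            reject(new Error("all relays unreachable"));
          }
        }
      );
    }
  });
}
```

A follow-up step (not shown) would close the connections that lost the race, which addresses part of the load concern.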

  • lentro Member, Host Rep

    Quick search: https://github.com/alfg/ping.js/

    Does this kinda help testing ping latency?

    Thanked by 1 elwebmaster
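ping.js works by timing a tiny HTTP request from the browser, since browsers can't send real ICMP pings. A similar measurement can be done with `fetch()`; the injectable `probe` parameter below is purely for illustration and testing, not part of ping.js:

```javascript
// Time a single small HTTP request to a relay. Point the URL at a tiny
// static file on each relay; mode "no-cors" keeps it usable cross-origin
// and "no-store" avoids measuring a cached response.
async function httpLatency(url) {
  const start = performance.now();
  await fetch(url, { cache: "no-store", mode: "no-cors" });
  return performance.now() - start;
}

// Probe every relay in parallel and return them ordered fastest-first.
async function rankByLatency(urls, probe = httpLatency) {
  const results = await Promise.all(
    urls.map(async (url) => ({ url, ms: await probe(url) }))
  );
  return results.sort((a, b) => a.ms - b.ms);
}
```

Note this measures an HTTP round trip (including server processing), not raw network RTT, but it is usually good enough for picking a nearby relay.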
  • Kousaka Member
    edited July 2021

    Timezone is not reliable, especially in the APAC region.
    Two examples: China, Mongolia, and part of Russia share the UTC+8 zone, yet connections between them often get routed via Europe or even the US. And while there is a direct link between Australia and Singapore, in practice it is often not used for cost reasons.

    Thanked by 1 raindog308
  • Thanks guys, this totally helps. I will implement a ping.js based solution.

  • yoursunny Member, IPv6 Advocate

    @elwebmaster said:
    I am building an open source framework for p2p real-time communication through relays which is supposed to work in web browsers as well as mobile devices. There is no backend, there are a bunch of relays scattered around the world, peers can talk to each other only when connected to the same relay.

    Port the app to use Named Data Networking (NDN) and the NDNts library, and you aren't limited to the same relay anymore.

    Originally I was planning to build an API to get the user’s IP location and return the closest relay. But I decided not to have this single point of failure as the API may not always be accessible.

    I solved this problem by running the API frontend on Cloudflare Workers, which almost never has an outage.
    The API uses IP geolocation (provided by Cloudflare) to determine which NDN router (similar to your relay but they are all interconnected) the user could connect to.

    I also have a self-hosted API server, which periodically checks whether each router is up or down, so that it doesn't return failed routers to the clients.
    The Workers script will try to proxy to this API server first.
    If the API server is unavailable, the Workers script executes fallback logic that returns the nearest routers to the user's location, regardless of availability.
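A rough sketch of this proxy-with-fallback pattern in Cloudflare Workers style; `API_ORIGIN`, the router list, and `nearestRouters()` are illustrative placeholders, not the actual ndn-fch implementation:

```javascript
// Hypothetical hard-coded router list, used only by the fallback path.
const routers = [
  { host: "router-us.example.net", lon: -74, lat: 41 },
  { host: "router-eu.example.net", lon: 8,   lat: 50 },
  { host: "router-sg.example.net", lon: 104, lat: 1 },
];

function nearestRouters(lon = 0, lat = 0) {
  // Crude planar distance is enough for a coarse "nearest few" ranking.
  const d = (r) => (r.lon - lon) ** 2 + (r.lat - lat) ** 2;
  return [...routers].sort((a, b) => d(a) - d(b)).slice(0, 2);
}

const API_ORIGIN = "https://fch-api.example.net"; // self-hosted, health-checking API

async function handleRequest(request, fetchFn = fetch) {
  try {
    // Prefer the API server, which knows which routers are actually up.
    const upstream = await fetchFn(API_ORIGIN + new URL(request.url).pathname);
    if (upstream.ok) return upstream;
    throw new Error("upstream returned " + upstream.status);
  } catch {
    // Fallback: rank routers by the request's Cloudflare-provided
    // geolocation (request.cf), ignoring availability.
    const { longitude = 0, latitude = 0 } = request.cf ?? {};
    return new Response(JSON.stringify(nearestRouters(longitude, latitude)), {
      headers: { "content-type": "application/json" },
    });
  }
}
```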

    Each query returns several routers.
    The browser will attempt to connect to all of them and test the RTT (request-response over WebSockets), then disconnect from the slower ones.

    https://github.com/11th-ndn-hackathon/ndn-fch-worker
    https://github.com/11th-ndn-hackathon/ndn-fch
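The connect-to-all-then-prune behaviour can be sketched as follows, assuming each connection object exposes a `ping()` that resolves when a request-response round trip completes, and a `close()`:

```javascript
// Measure one round trip on each open connection, keep the fastest, and
// close the rest.
async function keepFastest(conns) {
  const timed = await Promise.all(
    conns.map(async (conn) => {
      const start = Date.now();
      await conn.ping();              // e.g. a small WebSocket echo message
      return { conn, rtt: Date.now() - start };
    })
  );
  timed.sort((a, b) => a.rtt - b.rtt);
  for (const { conn } of timed.slice(1)) conn.close(); // drop the slower ones
  return timed[0].conn;
}
```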

    So I just host a file on GitHub with a list of all active relays and their location. I want to vaguely link a peer to the closest relay to achieve lower latency and load distribution, though if the peer can’t connect to that specific relay it will try the next closest and so on.

    What if GitHub is down?

    My solution doesn't use GitHub.
    In case Cloudflare Workers is down, I have three routers hard-coded into the app itself.

    @Kousaka said:
    Timezone is not reliable, especially in the APAC region.

    IP geolocation sadly isn't reliable in Asia either.
    China is close to Japan, but routing sometimes goes through USA.
    Airtel India seems to send all international traffic through London.

    https://yoursunny.com/t/2021/NDN-video-QUIC/
    "The Case of an Asian NDN-QUIC Gateway"

  • @yoursunny said:
    I solved this problem by running the API frontend on Cloudflare Workers, which almost never has an outage.
    The API uses IP geolocation (provided by Cloudflare) to determine which NDN router (similar to your relay but they are all interconnected) the user could connect to.

    This is exactly the setup I am moving away from. Currently I am using an API on Cloudflare Workers. It is the only single point of failure in the network, and it has already gone down. Cloudflare Workers itself didn't go down, but one of the guys on my team managed to burn through the request allowance. You may think that's easily solvable by upgrading to a paid plan, but I see a couple of issues with that approach. Firstly, the approaching-limit notifications from Cloudflare arrived AFTER the limit was already exhausted, i.e. the API was down, so I incurred downtime regardless and could not have done anything to prevent it. Secondly, if one person can accidentally burn through the request allowance, what happens once it's in production and people are using it?
    With GitHub, if one person makes a ton of requests, GitHub will block/rate-limit that one person, not the project itself. And I trust GitHub's uptime more than I trust Cloudflare Workers or any other serverless technology that hasn't been around nearly as long as GitHub.
    Additionally, I already see cases where users are unable to access endpoints behind Cloudflare. I pinned it down to QUIC, so I disabled QUIC. That helped, but some users still occasionally have issues with Cloudflare presenting its anti-DDoS check when a JSON file is requested. So I would like to diversify the network and not be fully dependent on Cloudflare. It is a great service and helps immensely, and it will continue to be used by 90% of my users, but I still want to give an option to those who can't use it for one reason or another.

    Thanked by 1 yoursunny