750k RPS on a single dedicated server
What do you think about this?
Comments
Hetzner, so it must be good.
But I also read Cloud and Node.js, so I am confused.
Where do you see the word cloud there?
https://websummit.com/wp-content/uploads/2019/11/Press-release-4-November-2019.pdf
Well, they test with the wrk load-testing tool, which is HTTP/1.1-based, so they're testing HTTP/1.1 HTTPS loads. In this day and age that isn't always real world, given that HTTP/2 HTTPS is becoming the norm. They should also test using the h2load HTTP/2 load-testing tool.
I could push much higher requests/sec using wrk with HTTP/1.1 HTTPS than with h2load with HTTP/2 HTTPS.
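For anyone wanting to compare the two tools themselves, the invocations look roughly like this (the URL, durations, and connection counts are placeholders, not the numbers from any test in this thread):

```shell
# HTTP/1.1 HTTPS load test with wrk (URL/parameters are illustrative)
wrk -t8 -c1024 -d60s https://example.com/debug

# HTTP/2 HTTPS load test with h2load (from the nghttp2 tools);
# -m sets concurrent streams per connection, --h1 would force HTTP/1.1
h2load -t8 -c100 -m10 -n1000000 https://example.com/debug
```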
I'm super interested in this kind of high-perf project. What is your best score (RPS)? And with which stack? Go? C++? Java?
Let's see how far this goes..
wrk -t8 -c1024 -d24h https://vms2.terasp.net/debug
I highly doubt it's one AMD 3600 generating 750k RPS.
I'm running the above command x5 from my servers and have pushed it up to 1M RPS now.
I'm highly interested in this project for sure; I'll run it as a cache or reverse proxy, or even as a web server, if it turns out to be good.
We'll see how far it works.
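Fanning the same wrk run out from several boxes like that can be sketched as a simple SSH loop (the host names are placeholders for your own load generators):

```shell
# Run the same wrk command from five load-generator boxes in parallel
# (host names are placeholders; tune -t/-c per machine)
for host in lg1 lg2 lg3 lg4 lg5; do
  ssh "$host" 'wrk -t8 -c1024 -d24h https://vms2.terasp.net/debug' &
done
wait  # block until all remote runs finish
```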
I'm genuinely impressed now.
Cool, indeed it's now showing 1M RPS! It will be an amazing web/cache server for sure.
A Google search for the server name leads to a PDF press release for AppDrag
and
Yup, Neoon already found this and published the link to the PDF a few posts above; you are 1 day late, mate.
whoops missed that LOL
AppDrag's GitHub account is at https://github.com/jbenguira
@angelius
Frankly, from what little information (let alone tangible info) is available I think this is a load of BS.
Experience tells us that someone who needs to bend the status quo to make himself look great does not have something great. Also note the fact that µFTL code as well as benchmark code is not yet provided - but marketing blabla is.
They tell us nothing about the context, nothing tangible about the "cluster", virtually nothing tangible about µFTL, nothing about the benchmark. All we have is their marketing blurb.
Plus, the benchmark numbers they provide are somewhere between ridiculous and nonsensical, and not at all credible.
Also note that in one place they claim µFTL to be 100% faster while in their press release they claim "response times at least 100 times faster".
I guess that's in part due to http/2 being much more complex and in part due to a quite different request structure. In summary though (from the users perspective) http/2 should deliver somewhat better results (between 5% and 30%, typ. ca. 15%).
HTTP/1.1 vs HTTP/2: for latency/response time in terms of page speed, yes, HTTP/2 is faster, but for throughput as in requests/second, maybe not.
But yeah without context of the benchmark environment hard to say.
Did a quick test on my Centmin Mod Nginx builds with CentOS 7.7 64-bit and an Intel Core i7 4790K (4C/8T), using a forked version of wrk, wrk-cmm (https://github.com/centminmod/wrk/tree/centminmod). With wrk-cmm -t4 -c256 against a hello-world static file, I could push around 160,000 to 165,000 requests/sec for HTTP/1.1 HTTPS workloads.
Since µFTL is going to be used on their own AppDrag cloud platform, they have the access and ability to tune their whole environment and web stack/networking for it. They may be doing the same thing Facebook and Cloudflare are doing and using XDP/DPDK-like tech to move network packet processing out of the kernel to userland, which can realistically produce that many requests/second on a good web server. I vaguely recall seeing someone experiment with a custom Nginx build with DPDK or XDP easily pushing 10-50x higher requests/sec than Nginx via normal kernel network processing.
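For context on the XDP side of that idea, attaching an XDP program to a NIC with iproute2 looks roughly like this (the object file name and interface are hypothetical; this is only a sketch of the general kernel-bypass technique, not anything confirmed about AppDrag's setup):

```shell
# Attach a compiled eBPF/XDP object to a NIC so packets can be processed
# before the regular kernel networking stack (requires root; names illustrative)
ip link set dev eth0 xdp obj xdp_prog.o sec xdp

# Detach it again
ip link set dev eth0 xdp off
```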
guess it depends on testing tool
just did h2load HTTP/2 vs HTTP/1.1 HTTPS benchmarks
HTTP/1.1 HTTPS
HTTP/2 HTTPS
Note: my Centmin Mod Nginx server was running with Cloudflare's full HTTP/2 HPACK encoding patch, which is why h2load reported header space savings in the 95%+ range. Nginx upstream doesn't implement full HTTP/2 HPACK encoding, so usually you'd only see header space savings between 15-25%.
So probably difference in HTTP/2 vs HTTP/1.1 HTTPS for h2load tests came down to HTTP/2 HPACK header encoding savings = less data transferred = more requests/sec.
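As a rough back-of-the-envelope illustration of that point (all byte counts here are made-up round numbers for illustration, not measured values from these tests):

```shell
# Hypothetical sizes: ~700 bytes of HTTP/1.1 response headers vs ~35 bytes
# after aggressive HPACK encoding, plus a 27-byte hello-world body
h1_bytes=$((700 + 27))
h2_bytes=$((35 + 27))

# At a fixed 1 Gbit/s (~125,000,000 bytes/s), smaller responses raise the
# theoretical request ceiling dramatically
echo "HTTP/1.1 ceiling: $((125000000 / h1_bytes)) req/s"
echo "HTTP/2 ceiling: $((125000000 / h2_bytes)) req/s"
```

In practice the gap is far smaller, since TCP/TLS framing and server CPU dominate, but it shows why less header data transferred can mean more requests/sec.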
I can easily make nginx+lua respond with ~450k Hello World requests per second with a slightly tuned setup on an E3-1270 v5 that's doing a whole lot of other checks, configuration etc. so 750k on a Ryzen 3600 would probably be easily done as well.
This seems incredibly stripped down and tweaked to perform really well in benchmarks and even running with Apache Benchmark, for example, fails completely because it was tuned to get the best performance out of wrk.
The interesting thing here is that this is Node.js, but besides that, it's nothing spectacular.
In the end, the CPU will be eaten away by the application, SSL handshakes, disk IO etc. anyway.
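For reference, a hello-world nginx+lua endpoint of the kind being benchmarked here can be as small as this (a minimal sketch assuming an nginx built with ngx_http_lua_module, e.g. OpenResty; the location name is arbitrary):

```nginx
# Minimal hello-world served straight from Lua, no disk I/O
# (assumes ngx_http_lua_module is compiled in)
location /hello {
    default_type text/plain;
    content_by_lua_block {
        ngx.say("Hello World")
    }
}
```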
With some nodejs based tool? I doubt that. And note that they didn't say that their platform is now so fast but that their new tool is, and that's nodejs based.
I value your hands on approach to run some benchmarks as well as your thoughts re DPDK/XDP but again: This whole thing about some marketing blabla with no relevant information whatsoever and some ridiculous data on the "competition". Frankly, I think your work is but a waste of time. That marketing BS does not deserve your efforts.
"http/1.1 vs http/2"
As you noted correctly this still is somewhat of a lottery because http/2 is relatively new, a lot more complex than 1.1 and current implementations are early/not yet really sound.
FWIW I myself am still quite reluctant re http/2 because at least as of now I don't like the tradeoff between sound and battle-proven http/1.1 vs. often flaky and not yet production quality/real world proven code. Also http/2 does no miracles; if someones application is too slow then the reason is rarely to do with http protocol version.
Yeah HTTP/2 implementations also differ between web servers so it can vary too. But my personal focus on HTTP/2 HTTPS is because all my sites by default use it.
Yeah, I was just curious heh. But yeah, for a 27-byte debug file with ~67 bytes of network transfer overhead, it's small enough for any properly configured web server to push decent numbers. For the real world, where file/HTML sizes are much larger, that would be more telling.
Yeah nginx+lua would be another option. But you do have a point would be interesting to see cpu/memory usage comparisons too.
I think it's performing decently, but I think this statement is a bit misleading: "Use your hardware resources up to 100x more efficiently. µFTL can be used as a load balancer, a firewall, a DDOS protection layer, an in-memory cache, an api gateway and a serverless runtime for Node.js. Can be scaled horizontally by adding more nodes in cluster mode."
EDIT: Removed the last paragraph as I noticed they're actually doing over 1M, not 750k
@BunnySpeed you had me curious about nginx lua hello world tests too, and since I have my Centmin Mod Nginx build with the optional lua nginx module enabled, I decided to test it out.
The first result is the best I squeezed out of plain Nginx at ~280K requests/s, and the second result is Nginx Lua for the same hello world test at ~334K requests/s.
The difference in header size is due to the HTTP response headers differing between plain Nginx and the Nginx lua module response.