
[Request] Five Dedicated 24-Core Intel Servers


Comments

  • @antonpa said:
    So, any other offers for 48 threads / 64GB RAM / 2x1TB / 1Gbit shared?
    I need the best prices; my budget is 200 USD at the maximum, because we can set up
    2 x Intel Xeon E5-2651 V2 1.8GHz 30 MB 12-core / 64 / 2x1000 / 1Gbit shared configs in our datacenter at that price. Or are there no lowend offers on this forum?

    You're asking for lowend offers on this forum while at the same time asking for
    a dual E5 with 48 threads.

    I'd suggest some single-core, 1GB RAM VMs from Clouvider, mikho, etc.
    That's lowend.

  • @antonpa said:
    So, any other offers for 48 threads / 64GB RAM / 2x1TB / 1Gbit shared?
    I need the best prices; my budget is 200 USD at the maximum, because we can set up
    2 x Intel Xeon E5-2651 V2 1.8GHz 30 MB 12-core / 64 / 2x1000 / 1Gbit shared configs in our datacenter at that price. Or are there no lowend offers on this forum?

    So, taking the old formula of core count times core speed, you're at 43.2 GHz-cores per machine (24 cores x 1.8 GHz).

    Hetzner has a few i7-3770 servers (4 cores at 3.4 GHz, or 13.6 GHz-cores) for just under 30 euros pre-tax. You would need about 3.2 of them to replace one of your 24-core machines, and given that you're looking for five of those, you'd end up with about 16 of the Hetzner machines to replace them.

    At 30 euros each, that means you could replace each of your 24-core machines with 3.2 of those Hetzner boxes, at 96 euros equivalent pricing, or about 112 USD.

    Similar calculations can be made for other providers, assuming your workload doesn't need a shitload of cache (i.e., cryptomining) and can be split up across multiple machines fairly well.
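    For illustration, a minimal sketch of that back-of-envelope math in Python (the GHz-core figures and prices are the rough ones quoted above; the EUR/USD rate is an assumed late-2017 value, not an authoritative number):

    ```python
    # Crude "GHz-cores" replacement math: core count x base clock.
    # Deliberately ignores IPC, cache, and NUMA differences.

    def ghz_cores(cores, ghz):
        return cores * ghz

    target = ghz_cores(24, 1.8)     # 2x E5-2651 v2: 43.2 GHz-cores
    candidate = ghz_cores(4, 3.4)   # i7-3770: 13.6 GHz-cores

    boxes = round(target / candidate, 1)  # ~3.2 candidate boxes per target box
    eur = boxes * 30.0                    # just under 30 EUR pre-tax each, per the post
    usd = eur * 1.17                      # assumed EUR->USD rate

    print(f"{boxes} boxes -> ~{eur:.0f} EUR (~{usd:.0f} USD) per 24-core machine")
    print(f"For 5 machines: ~{round(5 * boxes)} boxes total")
    ```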

  • MrRadic Patron Provider, Veteran
    edited October 2017

    Dual Processor Intel Xeon Silver 4116 (24 Cores, 48 threads)

    64 GB DDR4 ECC Reg RAM

    2 x 1 TB HDD

    1 Gbps Dedicated Port

    10 TB Premium Bandwidth

    /29 IP Block (5 Usable IPs)

    KVM Over IP

    20 Gbps DDoS Protection

    NYC Metro Data Center

    $319/mo + Free Setup



    We can announce your /24 free of charge. I guarantee you won't find a better deal; these are Intel's newest processors, released just recently.



    E-mail [email protected] for epic deals on latest generation servers (not the old stuff...).

  • cociu Member
    edited October 2017

    antonpa said: 2 x Intel Xeon E5-2651 V2 1.8GHz 30 MB 12-core / 64 / 2x1000 / 1Gbit shared

    My price for this config will be around $200/mo for a minimum of 5 servers. For further details, please PM. Also free BGP setup. We are located in Romania.

  • @bsdguy said:

    @sureiam said:
    I think you should consider co-location... Additionally, you could go with AMD Epyc processors and get what you're looking for at a lower start-up cost with a single socket.

    Too expensive and not needed. A Threadripper will do.

    I'm amazed, btw, not to read a lot more about Threadripper here. 16 cores / 32 threads, and relatively cheap at that; massive bandwidth both in terms of memory and of PCIe sounds like a nice basis for VPSes.
    But I guess the crowd is waiting for Supermicro, HP, and Dell to offer such machines.

    Ya, absolutely, although Threadripper's power consumption is a bit high; down-clocking it would still net great performance and lower thermals. I agree it's surprising they aren't jumping on this, considering the performance and value proposition for VPS services. In particular the half-core and 1-core VPS solutions.

  • bsdguy said: Too expensive and not needed. A Threadripper will do.

    Depends; a 7281 is ~$600 and two are ~$1,200, not far from a single 1950X (~$1,000 in the EU), but you get 32c/64t (albeit at 2.1GHz). A lot more memory support as well.

    bsdguy said: I'm amazed, btw, not to read a lot more about Threadripper here

    There are no enterprise boards at this time, and the gaming boards cost, compared to even dual-E5 SM boards, a high premium for zero enterprise-needed/desired features.

    The most-cited complaint, and my own as well: zero motherboards with IPMI, and the commonly used 8+4-pin CPU power connector being fairly rare in server cases/PSUs (most of mine have 2x fixed 8-pin, not 4+4 plugs).

    SM has EPYC boards shipping (CPUs also finally better available since a few days ago), but the single-socket one is sadly still "TBA"; it might be pretty interesting. From what I know, SM has no plans for server form factor Threadripper systems (read closely; this means there might be a tower workstation and mainboards in EE-ATX/SSI-EEB).

    https://www.supermicro.nl/products/nfo/AMD_SP3.cfm?pg=MOBO
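    As a quick sketch of that price comparison, using the ballpark prices above (the 1950X base clock of 3.4 GHz is an assumption added for the math, not a figure from the thread):

    ```python
    # Ballpark price-per-thread with the figures quoted above.
    dual_7281 = {"usd": 1200, "cores": 32, "threads": 64, "ghz": 2.1}
    tr_1950x  = {"usd": 1000, "cores": 16, "threads": 32, "ghz": 3.4}  # base clock assumed

    for name, cpu in (("2x EPYC 7281", dual_7281), ("1x TR 1950X", tr_1950x)):
        print(f"{name}: ${cpu['usd'] / cpu['threads']:.0f}/thread, "
              f"{cpu['cores'] * cpu['ghz']:.1f} GHz-cores")
    # 2x EPYC 7281: $19/thread, 67.2 GHz-cores
    # 1x TR 1950X: $31/thread, 54.4 GHz-cores
    ```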

  • @William said:

    bsdguy said: Too expensive and not needed. A Threadripper will do.

    Depends; a 7281 is ~$600 and two are ~$1,200, not far from a single 1950X (~$1,000 in the EU), but you get 32c/64t (albeit at 2.1GHz). A lot more memory support as well.

    bsdguy said: I'm amazed, btw, not to read a lot more about Threadripper here

    There are no enterprise boards at this time, and the gaming boards cost, compared to even dual-E5 SM boards, a high premium for zero enterprise-needed/desired features.

    The most-cited complaint, and my own as well: zero motherboards with IPMI, and the commonly used 8+4-pin CPU power connector being fairly rare in server cases/PSUs (most of mine have 2x fixed 8-pin, not 4+4 plugs).

    SM has EPYC boards shipping (CPUs also finally better available since a few days ago), but the single-socket one is sadly still "TBA"; it might be pretty interesting. From what I know, SM has no plans for server form factor Threadripper systems (read closely; this means there might be a tower workstation and mainboards in EE-ATX/SSI-EEB).

    https://www.supermicro.nl/products/nfo/AMD_SP3.cfm?pg=MOBO

    I know, I know. That situation is still ugly. Without good IPMI and dual-CPU systems (or at least industry-quality mainboards), AMD can't enter that market.

    On the other hand, it seems only reasonable to assume that AMD will gain speed and arrange for motherboards to be built (worst case, by themselves). Based on that, I'm surprised not to see much more interest and discussion here.

    And yes, of course both Intel and AMD have advantages, but still: AMD clearly has at least some advantages, too.

  • bsdguy said: And yes, of course both Intel and AMD have advantages, but still: AMD clearly has at least some advantages, too.

    Intel can by now pretty much fuck off for me in recent generations - e.g. the i9-7980XE, as the highest-end "desktop" part at a cost of ~$1,600+, has only 44 PCIe lanes (which, thanks to UPI and 2 sockets, unlike AMD's IF, is at least "ok" on the Xeon side, and not "what a joke" level like here) and yet again no ECC support.

    For now, also by availability at launch and at scale, AMD takes the lead in many segments; they obviously won't beat Intel in total (but maybe in new sales for some time, pushing their total share up as well), but compared to Opteron (brr, Piledriver) it's something good at last.

  • I thought it was fairly well established that AMD Epyc has dual- and single-socket boards available. Threadripper won't get them, but Epyc will. Not to mention that the massive number of PCIe lanes could be useful for NVMe RAID as NVMe prices keep dropping.

  • @William said:

    bsdguy said: And yes, of course both Intel and AMD have advantages, but still: AMD clearly has at least some advantages, too.

    Intel can by now pretty much fuck off for me in recent generations - e.g. the i9-7980XE, as the highest-end "desktop" part at a cost of ~$1,600+, has only 44 PCIe lanes (which, thanks to UPI and 2 sockets, unlike AMD's IF, is at least "ok" on the Xeon side, and not "what a joke" level like here) and yet again no ECC support.

    For now, also by availability at launch and at scale, AMD takes the lead in many segments; they obviously won't beat Intel in total (but maybe in new sales for some time, pushing their total share up as well), but compared to Opteron (brr, Piledriver) it's something good at last.

    Well, what was to be expected from a monopoly? I've found quite a few things ridiculous for quite some time now, but hey, the market sucked it up, smiled, burped, and that was it.

    One danger I see has to do with exactly that. Intel's prices bore pretty much no relation to anything physical or even to research costs; they were wanton prices. Obviously that's an invitation for competition, and AMD has taken it up. However, it is at the same time fatally dangerous, too. Reason: Intel's only problem is how to explain hefty price drops without severely damaging its return path to arbitrary pricing. Part 1 is obvious, and they seem to be going down that path: create some new series that are competitively priced. Part 2 is the difficult one: how to explain that in a way that allows Intel to return to its old exuberant pricing once AMD is dead in the pit?

    Another quite obvious question is that of the war chest. I'm not a finance and stock market guy, but it's my clear impression that AMD will need a solid war chest to fight the upcoming battles.
    Plus, of course, a lot of dirty games, which Intel has been shown to play; games like threatening manufacturers. Whatever nice processors AMD might have, they'll need some well-respected board manufacturers, as well as some big-name system manufacturers, to really penetrate a not-insignificant part of the market.

    Technically speaking, I'm not at all worried. A processor is pretty much a really good core plus some gadgets like hardware encryption, caches, etc., with the core being the holy grail that is really, really hard to get right (Opteron, anyone?). The current processors show that AMD has a very solid, good basis, good processes, good gadgets, good IP.

    Plus: AMD has some killer features/selling points, the number of lanes being an obvious example. But again, there too, right next to milk and honey there are dangers. Example: unless AMD manages to turn that into some practical killer feature, those lanes will be but a nice number in a nice brochure. So, what to reasonably do with those lanes, and how to make them tangibly advantageous?

    Above, I read someone writing about NVMe. Yes, theoretically. But how big is the market of people needing, say, 8 NVMe drives in a system? Plus, won't most in that market segment be Oracle (SPARC) customers anyway? Well, the gamers then. But again: how big is the market of people really needing, and willing to pay for, say, 4 full-width graphics engines?
    Some years ago I would have said "well, throw them lanes at higher-end networking". Not so today, however. Those needing n x 100 Gb or even just n x 40 Gb either went to other processor platforms or to ASICs.

    One of the few realistic options I see is the server market, and for a funny reason profile: power and cost savings, plus the options that many lanes open up.
    Looking at data centers: a) power is one of the top cost factors, if not the top one (keep in mind that power also translates into cooling needs); b) if you run 1,000 of something, saving a few dollars each means much, much more than if you run 1 or 5 or 10 of something; c) optionally: lanes, which mean more GPUs and/or more storage.
    Add to that the matter of security, which is becoming increasingly important. AMD has covered that quite well.
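    To put point (b) into numbers, a toy sketch; every figure below (wattage delta, PUE, electricity price) is an illustrative assumption, not data from this thread:

    ```python
    # Illustrative only: what a small per-server power saving means at fleet scale.
    watts_saved = 40        # assumed per-server saving vs. a comparable box
    pue = 1.5               # assumed power usage effectiveness (cooling overhead)
    usd_per_kwh = 0.10      # assumed electricity price
    hours = 24 * 365

    def yearly_savings(n_servers):
        kwh = n_servers * watts_saved / 1000 * hours * pue
        return kwh * usd_per_kwh

    for fleet in (1, 10, 1000):
        print(f"{fleet:>5} servers: ~${yearly_savings(fleet):,.0f}/year")
    # 1 server: ~$53/year; 1,000 servers: ~$52,560/year. The scale is the point.
    ```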

    But where the fuck are the widely available, no-more-expensive-than-Intel Epyc mainboards? Move, AMD, move!

  • @bsdguy Goodbye, Sun.

    Too soon.

  • They've just introduced a new SPARC with 256 threads. So I guess some remains of Sun will stay alive for a while.

  • WSS Member

    @bsdguy said:

    They've just introduced a new SPARC with 256 threads. So I guess some remains of Sun will stay alive for a while.

    It'll be bought by Joyent and they'll run Node on it. The era is over. :(

  • raindog308 Administrator, Veteran

    bsdguy said: They've just introduced a new SPARC with 256 threads. So I guess some remains of Sun will stay alive for a while.

    I was just at the Oracle conference. Exactly no one was talking about SPARC, unless they were being boozily nostalgic for the dot-com days.

    Oracle has killed SPARC-based systems and, since they were the only reason for its existence, Solaris. The M8 is the last hardware rev. It'll all wind down via normal product support lifecycles, but the roadmap now points to the sunset.

    They didn't make an official announcement, but when they're not announcing new gear and the tech press is reporting layoffs...

  • bsdguy said: Another quite obvious question is that of the war chest. I'm not a finance and stock market guy, but it's my clear impression that AMD will need a solid war chest to fight the upcoming battles. Plus, of course, a lot of dirty games, which Intel has been shown to play; games like threatening manufacturers

    Less of an issue than it seems, both due to diversification (AMD does more SoC things like Sony/MS hardware, plus the GPU market) and due to operational area.

    Intel focuses on US/EU-based R&D (plus heavily on Israel; their facilities there are huge) and production (formerly also Costa Rica; silicon is now actually made in Asia, but by US companies/fabs), while AMD is Asia-based (TSMC; more Taiwanese, Japanese, and now also Indian and Chinese). It's partly also a management thing: the Intel board (and largest investors) are more US/EU-based, while Asian and multinational investment is more common at AMD (and, for completeness, the Intel lawsuits surely helped in the past as well, but one cannot blame AMD for that; they clearly won for a reason, so this is only fair).

    Given customer demand (unlike the stillborn AM1) and performance (much less the case with AM2/3), the board manufacturers currently will (and do) deliver; they have nothing to lose. If Intel threatens someone the size of MSI or Gigabyte, Intel wins on size, but the publicity alone, combined with the competitive AMD offers, sounds like insanely bad marketing results (or even suicide for that Intel department).

    As Intel refuses to sell to some large customers (AMD obviously has to refuse some as well, but some refusals are just company decisions, even if politically influenced, where AMD can decide differently), the CPU market for government/supercomputing is there and interested in Zen's gains; the end-user market is there again too, from the low end (AM4 with an old arch now, upgrade the CPU later; try that on Intel, though you have the G-series Celerons/Pentiums in the same price area since launch) to workstation/high end; and the enterprise/government market (the normal datacenter, like the SM EPYC servers) is picking up again.

    sureiam said: I thought it was fairly well established that AMD Epyc has dual- and single-socket boards available. Threadripper won't get them, but Epyc will.

    Dual-socket TR was not the point at all.

    TR allows solid overclocking (and a higher base frequency than single-socket EPYC) and still supports ECC, something Intel does not offer at all outside of rare, expensive runs for specific uses (X5698 @ 2x2x4.4G, E5-2687W @ 12x2x3.5G, etc.), but it has, at this time, no enterprise boards (incl. workstation) from anyone.

    That makes it not very usable for larger-scale deployments, due to component reliability (less so) and missing features (IPMI, more sensors, TPM, SAS, EFI/OPROM tested for long-term stability, tested SAS/IB/NIC compatibility, etc.) in the datacenter space, even if you build your own servers. The Intel-side example is the E3, for lack of anything higher (single-socket E5 has no desktop part, and there is no ECC on the high-end i7, so that one is not qualified for enterprise use), which always had cheap, reliable, enterprise-featured Supermicro boards and off-the-shelf servers from HP(E), Dell, etc., without the gaming stuff (SLI, audio, WiFi...).

    TR by design does not allow dual-socket functionality; single-socket EPYC is very similar, but not the same in this respect either. A similar Intel-side example is, e.g., the W3680 and X5680: functionally the same CPU, but differing in dual-socket support (and memory layout) on the very same socket (so much so that a single W3680 will somewhat work in a dual board, and an X5680 works fully in a single one).

    Ultra-large customers can, and likely already do, avoid this by getting their own mainboard designs from Quanta (AWS scale/similar) or Gigabyte/Fujitsu/ASRock (OVH, Online.net scale/similar) that incorporate some of this or work around it (implementing IPMI/KVM is NOT simple from an engineering view; removing audio circuitry and replacing GE NICs with 10GE, or Chinese capacitors with Japanese ones, is plain rewiring/rebasing only, and your PCB design tool can likely do that nearly entirely automatically; after that, everything left is QA with a shitload of SAS cards and RAM).

    bsdguy said: Part 2 is the difficult one: how to explain that in a way that allows Intel to return to its old exuberant pricing once AMD is dead in the pit?

    You don't. If there is no competition, the customer has no choice, which makes this pretty simple; as a publicly traded company, they can also just blame the investors' desire for profit, which keeps the company morally clean.

    I also highly doubt AMD would do ANYTHING differently if they could (or could "again", this time just without illegal Intel manipulation).

    bsdguy said: Example: unless AMD manages to turn that into some practical killer feature, those lanes will be but a nice number in a nice brochure. So, what to reasonably do with those lanes, and how to make them tangibly advantageous?

    That has already happened, somewhat, even as an enterprise-desired feature: Intel charges a premium for NVMe RAID and limits you to their SSDs in some configurations, while AMD offers it free on all configurations and at a much larger scale (and supports PCIe bifurcation better than Intel, allowing the x16/x8 slots to be easily split into x4 links, rewired to support passive M.2 carrier cards, which are cheap, whereas a PLX chip alone easily costs $50+).

    https://www.extremetech.com/computing/256861-nvme-raid-support-now-available-amd-threadripper

    Now we wait for more E-ATX/workstation-centered boards; I'd rather have more x8 slots than x16 for other usage.
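    As a rough illustration of why lane count plus bifurcation matters for NVMe-heavy builds (the slot layouts below are hypothetical examples, not real board specs):

    ```python
    # No PLX switches assumed: each xN slot bifurcates into N // 4 independent x4 links.
    NVME_LANES = 4  # a U.2/M.2 NVMe drive uses a x4 link

    def max_x4_drives(slot_widths):
        return sum(width // NVME_LANES for width in slot_widths)

    epyc_board = [16, 16, 16, 16, 8, 8]  # hypothetical layout: 96 of 128 CPU lanes in slots
    hedt_board = [16, 16, 8, 4]          # hypothetical 44-lane HEDT layout

    print("EPYC-ish board:", max_x4_drives(epyc_board), "x4 drives")  # 20
    print("HEDT-ish board:", max_x4_drives(hedt_board), "x4 drives")  # 11
    # Passive carriers only work if the firmware exposes bifurcation (x16 -> 4x4);
    # otherwise each extra drive needs a PLX switch card, per the $50+ note above.
    ```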

    bsdguy said: But where the fuck are the widely available, no-more-expensive-than-Intel Epyc mainboards? Move, AMD, move!

    AMD has CPUs in stock (Klarsicht-IT and Also both list high availability for me since last week), so I assume the components needed for the chipset/EFI side (as TR/EPYC is an SoC for most I/O anyway) are also available, so the only thing lacking is board manufacturers. Intel offers 'incentives' for these things (discounts on chipsets, which obviously isn't applicable to TR/EPYC; discounts on NICs; Thunderbolt; ESX(i) certification path shortcuts; marketing; plain money where legal/event sponsoring), which AMD cannot match in kind or in volume; that might be part of the issue as well.

    bsdguy said: They've just introduced a new SPARC with 256 threads. So I guess some remains of Sun will stay alive for a while.

    I was never too into SPARC, but the architecture has (or rather had, given the IPC gains on x86-64, sub-20nm fabs, and socket scalability) some advantages, especially in multithreading (and, weirdly, as we have seen with Sony: gaming, if your code is optimized for it). It does have financial and enterprise/RAS features others don't (the E7 gets near this, though), but you pay for this with the arch quirks.

    By now they pretty much just exist to replace old servers with new ones that run the same or nearly the same code for certification, liability, or legal reasons; this is not uncommon in financial markets at all, and the cost of operation and development is absolutely negligible to, e.g., BoA or HSBC.

  • @William

    Interesting discussion, and you have a lot to contribute. I like the fact that the two of us, on the one hand, have enough in common for a good basis, yet come from different fields and with different foci and experience sets.

    "nvme raid" - Yes, sure, but won't that stay a rather small segment, mostly limited to not so normal scenarios? Let's look at the performance and the cost. With "disks" delivering upwards of 2 GB/s reading and writing - at a price! - how much sense does it make to raid them for performance and for whom? Plus: Is it even reasonably (incl. cost) feasible?
    Suppose I have a 1TB database and need/want extreme speed. Wouldn't I then go RAM-based anyway (with even a raid 1 nvme disk easily handling the writes/inserts/updates).
    Yes, there will certainly be some exotic cases mirroring some more speed out of nvme raids but I think that the vast majority won't care (seeing the price tag, too).

    Btw, like you, I'd prefer more x8 slots, but then I don't give a shit about multi-graphics-card gaming.

    I have a hunch (well, after thinking a lot about AMD's xYx). What's a server? A (more or less powerful) CPU plus lots of (available) memory, some RAID/disk controller, and some networking.
    Now, looking at processors with built-in engines (e.g. SATA/RAID/networking) like the POWER CPUs, we see that it's not technically demanding; the problem is massive I/O (hence bridge chipsets). My hunch is that AMD didn't go for 8/16-core modules because they are incapable of building one die with 32C/64T, but because they wanted it modular.
    In other words, I'm expecting "server SoCs".
    Why? Simple: servers are all about a certain performance level and about cost. Server SoCs are about the only way to get considerable cost savings. Forget all them bridges.

    Let's play. Instead of four sub-dies with 8C/16T each, the server SoC would contain just two or three, but it would also contain 2 x 10Gb plus 16 SAS channels (in groups of 4), plus the usual set of small fries and some plain PCIe lanes.
    That would give you a quite respectable, quite simple, and cheap-to-manufacture server board that would address a very considerable share of the server market.
    Next step: with a second socket you simply add cores, maybe 16/32, maybe 32/64. That way you could address both the mass market (keep all the company servers in mind) and the higher-end market (e.g. cloud) with a total of 48C/96T (and, btw, quite a bite of the workstation market, too). Plus, AMD would have a reserve buffer in case Intel started a price war.
    BMC/IPMI shouldn't be too hard a problem for AMD; they already have an Arm management core.

    As for SPARC, I'm somewhat split. On the one hand, I never considered those processors to be the greatest. On the other hand, I highly value that Sun open-sourced them. The EU built their LEON on that basis, the Russians built an MCST product line, and many others learned a whole lot and found their entry into the field easier. And hey, come on, them modern SPARCs aren't shitty either.

  • sureiam Member
    edited October 2017

    Intel can't just drop prices on their server products without investors getting spooked by the lower overall company margins. Intel has already been losing a little there, and investors have asked about it over the last two quarters. Intel will need to shift tactics or give more for less. I believe the new Coffee Lake chips are actually purposely kept low in stock so as not to make Skylake obsolete and lower margins too much (monolithic 6-cores and 4-cores are more expensive to produce vs. AMD's Infinity Fabric-aided CCXs).

    The problem is that, unlike in desktop environments where you can increase clock speed and keep using a monolithic chip design, that approach doesn't scale well above 12 cores. The failure rate gets too high, and they end up having to raise prices to compensate. AMD's Infinity Fabric CCX design allows for amazing scaling. In addition, dual-socket Epyc boards use Infinity Fabric to communicate between the two CPUs, creating a very fast link; it actually dedicates 128 PCIe lanes just for that, which is why current dual-socket Epyc boards still have 128 lanes.
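    The lane accounting behind that claim, as a small sketch (per-socket figures as described above for first-generation Epyc):

    ```python
    # Dual-socket Epyc lane accounting, per the description above.
    lanes_per_socket = 128
    if_lanes_per_socket = 64   # half of each socket's lanes form the Infinity Fabric link

    sockets = 2
    total = sockets * lanes_per_socket            # 256 raw lanes
    used_for_if = sockets * if_lanes_per_socket   # 128 consumed by the socket-to-socket link
    usable_pcie = total - used_for_if             # 128 left for slots/NVMe

    print(f"2P Epyc: {usable_pcie} usable PCIe lanes (same count as 1P)")
    ```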

    Board partners should be easy. Zen-based chips, which include Epyc, are a form of SoC; they have their own chipset functions on board, managing a lot. This lets board manufacturers create much simpler boards that cost less to make.

    AMD's memory encryption, I would think, could also be a marketing tool for VPS solutions. The data is encrypted in memory, both at rest and while hot. That's appealing to users in shared environments like webhosting and VPS.
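    If a provider wanted to check whether a host actually exposes that capability, one way on Linux is to look at the CPU flags (a minimal sketch; it assumes a kernel recent enough to report these flags at all):

    ```python
    # Minimal sketch (Linux only): look for AMD memory-encryption flags in /proc/cpuinfo.
    # "sme" = Secure Memory Encryption, "sev" = Secure Encrypted Virtualization.

    def cpu_flags():
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    for feature in ("sme", "sev"):
        print(feature, "supported" if feature in flags else "not reported")
    ```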

    Lastly, AMD has a built-in remote-management module, similar to IPMI or Dell iDRAC, in all Zen-based processors, running on a dedicated ARM sub-processor, I believe. However, it doesn't seem fully utilized yet, as I can't find much information on how to access or use it!

    Intel is in for a fight on the web services front, I feel, if more providers start to consider AMD again. Oh, and another example of good use of PCIe slots outside of NVMe is PCIe RAM-disk modules used for caching, which, while not as popular as NVMe, are incredibly fast and potentially cheaper. But for obvious reasons that would be a caching-only solution to aid SSD or HDD deployments.

  • @antonpa said:
    I'm fixed on 2x1TB HDD. Waiting for your offers.

    Could you give me a test IP?

  • 6ixth Member

    @MrRadic said:
    Dual Processor Intel Xeon Silver 4116 (24 Cores, 48 threads)

    64 GB DDR4 ECC Reg RAM

    2 x 1 TB HDD

    1 Gbps Dedicated Port

    10 TB Premium Bandwidth

    /29 IP Block (5 Usable IPs)

    KVM Over IP

    20 Gbps DDoS Protection

    NYC Metro Data Center

    $319/mo + Free Setup



    We can announce your /24 free of charge. I guarantee you won't find a better deal; these are Intel's newest processors, released just recently.



    E-mail [email protected] for epic deals on latest generation servers (not the old stuff...).

    Even though this is kind of irrelevant now, read the fucking thread before posting your shite.
