AMD EPYC & Intel Xeon Gold 6148 Scalable Spotted At Linode

It seems Linode has new CPUs in some datacenters, as a few of my members have spotted both Intel Xeon Gold 6148 and AMD EPYC 7501 CPUs on some Linode VPS nodes: https://community.centminmod.com/threads/guessing-linodes-next-server-cpus.12642/#post-61453

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                1
On-line CPU(s) list:   0
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
Stepping:              4
CPU MHz:               2399.996
BogoMIPS:              4801.99
Virtualization:        VT-x
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              4096K
L3 cache:              16384K
NUMA node0 CPU(s):     0
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat pku ospke

and

processor       : 0
vendor_id       : AuthenticAMD
cpu family      : 23
model           : 1
model name      : AMD EPYC 7501 32-Core Processor
stepping        : 2
microcode       : 0x1000065
cpu MHz         : 1999.992
cache size      : 512 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 arat
bugs            : fxsave_leak sysret_ss_attrs null_seg spectre_v1 spectre_v2
bogomips        : 4001.65
TLB size        : 1024 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:
Thanked by ariq01, v3ng

Comments

  • Clouvider Member, Patron Provider

    64 cores, assuming a dual-CPU config. I wonder how much memory and storage they chuck into their new nodes.

  • Clouvider Member, Patron Provider

    I don’t like the new, confusing, naming scheme and jacked prices despite Spectre and Meltdown @ Intel. I hope AMD steps up across all ranges (that means introducing a competitor to the E3 series) so hopefully Intel gets real in the near future.

  • That's fantastic! The EPYC platform is definitely ready for production and prime time. The Zen architecture's stability has been proven across the board in the consumer Ryzen sector. I would additionally argue that EPYC is the perfect LET processor: high core counts, high RAM limits, and very capable NVMe RAID...

    You could spec one out with 64 cores and 128 threads! That's 128 potential single-"core" VPS solutions. Memory maxes out at 2 TB across the whole processor line, so add as much as you can afford, and NVMe RAID isn't extra and works very well.

    I would love me an EPYC node for a VPS!

    Thanked by AnthonySmith
  • Hivelocity Member, Patron Provider

    We, and some of our customers, have been using EPYC in production for several months now, and I can say the performance has been impressive. When paired with PCIe SSDs the read/write speeds are out of this world. We use them in Tyan chassis which, along with the EPYCs, support 128 PCIe lanes, which obviously makes for a huge performance boost. One customer in particular went from a dual E5-2630v4 to a single AMD EPYC 7551P and, according to him, "have gone from pegging out my resources to about 10%". Anyhow, my thought is you should give the Linode AMD option a try. I think you will like it.

  • vmhaus Member, Top Host, Host Rep

    Well, you could always try out our AMD EPYC + NVMe based KVM in London. It's been all positive reviews so far.

  • rm_ IPv6 Advocate, Veteran

    Hivelocity said: Anyhow, my thought is you should give the Linode AMD option a try. I think you will like it.

    How often do you get an EPYC? A few days ago I made a couple Linodes, first one got onto a "hardware problem" node, 2nd one got some old E3 or whatever, and the 3rd one got Xeon Gold 6148. If I knew there's also EPYC, I'd maybe try some more.

    Also, can anyone try how much an EPYC gets in:

    dd if=/dev/zero bs=1M count=2048 | md5sum
  • Thread cleaned.

    Thanked by Aidan, eva2000
  • @rm_ said:

    Hivelocity said: Anyhow, my thought is you should give the Linode AMD option a try. I think you will like it.

    How often do you get an EPYC? A few days ago I made a couple Linodes, first one got onto a "hardware problem" node, 2nd one got some old E3 or whatever, and the 3rd one got Xeon Gold 6148. If I knew there's also EPYC, I'd maybe try some more.

    Also, can anyone try how much an EPYC gets in:

    dd if=/dev/zero bs=1M count=2048 | md5sum

    Unfortunately, there's no way other than to try spinning up instances until you get the cpu you want :(

    When I do land on an AMD EPYC or Intel Xeon Gold 6148 on Linode, I'll definitely be sharing the benchmarks I usually do for each VPS I use :)

  • Hxxx Member

    You could code an integration with their API to loop [deploy, check CPU, cancel] until you land on the CPU you want. Of course, with some kind of cooldown.
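    As a rough sketch of that loop, assuming the Linode API v4 instance endpoints and response fields (id, ipv4, status) roughly as documented (verify against the current docs), plus curl, jq, and an SSH key already added to the account; region, type, image, and credentials below are placeholders:

    #!/bin/bash
    # Loop: deploy -> check CPU -> keep or destroy, with a cooldown between attempts.
    TOKEN="your-api-token"                         # Linode personal access token
    WANT="EPYC"                                    # substring of the CPU model we're after
    API="https://api.linode.com/v4/linode/instances"

    while true; do
        # Deploy a small instance (placeholder region/type/image/credentials).
        resp=$(curl -s -X POST "$API" \
            -H "Authorization: Bearer $TOKEN" \
            -H "Content-Type: application/json" \
            -d '{"region":"us-east","type":"g6-nanode-1","image":"linode/debian9",
                 "root_pass":"ChangeMe-123","authorized_keys":["ssh-rsa AAAA... you@host"]}')
        id=$(echo "$resp" | jq -r '.id')
        ip=$(echo "$resp" | jq -r '.ipv4[0]')

        # Wait for the instance to report "running", then give SSH a moment to come up.
        until [ "$(curl -s -H "Authorization: Bearer $TOKEN" "$API/$id" | jq -r '.status')" = "running" ]; do
            sleep 15
        done
        sleep 30

        # The API won't tell you the host CPU, so check from inside the guest.
        model=$(ssh -o StrictHostKeyChecking=no root@"$ip" "lscpu | grep 'Model name'")
        echo "Linode $id: $model"

        if echo "$model" | grep -q "$WANT"; then
            echo "Got the CPU we wanted, keeping Linode $id"
            break
        fi

        # Not the one: destroy it and cool down before the next attempt.
        curl -s -X DELETE -H "Authorization: Bearer $TOKEN" "$API/$id" > /dev/null
        sleep 60
    done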

  • MikePT Moderator, Patron Provider, Veteran
    edited April 2018

    https://benchgeeks.com/2018/02/20/amd/amd-epyc-16-core-7351p-2-4ghz/

    Here is a benchmark done on a @HIVELOCITY server, EPYC. Impressive performance.

    Thanked by eva2000
  • rm_ IPv6 Advocate, Veteran
    edited April 2018

    eva2000 said: Unfortunately, there's no way other than to try spinning up instances until you get the cpu you want :(

    I did not ask if there is any other way, I asked how often do you get an EPYC while spinning up and destroying instances. Or at least how many attempts it took for someone who tried, to finally get one.

  • eva2000 Veteran
    edited April 2018

    @MikePT said:
    https://benchgeeks.com/2018/02/20/amd/amd-epyc-16-core-7351p-2-4ghz/

    Here is a benchmark done on a @HIVELOCITY server, EPYC. Impressive performance.

    With AMD EPYC you want to be running Linux 4.15+ kernels for the best performance; there was a huge difference when I tested CentOS 7 with a 4.15 kernel versus the 3.10 distro kernel on an AMD EPYC 7401P CPU. With Linode, the added bonus is that they use Linux 4.15+ kernels by default: https://www.linode.com/kernels/
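    A quick way to confirm what you are actually running (on CentOS 7 the stock distro kernel is that 3.10):

    # show the running kernel; on EPYC you want 4.15 or newer
    uname -r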

    @rm_ said:

    eva2000 said: Unfortunately, there's no way other than to try spinning up instances until you get the cpu you want :(

    I did not ask if there is any other way, I asked how often do you get an EPYC while spinning up and destroying instances. Or at least how many attempts it took for someone who tried, to finally get one.

    Depends on the region, I guess; Linode users have reported them in the London and Newark Linode datacenters.

    They also gave some clues as the host node names differ https://community.centminmod.com/threads/move-from-vultr-to-linode-caused-a-huge-performance-decrease.14318/page-2#post-61451

    I landed on one in Newark! From what I can tell in this specific location, if the host ID starts with h13**-cjj1 it will most likely have the newer Gold 6148 CPU.

    https://community.centminmod.com/threads/move-from-vultr-to-linode-caused-a-huge-performance-decrease.14318/#post-61432

    It seems the Xeon Gold host servers are named h****-lon1, while the old servers use the known london***** naming.

    Linode Newark seems to be one region where both the Intel Xeon Gold 6148 and the AMD EPYC 7501 have been reported.
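    For anyone cycling through instances, the quickest check of which CPU a fresh Linode landed on (same tools whose output is quoted at the top of this thread):

    # print just the CPU model of the node this VPS runs on
    lscpu | grep 'Model name'
    # or, if lscpu isn't installed:
    grep -m1 'model name' /proc/cpuinfo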

    Thanked by MikePT
  • @Clouvider said:
    I don’t like the new, confusing, naming scheme and jacked prices despite Spectre and Meltdown @ Intel. I hope AMD steps up across all ranges (that means introducing a competitor to the E3 series) so hopefully Intel gets real in the near future.

    Although we're getting somewhat off topic now, I would say at some point they will, considering Intel's reaction to Ryzen when they threw Coffee Lake onto the market.
    In my opinion, Intel lacked a competitor all those years, resulting in smaller steps in innovation and research. Okay, it is their own fault up to a point, because they used all the patents they hold to deny AMD the chance to compete with them. I am glad to see AMD coming back slowly. Give it another 1-2 years. We already see a lot of hosts offering AMD systems. One of my hosts offered to let me test their AMD series and it was actually quite nice.
    I have to confess I am kind of an AMD fanboy, though I have an Intel Core in my desktop. I had the Phenom II X4 and needed a newer CPU about two years ago, but that FX series... well, you know, I avoided it then.
    In general I dislike Intel because of a few particular aspects.

    tl;dr AMD will get its time to shine more :D

  • sureiam Member
    edited April 2018

    More datacenters need to adopt EPYC; the performance is remarkably high. However, I'm personally more interested in the built-in virtual memory encryption. In fact, I would pay EXTRA if I could be placed on an EPYC node with virtual memory encryption (SEV) enabled (a quick check for it is sketched below).

    Demo video of memory encryption by AMD EPYC.

    But perhaps I'm just paranoid and don't trust admins I've never met. Currently we have no choice but to trust them.
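    For what it's worth, a rough way to check whether SEV is actually in play; a sketch assuming a recent enough kernel (roughly 4.15+) to expose and log it:

    # on the host, the feature shows up as a CPU flag
    grep -m1 -o 'sev' /proc/cpuinfo
    # inside a guest, recent kernels log a line when SEV memory encryption is active
    dmesg | grep -i 'sev\|memory encryption'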

    Thanked by Hxxx