zamadatix 4 days ago

I miss the ~X99 era of HEDT where the price to go from the consumer Z77 class stuff in the article to the "workstation" class was very minimal. The low end "Xeon" processors had everything the consumer Haswell models had, including single core performance and relative price equivalence, but you got 4 memory channels, the ability to drive massive amounts of memory, ECC support, and 40 PCIe lanes from the CPU.

Nowadays if you want to buy HEDT you either get a very fast version of the consumer part, which is still extremely lane limited (limiting the usefulness of features like bifurcation), still has 2 memory channels, has the same memory capacity limits as that 10 year old X99 platform, and may or may not have usable ECC support (depending on the particular board and/or platform chipset options for that year), OR you can spend way more to get a CPU which performs worse unless you need 32+ cores (with the even higher cost associated with that), but at least the platform has all of the original HEDT capability.

The 1st generation of Threadripper was a bit like that too, albeit AMD's CPUs weren't the top performers of the day when it came out, and prices have since gone up accordingly.

bdhess 4 days ago

More config options means more to test and support, so more cost. The manufacturers who are buying lower end motherboards/chipsets at scale don’t need the feature. And virtually no consumer installs a PCI-E device, ever.

So basically, it’s an argument to make all computers more expensive, in order to subsidize hobbyists.

  • hakfoo 4 days ago

    I'm sort of amazed these days that we still have commercial BIOSes (well, UEFIs)

    They're not a selling point, even for the enthusiasts who buy individual motherboards.

    You'd think that by now the chipset vendors would provide a reference Coreboot image for each new platform (which, since it would be more or less for debugging purposes, would have MAX OPTIONS), and then the motherboard manufacturers would do the bare minimum to cut off any features not relevant to their boards, or swap in modules for whichever small technical deltas-- different audio chipsets or clock generators-- they actually make underneath the garish silkscreen and RGB strips.

    • jeroenhd 4 days ago

      Motherboard manufacturers would much rather pay AMI or Insyde a not-insignificant amount of money to make all BIOS development go away.

      They could hire people to do the work for them, but that'd require finding and vetting people with the necessary skills, and they don't have the knowledge of how to do that. As long as BIOS manufacturers can sell their services for less than it costs to set up an extra firmware department, they'll do good business.

    • winocm 4 days ago

      Honestly, if someone made a reference platform setup utility (and BDS initialization screen) for EFI systems with a familiar-ish UI and got it into upstream git, then I think a lot of the value add of commercial firmware would probably dissipate.

  • crote 4 days ago

    The problem with this argument is that 1) bifurcation often still isn't available on enthusiast-level motherboards which are filled with features catering to hobbyists, and 2) all motherboards are using variations on the same handful of BIOS firmware images.

    Same-slot bifurcation controlled by the BIOS requires basically zero additional testing. If it works on one board, it's pretty much guaranteed to work on another. After all, it's nothing more than a feature flag toggle.

    They have already invested the money into developing the software feature, because some boards do have bifurcation. So why isn't it available on all enthusiast-level boards? Or even worse, why are they spending extra time and energy on disabling it on cheap motherboards?
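
    To make that concrete, here's a toy sketch (illustrative Python, not real firmware code; the names are made up) of why it's "just a flag": the root port already supports a fixed set of lane layouts, and the BIOS merely selects one before link training.

      import enum

      class BifurcationMode(enum.Enum):
          # Fixed lane layouts a typical x16 root port already supports.
          X16 = (16,)
          X8X8 = (8, 8)
          X8X4X4 = (8, 4, 4)
          X4X4X4X4 = (4, 4, 4, 4)

      def configure_slot(mode: BifurcationMode) -> None:
          # Real firmware would program this into the root port before
          # link training; here we just print the resulting links.
          for i, width in enumerate(mode.value):
              print(f"link {i}: x{width}")

      configure_slot(BifurcationMode.X4X4X4X4)  # the 4x NVMe riser case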

  • reginald78 4 days ago

    If the board has PCI-E slots, it should support their features. If customers never used them they would have been dropped for cost reasons and no one would care. And indeed there are plenty of mini PCs and laptops where that is the case. Presumably people not buying those have their reasons.

  • zamadatix 4 days ago

    If motherboard vendors are doing this in the name of only exposing settings they've properly QA'd and can guarantee to work 100% of the time, then half of the settings on most boards are in need of disabling as well.

  • JackSlateur 3 days ago

    GPUs and NVMe drives are PCIe devices, and both are widespread, so I do not get your point

devwastaken 4 days ago

Looking at ASUS's 4x NVMe bifurcation card, you cannot use an iGPU-enabled processor and maintain 4x4 on any CPU. I don't want to fill a slot with a dedicated GPU.

For a NAS I don't need 4 in a pool, just 2 per slot works fine with HDD backups.

  • crote 4 days ago

    I believe this has essentially been solved by the AM5 platform. The Ryzen 7000 IO die provides an x16 slot which can usually be configured as 4x4, and it has a tiny iGPU which should be plenty for occasional server use.

    The budget Ryzen 8000 CPUs are indeed quite limited, which is a shame if you wanted to go for an 8000G APU to use its powerful iGPU for transcoding.

  • 3np 3 days ago

    Hm, what do you mean? Unless it's an Intel thing:

    For AM4: the ASUS card (at least the one I'm looking at) is PCIe4 4x4, while the APUs only expose PCIe3. It should still work, just at PCIe3 speeds, on X570. I've been running PCIe3 4x4 cards from other makers on that chipset and did not notice any issues.

    For AM5 I don't see why not?

    I have not tried but something like this might be what you are looking for? https://aliexpress.com/item/1005005311953959.html

    There are bifurcation cards from other makers with other configurations also exposing two M.2 slots, if you look around. E.g. https://www.10gtek.com/nvmessdadapterforu.2ssd

    • 3np 3 days ago

      Follow-up: just learned that so far AM5 is actually a regression compared to AM4[0]. The best they can give is exactly 16 usable lanes (4 of the total 20 are reserved for the chipset). Zen4c models only give you 10. Between this and Pluton, I'm quickly losing interest in AM5...

      [0]: OK, total bandwidth is still obviously up because we have gained a PCIe generation, but I'd still have expected the available lane count not to drop.

      • wtallis 3 days ago

        AM5 is not a regression from AM4; it adds four more PCIe lanes. The laptop chips repackaged for the desktop socket don't have as many lanes, because laptop chips don't need that much IO.

        • 3np 15 hours ago

          I'm talking about the AM5 desktop APU options (including announced PRO APUs), not the platform itself or laptop SKUs.

nubinetwork 4 days ago

Most CPUs don't have the lanes for this anyways... after you install a video card and an NVMe drive, you're lucky if you have 8 lanes to spare, let alone 16...
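
Back-of-the-envelope, assuming a typical 24-lane consumer CPU (exact counts vary by platform):

  # Rough lane budget for an assumed 24-lane consumer CPU.
  total = 24
  gpu = 16     # x16 graphics slot
  nvme = 4     # one CPU-attached M.2 drive
  chipset = 4  # downlink to the chipset
  print(f"lanes to spare: {total - gpu - nvme - chipset}")  # 0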

  • nisegami 4 days ago

    Isn't that exactly why bifurcation is desirable?

    • freeone3000 4 days ago

      Bifurcation lets you allocate the lanes in more configurations, but doesn't actually give you more lanes or allow you to dynamically allocate a lane. Unless you're connecting more devices than are planned for by the CPU and chipset, you're not going to see an advantage.

      • atmanactive 4 days ago

        The problem is the physical slot waste. If you have an x16 slot and you put an x16 GPU card into it (like the majority of people do), then there is no waste whatsoever. But if you put in an x8 GPU or some other x4 or x1 card, then a lot of PCIe lanes are wasted, simply because you can't physically access them anymore. When your motherboard can bifurcate, for example to x8+x8, a £10 PCIe dongle can split that one x16 PCIe slot into two x8 PCIe slots.
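
        Rough arithmetic on what that waste looks like (illustrative only):

          SLOT_WIDTH = 16  # physical x16 slot

          # Lanes stranded behind a narrower card, absent bifurcation.
          for card_width in (16, 8, 4, 1):
              wasted = SLOT_WIDTH - card_width
              print(f"x{card_width} card: {wasted} lanes physically unreachable")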

        • 3np 3 days ago

          And if you ever have 1-4 spare slots, you could always fit another SSD or some USB ports in there. Also keep in mind that PCIe is packet-switched, so you can actually have a good time connecting more lanes than the CPU is set up for, as long as you're not going full throttle on all cylinders and the chipset can cope :)

      • myself248 4 days ago

        Right, but if there's 8 lanes to spare, and I could use 4 for a 10GbE NIC and 4 for another NVMe drive, that's better than wasting all 8 on one of those functions and not being able to install the other at all.

        • vardump 3 days ago

          Isn't 4 lanes for a 10 GbE NIC complete overkill? Two lanes should be plenty. One would do, assuming your combined full duplex bandwidth doesn't exceed 16 Gbps (minus overhead).

          For PCIe4, 4 lanes corresponds to 64 Gbps of bandwidth.
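
          As a rough check (assumed round numbers for post-encoding throughput; PCIe lanes are full duplex, so per-direction figures are what matter):

            import math

            # Approximate usable throughput per lane, per direction, in Gbps.
            PER_LANE_GBPS = {3: 8, 4: 16, 5: 32}

            NIC_GBPS = 10  # a 10 GbE port needs ~10 Gbps each direction

            for gen, per_lane in sorted(PER_LANE_GBPS.items()):
                lanes = math.ceil(NIC_GBPS / per_lane)
                print(f"PCIe {gen}.0: x{lanes} covers a 10 GbE NIC")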

      • 3np 3 days ago

        It's fine to underprovision lanes. PCIe is packet-switched.

  • segmondy 2 days ago

    And yet, cheap $20 CPUs do have enough. My GPU rig runs dual E5-2680s, $20 each. For $40 I have a total of 88 lanes, on a $160 dual-socket server motherboard from AliExpress. I currently have 6 GPUs. The motherboard supports PCIe bifurcation, so I don't really need this article, just additional hardware to split the physical slots.