gerdesj a day ago

I understand why people use RH and Rocky and even Oracle: the rpm wranglers. However, it's not for me.

My earliest mainstream distro was RH when they did it just for fun (pre IBM) and then I slid slightly sideways towards Mandrake. I started off with Yggdrasil.

I have to do jobs involving RH and co and it's just a bit of a pain dealing with elderly stuff. Tomcat ... OK you can have one from 1863. There is a really good security backport effort but why on earth start off with a kernel that is using a walking stick.

Perhaps I am being unkind but for me the RH efforts are (probably) very stable and a bit old.

It's not the distro itself either. The users seem to have snags with updating it.

I (very generally) find that RH shops are the worst at [redacted]

  • thebeardisred a day ago

    Hi! I'm sorry this has been your experience. I'm one of the Red Hatters who's been working behind the scenes to get this over the finish line.

    I do say my genuine thanks for your earnest expression. The version and ABI guarantee is not for everyone. At the same time, some folks around these parts know that I'm "not an apologist for running an out of date kernel". I can assure you that everything shipped in the forthcoming P550 image is fresh: GCC 15, LLVM 19, etc. It's intended for development, to get more software over the finish line for RISC-V.

    Conflict of interest statement: I work for Red Hat (Formerly CoreOS), and I'm also the working group lead for the distro integration group within RISE (RISC-V Software Ecosystem).

    • jabl a day ago

      > The version and ABI guarantee is not for everyone.

      As an aside, that kABI guarantee only goes so far. I work in HPC/AI, and the out-of-tree drivers we use like MOFED and Lustre drivers would break with EVERY SINGLE RHEL minor update (like RHEL X.Y -> X.(Y+1) ). Using past form here because I haven't been using RHEL for this purpose for the past ~5 years, so maybe it has changed since although I doubt it.

      • kj4ips 16 hours ago

        In that boat now, weak-modules means you sometimes get lucky, and can reuse. However, since it's more effort to determine if a rebuild is needed than just slap the "build all the vendor kmods and slurm" button, we tend to build for each kernel. IIRC el8 added kernel symbol signature hashes as Provide/Requires, with automation to extract them at build time, so kmods got a lot easier to deal with.
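
        The symbol-hash mechanism looks roughly like this (a sketch; `kmod-my-driver` and the exact output are illustrative, and the hashes vary per build):

        ```shell
        # EL kernel packages export a checksum per whitelisted exported symbol
        # as RPM Provides entries of the form "kernel(symbol) = 0x<hash>":
        rpm -q --provides kernel-core | grep '^kernel('

        # A kmod package built against that kernel carries matching Requires,
        # so the package manager can tell whether an already-built kmod still
        # fits a newly installed kernel without recompiling it:
        rpm -q --requires kmod-my-driver | grep '^kernel('
        ```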

        • jabl 15 hours ago

          Sorry, I was being imprecise. Rebuilds per se are no problem, as both MOFED and Lustre provide sources, and DKMS nicely automates rebuilding when installing a new kernel image. The actual problem is that RHEL minor releases would also break kernel-internal APIs, and thus building the kernel modules would fail.
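
          For concreteness, the DKMS flow looks something like this (module name and version are placeholders; the source tree is expected under /usr/src/<module>-<version>):

          ```shell
          # Register, build and install an out-of-tree module against the running kernel:
          dkms add     -m lustre-client -v 2.15.5
          dkms build   -m lustre-client -v 2.15.5
          dkms install -m lustre-client -v 2.15.5
          ```

          DKMS re-runs the build automatically when a new kernel is installed; the failure mode described above is that `dkms build` errors out because a minor-release kernel changed an internal API.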

      • pabs3 21 hours ago

        Is anyone trying to get those drivers upstreamed?

        • jabl 18 hours ago

          Yes, and yes.

          The kernel parts of MOFED are largely backports of the latest and greatest upstream kernel drivers to the various distro kernels their customers actually run. (The non-kernel parts of MOFED are mostly open source but do contain some proprietary special sauce on top; like, IIRC, SHARP support isn't available in FOSS.) The HPC community does tend to want to use the latest RDMA drivers as those are critical for at-scale performance.

          For Lustre, the client driver was upstreamed into staging, where it sat AFAIU largely unused for a few years until it was ripped out again. The problem was that Lustre developers didn't adopt an upstream-first development approach, and thus the in-kernel driver was basically a throw over the fence fork that nobody cared about. I think there is an effort to try again and hopefully adopt an upstream-first approach, remains to be seen whether it'll succeed.

          • pabs3 18 hours ago

            For MOFED, why not just wholesale use a newer Linux kernel version?

            • jabl 15 hours ago

              Perhaps the cure is worse than the disease? There are several reasons to stay with a distro kernel:

              - Lustre releases target distro kernels, upstream would likely break.

              - Distro stays on top of CVE's etc. and provide updates when needed.

              - HW likely certified for a few supported distros only, use anything else and you're on your own.

        • globalc 19 hours ago

          That, and if not possible one can try to get the used kABI symbols graylisted at Red Hat, to get informed when they change.

        • aragilar 16 hours ago

          Lustre got dropped from the kernel, so it seems unlikely?

  • bigstrat2003 a day ago

    > Tomcat ... OK you can have one from 1863. There is a really good security back port effort but why on earth start off with a kernel that is using a walking stick.

    Because old software is battle-tested and reliable. Moreover, upgrading software is ever a pain so it's best to minimize how often you have to do it. With a support policy of 10 years, you just can't beat RHEL (and derivatives) for stability.

    • danieldk 20 hours ago

      Having had to use those kinds of machines often as a user, it is a total pain. For some reason, these enterprise distributions end up being used a lot on scientific and machine learning clusters. You have to deal with 5-10 year old bugs that are solved in every other distribution already and you have to jump through hoops to make modern software run.

      For me it always felt like the system administrators externalizing the cost on the users and developers (which are the same in many cases).

      Despite my dislike of enterprise Linux, Red Hat is doing a lot of awesome work all around the stack. IMO Fedora and the immutable distros are the real showcase of all the things they do.

      • znpy 20 hours ago

        You can run whatever you want in containers, and you don't even need root: Red Hat's podman can launch containers without root privileges.
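
        For example, a current Tomcat as a plain unprivileged user (the image tag is just an illustration):

        ```shell
        # No root, no setuid daemon; rootless podman sets up the user namespace itself
        podman run --rm -p 8080:8080 docker.io/library/tomcat:10.1
        ```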

        > Despite my dislike of enterprise Linux, Red Hat is doing a lot of awesome work all around the stack. IMO Fedora and the immutable distros are the real showcase of all the things they do.

        Fedora today is what RHEL will be tomorrow. They quite literally freeze a Fedora release to use as a base for RHEL's next release. If you like Fedora today you're gonna like RHEL tomorrow.

        • tryauuum 8 hours ago

          it's still painful when you can't even use the OS-provided version of git and have to install a newer one with conda

          • carlhjerpe 5 hours ago

            If you can get Nix running on these ancient machines, it'll bring you all the up-to-date packages you want. You can create Nix profiles that you install in a pre-configured path, so you can use the packages in systemd units too if you fancy.

            It's really really great, even if you don't use or plan to use NixOS (Nix was born long before NixOS).
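
            A minimal sketch, assuming a recent Nix with flakes enabled:

            ```shell
            # Try a current git once, without touching the host's packages:
            nix shell nixpkgs#git --command git --version

            # Or keep it in a per-user profile so it stays on $PATH:
            nix profile install nixpkgs#git
            ```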

    • homebrewer 20 hours ago

      RHEL's kernel — the actual base of the operating system with the largest effect on stability — is not old. It might have a version number from the middle of the last century, but there are so many massive backports in there that a few years after release it gets closer to the latest mainline than to its original version. Don't read too much into the version number.

    • rubitxxx10 a day ago

      I’m old. I used one of the original boxed RH distros. It was cool then. That was almost 30 years ago.

      I know they give back to Linux, and I’m thankful for the enterprises that pay for it because of that.

      It’s not a bad company, though it’s strange that you could be a great developer and lose your position there if your project gets cut, unless another team picks you up, from what I hear.

      But when Linus created Linux, he didn’t do it for money, and RH just seems corporate to me like the Microsoft of Linux, way back before Microsoft had their own Linux. I want my Linux free-range not cooped.

      They don’t do anything wrong. They just don’t give the vibe. Anyone asking for money for it doesn’t “get it” to me.

      • lttlrck 6 hours ago

        Redhat made Linux palatable for enterprise though. Without enterprise adoption where would Linux be?

      • pjmlp 18 hours ago

        Except Linux only took off thanks to those that didn't want to pay for UNIX, and the UNIX vendors that wanted to cut down R&D costs from their own in-house UNIX clones, and were uncertain if BSD was still safe to use with the ongoing AT&T lawsuit.

        • LeFantome 4 hours ago

          It is not that they did not want to pay for UNIX. After all, they pay for RHEL.

          They did not want to pay for big iron for sure, preferring commodity hardware. Even then though, many Linux boxes can get pretty expensive.

          I think it is more about openness and control than it is about cost. Linux brings flexibility and freedom.

          So does BSD of course. The timing of that lawsuit changed the world.

        • inejge 15 hours ago

          Re the last part: USL vs BSDi was filed in 1992 and settled in 1994, long before any sizeable vendor paid attention to Linux. (Version 1.0 of the Linux kernel was released at about the same time that lawsuit was settled.) So you shouldn't use that argument as part of your rationale.

          • LeFantome 3 hours ago

            You do not believe that what happened from 1991 to 1995 explains anything about how we got here?

            Red Hat was founded in 1993. When do you think they got the idea? When do you think companies like Red Hat decided to bet on Linux instead of BSD? Debian was founded in 1993 as well. When was that lawsuit settled again?

            An awful lot of the Linux momentum that carries us to this very day appeared after the BSD lawsuit was filed and before it was settled.

            What about the other “big and professional” competitor to Linux?

            GNU HURD was started in 1990. The original plan was to base it off the BSD kernel. The Linux kernel appeared in 1991. BSD fell under legal threat in 1992. Debian appeared in 1993. RMS lost interest in HURD. None of these dates had much impact you don’t think?

          • pjmlp 15 hours ago

            I should, because perceptions take a very long time to change.

            If you ask a random dev on the street about .NET, there is a high probability they will answer that it is Windows only and requires Visual Studio.

      • danieldk 20 hours ago

        > I’m old. I used one of the original boxed RH distros. It was cool then. That was almost 30 years ago.

        Does anyone remember glint (graphical UI for RPM) that was part of Red Hat? Must have been Red Hat 4.x or thereabout.

        • LeFantome 3 hours ago

          Yes indeed. How about AwesomeWM? Not the one that exists now. The one from Red Hat 4.x or so.

      • znpy 20 hours ago

        > But when Linus created Linux, he didn’t do it for money, and RH just seems corporate to me like the Microsoft of Linux, way back before Microsoft had their own Linux. I want my Linux free-range not cooped.

        You seem to forget that Red Hat has funded a lot of the development of the Linux ecosystem. There would be essentially no modern linux environment without Red Hat.

        • carlhjerpe 5 hours ago

          I'm thankful to RedHat, every other "cornerstone project" seems to be funded by them. The one that crossed my mind now is the PipeWire audio server, it just solved Linux audio for realsies this time.

          I wouldn't use their products for much though, too enterprisey. Their projects are great and I'm happy someone else buys their packages.

    • tanelpoder a day ago

      Yep, when you have thousands of different production apps, installed and running directly on Linux - not talking about containers or microservices here - you’ll have very little appetite to upgrade all of them to the latest and shiniest technologies every couple of years. Stability & compatibility with existing skillsets is more important.

  • copperx a day ago

    I have to confess that my early experiences with RedHat as a teenager, dealing with the nightmarish RPM dependencies, soured me on the distribution. I went to Debian and then its many descendants and never looked back; APT seemed magical in comparison.

    I assume they have a package manager that resolves dependencies well now? Is that what an RPM wrangler is?

    • homebrewer 20 hours ago

      This is a very outdated view. dnf runs circles around apt. Try it out, or at least find man pages on the ole 'net and see what it can do.

      Probably the thing I like the most is transactional installation (or upgrades/downgrades/removals) of packages with proper structured history of all package operations (not just a bunch of log records which you have to parse yourself), and the ability to revert any of those transactions with a single command.
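
      For instance (the transaction ID is whatever `dnf history list` shows on your machine):

      ```shell
      dnf history list        # numbered log of every install/upgrade/removal transaction
      dnf history info 42     # exactly which packages transaction 42 touched
      dnf history undo 42     # revert just that transaction
      ```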

      • aragilar 16 hours ago

        Possibly I'm just more used to apt (though fedora was my first linux), but I've found apt has better interfaces for what I'm trying to do (e.g. querying the state of the system and ensuring consistency across machines), and I've not found an equivalent to aptitude for dnf.

        Side-note, the other difference I've noticed is Debian (and presumably its derivatives) has better defaults (and debconf) for packages, so whereas stock config would work on Debian, on Rocky I have to change config files, install missing packages etc.

      • anticodon 20 hours ago

        I had the same experience as the OP at the beginning of the century. I built a lot of RPM packages back then and it was clear that the system of dependencies built into the RPM format itself (not apt or dnf; this is the dpkg level in Debian terms) was poorly thought out and clearly insufficient for any complex system.

        I've also migrated to Debian and it felt like a huge step forward.

        I'm on Arch now, BTW.

        • LeFantome 3 hours ago

          The equivalent of RPM on Debian is the .deb package format. The equivalent of apt is dnf (or yum before it, or up2date before that).

          Red Hat is just old enough to exist before package managers existed on Linux. It was not there at first on Debian either.

          Slackware still has no package manager, really.

    • bigfatkitten a day ago

      rpm dependencies have been a solved problem with yum (and now dnf) for about two decades.

      • speakspokespok 20 hours ago

        Yum was borrowed from yellow dog Linux.

        • danieldk 20 hours ago

          To be pedantic, yum was not from Yellow Dog, it is Yellow dog Updater Modified after all. It was a rewrite of the Yellow Dog Updater by people at Duke University. (Yellow Dog Linux was based on Red Hat.)

          There was a lot of competition around package managers back then. For RPM, there were also urpmi, apt-rpm, etc.

    • speakspokespok a day ago

      First impressions really matter. This is also why I went Debian. You shouldn't be getting marked down for saying it.

      Many of us were running on 28.8 dial-up. Internet search was not even close to a solved problem. Compiling a new kernel was an overnight or even weekend process. Finding and manually downloading rpm dependencies was slow and hard. You didn't download an ISO; you bought a CD, or soon a DVD, that you booted off of.

      Compare that to Debian's apt-get or Suse's yast/yast2 of the time, both just handled all that for you.

      Debian and Suse were fun and fit perfectly into the Web 1.0 world; RedHat was corporate. systemd was pushed by RedHat.

      • danieldk 20 hours ago

        > Compiling a new kernel was an overnight or even weekend process

        One friend and I had a competition who could make the smallest kernel configuration still functional on their hardware. I remember that at some point we could build it in ten minutes or so. This was somewhere in the nineties, I was envious of his DX2-50.

        > Compare that to Debian's apt-get or Suse's yast/yast2 of the time, both just handled all that for you.

        One of the really huge benefits of S.u.S.E. in Europe in the nineties was that you could buy it in nearly every book shop and it came with an installation/administration book and multiple CD-ROMs with pretty much all packages. Since many people did not have internet at all or at most dial-up, it gave you everything to have a complete system.

        • anthk 14 hours ago

          Yes, I remember that too. I bought a 3-DVD set of Debian Sarge: 2 DVDs with everything, and the 3rd was the source packages.

      • LeFantome 3 hours ago

        You are mixing a lot of history there.

        Red Hat had packages but not package management at first. However, the same is true of Debian. It depends when you used them.

        Red Hat Linux branched into RHEL (corporate) and Fedora (community).

        SuSE went down a similar road to Red Hat and has both OpenSUSE and SLE these days. Fedora is less corporate than OpenSUSE is.

        Debian is still Debian but a bit more pragmatic and a bit less GNU these days (eg. non-free firmware).

  • LeFantome 4 hours ago

    Things like Flatpak and Distrobox are game changers for these “stable” distros like RHEL or even Debian.

    Core system stays static and you do not get blindsided by changes in packages you do not care about. At the same time, you can easily install very up-to-date apps and dependencies if you need them.
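
    A quick sketch of both (the app ID and image tag are just examples):

    ```shell
    # Up-to-date desktop app on a frozen base:
    flatpak install flathub org.mozilla.firefox

    # Or an entire mutable userland from another distro, alongside the host:
    distrobox create --name dev --image fedora:41
    distrobox enter dev
    ```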

  • worthless-trash a day ago

    Disclaimer: I'm very involved in the kernel part of this for $company.

    The RHEL kernels themselves do see many improvements over time; the code that you'll see when the product goes end of life is considerably updated compared to the original version string that you see in the package name / uname -a. There are customer and partner feature requests, CVE fixes and general bug fixes that go in almost every day.

    The first problem of 'running old kernels' is exacerbated by the kernel version string not matching code reality.

    The second problem is that many companies don't start moving to newer RHEL releases when they're out; they often stick to current -1, which is a bit of a problem because by the time they roll out a release, n-1 is likely entering its first stage of "maintenance" so fixes are more difficult to include. If you can think of a solution to this, I'm all ears.

    The original reason behind not continually shipping newer kernel versions is to ensure stability by providing a stable whitelisted kABI that third party vendors can build on top of. This is not something that upstream and many OS vendors support, but with the "promise" of not breaking kabi, updates should happen smoothly without third party needing to update their drivers.

    The kabi maintenance happens behind the scenes while ensuring that CVE fixes and new features are delivered during the relevant stage of the product lifecycle.

    The kernel version is usually very close to the latest upstream release at the time; in the case of RHEL 10 it's 6.13, and already with zero-day fixes it has parts of newer code backported, tested, etc. in the first errata release.

    The security landscape is changing; maybe sometime the Red Hat business unit may wake up and decide to ship a rolling, better-tested kernel (Red Hat DOES have an internal/tested https://cki-project.gitlab.io/kernel-ark/ which is functionally this). Shipping this has the downside that third party vendors would not have the same kABI stability guarantees that RHEL currently provides; it would muddy the waters of RHEL's value and confuse people on which kernel they should be running.

    I believe there are two customer types, ones who would love to see this, and get the newest features for their full lifecycle, and ones who would hate it, because the churn and change would be too much introducing risk and problems for them down the line.

    It's hard, and likely impossible, to keep everyone happy.

    • jabl a day ago

      As I mentioned in another comment on this thread:

      > As an aside, that kABI guarantee only goes so far. I work in HPC/AI, and the out-of-tree drivers we use like MOFED and Lustre drivers would break with EVERY SINGLE RHEL minor update (like RHEL X.Y -> X.(Y+1) ). Using past form here because I haven't been using RHEL for this purpose for the past ~5 years, so maybe it has changed since although I doubt it.

      I'm not sure what the underlying problem here is, is the kABI guarantee worthless generally or is it just that MOFED and Lustre drivers need to use features not covered by some kind of "kABI stability guarantee"?

      • lustre-fan 3 hours ago

        I work on Lustre development. Lustre uses a lot of kernel symbols not covered by the kABI stability guarantee and we already need to maintain configure checks for all of the other kernels (SuSe, Ubuntu, mainline, etc) that don't offer kABI anyway. So in my opinion, it's not worth the effort to adhere to kABI just for RHEL. Especially when RHEL derivatives might not offer the same guarantees. DKMS works well enough, especially for something open source like Lustre.

        Honestly, I'm not sure who kABI is even designed for. None of the drivers I've interacted with in the HPC space (NVIDIA, Lustre, vendor network drivers, etc.) seem to adhere to kABI. DKMS is far more standard. I'd be interested to know which vendors are making heavy use of it.

    • pabs3 21 hours ago

      How much of the RHEL kernels is stuff that isn't in Linux mainline or LTS?

      • Conan_Kudo 18 hours ago

        Pretty much all of it is in mainline modulo the secure boot lockdown patches, which are downstream for all distributions because Linus fundamentally believes those patches do not make sense.

        Linux longterm often is missing stuff the RHEL kernel has, because RHEL backports subsystems from mainline with features and hardware support.

    • znpy 20 hours ago

      What if, more than a rolling kernel, we get a new kernel every two years or so?

      Or maybe one in the middle of the (expected) lifetime of the major release ?

      Just thinking out loud, but I acknowledge that maintaining a kernel version is no small task (probably takes a lot of engineering time)

  • dgfitz a day ago

    I’d rather use redhat than Ubuntu. I was handed a machine the other week with Ubuntu 23.10 on it, OS supplied from a vendor with extensive customization. Apt was dead. Fuck that. At least RH doesn’t kill their repos.

    • gerdesj a day ago

      I've got Ubuntu 22.04 machines lying around that still update because they are LTS. Ubuntu has a well-publicised policy for releases, which you will obviously have read.

      Try do-release-upgrade.

      You also mention "OS supplied from a vendor with extensive customization. Apt was dead."

      How on earth is that Ubuntu's problem?

      • hshdhdhj4444 a day ago

        Isn’t Ubuntu basically killing apt?

        My Ubuntu became unusable because it kept insisting on installing a snap version of Firefox breaking a whole bunch of workflows.

        I do want to try a RH based OS (maybe Fedora) so they don’t keep changing things on me, but just where I am in life right now I don’t have the time/energy to do so, so for now I’m relying on my Mac.

        Hopefully I can try a new Linux distro in a few months, because I can’t figure it out yet, but something about macOS simply doesn’t work for me from a getting work done perspective.

        • nine_k a day ago

          I've heard many good things about Pop OS. It's like Ubuntu done right, and it does have an apt package for Firefox.

          (I run Void myself, and stay merrily away from all these complications.)

        • mulmen a day ago

          I have been using Fedora Sway as my desktop operating system for a couple years now and I am very happy. It’s definitely worth a try. I have access to flatpak when I need it for apps like steam but the system is still managed by rpm/dnf. There’s of course some SELinux pain but that’s typically my fault for holding it wrong. Overall very impressed.

      • dgfitz a day ago

        I cannot update the OS per the contract.

        It’s Ubuntu’s problem because they decide they’re smarter than their users and nuke their repos.

        Fuck all of that.

        • Propelloni 17 hours ago

          I get your frustration but this is really a problem of the vendor. We had something similar about 3 years ago, where a vendor delivered a proprietary driver for a piece of hardware that only worked with a specific 2.6 Linux kernel version--making it at least six years old. Is this the Linux project's fault? I don't think so.

        • viraptor 21 hours ago

          The vendor should provide you with updates to the new version, or use an LTS. There's absolutely nothing bad here on Ubuntu's part.

          Your contract is with the vendor if you have one. Unless you have a contract with Canonical and then you can ask them for support.

        • aragilar 16 hours ago

          Oh, you got that bit of fun... That's not an apt problem, that's an Ubuntu problem (and it's a reason I encourage people to either stick to LTS if they must use Ubuntu, or just run Debian or another distro which doesn't block upgrades).

        • dismalaf a day ago

          It's well publicized that they don't maintain support for old, non-LTS distros. They literally delivered what they promised. Could have been avoided by using an LTS distro.

          Fedora does the same. No corporate vendor supports 6 month cycle distros for more than a year. RHEL releases come super slowly, for example.

          • dgfitz a day ago

            I didn’t have a say in the matter of OS choice, it doesn’t matter how well-publicized Ubuntu’s stance is, it’s wrong. I don’t care if it’s not an LTS, keep the fucking repos open and advertise you’re using an insecure OS. Let me, the user, make that choice. Don’t pretend I’m stupid and need some kind of benevolent dictator to make choices for me, or handicap me because they’re smarter than me. They’re not.

            • eddythompson80 21 hours ago

              That’s exactly how it works. If you want to use an unsupported, insecure OS, you just have to opt into it.

              You opt into it by changing your repositories to the https://old-releases.ubuntu.com archive mirror. You can install and use Ubuntu 6.10 if you want.
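
              Something like this, on a classic single-file sources.list (recent releases keep the sources under /etc/apt/sources.list.d/ instead; back the file up first):

              ```shell
              # Repoint archive and security mirrors at old-releases:
              sudo sed -i \
                -e 's|//[a-z.]*archive\.ubuntu\.com|//old-releases.ubuntu.com|g' \
                -e 's|//security\.ubuntu\.com|//old-releases.ubuntu.com|g' \
                /etc/apt/sources.list
              sudo apt-get update
              ```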

            • dismalaf a day ago

              "Keeping the repos open" has a cost on their part. Servers aren't free. If you think you're smart then mirror your own repos.

              • dgfitz a day ago

                Really? How much more can it cost to host their LTS and non LTS repos open at the same time?

                C’mon, that’s such a weak argument I think you know it.

                • dismalaf a day ago

                  If there's no cost in time, effort or equipment then mirror it yourself. It's easy, right?

                  Or just use an LTS distro like literally every single other organization that depends on Ubuntu for their business SMH. Like, it's absurd to even think about...

                  • dgfitz 7 hours ago

                    You ignored the premise. I can’t update. Shake your head; I didn’t make the call, Canonical did.

                    • dismalaf 6 hours ago

                      Anyhow, someone else showed they do indeed host old repos, just in a different place. Also, why can't you update? Who the hell made that contract? And is it the client that said you can't update, or Canonical? Because the latter seems sus...

                      • dgfitz 4 hours ago

                        I can see that you’re now realizing, I didn’t have a say in the constraints. I also hate said constraints. Hand I was dealt.

                        Ubuntu is a cancer on the gnu/linux/open source community.

        • unmole a day ago

          Sounds made up.

    • ndsipa_pomu 19 hours ago

      Well that's just plain incompetent on the part of your vendor.

      23.10 is not an LTS version and Ubuntu only provide updates for a short period of time (6 months or so after the next version is released), so the vendor should have upgraded it to 24.04 which IS an LTS version.

      It's like you're complaining to Microsoft about a vendor giving you an old XP machine and that you can't update it.

      • chithanh 13 hours ago

        I think the more apt (pun not intended) comparison would be to macOS? Trying to install macOS High Sierra from the Internet without hackery will lead to "the recovery server could not be contacted" error message, because certificates have expired. Like if your Mac came with High Sierra and you want to do a factory reset.

        Windows from that era still updates. Though up next will be expiration of Windows UEFI CA 2011 which will certainly lead to boot problems for some.

  • znpy 20 hours ago

    > I have to do jobs involving RH and co and its just a bit of a pain dealing with elderly stuff. Tomcat ... OK you can have one from 1863.

    It's 2025, you can run whatever version you need in containers.

publicmail a day ago

Maybe a dumb question but how do non x86 boards normally boot Linux images in a generic way? When I was in the embedded space, our boards all relied on very specific device tree blobs. Is the same strategy used for these or does it use ACPI or something?

  • Arnavion a day ago

    All RISC-V consumer boards running Linux also use DT. RISC-V is also working on getting ACPI but primarily for the sake of servers, just like with ARM where ACPI is primarily used for servers (ARM SBBR / ServerReady).

    ARM Windows laptops only use ACPI because Windows has no interest in DTs, but under Linux these devices are still booted using DT. I don't know for sure, but the usual reason is that these ACPI implementations are hacked up by the manufacturer to be good enough to work with Windows, so supporting them on Linux requires more effort than just writing up the DT.

    • ChocolateGod a day ago

      > so supporting them on Linux requires more effort than just writing up the DT.

        More effort than producing unique images for every board?

      • mappu 18 hours ago

        The DT should really be put in the firmware (e.g. u-boot), same as ACPI on x86 is in the firmware (the BIOS/EFI).

        Then you wouldn't need a unique kernel/OS image. For devices that have u-boot in ROM the DT is usually there (fdt).
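
        U-Boot can already pass along its own control DT, roughly like this at the console (device numbers and paths are illustrative; ${fdtcontroladdr} is where U-Boot keeps the DT it booted with):

        ```
        load mmc 0:1 ${kernel_addr_r} /boot/Image
        booti ${kernel_addr_r} - ${fdtcontroladdr}
        ```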

  • jabl a day ago

    The x86 platform uses a plethora of platform services under different names like UEFI/ACPI/PCI/(ISA plug-n-play back in the day)/APIC (programmable interrupt controller and evolved variants thereof)/etc. that allow the generic kernel to discover what's available when it boots and load the correct drivers.

    ARM servers do the same with SBSA (a spec that mandates things like UEFI, ACPI etc. support) etc. I think there's some effort in RISC-V land to do the same, also using UEFI and ACPI.

    • skywal_l 21 hours ago

      Maybe I'm wrong but isn't it what SBI[0] is for?

      [0] Supervisor Binary Interface

  • IshKebab 16 hours ago

    They basically don't at the moment. RISC-V is working on ACPI and "universal discovery" as a solution but it doesn't exist yet.

  • beeflet a day ago

    I think windows ARM laptops use UEFI?

    • Arnavion 20 hours ago

      publicmail was asking about ACPI vs DT, not UEFI. Using UEFI and ACPI/DT are orthogonal; DT-using devices can also boot from UEFI if the firmware provides it. See https://github.com/TravMurav/dtbloader for example.

      • thebeardisred 19 hours ago

        This is explicitly what we're doing in RHEL with the P550.

        We use u-boot and its EFI capabilities to init GRUB (instead of another instance of u-boot).

    • ChocolateGod a day ago

      They do. Windows Phone even used UEFI (not sure it was completely compliant) back in the day.

    • pantalaimon 15 hours ago

      Looks like they still require a custom device tree to boot Linux.

kwanbix 16 hours ago

How do they get access to the source code? I read some time ago that RH changed how they provide the source code and that it's now (almost) impossible to get?

  • rwmj 14 hours ago

    I don't know where you heard that? The source code for Red Hat's RISC-V developer preview will be released alongside the binaries, on 1st June. However almost all of it is already in CentOS Stream 10 and you can browse it here: https://gitlab.com/redhat/centos-stream There are a few patched packages (and quite a large kernel patch), which is what we'll be releasing into a separate git repo when the developer preview is actually released.
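    For fetching an individual package, a hedged sketch (the `rpms/<name>` path layout is an assumption from browsing the public GitLab group; `cs_clone_cmd` is a made-up helper that only prints the command, so no network access is needed):

    ```shell
    # Hypothetical helper: print the git clone command for a CentOS Stream
    # package, assuming the gitlab.com/redhat/centos-stream/rpms/<name> layout.
    cs_clone_cmd() {
        echo "git clone https://gitlab.com/redhat/centos-stream/rpms/$1.git"
    }

    cs_clone_cmd kernel   # prints the clone command for the kernel package
    ```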

    • kwanbix 14 hours ago

      I mean in general, you can read it here for example:

      https://www.theregister.com/2023/07/10/oracle_ibm_rhel_code/...

      • rwmj 12 hours ago

        Yeah I wouldn't believe what Oracle say, they're hardly a disinterested party here. You can go and grab the source for CS10 which is almost exactly RHEL 10 from the link I posted above, and RHEL 10 sources are distributed to our customers.

  • nhanlon 12 hours ago

    We've been building the RISC-V port from a combination of Fedora and CentOS Stream sources--the same as the core operating system--since early 2024.

    A lot of RISC-V support was already in F40 (which EL10 is cut from), so the rest was largely backporting and integrating into RHEL, which again, we've been tracking since CentOS Stream 10 was branched from Fedora last year.

arminiusreturns a day ago

I'm so looking forward to a RISC future!

  • agarren a day ago

    Ditto! I haven’t found any hardware that’s daily-driver ready, but I keep looking.

    https://store.deepcomputing.io/products/dc-roma-ai-pc-risc-v...

    I especially like the idea of getting a Framework version, in case I want to swap in a different mainboard. By their own admission, the RISC-V board is targeting developers and not ready for prime time. Also, coming from the US, not sure how the tariff thing will work out…

    • 0x000xca0xfe a day ago

      The RISC-V software ecosystem is really good already. It feels like everybody is just waiting for high-performance CPU cores now. Sadly, silicon cannot be built and released within seconds like software...

      Better to buy an SBC for now (I can recommend the OrangePi RV2 - it's fantastic!) and wait until actual desktop/laptop-class hardware is ready :)

      • tucnak 19 hours ago

        Or better, buy something like https://www.crowdsupply.com/sutajio-kosagi/precursor or some other FPGA-based platform to retain the programmable-logic capability; you never know whether you're going to need it, and should you need it after all, it helps knowing it's there.

        • rwmj 14 hours ago

          Love FPGAs, but they're not very practical if you need to run a non-toy Linux on RISC-V. The kind of FPGAs that you and I can afford to buy will typically top out at 100 MHz, and have other problems like limited RAM.

sylware 17 hours ago

I cannot wait for those ultra-performant rv64 micro-architectures manufactured on the latest silicon process. One less toxic IP lock, and much cleaner assembly.

mrbluecoat a day ago

Better title: Rocky Linux 10 Will Support Two RISC-V Boards

  • NewJazz a day ago

    For a distro, just building packages for an architecture is notable support-wise. Those with custom firmware and kernels can pair them with the Rocky 10 userspace.

    • nhanlon 12 hours ago

      Exactly! The AltArch SIG is where those customizations will come from, driven by community support.

  • rjsw a day ago

    They could easily support the Pine64 Star64 board as well, the VisionFive2 build of u-boot works on the Star64 too.

    • nhanlon 12 hours ago

      Yep, should work fine, just not stepping across the upstream (Fedora) support at the moment.

  • nine_k a day ago

    Even to support one board, they'd need the whole build/testing infrastructure for RISC-V. Adding more boards is likely going to be easy now, and any architecture-specific regressions will be easier to spot and fix in a timely manner.

    • nazunalika 11 hours ago

      For sure, we needed a build infrastructure for RISC-V. I started out with five VisionFive 2's in my lab, and they're still doing work as needed. Granted, those are quite slow and painful, because some builds take a long, long time on them (for example, GCC took 7 days at the beginning, but we have it down to about 5 days plus change now). Ever since we added SiFive P550's to the mix, it has been much faster for us to identify build issues and get them rectified. I still use my VF2's for the "tiny" builds.

      It's true that having a usable build root since late 2024 gives our AltArch group the opportunity to build different kernels to support other SBC's or boards, like they already do for ARM SBC's (Raspberry Pi, for example, since that support isn't native to the EL kernel). So while we support the VF2's and QEMU out of the box, that group will handle the additional kernels for more hardware support.

      I'm actually looking forward to seeing what other boards the AltArch group will happen to add support for.

  • rob_c a day ago

    Even better title: Rocky will take the RHEL work, rebrand it, sell the boards at a discount from China, claim a win, and claim they're being attacked by IBM.

    • felbane a day ago

      Man some of y'all really have beef with Rocky...

      • dismalaf a day ago

        Because their model is the absolute laziest possible one.

      • rob_c 11 hours ago

        Yes, because the idea of iterating and claiming ownership is dishonest and lazy at best.

    • nhanlon 13 hours ago

      We've actually been working with Fedora and RH on RISC-V for over a year now :)

      • rob_c 11 hours ago

        Still, past sins and all that. Not to mention the model, and the directions from those at the top.

        Great, I always applaud contributions and I want to encourage them. But please see the damage done by some quite senior people on the project, and please distance yourselves from them.