Red Hat announced RISC-V yesterday with RHEL 10. So this seems rather expected.
https://www.redhat.com/en/blog/red-hat-partners-with-sifive-...
I understand why people use RH and Rocky and even Oracle: the RPM wranglers. However, it's not for me.
My earliest mainstream distro was RH back when they did it just for fun (pre-IBM), and then I slid slightly sideways towards Mandrake. I started off with Yggdrasil.
I have to do jobs involving RH and co, and it's just a bit of a pain dealing with elderly stuff. Tomcat ... OK, you can have one from 1863. There is a really good security backport effort, but why on earth start off with a kernel that is using a walking stick?
Perhaps I am being unkind, but for me the RH efforts are (probably) very stable and a bit old.
It's not the distro itself either. The users seem to have snags with updating it.
I (very generally) find that RH shops are the worst at [redacted]
Hi! I'm sorry this has been your experience. I'm one of the Red Hatters who's been working behind the scenes to get this over the finish line.
Genuine thanks for your earnest feedback. The version and ABI guarantee is not for everyone. At the same time, some folks around these parts know that I'm "not an apologist for running an out-of-date kernel". I can assure you that everything shipped in the forthcoming P550 image is fresh: GCC 15, LLVM 19, etc. It's intended for development, to get more software over the finish line for RISC-V.
Conflict of interest statement: I work for Red Hat (formerly CoreOS), and I'm also the working group lead for the distro integration group within RISE (RISC-V Software Ecosystem).
> The version and ABI guarantee is not for everyone.
As an aside, that kABI guarantee only goes so far. I work in HPC/AI, and the out-of-tree drivers we use, like the MOFED and Lustre drivers, would break with EVERY SINGLE RHEL minor update (RHEL X.Y -> X.(Y+1)). I'm using the past tense here because I haven't been using RHEL for this purpose for the past ~5 years, so maybe it has changed since, although I doubt it.
I'm in that boat now. weak-modules means you sometimes get lucky and can reuse a module. However, since it's more effort to determine whether a rebuild is needed than to just slap the "build all the vendor kmods and slurm" button, we tend to build for each kernel. IIRC EL8 added kernel symbol signature hashes as Provides/Requires, with automation to extract them at build time, so kmods got a lot easier to deal with.
Sorry, I was being imprecise. Rebuilds per se are no problem, as both MOFED and Lustre provide sources, and DKMS nicely automates rebuilding when installing a new kernel image. The actual problem is that RHEL minor releases would also break kernel-internal APIs, and thus building the kernel modules would fail.
Is anyone trying to get those drivers upstreamed?
Yes, and yes.
The kernel parts of MOFED are largely backports of the latest and greatest upstream kernel drivers to the various distro kernels their customers actually run. (The non-kernel parts of MOFED are mostly open source, but do contain some proprietary special sauce on top; IIRC SHARP support isn't available in FOSS.) The HPC community does tend to want the latest RDMA drivers, as those are critical for at-scale performance.
For Lustre, the client driver was upstreamed into staging, where it sat (AFAIU) largely unused for a few years until it was ripped out again. The problem was that the Lustre developers didn't adopt an upstream-first development approach, so the in-kernel driver was basically a thrown-over-the-fence fork that nobody cared about. I think there is an effort to try again and hopefully adopt an upstream-first approach; it remains to be seen whether it'll succeed.
For MOFED, why not just wholesale use a newer Linux kernel version?
Perhaps the cure is worse than the disease? There are several reasons to stay with a distro kernel:
- Lustre releases target distro kernels; upstream would likely break.
- The distro stays on top of CVEs etc. and provides updates when needed.
- HW is likely certified for only a few supported distros; use anything else and you're on your own.
That, and if that's not possible, one can try to get the kABI symbols in use graylisted at Red Hat, to get informed when they change.
Lustre got dropped from the kernel, so it seems unlikely?
> Tomcat ... OK, you can have one from 1863. There is a really good security backport effort, but why on earth start off with a kernel that is using a walking stick?
Because old software is battle-tested and reliable. Moreover, upgrading software is always a pain, so it's best to minimize how often you have to do it. With a support policy of 10 years, you just can't beat RHEL (and derivatives) for stability.
Having had to use those kinds of machines often as a user, I can say it is a total pain. For some reason, these enterprise distributions end up being used a lot on scientific and machine-learning clusters. You have to deal with 5-10 year old bugs that are solved in every other distribution already, and you have to jump through hoops to make modern software run.
For me it always felt like the system administrators externalizing the cost on the users and developers (which are the same in many cases).
Despite my dislike of enterprise Linux, Red Hat is doing a lot of awesome work all around the stack. IMO Fedora and the immutable distros are the real showcase of all the things they do.
You can run whatever you want in containers. You don't even need root permissions: Red Hat's podman can launch containers without root privileges.
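For example (a minimal sketch, assuming rootless podman is already set up, i.e. subuid/subgid ranges configured; the image tag is just an illustration):

    # as a regular user, no sudo needed
    podman run --rm -p 8080:8080 docker.io/library/tomcat:11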
> Despite my dislike of enterprise Linux, Red Hat is doing a lot of awesome work all around the stack. IMO Fedora and the immutable distros are the real showcase of all the things they do.
Fedora today is what RHEL will be tomorrow. They quite literally freeze a Fedora release to use as a base for RHEL's next release. If you like Fedora today, you're gonna like RHEL tomorrow.
It's still painful when you can't even use the OS-provided version of git and have to install a newer one with conda.
If you can get Nix running on these ancient machines, it'll bring you all the up-to-date packages you want. You can create Nix profiles installed at a pre-configured path, so you can use the packages from systemd units too if you fancy.
It's really, really great, even if you don't use or plan to use NixOS (Nix was born long before NixOS).
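A minimal sketch of that profile trick (the profile path and package are illustrative):

    # single-user install, no root daemon needed
    sh <(curl -L https://nixos.org/nix/install) --no-daemon
    # install a current git into a profile at a fixed, known path
    nix-env --profile /opt/tools/dev -iA nixpkgs.git
    # systemd units (or anything else) can then reference the stable path
    /opt/tools/dev/bin/git --version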
RHEL's kernel — the actual base of the operating system, with the largest effect on stability — is not that old. It might have a version number from the middle of the last century, but there are so many massive backports in there that a few years after release it is closer to the latest mainline than to its original version. Don't read too much into the version number.
Yup, one example I noticed recently: Red Hat backported the eBPF subsystem from Linux 6.8 to their 5.14 kernel (in RHEL 9.5):
https://docs.redhat.com/en/documentation/red_hat_enterprise_...
I’m old. I used one of the original boxed RH distros. It was cool then. That was almost 30 years ago.
I know they give back to Linux, and I’m thankful for the enterprises that pay for it because of that.
It's not a bad company, though it's strange that, from what I hear, you could be a great developer and still lose your position there if your project gets cut, unless another team picks you up.
But when Linus created Linux, he didn’t do it for money, and RH just seems corporate to me like the Microsoft of Linux, way back before Microsoft had their own Linux. I want my Linux free-range not cooped.
They don't do anything wrong. They just don't give off the vibe. Anyone asking for money for it doesn't "get it", to me.
Red Hat made Linux palatable for the enterprise, though. Without enterprise adoption, where would Linux be?
Except Linux only took off thanks to those that didn't want to pay for UNIX, and the UNIX vendors that wanted to cut the R&D costs of their own in-house UNIX clones and were uncertain whether BSD was still safe to use, given the ongoing AT&T lawsuit.
It is not that they did not want to pay for UNIX. After all, they pay for RHEL.
They did not want to pay for big iron for sure, preferring commodity hardware. Even then though, many Linux boxes can get pretty expensive.
I think it is more about openness and control than it is about cost. Linux brings flexibility and freedom.
So does BSD of course. The timing of that lawsuit changed the world.
Re the last part: USL vs BSDi was filed in 1992 and settled in 1994, long before any sizeable vendor paid attention to Linux. (Version 1.0 of the Linux kernel was released at about the same time that lawsuit was settled.) So you shouldn't use that argument as part of your rationale.
You do not believe that what happened from 1991 to 1995 explains anything about how we got here?
Red Hat was founded in 1993. When do you think they got the idea? When do you think companies like Red Hat decided to bet on Linux instead of BSD? Debian was founded in 1993 as well. When was that lawsuit settled again?
An awful lot of the Linux momentum that carries us to this very day appeared after the BSD lawsuit was filed and before it was settled.
What about the other “big and professional” competitor to Linux?
GNU HURD was started in 1990. The original plan was to base it on the BSD kernel. The Linux kernel appeared in 1991. BSD fell under legal threat in 1992. Debian appeared in 1993. RMS lost interest in HURD. You don't think any of these dates had much impact?
I should, because perceptions take a very long time to change.
If you ask a random dev on the street about .NET, there is a high probability they will answer that it is Windows-only and requires Visual Studio.
I’m old. I used one of the original boxed RH distros. It was cool then. That was almost 30 years ago.
Does anyone remember glint (graphical UI for RPM) that was part of Red Hat? Must have been Red Hat 4.x or thereabout.
Yes indeed. How about AwesomeWM? Not the one that exists now. The one from Red Hat 4.x or so.
> But when Linus created Linux, he didn’t do it for money, and RH just seems corporate to me like the Microsoft of Linux, way back before Microsoft had their own Linux. I want my Linux free-range not cooped.
You seem to forget that Red Hat has funded a lot of the development of the Linux ecosystem. There would be essentially no modern Linux environment without Red Hat.
I'm thankful to Red Hat; every other "cornerstone project" seems to be funded by them. The one that crossed my mind just now is the PipeWire audio server, which solved Linux audio for realsies this time.
I wouldn't use their products for much though, too enterprisey. Their projects are great and I'm happy someone else buys their packages.
Yep, when you have thousands of different production apps installed and running directly on Linux - not talking about containers or microservices here - you'll have very little appetite to upgrade all of them to the latest and shiniest technologies every couple of years. Stability and compatibility with existing skillsets are more important.
I have to confess that my early experiences with Red Hat as a teenager, dealing with the nightmarish RPM dependencies, soured me on the distribution. I went to Debian and then its many descendants and never looked back; APT seemed magical in comparison.
I assume they have a package manager that resolves dependencies well now? Is that what an RPM wrangler is?
This is a very outdated view. dnf runs circles around apt. Try it out, or at least find man pages on the ole 'net and see what it can do.
Probably the thing I like the most is transactional installation (or upgrades/downgrades/removals) of packages with proper structured history of all package operations (not just a bunch of log records which you have to parse yourself), and the ability to revert any of those transactions with a single command.
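For example (the transaction IDs are illustrative; use whatever `dnf history list` shows):

    dnf history list               # structured record of every transaction
    dnf history info 42            # exactly what changed in transaction 42
    sudo dnf history undo 42       # revert just that transaction
    sudo dnf history rollback 41   # undo everything back to transaction 41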
Possibly I'm just more used to apt (though Fedora was my first Linux), but I've found apt has better interfaces for what I'm trying to do (e.g. querying the state of the system and ensuring consistency across machines), and I've not found an equivalent to aptitude for dnf.
Side note: the other difference I've noticed is that Debian (and presumably its derivatives) has better defaults (and debconf) for packages, so where a stock config would work on Debian, on Rocky I have to change config files, install missing packages, etc.
I had the same experience as the OP at the beginning of the century. I built a lot of RPM packages back then, and it was clear that the system of dependencies built into the RPM format itself (not apt or dnf; this is the dpkg level in Debian terms) was poorly thought out and clearly insufficient for any complex system.
I also migrated to Debian, and it felt like a huge step forward.
I'm on Arch now, BTW.
The equivalent of RPM on Debian is the .deb package format. The equivalent of apt is dnf (or yum before it, or up2date before that).
Red Hat is just old enough to predate package managers on Linux. Debian did not have one at first either.
Slackware still has no real package manager.
RPM dependencies have been a solved problem with yum (and now dnf) for about two decades.
Yum was borrowed from Yellow Dog Linux.
To be pedantic, yum was not from Yellow Dog, it is Yellow dog Updater Modified after all. It was a rewrite of the Yellow Dog Updater by people at Duke University. (Yellow Dog Linux was based on Red Hat.)
There was a lot of competition around package managers back then. For RPM, there were also urpmi, apt-rpm, etc.
Which in turn was based on RHEL/CentOS: https://en.wikipedia.org/wiki/Yellow_Dog_Linux
First impressions really matter. This is also why I went Debian. You shouldn't be getting marked down for saying it.
Many of us were running on 28.8 dial-up. Internet search was not even close to a solved problem. Compiling a new kernel was an overnight or even weekend process. Finding and manually downloading RPM dependencies was slow and hard. You didn't download an ISO; you bought a CD, or soon a DVD, that you booted off of.
Compare that to Debian's apt-get or SUSE's YaST/YaST2 of the time; both just handled all that for you.
Debian and SUSE were fun and fit perfectly into the Web 1.0 world; Red Hat was corporate. systemd was pushed by Red Hat.
Compiling a new kernel was an overnight or even weekend process
One friend and I had a competition over who could make the smallest kernel configuration still functional on their hardware. I remember that at some point we could build it in ten minutes or so. This was somewhere in the nineties; I was envious of his DX2-50.
Compare that to Debian's apt-get or SUSE's YaST/YaST2 of the time; both just handled all that for you.
One of the really huge benefits of S.u.S.E. in Europe in the nineties was that you could buy it in nearly every book shop, and it came with an installation/administration book and multiple CD-ROMs with pretty much all the packages. Since many people had no internet at all, or at most dial-up, it gave you everything you needed for a complete system.
Yes, I remember that too. I bought a 3-DVD set of Debian Sarge: two DVDs with everything, and the third with the source packages.
You are mixing a lot of history there.
Red Hat had packages but not package management at first. However, the same is true of Debian. It depends on when you used them.
Red Hat Linux branched into RHEL (corporate) and Fedora (community).
SuSE went down a similar road to Red Hat and has both openSUSE and SLE these days. Fedora is less corporate than openSUSE is.
Debian is still Debian, but a bit more pragmatic and a bit less GNU these days (e.g. non-free firmware).
Things like Flatpak and Distrobox are game changers for these “stable” distros like RHEL or even Debian.
Core system stays static and you do not get blindsided by changes in packages you do not care about. At the same time, you can easily install very up-to-date apps and dependencies if you need them.
Disclaimer: I'm very involved in the kernel part of this for $company.
The RHEL kernels themselves do see many improvements over time; the code you'll see when the product goes end-of-life is considerably updated compared to the original version string in the package name / uname -a. Customer and partner feature requests, CVE fixes, and general bug fixes go in almost every day.
The first problem, the perception of 'running old kernels', is exacerbated by the kernel version string not matching code reality.
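You can see the mismatch for yourself on any RHEL box (the version string shown is just an example):

    uname -r                       # e.g. 5.14.0-503.14.1.el9_5: the frozen base version
    rpm -q --changelog kernel-core | head -30   # the stream of backports and fixes actually in it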
The second problem is that many companies don't start moving to a newer RHEL when it's out; they often stick to current minus one. That's a bit of a problem, because by the time they roll out a release, n-1 is likely entering its first stage of "maintenance", so fixes are more difficult to include. If you can think of a solution to this, I'm all ears.
The original reason for not continually shipping newer kernel versions is to ensure stability by providing a stable, whitelisted kABI that third-party vendors can build on top of. This is not something that upstream and many OS vendors support, but with the "promise" of not breaking kABI, updates should happen smoothly without third parties needing to update their drivers.
The kABI maintenance happens behind the scenes, while CVE fixes and new features are delivered during the relevant stage of the product lifecycle.
The kernel version is usually very close to the previous release; in the case of RHEL 10 it's 6.13, and already, with the zero-day fixes, it has parts of newer code backported, tested, etc. in the first errata release.
The security landscape is changing; maybe someday the Red Hat business unit will wake up and decide to ship a rolling, better-tested kernel (Red Hat DOES have an internal/tested https://cki-project.gitlab.io/kernel-ark/ which is functionally this). Shipping it has the downside that third-party vendors would not have the same kABI stability guarantees that RHEL currently provides; it would muddy the waters of RHEL's value and confuse people about which kernel they should be running.
I believe there are two customer types: ones who would love to see this and get the newest features for their full lifecycle, and ones who would hate it, because the churn and change would be too much, introducing risk and problems for them down the line.
It's hard, and likely impossible, to keep everyone happy.
As I mentioned in another comment on this thread:
> As an aside, that kABI guarantee only goes so far. I work in HPC/AI, and the out-of-tree drivers we use, like the MOFED and Lustre drivers, would break with EVERY SINGLE RHEL minor update (RHEL X.Y -> X.(Y+1)). I'm using the past tense here because I haven't been using RHEL for this purpose for the past ~5 years, so maybe it has changed since, although I doubt it.
I'm not sure what the underlying problem here is: is the kABI guarantee worthless generally, or is it just that the MOFED and Lustre drivers need to use features not covered by the "kABI stability guarantee"?
I work on Lustre development. Lustre uses a lot of kernel symbols not covered by the kABI stability guarantee, and we already need to maintain configure checks for all of the other kernels (SUSE, Ubuntu, mainline, etc.) that don't offer kABI anyway. So in my opinion, it's not worth the effort to adhere to kABI just for RHEL, especially when RHEL derivatives might not offer the same guarantees. DKMS works well enough, especially for something open source like Lustre.
Honestly, I'm not sure who kABI is even designed for. None of the drivers I've interacted with in the HPC space (NVIDIA, Lustre, vendor network drivers, etc.) seem to adhere to kABI. DKMS is far more standard. I'd be interested to know which vendors are making heavy use of it.
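For anyone unfamiliar, the DKMS flow is roughly this (a sketch with a hypothetical module name; the dkms.conf keys are the standard ones):

    # /usr/src/examplefs-1.0/dkms.conf (hypothetical out-of-tree module)
    PACKAGE_NAME="examplefs"
    PACKAGE_VERSION="1.0"
    BUILT_MODULE_NAME[0]="examplefs"
    DEST_MODULE_LOCATION[0]="/extra"
    AUTOINSTALL="yes"   # rebuild automatically when a new kernel is installed

    # register, build, and install against the running kernel
    sudo dkms add -m examplefs -v 1.0
    sudo dkms build -m examplefs -v 1.0
    sudo dkms install -m examplefs -v 1.0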
How much of the RHEL kernels is stuff that isn't in Linux mainline or LTS?
Pretty much all of it is in mainline modulo the secure boot lockdown patches, which are downstream for all distributions because Linus fundamentally believes those patches do not make sense.
Linux longterm is often missing stuff the RHEL kernel has, because RHEL backports subsystems from mainline, with features and hardware support.
What if, rather than a rolling kernel, we got a new kernel every two years or so?
Or maybe one in the middle of the (expected) lifetime of the major release?
Just thinking out loud; I acknowledge that maintaining a kernel version is no small task (it probably takes a lot of engineering time).
I'd rather use Red Hat than Ubuntu. I was handed a machine the other week with Ubuntu 23.10 on it, OS supplied from a vendor with extensive customization. Apt was dead. Fuck that. At least RH doesn't kill their repos.
I've got Ubuntu 22.04 machines lying around that still update, because they're LTS. Ubuntu has a well-publicised release policy, which you will obviously have read.
Try do-release-upgrade.
You also mention "OS supplied from a vendor with extensive customization. Apt was dead."
How on earth is that Ubuntu's problem?
Isn’t Ubuntu basically killing apt?
My Ubuntu became unusable because it kept insisting on installing a snap version of Firefox, breaking a whole bunch of workflows.
I do want to try an RH-based OS (maybe Fedora) so they don't keep changing things on me, but just where I am in life right now I don't have the time/energy to do so, so for now I'm relying on my Mac.
Hopefully I can try a new Linux distro in a few months, because, though I can't quite figure out why yet, something about macOS simply doesn't work for me from a getting-work-done perspective.
I've heard many good things about Pop OS. It's like Ubuntu done right, and it does have an apt package for Firefox.
(I run Void myself, and stay merrily away from all these complications.)
In Ubuntu, it's also possible to ditch Firefox from the snap store and install it using apt-get. Not from Ubuntu's repo, but from the official Firefox Debian repository:
https://www.omgubuntu.co.uk/2022/04/how-to-install-firefox-d...
I know it's not the best but at least it can be done with little effort.
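Roughly like this, if I remember the documented setup correctly (double-check the key and pin against Mozilla's current instructions):

    # trust Mozilla's signing key and add their apt repo
    wget -q https://packages.mozilla.org/apt/repo-signing-key.gpg -O- | \
        sudo tee /etc/apt/keyrings/packages.mozilla.org.asc > /dev/null
    echo "deb [signed-by=/etc/apt/keyrings/packages.mozilla.org.asc] https://packages.mozilla.org/apt mozilla main" | \
        sudo tee /etc/apt/sources.list.d/mozilla.list

    # pin the repo above Ubuntu's snap-transition package
    printf 'Package: *\nPin: origin packages.mozilla.org\nPin-Priority: 1000\n' | \
        sudo tee /etc/apt/preferences.d/mozilla

    sudo snap remove firefox && sudo apt update && sudo apt install firefox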
I can highly recommend it. Have been using it for a couple years or so now, haven't had any serious issues.
> It's like Ubuntu done right
But it is Ubuntu?
It's based on Ubuntu, but it's different enough: https://en.wikipedia.org/wiki/Pop!_OS
I have been using Fedora Sway as my desktop operating system for a couple years now and I am very happy. It’s definitely worth a try. I have access to flatpak when I need it for apps like steam but the system is still managed by rpm/dnf. There’s of course some SELinux pain but that’s typically my fault for holding it wrong. Overall very impressed.
I cannot update the OS per the contract.
It’s Ubuntu’s problem because they decide they’re smarter than their users and nuke their repos.
Fuck all of that.
I get your frustration, but this is really a problem with the vendor. We had something similar about 3 years ago, where a vendor delivered a proprietary driver for a piece of hardware that only worked with a specific 2.6 Linux kernel version--making it at least six years old. Is this the Linux project's fault? I don't think so.
The vendor should provide you with updates to the new version, or use an LTS. There's absolutely nothing bad here on Ubuntu's part.
Your contract is with the vendor, if you have one. Unless you have a contract with Canonical, in which case you can ask them for support.
Oh, you got that bit of fun... That's not an apt problem, that's an Ubuntu problem (and it's a reason I encourage people to either stick to LTS if they must use Ubuntu, or just run Debian or any other distro which doesn't block upgrades).
It's well publicized that they don't maintain support for old, non-LTS releases. They literally delivered what they promised. It could have been avoided by using an LTS release.
Fedora does the same. No corporate vendor supports 6-month-cycle distros for more than a year. RHEL releases come super slowly, for example.
I didn't have a say in the matter of OS choice, and it doesn't matter how well-publicized Ubuntu's stance is: it's wrong. I don't care if it's not an LTS; keep the fucking repos open and advertise that you're running an insecure OS. Let me, the user, make that choice. Don't pretend I'm stupid and need some kind of benevolent dictator to make choices for me, or handicap me because they're smarter than me. They're not.
That’s exactly how it works. If you want to use an unsupported, insecure OS, you just have to opt into it.
You opt into it by changing your repositories to the https://old-releases.ubuntu.com archive mirror. You can install and use Ubuntu 6.10 if you want.
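Concretely, something like this (a sketch; newer releases use deb822-style sources, so adjust accordingly):

    # point the standard pockets at the old-releases archive
    sudo sed -i -E 's/(archive|security)\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
    sudo apt update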
"Keeping the repos open" has a cost on their part. Servers aren't free. If you think you're smart then mirror your own repos.
Really? How much more can it cost to keep both their LTS and non-LTS repos open at the same time?
C’mon, that’s such a weak argument I think you know it.
If there's no cost in time, effort or equipment then mirror it yourself. It's easy, right?
Or just use an LTS distro like literally every single other organization that depends on Ubuntu for their business SMH. Like, it's absurd to even think about...
You ignored the premise. I can't update. Shake your head all you want; I didn't make the call, Canonical did.
Anyhow, someone else showed they do indeed host old repos, just in a different place. Also, why can't you update? Who the hell made that contract? And is it the client that said you can't update, or Canonical? Because the latter seems sus...
I can see that you're now realizing I didn't have a say in the constraints. I also hate said constraints. It's the hand I was dealt.
Ubuntu is a cancer on the gnu/linux/open source community.
Sounds made up.
Well that's just plain incompetent on the part of your vendor.
23.10 is not an LTS version, and Ubuntu only provides updates for non-LTS releases for a short period of time (6 months or so after the next version is released), so the vendor should have upgraded it to 24.04, which IS an LTS version.
It's like you're complaining to Microsoft about a vendor giving you an old XP machine and that you can't update it.
I think the more apt (pun not intended) comparison would be to macOS? Trying to install macOS High Sierra from the Internet without hackery will lead to "the recovery server could not be contacted" error message, because certificates have expired. Like if your Mac came with High Sierra and you want to do a factory reset.
Windows from that era still updates. Though up next will be expiration of Windows UEFI CA 2011 which will certainly lead to boot problems for some.
You need to file a support ticket with the organization that provided the laptop. They chose to provide you with what amounts to a technology preview with a very limited lifespan. [0]
0: https://ubuntu.com/about/release-cycle
> I have to do jobs involving RH and co, and it's just a bit of a pain dealing with elderly stuff. Tomcat ... OK, you can have one from 1863.
It's 2025, you can run whatever version you need in containers.
Maybe a dumb question, but how do non-x86 boards normally boot Linux images in a generic way? When I was in the embedded space, our boards all relied on very specific device tree blobs. Is the same strategy used for these, or does it use ACPI or something?
It's what the RISC-V Server TG is trying to standardize: https://lists.riscv.org/g/tech-server-platform https://lists.riscv.org/g/tech-announce/topic/public_review_...
All RISC-V consumer boards running Linux also use DT. RISC-V is also working on getting ACPI, but primarily for the sake of servers, just like ARM, where ACPI is primarily used for servers (ARM SBBR / ServerReady).
ARM Windows laptops only use ACPI because Windows has no interest in DTs, but under Linux these devices are still booted using DT. I don't know for sure, but the usual explanation is that these ACPI implementations are hacked up by the manufacturer to be just good enough to work with Windows, so supporting them on Linux requires more effort than just writing up the DT.
> so supporting them on Linux requires more effort than just writing up the DT.
More effort than producing unique images for every board?
The DT should really be put in the firmware (e.g. u-boot), the same as ACPI on x86 is in the firmware (the BIOS/EFI).
Then you wouldn't need a unique kernel/OS image. For devices that have u-boot in ROM, the DT is usually there (fdt).
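That's also how generic distro images consume it via u-boot's distro-boot: a sketch of the extlinux.conf it reads (paths illustrative):

    # /boot/extlinux/extlinux.conf
    LABEL linux
        KERNEL /vmlinuz
        INITRD /initrd.img
        APPEND root=/dev/mmcblk0p2 ro
        # FDTDIR lets the firmware pick the matching DTB for the board by name;
        # omit it if the firmware already passes its own built-in DT
        FDTDIR /dtbs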
Yes. See https://github.com/aarch64-laptops/edk2/tree/dtbloader-app?t... for example.
It's a shame the DT approach encourages landfilling boards when the manufacturer stops providing updates.
Not necessarily. The DT can be loaded separately, from the u-boot tree / kernel tree / a dtoverlay file.
The x86 platform uses a plethora of platform services under different names - UEFI/ACPI/PCI/(ISA plug-n-play back in the day)/APIC (programmable interrupt controller and evolved variants thereof)/etc. - that allow the generic kernel to discover what's available when it boots and load the correct drivers.
ARM servers do the same with SBSA (a spec that mandates things like UEFI and ACPI support). I think there's some effort in RISC-V land to do the same, also using UEFI and ACPI.
Maybe I'm wrong, but isn't that what SBI [0] is for?
[0] Supervisor Binary Interface
They basically don't at the moment. RISC-V is working on ACPI and "universal discovery" as a solution but it doesn't exist yet.
I think windows ARM laptops use UEFI?
publicmail was asking about ACPI vs DT, not UEFI. Using UEFI and ACPI/DT are orthogonal; DT-using devices can also boot from UEFI if the firmware provides it. See https://github.com/TravMurav/dtbloader for example.
This is explicitly what we're doing in RHEL with the P550.
We use u-boot and its EFI capabilities to init GRUB (instead of another instance of u-boot).
Why not use systemd-boot?
They do; Windows Phone even used UEFI back in the day (not sure it was completely compliant).
Looks like they still require a custom device tree to boot Linux.
Debian too: https://news.ycombinator.com/item?id=44034528
How do they get access to the source code? I read some time ago that RH changed how they provide the source code, and that it was (almost) impossible to get it now.
I don't know where you heard that? The source code for Red Hat's RISC-V developer preview will be released alongside the binaries, on 1st June. However almost all of it is already in CentOS Stream 10 and you can browse it here: https://gitlab.com/redhat/centos-stream There are a few patched packages (and quite a large kernel patch), which is what we'll be releasing into a separate git repo when the developer preview is actually released.
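For example (assuming the current GitLab layout holds), the dist-git for any Stream 10 package can be pulled directly:

    git clone https://gitlab.com/redhat/centos-stream/rpms/kernel.git
    cd kernel && git checkout c10s   # the CentOS Stream 10 branch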
I mean in general, you can read it here for example:
https://www.theregister.com/2023/07/10/oracle_ibm_rhel_code/...
Yeah, I wouldn't believe what Oracle says; they're hardly a disinterested party here. You can go and grab the source for CS10, which is almost exactly RHEL 10, from the link I posted above, and RHEL 10 sources are distributed to our customers.
We've been building the RISC-V port from a combination of Fedora and CentOS Stream sources--the same as the core operating system--since early 2024.
A lot of RISC-V support was already in F40 (which EL10 is cut from), so the rest was largely backporting and integrating into RHEL, which again, we've been tracking since CentOS Stream 10 was branched from Fedora last year.
I'm so looking forward to a RISC future!
Ditto! I haven’t found any hardware that’s daily-driver ready, but I keep looking.
https://store.deepcomputing.io/products/dc-roma-ai-pc-risc-v...
I especially like the idea of getting a Framework version, in case I want to swap in a different mainboard. By their own admission, the RISC-V board is targeting developers and is not ready for prime time. Also, coming from the US, I'm not sure how the tariff thing will work out…
The RISC-V software ecosystem is really good already. It feels like everybody is just waiting for high-performance CPU cores now. Sadly, silicon cannot be built and released within seconds like software...
Better to buy an SBC for now (I can recommend the OrangePi RV2 - it's fantastic!) and wait until actual desktop/laptop-class hardware is ready :)
Or better yet, buy something like https://www.crowdsupply.com/sutajio-kosagi/precursor or some other FPGA-based platform to retain the programmable-logic capability. You never know whether you're going to need it, and should you need it after all, it helps knowing it's there.
Love FPGAs, but they're not very practical if you need to run a non-toy Linux on RISC-V. They will typically top out at 100 MHz for the kind of FPGAs that you and I can afford to buy, and have other problems like limited RAM.
I miss my RISC past.
I cannot wait for those ultra-performant rv64 micro-architectures manufactured on the latest silicon process. One less toxic IP lock, and much cleaner assembly.
Better title: Rocky Linux 10 Will Support Two RISC-V Boards
For a distro, just building packages for an architecture is notable, support-wise. Those with custom firmware and kernels can pair them with the Rocky 10 userspace.
Exactly! The AltArch SIG is exactly where those customizations will come from, driven by community support.
They could easily support the Pine64 Star64 board as well; the VisionFive 2 build of u-boot works on the Star64 too.
Yep, it should work fine; we're just not stepping beyond the upstream (Fedora) support at the moment.
Even to support one board, they'd need the whole build/testing infrastructure for RISC-V. Likely adding more boards is going to be easy now, and any architecture-specific regressions easier to spot and fix in a timely fashion.
For sure, we needed build infrastructure for RISC-V. I started out with five VisionFive 2's in my lab, and they're still doing work as needed. Granted, those are quite slow and painful, because some builds take a long, long time on them (for example, GCC took 7 days at the beginning, but we have it down to about 5 days plus change now). Ever since we added SiFive P550's to the mix, it has been much faster for us to identify build issues and get them rectified. I still use my VF2's for the "tiny" builds.
It's true that, since we've had a usable build root since late 2024, our AltArch group has the opportunity to build different kernels to support other SBC's or boards, like they already do for ARM SBC's (Raspberry Pi, for example, since that support isn't native to the EL kernel). So while we support the VF2's and QEMU out of the box, that group will handle the additional kernels for broader hardware support.
I'm actually looking forward to seeing what other boards the AltArch group will add support for.
Even better title: Rocky will take the RHEL work, rebrand it, sell the boards at a discount from China, claim a win, and claim they're being attacked by IBM.
Man some of y'all really have beef with Rocky...
Because their model is the absolute laziest possible one.
Yes, because the iterate-and-claim-ownership model is dishonest and lazy at best.
We've actually been working with Fedora and RH on RISC-V for over a year now :)
Still, past sins and all that. Not to mention the model, and the direction set by those at the top.
Great, I always applaud contributions and I want to encourage them. But please see the damage done by some quite senior people on the project, and please distance yourselves from them.