The future of 32-bit support in the kernel | Lobsters
  1. 32
    The future of 32-bit support in the kernel linux lwn.net
  1.  

    1. 14

      Raspberry Pi promise to keep making the models 1 A+, 1 B+, 0, 0W until 2030. They are based on the BCM2835 with its weedy armv6 core that was obsolete even when it was used in the first Raspberry Pi in 2012. Raspberry Pi OS is supported on all models for sale.

      The BCM2835 gets a line on one of the slides, but isn’t otherwise mentioned. I suppose they don’t hit many of the other pain points: not high memory, not many cores, not big endian. The Raspberry Pi 2 might be more awkward, since it has 1GB of RAM, which gets into high-memory issues.

      1. 8

        Support does not have to mean latest and greatest kernel versions. They could stay on an older kernel release as long as it receives patches.

        For example, linux 6.12 (which of course still has 32-bit support) is an SLTS release; it will be supported until mid-2035 by the CIP. The 32-bit removal is only beginning to be discussed for mainline, so there will almost surely be one or more SLTS releases cut before it happens, stretching this out to 2037 or further.

        1.  

          Even with 1GB, part of it is used by the GPU so it’s not a huge issue.

        2. 10

          All the discussion of dropping everything in linux just makes it look more and more like a corporate project. The author, Arnd Bergmann, seems to only want common, popular hardware to be supported, and even advocates for big endian to go away. Why? No technical reasons are given. It’s quite handwavy and short on details.

          When linux moves to only caring about corporate needs, NetBSD will still be there to support older systems.

          1. 33

            Simply maintenance. linux development is moving fairly quickly, and the less old stuff with edge cases you need to worry about, the less work it is to maintain it.

            1. 8

              The highmem part in particular adds quite a bit of complexity, so I can see why they would like to rip that out. If you can assume that your virtual memory space is larger than your physical RAM, things get simpler.
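
              A rough sketch of why, using the kernel’s kmap_local_page()/kunmap_local() API (illustrative helper functions, not actual kernel code): with highmem, a page may have no kernel mapping at all, so every access needs a temporary one.

              ```c
              #include <linux/highmem.h>  /* kmap_local_page(), kunmap_local() */
              #include <linux/mm.h>       /* struct page, page_address() */
              #include <linux/string.h>   /* memcpy() */

              /* With CONFIG_HIGHMEM, the page may live above the direct map, so a
               * short-lived mapping must be created and torn down around the copy: */
              static void copy_from_page_highmem(struct page *page, void *dst, size_t len)
              {
                      void *src = kmap_local_page(page);
                      memcpy(dst, src, len);
                      kunmap_local(src);
              }

              /* On 64-bit (or lowmem-only) kernels, all of RAM sits in the direct
               * map, and the same operation collapses to plain pointer arithmetic: */
              static void copy_from_page_direct(struct page *page, void *dst, size_t len)
              {
                      memcpy(dst, page_address(page), len);
              }
              ```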

              1.  

                Do you think supporting big endianness is “maintenance”?

                You seem to just be parroting. Where’s the maintenance in not assuming endianness? Can you point to examples?

                Sure, people can come up with “maintenance” examples such as ISA, BIOS booting, hairy code needed for early x86 and so on, but that’s not justification, nor is it even related, to support for big endian.

                Good code should compile and run on architectures that the author doesn’t even necessarily know exist. linux is more bare metal, sure, but suggesting that big endian support should go away because it’s not popular is strongly implying that assuming endianness is fine, which it most certainly is not.

                1. 10

                  Do you think supporting big endianness is “maintenance”?

                  Absolutely. IBM is willing to pay for that maintenance, so it gets done. But if it weren’t for that hardware and funding, it would be simpler to rip that code out.

                  Good code should compile and run on architectures that the author doesn’t even necessarily know exist.

                  Without the ability to test against hardware, you can’t know if a refactor is correct. As a consequence of linux’s lack of stable internal interface guarantees, anything that can’t be tested against gets dropped.

                  I largely agree with you in the abstract. I believe linux could do a lot more to reduce that long term maintenance burden. But in a world of finite resources and underpaid maintainers, you have to make choices about where to spend your engineering resources.

                  1. 5

                    strongly implying that assuming endianness is fine, which is most certainly is not.

                    I disagree. In a project that specifically only supports little endian, assuming endianness is perfectly fine. That is the entire point.

                    There is no a priori moral duty to write endian-agnostic code.

                    1.  

                      Are you saying that linux “specifically only supports little endian”, or are you saying that it should specifically only support little endian?

                      1.  

                        Neither. I’m saying that once a project (linux or any other one) decides to only support little endian, then it is fine for them to assume endianness in their code.

                        Your comment to me read like “linux needs to support big endian because if they drop big endian that implies they can assume endianness, which is bad, because they need to support big endian”, which is circular reasoning.

                        1.  

                          I believe the point is conditional: if linux supports only little endian, then it’s fine to assume endianness.

                  2. 23

                    Just for clarity, the author of the post is Jonathan Corbet, summarizing a talk by Arnd Bergmann.

                    I can’t read what you read out of that summary. He very much tries to make sense of a messy space and makes a very clear statement that things will not be removed while there is still a tangible impact on users and things are still in support.

                    Supporting vendor-abandoned hardware in the mainline kernel is a huge problem for quality control; you are literally fighting the vendor there, if it still exists. E.g. I know of a few discussions where the vendor of an architecture actively wants people to unmerge support for their old hardware. This is an uphill battle that really only serves a niche group of maintainers.

                    Don’t get me wrong: I’m very sympathetic to the retro/homebrew scene. I’ve tried to figure out good ways to support them in, e.g., the Rust compiler without stepping on the toes of vendors. That still does not mean we should forget that there is a labour cost in all this. None of it comes for free.

                    Talks that explore the space are never free of opinions, but I appreciate the overview very much.

                    1. 5

                      Just for clarity, the author of the post is Jonathan Corbet, summarizing a talk by Arnd Bergmann.

                      Thanks - that’s an important distinction!

                      WRT the rest, it’s hard to hear someone advocate for ending big endian support without thinking that he’s on a crusade of sorts.

                      1. 10

                        WRT the rest, it’s hard to hear someone advocate for ending big endian support without thinking that he’s on a crusade of sorts.

                        I don’t quite get that. I haven’t touched a BE platform for years. As stated before: I know hardware vendors that lobby tool vendors to drop BE support so that they can get rid of it. I’m having a hard time constructing a crusade out of that.

                        1.  

                          Advocating for things you don’t understand can be considered crusading.

                          If he truly understood the importance of maintaining portable code, he wouldn’t advocate for ending support for big endian. The only reason to do so is to corporatize the project - that is, show the “shareholders” that nobody is “wasting time” on things that aren’t immediately relevant to “corporate interests”.

                          There’s no other legitimate reason whatsoever (although please try to explain any, if they’re more substantial than repeating what you’ve heard). People who don’t understand programming will happily parrot the “but the maintenance!” schtick they’ve heard from others, but that’s meaningless.

                          1. 8

                            There’s no other legitimate reason whatsoever (although please try to explain any, if they’re more substantial than repeating what you’ve heard). People who don’t understand programming will happily parrot the “but the maintenance!” schtick they’ve heard from others, but that’s meaningless.

                            I don’t think that can be waved away that easily. BE is so fringe now that compiler vendors are dropping support for it and no longer including it in their builds (e.g. Arm does not ship their GCC and clang profiles with BE anymore). Those targets already experience rot and will experience more in the future.

                            Yes, I’m totally up for a world where we enable people who still work on these devices (for whatever reasons), but I don’t think we can wave away maintenance issues as an unreasonable argument.

                            1.  

                              Big endian is fringe, and you determine this based on the fact that vendor-specific toolchains don’t come with it? Does that mean x86 is fringe because Arm toolchains don’t come with x86 support? That’s just… odd.

                              Point to this “rot”, please.

                              I think you’re confusing corporate support with project support, and by doing so you’re illustrating my point for me: that linux, the project, seems to be moving more and more towards being a corporate project.

                              1.  

                                Some ARM processors have a big-endian mode (in fact can switch dynamically between LE and BE!), so it’s not quite the same as not supporting x86.

                                According to the latest release notes, their toolchain does support AArch64 big-endian, though oddly only when cross-compiling on x86! It also sounds like it’s now hard to find a precompiled libc for big-endian ARM.

                            2. 7

                              If he truly understood the importance of maintaining portable code, he wouldn’t advocate for ending support for big endian. The only reason to do so is to corporatize the project - that is, show the “shareholders” that nobody is “wasting time” on things that aren’t immediately relevant to “corporate interests”.

                              There’s no other legitimate reason whatsoever (although please try to explain any, if they’re more substantial than repeating what you’ve heard). People who don’t understand programming will happily parrot the “but the maintenance!” schtick they’ve heard from others, but that’s meaningless.

                              If you don’t support something, the portability issues are basically academic. Corporate mindset isn’t the issue; maintainer workload, understanding, and bandwidth are.

                              Disclaimer: I’m paid to support open source projects for big endian platforms. So I do know about the workload issues.

                              1. 5

                                People who don’t understand programming will happily parrot the “but the maintenance!” schtick they’ve heard from others, but that’s meaningless.

                                I don’t agree that it’s meaningless. It’s not as simple as saying code is either portable or not. I think every axis along which platforms can vary adds extra work to creating and maintaining portable code, and that work has to be justified. If almost no common platforms are big endian any more, it’s not clear to me that the extra work is worth it compared to investing that effort in other improvements.

                                1.  

                                  Please give one example of how writing code to be endian agnostic is, as you put it, “extra work”.

                                  1. 5

                                    The burden of proof is on you when making claims. But such examples include pretty much any code that’s blitting structs.
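
                                    A minimal, self-contained sketch of the struct-blitting case (the record layout here is made up): the blit bakes the host’s byte order and padding into the output, while byte-wise serialization (the “extra work”) produces identical bytes everywhere.

                                    ```c
                                    #include <stdint.h>
                                    #include <stdio.h>

                                    struct record { uint32_t id; uint16_t flags; };

                                    /* Serialize at a fixed byte order, one byte at a time. */
                                    static void put_u32le(uint8_t *p, uint32_t v)
                                    {
                                        p[0] = (uint8_t)v;         p[1] = (uint8_t)(v >> 8);
                                        p[2] = (uint8_t)(v >> 16); p[3] = (uint8_t)(v >> 24);
                                    }

                                    int main(void)
                                    {
                                        struct record r = { 0x11223344, 0x5566 };
                                        uint8_t buf[6];

                                        /* Endian-dependent blit: LE hosts emit 44 33 22 11 ...,
                                         * BE hosts 11 22 33 44 ..., padding bytes included. */
                                        fwrite(&r, sizeof r, 1, stdout);

                                        /* Endian-agnostic: the same six bytes on every host. */
                                        put_u32le(buf, r.id);
                                        buf[4] = (uint8_t)r.flags;
                                        buf[5] = (uint8_t)(r.flags >> 8);
                                        fwrite(buf, sizeof buf, 1, stdout);
                                        return 0;
                                    }
                                    ```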

                                    N.B.: This whole comment thread feels really accusatory and isn’t helping your case or winning you any allies. At best, it’s asking people to do a bunch of work to support your hobbies.

                                    1.  

                                      People, including the speaker that the article is about, claim that supporting big endian somehow means extra work or extra maintenance. People seem more than happy to keep parroting “but the maintenance!”

                                      So people can assert a thing with no examples and nothing that can actually be pointed to, and I’m the one who has to “prove” that this simply isn’t true?

                                      Discussions like these are contentious, for sure, and they’re hardly all that productive, but they are the kinds of discussions I can point to later as examples of the very problems we have as a society: people engage because they feel like something is right and true, they repeat what they’ve heard, often with conviction, yet they leave out any real examples backing up that “conviction”, and many do other things that show they’ve never really thought it through.

                                      My questions aren’t rhetorical. If programmers here can think of examples where writing endian agnostic code is onerous, I’d love to hear them. The fact that nobody has examples comes from decades of learning how to simply do things in ways where we don’t have to think about endianness.

                                      So the real issue is one level removed: why are people so adamant about supporting the corporatization of the linux project, using excuses for which they have only spurious evidence?

                                      THAT is what I find fascinating, and that’s what I’ll ask people to wonder when I point them to this thread and ask them whether the reasoning they see is emotional or technical.

                                      1.  

                                        I will freely admit that I have not worked on code that is portable across endianness. But I dispute that this means I’m parroting - my argument comes from a general sense of what it means to maintain portable software, not just echoing statements other people wrote. I find the claim that supporting both big and little endianness is ‘no extra work’ kind of bizarre - why would anybody even mention endianness if no code required you to think about it?

                                        From the comments of the linked article I found https://lore.kernel.org/all/20250822165248.289802-1-ben.dooks@codethink.co.uk/, which seems like a not-insignificant amount of work being invested to add support for big-endian RISC-V. In there I found lots of code changes in places that involve understanding the byte-level representation of data or instructions, which I guess is not surprising.

                                        1.  

                                          Endianness is only an issue if you are writing out binary data to storage or protocols that might be used on systems of different endianness. At my previous job, I had to work on endianness issues with our project, but it was running on SPARC, a big-endian system. For reasons, we didn’t have a lot of SPARC machines for development or testing, so I put in the work to support both big and little endian. It only affected the code at the I/O boundaries. No other code needed change.
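
                                          A sketch of that boundary-only pattern, using the non-standard but widespread <endian.h> helpers (glibc/musl; the BSDs have sys/endian.h): pick one on-disk byte order and convert only when reading or writing.

                                          ```c
                                          #include <endian.h>   /* htole32(), le32toh() */
                                          #include <stdint.h>
                                          #include <stdio.h>

                                          /* The file format is defined as little-endian;
                                           * in-memory code stays native. */
                                          static int write_u32(FILE *f, uint32_t v)
                                          {
                                              uint32_t wire = htole32(v);   /* no-op on LE hosts */
                                              return fwrite(&wire, sizeof wire, 1, f) == 1 ? 0 : -1;
                                          }

                                          static int read_u32(FILE *f, uint32_t *v)
                                          {
                                              uint32_t wire;
                                              if (fread(&wire, sizeof wire, 1, f) != 1)
                                                  return -1;
                                              *v = le32toh(wire);           /* swap only on BE hosts */
                                              return 0;
                                          }
                                          ```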

                                          1.  

                                            Endianness also affects data structure layouts when there are variant records and/or bit packing for compactness. It’s often possible to avoid it by making sure that particular data fields are always accessed using a consistent word size, and by shunning endian-dependent misfeatures such as C bitfields. But sometimes it will end up favouring one endianness, and the performance of the disfavoured endianness can be improved with some endian-aware layout.
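
                                            For illustration, a sketch with a hypothetical 4/12-bit header split: the bit-field version’s layout depends on the target ABI, while the shift-and-mask version favours neither endianness.

                                            ```c
                                            #include <stdint.h>

                                            /* Endian- and ABI-dependent: compilers allocate bit-fields
                                             * from the least- or most-significant end depending on the
                                             * target, so these fields land in different bits on LE and
                                             * BE machines. */
                                            struct hdr_bitfields {
                                                uint16_t version : 4;
                                                uint16_t length  : 12;
                                            };

                                            /* Endian-agnostic: the layout is defined by arithmetic, not
                                             * the ABI. Combine with fixed-order (de)serialization of the
                                             * uint16_t itself. */
                                            static uint16_t hdr_pack(unsigned version, unsigned length)
                                            {
                                                return (uint16_t)(((version & 0xFu) << 12) | (length & 0xFFFu));
                                            }

                                            static unsigned hdr_version(uint16_t w) { return w >> 12; }
                                            static unsigned hdr_length(uint16_t w)  { return w & 0xFFF; }
                                            ```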

                                        2.  

                                          In SIMD programming, re-interpreting u16x8 as u8x16 or vice versa should be trivial in the high-level language (on the machine, the same thing stays in the register). I have been in discussions where there was a threat of putting scary-looking ceremony on this operation for the sake of endian-agnostic portability.
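
                                          For example (assuming an ARM/AArch64 target with NEON), the reinterpret is already free at the machine level, which is why extra ceremony in the high-level language grates:

                                          ```c
                                          #include <arm_neon.h>

                                          /* Compiles to nothing: the bits stay in the same vector
                                           * register. The portability wrinkle is that on big-endian
                                           * targets the mapping between lane numbers and byte
                                           * addresses changes meaning. */
                                          uint8x16_t as_bytes(uint16x8_t v)
                                          {
                                              return vreinterpretq_u8_u16(v);
                                          }
                                          ```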

                                      2.  

                                        One extra step is ensuring that the endian agnostic code is actually endian agnostic by having to test it on systems with different endianness.

                                    2.  

                                      Advocating for things you don’t understand can be considered crusading.

                                      I don’t understand big endian so I wouldn’t advocate for maintaining it. How can I maintain something I don’t understand?

                                      1. 7

                                        It’s sometimes unclear if the people designing big-endian systems understand it: https://mastodon.gamedev.place/@rygorous/114989194745387977

                                        (tho little-endian systems are not without fuckups, such as long ints on the PDP-11 and double precision floats on early ARMs)
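
                                        A tiny C probe makes the byte orders concrete (the PDP-11 layout is noted in a comment, since no modern host prints it natively):

                                        ```c
                                        #include <stdint.h>
                                        #include <stdio.h>

                                        int main(void)
                                        {
                                            uint32_t v = 0x0A0B0C0D;
                                            const uint8_t *b = (const uint8_t *)&v;

                                            /* Little-endian hosts print: 0D 0C 0B 0A
                                             * big-endian hosts print:    0A 0B 0C 0D
                                             * The PDP-11 stored a long as two little-endian words
                                             * with the high word first, i.e.: 0B 0A 0D 0C */
                                            printf("%02X %02X %02X %02X\n", b[0], b[1], b[2], b[3]);
                                            return 0;
                                        }
                                        ```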

                                        1.  

                                          People are advocating for making little endian the default without understanding it.

                                          You can both not understand a thing AND not advocate for or against it.

                                          1.  

                                            Portability to legacy designs has bad externalities for everyone else. Getting C++ to commit to 8-bit bytes and two’s complement is a major endeavor. z/OS and a single locale on AIX (see https://www.ibm.com/docs/en/aix/7.1.0?topic=representation-wide-character-data ) get way more attention in C++ standardization than what would be proportionate considering the number of programmers targeting these systems.

                                            Rust has been very fortunate that it started in a setting where it could ignore z/OS, AIX, TI DSPs, and Unisys. And even so, I think big endian SIMD considerations have at times seemed to put practical usage that’s portable between x86_64 and aarch64 at risk.

                                            1.  

                                              Maintenance is not neutral. It requires positively supporting something.

                                              With little endian I can test my assumptions on any machine within my reach. I don’t even have to specifically test for endianness issues, because if I do something wrong I get immediate feedback that it’s wrong. With big endian the situation is different. I’ll need to get my hands on special hardware (or an emulated environment) and go well out of my way to run tests.

                                              I’m not advocating for or against removing big endian support but I find it silly to claim that they are on equal footing. Little endian is (and has long been) the default. Big endian is limited to a niche (and network protocols, but that’s simpler as you can use native endian except when serialising or deserialising).

                                              1.  

                                                People advocate for making 8-bit bytes, ASCII, and two’s complement the default without understanding them either. There’s going to be a line somewhere.

                                    3. 12

                                      It’s a matter of weighing costs vs benefits, which shouldn’t be limited to corporations. Volunteer projects also have finite maintainer time.

                                      Support for different architectures has a cost. It’s not just a matter of leaving old code alone. Support for certain architectures adds ongoing cost of added code complexity and additional effort of keeping it working, even when developing new features aimed at new hardware.

                                      I think it’s right to question where to draw the line. How much cost for how little benefit is acceptable?

                                      1. 5

                                        One of those costs for volunteer projects is that it is prohibitively time-consuming and expensive to get their hands on BE hardware to test their work on, let alone BE hardware that is acceptably fast so that you can compile big projects on it. I have a vague recollection that IBM might once have had a program giving out free cycles on s390x CI runners, but I can’t find anything about it now, so I’m not confident I didn’t just misremember.

                                        1.  

                                          There is such a program for GitHub Actions (which is usually the default for CI nowadays). IBM does tend to have a lot of churn with offerings like this, though, so I don’t know how long it might last. At least for what I’m supporting, I’m paying (well, for colo; the hardware was cheap) for a big-endian PPC system to do CI on.

                                        2.  

                                          I feel the corporation factor does matter in the linux case, though. With the majority (all?) of maintainers being paid to work on linux, a volunteer maintaining 32-bit may not be able to keep up and may be seen as dragging the project back.

                                          1. 6

                                            You would get the same dynamic if nobody was paid, simply because usefulness×popularity of 64-bit platforms completely crushes the dying 32-bit ones.

                                            1.  

                                              Right, fundamentally support is about whoever shows up, be it because of money or passion. A lot of what Arnd mentions in the talk is axing support for things where no one has shown up.

                                              And as another example, LLVM supports 68k purely because of community volunteers stepping up. No company was interested in that.

                                        3. 9

                                          You somehow missed all the effort that is going into figuring out which platforms are still in use and which are not? I don’t understand how you reach your conclusions from Arnd’s talk at all.

                                          1. 8

                                            FreeBSD has already planned to drop support for most 32-bit architectures, so it is not like linux is special in that regard.

                                            https://www.freebsd.org/platforms/

                                            1. 6

                                              All the discussion of dropping everything in linux just makes it look more and more like a corporate project.

                                              I find these critiques of linux strange, as corporations have been the primary funders of kernel development for the vast majority of the project’s life. Corporations figured out it was cheaper to pool resources and add their support to one Unix-like OS than to maintain their own.

                                              linux is a very special project in that the CTO (Linus) has source-neutral funding and gets to shape policy based primarily on his engineering tastes. If anything (as others have pointed out), its consideration of the real-world usage of non-contributing (financially or in engineering) downstream users is a big deal. But AFAIK the majority of device support is contributed by the manufacturers themselves (at least that’s what the linux corporate contributor percentages would suggest).

                                              When linux moves to only caring about corporate needs, NetBSD will still be there to support older systems.

                                              I don’t have insight into the difference between NetBSD and linux 32-bit support directly. In general, however, NetBSD is able to support so many platforms through the use of HALs. HALs are generally not allowed in linux, but Windows and Android use them because they care about supporting out-of-tree drivers. Android added this extra layer of abstraction to their kernel fork because linux’s 2-year LTS window is short enough that a smartphone’s kernel can be unsupported by the time it ships. Again, HALs may be irrelevant to the 32-bit situation (I would love for someone to chime in on this point), but dropping hardware is not as simple as linux being ruined by some nebulous profit motive.

                                              1.  

                                                The big-endian thing was a tell that the guy doesn’t really know who butters the bread, though. I’ve not heard of any plans to move zArch to little-endian, so Red Hat is automatically out.

                                              2. 5

                                                Note that you can still run 32-bit apps on a 64-bit kernel:

                                                There are still some people who need to run 32-bit applications that cannot be updated; the solution he has been pushing people toward is to run a 32-bit user space on a 64-bit kernel. This is a good solution for memory-constrained systems; switching to 32-bit halves the memory usage of the system. Since, on most systems, almost all memory is used by user space, running a 64-bit kernel has a relatively small cost.
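
                                                A quick way to see that mixed mode on a typical x86-64 distro (this assumes gcc with 32-bit multilib support installed, and a kernel built with the usual compat options such as CONFIG_IA32_EMULATION):

                                                ```c
                                                #include <stdio.h>

                                                /* Build the same source twice:
                                                 *     gcc      -o hello64 hello.c
                                                 *     gcc -m32 -o hello32 hello.c
                                                 * Both binaries run on the same 64-bit kernel via its
                                                 * compat syscall layer; the 32-bit build's half-width
                                                 * pointers are where the memory savings come from. */
                                                int main(void)
                                                {
                                                    printf("pointer size: %zu bits\n", sizeof(void *) * 8);
                                                    return 0;
                                                }
                                                ```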

                                                Also I have been wondering what processors are actually big endian. ~20 years ago I worked on GameCube, which was big endian (I think PowerPC), but it seems like there is little awareness of the issue now. And yeah it seems like big endian has steeply dropped off since then:

                                                A related problem is big-endian support, which is also 32-bit only, and also obsolete. Its removal is blocked because IBM is still supporting big-endian mainframe and PowerPC systems; as long as that support continues, big-endian support will stay in the kernel.

                                                1. 8

                                                  Now that routers appear to have migrated from big-endian MIPS to little-endian ARM, the main cases keeping big-endian alive are IBM and Oracle (on the Solaris side) offering long-term support for IBM Z (POWER has gone little-endian) and Sparc.

                                                    It’s kinda weird that even this thread has complaints about removing support for legacy architectures being “corporate” when there’s an IBM & Oracle interest in big endian.

                                                  1.  

                                                    Now that routers appear to have migrated from big-endian MIPS to little-endian ARM, the main cases keeping big-endian alive are IBM and Oracle (on the Solaris side) offering long-term support for IBM Z (POWER has gone little-endian) and Sparc.

                                                      Power has gone mostly little-endian for linux, but IBM i and AIX haven’t. SPARC is basically dead, but they’re still heavily investing in new z.

                                                    1.  

                                                      And the BSDs still largely run Power big, too.

                                                  2. 7

                                                    Yes, Gamecube was PowerPC (a modified PPC 750 a/k/a G3). Probably the last major big-endian-only devices were the Xbox 360 and PS3, which were also PowerPC derivatives.

                                                    The POWER9 under the desk here is bi-endian. Right now I run it little in linux (ppc64le), but I may run it big later in FreeBSD or OpenBSD. I still find it easier to think and work big-endian personally, and if nothing else, at minimum I try to mark code sections I expect to have endianness issues.

                                                    1.  

                                                      run a 32-bit user space on a 64-bit kernel.

                                                      This, out of mere side-interest, is what the Raspberry Pi Desktop does, or did.

                                                      For those who do not know it exists, the RPiD was the x86 edition of RPi OS, formerly known as Raspbian.

                                                      https://projects.raspberrypi.org/en/projects/install-raspberry-pi-desktop/4

                                                      It is still available but it’s based on Debian 11 so it’s obsolete.

                                                      However it is the single smallest distro with a full GUI desktop on the PC I’ve seen. It uses about 1/4 the RAM of lightweight not-a-full-desktop OpenBox environments like Crunchbang++.

                                                      On a 32-bit PC it’s a 32-bit OS. On a 64-bit PC it’s a 32-bit userland on a 64-bit kernel that can use 8GB of RAM.