Linux Kernel Monkey Log

Random bits from Greg Kroah-Hartman

8 Bits Are Enough for a Version Number…

As was pointed out to us stable kernel maintainers last week, the overflow of the .y release number was going to happen soon, and our proposed solution for it (using 16 bits instead of 8) turns out to break a userspace-visible api.

As we can’t really break this, I released 4.4.256 and 4.9.256 today, containing nothing but a new version number. See the links for the full technical details if curious.
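For the curious, the issue is how the kernel packs the version into a single integer for userspace: LINUX_VERSION_CODE is built as (major << 16) + (minor << 8) + patch, which leaves only 8 bits for the .y number. A quick shell check shows the collision a .y of 256 would cause:

$ echo $(( (4 << 16) + (4 << 8) + 256 ))   # "4.4.256" overflows the 8-bit field...
263424
$ echo $(( (4 << 16) + (5 << 8) + 0 ))     # ...and becomes indistinguishable from 4.5.0
263424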

Right now I’m asking everyone who uses these older kernel releases to upgrade to this release and do a full rebuild of their systems in order to see what might, or might not, break. If problems happen, please let us know on the stable@vger.kernel.org mailing list as soon as possible, as I can only hold off on doing new stable releases for these branches for a single week (i.e. until February 12, 2021).

Helping Out With LTS Kernel Releases

A recent email thread about “Why isn’t the 5.10 stable kernel listed as supported for 6 years yet!” on the linux-kernel mailing list ended up generating a bunch of direct emails to me asking what different companies and individuals could do to help out. What exactly was I looking for here?

Instead of having to respond to private emails with the same information over and over, I figured it was better to just put it here so that everyone can see what exactly I am expecting with regards to support in order to be able to maintain a kernel for longer than 2 years:

What I need help with

All I request is that people test the -rc releases when I announce them, and let me know if they work or not for their systems/workloads/tests/whatever.

If you look at the -rc announcements today, you will see a number of different people/groups responding with this information. If they want, they can provide a Tested-by: ... line that I will add to the release commit, or not, that’s up to them.

Here and here and here and here and here are all great examples of how people let me know that all is ok with the -rc kernels so that I know it is “safe” to do the release.

I also have a few companies send me private emails that all is good; there’s no requirement to announce this in public if you don’t want to (but it is nice, as kernel development should be done in public).

Some companies can’t do tests on -rc releases due to their build infrastructures not handling that very well, so they email me after the stable release is out, saying all is good. Worst case, we end up reverting a patch in a released kernel, but it’s better to quickly do that based on testing than to miss it entirely because no one is testing at all.

And that’s it!

But, if you want to do more, I always really appreciate when people email me, or stable@vger.kernel.org, git commit ids that need to be backported to specific stable kernel trees because they found them in their testing/development efforts. You know what problems you hit better than anyone, and once those issues are found and fixed, making sure they get backported is a good thing, so I always want to know that.

Again, if you look on the stable@vger.kernel.org list, you will see different companies and developers providing backports of things they want backported, or just a list of the git commit ids if the backports apply cleanly.
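As a sketch of what such a request can look like (the commit id and subject here are made up), it really can be as simple as:

To: stable@vger.kernel.org
Subject: Backport request for 4.19.y

Commit 0123456789ab ("usb: example: fix the thing") fixes a crash we hit
in testing; it applies cleanly to 4.19.y and older trees.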

Does that sound reasonable? I want to make sure that the LTS kernels that you rely on actually work for you without regressions, so testing is key, as is finding any fixes that are needed for them.

It’s not much, but I can’t do it alone :)

So, 6 years or not for 5.10?

The above is what I need in order to be able to support a kernel for 6 years: constant testing by users of the kernels. If we don’t have that, then why even do these releases, as that must mean no one is using them? So email me and let me know.

As of this point in time (February 3, 2021), I do not have enough commitments from companies to help out with this effort to be able to say I can do this for 6 years right now (note, no response yet from the company that originally asked this question…). Hopefully that changes soon, and if it does, the kernel.org release page will be updated with the new date.

Fast Kernel Builds - 2020

A number of months ago I did an “Ask Me Anything” interview on r/linux on Reddit. As part of that, a discussion of the hardware I used came up, and someone said, “I know someone that can get you a new machine!” or something like that.

Fast forward a few months, and a “beefy” AMD Threadripper 3970X shows up on my doorstep thanks to the amazing work of Wendell Wilson at Level One Techs.

Ever since I started doing Linux kernel development, the hardware I use has been a mix of things donated to me for development (workstations from Intel and IBM, laptops from Dell), machines my employer has bought for me (various laptops over the years), and machines I’ve bought on my own because I “needed” them (workstations built from scratch, Apple Mac Minis, laptops from Apple and Dell and ASUS and Panasonic). I know I am extremely lucky in this position, and anything that has been donated to me has been done so only to ensure that the hardware works well on Linux. “Will code for hardware” was an early mantra of many kernel developers, myself included, and hardware companies are usually willing to donate machines and peripherals to ensure kernel support.

This new AMD machine is just another in a long line of good workstations that help me read email really well. Oops, I mean, “do kernel builds really fast”…

For full details on the system, see this forum description, and this video that Wendell did in building the machine, and then this video of us talking about it before it was sent out. We need to do a follow-on one now that I’ve had it for a few months and have gotten used to it.

Benchmark tools

Below I post the results of some benchmarks that I have done to try to show the speed of different systems. I’ve used the tool Fio version fio-3.23-28-g7064, kcbench version v0.9.0 (from git), and perf version 5.7.g3d77e6a8804a. All of these are great for doing real-world tests of I/O systems (fio), kernel build tests (kcbench), and “what is my system doing at the moment” queries (perf). I recommend trying all of these out yourself if you haven’t done so already.
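If you want to try them out, the simplest invocations look something like this (a sketch; the runs shown below used more specific settings):

$ fio examples/ssd-test.fio    # run fio's stock SSD job file against the current directory
$ kcbench                      # fetch a kernel source tree and time defconfig builds
$ perf stat ./some_script.sh   # count cycles, instructions, etc. for any command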

Fast Builds

I’ve been using a laptop as my primary development system for a number of years now, due to travel and moving around a bit, and because it was just “good enough” at the time. I do some local builds and testing, but have a “build machine” in a data center somewhere that I do all of my normal stable kernel builds on, as it is much, much faster than any laptop. It is set up to do kernel builds directly off of a RAM disk, ensuring that I/O isn’t an issue. Given that it has 128GB of RAM, carving out a 40GB ramdisk for kernel builds (room for 4-5 trees at once) has worked really well, with builds of a full kernel tree finishing in a few minutes.
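Setting something like that up is simple; here is a minimal sketch (the sizes and paths are illustrative, not my exact setup):

$ sudo mkdir -p /mnt/ramdisk
$ sudo mount -t tmpfs -o size=40G tmpfs /mnt/ramdisk
$ cp -a ~/linux /mnt/ramdisk/ && cd /mnt/ramdisk/linux
$ make -j40 vmlinux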

Here’s the output of kcbench on my data center build box which is running Fedora 32:

Processor:           Intel Core Processor (Broadwell) [40 CPUs]
Cpufreq; Memory:     Unknown; 120757 MiB
Linux running:       5.8.7-200.fc32.x86_64 [x86_64]
Compiler:            gcc (GCC) 10.2.1 20200723 (Red Hat 10.2.1-1)
Linux compiled:      5.7.0 [/home/gregkh/.cache/kcbench/linux-5.7]
Config; Environment: defconfig; CCACHE_DISABLE="1"
Build command:       make vmlinux
Filling caches:      This might take a while... Done
Run 1 (-j 40):       81.92 seconds / 43.95 kernels/hour [P:3033%]
Run 2 (-j 40):       83.38 seconds / 43.18 kernels/hour [P:2980%]
Run 3 (-j 46):       82.11 seconds / 43.84 kernels/hour [P:3064%]
Run 4 (-j 46):       81.43 seconds / 44.21 kernels/hour [P:3098%]

Contrast that with my current laptop:

Processor:           Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz [8 CPUs]
Cpufreq; Memory:     powersave [intel_pstate]; 15678 MiB
Linux running:       5.8.8-arch1-1 [x86_64]
Compiler:            gcc (GCC) 10.2.0
Linux compiled:      5.7.0 [/home/gregkh/.cache/kcbench/linux-5.7]
Config; Environment: defconfig; CCACHE_DISABLE="1"
Build command:       make vmlinux
Filling caches:      This might take a while... Done
Run 1 (-j 8):        392.69 seconds / 9.17 kernels/hour [P:768%]
Run 2 (-j 8):        393.37 seconds / 9.15 kernels/hour [P:768%]
Run 3 (-j 10):       394.14 seconds / 9.13 kernels/hour [P:767%]
Run 4 (-j 10):       392.94 seconds / 9.16 kernels/hour [P:769%]
Run 5 (-j 4):        441.86 seconds / 8.15 kernels/hour [P:392%]
Run 6 (-j 4):        440.31 seconds / 8.18 kernels/hour [P:392%]
Run 7 (-j 6):        413.48 seconds / 8.71 kernels/hour [P:586%]
Run 8 (-j 6):        412.95 seconds / 8.72 kernels/hour [P:587%]

Then the new workstation:

Processor:           AMD Ryzen Threadripper 3970X 32-Core Processor [64 CPUs]
Cpufreq; Memory:     schedutil [acpi-cpufreq]; 257693 MiB
Linux running:       5.8.8-arch1-1 [x86_64]
Compiler:            gcc (GCC) 10.2.0
Linux compiled:      5.7.0 [/home/gregkh/.cache/kcbench/linux-5.7/]
Config; Environment: defconfig; CCACHE_DISABLE="1"
Build command:       make vmlinux
Filling caches:      This might take a while... Done
Run 1 (-j 64):       37.15 seconds / 96.90 kernels/hour [P:4223%]
Run 2 (-j 64):       37.14 seconds / 96.93 kernels/hour [P:4223%]
Run 3 (-j 71):       37.16 seconds / 96.88 kernels/hour [P:4240%]
Run 4 (-j 71):       37.12 seconds / 96.98 kernels/hour [P:4251%]
Run 5 (-j 32):       43.12 seconds / 83.49 kernels/hour [P:2470%]
Run 6 (-j 32):       43.81 seconds / 82.17 kernels/hour [P:2435%]
Run 7 (-j 38):       41.57 seconds / 86.60 kernels/hour [P:2850%]
Run 8 (-j 38):       42.53 seconds / 84.65 kernels/hour [P:2787%]

Having a local machine that builds kernels faster than my external build box has been a liberating experience. I can do many more local tests before sending things off to the build systems for “final test builds” there.

Here’s a picture of my local box doing kernel builds, and the remote machine doing builds at the same time, both running bpytop to monitor what is happening (htop doesn’t work well for huge numbers of cpus). It’s not really all that useful, but it is fun eye-candy.

SSD vs. NVME

As shipped to me, the machine booted from a RAID array of NVME disks. Outside of laptops, I’ve not used NVME disks, only SSDs. Given that I didn’t really “trust” the Linux install on the disks, I deleted the data on them, installed a trusty SATA SSD, and got Linux up and running well on it.

After that was all up and running well (btw, I use Arch Linux), I looked into the NVME disk, to see if it really would help my normal workflow out or not.

Firing up fio, here are the summary numbers of the different disk systems using the default “examples/ssd-test.fio” test settings:

SSD:

Run status group 0 (all jobs):
   READ: bw=219MiB/s (230MB/s), 219MiB/s-219MiB/s (230MB/s-230MB/s), io=10.0GiB (10.7GB), run=46672-46672msec

Run status group 1 (all jobs):
   READ: bw=114MiB/s (120MB/s), 114MiB/s-114MiB/s (120MB/s-120MB/s), io=6855MiB (7188MB), run=60001-60001msec

Run status group 2 (all jobs):
  WRITE: bw=177MiB/s (186MB/s), 177MiB/s-177MiB/s (186MB/s-186MB/s), io=10.0GiB (10.7GB), run=57865-57865msec

Run status group 3 (all jobs):
  WRITE: bw=175MiB/s (183MB/s), 175MiB/s-175MiB/s (183MB/s-183MB/s), io=10.0GiB (10.7GB), run=58539-58539msec

Disk stats (read/write):
  sda: ios=4375716/5243124, merge=548/5271, ticks=404842/436889, in_queue=843866, util=99.73%

NVME:

Run status group 0 (all jobs):
   READ: bw=810MiB/s (850MB/s), 810MiB/s-810MiB/s (850MB/s-850MB/s), io=10.0GiB (10.7GB), run=12636-12636msec

Run status group 1 (all jobs):
   READ: bw=177MiB/s (186MB/s), 177MiB/s-177MiB/s (186MB/s-186MB/s), io=10.0GiB (10.7GB), run=57875-57875msec

Run status group 2 (all jobs):
  WRITE: bw=558MiB/s (585MB/s), 558MiB/s-558MiB/s (585MB/s-585MB/s), io=10.0GiB (10.7GB), run=18355-18355msec

Run status group 3 (all jobs):
  WRITE: bw=553MiB/s (580MB/s), 553MiB/s-553MiB/s (580MB/s-580MB/s), io=10.0GiB (10.7GB), run=18516-18516msec

Disk stats (read/write):
    md0: ios=5242880/5237386, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=1310720/1310738, aggrmerge=0/23, aggrticks=63986/25048, aggrin_queue=89116, aggrutil=97.67%
  nvme3n1: ios=1310720/1310729, merge=0/0, ticks=63622/25626, in_queue=89332, util=97.63%
  nvme0n1: ios=1310720/1310762, merge=0/92, ticks=63245/25529, in_queue=88858, util=97.67%
  nvme1n1: ios=1310720/1310735, merge=0/3, ticks=64009/24018, in_queue=88114, util=97.58%
  nvme2n1: ios=1310720/1310729, merge=0/0, ticks=65070/25022, in_queue=90162, util=97.49%

Full logs of both tests can be found here for the SSD, and here for the NVME array.

Basically, the NVME array is anywhere from about 1.5 to nearly 4 times faster than the SSD, depending on the specific read/write test, and is faster for everything overall.

But does such fast storage actually matter for my normal workload of kernel builds? A kernel build is very I/O intensive, but only up to a point: if the storage system can keep the CPUs “full” of new data to build, and writes do not stall, a kernel build ends up limited by CPU power, not storage.

So, is an SSD “fast” enough on a huge AMD Threadripper system?

In short, yes, here’s the output of kcbench running on the NVME disk:

Processor:           AMD Ryzen Threadripper 3970X 32-Core Processor [64 CPUs]
Cpufreq; Memory:     schedutil [acpi-cpufreq]; 257693 MiB
Linux running:       5.8.8-arch1-1 [x86_64]
Compiler:            gcc (GCC) 10.2.0
Linux compiled:      5.7.0 [/home/gregkh/.cache/kcbench/linux-5.7/]
Config; Environment: defconfig; CCACHE_DISABLE="1"
Build command:       make vmlinux
Filling caches:      This might take a while... Done
Run 1 (-j 64):       36.97 seconds / 97.38 kernels/hour [P:4238%]
Run 2 (-j 64):       37.18 seconds / 96.83 kernels/hour [P:4220%]
Run 3 (-j 71):       37.14 seconds / 96.93 kernels/hour [P:4248%]
Run 4 (-j 71):       37.22 seconds / 96.72 kernels/hour [P:4241%]
Run 5 (-j 32):       44.77 seconds / 80.41 kernels/hour [P:2381%]
Run 6 (-j 32):       42.93 seconds / 83.86 kernels/hour [P:2485%]
Run 7 (-j 38):       42.41 seconds / 84.89 kernels/hour [P:2797%]
Run 8 (-j 38):       42.68 seconds / 84.35 kernels/hour [P:2787%]

Almost the exact same number of kernels built per hour.

So for a kernel developer, right now, an SSD is “good enough”, right?

It’s not just all builds

While kernel builds are the most time-consuming thing that I do on my systems, the other “heavy” thing that I do is lots of git commands on the Linux kernel tree. git is really fast, but it is limited by the speed of the storage medium for lots of different operations (clones, switching branches, and the like).

After I switched to running my kernel trees off of the NVME storage, it “felt” like git was going faster, so I came up with some totally artificial benchmarks to try to see if this was really true or not.

One common thing is cloning a whole kernel tree from a local version in a new directory to do different things with it. Git is great in that you can keep the “metadata” in one place, and only check out the source files in the new location, but dealing with 70 thousand files is not “free”.

$ cat clone_test.sh
#!/bin/bash
git clone -s ../work/torvalds/ test
sync

And, to make sure the data isn’t just coming out of the kernel cache, be sure to flush all caches first.

SSD output:

$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
$ perf stat ./clone_test.sh
Cloning into 'test'...
done.
Updating files: 100% (70006/70006), done.

 Performance counter stats for './clone_test.sh':

          4,971.83 msec task-clock:u              #    0.536 CPUs utilized
                 0      context-switches:u        #    0.000 K/sec
                 0      cpu-migrations:u          #    0.000 K/sec
            92,713      page-faults:u             #    0.019 M/sec
    14,623,046,712      cycles:u                  #    2.941 GHz                      (83.18%)
       720,522,572      stalled-cycles-frontend:u #    4.93% frontend cycles idle     (83.40%)
     3,179,466,779      stalled-cycles-backend:u  #   21.74% backend cycles idle      (83.06%)
    21,254,471,305      instructions:u            #    1.45  insn per cycle
                                                  #    0.15  stalled cycles per insn  (83.47%)
     2,842,560,124      branches:u                #  571.734 M/sec                    (83.21%)
       257,505,571      branch-misses:u           #    9.06% of all branches          (83.68%)

       9.270460632 seconds time elapsed

       3.505774000 seconds user
       1.435931000 seconds sys

NVME disk:

$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
~/linux/tmp $ perf stat ./clone_test.sh
Cloning into 'test'...
done.
Updating files: 100% (70006/70006), done.

 Performance counter stats for './clone_test.sh':

          5,183.64 msec task-clock:u              #    0.833 CPUs utilized
                 0      context-switches:u        #    0.000 K/sec
                 0      cpu-migrations:u          #    0.000 K/sec
            87,409      page-faults:u             #    0.017 M/sec
    14,660,739,004      cycles:u                  #    2.828 GHz                      (83.46%)
       712,429,063      stalled-cycles-frontend:u #    4.86% frontend cycles idle     (83.40%)
     3,262,636,019      stalled-cycles-backend:u  #   22.25% backend cycles idle      (83.09%)
    21,241,797,894      instructions:u            #    1.45  insn per cycle
                                                  #    0.15  stalled cycles per insn  (83.50%)
     2,839,260,818      branches:u                #  547.735 M/sec                    (83.30%)
       258,942,077      branch-misses:u           #    9.12% of all branches          (83.25%)

       6.219492326 seconds time elapsed

       3.336154000 seconds user
       1.593855000 seconds sys

So a “clone” is faster by 3 seconds. Nothing earth-shattering, but noticeable.

But clones are rare; what’s more common is switching between branches, which checks out the subset of files that differ between them. It’s a lot of logic to figure out exactly which files need to change.

Here’s the test script:

$ cat branch_switch_test.sh
#!/bin/bash
cd test
# note: "co" is a git alias for "checkout"; spelled out here so it runs anywhere
git checkout -b old_kernel v4.4
sync
git checkout -b new_kernel v5.8
sync

And the results on the different disks:

SSD:

$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
$ perf stat ./branch_switch_test.sh
Updating files: 100% (79044/79044), done.
Switched to a new branch 'old_kernel'
Updating files: 100% (77961/77961), done.
Switched to a new branch 'new_kernel'

 Performance counter stats for './branch_switch_test.sh':

     10,500.82 msec task-clock:u              #    0.613 CPUs utilized
         0      context-switches:u        #    0.000 K/sec
         0      cpu-migrations:u          #    0.000 K/sec
       195,900      page-faults:u             #    0.019 M/sec
    27,773,264,048      cycles:u                  #    2.645 GHz                      (83.35%)
     1,386,882,131      stalled-cycles-frontend:u #    4.99% frontend cycles idle     (83.54%)
     6,448,903,713      stalled-cycles-backend:u  #   23.22% backend cycles idle      (83.22%)
    39,512,908,361      instructions:u            #    1.42  insn per cycle
                          #    0.16  stalled cycles per insn  (83.15%)
     5,316,543,747      branches:u                #  506.298 M/sec                    (83.55%)
       472,900,788      branch-misses:u           #    8.89% of all branches          (83.18%)

      17.143453331 seconds time elapsed

       6.589942000 seconds user
       3.849337000 seconds sys

NVME:

$ sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
~/linux/tmp $ perf stat ./branch_switch_test.sh
Updating files: 100% (79044/79044), done.
Switched to a new branch 'old_kernel'
Updating files: 100% (77961/77961), done.
Switched to a new branch 'new_kernel'

 Performance counter stats for './branch_switch_test.sh':

         10,945.41 msec task-clock:u              #    0.921 CPUs utilized
                 0      context-switches:u        #    0.000 K/sec
                 0      cpu-migrations:u          #    0.000 K/sec
           197,776      page-faults:u             #    0.018 M/sec
    28,194,940,134      cycles:u                  #    2.576 GHz                      (83.37%)
     1,380,829,465      stalled-cycles-frontend:u #    4.90% frontend cycles idle     (83.14%)
     6,657,826,665      stalled-cycles-backend:u  #   23.61% backend cycles idle      (83.37%)
    41,291,161,076      instructions:u            #    1.46  insn per cycle
                                                  #    0.16  stalled cycles per insn  (83.00%)
     5,353,402,476      branches:u                #  489.100 M/sec                    (83.25%)
       469,257,145      branch-misses:u           #    8.77% of all branches          (83.87%)

      11.885845725 seconds time elapsed

       6.741741000 seconds user
       4.141722000 seconds sys

Just over 5 seconds faster on the NVME disk array.

Now 5 seconds doesn’t sound like much, but I’ll take it…

Conclusion

If you haven’t looked into new hardware in a while, or are stuck doing kernel development on a laptop, please seriously consider doing so; the power available in a small desktop tower these days is well worth it if possible (and who is traveling anymore that needs a laptop?).

Again, many thanks to Level1Techs for the hardware, it’s been put to very good use.

Patch Workflow With Mutt - 2019

Given that the main development workflow for most kernel maintainers is email, I spend a lot of time in my email client. For the past few decades I have used mutt, but every once in a while I look around to see if there is anything else out there that might work better.

One project that looks promising is aerc, which was started by Drew DeVault. It is a terminal-based email client written in Go, and relies on a lot of other Go libraries to handle the “grungy” work of dealing with IMAP, email parsing, and the other fun free-flowing text handling that email requires.

aerc isn’t in a usable state for me just yet, but Drew asked if I could document exactly how I use an email client for my day-to-day workflow to see what needs to be done to aerc to have me consider switching.

Note, this isn’t a criticism of mutt at all. I love the tool, and spend more time using that userspace program than any other. But as anyone who knows email clients will tell you, they all suck, it’s just that mutt sucks less than everything else (that’s literally their motto).

I did a basic overview of how I apply patches to the stable kernel trees quite a few years ago, but my workflow has evolved over time, so instead of just writing a private email to Drew, I figured it was time to post something showing others just how the sausage really is made.

Anyway, my email workflow can be divided up into 3 different primary things that I do:

  • basic email reading, management, and sorting
  • reviewing new development patches and applying them to a source repository.
  • reviewing potential stable kernel patches and applying them to a source repository.

Given that all stable kernel patches need to already be in Linus’s kernel tree first, the workflow for the stable tree is much different from the new-patch workflow.

Basic email reading

All of my email ends up in one of two “inboxes” on my local machine. The first is for everything sent directly to me (either with To: or Cc:), as well as a number of mailing lists for which I read every message, because I am a maintainer of those subsystems (like USB, or stable). The second inbox consists of other mailing lists that I do not read every message of, but review as needed, and can be referenced when I need to look something up. Those mailing lists are the “big” linux-kernel mailing list, to ensure I have a local copy to search from when I am offline (due to traveling), as well as other “minor” development mailing lists that I like to keep a copy of locally, like linux-pci, linux-fsdevel, and a few other smaller vger lists.

I get these maildir folders synced with the mail server using mbsync, which works really well and is much faster than offlineimap, which I used for many, many years; it ends up being really slow when you do not live on the same continent as the mail server. Luis’s recent post about switching to mbsync finally pushed me to take the time to configure it all properly, and I am glad that I did.
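A minimal ~/.mbsyncrc for this kind of setup looks roughly like the following (the account name, server, and credentials are made up for illustration):

IMAPAccount work
Host imap.example.com
User gregkh
PassCmd "pass show mail/work"
SSLType IMAPS

IMAPStore work-remote
Account work

MaildirStore work-local
Path ~/mail/
Inbox ~/mail/INBOX

Channel work
Far :work-remote:
Near :work-local:
Patterns *
SyncState *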

Let’s ignore my “lists” inbox, as any email client should be able to read that by just pointing it at the maildir. I do this with a simple alias:

alias muttl='mutt -f ~/mail_linux/'

which allows me to type muttl at any command line to instantly bring it up.

What I spend most of my time in is my “main” mailbox, which is a local maildir that gets synced when needed in ~/mail/INBOX/. A simple mutt on the command line brings this up.

Yes, everything just ends up in one place; in handling my mail, I prune relentlessly. Everything ends up in one of 3 states for what I need to do next:

  • not read yet
  • read and left in INBOX as I need to do something “soon” about it
  • read and it is a patch to do something with

Everything that does not require a response, or that I’ve already responded to, gets deleted from the main INBOX at that point in time, or saved into an archive in case I need to refer back to it again (like mailing list messages).

That last state means I save the message into one of two local maildirs, todo and stable. Everything in todo is a new patch that I need to review, comment on, or apply to a development tree. Everything in stable has to do with patches that need to get applied to the stable kernel tree.

Side note, I have scripts that run frequently that email me any patches that need to be applied to the stable kernel trees, when they hit Linus’s tree. That way I can just live in my email client and have everything that needs to be applied to a stable release in one place.

I sweep my main INBOX every few hours and sort things out by quickly responding, deleting, archiving, or saving into the todo or stable directory. I don’t achieve a constant “inbox zero”, but if I only have 40 or so emails in there, I am doing well.

So, for this main workflow, I need an easy way to:

  • filter the INBOX by a pattern so that I only see one “type” of message at a time (more below)
  • read an email
  • write an email
  • respond to existing email, and use vim as an editor as my hands have those key bindings burned into them.
  • delete an email
  • save an email to one of two mboxes with a few keystrokes
  • bulk delete/archive/save emails all at once

These are all tasks that I bet almost everyone needs to do all the time, so a tool like aerc should be able to do that easily.

A note about filtering. As everything comes into one inbox, it is easiest to filter that mbox by whatever I am working on at the moment, so I can process everything of one type at once.

As an example, I want to read all of the messages sent to the linux-usb mailing list right now, and not see anything else. To do that, in mutt, I press l (limit) which brings up a prompt for a filter to apply to the mbox. This ability to limit messages to one type of thing is really powerful and I use it in many different ways within mutt.
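A few limit patterns of the sort I mean (the list address is just an example; the PATTERNS section of ‘man muttrc’ has the full list):

l linux-usb                      # a plain string matches From: or Subject: by default
l ~C linux-usb@vger.kernel.org   # messages sent To: or Cc: the list
l ~N                             # messages I have not read yet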

Here’s an example of me just viewing all of the messages that are sent to the linux-usb mailing list, and saving them off after I have read them.

This isn’t that complex, but it has to work quickly and well on mailboxes that are really really big. As an example, here’s me opening my “all lists” mbox and filtering on the linux-api mailing list messages that I have not read yet. It’s really fast as mutt caches lots of information about the mailbox and does not require reading all of the messages each time it starts up to generate its internal structures.

All messages that I want to save to the todo directory I can handle with a two-keystroke sequence, .t, which saves the message there automatically.

Again, those are bindings I set up years ago: ‘,’ jumps to a specific mbox, and ‘.’ copies the message to that location.
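For reference, bindings along those lines can be created in a muttrc with something like this (a sketch, not my exact configuration; the maildir names are assumptions):

macro index ,t '<change-folder>=todo<enter>'
macro index .t '<copy-message>=todo<enter>'
macro index .s '<copy-message>=stable<enter>'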

Now you see why using mutt is not exactly obvious, those bindings are not part of the default configuration and everyone ends up creating their own custom key bindings for whatever they want to do. It takes a good amount of time to figure this out and set things up how you want, but once you are over that learning curve, you can do very complex things easily. Much like an editor (emacs, vim), you can configure them to do complex things easily, but getting to that level can take a lot of time and knowledge. It’s a tool, and if you are going to rely on it, you should spend the time to learn how to use your tools really well.

Hopefully aerc can get to this level of functionality soon. Odds are everyone else does something much like this, as my use-case is not unusual.

Now let’s get to the unusual use cases, the fun things:

Development Patch review and apply

When I decide it’s time to review and apply patches, I do so by subsystem (as I maintain a number of different ones). As all pending patches are in one big maildir, I filter the messages by the subsystem I care about at the moment, and save all of the messages out to a local mbox file that I call s (hey, naming is hard, it gets worse, just wait…)

So, in my linux/work/ local directory, I keep the development trees for different subsystems like usb, char-misc, driver-core, tty, and staging.

Let’s look at how I handle some staging patches.

First, I go into my ~/linux/work/staging/ directory, which I will stay in while doing all of this work. I open the todo mbox with a quick ,t pressed within mutt (a macro I picked up from somewhere long ago, I don’t remember where…), then filter all staging messages and save them to a local mbox with the following keystrokes:

mutt
,t
l staging
T
s ../s

Yes, I could skip the l staging step, and just do T staging instead of T, but it’s nice to see what I’m going to save off first before doing so.

Now all of those messages are in a local mbox file that I can open with a single keystroke, ’s’ on the command line. That is an alias:

alias s='mutt -f ../s'

I then dig around in that mbox, sorting patches by driver type to see everything for one driver at once by filtering on the name, and then save those messages to another mbox called ‘s1’ (see, I told you the names got worse).

s
l erofs
T
s ../s1

I have lots of local mbox files all “intuitively” named ‘s1’, ‘s2’, and ‘s3’. Of course I have aliases to open those files quickly:

alias s1='mutt -f ../s1'
alias s2='mutt -f ../s2'
alias s3='mutt -f ../s3'

I have a number of these mbox files as sometimes I need to filter even further by patch set, or other things, and saving them all to different mboxes makes things go faster.

So, all the erofs patches are in one mbox; let’s open it up, review them, and save the patches that look good enough to apply to another mbox.

Turns out that not all of the patches need to be dealt with right now (moving erofs out of the staging directory requires other people to review it), so I just save those messages back to the todo mbox.

Now I have a single patch that I want to apply, but I need to add some acks that the maintainers of erofs provided. I do this by editing the “raw” message directly from within mutt. I open the individual messages from the maintainers, cut their Reviewed-by: lines, and then edit the original patch and add those lines to it.

Some kernel maintainers right now are screaming something like “Automate this!”, “Patchwork does this for you!”, “Are you crazy?” Yeah, this is one place that I need to work on, but the time involved to do this is not that much and it’s not common that others actually review patches for subsystems I maintain, unfortunately.

The ability to edit a single message directly within my email client is essential. I end up having to fix up changelog text, edit the subject line to be correct, fix the mail headers to not do foolish things with text formats, and in some cases edit the patch itself for when it is corrupted or needs to be fixed (I want a LinkedIn skill badge for “can edit diff files by hand and have them still work”).

So one hard requirement I have is “editing a raw message from within the email client.” If an email client can not do this, it’s a non-starter for me, sorry.

So we now have a single patch that needs to be applied to the tree. I am already in the ~/linux/work/staging/ directory, and on the correct git branch for where this patch needs to go (how I handle branches and how patches move between them deserve a totally different blog post…)

I can apply this patch in one of two different ways: using git am -s ../s1 on the command line, piping the whole mbox into git and applying the patches directly, or applying them individually within mutt by using a macro.

When I have a lot of patches to apply, I just pipe the mbox file to git am -s as I’m comfortable with that, and it goes quick for multiple patches. It also works well as I have lots of different terminal windows open in the same directory when doing this and I can quickly toggle between them.

But we are talking about email clients at the moment, so here’s me applying a single patch to the local git tree.

All it took was hitting the L key. That key is set up as a macro in my mutt configuration file with a single line:

macro index L '| git am -s'\n

This macro pipes the output of the current message to git am -s.

The ability of mutt to pipe the current message (or messages) to external scripts is essential for my workflow in a number of different places. Not having to leave the email client but being able to run something else with that message, is a very powerful functionality, and again, a hard requirement for me.

So that’s it for applying development patches. It’s a bunch of the same tasks over and over:

  • collect patches by a common theme
  • filter the patches by a smaller subset
  • review them manually and responding if there are problems
  • saving “good” patches off to apply
  • applying the good patches
  • jump back to the first step

Doing that all within the email program and being able to quickly get in, and out of the program, as well as do work directly from the email program, is key.

Of course I do a “test build and sometimes test boot and then push git trees and notify author that the patch is applied” set of steps when applying patches too, but those are outside of my email client workflow and happen in a separate terminal window.

Stable patch review and apply

The process of reviewing patches for the stable tree is much like the development patch process, but it differs in that I never use ‘git am’ for applying anything.

The stable kernel tree, while under development, is kept as a series of patches that need to be applied to the previous release. This series of patches is maintained using a tool called quilt. Quilt is very powerful and easily handles sets of patches that need to be applied on top of a moving base. The tool was based on a crazy set of shell scripts written by Andrew Morton a long time ago, was rewritten in perl to make it more maintainable, and is currently maintained by Jean Delvare. It handles thousands of patches easily and quickly and is used by many developers to handle kernel patches for distributions as well as other projects.

I highly recommend it, as it allows you to reorder, drop, and add patches in the middle of the series, manipulate patches in all sorts of ways, and create new patches directly. I do this for the stable tree because lots of times we end up dropping patches from the middle of the series when reviewers say they should not be applied, adding new patches where needed as prerequisites of existing patches, and making other changes that, with git, would require lots of rebasing.
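For a feel of the basic operations (this is stock quilt, not the stable-queue tooling built on top of it):

$ quilt import ../fix.patch   # add a new patch to the series
$ quilt push -a               # apply all patches in the series, in order
$ quilt pop                   # un-apply the topmost patch
$ quilt refresh               # regenerate the topmost patch after hand edits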

Rebasing a git tree does not work when you have developers working “down” from your tree. We usually have the rule with kernel development that if you have a public tree, it never gets rebased, otherwise no one can use it for development.

Anyway, the stable patches are kept in a quilt series in a repository that is itself kept under version control in git (complex, yeah, sorry). That queue can always be found here.

I do create a linux-stable-rc git tree that is constantly rebased from the stable queue for those who run test systems that can not handle quilt patches. That tree is found here and should not ever be used by anyone for anything other than automated testing. See this email for a bit more explanation of how these git trees should, and should not, be used.

With all that background information behind us, let’s look at how I take patches that are in Linus’s tree, and apply them to the current stable kernel queues:

First I open the stable mbox. Then I filter to everything that has “upstream” in the subject line. Then I filter again by “alsa” to only look at the alsa patches. I look at the individual patches, verifying that each really is something that should be applied to the stable tree, and determine what order to apply them in based on the date of the original commits.

I then hit F to pipe the message to a script that looks up the Fixes: tag in the message to determine which stable tree, if any, first contained the commit that this patch fixes.

In this example, the patch only needs to go back to the 4.19 kernel tree, so when I apply it, I know to stop at that place and not go further.
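That F key is bound much like the A macro shown below; a sketch of what the binding looks like (the script name here is made up, mine is a local helper):

macro index F |'~/linux/stable/find_fixes_tree'\n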

To apply the patch, I hit A, which is another macro that I define in my mutt configuration:

macro index A |'~/linux/stable/apply_it_from_email'\n
macro pager A |'~/linux/stable/apply_it_from_email'\n

It is defined “twice” because you can have different key bindings for when you are looking at a mailbox’s index of all messages versus when you are looking at the contents of a single message.

In both cases, I pipe the whole email message to my apply_it_from_email script.

That script digs through the message, finds the git commit id of the patch in Linus’s tree, then runs a different script that takes that commit id, exports the patch associated with it, edits the message to add my signed-off-by, and drops me into my editor to make any tweaks that might be needed (sometimes files get renamed, so I have to fix that by hand; it also gives me one final chance to review the patch in my editor, which is usually easier than in the email client directly, as I have better syntax highlighting and can search and review the text better).

If all goes well, I save the file and the script continues, applying the patch to a bunch of stable kernel trees, one after another, adding it to the quilt series for each specific kernel version. To do all of this I had to spawn a separate terminal window, as mutt does fun things to standard input/output when piping messages to a script, and I could never figure out how to do this without the extra spawned process.

Here it is in action; it’s a video because asciinema can’t show multiple windows at the same time.

Once I have applied the patch, I save it away as I might need to refer to it again, and I move on to the next one.

This sounds like a lot of different steps, but I can process a lot of these relatively quickly. The patch review step is the slowest one here, as that of course can not be automated.

I later take those new patches that have been applied and run kernel build tests and other things before sending out emails saying they have been applied to the tree. But like with development patches, that happens outside of my email client workflow.

Bonus, sending email from the command line

In writing this up, I remembered that I do have some scripts that use mutt to send email out. I don’t normally use mutt for this for patch reviews, as I use other scripts for that (ones that eventually got turned into git send-email), so it’s not a hard requirement, but it is nice to be able to do a simple:

mutt -s "${subject}" "${address}" <  ${msg} >> error.log 2>&1

from within a script when needed.

Thunderbird can also do this; I have used:

thunderbird --compose "to='${address}',subject='${subject}',message=${msg}"

at times in the past when dealing with email servers that mutt can not connect to easily (i.e. gmail when using oauth tokens).

Summary of what I need from an email client

So, to summarize it all for Drew, here’s my list of requirements for me to be able to use an email client for kernel maintainership roles:

  • work with local mbox and maildir folders easily
  • open huge mbox and maildir folders quickly.
  • custom key bindings for any command. Sane defaults are always good, but everyone is used to a previous program, and retraining fingers can be hard.
  • create new key bindings for common tasks (like save a message to a specific mbox)
  • easily filter messages based on various things. Full regexes are not needed, see the PATTERNS section of ‘man muttrc’ for examples of what people have come up with over the years as being needed by an email client.
  • when sending/responding to an email, bring it up in the editor of my choice, with full headers. I know aerc already uses vim for this, which is great as that makes it easy to send patches or include other files directly in an email body
  • edit a message directly from the email client and then save it back to the local mbox it came from
  • pipe the current message to an external program

That’s what I use for kernel development.

Oh, I forgot:

  • handle GPG-encrypted email. Some mailing lists I am on send everything encrypted with a mailing list key, which is needed both to decrypt the messages and to encrypt messages sent to the list. S/MIME could be used if GPG can’t work, but both are probably equally horrid to code support for, so I recommend GPG as that’s probably used more often.

Bonus things that I have grown to rely on when using mutt:

  • handle displaying html email by piping to an external tool like w3m
  • send a message from the command line with a specific subject and a specific attachment if needed.
  • specify the configuration file to use as a command line option. It is usually easier to have one configuration file for a “work” account, and another one for a “personal” one, with different email servers and settings provided for both.
  • configuration files that can include other configuration files. Mutt allows me to keep all of my “core” keybindings in one config file and just the specific email server options in separate config files, making configuration management easier (a quick sketch of these last two follows this list).
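The last two points in practice look something like this (the paths are illustrative):

$ mutt -F ~/.mutt/muttrc-work       # work account settings
$ mutt -F ~/.mutt/muttrc-personal   # personal account settings

# and inside each file, the shared core gets pulled in with:
source ~/.mutt/bindings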

If you have made it this far, and you aren’t writing an email client, that’s amazing; it must be a slow news day, which is a good thing. I hope this writeup helps show others how mutt can be used to handle developer workflows easily, and what is required of a good email client in order to be able to do all of this.

Hopefully other email clients can get to a state where they too can do all of this. Competition is good, and maybe aerc can get there someday.

Linux Stable Tree Mirror at Github

As everyone seems to like to put kernel trees up on github for random projects (based on the crazy notifications I get all the time), I figured it was time to put up a semi-official mirror of all of the stable kernel releases on github.com.

It can be found at: https://github.com/gregkh/linux and I will try to keep it up to date with the real source of all kernel stable releases at https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/

It differs from Linus’s tree at https://github.com/torvalds/linux in that it contains all of the different stable tree branches, stable releases, and tags, which many devices end up building on top of.
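If you want to build on top of it, grabbing a stable branch from the mirror looks like this (the 4.18.y branch is just an example):

$ git clone https://github.com/gregkh/linux.git
$ cd linux
$ git checkout -b my_branch origin/linux-4.18.y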

So, mirror away!

Also note, this is a read-only mirror, any pull requests created on it will be gleefully ignored, just like happens on Linus’s github mirror.

If people think this is needed on any other git hosting site, just let me know and I will be glad to push to other places as well.

This notification was also cross-posted on the new http://people.kernel.org/ site, go follow that for more kernel developer stuff.

What Stable Kernel Should I Use?

I get a lot of questions from people asking what stable kernel they should be using for their product/device/laptop/server/etc. Especially given the now-extended length of time that some kernels are being supported by me and others, this isn’t always a very obvious thing to determine. So this post is an attempt to write down my opinions on the matter. Of course, you are free to use whatever kernel version you want, but here’s what I recommend.

As always, the opinions written here are my own, I speak for no one but myself.

What kernel to pick

Here’s my short list of what kernel you should use, ranked from best to worst options. I’ll go into the details of all of these below, but if you just want the summary of all of this, here it is:

Hierarchy of what kernel to use, from best solution to worst:

  • Supported kernel from your favorite Linux distribution
  • Latest stable release
  • Latest LTS release
  • Older LTS release that is still being maintained

What kernel to never use:

  • Unmaintained kernel release

To give numbers to the above: as of today, August 24, 2018, based on the releases listed on the front page of kernel.org, that would mean:

  • 4.18.5 is the latest stable release
  • 4.14.67 is the latest LTS release
  • 4.9.124, 4.4.152, and 3.16.57 are the older LTS releases that are still being maintained
  • 4.17.19 and 3.18.119 are “End of Life” kernels that have had a release in the past 60 days, and as such stick around on the kernel.org site for those who still might want to use them.

Quite easy, right?

Ok, now for some justification for all of this:

Distribution kernels

The best solution for almost all Linux users is to just use the kernel from your favorite Linux distribution. Personally, I prefer the community based Linux distributions that constantly roll along with the latest updated kernel and it is supported by that developer community. Distributions in this category are Fedora, openSUSE, Arch, Gentoo, CoreOS, and others.

All of these distributions use the latest stable upstream kernel release and make sure that any needed bugfixes are applied on a regular basis. That is one of the most solid and best kernels you can use when it comes to having the latest fixes (remember, all fixes are security fixes) in it.

There are some community distributions that take a bit longer to move to a new kernel release, but eventually get there and support the kernel they currently have quite well. Those are also great to use, and examples of these are Debian and Ubuntu.

Just because I did not list your favorite distro here does not mean its kernel is not good. Look on the web site for the distro and make sure that the kernel package is constantly updated with the latest security patches, and all should be well.

Lots of people seem to like the old, “traditional” model of a distribution and use RHEL, SLES, CentOS or the “LTS” Ubuntu release. Those distros pick a specific kernel version and then camp out on it for years, if not decades. They do loads of work backporting the latest bugfixes, and sometimes new features, to these kernels, all in a quixotic quest to keep the version number from ever changing, despite having many thousands of changes on top of that older kernel version. This is a truly thankless job, and the developers assigned to these tasks do some wonderful work in order to achieve these goals. If you like never seeing your kernel version number change, then use these distributions. They usually cost some money to use, but the support you get from these companies is worth it when something goes wrong.

So again, the best kernel you can use is one that someone else supports and that you can turn to for help. Use that support; usually you are already paying for it (for the enterprise distributions), and those companies know what they are doing.

But, if you do not want to trust someone else to manage your kernel for you, or you have hardware that a distribution does not support, then you want to run the Latest stable release:

Latest stable release

This kernel is the latest one from the Linux kernel developer community that they declare as “stable”. About every three months, the community releases a new stable kernel that contains all of the newest hardware support, the latest performance improvements, as well as the latest bugfixes for all parts of the kernel. Over the next 3 months, bugfixes that go into the next kernel release are backported into this stable release, so that any users of this kernel are sure to get them as soon as possible.

This is usually the kernel that most community distributions use as well, so you can be sure it is tested and has a large audience of users. Also, the kernel community (all 4000+ developers) are willing to help support users of this release, as it is the latest one that they made.

After 3 months, a new kernel is released and you should move to it to ensure that you stay up to date, as support for this kernel is usually dropped a few weeks after the newer release happens.

If you have new hardware that was purchased after the last LTS release came out, you are almost guaranteed to have to run this kernel in order for it to be supported. So for desktops or new servers, this is usually the recommended kernel to be running.

Latest LTS release

If your hardware relies on a vendor’s out-of-tree patch in order to work properly (like almost all embedded devices these days), then the next best kernel to be using is the latest LTS release. That release gets all of the latest kernel fixes that go into the stable releases where applicable, and lots of users test and use it.

Note, no new features and almost no new hardware support is ever added to these kernels, so if you need to use a new device, it is better to use the latest stable release, not this release.

Also this release is common for users that do not like to worry about “major” upgrades happening on them every 3 months. So they stick to this release and upgrade every year instead, which is a fine practice to follow.

The downside of using this release is that you do not get the performance improvements that happen in newer kernels, except when you update to the next LTS kernel, potentially a year in the future. That can be significant for some workloads, so be very aware of this.

Also, if you have problems with this kernel release, the first thing any developer you report the issue to is going to ask is, “does the latest stable release have this problem?” So you will need to be aware that support might not be as easy to get as with the latest stable releases.

Now if you are stuck with a large patchset and can not update to a new LTS kernel once a year, perhaps you want the older LTS releases:

Older LTS release

These releases have traditionally been supported by the community for 2 years, sometimes longer when a major distribution relies on them (like Debian or SLES). However, in the past year, thanks to a lot of support and investment in testing and infrastructure from Google, Linaro, Linaro member companies, kernelci.org, and others, these kernels have started to be supported for much longer.

The latest LTS releases, and how long they will be supported, are listed at kernel.org/category/releases.html (as of August 24, 2018).

The reason that Google and other companies want to have these kernels live longer is due to the crazy (some will say broken) development model of almost all SoC chips these days. Those devices start their development lifecycle a few years before the chip is released, however that code is never merged upstream, resulting in a brand new chip being released based on a 2 year old kernel. These SoC trees usually have over 2 million lines added to them, making them something that I have started calling “Linux-like” kernels.

If the LTS releases stop happening after 2 years, then support from the community instantly stops, and no one ends up doing bugfixes for them. This results in millions of very insecure devices floating around in the world, not something that is good for any ecosystem.

Because of this dependency, these companies now require new devices to constantly update to the latest LTS releases as they happen for their specific release version (i.e. every 4.9.y release that happens). An example of this is the Android kernel requirements: new devices shipping with the “O” and now “P” releases must use a specified minimum kernel version, and Android security releases might start to require those “.y” releases to happen more frequently on devices.

I will note that some manufacturers are already doing this today. Sony is one great example, updating to the latest 4.4.y release on many of their new phones for their quarterly security release. Another good example is the small company Essential, which has been tracking the 4.4.y releases faster than anyone I know of.

There is one huge caveat when using a kernel like this. The number of security fixes that get backported is not as great as with the latest LTS release, because the devices that traditionally use these older LTS kernels have a much more restricted usage model. These kernels are not to be used in any type of “general computing” model where you have untrusted users or virtual machines, as the ability to backport some of the recent Spectre-type fixes to older releases is greatly reduced, if present at all in some branches.

So again, only use older LTS releases in a device that you fully control, or lock down with a very strong security model (like Android enforces using SELinux and application isolation). Never use these releases on a server with untrusted users, programs, or virtual machines.

Also, support from the community for these older LTS releases is greatly reduced even from the normal LTS releases, if available at all. If you use these kernels, you really are on your own and need to be able to support the kernel yourself, or rely on your SoC vendor to provide that support for you (note that almost none of them do provide that support, so beware…).

Unmaintained kernel release

Surprisingly, many companies do just grab a random kernel release, slap it into their product and proceed to ship it in hundreds of thousands of units without a second thought. One crazy example of this would be the Lego Mindstorm systems that shipped a random -rc release of a kernel in their device for some unknown reason. A -rc release is a development release that not even the Linux kernel developers feel is ready for everyone to use just yet, let alone millions of users.

You are of course free to do this if you want, but note that you really are on your own here. The community can not support you as no one is watching all kernel versions for specific issues, so you will have to rely on in-house support for everything that could go wrong. Which for some companies and systems, could be just fine, but be aware of the “hidden” cost this might cause if you do not plan for this up front.

Summary

So, here’s a short list of different types of devices, and what I would recommend for their kernels:

  • Laptop / Desktop: Latest stable release
  • Server: Latest stable release or latest LTS release
  • Embedded device: Latest LTS release or older LTS release if the security model used is very strong and tight.

And as for me, what do I run on my machines? My laptops run the latest development kernel (i.e. Linus’s development tree) plus whatever kernel changes I am currently working on and my servers run the latest stable release. So despite being in charge of the LTS releases, I don’t run them myself, except in testing systems. I rely on the development and latest stable releases to ensure that my machines are running the fastest and most secure releases that we know how to create at this point in time.

My Affidavit in the Geniatech vs. McHardy Case

As many people know, last week there was a court hearing in the Geniatech vs. McHardy case. This case, brought before the German court of OLG Cologne, claimed a license violation of the Linux kernel in Geniatech devices.

Harald Welte has written up a wonderful summary of the hearing, I strongly recommend that everyone go read that first.

In Harald’s summary, he refers to an affidavit that I provided to the court. Because the case was withdrawn by McHardy, my affidavit was not entered into the public record. I had always assumed that my affidavit would be made public, and since I have had a number of people ask me about what it contained, I figured it was good to just publish it for everyone to be able to see it.

There are some minor edits from what was exactly submitted to the court, such as removing the side-by-side German translation of the English text and some reformatting around the footnotes, because I don’t know how to do that directly here, and they really were not all that relevant for anyone who reads this blog. Exhibit A is also not reproduced, as it’s just a huge list of all of the kernel releases in which I found no evidence of any contribution by Patrick McHardy.

AFFIDAVIT

I, the undersigned, Greg Kroah-Hartman,
declare in lieu of an oath and in the
knowledge that a wrong declaration in
lieu of an oath is punishable, to be
submitted before the Court:

I. With regard to me personally:

1. I have been an active contributor to
   the Linux Kernel since 1999.

2. Since February 1, 2012 I have been a
   Linux Foundation Fellow.  I am currently
   one of five Linux Foundation Fellows
   devoted to full time maintenance and
   advancement of Linux. In particular, I am
   the current Linux stable Kernel maintainer
   and manage the stable Kernel releases. I
   am also the maintainer for a variety of
   different subsystems that include USB,
   staging, driver core, tty, and sysfs,
   among others.

3. I have been a member of the Linux
   Technical Advisory Board since 2005.

4. I have authored two books on Linux Kernel
   development including Linux Kernel in a
   Nutshell (2006) and Linux Device Drivers
   (co-authored Third Edition in 2009).

5. I was a contributing editor to Linux
   Journal from 2003 to 2006.

6. I am a co-author of every Linux Kernel
   Development Report. The first report was
   based on my Ottawa Linux Symposium keynote
   in 2006, and the report has been published
   every few years since then. I have been
   one of the co-authors of all of them. This
   report includes a periodic in-depth
   analysis of who is currently contributing
   to Linux. Because of this work, I have an
   in-depth knowledge of the various records
   of contributions that have been maintained
   over the course of the Linux Kernel
   project.

   For many years, Linus Torvalds compiled a
   list of contributors to the Linux kernel
   with each release. There are also usenet
   and email records of contributions made
   prior to 2005. In April of 2005, Linus
   Torvalds created a program now known as
   “Git” which is a version control system
   for tracking changes in computer files and
   coordinating work on those files among
   multiple people. Every Git directory on
   every computer contains an accurate
   repository with complete history and full
   version tracking abilities.  Every Git
   directory captures the identity of
   contributors.  Development of the Linux
   kernel has been tracked and managed using
   Git since April of 2005.

   One of the findings in the report is that
   since the 2.6.11 release in 2005, a total
   of 15,637 developers have contributed to
   the Linux Kernel.

7. I have been an advisor on the Cregit
   project and compared its results to other
   methods that have been used to identify
   contributors and contributions to the
   Linux Kernel, such as a tool known as “git
   blame” that is used by developers to
   identify contributions to a git repository
   such as the repositories used by the Linux
   Kernel project.

8. I have been shown documents related to
   court actions by Patrick McHardy to
   enforce copyright claims regarding the
   Linux Kernel. I have heard many people
   familiar with the court actions discuss
   the cases and the threats of injunction
   McHardy leverages to obtain financial
   settlements. I have not otherwise been
   involved in any of the previous court
   actions.

II. With regard to the facts:

1. The Linux Kernel project started in 1991
   with a release of code authored entirely
   by Linus Torvalds (who is also currently a
   Linux Foundation Fellow).  Since that time
   there have been a variety of ways in which
   contributions and contributors to the
   Linux Kernel have been tracked and
   identified. I am familiar with these
   records.

2. The first record of any contribution to
   the Linux kernel explicitly attributed
   to Patrick McHardy is dated April 23,
   2002. McHardy’s last contribution to the
   Linux Kernel was made on November 24,
   2015.

3. The Linux Kernel 2.5.12 was released by
   Linus Torvalds on April 30, 2002.

4. After review of the relevant records, I
   conclude that there is no evidence in the
   records that the Kernel community relies
   upon to identify contributions and
   contributors that Patrick McHardy made any
   code contributions to versions of the
   Linux Kernel earlier than 2.4.18 and
   2.5.12. Attached as Exhibit A is a list of
   Kernel releases which have no evidence in
   the relevant records of any contribution
   by Patrick McHardy.

Linux Kernel Release Model

Note

This post is based on a whitepaper I wrote at the beginning of 2016 to be used to help many different companies understand the Linux kernel release model and encourage them to start taking the LTS stable updates more often. I then used it as a basis of a presentation I gave at the Linux Recipes conference in September 2017 which can be seen here.

With the recent craziness of Meltdown and Spectre, I’ve seen lots of things written about how Linux is released and how we handle security patches, much of which is totally incorrect, so I figured it is time to dust off the text, update it in a few places, and publish it here for everyone to benefit from.

I would like to thank the reviewers who helped shape the original whitepaper, which has helped many companies understand that they need to stop “cherry picking” random patches into their device kernels. Without their help, this post would be a total mess. All problems and mistakes in here are, of course, all mine. If you notice any, or have any questions about this, please let me know.

Overview

This post describes how the Linux kernel development model works, what a long term supported kernel is, how the kernel developers approach security bugs, and why all systems that use Linux should be using all of the stable releases and not attempting to pick and choose random patches.

Linux Kernel development model

The Linux kernel is the largest collaborative software project ever. In 2017, over 4,300 different developers from over 530 different companies contributed to the project. There were 5 different releases in 2017, with each release containing between 12,000 and 14,500 different changes. On average, 8.5 changes are accepted into the Linux kernel every hour, every hour of the day. A non-scientific study (i.e. Greg’s mailbox) shows that each change needs to be submitted 2-3 times before it is accepted into the kernel source tree due to the rigorous review and testing process that all kernel changes are put through, so the engineering effort happening is much larger than the 8.5 changes per hour.

At the end of 2017 the size of the Linux kernel was just over 61 thousand files consisting of 25 million lines of code, build scripts, and documentation (kernel release 4.14). The Linux kernel contains the code for all of the different chip architectures and hardware drivers that it supports. Because of this, an individual system only runs a fraction of the whole codebase. An average laptop uses around 2 million lines of kernel code from 5 thousand files to function properly, while the Pixel phone uses 3.2 million lines of kernel code from 6 thousand files due to the increased complexity of a SoC.

Kernel release model

With the release of the 2.6 kernel in December of 2003, the kernel developer community moved from the previous model of having separate development and stable kernel branches to a “stable only” branch model. A new release happened every 2 to 3 months, and that release was declared “stable” and recommended for all users to run. This change in development model was due to the very long release cycle prior to the 2.6 kernel (almost 3 years), and the struggle of maintaining two different branches of the codebase at the same time.

The numbering of the kernel releases started out being 2.6.x, where x was an incrementing number that changed on every release. The value of the number has no meaning, other than that it is newer than the previous kernel release. In July 2011, Linus Torvalds changed the version number to 3.x after the 2.6.39 kernel was released. This was done because the higher numbers were starting to cause confusion among users, and because Greg Kroah-Hartman, the stable kernel maintainer, was getting tired of the large numbers and bribed Linus with a fine bottle of Japanese whisky.

The change to the 3.x numbering series did not mean anything other than a change of the major release number, and this happened again in April 2015 with the movement from the 3.19 release to the 4.0 release number. It is not remembered if any whisky exchanged hands when this happened. At the current kernel release rate, the number will change to 5.x sometime in 2018.

Stable kernel releases

The Linux kernel stable release model started in 2005, when the existing development model of the kernel (a new release every 2-3 months) was determined to not be meeting the needs of most users. Users wanted bugfixes that were made during those 2-3 months, and the Linux distributions were getting tired of trying to keep their kernels up to date without any feedback from the kernel community. Trying to keep individual kernels secure and with the latest bugfixes was a large and confusing effort by lots of different individuals.

Because of this, the stable kernel releases were started. These releases are based directly on Linus’s releases, and are released every week or so, depending on various external factors (time of year, available patches, maintainer workload, etc.)

The numbering of the stable releases starts with the number of the kernel release, and an additional number is added to the end of it.

For example, the 4.9 kernel is released by Linus, and then the stable kernel releases based on this kernel are numbered 4.9.1, 4.9.2, 4.9.3, and so on. This sequence is usually shortened with the number “4.9.y” when referring to a stable kernel release tree. Each stable kernel release tree is maintained by a single kernel developer, who is responsible for picking the needed patches for the release, and doing the review/release process. Where these changes are found is described below.
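
To make the numbering concrete, each stable release is just a git tag in the stable kernel tree. A quick sketch, assuming you have a clone of the stable kernel repository (the tag naming is the real convention; the exact output depends on when you fetch):

$ git tag -l 'v4.9.*' | sort -V | head -3
v4.9.1
v4.9.2
v4.9.3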

Stable kernels are maintained for as long as the current development cycle is happening. After Linus releases a new kernel, the previous stable kernel release tree is stopped and users must move to the newer released kernel.

Long-Term Stable kernels

After a year of this new stable release process, it was determined that many different users of Linux wanted a kernel to be supported for longer than just a few months. Because of this, the Long Term Supported (LTS) kernel release came about. The first LTS kernel was 2.6.16, released in 2006. Since then, a new LTS kernel has been picked once a year. That kernel will be maintained by the kernel community for at least 2 years. See the next section for how a kernel is chosen to be a LTS release.

Currently the LTS kernels are the 4.4.y, 4.9.y, and 4.14.y releases, and a new kernel is released on average, once a week. Along with these three kernel releases, a few older kernels are still being maintained by some kernel developers at a slower release cycle due to the needs of some users and distributions.

Information about all long-term stable kernels, who is in charge of them, and how long they will be maintained, can be found on the kernel.org release page.

LTS kernel releases average 9-10 patches accepted per day, while the normal stable kernel releases contain 10-15 patches per day. The number of patches fluctuates per release given the current time of the corresponding development kernel release, and other external variables. The older a LTS kernel is, the fewer patches are applicable to it, because many recent bugfixes are not relevant to older kernels. However, the older a kernel is, the harder it is to backport the changes that need to be applied, due to the changes in the codebase. So while there might be a lower number of overall patches being applied, the effort involved in maintaining a LTS kernel is greater than maintaining the normal stable kernel.

Choosing the LTS kernel

The method of picking which kernel the LTS release will be, and who will maintain it, has changed over the years from a semi-random method to something that is hopefully more reliable.

Originally it was merely based on what kernel the stable maintainer’s employer was using for their product (2.6.16.y and 2.6.27.y) in order to make the effort of maintaining that kernel easier. Other distribution maintainers saw the benefit of this model and got together and colluded to get their companies to all release a product based on the same kernel version without realizing it (2.6.32.y). After that was very successful, and allowed developers to share work across companies, those companies decided to not do that anymore, so future LTS kernels were picked on an individual distribution’s needs and maintained by different developers (3.0.y, 3.2.y, 3.12.y, 3.16.y, and 3.18.y) creating more work and confusion for everyone involved.

This ad-hoc method of catering to only specific Linux distributions was not beneficial to the millions of devices that used Linux in an embedded system and were not based on a traditional Linux distribution. Because of this, Greg Kroah-Hartman decided that the choice of the LTS kernel needed to change to a method in which companies can plan on using the LTS kernel in their products. The rule became “one kernel will be picked each year, and will be maintained for two years.” With that rule, the 3.4.y, 3.10.y, and 3.14.y kernels were picked.

Because a large number of different LTS kernels were all released in the same year, causing lots of confusion for vendors and users, a rule was created that no new LTS kernels would be based on an individual distribution’s needs. This was agreed upon at the annual Linux kernel summit and started with the 4.1.y LTS choice.

During this process, the LTS kernel would only be announced after the release happened, making it hard for companies to plan ahead of time what to use in their new product, causing lots of guessing and misinformation to be spread around. This was done on purpose as previously, when companies and kernel developers knew ahead of time what the next LTS kernel was going to be, they relaxed their normal stringent review process and allowed lots of untested code to be merged (2.6.32.y). The fallout of that mess took many months to unwind and stabilize the kernel to a proper level.

The kernel community discussed this issue at its annual meeting and decided to mark the 4.4.y kernel as a LTS kernel release, much to the surprise of everyone involved, with the goal that the next LTS kernel would be planned ahead of time to be based on the last kernel release of 2016 in order to provide enough time for companies to release products based on it in the next holiday season (2017). This is how the 4.9.y and 4.14.y kernels were picked as the LTS kernel releases.

This process seems to have worked out well, without many problems being reported against the 4.9.y tree, despite it containing over 16,000 changes, making it the largest kernel to ever be released.

Future LTS kernels should be planned based on this release cycle (the last kernel of the year). This should allow SoC vendors to plan ahead on their development cycle to not release new chipsets based on older, and soon to be obsolete, LTS kernel versions.

Stable kernel patch rules

The rules for what can be added to a stable kernel release have remained almost identical for the past 12 years. The full list of the rules for patches to be accepted into a stable kernel release can be found in the Documentation/process/stable_kernel_rules.rst kernel file and are summarized here. A stable kernel change:

  • must be obviously correct and tested.
  • must not be bigger than 100 lines.
  • must fix only one thing.
  • must fix something that has been reported to be an issue.
  • can be a new device id or quirk for hardware, but not add major new functionality.
  • must already be merged into Linus’s tree.

The last rule, “a change must be in Linus’s tree”, prevents the kernel community from losing fixes. The community never wants a fix to go into a stable kernel release that is not already in Linus’s tree so that anyone who upgrades should never see a regression. This prevents many problems that other projects who maintain a stable and development branch can have.
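
In practice, most fixes are nominated for the stable trees when they are first submitted upstream, by tagging the patch itself, as described in that same documentation file. Here is a minimal sketch of the relevant tag lines (the commit id, driver name, and developer are made up for illustration):

Fixes: 123456abcdef ("USB: foo: fix suspend handling")
Cc: stable@vger.kernel.org # 4.4.x
Signed-off-by: Jane Developer <jane@example.com>

The Cc: stable tag is what causes the fix to be picked up for the stable trees once it has been merged into Linus’s tree, which satisfies the last rule above; the optional version annotation indicates how far back the fix should be applied.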

Kernel Updates

The Linux kernel community has promised its userbase that no upgrade will ever break anything that is currently working in a previous release. That promise was made in 2007 at the annual Kernel developer summit in Cambridge, England, and still holds true today. Regressions do happen, but those are the highest priority bugs and are either quickly fixed, or the change that caused the regression is quickly reverted from the Linux kernel tree.

This promise holds true for both the incremental stable kernel updates, as well as the larger “major” updates that happen every three months.

The kernel community can only make this promise for the code that is merged into the Linux kernel tree. Any code that is merged into a device’s kernel that is not in the kernel.org releases is unknown and interactions with it can never be planned for, or even considered. Devices based on Linux that have large patchsets can have major issues when updating to newer kernels, because of the huge number of changes between each release. SoC patchsets are especially known to have issues with updating to newer kernels due to their large size and heavy modification of architecture specific, and sometimes core, kernel code.

Most SoC vendors do want to get their code merged upstream before their chips are released, but the reality of project-planning cycles and ultimately the business priorities of these companies prevent them from dedicating sufficient resources to the task. This, combined with the historical difficulty of pushing updates to embedded devices, results in almost all of them being stuck on a specific kernel release for the entire lifespan of the device.

Because of the large out-of-tree patchsets, most SoC vendors are starting to standardize on using the LTS releases for their devices. This allows devices to receive bug and security updates directly from the Linux kernel community, without having to rely on the SoC vendor’s backporting efforts, which traditionally are very slow to respond to problems.

It is encouraging to see that the Android project has standardized on the LTS kernels as a “minimum kernel version requirement”. Hopefully that will allow the SoC vendors to continue to update their device kernels in order to provide more secure devices for their users.

Security

When doing kernel releases, the Linux kernel community almost never declares specific changes to be “security fixes”. This is because of the difficulty of determining whether a bugfix is a security fix at the time it is created. Also, many bugfixes are only determined to be security related after much time has passed, so to keep users from gaining a false sense of security by not taking patches, the kernel community strongly recommends always taking all bugfixes that are released.

Linus summarized the reasoning behind this behavior in an email to the Linux Kernel mailing list in 2008:

On Wed, 16 Jul 2008, pageexec@freemail.hu wrote:
>
> you should check out the last few -stable releases then and see how
> the announcement doesn't ever mention the word 'security' while fixing
> security bugs

Umm. What part of "they are just normal bugs" did you have issues with?

I expressly told you that security bugs should not be marked as such,
because bugs are bugs.

> in other words, it's all the more reason to have the commit say it's
> fixing a security issue.

No.

> > I'm just saying that why mark things, when the marking have no meaning?
> > People who believe in them are just _wrong_.
>
> what is wrong in particular?

You have two cases:

 - people think the marking is somehow trustworthy.

   People are WRONG, and are misled by the partial markings, thinking that
   unmarked bugfixes are "less important". They aren't.

 - People don't think it matters

   People are right, and the marking is pointless.

In either case it's just stupid to mark them. I don't want to do it,
because I don't want to perpetuate the myth of "security fixes" as a
separate thing from "plain regular bug fixes".

They're all fixes. They're all important. As are new features, for that
matter.

> when you know that you're about to commit a patch that fixes a security
> bug, why is it wrong to say so in the commit?

It's pointless and wrong because it makes people think that other bugs
aren't potential security fixes.

What was unclear about that?

    Linus

This email can be found here, and the whole thread is recommended reading for anyone who is curious about this topic.

When security problems are reported to the kernel community, they are fixed as soon as possible and pushed out publicly to the development tree and the stable releases. As described above, the changes are almost never described as a “security fix”, but rather look like any other bugfix for the kernel. This is done to allow affected parties the ability to update their systems before the reporter of the problem announces it.

Linus describes this method of development in the same email thread:

On Wed, 16 Jul 2008, pageexec@freemail.hu wrote:
>
> we went through this and you yourself said that security bugs are *not*
> treated as normal bugs because you do omit relevant information from such
> commits

Actually, we disagree on one fundamental thing. We disagree on
that single word: "relevant".

I do not think it's helpful _or_ relevant to explicitly point out how to
tigger a bug. It's very helpful and relevant when we're trying to chase
the bug down, but once it is fixed, it becomes irrelevant.

You think that explicitly pointing something out as a security issue is
really important, so you think it's always "relevant". And I take mostly
the opposite view. I think pointing it out is actually likely to be
counter-productive.

For example, the way I prefer to work is to have people send me and the
kernel list a patch for a fix, and then in the very next email send (in
private) an example exploit of the problem to the security mailing list
(and that one goes to the private security list just because we don't want
all the people at universities rushing in to test it). THAT is how things
should work.

Should I document the exploit in the commit message? Hell no. It's
private for a reason, even if it's real information. It was real
information for the developers to explain why a patch is needed, but once
explained, it shouldn't be spread around unnecessarily.

    Linus

Full details of how security bugs can be reported to the kernel community in order to get them resolved and fixed as soon as possible can be found in the kernel file Documentation/admin-guide/security-bugs.rst.

Because security bugs are not announced to the public by the kernel team, CVE numbers for Linux kernel-related issues are usually released weeks, months, and sometimes years after the fix was merged into the stable and development branches, if at all.

Keeping a secure system

When deploying a device that uses Linux, it is strongly recommended that all LTS kernel updates be taken by the manufacturer and pushed out to their users after proper testing shows the update works well. As was described above, it is not wise to try to pick and choose various patches from the LTS releases because:

  • The releases have been reviewed by the kernel developers as a whole, not in individual parts.
  • It is hard, if not impossible, to determine which patches fix “security” issues and which do not. Almost every LTS release contains at least one known security fix, and many more that are not yet known to be security related.
  • If testing shows a problem, the kernel developer community will react quickly to resolve the issue. If you wait months or years to do an update, the kernel developer community will not even be able to remember what the updates were, given the long delay.
  • Changes to parts of the kernel that you do not build/run are fine and cannot cause problems for your system. Trying to filter out only the changes you run will create a kernel tree that is impossible to merge correctly with future upstream releases.
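
To make “take the whole release” concrete, here is a minimal sketch of what that can look like for a device kernel kept in git and based on the 4.9.y tree (the remote URL is the stable tree on kernel.org; the specific version tag is just an example):

$ git remote add stable git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
$ git fetch stable
$ git merge v4.9.77

One merge per stable release keeps the full, reviewed set of fixes together, and keeps the tree trivially mergeable with the next release, unlike a tree built from hand-picked patches.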

Note, this author has audited many SoC kernel trees that attempt to cherry-pick random patches from the upstream LTS releases. In every case, severe security fixes have been ignored and not applied.

As proof of this, I demoed at the Kernel Recipes talk referenced above how trivial it was to crash all of the latest flagship Android phones on the market with a tiny userspace program. The fix for this issue had been released 6 months earlier in the LTS kernel that the devices were based on, yet none of the devices had upgraded or fixed their kernels for this problem. As of this writing (5 months later), only two devices have fixed their kernel and are now not vulnerable to that specific bug.

Meltdown and Spectre Linux Kernel Status - Update

I keep getting a lot of private emails about my previous post on the latest status of the Linux kernel patches to resolve both the Meltdown and Spectre issues.

These questions all seem to break down into two different categories: “What is the state of the Spectre kernel patches?” and “Is my machine vulnerable?”

State of the kernel patches

As always, lwn.net covers the technical details about the latest state of the kernel patches to resolve the Spectre issues, so please go read that to find out that type of information.

And yes, it is behind a paywall for a few more weeks. You should be buying a subscription to get this type of thing!

Is my machine vulnerable?

This question now has a very simple answer: you can check it yourself.

Just run the following command at a terminal window to determine what the state of your machine is:

$ grep . /sys/devices/system/cpu/vulnerabilities/*

On my laptop, right now, this shows:

$ grep . /sys/devices/system/cpu/vulnerabilities/*
/sys/devices/system/cpu/vulnerabilities/meltdown:Mitigation: PTI
/sys/devices/system/cpu/vulnerabilities/spectre_v1:Vulnerable
/sys/devices/system/cpu/vulnerabilities/spectre_v2:Vulnerable: Minimal generic ASM retpoline

This shows that my kernel is properly mitigating the Meltdown problem by implementing PTI (Page Table Isolation), and that my system is still vulnerable to Spectre variant 1, and is trying really hard to mitigate variant 2 but is not quite there (because I did not build my kernel with a compiler that properly supports the retpoline feature).

If your kernel does not have that sysfs directory or files, then obviously there is a problem and you need to upgrade your kernel!

Some “enterprise” distributions did not backport the changes for this reporting, so if you are running one of those types of kernels, go bug the vendor to fix that; you really want a unified way of knowing the state of your system.

Note that right now, these files are only valid for the x86-64 based kernels, all other processor types will show something like “Not affected”. As everyone knows, that is not true for the Spectre issues, except for very old CPUs, so it’s a huge hint that your kernel really is not up to date yet. Give it a few months for all other processor types to catch up and implement the correct kernel hooks to properly report this.

And yes, I need to go find a working microcode update to fix my laptop’s CPU so that it is not vulnerable against Spectre…

Meltdown and Spectre Linux Kernel Status

By now, everyone knows that something “big” just got announced regarding computer security. Heck, when the Daily Mail does a report on it, you know something is bad…

Anyway, I’m not going to go into the details about the problems being reported, other than to point you at the wonderfully written Project Zero paper on the issues involved here. They should just give out the 2018 Pwnie award right now, it’s that amazingly good.

If you do want technical details for how we are resolving those issues in the kernel, see the always awesome lwn.net writeup for the details.

Also, here’s a good summary of lots of other postings that includes announcements from various vendors.

As for how this was all handled by the companies involved, well this could be described as a textbook example of how NOT to interact with the Linux kernel community properly. The people and companies involved know what happened, and I’m sure it will all come out eventually, but right now we need to focus on fixing the issues involved, and not pointing blame, no matter how much we want to.

What you can do right now

If your Linux systems are running a normal Linux distribution, go update your kernel. They should all have the updates in them already. And then keep updating them over the next few weeks; we are still working out lots of corner-case bugs, because the testing involved here is complex given the huge variety of systems and workloads this affects. If your distro does not have kernel updates, then I strongly suggest changing distros right now.

However, there are lots of systems out there that are not running “normal” Linux distributions for various reasons (rumor has it that it is way more than the “traditional” corporate distros). They rely on the LTS kernel updates, or the normal stable kernel updates, or they are in-house franken-kernels. For those people, here’s the status of what is going on regarding all of this mess in the upstream kernels you can use.

Meltdown – x86

Right now, Linus’s kernel tree contains all of the fixes we currently know about to handle the Meltdown vulnerability for the x86 architecture. Go enable the CONFIG_PAGE_TABLE_ISOLATION kernel build option, and rebuild and reboot and all should be fine.
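
If you are not sure whether your kernel was built with that option, a quick sketch of how to check (assuming your distribution installs the kernel config in /boot, as most do; the dmesg line shown is what recent kernels print when PTI is active):

$ grep CONFIG_PAGE_TABLE_ISOLATION /boot/config-$(uname -r)
CONFIG_PAGE_TABLE_ISOLATION=y
$ dmesg | grep 'page tables isolation'
[    0.000000] Kernel/User page tables isolation: enabled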

However, Linus’s tree is currently at 4.15-rc6 + some outstanding patches. 4.15-rc7 should be out tomorrow, with those outstanding patches to resolve some issues, but most people do not run a -rc kernel in a “normal” environment.

Because of this, the x86 kernel developers have done a wonderful job in their development of the page table isolation code, so much so that the backport to the latest stable kernel, 4.14, has been almost trivial for me to do. This means that the latest 4.14 release (4.14.12 at this moment in time) is what you should be running. 4.14.13 will be out in a few more days, with some additional fixes in it that are needed for some systems that have boot-time problems with 4.14.12 (it’s an obvious problem: if it does not boot, just add the patches now queued up).

I would personally like to thank Andy Lutomirski, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen, Peter Zijlstra, Josh Poimboeuf, Juergen Gross, and Linus Torvalds for all of the work they have done in getting these fixes developed and merged upstream in a form that was so easy for me to consume to allow the stable releases to work properly. Without that effort, I don’t even want to think about what would have happened.

For the older long term stable (LTS) kernels, I have leaned heavily on the wonderful work of Hugh Dickins, Dave Hansen, Jiri Kosina and Borislav Petkov to bring the same functionality to the 4.4 and 4.9 stable kernel trees. I also had immense help from Guenter Roeck, Kees Cook, Jamie Iles, and many others in tracking down nasty bugs and missing patches. I want to also call out David Woodhouse, Eduardo Valentin, Laura Abbott, and Rik van Riel for their help with the backporting and integration as well; their help was essential in numerous tricky places.

These LTS kernels also have the CONFIG_PAGE_TABLE_ISOLATION build option that should be enabled to get complete protection.

As this backport is very different from the mainline version that is in 4.14 and 4.15, there are different bugs happening. Right now we know of some VDSO issues that are being worked on, and some odd virtual machine setups are reporting strange errors, but those are in the minority at the moment and should not stop you from upgrading right now. If you do run into problems with these releases, please let us know on the stable kernel mailing list.

If you rely on any kernel tree other than 4.4, 4.9, or 4.14 right now, and you do not have a distribution supporting you, you are out of luck. The lack of patches to resolve the Meltdown problem is a minor issue compared to the hundreds of other known exploits and bugs that your kernel version currently contains. You need to worry about those more than anything else at this moment, and get your systems up to date first.

Also, go yell at the people who forced you to run an obsolete and insecure kernel version; they are the ones that need to learn that doing so is a totally reckless act.

Meltdown – ARM64

Right now the ARM64 set of patches for the Meltdown issue are not merged into Linus’s tree. They are staged and ready to be merged into 4.16-rc1 once 4.15 is released in a few weeks. Because these patches are not in a released kernel from Linus yet, I can not backport them into the stable kernel releases (hey, we have rules for a reason…)

Due to them not being in a released kernel, if you rely on ARM64 for your systems (i.e. Android), I point you at the Android Common Kernel tree. All of the ARM64 fixes have been merged into the 3.18, 4.4, and 4.9 branches as of this point in time.
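
For those who have not used that tree before, here is a minimal sketch of checking out one of those branches (assuming the android-4.9 branch of the common kernel tree on android.googlesource.com):

$ git clone https://android.googlesource.com/kernel/common
$ cd common
$ git checkout android-4.9
$ git pull        # re-run periodically to pick up new fixes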

I would strongly recommend just tracking those branches, as more fixes get added due to testing and as things catch up with what gets merged into the upstream kernel releases, especially as I do not know when these patches will land in the stable and LTS kernel releases.

For the 4.4 and 4.9 LTS kernels, odds are these patches will never get merged into them, due to the large number of prerequisite patches required. All of those prerequisite patches have been long merged and tested in the android-common kernels, so I think it is a better idea to just rely on those kernel branches instead of the LTS release for ARM systems at this point in time.

Also note, I merge all of the LTS kernel updates into those branches usually within a day or so of being released, so you should be following those branches no matter what, to ensure your ARM systems are up to date and secure.

Spectre

Now things get “interesting”…

Again, if you are running a distro kernel, you might be covered, as some of the distros have merged various patches into them that they claim mitigate most of the problems here. I suggest updating and testing for yourself to see if you are worried about this attack vector.

For upstream, well, the status is that there are no fixes merged into any upstream tree for these types of issues yet. There are numerous patches floating around on the different mailing lists that propose solutions for how to resolve them, but they are under heavy development, some of the patch series do not even build or apply to any known trees, the series conflict with each other, and it’s a general mess.

This is due to the fact that the Spectre issues were the last to be addressed by the kernel developers. All of us were working on the Meltdown issue, and we had no real information on exactly what the Spectre problem was at all, and the patches that were floating around were in even worse shape than what has now been publicly posted.

Because of all of this, it is going to take us in the kernel community a few weeks to resolve these issues and get them merged upstream. The fixes are coming in to various subsystems all over the kernel, and will be collected and released in the stable kernel updates as they are merged, so again, you are best off just staying up to date with either your distribution’s kernel releases, or the LTS and stable kernel releases.

It’s not the best news, I know, but it’s reality. If it’s any consolation, it does not seem that any other operating system has full solutions for these issues either, the whole industry is in the same boat right now, and we just need to wait and let the developers solve the problem as quickly as they can.

The proposed solutions are not trivial, but some of them are amazingly good. The Retpoline post from Paul Turner is an example of some of the new concepts being created to help resolve these issues. This is going to be an area of lots of research over the next years to come up with ways to mitigate the potential problems involved in hardware that wants to try to predict the future before it happens.

Other arches

Right now, I have not seen patches for any architectures other than x86 and arm64. There are rumors of patches floating around in some of the enterprise distributions for some of the other processor types, and hopefully they will surface in the weeks to come to get merged properly upstream. I have no idea when that will happen; if you are dependent on a specific architecture, I suggest asking on the arch-specific mailing list about this to get a straight answer.

Conclusion

Again, update your kernels, don’t delay, and don’t stop. The updates to resolve these problems will be continuing to come for a long period of time. Also, there are still lots of other bugs and security issues being resolved in the stable and LTS kernel releases that are totally independent of these types of issues, so keeping up to date is always a good idea.

Right now, there are a lot of very overworked, grumpy, sleepless, and just generally pissed off kernel developers working as hard as they can to resolve these issues that they themselves did not cause at all. Please be considerate of their situation right now. They need all the love and support and free supply of their favorite beverage that we can provide them to ensure that we all end up with fixed systems as soon as possible.