Hey, I'm Dave. Welcome to my shop!
I'm Dave
Plummer, a retired operating systems engineer from Microsoft going back to the MS-DOS and Windows 95
days, and today it's the classic confrontation: Windows versus Linux. Now don't let the fact that
I was one of the programmers that actually wrote Windows fool you into thinking I would ever
dare to be anything but fair and objective. I absolutely guarantee everything you hear
from me today will be completely unbiased. We'll also discuss the possibility of an
open-source Windows kernel and I'll tell you whether I think it's a good idea
or not, and you might be surprised. Right now, it's Round 1 in our epic
operating systems faceoff.
[Intro]
What? Am I joking? One of the actual Windows developers is going to riff on Linux and tell you why it's lame? Some ill-informed diatribe about how easy Windows is to use, and how you have to recompile the Linux kernel just to change your page margins, perhaps?
Not at all. And I'll let you in on a little secret. Like a lot of people, I experimented a bit back in college. And way, way back in my Commodore 64 days I knew a guy who had an honest-to-goodness PDP-11 in his dining room. He'd bought it surplus from the phone company, and I'd had serious Unix envy ever since I first saw it. I started college in 1989 and began using Linux, which was first released in 1991, sometime around my third year, which means I've been a Linux user pretty much since the very beginning. And that was back when merely installing and booting the thing made you the stuff of legend.
Not only that, I fixed a few handle leaks in the early source code and sent my changes off to Linus in Helsinki, so it's quite possible that, if he did incorporate them, I have code not only on Windows and Mac Office systems but also within Linux, making me pretty much inescapable. No matter where you go, you're living the davepl life.
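For anyone who's never chased one, the general shape of a handle leak is easy to sketch. This isn't the actual early-kernel bug, just a minimal illustration in C of the class of defect:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Each iteration acquires a file descriptor and never releases it.
       Eventually open() fails once the per-process limit is exhausted:
       the classic handle (descriptor) leak. */
    for (int i = 0; ; i++) {
        int fd = open("/dev/null", O_RDONLY);
        if (fd < 0) {
            perror("open");
            printf("leaked %d descriptors before failing\n", i);
            return 1;
        }
        /* The fix is the boring part: a well-placed close(fd). */
    }
}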
Well, enough of my vainglorious boasting for now; let's start talking about operating systems. I always figure that context matters, and in this case, the context is that it's you and me talking here. If the question is which operating system is better for grandma, then unless your grandma is Margaret Hamilton, the odds are that she'll be best served by the polished user interface, feature richness, and application library available for Windows. Conversely, for a database or web server, you'd be a fool not to at least consider a Linux distro. But even if one operating system were a whopping 10% faster than the other, that's simply not the greatest concern for every individual. Neither are the final few nines of five-nines reliability, at least not in the consumer space.
The context that I'm talking about, then, is for folks like us. I'm a programmer and you do whatever it is that you do, but odds are it's something in a technical capacity. Chances are that you're an early adopter and a bit of an evangelist for whatever technologies tickle your fancy. Friends and even sometimes strangers rely on you for advice, not only on what products to buy but also on how to use them once they've finally got them. You might just be the kind of person who's done technical support for your doctor's office. During an appointment. On the X-ray machine. Maybe some light repair work on it? I know I've done it!
To people like you and me, a 10% performance hit might be material. On a box running huge RAID arrays, the interval between crashes has to be enormous, because the rebuild times are also incredibly long. It all depends on what you're doing, and odds are folks such as us are using our machines for more demanding tasks than the average consumer. In our case, it's valid to split hairs on things like performance and robustness.
One thing you can't do, though, is just spout generalities as
facts. You can't say "Linux crashes less" because you heard it in a chat room. You can't even say
one is absolutely more secure than the other, really, but you can certainly make a case for
one over the other. And you can bet I will!
Back in the 90s when we were working on
NT, I was wandering around the hallway on a Saturday afternoon and I happened to walk by the
offices of one of the original engineers who had come over from working on VMS at Digital
Equipment along with Dave Cutler. I asked him a simple question: VMS was perhaps the most solid
and reliable operating system ever known, or at least it was back in its day. Since those very
same VMS designers and developers were now working on Windows NT, why would it be any less reliable?
Shouldn't they be getting smarter and better with each attempt at creating a new operating system?
His answer, at least as I recall it today, was that on reference hardware it likely is as
solid, if not more so. But in real life, people run it on all kinds of different hardware, and
it's that endless matrix of systems and BIOSes and chipsets and drivers that makes it challenging. VMS ran on a handful of hardware variants at most. macOS runs on maybe half a dozen. Windows has
to support millions of different configurations. It doesn't explain everything, but it is a
valid point. You just can't test them all, and so the user is often the first person to
test that particular configuration, especially in terms of peripheral drivers and applications,
only to find some kind of edge case.
We'll get to benchmarks and feature comparisons and so on,
but we begin with the biggest and most obvious difference between the two systems: open-source vs
proprietary. It's Round One in the Epic Linux vs Windows showdown. No quarter given, none asked.
[Customizing the OS]
Linux is and always has been open source. While the central authority, which
I presume is still ultimately Linus himself, can decide whether or not to accept
a change, anyone can contribute. If the Japanese Yakuza needs a feature to help
facilitate their legitimate business interests, they can simply add it. No need to lobby
the manufacturer of the software, and they can design and implement the feature exactly as
they see fit. As long as it's plausibly useful to other people and well done, there's no reason the
change wouldn't be accepted. And if you can't get your change into the official kernel, nothing
stops you from simply running your own kernel, or even making your own Linux distribution
entirely.
That's just not the case with Windows, of course. If you need a feature that for some
reason needs to be in the operating system, you're going to have to lobby pretty hard and have a very
compelling case for why Microsoft should commit the development, test, and other resources to your Yakuza needs. Even if you can somehow convince the right decision makers, it would take much longer
for Microsoft to design, implement, test and release the feature than it would for you to make
the change in an open-source product yourself. For flexibility of customization, there's no
question: Linux wins that one hands down.
Now before you Linux nerds get too excited about scoring that first point, however, there's one question you've got to ask yourself, and that's just how common it is to need to modify the operating system in a way that can't be done by a software developer on Windows writing an application, driver, DLL, or similar. Since Windows allows you to dynamically load and unload even kernel code on the fly, there's not a lot you can't do with a copy of Visual C++ and a little knowledge, as the sketch below shows.
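Here's that idea in miniature: a minimal sketch, in C, of loading an add-on library at runtime from user mode and calling into it, no OS source access required. The DLL and function names are hypothetical, purely for illustration.

#include <windows.h>
#include <stdio.h>

/* Signature of the hypothetical exported function. */
typedef int (WINAPI *MYFEATUREPROC)(int);

int main(void)
{
    /* Load a hypothetical add-on library into the running process... */
    HMODULE mod = LoadLibrary(TEXT("myfeature.dll"));
    if (!mod) {
        printf("LoadLibrary failed: error %lu\n", GetLastError());
        return 1;
    }

    /* ...resolve a function in it by name... */
    MYFEATUREPROC DoFeature = (MYFEATUREPROC)GetProcAddress(mod, "DoFeature");
    if (DoFeature)
        printf("DoFeature(42) returned %d\n", DoFeature(42));

    /* ...and unload it again, all without touching the OS itself. */
    FreeLibrary(mod);
    return 0;
}

Kernel-mode drivers can be loaded and unloaded dynamically as well, through the service control manager. But there are cases where it matters.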
Let's say you're working on developing something like the ZFS filesystem. I'd imagine that being able to see and even modify the code for the I/O subsystem and cache manager is hugely valuable.
But for the average developer, and especially the average user, being able to hack on the kernel
just isn't something you need to do all that often. Still, the point goes to Linux.
[1 - 0]
[API Source Code for Developers]
When I was working on certain Windows features, it was super handy to be able to look at the operating system API source code. You have no idea how valuable it can be, when a complex API like CreateWindow fails, to be able to step through the source code, or even source-level debug the API code itself, and see exactly why it's failing. Without the source code, it's far more difficult. Having had that privilege, it's something I can
truly appreciate, and it's something that Linux gives to developers that Windows, by its closed
nature, simply cannot.
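Without the source, about the best you can do from the outside is interrogate the error code. A minimal sketch of that experience, using a deliberately broken call:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Deliberately pass a window class that was never registered,
       so that CreateWindowEx fails. */
    HWND hwnd = CreateWindowEx(0, TEXT("NoSuchClass"), TEXT("Demo"),
                               WS_OVERLAPPEDWINDOW,
                               CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
                               NULL, NULL, GetModuleHandle(NULL), NULL);
    if (!hwnd) {
        /* All the API hands back is a numeric code, in this case
           ERROR_CANNOT_FIND_WND_CLASS. Exactly *why* it failed
           is opaque without the source. */
        printf("CreateWindowEx failed: error %lu\n", GetLastError());
        return 1;
    }
    return 0;
}

With the source in hand, you can step into the call and watch which internal check trips; without it, you're reverse-engineering from a number.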
I was a bit of a rebel in my early years at Microsoft. I preferred a visual debugger, like the one included with the Visual C development environment, over internal ones such as our ntsd debugger. You had to know ntsd anyway, because remote debugging was still always done in it, but for local use on your own machine it meant you could build the system with Visual C-style symbols and then debug those portions in VC. One downside is that the resulting binaries are fairly enormous, and you're only going to have such symbols for components you compile yourself, but in those cases it allows you to source-level debug and single-step through the source code, whereas 99% of my other debugging had to be done in assembly language.
One day when I was working on the OLE RPC team, I was debugging a fairly complicated rendering and printing case that involved Microsoft Word. To make my life easier I convinced someone on the Word team to share the source code out, and I compiled and built the entire Microsoft Word product for my own use, complete with those Visual C symbols. I was then able to load up Word under the Visual C debugger and single-step through the source code all day. It made debugging just that much more pleasant, except for the fact that I was probably doing it on a dev machine with maybe twelve megabytes of RAM, and I think the binary image for Word with symbols was three times that size! It was paging pretty hard.
Back to the point, however: it's not always true but sometimes
the source code is the best type of documentation. And with Linux, it's freely available
to all developers.
[2 - 0]
Uh-oh!
If you're a developer working on one of these operating systems, there are two main resources you're going to rely on: searching the Internet and places like Stack Exchange, and the official documentation.
When it comes to
official documentation, once you've spent any time using MSDN and the official samples you realize
pretty quickly that it makes a huge difference that Microsoft employs teams of professional
developers and authors to create that content. It's a huge and valuable resource, and one that
doesn't have an official parallel in the Linux universe. Man pages only go so far. Combine that with the fact that you can, if your budget allows, get paid developer support right from Microsoft, and this point goes to Windows.
[2 - 1]
What about the community though? These days,
if you can't find the answer by searching, odds are you'll resort to posting your question on a site like Stack Exchange. You are then at the mercy of the quality and size of the attendant audience for your operating system of choice. To find out if there was actually a difference, I compared the size of the communities and their activity levels on Stack Exchange.
The top questions of the month for Unix and Linux receive an average of 5 good answers, 25 upvotes, and about 2,500 user views.
The Windows developer site, by contrast, features topics that average 10 useful answers, 50 votes, and about 10,000 views. There are simply more eyeballs looking at Windows installations, and the same appears true for the number of experts available to answer questions online. The size of the Windows development community is much, much larger and apparently more responsive to developer questions.
[2 - 2]
You probably think I'm cleverly staggering these topics to maintain a virtual tie between the operating systems all throughout the episode until some lame draw at the end? The honest truth is I'm not: I'm working my way through a set of topics I produced and wrote down in advance and letting the chips fall where they may. And I don't know the final score yet, because I haven't answered them all!
The next category of consideration is usability. Linux itself lacks a proper user interface beyond the command line. That command line can be incredibly POWERFUL, particularly if you're adept with bash or zsh or similar, but it can't really be described as particularly usable. Of course, most distributions come with a desktop user interface of some kind if you prefer, but as a bit of a shell designer myself, if I might be so bold, they're generally quite terrible. At least the Mint distribution looks pretty nice.
Windows, on the other hand,
includes by default a desktop shell interface that, if you set aside the entirely subjective
design aesthetics, is professionally designed, usability tested, and takes into consideration
the varying levels of accessibility required by different people with different limitations. In
terms of usability, particularly if you include accessibility in that metric, Windows comes out ahead.
[2 - 3]
One argument I hear put forth by the various proponents of the open-source model is that open-source software is generally developed by the people who actually use the software, and that they are therefore more familiar with it and perhaps even more passionate about it. While I'm not even sure I'm ready to concede those points, I can render them moot by pointing out that Windows developers eat their own dogfood daily, by which I mean that a developer writing for the Windows operating system works on and operates within that operating system all day long. As a Windows developer at Microsoft, you're always running a fairly recent daily build on your main machine. You write Windows on Windows from within Windows, just as you likely would with Linux. So, in the rather unique case of the operating system, dogfooding doesn't offer a benefit in either direction, and we'll call this category a draw.
[3 - 4]
When it comes to updating an operating system, it can range from a silent update that happens in the background, of which you're never even aware, all the way to a complex upgrade where Apple switches CPUs yet again. Each operating system has its own philosophy on how to handle upgrades, and those attitudes are reflected in one of my favorite cartoons by Christiann MacAuley. It compares the views of Linux, Windows, and Mac users when it comes to updates.
Linux users: "Cool, more free stuff!"
Windows users: "Ah man, more Windows updates? Not again!"
Mac users: "Oooh, only $99!"
On the far end of the update spectrum is the Mac, where Apple has the cojones to simply require all of the users and all of the applications to update when they make a breaking change, and probably charge you for the privilege.
Windows users are well served by a dedicated Windows Update team at Microsoft, but the process has occasionally had its hiccups and growing pains.
It's very easy to update a Linux system, and while there's no professional team sitting by the big red phone ready to respond to zero-day exploits, the updates do come out with reasonable alacrity, and in some cases you can even update the kernel without rebooting. Keep in mind, however, that Linux is a monolithic kernel, which means it's all one big happy kernel: almost everything is in there. If they hadn't started adding the ability to load and unload modules dynamically years back, you'd be rebooting for every driver install.
The reality is that some parts of the Linux kernel are going to require a reboot just as some
parts of the Windows system are going to as well. I think we can likely all agree, however,
that Windows software is hardly selective about rebooting the system, and you're asked to
do it far too often.
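For reference, the loadable-module mechanism that spares Linux those driver reboots is famously lightweight. A minimal sketch in C, assuming the usual kernel headers and build setup (loaded with insmod, removed with rmmod):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal loadable module sketch");

/* Called when the module is inserted into the running kernel... */
static int __init demo_init(void)
{
    pr_info("demo: loaded, no reboot required\n");
    return 0;
}

/* ...and called again when it's removed. */
static void __exit demo_exit(void)
{
    pr_info("demo: unloaded\n");
}

module_init(demo_init);
module_exit(demo_exit);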
While we're on the topic of upgrades, we can't overlook the fact that upgrades are generally free in the open-source world unless you're using a prebuilt distribution from a vendor. To its credit, though, I don't remember the last time Microsoft charged for an operating system upgrade if you were a normal end user or enthusiast. Still, this point goes to Linux.
[4 - 4]
The topic of security is going to be contentious, and many of you may not like my answer, but I'm not alone in arguing that open-source software is more vulnerable to security exploits than closed software, simply because, all else being equal, it's easier to figure out where the exploitable bugs are in the first place. On the plus side, it does mean that many public experts can openly review the software and try to catch and fix vulnerabilities before they're exploited. That essentially makes it a race between the black hats and the white hats as soon as the source code is made available.
Believe it or not, proprietary software is by nature less prone to security problems, not only because its closed nature protects flaws from being discovered, but also because there is a professional test organization whose job it is to find such issues before they ship in the product. I think it's a bit of a fallacy to rely on the "many eyeballs" approach, which simply assumes that because more people can see the code, more bugs will be fixed.
While it might be true of buffer overruns and other simple bugs,
complex interactions such as security holes, race conditions and deadlocks may be subtle and
far more likely to be caught by a test gauntlet than by the widespread scrutiny of many casual
programmers. What really matters is the careful scrutiny of the right people.
And yet... as much
as it pains me, and as much as I believe that it is in spite of, and not thanks to, the open-source
model, I think that by and large Unix is more secure against the typical digital vandalism for
one simple reason: the target value of Windows is so much higher.
Unless you're a nation-state
targeting someone specifically on Linux, it's generally much more attractive to virus
authors to target Windows users simply because there are so many more of them!
Even if I were to say that the systems were equally secure, there's one major problem, not with Windows itself, but with how it is typically used. Most users still run as a full administrator, which means that as soon as they accidentally click on the wrong User Account Control popup, mayhem can ensue.
As your Windows apologist for the day, I should point out that this mess really stems from applications that insist they need to be run as administrator, or which fail to operate properly when run on a limited account. On Linux, it's pretty much an exception when an application needs administrative rights, and for the most part it doesn't happen unless you explicitly use the sudo command. This means you do it intentionally, not reflexively, and to me that's the critical difference. Windows isn't alone: you get a similar popup on macOS, and it's just as easy to grow similarly jaded to the point that you simply click OK every time you're asked. But I sure don't bust out sudo every time I get an access-denied response: I at least stop to think about why the access was denied.
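For the curious, here's a minimal sketch in C of how a Windows program can ask whether it's running elevated, which is the state most users are casually living in:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE token = NULL;
    TOKEN_ELEVATION elevation = { 0 };
    DWORD size = 0;

    /* Query the current process token for its elevation state. */
    if (OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &token) &&
        GetTokenInformation(token, TokenElevation, &elevation,
                            sizeof(elevation), &size)) {
        printf(elevation.TokenIsElevated
                   ? "Running elevated: one bad click from mayhem.\n"
                   : "Running as a limited user.\n");
    }
    if (token)
        CloseHandle(token);
    return 0;
}

The rough Unix equivalent is a one-liner: check whether geteuid() returns 0.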
Even though massive popularity as a target and misuse of the administrator account aren't really the fault of the operating system itself, that's the reality. And so, the final point in Round One goes to: Linux.
[5 - 4]
I'm actually a little surprised, as I wasn't sure how this one would shake out myself. Suffice it to say, however, it's only Round One, and there are many more to come. Make sure you're subscribed so you don't miss any future showdowns!
Now it's time for some trademark Davepl opinion, where I get on my little soapbox and tell you what I think. And what do I think? I think Microsoft should consider open-sourcing the core Windows kernel, but under a hybrid model.
The model I envision involves openly releasing the
code to public scrutiny while retaining all of the rights to the software and preventing unauthorized
modifications. I personally would love to see such a model adopted for core portions of Windows,
such as the kernel and drivers. Make the code available such that anyone can download and even
build core components like ntdll and win32k.sys. Bugs can be reported to Microsoft, and if
it were me, I'd even pay a bounty. Perhaps the bounty could be on a sliding scale, such that
discovering a zero-day privilege escalation attack paid very handsomely while a spelling error in a
resource file might net you a free copy of Windows and a nice T-shirt.
I would also argue for a
staged release - academics and trusted security researchers would be given priority access so
that they had ample time to review the code before it was opened up to the broader public.
I'm not advocating for an open-source modification model where people can commit changes into the kernel, add features, or any of that. But I think there's real value in the additional scrutiny the code would receive if it were made available to more developers in this way.
This
would of course work best if done hand in hand with some form of trusted computing module such
that custom kernels which are not officially signed by Microsoft are plainly detected and
optionally blocked.
If you enjoyed Round One but you're not yet subscribed to my channel, I'd be honored if you took a moment right now to do so. That'll also let me know that I'm going in the right direction with this episode, so I'll make more like it, and if you turn on the bell icon, you'll even be notified when I do. It's a win-win.
As always, remember I'm not selling anything, and I don't have any Patreons; I'm just in this for the subs and likes, so if you did enjoy the episode, please be sure to leave me one of each before going. YouTube apparently really does care whether you like the video or not: they call it engagement.
And don't forget to head on over to Dave's Garage at the end of the month for a livestream on Sunday, the 28th of February, at 10 AM Pacific, 1 PM Eastern. All questions will be answered, all inquiries addressed, and you can help me plan future episodes! The more the merrier, so bring a friend. Our first-ever livestream had 1,000 folks show up and it was a lot of fun, so please do stop by on the 28th.
Thanks for joining me here in the shop. In the meantime, and in between time, I hope to
see you next time, right here in Dave's Garage.
I like your content, and it's good to keep discussing the pros and cons of both operating systems. People should get the right tool for the job, after all.
As for debate: on the topic of security there are two points to add:
You only really considered desktop numbers and risks. The usage numbers in the server world are flipped around, and that also introduces risks: many dangerously unpatched LAMP stacks out there.
Second, and here it's getting controversial... Windows is a US product that has long used vendor lock-in to maintain certain business control. And at the same time, one export restriction from the US Government can blow your country's infrastructure out of the water. It's a unique problem that doesn't get much attention from the US side, but it's comforting to know that major Linux distributions like Ubuntu and SUSE come from Europe. Technical aspects aside, there are many other strategic 'security' benefits that Linux has.
Anecdotally: a friend of mine oversees one of the larger grocery logistics products, which runs on a US cloud platform. On the question 'what follows when the US cuts the cord', he responded: food riots.
I know you gave the security point to Linux, but I don't think it's fair to say that the Linux kernel doesn't have professional teams of people looking at its security. We have so many huge companies using and contributing to the kernel, including Google, Huawei, Facebook, and so on, plus all the distributions, which have their own security teams (whether paid or voluntary).
It's not very in-depth technically, and I'm certain the author could go much, much deeper, but of course there's an inverse correlation between topic depth and audience size.
I actually would've appreciated a more in-depth discussion about stuff like the different memory management models etc. It's really entertaining when someone actually knowledgeable is able to throw out some salty jabs about design choices that turned out to be poor and such. You can make anything sound terrible that way even if the issues are mostly irrelevant in reality, heh.
1. re: user interfaces - it's not really fair to say "Linux" only comes with a CLI by default (to be pedantic, Linux is only a kernel and doesn't even come with a CLI) and Windows comes with a GUI. That depends on the distribution and install mode, same as with Windows actually.
You absolutely can get a "Linux" that comes with a GUI by default. Just as you absolutely can get a Windows installation without a GUI.
To nitpick, the default installation mode of Windows Server these days (since 2012?) is without a GUI (Server Core) so Windows (Server) doesn't come with a GUI by default either :)
2. re: problem solving and support - yes, if you encounter issues the number of commenters on support forums will be lower for Linux, but the quality of answers will be much higher in general. Or when searching for issues on search engines.
If you encounter an error on Windows and Google it, you have to wade through a lot of clueless crap to find solid, actual answers instead of generic stuff like "have you tried rebooting?" or "turn off antivirus" or "delete system32". Or my absolute favourite, "sfc /scannow", which is spammed all over Microsoft support forums and never actually fixes anything.
Encounter a Linux error and you might find a comment from some kernel developer who went through the source code and identified the issue, and it was fixed in commit xyz, released in version q.
Debugging issues on Windows is much harder in general when compared to Linux, in my experience.
You absolutely can buy commercial support from commercial Linux vendors but otherwise you get what you pay for.
3. re: rebooting. I'm sure the author is well aware, but the reason why Windows needs/asks to reboot so often is because Windows can't overwrite/delete files that are in use. A reboot is required so that in-use files can be touched.
Unix/Linux allows replacement of files that are in use, but that in itself doesn't really do anything - software needs to be restarted to take the new files into use. Sometimes that's easy but not always. A daemon is easy to restart but in a complex GUI environment it might not be easy to restart everything using library x. Try updating libc without a reboot... Even on Linux it can be easier to just restart to get the system to a known state.
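To illustrate the Unix side (a rough sketch in plain POSIX C, error handling elided, and assuming data.txt exists):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.txt", O_RDONLY);

    /* "Delete" the file while it's still open. The directory entry
       goes away, but the inode and its contents survive for as long
       as any descriptor still references them. */
    unlink("data.txt");

    char buf[64];
    ssize_t n = read(fd, buf, sizeof(buf));  /* still works */
    printf("read %zd bytes after unlink\n", n);

    close(fd);  /* only now is the storage actually freed */
    return 0;
}

That's exactly why nothing forces a reboot, and also why old code keeps running until every process using it restarts.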
4. re: development and dogfooding. Yes, Microsoft is widely known to dogfood its own software so the developers should be in touch with what's actually happening in Windows, however...
Windows is developed in a commercial environment whereas Linux is developed by volunteers (though increasingly I think Linux is developed by commercial developers as well). Sometimes commercial requirements override purely technical considerations. That really changes things.
I'm sure it's not the developers pushing for "Candy Crush" and ads being deployed into the Start Menu on Windows. I'm guessing the Windows developers were not the ones pushing for pervasive telemetry. Why does the Windows Start Menu advocate Edge as the "Microsoft recommended browser" and tell me it's the best way to enjoy the web? I'm also sure the developers really enjoy writing code to gatekeep features behind different Windows editions (licenses).
5. re: security. I'd say Windows is massively more complex by default and has dozens of more services running by default implementing dozens of more protocols. There's also a lot of legacy and backwards compatibility going on by default.
But you absolutely can run Windows lean and can deploy your servers as Server Core and implement the Microsoft Security Baselines to disable legacy protocols and compatibility. Keep unnecessary services (RDP and SMB) disabled/blocked in the Windows Firewall. That changes the situation a lot.
Back in the early days, Microsoft/Windows security really was shoddy, but that changed by the early 2000s. Microsoft software developed after 2005 or so is a vastly different beast compared to the old days.
See for example the research from Check Point into RDP clients where multiple vulnerabilities were uncovered. They stated:
These days you could even make the argument that Microsoft is an innovator in the security space. Windows has numerous security features that take security to the next level and simply don't exist on Linux, like:
etc.
Thanks, everyone! The comments section has been lively, and I thought the Linux faithful might appreciate the diversion. I really appreciate the feedback that you guys have (with a few reservations) liked it!
If there's enough interest, I'll do Round 2 on "KDE Plasma vs Windows Shell" and see how that goes!
Cheers!
Dave
I liked the video but I do have a point of contention with the security comparison. In the early days the statement 'widespread scrutiny of many casual programmers' was true of Linux, but now Linux is widely used in the enterprise and backed by a slew of major vendors and tech companies: Red Hat, IBM, Google, Oracle, etc. I wouldn't consider Linux a hobbyist OS anymore.
Also, I don't agree with the assertion that more Windows exploits exist because Windows' market share gives it higher target value and bad actors just don't bother targeting Linux. This is definitely true in the desktop market, but the server market is pretty evenly split, and servers are far more attractive targets than desktops, so why do most bad actors and nation-states tend to successfully target Windows? I hate to say it, but most of the worst security debacles in recent memory, like WannaCry, Petya/NotPetya, SolarWinds, and the Exchange 0-day from this year, all involved Windows. Look at the WannaCry and Petya/NotPetya attacks that were based on the EternalBlue exploit. It was a bug in SMBv1 that was introduced in Windows XP and somehow carried through to multiple later versions of Windows. That critical bug went unnoticed by MS engineers for over 15 years and kept getting ported to other versions of Windows. To me, saying Windows is inherently more secure because it's proprietary closed-source software is just security through obscurity, which doesn't work in the real world.
Bruce Schneier has some thoughts about open source and security that disagree with you, Dave. While he doesn't assert that open source is intrinsically secure, a popular project that attracts a lot of attention from black hats will tend to attract a lot more, and a lot better, attention from academic cryptographers, security experts, and just security-savvy contributors.
And yes, there are security teams paid to find and patch vulnerabilities in GNU+Linux, big ones. The NSA itself helped Red Hat develop SELinux. To this day, Red Hat has security engineers on staff being paid to find vulnerabilities. In fact, they're hiring a new manager.
Love your videos, though, Dave! It's like watching the Director's DVD commentary, but for the world's most popular software.
I have to disagree with what he said about user environments, but other than that it seems like a well-put-together and comprehensive video.
Can we have one told by a totally neutral retired Linux kernel dev now?
There were a LOT of oversights and big misses in here: