Rebooting is for Windows (zdnet.co.uk)
30 points by aoos on July 27, 2010 | hide | past | favorite | 42 comments


One of the comments refers to Netcraft for examples of long Linux uptimes. According to http://uptime.netcraft.com/up/today/top.avg.html Windows has uptime as good as or better than Linux servers. (Maybe I am missing something.)

The last time I used an MS web server was ~10 years ago; I have almost exclusively used Debian/Ubuntu servers since. Linux fanboyism aside, what advantages does MSFT have as a server OS?


The reason Linux does not show up on those lists is that Linux uses high-resolution timers, which make it impossible to guess uptimes from TCP timestamps.

See http://uptime.netcraft.com/up/accuracy.html#hz1000
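A rough back-of-the-envelope calculation (my own illustration, not from the linked page) shows why: at HZ=1000 the 32-bit TCP timestamp counter wraps in under 50 days, so a long uptime can't be read off it unambiguously.

```shell
# 32-bit timestamp counter incrementing 1000 times/second:
# seconds until wrap = 2^32 / 1000, converted to whole days
echo $(( 4294967296 / 1000 / 86400 ))   # prints 49
```

At the traditional HZ=100 the same counter lasts about 497 days, which is why uptimes of older kernels were guessable.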


Exchange; that is quite a huge one for businesses where I am. Many people are so used to Outlook for email that when it comes time to put in some kind of email management architecture, they will always go Exchange.


Actually I was thinking more in terms of a web server, or any application server that is not tied to the OS. I am guessing you can't host Exchange Server on Linux even if you wanted to, so that wouldn't really be an advantage. It's more that you don't have any option but to use Windows Server.


There are some non-Windows Exchange alternatives that would be transparent to Outlook users, but it's probably an uphill battle selling them to management.


Yes. You'll have a better chance selling a new paradigm (cloud-based, no clients to deploy, no individual mail server mattering) than an open version of the old paradigm (which is still controlled by the proprietary vendor).


what advantages does MSFT have as a server OS?

I would say support for enterprise apps. A lot of enterprise software vendors still target their development at Windows, and don't support Linux well or fail to test on Linux.


If they're Unix apps, they support Linux - RHEL is probably the most popular enterprise Unix for new projects.


The problem is really with testing. I know of some vendors who have Linux versions of their software but don't test them nearly as well as their Windows versions, because most of their customers run Windows.

And that's even before worrying about interoperability with database servers (i.e. testing on outdated *nix ODBC drivers, etc.)


Yeah, but there are vendors who do half-assed Windows versions too - Oracle comes to mind. Ultimately the VB/.NET shops like BMC do shitty Unix versions, and the big-iron companies do shitty Windows versions.


You got that right. In the end, the target platforms are the vendor's prerogative.

Where you really run into problems is selling a weak Unix version into a Windows-hating shop. It ends up being more of a problem than if the vendor just told the customer "you're better off using Windows with our product".


Hate's a strong word, but yes. Having had a budget for such things in the past: a vendor coming in with an unpackaged app (or some horrible custom non-RPM/deb packaging format), with no init scripts, no syslog support, and sales staff who don't understand the bog-standard RHEL or SLES platform to the point where you have to help them with their presentation, can and will guarantee no sale.


Yes. Shutting down every app, the OS kernel, and firmware, and restarting them for anything beyond a major OS upgrade is a reminder that Windows remains a desktop class OS.


While Windows is (and probably always will be) an OS that requires reboots for certain patches, it seems a bit biased of the article to ignore much of the work that's been done in recent (>= Server 2003) versions of the OS to minimize this, while highlighting improvements such as Ksplice on Linux.


Can you move, rename or replace an open file with current Windows? That's possibly one of the reasons why it's so hard to apply patches without rebooting a Windows box. And an endless annoyance when using a Windows desktop.


Can you remove a logfile on Linux/Unix while something is using it? Of course you can, but the open file will stay around in limbo: it won't free up space, the program that holds it open will keep writing to the deleted file, etc. Most Linux programs will close and reopen their logfile if you send them a SIGHUP; I haven't a clue how to do that on Windows.

I still pipe logs to a separate process that checks whether the logfile has been deleted and reopens/creates it; it's just easier.
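The "limbo" behaviour is easy to demonstrate from any POSIX shell (a small sketch of my own; paths are arbitrary):

```shell
# Demonstrate that a deleted file stays alive while a process holds it open.
log=/tmp/limbo_demo.log
exec 3> "$log"               # hold the file open on fd 3
echo "before unlink" >&3
rm "$log"                    # the directory entry is gone...
echo "after unlink" >&3      # ...but writes via fd 3 still succeed
ls "$log" 2>/dev/null || echo "name gone, data lives until fd 3 closes"
exec 3>&-                    # closing the last descriptor finally frees the space
```

Tools like lsof will show such files as `(deleted)` while the descriptor is still open, which is handy when `df` and `du` disagree about free space.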


> Of course you can, but the open file will stay around in limbo: it won't free up space, the program that holds it open will keep writing to the deleted file, etc.

This is actually a feature, because it makes secure temp files possible.

> I still pipe logs to a separate process that ...

Note that logging is usually done in a separate process anyway, using the Syslog facility.


If your intention is to totally remove the space that's occupied by the log file, you'll probably want to redirect /dev/null into the file:

`cat /dev/null > /path/to/logfile`


That does not do what you think it does.

Your command will make /path/to/logfile fill up all available space on the disk. To truncate that file, which is probably what you wanted to do, you should

`echo > /path/to/logfile`


His command does exactly what he says it does. Your command truncates the file and then writes a newline into it.
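This is easy to check for yourself (a quick sketch with a throwaway file):

```shell
# What does `cat /dev/null > file` actually do to an existing file?
f=/tmp/trunc_demo.log
printf 'line one\nline two\n' > "$f"
cat /dev/null > "$f"         # the redirection truncates the file; cat adds nothing
wc -c < "$f"                 # prints 0
```

The truncation is actually done by the shell's `>` redirection before `cat` even runs, which is why the file ends up at zero bytes rather than filling the disk.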


That's not the point. On any decent Unix, you can overwrite a shared library that's in use and just reload the program. It's not about log files, but about not having to bring down the machine in order to write an important file. That's why Windows updates take eons: after the boot there are tons of files that have to be moved or renamed before the system can finish booting.

It's insane.
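The semantics that make this work can be sketched in a few lines of shell (my own illustration, with an ordinary file standing in for a shared library):

```shell
# On Unix you can atomically replace a file even while a process has it open;
# the running process keeps the old inode until it reopens the path.
f=/tmp/lib_demo
printf 'old contents\n' > "$f"
exec 5< "$f"                  # simulate a program holding the "library" open
printf 'new contents\n' > "$f.new"
mv "$f.new" "$f"              # atomic rename over the in-use file succeeds
read -r line <&5              # the holder still sees the pre-replacement data
echo "$line"                  # prints: old contents
exec 5<&-
```

New processes opening the path get the new file immediately; existing ones are undisturbed, so nothing needs to be deferred to the next boot.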


Eventually your hardware will fail. If you can't even safely reboot your machines under controlled circumstances, you've already lost the uptime battle.


And that's one point for specialized hardware. On a zSeries mainframe, CPU errors are detected on-the-fly, faulty CPUs are deactivated and their processes migrated to functioning CPUs.

They cost a lot, but they deliver a lot of confidence too.


I have run into situations where a machine would not survive a reboot after updates, but did not find out until months later.


It's OK to reboot from time to time. What is not OK is to have a reboot imposed on you when you would rather continue running. It's not a huge disruption to reboot a cluster node, as long as the rest of the cluster takes the load.

Rebooting makes sure the filesystem is properly scrubbed, temporary files are removed and any stale data in memory gets removed.

Uptime competitions are pointless.

But forced downtime (Windows Update-style) is unacceptable.


> Uptime competitions are pointless.

While uptime competitions don't indicate availability very well, they do show how much time has passed since the last kernel crash, or the last kernel security hole that required a kernel upgrade and thus a reboot.

(Unless, of course, someone is trading security for uptime, which is luckily the exception rather than the norm, at least among responsible admins.)

It appears that neither Windows nor Linux work particularly well here, but the BSD systems are quite impressive in that regard, especially OpenBSD.


The good thing about Ksplice is that not all kernel updates will require a restart. I see the Ubuntu folks got very bold about pushing new kernels down the update pipe in the last couple of releases.

Availability is also somewhat overrated. Like I said, it's not the downtime that kills you, but the forced, unpredicted downtime.


> the BSD systems are quite impressive in that regard, especially OpenBSD

From my experience with OpenBSD, this appears to be quite simply because things have been pruned so well that there's nothing left to fail. What is there is well designed, and IIRC my general-purpose install was ~200 MB.

The insanely lightweight and simple nature of OpenBSD is one of my favorite things about it, and probably one of the biggest contributing factors to its strengths.


Eventually parachutes need to be used. Now, they should work and save you, but why risk it if you don't have to?


Because if you don't exercise that functionality, you will never have confidence that it will work when you need it to.


Worse - you might have false confidence that it will work, but you find out at the worst possible moment that it won't.


I agree with the people who are disparaging Windows Server OSes as still being desktop class. Coming from a VMS/VAX background, I feel much the same towards Linux.


The biggest thing about Windows (and what I think makes it an unavoidably desktop-class environment) is that there's no kernel-level support for alternate filesystems.

The various Unix-likes have a variety of filesystems with a lot of innovation going on. Windows basically just has NTFS, which, while OK for the desktop, is only going to serve well for some types of servers.



Sorry, is that sarcasm? I honestly can't tell. Yes, people have implemented filesystems other than NTFS for Windows. NTFS is however the only one I personally would trust on a production server, especially for the system partition.

Linux, by contrast, has a variety of filesystems that are as stable if not more so than NTFS, and ready for production use on your root partition.

Linux probably has some catching up to do with respect to Solaris and the BSDs, but its filesystem support is a good cut above desktop class.


I was referring to your original point that Windows doesn't have kernel-level support for alternate filesystems.

Obviously I don't know your use scenarios, but I can think of 2 Linux filesystems that I'd trust to varying extents and they both begin with "ext".

That said, I use the right tool for the right job. Much of the time it's Linux, and sometimes it's Windows 2008. Thank FSM the VMS boxes are gone. Platform agnosticism is a valuable trait to have.

Professionally, I haven't had a real use for anything other than NTFS on a wide variety of Windows servers, all the way up to double-digit TBs of data. In what situations have you found NTFS inadequate for your needs?

My desktop has no direct need to handle millions of database transactions and terabytes of data, but it's nice that it can with NTFS.


Why the obsession with using one or the other? Why not see that both have their appropriate uses, strengths and weaknesses?

My focus as a Sysadmin is Linux, purely by nature of the type of work I'm in, so I tend to keep up only with the benchmarks relevant to me.

If you're only seeing ext2 and ext3 as mature and stable, you've missed great filesystems like XFS.

ext2 & 3 are great all-rounders, but XFS will knock them into a top hat when it comes to larger files, with lower CPU usage and fewer disk ops. It'll also beat ext2 and ext3 if you're creating and deleting lots of small files (like on an e-mail server), as deletes happen in the background without impacting the front-end systems. It's nice and mature too (16 years old). ext2 & 3 have slightly better error recovery, though XFS is journalled, so very little should go wrong that would impact it.

JFS has strengths when large files are moved around on it, extremely low sector overhead (less than 1%) and very low CPU usage, amongst the lowest of any of the main Linux ones.

NTFS has no notion of checksumming, something ZFS, ext4 and btrfs handle (the latter two I wouldn't trust yet in production environments), but it does have integrated snapshots, something you generally need LVM for under Linux, and native encryption; from Vista/2003 onwards it supports shrinking and expanding volumes directly on the fly (again, LVM is necessary to do this under Linux, and it is best done with the filesystem offline).
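For comparison, the LVM side looks roughly like this (a sketch only; it assumes a volume group named `vg0` with a logical volume `root` carrying ext4, and must be run as root on a real LVM setup):

```shell
# Snapshot, then grow the volume and its filesystem online (ext4 supports
# online grow; shrinking would require the filesystem to be offline).
lvcreate --size 1G --snapshot --name root_snap /dev/vg0/root
lvextend --size +5G /dev/vg0/root
resize2fs /dev/vg0/root      # grow ext4 to fill the enlarged volume
lvremove /dev/vg0/root_snap  # drop the snapshot when done
```

The volume-group and volume names here are placeholders, not anything from the thread.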

You choose your operating system and filesystem to suit the task (for example, I'd use OpenBSD on a gateway machine instead of Linux, as it's more suited to the role).

It's such a simple concept, like how you wouldn't use a hammer to crack an egg. You could, but you might find the edge of a knife or a spoon a lot easier and neater.


Even with IFS, you cannot boot from a third-party filesystem. The "root" partition is going to be NTFS anyway.


I don't keep up on enterprise Linux stuff much, but after reading this it concerns me that Linux seems to have better in-place upgrade support than most of Cisco's big-iron gear I've worked with. The 10000-series routers I presently work with only recently got support for in-place upgrades, and it comes with a list of caveats a mile long. From a network-architecture standpoint you can't always (affordably) design around downtime, but at least your servers can now stay up while your network is down.


Not to say it's not cool, but saying "look how much cooler Linux is because of Ksplice" is kind of silly, considering Determina was doing this years ago for Windows.


Well, that is at least one of the advantages of Common Lisp… ;)


Windows has greatly improved in this regard, but it's still a huge pain point. I believe the frequency of required reboots after patches is likely a leading cause of machines remaining unpatched.

Ultimately it comes down to the lack of "no reboots" as a clear goal. SQL Server has a pretty strong goal of being able to patch a running server without rebooting, and it does a good job of meeting it. For the Windows OS, it comes down to programmer laziness winning out in the absence of a directive to avoid reboots.



