Missing Option (Score: 1)
by billshooterofbul@pipedot.org in "I mainly use my tablet in:" on 2014-10-20 19:03 (#2TH8)
I hardly ever use my tablet, now that I have Shamu-boy Neil.
> isn't it unfair to compare a nationwide network of AT&T and later RBOC maintained telco copper, on which POTS ISPs ran freely, with allowing/imposing competition on local evil cable company monopolies who all ran their own infrastructure and connected to each other and the Internet per se only as an afterthought to delivering TV?

Nationwide versus local doesn't make any difference... You wouldn't want to do long-distance dial-up, with the high rates being charged, and couldn't ever do long-distance DSL, so only local really ever mattered for internet access.
> You have 500 servers for the same reason you have raid: Redundancy.

And if you let a few of them stay down for no reason, you've got that much less redundancy.
> Because you have monitoring systems in place that report such status information, and because any decent admin will configure a service manager to only restart the process a few times in a short period, before giving up. "Monitoring logs" is only something you do at home... It doesn't scale. You can't do system monitoring that way.

Past a certain number of machines, you don't directly monitor logs on production systems. Most companies get by with just 2 or 3 email servers, and those machines are often monitored manually.
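The "monitoring systems that report such status information" in the quote map onto systemd fairly directly: an OnFailure= directive can activate an alerting unit whenever a service lands in the failed state. A minimal sketch, assuming a hypothetical status-email-admin@.service alert unit exists on the machine (systemd does not ship one), with a made-up daemon path:

    [Unit]
    Description=Some production daemon (hypothetical)
    # Fires whenever this unit enters the "failed" state.
    # %n expands to the full unit name, so the alert says which service died.
    OnFailure=status-email-admin@%n.service

    [Service]
    ExecStart=/usr/local/bin/mydaemon

The alert unit itself would typically just run a script that mails or pages whatever notification system the site already uses.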
> The most rock-solid stable and reliable service will crash, on occasion, in ways that do not need nor would benefit from investigation

Nor from automatic restart, since it only happens "on occasion" and anything critical will be monitored. And because we are talking "critical", an administrator will be on call anyway.
> Ironically impolite, given your other comments here.

Just calling it like I see it. A spade is still a spade.
> If you're getting crashed processes because of hardware errors you can't just restart the process.

A service crashing isn't evidence of a hardware error, and when that does happen, trying to restart it a couple of times won't hurt your efforts in any way.
> If it's a single Apache server out of 500 that's behind a load balancer that can detect the failure and route around, then yeah, let it stay dead.

Not actually a good plan... If you have 500 instances of Apache, it's because you NEED 500 instances of Apache, and a couple of them going down is likely to cause measurable slowdowns at peak times. If you have many more servers than you need, you're wasting money to compensate for software limitations.
> I think everyone who complains about it should join in with uselessd and see that through. That approach makes sense to me. Forking Debian seems like a waste of time and energy.

I think most everyone can agree on that point.
> Establishing a record that can be easily linked back to you through a single slip isn't much better than not using a handle at all.

You don't need to keep the same one for 20+ years like I have; the barrier is quite low. Besides, it's extremely easy not to let personally identifying information slip... Unless you're Hodor.
> Again, I think the ACs here are particularly polite and cogent

Ugh... Polite and cogent like him?:
> This isn't an issue of sysV vs. systemd

No, upstart is in there, too, and that's about all... It got voted down in favor of systemd across the board.
> most people start off their argument by saying "we agree system V init needs to be replaced with something better. But this isn't it."

Open source software doesn't start with executives espousing grandiose ideas. Distros choose from what's out there. Somebody needs to churn out some code, and they needed to do it 20 years ago. This has been needed for a long time, and distros can't take a wait-and-see attitude when their big customers needed these features years ago and aren't going to keep waiting.
> Committing to systemd is a big jump it's hard to back out of.

Big jumps that get redone later are pretty common in Linux: big initrd changes, devfs to udev, dcop and dbus, OSS with esd and arts to ALSA and pulse, KMS, lilo to grub to grub2, LVM, etc. They're always painful, and often stupid and pointless, but not world-ending.
> Lastly, when you think about how much work it is to maintain Debian, threatening to fork it is a BIG undertaking

Actually, it's easy to make the threat. That's the problem with all these discussions... Talk is cheap, and every misinformed random user can make lots of talk.
> If something is crashing on a production server you have fucked up

Utter nonsense. You're just a kid with a Linux box who has no large-scale experience but wants to pretend to be an expert on the internet. EVERYTHING crashes over a long enough time-frame. The most uber-stable, simple system software will eventually crash. Across enough servers, you'll see it happening daily.
> If a service is restarting itself all the time, how would you know?

Because you have monitoring systems in place that report such status information, and because any decent admin will configure a service manager to only restart the process a few times in a short period, before giving up. "Monitoring logs" is only something you do at home... It doesn't scale. You can't do system monitoring that way.
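The "restart a few times, then give up" policy the reply describes is only a few lines in a unit file. A minimal sketch, assuming a systemd of roughly this era (later releases renamed StartLimitInterval= to StartLimitIntervalSec= and moved it to [Unit]); the daemon path is hypothetical:

    [Service]
    ExecStart=/usr/local/bin/mydaemon
    # Restart only on abnormal exits, waiting 5 seconds between attempts...
    Restart=on-failure
    RestartSec=5
    # ...and give up if it has to be restarted more than 3 times within 60 seconds.
    StartLimitInterval=60
    StartLimitBurst=3

Once that limit is hit, the unit is marked failed rather than respawned forever, which is exactly the state a monitoring system (or an OnFailure= hook like the one sketched earlier) would report.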
> How about you do that when it breaks and stops running the first time or two

Already addressed this, TWICE, in my post. Look for 'crond'. The most rock-solid stable and reliable service will crash, on occasion, in ways that do not need nor would benefit from investigation. Across many hundreds of servers running numerous services, this is a daily occurrence.
> When a daemon crashes on a production server, we want to know why. We investigate and fix the problem before restarting.

Already addressed this nonsense, TWICE, in my post. Try again.
> Funny... the only time I ever needed data center staff to intervene was after a botched systemd "upgrade".

A NOC isn't data center staff.
> SysVinit scripts don't have any way to restart services that have quit/crashed. That is EXTREMELY important on servers, and its absence is a notable missing feature on Linux.

When a daemon crashes on a production server, we want to know why. We investigate and fix the problem before restarting.
> ANY service that you need running is "critical" and failure can't be ignored. Right now, these system restarts are typically performed by poorly-paid NOC personnel, who understand less about the services in question than systemd does

Funny... the only time I ever needed data center staff to intervene was after a botched systemd "upgrade".
> Automatic service restarts are perfectly safe

Simple-minded nonsense that will be easily countered by the first 0day exploit that takes advantage of it.