Have you ever heard the phrase “if it ain’t broke, don’t fix it”? If you have, then you know sometimes it is best just to leave things alone. But no sysadmin worth their Ethernet cable can resist poking at new things to figure out how they work. It is how we all got to the level we are at now, and how we will advance to the next one. Sometimes, however, poking at things with a sharp stick gets us into trouble, and this list describes the 21 most common misconfigurations that will come back to haunt you if you don’t pay attention to the outcome.
1. Anonymous Write and FTP
Anyone who has ever set up an FTP site, allowed anonymous write, and exposed it to the Internet has quickly learned three things: how much bandwidth they have, how much disk space they have, and how fast word gets out. Leaving anonymous write enabled ensures that you will be hosting all kinds of pirated software and movies in no time. Never permit anonymous write, even on the internal network, or you could quickly run out of disk space and bandwidth.
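If you want to audit your own servers for this, here is a minimal sketch using Python’s standard ftplib: it tries an anonymous login and a test upload, and reports whether the server accepts it. The hostname is a placeholder; only point this at machines you are authorized to test.

    # Audit an FTP server for anonymous write access (Python stdlib only).
    import io
    from ftplib import FTP, error_perm

    def anonymous_write_allowed(host: str) -> bool:
        with FTP(host, timeout=10) as ftp:
            try:
                ftp.login()  # default credentials = anonymous login
            except error_perm:
                return False  # anonymous access refused entirely
            try:
                ftp.storbinary("STOR _write_test.txt", io.BytesIO(b"test"))
            except error_perm:
                return False  # anonymous users cannot upload
            ftp.delete("_write_test.txt")  # clean up the test file
            return True

    if anonymous_write_allowed("ftp.example.com"):  # placeholder host
        print("WARNING: anonymous write is enabled -- fix this now!")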
2. Everyone-Full Control
Until recent versions of Windows, any time you shared a directory the default permissions were Everyone-Full Control. Far too many of those older operating systems are still in use, and even worse, far too many admins set the same thing by hand on modern operating systems, as if letting anyone have full control of a directory of data were a good idea. Think least privilege, and never grant Everyone anything.
3. Reply All
Sending an email to “Company All” is frequently necessary, but leaving that email open to “Reply All” is the fastest way I know of to stress and load test your email system. One person hits Reply All to make a comment, and then the next 30 people reply to all asking to unsubscribe, begging everyone to stop replying to all, or chiming in with a “me too!” I’ve seen email servers shut down just to stop the madness. Use Rights Management to restrict Reply All, and make sure only authorized users can even send to your largest DLs, like “Company All.”
4. Leaving Shutdown in the Remote Session options
I once earned an unplanned road trip because I went to log off a server in another city and hit Shut Down by mistake. Removing the “Shutdown” option from remote sessions is the default now, but how many 2003 and 2008 servers are still in production? Millions! Use a GPO to remove “Shutdown” from the remote menu, so that if you really do want to remotely kill a box, you have to use the command-line tool “shutdown” and prove you mean it. Of course, if you like unscheduled visits to the datacenter, you can leave things as they are.
5. Storing cleartext passwords in webpages
Far too often, webmasters save database connection strings in their page source in cleartext, making it easy for anyone who cares to “view source” to get into the back-end systems. Never store credentials in files that end users can access, and if you must store creds anywhere, use secure strings.
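The simplest alternative is to keep the connection string out of your source and pages entirely and read it from the environment (or a secrets manager) at runtime. A minimal sketch in Python; the variable name APP_DB_CONN is an assumption for illustration:

    # Read the connection string from the environment, never from the page.
    import os

    conn_string = os.environ.get("APP_DB_CONN")
    if conn_string is None:
        raise RuntimeError("APP_DB_CONN is not set; refusing to start")
    # Pass conn_string to your database driver here -- it never appears
    # in HTML, templates, or anything a "view source" can reach.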
6. Not validating input
Buffer overflows, SQL injection, changing prices in shopping carts – all of these are possible when you don’t validate input from end users in your software and on your web pages. Always validate input and reject anything that fails validation before it gets to the point where damage is done.
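Validation and parameterization go together: check the input against an allow-list first, then keep data and code separate so the database never interprets user input as SQL. A minimal sketch using Python’s standard sqlite3 module (the table and column names are assumptions):

    # Validate first, then use a parameterized query.
    import sqlite3

    def get_user(db: sqlite3.Connection, username: str):
        # Reject anything that fails the allow-list check outright.
        if not username.isalnum() or len(username) > 32:
            raise ValueError("invalid username")
        # The ? placeholder keeps data out of the SQL itself, so input
        # like "x' OR '1'='1" is treated as a literal string.
        cur = db.execute("SELECT id, username FROM users WHERE username = ?",
                         (username,))
        return cur.fetchone()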
7. Leaving cleartext protocols enabled
Except for DNS queries, public downloads, and webpages that you want the world to see, there’s really no reason on the Internet to use cleartext protocols at all. But if you are performing any authentication, or providing access to any sensitive data, it is imperative that you use encryption to protect the confidentiality of the data.
8. Not redirecting cleartext to encrypted
To be clear, that doesn’t mean you should turn the cleartext listener off. Too many users will type an address without the protocol, and without an HTTP-to-HTTPS redirect they won’t reach your site at all. Accept the HTTP request and redirect it to HTTPS, so that users’ data is protected – and so is their experience with your site.
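In production you would normally configure this redirect in your web server or load balancer, but here is a minimal sketch of the mechanics using only Python’s standard library (the fallback hostname is a placeholder, and binding port 80 usually requires elevated privileges):

    # Answer every HTTP GET with a permanent redirect to HTTPS.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            host = self.headers.get("Host", "www.example.com").split(":")[0]
            self.send_response(301)  # permanent redirect
            self.send_header("Location", f"https://{host}{self.path}")
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 80), RedirectHandler).serve_forever()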
9. Using self-signed certificates
Whenever you train users to just “click through” a warning without reading it, you are setting them up to be exploited. Nowhere is this more common than with internal websites that use HTTPS with a self-signed certificate, forcing users to click through the browser warning to proceed. End users won’t distinguish between internal and external sites; they will simply recognize the warning and click OK, just like you taught them on that internal application. Build an enterprise CA or purchase a wildcard certificate from a trusted CA, but never teach users that it is okay to click through a warning.
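Software makes the same judgment a browser does. As a minimal sketch, Python’s default SSL context refuses a certificate that doesn’t chain to a trusted CA, which is exactly the failure your users are clicking past (the hostname is a placeholder):

    # Check whether a site's certificate chains to a trusted CA.
    import socket, ssl

    ctx = ssl.create_default_context()  # verifies against the trust store
    try:
        with socket.create_connection(("intranet.example.com", 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname="intranet.example.com"):
                print("certificate chains to a trusted CA")
    except ssl.SSLCertVerificationError as e:
        print(f"untrusted certificate (self-signed?): {e}")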
10. Leaving sample applications and code on a production system
Sample applications are designed to show you how to do something. They are not written to be secure, nor are they typically updated when you patch an application. When building or deploying a server into production, remove all the sample code and apps to ensure that they cannot be used against you later.
11. Patching without testing
Unless you run nothing but vanilla code from the vendor, patching without testing is asking for trouble. The vendor cannot possibly test every single configuration, and that means that they didn’t test your configuration. That’s your job. You want to patch, but only after you have tested to be sure it won’t break something else in your environment.
12. Autoconfigured (169.254.y.z) IP addresses in DNS
If a server has two IP addresses registered in DNS, it will answer queries with both of them. If one of those addresses is bogus, a client stands a 50:50 chance of trying the bogus address before the legitimate one. That means slow performance, and that means a helpdesk call. If you are not going to use a NIC, don’t connect it. If you do connect it, give it a static IP address or put it on a VLAN with DHCP. At the very least, untick the box to register the connection in DNS so you don’t get bogus addresses mapped to legitimate hostnames.
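Spotting these records is easy to script. A minimal sketch in Python that resolves a hostname and flags any APIPA (169.254.0.0/16) addresses that snuck into DNS; the hostname is a placeholder:

    # Flag autoconfigured link-local addresses registered in DNS.
    import ipaddress
    import socket

    def check_for_apipa(hostname: str):
        infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
        for info in infos:
            addr = ipaddress.ip_address(info[4][0])
            if addr.is_link_local:
                print(f"{hostname} has a bogus autoconfigured record: {addr}")

    check_for_apipa("server01.example.com")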
13. DNS Islanding in Active Directory
In Active Directory, Domain Controllers should never point to themselves as their primary DNS server; they should point to another DC. A DC that points only to itself can fall out of sync with the others without realizing it, quickly becoming an out-of-date “island” that cannot authenticate users. If it stays out of sync for too long (60 days by default), you have to flatten and reinstall it, and then use NTDSUTIL to purge the stale metadata out of AD. Always make sure DCs point to other DCs for DNS, never to themselves.
14. Not logging enough
Logging is critical, but it is seldom done well. Default logging is usually not detailed enough to truly reconstruct what happened; thorough logging takes a lot of drive space; and it can be days or weeks after an event before anyone realizes something happened and the logs need checking. Make sure you log thoroughly enough to be able to recreate events, and keep logs long enough that you can go back weeks if necessary to figure out what happened.
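What “thorough enough” looks like in application code: timestamps, severity, source, and enough retention to reach back weeks. A minimal sketch using Python’s standard logging module, with size-based rotation keeping 30 generations (the file name and sizes are assumptions to tune for your environment):

    # Detailed, long-retention logging with the standard library.
    import logging
    from logging.handlers import RotatingFileHandler

    handler = RotatingFileHandler("app.log", maxBytes=50_000_000,
                                  backupCount=30)
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(name)s %(message)s",
        handlers=[handler],
    )
    logging.getLogger(__name__).info("user %s logged in from %s",
                                     "alice", "10.0.0.5")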
15. Not logging centrally
By default, systems log to their own local drives. That’s great until the system fails, or is compromised and the attacker wipes the logs. Logging centrally takes more time, money, and storage, but it ensures you have logs to refer to when a system goes down, and it makes it much harder for an attacker to hide their tracks.
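Most platforms can forward logs natively (syslog forwarding, Windows Event Forwarding), but applications can ship their own records too. A minimal sketch that sends the same log records to a central syslog collector as well as local disk; “loghost.example.com” is a placeholder for your collector:

    # Log locally and to a central syslog collector at the same time.
    import logging
    from logging.handlers import SysLogHandler

    logger = logging.getLogger("myapp")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.FileHandler("local.log"))  # local copy
    logger.addHandler(SysLogHandler(address=("loghost.example.com", 514)))
    logger.info("auth failure for user %s", "bob")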
16. Permissions to ~
Many Linux distros leave users’ home directories world-readable. That usually doesn’t mean the entire Internet, but it does mean that anyone on the system may have READ access to the admin’s home directory, which can hold password files, configuration files, and who knows what else. Make sure permissions on every user’s home directory are set to 700, so the owner has READ, WRITE, and EXECUTE and everyone else has nothing – note that on a directory the EXECUTE bit controls traversal, so 600 would lock users out of their own home. If group members genuinely need read access, use 750.
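A minimal sketch that walks /home and flags any directory readable by group or world; run it as root to see every home directory, and uncomment the chmod line only after reviewing the output:

    # Find home directories that are open to group or world.
    import os
    import stat

    for entry in os.scandir("/home"):
        if not entry.is_dir():
            continue
        mode = stat.S_IMODE(entry.stat().st_mode)
        if mode & 0o077:  # any group/other permission bits set?
            print(f"{entry.path} is too open: {oct(mode)}")
            # The fix: owner-only read/write/traverse.
            # os.chmod(entry.path, 0o700)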
17. Using default SNMP Public and Private community strings
The only security in SNMP v1 and v2 is the community string, and the defaults are well known: “public” for read and “private” for write. That makes it a trivial exercise for a malicious user with network access to shut down router interfaces or mirror switch ports. SNMP v1 and v2 transmit in cleartext, but changing the community strings at least makes it harder for an attacker to start messing with your network. Use SNMP v3 if possible, or don’t enable SNMP write access at all.
18. Dropping ICMP
The RFCs state that hosts MUST respond to ICMP Echo requests, so any admin who drops ICMP is violating the RFCs – which is bad! More to the point, since the Ping of Death hasn’t been a thing in 15 years, all dropping ICMP does is make it harder for customers to troubleshoot when they cannot reach your website, and generate helpdesk calls when your users can’t get on the VPN. At the very least, allow pings to your website and your VPN endpoint, which is what most tests will target anyway.
19. Dropping (instead of rejecting) anything on the internal network
If you reject unwanted traffic on the inside – sending a TCP RST or an ICMP Unreachable – good admins will see the response and know the firewall is blocking things on purpose. If you silently drop on the inside, your fellow admins can waste days trying to figure out why something won’t work, will learn to blame the firewall for every failure because they cannot tell the difference, and will at best call you whenever anything breaks, or at worst grow to hate you. Drop on the outside; reject on the inside.
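From the client side, the difference is unmistakable. A minimal sketch: a rejected port fails instantly with a clear error, while a dropped port just hangs until the timeout (the address and port are placeholders):

    # Show the client-side difference between reject and silent drop.
    import socket

    def probe(host: str, port: int):
        try:
            socket.create_connection((host, port), timeout=5).close()
            print(f"{port}: open")
        except ConnectionRefusedError:
            print(f"{port}: rejected -- the firewall is telling you 'no'")
        except socket.timeout:
            print(f"{port}: dropped -- silence, cue days of head-scratching")

    probe("10.0.0.1", 8080)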
20. Leaving systems set to automatically update
Much like patching without testing, letting systems automatically update means they are patching without testing and now without even a maintenance window. Seriously, if you are letting servers automatically update, what do they need you for? You want to control patching both so that you can test, and so that you only take servers down for reboot when expected.
21. Using the local hardware clock for time synchronization
Time synchronization is critical. Logs depend on it. Authentication depends on it. Your users depend on it to know when it is time to go home! So why would you let clocks sync to notoriously inaccurate hardware clocks? All networks should use NTP to keep their clocks in sync, and use a reliable external time source like pool.ntp.org to make sure that not only are all the clocks in sync, but that they are accurate.
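Real deployments should run a proper NTP client (ntpd, chrony, or the Windows Time service) against a reliable source, but the protocol itself is simple enough to illustrate. A minimal SNTP sketch in Python that queries pool.ntp.org and reports roughly how far off the local clock is:

    # Query an NTP server and compare its time to the local clock.
    import socket
    import struct
    import time

    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900 (NTP) and 1970 (Unix)

    def sntp_time(server: str = "pool.ntp.org") -> float:
        packet = b"\x1b" + 47 * b"\0"  # LI=0, version 3, mode 3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(5)
            sock.sendto(packet, (server, 123))
            data, _ = sock.recvfrom(48)
        transmit = struct.unpack("!I", data[40:44])[0]  # transmit timestamp
        return transmit - NTP_EPOCH_OFFSET

    offset = sntp_time() - time.time()
    print(f"local clock is off by roughly {offset:.2f} seconds")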
Take care of these most common misconfigurations, and your network will be in great shape and ready to handle whatever comes its way.