As a guy with a startup of my own and my wife’s medical practice, I’m interested in state-of-the-art security for both entities. When I speak to the companies that seem to have the leading solutions, they are all aimed at the enterprise market, and none can refer me to an analogous system that can be bought for small operations. I spoke to Cylance today and they sell a minimum of 100 endpoints. “Call us back in 6-9 months” was their only suggestion.
I guess you could say I’m somewhat paranoid. I’d considered running all my browsing and email in a remote virtualized environment and using a VNC client to access it so no potentially nasty stuff could ever touch my network through those vectors.
I just finished reimaging both my kids’ Windows laptops this weekend because of suspected system compromises. Way too time-consuming and aggravating. I switched to Macs years ago to avoid having to do that regularly.
Ideally I’d like something that can be put in place with lightweight ongoing administration requirements. The medical office has no server infrastructure, just some computers and iPads accessing a cloud-based EMR.
The startup is cloud-based but I need to cover the endpoints and my family’s devices (laptops, iPads, phones) as well. Also likely going to have a couple of servers: Mac OS X and Linux, no Windows.
For home and office, I’m currently using ESET suite and Malwarebytes for the kids’ Windows laptops and ESET suite on my Macs. Have an ancient Cisco PIX 506e firewall appliance but it’s not in use at present (no longer supported by Cisco but could be used at home).
I’m fine with getting updated hardware in order to protect LANs at both office and home but need good recommendations. Wireless routers are recent generation non-managed types (Linksys E3200).
For home: I’d like some decent web filtering (I’ve got 12- and 15-year-olds). A nice-to-have would be traffic shaping so I could measure and prioritize various traffic, so the kids’ streaming doesn’t impact everything else on our 100 Mbps cable modem connection. I don’t know if traffic shaping is available for small networks at a reasonable price these days. It was sure nice to have back when I ran IT for a school district in the early 2000s.
For medical office: Just need network security and endpoint protection. Also, a disaster recovery solution for a hybrid iOS and MacBook environment.
PS – I have gotten a bit spooked about inadvertently getting nailed by ransomware, and thought the only way to be truly insulated might be to have the backup server initiate all processes: connect to the client machines, perform a malware scan, and if the scan passes, initiate a file sharing connection (from the server) to perform the backup, then disconnect the file sharing connection once the backup is complete. The backup would then be replicated onto separate archival media that would not be accessible from the server at any time other than when the copy is being made.
File sharing connections would not be allowed to be initiated by client machines to the server under any circumstances- only the server can initiate the connection, and only after it has passed a malware self-test and the client has passed one as well.
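In rough pseudocode, the flow I have in mind looks like this (the scan and copy hooks are hypothetical placeholders, not any particular product):

```python
# Sketch of the server-initiated ("pull") backup flow described above.
# The scan and copy hooks are hypothetical placeholders, not a real API.
import shutil


def scan_clean(path):
    """Placeholder malware scan. In practice this would invoke a real
    scanner (e.g. run clamscan and check its exit code)."""
    return True


def pull_backup(client_src, server_dest):
    # Step 1: the server verifies itself and the client before touching files.
    if not (scan_clean("/") and scan_clean(client_src)):
        raise RuntimeError("scan failed; refusing to connect")
    # Step 2: the server initiates the copy; clients never connect inward.
    shutil.copytree(client_src, server_dest, dirs_exist_ok=True)
    # Step 3: replicate onto archival media that is attached only for
    # this step (mount, copy, unmount -- omitted here).
```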
Does this approach make sense? Have I missed something? Is this overkill?
I’m short on time today but I’ll try to get to this tomorrow or the next day (though I’m not sure how much I can offer). For now I’ll just state the following:
It isn’t paranoia – it is being realistic about the fact that there are risks. Many risks. That is a good trait to have; too many people do not care about the security of their devices, even when it affects others too (malware comes to mind). I know ‘paranoid’ is probably just a general choice of words, but I will correct it where I can, because it isn’t paranoia at all (take it from me – be glad of this) but a valid position to take. You should be this way, in other words, and that is the point.
As for overkill: the only overkill in security is going so far that nothing is usable (or so inconvenient that people start looking for workarounds – obviously a very bad thing). Of course, having and enforcing a policy is necessary, but it is a balancing act. I would advise caution with the IoT (‘Internet of Things’), but I won’t elaborate because it is such a terrible phenomenon that isn’t going away, and it has many, many risks – too much to discuss (FYI: many medical devices have been found to have been designed with no security in mind whatsoever while connected to the Internet). I for one (with chronic health problems) am glad you want to keep medical information secure, and I commend you for it.
Lastly, a tip in general: stay in the loop with security (this includes risks that you might find on advisory mailing lists or keeping up-to-date in some other way). Lack of awareness is a huge problem in this world and it is equally true for security; how can you protect yourself if you don’t know you’re vulnerable (many other questions like this exist)? By doing this you can fix problems as quickly as possible (much like keeping your systems patched) but you also develop (or improve upon) a sense of how everything works together.
I’ll try to respond properly on the other points but I won’t make any promises on when or to what extent.
- Thanks for your feedback. I guess my idea to use an AS/400 as a server, since hackers don’t know anything about the OS, won’t sound too extreme (I’m only half-joking here; using an obscure platform seems like it might have some merit – HP 3000, anyone?)
- Obscurity is only valid if it is used in addition to other layers. Example: you wouldn’t want your password hashes world-readable, but you also wouldn’t want to rely on obscurity alone. And I promise you this: no matter what you think others won’t know, or what you think you know, there will always be surprises – some will know exactly what you rely on, everyone can learn more, and everyone can be bested at something. Even VMS isn’t as obscure as you might think, and the same goes for HP-UX and really any other OS. The important part is this: even if it IS obscure, you shouldn’t rely on that fact; it might not be obscure enough for everyone out there. You should presume and expect that this is the case.
For endpoint protection, I heartily recommend Webroot SecureAnywhere. Even with over a hundred endpoints in our organisation, it is all centrally managed and cloud-based, so there are no more signature downloads (that obtrusive thing of the past), and you can see your entire organisation from one set of web screens (who got infected, when, and whether it has been dealt with…). You can also fire various commands at sets of workstations. It installs in a couple of minutes without needing a reboot, by running a 700KB executable (on Windows), and the initial scan (to set it into the system) takes just 5 minutes (if that). It is light on resources, so older, slower PCs can run it easily, and we have found it to be more effective than MBAM and Kaspersky, especially on zero-day attacks. It has an application behaviour analyser built in, and also stops you going to bad websites. The website is at http://webroot.com
I’m going to post another ‘answer’ because I’m responding to specific concerns you raised. Some of this might seem redundant, but any redundancy is intentional because these points are really important (even the ones that aren’t strictly about security). And on the subject of redundancy: redundant storage (e.g. RAID setups) is not, has never been, and never will be a backup. This is unfortunately something people miss, but if you make a mistake (delete the wrong file), malware causes you problems, or any other write action does something to the volumes, then it will be the same on all disks. (There are other problems that apply to hard drives in general that I won’t discuss, but one thing: if you have SMART-enabled drives I highly recommend you use that functionality – though it won’t stop the drives from dying, it can notify you of problems it sees so that they don’t become real problems for you.)
Windows: Is it possible to physically isolate these from your network (or at least the business subnet)? That would be ideal, but if not physically isolated, then through filtering (ingress and egress). This would be a good idea with external media too (though I know that can be easier said than done), as well as phones (if you have wireless – I would personally go wired-only here, but I’m also biased against wireless in general). ESET is a good choice on resources, and last I knew it rates pretty well in the AV department (and I hear good things about Malwarebytes). But yes, I would be most concerned with the Windows boxen; that isn’t to say everything else is risk-free: an insecure system (which includes poor/lax configurations, unpatched software, etc.) is still an insecure system, and the fact that script kiddies can easily compromise a system means a great deal. If you can afford it, penetration testing would be a good idea.
Home/Traffic shaping: If you route the computers you want to shape through a computer acting as a router, you could do this for free. Under Linux you can use trickle (I’m not sure if this would be what you want, though, because it shapes per-process rather than per network interface; there are other traffic shapers, but none come to mind immediately). Some routers (with built-in switches) also have shaping through QoS settings (I seem to remember this, anyway). I would also recommend that if you’re going to filter web traffic, you might want to filter other kinds of traffic too. It is important to remember that upstream is just as important as downstream: if bittorrent (one example among others, including even malware) is configured poorly (even for legit software), you can make a connection extremely slow simply because the upstream is so stressed.
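As a side note on how shaping works: most QoS implementations (including Linux’s tbf qdisc under tc) are variations on a token bucket. A toy illustration of the idea (not a real shaper):

```python
import time


class TokenBucket:
    """Toy token bucket: permits bursts up to `capacity` bytes and refills
    at `rate` bytes per second. Illustrative only -- a real shaper (like
    tc's tbf qdisc) queues packets instead of just answering yes/no."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # over budget: a real shaper would queue or drop
```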
Firewalls: I wouldn’t use something that isn’t supported, but I can say this: Check Point is a good choice. Linux’s netfilter (i.e. iptables, and ip6tables if you have IPv6 enabled) is also extremely powerful (though I would probably go for Check Point or some other commercial product if you can, especially if you’re not experienced with iptables). BSD Unix has its own firewall (the name escapes me at the moment – I’ve not used BSD or any other Unix in a very long time). You might want to look into bastion hosts/DMZs/etc. as well, because they are part of firewalling.
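Whichever product you choose, the underlying model is the same: an ordered rule list evaluated first-match, with a default deny at the end. A toy illustration of that logic (not a real packet filter; the policy shown is purely hypothetical):

```python
# Toy first-match packet filter illustrating a default-deny policy.
# The rule list below is a made-up example, not a recommended ruleset.
RULES = [
    ("tcp", 443, "accept"),  # allow HTTPS
    ("tcp", 22, "accept"),   # allow SSH
]


def decide(proto, dst_port):
    for rule_proto, rule_port, action in RULES:
        if (proto, dst_port) == (rule_proto, rule_port):
            return action  # first matching rule wins
    return "drop"  # default deny: anything not explicitly allowed
```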
Medical office: I have no personal experience with iOS, and the same goes for MacBooks, but if it is anything like Mac OS X then I would at least have some ideas (it is based on BSD and NeXTSTEP, and while I am not familiar with NeXTSTEP, I am with BSD, and more generally Mac OS X is Unix-based). In general, it comes down to limiting the attack surface: don’t run unneeded services, keep everything patched (this cannot be stressed enough), and so on. This rule applies to every part of the network – if a subnet is compromised and it is not isolated (physically), or the filtering is somehow bypassed, then another part of the network is now at risk.
Disaster recovery/etc.: Nightly backups. Make sure your backups work, and that the media they are on is free from errors – backups are useless if they have errors or restoration doesn’t work at all! This is really important. Make sure all disaster recovery is functional and sound (and evaluate it regularly to confirm everything is still working as intended!). You obviously have to have the plans and policies in place initially, but do this for more than just the office – do this everywhere (and consider them all together).
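One simple way to verify that a restored copy is byte-identical to the original is to hash both directory trees and compare. A minimal sketch (a real verifier would also check permissions, ownership, symlinks, etc.):

```python
import hashlib
import os


def tree_digest(root):
    """Hash every file under `root` (relative path plus contents) so two
    directory trees can be compared for a byte-identical restore."""
    h = hashlib.sha256()
    # Sort directories and filenames so the digest is deterministic.
    for dirpath, _, filenames in sorted(os.walk(root)):
        for name in sorted(filenames):
            full = os.path.join(dirpath, name)
            h.update(os.path.relpath(full, root).encode())
            with open(full, "rb") as f:
                h.update(f.read())
    return h.hexdigest()

# A restore is only good if tree_digest(original) == tree_digest(restored).
```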
More generally: make sure input (direct or indirect) is sanitised. Make sure everything else is sanitised too (output is often input, after all). This is especially important for public-facing services (e.g. websites), but it is still important for everything else (even programs that aren’t services). And keep everything patched.
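A trivial example of the “output is often input” point – escaping untrusted text for the context it is written into (here, HTML):

```python
import html


def render_comment(user_text):
    """Escape untrusted text for the HTML output context before it is
    interpolated into a page. Never echo raw input."""
    return "<p>{}</p>".format(html.escape(user_text))
```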
- First off, thanks for the comprehensive response. That was exactly the kind of thing I was hoping for. To your points:

Yeah, I was really trying to discourage the kids from getting Windows laptops, but they want to play games (of course) and the Mac is not nearly comparable in that department (I could have used a VM for Windows, I guess, but video performance would lack). I eventually capitulated and bought them each a laptop, while promising myself that I’d create a bulletproof process for rebuilding them quickly when they (inevitably) fell over from whatever malady befell them. Maybe I should just get them an Xbox One and insist that all gaming be done on that – don’t think that’s gonna sell, though. Moving the Windows machines onto their own wifi network/VLAN sounds like it makes sense; I’ve got extra Linksys APs sitting around. Maybe having them run a VPN full-time would be helpful too – something like TunnelBear (which I already use personally). It would get the kids in the habit of using it all the time, so that when we travel they’d be trained to use it innately.

I’m with you on the bias for wired networks and against wireless, but practicality means (in my case, at least) you’re going to have to support some level of WiFi. I guess the iPads (in the medical office) could all be cellular; that would likely stop all but a targeted cellular attack, but probably increase monthly data costs – I’ll look into that. I would think that WiFi with a non-advertised, random SSID and good encryption should handle most issues there. I think I need to get educated on RADIUS authentication.

FYI, the MacBooks run Mac OS X. I had toyed with the idea of Chromebooks for the medical office. Then all I’d have to be concerned with is network security and making sure backups from the cloud were good (probably using something like Spanning to keep things entirely in the cloud). I’d use 2-factor authentication with YubiKeys for Google accounts.
I just need to find a way to get local scanning and printing done from Chromebooks. I think Google Cloud Print could address the printing issue. Scanning is used to get consult notes from other physicians, pharmacy notes, etc. into the cloud EHR.

Disaster recovery is where I would guess most people fall down. I remember in college working as a weekend system operator for a mid-sized business (500 employees) and coming in one Saturday morning to find my boss and her boss, the head of IT, looking like they hadn’t slept much. One of the disk packs had crashed (IBM AS/400 midrange network). The IBM rep had replaced the hardware and they were restoring the system. Turns out the backup was no good: an obscure bug in the OS meant that once the backup hit a certain number of members (files), it stopped writing files. It would have been OK, except the programmer who wrote the backup routine never wrote a verify routine, so it was never caught. The company lost 3 months of payroll data and had to cut checks manually for months until data entry got caught up. Direct deposit got broken. A very unhappy situation. I learned from that: “It’s not the backup that matters, it’s the restore.”

BRU is the only thing I’ve seen that backs up cross-platform clients and seems to have bulletproof verification. It looks like Acronis is just starting to support Macs, but I think I have to give the nod to the mature platform in BRU. If I could just find a cloud-based backup provider that uses BRU. The bottom line is that you have to test that your restore is good, and most people find that to be a huge pain-in-the-ass. I say everyone gets religion regarding backups right after their hard drive crashes. If there’s a relatively easy way to test the restore without overwriting your machine’s main hard drive, I’d love to hear it. Maybe there’s an online service that allows you to restore to it and then test the restore in a VM environment. If not, someone needs to build it.
As for testing media reliability, I’ve been a huge fan of SpinRite since I was a bench tech in the ’80s. It apparently still does its stuff effectively.
- I should point out that Steve Gibson is a known charlatan (see, for example, his broken ‘attempt’ at making his own SYN cookies, insisting after the fact that he’d never heard of SYN cookies: http://www.theregister.co.uk/2002/02/25/steve_gibson_invents_broken_syncookies/ – there are many more examples). As for testing drives: different drives have different firmware, but SMART is the best thing we have today (I would like to be wrong here, of course, but SMART shouldn’t be ignored – the reality is that everything mechanical dies, and hard drives are mechanical).

I use Bacula for backups, and while I don’t know how it is configured under Windows (I believe there is a Windows port), it works well under Linux (and should under other Unices), as long as you make sure to keep a backup of its database too (it has an option for this, but I dump all MySQL databases – you can use other drivers, I think – via a nightly cronjob). There is the potential problem of major version updates (especially OS releases that update many system components) leaving you unable to read the database, but there are ways to fix (some of) the problems.

But the real point is simple: it restores to a separate file system – in my case /var/tmp, and under that is the directory structure you backed up (e.g. /var/tmp/home, or whatever it is). So the answer is that you need a backup system (even if you write your own set of scripts) that doesn’t hardcode the path to restore to (how to get around drive letters on Windows is beyond me, and I’m also clueless on a good way of separating data, system files, and program files under Windows). Alternatively, you have a system used only for restoring backups.
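The “don’t hardcode the restore path” idea is just a path remap. Something like this sketch (the restore root is whatever scratch file system you choose; /var/tmp here only because that is the example above):

```python
import os


def remap(path, restore_root="/var/tmp"):
    """Map an absolute backed-up path under an alternate restore root so
    the live file system is never overwritten, e.g.
    /home/user/file -> /var/tmp/home/user/file."""
    return os.path.join(restore_root, path.lstrip("/"))
```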
Remote backups aren’t necessarily a bad idea, but I would caution against using a cloud service as your only backup: the service may shut down (some people used Megaupload as their sole backup, and look what happened some years ago…), the service may have a disaster it can’t completely recover from (like Google’s data centre in Europe being hit by lightning four times in a day, not too long ago), and more generally you can’t control how they maintain their servers or backups. It is your data at stake and your data should be protected properly; it isn’t their data that is lost – it is yours. Relying on something you have no control over is a disaster waiting to happen.

On the subject of data recovery and disks more generally: you should also have UPSes, because not only will they allow you to shut down safely, they help against dirty signals on the line. Example: during a lightning storm here, the UPS for this system (a normal desktop) cut the power instantly during a strike, because if it hadn’t there would likely have been damage to the connected devices. While it is annoying that the power was immediately cut, it works as designed, and the consequences of damage from lightning (or other bad things on the line) would be far worse (that includes fires, I might add – however rare, it isn’t unheard of). Some would argue that I shouldn’t have the power on during lightning, but that’s the wonderful thing about UPSes. My server and networking equipment are on a different circuit and a more powerful UPS (well, not now, since I upgraded this one last year) and had no problem whatsoever (having a clean, dedicated circuit is also a good idea if you can get one). The UPSes I’ve had in recent years also have a way to detect wire faults (one example of several). So I would highly recommend you get UPSes with room for expansion – and room for expansion should apply to more than UPSes, of course. Otherwise I’m not sure what else to respond to.
Hopefully I covered everything of importance – I’m somewhat distracted.