Heartbleed blamed for hack that put 4.5 million patients at risk

As I described earlier this week on the Tripwire State of Security blog, hackers broke into the computer network of Community Health Systems (CHS), and stole personal data related to 4.5 million patients.


The hackers, who struck in April and June this year, are feared to have accessed details of individuals who were referred for or received services from doctors affiliated with the CHS hospital group in the last five years.


CHS worked with Mandiant, a division of security firm FireEye, to investigate the attack and the finger of suspicion (somewhat predictably) got pointed towards China.

Now, claims have been made that the attack was orchestrated with a little help from the notorious OpenSSL Heartbleed vulnerability.

According to TrustedSec, the Heartbleed vulnerability was the initial attack vector used by the hackers to gain entry to CHS's network.

Attackers were able to glean user credentials from memory on a CHS Juniper device, which was still unpatched against Heartbleed at the time, and use them to log in via a VPN.

From there, the attackers worked their way deeper into CHS's network until the estimated 4.5 million patient records were obtained from a database.

Sadly, details (as with CHS's initial announcement) are scarce, and TrustedSec merely says that it was told the information by a "trusted and anonymous source close to the CHS investigation."

Here is the interesting thing.

Heartbleed became public knowledge in April, and technology companies around the world rushed to push out fixes - Juniper amongst them. The latest update in Juniper's knowledge base related to its patching against Heartbleed is dated May 6th.

So, if Juniper had a patch against Heartbleed by May, how come CHS got hacked via a vulnerable Juniper device in June?

The answer is simple: Patching is really hard. With the best will in the world, many organisations struggle to roll out patches and update systems in a timely fashion to deal with the latest vulnerabilities.

But breaches like the one at CHS prove that IT teams must be given the resources and backing from senior management to fix vulnerabilities promptly once they become known, or risk making bad news headlines and putting millions of customers at risk.



3 Responses

  1. Coyote

    August 21, 2014 at 3:08 pm #

    The last paragraph is definitely a key point. And now it seems that UPS has also been hit by a breach, which supposedly includes payment credentials. See also http://www.theupsstore.com/security/Pages/default.aspx

    And this time malware is suggested (surprise, surprise…) as the source, if you'll excuse the pun.

  2. George Kasica

    August 21, 2014 at 5:19 pm #

    I've been an IT admin for 29 years.

    The problem lies with the culture of most companies these days.

    NO ONE wants downtime. Systems are expected to be up and available 24/7/365, and that's simply not realistic if you also have to maintain and patch servers. Not every server and every company has the budget for redundant failover hot-spare clusters of machines. In MANY cases there are simply just 1 or 2 devices or servers doing a given task, and to patch and maintain them you need downtime.

    If IT isn't allowed that downtime you'll see a lot more of this.

    I guess the question becomes "Do you want to take a couple of hours a month of downtime, or end up with your company name in the headlines like CHS or any one of a dozen others lately?"

    • Coyote in reply to George Kasica.

      August 23, 2014 at 9:43 pm #

      A couple of hours? All you need to do, at most, is restart the services that use it (keep in mind OpenSSL is a library, not a service itself). Patching only needs downtime if the operating system requires it (maybe some Windows servers? OpenSSL mostly lives on Unix and Linux derivatives anyway, and no server restart is involved there). Unless you have hundreds of DNS zones or web vhosts, to give two examples, it takes hardly any time at all (nothing to speak of), and I would argue those don't apply here, but see below anyway. As for rebooting, the only times you need to restart a Unix server are a kernel update (if a reboot is even needed, notwithstanding certain technologies that allow avoiding one), a hardware failure (in which case you'll be down already) or some other failure (same thing).

      But never mind any of that! Any corporation risking its customers' data is unacceptable, simple as that (and CHS is no exception; in fact it is worse, since putting patients' data at risk is even less acceptable). There is absolutely no excuse here if it has to do with downtime. In actuality, if the updates were available (…), then there is no excuse at all. It's that simple. Remember that wisdom about reputation, and keep in mind how many people this will affect and how it will affect your reputation in many people's eyes.

      Yes, IT teams sometimes aren't given enough privileges, but I don't see how this is one of those cases, especially when it comes to downtime (which is absurd for a library update).
