
Largest HIPAA Breach Ever: Hackers Steal Data on 4.5 Million Community Health Systems Patients


http://smartdatacollective.com/onlinetech/227001/largest-hipaa-breach-ever-hackers-steal-data-45-million-community-health-systems-p


There’s a new leader on the U.S. Department of Health & Human Services’ Wall of Shame.

A hacking group known as “APT 18” is suspected of stealing names, Social Security numbers, addresses, birthdays and telephone numbers from 4.5 million patients of Community Health Systems, a network of 206 hospitals across 29 states. Credit card numbers and medical records were not accessed.

It’s the largest attack involving patient information since the HHS started tracking HIPAA breaches in 2009, passing a Montana Department of Public Health breach that affected roughly 1 million people.

Patients who were referred to, or received services from, doctors affiliated with Community Health Systems in the last five years were affected, the company reported in a regulatory filing on Monday. The sophisticated malware attacks occurred in April and June.

According to numerous news reports, security experts said the hacker group may have links to the Chinese government. Charles Carmakal, managing director of the Mandiant forensics unit, hired by the hospital group to consult on the hack, told Reuters that “APT 18” typically targets companies in the aerospace and defense, construction and engineering, technology, financial services and healthcare industry.

In an Online Tech webinar titled “Why is it So Hard to Secure a Company,” security expert Adam Goslin discussed how the past decade has seen “a continuous and steady increase in attempts by specifically the Chinese attempting to gain intellectual property.”

According to a CNN report, Mandiant and federal investigators told the hospital network that the hacking group has previously conducted corporate espionage to target information about medical devices. This time, however, the bounty was patient data.

Community Health Systems stated in a release: “Our organization believes the intruder was a foreign-based group out of China that was likely looking for intellectual property. The intruder used highly sophisticated methods to bypass security systems. The intruder has been eradicated and applications have been deployed to protect against future attacks.”

In his aforementioned webinar, Goslin, the CEO of Total Compliance Tracking, detailed examples of the value of intellectual property theft:

One of the stories that the FBI was bringing up was the Chinese were trying to get into a manufacturing facility to get a sample of a rinse solution for some type of glass manufacture. It was a coating for glass and they couldn’t figure out how they were doing it. So, the Chinese were trying to get a hold of this rinse solution in the manufacturing setting. …

There was a story of an organization that had spent some number of years developing a patent. They were just about to file it and found that they had been hacked by the Chinese. The Chinese filed for the patent. Because the organization’s entire business revolved around this work, they literally had to pay royalties to the Chinese just to use the invention they had developed themselves, which had been hacked out from under them.

The value of personal information is clear: Hackers can sell the information to those looking to steal identities. And hospital networks are becoming a hotbed for finding that information.

Michael “Mac” McMillan, CEO of security consulting firm CynergisTek, told Modern Healthcare that hospitals are “going to become a bigger and bigger target as the hacking community figures out it’s easier to hack a hospital than it is to hack a bank and you get the same information. I’m not sure healthcare is listening yet.”

McMillan told the website there has been a spike in hacking activity directed at hospitals this year:

“I know at least a half a dozen or so hacks against hospitals we work with where the data wasn’t transferred, but it still caused a lot of disruption,” McMillan said. “But it wasn’t a HIPAA issue, so it didn’t get reported.”


Given all the sloppy practices, should we give up on any notion that data once digitized can be private?

Yes, we should give up. The more rigid you make your cyber security policy, the more prone it is to banal violations. Remember the organization that makes you change your password every month? Especially the ones where the password has to be 10 characters and use mixed case, numbers and special characters. Wander around those organizations and look behind screens, under keyboards and in pencil drawers... and look for the post-its!


If we give up on the notion that there can be protection, then that frees us to think instead about what really should be digitized and what should not be.

Giving up the protection racket is going to be difficult for Enterprise IT. The organization, its mode of operation and all the technology infrastructure have evolved over the last 30 years to protect data: to build a firewall and keep everything on the inside from being invaded by people and systems on the outside.

So there is a pretty big paradigm that needs to be shifted. Yet the new Cyber Security function is faced with its greatest challenge yet: Big Data. Data is being exported to the cloud wholesale and blended with new data harvested from sensors, social media and open sources. Red lights are going on all over the place.

What is the new paradigm? One where the data can be free, yet individuals' privacy can be respected and organizations' data stores are not left vulnerable.

I'm looking for an answer, or maybe even a lead as to where we might start looking.

It does seem like the system is rigged.

Compliance requirements force companies to overspend on protection that doesn't actually protect.

It's security theater. They spend money and make the case that they made the effort.

Your idea -- to change the way we think about this -- is a good one.

Trying to develop this thought further... 

1. Privacy vs. Anonymity: privacy is where I know who you are but nothing about you; anonymity is where I know a lot about you but not who you are.

2. In the physical world, where only a fraction of information makes it onto the record, is held in our heads or on paper, is difficult to join up and can be lost, anonymity is possible.

3. In the digital world, however, where all information is stored on computers 'permanently' and is easy to join up, anonymity becomes very difficult to achieve.

4. Anonymization is the term used for removing the identity of the data owner. To do this effectively, a lot of 'clues' also have to be removed from the data: for example, in an organization with small sub-departments, say of 5 people, the department identity would have to go too.

5. De-Personalization is less stringent, perhaps employing algorithms that tokenize the identity in such a way that the tokenization can be reversed, allowing someone with the right security clearance to establish individual identity.

6. There is no legislation (I don't believe that the legal framework has been able to keep up with the technological one), so the industry needs a pragmatic and secure-enough solution.

7. I think that the model being employed in practice right now is a gigantic bureaucracy, with layers of policy and so many controls that getting anything done requires subversion (like the post-its on keyboards above).

8. The basic problem here seems to be: how do we allow those with the right intent sufficient access to the data to do their jobs (in the example above, a hospital needs access to individual patient records and the healthcare industry needs access to aggregated patient data), and how do we deny access to those with nefarious intent?

9. I believe that the beginning of a solution lies in HTTPA (“HTTP with Accountability”). Remote access to a Web server would be controlled much the way it is now, through passwords and encryption. But every time the server transmitted a piece of sensitive data, it would also send a description of the restrictions on the data’s use. And it would log the transaction, using only the URI, somewhere in a network of encrypted, special-purpose servers.
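To make point 4 concrete, here is a minimal sketch (in Python, with hypothetical field names) of removing a 'clue' like a small-department label: any quasi-identifier value shared by fewer than k records is suppressed, since tiny groups re-identify people.

```python
from collections import Counter

def anonymize(records, quasi_identifier, k=5):
    # Count how many records share each value of the quasi-identifier
    # (e.g. a department name), then suppress values whose group is
    # smaller than k -- a 5-person department would betray its members.
    counts = Counter(r[quasi_identifier] for r in records)
    return [
        {**r, quasi_identifier: r[quasi_identifier]
         if counts[r[quasi_identifier]] >= k else "*"}
        for r in records
    ]

# Hypothetical, already de-named records:
records = [
    {"department": "Oncology", "diagnosis": "A"},
    {"department": "Oncology", "diagnosis": "B"},
    {"department": "Rare Diseases", "diagnosis": "C"},  # tiny group
]
cleaned = anonymize(records, "department", k=2)
```

This is the crude version of what k-anonymity formalizes; real anonymization has to consider combinations of quasi-identifiers, not just one column.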
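Points 5 and 8 together suggest reversible tokenization gated by clearance. A minimal sketch, assuming a single in-process token vault and a hypothetical "authorized" clearance level (a real system would use an HSM or a hardened vault service):

```python
import hmac
import hashlib
import secrets

class TokenVault:
    """De-personalization via reversible tokenization: identities become
    opaque tokens; only callers with sufficient clearance may reverse them."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # keyed, so tokens aren't guessable
        self._vault = {}                     # token -> original identity

    def tokenize(self, identity):
        token = hmac.new(self._key, identity.encode(),
                         hashlib.sha256).hexdigest()[:16]
        self._vault[token] = identity
        return token

    def detokenize(self, token, clearance):
        # Hypothetical clearance check standing in for a real
        # authorization system.
        if clearance != "authorized":
            raise PermissionError("insufficient clearance")
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("patient-123")  # safe to share in aggregated data
```

The hospital in point 8 would hold detokenization rights for its own patients, while researchers would only ever see the tokens.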
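The HTTPA idea in point 9 can be sketched as well: the server returns the data wrapped in a machine-readable usage policy, and logs the transaction by URI only (never the content). Everything here (function name, policy strings, the in-memory log standing in for the network of encrypted audit servers) is an illustrative assumption, not the HTTPA spec.

```python
import json
import time

AUDIT_LOG = []  # stand-in for the network of encrypted, special-purpose servers

def serve_sensitive(uri, payload, requester):
    """Return sensitive data together with a description of the
    restrictions on its use, and record the access by URI only."""
    envelope = {
        "data": payload,
        "usage-restrictions": ["no-redistribution", "purpose:treatment"],
    }
    # The audit record deliberately contains the URI and requester,
    # but none of the payload itself.
    AUDIT_LOG.append({
        "uri": uri,
        "requester": requester,
        "time": time.time(),
    })
    return json.dumps(envelope)

response = serve_sensitive("https://example.org/records/123",
                           {"dob": "1970-01-01"}, requester="dr-smith")
```

Accountability replaces prevention: misuse is detectable after the fact by auditing the log against the declared restrictions.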

References: 

http://pandawhale.com/post/47376/guidelines-for-data-de-identification-or-anonymization

http://pandawhale.com/post/45865/httpa-new-web-technology-would-let-you-track-how-your-private-data-is-used-online  

http://pandawhale.com/post/46447/de-identification-re-identification-and-the-risks-therein

http://pandawhale.com/post/30822/how-to-use-23andme-without-giving-up-your-genetic-privacy

http://pandawhale.com/post/28720/reach-into-our-bodies-grab-the-genotype-reach-into-the-medical-system-and-we-grab-our-records-and-we-use-it-to-build-something-together

http://pandawhale.com/post/49472/accountable-information-usage-in-fusion-center-information-sharing-environments

Well said. Adding policy to web servers is something that makes a lot of sense.
