All Your Hospital Are Belong To Us.
Re-posted from intothecyberbreach.com, originally published on February 15, 2020.
This morning, I ran across a 2014 article on Wired.com explaining that hospital medical devices and other related gadgets (what we would today call IoT, or the “Internet of Things”) are shockingly easy to access via the wireless network and vulnerable to abuse by would-be hackers. For some reason, the article reminded me of an old meme from the early 2000s, hence the name of this post. I ended up down a bit of a Wired.com rabbit hole, which I figured I’d share with you.
Back in 2014, they reported on a study that found “drug infusion pumps–for delivering morphine drips, chemotherapy and antibiotics–that can be remotely manipulated to change the dosage doled out to patients; Bluetooth-enabled defibrillators that can be manipulated to deliver random shocks to a patient’s heart or prevent a medically needed shock from occurring; X-rays that can be accessed by outsiders lurking on a hospital’s network; temperature settings on refrigerators storing blood and drugs that can be reset, causing spoilage; and digital medical records that can be altered to cause physicians to misdiagnose, prescribe the wrong drugs or administer unwarranted care….” as well as discovering “they could blue-screen devices and restart or reboot them to wipe out the configuration settings, allowing an attacker to take critical equipment down during emergencies or crash all of the testing equipment in a lab and reset the configuration to factory settings.”
I assumed that, given the article was almost six years old, the security situation in hospitals would be markedly improved. My initial research has not borne that out exactly. By 2017, Wired was reporting that “Medical Devices are the Next Security Nightmare.” A little weird, if you ask me, since they identified the issue three years earlier, but I digress. Wired reported that while the FDA has begun providing guidance on cybersecurity concerns, a significant percentage of medical devices were running on outdated operating systems or technology no longer supported with security patches, and had already gotten through FDA approval and into common usage. Instances of Windows XP (which was released in 2001, almost 20 years ago) were found running on major hospital computers and connected to various devices (they cited an average of 10 to 15 connected devices per bed, with a large hospital having up to 5,000 beds). The FDA certainly has stepped up its cybersecurity game since 2017, and it offers great cybersecurity resources for the medical community here.
Fast forward to 2019, when Wired reported on a newly discovered vulnerability in devices that have been in use in hospitals for nearly 20 years. The problem, as put by one cybersecurity analyst, is that “once you identify what is vulnerable, how do you actually update these devices? Often the update mechanism is almost nonexistent or it’s such an analog process it’s almost like it’s with a screwdriver. It’s not something that can be done at scale. So I don’t know if it will ever be accomplished to update all of these machines.”
But it’s never enough to just identify the problem and throw our hands in the air. HIPAA has long required notification for security breaches of personally identifiable health information. But newer data privacy laws like NY SHIELD, CCPA and GDPR take data security a step further by expanding the definition of protected private information. For instance, NY SHIELD considers a username and password combination to be protected private information that businesses are required to safeguard. For all of the effort spent complying with HIPAA, healthcare organizations remain at risk of noncompliance (pronounced, “law enforcement”) with state data privacy laws.
So, the good news is that the FDA is aware of the issue, and there appears to be somewhat less of a “wild west” attitude towards IoT medical device security. The bad news is that 2020 is predicted to be a banner year for ransomware and medical device cybersecurity concerns generally.
6 Ways To Beef Up Your Email Security.
Re-posted from intothecyberbreach.com, originally published on February 10, 2020.
I have been setting up a Microsoft Exchange email server for a new project of mine that is related to my data privacy law practice. I hope to make an announcement sometime this week as to what the new project will look like. It’s all good stuff.
As I’m setting up my email server, I’m thinking about what steps I need to take to increase my own cybersecurity. It is obvious that I need to practice what I preach. So, here are some of the things that I’ve been implementing for my own business email:
Backups. Backups. Backups. Backups. Backups.
Everyone understands the concept of backing up your data. But backups are not a “set it and forget it” type of thing. What is being backed up? How? Where are the backups stored? How do you go about retrieving them? Do your backups work? Are they secure themselves? There is a small section of hell where lost souls are punished by having their computers AND their backups destroyed in the same catastrophe (by a fire, obviously). Don’t be one of those souls.
I’ve been burned by not backing up very recent personal data. (See what I did there?) If you save anything at all on your computer’s hard drive, you are likely guilty of this. It is really frustrating. Especially when you know better. I put this at the top of the list, because if you haven’t recently backed up all of your data, then you are setting yourself up for heartbreak. Frequency is an issue, retrieval is an issue, and all of this stuff needs to be tested.
You can run a cloud drive like OneDrive, Dropbox, or Google Drive, all of which have some security features built in. Make sure you understand what you are signing up for, though. Free services are often free because your data will be mined. You might not care about that. As a lawyer, I have to care about that, because allowing Google to read my attorney-client communications can defeat the attorney-client privilege. So, there are pitfalls, and you need to know them. I don’t do any legal work on my free Gmail. The paid-for G Suite is more private, but I can’t say I’m very trusting of Google generally. So, I went more traditional with Microsoft.
You may want to also keep an external hard drive handy that is solely for the purpose of routine backups. I find these are the easiest to retrieve, but that is a two-way street. You need to make sure the backup drive itself is password protected and secure. Out of view is ideal. If anyone can just plug it in and access your files, all you are doing is creating more security holes. The thing with any of these methods is that it’s easy to forget your email data. Fortunately, Exchange backs up emails automatically, and they should be accessible by anyone with admin privileges. Do yourself a favor though and attempt a retrieval now, BEFORE you actually have to.
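If you’d rather not take your backups on faith, a small script can check them for you. Here’s a minimal Python sketch (the paths are hypothetical, and real backup tools do much more) that copies a folder and then verifies every file against a SHA-256 hash of the original:

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_and_verify(source: Path, dest: Path) -> None:
    """Copy a directory tree, then prove each copy matches its original."""
    shutil.copytree(source, dest, dirs_exist_ok=True)
    for original in source.rglob("*"):
        if original.is_file():
            copy = dest / original.relative_to(source)
            if sha256_of(original) != sha256_of(copy):
                raise RuntimeError(f"Backup verification failed: {copy}")
    print(f"Verified backup of {source} -> {dest}")

# Hypothetical usage:
# backup_and_verify(Path("~/Documents").expanduser(), Path("/mnt/backup/Documents"))
```

The point isn’t this particular script; it’s that verification should be automatic, because a backup you’ve never tested is a backup you don’t actually have.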
Multi-factor authentication
This should be at or near the top of your list (behind backups, if those are not already being done regularly). For the uninitiated, multi-factor authentication (also known as two-step authentication) is a process you may have noticed on a lot of online applications that ask you to verify your login by also entering a code on your cell phone. Online banking was one of the early adopters. It can be done in a variety of ways. It may be a text message, an app, or a follow-up confirmation email that you have to click on. These are all examples of multi-factor authentication.
Gmail and Microsoft Exchange both have a two-step authentication setting. When turned on, you will get a security code sent to your phone or a backup email. Both systems also have authenticator apps that can streamline this process a bit.
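For the curious, those authenticator apps aren’t magic. Most implement the standard TOTP algorithm (RFC 6238): your phone and the server share a secret, and both compute a short code from it every 30 seconds. Here’s a minimal Python sketch using only the standard library (the secret below is a throwaway example, not anyone’s real key):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password: HMAC-SHA1 over the
    current 30-second time step, dynamically truncated to 6 digits."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Phone and server share the secret, so both compute the same code
# for the same 30-second window -- that's the whole trick.
print(totp("JBSWY3DPEHPK3PXP"))  # example secret from common documentation
```

Knowing the code changes every 30 seconds also explains why a stolen password alone gets an attacker nowhere.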
I was slow to adopt this at first, because it can slow down your workflow. But actually, when I was forced to use it through various applications, I got used to it pretty quickly, and found it to be a good way to keep out the bad guys. If you are a law firm, or another repository of someone’s personal information, especially in email form, this is a really cheap and easy way to prevent a breach. Remember, under the new law in New York, even if data is only “accessed”, it can trigger a data breach event that must be reported to law enforcement and affected consumers. Even if the data is accessed inadvertently and non-maliciously, the law requires a five-year documentation period. Two-step authentication can help prevent those very simple incidents.
Practice “Least Privilege”
Biggie Smalls might have said it best: “Number three, never trust nobody.” If he were alive today, surely he would advocate for “Least Privilege” and “Zero Trust” security frameworks.
“Least Privilege” means that every user gets the fewest privileges they can possibly get by with to perform their function. So, rather than giving your username all admin privileges, you would have a user for your day-to-day work, and then a separate admin user only for performing administrative functions. Consider whether every employee should have access to every file. In Microsoft Office Suite, you can set up multiple admins that are limited in the things that they can do (one would be an Exchange admin, another would be able to change user passwords, etc.). The more you are able to separate these roles, the better.
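To make the idea concrete, here’s a toy Python sketch of a deny-by-default permission check. The roles and actions are invented for illustration; real systems like Microsoft’s admin roles are far richer, but the principle is the same:

```python
from enum import Enum, auto

class Role(Enum):
    USER = auto()
    EXCHANGE_ADMIN = auto()   # manages mailboxes, nothing else
    PASSWORD_ADMIN = auto()   # resets passwords, nothing else

# Each role maps to the *minimum* set of actions it needs.
PERMISSIONS = {
    Role.USER: {"read_own_mail"},
    Role.EXCHANGE_ADMIN: {"read_own_mail", "manage_mailboxes"},
    Role.PASSWORD_ADMIN: {"read_own_mail", "reset_passwords"},
}

def authorize(role: Role, action: str) -> None:
    """Deny by default: anything not explicitly granted is refused."""
    if action not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role.name} may not {action}")

authorize(Role.PASSWORD_ADMIN, "reset_passwords")     # fine
# authorize(Role.PASSWORD_ADMIN, "manage_mailboxes")  # would raise PermissionError
```

Notice the compromise math: if the password admin’s account is phished, the attacker gets password resets, not the whole mail system.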
“Least Privilege” is related to the framework for “Zero Trust” which is, I’m sure, going to be in the running for one of the most popular catch-phrases/buzzwords in 2020. The concepts are related, yet distinct. What they share is the idea that just because a user has gained access inside your network, doesn’t mean they should be given the keys to the company car (metaphorically).
As a lawyer, my office could have clients, adversaries, vendors, employees, and lost visitors looking for another office, on any given day. Unfortunately, any one of those people may intend harm to my system, or may just be an accident waiting to happen. You have to verify each step of the way. One example of “zero trust” is to get rid of the idea that once you are connected to the network, you are somehow entitled to access the cloud. You aren’t, and you shouldn’t be. Further, being inside the cloud doesn’t entitle you to access the entire system.
Develop Basic Security Literacy Within Your Organization
When the Nigerian Prince comes knocking, don’t let him in. Most of us understand that on a basic level. But in business, the scams are more sophisticated. Recently, I’ve received a few emails purportedly from one of my co-workers asking when I will be in the office. Another colleague received a similar email from me, asking for their help with an emergent issue. Those emails set off red flags because my colleagues and I are peers, and the language of the email was clearly designed to invoke fear of one’s supervisor. But, with a different target, or a different sender, I suspect they would have gotten a response from someone in my organization. Of course, the reply doesn’t actually go to the person you are expecting; it goes elsewhere, and who knows what kind of information they can gather. It is limitless.
So, you need to train the users in your organization on the basics of information security. My rule of thumb is that if someone is sending me something in an email and claiming it is an emergency, I follow up with a phone call. You don’t have to explain to the person why you are calling, you can just say that you want to make sure you get it right the first time. You may find that the person you thought was emailing you has no idea about the email.
Another way this is done by hackers is to take over one email account and use it to gain information from other people. So the email address itself could even be legitimate, just not actually controlled by the person you expected. Imagine getting an email from your spouse that says something like “Hi Honey, I’m at the store and my debit card won’t work, can you send me your credit card number so I can try yours?” That’s a really simple scam, and all it requires is access to your email. If the hacker doesn’t change the password, they might even be accessing it without anyone’s knowledge.
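One cheap technical tripwire for these scams is to compare the From and Reply-To headers: if replies are routed to a different domain than the apparent sender, be suspicious. A quick Python sketch (the sample message below is fabricated):

```python
from email import message_from_string
from email.utils import parseaddr

def reply_goes_elsewhere(raw_message: str) -> bool:
    """Flag messages whose Reply-To domain differs from the From domain --
    a common sign that a reply will be siphoned off to an attacker."""
    msg = message_from_string(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", from_addr))
    from_domain = from_addr.rpartition("@")[2].lower()
    reply_domain = reply_addr.rpartition("@")[2].lower()
    return bool(reply_domain) and reply_domain != from_domain

sample = (
    "From: boss@example.com\n"
    "Reply-To: boss@evil.example\n"
    "Subject: Are you in the office?\n\n"
    "Need your help with something urgent."
)
print(reply_goes_elsewhere(sample))  # True -- worth a phone call
```

It won’t catch a fully compromised account (the headers there are genuine), which is exactly why the phone-call habit matters.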
There are a lot of anti-phishing, anti-scamming educational materials out there online. So, I’m not going to reinvent the wheel here. Just look into it, and make sure your team is trained on this stuff.
Physical Security Is A Necessary Part of Information Security
You can have all of the bells and whistles in regards to password usage, training of employees, and backups, but if someone can just find your phone in the park and access your email without some sort of passcode, then you aren’t secure. Likewise, if you are sticking random USB drives into your computer, then all of those passcodes aren’t going to help you.
Movies and television would have you believe that hacking looks a lot like The Matrix, with some trendy electronic music blasting in the background, and an exciting GUI with colorful lines of code streaming across the screen. Hacking can be a version of that. Although leather pants are far less popular in the hacking community than the Wachowskis would have you believe. More often, it’s just a person on a phone, asking the right questions, being friendly to receptionists, and charming their way into our hearts (and data). Be wise to what social engineering looks like. Remember that getting your purse snatched can constitute a data breach under many state laws, if you are holding electronic devices that contain other people’s personal information.
Password Management
My view on password management starts to make more sense once you’ve thought about physical security. A lot of companies are still having employees change their password every few months. I don’t advocate for that. For the last 20 years or so, I’ve held the view that a person who does not know their own password may be as dangerous to the system as a person who has a very weak password. Password managers have softened that view a little, but let me explain the thinking.
If you are unable to reuse your passwords, and must change them every few months, the chances that an employee is going to write down their password and stick it to their monitor become much higher. In that instance, the organization went from very high security to a situation where the cleaning crew, all visitors, other co-workers, and all sorts of potential invaders can plainly see your password. Now, this may be less of an issue for you if you are practicing two-step authentication. But, if your work computer is considered a “trusted” computer, you may still end up in a bad spot. I would rather that people have a password they can memorize and not have to write down, than have them use random digits and letters that have to be written out and left on their desk.
That said, reusing the same password repeatedly across systems is still considered poor practice, and remembering all of those passwords for all of those different accounts gets pretty challenging. For those reasons, you may want to consider using a password manager. Yes. They CAN be hacked too. But the data tends to be encrypted, and I still think the risk is lower than doing it as described above. I’ve seen good recommendations on 1Password ($3 per month) and Bitwarden (free for personal, $5/month for business). I’m going with Bitwarden, but there are a lot of good options out there.
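If it helps demystify them: the core trick of a password manager is stretching one strong master password into an encryption key for the vault, using a deliberately slow, salted derivation. A minimal sketch with Python’s standard-library PBKDF2 (the iteration count is a plausible modern choice, not any particular vendor’s setting):

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes = None):
    """Stretch one memorable master password into a 256-bit key.
    The salt and high iteration count make offline guessing expensive."""
    salt = salt or os.urandom(16)  # random salt, stored alongside the vault
    key = hashlib.pbkdf2_hmac(
        "sha256",
        master_password.encode("utf-8"),
        salt,
        600_000,  # deliberately slow: each guess costs real CPU time
    )
    return key, salt

key, salt = derive_vault_key("correct horse battery staple")
print(len(key), key.hex()[:16])  # 32-byte key; the vault is encrypted with it
```

So even if a vault file is stolen, the attacker still has to grind through that slow derivation for every guess at your one master password.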
Conclusion
As Biggie once said, “follow these rules and you’ll have mad bread to break up.” The last recommendation I can offer is to get a professional to look at your system if you are able. You don’t have to have an IT department to have a secure system. Most parts of the U.S. have plenty of IT firms that would be glad to come to your home or office and figure out what you can do to be more secure. These are just the starting points and steps that I’m taking. There is always more to do, and evolution is part of the security game.
Last, none of what I’ve said here ensures compliance with any data privacy laws. This is technical advice from my personal experience. So, don’t take it as legal advice for what you need to do in your state, and don’t take it as a definitive version of everything that an IT pro would suggest either.
Stay safe out there!
California Legislature Makes Last Ditch Amendments to CCPA
Re-posted from intothecyberbreach.com, originally published on September 17, 2019.
The CCPA, which remains set to go into effect on January 1, 2020, was amended with no fewer than five Assembly bills last week. The amendments, covered below, are awaiting Governor Newsom’s signature, as is Assembly Bill 1202, which requires data brokers to register with the California Attorney General. The Governor has until October 13, 2019 to sign. These were passed as separate bills, so it is possible the Governor could accept some and reject others. However, given the dominance of Democrats in both the legislature and the governor’s office, the Governor is expected to sign.
Change is always exciting, but perhaps the biggest news out of this round of amendments is that no additional amendments to the CCPA are expected before it goes into effect on January 1st. So, while I used to tell friends at cocktail parties that the CCPA could be delayed until the spring, I now tell them that life as they know it will end on New Year’s Day. Yeah, I don’t get invited to much anymore.
For the most part, I view these as positive changes. I’ve heard them described as “pro-business” amendments, which is fine. I see them more as an effort to make the CCPA easier to understand, and a steering away from definitions that confuse more than clarify. A brief description of each pending bill is below.
Assembly Bill 25 exempts for a period of one year any “Personal information that is collected by a business about a natural person in the course of the natural person acting as a job applicant to, an employee of, owner of, director of, officer of, medical staff member of, or contractor of that business to the extent that the natural person’s personal information is collected and used by the business solely within the context of the natural person’s role or former role as a job applicant to, an employee of, owner of, director of, officer of, medical staff member of, or a contractor of that business.” According to the Assembly’s comments on the bill, “the one-year sunset provides the Legislature time to more broadly consider what privacy protections should apply in these particular employment-based contexts, and whether to repeal, revise, and/or make these exemptions permanent in whole or in part moving forward.”
Assembly Bill 1146 removes the right to opt out from vehicle information or ownership information retained or shared between a new motor vehicle dealer and the vehicle’s manufacturer, if the information is shared for the purpose of effectuating, or in anticipation of effectuating, a vehicle repair covered by a vehicle warranty or a recall, as specified. The bill defines terms for that purpose. The bill also exempts from the consumer’s right to request deletion any personal information the business needs to retain in order to fulfill the terms of a written warranty or a product recall conducted in accordance with federal law.
Assembly Bill 874 defines “publicly available” to mean information that is lawfully made available via government records. The bill also clarifies that personal information does not include deidentified or aggregate consumer information, and that personal information includes information that is “reasonably capable” of being associated with a particular consumer or household, as opposed to merely “capable” of being associated. This distinction is not so much a policy change as a recognition that the CCPA as originally written was over-inclusive of data that could in theory, possibly, maybe, someday, be used to identify an individual.
Assembly Bill 1202 requires data brokers to register with, and provide certain information to, the Attorney General. The bill would define a data broker as a business that knowingly collects and sells to third parties the personal information of a consumer with whom the business does not have a direct relationship, subject to specified exceptions. The bill would require the Attorney General to make the information provided by data brokers accessible on its website and would make data brokers that fail to register subject to injunction and liability for civil penalties, fees, and costs in an action brought by the Attorney General, with any recovery to be deposited in the Consumer Privacy Fund.
Assembly Bill 1355 makes several changes. First, it refines the existing FCRA exemption to ensure it applies to any activity involving the collection, maintenance, disclosure, sale, communication, or use of any personal information regarding a consumer’s credit worthiness, credit standing, credit capacity, character, general reputation, personal characteristics, or mode of living by a consumer reporting agency, to the extent such activity is subject to the FCRA, with some exceptions. Second, the CCPA generally will not apply to business-to-business communications and transactions for a period of one year. Third, the CCPA does not require businesses to collect or retain information they would not collect in the ordinary course of business, or to retain it for longer than they would otherwise retain such information in the ordinary course of business. Fourth, data that is encrypted or redacted is not covered by the CCPA’s data breach protocol. Lastly, the Attorney General is given authority to promulgate regulations to effectuate certain aspects of the CCPA.
Assembly Bill 1564 provides that a business that operates exclusively online and has a direct relationship with a consumer from whom it collects personal information is only required to provide an email address for submitting requests for information required to be disclosed, as specified.
CCPA Begins, NY SHIELD Explained.
Re-posted from intothecyberbreach.com, originally published on January 28, 2020.
As of January 1, 2020, the California Consumer Privacy Act (CCPA) went into effect. I’m going to dig a little deeper into how that seems to be playing out later, but the purpose of this post is really just to mark the occasion. And also, to point out that the second installment of NY SHIELD is coming into effect in March 2020. For both of these acts, you don’t have to be located in California or New York for the law to apply to you. A lot of companies are starting to realize this, and are scrambling. The good news is that if you are a larger company that is CCPA compliant, pre-incident, you are on the right track for New York too, although the requirements are not equivalent. National companies (i.e., all internet-based businesses) will have to do separate compliance for both. But, if you are New York-centric, you are probably breathing a sigh of relief that the NY SHIELD Act does not create a private cause of action against companies for data breach (unlike California). However, there are still pitfalls aplenty. Specifically, on October 23, 2019, the Stop Hacks and Improve Electronic Data Security Act (the SHIELD Act) imposed data breach notification requirements on any business that owns or licenses certain private information of New York residents, regardless of whether it conducts business in New York. In March 2020, the second part of the Act requires businesses to develop, implement and maintain a data security program to protect private information.
We haven’t focused on NY SHIELD as much (and I suspect that will change soon), so, just to re-cap, New York’s new data privacy law:
Expands When A “Breach” Is Triggered
Under the old rules, for a security incident to be called a “breach” and thus trigger the state’s breach notification requirements, there had to be an “unauthorized acquisition or acquisition without valid authorization of computerized data that compromises the security, confidentiality, or integrity of personal information maintained by a business.” In English, that means that someone (or something) must “acquire” the data. Typically, that means they must access the data AND come away with it. In other words, under the old law, a breach was not triggered by merely hacking into a server and seeing that there are a number of files containing personal information. The hacker would also have to take the files, or open them and record them somehow. The hacker would have to walk away with some ability to recall or review those files, whether by copying them or some other means. That was then. This is now.
The NY SHIELD Act expands the definition of a breach by including ANY unauthorized access. That means if our hypothetical hacker gains access to your server, but never copies the personal information in the server, this would still count as a breach and would require breach notification.
Expands The Meaning of “Private Information”
The NY SHIELD Act expands the definition of private information to include a combination of any personal identifier and any account, credit, or debit card number, if it is possible to use that number to access an individual’s financial account without any additional identifying information; OR a combination of any personal identifier and certain biometric information; OR a username and password combination that would give access to an online account.
All of this creates interesting possibilities for what could be considered private information. For instance, your username and password to even the most useless online accounts could trigger a breach notification requirement. Further, under the biometric category, this could include your name and a picture of your face, since a picture of your face is, after all, “data generated by electronic measurements of an individual’s unique physical characteristics, such as a fingerprint, voice print, retina or iris image, or other unique physical representation or digital representation of biometric data which are used to authenticate an individual’s identity.” What feature is better at authenticating your identity than your face? Suddenly, unauthorized access to the school yearbook committee’s folder may become a notifiable incident. I’m going to stay out of the debate as to whether this is a good idea or a bad one, but most people can agree that it represents a significant expansion.
Creates New Obligations For Keeping Private Information Secure
The NY SHIELD Act creates an obligation to maintain “reasonable” safeguards starting in March 2020. The word “reasonable” is a favorite among attorneys, especially attorneys who bill by the hour. Here, mid-size and large companies have specific milestones they must meet. For smaller companies, reasonableness will typically be judged in terms of what precautions have been taken. Basic stuff like multi-factor authentication should be a given. Implementing a company-wide security protocol, and identifying key players to run said program, are also going to count towards “reasonable”-ness. I would argue anything that shows proactive steps and preparedness will go a long way.
So, one question that the business community may have is what happens if they do not take reasonable safeguards? That can get complicated. True, the great state of New York may impose fines of up to $5,000 per violation. But, the consequences might be worse than that. For instance, would your insurance policy still cover you if you haven’t complied with the law? Suddenly that litigation or that business loss may be uninsured. That sting is going to exceed $5,000 very quickly.
As I alluded to, the Act takes size into account. Businesses with fewer than 50 employees, less than $3 million in gross revenues in each of the last three fiscal years, or less than $5 million in year-end total assets must maintain “reasonable administrative, technical and physical safeguards that are appropriate for the size and complexity of the small business, the nature and scope of the small business’s activities, and the sensitivity of the personal information the small business collects from or about consumers.” Businesses larger than that must implement a data security program containing the administrative, technical and physical safeguards enumerated in the law (see below). Thus, while the CCPA has been getting all of the attention, the NY SHIELD Act puts a number of requirements on companies that are too small for the CCPA to cover. The enumerated reasonableness requirements are as follows:
According to § 899-bb(2)(b)(ii)(A), organizations can implement reasonable administrative safeguards by:
Designating one or more employees to coordinate the security program
Identifying reasonably foreseeable internal and external risks
Assessing the sufficiency of safeguards in place to control the identified risk
Training and managing employees in the security program practices and procedures
Verifying that the selection of service providers can maintain appropriate safeguards and requiring those safeguards by contract
Adjusting the security program in light of business changes or new circumstances
According to § 899-bb(2)(b)(ii)(B), organizations can establish reasonable technical safeguards by:
Assessing risks in network and software design
Assessing risks in information processing, transmission, and storage
Detecting, preventing, and responding to attacks or system failures
Regularly testing and monitoring the effectiveness of key controls, systems, and procedures
According to § 899-bb(2)(b)(ii)(C), organizations can create reasonable physical safeguards by:
Assessing risks of information storage and disposal
Detecting, preventing, and responding to intrusions
Protecting against unauthorized access to or use of private information during or after the collection, transportation, and destruction or disposal of the information
Disposing of private information within a reasonable amount of time after it is no longer needed for business purposes by erasing electronic media so that the information cannot be read or reconstructed
Expands Breach Notification Requirements
When a New York resident’s personal information is accessed without authorization, under the NY SHIELD Act, the affected New York residents, the New York Attorney General, the New York Department of State, and the New York State Police must be notified of the breach. If the breach affects more than 500 New Yorkers, you will have 10 days from the date the breach is discovered to notify the Attorney General, and the fines for noncompliance have increased as well. Further, if over 5,000 residents were affected by the breach, notification must also be made to consumer reporting agencies.
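For illustration only, here’s that notification fan-out expressed as a tiny Python function. It’s a simplification of the statute, and emphatically not legal advice:

```python
def shield_notification_plan(affected_ny_residents: int) -> dict:
    """Rough sketch of the NY SHIELD breach-notification fan-out above."""
    plan = {
        "notify": [
            "affected New York residents",
            "New York Attorney General",
            "New York Department of State",
            "New York State Police",
        ],
    }
    if affected_ny_residents > 500:
        plan["attorney_general_deadline_days"] = 10  # from discovery of the breach
    if affected_ny_residents > 5000:
        plan["notify"].append("consumer reporting agencies")
    return plan

print(shield_notification_plan(6000))  # triggers every tier
```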
Takeaways
I think the takeaway from where we sit right now is that the NY SHIELD Act is about to cause a scramble similar to the one we are seeing in California. New York companies are going to need to get compliant, or risk enforcement. Is the Attorney General likely to start prosecuting violations on March 1st? Doubtful. But the writing is on the wall. And unlike the CCPA, even the little guys are affected.
Are you a startup trying to figure out how to get NY SHIELD compliant? (Hint: do you think your investors might ask about this?) Now is the time to get with the program. Reach out to me at jlong@long.law if you want to schedule a free consultation on data privacy compliance.
Are You Liable for the Data Shenanigans of Others? (Part 2 – Controllers and Processors)
Re-posted from intothecyberbreach.com, originally published on September 5, 2019.
In Part 1 of this post, we laid out a framework for the legal landscape for American businesses and their potential exposure to state and international data privacy law, very broadly. If you missed it, and you could use a 30,000-foot view, it’s here.
Now that you know the basics behind the GDPR and CCPA, what responsibilities or liabilities do you have in regard to entities that process data they got from you? Let’s walk through a scenario to illustrate what I mean…
Say you’ve got a website that attempts to develop a mailing list or subscriber list. It’s a site about designer sneakers, and it notifies the customers on that list whenever certain sneakers that are difficult to locate are available for sale in their size. The website is owned by you, belongs to you, is run by you. But… somewhere, you’ve got this little snippet of code on the site, which allows users to subscribe to your page and enter their name, address, email address, phone, and shoe size. Now let’s say that all of that information about your client gets stored on a website that does NOT belong to you. So, think of a separate contact management application that you have linked into your site, but that is run by another company.
Under the GDPR framework, you would be what is called a “controller” of the data your customer has shared, and the company that handles your contact management system would be the “processor” of that data.
The GDPR defines a “controller” as an entity that “determines the purposes and means of the processing of personal data.” A “processor” is defined as an entity that “processes personal data on behalf of the controller.” So, why do we care?
According to the GDPR, the data controller is primarily responsible for getting consent to collect data (that will be a topic for another day), revoking that consent, as well as notification of any breaches of that data. This is true even though it may be the processor that actually possesses the data.
Regarding revocation… Recall that under the GDPR, you have a right to be forgotten. Anyone located in the European Union can contact an American company and demand that any data about them be removed. Pretty neato for them! Total headache for you!
So, back to our example: You’ve got this lit sneaker shop online, you have a vendor that collects your customer contact information and their shoe size, and someone contacts you and demands to be forgotten. As the data controller, it would be your responsibility to contact the processor and have them remove that data. It might be as easy as going onto your admin page on the processor’s website and removing the information. But… data storage is rarely that easy, and it is more likely that you will have to check the processor’s privacy agreement with you (ahem, which you read ahead of time…. right?) and possibly even contact a human to discuss how the data processor handles GDPR rights revocation requests. As a data processor, your vendor then has to comply with that request to remove the data for you. Simple, right? No, of course not. But, if you’ve followed along this far, you’re already a few steps ahead of the game here. Might as well see the ending, no?
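For developers in the audience, the controller-to-processor handoff often looks something like the sketch below. Everything here is hypothetical: the endpoint, the token, and the query parameter are all invented, because your processor’s actual API (if it even has one) will differ, and the deletion terms live in your data processing agreement, not in this code:

```python
import json
import urllib.request
from urllib.parse import quote

# Hypothetical processor API -- check your vendor's real documentation.
PROCESSOR_API = "https://api.example-processor.com/v1/contacts"
API_TOKEN = "..."  # stored in a secrets manager, never hard-coded in real life

def forward_erasure_request(customer_email: str) -> int:
    """As controller, relay a 'right to be forgotten' request to the
    processor holding the data, and keep a record of the outcome."""
    req = urllib.request.Request(
        f"{PROCESSOR_API}?email={quote(customer_email)}",
        method="DELETE",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        # Log it: GDPR compliance is as much about records as about deletion.
        print(json.dumps({"erased": customer_email, "status": resp.status}))
        return resp.status
```

The part that matters isn’t the HTTP call; it’s the paper trail showing you forwarded the request and the processor confirmed it.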
As you know, as a loyal reader of this blog, and as a person who has ever shopped at a big box retail store, when a breach happens, the company that was breached has to provide notification to the people whose personal information has been affected… So, what happens when the data that came through your website and into the vaults of your third-party vendor gets hacked? How about if that third-party vendor did something supremely stupid to enable the breach?
Article 28 of the GDPR requires that “where processing is to be carried out on behalf of a controller, the controller shall use only processors providing sufficient guarantees to implement appropriate technical and organisational measures in such a manner that processing will meet the requirements of this Regulation and ensure the protection of the rights of the data subject.” Thus, not only do Europeans spell “organizational” wrong, they also require controllers to only use processors that are GDPR compliant. Thus, if your vendor is doing something supremely stupid, and you had reason to know about it ahead of time, you’ve violated GDPR. Congrats!
This issue recently came up for Delta Air Lines, which initiated a lawsuit against its third-party vendor [24]7.ai Inc, after the vendor experienced a data breach in 2017. Delta alleges that its vendor had weak security protocols and did not notify them of the breach for five months. Of course, Delta, itself, has been fending off lawsuits from its own customers as a result of this breach.
Under the GDPR, “[t]he processor shall notify the controller without undue delay after becoming aware of a personal data breach.” Delta alleges that the data pirate that hacked into its vendor’s system had unauthorized access to names, addresses, and payment card information of approximately 800,000 to 825,000 Delta customers in the U.S. The vendor failed to notify Delta of the breach until one month after Delta renewed its contract with the vendor. Further, the vendor contract required that Delta be notified of any breach. The basis of Delta’s suit is a breach of contract and negligence, not GDPR compliance, per se. Be that as it may, many, if not most, vendor contracts from major players nowadays are going to include terms or requirements that the vendor be GDPR compliant, however they choose to define that. That’s a solid endorsement that you should consider similar requirements in your own vendor contracts.
Back stateside, the new California Consumer Privacy Act (“CCPA”), discussed in Part 1, creates a private cause of action for consumers affected by a data breach when that breach is caused by a business’s violation of the duty to implement and maintain reasonable security procedures. Naturally, the plaintiffs’ bar will contend that all breaches are caused by a failure to implement reasonable security procedures. How does that affect our example though?
The CCPA is one avenue where your business may face liability when your vendor fails to secure the data that you have provided it. Fortunately, the CCPA only applies to certain businesses. If you are still in startup mode (< $25 million in revenue), chances are the CCPA excludes you, unless you are in the business of buying or selling personal information. While the CCPA does not use terms like “controllers” and “processors”, the concept is a useful one that many teams are already familiar with. Your vendors will attempt to opt out of any liability to you for a breach; meanwhile, the CCPA squarely puts the onus on you to ensure the safety of the data being used. And the CCPA allows not only for state enforcement, but also, through its private cause of action, for private individuals to sue the pants off of you.
So what is the takeaway?
First, make sure you understand what data is being collected by any vendors that you are working with. Remember, vendors can be anything from the applications that you add to your website to certain backend service providers. Given today’s expanded view of private personal data, it is likely they are collecting something that would trigger GDPR or CCPA.
Second, read your terms and conditions with your vendors. If you are using systems like MailChimp, Google Analytics, or any number of other plug-in style apps in your website to gather data, you are unlikely to be in a position to negotiate with them. But at least know what you are signing up for, and decide whether it’s worth the risk.
Third, if you are negotiating with vendors, don’t accept their denial of liability for their own data shenanigans. They shouldn’t become your cybersecurity insurance policy, but they shouldn’t be creating unchecked liability for you either.
Fourth, consider using GDPR compliance efforts as an opportunity to work with your vendors to be clear about what they are doing, why, how the data is being protected, and what they are required to do in the event things go sideways. Remember that the purpose of a contract is to prevent litigation.
Last, no legal blog post would be complete without an admonition to ask a lawyer and get actual legal advice.
Are The New York Department of Health’s New Breach Notification Requirements for Healthcare Providers Actually Authorized?
Re-posted from intothecyberbreach.com, originally published on August 22, 2019.
Early last week, the New York Department of Health issued a letter to administrators and technology officers in the healthcare industry in New York, which states, essentially, that the NYDOH has implemented a new notification protocol in the event of a data breach at a healthcare facility.
The letter states “We recognize that providers must contact various other agencies in this type of event, such as local law enforcement. The Department, in collaboration with partner agencies, has been able to provide significant assistance to providers in recent cyber security events. Our timely awareness of this type of event enhances our ability to help mitigate the impact of the event and protect our healthcare system and the public health.”
The new protocol is directed to hospitals, nursing homes, diagnostic and treatment centers, adult care facilities, home health agencies, hospices, and licensed home care services agencies.
The letter goes on to note that “Providers should ensure they make any other notifications regarding emergency events that are already required under statute or regulation. For example, a cyber security event should be reported to the New York Patient Occurrence Reporting and Tracking System (NYPORTS), under Detail Code 932.”
Now, I might be accused of being late to the party on this one, since the letter appears to have gone out August 12th. But, surprisingly, I’ve seen almost no coverage of this change, other than here. So, I can probably be forgiven for being slow on the uptake with this one.
I reached out to the DOH regarding what authority or regulation they are relying on to implement this new requirement. Again, I may be slow on the uptake.
According to N.Y. Gen. Bus. Law § 899-aa, “In the event that any New York residents are to be notified, the person or business shall notify the state attorney general, the department of state and the division of state police as to the timing, content and distribution of the notices and approximate number of affected persons. Such notice shall be made without delaying notice to affected New York residents.” So, that doesn’t say anything about notifying the DOH. Meanwhile, HIPAA is a federal law, and it requires notification to federal agencies of a breach. New York Public Health Law – PBH § 2805-l deals with reporting to DOH of adverse events, but its definition does not appear to contemplate data breaches as adverse events either.
Title 10 of the New York Codes, Rules and Regulations, section 405.8, calls for adverse event reporting of “(13) disasters or other emergency situations external to the hospital environment which affect hospital operations”. This seems overly broad if it is meant to apply to a data breach. Before I stick my foot any further in my mouth, I will admit that I am not a healthcare expert, and maybe there is a clear black-letter law that authorizes this new protocol. I just haven’t seen what that is yet. I’ll put a pin in this one and see if I can find out.
The reason why I bring it up is twofold:
It seems fishy to me that the letter does not cite any statute or regulation on which it relies for the change in authority. That is somewhat unusual in my experience. That is potentially an issue because if you’ve got agencies that are changing requirements willy-nilly, it creates a nearly impossible set of rules to follow (which are likely to be unfair, and not fully vetted in the comment process). It’s going to spell disaster for some poor healthcare facility, and many of those are small businesses.
The letter seems to suggest some not-so-great advice as well, as it appears to suggest that your first call should be to the DOH. Yes, it acknowledges that you have other legal obligations as well (and this is where maybe it falls under the adverse event reporting requirement), but it ignores a really major issue. So, without further ado, here is some FREE LEGAL ADVICE in the event that your healthcare facility has a data breach: Before you make statements to a public agency about your breach, talk to a lawyer who specializes in this stuff. Doesn’t have to be me, but talk to someone.
Would definitely like to hear from friends and colleagues on this one.
Update: August 30, 2019. It’s been about a week, and I have not heard back from the Department of Health on my question about the basis of the direction in the letter.
Are You Liable for the Data Shenanigans of Others? (Part 1 – A Brief Introduction to the Legal Framework)
Re-posted from intothecyberbreach.com, originally published on August 10, 2019.
Yes. The end. Ok, it’s not quite that cut and dried, but it is somewhat of a scary proposition. I had initially envisioned discussing vendor management in the context of “controllers” and “processors”, when it occurred to me that a lot of people don’t really know what that means, or even what the GDPR is and whether they need to worry about it. The actual answer is, of course, it depends.
The question came up recently for me in a conversation with a couple of attorneys who had gone to a data privacy event for the purpose of figuring out what they had to do themselves to become GDPR compliant. They were shocked to learn that as a “controller” of data, they were potentially liable for the actions, or inactions, of the “processor”. This is all Greek to the solo practitioner working personal injury or family law cases, who just wants to know whether Google Analytics is going to cause them to be fined by the European Union. But, I think it is an opportunity to break down what some of these concepts mean, and to say something regarding vendor management under the GDPR in Part 2. I guess we’ve got our first “two part series” on our hands.
Bear in mind, these are really broad strokes, and depending on your own situation, may be an oversimplification. As always, I recommend you retain counsel for the purpose of establishing and maintaining compliance with data privacy laws.
Before we jump right into the GDPR, it is helpful to start at the beginning. I am going to assume for starters that your business is located in the United States. It may seem like, in the privacy world, all anyone ever talks about is the GDPR and the CCPA. For the uninitiated, it is not even clear what those acronyms mean.
The GDPR stands for General Data Protection Regulation. It is a set of regulations established by the European Commission on behalf of the European Union to update existing data privacy laws in recognition of changing technology and social norms which have put people’s personal information at risk.
The CCPA is the California Consumer Privacy Act, which is a state law enacted by the state of California to ensure that California residents have a right to know what companies are doing with their personal information, as well as to ensure that companies collecting that data are taking all reasonable steps to act responsibly with the information they gather.
The reason data privacy conversations so often refer to E.U. and California law is that these are two of the strictest rulesets in the world regarding how to handle data collected from individuals. Further, because of the nature of the internet, the relevant query here isn’t necessarily where your business is located; it is where your business is reaching others. For instance, if you are a New York-based business but you have customers on your website from Germany, the GDPR applies to you. The query is as much about the location of the consumer as it is about the location of the business. And in an interconnected world, you have far less control over who your customers really are than you would in a brick-and-mortar operation.
Today, 48 of the 50 states in the U.S. have data privacy laws. And all 50 states have some form of consumer protection and tort system. Further, there are laws and regulations regarding other contexts in which personal information can arise (for instance, the Health Insurance Portability and Accountability Act, i.e., HIPAA, or the Securities and Exchange Commission’s regulations about reporting financial information). I am going to put HIPAA and SEC regulations aside for now, to avoid muddying the waters. For the sake of context, if you are handling patient medical information, you need to be HIPAA compliant, which is a separate universe of rules, and if you are a publicly traded company, you need to follow SEC regulations. The majority of issues related to data breach in the SEC context have to do with making public, misleading statements about the nature of the breach. If you are dealing with data about children, that’s a different set of rules as well.
Just as importantly, you have to be aware of your local state laws to see what anomalies may apply to you. That said, as a VERY general rule of thumb, i.e., not-legal-advice and not true in all cases, if you are in compliance with the GDPR and the CCPA, you are very likely in compliance with other states’ privacy laws. However, these laws do not apply to every business.
The CCPA is set to go into effect in January 2020, although there are rumors this will be extended by several months. The law is targeted at businesses with “annual gross revenues in excess of twenty-five million dollars ($25,000,000)”, or who “annually buys, receives for the business’ commercial purposes, sells, or shares for commercial purposes, alone or in combination, the personal information of 50,000 or more consumers, households, or devices”, or “derives 50 percent or more of its annual revenues from selling consumers’ personal information”. If you don’t meet those criteria, the CCPA does not apply to you. However, my advice would be that even if the CCPA does not apply, you should consider the feasibility of building CCPA compliance into your business process, for several reasons. First, other states are changing their privacy laws all the time and may encompass some or all of these measures in the near future. Second, it allows you to grow your business to fit the CCPA, rather than have to take remedial (pronounced: e-x-p-e-n-s-i-v-e) measures in the future. Third, the CCPA offers a set of “best practices” that are likely to keep you out of trouble in most state jurisdictions.
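If you prefer your thresholds as code, here is that applicability test as a small Python function. It’s a simplification of the statutory language quoted above, and it is not legal advice:

```python
def ccpa_likely_applies(annual_gross_revenue: float,
                        consumer_records_handled: int,
                        revenue_share_from_selling_pi: float) -> bool:
    """Rough sketch of the CCPA's three alternative thresholds."""
    return (
        annual_gross_revenue > 25_000_000          # revenue prong
        or consumer_records_handled >= 50_000      # data-volume prong
        or revenue_share_from_selling_pi >= 0.50   # data-broker prong
    )

# A pre-revenue startup holding 10,000 customer records:
print(ccpa_likely_applies(0, 10_000, 0.0))  # False -- likely outside the CCPA
```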
The language of the CCPA also raises the interesting question of what a business is, but I hope to address that at some point in a future post. If you are unsure whether your outfit is a “business”, go talk to a lawyer. If you can afford to hire said lawyer, chances are good that what you are doing is a business.
The GDPR casts a far more ambitious net. First, dispel the idea that the law does not apply to you because you are a U.S.-based business. That’s so 2017! The GDPR applies even to U.S.-based businesses that never set foot in the E.U., if they find themselves handling the “personal data” of E.U. citizens, or even people located in the E.U. (cue puzzling questions about whether we’ll see a cottage industry of “data privacy tourism” for Americans who want to fly to France, eat their fill of cheese, and claim E.U.-style privacy rights before returning home).
How “personal data” is defined must be discussed before we can decide whether the GDPR applies, and here the boldness of the law really comes into focus. “Personal data” can be any information relating to an identified or identifiable natural person, including name, ID number, location, online identifier, physical traits, physiological, genetic, mental, economic, cultural or social data about that person. That also covers IP address, cookies, social media posts, contact lists, and mobile device data. Probably also includes dessert recipes and favorite color. So… yeah, we are talking about nearly anything.
It is very hard to collect any information about your customers or website visitors without triggering the protections of the GDPR. The crazy thing here is that it is unclear what personal information will be identifiable using future technologies, which could also be problematic. Is asking “how are you?” over the telephone a GDPR-triggering event? Maybe…
If we are still wondering whether the GDPR applies to you, I think we can distill it down a little further. Do you have a website? Does the website have any cookies? Does the website keep a log of IP addresses visiting your site? Do you use a third-party service to contact your customers or track website visitors (like Google Analytics or MailChimp)? If your answers tend to be yes, then the GDPR is likely to apply. Now, if you have fewer than 250 employees, not only are you my target audience for this blog, but the GDPR recognizes that you are a smaller data risk than the larger big corps out in the world. The rules apply to you, but the requirements are somewhat different.
I am going to have to write about what these laws actually require in a separate post (I will put a link here once I’ve done that). But that last question about third-party vendors is really the issue that I wanted to try to tackle in this series. What are your responsibilities when a company that you use to track your website traffic, or to manage your contact list, experiences a data breach of your data?
To answer that question, we have to understand and discuss the GDPR’s concepts of “data controller” (the people with a website) and “data processor” (the people who are given third-party access to information about that website). As you can see, this is a big topic, and you’ll have to wait for Part 2 to really dive in (or, you can discover this post months later, and by then I hope to have a link to Part 2 right here on the page).
Stay Tuned!
What’s In Your Wallet?
Re-posted from intothecyberbreach.com, originally published on July 30, 2019.
Yesterday, Capital One announced a breathtaking breach of 100 million accounts within its system, thus compromising the private data of a significant percentage of Americans in one single incident. The scope of the breach is comparable to the Equifax breach in 2017, which Equifax had acknowledged affected 143 million Americans.
The question of “how can this keep happening?” should, by now, be replaced with “when is the next big one?” Is this even a “big one?” Breaches like the one announced by Capital One yesterday are the new normal.
From the consumer side, people who think their private information may have been breached can take a few steps toward solace. First, obviously, check your credit card statement and make sure there aren’t any goofy charges on there. Second, if you want to take it a step farther, you can freeze your credit reports, which would prevent anyone from opening a new credit card account with your information. Third, change your passwords.
The issue of compromised passwords is all the more alarming when you consider that most people still use the same password on all of their accounts. So, when your password is finally compromised, it is essentially compromised everywhere. Here’s a hint: chances are good that by the time you find out about a breach, it’s way too late. The name of the game nowadays is detection, not prevention. This means there is some acknowledgement from the establishment that preventing breaches is a losing battle, and many security groups are re-focusing their attention on just making sure that the breaches that do occur actually get noticed.
So, what does the Capital One breach tell us from the perspective of a data controller? See above. One takeaway here is that if Capital One, Equifax, Marriott, Yahoo!, Myspace (when was the last time I said those two in one sentence? 2003?), Under Armour, Uber, Target, Home Depot, and countless others have been unable to thwart 100% of all data breach attempts, what makes you think you can?
One common misconception on that theme is that it’s only the big boys that are being targeted. That couldn’t be farther from the truth though. According to the Verizon 2019 Data Breach Investigations Report, 43% of cyber-attacks target small businesses.
The takeaway here is that if you don’t already, you need to have a plan for what happens when it happens.
New York State Of Mind.
Re-posted from intothecyberbreach.com, originally published on July 29, 2019.
This last Thursday, July 25, 2019, lawmakers in New York enacted the cleverly named “Stop Hacks and Improve Electronic Data Security Act” (the SHIELD Act), Senate Bill 5575. While Nick Fury could not be reached for comment, I was able to cobble together some details from the new law…
Following the lead of many other states, the SHIELD Act updates New York’s data breach laws by expanding the definition of private information, expanding notification requirements, and requiring that individuals and businesses handling sensitive information implement “reasonable” data security measures. Perhaps most significantly, these requirements will now apply to any person or business that owns or licenses “private information” of a New York resident.
According to the Governor’s office in New York, “[t]his legislation imposes stronger obligations on businesses handling private data of customers, regarding security and proper notification of breaches by:
Broadening the scope of information covered under the notification law to include biometric information and email addresses with their corresponding passwords or security questions and answers;
Updating the notification requirements and procedures that companies and state entities must follow when there has been a breach of private information;
Extending the notification requirement to any person or entity with private information of a New York resident, not just those who conduct business in New York State;
Expanding the definition of a data breach to include unauthorized access to private information; and
Creating reasonable data security requirements tailored to the size of a business.
This bill will take effect 240 days after becoming law.” https://www.governor.ny.gov/news/governor-cuomo-signs-legislation-protecting-new-yorkers-against-data-security-breaches
The new law does not expand the definition of private information to include passport number, employer ID number or financial transaction devices, all of which are included in California’s new privacy regime.
While New York’s previous data breach statute, passed in 2005, required notification only when private information had actually been acquired by an unauthorized party, the SHIELD Act now requires such notice whenever such data has merely been accessed. Not surprisingly, this significantly expands the number of incidents that will require breach notification. Notification must occur within “the most expedient time possible and without unreasonable delay,” unless it can be verified that the access was “inadvertent” and “will not likely result in misuse.”
The Act’s requirement for “reasonable” security measures is an interesting one. It states, “[a]ny person or business that owns or licenses computerized data which includes private information of a resident of New York shall develop, implement and maintain reasonable safeguards to protect the security, confidentiality and integrity of the private information…”. The Act even offers examples of what “reasonable” could mean: employee training, regular risk assessments, regular testing of key controls and procedures, and disposal of private information when it is no longer needed. The risk here is that, while the list is not meant to be exhaustive, a court could end up applying those examples rather rigidly, as de facto requirements. I’ll be following that issue once we see some guidance from the courts.
Notably, the SHIELD Act does not create a private right of action for an entity’s failure to comply with the law. While this may warrant a sigh of relief from companies within the technology space, we will have to continue to look out for The New York Privacy Act, which is under consideration by the New York State Senate at this time. The New York Privacy Act would indeed create such a private right of action. If passed, it would represent the most aggressive data protection policy in the United States, if not the world.
It Was Just A Mission Statement…
Re-posted from intothecyberbreach.com, originally published on July 28, 2019.
Just what the world needs. Another blog.
Let me start that over. What are we doing here?
This first post will be my mission statement, if you will. My statement of intentions.
So, who is this blog for?
It’s mainly directed at entrepreneurs, technologists, business owners, executives, in-house counsel, or really anyone trying to figure out: 1) how do I prevent the data in my possession from being compromised or stolen; 2) what do I need to do if it has been compromised; and 3) how can I protect myself and my company from liability in the event of a breach? I will be covering these things from the legal angle, but there will be actionable information relevant to your approach to technology as well.
And who am I?
I have a relatively unusual background for a lawyer. (Cue Liam Neeson explaining my “particular set of skills.”) I started my adult life by dropping out of college in 2000 to join the new technology revolution. Back then, you could get a job writing code just by reading a few books and having the gumption to ask.
I started my first tech job in Newark, NJ at NJIT’s business incubator in the late 90s. My best friend was working at a tech startup there, writing software for one of the world’s first online travel booking engines. For those of you born in this century: before projects like this, people booked vacations either by driving to a travel agent’s office or by picking up the phone. My friends were changing that. And they were making way more money doing it than I was going to make digging ditches or painting fences.
So, in an effort to get what was intended to be merely a summer job, I showed up at their office and begged for work. The boss asked me, “What do you know how to do? Can you write code? Ever use SQL? Unix? Do you know any Perl?” “No, but I can learn really fast,” I said. He wasn’t impressed and ignored me the rest of the day. They were too small and busy to have the sense to kick me out of their single-room incubator office.
There was an energy there that can only be found in a new startup, and I absorbed everything, like a sock in a puddle. I sat in their office reading coding manuals all day. There were very few websites that taught programming back then, but they existed, and I sought them out. I started with HTML and a little JavaScript, and it didn’t take long before I was piping into grep. (Don’t ask.)
I hung around for a few days, asking for a job each day, and having a sense, deep down, that if they just hired me, I’d be great. I read and learned and waited for the job that I knew I would get.
After those first few days, as I sat around in their office, some data entry task for a client’s website came up that no one wanted to do. It involved making it so that clicking on certain parts of an image brought you to different links (i.e., travel agent locations). I eagerly volunteered. It required almost no skill, just effort. I did it for free. It took me all day (in hindsight, probably a 20-minute job). At the end of the day, my new mentor said, “Well, if you are going to work here, I guess we’ll have to pay you.” I was in!
I dove in, learned as much as I could, and was (in my mind) on track to make my first million before 21. I dropped out of college shortly thereafter to go full time. We were doing cutting-edge stuff, and I was in the middle of it. I worked long hours, and it hardly ever felt like work. Our little company grew from a few people to ten. After hours, I wrote more code at night on my own time, eventually creating a task management system that used some of the prototypical aspects of social media, which I sold to our company in exchange for a stake in ownership. We were on our way!
*bubble pop*
Then it was gone, in a couple of years. It all happened so fast. I went to see my doctor for a checkup one day and my insurance had lapsed. A few months later, my paycheck bounced. I felt like the wind had been knocked out of me. When my lease came up, instead of renewing, I lived in my truck for a few weeks and began to re-group. (To outside observers, re-grouping looks a lot like moping during the day and partying at night.) It took me a long time to understand what happened, and even longer to come to terms with it.
One thing led to another: I wrote code freelance in my living room for a number of years to get through college, and decided I would go to law school to pursue my original path from before my affair with the startup world. I loved law school, and I avoided anything tech like the plague. I think part of it was that it hurt too much. Besides, anytime I told a prospective internship about my tech experience, they asked me to work on their website while the other interns got to do policy research or watch oral arguments in court. I felt like I couldn’t escape. I stopped telling people I knew how to write code, and I graduated law school to become a trial lawyer. That was 9 years ago, and the world has changed. People don’t need me to make them a website anymore; they need me to help them keep their data secure and stay out of trouble if they get breached.
You’ve probably already gotten one of those letters explaining that your private information has been compromised by a major retailer. You might have seen even more in the news. Companies that find themselves in the position of having been breached need someone who understands the technology, understands the rules governing breach responses, and can handle any litigation that may arise out of the breach. This isn’t just about big-box retailers anymore. In many states, anyone who handles private information (or has a third-party vendor that does so) could be liable either for mishandling that information or for failing to report and notify in the event of a breach.
So, that’s what this blog is about. I am a seasoned litigator and business attorney in a mid-sized law firm with offices across the country, and am admitted in New York, California and New Jersey. I live in upstate New York. I have seen the inside of a server, and I have seen the inside of a courtroom. The law is changing fast, and almost all of the states now require a complex response in the event of a company having its private data accessed inappropriately (i.e., a data breach). Not surprisingly, I offer these services (as well as other more traditional litigation and corporate law representation). You can contact me if you find yourself needing counsel regarding a data breach. But, my hope is that this blog is useful to you whether you become my client or not.
That said, let me throw this disclaimer out there, because it really needs to be made clear (to protect us both): NOTHING IN THIS BLOG IS LEGAL ADVICE. UNLESS WE HAVE A RETAINER AGREEMENT, I AM NOT YOUR LAWYER. IF YOU ARE RESPONSIBLE FOR A COMPANY WHOSE PRIVATE DATA HAS BEEN BREACHED, YOU SHOULD CONTACT A LAWYER IMMEDIATELY IN ORDER TO COMPLY WITH THE NUMEROUS STATE AND INTERNATIONAL DATA BREACH NOTIFICATION REQUIREMENTS. There are real consequences to being breached and not complying with notification laws. There are also real consequences to over-notification (and we’ll talk about that here too). Ideally, this is something that you should work out ahead of time, so you have someone to help you right away. In some cases, you really have very little time, a matter of a few days, not weeks.
Anyway, yes, I will be writing about emerging issues in the cybersecurity world, notable data breaches, and developments in the law. But more importantly, I want this blog to provide actionable information, and I intend to do it in as human a fashion as possible. This isn’t a stuffy generic presentation of “what you need to know.” I’m going to write about what’s new in the cybersecurity world, but I might also write about why movies showing hackers hacking are mostly nonsense. I might also write about why Terminator is an amazing piece of art.
So read the blog. If you have questions, or just want to riff on these issues, get in touch. If you have complaints, keep those to yourself. Good luck navigating this crazy world!