CCPA Begins, NY SHIELD Explained.
Re-posted from intothecyberbreach.com, originally published on January 28, 2020.
As of January 1, 2020, the California Consumer Privacy Act (CCPA) went into effect. I’m going to dig a little deeper into how that seems to be playing out later, but the purpose of this post is really just to mark the occasion, and to point out that the second installment of NY SHIELD comes into effect in March 2020. For both of these acts, you don’t have to be located in California or New York for the law to apply to you. A lot of companies are starting to realize this, and are scrambling. The good news is that if you are a larger company that is CCPA compliant, pre-incident, you are on the right track for New York too, although the requirements for the two are not equivalent. National companies (i.e., all internet-based businesses) will have to do separate compliance work for each. But if you are New York-centric, you are probably breathing a sigh of relief that the NY SHIELD Act does not create a private cause of action against companies for data breach (unlike California). However, there are still pitfalls aplenty. Specifically, on October 23, 2019, the Stop Hacks and Improve Electronic Data Security Act (the SHIELD Act) imposed data breach notification requirements on any business that owns or licenses certain private information of New York residents, regardless of whether it conducts business in New York. In March 2020, the second part of the Act will require businesses to develop, implement, and maintain a data security program to protect private information.
We haven’t focused on NY SHIELD as much (and I suspect that will change soon), so, just to recap, New York’s new data privacy law:
Expands When A “Breach” Is Triggered
Under the old rules, for a security incident to qualify as a “breach” and thus trigger the state’s breach notification requirements, there had to be an “unauthorized acquisition or acquisition without valid authorization of computerized data that compromises the security, confidentiality, or integrity of personal information maintained by a business.” In English, that means that someone (or something) must “acquire” the data. Typically, that means they must access the data AND come away with it. In other words, under the old law, a breach was not triggered by merely hacking into a server and seeing that there are a number of files containing personal information. The hacker would also have to take the files, or open them and record them somehow. The hacker would have to walk away with some ability to recall or review those files, whether by copying them or some other means. That was then. This is now.
The NY SHIELD Act expands the definition of a breach by including ANY unauthorized access. That means if our hypothetical hacker gains access to your server, but never copies the personal information in the server, this would still count as a breach and would require breach notification.
Expands The Meaning of “Private Information”
The NY SHIELD Act expands the definition of private information to include: (1) a combination of any personal identifier and any account, credit, or debit card number, if that number can be used to access an individual’s financial account without any additional identifying information; (2) a combination of any personal identifier and certain biometric information; or (3) a username and password combination that would give access to an online account.
All of this creates interesting possibilities for what could be considered private information. For instance, your username and password to even the most useless online accounts could trigger a breach notification requirement. Further, under the biometric category, this could include your name and a picture of your face, since a picture of your face is, after all, “data generated by electronic measurements of an individual’s unique physical characteristics, such as a fingerprint, voice print, retina or iris image, or other unique physical representation or digital representation of biometric data which are used to authenticate an individual’s identity.” What feature is better at authenticating your identity than your face? Suddenly, unauthorized access to the school yearbook committee’s folder may become a notifiable incident. I’m going to stay out of the debate as to whether this is a good idea or a bad one, but most people can agree that it represents a significant expansion.
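For the programmers in the audience, the expanded definition can be sketched as a rough decision rule. This is purely illustrative (and emphatically not legal advice); the field names below are my own shorthand, not statutory language:

```python
# Toy sketch of the expanded NY SHIELD "private information" definition.
# Field names are illustrative shorthand, not terms from the statute.

PERSONAL_IDENTIFIERS = {"name", "number", "other_identifier"}

def is_private_information(fields: set) -> bool:
    """Return True if a combination of data fields plausibly falls under
    the expanded definition summarized in this post."""
    has_identifier = bool(fields & PERSONAL_IDENTIFIERS)
    # (1) Identifier + account/card number usable without more info
    if has_identifier and "standalone_account_number" in fields:
        return True
    # (2) Identifier + biometric data (fingerprint, voice print, face image...)
    if has_identifier and "biometric" in fields:
        return True
    # (3) Username/email + password granting access to an online account
    if "username" in fields and "password" in fields:
        return True
    return False
```

Note how branch (3) triggers with no traditional identifier at all, which is exactly why those "useless online accounts" matter.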
Creates New Obligations For Keeping Private Information Secure
The NY SHIELD Act creates an obligation to maintain “reasonable” safeguards starting in March 2020. The word “reasonable” is a favorite among attorneys, especially attorneys who bill by the hour. Here, mid-size and large companies have specific milestones they must meet. For smaller companies, reasonability will typically be judged in terms of what precautions have been taken. Basic stuff like multi-factor authentication should be a given. Implementing a company-wide security protocol, and identifying key players to run said program, are also going to count towards “reasonable”-ness. I would argue anything that shows proactive steps and preparedness will go a long way.
So, one question that the business community may have is: what happens if they fail to maintain reasonable safeguards? That can get complicated. True, the great state of New York may impose fines of up to $5,000 per violation. But the consequences might be worse than that. For instance, would your insurance policy still cover you if you haven’t complied with the law? Suddenly that litigation or that business loss may be uninsured. That sting is going to exceed $5,000 very quickly.
As I alluded to, the Act takes size into account. Businesses with fewer than 50 employees, less than $3 million in gross revenues in each of the last three fiscal years, or less than $5 million in year-end total assets must maintain “reasonable administrative, technical and physical safeguards that are appropriate for the size and complexity of the small business, the nature and scope of the small business’s activities, and the sensitivity of the personal information the small business collects from or about consumers.” Businesses larger than that must implement a data security program containing the administrative, technical and physical safeguards enumerated in the law (see below). Thus, while the CCPA has been getting all of the attention, the NY SHIELD Act puts a number of requirements on companies that are too small for the CCPA to cover. The enumerated reasonableness requirements are as follows:
According to § 899-bb(2)(b)(ii)(A), organizations can implement reasonable administrative safeguards by:
Designating one or more employees to coordinate the security program
Identifying reasonably foreseeable internal and external risks
Assessing the sufficiency of safeguards in place to control the identified risk
Training and managing employees in the security program practices and procedures
Verifying that the selection of service providers can maintain appropriate safeguards and requiring those safeguards by contract
Adjusting the security program in light of business changes or new circumstances
According to § 899-bb(2)(b)(ii)(B), organizations can establish reasonable technical safeguards by:
Assessing risks in network and software design
Assessing risks in information processing, transmission, and storage
Detecting, preventing, and responding to attacks or system failures
Regularly testing and monitoring the effectiveness of key controls, systems, and procedures
According to § 899-bb(2)(b)(ii)(C), organizations can create reasonable physical safeguards by:
Assessing risks of information storage and disposal
Detecting, preventing, and responding to intrusions
Protecting against unauthorized access to or use of private information during or after the collection, transportation, and destruction or disposal of the information
Disposing of private information within a reasonable amount of time after it is no longer needed for business purposes by erasing electronic media so that the information cannot be read or reconstructed
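To make the size thresholds concrete, here is a toy sketch of the small-business test described above. It illustrates the thresholds as I have summarized them; the statutory text controls, and this is not legal advice:

```python
# Illustrative sketch (not legal advice) of the NY SHIELD small-business
# thresholds: fewer than 50 employees, OR under $3M gross revenue in each
# of the last three fiscal years, OR under $5M in year-end total assets.

def is_shield_small_business(employees: int,
                             revenues_last_3_years: list,
                             year_end_total_assets: float) -> bool:
    """True if any one of the three size tests is met. Small businesses
    still owe 'reasonable' safeguards scaled to their size; larger ones
    must implement the enumerated safeguards program."""
    return (
        employees < 50
        or all(r < 3_000_000 for r in revenues_last_3_years)
        or year_end_total_assets < 5_000_000
    )
```

Note the "or": meeting any single threshold lands you in the scaled-down, flexible tier rather than the enumerated-program tier.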
Expands Breach Notification Requirements
When a New York resident’s personal information is accessed without authorization, under the NY SHIELD Act, the affected New York residents, the New York Attorney General, the New York Department of State, and the New York State Police must be notified of the breach. If the breach affects more than 500 New Yorkers, you have 10 days from the date the breach is discovered to notify the Attorney General, and the fines for noncompliance have increased as well. Further, if over 5,000 residents were affected by the breach, notification must also be made to consumer reporting agencies.
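The notification logic above can be sketched in a few lines. Again, this is a toy illustration of my summary, not a compliance tool or legal advice:

```python
# Toy sketch (not legal advice) of who gets notified under NY SHIELD,
# per the summary in this post, based on the number of affected residents.

def shield_notification_plan(affected_ny_residents: int) -> dict:
    plan = {
        "notify": [
            "affected NY residents",
            "NY Attorney General",
            "NY Department of State",
            "NY State Police",
        ],
        # Per the post: breaches affecting more than 500 New Yorkers carry
        # a 10-day window from discovery to notify the Attorney General.
        "ag_notice_deadline_days": 10 if affected_ny_residents > 500 else None,
    }
    if affected_ny_residents > 5000:
        plan["notify"].append("consumer reporting agencies")
    return plan
```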
Takeaways
I think the takeaway from where we sit right now is that the NY SHIELD Act is about to cause a scramble similar to the one we are seeing in California. New York companies are going to need to get compliant, or risk enforcement. Is the Attorney General likely to start prosecuting violations on March 1st? Doubtful. But the writing is on the wall. And unlike the CCPA, even the little guys are affected.
Are you a startup trying to figure out how to get NY SHIELD compliant? (Hint: do you think your investors might ask about this?) Now is the time to get with the program. Reach out to me at jlong@long.law if you want to schedule a free consultation on data privacy compliance.
Are You Liable for the Data Shenanigans of Others? (Part 2 – Controllers and Processors)
Re-posted from intothecyberbreach.com, originally published on September 5, 2019.
In Part 1 of this post, we laid out a framework for the legal landscape for American businesses and their potential exposure to state and international data privacy law, very broadly. If you missed it, and you could use a 30,000 foot view, it’s here.
Now that you know the basics behind the GDPR and CCPA, what responsibilities or liabilities do you have with regard to entities that process data they received from you? Let’s walk through a scenario to illustrate what I mean…
Say you’ve got a website that attempts to develop a mailing list or subscriber list. It’s a site about designer sneakers, and it notifies your customers on that list whenever certain sneakers that are difficult to locate are available for sale in their size. The website is owned by you, belongs to you, is run by you. But… somewhere, you’ve got this little snippet of code on the site, which allows users to subscribe to your page and enter their name, address, email address, phone, and shoe size. Now let’s say that all of that information about your client gets stored on a website that does NOT belong to you. So, think of a separate contact management application that you have linked into your site, but that is run by another company.
Under the GDPR framework, you would be what is called a “controller” of the data your customer has shared, and the company that handles your contact management system would be the “processor” of that data.
The GDPR defines a “controller” as an entity that “determines the purposes and means of the processing of personal data.” A “processor” is defined as an entity that “processes personal data on behalf of the controller.” So, why do we care?
According to the GDPR, the data controller is primarily responsible for getting consent to collect data (that will be a topic for another day), revoking that consent, as well as notification of any breaches of that data. This is true even though it may be the processor that actually possesses the data.
Regarding revocation… Recall that under the GDPR, you have a right to be forgotten. Anyone located in the European Union can contact an American company and demand that any data about them be removed. Pretty neato for them! Total headache for you!
So, back to our example: You’ve got this lit sneaker shop online, you have a vendor that collects your customer contact information and their shoe size, and someone contacts you and demands to be forgotten. As the data controller, it would be your responsibility to contact the processor and have them remove that data. It might be as easy as going onto your admin page on the processor’s website and removing the information. But… data storage is rarely that easy, and it is more likely that you will have to check the processor’s privacy agreement with you (ahem, which you read ahead of time…. right?) and possibly even contact a human to discuss how the data processor handles GDPR rights revocation requests. As a data processor, your vendor then has to comply with that request to remove the data for you. Simple, right? No, of course not. But, if you’ve followed along this far, you’re already a few steps ahead of the game here. Might as well see the ending, no?
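For the technically inclined, the controller-to-processor erasure flow can be sketched like this. The Processor class below is a made-up, in-memory stand-in; no real vendor exposes this exact API, and in practice "delete" may mean an admin page, a support ticket, or a contractual request:

```python
# Toy sketch of the controller/processor "right to be forgotten" flow.
# Processor here is a hypothetical stand-in for a third-party vendor
# holding customer data on the controller's behalf.

class Processor:
    """In-memory stand-in for a vendor like a contact management service."""
    def __init__(self, name):
        self.name = name
        self.records = {}  # email -> data held on the controller's behalf

    def store(self, email, data):
        self.records[email] = data

    def delete_subject_data(self, email):
        # Idempotent delete: a second request for the same subject is a no-op.
        self.records.pop(email, None)

def handle_erasure_request(email, processors):
    """Controller-side duty: forward an erasure request to every
    processor that may hold the data subject's information."""
    for p in processors:
        p.delete_subject_data(email)

# Usage sketch: the sneaker shop's contact manager holds a subscriber record,
# then the subscriber demands to be forgotten.
crm = Processor("contact-manager")
crm.store("sneakerhead@example.com", {"shoe_size": 11})
handle_erasure_request("sneakerhead@example.com", [crm])
```

The real-world friction is everything this sketch hides: finding every processor that holds the data, and verifying the deletion actually happened.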
As you know, as a loyal reader of this blog, and as a person who has ever shopped at a big box retail store, when a breach happens the company who was breached has to provide notification to the people whose personal information has been affected… So, what happens when the data that came through your website and into the vaults of your third-party vendor gets hacked into? How about if that third-party vendor did something supremely stupid to enable the breach?
Article 28 of the GDPR requires that “where processing is to be carried out on behalf of a controller, the controller shall use only processors providing sufficient guarantees to implement appropriate technical and organisational measures in such a manner that processing will meet the requirements of this Regulation and ensure the protection of the rights of the data subject.” Thus, not only do Europeans spell “organizational” wrong, they also require controllers to only use processors that are GDPR compliant. Thus, if your vendor is doing something supremely stupid, and you had reason to know about it ahead of time, you’ve violated GDPR. Congrats!
This issue recently came up for Delta Air Lines, which initiated a lawsuit against its third-party vendor [24]7.ai Inc. after the vendor experienced a data breach in 2017. Delta alleges that its vendor had weak security protocols and did not notify it of the breach for five months. Of course, Delta, itself, has been fending off lawsuits from its own customers as a result of this breach.
Under the GDPR, “[t]he processor shall notify the controller without undue delay after becoming aware of a personal data breach.” Delta alleges that the data pirate that hacked into its vendor’s system had unauthorized access to names, addresses, and payment card information of approximately 800,000 to 825,000 Delta customers in the U.S. The vendor failed to notify Delta of the breach until one month after Delta renewed its contract with the vendor. Further, the vendor contract required that Delta be notified of any breach. The basis of Delta’s suit is a breach of contract and negligence, not GDPR compliance, per se. Be that as it may, many, if not most, vendor contracts from major players nowadays are going to include terms or requirements that the vendor be GDPR compliant, however they choose to define that. That’s a solid endorsement that you should consider similar requirements in your own vendor contracts.
Back stateside, the new California Consumer Privacy Act (“CCPA”), discussed in Part 1, creates a private cause of action for consumers affected by data breach when that breach is caused by a business’s violation of the duty to implement and maintain reasonable security procedures. Naturally, the plaintiffs’ bar will contend that all breaches are caused by a failure to implement reasonable security procedures. How does that affect our example though?
The CCPA is one avenue where your business may face liability when your vendor fails to secure the data that you have provided it. Fortunately, the CCPA only applies to certain businesses. If you are still in startup mode (< $25 million in revenue), chances are the CCPA excludes you, unless you are in the business of buying or selling personal information. While the CCPA does not use terms like “controllers” and “processors”, the concept is a useful one that many teams are already familiar with. Your vendors will attempt to opt-out of any liability to you for a breach, meanwhile, the CCPA squarely puts the onus on you to ensure the safety of the data being used. The CCPA has a private cause of action, which allows not only for state enforcement, but also for private individuals to sue the pants off of you.
So what is the takeaway?
First, make sure you understand what data is being collected by any vendors that you are working with. Remember, vendors can be anything from the applications that you add to your website to certain backend service providers. Given today’s expanded view of private personal data, it is likely they are collecting something that would trigger GDPR or CCPA.
Second, read your terms and conditions with your vendors. If you are using systems like Mail Chimp, Google Analytics, or any number of other plug-in style apps in your website to gather data, you are unlikely to be in a position to negotiate with them. But at least know what you are signing up for, and decide whether it’s worth the risk.
Third, if you are negotiating with vendors, don’t accept their denial of liability for their own data shenanigans. They shouldn’t become your cybersecurity insurance policy, but they shouldn’t be creating unchecked liability for you either.
Fourth, consider using GDPR compliance efforts as an opportunity to work with your vendors to be clear about what they are doing, why, how the data is being protected, and what they are required to do in the event things go sideways. Remember that the purpose of a contract is to prevent litigation.
Last, no legal blog post would be complete without an admonition to ask a lawyer and get actual legal advice.
Are The New York Department of Health’s New Breach Notification Requirements for Healthcare Providers Actually Authorized?
Re-posted from intothecyberbreach.com, originally published on August 22, 2019.
Early last week, a letter from the New York Department of Health was issued to Administrators and Technology Officers in the Healthcare Industry in New York, which states, essentially, that the NYDOH has implemented a new notification protocol in the event of a data breach at a healthcare facility.
The letter states “We recognize that providers must contact various other agencies in this type of event, such as local law enforcement. The Department, in collaboration with partner agencies, has been able to provide significant assistance to providers in recent cyber security events. Our timely awareness of this type of event enhances our ability to help mitigate the impact of the event and protect our healthcare system and the public health.”
The new protocol is directed to hospitals, nursing homes, diagnostic and treatment centers, adult care facilities, home health agencies, hospices, and licensed home care services agencies.
The letter goes on to note that “Providers should ensure they make any other notifications regarding emergency events that are already required under statute or regulation. For example, a cyber security event should be reported to the New York Patient Occurrence Reporting and Tracking System (NYPORTS), under Detail Code 932.”
Now, I might be accused of being late to the party on this one, since the letter appears to have gone out August 12th. But, surprisingly, I’ve seen almost no coverage of this change, other than here. So, I can probably be forgiven for being slow on the uptake with this one.
I reached out to the DOH regarding what authority or regulation they are relying on to implement this new requirement. Again, I may be slow on the uptake.
According to N.Y. Gen. Bus. Law § 899-aa, “In the event that any New York residents are to be notified, the person or business shall notify the state attorney general, the department of state and the division of state police as to the timing, content and distribution of the notices and approximate number of affected persons. Such notice shall be made without delaying notice to affected New York residents.” So, that doesn’t say anything about notifying the DOH. Conversely, HIPAA is a federal law, and that requires notification to federal agencies of a breach. New York Public Health Law – PBH § 2805-l deals with reporting to DOH of adverse events, but its definition does not appear to contemplate data breaches as adverse events either.
Title 10 of the New York Codes, Rules and Regulations, section 405.8, calls for adverse event reporting of “(13) disasters or other emergency situations external to the hospital environment which affect hospital operations.” This seems overly broad if it is meant to apply to a data breach. Before I stick my foot any further in my mouth, I will admit that I am not a healthcare expert, and maybe there is a clear law that authorizes this new protocol. I just haven’t seen what that is yet. I’ll put a pin in this one and see if I can find out.
The reason why I bring it up is twofold:
It seems fishy to me that the letter does not cite any statute or regulation on which it relies for the change in authority. That is somewhat unusual in my experience. That is potentially an issue because if you’ve got agencies that are changing requirements willy-nilly, it creates a nearly impossible set of rules to follow (which are likely to be unfair, and not fully vetted in the comment process). It’s going to spell disaster for some poor healthcare facility, and many of those are small businesses.
The letter also seems to offer some not-so-great advice, as it appears to suggest that your first call should be to the DOH. Yes, it acknowledges that you have other legal obligations as well (and maybe this is where the adverse event reporting requirement comes in), but it ignores a really major issue. So, without further ado, here is some FREE LEGAL ADVICE in the event that your healthcare facility has a data breach: before you make statements to a public agency about your breach, talk to a lawyer who specializes in this stuff. It doesn’t have to be me, but talk to someone.
Would definitely like to hear from friends and colleagues on this one.
Update: August 30, 2019. It’s been about a week and I have not heard back from the Department of Health on my request regarding the basis of the direction in their letter.
Are You Liable for the Data Shenanigans of Others? (Part 1 – A Brief Introduction to the Legal Framework)
Re-posted from intothecyberbreach.com, originally published on August 10, 2019.
Yes. The end. Ok, it’s not quite that cut and dried, but it is somewhat of a scary proposition. I had initially envisioned discussing vendor management in the context of “controllers” and “processors”, when it occurred to me that a lot of people don’t really know what that means or even what the GDPR is and whether they need to worry about it. The actual answer is, of course, it depends.
The question came up recently for me in a conversation with a couple of attorneys who had gone to a data privacy event for the purpose of figuring out what they had to do themselves to become GDPR compliant. They were shocked to learn that as a “controller” of data, they were potentially liable for the actions, or inactions, of the “processor”. This is all Greek to the solo practitioner, working with personal injury or family law cases, who just wants to know whether Google Analytics is going to cause them to be fined by the European Union. But, I think it is an opportunity to break down what some of these concepts mean, and to say something regarding vendor management under the GDPR in Part 2. I guess we’ve got our first “two part series” on our hands.
Bear in mind, these are really broad strokes, and depending on your own situation, may be an oversimplification. As always, I recommend you retain counsel for the purpose of establishing and maintaining compliance with data privacy laws.
Before we jump right into the GDPR, it is helpful to start at the beginning. I am going to assume for starters that your business is located in the United States. It may seem like, in the privacy world, all anyone ever talks about is the GDPR and the CCPA. For the uninitiated, it is not even clear what those acronyms mean.
The GDPR stands for General Data Protection Regulation. It is a set of regulations established by the European Commission on behalf of the European Union to update existing data privacy laws in recognition of changing technology and social norms which have put people’s personal information at risk.
The CCPA is the California Consumer Privacy Act, which is a state law enacted by the state of California to ensure that California residents have a right to know what companies are doing with their personal information, as well as to ensure that companies collecting that data are taking all reasonable steps to act responsibly with the information they gather.
The reason data privacy conversations so often refer to E.U. and California law is that these are two of the strictest rulesets in the world regarding how to handle data collected from individuals. Further, because of the nature of the internet, the relevant query here isn’t necessarily where your business is located, it is where your business is reaching others. For instance, if you are a New York-based business but you have customers on your website from Germany, the GDPR applies to you. The query is as much about the location of the consumer as it is about the location of the business. And in an interconnected world you have far less control over who your customers really are than you would in a brick-and-mortar operation.
Today, 48 of the 50 states in the U.S. have data privacy laws. And all 50 states have some form of consumer protection and tort system. Further, there are laws and regulations regarding other contexts in which personal information can arise (for instance, the Health Insurance Portability and Accountability Act, i.e., HIPAA, or the Securities and Exchange Commission’s regulations about reporting financial information). I am going to put HIPAA and SEC regulations aside for now, to avoid muddying the waters. For the sake of context, if you are handling patient medical information, you need to be HIPAA compliant, which is a separate universe of rules, and if you are a publicly traded company, you need to follow SEC regulations. The majority of issues related to data breach in the SEC context have to do with making public, misleading statements about the nature of the breach. If you are dealing with data about children, that’s a different set of rules as well.
Just as importantly, you have to be aware of your local state laws to see what anomalies may apply to you. That said, as a VERY general rule of thumb, i.e., not-legal-advice and not true in all cases, if you are in compliance with the GDPR and the CCPA, you are very likely in compliance with other states’ privacy laws. However, these laws do not apply to every business.
The CCPA is set to go into effect in January 2020, although there are rumors this will be extended by several months. The law is targeted to businesses with “annual gross revenues in excess of twenty-five million dollars ($25,000,000)”, or who “annually buys, receives for the business’ commercial purposes, sells, or shares for commercial purposes, alone or in combination, the personal information of 50,000 or more consumers, households, or devices”, or “derives 50 percent or more of its annual revenues from selling consumers’ personal information”. If you don’t meet those criteria, the CCPA does not apply to you. However, my advice would be that even if the CCPA does not apply, you should consider the feasibility of building CCPA compliance into your business process, for several reasons. First is that other states are changing their privacy laws all the time and may encompass some or all of these measures in the near future. Second is that it allows you to grow your business to fit the CCPA, rather than have to take remedial (pronounced: e-x-p-e-n-s-i-v-e) measures in the future. Third, the CCPA offers a set of “best practices” that are likely to keep you out of trouble in most state jurisdictions.
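The applicability thresholds can be boiled down to a toy check. This is illustrative only; the statute’s actual language (quoted above) controls, and this is not legal advice:

```python
# Toy sketch (not legal advice) of the CCPA applicability thresholds
# quoted in this post. Any one criterion being met is enough.

def ccpa_applies(annual_gross_revenue: float,
                 pi_records_handled_per_year: int,
                 pct_revenue_from_selling_pi: float) -> bool:
    """True if a business meets any of the three statutory thresholds."""
    return (
        annual_gross_revenue > 25_000_000
        # personal information of 50,000+ consumers, households, or devices
        or pi_records_handled_per_year >= 50_000
        # 50%+ of annual revenue from selling personal information
        or pct_revenue_from_selling_pi >= 50.0
    )
```

Notice how low the 50,000-records threshold is in practice: a modest website logging device identifiers can clear it without any meaningful revenue.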
The language of the CCPA also raises the interesting question of what a business is, but I hope to address that at some point in a future post. If you are unsure whether your outfit is a “business”, go talk to a lawyer. If you can afford to hire said lawyer, chances are good that what you are doing is a business.
The GDPR casts a far more ambitious net. First, dispel the idea that the law does not apply to you because you are a U.S.-based business. That’s so 2017! The GDPR applies even to U.S.-based businesses that never set foot in the E.U., if they find themselves handling the “personal data” of E.U. citizens, or even people located in the E.U. (cue puzzling questions about whether we’ll see a cottage industry of “data privacy tourism” for Americans who want to fly to France, eat their fill of cheese, and claim E.U.-style privacy rights before returning home).
How “personal data” is defined must be discussed before we can decide whether the GDPR applies, and here the boldness of the law really comes into focus. “Personal data” can be any information relating to an identified or identifiable natural person, including name, ID number, location, online identifier, physical traits, physiological, genetic, mental, economic, cultural or social data about that person. That also covers IP address, cookies, social media posts, contact lists, and mobile device data. Probably also includes dessert recipes and favorite color. So… yeah, we are talking about nearly anything.
It is very hard to collect any information about your customers or website visitors without triggering the protections of the GDPR. The crazy thing here is that it is unclear what personal information will be identifiable from future technologies, which could also be problematic. Is asking “how are you?” over the telephone a GDPR triggerable event? Maybe…
If we are still wondering whether the GDPR applies to you, I think we can distill it down a little further. Do you have a website? Does the website have any cookies? Does the website keep a log of IP addresses visiting your site? Do you use a third-party service to contact your customers or track website visitors (like Google Analytics or MailChimp)? If your answers tend to be yes, then the GDPR is likely to apply. Now, if you have fewer than 250 employees, not only are you my target audience for this blog, but the GDPR recognizes that you are a smaller data risk than the larger big corps out in the world. The rules apply to you, but the requirements are somewhat different.
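That checklist can be boiled down to a toy screening function. The questions are my paraphrase of the checklist above, not a legal test, and a "yes" here means "go talk to a lawyer," not "you are definitely covered":

```python
# Toy screening sketch (not legal advice): if any of these common
# indicators is true, the GDPR is likely worth a closer look.

GDPR_INDICATORS = [
    "website sets cookies",
    "website logs visitor IP addresses",
    "uses third-party analytics or mailing services",
    "has customers or visitors located in the EU",
]

def gdpr_likely_applies(answers: dict) -> bool:
    """answers maps each indicator string to True/False;
    missing indicators are treated as False."""
    return any(answers.get(q, False) for q in GDPR_INDICATORS)
```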
I am going to have to write about what these laws actually require in a separate post (I will put a link here once I’ve done that). But that last question about third-party vendors is really the issue that I wanted to try to tackle in this series. What are your responsibilities when a company that you use to track your website traffic, or to manage your contact list, experiences a data breach of your data?
To answer that question, we have to understand and discuss the concepts laid out by the GDPR of “data controller” (the people with a website), and “data processors” (the people who are given third-party access to information about that website). As you can see, this is a big topic, and you’ll have to wait for Part 2 to really dive in (or, you can discover this post months later and by then I hope to have a link to Part 2 right here on the page).
Stay Tuned!
What’s In Your Wallet?
Re-posted from intothecyberbreach.com, originally published on July 30, 2019.
Yesterday, Capital One announced a breathtaking breach of 100 million accounts within its system, thus compromising the private data of a significant percentage of Americans in one single incident. The scope of the breach is comparable to the Equifax breach in 2017, which Equifax had acknowledged affected 143 million Americans.
The question of “how can this keep happening?” should, by now, be replaced with “when is the next big one?” Is this even a “big one?” Breaches like the one announced by Capital One yesterday are the new normal.
From the consumer side, people who think their private information may have been breached can take a few steps toward solace. One is, obviously, to check your credit card statement and make sure there aren’t any goofy charges on there. If you want to take it a step further, you can freeze your credit reports, which would prevent anyone from opening a new credit card account with your information. Third, change your passwords.
The issue of compromised passwords is all the more alarming when you consider that most people still use the same password on all of their accounts. So, when your password is finally compromised, it is essentially compromised everywhere. Here’s a hint: chances are good that by the time you find out about a breach, it’s way too late. The name of the game nowadays is detection, not prevention. This means there is some acknowledgement from the establishment that preventing breaches is a losing battle, and many security groups are re-focusing their attention on just making sure that the breaches that do occur actually get noticed.
So, what does the Capital One breach tell us from the perspective of a data controller? See above. One takeaway here is that if Capital One, Equifax, Marriott, Yahoo!, Myspace (when was the last time I said those two in one sentence? 2003?), Under Armour, Uber, Target, Home Depot, and countless others have been unable to thwart 100% of all data breach attempts, what makes you think you can?
One common misconception on that theme is that it’s only the big boys that are being targeted. That couldn’t be further from the truth, though. According to the Verizon 2019 Data Breach Investigations Report, 43% of breaches involved small-business victims.
The takeaway here is that if you don’t already, you need to have a plan for what happens when it happens.
New York State Of Mind.
Re-posted from intothecyberbreach.com, originally published on July 29, 2019.
This last Thursday, July 25, 2019, lawmakers in New York enacted the cleverly named “Stop Hacks and Improve Electronic Data Security Act” (the SHIELD Act), Senate Bill 5575. While Nick Fury could not be reached for comment, I was able to cobble together some details from the new law…
Following the lead of many other states, the SHIELD Act updates New York’s data breach laws by expanding the definition of private information, expanding notification requirements, and requiring that individuals and businesses handling sensitive information implement “reasonable” data security measures. Perhaps most significantly, these requirements will now apply to any person or business that owns or licenses “private information” of a New York resident.
According to the Governor’s office in New York, “[t]his legislation imposes stronger obligations on businesses handling private data of customers, regarding security and proper notification of breaches by:
Broadening the scope of information covered under the notification law to include biometric information and email addresses with their corresponding passwords or security questions and answers;
Updating the notification requirements and procedures that companies and state entities must follow when there has been a breach of private information;
Extending the notification requirement to any person or entity with private information of a New York resident, not just those who conduct business in New York State;
Expanding the definition of a data breach to include unauthorized access to private information; and
Creating reasonable data security requirements tailored to the size of a business.
This bill will take effect 240 days after becoming law.” https://www.governor.ny.gov/news/governor-cuomo-signs-legislation-protecting-new-yorkers-against-data-security-breaches
The new law does not expand the definition of private information to include passport number, employer ID number or financial transaction devices, all of which are included in California’s new privacy regime.
While New York’s previous data breach statute, passed in 2005, required notification only when unauthorized private information had been acquired, the SHIELD Act requires such notice whenever such data has been accessed. Not surprisingly, this significantly expands the number of incidents that will require breach notification. Notification must occur within “the most expedient time possible and without unreasonable delay,” unless it can be verified that the access was “inadvertent” and that it “will not likely result in misuse.”
The Act’s requirement for “reasonable” security measures is an interesting one. It states, “[a]ny person or business that owns or licenses computerized data which includes private information of a resident of New York shall develop, implement and maintain reasonable safeguards to protect the security, confidentiality and integrity of the private information…”. The Act even states some examples of what “reasonable” could mean: employee training, regular risk assessment exercises, regular testing of key controls and procedures, and the disposal of private information when no longer needed. There is some risk here that while the list is not meant to be seen as exhaustive, a court could de facto apply those requirements rather rigidly. I’ll be following that issue once we see some guidance from the courts.
Notably, the SHIELD Act does not create a private right of action for an entity’s failure to comply with the law. While this may warrant a sigh of relief from companies within the technology space, we will have to continue to look out for The New York Privacy Act, which is under consideration by the New York State Senate at this time. The New York Privacy Act would indeed create such a private right of action. If passed, it would represent the most aggressive data protection policy in the United States, if not the world.
It Was Just A Mission Statement…
Re-posted from intothecyberbreach.com, originally published on July 28, 2019.
Just what the world needs. Another blog.
Let me start that over. What are we doing here?
This first post will be my mission statement, if you will. My statement of intentions.
So, who is this blog for?
It’s mainly directed at entrepreneurs, technologists, business owners, executives, in-house counsel, or really anyone trying to figure out: 1) how to prevent the data in their possession from being compromised or stolen; 2) what they need to do if it has been compromised; and 3) how they can protect themselves and their company from liability in the event of a breach. I will be covering these things from the legal angle, but there will be actionable information relevant to your approach to technology as well.
And who am I?
I have a relatively unusual background for a lawyer. (Cue Liam Neeson’s explanation of my “particular set of skills.”) I started my adult life by dropping out of college in 2000 to join the new technology revolution. Back then, you could get a job writing code just by reading a few books and having the gumption to ask for one.
I started my first tech job in Newark, NJ at NJIT’s business incubator in the late 90s. My best friend was working at a tech startup, writing software for one of the world’s first online travel booking engines. For those of you born in this century, that means that before this project, in order to book a vacation people would either drive to a travel agent’s office or pick up the phone and book it over the telephone. My friends were changing that. And they were making way more money doing it than I was going to make digging ditches or painting fences.
So, in an effort to get what was intended merely to be a summer job, I showed up to their office and begged for a job. The boss asked me, “What do you know how to do? Can you write code? Ever use SQL? Unix? Do you know any Perl?” “No, but I can learn really fast,” I said. He wasn’t impressed and ignored me the rest of the day. They were too small and busy to have the sense to kick me out of their single-room incubator office.
There was an energy there that can only be found in a new startup, and I absorbed everything, like a sock in a puddle. I sat in their office reading coding manuals all day. There were very few websites that taught programming back then, but they existed, and I sought them out. I started with HTML, a little javascript, and it didn’t take long before I was piping into grep. (Don’t ask)
I hung around for a few days, asking for a job each day, and having a sense, deep down, that if they just hired me, I’d be great. I read and learned and waited for the job that I knew I would get.
After those first few days, as I sat around in their office, some data entry task for a client’s website came up that no one wanted to do. It involved making it so that clicking on certain parts of an image brought you to different links (i.e., travel agent locations). I eagerly volunteered. It required almost no skill, just effort. I did it for free. It took me all day (in hindsight, probably a 20 minute job). At the end of the day, my new mentor said, “well, if you are going to work here I guess we’ll have to pay you.” I was in!
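For readers curious what that first task actually looked like, it was almost certainly an HTML image map, which lets different regions of a single image link to different pages. As a rough sketch (the file names, coordinates, and links here are invented for illustration, not the original client’s), the markup works something like this:

```html
<!-- Hypothetical example: clicking different regions of one image
     sends the visitor to different travel-agent location pages. -->
<img src="agent-locations.png" usemap="#locations" alt="Travel agent locations">
<map name="locations">
  <!-- coords are pixel rectangles: left, top, right, bottom -->
  <area shape="rect" coords="10,10,110,60" href="/agents/newark" alt="Newark office">
  <area shape="rect" coords="120,10,220,60" href="/agents/jersey-city" alt="Jersey City office">
</map>
```

Tedious to produce by hand for dozens of regions, which is why nobody else wanted the job.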
I dove in, learned as much as I could, and was (in my mind) on track to make my first million before 21. I dropped out of college shortly thereafter to go full time. We were doing cutting-edge stuff, and I was in the middle of it. I worked long hours, and it hardly ever felt like work. Our little company of a few people grew to ten. After hours, I wrote more code on my own time, eventually creating a task management system that utilized some of the prototypical aspects of social media, which I sold to our company in exchange for a stake in ownership. We were on our way!
*bubble pop*
Then it was gone in a couple of years. It all happened so fast. I went to see my doctor for a checkup one day and my insurance had lapsed. A few months later, my paycheck bounced. I felt like the wind was knocked out of me. My lease came up, and instead of renewing, I lived in my truck for a few weeks and began to re-group. To outside observers, re-grouping looks a lot like moping during the day and partying at night. It took me a long time to understand what happened, and even longer to come to terms with it.
One thing led to another; I wrote code freelance in my living room for a number of years to get through college, and made the decision that I would go to law school to pursue my original path before my affair with the startup world. I loved law school, and I avoided anything tech like the plague. I think part of it was that it hurt too much. Besides, any time I told a prospective internship about my tech experience, they always asked me to work on their website, while the other interns got to do policy research or watch oral arguments in court. I felt like I couldn’t escape. I stopped telling people that I knew how to write code, and I graduated law school to become a trial lawyer. That was 9 years ago, and the world has changed. People don’t need me to make them a website anymore; they need me to help them keep their data secure and stay out of trouble if they get breached.
You’ve probably already gotten one of those letters explaining that your private information has been compromised by a major retailer. You might have seen even more in the news. Companies that find themselves in the position of having been breached need someone who understands the technology, understands the rules governing breach responses, and who can handle any litigation that may arise out of the breach. This isn’t just about big-box retailers anymore. In many states, anyone who handles private information (or has a third-party vendor that does so) could be liable for either mishandling that information or failing to report and notify in the event of a breach.
So, that’s what this blog is about. I am a seasoned litigator and business attorney in a mid-sized law firm with offices across the country, and am admitted in New York, California and New Jersey. I live in upstate New York. I have seen the inside of a server, and I have seen the inside of a courtroom. The law is changing fast, and almost all of the states now require a complex response in the event of a company having its private data accessed inappropriately (i.e., a data breach). Not surprisingly, I offer these services (as well as other more traditional litigation and corporate law representation). You can contact me if you find yourself needing counsel regarding a data breach. But, my hope is that this blog is useful to you whether you become my client or not.
That said, let me throw this disclaimer out there, because it really needs to be made clear (to protect us both): NOTHING IN THIS BLOG IS LEGAL ADVICE. UNLESS WE HAVE A RETAINER AGREEMENT, I AM NOT YOUR LAWYER. IF YOU ARE RESPONSIBLE FOR A COMPANY WHOSE PRIVATE DATA HAS BEEN BREACHED, YOU SHOULD CONTACT A LAWYER IMMEDIATELY IN ORDER TO COMPLY WITH THE NUMEROUS STATE AND INTERNATIONAL DATA BREACH NOTIFICATION REQUIREMENTS. There are real consequences to being breached and not complying with notification laws. There are also real consequences to over-notification (and we’ll talk about that here too). Ideally, this is something that you should work out ahead of time, so you have someone to help you right away. In some cases, you really have very little time, a matter of a few days, not weeks.
Anyway, I will be writing about emerging issues in the cybersecurity world, notable data breaches, and developments in the law, yes. But more importantly, I want this blog to provide actionable information, and I intend to do it in as human a fashion as possible. This isn’t a stuffy generic presentation of “what you need to know.” I’m going to write about what’s new in the cybersecurity world, but I might also write about why movies showing hackers hacking are mostly nonsense. I might also write about why Terminator is an amazing piece of art.
So read the blog. If you have questions, or just want to riff on these issues, get in touch. If you have complaints, keep those to yourself. Good luck navigating this crazy world!