RVM Enterprises, Inc. eDiscovery

Blog Posts

It’s Time to Take Action Against IP Theft

Recently, Tesla CEO Elon Musk was forced to admit that his company was the victim of sabotage by one of its own employees. That employee, frustrated over recently being passed over for a promotion, introduced damaging code into the company’s manufacturing system and shared large amounts of sensitive data with third parties.

Given the company’s desperate need to make progress following a string of negative announcements, the timing couldn’t have been worse.

Tesla’s situation, though perhaps one of the highest-profile cases, is not new or unheard of. Companies quietly monitor their workflows and processes for any signs of IP theft or sabotage by disgruntled or even misinformed employees. Very often, it’s simply a case of employees taking the work product they created, believing that they own it. In other cases, an employee may copy large contact lists hoping to divert client relationships to a new employer.

Whatever the theft, and whatever the motivation behind it, this particular crime is common and can cause a company not only financial loss but also serious reputational damage and even litigation.

Roughly 50 percent of employees will take work product when they leave a company, and close to 40 percent will attempt to leverage that work product on behalf of their new employer.

But what can we do about it?

Most companies rely on commonplace strategies, such as blocking access to online storage sites like Dropbox or disabling USB ports so that files cannot be copied to removable storage devices. The fact is that these methods are only a minor stumbling block for an employee intent on taking work product.

In the past, determining whether information was stolen required forensics work, costing significant money, time, and resources. It is hard to measure the ROI of a process like this because you cannot assess the value of an event that may have been prevented, and you cannot assume the result before you commit the resources. Many companies struggle to see the value in building processes that protect their IP when weighed against committing resources to R&D, service line launches, shareholder rewards, or employee benefits.

Understanding this challenge and leveraging its forensics expertise, RVM created a tool – Tracer – to analyze computers and identify activities that may be associated with potential IP theft. It is designed to look for user behaviors (online and offline) that may indicate an employee’s ill intentions. The tool sweeps through the user’s activity, looking for file movements and other actions, and draws attention to troubling patterns to guide an employer’s decisions.
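Tracer itself is proprietary, but the general idea of behavior-based triage can be illustrated with a small sketch. The log format, field names, and thresholds below are hypothetical, chosen only to show how simple heuristics (bulk copies to removable media or cloud storage, after-hours activity) can surface users who merit a closer look.

```python
# Hypothetical sketch of behavior-based IP-theft triage (not RVM's Tracer code).
# Assumes a pre-extracted activity log with made-up field names.
from collections import Counter
from datetime import datetime

activity_log = [
    {"user": "jdoe", "action": "copy", "target": "usb", "files": 412,
     "timestamp": datetime(2018, 6, 1, 23, 40)},
    {"user": "jdoe", "action": "upload", "target": "cloud", "files": 37,
     "timestamp": datetime(2018, 6, 2, 0, 15)},
    {"user": "asmith", "action": "copy", "target": "local", "files": 3,
     "timestamp": datetime(2018, 6, 2, 10, 5)},
]

BULK_THRESHOLD = 100          # flag unusually large transfers
AFTER_HOURS = (20, 6)         # 8 p.m. to 6 a.m.

def is_after_hours(ts):
    return ts.hour >= AFTER_HOURS[0] or ts.hour < AFTER_HOURS[1]

def flag_suspicious(log):
    """Return per-user counts of events that match simple risk heuristics."""
    flags = Counter()
    for event in log:
        risky_target = event["target"] in {"usb", "cloud"}
        bulk = event["files"] >= BULK_THRESHOLD
        if risky_target and (bulk or is_after_hours(event["timestamp"])):
            flags[event["user"]] += 1
    return flags

print(flag_suspicious(activity_log))   # e.g. Counter({'jdoe': 2})
```

In practice the heuristics, thresholds, and data sources would be tuned to the organization, and flagged patterns would be reviewed by a person before any action is taken.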

But technology alone may not be enough to overcome the problem. Leveraging experts who can properly assess the problem and collaborate with a company to right-size the solution is a powerful next step. The best way for companies to protect their IP is to ask the hard questions regarding its value and be prepared to take action.

Tesla is a strong company with a stable revenue stream, and will likely weather this storm. Other companies may not be so fortunate.

Navigating Dangerous Waters: Using Technology to Sail Through Second Reviews

by Jonathan Alef

With the economy heating up and corporate balance sheets flush with cash, navigating the corporate environment has become akin to sailing through pirate-infested waters, as companies ride the tide, snapping up smaller entities through mergers and acquisitions. The bevy of activity seen thus far in 2018 is on pace to eclipse the record of $4.7 trillion set in 2015.

What the public sees through headlines and articles in the media often belies the complexity of the work being done in the background. The FTC and DOJ, empowered by the Hart-Scott-Rodino Act (HSR), investigate large proposed mergers for potential antitrust issues, potentially forcing second requests. The large volumes of data and tight deadlines that are common in these proceedings make them an ideal setting for an analytics suite and technology assisted review (TAR) workflows.

A second request is much like it sounds. Under HSR, the acquirer in a transaction that meets certain thresholds must submit filings and a fee to the FTC and DOJ for them to review and approve the merger. However, if either agency believes that the proposed merger would likely harm competition, a second request is issued, giving the agencies the opportunity to perform a deeper investigation. Second requests can cover anything and everything deemed necessary to ensure compliance with antitrust provisions.

In recent years, both agencies (the FTC in 2015 and the DOJ in 2016) amended their Model Second Request forms to allow for the expanded use of analytics and TAR workflows. This is a potential blessing, so long as you follow their prescribed procedures; most find it worth gaining written approval before implementing these tools and processes so that the company can realize substantial time and cost savings over traditional linear review. Prior to the amendments, companies were required to review documents one by one for responsiveness, PII, and privilege, often necessitating an army of contract attorneys.

When reviews are augmented with technology, they can usually be completed in a fraction of the time and at a reduced cost. For some background, there are two primary analytics functions that are always helpful to employ when performing a review, especially when used together: email threading and similar content analytics. Email threading is primarily used to organize review content, but can also be used to reduce duplicative content. Meanwhile, similar content analytics (conceptual searching, near-duplicate analysis, and clustering) work to identify the information most sought after by the requesting agency faster, so that it can be moved to the forefront of the production. Compared to traditional keyword searching, these procedures offer significantly more flexibility, helping to produce results more quickly to meet deadlines.
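As a rough illustration of what near-duplicate analysis does under the hood, the sketch below compares documents by the overlap of their word shingles. It is a toy example with invented snippets; commercial analytics engines use far more sophisticated techniques, but the intuition – score textual overlap, then group highly similar documents – is the same.

```python
# Minimal near-duplicate detection sketch using word shingles and Jaccard
# similarity. Purely illustrative; not how any particular review platform works.

def shingles(text, k=3):
    """Return the set of k-word shingles in a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Overlap of two shingle sets, from 0.0 (disjoint) to 1.0 (identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

doc1 = "Please review the attached supply agreement before Friday's call"
doc2 = "Please review the attached supply agreement before Monday's call"
doc3 = "Quarterly sales figures are attached for your review"

print(f"doc1 vs doc2: {jaccard(shingles(doc1), shingles(doc2)):.2f}")  # high -> near-duplicates
print(f"doc1 vs doc3: {jaccard(shingles(doc1), shingles(doc3)):.2f}")  # low -> unrelated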

TAR workflows incorporate machine learning: a team of subject matter experts reviews a small segment of the total data population and “feeds” its decisions to learning algorithms. The machine then applies that learning to the larger population, quickly identifying and classifying responsive documents. TAR is tremendously valuable, but it still requires careful vetting for privileged and PII content, and the final outcome must include robust reporting of the process and metrics as required by the FTC and DOJ.
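A stripped-down version of that loop can be sketched in a few lines. The documents, labels, and model choice below are hypothetical and purely illustrative; real TAR platforms layer validation rounds, elusion testing, and defensibility reporting on top of the basic train-and-rank step shown here.

```python
# Toy illustration of the TAR idea: subject matter experts code a small sample,
# a model learns from those decisions, and the learning is applied to the rest
# of the population.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# SME-coded seed set (hypothetical snippets): 1 = responsive, 0 = not responsive
seed_docs = [
    "pricing terms for the proposed merger with Acme",
    "divestiture of the overlapping product line",
    "lunch order for the team offsite",
    "holiday party planning committee notes",
]
seed_labels = [1, 1, 0, 0]

# Unreviewed population to be classified
population = [
    "draft term sheet for the Acme acquisition",
    "parking garage access codes for new hires",
]

vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X_seed, seed_labels)

# Rank the unreviewed documents by predicted likelihood of responsiveness
scores = model.predict_proba(vectorizer.transform(population))[:, 1]
for doc, score in sorted(zip(population, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```

The ranking, not the raw prediction, is what drives the workflow: the highest-scoring documents go to reviewers first, and the reviewers’ decisions feed back into the model.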

In the past decade, 575 mergers and acquisitions required a second request – about 3 percent of all eligible transactions. That’s more than triple the likelihood of having your taxes audited by the IRS in 2015. These companies were subjected to a painstaking process requiring countless hours of discovery and legal review. Meanwhile, trillions of dollars rested on the findings of those second reviews, making technology-guided processes that whittle down and organize the review population critical. The smaller, richer review populations go a long way toward minimizing the risk and the costs.

On the Ground in the EU: Key Takeaways on GDPR

On May 25, the General Data Protection Regulation (GDPR) took effect in the EU, and companies around the world have scrambled to demonstrate compliance. With so much on the line, many have been turning to vendors experienced in multinational cross-border cases to better meet the standards and requirements of the new regulations. RVM’s clients are no exception, often having terabytes of data stored on physical and cloud servers around the world.

RVM recently completed its first in-country data privacy review since GDPR went into force. Our team was contracted by a U.S.-based multinational corporation that required onsite privacy culling to meet some of the guidelines set out in the new regulations. Through the process, RVM forensic engineers collected and reviewed custodian emails and file data in country by performing searches based on relevance and date. The data was exported to native and load-file formats for upstream hosting and review in the United States.

To ensure that the work being performed complied with GDPR, RVM worked throughout the project with local and outside counsel – including Data Privacy Officers – to confirm that all documentation and agreements were in place.

 

Document, Document, Document

There are a lot of moving pieces with GDPR, so it is important that all parties understand the prescribed rules and work hand in hand with the data privacy officers to build a process that meets both the business and legal requirements. The more you can demonstrate in writing, the better. Some of the documentation, like data privacy agreements, should be in place before your team ever gets on site. Documenting each step in the process protects both the vendor performing the work and the client, and it can affect your ability to complete a project on time.

Avoid GDPR Fixation

There is no question that GDPR is new and important. However, the EU is not the only place that has rules and laws governing data handling and privacy. Large projects may involve data stored or moved between multiple countries and multiple jurisdictions. Satisfying GDPR regulation is important, but companies need to be aware of other regulations that may differ from or even supersede those of the EU. For this reason it is critical to be in communication with client counsel, other data processors, and the data controller where you are working to ensure compliance in all relevant jurisdictions.

Ask Before You Move that Data

In this example, RVM’s experts were able to satisfy the GDPR requirements for data export to a third country when they ingested data that originated in the EU into a review platform in the United States. Under GDPR, data containing personal information cannot simply be transferred outside of the EU. It is critical to work with client counsel, other data processors, and the data controller to complete all expected processes and to identify and obtain consent where required to complete the project.

Data Privacy Laws Aren’t Just a European Concept

The air is hot and stale in his 8×8 cell in Colombia, and the constant sounds of prison unrest make sleeping difficult. For the last three months his company has consisted of a resident rat named Rata, who looks better fed than him, and his cellmate, Chismoso, who became a guest of the prison after he was caught transporting bags of drugs at the airport.


Professionals who inappropriately collect and process personal data internationally face potential prison sentences.

As far as the Colombian government was concerned, Spencer Davis was also smuggling. But instead of plastic bags in a briefcase, Spencer’s contraband was stored on five hard drives. His firm was hired by an international bank’s U.S.-based attorneys to perform a forensic collection, data processing, and culling onsite. After processing the data for an onsite privilege review, he was to transfer it back to the United States for additional searching and hosting for review.

That’s when it happened. While searching Spencer’s carry-on luggage at the airport, security found the hard drives and asked what they were for. He explained the situation and showed them the data privacy and consent agreements he’d received from the bank’s compliance officer. What he didn’t know was that every database containing personal information created in Colombia must be registered with the federal government, and that consent for the processing of data must come freely from each individual – not from the data holder’s corporate legal team. Although Spencer was acting as an agent of his firm, he was still responsible for failing to comply with the law.

This seemingly small oversight was enough for the officers to arrest Spencer at the airport, and for him to be sentenced to 96 months in jail.


The story above is fictitious, but the punishment – as harsh as it may appear – is a very real possibility for professionals who collect and process personal data internationally. With all the attention that the European Union has received, it is a good reminder that professionals must be aware of the legal requirements in any foreign jurisdiction in which they work.

The General Data Protection Regulation (GDPR) is the new buzzword in data privacy and consulting, having gone into full effect in the E.U. on May 25. The financial penalties for non-compliance are tremendous (up to the greater of €20 million or 4 percent of global annual revenue) and have, on some level, scared the business world “straight.” It has made U.S.-based companies look at data privacy like never before. It has even inspired the American public to petition politicians to take a new look at increased privacy laws at home.

As eDiscovery practitioners, consultants, and forensic experts, we mustn’t forget that GDPR applies only to organizations located within the European Union and to foreign organizations that offer goods or services to, or hold data of, E.U. citizens. More than 80 countries have data privacy laws, some of which were inspired by the GDPR. In spite of their similarities, however, there are still considerable differences that must be recognized and understood to avoid potentially steep penalties.

Countries throughout the world enforce data privacy regulations similar to the E.U.’s, some of them much stricter and with more severe penalties. In the example country, Colombia, the right to intimacy, good name or reputation, and data protection are all guaranteed by Article 15 of the country’s constitution. The Colombian Criminal Code allows offenders to be sentenced to prison terms of 48 to 96 months and provides for fines equivalent to over $270,000 USD. Similar regulations exist in other jurisdictions, such as Hong Kong, Morocco, Japan, and Venezuela.

In Brazil, the Brazil Internet Act, passed in 2014, created policies governing the collection and use of personal data via the internet. Brazil added an additional step by ruling that minors under 16 years of age cannot legally give consent for the use of their personal information, and that young adults between 16 and 18 years of age require the assistance of a legal guardian to give consent.

The Data Privacy Act of 2012 in the Philippines is the country’s first-ever overarching data privacy legislation, and was heavily influenced by Directive 95/46/EC of the European Union. The Act introduces the concept of ‘sensitive personal information’, a class of personal information which is subject to more stringent requirements for processing. Those found guilty of processing such data – even data stored outside the Philippines – without the proper consent may be subject to prosecution and jail time of two to seven years.

The goals of each of these countries’ policies are similar, but the mechanism by which appropriate consent can be given will vary depending on which country you are in. Further, transferring data from one country to another may compound the requirements, so never simply assume that everything is in place. Be proactive and ask the difficult questions before you begin collecting and processing data.

When working in any country or even at home on a foreign citizen’s data, it’s prudent to perform your due diligence and consult with a data privacy expert familiar with the laws of that country to ensure you’re in compliance with local and national data collection and processing law. It’s important to understand that consent is only valid when it is obtained freely and willingly from the appropriate party, and how the law defines the appropriate party may vary from country to country. The penalties for incomplete or inappropriate data privacy consent can include personal liability up to, and including, prison sentences.

The Necessary Evil of Search Terms

by A.J. Strollo

“Having lawyers or judges guess as to the effectiveness of certain
search terms is ‘truly to go where angels fear to tread.’”
Magistrate Judge Facciola,
United States v. O’Keefe, 537 F. Supp. 2d 14, 24 (D.D.C. 2008)

This statement was made 10 years ago, and it has only become more prescient – particularly when looking at the complexities of term syntax and what actually exists within data sets. Search terms can seem fraught, if not outright risky. So why do we continue to rely on them?

Despite the concerns surrounding keywords, and even after all the recent technological gains, they remain the most common way to cull data for potential review and production. The reason for this is likely that they are familiar, and as we all know, the legal community can be slow to move away from the tried and true, particularly when the alternatives involve relinquishing control to machines.

It’s relatively easy to generate a proposed list of terms, run them against the data, and determine how many documents the terms capture. But knowing whether the terms actually capture information of interest is a different story. Along those lines, Magistrate Judge Facciola noted that whether the terms “will yield the information sought is a complicated question involving the interplay, at least, of the sciences of computer technology, statistics and linguistics.”  Id.

Facciola may have said this because of the way lawyers often use search results without substantive analysis. A common practice when running terms is to look at the volume of data returned rather than the quality or effectiveness of the search. So, if the data returned is significantly higher than expected, the lawyer may narrow the terms arbitrarily with the goal of reaching the “right” number of documents. How they determine what is “right” can be a mystery. These adjustments may yield fewer results, but they also risk eliminating necessary ones. While that’s not to say the practice is haphazard, it does lack defensibility, especially if parties are locked in a contentious battle over the scope of discovery.

For me – and I think Facciola would agree – a better focus than volume is the effectiveness of the terms, measured not solely by the number of hits but by the richness, or “relevancy rate,” of the potential review population.

So how do we make keywords and search terms more effective and assuage the “fears of the angels?”

A big step is to perform substantive analysis of any search terms rather than relying on the commonly used guess-and-check method. When the starting point is a list of proposed terms from opposing counsel with an uncertain level of effectiveness, we must assess and refine those terms to increase the likelihood of capturing the most relevant documents. Borrowing concepts from basic statistical analysis, the process for vetting terms and suggesting revisions can be based on the results of a sample review. Terms are modified by targeting common false positive hits – hits on the term but not on the intended target – identified within the non-relevant documents from the sample.
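The statistical piece of that process is small enough to sketch. The sample size and review counts below are invented; the point is only that a reviewed random sample of a term’s hits yields a relevancy-rate estimate (and a rough confidence interval) that can guide whether the term needs revision.

```python
# Sketch of the sampling idea behind term vetting: estimate the "relevancy
# rate" of a term's hit population from a reviewed random sample, using a
# simple normal-approximation confidence interval. Counts are made up.
import math

def relevancy_rate(sample_size, relevant_in_sample, z=1.96):
    """Point estimate and ~95% confidence interval for the relevancy rate."""
    p = relevant_in_sample / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Example: a 400-document random sample drawn from the hits on a proposed
# term, of which reviewers marked 72 as relevant.
p, low, high = relevancy_rate(400, 72)
print(f"estimated relevancy rate: {p:.1%} (roughly {low:.1%} to {high:.1%})")
```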

Imagine a fact pattern where the relevant discussions involve Jacob Francis and his interactions with a specific contract. Initial searches for Jacob OR Francis in documents that also contain the contract title or number would yield a substantial volume of documents based on the commonality of Jacob’s name.  It’s easy to label this as a bad term, but a lawyer’s analysis is helped much more by understanding why it is bad and how to make it better. Attorneys can do this by looking at the documents, which reveals that there are others at the company with Jacob or Francis in their names (e.g., Jacob Smith or John Francis), thus opening the door to an array of potential term revisions to minimize the number of documents returned. This is a good start, but the analysis does not end there.

Next, it is important to check actual document hits to ensure they are consistent with any assumptions. To do that, an attorney should draw and review an additional sample from the documents that were removed from the review population to ensure the new terms are not missing potentially relevant content. Digging into these, the attorney may find that Jacob Francis had a nickname, “Jake,” which would not be captured after narrowing the terms from Jacob OR Francis to Jacob w/2 Francis. Continued analysis may also uncover references to the contract negotiation as “Project Apple” instead of the contract title or number.
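The sketch below mimics that spot-check on a handful of invented documents: apply a narrowed proximity query, then scan what fell out of the hit population for candidate signals such as the nickname or project code name. The proximity logic is a simplification of the w/2 operator, not a substitute for a review platform’s search engine.

```python
# Rough sketch of checking documents excluded by a narrowed search for signs
# of missed relevant content (nicknames, project code names). Documents and
# candidate terms are invented for illustration.
import re

def within_n_words(text, a, b, n=2):
    """True if terms a and b appear within n words of each other."""
    tokens = re.findall(r"\w+", text.lower())
    positions_a = [i for i, t in enumerate(tokens) if t == a]
    positions_b = [i for i, t in enumerate(tokens) if t == b]
    return any(abs(i - j) <= n for i in positions_a for j in positions_b)

corpus = [
    "Jacob Francis signed the contract amendment yesterday",
    "Jacob Smith booked the conference room",
    "Jake circulated the Project Apple redlines to the team",
]

# Narrowed query: Jacob w/2 Francis
hits = [d for d in corpus if within_n_words(d, "jacob", "francis")]
excluded = [d for d in corpus if d not in hits]

# Spot-check the excluded documents for signals the narrowed terms miss
candidates = ["jake", "project apple"]
for doc in excluded:
    found = [c for c in candidates if c in doc.lower()]
    if found:
        print(f"possible miss ({', '.join(found)}): {doc}")
```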

Using this knowledge and adding or modifying the search to include “Project Apple” and “Jake” addresses these missing documents, avoiding potentially serious omissions. Additional considerations might include running “Project Apple” as a conceptual search rather than as a strict keyword, seeking documents that are similar in meaning but that do not necessarily share the same set of terms.

The payoff of all this work is a more focused set of documents for review, reducing associated costs and concentrating the review team’s time on the documents that actually need attention. Considering the alternative of reviewing countless volumes of data unnecessarily – or worse, discarding valuable documents – it’s clear that using keyword searches effectively is not only necessary, but beneficial.

How to Manage the Challenges of Change in Technology

A Q&A with RVM CTO Geoffrey Sherman

In our day-to-day lives, technology goes in and out of style every few years, from a PC operating system to the latest media platform. Fortunately, in most of these cases, upgrading to the newest technology can be accomplished with minimal expense and time.

At the enterprise level, however, changes in technology can pose big headaches, from compatibility with existing systems and retention for compliance to security and, of course, cost.

Many companies will delay the move to new or changed technologies as long as they can, opting instead for patches and workarounds. But at some point, it doesn’t make sense to fight the tide any longer; change is inevitable.

Geoffrey Sherman is RVM’s Chief Technology Officer, responsible for overseeing and deploying information technology products and solutions used by both RVM’s internal workforce and its clients around the world. What did he have to say about managing the challenges of change?

 

What might drive a company to consider upgrading to a new platform or system?

There are a few things at play. First, a company should look at its needs and determine whether they are being met by its current technology solution. An aging system may have a negative impact on work product or be vulnerable to security flaws.

Even if the product still functions perfectly, the company’s business needs may have evolved to the point where they are no longer being met. Or, new products may enter the market with features not previously available. Those features are then weighed to determine whether they provide enough value to warrant the upgrade.

What are a company’s considerations before transitioning to a new technology platform?

Once a company realizes that their technology is not meeting their needs, the first thing to consider is whether the problem can be addressed with something less impactful, such as a version update, edition change, or workflow change. Failing those solutions, a replacement product or technology may be in the best interest of the organization.

For RVM, a key consideration is the impact that this change will have on our user base. We also need to evaluate if this change will require heavy architectural changes. We are sensitive to whether this would be a full replacement for our current platform, or whether the two would co-exist in parallel.

What are some of the concerns with upgrades in general?

RVM’s goal is to be on the cutting edge of technology, while stopping short of the “bleeding edge.” This means that we approach any upgrade strategically to be sure that the product version change is well tested, often maintaining smaller test environments for major applications. However, in spite of the planning and testing, not all upgrades will go as planned.

There are ways to combat this. The most basic way is to have a maintenance plan in partnership with the technology vendor to handle minor problems as they arise. In addition to this, we firmly believe in recognizing the circumstances that warrant backing out of a given upgrade and having a mature process to revert to a known good state.

What steps can a company undertake to ensure a smooth transition to a new platform?

If you are reading this, platform change has become inevitable. There are many steps involved in an enterprise-level transition to a new platform. The key is gaining buy-in from the user base and testing early and often in manageable batches. This is crucial at the beginning for two reasons. First, it creates a sense of ownership. Future users that are invested early will apply a more critical eye and be more likely to contribute to the process, ensuring satisfaction with the result. Second, when users have a better grasp of a product and understand how it can meet their needs, there is a much higher rate of user adoption, which in turn improves return on investment.

Testing is just a common-sense practice for any technology rollout. We recognize that no solution is going to be perfect out of the box, so it is critical that we conduct rigorous tests to find and fix as many bugs as possible before we roll out the product to our clients.

How do you measure the success of a transition?

Before we attempt any transition, we develop a notion of what the success factors will be, and solicit client feedback afterward. We find that this creates a feedback loop resulting in lessons learned on both sides. Responses may come in the form of contemporaneous feedback (e.g., emails or phone calls about the product) or may be a more formalized approach using surveys. No matter what we hear, we make sure we are learning from the process.

Can you give an example of a successful migration that RVM implemented?

RVM recently transitioned to a SaaS-based email security offering. The transition was smooth due to meticulous planning and testing, and the user community was pleased with the added functionality, clear documentation, and the fanatical level of care taken to validate that operations went as planned. This was a situation where we performed several days of IT-led training to ensure everyone knew the timeline of events and that their expectations were well set. It was such a smooth transition that afterward I had to walk the floor and chat with users to make sure they were all working as expected, since there were no service desk tickets logged.

What should organizations consider if they have a legacy litigation offering or are facing challenges with the current system?

RVM can reduce the burden of housing legacy systems and even offload some modern ones, allowing our clients to focus on the merits of their cases without worrying about upgrades, testing, and downtime. As an example, RVM can migrate cases from Veritas’s Clearwell platform, which often carries a large footprint of mostly static cases. RVM also specializes in migrating Relativity workspaces to eliminate the burden and upkeep of hosting Relativity in-house. Such options offer clients the ability to put contingencies in place for service and support.

eDiscovery in O365 is Easy, But Still Best Left to the Experts

By Sean King, Chief Operations Officer

I admit it. I do my own taxes. I like the control I have in organizing my finances and filling out a ridiculously complex set of forms and fields. Honestly, I do my own taxes because the available software makes completing them much easier. But despite the “risk meter” shown by the software, every year after I complete the process and triple-check all of my information, I never feel confident I did it 100 percent correctly. I worry about the potential for an audit and the stiff penalties that accompany a failed audit.

A lot of the technology rolled out in the last few years takes complex tasks and reduces them to everyday functions. With cloud-based solutions like Office 365, management of a company’s email and of the legal functions that relate to data management and information governance is becoming routine. As someone who has spent his career working with and around legal professionals, I wonder whether we realize the potential legal risk that presents.

Recently I moderated an RVM webinar, Office 365 – The Unseen Legal Risks, where we elaborated on some of the risks inherent in the implementation and use of Office 365. The expectation of compliance, process, and collaboration has greatly expanded over the years as new technology, such as predictive coding and technology-assisted review (TAR), has become more accepted in the mainstream, as noted in court opinions from matters like Da Silva Moore v. Publicis Groupe, 287 F.R.D. 182 (S.D.N.Y. 2012), or Winfield v. City of New York, 2017 U.S. Dist. LEXIS 194413 (S.D.N.Y. Nov. 27, 2017), where Judge Parker directed the City to use TAR instead of linear review. This trend is further complicated by cloud-based systems like Office 365.

As you may be aware, the heavy lifting in completing personal income taxes is the overall questionnaire. During this stage you enter in your family information, where you live, your W2 data, investments, etc. If you read or interpret the question wrong, the best tax calculator in the world won’t be able to help you.

Same thing with Office 365.

In Office 365, you need to create your rules and establish how your data will be handled. Companies that choose to go with an “out of the box” setup may get a nasty surprise when it comes time to pull data for an investigation. One example we discussed was how long Microsoft will store data. Your company policy may be to keep emails for one year, but if you’re not aware of your settings, you could be responsible for producing emails going back much farther.
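A trivial way to see that gap is to compare the retention period a company believes it enforces against the dates on what a collection actually returns. The sketch below uses invented dates and a hard-coded policy value purely for illustration; a real check would start from the tenant’s actual retention and hold settings.

```python
# Simplified illustration of the retention-settings gap described above:
# compare the stated retention period against the age of collected items.
# All values are hypothetical.
from datetime import date

STATED_RETENTION_DAYS = 365          # "we keep email for one year"
collection_date = date(2018, 7, 1)

collected_items = [
    {"subject": "Q2 forecast", "received": date(2018, 4, 2)},
    {"subject": "2015 vendor contract", "received": date(2015, 9, 14)},
]

for item in collected_items:
    age_days = (collection_date - item["received"]).days
    if age_days > STATED_RETENTION_DAYS:
        overage = age_days - STATED_RETENTION_DAYS
        print(f"outside stated policy by {overage} days: {item['subject']}")
```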

There are other legal concerns as well with using a product that proudly associates itself with “the cloud,” which suggests that the data may sit in an unknown location. This could present jurisdictional concerns or GDPR compliance issues, as your responsibility for producing data or protecting privacy may hinge on the country or region in which your data is stored. Just because your data is not in the United States does not protect it from the U.S. courts, and being an American company does not mean that your U.K.-based data is exempt from GDPR compliance.

Another important concern is whether, like me and my taxes, companies are performing tasks that are perhaps best left to certified professionals. Typically, a company that receives a document request or subpoena will engage in a process overseen by a lawyer or outside counsel. With Office 365, however, it becomes easy for a company to bypass much of that process, believing that the risk is low. But is that enough? What if I misinterpret or do not understand the function of search or analytics in O365 and do not get the right results? Will I even know if they are right? Do I know what O365 is NOT giving me, and should? While it may seem easy, it may not be done correctly to meet discovery or evidentiary requirements.

RVM has written in the past about self-collection and the risks that it can entail. The logical interface and robust nature of O365 could lead even more companies down a road that we previously described as similar to driving with too little insurance: it may save in the short run, but in the long-term you’ll likely end up paying more.

Finally, Office 365 gives companies the ability to analyze and review documents. As a litigation support professional, I recognize the power and effectiveness of this kind of technology, as have the courts, which have started encouraging the use of analytics during document review. But in the hands of someone lacking the proper training, such a tool becomes highly ineffective, resulting in potentially deficient productions that can negatively impact summary judgments. The key question, as we learned from Allan Johnson of Actium LLP, is whether you are able to speak to the results you achieved and the process used to get them. The best way to guarantee that is to ask your IT person or consultant about your O365 environment and to work with an experienced forensics professional familiar with O365.

We as professionals have a requirement and a duty to understand the technology that we use every day. I am concerned by the lack of understanding that companies exhibit about their Office 365 licensing, functionality, setup, and workflows. The courts will not accept ignorance as a legitimate rationalization for failing to meet the standards of legal competence, and most companies cannot afford the fallout from a negative ruling.

Doing your taxes on your own might be one thing; letting just anyone handle email collection and export is a level of risk we should not take for granted.

 

Learning from Equifax: Preparing for a Cybersecurity Breach

by Danielle McGregor

 

On March 1, Equifax announced that its 2017 data breach, which affected over 140 million Americans, had actually affected over 2 million more people than previously reported. The breach was discovered on July 29, 2017, but wasn’t made known to the public until September 7, over a month later. For that month, information such as driver’s license numbers, birth dates, and Social Security numbers sat exposed without consumers’ knowledge, and as a result Equifax and its reputation were excoriated.

Unfortunately, data breaches like this are not uncommon and, in fact, may be becoming the norm rather than the exception. We need to ask ourselves: What have we done to prepare for a data breach, and do we have access to technologies that will allow us to respond quickly and efficiently?

Earlier this month, The Sedona Conference released an Incident Response Guide that addresses this scenario. Its guidance is for every company to have an Incident Response Plan in place for handling data breaches. The plan should be broad enough to cover any scenario, but provide key actionable details so that, should the unthinkable happen in the middle of the night, everyone knows whom to call first. The first step in the plan is to identify what format of data the organization has (e.g., digital or paper) and where it is located.

RVM recommends that any protocol include three basic steps to minimize the risk to the public as well as mitigate any potential public relations fallout.

Step 1 – Determine the nature of the breach and fix it! This is easier said than done. The company’s IT department or technology experts have to determine what vulnerabilities may have given outsiders access to their information and plug the hole. This is important from a business perspective, but also very helpful when assuring the public that the problem will not be repeated in the future.

Step 2 – Engage a Data Forensic Team that can isolate the affected systems and collect the images of the breached data for your review and analysis. This will help to determine the extent of the damage of the breach by identifying affected parties, data sources, and the sensitivity of the information that was stolen.

Step 3 – Consult with legal counsel to determine what the law states regarding the duty to notify and who should be notified. Determining what type of information was accessed will provide guidance on who, beyond the relevant governmental entities, will need to be notified. While transparency may open you up to scrutiny, it will also help to establish a level of trust with the authorities and the general public.

Most states require that notice be given “without unreasonable delay.” For example, New York State requires that consumer notice be given in the “most expedient time possible and without unreasonable delay.” (N.Y. Gen. Bus. Law § 899-AA, N.Y. State Tech. Law 208) However, some states have a specific date limitation. Vermont requires that consumer notice be made “in the most expedient time possible and without unreasonable delay, but no later than 45 days after discovery.” (9 V.S.A. § 2435)

With so much on the line, time is of the essence, so it is critical to identify the affected information/data as quickly as possible. This is where analytics comes into play.

Simple linear review in these cases can take a lot of time, especially if the amount of breached data is large. Leveraging a wide variety of processes, analytics can help narrow down the data or identify what it contains.

After a data breach, analytics can help a company determine whether personally identifiable information (PII) was exposed and identify the documents in which that information is held. This is possible through a facts-first approach – prioritizing what is known. What types of sensitive information could have been accessed? What information is the most damaging?

RVM’s analytics team typically starts by identifying standard PII, which includes social security numbers, bank card numbers, etc. With the use of technology and our analytics experience, we can quickly identify documents that contain social security information and isolate those documents for review. After the breached data is identified, the next step is to determine whether it contained trade secrets or privileged information. When you know what you are looking for, analytics can help shorten the time spent on the search.
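As a purely illustrative example of that first pass, the sketch below flags documents whose text matches simple patterns for Social Security and payment card numbers. The patterns and file names are made up, and production workflows add validation (such as Luhn checks), contextual analysis, and many more data types.

```python
# Illustrative-only sketch of pattern-based PII triage: flag documents that
# appear to contain Social Security or payment card numbers so they can be
# prioritized for review.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

documents = {
    "hr_file.txt": "Employee SSN 123-45-6789 on record.",
    "newsletter.txt": "Join us Friday for the summer picnic!",
}

def triage(docs):
    """Map each document to the PII categories it appears to contain."""
    flagged = {}
    for name, text in docs.items():
        kinds = [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(text)]
        if kinds:
            flagged[name] = kinds
    return flagged

print(triage(documents))   # e.g. {'hr_file.txt': ['ssn']}
```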

A large-scale data breach can be a scary event for any organization, no matter the size. However, by adequately preparing for this likelihood and applying sound analytics, it is possible to mitigate the damages and maintain a positive relationship with stakeholders. In particular, companies with large volumes of sensitive data may do well to work with an advisor capable of developing a plan and implementing it.

While no company wishes to go through this ordeal, the important thing is to take the proper steps to minimize the likelihood of it happening again.

The “A” in AI Needs a Makeover

By Jeanne Somma, Director, Analytics and Managed Review

When I first joined RVM, I had a meeting with a law firm client that was supposed to focus on how RVM could use technology to cut costs for the firm and, ultimately, its clients. What it turned into, though, was a frank conversation about the firm’s reluctance to rely on technology without something more to back it up. The client put a name to that “something more,” calling it “legal legs.” His frustration was that technology companies are selling tech to lawyers without the proper legal legs to warrant complete adoption.

At the time I found that to be an interesting and frustrating stance. Case law, metrics, and a whole host of other things were, I believed, the legal legs necessary to back up the use of technology. I left that meeting with a sticky note that simply read “tech needs legal legs” and a fear that I wouldn’t be able to bridge the gap between what is possible when using technology and what is acceptable to the lawyers who use it.

It’s no surprise that many of the conversations and presentations at this year’s Legalweek centered on arguably the most polarizing technology, Artificial Intelligence (AI). Use of the term in the legal sector has grown exponentially, and at Legalweek the talk seemed to reach a fever pitch. But even as session after session described the amazing things that AI can do for the legal community, I wondered whether the mass adoption promised by presenters is reasonable in the current environment. Put another way, we need to focus on evolving the conversation, rather than just evolving the technology.

One of the sessions I was excited to attend was the AI Bootcamp. There the conversation was more of the same, with unfailing optimism that not only could the technology solve the problems of the legal sector, but that the sector would soon embrace it for those reasons. The feeling in the room was one of wide agreement that AI would be adopted and would permeate the legal sector. That is, until the questions started.

The questions from the audience dissected the great divide between the technology’s capabilities and attorneys’ faith that the result will be the same or better than if they had used the processes they are accustomed to. With each comment from the practicing attorneys in the room, I was reminded more and more of that sticky note in my office – “legal legs.” The technology is ready, but it seems the lawyers may not be.

As I listened to the dialogue between the adopters and the more reticent participants, the real difference of opinion boiled down to defensibility. Some attorneys were finding it hard to rely on a system that made decisions using algorithms that were not easily articulated. These attorneys wanted to be able to tell a judge or opposing party that the results of an AI exercise would be the same or better than if they had done it without the technology. How can they do that when the machine is doing the thinking?

Looking at my notes and seeing the word “artificial,” I realized that that was my stumbling block. It’s the implication that the results are being driven by a machine, which is not accurate. The type of technology we use across the legal sector – whether in contract analysis, legal research, or predictive coding – is meant to take human understanding and amplify it, providing the user with an efficient way to get to a result. The process is the same: a human with the required knowledge of the subject must train the machine on their thought process and desired results. The machine then simply takes that knowledge and applies it across large data sets faster than the human could. The machine isn’t deciding anything it wasn’t already told by the human. What it does is amplify the human’s intelligence. In other words, people are still driving the results, but with assistance from the technology.

Framed as amplification rather than artificial, the process loses its mystique. We bring the real definition to light – a process that leverages human intellect to train a machine to do the resulting work more quickly. The result is the same because the human’s intelligence and input at the outset are the same. The ability to check the work is also the same. The only thing that’s changed is the speed with which the process can be completed.

As technology and legal service providers, we need to change the conversation. We need to focus on the amplification power of AI solutions and the defensibility of relying on the human subject matter expert. Until we can show the legal community that AI is not artificial at all, we will continue to have this battle between capabilities and adoption.

I for one want to solve this problem – if only so I can finally throw out that sticky note.