
Today’s guest post was written by Ron Dolin. You can learn more about him at the end of the post.
*********
There has been a lot of talk over the last few years regarding the Unauthorized Practice of Law (UPL) and the use of technology – in particular, document automation systems. A common question is whether using document automation in a law practice without an attorney’s review violates UPL statutes. For example, suppose a client wants assurance that a document automation system conforms to a licensed attorney’s guidance. If an attorney implements the automation system with his or her own legal practice logic, then one could guarantee that the final product is the same as what the attorney would produce manually. The key is making sure that the automated system follows the same procedure a lawyer would use, and that any deviation from that process is monitored and flagged.
Take search as another example – we use an index structure that automatically pulls documents for us based on our queries, in a fraction of the time it would take to comb through the documents by hand for the same information. Imagine how slow search would be if we had to scan every document for every query. The index accurately represents the text in a search-efficient structure.
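To make the analogy concrete, here is a minimal sketch (in Python, with invented documents and names) of the inverted-index idea behind search: the index is built once, term by term, and each query then consults only the index rather than re-reading every document. It illustrates the concept, not any particular search engine’s implementation.

```python
# Minimal inverted-index sketch (hypothetical data, for illustration only).
from collections import defaultdict

docs = {
    1: "the lease terminates upon default",
    2: "tenant shall pay rent monthly",
    3: "default triggers early termination of the lease",
}

# Build the index once: each term maps to the set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(*terms):
    """Return the ids of documents containing every query term."""
    matches = [index.get(term.lower(), set()) for term in terms]
    return set.intersection(*matches) if matches else set()

print(search("lease", "default"))  # {1, 3} -- no document is re-scanned
```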
Like a search index, a properly designed document automation system can simply be an extension of a lawyer’s analysis – one that removes the need for manual document construction and increases efficiency through an automated process. If a lawyer designs the document-assembly rules, then the software is, at least in principle, executing an authorized practice of law.
When it comes to actual implementation, there is a layer of complexity that goes beyond the lawyer’s initial rules. The issue lies in the analysis of the client’s information – how do you make sure the software doesn’t go beyond the limited scope for which it was designed? If a system relies on something along the lines of checkbox-style “yes” or “no” questions, then we can be fairly confident in its analysis. However, if the system uses Natural Language Processing (NLP), a form of artificial intelligence, then there is a greater chance it will be less than 100% accurate. In that case, we can’t be sure that a lawyer wouldn’t take a different action than the system; thus, using NLP would likely fall under UPL. The same applies to document automation systems that are built but never approved by a lawyer in the state where they are used. In that case, I’d want to limit the system’s scope to something closer to just filling out a form.
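As a hypothetical illustration of that distinction, the sketch below contrasts the two intake styles. Every name in it (assemble_will, classify_intent, the 0.95 threshold) is invented for the example: the checkbox path maps answers deterministically onto lawyer-approved clauses, while the NLP path can misread the client’s words, so any low-confidence interpretation is flagged for attorney review rather than acted on.

```python
# Hypothetical sketch: deterministic checkbox intake vs. NLP intake.

# 1. Checkbox-style intake: boolean answers map deterministically onto
#    clauses a lawyer pre-approved, so the output is exactly what the
#    lawyer specified for that fact pattern.
def assemble_will(has_spouse: bool, has_children: bool) -> list:
    clauses = ["revocation of prior wills"]         # lawyer-approved rule
    if has_spouse:
        clauses.append("spousal bequest clause")    # lawyer-approved rule
    if has_children:
        clauses.append("guardianship designation")  # lawyer-approved rule
    return clauses

# 2. NLP-style intake: a classifier interprets free text and can be wrong,
#    so anything below a confidence threshold is flagged for attorney
#    review -- the "monitor and flag deviations" requirement above.
CONFIDENCE_THRESHOLD = 0.95  # invented cutoff, for illustration

def intake(free_text, classify_intent):
    label, confidence = classify_intent(free_text)  # e.g. ("has_spouse", 0.81)
    if confidence < CONFIDENCE_THRESHOLD:
        return f"FLAG FOR ATTORNEY REVIEW: {label!r} at {confidence:.0%}"
    return label
```

Routing the flagged items to a lawyer keeps a human in the loop precisely where the software’s output might deviate from what the lawyer would do.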
This is just one example of the boundary between automated and manual legal work, and of how applying different technologies to the same goal can have different legal implications, both for the client’s final work product and for compliance with ethical guidelines. For those reasons, if you plan to make a document automation system available without attorney review, it is important to understand how your system works and where it falls on the line of legal compliance. In other words, using the right technology can make all the difference.
Ron Dolin, J.D., Ph.D., teaches legal technology and informatics at Stanford Law School. Dolin is also an adviser to the UC-Hastings Privacy and Technology Project as well as its Science and Technology Law Journal. He is a member of the executive committee of the LPMT section of the California Bar.


Ron Friedmann
August 29, 2013 — 5:56 am
I’m not sure I understand or agree with the criteria here for a UPL finding. As I read the post, if a system does not do what a lawyer would do, then it constitutes UPL. But suppose the system is “better” than the lawyer (informed by more rules and better able to apply them consistently) and is therefore correct where the lawyer’s judgment might be incorrect?
Another interpretation of the post is that it is ok for a lawyer to err but not for a machine. Why does that make sense?
And finally, should we hang our UPL hat on consistency? Two lawyers can assess the same situation and reach different conclusions. Or the same lawyer, when presented with the same situation at different points in time, might reach different conclusions.
Perhaps the real problem is that UPL is designed to protect lawyers rather than to ensure that clients receive the best possible legal advice. It seems to me that if the bar is protecting clients, it would want to maximize correct advice and not focus so much on who provides it.
nikilblack
August 30, 2013 — 3:34 pm
Ron Dolin’s reply:
In my discussion, I’m trying to focus more on a deeper sense of what UPL might signify than on how various state courts have interpreted it. In addition, I’m not trying to address the issue of potential 1st Amendment rights, or whether or not UPL is used in a protectionist manner — all valid points of discussion, I agree.
Instead, I’m looking at UPL from a technological perspective in an effort to identify the computational point at which the software may deviate from the express wishes of a licensed attorney. The first prong of this approach is to recognize when decision rules are simply the embodiment of what a particular licensed attorney would do under a given fact pattern. The second prong is to examine the degree to which a system can accurately be said to have captured the fact pattern.
In Ron Friedmann’s comment, he asks the question about whether a system that might do better than an attorney should properly be labeled UPL. I’d argue that with the current state of technology, the answer is rightfully “yes”. That’s because I view UPL as dealing with the issue of “authorization”, and not the issue of “best practice”. Certainly, airplane auto-pilot systems (and probably soon, car auto-driving systems) arguably are more reliable than human pilots in many/most circumstances. At some point, we probably will find a similar situation in law, where automated systems outperform humans for many tasks. At that point, one could argue that “best practice” would be to use the automated system. However, even then, we might want to certify automated systems — UPL would migrate to a certification mechanism. One might imagine that for legal software, we would want to make sure that systems comply with some testing regimen. (I’m not an expert in software certification, and I don’t claim to know if that’s already happening in some areas of law.)
Does it make sense to allow a lawyer to err and not a machine? Yes and no. We license attorneys, and no two attorneys are likely to agree on all issues; thus, even determining what constitutes a mistake is problematic. Where software is not certified, however, as discussed above, we can’t say whether errors are due to poor programming or to a difficult fact pattern. Not only do we not license general document automation software, but we also lack an evaluation mechanism that allows us to compare machines with humans for most legal work (though that is happening more and more – for example, in the e-discovery space, where software often outperforms humans).
The reason that we don’t hang UPL on consistency between people is that two lawyers can reasonably think to take different courses of action. However, given our current state of software, if we can say that the machine is acting identically to some licensed, authorized attorney, then, from a technology perspective, the software is the embodiment of the lawyer’s work. Since we allow the lawyer to take such actions, software that mimics exactly the lawyer’s work would be indistinguishable (from the work product perspective) from having the lawyer do the work directly. That might reasonably remain the case until software has to pass an e-bar exam.
If we wanted to focus on notions of “best advice” (which I think is inevitable), then we have to establish a framework for measuring quality empirically. For a future post…
Luke OBrien
September 4, 2013 — 9:09 pm
Sorry for the woefully tardy chime-in… This is a very interesting post. Just about every document-related technology requires its user to believe that a given input produces a given output with consistency and fidelity. I trust that the PRINT button will create a paper copy of the exact document I’m looking at on-screen. I trust that SAVE will commit all my recent changes to the document I’m editing. I trust that CTRL+F is thoroughly locating all the instances of “liable” that I’m trying to change to “not liable.” I trust that metadata inspection tools completely remove my indelicate comments from my outbound documents.
Where I agree with Ron F. – For the older document automation technologies, where clicking a button or entering a value ostensibly propagates some prescribed changes through a form document, the issue is the same. Once you reasonably believe that input X causes output Y, I don’t see the ethical issue.
Where I agree with Ron D. – For a long time, Excel wouldn’t display text blocks between 256 and 1024 characters. The data was there, but it showed up as hash signs (#) instead of actual text. If you were reading an Excel printout with your own eyes to track key deal provisions and you tripped over this bug, you might miss a renewal window on one deal or fail to send a timely indemnification claim on another. That’s a well-known, long-accepted, much-used technology that, under some conditions, doesn’t produce the behavior the user expects.
Natural language parsing (and statistical parsing, structural parsing, etc.) can greatly improve lawyer efficiency and greatly benefit clients. But as with the Excel example, these technologies require the user to understand the input-process-output link.
I think, ultimately, this is an email-ish problem. Lawyers obviously can’t involve third parties in privileged client communications without destroying privilege. Email by its nature bounces around a lot, careening from server to server, getting picked off by spam filters, being diverted to IT departments when delivery instructions fail, and accidentally finding its way into inboxes other than its intended recipient’s. So email, by its nature, should be inherently unprivileged.
But email is also really useful. And when lawyers, characteristically late to the party, finally realized this, we just started slapping those “if you’re someone other than the person whom I was intending in my head to send this email to, please destroy your computer, change your name, and leave the country” disclaimer footers. We know that almost nobody erases those emails or notifies the sender. We know that some percentage of our privileged emails will be seen by privilege-shattering 3rd parties. But we allow ourselves the legal fiction of pretending that these footers work so that we can use a really useful technology to get more work done more quickly.
(Side thought: Let’s say I was standing in a room full of strangers, speaking loudly to outside litigation counsel about sensitive matters, but wearing a t-shirt that reads: “I might be saying stuff that I only want my lawyer to hear because some of that stuff might be privileged. By walking within range of my voice, you are required to cover your ears and make ‘la-la-la-la’ noises until I’m done speaking.” Would that be sufficient to preserve privilege? Because that’s pretty much what the email footers amount to.)