DIY professional service automation

Any of us may need non-routine specialist expertise on occasion, whether for legal contracts, disputes, claims, rental agreements, property purchases, employment conditions, pension arrangements, tax liabilities, inheritance, wills, probate, insurance options and claims, conditions of engagement, investment advice, warranties and guarantees, instruction manuals, planning guidance, building warrants, permissions, or statements about rights and regulations.

A quick web search reveals numerous initiatives aimed at employing AI to support professional advice services for customers. For example, LegalProd describes its service as follows:

“Automated legal advice is a service that uses technology, and more specifically artificial intelligence, to provide legal advice without direct human intervention. This automation relies on algorithms and databases rich in legal information to answer users’ legal questions. A central element of this innovative approach is automatic natural language processing, enabling machines to understand and process human language queries.” https://www.legalprod.com/en/automated-legal-advice

Such AI tools are often marketed towards service providers, such as legal firms, financial advisors, insurers, architects and planners, who might integrate AI systems into their service offerings. However, individual consumers frequently supplement such professional expertise with their own research—whether through web searches or conversations with friends—as a preliminary step before consulting a specialist, or to confirm or query professional advice.

Much of this expertise focuses on working with documents that are numerous, dense, and often written in highly specialised language. With the advent of advanced AI tools, individuals can experiment with a form of DIY consultation.

DIY advice

I’m interested in how the individual consumer might access their own customised “expert system.” For instance, as demonstrated in previous posts, Google’s NotebookLM allows users to upload a cache of documents, including reports, letters, and contracts written in specialized language and tailored to their specific situations (e.g., contracts that mention individuals by name). The language model incorporates these documents into its context and responds to user queries and prompts about them.

For example, a student uploads a rental agreement to a language model and asks it to highlight key clauses, such as those concerning early termination or deposit refunds; the LLM provides a plain-language summary of the relevant sections. Or an individual uploads correspondence from tax authorities alongside personal income records and prompts the LLM to identify discrepancies or generate a draft response addressing specific issues. As another example, a freelance contractor reviews terms of engagement: by uploading a contract, the user can ask the LLM to identify potential risks or unfair terms, such as unilateral termination clauses or unclear payment schedules. On the domestic front, an individual managing a home insurance claim might upload communication with the insurer as well as the original policy; the LLM can help structure a detailed response or highlight clauses relevant to their claim. (ChatGPT helped me identify these examples.)
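The workflow in all of these examples is essentially the same: place the document text in the model's context alongside the user's question. As a rough illustration of that pattern, here is a minimal sketch of the prompt assembly involved; the function and labels are my own invention, not any particular product's API.

```python
# Minimal sketch of the "documents in context" pattern behind tools
# like NotebookLM: concatenate the uploaded documents into a single
# prompt so a language model can answer questions grounded in them.
# All names here are illustrative, not a real product's interface.

def build_consultation_prompt(documents: dict[str, str], question: str) -> str:
    """Assemble a prompt pairing the user's documents with a question."""
    sections = []
    for title, text in documents.items():
        # Label each source so answers can point back to it.
        sections.append(f"--- Source: {title} ---\n{text}")
    context = "\n\n".join(sections)
    return (
        "Answer the question using ONLY the sources below. "
        "If the sources do not cover it, say so.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

# Example: a tenant asking about a rental agreement.
docs = {
    "Rental agreement": "Clause 7: Early termination requires 60 days' written notice.",
    "Deposit letter": "The deposit of 500 GBP is held in a tenancy deposit scheme.",
}
prompt = build_consultation_prompt(docs, "What notice is needed to leave early?")
```

The instruction to answer "using ONLY the sources" is also why such systems sometimes refuse: a question outside the uploaded material is, by design, unanswerable.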

A test case

In my own case, I encountered some documents about a financial investment. I had to read and absorb over 10 documents, including public-domain PDFs about a particular investment provider, a personalised investment plan, emails from an independent advisor, statements about my personal assets, and government web pages about the tax implications of certain investments. It was a heterogeneous collection of documents of varying lengths, ranging from texts full of legal terminology and "fine print" to informal communications.

Once uploaded, I asked NotebookLM questions about the general content of these documents and their implications for me. At one stage I asked about a simple option that bypassed this “preferred” investment scheme. I received a terse reply: “The system was unable to answer.” I inquired “why” and was told “The previous turn of our conversation focused on why [named trust] might be a better option than [alternative]. The sources and our conversation history have not discussed why this [investment method] is unable to answer your questions, and so this question cannot be answered using the available sources.”

I searched the web for relevant government guidance on the alternatives and uploaded that to NotebookLM. It acknowledged the new upload and responded with that document taken into account. At several stages in our 26 pages of consultation, the platform reminded me to check with my financial advisor (identified by name), and even offered a list of questions to ask.

To me this was a good test case with benefits that are ongoing. (I haven’t yet decided about the relevant investment.) The AI has become a third party, or at least one of several, in a discussion involving documents (which I have read in varying levels of detail), friends, and a (human) advisor.

I can imagine various scenarios emerging from this practice. If successful and widespread, people could come to rely on this kind of AI support, putting their trust in such systems ahead of human advisors and their own scrutiny of the documents. The use of LLMs may also relieve the people (and machines) who draft such documents of the need to render them legible to consumers. Consumers could draw on their own AI to translate, interpret, assess and apply such documents.

Note

  • The featured image was generated by WordPress's generative AI service, prompted for a chaotic DIY toolshed.

