The rise of AI (Artificial Intelligence) has been dramatic. It has changed how people think about technology and will have far-reaching effects in the years ahead. Because of this, it is essential to minimize its flaws and use it wisely.
I don’t know how many of you have heard of Joshua Browder. He is the 26-year-old founder of DoNotPay, a US-based company that has built a “robolawyer”: an AI-powered bot that helps users appeal parking tickets, negotiate airline ticket refunds, and dispute service-provider bills. Even though the app launched in 2015, I had never heard of him or it until recently.
Recently, I heard that his company is willing to pay a million dollars to any person or lawyer who will repeat, word for word, what its robolawyer tells them to say before a US Supreme Court justice. This made me curious. It is still unclear whether the Supreme Court would permit this, whether anyone will take Browder up on his offer, or what would happen if they did. But reports say the DoNotPay app will help two people in the US fight speeding tickets in court next month, and the company has promised to pay the users’ fines if the robolawyer’s appeals fail.
The app uses an AI model called GPT, which stands for “Generative Pre-trained Transformer.” This is the same technology behind ChatGPT, which reportedly gained a million users within a week of its launch. AI technologies are continuously improving, and more attention is being paid to “ethics” and “explainability”: ultimately, the software must be able to explain how it arrived at a particular result or conclusion. This matters because it helps minimize, if not wholly eliminate, the risk of biases and prejudices creeping into AI software. Such software is trained on hundreds of millions of pieces of web content (articles, images, reports, videos, etc.), all created by humans and therefore carrying the individual beliefs, biases, and convictions of their original creators.
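To make that last point concrete, here is a deliberately tiny sketch (in Python, my own illustration — the article describes no implementation) of the core idea behind a generative language model: predict the next word from patterns in human-written training text. This toy bigram model is nothing like GPT in scale, but it shows the same property: whatever skew the training corpus has, the model reproduces.

```python
from collections import Counter, defaultdict

def train(corpus: str) -> dict:
    """Count, for each word, which words follow it in the training text."""
    words = corpus.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict(model: dict, word: str) -> str:
    """Return the most frequent continuation seen in training.

    The model can only echo what its (human) authors wrote:
    any over-representation in the corpus becomes its "belief".
    """
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else "<unknown>"

# A deliberately skewed, made-up corpus for illustration.
corpus = "the judge ruled harshly the judge ruled harshly the judge ruled fairly"
model = train(corpus)
print(predict(model, "ruled"))  # prints "harshly" — the corpus skew, verbatim
```

Note one contrast with large neural models: this toy model is trivially “explainable” — you can inspect `model["ruled"]` and see exactly why it chose “harshly.” Doing the same for a billion-parameter transformer is the open research problem the article alludes to.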
In the next few decades, AI will change many fields, such as law, medicine, and finance. Of course, not all fields will change at the same rate or in the same way, but they will all change. For example, doctors and nurses are already using AI to improve diagnosis and to confirm treatment plans. Law firms are also starting to use AI to ease the tedious work of combing through case law and legal decisions for precedents and the reasoning of the judges involved. Soon, lawyers may simply type their questions into ChatGPT and receive well-reasoned answers in minutes. The real skill, of course, will lie in asking the right questions, judging how sensible the answers are, and deciding what to do next. Think of it as a junior lawyer briefing a senior lawyer before the senior lawyer goes to court.
Incomplete information is dangerous. Patients (and their caregivers) have long used search engines to research their symptoms, diagnostic tests, and treatment options. They then argue with qualified medical professionals about their choices, sometimes forcing doctors to justify their hypotheses and reasoning. Soon, clients of lawyers and law firms will likely be tempted to take a similar approach, and lawyers will have to spend time and energy educating clients about the law and jurisprudence. It might be wise to develop new pricing models that discourage unnecessary “brainstorming” and “legal strategy” sessions.