At the recent conference on The Business of AI in New York City, attendees were cautioned about the risks associated with overreliance on generative AI (“GenAI”) systems, including hallucinated responses, dissemination of false information, and compromised confidentiality. Despite the utility of these systems, careful consideration of each use case is essential to balance the benefits against potential long-term risks, with a focus on narrower applications such as marketing materials or form comparison and updates. Additionally, legal implications and ethical considerations surrounding GenAI adoption were underscored, prompting businesses and law firms to prioritize compliance with evolving regulations and establish robust policies to address data privacy concerns.
Things You Need to Know
GenAI, including both image generators and text generators, can produce remarkable and useful output. Certain types of limited AI systems have already been helpful in document review and data synthesis use cases for many years. New users (and new use cases) of GenAI systems should exercise caution due to the many potential risks, including hallucinated responses, dissemination of false information and provision of erroneous advice. GenAI tools also pose heightened risks of data and confidentiality breaches.
When contemplating the adoption of GenAI tools, it is essential to review each use case thoroughly to weigh the associated risks and benefits and to negotiate contractual protections and parameters with vendors and users. In the examples reviewed so far, providers of GenAI services often disclaim or limit liability for output, and users may be required to grant ownership of intellectual property rights resulting from AI model updates. These terms may be negotiable, or they will need to be weighed carefully in the cost/benefit analysis. While GenAI can offer efficiencies, its application should be considered carefully, particularly in areas that are already tightly regulated, such as healthcare and employment.
Resources
Resources are available to help navigate the changing legal and ethical considerations surrounding GenAI around the world. Notably, the EU, Singapore, China, and the US are all actively passing AI-related legislation, and organizations such as the USPTO and the US Copyright Office are offering valuable insights and guidance.
Who This Affects
GenAI impacts a wide array of stakeholders, including customers seeking transparency regarding how their data is used in GenAI systems and how the use of such tools will affect the cost of goods and services. Law firms already must take care to ensure their own use of GenAI complies with client and outside counsel AI policies. And for all those using content that has been produced (even in part) by generative AI systems, serious consideration should be given to whether that use will be tracked, documented and/or disclosed (and if so, how). For many people, there remains a feeling of unwanted manipulation, or an "ick factor," around GenAI efforts.
What's Next?
Moving forward, it is crucial to address concerns surrounding the use of content produced by GenAI systems, including tracking, documentation and disclosure practices. Establishing due diligence questionnaires and regularly reviewing company AI policies can help mitigate risks associated with GenAI adoption. Moreover, clarifying the primary and secondary uses of data shared with GenAI systems and updating company policies accordingly are essential steps in navigating this evolving landscape.
For more insights from the conference, you can visit the event page here.
Disclaimer: An initial draft of this summary was generated in part by use of ChatGPT 3.5.