Protecting customer data and privacy
Data security and privacy are the primary concerns when using generative AI for the customer experience. With the vast amounts of data processed by AI algorithms, concerns about data breaches and privacy violations are heightened.
You and your company can mitigate this risk by carefully taking stock of the privacy and security practices of any generative AI vendor you're thinking about onboarding. Make sure the vendor you partner with can protect data at the same level as your organization, and evaluate their privacy and data security policies closely to confirm you're comfortable with their practices.
Commit only to those vendors who understand and uphold your core company values around developing trustworthy AI.
Customers are also increasingly interested in how their data will be used with this type of tech. So when deciding on a vendor, make sure you know how they use the data you give them for their own purposes, such as training their AI models.
The advantage your company has here is that when you enter a contract with an AI vendor, you have the opportunity to negotiate these terms and add conditions for the use of the data you provide. Take advantage of this phase, because it's the best time to add restrictions on how your data is used.
Ownership and intellectual property
Generative AI autonomously creates content based on the information it gets from you, which raises the question, “Who actually owns this content?”
The ownership of intellectual property (IP) is a fascinating but tricky topic that’s subject to ongoing discussion and developments, especially around copyright law.
When you use AI in CX, it's best to establish clear ownership guidelines for the generated work. At Ada, it belongs to the customer. When we start working with a customer, we agree at the outset that any ownable output generated by the Ada chatbot, or input provided to the model, is theirs. Establishing ownership rights at the contract negotiation stage helps prevent disputes and enables organizations to partner fairly.
Ensuring your AI models are trained on data obtained legally and licensed appropriately may involve seeking proper licensing agreements, obtaining necessary permissions, or creating entirely original content. Companies should be clear on IP and copyright laws and their principles, such as fair use and transformative use, to strengthen compliance.
Reducing the risk
With all the hype around generative AI and related topics, it really is an exciting area of law to practice right now. These newfound opportunities are compelling, but we also need to identify potential risks and areas for development.
Partnering with the right vendor and keeping up to date with regulations is, of course, a great step on your generative AI journey. A lot of us at Ada find joining industry-focused discussion groups to be a useful way to stay on top of all the relevant news.
But what else can you do to ensure transparency and security while mitigating some of the risks associated with using this technology?
Establishing an AI governance committee
From the beginning, we at Ada established an AI governance committee to create a formal internal process for cross-collaboration and knowledge sharing. This is key for building a responsible AI framework. The topics our committee reviews include regulatory compliance updates, IP issues, and vendor risk management, all in the context of product development and AI technology deployment.
This not only helps us evaluate and update our internal policies, but also provides greater visibility into how our employees and other stakeholders are using this technology in a way that's safe and responsible.
AI's regulatory landscape is undergoing massive change, along with the technology itself. We have to stay on top of these changes and adapt how we work to continue leading in the field.
ChatGPT has brought a lot more attention to this type of technology. Your AI governance committee will be responsible for understanding the regulations and any other risks that may arise: legal, compliance, security, or organizational. The committee will also focus on how generative AI applies to your customers and your business generally.
Identifying trustworthy AI
While you rely on large language models (LLMs) to generate content, ensure there are configurations and other proprietary measures layered on top of this technology to reduce the risk for your customers. For example, at Ada, we utilize different types of filters to remove unsafe or untrustworthy content.
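To make that idea concrete, here's a minimal, hypothetical sketch of a filtering layer sitting between an LLM and the customer. The pattern rules, function names, and fallback message are illustrative assumptions, not Ada's actual filters; a production system would combine rule-based checks like these with trained moderation models and human review.

```python
import re

# Illustrative block list only. Real filters would use trained
# moderation models, allow lists, and confidence thresholds.
UNSAFE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # e.g., US Social Security numbers
    re.compile(r"(?i)\bpassword\s*[:=]"),  # leaked credentials
]

FALLBACK_REPLY = (
    "I'm sorry, I can't share that. Let me connect you with a human agent."
)

def filter_reply(raw_reply: str) -> str:
    """Return the LLM's reply only if it passes every safety check."""
    for pattern in UNSAFE_PATTERNS:
        if pattern.search(raw_reply):
            return FALLBACK_REPLY
    return raw_reply

# Usage: wrap whatever your (hypothetical) LLM client returns.
# safe_reply = filter_reply(llm_client.generate(customer_message))
```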
Beyond that, you should have industry-standard security programs in place and avoid using data for anything other than the purposes for which it was collected. At Ada, our product development is always guided by data minimization: obtaining the least amount of data and personal information needed to fulfill the purpose at hand.
So whatever product you have, your company has to make certain that all its features consider these factors. Alert your customers that these potential risks to their data go hand-in-hand with using generative AI. Partner with organizations that demonstrate the same commitment to upholding explainability, transparency, and privacy in the design of their own products.
This helps you be more transparent with your customers. It empowers them to have more control over their sensitive information and make informed decisions about how their data is used.
Utilizing a continuous feedback loop
Since generative AI technology is changing so rapidly, Ada is constantly evaluating potential pitfalls through customer feedback.
Our internal departments prioritize cross-functional collaboration, which is critical. The product, customer success, and sales teams all join together to understand what our customers want and how we can best address their needs.
And our customers are such an important information source for us! They ask great questions about new features and give tons of product feedback. This really challenges us to stay ahead of their concerns.
Then, of course, as a legal department, we work with our product and security teams on a daily basis to keep them informed of possible regulatory issues and ongoing contractual obligations with our customers.
Applying generative AI is a total company effort. Everyone across Ada is being encouraged and empowered to use AI every day and continue to evaluate the possibilities – and the risks – that may come along with it.
The future of AI and CX
Ada's CEO, Mike Murchison, gave a keynote speech at our Ada Interact Conference in 2022 about the future of AI, in which he predicted that every company would eventually be an AI company. From our viewpoint, the overall experience is going to improve dramatically, from both the customer service agent's and the customer's perspective.
The work of a customer service agent will improve. Those roles will offer a lot more satisfaction because AI will take over some of the more mundane and repetitive customer service tasks, allowing human agents to focus on the more fulfilling aspects of their role.
Become an early adopter
Generative AI tools are already here, and they’re here to stay. You need to start digging into how to use them now.
Generative AI is the next big thing. Help your organization employ this tech responsibly, rather than adopting a wait-and-see approach.
You can start by learning what the tools do and how they do it. Then you can assess these workflows to understand what your company is comfortable with and what will enable your organization to safely implement generative AI tools.
You need to stay engaged with your business teams to learn how these tools are being used to optimize workflows so that you can continue working with them. Keep asking questions and evaluating risks as the technology develops. There is a way to be responsible and still stay on the cutting edge of this new technology.
This post is part of G2’s Industry Insights series. The views and opinions expressed are those of the author and do not necessarily reflect the official stance of G2 or its staff.