In October 2017, I attended a technical talk where hundreds of people gathered to learn about the future of artificial intelligence. The panelists agreed that machine learning enhances many aspects of business, and in some cases, performs tasks typically handled by experts who have years of experience. They debated risks and opportunities, corporate accountability, and ethics.
As the discussion wrapped up, one of the panelists said something that changed how I think about ethics in AI. She pointed out that machines are not going to replace our experts. Teams won't be made redundant by AI, but rather by other teams who have learned how to use AI better to tap into business opportunities.
The importance of human judgement
From an ethical standpoint, AI has had a positive impact on society and enables innovation. But that innovation necessarily involves risk. To create an AI platform that is both ethical and successful, the implementation needs strong human involvement: experts with the experience and judgement to make sound decisions.
Developing AI is different from developing other software, and the outcomes have far broader implications for the teams that develop it, sell it, and work to deploy it across large organizations. AI is a tool, and our use of it must be responsible, acceptable, fair, and transparent. Trustworthy AI must involve human agency and oversight.
How we incorporate ethics into our AI platform
Many companies address these kinds of questions with a tidy set of ethics guidelines, and then figure out how to police them. At AppZen, we've instead taken a proactive approach focused on the "how": we've integrated ethics into our culture and decision-making.
At AppZen, we've developed our AI platform to help finance teams make fast, informed decisions about their spending. Our platform identifies high-risk transactions, including fraud, that would otherwise go unnoticed by finance teams, while also helping them comply with regulations and stamp out corruption. We're focused on a very narrow, deep deployment of AI for finance. Finance teams are both tactical and strategic, and equally focused on understanding how to bring AI into their everyday operations to improve and scale their processes and reshape their policies, while mitigating risks, including ethical ones.
Below are the ways we incorporate ethics into the development of our AI every day.
1) We focus on people
Our deployment of AI is about people, and at AppZen, we’ve always approached it from a very human perspective. Our AI is a tool that helps people (in our case, AP teams and auditors) make better decisions.
2) Data is crucial
At AppZen, there is a huge emphasis on data. We believe fairness is important, and our data science teams are careful to consider bias and accuracy in our training data.
“We use transactional data that reflects the real world, and our machine learning models and other methods treat things in the same way,” said Kunal Verma, CTO at AppZen. “We use different methods to arrive at the same conclusion to mitigate bias, and be more confident the outcome is fair.”
However, no matter how representative our data is, every model contains some level of bias or skew: without weighting some signals over others, a model cannot make a decision, i.e., optimize. The key is to detect and avoid unfair bias as much as possible.
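The cross-checking idea above, using independent methods that must agree before a conclusion is drawn, can be sketched in a few lines. This is a minimal illustration, not AppZen's actual pipeline: the two scoring functions, the amounts, and the thresholds are all hypothetical stand-ins for real risk models.

```python
# Hypothetical sketch: flag a transaction only when two independent
# methods agree it is high risk, so no single method's bias drives
# the outcome. Disagreements are routed to a human for review.

def rule_based_risk(txn):
    """Toy heuristic: large or suspiciously round amounts look riskier."""
    score = 0.0
    if txn["amount"] > 5000:
        score += 0.6
    if txn["amount"] % 100 == 0:  # round figure, e.g. exactly 9000.00
        score += 0.3
    return score

def statistical_risk(txn, mean=800.0, std=400.0):
    """Toy 'model': z-score of the amount against historical spend,
    squashed into [0, 1] with an arbitrary illustrative scaling."""
    z = (txn["amount"] - mean) / std
    return min(max(z / 30.0, 0.0), 1.0)

def cross_checked_flags(transactions, threshold=0.5):
    """Flag only transactions both methods agree on; mark disagreements
    for human review instead of trusting either method alone."""
    flags = []
    for txn in transactions:
        a = rule_based_risk(txn) > threshold
        b = statistical_risk(txn) > threshold
        flags.append({
            "id": txn["id"],
            "flagged": a and b,        # both methods agree: high risk
            "needs_review": a != b,    # methods disagree: escalate
        })
    return flags

txns = [
    {"id": 1, "amount": 120.50},   # ordinary spend: neither method fires
    {"id": 2, "amount": 9000.00},  # large and round: both methods fire
    {"id": 3, "amount": 5100.00},  # only the rule fires: human review
]
result = cross_checked_flags(txns)
```

The point of the sketch is the final `a and b` / `a != b` step: agreement between independent methods raises confidence in the outcome, and disagreement becomes a signal to involve a person rather than a silent decision.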
3) We prioritize diversity
Our culture is open and transparent, which extends to the people who build our software. We reward those who choose to be part of creating greater diversity in the workplace. For example, we recognize that including more women in technical and operational roles across every level in the organization is not just a social courtesy, but a business imperative. We believe our inclusion of a variety of perspectives and viewpoints is crucial for building an unbiased AI.
4) We focus on our customers
We work to understand our users' actual experience with our products and anchor every decision in that understanding. For us, this plays out in a few ways.
- Our audit results are clear and understandable. We always explain to customers how we find high risk spend, and in the design of our UI, we always make it very simple to see their savings over time. The algorithms we use are complex, but how we present results is simple. We’re intentional about communicating the right information, when and where it is needed.
- We’re upfront about what we believe AI can accomplish. Interestingly, many companies claim to use AI but instead use something that merely looks like it (such as robotic process automation). We don’t misrepresent AI as capable of doing everything, and we don’t set false expectations. Not only do we in fact use AI, through traditional machine learning, deep learning, and other heuristic-based methods, but we’re transparent about how we use it.
5) We always strive to do the right thing
This is anchored in our culture. It’s how we hold ourselves as a company, rooted in the values of our founders. Even in the fast-paced rush of a product launch, we embrace our responsibility to our customers. For instance, we take time to focus first on learning and understanding the problem, before diving in with a solution.
6) We don’t take shortcuts, even under pressure
We take the time to explain to our customers how our AI works. We teach them about our AI models, machine learning, deep learning, and natural language processing, and how our AI cross-checks against hundreds of different data sources. We’re very clear with each other (and more importantly, with our customers) about what AI can and cannot do, and we carefully consider any unintended consequences.
Working at an AI company is exciting and fast-paced. Our success will depend on our ability to align on ethical questions, and actively work to integrate ethical responsibility into the design and development process of our products, so we can continue to drive value for our customers. At AppZen, we make it our priority to consider fairness and potential bias, and strive to be responsible and transparent, to help our customers see our AI for what it is: A tool that helps people make better decisions for a brighter future.