
The AI Enigma, Part III: Ethics

Cathy White, Head of AI and Automation


A lot of ink has been spilled on AI ethics already, and a lot more will be as the technology becomes more powerful. Much of that discussion has been focused on bias, with numerous news articles highlighting the way that AI systems have shown signs of bias, regurgitated racism, embraced sexism, and generally performed in a way that can be most generously described as “suboptimal.”


But the ethical considerations of AI run deeper than questions of how we’re training nascent neural networks. As businesses increasingly deploy AI to solve problems and create new efficiencies, and as more powerful AIs continue to be developed, the risks in this space must be weighed alongside the rewards. This article focuses on three foundational areas that, for business and technology leaders, will shape the long-term ethics of AI in their organizations.


The Influence of Leadership


In terms of organizational ownership, many companies treat AI as a technology solution – and assign ownership within the organization accordingly. This works for some, but for most it is a step in the wrong direction. Organizations need to start thinking differently about ownership, with well-defined criteria for what it takes a leader to implement AI both technically and ethically.


In research and practice, three roles make a regular appearance in terms of adopting, driving, and deploying AI – the Chief Data Officer, Chief Digital Officer, and Chief Technology Officer. When you press organizations on why AI was assigned to one of these roles, the rationale is often narrow and predictable: “it’s emerging technology,” “it’s data-driven,” or “it’s a platform.” These statements aren’t necessarily wrong, but we’re not talking about next-gen networks or storage – we’re talking about building a capability that can create existential risk to the brand, reputation, and viability of the company as a whole.


AI development is akin to raising a child. The parents need to see the whole picture – you can’t focus on a child’s education, sense of responsibility, or social skills in a vacuum. Learning and development happen through the interplay and interdependencies of life experiences. Similarly, the leaders who own AI must have organization-wide visibility, access to the necessary domain experts, and the ability to understand and influence those interdependencies.


On a practical level, there are key areas of expertise required to own AI. No one leader will personally maintain all of them, but they will need access to resources with these disciplines:


  • Data science and data governance
  • Infrastructure, tools, and processes required for a flexible, agile AI tech stack 
  • Foundational AI-specific concepts like neural nets, NLP, and machine learning
  • Attracting and retaining the right talent needed to stand up, deploy, and grow AI
  • Regulatory and compliance knowledge both specific and adjacent to AI


With that in mind, AI ownership will naturally lean toward technical leaders. AI is fundamentally driven by technology, it’s disruptive, and it relies on many core competencies that already live in the IT organization. However, technical skills alone should not dictate ownership. The right choice is a leader who can drive the process of defining and evolving AI success, measured in business value and balanced against risk.


Navigating Privacy and Consent


AI adds complexity to a topic that’s already complex: data privacy and consent. As with GDPR, HIPAA, and other regulations governing personally identifiable information (PII), the right way to address the challenge comes down to engagement and protection. Because data is the lifeblood that fuels AI success, the trust a company builds with customers around the use of their data is critical. Consumers are eager to adopt AI but wary of its potential drawbacks, so building trust through transparency is vital to brand reputation.


To maintain a high standard for privacy and consent, three core concepts should be part of your AI ethics roadmap:


  • Transparency into what data is consumed and processed, how, and why
  • A strong governance mechanism to control and adopt privacy and consent frameworks
  • A technical architecture supporting AI that can accommodate changes over time


We’ve all heard the stories of “black box” AI systems that offer no rationale for why they make any given decision. There will always be trade-offs between speed, accuracy, and transparency – attempts to improve any of those generally come at the expense of the others. The more data that AI models can parse, the better their performance will be, but more data also means more risk and more complexity in terms of privacy. This is where standards for data science and handling come in. There can be no AI implementation without governance and consistent auditing for how data is utilized. Broadcasting this governance process will reinforce transparency and can build trust for more advanced applications of AI moving forward.
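To make that auditing concrete, the sketch below shows one way a data-usage audit record might look: a minimal in-memory log that answers “which model used which fields, for what purpose, and on what consent basis?” The field names, the `AuditLog` interface, and the example model are illustrative assumptions, not a reference implementation.

```python
# Minimal sketch of data-usage audit logging for AI governance.
# All names here (DataUseRecord, AuditLog, "churn_model") are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataUseRecord:
    model: str            # which AI system consumed the data
    fields_used: tuple    # which data fields were processed
    purpose: str          # why the data was processed
    consent_basis: str    # the consent/legal basis relied upon
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class AuditLog:
    def __init__(self):
        self.records = []

    def log(self, record: DataUseRecord):
        self.records.append(record)

    def uses_of(self, data_field: str):
        """Support audits: return every recorded use of a given field."""
        return [r for r in self.records if data_field in r.fields_used]

log = AuditLog()
log.log(DataUseRecord("churn_model", ("email", "purchase_history"),
                      purpose="churn prediction", consent_basis="contract"))
# An auditor can now answer: where was "email" used, and why?
matches = log.uses_of("email")
```

A record like this is the raw material for the transparency described above: if you can enumerate every use of a field, you can broadcast that governance process rather than merely assert it.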


Addressing Voluntary and Involuntary Bias


Bias in AI can stem from the people designing the AI, the structure of the algorithms, training data, production data, and more. In some cases, the bias exposes itself in very obvious ways. Watson, IBM’s famous AI, started swearing after being fed the Urban Dictionary and wouldn’t stop until the dictionary was wiped from its memory. Tay, Microsoft’s Twitter chatbot, was coaxed by users into posting racist and inflammatory messages and had to be pulled offline within a day.


AI models, like children, are shaped by how they are trained, and influences from the point of origination linger in the same way that a parent, teacher, or other mentor helps shape our personal belief systems. It’s important to remember this as you consider the array of topics you want AI to address for your customers and organization – and factor it into the team you select to build and govern your AI.


This is where it gets interesting. Your team needs to be diverse, but it also needs to reflect the AI’s focus and subject matter. For instance, if you’re building AI to assist in the diagnosis and treatment of women’s health, your team will not be complete without a sizable contingent of women who have direct knowledge of the subject and outcomes.


Beyond the build team is AI governance. Based on the application and content of the AI, you need to be aware of how bias could emerge and build that into your governance planning. Teams often struggle with identifying the true source of bias. Is it inherent in the code, or is it in the data and algorithms that follow? (The answer: Usually, it’s both.)


Corporate values are always a key influence on the practice of governance, so they factor into this equation as well – after all, your AI should reflect your company’s goals and values. It’s the CEO’s responsibility to ensure that’s the case, clearing the path for employees to adopt AI and consumers to trust it.


Finally, KPIs and performance metrics are a great way to unearth bias in AI over time. Measuring the outcomes of AI initiatives against their intended results can expose potential bias early on. What makes this especially valuable is the risk mitigation it offers – spot the trend early, evaluate, and remediate.
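As a sketch of what such a metric-driven check might look like, the snippet below compares favorable-outcome rates across groups and flags any group that falls below a ratio threshold relative to the best-performing group. The group names, the sample data, and the 0.8 cutoff (borrowed from the classic “four-fifths rule” in employment selection) are illustrative assumptions, not a prescribed standard.

```python
# Hypothetical KPI check: flag groups whose favorable-outcome rate
# trails the best-performing group by more than a chosen ratio.

def outcome_rates(decisions):
    """decisions: {group: [0/1 outcomes]} -> {group: favorable-outcome rate}"""
    return {g: sum(d) / len(d) for g, d in decisions.items()}

def disparity_flags(decisions, min_ratio=0.8):
    """Flag any group whose rate is below min_ratio times the best rate."""
    rates = outcome_rates(decisions)
    best = max(rates.values())
    return {g: r / best < min_ratio for g, r in rates.items()}

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% favorable outcomes
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% favorable outcomes
}
flags = disparity_flags(decisions)
# group_b's rate (0.375) is half of group_a's (0.75), well below the 0.8 ratio
```

Run periodically against production decisions, a check like this is exactly the early trend-spotting described above: it won’t tell you *why* a disparity exists, but it tells you where to start evaluating and remediating.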


Closing Thoughts


The very concept of AI ethics is paradoxical in nature, which is what makes this issue such a thorny one. We’re trying to give ethical boundaries – a human invention – to an entity that has no innate ethics. In fact, the human race itself has no single conception of ethics; ethical standards vary widely from one culture to the next. It is on this makeshift framework that we are attempting to build artificial intelligence.


And yet the advancement of our technology means that we have no choice but to apply our own limited, flawed, and often self-contradictory standards to a machine (so to speak) that has no sense of right and wrong. To understand ethics is to understand pain. But to whatever degree an AI can be said to “understand” anything, today’s models primarily understand probabilities – which is why, when asked to draw “AI ethics,” the AI model Craiyon delivered a set of nine bizarre doppelgangers of stock images of robots with electronic brains, including the one at the top of this article.


As hard as it may be to resolve, the ethical debate around AI is a quintessentially human one. Just like us, our digital creations are messy and imperfect and opaque, and there is no lack of opinions concerning how to raise them. As they continue to grow, we will get closer and closer to them, and we will rely on them more and more. For that reason, it is our own ethical responsibility – as companies and as people – to give them the most complete ethical framework humanly possible.




About Cathy White


Catherine White is the Head of AI and Automation at Yates Ltd, bringing to bear 25 years of success as a Fortune 500 leader creating and capturing opportunities for global competitive advantage, business growth, and efficiencies through innovative IT strategy, operations, restructuring, and transformation initiatives. She has deep technical experience in planning and implementing hybrid cloud, machine learning and AI, automation engineering, DevOps planning, and Agile processes.


Prior to Yates, Catherine was a Vice President at Johnson & Johnson responsible for all technology infrastructure globally as well as architecture and platform engineering. Prior to J&J, she led IaaS automation and directed several infrastructure functions (AIX Power Series, Monitoring and Linux Engineering) at JPMorgan Chase. She ultimately took responsibility for enterprise portfolio management in consumer and community banking along with architecture governance and total cost of ownership optimization at JPMorgan. Following her portfolio management role, Cathy served as Executive Director of Digital Technologies, responsible for driving automation, AI, and machine learning into digital marketing and customer experience platforms. 


Catherine holds a Master of Science in Technology Management from Stevens Institute of Technology.


Do you have questions about automation and AI strategy, technology, or suppliers? Get in touch to set up a briefing.

