AI is starting to hold rockstar status in our daily lives, prompting the question: "In light of the risks, how can we use Artificial Intelligence (AI) ethically?"
This was a key topic we discussed at Nehemiah Entrepreneurship Community's recent Global Forum on Kingdom Business and Artificial Intelligence.
Whether we measure AI headlines or AI-related stock prices, enthusiasm for AI today is frenzied. As it synthesizes massive amounts of data in seconds and delivers answers to problems we never considered asking, AI is disrupting every industry by creating exponential improvements to efficiencies and creative content at a speed we've never seen.
Unlike tools of the past, AI continually LEARNS from vast amounts of data, enabling it to perform NEW tasks and create NEW content without having to be programmed.
The result?
Humans are delegating decisions to AI at a breathtaking rate.
This opens new windows of opportunity, but it also creates new risks:
• Economic Risk: AI is already displacing jobs, and even though new jobs will be created, they will likely not go to the same people.
โข Legal Risk: AI scrapes large volumes of information, which may be stolen, inaccurate, or misrepresentative.
โข Security and Privacy Risk: AI can collect and track enormous amounts of information, often without our knowledge, which can lead to privacy and security breaches.
• Environmental Risk: AI consumes roughly ten times more computing energy than traditional IT applications, threatening to burden the power grid.
• Decision-Making Risk: AI has no moral center. Its morality is shaped, intentionally or unintentionally, by the biases of its human trainers. Biased or incorrect data can produce unfair or unsafe decisions.
How do we navigate all these risks?
We can start by realizing that AI is a bright, enthusiastic but naive child.
And as Graham Nash encouraged in his song, we need to "teach our children well."
Humans: Teach Your "AI Children" Well
How do humans responsibly raise any child? By teaching them right from wrong. If AI is not taught ethical values, it can make unexpected or unethical decisions. For example, self-driving cars need to be taught to prioritize people's safety over speed. Customer service AI applications need to be taught to value customer relationships and avoid making discriminatory decisions.
Values must be taught intentionally. Children taught right from wrong can make better decisions when they are apart from their parents. In the same way, AI requires ethical guardrails to make informed choices.
Creating Ethical Guardrails
I help organizations proactively navigate the risk of high-stakes strategies. How humans are going to deploy AI is high-stakes. Proactively thinking through the ethical risks of AI is our moral responsibility. It is simply good risk management.
Before we mindlessly check "agree" and start randomly deploying AI applications across our organizations, we need to regularly ask the following questions to establish ethical guardrails:
1. What Values are Core to Our Business?
Nehemiah Entrepreneurship Community is a global nonprofit that helps entrepreneurs build businesses grounded in their values and their faith. They encourage founders to clarify their values and life purpose, even before drafting a business plan! Aligning business strategy with personal values simplifies decision-making and ensures employees, partners, and supporters are on the same page. In the same way, getting clear on your core values will help you train your newest team member, AI, to understand what is most important to your organization.
2. Where Do We Want People to Add the Greatest Value?
Efficiency is often the driver of technology deployments. But people are our most treasured asset, and humans are designed for relationships. As you make decisions about where to use AI in your organization, lean toward using AI to SUPPORT people (instead of eliminating them) so they can deliver your value proposition as shining stars.
3. Where Can We Delegate to AI to Improve Experiences?
Keep the end in mind as you delegate to AI. Prioritize how new tools can improve customer, employee, and supplier experiences while also enhancing speed and efficiency.
4. How Are We Guiding AI to Make Ethical Decisions and Use Trustworthy Data?
Think through the ethical conundrums AI may encounter. Just as you teach children right from wrong, ensure your AI teammate is using trustworthy data and has been given adequate context to make ethical decisions. This may require getting under the hood of the tools you are deploying.
5. Are We Keeping a "Human in the Loop"?
To provide governance, humans need to continually check in on how AI is actually making decisions. Establishing escape routes for when technology fails and ensuring humans can quickly step in can reduce frustration and preserve relationships. As the environment changes, you will need to provide AI with updated context and decision-making guidance.
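The "human in the loop" idea above can be sketched in a few lines of code. This is purely an illustrative pattern, not any specific product's API: all names here (`ai_decide`, `decide_with_oversight`, `HUMAN_REVIEW_THRESHOLD`) are hypothetical, and the confidence-threshold approach is just one common way to build an "escape route" from AI to a person.

```python
# Illustrative sketch of a "human in the loop" gate.
# All names and the threshold value are hypothetical examples,
# not taken from any real library or product.

HUMAN_REVIEW_THRESHOLD = 0.80  # below this confidence, escalate to a person


def ai_decide(request: str) -> tuple[str, float]:
    """Stand-in for an AI system that returns (decision, confidence)."""
    # A real system would call a model here; this stub is deterministic
    # so the example is runnable on its own.
    if "refund" in request.lower():
        return ("approve refund", 0.95)
    return ("needs judgment", 0.40)


def decide_with_oversight(request: str) -> str:
    """Route low-confidence AI decisions to a human 'escape route'."""
    decision, confidence = ai_decide(request)
    if confidence < HUMAN_REVIEW_THRESHOLD:
        return f"ESCALATED TO HUMAN: {request!r}"
    return f"AI decision: {decision} (confidence {confidence:.0%})"


print(decide_with_oversight("Customer requests a refund"))
print(decide_with_oversight("Customer disputes a policy"))
```

The design choice worth noting is that the escalation rule lives outside the AI stub: as the environment changes, a team can tighten the threshold or add new escalation conditions without retraining anything, which is exactly the kind of updated guidance this question calls for.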
So yes, despite the risks, humans CAN use Artificial Intelligence ethically.
But only if humans step up and LEAD.
After all, AI needs us humans to be the grown-ups in the room!
Fast Track Tips
If the idea of creating ethical guardrails for AI resonates with you, here are three tips to fast track results:
1. TALK ABOUT IT: At your next team meeting, share this article and discuss how it applies to your organization. You might also bring this list home and discuss it with your family. Openly discuss the real-world benefits of AI and the implications of NOT having ethical guardrails around this powerful technology.
2. SET A DEADLINE: Define a timeline to establish your own guardrails, and give yourself an incentive/consequence for at least having drafted answers to each question. It's a living document, so start with your best ideas for now. (Discussing your values is always a great way to kick off the conversation.) Then set a time to revisit it each quarter.
3. START WITH WHAT YOU HAVE TODAY: Inventory the AI applications you are CURRENTLY using, both personally and professionally. For each one, evaluate the current limitations/risks of AI, possible consequences, and how the guardrails apply. Where do they fit as a member of your "team"? You may need to do some research to learn more about how these applications could be used. This exercise will not only start you on your way to being the "grown-up in the room" but build your awareness as you evaluate new AI tools. It can simply make you a wiser human as you step up to take the lead.
Let me know how it goes! I welcome hearing your thoughts and learnings (and if you think there are additional questions we need to include in the list!)
– Susan
This article is part of “Fast Track Insights”, providing practical ideas and tips to get results faster, whether you are driving a new strategy (or getting one back on track). I want to help you avoid common mistakes.
Subscribe here to receive practical insights once or twice a month.
To learn more about De-Risk System for Impact™ workshops and custom engagements, and my upcoming book, explore my website at www.gotomarketimpact.com, or message me at susan.schramm@gotomarketimpact.com.