AI ethics for business – Part 3

I recently shared insights I gained after completing a free, online course on AI ethics for business offered by Seattle University. Those insights were shared in two earlier blog posts.

Today, I am sharing a conversation I had with three other E&C colleagues who also completed this course: Tom Fox, Sean Freidlin and Patrick Henz. The conversation is featured in episode 123 of Tom Fox’s Innovation in Compliance podcast series.

Keep learning!

It might come back, you know?

To understand what is happening today, we should devote at least 80% of our time to studying history and no more than 20% to current news.

This is true of any topic. War, entertainment, natural disasters, exploration, politics, sports, etc.

And, of course, it is true of the current pandemic. At this point, we won’t learn much from current news. The precautionary measures won’t change. Some numbers will go up, some will go down. A new celebrity will become infected. Not terribly helpful.

Instead, study the 1918 influenza pandemic and you’ll learn that after it was “over”, it came back.

Twice.

So today’s leaders need a plan for what they’ll do if COVID-19 dies down mid-summer, only to return at the end of the year. And then again in 2021 or 2022.

How will governments assist those in need? How will employers support their employees, customers, suppliers and local communities? How will health facilities serve the sick?

Leave Facebook aside. Learn from history. Take action.

What are you after?

This post on semiotics by Seth Godin got me thinking.

Can a business leader increase her social standing by being ethical?

Will her ethical behavior increase her sense of belonging with an important group?

If so, what flags, signals or other communications should she use to convey her status?

And if she does all of these things well, will it be enough for her?

Would it be for you?

Cheap t-shirts, pigs, and ethical decisions

Those who lived through the Great Depression spent the rest of their lives saving every penny they could and placing their savings only in the safest of investments.

This severe worldwide economic depression of the 1930s changed how an entire generation (or two) lived on a day-to-day basis. It also changed how everyone prepared for the future.

The current pandemic is likely to have the same effect, judging by two bits of news I read today about garment workers and pig farmers.

A crisis exposes vulnerabilities in a system. In most cases, these vulnerabilities were not invisible before the crisis. They were simply deemed unlikely, thus ignored.

The garment worker, the factory owner, and the pig farmer will have this crisis on their mind for the rest of their lives. It will change how they work, how they assess risk, and how they live, just like the Great Depression did.

More importantly for the very near term, business leaders and workers will make important decisions to save their businesses or, in some cases, their lives. These decisions will have significant compliance and ethical considerations. For now, and for the future.

Are they ready?

Let the pressure propel you

I often write about two topics on this blog: the fraud triangle and the science of total motivation.

The fraud triangle tells us that given enough pressure, a person will engage in fraud (or any wrongdoing, really) if they think they can get away with it and if they can rationalize their behavior. The science of total motivation (ToMo) demonstrates that pressure can hurt business performance.

The common culprit is pressure. And the question for any organization affected by the COVID-19 pandemic is: are your employees feeling additional pressure right now? If the answer is yes, then you know what to expect.

But external pressures can also propel organizations willing to improve their culture, to experiment with ways to cope with this crisis, to align their purpose with the goals of public health, and to take steps today that will make them stronger when the pandemic is over. The worst thing they can do is to hunker down.

Premeditatio malorum

I occasionally write about Stoicism on this blog. I find the philosophy well suited for E&C professionals.

The Stoics liked to do an exercise called premeditatio malorum: imagining things that could go wrong or be taken away. Stoics are not pessimists; they simply like to be prepared.

Seneca, a Stoic and one of the richest men in Rome, would regularly practice poverty. He would eat meager fare, walk around town barefoot in ragged clothes, and sleep on the floor. He knew he might lose all his money one day. He had seen it happen to others and didn’t foolishly believe himself immune to misfortune. He wanted to be prepared. And so each day that he was still rich, he enjoyed it.

We all do this exercise from time to time. We read the news, learn of somebody else’s misfortune resulting from an earthquake, a flood, or a tornado, and for a brief second we try to imagine ourselves in their shoes. The thought alone is often so uncomfortable that we quickly abandon it.

What did the average American or European think about when they first heard of the coronavirus outbreak in China? How many asked themselves how they would react if their own government banned gatherings, closed schools and restaurants, and locked down entire towns? More importantly for this blog, how many E&C professionals thought about the new risks that could be created by such a situation?

Premeditatio malorum is a simple exercise that gets you ready for challenges, softens the blow when they actually happen, and makes you grateful when they don’t.

The effect of unfairness on your E&C team

LRN’s Susan Divers recently published an excellent post on the importance of holding senior executives accountable to ensure a strong ethics & compliance (E&C) program.

Her post reminded me of a negative outcome of unfairness that we rarely talk about: the demoralizing effect on the E&C team.

Imagine that a senior executive is found to have violated a policy in a way that would typically result in termination, but is instead simply provided with “coaching” by the c-suite, which judges him too important to lose. Perhaps fewer than a dozen people will be aware of the investigation and of the discipline imposed, but this small group is sure to include the CECO and a couple of her lieutenants. This will be like clipping their wings. Like deflating their balloon. Insert other deflating analogies here. How are they supposed to carry out, effectively, the mission of creating and touting an ethical culture?

Imagine instead that the executive is terminated and that employees are told that it was the result of a breach of company values. How empowering for the E&C team! What an incentive for all employees to act ethically! What a powerful way to recruit good people and keep bad ones at bay!

How long until all c-suites understand this dynamic?

AI ethics for business – Part 2

For Part 1 of this series, click here.

I recently joined a small group of E&C professionals who decided to complete the free online course on AI Ethics for Business, offered by Seattle University.

We agreed to complete a module or two every week and to share our insights. Here are some of my insights from Module 2:

  • In these early days of AI, it might be best to create technology-specific regulations rather than impose a general regulatory framework. Let’s think specifically about the regulations required for self-driving cars, face recognition and nurse robots, and then extract general concepts.
  • Most data-collection efforts start with good intentions. A magazine needs your home address for delivery, and later it asks for your age and income to attract the right advertisers to place ads in the magazine. But then the advertisers offer to buy that information from the magazine to send direct-mail marketing to subscribers, without their consent. What’s a magazine to do? How transparent must it be with its subscribers? How much agency should the subscriber have? Similar scenarios (and questions) are now playing out with data collected by our phones about where we’ve been, by our watches about our resting heart rate, and by our cars about how fast we drive, all of it being fed to AI engines.
  • Machines learn to make decisions based on datasets that humans provide, and these datasets almost always contain biases. Let’s say I want a machine to learn how to identify a good poker player. I feed it all the data we have about the players who reached the final table of the Main Event at the World Series of Poker since its founding in 1970. The machine will see that only one woman ever made it, back in 1995. What will the machine learn from this?
  • The concerns that humans have about AI and machine learning revolve around agency: we want to have control over the types of decisions machines make; we want to understand how those decisions are made; and we want to be able to override those decisions.
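To make the bias point above concrete, here is a minimal sketch of how a naive frequency model absorbs the skew in its training data. The numbers are made up for illustration, not the actual WSOP record:

```python
# Minimal sketch (hypothetical data): a frequency "model" learns
# P(reached final table | gender) from a sample that is overwhelmingly
# male, so the skew in access becomes a skew in prediction.
from collections import Counter

# Hypothetical training data: (gender, reached_final_table)
training_data = ([("M", True)] * 450 + [("M", False)] * 540 +
                 [("F", True)] * 1 + [("F", False)] * 9)

seen = Counter(gender for gender, _ in training_data)
success = Counter(gender for gender, made_it in training_data if made_it)

def predicted_success_rate(gender):
    """Naive frequency model: fraction of this group that 'succeeded'."""
    return success[gender] / seen[gender]

print(round(predicted_success_rate("M"), 2))  # 0.45
print(round(predicted_success_rate("F"), 2))  # 0.1
```

The model concludes that women are far less likely to be good players, when all it has really measured is who historically got a seat at the table. This is the sense in which a dataset’s biases become the machine’s decisions.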

A side note: I find the end-of-module quizzes very poor. If all you remembered of these modules was the information included in the quizzes, you would have a dismal understanding of AI ethics.