AI has been all over the news lately, with questions about whether some of the most advanced AIs are approaching sentience. (They aren’t, according to most experts in the field, despite the alarmist claims of one former Google engineer. No Skynet anytime soon, phew!)
AI is popping up in all sorts of useful ways throughout the business world. Everyday examples are quality-of-life improvements, like AI-powered design recommendations in Microsoft PowerPoint and the machine learning behind the text-to-speech and voice assistants on our smartphones.
On the more advanced end, machine learning and AI are solving complex business intelligence and data analytics problems and even assisting human agents in determining eligibility for loans or picking the best candidates for a job.
Here’s the thing, though: on every front, there are ethical issues to worry about, including AIs that end up adopting racist or sexist biases. That’s a big deal if we’re increasingly trusting an AI to decide who gets a home loan or a job and who doesn’t!
Microsoft has recently taken a new leadership role on the AI ethics front, and it’s worth mentioning (even if it might not affect your everyday life just yet).
Here’s what you need to know.
Microsoft to Retire and Retool Azure Face
Azure Face is an AI-powered facial recognition tool from Microsoft. Most of the big tech firms have developed some kind of similar tool, which can recognize certain facial characteristics and draw conclusions about the people those faces belong to: conclusions about gender, age, race, and even emotional state or expression.
In theory, the ability to analyze and recognize these characteristics at scale could be incredibly important and powerful. But the problems here are numerous: for one, such a system could easily be abused for human rights violations or invasions of privacy.
And as we mentioned above, these systems can become biased and even reinforce negative stereotypes in deeply harmful ways. These systems are only as good as the data that trains them, which means they may identify some characteristics accurately and others poorly. Putting too much trust in them — say, in a law enforcement context — can lead to big problems, like false accusations and arrests.
Microsoft’s recently published standard for responsible AI doesn’t allow for this kind of thing, which is why Microsoft is taking Azure Face offline and reworking the system according to its responsible AI guidelines.
Limits for Custom Neural Voice
Microsoft has another AI-powered service called Custom Neural Voice. This is a text-to-speech system whose output is barely distinguishable from natural human speech, and it’s deeply impressive. (Google demoed a similar system called Duplex several years back and received a significant backlash.)
The problem here is that, as AI voices become harder and harder to distinguish from real humans, bad actors might use these tools nefariously. They might trick unsuspecting consumers into disclosing personal information or use a system like this to impersonate a real human at scale. (You think robo-spam calls are bad now?)
So Microsoft has pledged to limit access to Custom Neural Voice to select businesses that have established they will use the tool ethically.
Making AI Work for You
The business world at large is relying on AI-powered applications more and more. This progress creates certain ethical challenges, but used wisely, AI has the potential to transform business growth and effectiveness.
Are you missing out on any AI-powered possibilities for your company? Some of them are simpler to use than you might expect! Reach out to our team if you’re wondering where AI could improve your workflows and efficiency. We’re happy to help!