“AI algorithms may be flawed,” the company said. “Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.”
There’s also the question of military use of AI. In October, an undisclosed number of Microsoft employees published an open letter expressing concern about the company’s pursuit of a $10 billion contract to develop cloud services for the Department of Defense. In the letter, the employees raised questions about the “violent application” of AI technology and how much transparency the company would provide to the people building it.
“How will workers, who build and maintain these services in the first place, know whether our work is being used to aid profiling, surveillance, or killing?” the post said.
These are all critical questions for Microsoft after the company added AI to its strategic vision in 2017, officially making it a top priority.
At the weekly AI 365 meetings, Nadella and Scott are joined by Chief Financial Officer Amy Hood and other top executives. Scott said the meetings are important for keeping business leaders on the same page and giving them a clear view of where projects may overlap. They also let a group that’s seeing strong results from a particular technique explain its approach so it can be replicated on other projects.
“You look at something like machine learning where, especially on the frontier, there’s a small number of people who really have that frontier-pushing expertise and drive, and you really, really don’t want to waste their effort,” Scott said.