[1] The concept of "ethical AI" hardly existed just a few years ago, but times have changed. After countless discoveries of AI systems causing real-world harm and a slew of professionals ringing the alarm, tech companies now know that all eyes – from customers to regulators – are on their AI. They also know this is something they need to have an answer for. That answer, in many cases, has been to establish in-house AI ethics teams.
Now present at companies including Google, Microsoft, IBM, Facebook, Salesforce, Sony, and more, such groups and boards have been largely positioned as places to do important research and even safeguard against the companies' own AI technologies. But when Google fired Timnit Gebru and Margaret Mitchell – leading voices in the space and the former co-leads of the company's ethical AI lab – this past winter after Gebru refused to rescind a research paper on the risks of large language models, it felt as if the rug had been pulled out from under the whole concept. It doesn't help that Facebook has also been criticized for steering its AI ethics team away from research into topics like misinformation out of fear that it could impact user growth and engagement. Now, many in the industry are questioning whether these in-house teams are just a facade.
"I do think that skepticism is very much warranted for any 'ethics' thing that comes out of corporations," Gebru told VentureBeat, adding that it "serves as PR [to] make them look good."
So is it even possible to do real AI ethics work inside a corporate tech giant? And how can these teams succeed? To explore these increasingly important questions, VentureBeat spoke with a few of the women who pioneered such initiatives – including Gebru and Mitchell, among others – about their own experiences and thoughts on how to build AI ethics teams. Several themes emerged during the conversations, including the pull between independence and integration, the importance of diversity and inclusion, and the fact that buy-in from executive leadership is paramount.
>> Read more here.

[2] Intel today announced a major update to its neuromorphic computing program, including a second-generation chip called Loihi 2 and Lava, an open source framework for developing "neuro-inspired" applications. The company is now offering two Loihi 2-based neuromorphic systems – Oheo Gulch and Kapoho Point – to members of the Intel Neuromorphic Research Community (INRC) through a cloud service, and is making Lava freely available via GitHub.
Along with Intel, researchers at IBM, HP, MIT, Purdue, and Stanford hope to leverage neuromorphic computing – circuits that mimic the human nervous system's biology – to develop supercomputers 1,000 times more powerful than any today.
INRC, the ecosystem of over 150 academic groups, government labs, research institutions, and companies founded in 2018 to further neuromorphic computing, claims to have achieved breakthroughs in applying neuromorphic hardware to an array of applications, from voice recognition to autonomous drone navigation. Some members of INRC see business use cases for chips like Loihi. For example, Lenovo, Logitech, Mercedes-Benz, and Prophesee hope to apply it to enable things like more efficient and adaptive robotics and rapid search of databases for similar content. Last year, Accenture tested the ability to recognize voice commands on Loihi versus a standard graphics card and found the chip was up to 1,000 times more energy-efficient and responded up to 200 milliseconds faster.
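The article doesn't detail Loihi's internals, but the "neuro-inspired" circuits these chips implement are commonly modeled as spiking neurons, such as the classic leaky integrate-and-fire (LIF) unit. Below is a minimal pure-Python sketch of that model for intuition only – the parameter values are illustrative assumptions, not Loihi's actual configuration:

```python
def lif_spike_count(input_current, steps, tau=10.0, threshold=1.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron and count output spikes.

    The membrane potential v leaks toward the input current each step;
    when it crosses the threshold, the neuron emits a spike and resets.
    """
    v = 0.0
    spikes = 0
    for _ in range(steps):
        v += (input_current - v) * dt / tau  # leaky integration step
        if v >= threshold:
            spikes += 1
            v = 0.0  # reset membrane potential after spiking
    return spikes

# A strong input drives periodic spiking; a weak one saturates below
# threshold (here v converges to 0.5 < 1.0) and never fires.
print(lif_spike_count(2.0, 50))
print(lif_spike_count(0.5, 50))  # 0 spikes
```

Because spikes are sparse and event-driven hardware only does work when a neuron fires, models like this are one reason for the large energy-efficiency gaps reported in benchmarks such as Accenture's voice-command test above.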
>> Read more here.
[3] GitHub has formally launched Enterprise Managed Users (EMUs), a new type of user account for GitHub Enterprise Cloud (GHEC) customers that can be provisioned and managed centrally via the company's identity provider (IdP).

This represents part of GitHub's broader efforts to transition software development away from local environments and into the cloud. Another example is the company's browser-based Codespaces platform, which it recently launched for enterprises.
GitHub's EMUs, which were first announced in private beta last year, give admins granular control over GitHub accounts across the company by tying GitHub Enterprise Cloud to their IdP of choice, such as Google, Microsoft, or Okta. They're particularly notable from a security perspective, as repositories associated with EMU accounts are automatically blocked from making private code publicly visible, which goes some way toward averting human error.
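Centralized provisioning of this kind typically follows the SCIM 2.0 standard, with the IdP pushing user records to the service rather than users self-registering. The sketch below builds a generic SCIM core-schema user payload in Python; the enterprise slug, endpoint path, and exact attributes are assumptions based on the SCIM specification, not confirmed details of GitHub's EMU API:

```python
import json

# Hypothetical enterprise slug, for illustration only.
ENTERPRISE = "acme-corp"
SCIM_URL = f"https://api.github.com/scim/v2/enterprises/{ENTERPRISE}/Users"

def build_scim_user(username, given, family, email):
    """Build a SCIM 2.0 core-schema user record (RFC 7643 style)."""
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": username,
        "name": {"givenName": given, "familyName": family},
        "emails": [{"value": email, "primary": True}],
        "active": True,
    }

payload = build_scim_user("mona_acme-corp", "Mona", "Lisa", "mona@example.com")
print(json.dumps(payload, indent=2))
# An IdP would POST this JSON to the SCIM endpoint with an admin token;
# flipping "active" to False suspends the account centrally.
```

The design point is that account lifecycle lives with the IdP: when an employee is offboarded in Okta or Azure AD, their GitHub access is revoked in the same motion, with no per-repository cleanup required.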
>> Read more here.