[1] 2021 was an
eventful year for AI. With the advent of new
techniques, robust systems that can understand the relationships not only between words, but also between words and photos, videos, and audio became possible. At the same time, policymakers — growing increasingly wary of AI's potential harm — proposed rules aimed at mitigating the worst of AI's effects, including discrimination.
Meanwhile,
AI research labs — while signaling their adherence to
responsible AI — rushed to commercialize their work under pressure from corporate parents or investors. But in a bright spot, organizations ranging from the U.S. National Institute of Standards and Technology (NIST) to the United Nations released guidelines laying the groundwork for more explainable AI, emphasizing the need to move away from black-box systems in favor of those whose reasoning is transparent.
As for what 2022 might hold, the renewed focus on
data engineering — designing the datasets used to train, test, and benchmark AI systems — that emerged in 2021 seems poised to remain strong. Innovations in AI accelerator hardware are another shoo-in for the year to come, as is continued growth in enterprise adoption of AI.
>> Read more. [2] The most sophisticated
AI language models, like OpenAI's GPT-3, can perform tasks from generating code to drafting marketing copy. But many of the underlying mechanisms remain opaque, making these models prone to unpredictable — and sometimes toxic — behavior. As recent research has shown, even careful calibration can't always prevent language models from making
sexist associations or endorsing conspiracies.
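To make that concrete, here is a minimal sketch of prompting GPT-3 for marketing copy through OpenAI's Python client as it worked circa 2021; the engine name, prompt, and sampling settings are illustrative assumptions rather than details from the article.

import openai

openai.api_key = "YOUR_API_KEY"  # assumption: supplied from a secure config, not hardcoded

response = openai.Completion.create(
    engine="davinci",  # a GPT-3 engine available at the time
    prompt="Write a one-line tagline for a reusable water bottle:",
    max_tokens=30,
    temperature=0.7,  # sampling randomness, the same source of variety that makes outputs hard to predict
)
print(response.choices[0].text.strip())

Because the model samples from learned word distributions rather than following explicit rules, nothing in a call like this constrains what comes back, which is why calibration alone can't rule out toxic output.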
Newly proposed techniques promise to make language models more transparent than before. While they aren't silver bullets, they could be the building blocks for less problematic models — or at the very least models that can explain their reasoning.
Explainability in large language models is by no means a solved problem. As one study found, there's an "interpretability illusion" that arises when analyzing a popular language model architecture called Bidirectional Encoder Representations from Transformers (BERT).
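For a sense of what such an analysis looks like in practice, here is a minimal sketch of one common transparency probe: inspecting BERT's attention weights with the Hugging Face Transformers library. The example sentence and the focus on the token "she" are illustrative choices of mine, not the cited study's method.

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

inputs = tokenizer("The nurse said she was tired.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer,
# each shaped (batch, heads, seq_len, seq_len).
last_layer = outputs.attentions[-1][0]  # (heads, seq_len, seq_len)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

# Average over heads and read off which tokens "she" attends to.
she_idx = tokens.index("she")
weights = last_layer.mean(dim=0)[she_idx]
for token, weight in zip(tokens, weights.tolist()):
    print(f"{token:>10}  {weight:.3f}")

The study's caution applies directly to probes like this one: attention patterns can look like an explanation while revealing little about what actually drove the model's behavior.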
It's what's known as the
automation bias — the propensity for people to favor suggestions from automated decision-making systems. Combating it isn't easy, but researchers like Georgia Institute of Technology's Upol Ehsan believe that explanations given by "glassbox" AI systems, if
customized to people's level of expertise, would go a long way.
>> Read more. [3] VentureBeat kicked off its 2021
digital twins coverage with Accenture's prediction that digital twins would be a top trend for the year. Meanwhile, other consultancies and systems integration leaders expanded their respective digital twin practices to improve product development, manufacturing, and supply chains. Lessons from these early implementations are helping to shape best practices that will allow more enterprises to succeed in 2022.
Digital twin capabilities are typically infused into other tools for product design, engineering, construction, manufacturing, and supply chain management rather than sold as standalone products. Enterprises face numerous challenges in weaving together the mix of data architectures, knowledge graphs, processes, and cultures required to get the most from
digital twin implementations. Looking ahead to 2022, advances in these tools, along with new uncertainties in the world economy, are likely to play out across core capabilities, medicine, construction, and sustainability.
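As a purely hypothetical sketch of the underlying idea: a digital twin is a virtual model kept in sync with a physical asset's sensor readings so the observed and expected states can be compared. Every name below is invented for illustration; real platforms are far more elaborate.

from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    asset_id: str
    expected_temp_c: float = 60.0  # design-model operating temperature
    readings: list = field(default_factory=list)

    def ingest(self, temp_c: float, rpm: float) -> None:
        """Sync the twin's state with the latest sensor reading."""
        self.readings.append({"temp_c": temp_c, "rpm": rpm})

    def health_check(self, tolerance_c: float = 5.0) -> str:
        """Compare observed state against the design model."""
        latest = self.readings[-1]
        if abs(latest["temp_c"] - self.expected_temp_c) > tolerance_c:
            return f"{self.asset_id}: temperature out of spec, inspect"
        return f"{self.asset_id}: operating within design limits"

twin = PumpTwin(asset_id="pump-007")
twin.ingest(temp_c=71.2, rpm=1480)
print(twin.health_check())  # -> "pump-007: temperature out of spec, inspect"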
>> Read more.