Propelled by the rapid adoption of a contactless, largely digital lifestyle over the past two years, AI is set to keep growing at a remarkable pace in 2022. Below, we briefly summarize 2022’s most anticipated AI trends and share some commentary from our team on each:
Responsible AI, stricter regulations, and increased explainability
AI’s increasing role in our everyday lives will go hand in hand with national and international efforts to regulate AI and to make sure it remains explainable and ethical. Some of the biggest concerns regarding the irresponsible or unethical use of AI revolve around privacy, manipulation, security, and discrimination.
In April 2021, the EU released its AI Act, a proposed framework to regulate AI in the 27-country bloc. The Act categorizes AI systems by risk, prohibiting those deemed to pose an “unacceptable risk”. Other categories under the EU AI Act include “High Risk AI”, “AI with Specific Transparency Obligation”, and “Minimal to No Risk AI”. The proposal is still being debated and, if approved, will be implemented over a period of two years.
UNESCO followed suit, adopting in November 2021 a series of recommendations on AI ethics, which it labeled the “first global standard-setting instrument on the ethics of artificial intelligence”. Later, in December 2021, the U.S. Federal Trade Commission released its “Trade Regulation Rule on Commercial Surveillance”, in which the Commission states it is considering a rulemaking process to “curb lax security practices, limit privacy abuses, and ensure that algorithmic decision-making does not result in unlawful discrimination.”
“Among our government and non-government clients in the Netherlands, we have observed an increased demand for the Responsible AI training programs that our Academy offers,” comments Xomnia’s Lead Data Scientist Guido Faassen.
“Entities that want to implement responsible AI practices have a variety of available tools (e.g. InterpretML, Fairlearn, counterfactual explanations), but it should be emphasized that it is not just about the tools,” comments Tim Paauw, CTO at Xomnia. “Common practices, processes, design methods and multidisciplinary discussion platforms that encourage critical thinking are of vital importance too.”
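To give a first impression of what such tooling looks like in practice, below is a minimal sketch of a group fairness check with Fairlearn, one of the tools Tim mentions. The labels, predictions, and the sensitive “gender” feature are invented purely for illustration:

```python
# Minimal sketch of a Fairlearn group fairness check.
# All data below is hypothetical and for illustration only.
from fairlearn.metrics import MetricFrame, selection_rate

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                   # hypothetical ground truth
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]                   # hypothetical model predictions
gender = ["f", "f", "f", "m", "m", "m", "f", "m"]   # hypothetical sensitive feature

# Compare how often the model "selects" (predicts 1) per group
mf = MetricFrame(
    metrics=selection_rate,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)
print(mf.overall)    # selection rate over the whole dataset
print(mf.by_group)   # selection rate per gender group
```

A large gap between the per-group selection rates would be a signal to investigate further, which is exactly the kind of critical, multidisciplinary discussion Tim refers to.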
Data2vec, LLMs, conversational AI, and language models
The growth of conversational AI is predicted to boost chatbots, voice-based smart assistants, machine translation, sentiment analysis, and automatic video captioning, among other applications. Text mining also continues to develop rapidly, facilitated by the emergence of transformer models.
Large Language Models (LLMs), models that are trained on huge amounts of data, are predicted to be a big trend this year. The technology has been used to write and translate text, hold conversations (e.g. chatbots), and even write complex code. In 2021, Google unveiled its Language Model for Dialogue Applications (LaMDA), an LLM that, unlike chatbots that follow predefined conversation paths, “can engage in a free-flowing way about a seemingly endless number of topics.”
In 2021, we also welcomed GPT-3 (Generative Pre-trained Transformer 3), one of the largest NLP models to date. GPT-3’s creator, OpenAI, is working this year to surpass that record with GPT-4, a model expected to incorporate trillions of language processing data points.
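To give a feel for how such language models are used in practice, here is a minimal sketch of text generation with the open-source Hugging Face transformers library. It uses the small, freely downloadable GPT-2 as a stand-in; LaMDA and GPT-3 themselves cannot be loaded this way, and the prompt is purely illustrative:

```python
# Minimal sketch of text generation with an openly available language model.
# GPT-2 is used here as a small stand-in for the much larger LLMs discussed above.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
completions = generator(
    "The biggest AI trends of 2022 are",  # illustrative prompt
    max_length=40,                         # cap the length of the continuation
    num_return_sequences=1,
)
print(completions[0]["generated_text"])
```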
Moreover, in January 2022, Meta’s AI team unveiled Data2vec, which is expected to be a breakthrough in high-performance, self-supervised algorithms. According to the Meta AI team, Data2vec is “the first high-performance self-supervised algorithm that works for multiple modalities”, i.e. speech, images, and text. The algorithm is an example of holistic self-supervised learning, which, according to Meta AI, has already outperformed single-purpose algorithms for either computer vision or speech. Data2vec is predicted to compete directly with existing NLP technology.
Polars continues to grow among programmers
In 2021, the data science world welcomed Polars, the fastest DataFrame library written in Rust, with interfaces to Python and JavaScript. Xomnia’s Machine Learning Engineer Ritchie Vink started developing Polars in 2020, and since its release around a year ago, it has continued to gain traction among professionals across the industry. For instance, the Polars query engine is being used in nushell and explorer.
Polars addresses the lack of DataFrame libraries in Rust. Unlike other widely used libraries such as pandas, Polars is multi-threaded, meaning it can use all the cores of a computer at the same time to reach its full processing potential.
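As a quick taste of the expression-based API, below is a minimal sketch of filtering and sorting a small Polars DataFrame from Python; the data is made up for illustration:

```python
import polars as pl

# Build a small in-memory DataFrame (illustrative data)
df = pl.DataFrame({
    "city": ["Amsterdam", "Rotterdam", "Amsterdam", "Utrecht"],
    "temperature": [17.0, 18.5, 16.2, 17.8],
})

# Expressions are executed by Polars' multi-threaded query engine
result = (
    df.filter(pl.col("temperature") > 17.0)
      .select(["city", "temperature"])
      .sort("temperature")
)
print(result)
```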
To read more about Polars and its memory model, which is based on Apache Arrow’s memory model, click here.
Migration to the cloud continues
The trend of moving data storage and workloads from on-premises infrastructure to the cloud continues to grow among organizations across the board.
“We have been noticing this trend for years, but now we see even more conservative clients, like municipalities and entities with expansive operations and big teams, that are willing to move from working on-premise to working in the cloud, or are already seriously working on it,” comments Marius Helf, principal machine learning engineer at Xomnia.
Neural networks and deep learning become more accessible
Marius also points out a growing trend in toolkits that allow neural networks and deep learning to be applied on a more “casual” basis.
“You don’t necessarily need experts anymore to apply these neural networks and deep learning toolkits, which present solutions that are more or less ready-to-use,” adds Marius. “This makes the applications of deep learning for certain use cases much easier than they were a while ago.”
“Many of those ready-to-use models, or libraries of such models, can be found in Python. The Darts library, for instance, has half a dozen deep learning models that are ready to apply to time series,” the machine learning engineer explains.
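To illustrate how ready-to-use such models are, here is a minimal sketch of training one of Darts’ deep learning forecasters on a synthetic sine wave; the model choice and settings are illustrative, not tuned:

```python
# Minimal sketch of a ready-to-use deep learning forecaster from the Darts library.
# The sine-wave data is synthetic and the hyperparameters are illustrative.
import numpy as np
from darts import TimeSeries
from darts.models import NBEATSModel

# Build a toy univariate time series
values = np.sin(np.linspace(0, 20, 200))
series = TimeSeries.from_values(values)

# N-BEATS is one of the deep learning models Darts ships ready to use
model = NBEATSModel(input_chunk_length=24, output_chunk_length=12, n_epochs=5)
model.fit(series)

forecast = model.predict(12)  # forecast the next 12 time steps
print(forecast.values())
```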
AI and healthcare
Biotechnology and the global push to improve and scale healthcare have dominated the agendas of many governments and organizations in the past two years. In 2022, AI’s role in healthcare will continue to grow.
Expected outcomes include AI-aided advancements in contactless patient care, medical decision-making, and preventive care.
Another breakthrough in biotechnology is DeepMind’s AlphaFold, an AI system that quickly predicts protein structures and is expected to play an increasing role in the coming years. Proteins play a central role in treating diseases, and since a protein’s shape and function are closely tied, AlphaFold’s ability to predict protein structures “unlocks a greater understanding of what it does and how it works,” according to DeepMind.