8 AI Trends to watch out for in 2023

With 2023 now in full swing, we are in the midst of a red-hot AI summer. Keeping up with all the developments in AI, while filtering out the distractions, can be a job in itself.

Here are some trends to watch out for this year:

1. Generative AI disrupts more industries.

Last year saw a tremendous surge of public interest as generative AI models such as ChatGPT, DALL-E 2, Whisper, Midjourney and Stable Diffusion were made available to the general public.

The amount of hype around ChatGPT even surprised Sam Altman, CEO of OpenAI, the company behind it. Altman said: “I can see why DALL-E surprised people but was genuinely confused about why ChatGPT did. We put GPT-3 out almost three years ago, put it into an API and the incremental update from that to ChatGPT should have been predictable and want to do more introspection on why I was miscalibrated on that.”

Generative AI has already had an impact on a growing number of industries.

Journalism is one industry feeling the impact of generative AI, with BuzzFeed (if you can call BuzzFeed journalism!) announcing that it will use ChatGPT to generate articles. However, it’s not all smooth sailing, with CNET finding errors in more than half of its AI-written stories.

We will likely see this trend continue, with more industries and professions being disrupted. In January this year, Google announced MusicLM, a model that can generate music from text, suggesting we could soon see similar disruption in the music industry.

Google recently announced its Large Language Model (LLM)-powered chatbot Bard, and a New York Times article reported that OpenAI is expected to release GPT-4 sometime in the first quarter of this year. These LLMs are setting new state-of-the-art results on accuracy benchmarks and will quickly change how we search for information (both Microsoft and Google are incorporating LLMs into Bing and Google Search respectively), write emails and messages, and interact with customer service, to name just a few applications.

So 2023 is shaping up to be the year when Generative AI makes a significant impact on our lives.

2. Lawmakers try to play catch-up and legal challenges begin.

As progress in AI accelerates at an unprecedented rate, legislators are starting to draft legislation to regulate its use.

The EU is leading the way with its Artificial Intelligence Act (AI Act), which has passed the Council of the EU and could be adopted by the end of 2023.

At the same time, legal challenges are beginning: Getty Images, for example, is taking legal action against Stability AI (the developers of Stable Diffusion).

AI is impacting the legal system in other ways, with an AI legal assistant recently helping a defendant fight a speeding case in court.

The potential for harm from generative AI, facial recognition and deepfakes is considerable. Cybercriminals are using ChatGPT to attack businesses and individuals, and facial images are being used by totalitarian governments to surveil their citizens. It will require a concerted effort from researchers, policymakers and companies to align AI models with positive human interests and avoid a dangerous future.

3. Educators grapple with AI.

In the education sector, generative AI presents both an opportunity and a challenge, with students now able to generate answers to essay questions with ease. One surprising example was ChatGPT passing the final exam of a Wharton MBA course, raising questions about how effective essay-based assessments are at testing students and how the education sector can adapt.

There has been a mixed reaction from educators, with New York City public schools blocking ChatGPT, while some educators are exploring how it can be used within their classrooms.

4. Human & AI collaboration becomes the norm across many professions.

With a proliferation of new AI tools emerging, many people have written about the risk of jobs being automated out of existence. However, while some jobs are certainly at risk of automation, many of these AI tools can augment and improve the work of employees. This year, with generative AI models readily accessible to the general public, we will likely see an explosion of workers adopting these tools in their daily activities.

“Prompt Engineering”, the process of discovering and writing useful “prompts” that steer generative AI models towards desired results, will become a valuable skill, if not a job title in itself. If “Data Scientist” was the sexiest job of the 2010s, as Harvard Business Review declared in this article, then “Prompt Engineer” may very well be the sexiest job of the 2020s, and one far more accessible to the wider workforce.
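
To make the idea concrete, here is a minimal sketch of what prompt iteration can look like in code, assuming the openai Python client and an API key are set up; the model name, prompts and parameters are purely illustrative rather than a recommendation.

```python
# Minimal prompt-engineering sketch (assumes the `openai` Python package and an
# OPENAI_API_KEY environment variable; model, prompts and settings are illustrative).
import openai

def complete(prompt: str) -> str:
    response = openai.Completion.create(
        model="text-davinci-003",  # an instruction-following GPT-3 model
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()

# A vague prompt versus a more carefully engineered prompt for the same task.
vague = "Write about a water bottle."
engineered = (
    "You are a copywriter for an outdoor-gear brand. Write a two-sentence, "
    "upbeat product description of a reusable water bottle aimed at hikers, "
    "mentioning that it keeps drinks cold for 24 hours."
)

print(complete(vague))
print(complete(engineered))
```

In practice, much of the craft lies in comparing outputs like these and refining the wording, context and constraints until the model reliably produces what you want.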

Marketing and copywriting have seen a proliferation of new generative AI tools, with companies such as Jasper and copy.ai helping marketers generate content for their campaigns.

Sales teams are also leveraging these tools to help draft and edit personalized sales emails, which can be time-consuming to write.

Software Engineering has seen the arrival of GitHub Copilot, an assistant that helps programmers write code faster and more accurately.

Designers have long benefitted from machine learning/AI within their software, with tools such as Photoshop’s content-aware fill being a staple of a designer’s toolkit for over a decade. With generative text-to-image diffusion models such as Stable Diffusion becoming available, more AI capabilities will land in the hands of 2D and 3D designers.
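
As a rough illustration of how accessible this has become, the sketch below generates a concept image from a text prompt using the openly released Stable Diffusion weights via Hugging Face’s diffusers library; the model ID and prompt are examples, and a CUDA GPU is assumed.

```python
# Minimal text-to-image sketch with Stable Diffusion via Hugging Face `diffusers`
# (assumes `diffusers`, `transformers` and `torch` are installed and a GPU is available).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # publicly released Stable Diffusion v1.5 weights
    torch_dtype=torch.float16,
).to("cuda")

prompt = "isometric illustration of a cosy coffee shop interior, soft morning light"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("coffee_shop_concept.png")
```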

5. Multimodal models go mainstream.

A core long-term goal of artificial intelligence is to build systems that can learn concepts across different domains, such as text and vision, rather than just one. These “multimodal” neural networks understand multiple “modalities”, bringing AI closer to the human ability to combine vision, sound and text, with the eventual goal of models that can see, read and hear.

In the past few years, many important multimodal models have been released, such as CLIP and DALL-E. This year has already brought new ones, such as Salesforce’s BLIP-2, which shows an impressive ability to answer a user’s text questions about an image.
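
To give a flavour of what understanding two modalities at once means in practice, here is a minimal sketch using CLIP to score how well candidate captions describe an image, assuming the Hugging Face transformers library is installed; the image URL and captions are just examples.

```python
# Minimal multimodal sketch: scoring image-text similarity with CLIP
# (assumes `transformers`, `torch`, `Pillow` and `requests` are installed).
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example photo of two cats
image = Image.open(requests.get(url, stream=True).raw)
captions = ["two cats sleeping on a sofa", "a plate of pasta", "a mountain landscape"]

# CLIP embeds the image and each caption into a shared space and compares them.
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)[0]

for caption, p in zip(captions, probs.tolist()):
    print(f"{p:.2f}  {caption}")
```

The model assigns the highest probability to the caption that best matches the image, which is exactly the kind of cross-modal matching that underpins text-to-image generation and visual question answering.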

Multimodal text-to-image models are already disrupting the art and design world, with AI-generated images winning art and even photography competitions. This has provoked a backlash from within the art and photography community against AI art generators, which are perceived as a technological threat to people trying to make a living in the creative industries.

This year, expect to see new multimodal models that are more accessible to the general public and trained for specific domains such as medicine, consumer goods, retail and education, as well as large new companies built on this technology.

6. Researchers push to make AI models more grounded in reality.

One of the challenges with generative Large Language Models such as ChatGPT is that they can sound extremely confident while being completely wrong. This is known as the “hallucination problem”, and experts such as Meta’s chief AI scientist, Yann LeCun, have expressed serious concerns:

“In terms of underlying techniques, ChatGPT is not particularly innovative... Why hasn’t the public seen programs like ChatGPT from Meta or Google? The answer is, Google and Meta both have a lot to lose by putting out systems that make stuff up.”

Researchers are working on methods to improve the reasoning ability and real-world accuracy of Large Language Models.

Some of these approaches include models that can generate their own training data to improve themselves and models that can fact-check themselves.

7. Continuous learning sees AI models become life-long learners.

Many people assume that machine learning models continuously update and learn new information over time; however, this is very often not the case. Even state-of-the-art models such as ChatGPT and LaMDA are trained on historical data, leaving lay users puzzled when a seemingly simple question such as “What time is it?” is met with ChatGPT’s response: “As a language model, I don't have access to the current time. You can check the time on your device.”

Or, when asked “Who is the Prime Minister of the UK?”, ChatGPT responds:

“As of my knowledge cut-off in 2021, the Prime Minister of the United Kingdom is Boris Johnson.” The UK has, of course, had two more Prime Ministers since then.

[Note: This was accurate at the time of writing this article, and who knows, we may have had 4 or 5 more Prime Ministers by the time you are reading this article!].

To address this challenge, researchers are working on continual learning: building AI systems that continuously learn and update as new information arrives.

Sam Altman is excited about this prospect:

“I think we will have models that continuously learn.

So right now, if you use GPT whatever, it’s stuck in the time that it was trained. And the more you use it, it doesn’t get any better and all of that.

I think we’ll get that changed. So I’m very excited about all of that.”

With progress being made in Reinforcement Learning from Human Feedback (RLHF) and Active Learning, look out for AI systems that can better incorporate new information and learn over time.

8. Challenges and solutions arise around AI & sustainability.

On the one hand, AI poses a challenge to sustainable goals, as large-scale machine learning models are consuming an ever-growing amount of energy.

On the other hand, AI is being used on multiple fronts to aid sustainability efforts.

One front where AI is playing a key role is energy: AI models are being used to manage the careful balance of electricity supply and demand in real time.

In a zero-carbon future, renewable energy will be generated from a diverse mix of sources, such as offshore wind farms, photovoltaic solar panels, hydroelectric plants and microgrids. The energy generated by these sources fluctuates unpredictably with prevailing weather conditions, unlike the more predictable output of gas or coal plants, which makes real-time grid balancing a natural problem for AI.

AI is being used in agriculture to transform production by better monitoring environmental conditions and crop yields.

Within transportation, AI is being used to reduce traffic, improve supply chain logistics, and enable more autonomous driving, which could optimize and reduce the amount of transport needed.

In water resource management, AI is helping to reduce waste and improve weather forecasting, which in turn helps cut water usage.

In manufacturing, robotics and predictive maintenance are making production facilities more efficient.

And in facilities management, AI is optimizing heat recovery and energy use within buildings by tracking the number of people in rooms or predicting the availability of renewable energy sources.

Lastly, in materials science, AI is discovering new enzymes to improve plastic recycling. Researchers from UT Austin engineered an enzyme capable of degrading PET, a type of plastic responsible for 12% of global solid waste.

When considering the overall impact of AI on a net-zero future, a Nature paper summarizing the impact of AI on the Sustainable Development Goals found that the positive impacts significantly outweighed the negative ones.

Hopefully, this year, we will see a greater focus on improving the efficiency of AI models and applying them to more game-changing technologies to help humanity reach a more sustainable future.

Conclusion

2023 is shaping up to be another exciting year for Artificial Intelligence, and while there are many challenges and risks, it is encouraging to see the hard work of the research and technology community to align AI efforts with long-term human goals.