Three Ways We Know That 2024 Is The Year Of AI Adoption

New research is making things a lot clearer, and giving us real visions of what’s possible.

If you didn’t already believe we’re headed for lightning-speed innovation, just check out these three sources…

I know it seems like people keep pounding that drum, predicting that all sorts of intelligent machines will take over aspects of our lives in the next few years. But if we sound strident about it, that's because people with an inside view can really see it happening, and they know what it means.

I wanted to write this blog post to show how that works: how you might convince the average person that they really should care about the groundbreaking work being done.


So with that in mind, how do we know exactly how quickly AI is being integrated into our society?

The first source is your general Internet commentariat. For example, check out this Tom’s Guide article talking about 2024 and how it’s going to be the year of adoption.

2023, the writer argues, was the year in which we learned, in a theoretical way, how large language models work. 2024, by contrast, is going to be the year we see markets fundamentally transformed.

“We’re going to see generative AI in fridges, toys, exercise equipment, lawnmowers and in our cars,” wrote Ryan Morrison, just after last Christmas. “Chatbots will allow us to interact with objects the same way we talk to ChatGPT today, and AI vision technology will give appliances the ability to see what we’re doing … The reality is that we’ve just seen a year where the floodgates of decades of research were blown open. New breakthroughs in technology were coming all the time, and investment reached record highs.”


Here’s the second source, and this one is important. I keep coming back to what MIT scientist Alex Amini said in a talk at the MIT Venture Studio class just days ago about where AI is headed this year.

His prediction? This summer, we’ll be seeing those big enterprise applications!

Amini is a leader in a lab where researchers are working on new kinds of networks called liquid neural nets, in which artificial neurons have the ability to continuously process information over time, and where scientists use a new differential equation to represent the interaction between two artificial neurons through simulated synapses.
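To make the "differential equation per neuron" idea concrete, here is a toy sketch of a liquid time-constant style update, integrated with a simple Euler step. To be clear, every name and constant below (`tau`, `W`, `A`, the `tanh` nonlinearity) is an illustrative assumption on my part, not the lab's actual formulation or code:

```python
import numpy as np

def ltc_step(x, I, tau, W, A, dt=0.01):
    """One Euler step of a toy liquid time-constant (LTC) neuron.

    Roughly: dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A,
    where f is a nonlinearity over the current state and input,
    so the input itself modulates the neuron's effective time constant.
    """
    f = np.tanh(W[0] * x + W[1] * I)        # simulated synaptic nonlinearity
    dxdt = -(1.0 / tau + f) * x + f * A     # state decays and is driven by f
    return x + dt * dxdt

# Simulate one neuron responding to a constant input over time.
x, tau, W, A = 0.0, 1.0, (0.5, 1.2), 2.0
for _ in range(200):
    x = ltc_step(x, I=1.0, tau=tau, W=W, A=A)
print(round(x, 3))
```

The point of the sketch is only that the neuron's state evolves continuously, governed by an equation, rather than being recomputed from scratch at each discrete layer.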

Many of the ramifications, he suggests, will be evident this year.

“What does this ecosystem look like in two years? Or one year even?” he asks. “Will it be liquid neural networks completely displacing transformers, and transformers are obsolete? I think it’s very likely that transformers are obsolete in the near future.”

Importantly, he also has some thoughts about the regulation of this quickly approaching technology.

“If you look at, basically, U.S. regulation of large language models, it’s fascinating to me, because the way that they judge a more capable language model is purely based on the number of flops that it takes, the amount of compute that it uses to train on. … And to me, this is like a totally backwards way of thinking, right? … It doesn’t matter how much compute you use to train … It’s just that that’s the best proxy that we have, today, to judge these things.”

Amini goes into a very deep description of how this stuff works, and we can cover that at another time, but in general, he talks about the model of using transformers for neural net models and how that may soon become obsolete, notwithstanding (his example) Facebook’s big investment into Nvidia chips.

In the near future, he said, we will build and assess models not based on scale, but based on capabilities.

He also talked about a “mixture of experts” idea where different components will play off of one another to do the kinds of in-depth cognitive work that we associate with the human brain.

“We have these feedback systems … and … one is, maybe not adversarial, but one is having … insight (into) the first one,” he says. “And you can use that to improve the quality of the first (system). I think one really exciting thing that I am seeing from OpenAI is this real investment into the ‘mixture of experts’ idea. … don’t just train one model, but train a model with multiple pathways through that same model, so that you can combine different concepts and knowledge bases together. And the model can basically choose … which pathway to take to answer a given question. It diversifies the knowledge. And that helps with a lot of things, including robustness. And when you think of this adversarial training, or adversarial objective of these models, that becomes especially important for that as well.”
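The routing idea he describes, one model with multiple pathways and a learned choice of which pathway handles a given input, can be sketched in a few lines. The shapes, names, and top-k blending rule below are my own illustrative assumptions, not a description of OpenAI's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_forward(x, experts, gate_W, top_k=2):
    """Toy mixture-of-experts forward pass: a gate scores each expert,
    only the top-k experts process the input, and their outputs are
    blended using the renormalized gate weights."""
    scores = softmax(gate_W @ x)                  # gate: one score per expert
    top = np.argsort(scores)[-top_k:]             # choose the top-k pathways
    weights = scores[top] / scores[top].sum()     # renormalize over chosen ones
    outs = np.stack([experts[i] @ x for i in top])
    return weights @ outs                         # weighted blend of expert outputs

d, n_experts = 4, 3
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
gate_W = rng.standard_normal((n_experts, d))
y = moe_forward(rng.standard_normal(d), experts, gate_W)
print(y.shape)
```

Each input activates only a subset of the experts, which is what lets the model "diversify the knowledge" without paying the full compute cost of every pathway on every question.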

All of that really informs us about how fast AI is going, but here’s a third source that I think you’ll be surprised by!

As I was writing this, Microsoft Copilot sprang up, unbidden, to the right of the screen.

It opened the question – “is 2024 the year of AI?” – and then provided a response, citing events at this year’s CES tradeshow, and more.

“AI is improving at returning relevant content from prompts,” wrote the decidedly non-human model, “whether it’s textual information, or high-quality images. This year, we’ll witness AI’s greater impact and reach into more people’s lives.”

And it said it in the Queen’s English, or King’s English, if you will. In fact, when I asked the AI to tell me which of these monikers I should use, it came back with a whole bunch of assistance, including this:

“Interestingly, the Queen herself doesn’t strictly adhere to the Queen’s English these days,” the AI told me in a calm, professional manner. “Her accent has evolved over her reign, and even the BBC now features a range of regional voices. So, while the Queen may not use the Queen’s English, it remains a hallmark of the upper classes in the U.K. Additionally, there’s a grammatical quirk known as the ‘royal ‘we’ or majestic plural. It involves referring to oneself with the plural pronoun ‘we’ instead of the singular ‘I.’ … As for the ‘King’s English’, it’s a term less commonly used. Historically, it referred to the English language as written and spoken correctly by educated people in the U.K. when the country had a king. The standard title for English monarchs from Æthelstan until John was ‘King of the English.’ Later, in the Norman period, ‘King of the English’ remained standard, with occasional use of ‘King of England’. In summary, both the Queen’s English and the King’s English denote formal, correct language usage associated with royalty and the upper echelons of society. However, language evolves, and even the royals adapt to changing linguistic norms!”

In other words, if you’re not convinced that AI is taking over, just ask an AI what AI will do this year. It will tell you on its own, in prose that’s really pretty capable, arguably more capable than the average person on the street.

Get where I’m going?

We’re seeing this work, and this proof, up close and personal, with experts weighing in on these new models and showing us exactly what the roadmap is going to look like. It’s incumbent on all of us, regulators and the business community alike, to pay attention.

(Full Disclosure: I am an advisor for LiquidAI, the MIT group that is building new forms of networks similar to some of what was discussed above.)
