ChatGPT was launched on 30 Nov 2022, and it has been 14 months of explosive, non-stop AI growth since then. I have written about its progress every few months, and it is time to do it again: to look into my crystal ball and see what lies ahead in 2024.
Firstly, let’s talk about Large Language Models (LLMs), the engines behind the rapid acceleration of AI. They are the workhorses behind the output of ChatGPT and Bard, while closely related generative models power image tools like Midjourney.
Basically, an LLM is a very large deep-learning model that is pre-trained on vast amounts of data. LLMs are artificial neural networks that loosely mimic the human brain, absorbing and digesting enormous amounts of data during training so that they can produce remarkable outputs from multi-modal inputs, be they words, voice or images. At its core, an LLM is a huge set of learned parameters, built from a massive dataset by a relatively small amount of code that feeds and tunes the black box.
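To make this concrete, here is a minimal sketch of what “using a pre-trained LLM” looks like in practice. It uses the free Hugging Face transformers library and the small GPT-2 model purely as an illustrative stand-in for the giant commercial models:

```python
# Minimal sketch: load a pre-trained language model and let it continue a prompt.
# GPT-2 is used here only because it is small and freely available; the same
# pattern applies to much larger models.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Large language models are"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short continuation from the pre-trained weights.
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

All the “intelligence” sits in the pre-trained weights; the code around it is just a thin wrapper, which is the point I am making above.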
Just to give you an idea of how big the datasets have become within the last three years, I will share a personal experience. Back in 2021, I did a full-time IBM AI course and our project team wanted to create a simple Python program to detect deepfake photos. Our programmer could only find a free dataset of 15,000 images to train our GAN (Generative Adversarial Network) model.
As of today, a mere two-plus years later, Meta has released its open-source LLM family called Llama to the world at zero cost. Its largest fine-tuned generative text model has 70,000,000,000 parameters. That’s right, 70 billion, pre-trained on roughly two trillion tokens of text. It is one of the biggest and most advanced open LLMs available today.
The sky is the limit, and even bigger models with hundreds of billions of parameters are on the way. So much online data is being produced and recorded every day that it has become a near-unlimited resource for LLMs to tap. Elon and Tesla are a good example of what I mean, which I will expand on later.
We are starting to see multiple ways to commercialize and monetize AI daily. So many new uses remain to be discovered, limited only by your imagination. OpenAI recently allowed its premium users to build custom chatbots on top of its GPT-4 engine, so that one can tailor it with a smaller dataset and instructions to create personalized, narrowly trained assistants. They call these newly created end products GPTs. They can be narrowly focused applications, such as a coaching companion chatbot on how to be a better writer, tennis player, gamer and so on.
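In practice you build these custom GPTs through OpenAI’s own builder interface, but the core idea, wrapping the big model in a fixed persona and instructions, can be sketched with the regular OpenAI Python API. The system prompt, model name and helper function below are my own illustrative assumptions:

```python
# Illustrative sketch: a narrowly focused "writing coach" chatbot built on the
# OpenAI API. The system prompt and helper function are assumptions for this
# example; a real custom GPT would also attach its own knowledge files.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a friendly writing coach. Give short, concrete feedback on "
    "clarity, structure and tone, and always suggest one rewrite."
)

def coach(user_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content

print(coach("Here is my opening paragraph, can you improve it? ..."))
```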
At the same time, they have just launched an online store similar to Apple’s App Store. Creators can now publish their GPTs for others to use. This will eventually develop into a whole new industry of professionally made services catering to any specific need you may have. You could sell such GPT apps for as little as $0.99 each and still make decent money. The world is your oyster, as the marketing reach for your virtual, non-physical product is now global.
As I have described previously on this blog, I could create a mini-me chatbot that mimics me closely. Using the LLM access I get, I could train my own GPT on the data from all my WhatsApp conversations. They easily contain 5 to 10 GB of my conversational quirks, enough to fine-tune the model into a version of myself.
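The messy part is turning raw chat exports into training data. Here is a rough, illustrative sketch of that step. The WhatsApp export line format varies by phone and locale, so the regex and file names below are assumptions, and the output follows the general chat-style JSONL layout used for fine-tuning conversational models:

```python
# Illustrative sketch: turn a WhatsApp chat export into chat-style training
# examples (JSONL), pairing a friend's message with my own reply.
# The export line format varies by device and locale, so this regex is an
# assumption and multi-line messages are simply skipped.
import json
import re

MY_NAME = "My Name"  # replace with the sender name used in your export
LINE_RE = re.compile(
    r"^\d{1,2}/\d{1,2}/\d{2,4}, [^-]+ - (?P<sender>[^:]+): (?P<text>.+)$"
)

def parse_chat(path: str):
    messages = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            m = LINE_RE.match(line.strip())
            if m:
                messages.append((m.group("sender"), m.group("text")))
    return messages

def build_examples(messages):
    examples = []
    # Pair each incoming message with my immediate reply.
    for (prev_sender, prev_text), (sender, text) in zip(messages, messages[1:]):
        if prev_sender != MY_NAME and sender == MY_NAME:
            examples.append({
                "messages": [
                    {"role": "system", "content": "Reply in my usual chat style."},
                    {"role": "user", "content": prev_text},
                    {"role": "assistant", "content": text},
                ]
            })
    return examples

with open("mini_me_training.jsonl", "w", encoding="utf-8") as out:
    for ex in build_examples(parse_chat("whatsapp_export.txt")):
        out.write(json.dumps(ex, ensure_ascii=False) + "\n")
```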
I could then add an avatar that looks like me to this newly created mini-me model. Next, I would train a voice AI app to speak like me using samples of my voice. I could then drop the model into a Metaverse, Sim-City-like world where I exist in an alternate universe. This sounds crazy, but it is doable now! It is a reminder that “San Junipero”, the award-winning 2016 episode of the British TV series Black Mirror, is edging towards reality.
Just a few weeks ago, a YouTube channel by the name of Dudesy released a one-hour comedy special about the late George Carlin, the famous comedian who died in 2008. AI generated new, topical jokes mimicking his style of delivery, with the visuals provided by generative AI image tools.
Boy, are we going to have a blast this year with all the political elections happening. There will be so many deepfakes going around that all of us will be hard-pressed to tell what is real and what is not anymore. A massive online scam wave, version 2.0, will be unfolding through 2024. LOL.
On a more serious note, I predict that some things will be fast-tracked and become useful for the majority as AI evolves over the next 12 months. Below are a few examples that could easily come true in a matter of time, as LLMs and related models are refocused on solving specific problems.
I recently finished reading Walter Isaacson’s Elon Musk biography, which gave a fascinating understanding of the man and what motivates him. The writer shadowed Elon for two years and psychoanalyzed him to write this highly recommended book.
The area related to my topic this week concerns Tesla and the self-driving problem it has been trying to solve. The previous approach was to give the car algorithm-style, hand-coded instructions based on inputs from external stimuli (visual and radar). This was highly complicated, as the algorithm may not pre-empt every situation or react in the right, human way when faced with a scenario its code did not cater for, for example an accident in a particular environment like a petrol station.
Then Elon had a light-bulb moment. Hang on, why are we still using hand-coded algorithms for autonomous driving instructions? Tesla collects tons of real-time driving data from every car it sells; it records information about every trip of every Tesla, all the time.
Hence they already have a treasure trove of data they can use to train an end-to-end AI model. They just need to filter out the bad drivers and use the rest of the data to train the new autonomous driving model. It becomes a model driver, familiar with handling almost every situation thanks to the years of data Tesla has stored. AI-trained self-driving could become Tesla’s new edge very soon as it pivots and morphs from an EV company into an AI tech powerhouse.
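Tesla has not published its training pipeline, so take this as a toy, illustrative sketch of the general “filter out the bad drivers, then imitate the rest” idea (behaviour cloning). Every file name, column name and threshold below is a made-up assumption:

```python
# Toy sketch of "keep the good drivers, imitate them" (behaviour cloning).
# All file names, column names and thresholds are made-up assumptions;
# Tesla's actual pipeline is not public and works on raw camera video.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

trips = pd.read_csv("trip_logs.csv")  # hypothetical per-frame driving log

# 1. Keep only data from drivers with a high safety score (hypothetical column).
good = trips[trips["driver_safety_score"] >= 0.9]

# 2. Learn to imitate the human steering command from simple sensor features.
features = good[["speed", "lane_offset", "distance_to_lead_car"]]
target = good["steering_angle"]

policy = GradientBoostingRegressor()
policy.fit(features, target)

# 3. At inference time, the learned policy proposes a steering angle
#    for a new sensor reading.
print(policy.predict([[12.5, 0.1, 30.0]]))
```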
Using the same method as above, can you imagine what else we could do? On the medical front, we can train a model to detect cancer: show it millions of healthy X-rays and then another batch from patients in whom cancer has been diagnosed. Once the model is live, any patient’s X-ray can be scanned, and the AI will provide a fast, and potentially superior, assessment of the probability of cancer, perhaps even of whether it is likely to develop within a certain time frame. Some doctors and researchers have already started experimenting with this approach to medical image analysis.
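A minimal sketch of that “show it labelled X-rays” idea, using transfer learning on a pre-trained image model, could look like the following. The folder names are assumptions, and this is a toy example, nowhere near a clinically validated system:

```python
# Illustrative sketch: fine-tune a pre-trained ResNet to classify chest X-rays
# as "cancer" vs "healthy". Folder names are assumptions; this is a toy
# example, not a clinically validated diagnostic tool.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects xray_data/healthy and xray_data/cancer subfolders of images.
dataset = datasets.ImageFolder("xray_data", transform=transform)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# Start from ImageNet weights and replace the final layer with two classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass over the data, for illustration
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```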
Sounds too far-fetched? Why not? The building blocks are the same, aren’t they? We can screen brain scans for signs of dementia and other diseases that only doctors with many years of experience can detect. I foresee the biggest AI impact being in science and medicine. It has already helped scientists fast-track COVID-19 vaccine development, with comprehensive computer simulations speeding up the early design work instead of relying solely on long, drawn-out trial and error.
Joke of the week: the EU AI Act was passed late last year, but it will only be fully implemented about two years from now. That is a lifetime in AI terms, and we may even be debating sentient AGI by then. Copyright infringement? How do you enforce that on a pre-trained LLM? How can an LLM’s neural network unlearn something it has already picked up? Can a human brain do that???
Food for thought, but 2024 will be another exciting year for AI, given what we have seen ChatGPT achieve in just 14 months. AI has revolutionized and upended whole industries within a short period of time. It moves at light speed, and humans will need to up their game simply to keep abreast of the new developments. We have to embrace AI or be left behind. To borrow and adapt my favourite Thanos line for AI – “It is inevitable.”