Go is a far more difficult game than chess because it has far more permutations. In 2016, Google's DeepMind team created AlphaGo (using machine learning and a neural network), which played the Korean champion Lee Sedol, who had won 18 international titles, the second most ever. Because of the time constraints in a Go game, it was thought that a program would not have enough time to number-crunch its way to victory over a human for at least another ten years. Yet AlphaGo beat Lee 4-1. Lee was stunned and called it "an entity that cannot be defeated."
But it was. Less than two years later, DeepMind created AlphaGo Zero, which trained itself over just 72 hours with no human input beyond the rules of the game: it played a vast number of games against itself and developed sophisticated strategies from scratch.
AlphaGoZero defeated AlphaGo 100-0.
Think about that. Humans invented Go and played it for about 3,000 years. A completely self-learning algorithm took just 72 hours to learn the game and devise new winning strategies that humans and human-trained AIs can never hope to compete with. AI programs have already beaten world champions at chess and at Jeopardy, a sophisticated trivia game that relies on natural language.
This is the potential of AI to change the world. AI can be transformative in the fields of healthcare, science, education and so much more, in ways we cannot even conceive of today.
ChatGPT was released in November 2022 and showed what AI could do using Large Language Models (LLMs). It led to a flood of funding for AI companies. Nvidia, the global leader in the production of chips used for AI, is up 900% since January 2023. The iSTOXX AI Global Artificial Intelligence Large 100 Index is up 93% over the last two years.
In recent years, for the first time in human history, technology has been created that not only does not need to be taught, but is replacing humans in various blue collar and white collar jobs. Think about how we do our various jobs. We get an education, we learn on the job, and then use our experience and judgement to make decisions as needed as we take on more responsibility.
AI can do all of that itself, but much better due to the vast amounts of data it can parse far quicker than humans. AI’s processing speeds will only keep increasing as better chips are created. It is self-correcting as it learns from inputs it receives. It does not need vacations or sick days and will not unionize for better wages and working conditions.
The dream of people involved with AI is the merger of AI with quantum computers. That promises even more revolutionary change.
But there are huge problems with AI:
- A Canadian professor at the University of Toronto, Geoffrey Hinton, won the 2024 Nobel Prize in Physics (along with Princeton University researcher John Hopfield). Their work applying concepts from statistical physics allowed for the design of artificial neural nets that function as associative memories and find patterns in large data sets, which led to the development of AI.
Ironically, Hinton quit a role at Google to speak more freely about the dangers of the technology he helped create, “particularly the threat of these things getting out of control.” He now predicts that there is a 10-20% probability that AI will wipe out humanity within a decade.
- AI programs need high-quality data, and lots of it. Researchers found that without high-quality human data, AI systems trained on AI-made data get dumber and dumber as each model learns from the previous one. It's like a digital version of the problem of inbreeding.
- OpenAI's ChatGPT, which started the AI revolution in the public's mind, uses data it gleans from the internet and is being sued by multiple plaintiffs for copyright infringement, putting the viability of Large Language Models into question.
- The current AI business model has serious issues. It’s unsustainable, both financially and environmentally. Training large AI models requires an astronomical amount of computational power, which results in a massive negative environmental footprint. Many AI companies have yet to prove their technology can scale profitably. Potential doesn’t pay the bills. The focus has been on pumping out AI-driven features without thinking about long-term sustainability.
The January 2025 release of the Chinese DeepSeek LLM caused huge consternation, leading Nvidia to lose the largest amount of market cap of any US company in a single day. DeepSeek is open-source software that was developed (it is claimed) at a fraction of the cost of ChatGPT and uses one-tenth the energy.
Let the AI wars begin.