AI Week in Review 23.03.25
Google Bard joins the LLM chatbot fight, and Bill Gates declares "The Age of AI Has Begun"
AI Tech and Product Releases
Google Bard is now available in limited release. Here’s how to join the waitlist. Some feedback suggests Google is letting Microsoft get ahead: there’s no API access, and early reports say it’s not as well-developed as Bing chat. But stay tuned, the race to build the best “Answer Engine” has just begun. And it’s not just the duopoly; innovative startups are in the race too, including you.com, perplexity.ai, and others.
Futurism notes a Bing and Google Bard ‘hall of mirrors’, in which both get fooled by ‘fake news’: a joke comment on Hacker News claiming Google Bard had been shut down was treated as real news. Wow, fake news fools the bots!
OpenAI announces ChatGPT plug-ins to “help ChatGPT access up-to-date information, run computations, or use third-party services.” This includes bringing in real-time sports scores, stock prices, the latest news, etc. via a web browser, attaching a Python interpreter, and allowing ChatGPT to perform actions (such as booking a table via OpenTable). A great step forward in answers with more truth-grounding, real-time relevance, and utility.
Jim Fan says: “If ChatGPT's debut was the "iPhone event", today is the "iOS App Store" event.” and compares this with LangChain, the open-source LLM integration alternative, as the “Android” option:
Note that we do have an "Android App Store" already - the open-source LangChain ecosystem, built by @hwchase17. Open-source ftw
One of those plug-ins is Wolfram Alpha. Stephen Wolfram announces ChatGPT Gets Its “Wolfram Superpowers”! It connects ChatGPT to Wolfram|Alpha as an expert subroutine:
Under the hood, ChatGPT is formulating a query for Wolfram|Alpha—then sending it to Wolfram|Alpha for computation, and then “deciding what to say” based on reading the results it got back.
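The flow Wolfram describes can be sketched as a simple tool-call loop. This is a hypothetical illustration with stand-in functions, not OpenAI’s or Wolfram’s actual API; a real plug-in would call the Wolfram|Alpha API over HTTP.

```python
# Hypothetical sketch of the plug-in flow: the LLM formulates a query,
# an external tool computes, and the LLM phrases the final answer
# from the tool's result.

def llm_formulate_query(user_question: str) -> str:
    """Stand-in for the LLM step that rewrites a question as a
    Wolfram|Alpha-style query (here, a trivial rewrite rule)."""
    return user_question.replace("What is", "").strip(" ?")

def wolfram_compute(query: str) -> str:
    """Stand-in for the Wolfram|Alpha call; replaced here by a
    tiny lookup table so the sketch is self-contained."""
    table = {"2+2": "4", "the population of France": "about 68 million"}
    return table.get(query, "no result")

def answer(user_question: str) -> str:
    query = llm_formulate_query(user_question)
    result = wolfram_compute(query)
    # The LLM "decides what to say" based on the result it got back.
    return f"The answer is {result}."

print(answer("What is 2+2?"))  # → The answer is 4.
```

The key design point is the division of labor: the language model handles natural-language understanding and phrasing, while the exact computation is delegated to a tool that is reliably correct.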
Many other plug-ins are available already, and the ecosystem is likely to explode from here.
AI Research News
A recent paper on GPT-4 in action touts some amazing capabilities. “Sparks of Artificial General Intelligence: Early experiments with GPT-4” is a 150-page paper from Microsoft Research testers of pre-release GPT-4 that catalogs many of its powerful capabilities. They explore its coding, math abilities, world-model understanding, human interaction and understanding, multimodal composition, and more. For example, they showed how GPT-4 can output text in SVG format to render a (crude but passable) picture of a unicorn. Most importantly, their exploration of how GPT-4 can call upon and use tools and subroutines is likely the path forward for many very powerful AI applications.
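The unicorn trick works because a drawing can be expressed as ordinary text markup. A minimal hand-written sketch of the idea (not GPT-4’s actual output, which is far more elaborate):

```python
# A drawing expressed as plain SVG text: the kind of output a text-only
# model can emit, which a browser then renders as an image.
def crude_figure_svg() -> str:
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" width="120" height="100">'
        '<ellipse cx="60" cy="60" rx="40" ry="25" fill="white" stroke="black"/>'  # body
        '<circle cx="95" cy="35" r="15" fill="white" stroke="black"/>'            # head
        '<polygon points="100,20 105,2 110,20" fill="gold"/>'                     # horn
        '</svg>'
    )

svg = crude_figure_svg()
# Save to a file and open in any browser to see the rendered shapes.
```

This is why SVG output is such a revealing probe: the model never “sees” the picture, yet must reason spatially about where the parts go.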
A large language model release from Huawei researchers. “PanGu-Σ: A Large Language Model With Sparse Architecture And 1.085 Trillion Parameters”:
Our experimental findings show that PanGu-Σ provides state-of-the-art performance in zero-shot learning of various Chinese NLP downstream tasks.
MIT researchers used machine learning to build faster and more efficient hash functions, which are a key component of databases. These learned models improved runtimes by 30%. It’s an example of machine learning and AI “eating software” by improving on prior algorithmic approaches.
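The intuition behind learned hashing: when the key distribution is learnable, a model approximating the keys’ CDF can map keys to buckets evenly, with fewer collisions than a generic hash. A toy sketch, assuming near-uniform integer keys and a simple linear model (real systems use more careful models and handle collisions explicitly):

```python
# Toy "learned hash": fit a linear model to the keys' empirical CDF
# on a sample, then map each key to a bucket through that CDF.

def fit_linear_cdf(sample):
    """Least-squares fit of position ~ a*key + b over sorted sample keys."""
    xs = sorted(sample)
    n = len(xs)
    ys = [i / (n - 1) for i in range(n)]  # empirical CDF values in [0, 1]
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    b = my - a * mx
    return a, b

def learned_hash(key, a, b, num_buckets):
    cdf = min(max(a * key + b, 0.0), 1.0)  # clamp model output to [0, 1]
    return min(int(cdf * num_buckets), num_buckets - 1)

keys = list(range(0, 1000, 7))  # near-uniform keys, a learnable distribution
a, b = fit_linear_cdf(keys)
buckets = [learned_hash(k, a, b, 16) for k in keys]
print(len(set(buckets)))  # → 16: the keys spread across all buckets
```

Because the model knows where keys actually fall, it can spread them evenly; a generic hash must assume nothing about the distribution, and the learned version can also preserve key order, which generic hashes cannot.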
AI Business News
NVIDIA’s GTC happened. See my other article on it.
AI Opinions and Articles
I agree with this: “AI isn’t close to becoming sentient – the real danger lies in how easily we’re prone to anthropomorphize it.” The author says:
The new chatbots may well pass the Turing test, named for the British mathematician Alan Turing, who once suggested that a machine might be said to “think” if a human could not tell its responses from those of another human.
But that is not evidence of sentience; it’s just evidence that the Turing test isn’t as useful as once assumed.
With AI, we are dealing with algorithmic language manipulation, world-building, reasoning, and creation (art, music, text). All of that looks a lot like how humans think and create, and therefore like ‘sentience’, but it isn’t. At the same time, this looks a lot like goal-post moving: we thought the Turing Test was THE test for AI, but LLMs are now getting close to passing it, just as they can do well on SATs and LSATs. I will go out on a limb and say that human-like sentience is not in the cards for AI in the near term or even the long term, because sentience is not that useful for AI as a tool, and it is likely incredibly hard to achieve, beyond any concept of reasoning that AI would need to become AGI.
In the article “AI chatbots with Chinese characteristics: why Baidu’s ChatGPT rival may never measure up”, Deakin University researcher Fan Yang explains how Chinese Communist Party censorship and control is colliding with AI innovation in China. China’s government is blocking use of ChatGPT, Baidu’s ERNIE Bot is available only to government-approved users, and heavy censorship has been placed on chatbots.
ERNIE Bot will not be a Chinese substitute for ChatGPT, but that might be how the Chinese state wants it. As earlier efforts to make Chinese AI chatbots have shown, the Chinese Communist Party prefers to maintain strict censorship rules and government steering of research – even at the cost of innovation.
On our Curmudgeon Watch: the AI-hostile article “The stupidity of AI” in The Guardian was dragged publicly on Twitter this week over the obviously wrong claim that there hadn’t been any new development in AI in decades. The author amended the major gaffe to say there was nothing new in “academic AI research” in decades. Still wrong!
Bill Gates declares “The Age of AI Has Begun” in a seven-page GatesNotes post. High-level and thought-provoking, it’s another declaration that compares this “AI moment” to similar inflection points in recent technology history.
“The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone.” - Bill Gates
A Look Back …
At NVIDIA GTC, Ilya Sutskever’s fireside chat covered how the insights behind AlexNet and, later, the LLMs came about. Ilya understood early on that a sufficiently large and deep neural network could characterize a problem well enough to solve it, so long as you had the right and sufficient inputs. That’s why AlexNet came about: the image dataset was so large that a large, deep neural network could be effective. That was the intuition, and his intuitions have paid off from that point through OpenAI’s development of the GPT large language models.