AI Week in Review 23.05.27
Microsoft builds Windows Copilot and plug-ins, QLoRA fine-tuning, and Meta AI speaks 1,000 languages
Top AI Tools
This week’s top tool has to be Bing Chat, for bringing in plug-ins.
AI Tech and Product Releases
This week it’s Microsoft’s turn to take the announcement spotlight, as they rolled out many announcements at their Build conference. Some highlights:
Microsoft is adding AI to the Windows OS with Windows Copilot, offering as part of Windows 11 “centralized AI assistance to help people easily take action and get things done.” Microsoft is expanding its AI copilot to appear next to Windows 11 apps, giving people quick access to personalized answers and relevant suggestions, and helping them edit, summarize, create and compare documents.
Microsoft’s Azure AI Studio lets developers build their own AI ‘copilots.’ Extending the copilot concept to customized applications, Microsoft provides support in Azure to build and provision custom AI copilots.
ChatGPT and Bing come together, with Bing becoming the default search experience for an enhanced ChatGPT. “Microsoft has announced the integration of Bing Search into OpenAI's ChatGPT in order to provide more relevant and potentially new responses.”
Plug-ins come to Bing Chat: Microsoft introduced third-party plug-ins from companies such as Expedia, Instacart and Zillow to its Bing chat conversations, making it easier for people to take actions after receiving information.
Microsoft launches Fabric, a new end-to-end data and analytics platform. Their pitch is that Fabric is the one platform to do it all: “There’s a unified compute infrastructure; there’s a unified data lake. There’s a unified product experience … unified governance … a unified platform … and the unified business model. There’s literally just one thing to buy.”
Google Bard can now display images.
A new model, Falcon-40B, is leading the Hugging Face Open LLM leaderboard. It’s a 40B parameter causal decoder-only model trained on 1 trillion tokens of RefinedWeb. For its size, it is very powerful.
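For those who want to kick the tires, here is a minimal sketch of loading Falcon-40B through the Hugging Face transformers pipeline. The prompt and generation settings are illustrative assumptions, and a 40B model needs substantial GPU memory (the smaller Falcon-7B is an easier start):

```python
# A minimal sketch for trying Falcon-40B via Hugging Face transformers.
# Hardware settings are assumptions; this model needs multiple large GPUs.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="tiiuae/falcon-40b",   # public Hub repo id
    torch_dtype=torch.bfloat16,  # halves memory vs. float32
    trust_remote_code=True,      # Falcon shipped with custom modeling code
    device_map="auto",           # shard across available GPUs
)

output = generator(
    "Open-source LLMs are useful because",  # illustrative prompt
    max_new_tokens=50,
    do_sample=True,
)
print(output[0]["generated_text"])
```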
Last week, we talked about the Drag Your GAN AI research that was like “Photoshop on steroids.” Well, Adobe Photoshop is adding AI generation features to give Photoshop those steroids directly. Adobe’s Generative Fill AI feature lets you add, remove and extend visual content based on natural-language text prompts.
AI Research News
Researchers from Microsoft and UNC present Composable Diffusion (CoDi), a generative AI model that takes any combination of input modalities, such as language, image, video, or audio, and generates any combination of output modalities. Similar to recent results from Meta, you can input text and audio and generate images or video from them.
Several new results on fine-tuning LLMs are breaking yet more ground in doing more with less. QLoRA (Efficient Finetuning of Quantized LLMs) is an efficient finetuning approach that combines model quantization with Low-Rank Adapters (LoRA), reducing memory usage enough to finetune a 65B parameter model on a single 48GB GPU.
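To make the idea concrete, here is a minimal sketch of a QLoRA-style setup using the Hugging Face transformers, peft, and bitsandbytes libraries. The model name, LoRA rank, and target modules are illustrative assumptions, not the paper’s exact recipe:

```python
# A minimal QLoRA-style sketch: 4-bit quantized base model + LoRA adapters.
# Model name and hyperparameters below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "huggyllama/llama-7b"  # assumption: any causal LM works here

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-Rank Adapters (the "LoRA" part): only these small matrices are trained
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of parameters
```

The base model stays frozen in 4-bit NF4 form while only the small low-rank adapter matrices are trained, which is where the memory savings come from.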
The paper LIMA: Less Is More for Alignment showed the ability to fine-tune a 65B parameter LLaMA language model on only 1,000 carefully curated prompts and responses, yet achieve strong results. The authors conclude that “almost all knowledge in large language models is learned during pretraining, and only limited instruction tuning data is necessary to teach models to produce high quality output.”
The ultimate translator has arrived - Meta’s new AI models can recognize and produce speech for more than 1,000 languages. They retrained an existing AI speech model, extending it with New Testament text translations (in over 1,000 languages) and unlabeled New Testament audio recordings (in almost 4,000 languages).
AI Business News
An AI-generated fake photo of an explosion at the Pentagon went viral on Twitter, got retweeted by news outlets, and caused a brief $500 billion selloff in the stock market. One interesting take on this is that it’s AI-based retweeting and AI-based programmed trading that took cues from the fake photo, with no human in the loop to check the veracity of the original claim. This is a systemic vulnerability; don’t be surprised if more AI-generated ‘flash crash’ events happen.
Deloitte’s Big Bets On Google Generative AI, ‘Proactive’ Workforce Transformation:
“Google Cloud’s worldwide partner leader Kevin Ichhpurani told CRN that his cloud company is forming a tighter relationship with Deloitte to bring Google’s generative AI capabilities to enterprises in every industry.”
Making sense of AI research in medicine, in one slide. “At a recent AI conference, Atman Health chief medical officer and Brigham and Women’s associate physician Rahul Deo boiled the issue down in a single slide: the riskiest, most impactful studies draw far less attention these days than the rest of the research.” He makes a good point that research attention is concentrated in lower-impact areas.
Lawyer admits using AI for research after citing ‘bogus’ cases from ChatGPT. A New York lawyer used ChatGPT to help write a legal brief for a court case, and got burned when ChatGPT fabricated multiple fake case citations and then claimed they were real.
White House unveils new efforts to guide federal research of AI. The Biden administration released an updated Federal AI research plan, which includes greater emphasis on international collaboration with allies, and is asking for public input on critical AI issues.
AI Opinions and Articles
OpenAI’s leaders (Altman, Brockman and Sutskever) co-authored an article on Governance of Superintelligence. First, they shared a prediction:
Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.
Their key suggestions for dealing with “future AI systems dramatically more capable than even AGI” include:
Coordinate the leading development efforts to “maintain safety and help smooth integration of these systems with society.”
Have international oversight over superintelligent AI efforts. “Something like an IAEA” to inspect, audit, test for compliance, etc. on advanced AI, with an “agency focus on reducing existential risk.”
Solve AI safety: “We need the technical capability to make a superintelligence safe. This is an open research question.”
Don’t limit current and open-source AI capabilities: “It’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here.”
Governance of high-powered AI should have substantial public oversight.
In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive. … We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.
- OpenAI’s Altman, Brockman and Sutskever
A Look Back …
As AI has grown in importance, the Federal Government’s focus on AI has grown as well. One of the last acts of the Trump administration, in January 2021, was launching the National Artificial Intelligence Initiative Office, following up on establishing and funding NSF’s AI Research Institutes and DOE’s QIS Research Centers in 2020.
The Biden administration’s 2023 AI Research and Development Strategic Plan released this week is an update to the 2019 AI Research and Development Strategic Plan released by the Trump administration, which in turn was an update to the first National AI Research and Development Strategic Plan in 2016.
I'm still somewhat surprised that AI is making stuff up in place of accurate information. Fabricating fake court citations is fairly complex, compared with making up the name of the owner of a real business, for example.
For a baseline product, I would have thought guardrails to prevent these outputs would have been at the top of the list, unless someone specifically asks for fiction.
Or is this what invariably happens when the machines take over, and there's no way to fix it, and eventually we won't know or care?