Like every single other person on Earth, I have been absolutely floored by recent advances in artificial intelligence, and I've spent a huge amount of time trying to keep abreast of everything that's happening. While clearing out the AI tweets I'd recently bookmarked, it occurred to me that I could provide a service by curating what I'm reading and turning it into a short weekly update.
My current plan is to try it out for a little while and, if there's interest, continue it indefinitely.
For background: I'm Trent Fowler, a machine learning engineer, AI enthusiast, and co-host of the Futurati Podcast. If you like this update, consider subscribing to our show on YouTube or wherever you get your podcasts.
***
Welcome to "This Week in AI", April 28th, 2023, where we keep track of the philosophical, economic, and technological implications of the Second Cognitive Revolution. Share this post if that's something you'd like to see!
My top recommendation for the week is Rob Bensinger's "The basic reasons I expect AGI ruin", which lays out a straightforward case for why AGI could go poorly for humanity.
Tweets
Michael Nielsen put together a tweet thread on the emergence of unexpected abilities in LLMs.
An anonymous TikToker recently put out a deepfake track imitating Drake and The Weeknd, which then went viral. Alt Man Sam weighs in on what this could mean for the future of music.
Yasser Elsaid has built a tool that allows you to add documents to a database and then "chat" with them.
FastRLAP is a reinforcement learning system that learns high-speed driving from a small amount of autonomous practice.
Automating basic data analytics with ChatGPT's Code Interpreter.
"The Little Book of Deep Learning"
PyCodeAGI is an agent that can build functioning apps using LLMs.
Dan Hendrycks enumerates what he sees as the problems with "effective accelerationism."
Text-to-audio generation with instruction-tuned LLMs.
Will future generative AI models be able to effectively handle complex, multi-agent dialogue?
Talk to AutoGPT agents directly in Telegram.
Palantir has demoed an AI agent that can help coordinate battlefield drones. (AI doomers are pissed.)
Are we about to automate prompt engineering?
Posts
Lamini is an engine for customizing LLMs for particular use cases.
Replit trained its own code-completion LLM.
Papers
Language is ambiguous, and that could be a problem for LLMs. "We're Afraid Language Models Aren't Modeling Ambiguity" explores this issue.
Imbuing LLMs with multimodality through modularity.
Meta-reasoning over chains-of-thought could supercharge question-answering systems.
A new survey paper examines the power and impact of ChatGPT.
How should we weigh the promise and the peril of powerful AI models? A paper from Stanford takes a look.
How good are LLMs at planning? Can we help LLMs do better? (A related question: should we?)
***
As I said, I'm keeping these first few editions brief. Please share this post and drop me a line if there's a change you'd like to see or something you think I should cover. If there seems to be real interest, I'll devote more time and attention to this, so let me know if you find it valuable!