Like every single other person on Earth, I have been absolutely floored by recent advances in artificial intelligence, and I've spent a huge amount of time trying to keep abreast of everything that's happening. While clearing out the AI tweets I'd recently bookmarked, it occurred to me that I could provide a service by curating what I'm reading and turning it into a short weekly update.
My current plan is to try this out for a little while and, if there's interest, continue it indefinitely.
For background: I'm Trent Fowler, a machine learning engineer, AI enthusiast, and co-host of the Futurati Podcast. If you like this update, consider subscribing to our show on YouTube or wherever you get your podcasts.
***
Welcome to "This Week in AI", April 14th, 2023. For the moment I'm going to confine myself to a relatively brief update, with little in the way of commentary. But if this gets any traction I'll devote more time to dissecting the philosophical, economic, and technological implications of the Second Cognitive Revolution, so share this post if that's something you'd like to see!
My top recommendation for the week is our interview with security expert Jeffrey Ladish, "Applying the 'security mindset' to AI and x-risk".
Tweets
It's commonly argued that as AI advances it will spend relatively little time at human-level intelligence, but Matthew Barnett isn't so sure.
Most LLMs are closed-source, with LLaMA being a prominent exception. Cameron Wolfe digs into what this means for the broader generative AI ecosystem.
Keerthana Gopalakrishnan, mother of robots, announces the development of RL-powered robots able to sort trash and reduce food waste.
Posts
Putting LLMs into production.
ReAct prompting is a framework that interleaves an LLM's reasoning steps with actions, letting it use external tools and making its behavior easier to interpret.
"AutoGPTs could Transform the World At the Speed of A.I."
"Foundation models for generalist medical artificial intelligence"
Like the U.S., China is considering regulations around strong generative AI models.
Steven Landsburg tested GPT-4 on economics, and it failed miserably.
LangChain is making it easier to incorporate LLMs into software development (see the sketch after this list).
Sarah Constantin argues that AGI could be dangerous, but current-generation technologies pose no immediate risk.
Cohere has written a great set of posts on incorporating generative AI into applications.
ChatGPT makes anomaly detection in time-series data much easier (see the prompt sketch after this list).
Using the ChatGPT Code Interpreter plug-in to test different Python implementations.
Stanford is offering online classes on foundation models and LLMs.
Nathan Labenz, host of the Cognitive Revolution podcast, suspects that future GPT models will be economically transformative.
ResumeBuilder conducted a survey in February and found that half of the 1,000 companies it talked to are using generative AI.
Andrew Ng and Yann LeCun oppose the call for a halt to AI progress.
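Since a few of the links above touch on putting LLMs into production, here's a rough idea of what that looks like in practice with LangChain. This is a minimal sketch using the library's current Python API, not code from any of the linked posts; it assumes you've installed langchain and openai and have an OPENAI_API_KEY in your environment, and `summarize_ticket` is a hypothetical wrapper I made up for illustration.

```python
# Minimal sketch: wrapping an LLM call behind a small app-facing function
# with LangChain. Assumes `pip install langchain openai` and OPENAI_API_KEY.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(temperature=0)  # temperature=0 keeps outputs mostly deterministic

# The prompt lives in one place, so the rest of the app never sees it.
prompt = PromptTemplate(
    input_variables=["ticket"],
    template="Summarize this support ticket in one sentence:\n\n{ticket}",
)
chain = LLMChain(llm=llm, prompt=prompt)

def summarize_ticket(ticket_text: str) -> str:
    """Hypothetical app-level wrapper around the chain."""
    return chain.run(ticket=ticket_text)

print(summarize_ticket("Customer reports login fails after a password reset."))
```

The design point is simply that the prompt, model choice, and parameters are centralized in one chain, so swapping models later doesn't ripple through your codebase.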
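The anomaly-detection post above boils down to a simple pattern: serialize the series into a prompt and ask the model to flag outliers. Here's a toy sketch of that pattern using the openai library's ChatCompletion API; the data is made up by me, and this isn't the post's actual code.

```python
# Toy sketch: prompt-based anomaly detection on a short time series.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment.
import openai

readings = [10.1, 10.3, 9.9, 10.2, 42.7, 10.0, 10.4]  # made-up sensor data

prompt = (
    "Here are hourly sensor readings: "
    + ", ".join(str(r) for r in readings)
    + ". List the indices of any anomalous readings and briefly explain why."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # keep the answer stable across runs
)
print(response["choices"][0]["message"]["content"])
```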
Papers
OpenAGI is a framework for evaluating the reasoning and problem-solving capabilities of LLMs.
"Eight Things to Know about Large Language Models"
"ReAct: Synergizing Reasoning and Acting in Language Models"
"ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application"
"Generative Agents: Interactive Simulacra of Human Behavior" (a demo of the agents interacting with each other)
Videos
David Shapiro talks about AutoGPT and offers "heuristic imperatives" as a proposed solution to the alignment problem.
***
As I said, I'm keeping these first few editions brief. Please share this post and drop me a line if there's a change you want to see or something you think I should cover. If there seems to be real interest in this I'll devote more time and attention to it, so let me know if you find it valuable!