Like every single other person on Earth, I have been absolutely floored by recent advances in artificial intelligence and have spent a huge amount of time trying to keep abreast of everything that's happening. While clearing out the AI tweets I'd recently bookmarked, it occurred to me that I could provide a service by curating what I'm reading and turning it into a short weekly update.
My current plan is to try it out for a little while and, if there's interest, continue it indefinitely.
For background: I'm Trent Fowler, a machine learning engineer, AI enthusiast, and co-host of the Futurati Podcast. If you like this update, consider subscribing to our show on YouTube or wherever you get your podcasts.
Welcome to "This Week in AI", April 14th, 2023, where we keep track of the philosophical, economic, and technological implications of the Second Cognitive Revolution. Share this post if that's something you'd like to see!
My top recommendation for the week is our interview with Zvi Mowshowitz, "Should we halt progress in AI?", but second place goes to this absolutely hysterical deepfake interview between Eliezer Yudkowsky and Lex Fridman (spoiler: Eliezer has found something besides alignment to care about...).
Will AI benefit low-performing or high-performing workers more? We're not sure yet, but Ethan Mollick presents some preliminary findings.
A one-stop shop for community-built LLMs.
Putting ChatGPT into your company Slack.
How can we enforce rules on the use of advanced AI?
Can GPT-4 do science?
BabyAGI-asi can execute arbitrary Python code. Should security professionals be concerned?
Turns out that a lot of the math behind training and using LLMs is fairly basic.
Text-to-video is getting insanely good.
GPT-4 is incredibly good at helping you navigate the idea maze.
Some people have argued that "emergent" abilities in LLMs are illusory. Those people are wrong.
Google and DeepMind are getting the ol' band back together with the announcement of Google DeepMind.
"The Complete Beginners Guide To Autonomous Agents"
Is GPT-4 an early AGI, or is it a mirage?
Some early ideas for a comprehensive US AI policy.
Should we jettison explanation and let AI help us with science?
Robin Hanson ponders reasonable and unreasonable AI fears.
"LongForm: Optimizing Instruction Tuning for Long Text Generation with Corpus Extraction"
"How Robust Is Unsupervised Representation Learning to Distribution Shift?"
"Sam Altman: Size of LLMs won’t matter as much moving forward"
"The Hacking of ChatGPT Is Just Getting Started"
Applying the "boosting" methodology to language prompting.
Gary Marcus weighs in on the next decade in AI.
A video walkthrough of Camel, BabyAGI, AutoGPT, and Camel LangChain.
LLM whisperer Riley Goodside talks about the prompt engineering revolution.
Robin Hanson famously disagrees with Eliezer Yudkowsky, and he lays out his reasoning in this interview with the Bankless Podcast.
Connor Leahy wants to emulate human cognition as an approach to developing robust, safe AI.
As I said, I'm keeping these first few editions brief. Please share this post, and drop me a line if there's a change you want to see or something you think I should cover. If there seems to be real interest, I'll devote more time and attention to it, so let me know if you find this valuable!
Share this episode.