This Week in AI (03/31/2023)

March 31, 2023
Trent Fowler

Like every single other person on Earth, I have been absolutely floored by recent advances in artificial intelligence, and I've spent a huge amount of time trying to keep abreast of everything that's happening. While clearing out AI tweets I'd recently bookmarked, it occurred to me that I could provide a service by curating what I'm reading and turning it into a short weekly update.

My current plan is to try this out for a little while; if there's interest, I'll continue it indefinitely.

For background: I'm Trent Fowler, a machine learning engineer, AI enthusiast, and co-host of the Futurati Podcast. If you like this update, consider subscribing to our show on YouTube or wherever you get your podcasts.

***

Welcome to "This Week in AI," March 31st, 2023. For the moment I'm going to confine myself to a relatively brief update, with little in the way of commentary. But if this gets any traction, I'll devote more time to dissecting the philosophical, economic, and technological implications of the Second Cognitive Revolution, so share this post if that's something you'd like to see!

The big story, of course, is an open letter from the Future of Life Institute calling for a 6-month moratorium on training LLMs more powerful than GPT-4, signed by visionaries like Elon Musk, Steve Wozniak, and Tristan Harris. Eliezer Yudkowsky penned an opinion piece in Time arguing that this doesn't go nearly far enough, and that we may need an international agreement halting such experiments indefinitely.

In the meantime, people are using LLMs for code debugging, analytics, game programming, fiction writing, and dozens of other tasks. 

Tweets

At a White House press briefing, Peter Doocy asks whether we should be afraid that AI will kill us all.

Jason Abaluck on regulating AI.

Jacy Rees Anthis writes on some key questions to consider for digital minds.

Is it time for AI rights?

"Wolverine", a GPT-4 powered python debugger that can iteratively explain why your code is crashing.

Using AI to predict where users will look as they engage with your designs.

AI is the future of cybersecurity.

Abacus.ai is building a tool that will let users generate answers from their own knowledge base.

How will plugins impact the economics of development and the ability to run LLMs locally?

Sebastian Raschka gets into the weeds on how LLMs are trained and evaluated, offering a lens into LLM shortcomings (see also this earlier tweet).

Ethan Mollick compares working with Bing's AI to working with a Ph.D. student.

Using Replit and ChatGPT to create a dashboard for a business.

GPT-4 writes a 115-page fantasy novel (which is apparently pretty good).

Researchers at Microsoft want to give LLMs agency and volition. I'm not at all sure this is a good idea.

Databricks is building 'Dolly', a 'democratized' LLM.

Having ChatGPT recreate the classic video game Pong from the prompt "making the classic video game pong."

Replit is partnering with Google to enhance the use of generative AI in software development.

What are the next steps for LLMs? How far can they be scaled? Sebastian Raschka weighs in.

An AI tool that works in Excel to automate tedious tasks.

Asking ChatGPT how we could stop a powerful AI from becoming a paperclip maximizer.

Making a responsive lo-fi radio station with Replit, ChatGPT, and Midjourney.

How will collective stores of knowledge like Stack Overflow be impacted by the use of LLMs?

Once spreadsheets were introduced, there was talk of an apocalypse for bookkeeping jobs. What actually happened?

Prediction: in the future, every major artist will have trained a generative model on their corpus, and Spotify will dominate the space.

Richard Ngo points to an argument that LLMs can learn to reason causally.

LLMOps

You can now chat with ChatGPT directly over the phone.
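
One more note on the "Wolverine" item above: I haven't read its source, so here's just a minimal sketch of the loop a tool like it presumably runs, assuming the 2023-era openai Python client and an OPENAI_API_KEY in your environment (the function names are mine, not Wolverine's):

```python
# Hypothetical sketch of a Wolverine-style loop: run a script, capture the
# traceback, ask GPT-4 to explain the crash, and repeat after each fix.
# Assumes `pip install openai` (pre-1.0 client) and OPENAI_API_KEY set.
import subprocess
import sys

import openai


def explain_crashes(script_path: str, max_attempts: int = 3) -> None:
    for _ in range(max_attempts):
        result = subprocess.run(
            [sys.executable, script_path], capture_output=True, text=True
        )
        if result.returncode == 0:
            print("Script ran cleanly.")
            return
        with open(script_path) as f:
            source = f.read()
        # Hand GPT-4 the source plus the traceback and print its diagnosis.
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "Explain why this Python script crashed and suggest a fix."},
                {"role": "user",
                 "content": f"Source:\n{source}\n\nTraceback:\n{result.stderr}"},
            ],
        )
        print(response.choices[0].message.content)
        input("Apply a fix, then press Enter to re-run...")


if __name__ == "__main__":
    explain_crashes(sys.argv[1])
```

Wolverine reportedly goes a step further and applies the suggested fix itself before re-running, which is what makes the loop iterative.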

Posts 

"Generative AI set to affect 300mn jobs across major economies"

"We May be Surprised Again: Why I take LLMs seriously."

"'We will live as one in heaven': Belgian man dies by suicide after chatbot exchanges"

"Pausing AI Developments Isn't Enough. We Need to Shut it All Down"

"OpenAI faces complaint to FTC that seeks investigation and suspension of ChatGPT releases"

Papers

"What does it take to catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring"

"Teaching Algorithmic Reasoning via In-context Learning" (if done correctly, LLMs can be taught how to do accurate quantitative reasoning.)

Videos 

"Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast #367"

"Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast"

***

As I said, I'm keeping these first few editions brief. Please share this post, and drop me a line if there's a change you'd like to see or something you think I should cover. If there seems to be real interest, I'll devote more time and attention to it, so let me know if you find this valuable!

 
