This Week in AI (03/24/2023)

March 24, 2023
Trent Fowler

Like every single other person on Earth, I have been absolutely floored by recent advances in artificial intelligence, and have spent a huge amount of time trying to keep abreast of everything that's happening. While clearing out recent AI tweets I bookmarked, it occurred to me that I could provide a service by curating what I'm reading and turning it into a short weekly update.

My current plan is to try it out for a bit and, if there's interest, continue it indefinitely.

For background: I'm Trent Fowler, a machine learning engineer, AI enthusiast, and co-host of the Futurati Podcast. If you like this update, consider subscribing to our show on YouTube or wherever you get your podcasts.


Welcome to the inaugural issue of "This Week in AI". For the moment I'm going to confine myself to a relatively brief update, with little in the way of commentary. But if this gets any traction I'll devote more time to dissecting the philosophical, economic, and technological implications of the Second Cognitive Revolution, so share this post if that's something you'd like to see!


Using the newly released ChatGPT plugins, @swyx has managed to build a simple app store.

Alexandros Marinos -- who became famous by taking controversial stances on the COVID-19 vaccines -- has loaded podcast transcripts into ChatGPT and had it highlight opinions that differ substantially from the mainstream. 

Having an LLM partially or fully automate basic tasks is an obvious next step. With getlindy, an AI executive assistant, this dream just got one step closer to being a reality.

Amjad Masad, of Replit fame, has long been singing the praises of LLMs as coding partners. A new version of Replit's Ghostwriter will now be able to write directly to your files (with your permission, of course).

Jim Fan, an AI scientist at NVIDIA, announced the deployment of NVIDIA's foundation-model-as-a-service (though I have doubts about the longevity of the FMaaS acronym). Now powerful models for a variety of applications, including text, images, and protein folding, are just an API call away!


"ChatGPT Gets Its “Wolfram Superpowers”!" - Stephen Wolfram has been at the forefront of physics and complexity theory for decades. As soon as he got hold of ChatGPT, he naturally began to speculate about supercharging it with access to his own Wolfram computational knowledge platform. The basic idea is that Wolfram's technology could help ameliorate some of the LLM's well-known shortcomings, such as its tendency to just make stuff up. A mere two months later he's succeeded, and the results are extremely promising!

"The Age of AI has begun" - Bill Gates has published a post claiming that AI will be as revolutionary as the internet. He draws on decades of experience building such revolutionary technologies to make the case, and I, personally, find it very persuasive.


"The Death of a Technical Skill" - This is from way back in 2020, and it examines the fate of Flash programmers when Steve Jobs announced Apple would no longer support Adobe Flash.

From the conclusion:

Our main empirical finding is that despite a large reduction in demand for Flash skills, wages changed very little: the supply of Flash workers proved remarkably elastic, with no discernible evidence of a decline in wages, in part because of rapid adjustment by the supply side of the market. The adjustment was rapid because workers were forward-looking about human capital choices.

I have a feeling that a lot of us are going to be facing a similar situation in the years ahead, and could learn from this analysis.  

"Sparks of Artificial General Intelligence: Early experiments with GPT-4" - One of the most remarkable aspects of ChatGPT is its versatility. It is able to generate text, carry on long conversations, translate between dozens of languages, and write kids' books, poems, code, games, and more.

This has naturally prompted some to wonder whether we might be glimpsing the beginnings of AGI. This paper addresses the question head-on.


"The Future of Work With AI - Microsoft March 2023 Event" - I've been amazed at the rapidity with which Microsoft jumped on the generative AI trend. In this demo, they show how they're integrating LLMs into their core offerings, and what that integration makes possible.

"How Will AI Change Ethics? with Pedro Domingos" - AI expert Pedro Domingos joins the Salem Center to discuss various issues at the intersection of AI and ethics, including bias, privacy, and the possibility of technological unemployment. 

"E12: Effective Accelerationism and the AI Safety Debate w/ Bayeslord, Beff Jezoz, and Nathan Labenz" - This episode of the Moment of Zen podcast features several members of the so-called 'effective accelerationist' movement. I found myself disagreeing with a number of their critiques of AI safety, but it was nonetheless extremely thought-provoking.


As I said, I'm keeping this first edition brief. Please share it and drop me a line if there's a change you want to see or something you think I should cover. If there seems to be a real interest in this I'll devote more time and attention to it, so let me know if you find this valuable!
