AI-list: Chips and dips

AI is a rapidly developing sector in which Insight has invested more than $4B since 2014. We’re currently tracking 23,315 AI/ML startups and ScaleUps. Naturally, our team has seen a lot, and the discussions around this topic generate a lot of opinions.

Here’s a look behind the curtain into some of what we’re reading, sharing, and discussing across the team at Insight lately. Think I missed something? Email me and tell me all about it.


Less is not more

Like every one of my middle school crushes, this week began with extreme enthusiasm for Meta’s LIMA (Less Is More for Alignment) model, followed by extreme nerd outrage.

The concept of an effective language model that doesn’t need to be trained on massive amounts of data is appealing — it would open up incredible possibilities for open-source innovation and specialty use cases. Alpaca was another model making some noise along these lines earlier in the year:

The False Promise of Imitating Proprietary LLMs

…unfortunately, research published yesterday throws cold water on all of this. “An emerging method to cheaply improve a weaker language model is to finetune it on outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self-Instruct, and others). This approach looks to cheaply imitate the proprietary model’s capabilities using a weaker open-source model.”

“Overall, we conclude that model imitation is a false promise: there exists a substantial capabilities gap between open and closed LMs that, with current methods, can only be bridged using an unwieldy amount of imitation data or by using more capable base LMs. In turn, we argue that the highest leverage action for improving open-source models is to tackle the difficult challenge of developing better base LMs, rather than taking the shortcut of imitating proprietary systems.” Brutal.

Generative AI Pilots Have Companies Reaching for the Guardrails

Via WSJ, chipmaker Nvidia recently released a guardrails tool to help companies anxious about employees using ChatGPT and other AI tools with proprietary or sensitive company data. The release helps developers set limits on what users can do with LLMs, “such as restricting certain topics and detecting misinformation and preventing execution of malicious code.” Per the article, Apple, Verizon, and other companies have restricted or banned access to AI tools like ChatGPT — but from our own study of the space, we see many big financial institutions finding ways to embrace AI tech to their own benefit.
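Nvidia’s tool has its own configuration language, but the core idea — screening requests against disallowed topics before they ever reach the model — can be sketched in a few lines of plain Python. This is a toy illustration, not Nvidia’s API; the topic list and function name are invented for the example:

```python
# Toy pre-prompt guardrail: block requests that touch disallowed topics.
# BLOCKED_TOPICS and check_prompt are illustrative, not part of any real library.
BLOCKED_TOPICS = {"credentials", "source code", "customer data"}

def check_prompt(prompt: str) -> bool:
    """Return True if the prompt is safe to forward to the LLM."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

print(check_prompt("Summarize this press release"))        # True
print(check_prompt("Paste our customer data into a table"))  # False
```

Real guardrail frameworks layer on much more (semantic matching, output filtering, code-execution checks), but the basic pattern is the same: an allow/deny decision sits between the user and the model.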

Of course, there’s also always the perpetual question of bias in AI, which we expect to hear more about as responsible AI concerns grow louder with widespread adoption. And all this comes on the heels of Sam Altman’s testimony to Congress last week, with subsequent announcements from OpenAI about AI governance.

Chips

Neuralink just got FDA approval to launch its first-in-human clinical study. All snark and singularity commentary aside, chip implants could mean massive quality-of-life improvements and regained autonomy for those with disabilities.

Indeed, implanted chips have shown incredible, life-changing promise, at least in one recent case.

Dips

“Remember when software wasn’t connected to the internet? Didn’t think so.”

This is the savage intro for Microsoft’s blog post covering announcements from this week’s developer conference. Excuse me while I cry in elder millennial…I remember booting up my Gateway desktop (delivered in those cow-print boxes!) in MS-DOS to play Doom II from disk (an incredibly inappropriate game for a grade schooler to play, but that’s a topic for another time).

Anyway, Microsoft announced a multitude of AI-powered plugins integrated into their ecosystem of products. It’s not particularly elegant (hope you love chat bubbles) but the new Copilot stack caught our eye.

After the smashing success of the Clippy jumpers this past holiday season, I was truly hoping for a nostalgic resurgence of this iconic personality in Microsoft’s AI rollout — but it seems like these persistent, faceless bubbles will be the new norm.

Grill fodder

Bill Gates, have you even met Tony Stark? “Whoever wins the personal agent, that’s the big thing, because you will never go to a search site again, you will never go to a productivity site, you’ll never go to Amazon again,” he said this week at an AI event. Yeah Bill, when can I get Paul Bettany at my beck and call?

Another example of incumbents adding generative AI features. Adobe announced a beta of “generative fill” this week in Photoshop, and the demo is as slick as you would expect.

Of course, the internet had some fun with this news.


Say hi. Are you an Insight portfolio company integrating AI into your solution? I want to hear about it! Let’s connect.


Editor’s note: Articles are sourced from an ongoing, internal Insight AI/Data Teams chat discussion and curated, written, and editorialized by Insight’s VP of Content and Thought Leadership, Jen Jordan, a real human. (Though maybe not for long?)

Image credit: Bret Kavanaugh via Unsplash.

The AI-list: Mr. Altman goes to Washington



OpenAI CEO in “historic” move calls for regulation before Congress

It seems unlikely that lawmakers will be able to take any action on AI before it becomes entirely mainstream. Still, concerns arose this week when OpenAI CEO Sam Altman asked Congress for AI regulation “above a crucial threshold of capabilities.”

Via Axios, concerns that top the list include dangerous and harmful content, impersonation of public and private figures (remember Balenciaga Pope?), and, with 2024 already looming ahead of us: election misinformation.

Watch the whole testimony below while you’re clearing out your inbox (or writing an AI roundup that your boss now expects weekly).

Meta pulls the curtain back on its A.I. chips for the first time

After ending 2022 with barely a literal (virtual) leg to stand on, Meta has been dropping open-source AI innovations at a rapid pace in 2023. This week, they gave us a peek into the chip they’re using to power the Metaverse and generative AI technology. Meta dropped some juicy numbers behind their largest LLaMA model: LLaMA 65B contains 65 billion parameters and was trained on 1.4 trillion tokens.
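Those two numbers let you ballpark the training compute with the widely used ~6·N·D FLOPs rule of thumb for dense transformers (a community heuristic, not a figure Meta published):

```python
# Rough training-compute estimate using the common ~6 * N * D FLOPs heuristic,
# where N is parameter count and D is training tokens.
# The 65e9 / 1.4e12 inputs are Meta's published LLaMA 65B figures.
def train_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * n_params * n_tokens

flops = train_flops(65e9, 1.4e12)
print(f"{flops:.2e}")  # on the order of 5.5e23 FLOPs
```

That order of magnitude — hundreds of sextillions of floating-point operations — is why custom silicon and data center design are suddenly front-page news for Meta.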

The company also mentioned they developed an internal generative AI coding tool to help their developers work more efficiently, similar to GitHub Copilot.

Per CNBC, Meta has also been “overhauling its data center designs to focus more on energy-efficient techniques, such as liquid cooling, to reduce excess heat.” A theme — and an opportunity — we’ll see more of in the coming months is the incredible environmental toll of training AI models. Training one AI model is estimated to produce 626,000 pounds of carbon dioxide equivalent, “nearly five times the lifetime emissions of an average American car.”
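The “nearly five times” comparison checks out as simple arithmetic, using the widely cited Strubell et al. (2019) figures (roughly 626,000 lbs of CO2-equivalent for one large model with tuning and architecture search, against roughly 126,000 lbs for an average American car over its lifetime, fuel and manufacturing included):

```python
# Back-of-envelope check of the "nearly five times" comparison.
# Figures (lbs of CO2-equivalent) from the widely cited Strubell et al. (2019) study.
MODEL_TRAINING_LBS = 626_000   # one large NLP model, incl. tuning and search
CAR_LIFETIME_LBS = 126_000     # average American car, incl. fuel and manufacture

ratio = MODEL_TRAINING_LBS / CAR_LIFETIME_LBS
print(f"{ratio:.1f}x")  # just under 5x
```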

Investing Opportunities With Generative AI [Video]

What do investors want in an AI-driven business? Via Bloomberg, Insight’s very own George Mathew gave his take on what separates the hype from the real opportunity: access to private data for building AI products, a focus on user experience, and a workflow that matches how software is actually used within the industry. The combination of those aspects can make for a “very compelling” business.

Bits and bots

There’s an app for that. OpenAI dropped a ChatGPT iOS app this week, killing dozens (hundreds?) of third-party apps in the process. I’m sure Google is thrilled.

Meanwhile, the chatbot horserace continues. It seems the recent upgrade to PaLM 2 has given Bard a competitive advantage in head-to-head comparison.

AI long-form. Insight’s AI chat went on a tangent this week when asked for their key reads on “the impact AI could have on society and civilization.” Here are a couple of recommendations, if you’re looking for a good read during the upcoming Memorial Day weekend:

  • Of God and Machines
  • Superintelligence by Nick Bostrom
  • The Most Human Human and The Alignment Problem, both by Brian Christian (these are next on my list)

Weekend listening. Yann LeCun on Why Artificial Intelligence Will Not Dominate Humanity, Why No Economists Believe All Jobs Will Be Replaced by AI, Why the Size of Models Matters Less and Less & Why Open Models Beat Closed Models 

Cybersecurity is going to get a lot more difficult. And I *just* successfully explained what phishing is to my mom.

The internet can’t get enough of the Wes Anderson aesthetic. From TikTok trends to incredible AI-generated movie trailers like my personal favorite below (happy belated May 4th).


Editor’s note: Articles are sourced from an ongoing, internal Insight AI/Data Teams chat discussion and curated, written, and editorialized by Insight’s VP of Content and Thought Leadership, Jen Jordan, a real human. (Though maybe not for long?)

Image credit: Google DeepMind via Unsplash. “Neuroscience.” Artist: Chris Schramm