

AI-list: Chips and dips

Jen Jordan | May 26, 2023 | 3 min. read

AI is a rapidly developing sector in which Insight has invested more than $4B since 2014. We’re currently tracking 23,315 AI/ML startups and ScaleUps. Naturally, our team has seen a lot, and discussions around this topic generate a lot of opinions.

Here’s a look behind the curtain into some of what we’re reading, sharing, and discussing across the team at Insight lately. Think I missed something? Email me and tell me all about it.


Less is not more

Like every one of my middle school crushes, this week began with extreme enthusiasm, followed by extreme nerd outrage, over Meta’s LIMA (Less Is More for Alignment) model.

The concept of an effective language model that doesn’t need to be trained on massive amounts of data is appealing — it would open up incredible possibilities for open-source innovation and specialty use cases. Alpaca was another model making some noise along these lines earlier in the year:

The False Promise of Imitating Proprietary LLMs

…unfortunately, research published yesterday throws cold water on all of this. “An emerging method to cheaply improve a weaker language model is to finetune it on outputs from a stronger model, such as a proprietary system like ChatGPT (e.g., Alpaca, Self-Instruct, and others). This approach looks to cheaply imitate the proprietary model’s capabilities using a weaker open-source model.”

“Overall, we conclude that model imitation is a false promise: there exists a substantial capabilities gap between open and closed LMs that, with current methods, can only be bridged using an unwieldy amount of imitation data or by using more capable base LMs. In turn, we argue that the highest leverage action for improving open-source models is to tackle the difficult challenge of developing better base LMs, rather than taking the shortcut of imitating proprietary systems.” Brutal.

Generative AI Pilots Have Companies Reaching for the Guardrails

Via WSJ, chip maker Nvidia recently released a guardrails tool for companies anxious that employees may be using ChatGPT and other AI tools with proprietary or sensitive company data. The release helps developers set limits on what users can do with LLMs, “such as restricting certain topics and detecting misinformation and preventing execution of malicious code.” Per the article, Apple, Verizon, and other companies have restricted or banned access to AI tools like ChatGPT — but from our own study of the space, we see many big financial institutions finding ways to embrace AI tech to their own benefit.
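For the curious, here’s roughly what “restricting certain topics” looks like in practice. This is a minimal sketch, assuming the tool in question is Nvidia’s open-source NeMo Guardrails package; the model config and the blocked-topic flow below are made up for illustration, not taken from the WSJ piece.

```python
# Minimal sketch (assumption: the guardrails tool is Nvidia's open-source
# NeMo Guardrails package). Defines a "rail" that intercepts a sensitive
# topic and returns a canned refusal instead of calling the model.
from nemoguardrails import LLMRails, RailsConfig

# Colang flows pair example user intents with bot responses.
# The specific topic here is a hypothetical example.
colang = """
define user ask about proprietary data
  "Can you summarize our internal revenue figures?"

define bot refuse proprietary data
  "Sorry, I can't discuss confidential company data."

define flow proprietary data
  user ask about proprietary data
  bot refuse proprietary data
"""

# Illustrative model config; any supported engine/model could be used.
yaml = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(colang_content=colang, yaml_content=yaml)
rails = LLMRails(config)

# Prompts that match the blocked topic get the canned refusal rather than
# being passed through to the underlying LLM.
print(rails.generate(messages=[
    {"role": "user", "content": "Can you summarize our internal revenue figures?"}
]))
```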

Of course, there’s also always the perpetual question of bias in AI, which we expect to hear more about as responsible AI concerns grow louder with widespread adoption. And all this comes on the heels of Sam Altman’s testimony to Congress last week, with subsequent announcements from OpenAI about AI governance.

Chips

Neuralink just got FDA approval to launch their first-in-human clinical study. All snark and singularity commentary aside, chip implants could mean massive quality of life improvements and regained autonomy for those with disabilities.

Indeed, implanted chips have shown incredible, life-changing promise, at least in one recent case.

Dips

“Remember when software wasn’t connected to the internet? Didn’t think so.”

This is the savage intro for Microsoft’s blog post covering announcements from this week’s developer conference. Excuse me while I cry in elder millennial…I remember booting up my Gateway desktop (delivered in those cow-print boxes!) in MS-DOS to play Doom II from disk (an incredibly inappropriate game for a grade schooler to play, but that’s a topic for another time).

Anyway, Microsoft announced a multitude of AI-powered plugins integrated into their ecosystem of products. It’s not particularly elegant (hope you love chat bubbles), but the new Copilot stack caught our eye.

After the smashing success of the Clippy jumpers this past holiday season, I was truly hoping for a nostalgic resurgence of this iconic personality in Microsoft’s AI rollout — but it seems like these persistent, faceless bubbles will be the new norm.

Grill fodder

Bill Gates, have you even met Tony Stark? “Whoever wins the personal agent, that’s the big thing, because you will never go to a search site again, you will never go to a productivity site, you’ll never go to Amazon again,” he said this week at an AI event. Yeah Bill, when can I get Paul Bettany at my beck and call?

Another example of incumbents adding generative AI features. Adobe announced a beta of “generative fill” this week in Photoshop, and the demo is as slick as you would expect.

Of course, the internet had some fun with this news.


Say hi. Are you an Insight portfolio company integrating AI into your solution? I want to hear about it! Let’s connect.


Editor’s note: Articles are sourced from an ongoing, internal Insight AI/Data Teams chat discussion and curated, written, and editorialized by Insight’s VP of Content and Thought Leadership, Jen Jordan, a real human. (Though maybe not for long?)

Image credit: Bret Kavanaugh via Unsplash.