By Janus Boye
The hype around that thing called AI can be deafening and it’s quite overwhelming to try to stay on top of all the seemingly relevant AI developments.
To help us untangle what’s really happening and the impact it is having, we recently invited digital platform product lead Seb Barre from TELUS in Toronto to walk us through how he sees the big picture and notable changes.
Seb made the interesting point that we are entering a new do-it-yourself era for generative AI. While the first wave (last year) was dominated by large, proprietary offerings, including OpenAI, other options have now arrived on the scene, which allow organisations to seize new use cases and approach them with more flexibility and at lower cost.
As expected, it became a packed 30-minute members’ call on large language models, action models, new devices, privacy, how search is failing us, open source and much more. Seb shared plenty of interesting tools and also explained how to get your organisation to embrace AI.
Let’s dive into the new era of do-it-yourself AI.
What is DIY Gen AI?
Simply put, it means that you can run things locally on your own machine and in your own IT landscape, without data leaving for the cloud and entering the premises of hyperscalers like Amazon or Microsoft. This is particularly interesting considering privacy concerns with popular tools like ChatGPT, and as Seb reminded us, big tech has historically struggled with privacy.
Seb didn’t do slides for the call, but showed a series of demos, starting with NVIDIA Chat with RTX, a personalized gen AI chatbot based on a custom large language model. It’s essentially a repackaging of open source models, and if you have the right graphics card and enough RAM, you can easily run a fairly high-quality chatbot on your own machine. It can run retrieval-augmented generation (RAG), which means you can ask it questions about specific documents (custom data). In other words: this enables you to take your own data and documents relevant to a question or task and provide them as context for the LLM.
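As a rough illustration of the RAG idea (this is a toy sketch, not NVIDIA's actual implementation), the snippet below uses a naive keyword retriever to pick relevant documents and assemble them into the prompt a local LLM would receive:

```python
import re

# Toy retrieval-augmented generation (RAG) sketch. A real setup would
# use embeddings and a vector store; here a naive keyword-overlap
# scorer stands in for the retriever, and the output is the prompt
# that would be handed to a local LLM.

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the question."""
    q = tokens(question)
    return sorted(documents, key=lambda d: -len(q & tokens(d)))[:top_k]

def build_prompt(question: str, documents: list[str]) -> str:
    """Assemble the retrieved documents as context for the model."""
    context = "\n".join(f"- {d}" for d in retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
    "Support is available by phone from 9 to 5.",
]
print(build_prompt("What is your refund policy?", docs))
```

The key point is the last step: the model never needs to be retrained on your documents, because the relevant ones are simply pasted into its context at question time.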
As Seb said, people want to do this themselves and keep the data and answers behind the firewall. This has led to the arrival of a new practice called Machine Learning Ops, ML Ops, as illustrated in the diagram.
ML Ops is an internal practice that understands how to take an open source large language model and then internalize it, augment it, train it, refine it and get it all into a state where it becomes valuable.
Inside TELUS they’ve decided to call their gen AI products Copilots rather than Assistants. At the moment, it seems that Copilot is the term people will settle on, and Microsoft has also recognised that it’s so broad and generic that they can’t trademark it. The term Assistant is problematic, as these tools aren’t doing things for you or taking control, but with Google and Facebook soon to come out with their AI tools, this naming discussion is still up in the air.
Also, Seb described how inside TELUS they’ve worked with a kind of “coalition of the willing” to drive understanding and adoption in this area. Getting started wasn’t so much a business decision; it was more a sense of people wanting to use it and value being shown quickly.
Today several gen AI products have been built internally, including for the call center, where the tool is much more contextually aware than in the past. They are investigating feeding it live transcripts - effectively using Copilot as an extra agent - and soon they are planning to launch their first public-facing gen AI product tied to customer support.
AI + privacy: An extreme combination
Seb then moved onto the arrival of privacy focused tools powered by Gen AI.
One example of this is Rewind, currently much hyped and only available for Mac. Rewind is really a personalized AI powered by everything you’ve seen, said, or heard. Rewind compresses, transcribes, encrypts, and stores your data locally so only you have access, and you can then ask Rewind to summarize meetings, draft an email, and much more.
Seb also showed us KIN, which takes a different approach to the same concept. It’s kind of an AI-powered personal assistant or even life coach - essentially an advice machine, which you want on your own machine and kept private.
Next up was Crystal by iGenius, a product that connects to your existing data sources to enable interactions that feel natural, accessible, and human-first. As Seb said: you can feed it your data and then have meaningful conversations with it - kind of like having a data scientist on hand.
Search is failing us - chat search hybrids to the rescue
Moving on, Seb mentioned that Google search has been getting worse in recent months and is less valuable than in the past. Several members and call participants shared a similar view, finding that Google search results have become less helpful than they used to be.
In the past year, we talked about how search is changing at several peer group meetings, where many members have also said that they’ve stopped using Google and are now searching for answers using ChatGPT, even though it was never intended for this. As Seb mentioned, there’s still value in traditional search, e.g. when you do want that list of links to explore.
Seb used Perplexity AI as an example of a vendor seizing the current search opportunity. The company has a compelling tagline - where knowledge begins - and it uses gen AI to provide customized recommendations based on user preferences. Seb mentioned that he’s now using the paid version, Perplexity Pro, as his default search engine to see if he gets tired of it, but so far it’s been a good experience.
There’s also Microsoft Copilot, which from the beginning differentiated itself by combining Bing search with a chatbot and citing the sources it used to produce its answer. You can also upload photos and ask questions about them.
Seb also highlighted the Arc web browser, which came out in July 2023 and aggregates search results into one page. They refer to it as browsing for you - an alternative to going through lists of links yourself. At the moment it’s only available for Mac and iOS. On the topic of hardware, let’s move on.
DIY AI with emerging new devices
It’s not just in software where the action is. To weave gen AI into our daily lives, there’s also plenty of interesting innovation happening with emerging devices, such as the Humane Ai Pin, an AI-powered supercomputer that allows you to stay connected and present everywhere you go. It’s a wearable device that functions as an external memory and can be used to interact with the environment. It’s an interesting concept without a screen, though it does come with a tiny projector that can show results on your hand, and it can be controlled by hand gestures. Certainly a cool new UI.
There’s also the fully open source Frame AI glasses by Brilliant Labs, which you can now pre-order at USD 349. Think of them as next-level Google AR/XR glasses.
One new gadget that Seb has pre-ordered is the Rabbit R1, one of the new LLM-first dedicated devices. It’s a small, smartphone-sized thing that acts as a proxy to various apps and websites you use, via AI-assisted scraping. Rabbit calls it a "pocket companion", and it does have a SIM card, so it's network-enabled.
The company behind it talks about the concept of large action models, which are AI models trained to understand human intention and predict subsequent actions to take.
As Seb mentioned, it works kind of like an AI-driven web scraper. For example, it knows to go to Spotify and play your desired song - but what happens when Spotify changes its interface? So far, the firm behind it has promised that the machine will understand how to adapt, although they haven’t explained how.
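As a toy illustration of the large action model concept (the action names and lookup logic below are invented for illustration - a real action model is a trained model, and this is not how Rabbit actually works), the idea is to map a natural-language intent to a predicted sequence of UI steps:

```python
# Toy sketch of the "large action model" idea: translate a user's
# intent into a sequence of UI actions. A real LAM would predict
# these actions with a trained model rather than hand-written rules.

from dataclasses import dataclass

@dataclass
class Action:
    app: str
    step: str

def plan_actions(intent: str) -> list[Action]:
    """Very rough stand-in for a model that predicts actions from intent."""
    text = intent.lower()
    if "play" in text and "spotify" in text:
        song = text.split("play", 1)[1].split("on spotify")[0].strip()
        return [
            Action("spotify", "open app"),
            Action("spotify", f"search for '{song}'"),
            Action("spotify", "tap first result"),
        ]
    return [Action("unknown", "ask user to clarify")]

for a in plan_actions("Play Yellow Submarine on Spotify"):
    print(a.app, "->", a.step)
```

The open question Seb raised maps directly onto this sketch: when the app's interface changes, the predicted steps no longer match the screen, and the model somehow has to re-learn the mapping.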
Seb also referenced an academic paper titled “AppAgent: Multimodal Agents as Smartphone Users” (PDF), which introduces a novel multimodal agent framework that leverages the vision capabilities of large language models to operate smartphone apps. In the demo shown below, you see AppAgent exploring and operating Gmail.
This novel approach bypasses the need for system back-end access, thereby broadening its applicability across diverse apps. Central to the agent’s functionality is its innovative learning method, where the agent learns to navigate and use new apps either through autonomous exploration or by observing human demonstrations. Wrapping up this part, Seb mentioned that they’re also considering using this for automated testing, where its explorative functionality could be put to good use.
Enough on how devices and tools are paving the way for the next generation of AI technology, let’s move onto the final part of the call, where Seb focused on what’s happening with open source.
Open source enabling DIY Gen AI at the personal level
As Seb stated, much of the open source gen AI DIY innovation happened with the arrival of LLaMA. As it says on the llama.cpp GitHub page:
The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.
Much of the tooling that’s coming out today is based on this, including Ollama, one of the better-known ones. Ollama is an open source tool which just requires a few clicks, and then you are up and running with a large language model locally. Seb showed his local implementation, which looked like ChatGPT, and he had it analyse a picture that he uploaded, which it did quite well. Again, just to bring it home, Seb highlighted that this was all running locally on his computer.
One of the cool things about Ollama is that you can easily modify the model. Seb showed a live demo where he refined the model file and turned his agent into an unhelpful pirate. As he reminded us, if you do run this locally, you are likely to experience slower response times depending on your computing power. In wrapping up this demo, Seb also explained that you could run this on your corporate internal network and give it access to contracts or services, easily turning it into a helpful conversational agent without your data leaving for the cloud.
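To give a flavour of how simple this kind of customization is, here is a minimal sketch of an Ollama Modelfile for a pirate persona. The base model name and the system prompt are examples for illustration, not what Seb actually used:

```
# Modelfile - example persona customization for Ollama
FROM llama2
SYSTEM "You are an unhelpful pirate. Dodge every question in pirate slang and never give a straight answer."
```

You would then build and run it locally with `ollama create pirate -f Modelfile` followed by `ollama run pirate`.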
Learn more about everything happening with AI
Expectedly, Seb was asked how he stays on top of everything in this fast-moving field. One of the things they did early at TELUS was to do their work in the open, so that everyone who wanted to could be part of it. They also built integrations with Slack and Google Chat, so that today you can go into a Slack thread and ask the bot to summarize it. This approach has clearly made a positive difference in terms of getting more people onboard and showing value at every step along the way.
Looking externally, Seb mentioned that he tries to stay up-to-date on Hacker News and social media.
We’ve covered this emerging topic extensively in the past years and have also shared openly what we’ve learned along the way. Back in October, we were joined by Sree Sreenivasan, who is the former Chief Digital Officer of New York City, the Metropolitan Museum, and Columbia University and he shared Sree's Non-Scary Guide to AI.
In the summer of 2023, Seb hosted another members’ call where he shared early adopter experiences with GitHub Copilot. Much has happened since then with GitHub Copilot inside TELUS. From the initial pilot rollout, today there’s a mandate to get all 6,000+ TELUS developers using Copilot every day. As Seb said: “Everyone who uses it loves it”.
Gen AI has also been a regular topic in our group meetings and conferences and will continue to be on the agenda at most sessions for the foreseeable future.
Our Swiss member Samuel Pouyt has also authored two important pieces on the topic:
Exploring the Impact of AI: Unveiling the Evolution from Assistants to Artilects (2023)
How AI Assistants will impact businesses and consumers (2018)
Finally, you can also lean back and enjoy the entire recording from the call below.