Today in Health IT, we're going to take some time to look at OpenAI and Google and their announcements around their AI assistants. It's been a week and a half and some of the dust has settled. We're still not looking at anything hands-on; we're really talking about, I don't know, vaporware, slideware, whatever, at this point. But they have a good track record of rolling this stuff out, so we should see it soon.
My name is Bill Russell. I'm a former CIO for a 16-hospital system and creator of This Week Health, a set of channels and events dedicated to transforming healthcare, one connection at a time. We want to thank our show sponsors who are investing in developing the next generation of health leaders: Notable, ServiceNow, Enterprise Health, Parlance, Certified Health, and Panda Health. Check them out at thisweekhealth.com/today.
All right, this story and all the news stories we cover can be found at thisweekhealth.com/news. Check it out. You can also sign up for our newsletter and get these news articles delivered directly to your inbox. There you go. All right, one last thing: share this podcast with a friend or colleague. Use it as a foundation for daily or weekly discussions on the topics that are relevant to you and the industry. It's a great way to mentor someone.
And I believe you're going to see announcements in the next 30 days from both of those organizations, Apple and Amazon, that are either catching up or identifying a different focus, one of the two. That tends to be how it goes: when you realize it's too hard to catch up, you say, hey, we are focused on this different area, and we're going to innovate in that area.
I think that's what you'll see from Apple. I think Apple will say, we're focused on the consumer and the consumer experience. And, I don't know, you'll see a supercharged Siri of some kind, but really what you're going to see is use cases built around how a consumer would use their phone. Alexa is a whole other story.
They've got Anthropic over there; who knows the direction they're going to go. But again, we're talking about consumer-level devices. And keep in mind that the majority of this article is focused on that specific topic: how is the world at large going to use these systems?
I might come back and touch on healthcare, but for today we're just going to look at these assistants. So here's what we have. Google and OpenAI announced they've built supercharged AI assistant tools that can converse with you in real time and recover when you interrupt them, analyze your surroundings via live video, and translate conversations on the fly. This is amazing. It really is the stuff of sci-fi.
Anyway, OpenAI struck first on Monday when it debuted its new flagship model, GPT-4o. You might read that as 4.0, but it's actually O as in omni, meaning it can take in audio, video, and other inputs. The live demonstration showed it reading bedtime stories and helping to solve math problems.
Then on Tuesday (and keep in mind this is from a couple of weeks ago now), Google announced its own new tools, including a conversational assistant called Gemini Live, which can do many of the same things. It also revealed that it's building a sort of do-everything AI agent, which is currently in development but will not be released until later this year. Soon you'll be able to explore for yourself and gauge whether you'll turn to these tools in your daily routine as much as the makers hope, or whether they are more like a sci-fi party trick that eventually loses its charm. Here's what you should know about how to access these new tools.
So they start with OpenAI's ChatGPT-4o. What is it capable of? The model can talk with you in real time, with a response latency of about 320 milliseconds, which OpenAI says is on par with natural human conversation. In all of the video demos that you saw, it was pretty interesting how quickly it could respond.
It can be cut off mid-sentence and recover, those kinds of things. You can ask the model to interpret anything you can point your smartphone camera at, and it can provide assistance with tasks like coding and translating text. It can also summarize information and generate images, fonts, and 3D renderings. How do you access it? The cool thing about this is it's free, so they are trying to democratize access to it.
Now, what you get for free and what you get when you pay for it are different things, not in terms of the responses, but in terms of the speed at which it responds and how much you can ask of it. How much does it cost? GPT-4o will be free, but OpenAI will set caps on how much you can use the model before you need to upgrade to its paid plans, which start at $20 per month.
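For the technically curious, the consumer ChatGPT app is how most people will touch GPT-4o, but the same model is also exposed through OpenAI's developer API. Here's a minimal sketch of what a text-plus-image call looks like with the official openai Python package; it assumes you have an API key set in the OPENAI_API_KEY environment variable, and the prompt and image URL are just placeholders.

```python
# Minimal sketch: calling GPT-4o through OpenAI's Python SDK with text plus an image.
# Assumes OPENAI_API_KEY is set in the environment and `pip install openai` has been run.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # the "omni" model discussed above
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize what's in this photo in one sentence."},
                # Placeholder image URL; swap in your own.
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

Note that API usage is billed separately from the ChatGPT free tier and the paid plans mentioned above.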
Next up, Google's Gemini Live. This is Google's product most comparable to GPT-4o, a version of the company's AI model that you can speak with in real time. Google says that later this year you'll also be able to use it to communicate via live video. The company promises it will be a useful conversational assistant for things like preparing for a job interview or rehearsing a speech. Gemini Live launches in the coming months via Google's premium AI plan, Gemini Advanced. How much does it cost?
Gemini Advanced offers a two-month free trial period and costs $20 per month after that. But wait, what's Project Astra? Astra is the do-everything AI agent that Google is aiming for. They demoed it at Google's I/O conference, but it will not be released until later this year. People will be able to use Astra through their smartphones and possibly desktop computers, but the company is exploring other options as well, such as embedding it into smart glasses or other devices. Interesting. Which is better? Hard to say, is what the article comes back with, because they don't have their hands on them yet. That said, if you compare OpenAI's published videos with Google's, the two leading tools look very similar, at least in their ease of use. To generalize, GPT-4o seems to be slightly ahead on audio, demonstrating realistic voices, conversational flow, and even singing.
Whereas Project Astra shows off more advanced visual capabilities, like being able to remember where you left your glasses. OpenAI's decision to roll out the new features more quickly might mean its product will get more use at first than Google's, which won't be fully available until later this year. It's too soon to tell which model hallucinates false information less often or creates more useful responses. And then they go on to talk about safety, which is obviously an incredibly important topic. Both OpenAI and Google say their models are well tested.
OpenAI has brought in 70 experts to test its model, and Google has tested its model for bias and toxicity. Both of these companies are building toward a future where AI models search, vet, and evaluate the world's information for us to serve up a concise answer to our questions. Even more than with simpler chatbots, it's wise to remain skeptical about what they tell you. In addition to this, this week you saw that xAI raised $6 billion, putting its valuation at $24 billion, and they have a gaggle of AI experts that they've brought in from various companies to build that out.
So I think you will see that play out. You also have Anthropic over at Amazon; Anthropic is an investment from Amazon, and you will see them continue to push the boundaries on this as well. And I'm sure we'll see others. I think the other interesting trend to follow here is going to be small language models.
I've got to get used to saying that; we've said large language models for a while now. But small language models, the models that are going to be embedded in chips, are what you're going to see embedded in your phone and those kinds of things. I think that's where Apple is going to go with their release in the coming weeks. They already have the hardware that can handle AI on the phone and on their other devices, their iPads and other things. I think you will see things start to get embedded, and you'll see a hybrid local/cloud kind of model, as sketched below.
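To make that hybrid local/cloud idea concrete, here's a rough sketch of the routing pattern: keep simple or sensitive requests on the device's small model and escalate harder ones to a cloud model. Everything here is hypothetical; the function names, the threshold, and the heuristic are placeholders for illustration, not anyone's announced API.

```python
# Hypothetical sketch of hybrid local/cloud routing for an AI assistant.
# None of these names refer to a real product API; they only illustrate the idea of
# keeping simple or private requests on-device and escalating hard ones to the cloud.

def run_on_device_model(prompt: str) -> str:
    """Placeholder for a small language model running on the phone's AI chip."""
    return f"[on-device answer to: {prompt!r}]"

def run_cloud_model(prompt: str) -> str:
    """Placeholder for a large cloud-hosted model."""
    return f"[cloud answer to: {prompt!r}]"

def looks_complex(prompt: str) -> bool:
    """Crude heuristic: long prompts or explicit reasoning requests go to the cloud."""
    return len(prompt.split()) > 50 or "explain" in prompt.lower()

def answer(prompt: str, contains_private_data: bool = False) -> str:
    # Keep private data local; send only complex, non-sensitive prompts to the cloud.
    if contains_private_data or not looks_complex(prompt):
        return run_on_device_model(prompt)
    return run_cloud_model(prompt)

if __name__ == "__main__":
    print(answer("Set a timer for 10 minutes"))                    # stays on-device
    print(answer("Explain the tradeoffs of on-device inference"))  # goes to the cloud
```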
You're also seeing that with the chips coming out now that support local and hybrid processing. You saw that with Microsoft's release this past week. This is an area, my gosh, I will try to keep you abreast of what's going on, because if you try to keep up on all the things that are going on in the AI world right now, it is really amazing.
I've never seen anything like it. In my career, nothing has ever moved this fast. I guess all the pieces are in place. We didn't see the internet take off this fast. We didn't see mobile phones take off this fast. We didn't see the PC take off this fast. We've seen nothing take off as fast, because almost all of those required more build-out, more infrastructure. And what you're seeing here is just a flat-out foot race to see who can get out ahead of this curve and stay out ahead of this curve.
So we will keep an eye on it. That's all for today. Don't forget to share. Oh, and I said I'd say something about healthcare. To be honest with you, a lot of what we're talking about today will end up with applications within healthcare, but I think it's still going to require us to think through our frameworks to allow for safe use of these models within healthcare. So it will be interesting to see.
All right, now that's all for today. Don't forget, share this podcast with a friend or colleague; use it as a foundation for mentoring. We want to thank our channel sponsors who are investing in our mission to develop the next generation of health leaders: Notable, ServiceNow, Enterprise Health, Parlance, Certified Health, and Panda Health. Check them out at thisweekhealth.com/today.
Thanks for listening. That's all for now.