In the United States, healthcare providers spend an enormous amount of time performing administrative tasks—documenting patient care, making referrals, prescribing medication, and updating records.
To date, most healthcare technology doesn’t address this problem so much as transfer it to new devices and systems. But Harjinder Sandhu set out to find a technology solution that would enable practitioners to spend less time on computers and tablets, and more time helping patients.
The answer was Saykara, an AI assistant for healthcare providers that monitors patient visits and organizes doctor-patient conversations into key points and action items.
On this edition of UpTech Report, Harjinder discusses how this technology works and how it could change your next visit to the doctor.
More information: https://www.saykara.com/
TRANSCRIPT
DISCLAIMER: Below is an AI-generated transcript. There could be a few typos, but it should be at least 90% accurate. Watch the video or listen to the podcast for the full experience!
Harjinder Sandhu 0:00
In this country, physicians spend an inordinate amount of time on administrative tasks such as documenting patient care, interacting with EHRs to put in orders, and things like that. And it’s a huge, huge cause of physician burnout.
Alexander Ferguson 0:21
I’m excited today to be joined by Harjinder Sandhu. He’s up in Kirkland, Washington, CEO of Saykara. Welcome Harjinder.
Harjinder Sandhu 0:30
Thank you. It’s great to be here with you today.
Alexander Ferguson 0:32
Absolutely. So your product is an AI assistant for physicians. So if anyone’s watching and you’re a physician or a nurse practitioner and you’re wanting to automate charting, this may be a tool you want to look into. Now, a quote I noticed on your site from Dr. Meg Fitzsimmons: she said, “My charting is done within 30 seconds of leaving a patient’s room.” So I’m curious, when you started Saykara, what problem did you set out to solve? And how has that changed over these years?
Harjinder Sandhu 1:02
So the problem is basically this: in this country, physicians spend an inordinate amount of time on just administrative tasks such as documenting patient care, interacting with EHRs to put in orders, and things like that. And it’s a huge, huge cause of physician burnout, not to mention productivity loss and loss of revenue for the health systems. And it’s interesting, it’s universal across all specialties. Pretty much every physician you talk to says, you know, I just spend way too much time on my keyboard; I spend more time on my keyboard, actually, than I do seeing patients. And so what we’re doing by automating the documentation process, by listening in on doctor-patient conversations and creating those clinical notes automatically, is relieving physicians of that burden and allowing them to do what they really signed up to do, which is see patients and provide care.
Alexander Ferguson 1:55
I’m excited to dig in a little bit more in our conversation about the technology and how you do that. But first I want to ask, when you started Saykara... Kara, say Kara, yeah, Saykara, now I’ve got that right. I think it’s kind of like “Hey Siri”: you say “Kara.”
Harjinder Sandhu 2:11
Exactly. Kara is the name of the virtual assistant, and so you literally say “Kara” in order to ask her to do something.
Alexander Ferguson 2:17
So, Saykara. Let’s see, was this five years ago? Yeah, coming up on six years since you started. What’s one thing you wish you had known five, six years ago?
Harjinder Sandhu 2:28
How complex this space is. I’ve been in healthcare for a long, long time, so I should have known better. But, you know, healthcare is really complex, and there’s a lot of change going on. So I think, by and large, if you’re going into healthcare, you’d better go in eyes wide open and understand how complex this landscape is.
Alexander Ferguson 2:53
Well, I’m excited for our next episode in our segment, to talk about your journey and what it took to get there. So for those who want to listen to it, stay tuned for that one. But to come back to this solution that you’ve built: you understood the problem and you set out to solve it. Tell me, what does a typical use case look like? Maybe even talk about one of your customers in a daily activity.
Harjinder Sandhu 3:14
So, our app runs on an iPhone. Physicians carry it on their iPhone, and when they open our app, what they see is a list of their patients for the day; we import their patient schedule. They tap on a patient, and from that point on, different physicians can use it differently, but here’s a typical use case that we would see. The physician goes and sits with the patient, and they get the patient’s permission to turn our app on, because our app is going to listen to and record that conversation. So they turn it on, and from that point on it’s just listening, and it’s trying to interpret what’s going on in that conversation. Now, physicians can direct some of the conversation to Kara and say, “Hey Kara, just make a note of this or that,” or in other cases they can be completely passive and just let it listen and figure it out. And ultimately, by the end of that encounter, we will have captured what that conversation has been about, and our job is to get a clinical note that’s consistent with what happened in that doctor-patient visit into the electronic medical record.
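To make the workflow described above concrete, here is a minimal sketch in Python. It is purely illustrative, not Saykara’s actual code; all names (Encounter, import_schedule, and so on) are hypothetical stand-ins for the flow Harjinder describes: import the day’s schedule, record only after the patient consents, and turn the captured conversation into a draft note for the EHR.

```python
# Illustrative sketch only -- not Saykara's implementation.
# Models the flow described above: schedule -> tap patient -> record -> draft note.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Encounter:
    patient_name: str                      # hypothetical fields
    consent_given: bool = False
    transcript: List[str] = field(default_factory=list)
    note: str = ""

def import_schedule() -> List[Encounter]:
    """Stand-in for pulling the day's patient list from the EHR."""
    return [Encounter("Patient A"), Encounter("Patient B")]

def record_visit(encounter: Encounter, utterances: List[str]) -> None:
    """Only listen after the patient has given permission."""
    if not encounter.consent_given:
        raise PermissionError("Patient consent is required before recording.")
    encounter.transcript.extend(utterances)

def draft_note(encounter: Encounter) -> str:
    """Placeholder for the AI step that turns conversation into a clinical note."""
    encounter.note = "Visit summary: " + " ".join(encounter.transcript)
    return encounter.note

schedule = import_schedule()
visit = schedule[0]
visit.consent_given = True                 # physician asks permission first
record_visit(visit, ["Patient reports a cough for three days."])
print(draft_note(visit))                   # the note would then be sent to the EHR
```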
Alexander Ferguson 4:24
And if I understand correctly, your platform integrates natural language processing and artificial intelligence with human-assisted elements, so that it’s not just relying on a computer to figure it all out, which would not be good for accuracy in healthcare; it’s that combined assistance. Is that how it works?
Harjinder Sandhu 4:44
That’s correct. So the AI is doing its bit to try to interpret that conversation, and we do have a human in the loop to make sure that what we’re creating is actually clinically correct, because, as you say, you don’t want to put in spurious data, and ultimately the physician has to be the final reviewer anyway. But our goal from the outset has been to use this human-in-the-loop model to begin, then iterate toward a model where the human is doing less and less, and ultimately end up with a purely autonomous solution. And we’re getting there: there are certain parts of our system that don’t require human assistance anymore, and more and more the system can do automatically.
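One rough way to picture the human-in-the-loop model described here is sketched below in Python. It is an assumed structure, not the company’s code: the AI produces a draft with a confidence score, a human reviewer corrects low-confidence drafts, and the corrections are kept as training signal so the automated share can grow over time. The threshold and field names are assumptions.

```python
# Illustrative human-in-the-loop sketch -- assumed structure, not Saykara's code.
from dataclasses import dataclass

@dataclass
class DraftNote:
    text: str
    confidence: float          # 0.0-1.0, produced by the AI model (hypothetical)

REVIEW_THRESHOLD = 0.9         # assumed cutoff; lowered as the model improves
training_corrections = []      # corrections fed back to improve the model

def human_review(draft: DraftNote) -> str:
    """Simulated reviewer pass; in practice a person edits the draft note."""
    corrected = draft.text.replace("cof", "cough")
    training_corrections.append((draft.text, corrected))
    return corrected

def finalize(draft: DraftNote) -> str:
    # High-confidence drafts skip human review; the physician still signs off.
    return draft.text if draft.confidence >= REVIEW_THRESHOLD else human_review(draft)

print(finalize(DraftNote("Patient reports a cof for 3 days.", confidence=0.62)))
print(finalize(DraftNote("Patient reports a cough for 3 days.", confidence=0.97)))
```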
Alexander Ferguson 5:28
You’re investing in the future of the platform itself. The end user, the physician or nurse practitioner, doesn’t see a difference, but you’re investing the people hours and slowly reducing them as the technology advances. (That’s correct, yeah.) This is a very exciting thing to be getting closer and closer to. To be able to build this, have you built your own training models for it to detect certain elements, or are you using third-party or open source?
Harjinder Sandhu 6:00
So there are two parts to that answer. One is that we’ve built a lot of the core tech ourselves, and the really difficult part of this is really understanding what’s happening in a conversation. If you go in to see a doctor and you have a cough, you know, these days they might be worried that you have COVID, so they’re going to do a COVID assessment, and so forth. What our system tries to do is understand not only the words that are being said, which is what a traditional speech recognition system would do, but also the kind of care that’s being provided. So it understands that, hey, there’s a COVID assessment going on in this particular visit, and what are the elements that we need to capture in order to properly document this particular visit? How long has the cough been going on? Have you had any interaction with other people that are sick? And so on, and so forth. So it begins to build a model that says, here’s all the information I generally need to collect, based upon past experience, for this kind of a visit. And so, as I listen to this natural language, let me go hunting in this conversation for this information, to see if I can piece together the story of this particular visit. And so it does that. As I said, it has human assistance today to make sure it gets all of it, and all that human assistance allows it to get better and better. But the core tech, I would say, we built ourselves to get it to this point.
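One common way to implement the “hunting” idea described here is slot filling: for a given visit type, the system knows which pieces of information it generally needs and scans the conversation for them. The toy sketch below illustrates only that idea; the slot names and keyword patterns are assumptions, and a production system would use trained NLP models rather than regular expressions.

```python
# Toy slot-filling sketch -- illustrates "hunting" for expected information in a
# visit; real systems would use trained NLP models, not keyword patterns.
import re
from typing import Dict, Optional

# Assumed template: information generally needed for a COVID-style assessment.
COVID_ASSESSMENT_SLOTS = {
    "symptom_duration": r"(cough|fever).{0,40}?(\d+\s*(day|week)s?)",
    "sick_contacts": r"(around|contact with).{0,40}?(sick|positive)",
}

def hunt_for_slots(transcript: str, slots: Dict[str, str]) -> Dict[str, Optional[str]]:
    """Scan the conversation for each expected piece of information."""
    found = {}
    for slot, pattern in slots.items():
        match = re.search(pattern, transcript, flags=re.IGNORECASE)
        found[slot] = match.group(0) if match else None   # None -> still missing
    return found

transcript = ("Doctor: How long has the cough been going on? "
              "Patient: The cough started about 3 days ago. "
              "I was around a coworker who tested positive.")
print(hunt_for_slots(transcript, COVID_ASSESSMENT_SLOTS))
```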
Alexander Ferguson 7:24
How is this different from other options out there, maybe just voice dictation or other types of solutions?
Harjinder Sandhu 7:33
Yeah, so if you think about what’s been out there: I was part of Nuance Communications, which sells a product called Dragon, the leading speech recognition solution out there. Dragon does a really good job of capturing narrative. What that means is that if you speak, if you dictate, it’ll take that and translate it directly, word for word, verbatim into a note. And that’s what a lot of physicians use; they’re just dictating verbatim into the note. But what it doesn’t do is capture data. It doesn’t capture information and interpret that information. And what most medical record systems these days want is information, data as discrete data. So what physicians end up doing is a lot of pointing and clicking to put that data in. So that’s the big difference: what we’re doing is capturing discrete information and populating the medical record with that discrete information. The other side is, you know, there are a lot of physicians that just employ scribes, which are really high-cost people shadowing those physicians to take these notes. And so between those two extremes you have a solution like ours, which does ostensibly what a scribe would otherwise do, but in a much more automated and cost-effective way for physicians.
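The contrast drawn here, verbatim narrative versus discrete structured data, might look something like the following sketch. It is illustrative only; the field names and the EHR call are hypothetical, not a real EHR API.

```python
# Illustrative contrast: verbatim dictation vs. discrete structured data.
# Field names and the EHR interface below are hypothetical, not a real API.
from dataclasses import dataclass

# What a dictation product produces: one free-text narrative blob.
verbatim_note = ("Patient presents with a productive cough of three days' duration. "
                 "Prescribed azithromycin 250 mg daily for five days.")

# What modern EHRs increasingly want: discrete fields they can store and act on.
@dataclass
class MedicationOrder:
    drug: str
    dose_mg: int
    frequency: str
    duration_days: int

structured_order = MedicationOrder(drug="azithromycin", dose_mg=250,
                                   frequency="daily", duration_days=5)

def post_to_ehr(narrative: str, order: MedicationOrder) -> None:
    """Hypothetical stand-in for writing both the note and the order to the EHR."""
    print("Note text:", narrative)
    print("Discrete order:", order)

post_to_ehr(verbatim_note, structured_order)
```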
Alexander Ferguson 8:48
So for those who are looking for a better solution than just dictating and having to enter it later, and a much cheaper solution than actually paying for a full-time scribe, this fits nicely in between. Yeah. What’s the future for the company? If you had to paint a vision or share any part of the roadmap, what would you share?
Harjinder Sandhu 9:07
So I think the biggest thing, when I started the company, the bigger vision was that we were going to create a true clinical AI system for physicians. And what that means is, in the short term, we’re taking what doctors are saying and we’re passively listening in, right? So we’re listening in and we’re trying to create a note. We’re not interrupting the doctors in any way; we’re not saying, hey, look at me and do this, as you might imagine a virtual assistant would do. So our first step is just to be passive: create that document, that note, or put that order into the EMR, and let the physician review it and sign off on it. But as we get better and better at interpreting dialogue and understanding what’s going on in the encounter, and we have more and more historical information and information from the medical records to bring to bear, what we imagine, and we’re starting on some little pieces of this now, is that we’ll transition to being able to participate more actively in the process of delivering care: to start nudging physicians and say, hey, for this particular patient you really need to document some counseling on nutrition, for example, or this particular patient is due for their colonoscopy. Now, that patient might be there for an ankle sprain, and the rest of their care is the last thing on the mind of the physician. But that’s the kind of thing that a system that has all of the data available to it can start bringing to bear and say, hey, I know you’re focused on this thing, but there’s this bigger thing here that’s sitting there for that patient to be addressed.
Alexander Ferguson 10:36
And is that a real-time ability for it to suggest things, or is it more of a goal?
Harjinder Sandhu 10:41
As I said, we’re just starting to get into those kinds of suggestions. But yeah, that’s the idea: in real time, the doctor and patient are sitting there, and before the doctor starts seeing that patient, or while they’re sitting there, it can start suggesting things passively. The last thing we want, again, is to start interrupting with, hey, doctor, you should really change this. It’s much more passive than that; it’s just on the screen, and if they want to look over and say, hey, here’s some additional information I should incorporate into my dialogue, they’ll do that.
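The passive, on-screen nudges described here could be pictured as simple rules evaluated against the patient’s record before or during the visit. The sketch below is an assumed illustration only; the rules, thresholds, and data fields are made up for clarity, and real clinical decision support is far more involved.

```python
# Illustrative care-gap check -- assumed rules and fields, not Saykara's logic
# and not a clinical guideline implementation.
from dataclasses import dataclass
from typing import List

@dataclass
class PatientRecord:
    age: int
    years_since_colonoscopy: float
    visit_reason: str

def care_gap_suggestions(record: PatientRecord) -> List[str]:
    """Return passive, on-screen suggestions; nothing here interrupts the visit."""
    suggestions = []
    if record.age >= 45 and record.years_since_colonoscopy >= 10:
        suggestions.append("Patient appears due for colonoscopy screening.")
    return suggestions

patient = PatientRecord(age=52, years_since_colonoscopy=11, visit_reason="ankle sprain")
for tip in care_gap_suggestions(patient):
    print(tip)   # displayed quietly on screen for the physician to glance at
```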
Alexander Ferguson 11:11
If you had to share one thought or insight for a physician or a nurse practitioner, something they should know going forward from here, even apart from your solution, just how to be thinking about technology and using it going forward, is there any insight that you can share?
Harjinder Sandhu 11:29
Yeah, so you know, as I said, this is a documentation problem that virtually every physician, every care provider in the country is burdened with. The solutions are coming; I mean, they’re here. We have solutions like ours that provide something today that physicians can use, and they’re getting better and better. So I would say we’re not too far from a world where pretty much every physician is going to be using a solution like this. And so for all the physicians out there that find themselves very, very frustrated by sitting long hours in the evening trying to do their charting and typing away at their keyboards, the days of that are numbered, I think. And for the physicians that want a solution today, they can have it, and those solutions are getting better.
Alexander Ferguson 12:20
What’s the business model? Is it a monthly or yearly contract? How does that work?
Harjinder Sandhu 12:24
It’s a monthly service fee. So physicians pay for the month and they use it as much as they want for as many patients as they want in that month.
Alexander Ferguson 12:32
And where can folks go to learn more? And what’s the first step that they should usually take?
Harjinder Sandhu 12:37
So they can go to our website, Saykara.com, S-A-Y-K-A-R-A dot com. And usually the first step for physicians, what we say is, you know, try it out. You can try it out for a month and see if it works for you; there’s really nothing to lose in that process. We don’t actually even need to deal with your IT group in order to get started. You can just download the app, we set up an account, and there’s some work that we do to make sure that the system understands your clinical notes and the kinds of patients that you see. So actually, all of the investment in terms of the work is on our side. For the physician, it’s basically a 15-minute training that we do. And it can be as simple as, hey, turn it on and just start talking. But make sure you talk out loud, and, you know, if you’re doing a physical exam, we can’t see what you’re doing, so you have to tell the system what you’re doing. But aside from that, I mean, it’s really not that hard.
Alexander Ferguson 13:28
That concludes the audio version of this episode. To see the original and more, visit our UpTech Report YouTube channel. If you know a tech company we should interview, you can nominate them at UpTechReport.com. Or if you just prefer to listen, make sure you subscribe to this series on Apple Podcasts, Spotify, or your favorite podcasting app.