
Did You Say Know or No?

Advances in speech recognition technology are helping providers automate the dictation process.

At its inception, speech recognition technology (SRT) was offered up as yet another way to reduce healthcare costs by eliminating the need for medical transcriptionists through automation of the dictation process. The software soon revealed significant limitations, however, including the inability to recognize some accents and conversational nuances, such as the difference between know and no. Those issues have led some physicians to conclude that SRT may be more work than it's worth. But some new advances could have detractors changing their tune.

Many new SRT systems are real-time speech recognition applications that let physicians dictate in their own words, generating "once and done" documentation that they can dictate, edit, and review in succession. Most of the newer applications can recognize speech with up to 99% accuracy, thanks to specialized medical vocabularies covering about 80 specialties and subspecialties.

Mark S. Block, insurance chair and Medicare CAC/PIAC representative, immediate past president of the Florida Podiatric Medical Association, and coding committee chair of the American Podiatric Medical Association, says he never dreamed when he began using SRT that seven years later he'd feel handicapped without it. "Speech recognition is evolutionary. You have to work with it and be patient. It used to be a little tedious. But it's evolved into something far more sophisticated and instantaneous," says Block, whose use of Nuance Communications Inc.'s Dragon Medical has yielded transcription accuracy rates in the high 90s.

Even with SRT's advances, implementing the full range of tools available in most SRT applications can still be a daunting task. "There are a lot of tools nobody uses, and I don't know of anybody who uses all the tools," says Block. "As much as I use it, I still just use the basic tools. My main interest, and probably the main interest of most users, is to pick up the microphone and speak a sentence or page without a lot of follow-up editing," he says.

Block says he sees the future of front-end SRT, in which the provider dictates into a speech recognition engine and the words are displayed right after they are spoken, evolving to the point where artificial intelligence uses algorithms to determine the difference between commonly confused words (like know and no), further eliminating the need for extensive edits.
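
To illustrate the kind of context-based disambiguation Block envisions, here is a minimal Python sketch, not drawn from the article or from any vendor's product, that picks between the homophones know and no by checking which spelling better fits the surrounding words. The cue lists and example sentences are hypothetical; real engines rely on far richer statistical or neural language models.

    # Toy sketch of homophone disambiguation by context.
    # The cue word lists below are hypothetical illustrations, not a real
    # speech recognition vocabulary.

    CANDIDATES = ["know", "no"]

    # Nearby words that make each spelling more likely.
    CONTEXT_CUES = {
        "know": {"i", "you", "we", "they", "don't", "didn't", "to"},
        "no": {"history", "evidence", "allergies", "signs", "complaints", "there"},
    }

    def disambiguate(words, i):
        """Return the likelier spelling for the homophone at position i,
        scored by how many nearby words appear in each candidate's cue list."""
        window = {w.lower().strip(".,") for w in words[max(0, i - 2): i + 3]}
        return max(CANDIDATES, key=lambda c: len(window & CONTEXT_CUES[c]))

    if __name__ == "__main__":
        print(disambiguate("Patient reports no history of chest pain".split(), 2))  # prints "no"
        print(disambiguate("I know this patient well".split(), 1))                  # prints "know"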

"In an ideal world, there would be 100% accuracy from the moment you pick up the microphone, and even though accuracy has substantially improved, AI is not that sophisticated yet, if it's even in there at all. We're still a few years out, but it's not an impossibility anymore," says Block. In the meantime, he says, the best advice he can give to anyone using front-end SRT is "enunciate, enunciate, enunciate."

—Kathryn Mackenzie
