POW! HIT! People & Organizations improving Workflow w/Health Info Tech

@EHRworkflow Blog

This is a "sidekick" blog to my main blog, EHR Workflow Management Systems at ChuckWebster.com. There I publish long, complicated, thoughtful posts thousands of words in length. Here? Not so much. This blog is mostly for snippets of content too big for a tweet but too small to grace the main deck of the mothership.

If you're looking for my POW! HIT! Profiles, they're here (explanation and most recent) and here (alphabetical index). POW! HIT! stands for People and Organizations improving Workflow in Healthcare with Information Technology (or Ideas and Technology, depending on context). However, I'll write or post content relevant to POW! HIT! on this and the mother blog. Thanks for stopping by. Please leave a comment!

Chuck Webster MD MSIE MSIS

With degrees in Accountancy, Industrial Engineering, Computational Linguistics, Artificial Intelligence, and Medicine, Dr. Webster can see around corners. He designed the first undergraduate program in Medical Informatics, was CMIO for an EHR vendor, and wrote the first three winning applications for the HIMSS Davies Award for EHR Ambulatory Excellence. Chuck opines about healthcare workflow and related and unrelated topics from @wareFLO (#HIMSS13 Top Tweeter) and DMV: the District, Maryland, and Virginia.

My main blog is EHR Workflow Management Systems at ChuckWebster.com. Some posts are 5000 words or more! This blog is for content larger than a tweet, but smaller than a novella.

Dr. Nick of Nuance on Clinical NLP 4: Speech Recognition Error Rates?

4. What are typical medical speech recognition error rates? Cause for concern or manageable?

That's a common question, and one that a single figure does not really answer. That said, for many people out-of-the-box recognition accuracy is in the high 90s, and in many cases in excess of 98-99 percent. Even for those who do not achieve that accuracy out of the box, with training it is possible for most speakers to attain a high level of accuracy. With more dictation and correction to build a personalized, speaker-dependent model, you can even customize the engine to an individual's speaking and pronunciation style. Siri has achieved a level of accuracy and comprehension without having an individual voice and audio profile for the user: in the current version, anyone can pick up an iPhone and interact with Siri. Siri has helped paint a clearer picture of what is possible with speaker-independent recognition (in other words, speech recognition that does not require training) that has achieved a very high rate of success and acceptability.

Question 5 (Where is Nuance research heading?) and Dr. Nick's answer will be published very soon.
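To make "high 90s" concrete: recognition accuracy is conventionally reported as one minus the word error rate (WER), where WER counts substitutions, deletions, and insertions against the reference transcript. Here's a minimal sketch of the standard word-level edit-distance calculation (the example sentences are hypothetical, not from any actual recognizer output):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

wer = word_error_rate("patient denies chest pain", "patient denies chest pains")
print(f"WER: {wer:.0%}, accuracy: {1 - wer:.0%}")  # prints "WER: 25%, accuracy: 75%"
```

Note that one wrong word in a four-word reference already costs 25 percent, which is why real medical dictations (dozens of words per utterance) at "98-99 percent" accuracy still mean roughly one correction every few sentences.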

Follow @EHRworkflow to be sure not to miss it.

Feel free to catch up on earlier portions of this interview with Dr. Nick of Nuance:



Dr. Nick of Nuance on Clinical NLP: Introduction to Interview Series

Natural language processing (NLP) applied to medical speech and text, also known as Clinical Language Understanding (CLU), is a hot topic. It promises to improve EHR user experience and extract valuable clinical knowledge from free text about patients. In keeping with this blog's theme, NLP/CLU can improve EHR workflow and sometimes uses sophisticated workflow technology to span between users and systems.




Here's Dr. Nick without that pesky YouTube
play button (see later post in this series)
that turns everyone into an "Arrow Head"!

So I was delighted to Skype an interview with Dr. Nick van Terheyden, Chief Medical Information Officer at Nuance Communications. Warning: several of my questions were a trifle long...sorry! But I did want to explain where I was coming from in several instances. In every case Dr. Nick, as he is known, broke down complicated NLP/CLU ideas to their essentials and explained how and why they are important to healthcare.

I'll start with the tenth question, since, well, I promised Dr. Nick I'd do so.


10. Most of my previous questions are pretty "geeky." So, to compensate, from the point of view of a current or potential EHR user, what's the most important advice you can give them?

To me, the core issue is usability and interface design. The technology has struggled to take off in part because it has been complex, hard to master, and in many instances has required extensive and repeated training to use. The combination of SR [speech recognition] and CLU technology offers the opportunity to bridge the complexity chasm, removing the major barriers to adoption by making the technology intuitive and “friendly”. We can achieve this with intelligent design that capitalizes on the power of speech as a tool to remove the need to remember gateway commands and menu trees, and that doesn’t just convert what you say to text but actually understands the intent and applies the context of the EMR to the interaction. We have seen the early stages of this with the Siri tool, which offers a new way of interacting with our mobile phones, using the context of your calendar, the day and date, location, and other information to create a more human-like technology interface that is intuitive and less intimidating.


1. Dr. van Terheyden, I see references to you as Dr. Nick. How do you prefer to be addressed?

Thanks for asking – Nick is fine but many folks refer to me as Dr. Nick…so much easier than “van Terheyden.”

Question 2 (Tell us about you!) and Dr. Nick's answer will be published very soon.

Follow @EHRworkflow to be sure not to miss it.


  • Dr. Nick of Nuance on Clinical NLP 2: Education and Career?
  • Dr. Nick of Nuance on Clinical NLP 3: Speech Recognition Stereotypes?
  • Dr. Nick of Nuance on Clinical NLP 4: Speech Recognition Error Rates?
  • Dr. Nick of Nuance on Clinical NLP 5: Where is Nuance Research Going?
  • Dr. Nick of Nuance on Clinical NLP 6: Adding Value to Clinical Narrative?
  • Dr. Nick of Nuance on Clinical NLP 7: Important for Meaningful Use?
  • Dr. Nick of Nuance on Clinical NLP 8: Race Between Point-and-Click vs Free Text?
  • Dr. Nick of Nuance on Clinical NLP 9: Privacy vs Learning NLP Algorithms?

MModal.com on Language & Workflow 9: How Can Developers Leverage M*Modal Technology?

9. By the way, while the web page didn't come up in response to my "workflow" query, I stumbled across an M*Modal developer certification program. Which leads me to my final question. All of this workflow technology and language technology for improving efficiency and user experience? How do I, as a developer (and I are one), harness what you have created?

HCIT vendors can take advantage of M*Modal’s free Partner Certification Program. M*Modal Fluency Direct speech-enables electronic health records (EHR) and other clinical documentation systems by verbally driving actions normally associated with point-and-click, templated environments.

  • No cost to certify or for yearly recertification
  • Access to product development engineers
  • Access to product development documentation
  • Onsite engineering-focused, peer-to-peer training session
  • Featured on program website
  • Allowed to use a specialized certified logo
  • Co-marketing and marketing opportunities
  • Signage for tradeshows
  • Product labels and specialized documentation

How to Get Started

M*Modal has made certification as simple and smooth as possible. The certification process consists of an onsite Speech Enablement Workshop at no cost to the vendor. To get started, vendors simply visit www.mmodal.com/certification and register. We will follow up with you and provide additional information that will prepare you for the certification workshop.

Well now! I have to admit you nailed that last question. You even used bullet points. I've never, ever, had an interviewee who (er, which, that, you tell me, you've got all the grammar rules!), who did that before.

I appreciate all the time you've spent with me. I hope I didn't put too much of a strain on the web server. If anyone has any follow up questions, are you on Twitter?

Cool! I already follow you.

Well, that was my interview, about the future of language and workflow, with the MModal.com website. I'm sure you'll agree that it's remarkable.

Feel free to read the previous questions I asked of the new MModal.com website.

MModal.com on Language & Workflow 8: Really? Context and Pragmatics Are That Important?

8. I picked up a copy of Introduction to Pragmatics. It was a great review, since the last graduate course in pragmatics that I took was so long ago. And I read it! (I'm planning a blog post about the importance of pragmatics to EHR and HIT interoperability and usability.)

At the end of the book, in the summary, was this:

"Who could doubt that the world of artificial intelligence will soon bring us electronic devices with which we can hold a colloquial natural-language conversation? The problem, of course, is pragmatics. Not to slight the difficulties involved in teaching a computer to use syntax, morphology, phonology, and semantics sufficiently well to maintain a natural-sounding conversation, because these difficulties are indeed immense; but they may well be dwarfed by the difficulties inherent in teaching a computer to make inferences about the discourse model and intentions of a human interlocutor. For one thing, the computer not only needs to have a vast amount of information about the external world available (interpreting I’m cold to mean “close the window” requires knowing that air can be cold, that air comes in through open windows, that cold air can cause people to feel cold, etc.), but also must have a way of inferring how much of that knowledge is shared with its interlocutor."


"Thus, the computer needs, on the one hand, an encyclopedic amount of world knowledge, and on the other hand, some way of calculating which portions of that knowledge are likely to be shared and which cannot be assumed to be shared – as well as an assumption (which speakers take for granted) that I will similarly have some knowledge that it doesn’t. Beyond all this, it needs rules of inference that will allow it to take what has occurred in the discourse thus far, a certain amount of world knowledge, and its beliefs about how much of that world knowledge we share, and calculate the most likely interpretation for what I have uttered, as well as to construct its own utterances with some reasonable assumptions about how my own inferencing processes are likely to operate and what I will most likely have understood it to have intended. These processes are the subject of pragmatics research."

In his recent interview, Juergen Fritsch, Chief Scientist at M*Modal, said "At M*Modal we have been focused on pragmatics since the very beginning". Could you expand on his comments?

You would be justified in suspecting that the answer to this question is not to be found on MModal.com. However, Wikipedia says "Pragmatics is a subfield of linguistics which studies the ways in which context contributes to meaning."

"Context" occurs 48 times on MModal.com. For example:

  • "Healthcare Challenges and Context-Enabled Speech"
  • "the real context and meaning behind a physician’s observations"
  • "Context-specific patient information — from prior reports, EHRs, RIS, PACS, lab values, pathology reports, etc"
  • "providing real understanding of context and meaning in the narrative – not simply term matching or tagging"
  • "combine […workflow management…] with Natural Language Understanding to bring context to text"
  • "Enabling physicians to populate the EHRs with color, context and reasoning without changing their established workflow"
  • "context-aware content that is codified to standardized medical lexicons, such as SNOMED®-CT, ICD, RadLex®, LOINC, and others"

I love the connection between context and workflow. I've written about that too. But my point here is: if pragmatics is about context, and M*Modal is about context, then M*Modal is about pragmatics too. I won't go any further into the subject of the importance of pragmatics to healthcare workflow. I'm planning a future blog post about the importance of discourse, reference, speech acts, implicature, intent, inference, relevance, etc., to EHR interoperability and usability.

In our interview, when Juergen said "You are absolutely right that without pragmatics we’d never be able to accomplish what we’re trying to with NLP technology," what did he mean?

"The context of any natural language statement is extremely important for the correct semantic understanding. It is not sufficient to identify a key clinical concept like ‘pneumonia’ in a statement like ‘Two months ago, the patient was diagnosed with pneumonia, which turned out to be a mis-diagnosis.’ Pragmatics (context, really) informs us that the statement is about the patient, that it is about something that occurred two months ago, and that it was a false diagnosis. Without a level of pragmatics, we would completely misinterpret that statement."
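Juergen's "pneumonia" example can be made concrete: a concept match alone is misleading until nearby cue phrases for negation and past time are checked, which is the idea behind NegEx-style rules used in clinical NLP. Here's a toy sketch (the cue lists and sentence are illustrative only, not M*Modal's actual method):

```python
import re

# Toy cue phrases. Real systems use far larger, curated cue sets
# and scoped matching, but the principle is the same.
NEGATION_CUES = ["mis-diagnosis", "misdiagnosis", "ruled out", "no evidence of"]
PAST_CUES = [r"\btwo months ago\b", r"\blast year\b", r"\bhistory of\b"]

def interpret(sentence: str, concept: str) -> dict:
    """Concept spotting plus pragmatic cues: is the concept present,
    negated, and/or historical rather than current?"""
    s = sentence.lower()
    return {
        "concept_found": concept in s,
        "negated": any(cue in s for cue in NEGATION_CUES),
        "historical": any(re.search(pattern, s) for pattern in PAST_CUES),
    }

result = interpret(
    "Two months ago, the patient was diagnosed with pneumonia, "
    "which turned out to be a mis-diagnosis.",
    "pneumonia",
)
print(result)  # concept found, but flagged as negated and historical
```

A bare term-matcher would tag this patient with pneumonia; the cue checks show why context flips that conclusion, which is exactly the misinterpretation Juergen warns about.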

Question 9 (How can software developers leverage M*Modal technology?) and MModal.com's answer will be published very soon.

Follow @EHRworkflow to be sure not to miss it.

Feel free to read the previous questions I asked of the new MModal.com website.
