I have a major call center use case that requires transcribing two distinct voices: a call center representative and a customer. I need to group the transcribed text by each of these two people. The speaker_labels feature returns a list of time ranges, identifying the word in each range as belonging to one particular speaker. To aggregate the words into sentences and paragraphs, the service consumer must pick words out by timestamp and reconstruct the text for each speaker. This is a clumsy and error-prone task for the consumer. The service should instead provide blocks of text by speaker, eliminating this burden on the caller. This could happen in a couple of different ways:
1) label each set of words from a specific speaker inline, like reading a movie script, OR
2) list all the text for one speaker, then all the text for the next speaker, and so on.
Either 1) or 2) would be an improvement over the way speaker_labels output is provided today.
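To illustrate the reconstruction work the consumer is currently stuck with, here is a minimal sketch of the client-side merging in Python. The JSON below is a hand-made mock shaped like the Speech to Text response (speaker_labels entries carrying from/to/speaker, and each result's first alternative carrying [word, start, end] timestamps); the field names reflect the documented response, but the sample values and the script_view helper are invented for illustration.

```python
# Mock response shaped like the Speech to Text JSON (values fabricated).
response = {
    "results": [
        {"alternatives": [{"timestamps": [
            ["hello", 0.0, 0.4], ["how", 0.5, 0.7], ["can", 0.7, 0.9],
            ["I", 0.9, 1.0], ["help", 1.0, 1.3]]}]},
        {"alternatives": [{"timestamps": [
            ["my", 2.0, 2.2], ["order", 2.2, 2.6], ["is", 2.6, 2.8],
            ["late", 2.8, 3.1]]}]},
    ],
    "speaker_labels": [
        {"from": 0.0, "to": 0.4, "speaker": 0},
        {"from": 0.5, "to": 0.7, "speaker": 0},
        {"from": 0.7, "to": 0.9, "speaker": 0},
        {"from": 0.9, "to": 1.0, "speaker": 0},
        {"from": 1.0, "to": 1.3, "speaker": 0},
        {"from": 2.0, "to": 2.2, "speaker": 1},
        {"from": 2.2, "to": 2.6, "speaker": 1},
        {"from": 2.6, "to": 2.8, "speaker": 1},
        {"from": 2.8, "to": 3.1, "speaker": 1},
    ],
}

def script_view(response):
    """Return movie-script-style blocks as [(speaker, text), ...]."""
    # Index every transcribed word by its start timestamp.
    words_by_start = {}
    for result in response["results"]:
        for word, start, end in result["alternatives"][0]["timestamps"]:
            words_by_start[start] = word
    # Walk the speaker labels, joining consecutive words from the
    # same speaker into one block (a "turn" in the conversation).
    blocks = []
    for label in response["speaker_labels"]:
        word = words_by_start.get(label["from"])
        if word is None:
            continue  # no word matches this labeled time range
        if blocks and blocks[-1][0] == label["speaker"]:
            blocks[-1][1].append(word)   # same speaker: extend the turn
        else:
            blocks.append((label["speaker"], [word]))  # new speaker turn
    return [(spk, " ".join(ws)) for spk, ws in blocks]

print(script_view(response))
# → [(0, 'hello how can I help'), (1, 'my order is late')]
```

This matching by floating-point timestamps is exactly the fragile part: a consumer must assume the label's "from" time lines up exactly with a word's start time, which is why having the service emit speaker-grouped text directly would be far more robust.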
Since speaker_labels is still in beta, this would be an opportune time in the lifecycle of that feature to implement this improvement. I am more than willing to participate in the testing of an improved speaker_labels feature.
IBM Watson Health Implementations