Hello, Alexa: How voice is changing software development
IT teams need to adapt as voice rewrites development rules. Keep these five fundamental changes top of mind
IT leaders responsible for customer-facing products and services need to be more vocal. That doesn’t mean you need to be the loudest talker in a meeting. It means voice technology plays a growing role in how people find and use all manner of applications and services – and it’s time for IT to prepare for the related changes to software development.
It’s not just people picking up their phones and peppering Apple's Siri or Google Assistant (or any other voice-recognition tech) with queries. Rapidly growing adoption of all sorts of voice-assisted devices at home, from Amazon's Alexa to voice-enabled car apps, means voice as a UI channel no longer requires a phone of any kind.
Just last week, Amazon introduced a new wave of Alexa-equipped products, ranging from speakers to a microwave. More notably, it rolled out a chipset that enables gadget manufacturers to incorporate Alexa into their own products. As industry watchers have observed, Amazon isn’t as interested in selling the devices as it is in establishing its voice platform – with pathways leading back to Amazon retail services – as the de facto standard in consumers’ homes and even cars.
Consider this: comScore has predicted that by 2020, half of all online searches will be conducted by voice.
Is your organization listening? Voice capabilities are a growing consideration in software across a range of industries, especially when it comes to customer-facing apps.
“Our clients have requested – and prefer – that new applications have voice recognition capabilities,” says Ahmar Arshi, director of software development at alligatortek. “We are also seeing requests come in for old applications to be upgraded with voice-activated functionality so they can stand out in the market.”
Voice is headed toward table-stakes status in terms of how people interact with digital products and services.
“Voice is here – it’s prime time, and businesses know they need to add voice channels as customers expect it to be there today or very soon,” says Ed Price, director of compliance at Devbridge Group.
[ How will virtual reality change your business? Read our related article, 5 interesting AR/VR projects in action. ]
Here are five fundamentals that CIOs and other IT leaders need to keep top of mind as their teams adapt for an increasingly voice-enabled digital future.
1. A bad voice interaction is worse than none at all
Our phones and other devices might make us take voice UI for granted already. The problem arises when development teams assume that ubiquity translates to “easy.” A lousy voice interaction can do more harm than good.
“Voice-enabled applications and services are a whole new field of work that will require careful planning and careful field testing of any implementations,” says Lars Knoll, CTO at The Qt Company. “A badly done voice integration is probably leading to a worse user experience than not having one at all.”
2. GUI expertise is not VUI expertise
From a skills perspective, Arshi of alligatortek notes that it will be increasingly useful, if not required, for voice-oriented developers to understand the basics of pattern recognition and machine learning. And just as cloud architecture and development required developers to learn new platforms and tools, so will voice application development.
[ Which of today's IT roles are vanishing? Read our related article, 4 dying IT jobs. ]
One of the fundamental differences here is the front end. Paying careful attention to the user interface is critical to the experience (and to avoiding a lousy one, as noted above).
“The voice user interface (VUI) is a new paradigm which is different than graphical user interface (GUI) – the human brain and interaction model changes with different customer expectations,” Price says.
He notes, for example, that a strong VUI requires proper pacing and relevance in question-answer interactions – something we might take for granted in a text-based search but that’s hardly a given with voice. Both developers and end users have far more practice with the former than the latter (at least in person-to-machine interactions).
“In a GUI, the user’s eyes and mouse movements have been trained over several years of behavior,” Price notes. “Voice is still in its infancy.”
As a result, voice development is as much a product design challenge as an engineering problem.
“A good product designer skilled in voice is key to developing the patterns of interaction,” Price says. “After the use-case patterns of interaction with your voice skills have been defined and tested verbally – the actual software development is not unlike developing for GUI.”
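Price’s point – that once the interaction patterns are defined, voice development resembles GUI development – can be sketched in code. The parallel: just as a GUI maps button clicks to callbacks, a voice back end maps recognized intents to handler functions. This is a minimal, hypothetical illustration; the class and intent names are invented, not part of any real voice platform’s SDK.

```python
# A minimal sketch of intent routing: spoken intents map to handlers,
# much like GUI events map to callbacks. All names are illustrative.
from typing import Callable, Dict


class IntentRouter:
    """Maps spoken-intent names to handler functions."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[dict], str]] = {}

    def intent(self, name: str) -> Callable:
        """Register a handler for an intent, like binding a GUI event."""
        def decorator(fn: Callable[[dict], str]) -> Callable[[dict], str]:
            self._handlers[name] = fn
            return fn
        return decorator

    def handle(self, name: str, slots: dict) -> str:
        """Dispatch a recognized intent. An unknown intent gets a graceful
        reprompt -- recall point 1: a bad voice interaction is worse
        than none at all."""
        handler = self._handlers.get(name)
        if handler is None:
            return "Sorry, I didn't catch that. Could you rephrase?"
        return handler(slots)


router = IntentRouter()


@router.intent("CheckOrderStatus")
def check_order_status(slots: dict) -> str:
    order_id = slots.get("order_id", "unknown")
    return f"Order {order_id} is out for delivery."
```

The engineering is familiar; the hard part, per Price, is the verbal design and testing that decides which intents and responses exist in the first place.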
There’s also a forward-looking challenge that voice-enabled app developers will need to tackle: elegantly weaving VUI and GUI together in use cases where customers will increasingly expect to use either or both.
“A large challenge for the coming years will be to develop applications that combine voice and touch input with graphical and voice output in a seamless way,” says Knoll, The Qt Company CTO. “They will expect that they can combine voice and touch as input, and that voice input is intuitive and relates to the context they see with their eyes on the display.”
3. Mobile flashback: Get ready for platform decisions
IT shops building voice services will face a situation similar to the one they faced when they began delivering mobile apps: With so many devices and platforms, you’ll need to decide whether to develop independently for each platform (and prioritize among them) or take more of a one-size-fits-all approach.
“[It] is similar to developing for multiple mobile phone and tablet manufacturers,” Price says.
Or you can customize apps for particular platforms and devices for better control of user experience and other reasons.
“It’s a cost-benefit analysis,” Price says.
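One common middle ground in that cost-benefit analysis is a thin adapter layer: each platform’s request format is normalized into one internal shape, so the business logic stays platform-neutral while the adapters absorb the per-platform differences. The payload fields below are simplified stand-ins, not the actual Alexa or Google request schemas.

```python
# Hypothetical adapter layer for multi-platform voice services.
# Each adapter normalizes a (simplified, made-up) platform payload
# into one internal {"intent": ..., "slots": ...} shape.

def from_alexa(payload: dict) -> dict:
    """Normalize a simplified Alexa-style request (illustrative schema)."""
    req = payload["request"]["intent"]
    return {
        "intent": req["name"],
        "slots": {k: v["value"] for k, v in req.get("slots", {}).items()},
    }


def from_google(payload: dict) -> dict:
    """Normalize a simplified Google-style request (illustrative schema)."""
    return {
        "intent": payload["queryResult"]["intent"],
        "slots": payload["queryResult"].get("parameters", {}),
    }


def handle(normalized: dict) -> str:
    """Platform-neutral business logic, written once for all platforms."""
    if normalized["intent"] == "GetHours":
        return "We're open 9 to 5 on weekdays."
    return "Sorry, I can't help with that yet."
```

Adding a platform then means writing one more adapter rather than forking the whole service – the kind of trade-off Price’s cost-benefit analysis weighs against fully customized, per-platform experiences.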