AI is certainly the buzzword of the tech community – it’s everywhere. And for all the growing concerns about its potential for harm, there is good to be had from it as well.
Deepfaking
Generative Artificial Intelligence (AI) first came into my line of vision last year with OpenAI’s DALL·E 2, which creates stunning, creative, realistic images from typed prompts. As mind-blowing as it was, it failed to capture the imagination of those outside the tech space. That was less than a year ago…
Wind it on just a few months and another OpenAI product – ChatGPT – exploded. Suddenly generative AI was being talked about in mainstream media.
Then last week the Google I/O event was brimming with brilliant, innovative artificial intelligence ideas that almost overshadowed the hardware to come later in the show. Last week I highlighted one of the new platforms I found of great interest – Google’s MusicLM.
In that article, I highlighted the worrying implications of very soon being able to generate royalty-free music from text. Where this will leave artists and record labels, I guess only time will tell – but it is the future.
Within recent memory we have been through the web revolution, and we thought that would be it for a good while to come. What more could there possibly be, right? Then artificial intelligence popped up, almost from nowhere.
Hold on to your hats ladies and gentlemen – I think we are in for quite a ride!
In the slow lane
Apple has been accused of not responding to the current boom of artificial intelligence products and services.
Apple is a wise old fox, though, and knows there is absolutely no need to rush. Artificial intelligence won’t be going anywhere, and Apple is in this for the long game. We just know that at some stage they’ll come out guns blazing, but in the meantime they are happy to observe the lay of the land from a relatively safe distance.
Apple’s CEO Tim Cook was asked about the advance of ChatGPT and generative artificial intelligence directly during the company’s recent quarterly earnings call.
Sticking firmly to Apple’s caution-first mantra, he said he was “very interested” in the potential use of AI but went on to emphasise the need to “address a number of issues” and to be both “deliberate and thoughtful” in how artificial intelligence is used and applied. He concluded by saying that Apple would further develop AI “on a very thoughtful basis.”
Cook then defended Apple’s current use of artificial intelligence, pointing out that the company already leans heavily on it in several existing products and services, such as Fall Detection, Crash Detection and the ECG app on the Apple Watch. We also know from a recent article in The Information that, somewhere deep within Apple Park, teams are working hard on improving Siri’s shortcomings and on large language model improvements, which are due to surface next year.
The good side of AI
While I’m sure there will be dark days ahead when artificial intelligence faces abuse, there are also obvious and numerous positives to it as well – and yesterday Apple’s announcement highlighted just how positive they could be.
In a press release yesterday, Apple announced a raft of new software features centred on helping with cognitive, speech and vision accessibility. These features draw heavily on recent hardware and software advances and on-device machine learning. At release, there will also be tools aimed at assisting those who are at risk of losing their ability to speak.
Coming later this year, those with cognitive disabilities will be able to use their iPads and iPhones to become more independent with the help of Assistive Access, Live Speech and Detection Mode. Apple’s senior director of Global Accessibility Policy and Initiatives, Sarah Herrlinger, said:
“Accessibility is part of everything we do at Apple. These groundbreaking features were designed with feedback from members of disability communities every step of the way, to support a diverse set of users and help people connect in new ways.”
Assistive Access
The first tool, Assistive Access, is designed to help those with cognitive disabilities. Developed with feedback from many people who live with such disabilities, this set of tools calls on several existing apps but tailors them to better address cognitive challenges.
The Phone and FaceTime apps have been combined into a single Calls app, within which you’ll also find the Messages, Camera, Photos and Music apps. The feature has a distinct interface with high-contrast buttons and large text.
For those who find it easier to communicate visually, the keyboard can be set to emoji-only, and there is an option to record video messages as well. Users and those assisting them will be able to choose between a more visual, grid-based layout for the Home Screen and apps, or a row-based layout for users who prefer text.
Live Speech
This will help those who have trouble speaking: users will be able to type what they’d like to say and have it read out on phone and FaceTime calls. It has also been developed to work in person-to-person conversations.
For those who are losing, or have lost, the ability to speak, Live Speech will be invaluable, letting them save regularly used phrases and terms that can then be called up quickly and easily during live calls.
Artificial intelligence really comes into play, though, with another part of the Live Speech feature – Personal Voice. This stunning development creates a replica of the user’s actual voice.
Users create a Personal Voice by reading a set of randomised text prompts into their iPhone or iPad for around 15 minutes. Apple then uses on-device machine learning to build a copy of the user’s voice as friends and family remember it. And because the model is trained on-device, the information stays one hundred per cent private and confidential.
Detection Mode
This is the last of the artificial intelligence-driven features announced in Apple’s press release yesterday, and it has been developed for users with vision disabilities.
Tasks that you and I may take for granted, such as using a microwave or other household appliances, may not be as easy for everyone. Detection Mode will use the camera and LiDAR scanner on iPhone, and its Point and Speak feature will announce the text on each button as a user runs their finger over it.
Point and Speak has been developed even further: it is built into the Magnifier app and works hand-in-glove with VoiceOver. This will help users better navigate their surroundings through People Detection, Door Detection and Image Descriptions.
Wrapping up
All the tools that were announced yesterday to help celebrate this week’s Global Accessibility Awareness Day rely to a greater or lesser extent on artificial intelligence.
While not obvious headliners like ChatGPT or the platforms announced last week at Google IO, to those with any such cognitive disabilities, these could be potential life changers.
As I said, I am sure the wondrous and as yet undiscovered abilities of artificial intelligence will be abused by those determined to cause harm, but its advance should not be held back because of that minority.
There are many unanswered questions about how artificial intelligence will fit into our lives, where that will leave us, and what it will actually do – but that is for another discussion. What we need to focus on for now are the positives. And the benefits of artificial intelligence don’t get much more positive than those announced by Apple yesterday.