I enjoy helping others get more from their technology than they realize is possible. Apple has historically done a great job of making hardware and software more accessible for differently-abled users. The pandemic has raised new challenges: with physical and social distancing having been the norm for the past fifteen months, how can a person with low or no vision maintain their personal space if they are unable to see others in proximity?
In concert with the infographic and upcoming video component, this audio segment offers a brief tutorial giving the learner (or an interested companion) a primer on turning on the Magnifier software feature, a necessary first step to activating the People Detection (Social Distance) feature in Apple's Accessibility settings. To meet course requirements for time, only the first portion of the audio is included on this site. The entire audio project ran longer than four minutes.
Project Two Key Learning Objective: To help the user of an Apple iPhone 12 Pro or iPhone 12 Pro Max running iOS 14.2 or later successfully begin setting up the People Detection software feature, the learner will hear the narrator describe the process and will listen to how the People Detection feature works in an inhabited area.
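As an aside for technically curious readers: Apple has not published how People Detection is implemented, but the general idea, segmenting people in the camera frame and estimating their distance, can be sketched with ARKit's public person-segmentation API. The sketch below is a minimal illustration of the concept, not Apple's code; the two-meter threshold and the spoken feedback are my own illustrative choices.

```swift
import ARKit
import AVFoundation

// Concept sketch only, not Apple's implementation: segment people in the
// camera frame and warn when the nearest one is closer than a threshold.
final class PeopleDistanceSketch: NSObject, ARSessionDelegate {
    private let session = ARSession()
    private let speaker = AVSpeechSynthesizer()
    private let warningDistance: Float = 2.0   // meters; an arbitrary choice
    private var lastAnnouncement = Date.distantPast

    func start() {
        // Person segmentation with depth requires an A12 chip or newer.
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) else { return }
        let config = ARWorldTrackingConfiguration()
        config.frameSemantics = .personSegmentationWithDepth
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Estimated depth (in meters) for pixels ARKit believes are people;
        // this sketch assumes the buffer holds 32-bit float depth values.
        guard let depth = frame.estimatedDepthData else { return }

        CVPixelBufferLockBaseAddress(depth, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(depth, .readOnly) }
        guard let base = CVPixelBufferGetBaseAddress(depth) else { return }

        let width = CVPixelBufferGetWidth(depth)
        let height = CVPixelBufferGetHeight(depth)
        let rowBytes = CVPixelBufferGetBytesPerRow(depth)

        // Scan for the nearest person pixel; zero means "no person here".
        var nearest = Float.greatestFiniteMagnitude
        for row in 0..<height {
            let rowPtr = (base + row * rowBytes).assumingMemoryBound(to: Float32.self)
            for col in 0..<width where rowPtr[col] > 0 && rowPtr[col] < nearest {
                nearest = rowPtr[col]
            }
        }

        // Announce at most once per second so speech doesn't pile up.
        if nearest < warningDistance, Date().timeIntervalSince(lastAnnouncement) > 1 {
            lastAnnouncement = Date()
            let feet = nearest * 3.28
            speaker.speak(AVSpeechUtterance(string: String(format: "Person about %.0f feet away", feet)))
        }
    }
}
```

The real feature layers haptics and tones on top of speech, but the basic loop, find the nearest person and report the distance, is the part this sketch illustrates.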
Project Includes:
Project Also Includes Non-Original Track
Swing City Medium backing instrumental (see below for licensing)
Swing City Medium.aif-- Link to End User License Agreement
Swing City Medium.aif-- Link to Using Royalty-Free Loops in GarageBand
Audio Recording - in-app
Audio Recording - imported track
Whole Track - volume adjustments
Individual Region - volume adjustments
Splicing Audio
Trimming Audio
Effect on Voice - adjusting mid-tones
Effect on Background Track - fade in + fade out (a programmatic analogue is sketched below)
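GarageBand handles all of the edits above through its GUI; there is no scripting hook into GarageBand itself. For readers who like seeing what's behind the curtain, though, the fade in + fade out effect has a programmatic analogue in Apple's AVFoundation framework. The sketch below is illustrative only: the file URLs and two-second fade lengths are placeholders, not values from my project.

```swift
import AVFoundation

// Illustrative analogue of GarageBand's fade in/fade out, done in code with
// an AVAudioMix volume ramp. URLs and fade lengths are placeholders.
func exportWithFades(from inputURL: URL, to outputURL: URL,
                     completion: @escaping (Error?) -> Void) {
    let asset = AVURLAsset(url: inputURL)
    guard let track = asset.tracks(withMediaType: .audio).first else {
        completion(nil)
        return
    }

    let duration = asset.duration
    let fade = CMTime(seconds: 2, preferredTimescale: 600)   // 2-second fades

    let params = AVMutableAudioMixInputParameters(track: track)
    // Fade in: ramp volume 0 -> 1 over the first two seconds.
    params.setVolumeRamp(fromStartVolume: 0, toEndVolume: 1,
                         timeRange: CMTimeRange(start: .zero, duration: fade))
    // Fade out: ramp volume 1 -> 0 over the last two seconds.
    params.setVolumeRamp(fromStartVolume: 1, toEndVolume: 0,
                         timeRange: CMTimeRange(start: duration - fade, duration: fade))

    let mix = AVMutableAudioMix()
    mix.inputParameters = [params]

    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetAppleM4A) else {
        completion(nil)
        return
    }
    export.audioMix = mix
    export.outputURL = outputURL
    export.outputFileType = .m4a
    export.exportAsynchronously { completion(export.error) }
}
```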
Wow, this was a tough assignment! I had used GarageBand before, which helped, but I still wasn't prepared for the layers of detail required to pull together an audio production. I knew what I wanted to communicate to the listener, and I felt confident in scripting that out and rehearsing it. I had NO idea how long that script would turn out to be when it was recorded-- more than three times the required length! I spliced down the People Detection recording, narrowing the pauses between speech and cutting some segments entirely, but that was insufficient. Considering the simulated audience for the project is made up of folks with no or low vision, speaking more quickly or leaving out detail didn't seem an effective tactic to shorten the project's length. In the end I opted to do a shorter 'part one' of what I would put out if I were creating this project for the target audience.
This portion of the project will do a reasonable job of supporting the Learning Objectives established at the beginning of the lesson. To do so more fully, the project would need to include a lengthier version of the audio. Honestly, if audio weren't its own section for the sake of learning audio skills, I would probably have included the whole audio portion as background to the upcoming video section rather than forcing audio and video into two separate modules.
I did learn some things along the way. I wanted to add a quick clip that signaled 'closure' to the audience, so I recorded that after the fact (see the Thanks for Listening section). The changes I'd made in settings along the way made for a much cleaner, brighter recording of that last track. I don't like the disparity between them, but... I'll know better next time. Also, just as it's important to exaggerate facial expressions when teaching, so students in the back correctly 'read' you, it's important to exaggerate vocal patterns a little too, for the same reason.