If there’s anything we’ve learned from COVID-19 it’s this: when there are obstacles to leaving your home, equal access to digital content is not optional. Unfortunately, for the billions of people across the globe with disabilities, this is not a new revelation. When thinking about accessibility, physical accommodations are often the first to come to mind: it’s easy enough for anyone to grasp why a person in a wheelchair needs a ramp to enter a building, or curb cuts to cross the street. Difficulties navigating online content, however, are often forgotten.
As countries across the globe pivot their daily operations due to the pandemic, people are relying on digital content more than ever. Surveys show that, worldwide, people are watching 67% more news coverage and 51% more shows/movies on streaming services. Additionally, 14% more people are creating and uploading videos.
Between streaming news conferences for constantly changing health updates, taking classes, attending meetings, and binge-watching every TV show imaginable to pass the time during quarantine, everything is happening online. According to the World Health Organization, there are currently around 466 million people worldwide with some level of hearing loss. Many countries legally require government officials to include captioning, but not all captions are created equal.
Additionally, the lack of sign language interpreters during government briefings has been a huge issue. A Twitter campaign with the hashtag #WhereIsTheInterpreter has become a class action lawsuit in the UK. In the US, the National Association of the Deaf sent a letter to the White House pointing out that while nearly every state governor has an ASL interpreter by their side, the president does not. In New Zealand, on the other hand, there is a new TV channel that features interpreters as the main focus of the screen rather than off to the side.
So what can you do to help ensure your content is accessible? You can add sign language interpreters to your videos, for one. Second, you can provide accurate, high-quality captions.
Here we’ll talk about the different methods of captioning and how they are being used during the pandemic: CART, auto-generated, and manual.
What is CART?
When live streaming an event or conference, the best-case scenario is to have real-time captioning by a live captioner – also known as Communication Access Realtime Translation (CART) services. With CART, a professional is able to caption as conversations occur, usually with only a slight delay. Captioners are often given materials beforehand so they can prepare important names and other details that may come up, programming them as shortcuts ready to go. The captioner will also note when the speaker changes so that everyone can follow who is speaking and when. After the event, a complete transcript will be available. This transcript can then be edited for accuracy and uploaded as a caption file with the video recording.
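Caption files such as SRT pair each cue with start/end timestamps and, ideally, a speaker label so viewers can follow speaker changes. As a minimal sketch (the segment data, speaker names, and function names below are illustrative, not any captioning vendor’s actual tooling), converting timed transcript segments into SRT might look like this:

```python
def to_srt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments):
    """Render (start, end, speaker, text) tuples as an SRT caption file."""
    blocks = []
    for i, (start, end, speaker, text) in enumerate(segments, start=1):
        label = f"[{speaker}] " if speaker else ""
        blocks.append(
            f"{i}\n{to_srt_timestamp(start)} --> {to_srt_timestamp(end)}\n"
            f"{label}{text}"
        )
    return "\n\n".join(blocks) + "\n"

# Hypothetical transcript segments from a press briefing
segments = [
    (0.0, 2.5, "Governor", "Good morning, everyone."),
    (2.5, 6.0, "Governor", "Today's briefing covers new health guidance."),
]
print(segments_to_srt(segments))
```

The same tuples could just as easily be rendered as WebVTT; only the timestamp separator (`.` instead of `,`) and a header line differ.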
Where might you see CART services being used now?
Third-party integrations with Zoom are one place. Another is press conferences. With health updates and regulations changing so rapidly, it’s imperative that everyone is able to stay up to date on the latest news. Live captioning during these press briefings is a great way to do this. What could be more important during a global pandemic?
What is auto-generated captioning?
Auto-generated, computer-generated, AI – whatever you choose to call it, it’s all the same thing: captions generated by voice recognition and computer algorithms. One of the main benefits of this type of captioning is that, for the most part, it’s easy and free. A separate budget line for a captioner isn’t needed, and there isn’t much of a learning curve, either: you just hit a button and words start to be transcribed right before your eyes! Magical, right? Sure, but if you’re hoping to reach certain standards of accessibility, it’s not enough. The accuracy of the captions depends entirely on the audio quality, the speaker’s accent, the speed of conversation, and other variables. Auto-generated captions do not know what the speaker’s name is, or when someone else starts talking. So while it’s a definite improvement over not having any captions, it still leaves much to be desired. You can try it here: Automatic Subtitle Generator
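One common way to quantify how far auto-generated captions fall short is word error rate (WER), the standard speech-recognition accuracy metric: the word-level edit distance between the reference transcript and the machine output, divided by the reference length. A minimal sketch (the sample sentences are made up for illustration):

```python
def word_error_rate(reference, hypothesis):
    """WER: word-level Levenshtein distance divided by reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for edit distance over word sequences
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # cost of deleting all reference words up to i
    for j in range(len(hyp) + 1):
        d[0][j] = j  # cost of inserting all hypothesis words up to j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,            # deletion
                d[i][j - 1] + 1,            # insertion
                d[i - 1][j - 1] + substitution,
            )
    return d[len(ref)][len(hyp)] / len(ref)

# One inserted word against a five-word reference: WER = 1/5 = 0.2
print(word_error_rate(
    "the governor announced new guidance",
    "the governor announced new guidance today",
))
```

Production evaluations normalize case and punctuation first, but the core calculation is the same.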
Where might you see auto-generated captions popping up now?
Besides YouTube and Facebook, Google Meet is a big one. Captions are available for free during any video call by simply toggling the setting on or off throughout the meeting. Perfect? Absolutely not, but it's still been a helpful starting place for teachers to better reach students with hearing loss or learning disabilities. Captions generated by Google Meet cannot be saved for later use, however, without a third-party extension or workaround.
What is manual captioning?
One of the main methods of adding captions to pre-recorded content – and doing so with great accuracy! By listening to the content and transcribing as you go, you have the ability to pause and backtrack as needed, ensuring you capture exactly what is being said and how it is being said by adjusting the line breaks and timing. That said, it is certainly a more time-consuming process than auto-generating captions. For longer content, it’s often useful to begin with auto-generated captions and edit and improve from there to save time.
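Adjusting line breaks is partly mechanical: many captioning style guides cap lines at roughly 32–42 characters for readability. A minimal greedy word-wrap sketch (the 32-character limit here is an assumption; check the style guide for your platform):

```python
def wrap_caption(text, max_chars=32):
    """Greedily wrap caption text so no line exceeds max_chars."""
    lines, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= max_chars:
            current = candidate  # word still fits on the current line
        else:
            lines.append(current)  # start a new line with this word
            current = word
    if current:
        lines.append(current)
    return "\n".join(lines)

print(wrap_caption(
    "With CART, a professional is able to caption as conversations occur"
))
```

A human captioner would go further, breaking at natural phrase boundaries rather than purely by character count, which is exactly the judgment that makes manual captioning more accurate.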
Where might you see these types of captions?
Everywhere! From video clips shown during remote learning, to the ones on your social media feed. Chances are that if you’re watching a video and the captions are accurate, it’s been (at least partially) manually captioned.
For more on crafting quality captions, check out this article.