Braille Monitor                          August/September 2018


Directing Big Data and Technological Innovation: Perspectives on the Importance of Leadership by the Blind

by Chancey Fleet

From the Editor: Our history dictates much of what we believe; it is the glue that holds us together and a significant force guiding our policies and priorities: the shared experiences that come to make up the philosophy of the National Federation of the Blind. But the future isn’t just history repeating itself. We are challenged to supplement our history with the changing demands and opportunities of today and to let those experiences shape our policies as far into the future as we can meaningfully speculate.

In this address, delivered on July 6, 2018, Chancey speaks to the proven techniques that have been a part of our independence, to the new technology that can either expand or limit it, and to the necessity of ensuring that it does the former rather than the latter. With her firm philosophical understanding, her impressive grasp of current technology, and the gift she has for blending all of this into something that speaks to us all, here is what she said:

We come to convention every year to gather the wisdom we’ll need to direct the course of the year to come: in our own lives, in the support we give to each other, and in the guidance and mentorship we offer to those who would like to be trusted as our allies. That’s a big job to fit into a week.

Twenty-five years ago today, I believe, in Dallas, Texas, Dr. Kenneth Jernigan spoke to our convention about “The Nature of Independence.” Some students had written to him asking why, as a leader in the Federation and a proponent of cane travel skills, he had been noticed moving through the hotel with a guide. Dr. Jernigan explained that, while skills like cane travel and fluency in Braille are key, the core meaning of independence is the ability to go where you want when you want without inconvenience to yourself or others, and to get the information you need with a minimum of inconvenience or expense. Our independence comes from within, he said, and it depends on our self-respect, our confidence, our will, and our ability to make choices. As we consider how technologies shape our lives and how we might shape technology through research, training, and advocacy, Dr. Jernigan’s conception of the nature of independence gives us points of reference that are useful whether we’re talking about canes, guides, or artificial intelligence.

As a technology educator, I know it’s not enough to teach someone how a particular tool works: you can understand the layout of a screen or all the features of a recorder, but unless you have a sense of how and when to use each tool and why it’s valuable, that knowledge isn’t worth much.

Dieter Bohn, who writes for an online tech publication called The Verge, suggests that we think of new tech as instruments. “When you use an instrument,” he explains, “you have an expectation that it is going to take effort to use it well. It takes practice. You form a relationship with it. It becomes part of your identity when you make something with it. You tune it.” I would like to venture to add that, no matter how familiar and comfortable an instrument becomes, you can’t depend on it completely. You have to know that, if your favorite instruments break or you have to travel without them, you can still make your own music.

Mainstream media and the technology sector often talk about consumer technologies as though the instrument does all the work. But our successes and mistakes, no matter how dramatic, are made in the interplay among our instruments, skills, choices, and thoughts. Right now our community is grappling with the rise of visual interpreters: apps and hardware that supply visual information to blind people using computer vision, verbal description from a human being, or some combination of the two. Sighted assistance on demand is, potentially, a distraction from the cultivation of skills: having an interpreter on hand might make us less likely to check for Braille signage, label stuff, or notice landmarks as we travel. Even people who are comfortable with nonvisual techniques can be distracted by high-tech solutions because our attention is a finite commodity. I spent a couple of minutes on my first day here using an app that shall remain nameless to sort shampoo and lotion bottles in my hotel room, only to discover in the middle of my shower that there were Braille characters marching around the cap on each one.

I’ve noticed that sometimes, when a person is using an app to navigate, he or she (or, okay, I) might start to have a cane arc that’s not so even and wide anymore. My husband says it looks like you’re plowing the fields. We might also attend less to the information that comes from the textures, patterns, and sounds around us; there’s something else competing for our attention. That doesn’t mean that we shame people, any more than we shame people for bad typing technique. We give people options, and we give people tools. It’s hard to play more than one instrument at the same time. It’s not impossible, but blindness training must include strategies for using tech with mindfulness and self-awareness so that technology enriches rather than flattens our perceptions. [applause]

Technology meant to make our lives more convenient (or, to borrow a buzzword that I hate, frictionless) is a buffet of unlimited enablements for anyone, blind or sighted, who can afford to partake. If you don’t want to walk five blocks, take a Lyft. Are you hungry, are you out of groceries, and you can’t be bothered to talk to a human? Try GrubHub. Choosing between the convenience of a few taps and a less predictable adventure in the real world can be a struggle, and it’s an easy slide from occasional use of an app to conjure up a pizza party or a late-night ride to a bleak procession of lunches ordered to your desk and long, expensive rides that save twenty minutes on the train. Maybe developers have a responsibility to build in tools that help us notice and alter new patterns as they emerge. Maybe self-discipline is best managed within the self. Either way, our daily decisions add up to the lives we choose to live: when you walk in your city, are you always waiting for the next piece of advice, turn after turn, or do you sometimes pick a direction and just go? Do you ever get lost and take joy in the confusion, the clues along the way, and that feeling you get when you’re not lost anymore?

Supporting, and sometimes challenging, each person as he or she develops an approach to independence is sometimes difficult and always worthwhile. Because of the Structured Discovery training I received, informally through my mentors and formally through the Colorado Center for the Blind, I use technology to enhance, not replace, the skills that I carry with me in my brain and my body. I love that I can discover a new coffee shop or get walking directions, but I know that if my phone dies, I’ll live. And I can always problem-solve, use what I know about urban and rural geography, and check out ambient clues in my environment to get where I am going.

Two years ago I went sea kayaking on Tomales Bay with the San Francisco LightHouse. [applause from the California delegation] I can easily use some ambient clues in my environment to know where the California delegation is sitting in the session. So we camped on Tomales Bay. There were no roads, no bars on my phone, no digital guides of any kind, but because of my fundamental trust in my nonvisual skills, that was a peaceful respite and not a scary time.

I grew up using both Braille and a screen reader, starting in kindergarten, and my Braille teacher was petrified that if I learned to use a computer too early, I would abandon Braille. She was not wrong to worry, but instead of fearing the computer, she ultimately supported me in using both and became the advocate in my life who ensured that one day I would be switching back and forth between Braille and speech, and sometimes using both, all the time. [applause]

As we explore technologies for visual interpretation, we’ve got to practice mindfulness, and we’ve got to practice art. There is an art to effectively using a cane, taking notes with the slate, or communicating efficiently when you’re working with a reader or a shopping assistant. The prevailing narrative in marketing materials and mainstream media is sometimes so reductive as to suggest that these technologies for the blind are mostly powered by magic, bouncy music, and positive thinking, but using machine vision and visual interpretation is an art that you learn.

I love learning new origami models. There is something really cool about following lots of little steps to transform a plain square of paper into a flower or a fish, but most instructions online are chock-full of pictures, diagrams, and videos featuring several minutes of totally silent moving hands. I access this content using a visual interpreter, sometimes Aira. But I am not a passive recipient of description. Here are a few things I had to learn before I could learn how to fold a koi fish with the help of an Aira agent: I learned that I needed a well-lighted workspace and that my phone camera works best [instead of the glasses provided]. I learned that the instructions I think are good online because they have the most text are the most ad-ridden and cluttered from a visual perspective, so believe it or not, the silent YouTube videos sometimes work the best. I learned that vocabulary is important. “Fold in half” can mean six different things, so it is better to say “bring the top right corner to the lower left corner and then crease.” Not every interpreter will be familiar with the vocabulary I prefer for this or any task, so I need to learn how to make suggestions that are clear, kind, and consistent. I’ve got to choose a time for skill building when I’m not feeling rushed so that I can give myself time to negotiate communication, work through mistakes, and not get frustrated. Last but not least, I have to write those steps down so I won’t need an interpreter next time, and at the end of the session I have not one fish but fish for a lifetime. [applause]

Directing technological innovation begins with the way we direct ourselves and one another. I believe that we best serve our community when we actively engage with the full range of instruments available. When we teach nonvisual skills, we introduce sleep shades as a tool to limit the distractions of visual information and help learners develop proficiency and trust in nonvisual techniques. We take other tools out of play to achieve instructional goals: sighted assistants, Perkins Braillers, GPS—you name it, you’re not getting it during training. Constraint is a powerful tool for focusing attention, directing effort, and building confidence. But I would encourage our mentors and professionals to practice the fine art of pursuing excellence through constraint while still supporting, at appropriate times, the exploration of high-tech tools as part of blindness training.

Our professionals and informal mentors around the country offer an approach that’s free from sales and marketing hype and grounded in the belief that any blind person can achieve his or her goals given the right skills and opportunities. We need a framework for exploring Structured Discovery and technology together, one that builds confidence in unfamiliar situations and fosters active goal-setting, problem-solving, task analysis, and self-awareness, just as Structured Discovery always does. High-tech tools work better when you’ve got solid Structured Discovery training, so let’s spark conversations and learning opportunities that use both, and let’s invite developers to design ways for users to experience and explore the Structured Discovery mindset. [applause]

We must ensure with the full power of collective advocacy that technology does not create collective harm. When I bought the first version of the KNFB Reader, it didn’t connect to the internet, so we didn’t update it much, and it was harder to share documents. But you know what—my software didn’t change unless I told it to. We’re living in an era of accessibility as a service. If you’ve ever updated to a newer operating system or app and lost some of your ability to read controls and screens, or if you’ve felt like an unwilling test dummy when you’ve told a developer that you can’t do something anymore and gotten a form letter that thanks you for your feedback, you know why this is bad.

We live under the constant shadow of digital precarity: software can change at any time in ways that make our instruments more frustrating and less useful. Not only must we insist that new technologies be born accessible, but we must pursue with equal vigor the proposition that accessibility, once achieved, should not be breached: software testing should cover performance with accessibility features so that unusable features are never shipped, and developers who, through error or inaction, distribute software with accessibility breaches should provide explanations and concrete action plans, just as they would when a security breach or service downtime affects the general population. [applause]
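To make that standard concrete, here is a minimal sketch of what treating an accessibility regression like a shipped defect can look like in an automated test suite. The address names no tools, so everything here is an assumption: it imagines a web application exercised with Playwright and audited by the open-source axe-core engine, against a hypothetical URL.

```typescript
// A minimal sketch, assuming a web app tested with Playwright and the
// open-source axe-core engine (neither is named in this address).
// The idea: an accessibility violation fails the build, so an unusable
// feature cannot ship -- the same gate we expect for security defects.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('release gate: no accessibility violations', async ({ page }) => {
  await page.goto('https://example.com/app'); // hypothetical URL

  // Audit the rendered page with axe-core's rule set.
  const results = await new AxeBuilder({ page }).analyze();

  // Any violation blocks the release, just as a failing security
  // test would; a form letter is not an acceptable response.
  expect(results.violations).toEqual([]);
});
```

Run in continuous integration, a check like this turns “accessibility should not be breached” from a request into a release requirement.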

Algorithms are pieces of code that make decisions based on information they receive. They may check to see whether you are running a screen reader and, without your explicit consent, send you to a special subprime version of a webpage. Algorithms can parse images and text, like the ones you might find in a social media feed, and decide what information gets exposed to your screen reader. If I take the liberty of defining “algorithm” broadly as anything that digitally mediates information and decisions, an algorithm decides whether people uploading images to Twitter are prompted to describe them by default or are expected to find that accessibility tool buried in a secondary settings screen. Algorithms create the ads you see, the music you hear, and your transit directions. They govern what information cloud-based services collect about you, how long it’s kept, who can access it, and how your identity is or is not protected. Algorithms surface what you find when you look for information about yourself. If you’ve ever done a search for the word “blind” and been disappointed by links for window shades and medical cures, you know what it’s like to work with an algorithm that was not designed with you in mind.
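To picture the kind of code being described, here is a minimal hypothetical sketch of routing logic that quietly decides which version of a site an assistive-technology user receives. It is an illustration only: no standard browser API reveals whether a screen reader is running, and detectScreenReader and the page paths below are invented for this example.

```typescript
// Hypothetical sketch of an algorithm that mediates a decision
// without the user's consent. detectScreenReader is invented for
// illustration; no standard browser API exposes screen reader state.
function detectScreenReader(): boolean {
  // Sites that have attempted this relied on timing tricks, plugin
  // probes, or asking users to self-identify. Placeholder only.
  return false;
}

// One branch silently decides which experience a user gets.
function chooseLandingPage(): string {
  if (detectScreenReader()) {
    return '/accessible/home'; // the separate, "subprime" version
  }
  return '/home'; // the full-featured version
}

console.log(chooseLandingPage());
```

The point is not the detection trick but the decision: a few lines like these can determine, invisibly, what a blind user is allowed to see.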

Algorithmic accountability is the process of assigning responsibility for harm when algorithmic decision-making results in discriminatory and inequitable outcomes. In the era of accessibility as a service, it’s time we hold developers accountable. When code denies assistive technology users access to cloud-based platforms or directs us to separate and subprime user experiences, the outcomes are discriminatory and inequitable. Holding developers accountable for creating platforms that support consistent, well-tested, integrated accessibility is just the beginning. Design decisions matter. Developers can and should place image-description tools prominently enough to convey the expectation that users should employ them, not just on special occasions but always. Machine vision and interpretation apps should be designed with more than one path to communication. Audio works well for some of us some of the time, but whether we are deafblind or simply unable to talk or listen in a business environment or at a loud concert, we need the option to use other methods. Transparency matters.

Developers should craft privacy policies that use plain language to explain how our data is used. We expose our personal documents, environments, colleagues, and daily lives to machine vision and interpretation, so it is critical that we know what data we’re handing over. In many cases apps keep our voices, camera feeds, and location data long after we’re done using them, and we deserve to know how this information is stored, for how long, who can access it, how it’s being used, and how we can opt out. If a company gets acquired, we need to know whether a buyer we may not know or trust can inherit our legacy data without our explicit consent. Ownership and access matter.

We should be able to examine the data we’ve contributed to a developer’s cloud at any time, collect it for our own records, and delete it at will, directly rather than through an act of faith. If the practice of locking a user out of his or her own data and history seems somehow more justifiable in the case of cloud-based vision apps than in the apps we already use for cloud collaboration and social media, we need to have a lively and public discussion about why that is. Even when developers make design decisions informed by the community and manage data in ways that are consensual and transparent, collective harm can still happen.

Some of us are on the greener side of the digital divide, equipped with the training, infrastructure, and money it takes to use the latest technologies. Many of us don’t have the training or the funding, and any of us can find ourselves in a part of the city or the world without infrastructure that supports cloud services. We should approach this problem by pursuing partnerships and models that will bring more people into contact with high-quality training, reliable infrastructure, and sometimes direct funding. This is in line with our existing efforts to bridge the digital chasm that blind people must cross to participate fully in education, employment, and civil society. When we encounter an inaccessible place, product, or service, though, and we solve the problem with technology, we must be careful not to let our possession of a personal bridge to access distract us from the important labor of building a more accessible world for everyone. [applause]

Last year I presented at an online conference that had a thoroughly inaccessible web platform, ON24. With the help of an Aira agent who remoted into my computer, I controlled my slide deck, read questions from the audience, and avoided having to ask the event organizers to solve the access problem for me. Although I documented the inaccessible nature of the platform and followed up by email with the conference organizers, part of me wonders whether I was as persistent as I would’ve been if persistence were my only option, and whether the presence of a competent interpreter made my access request seem a little less pressing.

It’s wonderful that we can use all of these apps to accessify everything from vacation photographs to museum exhibits to flat-screen appliances, but it would be better still if we could touch the composition of every photo, rely on museum exhibits to engage all of our senses, and expect every flat screen to come with accessibility options. It’s hard to play more than one instrument at the same time, but it’s not impossible. We can improvise with technology, perform our own access when we need to, and teach our fellows how that’s done. But let’s keep the beat of the drums of freedom: careful cultivation of embodied skills that rely not on technology but on our self-trust and self-respect, collective action that shapes a more accessible world from the status box to the ballot box—from those tiny Braille labels to the vivid tactile footprints of an astronaut’s indelible first steps on the moon—and tireless advocacy to help each blind person discover the tools, methods, and self-belief that he or she needs to find and live the life he or she wants. Thank you.
