Progress staged the European leg of its developer conference programme in the Bulgarian capital Sofia this month. The two-day deep-dive was designed to showcase the company’s platform and to give key technology partners the chance to explain where they fit within the total toolchains on offer. After just a brief welcome from Alyssa Nicoll, senior developer advocate at Progress, proceedings were handed straight over to guest speakers to provide a more holistic, wide-angle view of key developer tools and technologies.
A session devoted to computer ethics for technologists was presented by Selam Moges in her role as software engineer at Apella, a company known for its medical software and hardware data security products that are used for mission-critical applications. Moges lists herself as a champion of diversity, accessibility and inclusivity in the technology space.
Treading murky waters
Looking first at why we should care about computer ethics at all, Moges admitted that ethics ‘treads through murky waters’ and said it is rarely productive simply to state ‘what is right and what is wrong’ – that said, ethics remains a set of agreed principles that govern the use of technology today.
Given that ethics often manifests itself through policy, it’s important to remember that not all policies fit all use cases. Consider the difference between a traditional artist and an AI artist using computer-aided design tools: if the AI artist creates something in a space where very few policies exist, what should they do with the end product? Do they release their AI art into the world, at which point it is set loose with no copyright protection, or do they press ahead and risk infringing some existing real-world art copyright? Policy formation around AI art actually arrived around two years after the tools in this space were created – and this kind of latency scenario crops up time and again.
If a technologist really cares about speed and experimentation, Moges says ethics should focus on the risk of long-term harm (whether to sustainability or to individuals) resulting from the development. If a technologist is heavily focused on business benefits, then ethics should govern the need to exert a positive influence on the industry as a whole. If a technologist simply cares about doing what’s right, then ethics should be there to help create a positive technological landscape around a product.
“Think about crash test dummies as an example of a failed design. Most crash test dummies are based on the wrong data, i.e. the average weight of a man in the 1970s, which was 170 lb back then but is now closer to 200 lb. Women drivers are also assumed to be in the passenger seat, whereas now there are slightly more female drivers in the USA than males,” said Moges, providing a real-world working example that we can all envisage.
She recommends that we update both our designs and the way we test them to keep pace with the evolving needs of humans. Poorly designed products make it hard for users to use them in the first place, so we need to remember that while ethics is all about principles and moral guidelines and design is focused on building and creation, there is an intersection point – a middle ground between the two – where we start to bring moral considerations into what we build.
Brainstorming bias
We know that bias can be introduced in the design and brainstorming phase if a homogeneous group of people naturally gravitates towards functionality for just one kind of user. Moges says that if that same group is behind testing and development too, the same bias will persist. Refactoring any product or service after initial development to suit a more diverse range of users is never as efficient as getting it right in the first place.
There’s also a key opportunity here to think about technology design that starts with one piece of functionality but, perhaps through serendipity, ultimately extends and evolves. Think of the ‘curb-cut effect’: lowered curbs engineered into streets so that wheelchair users can move between the road and the pavement or sidewalk have ended up helping people pushing prams, cyclists and so many others. Similarly, closed-caption subtitles were designed for people with hearing impairments, yet a huge number of people now watch YouTube videos with the sound off, so the benefit has been enjoyed by many more users than initially intended.
So then, we need to ‘bake in’ inclusivity in order to launch better products in the first place.
As technology continues to advance at an unprecedented rate, it is becoming increasingly important for those in the tech industry to understand the ethical implications of their work. The decisions that engineers and tech leads make can have far-reaching consequences for individuals, communities, and even entire societies. Without a solid understanding of computer ethics, it is all too easy for technology to be developed and deployed in ways that harm rather than help.
Non-traditional experts
Improving the user experience for all users with a wider ethical embrace should take precedence over generic design principles. In this way, says Moges, we can contribute to society and foster human well-being. Techniques that work well in this space include recruiting non-traditional experts into development teams – in the past these experts have only been brought in at the post-development testing phase, but when they are brought forward into the design phase the end results are more productive.
“It is crucial that engineers and tech leads receive training in this area so that they can integrate ethical considerations into their work,” said Moges. “As technology continues to evolve and become more intertwined with our daily lives, it is essential that we ensure that those responsible for designing and developing these technologies understand the ramifications of their work.”
These approaches can help us tackle everything from e-waste to automation and job displacement… and onward to data privacy and security concerns. Moges advocates the use of a pre-mortem phase that sees team leaders work through ‘what if’ scenarios to anticipate how a project could go wrong and design around it. By using techniques including empathy mapping, we can bring ethics forward into the way all products and services are created.