As software’s role in business proliferates, the ethical implications of decisions made by developers only become more profound. In Big Questions, we outline the biggest concerns facing development teams and look at how companies are grappling with them.
In our increasingly digital world, software plays a fundamental role in almost everything we do. It is the backbone of how our global workforce collaborates, it underpins much of our industrial operations, and it even keeps our cities running smoothly.
A failure of these critical software systems, whether intentional or accidental, can result in damaging, even fatal, consequences. Unfortunately, these failures are all too common. It seems that every week we hear of another privacy infringement, security breach, or technology misuse in the news. With all this in mind, it’s never been more important for software developers to be aware of their ethical responsibilities.
Building a more secure future
In the information age, a company’s data and infrastructure are its greatest assets, and also its greatest weaknesses. According to Mandiant’s 2020 “M-Trends Report,” 22% of attacks on companies were for IP theft or corporate espionage, while a whopping 29% were for direct financial gain. Interestingly, only 4% of attacks covered by the report were intended to weaken the target’s systems for a later hit. That persistence matters: 31% of organizations in the report that suffered one attack saw another within the following 12 months.
As the Internet of Things continues to grow, attacks on companies become more likely and more challenging to deal with; more than 20 billion new devices are forecast to connect to the internet this year alone. Malware creators are ready and waiting to infiltrate the software underpinning these devices. This mounting threat landscape is something developers cannot afford to ignore.
“The greatest ethical challenge facing software developers today is how to secure information in order to reduce the risk of intellectual property loss to external threats (like hackers) in shared environments without affecting ease of use for the end user,” said Thomas Holt, a professor in the school of criminal justice at Michigan State University, whose research focuses on computer hacking and malware.
“If encryption or security becomes too cumbersome, employees will be less likely to use them or create workarounds, so there must be thought given as to how to protect vulnerable information in a distributed and networked environment.”
A Code of Ethics developed by the Association for Computing Machinery (ACM) states that computing professionals should “design and implement systems that are robustly and usably secure.” They should do this by integrating mitigation techniques and policies, such as monitoring, patching, and vulnerability reporting. It’s also important that developers take steps to ensure parties affected by data breaches are notified in a timely and clear manner.
The inspiration to consider the repercussions of a product team’s actions can come from any source:
“As a practicing security professional in tech, I’m grateful to see that many modern engineers were influenced by Isaac Asimov, inventor of the Three Laws of Robotics,” says Ty Sbano, Sisense Chief Security and Trust Officer. “Rule number one is ‘A robot may not injure a human being or, through inaction, allow a human being to come to harm.’ Even though the term ‘injure’ may be severe, if you take a step back, the mandate for product teams remains the same when it comes to acting ethically: In the case of privacy and ethics, or any other impact your creation could have on users and the world, you really need to think these things through.”
User acceptance testing and other best practices can help developers avoid implementing security precautions that are too confusing, are situationally inappropriate, or otherwise inhibit legitimate use. ACM is clear that there should be no compromise here: in cases where misuse or harm are predictable or unavoidable, it says the best option may be to not implement the system.
Give your customers (and everyone else) privacy
Software developers are often asked to create solutions that enable the collection, monitoring, and exchange of personal information. Our increasingly global workforce collaborates via a host of digital technologies spanning multiple countries and territorial borders. Employers can use many of these technologies to monitor employees, but at what point does monitoring become an invasion of privacy? And how far should software developers go in building this capability into their solutions?
“Respect privacy” is featured high up in ACM’s Code of Ethics, which says that software developers should only use personal information “for legitimate ends, and without violating the rights of individuals and groups.” This means taking precautions to prevent the re-identification of anonymized data or unauthorized data collection, ensuring the accuracy of data, understanding the provenance of the data, and protecting it from unauthorized access and accidental disclosure. Personal information gathered for a specific purpose should not be used for other purposes without the person’s consent.
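One concrete precaution the Code alludes to is replacing direct identifiers before data leaves the team that collected it. The sketch below is a hypothetical illustration of that idea, not a technique prescribed by ACM: it uses a keyed hash (HMAC-SHA256) so analysts can still count and join records per user, while re-identification requires a secret that stays with the data owner. The field names and salt value are invented for the example.

```python
import hmac
import hashlib

# Hypothetical example: pseudonymize user identifiers before handing
# records to an analytics pipeline. The salt stays with the data owner,
# so downstream analysts cannot trivially reverse the mapping.
SECRET_SALT = b"keep-this-out-of-the-shared-dataset"  # placeholder value

def pseudonymize(user_id: str) -> str:
    """Return a stable, keyed hash of a user ID (HMAC-SHA256 hex digest)."""
    return hmac.new(SECRET_SALT, user_id.encode(), hashlib.sha256).hexdigest()

records = [
    {"user_id": "alice@example.com", "pages_viewed": 12},
    {"user_id": "bob@example.com", "pages_viewed": 3},
]

# Replace the direct identifier and keep only the fields the analysis needs,
# in line with collecting data for a specific, legitimate purpose.
shared = [
    {"user": pseudonymize(r["user_id"]), "pages_viewed": r["pages_viewed"]}
    for r in records
]
```

Note that keyed hashing alone does not make data anonymous; rare attribute combinations can still re-identify people, which is why the Code also stresses understanding provenance and limiting reuse.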
Fight algorithmic bias to deliver better products
Algorithmic bias describes systematic and repeatable errors in a computer system that create unfair outcomes. This danger looms greater than ever today thanks to machine learning techniques that are fundamentally changing the way software is made.
“Instead of coding instructions for machines from A-Z, engineers provide massive data sets to programs with high-level instructions to figure out solutions on their own,” said Lothar Determann, partner at multinational law firm Baker McKenzie, and professor of law at the Free University of Berlin. “Where machines learn like humans and from humans, unconscious bias is as much a threat as with humans.”
ACM’s Code of Ethics instructs computing professionals to “be fair and take action not to discriminate.” It says that technologies and practices should be as inclusive and accessible as possible, and that software developers should take action to avoid creating systems or technologies that disenfranchise or oppress people. Failure to design for inclusiveness and accessibility may constitute unfair discrimination. Determann believes that success requires developers to code prohibitions into algorithms.
“We must not rely on machines learning on their own what is and isn’t prohibited,” he said. “We need to do our best to avoid replicating unconscious human biases by training machines with insufficient data (e.g. outdated history books), supervising teams (e.g. lacking diversity), and procedures (e.g. failing to flag for machines where data sets are known to be incomplete). We have to develop countermeasures to reduce the risk of replicating human unconscious bias in AI. Diverse teams and constant validation and questioning should be part of the solution.”
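The “constant validation” Determann calls for can start with very simple checks. The sketch below, a hypothetical illustration rather than anything he prescribes, compares a system’s positive-outcome rate across two groups and reports the gap (a basic demographic-parity check); the group names and decision data are invented.

```python
from collections import defaultdict

# Hypothetical example: compare a system's positive-outcome rate across
# groups. A large gap is a signal to investigate training data and
# procedures, not proof of bias on its own.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

# Per-group rate of positive outcomes, and the worst-case gap between groups.
rates = {g: positives[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

print(rates)
print(f"parity gap: {gap:.2f}")  # flag for human review above some threshold
```

A check like this is cheap to run on every release, which makes it a practical way to turn “constant validation and questioning” into part of the build pipeline.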
Think before you build
Software developers are the first, and last, lines of defense against the misuse of technology. Our current era of software culture may have been dominated by Facebook founder Mark Zuckerberg’s now-famous motto “move fast and break things,” but that’s not what users want from their software anymore. However, inside software companies, there can still be pressure on developers to get software to market quickly, making it tempting to skip rigorous testing.
“Software developers are often infatuated with the technology itself — what it can do and how we can apply it,” said Michael S. Kirkpatrick, associate professor in the department of computer science at James Madison University in Virginia, and education coordinator for ACM’s Committee on Professional Ethics.
Kirkpatrick argues that it’s the responsibility of developers to ask more questions about how their technology might be used. “All too often, developers are given a task to develop a piece of code, without information on the context in which it might be used. This needs to change. Developers need to be more proactive in finding out how the code is going to be used and anticipating how it might be misused. They should also question what policies and procedures can be put in place to prevent misuse.”
Kirkpatrick believes that multidisciplinary development teams are crucial to success here. “Software developers tend to have a very narrow focus on technology, so it’s really important that they work with ethicists, anthropologists, sociologists, and other people who bring a different perspective on the context of how a piece of technology might be used, and how misuse can be prevented.”
Developing a better world
It’s clear that a lot of responsibility rests in the hands of software developers. The ACM’s Code of Ethics is a useful framework for helping software engineers live up to their ethical obligations, but it doesn’t provide a formula for solving ethical problems; rather, it serves as the foundation of ethical decision-making.
Ultimately, it is up to software developers to act responsibly, taking the time to consider the wider impacts of their work, which should consistently support the public good.
“Technology amplifies power,” concludes Kirkpatrick. “Governments, large corporations, and other actors can use it to extend their influence in all aspects of our lives. It’s no exaggeration to say that the unquestioning adoption of technology poses risks to fundamental human freedoms and civic rights. So it’s really important for software developers to listen to different voices, to ask more questions and to think more proactively about how their software can be used and, more importantly, how it might be misused.”
Lindsay James is a journalist and writer with over 20 years’ experience creating compelling copy for some of the world’s biggest brands including Microsoft, Dassault Systemes, Exasol, and BAA. Her work has appeared in The Record (a magazine for the Microsoft partner community), Compass, and IT Pro.