While working on Unilever’s supply chain digital transformation roadmap, I became passionate about digital ethics. As digitization accelerates, new questions are emerging at an ever-faster pace. How do we protect personal privacy? How do we keep unconscious bias out of algorithm design? With technological innovation well ahead of public policy, to what standards should the private sector be held accountable?
I strongly believe that digital ethics will be the next evolution of Corporate Social Responsibility. As environmental issues have grown in importance in recent years, customers and consumers have begun holding companies to standards higher than government regulation requires. As technology expands rapidly into every aspect of our lives, a similar reputational risk will emerge around digitization. How does an organization assess its use of technology? Are the boundaries clear enough that large organizations can operate within them consistently? Will the ethical use of data and technology become a competitive advantage in the future?
My research would focus on how organizations are currently addressing this topic, or how they have handled comparable ethical dilemmas in the past. Are policies and processes being applied consistently to guide these choices? Is the current approach effective? My aspiration, ultimately, is to craft a digital ethics framework for organizations: one that introduces a common language for digital ethics, builds organizational understanding of how to apply it in daily operations, and ensures decisions are informed before implementation. Based on my experience, most organizations are equipped only to apply the Supreme Court’s 1964 threshold for obscenity – “I know it when I see it.” Discovering after the fact that a digital ethical line was crossed could be catastrophic to an organization’s health.
This is a huge topic, and one you will need to narrow. Start with the scope of your analysis: a single company? A group of companies? Unilever itself (which may create conflicts of interest)? Then define your terms. What do you mean by “ethical”? By “digital ethics”? Are you focused on the protection of personal data, or of corporate intellectual property? Are both governed by similar standards and expectations, or does each require its own ethical standard for protection?
Developing a digital ethics framework is a tall order. Your research may reveal that “digital ethics” is not yet a coherent enough field or subject to support a framework. Seek first to understand the field of digital ethics itself, along with its relationship to both corporate and personal data protection. You may find that researching and defining “digital ethics” is itself your entire project, since the term and the concept are nascent, contested, and ill-defined across industries.