Chapter 5: Considerations in Leveraging AI to Augment Humanity
Editor's Note: This post is part of a series on the Ethics of Artificial Intelligence.
With AI's substantial impact on the economy, business output, and daily life, many organizations have formed to propose principles and frameworks for engaging with it. What follows is a set of proposals that aggregates the principles described by these various groups. The most difficult challenge with the design and implementation of AI principles is that humanity is very poor at living up to them ourselves; we must challenge ourselves to hold our creation to a standard we do not yet meet. This is why in the introduction I asked, “how do we embody our use of artificial intelligence with ethics which reflect humanity’s best intentions for all people corporately, each person individually, and for the relationship between person and intelligent machine?” With this as a starting point, here are the aggregate AI principles, starting with some great ones that Microsoft put together, which I believe are a very good approximation of the overall market’s statement on this:
- Inclusivity (Microsoft). AI platforms should provide broad value to the whole of humanity, respectful of who each person is and their uniqueness. At the risk of accepting it as a buzzword, inclusivity is not just a word, but a motion to realize the beauty in every person. AI systems should not be built to exclude individuals but to broadly benefit humanity’s many facets. This is something that humanity has proven to be very poor at, and something that AI can potentially mitigate if leveraged properly. AI can help us achieve inclusivity, rather than becoming a motion to continue the segregation and unfair practices that are so common in the workplace.
- Fairness (Microsoft, Public Voice). AI platforms should be built to treat all persons equally and to mitigate the impact of data-driven bias. An AI platform should be fairer than a human doing the same job. The challenge is going to be answering the question “what does fair mean?” Does it mean that the rules are the same for everyone, or that the rules are applied differently based on circumstances? For example, an AI algorithm might rightly make adjustments based on data, only for us to find that it has excluded huge portions of the population because of what it has “learned”. Perhaps we wanted those portions of the population included, because the right thing to do is to account for that disproportion to achieve a greater good.
- Privacy & Security. AI platforms should include controls surrounding when data is captured, how it is captured, how it is stored, how it is used, and when it is shared. Every business has a responsibility to consider whether the AI system is operating in a private or public space, and is accountable for using that information in a situationally appropriate way. The delineation of circumstances will be tremendous, with much of the impact coming not just from the capturing of the data, but from how that data was applied and whether it was used with appropriate consent, given a reasonable expectation (or lack) of privacy. Integrated with this is the issue of security: how information tied to an identified person is made public. For instance, let’s say a major retailer has developed an AI system for determining what products I should buy. I have a reasonable level of privacy and an agreement with the retailer that they will use certain data from me for this purpose. However, if that retailer shares personally identifiable data with another retailer, the data is being shared without my consent, which may be inappropriate. Add to this the security concerns regarding intentional inappropriate use of the data, and the picture moves beyond how the data was captured and the business intent to the exposure of information that was never meant to be shared, even by the business. All of these issues bring with them significant litigation effects downstream, as well as clear community impact.
- Reliability & Safety. The presence of AI in highly dangerous and sophisticated automated systems, such as manufacturing machinery, self-driving cars, financial processing, and customer support, leads us to think about how we ensure that AI systems are implemented in a way that protects the driver, worker, finances, or relationship. In a sense, we are trusting our people, livelihood, and customers to a system that we don’t entirely understand, but know we need to implement to compete. Companies implementing AI in these spaces need to understand how to put guard rails in place, not just for the sophisticated AI “self-learning” disaster scenario, but immediately for the basic operation of the system. Guard rails need to be a default part of building these systems, and controls need to be in place to detect problems as they occur.
- Transparency. Any AI system implemented needs to be auditable, so that each decision made can be traced back to the originating decision tree and activity. A person impacted by such a system should have a reasonable ability to affect its outcomes based on the information fed to it, and the responsible organization should be able to understand why outcomes happened. There will be mistakes with AI, and there will be results that don’t match spec; if the system routinely acts as a “black box” without any boundary controls, we are opening ourselves up to inappropriate use or out-of-control results.
- Accountability. Organizations that leverage AI need to be accountable for its results in the lives of the humans impacted. If an AI system causes significant financial, personal, or professional impact that is deemed unethical or egregious, the organization should be held accountable, just as it would be if a person working for the business had done the same thing. The undeniable benefit and competitive need that AI represents comes with the need to use it responsibly. We’ve shown time and time again that competitive pressure can drive serious innovation, but also risk-taking that, if left unchecked, will cause major downstream impact unmitigated by the natural controls of market forces.
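To make the fairness question above a bit more concrete: one narrow, measurable notion of "fair" is demographic parity, where approval rates should be similar across groups. This is only a sketch of one possible metric, and the group names and data are illustrative assumptions, not a standard or a complete answer to "what does fair mean?"

```python
# Illustrative sketch: demographic parity as one measurable slice of "fairness".
# Group names and outcomes below are made up for the example.

def approval_rate(decisions):
    """Fraction of decisions that were approvals (True)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in approval rate between any two groups."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Example: two groups with noticeably different approval rates.
outcomes = {
    "group_a": [True, True, True, False],    # 75% approved
    "group_b": [True, False, False, False],  # 25% approved
}
gap = demographic_parity_gap(outcomes)
print(round(gap, 2))  # 0.5 -- a large gap that would warrant review
```

A system could flag any gap above a chosen threshold for human review, though, as the bullet notes, a low gap on one metric does not settle whether the rules themselves are the right ones.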
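The guard-rail idea in the Reliability & Safety bullet can be sketched as a hard-coded envelope around whatever the model recommends, with every intervention logged so problems are detected as they occur. The speed limits and logging scheme here are illustrative assumptions, not a real control system.

```python
# Illustrative sketch: a guard rail that clamps a model-proposed action into
# a safe operating envelope and records every intervention.
# The limits below are made-up values for the example.

MIN_SPEED, MAX_SPEED = 0.0, 55.0  # assumed safe operating envelope (mph)
interventions = []                 # record of every clamped decision

def guarded_speed(model_output):
    """Clamp a model-proposed speed into the safe envelope, logging overrides."""
    safe = min(max(model_output, MIN_SPEED), MAX_SPEED)
    if safe != model_output:
        interventions.append((model_output, safe))
    return safe

print(guarded_speed(42.0))  # 42.0 -- within limits, passed through
print(guarded_speed(90.0))  # 55.0 -- clamped, and the event is logged
print(len(interventions))   # 1
```

The point of the sketch is the default-on nature of the check: the envelope applies regardless of how the model was trained or what it "learned".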
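The auditability called for in the Transparency bullet can be sketched as a decision function that writes an audit entry, inputs, outcome, and the rule that fired, for every decision it makes, so the organization can later trace why an outcome happened. The loan rule and record layout are illustrative assumptions, not a recommended policy.

```python
# Illustrative sketch: every decision records its inputs and the rule that
# fired, so outcomes can be traced after the fact.
# The decision rule itself is a made-up example.

audit_log = []

def decide_loan(applicant_id, income, debt):
    """Toy decision rule that writes an audit entry for every decision."""
    if debt > income * 0.5:
        decision, reason = "deny", "debt exceeds 50% of income"
    else:
        decision, reason = "approve", "debt within 50% of income"
    audit_log.append({
        "applicant": applicant_id,
        "inputs": {"income": income, "debt": debt},
        "decision": decision,
        "reason": reason,
    })
    return decision

print(decide_loan("A-1", 60000, 40000))  # deny
print(decide_loan("A-2", 60000, 10000))  # approve
print(audit_log[0]["reason"])            # traces the denial to its rule
```

Real systems with learned models are far harder to explain than a two-branch rule, which is exactly why the bullet insists on boundary controls and traceability from the start.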
I’ve also thought of a few extra principles I think should be understood:
- Anthropocentric. We spent a fair amount of time talking about the “Human Difference” because we need to remember that any AI system needs to be designed to benefit humanity, rather than looking at humanity as a means to an end. The appropriate use of AI is to aid in the furthering of the “human person fully alive”, not only because it is the right thing to do, but because its objective outcomes will be better. Every generation that has looked at humanity as a tool to achieve an outcome, rather than as the outcome itself, has caused immense harm to families and communities and widened the inequality between haves and have-nots. We should endeavor to use AI to make all of humanity better, not just a few.
- Global Standards. There are certain capabilities of AI, such as weaponization, where the only plausible way to avoid a mass transformation of our military forces is to create ground rules that all major countries choose to align to. We’ve seen similar agreements surrounding chemical weapons, nuclear weapons, and the treatment of prisoners of war. If a global agreement is not defined, the natural competitiveness of military domains will take precedence and potentially increase the likelihood of conflict, especially given the lesser perceived impact of machines fighting machines vs. humans fighting humans.
To learn more about responsible AI principles, see here from Microsoft, which has done an excellent job on this topic and on bringing it to the practitioner level, which is how I approach it as well. This is about bringing principles to a real level, but first we need to understand the principles themselves. Up next, we’ll make it real…