Artificial Intelligence:
Our Responsibility

Why we need to advocate for the responsible use of emerging technologies instead of relying on big business or government to do it for us.

AI and the Department of Defense

If you had to guess who was leading the charge for the responsible use of AI, who would come to mind?

The UN? The Center for Humane Technology?

What about the Department of Defense?

That’s right, the department that awards billions of dollars in defense contracts to companies like Lockheed Martin and Raytheon is pushing for responsible AI development.

While many of us have only been exposed to AI within the past year, the Department of Defense (DoD) released its first AI strategy back in 2018. The strategy was updated in 2020, and again in 2023. On November 2nd, Deputy Defense Secretary Kathleen Hicks unveiled the updated plan, which emphasizes the DoD’s agile approach, best summarized in the following quotes.

"Technologies evolve. Things are going to change next week, next year, next decade. And what wins today might not win tomorrow," said DOD Chief Digital and AI Officer Craig Martell. 

"Rather than identify a handful of AI-enabled warfighting capabilities that will beat our adversaries, our strategy outlines the approach to strengthening the organizational environment within which our people can continuously deploy data analytics and AI capabilities for enduring decision advantage,"

Deputy Defense Secretary Kathleen H. Hicks conducts a press briefing at the Pentagon, Nov. 2, 2023.

Photo Credit: Air Force Senior Airman Cesar J. Navarro

It’s a sound strategy to be sure, but how can the DoD guide the responsible use of AI when so much of the development happens in the private sector?

Well, at a roundtable hosted by Booz Allen Hamilton (a large consulting firm that advises the military, government, and businesses on technology), Matthew K. Johnson announced an upcoming web app that will be open to the public. The app will guide users through responsible decision-making around AI and inform them of the DoD’s stance on its responsible use.

While discussing how the DoD will encourage businesses to adopt this framework, Johnson said,

“One of the things our team thinks about a lot is, how do we incentivize responsible AI? We’ve been thinking primarily in terms of carrots rather than sticks, [and] one of the big carrots we have with DoD and a $900 billion a year budget is funding.”

Matthew Johnson is a senior advisor at the Pentagon’s Chief Digital and AI Office (CDAO).

The Show Must Go On

After a year’s worth of articles in which leaders in AI warned about economic disruption, the erosion of democracy, and even the extinction of humanity itself, there are no signs that AI development is slowing.

Allow me to reiterate.


Hundreds of leaders in the AI space are warning about the potentially catastrophic effects AI could have on our world, yet all of them are continuing their work.

So we’ve established the danger AI poses to life as we know it, but that isn’t enough to stop its development. The reasoning these leaders often cite is simple enough: if the responsible people stop developing, that won’t stop the “bad actors” from continuing their work. This would mean that the more overtly authoritarian powers of the world would gain a significant leg up, putting us in a precarious position. It was the same justification used for the atomic bomb, and it’s not completely unfounded.

Regardless of our opinions on this, our current models of governance and economics mandate progress at all costs. So, in an effort to integrate AI into our world responsibly, more focus needs to be placed on the ethics of AI.

Great Power = Great Responsibility

A benefit of our newfound interconnectedness is the ability to police the actions of government and big business. The internet has given us eyes and ears around the world, as well as a means to express our thoughts and feelings about what we observe. This in itself is a great power, and many have not recognized the responsibility that comes with it. As much as I’d like to make the one-sided argument that the public should police the powers that be, the reality is twofold.

First, we have to take steps to make the internet a much more humane place where we treat others with respect. It has to be a place where people can voice their opinions without fear, and where healthy disagreements can occur without reducing the other person to a bigot or imbecile. The internet spans the entire world, and the expectation that people in entirely different circumstances from yours should think the same way you do is unreasonable.

Rather than argue semantics and point out our minor differences, we need to recognize the underlying humanity we all share. We all need to eat, to sleep, to form relationships, to feel accepted, to find meaning. We need to define our relations through our similarities, not our differences. This is the only way we can effectively watchdog the powerful in our societies.

Catching Up

Social media isn't a toy. It has immense power that can hurt or benefit us. We need to use it responsibly.

Secondly, the businesses creating these AI systems and driving their implementation into our lives need to do so responsibly. It’s easy to see a corporation as its own self-regulating entity, and there is truth to that, but every business is made up of people, and people have the power to change things at their place of work. Roles and teams need to be created that are primarily concerned with the ethical integration of AI. We can’t let profit be the only driving force behind these changes, or else there may not be anyone or anything left to profit from in the long term.

Just as workers once advocated for labor rights, for weekends, and for getting children out of the workforce, we need to pressure our employers to act responsibly. This isn’t an issue of worker abuse; it’s an issue we can’t yet clearly define or even fully wrap our heads around. These technologies have the power to change society on a fundamental level. Just look at how much social media has changed life in the past 10 years. AI won’t move so slowly.

The Need for Ethics Committees

Very often, rich people get together to discuss the future of the world and how to shape it. Some of them are truly concerned about the potential dangers of the technologies their companies produce, but in this system there is no incentive for morality. Any company that is concerned with the ethics of a given decision will be outperformed by one that isn’t. That’s why we see tech leaders express their concerns, then continue on as if there were no problem.

I believe one potential way to sway decision-making in favor of responsible choices is to advocate for them ourselves.

By pushing for the thoughtful implementation of AI, we can create the space to stop, think, and imagine the repercussions of our actions. We can’t rely on governments or CEOs to think of our future; we need to be our own advocates and push decision makers towards more responsible choices.

Conclusion

Sure, the measures taken by the Department of Defense sound like a step in the right direction, but are we going to accept that as our only safeguard?

It is said that humans have a hard time with long-term planning, and that we have difficulty finding solutions for problems like climate change because the danger seems so far into the future. Maybe one of the benefits of the rapid acceleration of AI is that we’ll be able to conceptualize the potential dangers more easily. If that is the case, I hope we can take steps to protect ourselves, because things won’t turn out okay unless we ensure that they do.

Thank you for your time. I hope you have a good day.
