Moving beyond the hype and using AI responsibly in 2024...

Arion Das
4 min readDec 31, 2023

--

AI ETHICS

These days, it’s not uncommon for anyone using social media to come across the term AI. Even non-techies come across hundreds of articles and products using automation technologies. If you’re reading this, you know what I said was not untrue.

But, like every other new product introduced commercially, even AI and automation have their side effects.

What exactly is responsible AI?

RESPONSIBLE USE OF AI

For us humans, the motivation to build new products and advancements is to make the lives of everyone around the world more comfortable.

Whenever we build a new tool, we first have to understand how people will use it, in what way it’ll impact people, and how it will be interpreted by the common mass (who get tempted and motivated by the frontend and sequence of activities of the new product). It is essential to know how people see themselves reflected in it.

Responsible AI is artificial intelligence built using a human-centered design approach and built for fairness, privacy, and safety.

Countries have already been pushing for the transparent use of AI: India, China, United States, etc.

How is responsible AI built?

AI

We have to first understand the current landscape of potential harms so that we can better identify and minimize them throughout the product life cycle.

Making a product open-source or public enables it to be judged by thousands of developers and customers, which helps in extracting and eliminating the errors present.

So, we identify potential harm in algorithmic technologies through research and community engagement.

The risks discovered are now mapped into something called the taxonomy of harm.

Harms of AI

This concept helps us build responsibly, which is essential for people who are typically not centered on tech development.

For instance, we have to think about how the tools being built could generate harmful stereotypes or have performance disparities for different groups of people.

Content is filtered using:

  • Safety Classifiers
  • Datasets
  • Policies
  • Design Principles

TESTING AI

This step is as important as any other in building responsible products. We use the process of Red Teaming to test the capabilities of our product.

RED TEAMING

RED TEAMING AI

It forces us to rethink how to test to begin with.

Initially, we are designing first-of-its-kind testing approaches to realize what our model is actually capable of.

Secondly, we recognize the positive cases as well as the risks involved.

Thirdly, we classify the cases into categories of high & low significance.

Get to know about the OPENAI Red Team Network.

RELEASING AI PRODUCTS RESPONSIBLY

RESPONSIBLE USE OF AI

It’s essential to think through the use cases and intentions of our AI products so our users can make meaningful decisions about how to interact with them.

So, the users can have more control over their data through an understanding of how the data is used and for what purpose.

We can implement these steps to reach out to more people:

  • Conduct workshops to make people aware
  • Taking their feedback about modifications or adding new features
  • Not only will their feedback go onto our product, but also onto user guidance and education materials.

There’s huge potential to these tools, so people should have access to them but also ensure that our next steps are safe.

Let’s connect and build a project together: 🐈‍⬛

The Rotation of the Earth really makes my day

Sign up to discover human stories that deepen your understanding of the world.

Free

Distraction-free reading. No ads.

Organize your knowledge with lists and highlights.

Tell your story. Find your audience.

Membership

Read member-only stories

Support writers you read most

Earn money for your writing

Listen to audio narrations

Read offline with the Medium app

--

--

Arion Das
Arion Das

Written by Arion Das

AI Eng intern @CareerCafe || Ex NLP Research Intern @Oracle | Gen-AI Research | LLMs | NLP | Deep Learning | LinkedIn: https://www.linkedin.com/in/arion-das/

No responses yet

Write a response