
The Age of AI

Blog | Dec 07, 2023

By Natalie Born

There is no doubt that Artificial Intelligence (AI) has taken the masses by storm. It is changing the landscape of our world much as the internet did when the World Wide Web opened to the public in 1993. AI can significantly accelerate tasks and processes across every domain it touches.

Perhaps you have personally experienced some of these AI examples:
  • Writing a speech or sermon in 15-20 minutes instead of several hours.
  • Using chatbots and virtual assistants to solve customer problems without ever engaging a human in the moment, freeing up customer service agents to focus on more complex tasks.
  • Leveraging AI-powered data analytics tools to process vast amounts of data, identifying patterns and trends in seconds.
  • Translating a sermon delivered in English into Spanish, dubbed in the speaker’s own voice and tone, with mouth movements in perfect sync.
  • Rapidly analyzing pictures and video, making searching, filtering and organizing large sets of information possible.
  • In the medical field, AI-powered medical imaging systems assessing X-rays, MRIs and other medical images faster than a radiologist can, while maintaining a high degree of accuracy.

But with all of those capabilities, AI remains largely unregulated, and it has a dark side. So the question is: how do we ensure ethical boundaries are in place and information is not misused? Here are five things to consider when deciding to use AI.

  1. Data Security: Protecting sensitive data is crucial. Never enter confidential information into publicly available AI models.
  2. Bias: AI systems inherit biases from the data used to train them. So, when we use AI, we need to do our own research as well and not take the information given to us at face value.
  3. Transparency: Users of these systems, and those that train them, should understand what data is being used to train these models.
  4. Deepfakes: The scariest part of AI is that it can be used to place people where they never were, saying things they never said and doing things they never did. How do we know the real from the fake? And how might deepfakes sway public opinion or even elections?
  5. Human Intervention: AI shouldn’t be left to make key decisions; it should always have human oversight.

Even though governments may be slow to decide which AI capabilities to allow or restrict, it would be wise to create some guardrails for your own employees and organization, ensuring that data is protected and the integrity of your team’s work is upheld. If you lead a team or organization, I would encourage you to do three things today:

  1. Find out who is using AI on your team. What use cases are they using AI for?
  2. Write a one-page brief for your organization giving them some rules of the road on what to do and not do with AI (e.g., not entering personal data, customer information, or sensitive company information like financials into publicly available AI products).
  3. As a management team, stay up to date with rules, regulations and practices in the area of AI. Don’t be left in the dark or the dust when it comes to AI.

There is no doubt that AI has several huge benefits for teams and organizations, including economies of scale, massive shortcuts and automation, but there are also pitfalls and challenges that organizations face. Balancing the benefits with the downsides of AI is what leadership is all about. Don’t allow the cons of AI to keep you from leveraging it to save time and money, and to scale your organization.

 

Order Natalie Born's book, Set it on Fire, from the AVAIL Store!
