Cloudera, a leading data company specializing in trusted enterprise artificial intelligence (AI), has conducted research that sheds light on the increasing utilization of generative AI technology in the United States. According to the findings, 53% of US organizations currently employ generative AI, with an additional 36% in the early stages of exploring AI applications for implementation within the next year. However, the study also unveiled a significant concern among decision-makers responsible for data strategy and management, with 84% expressing reservations about sharing data with third parties for the training and fine-tuning of generative AI models.
These findings underscore the ongoing challenges surrounding data privacy, security, and compliance in an environment often likened to a “Wild West.” Nearly all survey respondents (95%) emphasized maintaining complete control over data during AI model training to establish trust in AI outputs.
The research, conducted by Coleman Parkes Research, involved surveying 500 IT decision-makers and data analysts in the US. Respondents were drawn from organizations with over 1,000 employees across various industries, including finance, banking, insurance, manufacturing, telecommunications, retail and e-commerce, government and public sector, healthcare and life sciences, technology and software, energy and utilities, education, and media and entertainment. The survey took place between June and August 2023.
Abhas, Chief Strategy Officer at Cloudera, noted, “Generative AI has taken center stage in boardroom discussions. While analytical AI products have been worked on for decades, ChatGPT has accelerated Gen AI innovation, and the road to human-level performance has shortened across every industry.”
“Yet there are concerns regarding trust, compliance, authorization, and intellectual property. Organizations are apprehensive about the potential exposure of training models using publicly available data and/or receiving erroneous responses from AI models that have NOT been trained with relevant enterprise context. Our survey results confirm our understanding that data moats are real and organizations who have successfully created trusted and secure data sources will have an advantage in producing higher fidelity outputs with Generative AI applications,” Abhas added.
The survey also found that generative AI is already being applied to a range of use cases within organizations, led by chat Q&A, text summarization, and co-pilot productivity enhancements.
“The success of these initial use cases, such as chat Q&A, text summarization, and co-pilot productivity enhancements, relies on bringing the models to the data, at the point of its creation and origination, and not the data to the models. For example, a large financial institution is currently making 4 million decisions a day by processing all data through their trusted AI Lakehouse,” said Abhas.
The survey sheds light on the evolving landscape of AI adoption and the importance of data privacy and control in the AI ecosystem as organizations continue to explore the potential of Generative AI technology.